Nihilism stands at the door: whence comes this uncanniest of all guests?[1]

[Follow-up post at the SRHE Blog: https://srheblog.com/2024/08/27/restraining-the-uncanny-guest-ai-ethics-and-university-practice/ ]
Is Generative AI an uncanny and uninvited guest at the table of Higher Education? For Nietzsche the uncanny guest at the door is nihilism. This guest has arrived to undermine our existing sense of purpose and throw the world into turmoil, as we realise that the prior sources of value and meaning we have relied upon are useless, and indeed ultimately harmful. Our old bulwarks against life seeming futile, empty and meaningless are unmasked, for Nietzsche, as anti-life lies. He rejects (largely, and often in mixed and complex measure) Christianity as a source of meaning, along with all the ultimate-value-offering bedfellows that accompany it[2]. This tumult washes away our preconceived understandings of purpose and meaning. Nietzsche has much to say about how nihilism (which he often equates with his own, or perhaps Zarathustra’s, declamation of it) will upend the world[3] and lead to catastrophes, and many see the outworking of this in the twentieth century. But the catastrophe is not forever: ultimately there is hope that beyond nihilism there can be something better, some way of living a meaningful life, though not one based on metaphysical fictions, which is how Nietzsche largely characterised Christianity.
Bernard Reginster, in The Affirmation of Life: Nietzsche on Overcoming Nihilism,[4] clarifies both the term and Nietzsche’s overall goal in relation to it: Nietzsche’s philosophical project consists in determining whether there is a way to overcome nihilism; nihilism is the conviction that life is meaningless, or not worth living. Reginster is, I would argue, accurate in identifying Nietzsche as ultimately an optimist about the hope for affirming life. On the closing page of his book, Reginster states:
In many ways, Nietzsche’s own life, and in particular the manner in which he practiced philosophy, exemplifies the life-affirming ideal he advocates. As he conceives of it, philosophical greatness consists in challenging hallowed and deeply entrenched views (what he often calls “the ideal”), and in setting off to discover new worlds of ideas. And few philosophers have been more successful in doing both than he has. His books are either severe polemics against traditions that reach back thousands of years, in which case he had to be a philosophical “warrior,” or they blaze trails to unexplored new worlds, for which he must have become a philosophical “discoverer.”[5]
In seeing AI as the uncanny guest that will upend so much of Higher Education, I suggest that we should also be optimistic. Many concerns centre on academic integrity: on being sure that the work students produce is their own, and that without that assurance we will have no way to assess whether students have actually learned what their qualification states they have. Assessment in Higher Education is meant to assess what a student knows and can do. If assessments don’t allow us to measure what we need to pass judgement on – whether a student has certain attributes and competencies – it is hard to imagine how a University might meaningfully function.
I believe many of our worries about Higher Education and Generative (and other forms of) Artificial Intelligence can be overcome. But as with Nietzsche’s announcement of the arrival of nihilism, the overcoming will be eventual, following an upheaval that sees us, of necessity, take a hammer to much existing HE practice. It really is Dämmerung[6] for much of what many hold dear in the University world. Before I imagine where we might be going in HE, I want to say a little about why uncanniness is such a good fit for AI, as it was for nihilism.
What made me think initially of the ‘uncanny’ was the phrase ‘the uncanny valley’, used to describe the disturbing aesthetic response we have when a portrayal of a human, or of something like part of a person, nears the realistic and yet something is ‘off’ about it. Up to a point, the more realistic the portrayal, the more affinity we feel with it. The effect was first noted in 1970 by Masahiro Mori at the Tokyo Institute of Technology: “Mori coined the term “uncanny valley” to describe his observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.”[7] Mori’s graph below shows how our affinity with a representation grows the more human it seems – until it becomes so nearly realistic that it is no longer obviously not human, and yet something still gives it away.

The particular thing that drew me to think of Mori’s notion was not the much-remarked-upon oddness of AI-drawn hands, but the video trailer for the soon-to-be-released Sora from OpenAI, which I watched in a room full of colleagues from a range of universities. Everyone sat there being impressed until a couple of – I guess ‘odd’ is the right word – things happened with the walker’s legs near the end, and nearly everyone in the room gasped or said ‘eurgh’. This made me think of the uncanny valley, and of Nietzsche’s phrase about a different kind of uncanniness. The disquiet is perhaps initially that of Mori’s valley, but ultimately it is more like Nietzsche’s coming catastrophe and ‘revaluation of all values’.


The more AI-written essays you generate and/or read, the more you see, for now, a slight oddness to them, a detachment that can seem uncanny, even a little creepy. The AI-made (DALL·E 3) images in this post have the strangeness of AI art that is recognisable, if hard to capture definitionally. This may prove to be a transitional phase as AI gets ‘better’ at mimicking the human voice/eye in writing and art. That shift, combined with our growing skill and familiarity in the use of these systems,[8] may well push us to a point where this kind of uncanniness recedes. But it is as this happens that the challenges for those of us in universities get really serious, possibly really fast.
The departure from the uncanny valley in aesthetic terms is the point at which the interim risk-mitigation methods many of us in Higher Education have put in place around GAI start to falter and fail. If Generative AI becomes less distinguishable from humanly penned work, even indistinguishable – and we are already seeing the limited efficacy of so-called GAI-detection software – where does this leave concerns about academic integrity? How will we be able to verify that a certain student is capable in some respect: knows, understands and can apply ideas; can argue, synthesise and evaluate sources; can draw well-grounded conclusions, operate on a brain, design a safe bridge, or do any of the many things we hope graduates will be capable of? This is the challenge we face, and it may seem to strike at the heart of Higher Education, but I believe it is also an opportunity to enact positive change. The uncanny guest may be uninvited, but they are not leaving any time soon.
At this stage it might be argued that the best policy is to return to in-person, pen-and-paper examinations in large halls, with no technology. There may even be a place for some of this, but ultimately we can’t examine everything, and there are good reasons we have seen many move away from exams. Indeed it seems like a retrograde step which fails to acknowledge the new AI-based world that students will graduate into. One key point is that these types of exams don’t actually test the skills we might want students in Higher Education to develop. Arguably, the memory-test element that sits at the heart of exams is the least useful skill in a world full of devices capable of fact-retrieval. But conversations about students’ ability to retain and retrieve facts are only part of the conversation. There is something about GAI that is qualitatively distinct from tools that retrieve information – the generative bit matters. So, while Higher Education has survived MOOCs, Wikipedia, Microsoft Encarta, and YouTube, the challenge from Generative AI is of a different and more existential order. We can reasonably see this as a fundamental threat to the very nature of the engine of the industrial, cultural and knowledge economies – the university.
At worst we might wonder what the point is, like those cast adrift into the kind of existential anxiety prompted by nihilism, but there really is another way to see the coming AI disruption of Higher Education. As tutors use GAI to help with their workload, and possibly to prepare sessions, and students use GAI to help them produce work, and feedback and grading are done by AI, might we get to a point where GAI sets the work, does the work, and then marks the work? This starts to feel pretty futile and dystopian, and there doesn’t seem to be a lot of learning going on. But this is not the inevitable conclusion of where we currently are. As Jon Dron blogs: ‘It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century.’
We need to ask a serious question of ourselves in Higher Education. What do we really think it is that we have to offer students, society, commerce, politics, and the wider world? Are we still the repository of knowledge, or perhaps the arbiter and curator of competing knowledge collections and assertions? What do we think we have to give young people? Most of us in universities still think that being able to write and develop an argument, or design an experiment and draw conclusions, matters. Alongside knowing enough to know what to look up or retrieve, having the skills to weigh competing assertions seems vital.
On this theme, Marc Watkins, in a piece called ‘Our Obsession with Cheating is Ruining Our Relationship with Students’, writes: ‘We’re going to have to search for moments of learning. One way of starting this exploration is to heed John Warner’s call to stop automating the writing process. Students write predictable, formulaic essays and teachers leave similarly predictable, formulaic responses. What was learned and what was gained in such a system? The reasons why we’ve adopted this structure are based on convenience and labor.’
The current system, as Watkins partly notes, was never designed from scratch as the ideal way to promote learning and transmit skills and knowledge. It is very much shaped by the economic circumstances and systems within which Universities have grown, and much of Higher Education practice is the result of imperfect accretions that have become fixed norms. Reforms have often been frustrated by funding systems, the conservatism of large groups of complex institutions, and perceptions of the value of ‘tradition’. The system under threat from GAI has much to value within it, but it is also riddled with non-trivial problems and is far from perfect.

As such it doesn’t seem to be merely rhetorical flamboyance on my part to compare the arrival of GAI in Universities to the shock that nihilism was about to deliver in Nietzsche’s analysis. Those of us in Higher Education will have to face up to the fact that we are on the cusp of massive change, and that much of our current massified HE practice is not going to survive the next decade. We can bemoan this, and I am sure I’ll do some of that on occasion, but we can also greet this future with optimism. In this turbulence there is the opportunity to reevaluate all the values of our undertaking. What kind of Higher Education emerges, alongside a world where all kinds of AI have become increasingly pervasive, is not yet determined. It is an open field of terrifying, exhilarating and surprising possibility. The only thing I would suggest is determined is that the University of the future will look very different to the University of today.

[1] Nietzsche, Friedrich. The Will to Power. Translated by Walter Kaufmann and R. J. Hollingdale. Edited, with commentary, by Walter Kaufmann. New York: Vintage Books, 1968.
[2] We might want to add Platonism, and much of its legacy, to this – but Catherine Zuckert argues for a more sophisticated reading here, while maintaining Nietzsche’s nihilism and rejection of a metaphysical Platonism.
Zuckert, Catherine. “Nietzsche’s Rereading of Plato.” Political Theory, vol. 13, no. 2, 1985, pp. 213–38. JSTOR, http://www.jstor.org/stable/191529. Accessed 4 Mar. 2024.
[3] In the 1888 Preface to The Will to Power, he wrote:
What I relate is the history of the next two centuries. I describe what is coming, what can no longer come differently: the advent of nihilism. This history can be related even now; for necessity itself is at work here. This future speaks even now in a hundred signs, this destiny announces itself everywhere; for this music of the future all ears are cocked even now. For some time now, our whole European culture has been moving as toward a catastrophe, with a tortured tension that is growing from decade to decade: restlessly, violently, headlong, like a river that wants to reach the end, that no longer reflects, that is afraid to reflect.
Nietzsche, Friedrich. The Will to Power. Translated by Walter Kaufmann and R. J. Hollingdale. Edited, with commentary, by Walter Kaufmann. New York: Vintage Books, 1968.
[4] Reginster, Bernard. The Affirmation of Life: Nietzsche on Overcoming Nihilism, Harvard University Press, 2006. ProQuest Ebook Central, https://ebookcentral.proquest.com/lib/liverpool/detail.action?docID=3300356.
[5] ibid, p.267
[6] A reference to Nietzsche’s 1889 book Twilight of the Idols, or, How to Philosophize with a Hammer.
[7] Caballar, Rina Diane. “What Is the Uncanny Valley?” IEEE Spectrum, 6 Nov. 2019. https://spectrum.ieee.org/what-is-the-uncanny-valley
[8] The term ‘prompt engineering’, referring to the crafting of instructions given to an AI system, is seen by some as a whole new job class and a subject of study. Others are less sure. Alex Warren, in a blog post, wrote: ‘Prompt engineering isn’t rocket science. In fact, it’s not even engineering. It’s just good old-fashioned writing. Clear, concise copywriting, designed to convey meaning.’

