AI, mass surveillance, doomscrolling, increasing wealth inequality and climate change – it’s as if the contemporary world has taken hold of fiction, transforming it into a disturbing reality.
This article is an opinion piece in the style of speculative journalism whose contents represent the standpoint of its author and not UPF Lund or The Perspective’s editorial board.
In a world of technological overreach and decaying political institutions, science fiction has for decades mapped out how society can prosper, but also how it can deteriorate. Sci-fi authors seemingly predicted AI, smartphones, synthetic meat, space rockets and atomic bombs. The genre explores different dimensions of society, power and technology through philosophy and fiction, and in doing so it may offer us lessons on how to manage our dystopian realities.
We increasingly face phenomena that a decade ago could only be encountered on a page. Might the answers to our questions about contemporary political challenges also be found on a page?
Now, more than ever, I find myself drawing parallels between my favourite books and current affairs. The announcement that Dune: Part Three will be released in December 2026 has left many fans, including myself, excited, and has prompted me to embark on yet another re-read of Frank Herbert’s original hexalogy.
Ten thousand years before Paul Atreides’ adventures on Arrakis, a brutal galaxy-wide war was fought in which humans overthrew sentient machines. Victorious, humanity established the commandment that “thou shalt not make a machine in the likeness of a human mind”. This is an obvious allusion to AI, and it offers an ominous imagined outcome – one that grows ever more topical as anxieties surrounding artificial intelligence rise.
In 2001: A Space Odyssey by Arthur C. Clarke, HAL 9000 is an intelligent spaceship computer that malfunctions after suffering a psychotic break, killing most of the crew. HAL is a warning that technology is neither infallible nor truly neutral. The lesson we can take from this is that AI can go wrong, and the consequences can be… catastrophic, to say the least.

There are real-world examples that are, thankfully, less theatrical: an Air Canada chatbot gave a traveller wrong advice, leaving the airline liable for damages, and AI chatbots have advised small businesses to break the law – yikes. Additionally, the “Dead Internet Theory” posits that internet content will eventually be generated by bots and AI rather than real humans, creating a synthetic, dead space designed to game algorithms and influence consumers. And if the data being generated is flawed, then what we learn from the internet is flawed by association.
The EU already has a legal framework regarding AI and is developing strategies to integrate the technology safely into various sectors. Still, a clear lesson is emerging, not just in fiction but in reality: when things go wrong with AI, it can be unpleasant. Perhaps we should take a page out of Herbert’s and Clarke’s books and treat fiction as a warning. Maybe we should avoid handing AI high-level executive functions, such as AI government ministers (yes, I am looking at you, Albania). Because when AI gets it wrong again – and it will – the question is no longer whether it will be held accountable, but whether it can be.
Another technological theme science fiction often explores is the misuse of technology. Through censorship and propaganda, we can lose our grip on the “truth”, which calls into question the media we consume. In a world of algorithms and targeted ads, we should challenge the powers that control what we consume, and question their agendas. George Orwell explores this idea in 1984, alongside the dangers of a surveillance state, as does Ray Bradbury in Fahrenheit 451.
The lessons sci-fi books can provide us are, however, by no means limited to technology. Dune also warns against trusting charismatic leaders, criticising strongman politics and populist myths: they centralise power and erode institutions, leading to social and political instability. Through the events that unfold in the books (no spoilers), the literature urges us to scrutinise alleged “saviours” and “chosen ones”.

If the lessons so far seem obvious to you, that is because sci-fi has a deep, reflective relationship with history. Fahrenheit 451, though futuristic, reflects the era in which it was written, drawing inspiration from the book burnings in Nazi Germany. Octavia Butler’s novels frequently explore power and the traumatic history of Black Americans, using these themes as a basis for speculative fiction. Foundation by Isaac Asimov explicitly draws on the fall of the Roman Empire to explore cycles of collapse and renewal in civilisations.
Though this is by no means a comprehensive list of what sci-fi can teach us about the current world, these examples show how literature turns abstract political and technological phenomena into compelling stories, preserving historical memory and echoing past failures and sentiments. This can train us to recognise patterns of power, and their consequences, in our daily lives.
As the famous quote by philosopher George Santayana goes, “Those who cannot remember the past are condemned to repeat it”. If we want to learn from history, then perhaps we could also learn through fiction.
By Carmen Elizabeth Kardan Calvo
January 27, 2026