The world is increasingly digitalised. With a growing trend of democratic backsliding, we ought to be vigilant about how the digital world shapes the wider unfolding of our democratic societies. Arguably the most tumultuous development in tech right now is generative AI, such as ChatGPT, which can produce new content – whether text, photos or videos – thanks to algorithms trained on pre-existing data. In a survey conducted last year, 25% of the Swedish population reported having used generative AI in the past three months; among youth aged 16-24, the figure was over 50%.
That these models are trained on past data is crucial to keep in mind: it means AI risks reproducing social biases in the content it puts out. For example, asking AI to generate a speech from a CEO resulted in the speaker being male 100% of the time, and white 90% of the time. And while OpenAI has put guardrails in place to stop ChatGPT from using slurs, AI models have been shown to be covertly racist, discriminating against users who write in African American Vernacular English. If AI assumes the world looks the way our biases paint it, could that not spill over into the other responses it generates?

So, how do these social biases play a role in our democracy? In an interview with Dagens Industri, the Swedish Prime Minister Ulf Kristersson said that he and his colleagues regularly use AI, such as ChatGPT, in their work to get second opinions. This is one avenue through which we permit generative AI – and its potentially skewed responses – to influence politics. Meanwhile, a defender of the PM's use of AI argues that he never claimed to share any sensitive information, and that it is surely better to explore how AI can help in his work than to be too scared to try.
Taken one step further, Albania has appointed an AI-generated minister named Diella to its government. While the government claims the bot will have a positive effect, aimed at combating corruption and pushing for transparency, criticisms and concerns have been raised over its trustworthiness and accountability.
While Diella is not intended to make decisions of her own, countless micro-decisions about which aspects of an issue to prioritise and what to leave out will rest in her hands as she briefs the government, and these may shape or constrain the human judgements that follow. Another issue with using AI in government is the handling of sensitive government data, which raises significant cybersecurity concerns.
But what of the general public? One way you may be affected is by prompting AI for information about an upcoming election and receiving a hallucinated response – misinformation delivered straight to you. Even if you would rather Google and find your own sources, the first thing we now see after searching is an AI-generated summary. It is becoming ever harder to escape the comfort of AI and the simplicity of trusting it – despite knowing that it may misinform.

Another way generative AI may affect us is through social media. There have been cases worldwide in which deepfakes of prominent political figures have sparked social media controversies. In Turkey, the consequences became evident when a candidate withdrew from the 2023 presidential election after explicit deepfakes of him were spread online.
Lastly, deepfake usage surfaced in the New York mayoral campaigns, where candidate Andrew Cuomo briefly posted an AI-generated campaign ad on his X account before taking it down. The video depicted criminals voicing support for the opposing candidate Zohran Mamdani, and it was criticised for perpetuating racist stereotypes. While quite obviously AI-generated to the trained eye, it raises questions about how candidates may come to use AI to sway public opinion, especially as AI-generated videos grow ever harder to distinguish from real recordings.
Whether you are for or against generative AI, there is no doubting that it is becoming ever more integrated into our lives, and that distinguishing the real from the AI-generated grows more difficult as the technology advances. What all of us must bear in mind is that this is the reality we live in. We cannot blindly trust every video, audio clip or text we see online. We have to be more critical than ever. While discussions continue on how to craft policies that capture the benefits of AI while avoiding its worst harms, we need to stay attentive to how AI works. How we use it, and how our elected officials use it, has an impact on our democracies, and it is up to us to shape the kind of society we want to live in.
By Ella Målberg
November 27, 2025