Transcript: AI is changing elections: How can we protect democracy?


The SystemShift podcast looks for answers and stories of justice, solutions, and alternatives, collaboratively showing how other ways are possible, through a decolonising, intersectional and hopeful point of view. Season three of this series will explore how we move from a world that serves the economy to an economy that works for people and the planet. 

Across eight weekly episodes, co-hosts former politician Carl Schlyter, environmental justice technologist Joycelyn Longdon, and novelist Yewande Omotoso explore topics including taxes, mental health, and AI.

Listen on YouTube, Apple Podcasts, Soundcloud, or wherever you get your podcasts.

Below is a transcript from this episode. It has not been fully edited for grammar, punctuation or spelling.


Joycelyn Longdon (00:00:02)

Welcome to SystemShift, a podcast from Greenpeace which explores how we can move from a world that serves the economy, to an economy that serves people and the planet. The theme of this series is change, and in each episode we speak to guests across the world to hear how they’re changing the planet for the better. 

I’m Joycelyn, an environmental justice technologist, writer and educator…

Carl Schlyter (00:00:26)

And I’m Carl Schlyter, and I work for Greenpeace; I’m a former politician and biotechnologist. And for this episode we’re asking the question “AI is changing elections: how can we protect democracy?”.

Joycelyn Longdon (00:01:03)

We’ll explore how AI is reshaping elections, with potential opportunities but also deep and growing concerns about privacy, bias and the stability of democracy. Can AI make democracy “better”, or does it risk undermining its core principles? 

Carl Schlyter (00:01:26)

Some of you might think that now we’re going to deal with AI from an environmental point of view, and it’s really interesting because you can have improved species monitoring, you can detect epidemiological problems early and so on. There are many health and environmental implications, both positive and negative. But the main focus for this episode is actually how it can affect the way we think in our elections, and also how we, as citizens, can make sure our governments make the right decisions here. And as usual we put a poll on our Instagram page, where we asked people “How do you think AI can impact elections and democracy?” Joycelyn, what do you think people answered here? 

Joycelyn Longdon (00:02:10)

I think that, given the messaging within the news and media, most people thought that it would be a threat to democracy. 

Carl Schlyter (00:02:18)

Yeah, that’s exactly what I thought, a massive majority for that, and we were actually quite right here because 60% said yes, it can threaten democracy, while 8% said it could strengthen it. And quite a lot of people honestly answered “I don’t know”, because nobody actually knows, it’s just what you think. 

Joycelyn Longdon (00:02:36)

Yeah, and I think part of that is just not really understanding how AI is going to play out in democracy, and I think this is why this episode is so essential to break down exactly what the risks are, what the opportunities are, and what actions we need to take on many levels in order to ensure that it doesn’t threaten democracy. 

We also asked people how they felt about AI in elections and democracy, and here’s what some of you had to say. 

“AI is not that great since it can create a lot of misinformation and it will be difficult to tell it apart from reality.” 

“I am deeply worried knowing that powerful corporations can easily exploit the law.” 

“AI should be cleaning the oceans, not making decisions in society.”

I’m really excited for this episode, especially as someone working within the AI space – I work at the intersection of technology and forest conservation. The element, or quality, of AI that I think is very interesting, but also very important to highlight, is how wide-ranging AI is. For someone like me, my work on a day-to-day basis will be so different, although there’ll be lots of similar themes. This is something that gets flattened in the media: there’s so much to AI and so much expertise. I think this is something that affects us all, and it’s important we get a deeper understanding beyond the scaremongering headlines of the media, or the high praise and techno-positivism coming out of the big tech space. I’m really interested in getting a bit deeper into this conversation. 

Carl Schlyter (00:04:12)

Me too, I’m looking forward to it. I think it was eight or nine years ago, when I was in the (Swedish) National Parliament, that I wrote a proposed bill on how to deal with AI and how to regulate it in a way that would benefit humankind instead of exposing us to new risks. So it’s going to be interesting to see, because since then it has changed so much – I mean, none of the AI tools that are accessible to normal people today existed back then, and it has exploded in the last year – but I think the legislators and the measures to deal with AI haven’t kept up. 

Joycelyn Longdon (00:04:49)

Yeah, I think this is part of the question: how regulation matches up to, or keeps up with, the advancements within the AI space. I think this is the question on everybody’s lips – okay, it’s here, and it’s not going anywhere, and it is impacting democracy, so how do we protect democracy? And how do we even extend that question, not just to protecting democracy from AI, but actually acknowledging that there may be some opportunities, and understanding how we can improve democracy? I don’t know, these are all open question marks. 

Carl Schlyter (00:05:24)

Yeah exactly, I mean you said it’s not going anywhere but it’s also at the same time going everywhere, and how can we guide it in a direction where it would actually benefit us and the planet.

Joycelyn Longdon (00:05:36)

In this episode we’re joined by Dr Rumman Chowdhury. Rumman is a leader in ethical AI, creating tools to make technology more transparent and fair. She runs Parity Consulting, advises on AI ethics at Harvard and serves on global boards shaping responsible technology. 

Carl Schlyter (00:05:58)

Hello and a hearty welcome to you, Dr Rumman. 

Dr Rumman Chowdhury (00:06:00)

Thank you so much for having me, excited to be here with you both. 

Joycelyn Longdon (00:06:04)

Yeah, we’re super excited to have you on today and for this conversation. To start, usually we’ve been asking guests to share a word or phrase in their native language, I mean in some ways, your native language could be the language of technology, and I know that you have a word that you’re kind of interested in sharing with us, so maybe you can share it with us now. 

Dr Rumman Chowdhury (00:06:25)

Yes absolutely, and to your point I do find myself often in a position of being a translator of tech terms, especially as a social scientist. So the term I’ll define for everybody today is “generative AI”. Generative AI – one of the reasons why it’s really captured the imagination, and why it looks so different from the AI we’ve seen before – uses something called a “transformer technology” in order to use a pool of information to synthesize a probabilistically likely response to what you have put in. So when you put something into a generative AI tool like ChatGPT it’s called a prompt, and what the generative AI model does is predict which words are likely to fulfil the answer to your question. And I’m choosing my words very carefully. So what does that mean? Generative AI is not thinking, generative AI does not understand context, generative AI is not alive, generative AI will not be alive. It is approximating what human beings in aggregate tend to respond, or what is likely to be the response to the question that you have posed. 

The other part of that definition that’s very important, is that when a model does something like hallucinate, in other words fabricate an answer, that’s not a bug, that’s not a flaw in the model, that is by definition how the model works. Again if the model does not understand context, then it doesn’t understand when it’s making something up. 
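To make the “predict which words are likely to come next” mechanic concrete, here is a minimal, hypothetical sketch – not a real transformer or language model, just an illustration of sampling a probable continuation from counts over an invented toy corpus.

```python
import random

# Toy illustration only: a "model" that, given the previous word, samples the
# next word from counts of what tended to follow it in some (invented) corpus.
# Real generative AI uses transformer networks trained on huge datasets, but
# the core move is the same: predict a probable continuation, not a verified fact.
follow_counts = {
    "the":    {"cat": 3, "dog": 2, "answer": 1},
    "cat":    {"sat": 4, "ran": 1},
    "sat":    {"on": 5},
    "on":     {"the": 5},
    "answer": {"is": 2},
    "is":     {"42": 1, "unknown": 1},
}

def next_word(prev: str) -> str:
    counts = follow_counts.get(prev, {"<end>": 1})
    words, weights = list(counts), list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

words = ["the"]
for _ in range(6):
    w = next_word(words[-1])
    if w == "<end>":
        break
    words.append(w)

# Output reads fluently but is guessed, e.g. "the answer is 42".
print(" ".join(words))
```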

Carl Schlyter (00:07:58)

That’s funny you mention that, because the first time I ever used such a tool we asked how many red-listed birds lived in a specific national park, and it came up with a number. We asked what the source document for that was, because it wasn’t correct. And in the end the AI explained that “yeah, I just guessed based on …”. 

Dr Rumman Chowdhury (00:08:15)

Yeah it just fabricated a number. Yes, I mean, we could go more into why that phenomenon happens, I’m unsurprised. So it’s very fascinating but yes, generative AI is neither magical, nor alive, nor thinking, nor sentient. It is a synthesis tool based on probability. 

Carl Schlyter (00:08:36)

I think it’s really important you bring this up, because people attach AI to so many things today where it’s not the right term to use. Could you explain a little bit the difference between a normal computer program and what AI is? 

Dr Rumman Chowdhury (00:08:48)

Yes, so artificial intelligence and even machine learning models are making a prediction, and these are probabilistic predictions. So when we think of, let’s say, a machine learning model, these are very sophisticated statistical predictions – for someone like myself, a statistician by background, the way these models work is not particularly magical, it’s just mathematically dense. And the same holds true for generative AI models, neural networks and other kinds of AI models; they just get more and more mathematically complex, which also makes them more and more fragile. A traditional piece of software, let’s say a computer program, just relies on input/output: you give it some sort of input and then you can predict with basically 100% certainty what the output will be. If it deviates from that, then there is something wrong with the program. What’s quite interesting about generative AI is that the output of those models, because it is probabilistic and is generated fresh every time, is not going to be the same every time you ask. 
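A minimal sketch of the contrast described above, using invented example values: traditional software maps the same input to the same output every time, while generative-style output is sampled from a probability distribution, so repeated runs can differ.

```python
import random

# Traditional software: deterministic input/output.
def deterministic_double(x: int) -> int:
    return x * 2  # 21 -> 42, every single time

# Generative-style behaviour: the "answer" is sampled from a probability
# distribution, so the same prompt can yield different outputs on each call.
def generative_style_answer(prompt: str) -> str:
    candidates = ["Paris", "Paris, France", "The capital is Paris."]
    weights = [0.6, 0.25, 0.15]  # illustrative probabilities only
    return random.choices(candidates, weights=weights, k=1)[0]

print(deterministic_double(21))  # always 42
for _ in range(3):
    print(generative_style_answer("What is the capital of France?"))  # may vary
```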

Joycelyn Longdon (00:09:53)

Yeah I think it’s really interesting that we’re starting with these definitions and just saying what AI is and what it is not, especially because the focus of this episode is on elections and democracy. And I think that this is particularly a space where there’s a huge amount of misinformation and disinformation, about misinformation and disinformation, and about AI more generally. And I think it would be interesting to kind of zoom in from these wider thoughts about what AI is and what it isn’t, to how you are thinking about AI in relation to democracy and elections more specifically.

Dr Rumman Chowdhury (00:10:32)

I think it is hard for us to grasp or understand how much generative AI content is already shaping and skewing our perceptions of the world, and how that’s increasingly related to election or political content. We’ve seen the impact of social media on polarisation – that is basically a truism, I don’t think anybody would debate that – we know filter bubbles are real, we know ideological bubbles are real, and we know manipulation campaigns are real. Now what generative AI does is allow existing malicious actors to supercharge the campaigns that already exist. You can quite easily generate highly sophisticated fake content, and you can also use code-generating AI tools to help you create programs that post this content from multiple different accounts every hour on the hour. So you can now automate your influence campaigns. And it’s already been shown on platforms, especially ones like X which has very little bot control, that there are misinformation bots out there, and in some cases people have actually broken them, and under the hood it’s some sort of generative AI model. So what the level of sophistication looks like is that these models have been fine-tuned or trained to spread a particular kind of misinformation, and to act like a person and respond like a person, so it’s harder and harder to find fake accounts. 

Joycelyn Longdon (00:11:53)

Yeah I think it’s interesting that you make this point, that it’s an addition to, because part of the discourse frames AI as a new threat to democracy, and in the environmental space there are threats that are then magnified or amplified or accelerated, and I think that it also highlights how misinformation and disinformation and attacks on democracy have already existed and now are being amplified by generative AI. And so we have a dual task of not only addressing the AI issue but also the underlying issues around democracy that already exist. 

Dr Rumman Chowdhury (00:12:36)

That’s right. I mean, it’s hard to determine causality in these cases. I live in the US, and when you think about the increasing polarisation in the US, it’s not just because social media or generative AI has polarised us – I think we have, all over the world, become increasingly polarised. It’s hard to say what’s causing it, but certainly the amplification aspect is real.

I think it’s important to understand how generative AI amplifies this kind of content. I think what people wanted to see were these very clear examples of bad deep fakes, akin to how Donald Trump was making these images of Kamala Harris as a communist and putting them online, and to be honest I actually don’t think that’s how most of that manipulation is working. And anyone who has studied elections and misinformation and disinformation for years, prior to generative AI, will actually say that cheap fakes work just as well if not better. You don’t need a tool of sophistication, you need a tool of scaling and automation, and that’s what generative AI is giving you. And in fact I wrote an op-ed for Foreign Policy looking at how generative AI has influenced the elections in Southeast Asia, and what was actually interesting to me is that we often imagine that malicious information, or mis- and disinformation related to elections, will be something like trying to spread some bad information about somebody. And actually what it looks like is candidates generating what I called “soft fakes” – images, audio, video etc. of themselves – to make them look good, or more approachable, or more interesting; what Gen Z would call a social media glow-up. And we saw that with multiple candidates who won – we saw it in Indonesia, we saw it in Pakistan – and sometimes these deep fakes are clearly fake. So in Indonesia we had them creating videos of Suharto, who has been dead for years, supporting a particular political party. Now obviously people in Indonesia know that Suharto is not alive, so then why make that video? Well, it would be the same as Republicans making a video of Abraham Lincoln saying “vote Republican because I was a Republican in my day”. It’s emotional. 

The last point I’ll make is that – and I’m putting on a slightly different hat, as I’m actually a political scientist by background, so I have studied this – people vote emotionally. We think we vote rationally but overwhelmingly we vote with our feelings, we vote with our gut instinct, we vote for candidates who we think speak like us, look like us, talk like us. So when these subtle manipulations happen – when a candidate makes videos of themselves speaking in different languages; in the city of New York you had the current mayor Eric Adams, when he was running, make robocalls in Chinese and Yiddish, languages he does not speak – I think these have a subtle manipulative effect, even if it’s not so obvious. So I think people were looking for this silver bullet of, “oh wow, see, look, this person made this deep fake that looked so real and these people got duped by it, and that’s what happened and that’s manipulation”, when the reality is manipulation looks more like robocalls where the candidate appears as if they speak a language they don’t speak, in order to get people to vote for them. 

Carl Schlyter (00:15:55)

That could be considered quite harmless – it’s obvious that he probably doesn’t speak Chinese, for example – but already 75 years ago Hannah Arendt wrote about totalitarianism and how the lack of truth can impact elections, behaviours and power. She was worried that when totalitarian regimes take over, the first thing they do is take truth away from people, so that there is no real truth. And as you mentioned, there are quite a lot of emotions involved in elections, and it’s scientifically well established that in a very complex decision situation, emotions are a better guide to a proper decision than intellect and facts alone – you base the decision on facts, but you make the right decision only if you also involve emotions. So that’s kind of logical, but what happens when it’s so easy to make people uncertain of what is true and what is not? How would that affect an election – this insecurity about the truth that Hannah Arendt warned about already 75 years ago, how would that impact things, do you think? 

Dr Rumman Chowdhury (00:17:02)

Well, there’s been an assault on science for 15 years plus at this point. Actually, to share an anecdote with you: what got me into tech, and especially thinking about responsible AI, was a class that I was teaching during my PhD programme. This was an intro to American politics class and I was teaching at a community college, working as a TA (teaching assistant) for extra cash while I was in my PhD programme, and this was in 2009-2010. A student came up to me after class and very politely (said) “teacher, I don’t believe in climate change” – as if climate change is a belief – and she used the words “I don’t believe in climate change”. Now again, this was in the before days, in 2009, and especially to me as a scientist, the concept of belief didn’t really play into science. Science is science, belief is belief. And you know, I’m her professor, so I try to explain to her: I can believe the sun is not coming up tomorrow but it’s going to come up, that’s just science. And I pointed her to the EPA’s websites – I’m like, there are a lot of scientists that have done a lot of work to demonstrate how the environment is doing worse, how we’re contributing to the decline of our environment, our waters, etc. – and she’s like “no no no”, and then she points me to this series of blog posts that she’d been reading, and I realised that she had no concept of scientific discernment, no concept of understanding good sources and bad sources. This is not to say she was not smart. 

I think there is, as I mentioned, a crisis of science, an assault on science, that predates generative AI and predates social media. We saw this during COVID, we saw this in climate deniers, we see this in flat earthers, we see this in vaccine deniers. This is part of a bigger problem where, as you mentioned Carl, people are confused, because now it seems like an influencer’s opinion is the same as scientific fact. And so scientists are in a really tough situation where, to do their work with integrity, you need time, you need focus, you can’t be posting on social media constantly, you can’t be just dropping a buzzword because it’s trendy – and yet people don’t have the patience for that anymore. So while this is maybe stretching beyond your specific question, I think what we are seeing – a lack of information integrity, which is what you’re getting at – is rooted in the assault on science that’s been happening for many years at this point. And again, as a scientist it’s been very hard for me to see that happen and not really understand how to combat it. 

Joycelyn Longdon (00:19:42)

I wonder, as more and more of the wealthy become very interested in AI – interested in owning AI organisations, investing in and owning AI organisations, seeing it as a tool for profit and extraction – whether you’ve seen any alternatives, or things that we should be incredibly aware of as the general public, where we might be able to have power against this accumulation of power and wealth through AI. 

Dr Rumman Chowdhury (00:20:12)

Well, first I think being – and this has sort of been the case for a very long time – being mindful of what you are buying, what you are using, what you are consuming. I think tech as an industry started off by spoiling all of us, by giving us things that we thought were free. But it was never free; the currency was always data and information, the money actually meant nothing. Data is an infinitely resellable product, packageable, reusable – money is actually finite compared to data, once you’ve spent it, it’s out of your pocket. Now a company can sell data in many different ways and forever reap the benefits from it. So I think that’s one – you know the very old adage: “if you’re not paying for the product, you are the product”. So in this case, if you’re not paying anything, you’re paying for it in some way. 

I think the second, and something that I’m increasingly advocating for – my non-profit Humane Intelligence focuses on creating a community of practice around algorithmic assessment. Now what that looks like is very different for different kinds of people; for, let’s say, a policy maker or maybe the average person, this could look like a better ability to critically discern the content that’s being put in front of you, or being smarter about questioning how an AI system works. Now the thing I am personally working towards is this concept of a right to repair. We don’t have a reciprocal relationship with companies; currently it’s very extractive in all ways. They take our data that we’ve provided to them for free and then make a product that they turn around and sell back to us, and we are paying for that too, and in doing so, as you’ve mentioned Joycelyn, they are accumulating – and it’s not just wealth, although I think people assume that what a lot of these people are after is wealth.  

These people already have more wealth than their grandchildren’s grandchildren could spend if they spent thousands of dollars every day of their lives. They don’t actually need more money; they’re looking for power, influence and, in a sense, almost immortality. These men – and they’re all men – the names of these men are already in a sense immortalised, and they’re trying to figure out ways in which they can exert, while they’re alive, the maximum amount of influence possible. So I think it’s important to understand, for example, how Jeff Bezos influenced the Washington Post not to endorse Kamala Harris. Something very similar happened with the LA Times as well – not by somebody who is directly a tech owner, but somebody who is very tech influenced and tech adjacent, with a particular political perspective. It’s interesting – I’m using the word interesting in a very broad sense – I think people should have their eyes open when they look at the role Elon Musk is going to play in the new Administration, and how he has similarly pledged to put money behind conservative candidates in the upcoming UK election. So these people are not limiting themselves only to the US; they are exerting their influence all around the world. 

So “what can we do” is always a hard one to answer. I would say, as far as possible, use small tech – use technology that’s maybe built by start-ups or by smaller organisations that have committed to, or can demonstrate, how they’re using your data ethically and responsibly, and how they’re not going to sell and resell your information. I think at this point it is also a matter of personal political protection, especially if you are a minority, a woman, anybody who is not a straight white man – these are things that you should be thinking about. All that data – it’s now moved beyond “oh, a company may resell it”, to “a company may provide it to a government that will use it in some way to track you or harm you”. 

Carl Schlyter (00:23:49)

I am a middle-aged white man and I really don’t trust those guys either, so … 

Dr Rumman Chowdhury (00:23:53)

You shouldn’t either but you will be the last they come after. 

Carl Schlyter (00:23:57)

Well sometimes yeah, maybe. 

 [Music]  

Carl Schlyter (00:24:03)

How can you make corporations responsible for the algorithms that spread manifest untruths? Like the Rohingya catastrophe in Myanmar some years back, where Facebook algorithms drove engagement by spreading these untruths. So how can we hold corporations responsible, or how can we design legislation that doesn’t let big corporations make profits from spreading fundamental untruths? I mean, you have freedom of expression, so you could still say these things, but the algorithm shouldn’t put them at the top of every single hit – that’s my point here. 

Dr Rumman Chowdhury (00:24:38)

Yes, okay, so I’ll give you two answers to that: one is a satisfactory answer and the second one’s an unsatisfactory answer. I’ll give you the satisfactory one first, and the answer is really laws and regulation. So the Digital Services Act in the EU is an example of a law that I actually am a big fan of – it’s quite ambitious. It doesn’t necessarily mean we’re going to get it right, or that the EU is going to figure out how to do it correctly the first time around, but it does actually state that very large online platforms, i.e. social media companies, amongst others, do need to demonstrate that their platforms are not unduly sharing mis- and disinformation as it relates to things like elections. There are other things like violations of fundamental human rights, online violence etc. So I do think that’s a positive step forward. In the US we have something called Section 230, which is actually a law that provides immunity to social media companies. Pretty much very early on in social media days, somehow Congress was convinced to say “yep, social media companies, you are simply the tube by which information is shared, therefore you are not responsible for the content that exists, so we cannot hold you responsible if somebody else – whose speech you don’t control – is sharing some information and your algorithm is amplifying it”. So that’s the satisfactory answer, and I suppose there have been conversations about rolling back Section 230, which interestingly the incoming Administration has said they would want to do. And previously people have said it’s just too difficult, the companies are very entrenched. But we do see some movement as it relates to that. 

So now I’m going to give you a less satisfactory answer. I used to lead the machine learning ethics, transparency and accountability team at Twitter. A few years ago my team did an analysis of multiple different countries, and we compared the algorithmic feed versus the reverse chronological feed. So basically Twitter – Twitter, not X – has two feeds: one in which it shows you what the algorithm gives you, and one which is just a reverse timestamp of how people post, with the most recent up top. So we can compare the two and say, okay, what is the algorithm showing you that you would not have seen otherwise? And we saw that in seven out of eight democracies – the only one that didn’t show this was actually Germany – there was a centre-right lean to the content; the algorithm tended to promote centre-right content. 

So, also using careful words there again, as a scientist – we did not get time to finish our analysis, unfortunately, because Elon Musk took over and we all got fired – but there are two working hypotheses for why that happens. One could be algorithmic bias: there’s some way the algorithm is picking up centre-right content and unduly amplifying it even though that is not an intended feature of the algorithm. The second, and frankly the more likely, hypothesis is that it’s what people are clicking on. So then the unsatisfactory answer becomes: if that is what people are clicking on, then who gets to decide how we should or shouldn’t be consuming our media, what is fair? Say that, independent of why it’s happening, it’s wrong. Okay, great – so what’s right? 50/50? What if it’s a parliamentary system and there are 30 different parties – does each political party get one thirtieth of the social media share? What about in the US? And isn’t that manipulation in and of itself, not showing people what they’re clicking on and what they want to see?

So that’s my unsatisfactory answer: even if we were to decide that that’s wrong, I don’t know what right looks like, and I don’t know who the person or entity is that would decide that. Is it Jack Dorsey and Elon Musk, because they’re the ones that own these private companies? Is it Mark Zuckerberg? Is it a politician – do you trust politicians to be able to tell you what you should or should not be looking at? So in some way, in asking for things to be “corrected” we are asking for some authority or powerful entity – to go back to your question, Joycelyn – to make this decision for us, because we don’t have clear methods of reciprocal accountability. So that’s the conundrum we end up in. 
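A toy sketch of the kind of feed comparison described above, with invented posts and “lean” scores: rank the same posts two ways – reverse chronological versus an engagement-based stand-in for the recommendation algorithm – and compare the average political lean of what each feed surfaces.

```python
from statistics import mean

# Invented example data: each post has a timestamp, an engagement score used
# by the ranking "algorithm", and a political-lean score (-1 left ... +1 right).
posts = [
    {"id": 1, "ts": 100, "engagement": 10, "lean": +0.6},
    {"id": 2, "ts": 101, "engagement": 80, "lean": +0.4},
    {"id": 3, "ts": 102, "engagement": 15, "lean": -0.5},
    {"id": 4, "ts": 103, "engagement": 30, "lean": -0.1},
    {"id": 5, "ts": 104, "engagement": 95, "lean": +0.3},
]

def reverse_chronological(posts, k=3):
    return sorted(posts, key=lambda p: p["ts"], reverse=True)[:k]

def algorithmic(posts, k=3):
    # Stand-in ranking: most-engaged first (real ranking systems are far more complex).
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)[:k]

def avg_lean(feed):
    return round(mean(p["lean"] for p in feed), 2)

print("chronological feed lean:", avg_lean(reverse_chronological(posts)))
print("algorithmic feed lean:  ", avg_lean(algorithmic(posts)))
# A consistent gap between these two numbers, measured across many users and
# countries, is the kind of amplification signal the Twitter study looked for.
```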

Joycelyn Longdon (00:28:50)

I think it’s really interesting speaking about this, and about what people want – where the influence of the algorithm stops and where our own influence begins, how the things that we are interacting with, liking, clicking on and engaging with shape the way that we use the internet and the information we receive. So as we’ve heard, and seen a lot in the media, 2024 was named the year of the election, and the year that AI would have a massive impact on elections and democracy. But this year, 2025, we still have many elections around the world, including Germany, the Philippines, India and Canada, among many others. And so on this note about misinformation and disinformation, with these companies already spending so much: what does it actually look like – what does the work of, say, Facebook or Twitter on mis- and disinformation look like? And where disinformation is coming from AI, what is it that we can do to tackle the issues? 

Dr Rumman Chowdhury (00:29:52)

Yeah, there are a few things that companies have been working on. So earlier this year there were multiple conventions to think about approaches, and one of them was actually agreements to do things like content provenance and watermarking. So what does that look like? In one sense it is a digital watermark that would basically indicate that something has been created or manipulated by AI. There are two ways that might work. One might be an actual watermark-style stamp on an image, that a person could see when they’re looking at the information itself. Now, that can have flaws, in that somebody can manipulate the image and crop it out, that kind of thing. The other thing that’s more sophisticated, and that more organisations are leaning towards, is creating what are called hashes – some code-based, hidden marker in an image that tells you it has been manipulated. That then relies on good actors down the chain: not just the content generation company but also the content distribution company, whether it’s an internet service or a social media provider, taking that information and doing something with it, for example flagging to the viewer that the content is AI generated or manipulated. 

The second is that so many images are in some way manipulated by AI – what is a beauty filter but an AI manipulation – and actually, as Sir Nick Clegg, who’s the head of global policy at Meta (Facebook), has pointed out, how would you assess how much manipulation is bad, or what bad manipulation looks like? So for example, I can manipulate 30% of a picture and it’s a beauty filter that put a bunch of makeup and eyelashes on me, and you’re like, okay, whatever, who cares; or I can manipulate 5% of a picture and put a gun in someone’s hand, and that actually is going to be pretty bad. So what would you base it on? And if it’s based on the content, well then now we’re back to content moderation. So to your question of how companies do this work today: it is a combination of AI and machine learning models and human oversight. Content moderation has become a very big topic. Even if you make highly sophisticated AI models that can identify, let’s say, a good 90% of manipulated content in some fashion – and let’s say I’m being incredibly generous – and be able to discern good from bad, that’s just a beauty filter versus that’s a bad manipulation, you’re still left with that 10%. Let me tell you, that 10% takes up 90% of your time. These are the issues that are politically charged, these are the issues that are context specific, these are the issues that are culturally specific, these are the issues that have no clear ground truth. And again, predating generative AI, content moderation has been a constantly growing and increasingly difficult to manage aspect of media and content distribution, with no end in sight – and generative AI has just made that harder. 
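A simplified sketch of the record-and-verify flow behind the hash-based provenance idea described above. This is not the C2PA standard or any platform’s real pipeline – the field names and manifest format here are invented – and a plain content hash like this breaks as soon as the file is edited; real schemes rely on signed metadata or robust watermarks.

```python
import hashlib
import json

# Creator side (e.g. the AI tool that generated the image): attach a provenance
# record stating the content is AI-generated, keyed to a hash of the bytes.
def make_provenance_record(content: bytes, generator: str) -> dict:
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }

# Distributor side (a "good actor" down the chain, e.g. a social platform):
# recompute the hash and, if it matches the record, label the content for viewers.
def should_label_as_ai(content: bytes, record: dict) -> bool:
    return hashlib.sha256(content).hexdigest() == record["sha256"]

image_bytes = b"...generated pixel data..."  # placeholder content
record = make_provenance_record(image_bytes, generator="some-image-model")

print(json.dumps(record, indent=2))
print("label as AI-generated:", should_label_as_ai(image_bytes, record))
```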

Carl Schlyter (00:32:47)

These corporations have the world’s most powerful brains hired for very high salaries, so we must bear that in mind too. You can’t defend yourself alone against such powerful interests. So if we have a combination of regulation and civil society actually defending itself, how can you use AI to change how AI works – could you fight back with the same method? It can be used for so many good things in practice; can it also be used against the disinformation itself? 

Dr Rumman Chowdhury (00:33:24)

I think so. First of all, there are people who are building AI models to identify disinformation and take it down – the use of AI and machine learning models for content moderation is quite an old practice. But one thing that’s interesting to explore is what an AI model that serves as a fact checker for you, the individual, might look like. You know people love Community Notes on X – I always say this whenever Community Notes comes up – it was actually started as a project at Twitter before Elon Musk took over; it was called Birdwatch. But Community Notes is a very interesting thing that people like, because it’s this idea that the community is correcting fake information. I wonder if there could be an AI system that does that, but the thing is it would have to be an incredibly trusted AI bot, and I wonder if it could be fine-tuned for a specific person, a specific idea or context, and whether there would be a way to make it not a generic tool but a tool that helps you specifically. The thing about AI and all these models is that they’re personalised, so how can we make something personalised that’s useful for you? 

Joycelyn Longdon (00:34:33)

Yeah, thank you so much Rumman, it’s been such an incredible conversation. I always love talking about AI, so this was a bit of a selfish episode for me, to hear your thoughts and speak about the impacts of AI more broadly, but also on democracy and misinformation and disinformation, and I think it will be a very eye-opening episode for everybody listening. 

Dr Rumman Chowdhury (00:34:54)

Thank you so much. 

Carl Schlyter (00:34:55)

And we’re quite certain you were here as a person so we’re really grateful for that. 

Dr Rumman Chowdhury (00:35:01)

Not a bot! 

Carl Schlyter (00:35:04)

AI is a topic that is so difficult to narrow down to one specific thing, because AI is not a tool for one purpose – it’s a general method – and that’s why it’s so difficult to end the discussion. 

Joycelyn Longdon (00:35:19)

I think it’s interesting because we will need action and regulation on AI across different industries. I think there’s this misconception that we will just, I don’t know, create a solution for AI as an entire space. And really we need very industry-specific, domain-specific regulation and action in these different fields, including democracy, that will tackle the specific risks and opportunities in that particular industry. And I guess this is where government comes in: we can do a lot as individuals to regulate our own interaction with algorithms and with media and with information, but it doesn’t just come from the individual. We’re up against very wealthy, powerful and influential organisations and individuals, and they need to be reined in by systems of policy too. 

Carl Schlyter (00:36:13)

And that’s where I see a discrepancy between what’s needed and what’s done, because most politicians just don’t want to lose out on the AI race – they want to be first, they want to create the killer robot, they want the social media platforms, they want the innovation to be done by AI now, and they want to be first. So they all say “yeah, let’s regulate but not kill the industry”, and then nothing happens. And the development is incredible: in 2022, around 10% of the internet was generated by AI and now it’s more than half – 57% – and some researchers have estimated that by 2030, 99 to 99.99% of the content on the internet will be generated by AI. So I think it’s really important that we design a regulatory package that is based on principles, not on specific technologies in certain applications, so it isn’t outdated the same week it’s adopted. I think we need to find a way for the regulation to be adapted so that it responds over a somewhat longer term, and for authorities to check what’s going on, without falling into the trap of stopping free speech, which I see a risk of. If you try to regulate something with good intentions, you’ll try to stop something, but that in the end may make people trust information even less, because it can be perceived as an attack on free speech – and it can be. So that’s the dilemma here.

Joycelyn Longdon (00:37:45)

I think part of this though is having a collective vision about what we want AI to do, and what we don’t want it to do, and so the government also needs to engage community and engage society in civic participatory spaces where we can collectively decide what we want a world under AI to look like. As we’ve been talking about, there’s lots of complexities about, you know, you can regulate and that might lead to feelings of control or restriction of free speech that we might not want. We can’t just implement solutions without actually engaging the people that it’s going to impact, and so I think this must be a participatory and a collaborative effort, where government opens their ears and listens to scientists, and listens to people in order to create the policies that we really need and that are the most beneficial for us. Because it can’t just come from government interest, which is often tied to corporate interest, it must come from a collective idea of what we want the world to look like digitally. 

Carl Schlyter (00:38:53)

I think that’s crucially important, because if we don’t do anything, we know that it will be a few rich billionaires who will decide how this will be used. And if we don’t have participatory action here, where we can build something that helps our communities, we also risk destabilising communities. We talked in the episode about how you get more radicalised if you are exposed to the wrong kind of information repeatedly. So I think if we can have people talk about how we can use this multifaceted, powerful tool, and also better understand it, maybe we can guide our politicians in a direction in which it actually would benefit people and the planet. But if it’s only used to increase production, increase profits and reinforce the traditional structures that have destroyed our planet from a socioeconomic and ecological point of view, then it’s not much help. So I think that’s totally true: the best way to deal with this is to build up a community where people can analyse this, get help in how to analyse it, and then see what we can do to use it in the best possible way, and detect and stop uses that would undermine society or ecology. 

Joycelyn Longdon (00:40:17)

So, as you know, we always like to end with some calls to action, and this has been such an information-heavy episode – I’m sure you’ve learnt a lot and are wondering how you can play a part in resisting the negative impacts of AI on democracy. I think the key message here is that, rather than focusing specifically on AI, what’s really clear is that we need to continue to strengthen democracy as it stands. AI is something that amplifies the issues that are already prevalent in our democratic systems. So build a really strong foundation by registering to vote and by participating actively in not just national elections but also local elections; being civically active in your local community will build those skills of understanding your place within democracy, and of recognising when democracy is being threatened. 

This also includes your participation online: reporting fake news or misleading content, better educating yourself, and helping others in your life to understand and identify AI-generated content. And when engaging with AI-generated content, build a level of critical thought around what you think about AI. As Rumman said, it’s about becoming an active consumer rather than a passive consumer. Another platform and resource that is very useful in navigating misinformation and disinformation is the work of the Mozilla Foundation. They have a huge amount of resources online, on social media but also on their website, with toolkits around building trustworthy AI and holding tech companies to account. So if you’re looking to learn more on the topic, apart from diving deeper into Rumman’s work, you can also check out the Mozilla Foundation and the resources they have too. 

Yewande Omotoso (00:42:12)

Thanks for listening to this episode of SystemShift. Join us next time when we’re asking the question, 

“How could climate change reshape jobs and workspaces?”. 

Subscribe to SystemShift wherever you get your podcasts, so you don’t miss an episode. [Music] 


