

“The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern,” Sam Altman, the chief executive of OpenAI, told US senators.

“Regulation would be quite wise: people need to know if they’re talking to an AI, or if content that they’re looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education.”

The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.

Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of “paid trolls” to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.

An AI trained to repeat talking points about Taiwan, climate breakdown or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers – over thousands of different social media accounts at once.
Prof Michael Wooldridge, director of foundation AI research at the UK’s Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.

“Right now in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale,” he said.

Wooldridge said chatbots such as ChatGPT could produce tailored disinformation targeted at, for instance, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the midwest.

“It’s an afternoon’s work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories,” he said.

After fake pictures of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket spread even further, others expressed concern about generated imagery being used to confuse and misinform.

But, Altman told the US senators, those concerns could be overblown.
