11.08.2024 | 7 Av 5784

The Jekyll and Hyde of AI

A recent study from BIU's School of Communication examines global media's anxiety over AI following the emergence of ChatGPT


When ChatGPT burst onto the scene in November 2022, it didn't just make headlines - it rewrote them. This groundbreaking AI chatbot, capable of engaging in human-like conversation on almost any topic, captured the world's imagination and sparked intense debate. But what does ChatGPT really mean for our future? A recent study by Aya Yadlin of Bar-Ilan's School of Communication and Avi Marciano of Ben-Gurion University dives deep into global news coverage to uncover the hopes, fears, and hidden narratives surrounding this technology.

The researchers discovered that news outlets worldwide painted a complex picture of ChatGPT, one that oscillated between wonder and worry. On one hand, the technology promised advances in healthcare, education, and everyday problem-solving. On the other, it raised alarm bells about job losses, academic cheating, and the spread of misinformation.

But when it came to politics, the tone shifted dramatically. Here, journalists weren't just cautious - they were apocalyptic.

The first major narrative the researchers uncovered was a fear of the machine itself. News articles painted scenarios in which ChatGPT became so powerful it could manipulate elections, spread unstoppable disinformation, and even surpass human intelligence.

NPR warned its audience: "When you ask it [ChatGPT] a question, it can do what's known as hallucinating, or confidently stating things that are just straight-up made up." The article continued, "That's obviously concerning. And if AI's the new engine behind how people search the internet, you could just imagine how things could go sideways pretty fast, especially when you're looking up information about subjects rife with misinformation, like elections."

But it wasn't just the machines that had journalists worried. A second, equally concerning narrative emerged: the fear of how humans might exploit ChatGPT's power for nefarious purposes.

One news source maintained that "With ChatGPT, almost anyone can [...] become a threat actor." Fox News argued, "As a society we have shown a propensity toward using new tools for what most would term 'evil' – the manipulation of thoughts and behaviors to reach a desired end for a particular group or entity."

The specter of AI systems, and those who control them, accumulating power loomed large. As the BBC alerted its readership: "This eventually might create sub-goals like 'I need to get more power'."

While the researchers acknowledged the importance of raising awareness about potential risks, they also sounded a note of caution: by focusing so heavily on worst-case scenarios and conjectured futures, news outlets may be failing in their duty to provide balanced, actionable information to the public.

As ChatGPT and other AI technologies continue to evolve at breakneck speed, the need for nuanced, responsible reporting becomes ever more crucial. The challenge for journalists, policymakers, and citizens alike is to navigate the line between justified caution and unproductive panic, ensuring that our societal response to AI is guided by facts, not fear.

The ChatGPT revolution is undoubtedly reshaping our world. How we choose to frame and understand that change may well determine whether it leads us to a brighter future or a darker one.