The AI Blunder at Apple and Social Media’s Abdication of Truth

For anyone foolish enough to obtain their news from the unregulated wild west of social media, or from websites populated by the increasingly bizarre output of artificial intelligence, this has not been a good week.

It began with a rare and highly amusing debacle at Apple, the tech giant that is usually so deft and sure-footed with its loyal customers. In December, the company introduced a feature for users of new iPhones that created quick-read summaries of significant news stories.

The summaries were produced by AI and it would be fair to say they were not an unqualified success. Among the riveting nuggets shared with iPhone users were that Luigi Mangione, the man accused of murdering a US health insurance boss, had shot himself dead (he is in prison in Brooklyn and very much alive); that the darts prodigy Luke Littler had won the world championship (at the time the final had not even started); and that retired tennis star Rafael Nadal had come out as gay (which would have been a surprise to his wife and son).

News must be accurate, first time, every time. When a news outlet gets something wrong, its reputation is diminished

Ross Anderson

If the only damage done by this drivel had been to Apple’s reputation, few tears would have been shed. But the original source to which the tech company’s AI attributed these “summaries” was the BBC — which tops almost every survey as the world’s most trusted and respected news outlet: its reputation is the jewel in its crown.

BBC bosses, obviously, protested. Apple, obviously, said they would rush out a software update to address the issue, because that is what they do. This was the inevitable consequence of a company that is good at one thing (tech) getting involved in something it knows nothing about (news) and trying to deploy the same operating model.

Apple’s core business is phones, tablets, computers and the software that runs them. When the company, or any of its tech rivals, releases new software, any testing they have done will be rudimentary: the real testers are the end users. No sane smartphone owner downloads and installs an update to the operating system on the day it is released: they wait for the enthusiastic “early adopters” to identify the bugs, crashes and occasional full-scale meltdown that bricks the phone, followed by the inevitable version 2.0.

Unsatisfactory as it is, this has become acceptable practice with software. With news, it is far from that. News must be accurate, first time, every time. When a news outlet gets something wrong, its reputation is diminished. If it happens often enough, that reputation ceases to exist — and a news outlet without a reputation for accuracy is worthless. Thus, the wholly justifiable anger of the BBC.


As if all that were not bad enough, it also emerged this week that Facebook and Instagram will now deploy the same “testing by user” model on their content. Company boss Mark Zuckerberg has ended the fact-checking program, introduced in 2016, which referred posts that appeared to be false or misleading to independent organizations for assessment. It will be replaced by “community notes,” and establishing the veracity of content will be left to users themselves.
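For readers unfamiliar with how “community notes” decides what to show, the core idea can be sketched very loosely as a cross-viewpoint agreement check: a corrective note attached to a post is displayed only if raters from opposing perspectives both find it helpful. The toy function below is my own simplified illustration of that principle, not X’s or Meta’s actual algorithm (X’s published system reportedly uses a more elaborate matrix-factorization model over rater data):

```python
# Toy sketch of a "community notes"-style bridging check.
# This is a simplified illustration, NOT the real X/Meta algorithm.

def note_is_shown(ratings, threshold=0.6):
    """Decide whether a note is displayed.

    ratings: list of (viewpoint_cluster, helpful) pairs, where
    viewpoint_cluster is a label such as "left"/"right" and
    helpful is a bool from one rater.
    """
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)

    # With raters from only one viewpoint, no cross-perspective
    # agreement is possible, so the note stays hidden.
    if len(by_cluster) < 2:
        return False

    # Every viewpoint cluster must independently rate the note
    # as mostly helpful for it to be shown.
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_cluster.values())
```

The point of the illustration is the weakness the column identifies: if no cross-viewpoint consensus ever forms, a false post simply carries no note at all.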

Not coincidentally, this is the same system introduced by Elon Musk when he bought Twitter, now X, in 2022, and it has been such an unmitigated triumph that analysts at Emarketer expect X to have lost 7 million monthly active users in the US alone since 2022. Meanwhile, its revenue plunged by 40 percent from 2023 to 2024, and the brand is worth less than $700 million today, compared with nearly $6 billion when Musk bought it. X is now populated almost wholly by nutjobs, fruitcakes, people shouting at each other while no one listens, and “experts” who know for a fact that the Earth is flat, the moon landings were faked and there’s a guy works down the chip shop swears he’s Elvis.

Despite constant protestations by both Facebook and X that they are merely platforms, they are not. They are publishers — and both have now abdicated the core responsibility of a publisher, which is to ensure as far as possible the veracity of everything they publish. Imagine if, every day, Arab News were to place the following on the front page of the newspaper and the home page of our website: “Hi! Nothing you are about to read is necessarily true. Frankly, we have no idea, and we can’t be bothered checking — that’s your job. If you see anything that isn’t true, let us know, and we may or may not fix it.” That is, in effect, the policy now adopted by social media publishers.

I am a fully paid-up, card-carrying representative of what keyboard warriors like to deride as the “outdated legacy media,” but I am no Luddite. I am old enough to remember journalism before Google, Wikipedia and the wealth of accurate, verifiable information available on the internet with a few well-judged keystrokes, and I have no wish to return to those days. Moreover, it seems to me to be self-evident that artificial intelligence, properly controlled and regulated, has a key role to play in the information-gathering process.

However, obtaining and publishing accurate news is neither cheap nor easy. It requires rigorous training, attention to detail, skill, dedication, often considerable expenditure and an unshakable commitment to the truth — and it is a job best left to the professionals.
