Fuelled by realistic artificial intelligence, a number of accounts are causing “reputational and emotional damage” with their sick posts.
Donald Trump has been known for many things over the past decade or so, one of which is his “fake news” catchphrase. By using it to dismiss statements and stories about him and his followers – often true ones – he has been able to shoot down rivals and critics and whip up his blindly loyal fanbase.
This, among other factors, means that brazenly lying without consequence seems to have become commonplace in the US and, let’s face it, the UK too. One would hope that advances in technology could help stop the spread of fake news on social media and in our daily lives, but they seem to be doing the opposite.
Artificial intelligence is being heavily pushed by big tech companies, despite resistance in many areas. And while the ‘funny’ videos of The Office cast as babies or Joe Bloggs’ life in the style of Studio Ghibli may seem harmless – let’s save the debate on the damage to arts, culture, and the environment for another day – there is, unfortunately but somewhat inevitably, a much darker side to the groundbreaking tech.
AI is being used by cruel pranksters to give an air of legitimacy to social media pages spreading fake news about celebrities. Advances in the technology mean that sick posts about the deaths of celebrities’ children, or the health worries of other big names, are now accompanied by realistic-looking images.
This vile content has been seen from many such pages, with victims branding the fake posts “sickening” and experts bemoaning the “irreversible reputational and emotional damage” caused by these disgusting and dangerous accounts. One such post included a picture of Manchester City star Phil Foden in tears.
It was shared widely on social media claiming that one of the 25-year-old midfielder’s children had died, with the other battling cancer. None of the content was true, with images and captions digitally edited and falsified to create these vile claims.
Foden sought legal advice after these stories began circulating on social media and Rebecca Cooke, the mother of his kids, publicly spoke out against the cruel posts.
“We are aware of the pages and accounts spreading these stories,” she said. “They are completely false and very disturbing. I don’t understand how people can make these things up about anyone, especially children. It’s sickening.”
But this is not the only case of such fake news around celebrities. An investigation by the Manchester Evening News found dozens of social media pages, mostly on Facebook, using AI to generate false images of well-known celebrities and their children, covering all sorts of falsified stories, from deaths and health issues to relationship and pregnancy announcements.
Often using phrases like ‘verified information’ and ‘just confirmed’ to give their fake posts extra credence, some of the pages are racking up thousands of shares, likes, and comments from their output.
Meta, the company behind Facebook, says it has a range of policies to deal with misinformation on the platform, including adding warnings and context to content raised by fact-checkers and removing misinformation that ‘may contribute to imminent harm’.
But UK Celeb Update, a page with 42,000 followers, made a post about a popular TV presenter featuring an AI-generated image of her with a bald head and a caption reading: “Final Days of a Beloved Icon.”
It claimed that she is “bravely battling breast cancer” and in the “heartbreaking final chapters of her life”. The fake post had over 7,000 likes and garnered more than 1,700 comments from legitimate profiles offering well wishes to the celebrity.
The British A-List, a page with 26,000 followers, shared an AI image of a TV couple holding a baby scan, unveiling “the biggest secret of their lives”. But, again, it was fake – as the couple have not confirmed they are having a baby – and it still racked up thousands of likes.
Another social media post announced the engagement of two TV presenters, saying they had “finally addressed their bond”, despite them never having publicly confirmed they are even in a relationship.
One page, with 65,000 followers, has posted a number of AI-generated images about celebrities. False posts included one Man City player’s three-year-old daughter having a “health battle”, a teammate losing an unborn child, and a third player having a “top secret shock medical examination”. The club did not respond when contacted by the MEN.
Manchester mum Hannah O’Donoghue-Hobbs, founder of social media consultancy January 92, said the fake news pages cause “reputational and emotional damage” with their “emotionally manipulated content”.
She said: “From a social media expert perspective, pages like these are extremely problematic because they exploit trust, emotion and familiarity on a major scale. They’re deliberately designed to look credible, using AI generated imagery, recognisable faces, emotionally charged language and vague ‘breaking news’ framing, to stop people scrolling and trigger an instinctive reaction before critical thinking kicks in.
“The real harm isn’t just misinformation. It’s reputational and emotional damage. Announcing false terminal illnesses, deaths or pregnancies about real people is deeply distressing for the individuals involved and their families, and it erodes public trust more broadly.
“Once these posts gain traction, the damage is effectively irreversible. Even when they’re later removed or debunked, the original narrative often travels much further than the correction. Platforms like Facebook have a clear responsibility here. Their algorithms currently reward engagement above accuracy, meaning emotionally manipulative content is actively amplified.
“Moderation systems are reactive, slow and inconsistent, and reporting tools place the burden on users and public figures rather than preventing this content from spreading in the first place. At minimum, platforms need stronger detection of AI-generated images, clearer labelling, faster takedowns for impersonation and false medical claims, and meaningful penalties for repeat offenders, not just page removals that are quickly replaced with new ones.”
James Bore, a Chartered Security Professional, said that algorithms on social media sites like Facebook are built to prioritise engagement over truth, paving the way for misinformation.
He said: “Meta isn’t only taking no visible action on mis- and disinformation, their approach actively promotes and encourages it. The algorithm is built to prioritise engagement, not veracity, and as they regularly reduce fact checking and moderation capability – handing it over to automation – the situation continues to get worse.
“All of this is very much by design, not an accident. This isn’t just Meta; we have seen the same thing on many other platforms, and with AI able to generate disinformation at a scale never seen before with minimal human effort, we are only going to see the trend continue.”
Meta says it is “committed to fighting the spread of false information” and is making changes to the way digitally altered media is handled online. When contacted for a statement, Meta directed the MEN to an online page.
The platform said it was adding ‘AI info’ labels to video, audio and image content when it detects industry-standard AI image indicators or when people confirm that they’re uploading AI-generated content.
It also said pages that repeatedly share false information will see their “distribution reduced”, though several of the pages mentioned above have been posting regularly for months.
A statement online read: “Our intent has always been to help people know when they see content that was made with AI, and we’ve continued to work with companies across the industry to improve our labelling process so that labels on our platforms are more in line with people’s expectations.
“For content that we detect was only modified or edited by AI tools, we are moving the ‘AI info’ label to the post’s menu. We will still display the ‘AI info’ label for content we detect was generated by an AI tool and share whether the content is labelled because of industry-shared signals or because someone self-disclosed.”
A post about misinformation read: “When fact-checkers write articles with more information, you’ll see a notice where you can click to see why. Pages and websites that repeatedly share false information will see their distribution reduced and their ability to advertise removed.”