Business Fortitude

    AI's Misquotes Are Redefining Brands. Companies Must Adapt Now.

By David Adams · 6 min read
    • More than a quarter of UK adults now use generative AI weekly to seek information, rising to 40 per cent among younger demographics
    • Major news organisations including the BBC are now blocking AI scrapers over unresolved payment disputes
    • AI systems work by predicting the next most statistically likely word in a sequence—accuracy is a secondary consideration
    • When asked about Marks & Spencer, ChatGPT confidently cited a 17-year-old Guardian article alongside reporting from the Scottish Sun, treating both as equally current

Companies are losing control of their reputations to machines they cannot see, challenge, or correct. Your brand is being defined not by the stories you tell or the campaigns you run, but by the statistically probable next word generated by a Large Language Model that may be drawing on Wikipedia edits, archived Reddit threads, or page 196 of an annual report from 2015. Each query is a moment where perception is shaped by sources companies can neither control nor even identify.

    The systems work by predicting the next most statistically likely word in a sequence. Accuracy is a secondary consideration. If outdated content or an incorrectly edited Wikipedia page sits in the right corner of the internet, the model will present it as authoritative fact. The confidence never wavers.
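The mechanism described here can be illustrated with a deliberately crude sketch. Real LLMs use neural networks over billions of parameters rather than the frequency table below, but the principle is the same: the output is the statistically likely continuation of the text the model has seen, not a verified fact.

```python
from collections import Counter, defaultdict

# Toy illustration only: pick the most statistically likely next word
# from observed word-pair frequencies. Whatever appeared most often in
# the training text wins, however outdated or wrong it was.
def build_model(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    # Return the most frequent continuation, with total "confidence".
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the brand is trusted the brand is failing the brand is trusted"
model = build_model(corpus)
print(predict_next(model, "is"))  # "trusted" wins purely on frequency
```

If the dominant text in the corpus happens to be a 17-year-old article, that is what the model reproduces, with the same unwavering confidence.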


    The phantom information economy

    Anna Fishlock, head of digital at H/Advisors, describes the impact precisely: these AI-generated impressions become the first reference point for journalists working to deadline, investors scanning a sector, candidates deciding whether to apply for a job. The systems rarely cite their sources clearly, which means brands may never discover which ancient blog post or obscure forum comment is shaping decisions worth millions.


    Matt Rogerson, the Financial Times's head of public policy, has seen the damage firsthand. He cites an instance where an LLM assembled fragments from multiple sources to generate what appeared to be a share buy recommendation attributed to Investors Chronicle. The publication had issued no such recommendation. To an uncritical reader scrolling through search results, it looked entirely plausible.

    Whether you like it or not, you're in the model.

    Rogerson regularly encounters investment analysis falsely attributed to real FT journalists, cobbled together from genuine commentary but distorted in ways that change meaning entirely. The reputational damage flows not to the AI companies producing these fabrications, but to the individuals and brands whose names are borrowed without permission.

    What makes this particularly treacherous is the narrowing information base these systems draw from. Major news organisations including the BBC are now blocking AI scrapers over unresolved payment disputes. According to Roa Powell from the IPPR think tank, neither Microsoft's Copilot nor Google's Gemini can access BBC content. The Guardian, by contrast, has become ChatGPT's dominant source by a considerable margin.


    As licensing walls rise, models default to whichever outlets have signed commercial deals or remain available to scrape. A handful of publishers become disproportionately influential whilst the absence of others creates blind spots in AI-generated knowledge. That vacuum is exploitable. Propagandists can seed narratives specifically designed to appear in AI outputs, knowing vast audiences will encounter them through systems that present information with unwavering confidence regardless of provenance.

    Regulation will arrive too late

    Andrew Griffith MP, the UK's shadow business and trade minister, draws uncomfortable parallels with social media. Lawmakers are still attempting to regulate platforms that emerged two decades ago. Given AI's development pace, he argues, regulators will not offer meaningful day-to-day protection for organisations any time soon. Meaningful intervention will come only after major AI-fuelled crises have already inflicted damage.

    This assessment tracks with historical precedent. Social media platforms operated in a regulatory vacuum for more than a decade whilst concerns about misinformation, data privacy and platform power mounted. By the time serious legislation arrived, the damage to democratic discourse, mental health and individual privacy had been done. AI appears set to follow the same trajectory, but compressed into a much shorter timeframe.

    What was once a proactive discipline—crafting narratives, building relationships with journalists, managing crises when they emerged—has become a defensive guessing game against invisible systems that operate without accountability.

    Businesses cannot wait for regulatory salvation that may arrive in five or ten years, if at all. The risks must be managed immediately, with resources organisations may not have allocated and expertise they likely haven't built. It represents a fundamental shift in how corporate communications functions.

    Building radars, not predictions

    Companies must start by understanding how they currently appear in AI systems. Wikipedia entries, despite their vulnerability to manipulation and error, carry disproportionate weight in LLM training data. Small inaccuracies on high-visibility pages can metastasise into systemic misrepresentation. Keeping corporate websites updated with structured, accurate information increases the likelihood that AI tools surface correct material, though it offers no guarantees.
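One concrete form of "structured, accurate information" is schema.org markup embedded in a corporate website as JSON-LD, which crawlers, including AI scrapers, can parse without guessing. A minimal sketch, assuming a hypothetical company (the names and URL below are placeholders, not real data):

```python
import json

# Minimal sketch: emit schema.org "Organization" markup as JSON-LD,
# a common way to publish machine-readable facts about a company.
# All company details here are hypothetical placeholders.
def organization_jsonld(name, url, description):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "description": description,
    }, indent=2)

markup = organization_jsonld(
    "Example Co",
    "https://example.com",
    "Current, accurate one-line description of the business.",
)
print(markup)  # typically embedded in a <script type="application/ld+json"> tag
```

This guarantees nothing about what a model will say, but it puts a current, unambiguous statement of the facts where scrapers are most likely to find it.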


    Economist Roger Bootle, speaking on the broader implications of AI adoption, advocates investing in "radars" rather than predictions. The right response is developing capability to monitor how your organisation appears in AI outputs, interrogate distortions when they emerge, and adapt quickly. Bootle strikes a note of cautious optimism, pointing to historical precedent: when spreadsheets arrived, doom-mongers predicted the end of accounting as a profession. Instead, the number of accountants in the United States surged in subsequent years.
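Bootle's "radar" can start very simply: regularly ask AI assistants the questions your stakeholders ask, and flag answers containing claims you know to be false. A minimal sketch; `query_ai_assistant` is a stand-in for whichever API or tooling an organisation actually uses, and the flagged phrases are illustrative:

```python
# Sketch of a reputational "radar": ask AI assistants about your
# organisation and flag answers containing known inaccuracies.
# `query_ai_assistant` below is a stand-in, not a real API.
KNOWN_INACCURACIES = [
    "share buy recommendation",  # e.g. a recommendation never issued
    "ceased trading",
]

def scan_answer(answer, inaccuracies=KNOWN_INACCURACIES):
    """Return the list of flagged phrases found in an AI answer."""
    text = answer.lower()
    return [phrase for phrase in inaccuracies if phrase in text]

def radar(query_ai_assistant, prompts):
    """Run each prompt through the assistant and collect alerts."""
    alerts = {}
    for prompt in prompts:
        flags = scan_answer(query_ai_assistant(prompt))
        if flags:
            alerts[prompt] = flags
    return alerts

# Demonstration with a stubbed assistant standing in for a real call:
stub = lambda prompt: "Analysts issued a share buy recommendation."
print(radar(stub, ["What is Example Co's outlook?"]))
```

The value is not the string matching, which any team would refine, but the habit: systematic monitoring rather than waiting to be told by a journalist or investor.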

    Griffith makes a similar point, recalling the prophets of gloom who believed broadcast news would spiral into chaos when ITV launched in 1955 to compete with the BBC. The system adjusted.

    Whether that historical pattern holds for AI depends on factors beyond any single company's control—regulatory frameworks that may or may not materialise, commercial agreements between AI developers and publishers, the evolution of the technology itself. What companies can control is their own preparedness. Those who invest now in understanding how AI systems represent them will fare better than those who wait for clarity that may never arrive.

    The uncomfortable truth is that your company's reputation is already being written by AI, presented to audiences who increasingly trust these systems as authoritative sources. You cannot opt out. You can only decide how seriously you take the threat. As research on the reputation risks of AI misquotes demonstrates, when AI-generated content misrepresents your business, the damage extends far beyond a simple factual error—it fundamentally reshapes how stakeholders perceive your organisation. Companies discovering how AI has mischaracterised their brand are finding that correction is neither simple nor guaranteed.

    • Invest in monitoring systems now to understand how AI platforms currently represent your organisation—waiting for regulatory clarity may mean waiting too long
    • Focus on maintaining accurate, structured information on high-authority platforms like Wikipedia and your corporate website, as these sources carry disproportionate weight in AI training data
    • Treat AI reputation management as a fundamental communications function rather than an optional add-on—your brand is already being defined by these systems whether you engage with them or not
David Adams

    Co-Founder

    Former COO at Venntro Media Group with 13+ years scaling SaaS and dating platforms. Now founding partner at Lucennio Consultancy, focused on GTM automation and AI-powered revenue systems. Co-founder of Business Fortitude, dedicated to giving entrepreneurs the news and insight they need.


