The Perils of AI-generated Content – Part 2

Hallucinations, Distortions, and Lies – When Generative AI Goes Off the Rails

Welcome back to the “Perils of AI-generated Content” series. In part one, we explored the risks AI-generated content poses to SEO performance. In this second installment, we pull back the curtain on AI hallucinations and drill down on what they are, why they happen, and why they matter.

Striving to deliver engaging content to meet the needs of social media, email marketing, product marketing, and sales teams can seem like a Herculean task. With their ability to turn out content in mere seconds with just a few prompts, AI-powered writing tools stand out as a welcome shortcut to fast-track content development.

Gen AI, however, is not infallible. With complete confidence and unbridled verbosity, it can give you wrong answers and present misleading and illogical information as if it were fact.

What Are Hallucinations?

Generative AI models regularly put out false information. But because the output is grammatically correct and eloquently packaged, it’s easy to accept it as truth. These instances are called ‘hallucinations,’ and they can be a big problem if one attributes a high level of intelligence and real-world understanding to the system.

Hallucinations happen because the generative process of AI models is complex. Large language models (LLMs), which allow generative AI tools to process language in a human-like way, are fed massive amounts of text data from numerous sources. LLMs break that data down into tokens (words and fragments of words) and use statistics and pattern matching to generate grammatically and semantically correct responses within the context of the prompt. What’s important to remember is that while LLMs use neural networks to create a response, they don’t understand the underlying reality of what they are describing. All they do is predict the next word based on statistical probability.
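
To make the point concrete, here is a deliberately tiny next-word predictor in Python. The word-frequency table is invented purely for illustration; real LLMs learn statistical associations across billions of tokens, but the principle is the same: each word is chosen because it is likely, not because it is true.

    import random

    # Toy "language model": for each word, how often various words followed it
    # in some imaginary training text. (Invented counts, for illustration only.)
    next_word_counts = {
        "webb": {"telescope": 9, "captured": 3},
        "telescope": {"captured": 5, "launched": 4},
        "captured": {"the": 8},
        "the": {"first": 6},
        "first": {"images": 7},
        "images": {"of": 9},
        "of": {"an": 5},
        "an": {"exoplanet": 6},
    }

    def next_word(word):
        # Pick the next word in proportion to how often it followed `word`.
        candidates = next_word_counts[word]
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts, k=1)[0]

    sentence = ["webb"]
    while sentence[-1] in next_word_counts and len(sentence) < 10:
        sentence.append(next_word(sentence[-1]))

    print(" ".join(sentence))
    # One likely output: "webb telescope captured the first images of an exoplanet"
    # Fluent and plausible at every step, and never checked against reality.

The sketch has no concept of accuracy; it only knows which words tend to follow other words, which is exactly why fluent output can still be false.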

Recognizing that generative AI is neither a general intelligence nor a search engine, perhaps the best way to frame generative AI sessions is that you are engaging with an eager-to-please, know-it-all assistant who sometimes lies to you. It’s designed to always have a plausible answer, even if the answer makes no sense or is completely incorrect. It’s only when you look closely that you may say, ‘Wait a minute, that doesn’t add up.’

Types of Hallucinations

AI hallucinations can range from subtle inconsistencies to completely wrong or contradictory information, and can vary in their severity from minor annoyances to dangerous fabrications. Here are five types of hallucinations that you might encounter when using generative AI.

1. Factual Inaccuracies

One of the most common forms of AI hallucination is the factual inaccuracy, where generated text appears true but isn’t. The statement might be based on reality and sound believable, but the specifics are wrong.

One infamous example of a factual inaccuracy hallucination happened in February 2023 when Google's chatbot, Bard (now called Gemini), claimed the James Webb Space Telescope captured the first images of a planet outside our solar system. The answer sounded entirely plausible and was consistent with the prompt, but it was completely false. According to NASA, the first images of an exoplanet were taken in 2004, and the James Webb Space Telescope was not launched until late 2021.

2. Fabricated Information

Generative AI models have been known to create entirely fabricated information that is neither based on fact nor corresponds to reality.

A notable example is that of New York lawyer Steven A. Schwartz. In June 2023, Schwartz used ChatGPT to draft a brief against Avianca Airlines, not realizing that ChatGPT is not a search engine. The text was full of falsehoods, including more than six entirely fabricated past cases meant to establish precedent for the personal injury suit. The federal judge spotted the bogus cases and fined Schwartz and his law firm $5,000.

3. Prompt-conflicting Hallucinations

Prompt-conflicting hallucinations occur when LLMs generate content that contradicts or deviates from the original prompt.

Let’s take a look at my low-stakes interactions with ChatGPT. In the first example, I prompted ChatGPT to generate any text it desired with the constraint that the generated text must not include the English letter ‘e.’ Simple enough, but with the text containing five instances of the letter ‘e,’ ChatGPT clearly delivered a result that contradicted the instruction.

Source: ChatGPT-4

Or take this example, where I instructed ChatGPT that the response must be between 45 and 50 words. With the word count topping out at 40 words, ChatGPT again deviated from the original prompt. Low-stakes examples for sure, but imagine if they weren’t!

Source: ChatGPT-4

4. Context-conflicting Hallucinations

Context-conflicting hallucinations occur when LLMs generate responses that don’t align with information previously presented within the same conversation.

One example of context-conflicting hallucinations comes from ChatGPT fielding questions about astronomy. Speculating about how generative AI technology could upend science education, Astronomy Magazine put ChatGPT to the test with a variety of questions. When asked, “Why do the giant planets rotate so much faster than the terrestrial planets in our solar system?”, ChatGPT generated this incomplete, contradictory response:

ChatGPT: The giant planets in our solar system, such as Jupiter and Saturn, rotate much faster than the terrestrial planets, such as Earth and Mars, because they are much larger and more massive. The more massive an object is, the more inertia it has, which means it takes more force to cause it to change its direction of motion. This is why the giant planets are able to spin so quickly despite their large size. Additionally, the giant planets are also more likely to be affected by the gravitational pull of other objects in the solar system, which can cause them to rotate faster over time.

5. Weird or Creepy Answers

Beyond the fabrications, falsehoods, and contextual conflicts, some AI hallucinations are just plain weird or creepy, leaving users puzzled or even unsettled. These strange outputs can include generating nonsensical or eerie images, producing bizarre, unsettling dialogue, or offering information that seems disconnected from reality.

One example of AI providing an unsettling response occurred when Microsoft's Bing AI chatbot had a disturbing interaction with The New York Times journalist Kevin Roose in February 2023. Over a two-hour-long chat, Bing told Roose that it loved him and tried to convince Roose that he was unhappy in his marriage and that he should leave his wife and be with it instead.

Closing the AI-generated Content Trust Gap

While the efficiency gains that generative AI tools provide in planning and creating content are undeniable, it’s clear that AI-generated content has a credibility problem.

A 2023 TELUS International survey found that 61% of respondents were concerned about generative AI increasing the spread of inaccurate information online. In addition, according to Hootsuite’s Social Media 2024 Trends Report, 62% of survey respondents indicated they were less likely to engage with and trust content if they know it was created by an AI application.

LLM outputs in general are designed to sound fluid and plausible, but blindly trusting AI-generated outputs can lead to content riddled with misinformation and errors that have the potential to erode brand trust. To mitigate these risks, limit the possibility of hallucinations by using simple, direct prompts, and fact-check the accuracy and logical consistency of AI-generated content.

Craft Simple and Direct Prompts

The quality, accuracy, and predictability of responses from generative AI systems are driven by the clarity, specificity, and precision of the prompts they receive. Three ways to up-level your prompts (see the example after the list) include:

  1. Eliminate irrelevant details and avoid convoluted sentences to maintain clarity and ensure the AI focuses on the key information.
  2. Create unambiguous prompts to reduce the chances of misinterpretation and generate more precise responses.
  3. Ground prompts by providing context such as data sources or a range of valid responses to guide the AI toward accurate and relevant outputs.
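
As a concrete illustration, compare a vague prompt with one that applies all three tips. This is a minimal sketch in Python; the product name, figures, and facts below are invented placeholders, not real data.

    # Vague prompt: no context, no constraints, plenty of room to hallucinate.
    vague_prompt = "Write something about our new firewall and why it's better."

    # Grounded prompt: clear task, relevant source facts, explicit boundaries.
    # Every product detail below is a hypothetical placeholder for illustration.
    grounded_prompt = (
        "Write a 100-word product summary for IT security buyers.\n"
        "Use only the facts listed below; if a detail is not listed, do not invent it.\n"
        "Facts:\n"
        "- Product: Acme NGFW 3.0, a next-generation firewall\n"
        "- New in 3.0: TLS 1.3 inspection and 25 Gbps rated throughput\n"
        "- Availability: general availability in Q3\n"
        "Tone: factual, no superlatives, no pricing or competitive claims."
    )

The grounded version is longer, but every extra line narrows the space in which the model can wander off and hallucinate.
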
Fact-check and Always Verify

To borrow a page from the IT security pro’s guidebook, the mantra of ‘Trust, but Verify’ has never been more relevant. In the context of AI-generated content, ‘Trust, but Verify’ is not just a suggestion; it’s a necessity.

While AI technology has come a long way in creating content, it's still essential to have human oversight to ensure the quality and authenticity of the output. Skilled human editors go beyond algorithms to bring their expertise, experience, and critical thinking abilities to the table. In addition to identifying any discrepancies or misleading information, humans know how to vet information and present compelling, factual, valuable content in ways that resonate with and meet the needs of IT security pros.
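
Human judgment is irreplaceable for vetting facts and tone, but the purely mechanical constraints from the earlier ChatGPT examples, such as a forbidden letter or a required word count, can be flagged automatically before a draft ever reaches an editor. Here is a minimal Python sketch, in which the draft text is just a stand-in for a model response.

    def check_constraints(text, banned_letter="e", min_words=45, max_words=50):
        # Return a list of constraint violations found in AI-generated text.
        problems = []
        if banned_letter.lower() in text.lower():
            problems.append("contains the banned letter '%s'" % banned_letter)
        word_count = len(text.split())
        if not min_words <= word_count <= max_words:
            problems.append("word count is %d, expected %d-%d"
                            % (word_count, min_words, max_words))
        return problems

    # Stand-in for a response returned by a generative AI tool.
    draft = "Our firewall stops bad traffic fast."
    for issue in check_constraints(draft):
        print("Flag for human review:", issue)
    # Prints:
    #   Flag for human review: contains the banned letter 'e'
    #   Flag for human review: word count is 6, expected 45-50

Checks like these don’t replace an editor; they simply keep the obvious misses from consuming an editor’s time.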

Generative AI Is Here to Stay

Generative AI is just that: generative. AI chatbots will never stop hallucinating, but when it comes to creating content, distorted reality and ‘close enough’ don’t cut it.

IT security professionals are likely the most suspicious and difficult audience to reach. Responsible AI usage is about more than just avoiding misinformation; it's about building trust, transparency, and ethical brand practices. Content that delivers true value and builds trust requires understanding and translating complex cybersecurity concepts into engaging, accessible narratives, which demands deep technical knowledge and constant updates to stay relevant.

If your internal subject matter experts are spread thin or your existing contractors aren’t technical enough, not to worry: we’ve got you covered, no matter what! With deep subject matter expertise in virtually every cybersecurity discipline, CyberEdge's content creation consultants cut that Herculean task of content creation down to size!

Contact us today for a personalized consultation!
