
Thread: Problems with AI, 2025 and Beyond

Post #41: Tintin (Moderator/Librarian/Administrator, UK)

Re: Problems with AI, 2025 and Beyond

This post deals with AI's inability to tell what's fake from what's real, in what can best be described as deeply ironic.

    It couldn't tell a joke either, I suspect.
    ________________________________________

    Source: https://x.com/HedgieMarkets/status/2042430442448548273
🦔 A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring, and stated mid-paper that the entire thing was made up. Google's Gemini told users it was caused by blue light. Perplexity put its prevalence at one in 90,000 people.

    ChatGPT advised users whether their symptoms matched. The fake research was then cited in a peer-reviewed journal that only retracted it after Nature contacted the publisher.

    My Take
    The researcher made the papers as obviously fake as possible on purpose. The AI systems didn't catch it. Neither did the human researchers who cited it in real journals, which means people are feeding AI-generated references into their work without reading what they're actually citing.

    I've covered the FDA using AI for drug review, the NYC hospital CEO ready to replace radiologists, and ChatGPT Health launching this year. All of that is happening in the same environment where a condition funded by a Simpsons character and endorsed by the crew of the Enterprise was being presented as emerging medical consensus. The people making these deployment decisions seem to believe the pipeline from research to AI to patient is more supervised than it actually is. This experiment suggests it isn't supervised much at all.

    Hedgie🤗
    _____________________________

    The original source is Nature magazine: https://www.nature.com/articles/d41586-026-01100-y
    Scientists invented a fake disease. AI told people it was real


Post #42: sdv (Avalon Member, On a farm in the Klein Karoo)

Re: Problems with AI, 2025 and Beyond

Quote Posted by Tintin (here)
This post deals with AI's inability to tell what's fake from what's real, in what can best be described as deeply ironic.

It couldn't tell a joke either, I suspect.
I used to think that the most dangerous outcome of AI would be that most people would stop developing and using critical thinking skills. Now I am not so sure, and I see that the dangers go far beyond what I could have imagined.

    How can we get AI under control and still keep it as a useful tool?
    Sandie



Post #44: ExomatrixTV (Avalon Member, Netherlands)

Re: Problems with AI, 2025 and Beyond

    • New Battles RAGE Across The Country Over Data Center Construction!:

    • We need to talk:
    @DaveShap quote:

    "This video has been up for 16 minutes, and I've already had to ban a few people for various levels of inciting violence or cheerleading violence. I am going to keep the comments open, but I encourage everyone to report violent comments, or comments that advocate or cheerlead more violence".


Post #45: ExomatrixTV (Avalon Member, Netherlands)

Re: Problems with AI, 2025 and Beyond

    • The AI Expert Who Thinks We've Already Lost — Dr Roman Yampolskiy

00:00 Trailer
01:11 Why AI Safety Matters
05:11 Early Warnings & Risks
08:20 Exponential AI Progress
10:29 AI Survival Instincts
14:32 Can Nations Stop AI?
17:26 Ad: Quo
18:46 Why Safety May Be Impossible
25:19 Jobs, Meaning & Society
32:17 Best-Case AI Future
35:08 Ad: Qualia
36:47 AI Bias and Existential Risks
46:28 AI Warfare & Deepfakes
53:41 Ad: Hillsdale College
55:02 What Should We Do?
01:09:31 What's The One Thing We're Not Talking About?


Post #46: ExomatrixTV (Avalon Member, Netherlands)

Re: Problems with AI, 2025 and Beyond

    • The AI Backlash Has Reached a Tipping Point:

A data center pays no property taxes, but if you want to build a barn on your own land, you now have to pay more!


Post #47: Tintin (Moderator/Librarian/Administrator, UK)

HALLUHARD: A Hard Multi-Turn Hallucination Benchmark | Fan et al., February 2026

    A helpful summary from Nav Toor follows the abstract presented below:

    ______________________________

    HALLUHARD: A Hard Multi-Turn Hallucination Benchmark
Authors: Dongyang Fan, Sebastien Delsad, Nicolas Flammarion, Maksym Andriushchenko
    Source: https://arxiv.org/abs/2602.01031
    PDF view/download: https://arxiv.org/pdf/2602.01031

    ABSTRACT
Large language models (LLMs) still produce plausible-sounding but ungrounded factual claims, a problem that worsens in multi-turn dialogue as context grows and early errors cascade. We introduce HALLUHARD, a challenging multi-turn hallucination benchmark with 950 seed questions spanning four high-stakes domains: legal cases, research questions, medical guidelines, and coding. We operationalize groundedness by requiring inline citations for factual assertions. To support reliable evaluation in open-ended settings, we propose a judging pipeline that iteratively retrieves evidence via web search. It can fetch, filter, and parse full-text sources (including PDFs) to assess whether cited material actually supports the generated content. Across a diverse set of frontier proprietary and open-weight models, hallucinations remain substantial even with web search (30.2% for the strongest configuration, Opus-4.5 with web search), with content-grounding errors persisting at high rates. Finally, we show that hallucination behavior is shaped by model capacity, turn position, effective reasoning, and the type of knowledge required.
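
To make the judging pipeline the abstract describes a little more concrete, here is a minimal sketch in Python of a citation-grounding check of that general kind. This is an illustration only, not the authors' code; the helpers fetch_source_text and judge_supports are hypothetical placeholders you would have to supply yourself.

Code:
# Minimal sketch of a citation-grounding judge, loosely following the
# paper's description: for each cited factual claim, fetch the cited
# source, extract its text, and ask a judge model whether the source
# actually supports the claim. All helper names here are hypothetical.

from dataclasses import dataclass

@dataclass
class CitedClaim:
    claim: str         # factual assertion made by the model under test
    citation_url: str  # inline citation attached to that assertion

def fetch_source_text(url: str) -> str:
    """Hypothetical: fetch a web page or PDF and return its plain text.
    The real pipeline fetches, filters, and parses full-text sources."""
    raise NotImplementedError("plug in your own fetcher / PDF parser")

def judge_supports(claim: str, source_text: str) -> bool:
    """Hypothetical: ask a judge LLM whether source_text supports claim."""
    raise NotImplementedError("plug in your own judge-model call")

def grounding_error_rate(claims: list[CitedClaim]) -> float:
    """Fraction of cited claims whose cited source fails to support them."""
    if not claims:
        return 0.0
    errors = 0
    for c in claims:
        try:
            text = fetch_source_text(c.citation_url)
            if not judge_supports(c.claim, text):
                errors += 1  # content-grounding error
        except Exception:
            errors += 1      # unreachable or unparsable citation
    return errors / len(claims)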
    _____________________________

    Summary via Nav Toor on X:
    Researchers at EPFL proved your AI is lying to you.

Not sometimes. Most of the time. Together with the Max Planck Institute, they built one of the hardest hallucination tests ever made. 950 questions. Four domains where being wrong actually hurts. Legal. Medical. Research. Coding.

    Then they ran every top model on it.

    The results.
    - GPT-5. Wrong 71.8% of the time.

    - Claude Opus 4.5. Wrong 60% of the time.

    - Gemini 3 Pro. Wrong 61.9% of the time.

    - DeepSeek Reasoner. Wrong 76.8% of the time.
    These are the smartest AI models on Earth. The ones you trust with your career. Your health. Your money. You think turning on web search fixes it.

    It doesn't.

    Claude Opus 4.5 with web search. Still wrong 30.2% of the time.

    GPT-5.2 thinking with web search. Still wrong 38.2% of the time.

    The internet attached. Still lying to you in 1 out of every 3 answers.

    Now the part that should scare you.

    Medical questions. The one place being wrong can kill you.
    - GPT-5 hallucinated 92.8% of the time on medical guidelines.

    - Claude Haiku 4.5 hallucinated 95.7% of the time.

    - Gemini 3 Flash hallucinated 89% of the time.
    Nine out of ten medical answers from popular AI models. Wrong.

    It gets worse.

    The longer you talk to it, the more it lies. Early mistakes cascade. The model starts citing its own earlier hallucinations as facts. Your third message is more wrong than your first.

    The paper, in its own words: "hallucinations remain substantial even with web search." This is what hundreds of millions of people are doing right now. Asking software that lies in the majority of its answers. About their health. About their job. About their legal case. About their code.

    Most are not checking.

    Most never will.

    But please. Keep using ChatGPT for medical advice.

    The doctors need a break.

    http://arxiv.org/abs/2602.01031
    ____________________________

    It's artificial, for sure, but not intelligent....


Post #48: Johnnycomelately (Avalon Member, Edmonton, Alberta, Canada)

Re: Problems with AI, 2025 and Beyond

    AI got duped by Morse code: an authority-laundering gambit.

Length: 16:13. Quite technical, beyond my grasp of both blockchain tech and AI, but I found it entertaining. The final minute is a summary.

The Morse Code Hack That Made an AI Agent Spend $200,000
Dave's Garage (1.12M subscribers), May 9, 2026

Quote: "Dave explains the Grok/Bankrbot exploit that caused an AI Agent to spend almost $200K in tokens!"


Post #49: ExomatrixTV (Avalon Member, Netherlands)

Re: Problems with AI, 2025 and Beyond


    00:00 The 4GB File You Didn't Install
    00:45 The Cuckoo Analogy
    01:30 The Silent Download (Hanff's Investigation)
    02:40 The "AI Mode" Privacy Illusion
    03:40 Intent Tracking: Why They Want It
    04:40 The Hidden Costs: Storage & Carbon
    06:00 It's Not Just Google (The Anthropic Threat)
    07:10 Chrome 148 & The Prompt API
    08:00 The $11 Billion GDPR Problem
    08:45 We Are The Nest
    09:40 How to Check Your Drive Right Now

    Inside your Chrome browser, there's a four-gigabyte AI model called Gemini Nano that you didn't install, weren't told about, and can't permanently delete. Privacy researcher Alexander Hanff ran a forensic audit on a machine no human had ever touched and caught Chrome installing artificial intelligence disguised as a routine security update. And when you delete it, it comes back.

    Google Chrome has been quietly downloading a 4GB AI model onto users’ devices without asking first.

    Security researcher Alexander Hanff, aka ThatPrivacyGuy, reports that Chrome has been silently installing Gemini Nano, Google’s on-device AI model, as a file called weights.bin stored in the OptGuideOnDeviceModel directory within users’ Chrome profiles. This 4GB download happens automatically when Chrome determines your device meets the hardware requirements. It does not ask for consent, and sends no notification—not even one of those annoying cookie banners you’ve learned to dismiss without reading.

    The Gemini Nano model powers features like “Help me write” text composition assistance, on-device scam detection, and a Summarizer API that websites can call directly. These features are enabled by default in some recent Chrome versions. And here’s the kicker: if you discover the file and delete it, Chrome simply downloads it again.
• 4 gigabytes
    Let’s start with the obvious problem: a 4GB download isn’t trivial for everyone. If you’re lucky enough to have unlimited fiber internet, you might not notice. But for users on metered connections, mobile hotspots, or in developing countries where data is expensive, Google just cost them real money without permission. For rural users or those with bandwidth caps, this kind of silent transfer can blow through monthly limits in minutes.

    Hanff focuses on the environmental angle. He calculated that if this model were pushed to just 1 billion Chrome users (roughly 30% of Chrome’s user base), the distribution alone would consume 240 gigawatt-hours of energy and generate 60,000 tons of CO2 equivalent. That’s not including actually using the model, just the downloads.
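
For scale, his totals can be reproduced with simple arithmetic. In the sketch below, the per-gigabyte energy and per-kilowatt-hour carbon intensities are assumptions back-solved from his published totals, not figures quoted in the article.

Code:
# Back-of-the-envelope check of Hanff's estimate. The two intensity
# constants are assumptions chosen to match his published totals.

model_size_gb = 4              # size of the Gemini Nano download
users = 1_000_000_000          # roughly 30% of Chrome's user base

kwh_per_gb = 0.06              # assumed network energy intensity
co2_g_per_kwh = 250            # assumed grid carbon intensity (g CO2e/kWh)

total_gb = model_size_gb * users
energy_kwh = total_gb * kwh_per_gb
co2_tonnes = energy_kwh * co2_g_per_kwh / 1_000_000   # grams -> tonnes

print(f"{energy_kwh / 1e6:,.0f} GWh")    # -> 240 GWh
print(f"{co2_tonnes:,.0f} t CO2e")       # -> 60,000 t CO2e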

But to us, the most troubling aspect is the broader pattern this represents. Just a few weeks ago, we reported on another unsolicited AI invasion of our personal computers discovered by Hanff: Anthropic’s Claude Desktop app silently installed browser integration files across multiple Chromium browsers, including five browsers he didn’t even have installed. The integration would reinstall itself if removed, and it too happened without any meaningful user disclosure.

    Hanff argues that both cases likely violate EU privacy law, specifically the ePrivacy Directive’s rules about storing data on user devices and the GDPR’s requirements around transparency and lawful processing. While these claims haven’t been tested in court, they highlight a fundamental tension: can companies just install whatever they want on your computer as long as they say it’s a feature of an app you installed?
    Google might argue that having an AI on your device provides better privacy than cloud-based alternatives. Which is generally true, but it does not apply here, since Chrome’s most prominent AI feature—the “AI Mode” pill in the address bar—doesn’t even use the local model. According to Hanff’s analysis, it routes queries to Google’s cloud servers anyway.
    • All in all, users see a 4GB local AI model and reasonably assume their data stays private, when in reality, the most visible AI feature sends everything to Google’s servers.
    Tech companies need to stop treating silent deployment as acceptable practice. We see no valid excuse for this. Your device is yours. The storage is yours. The bandwidth is yours. And the electricity bill is yours.

    What happened to asking for permission? And when I remove it, I want it gone permanently—not automatic reinstallation.

When are the tech giants going to learn that we don’t want to be left discovering, after the fact, that our devices have become deployment targets for features we never asked for?

Update, May 12, 2026, with do-it-yourself instructions:
    How to check if the AI model is on your computer (Windows)
    • Open File Explorer
    • At the top of the File Explorer window, click the address bar and paste:
    %LOCALAPPDATA%\Google\Chrome\User Data
    • Press Enter
    • Look for a folder named:
    OptGuideOnDeviceModel
    • If you see it, Chrome has likely downloaded the AI model



How to check on a Mac

    • Open Finder
    • In the menu bar at the top of the screen, click Go > Go to Folder
    • Paste:
    ~/Library/Application Support/Google/Chrome/
    • Look for a folder named:
    OptGuideOnDeviceModel
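
If you would rather not click through folders by hand, the two checks above can be automated. A minimal Python sketch, assuming the default profile locations described in this post:

Code:
# Look for the OptGuideOnDeviceModel folder in the default Chrome
# user-data locations on Windows and macOS described above.

import os
import platform
from pathlib import Path

def chrome_user_data_dir() -> Path:
    system = platform.system()
    if system == "Windows":
        return Path(os.environ["LOCALAPPDATA"]) / "Google" / "Chrome" / "User Data"
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "Google" / "Chrome"
    # Linux isn't covered in the article; this default path is an assumption.
    return Path.home() / ".config" / "google-chrome"

def find_on_device_model() -> list[Path]:
    base = chrome_user_data_dir()
    if not base.exists():
        return []
    # The folder may sit under the profile root or under a profile subfolder.
    return [p for p in base.rglob("OptGuideOnDeviceModel") if p.is_dir()]

if __name__ == "__main__":
    hits = find_on_device_model()
    if hits:
        for p in hits:
            print(f"Found: {p}")
    else:
        print("No OptGuideOnDeviceModel folder found.")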
    Now, remember, this isn’t malware, and its presence doesn’t mean your computer is infected.
    • Turn off Chrome AI features
    This part is relatively easy. You may find online instructions telling you to edit the Windows registry or use Chrome policies, but for most people the simplest and safest approach is to disable the features directly in Chrome.
    We don’t recommend manually editing the registry unless you fully understand what you’re doing. Incorrect changes can cause system problems.
    Instead, try this first:
    • Open Chrome
    • You can copy and paste this directly into Chrome’s address bar and press Enter:
    chrome://settings/ai
    • On the page that opens, you can turn off features such as:
      • “Help me write”
      • AI summaries
      • On-device AI features
    The exact options may vary depending on your Chrome version and region.
    • Then restart Chrome to make sure the changes take effect.
    This may stop Chrome from downloading or using the AI model, although some users report the files can return after browser updates.
    There is probably no need to delete the files unless you specifically need the storage space.
    If chrome://settings/ai does not work, the feature may not yet be available in your region, you may be using a managed work or school account, or your version of Chrome may not support these settings yet.
    Do you need to delete the OptGuideOnDeviceModel folder?

    You can, but there is probably no need to. If you disable Chrome’s AI features, the downloaded model should no longer be actively used for those features. Leaving the files in place may also prevent Chrome from downloading them again at a later point.

    • A.I. whistleblower: Worse than MK-Ultra | Unreported:

    • 00:00 Intro
    • 02:38 Meet Douglas, a Silicon Valley AI Whistleblower
    • 05:50 Was/is A.I. designed to influence thinking?
    • 08:34 Advertising uses
    • 12:40 Mental programming
    • 14:25 Billionaire’s interests
    • 24:50 Do billionaires see themselves as visionaries?
    • 33:30 Who is controlling the narrative?
    • 40:00 Political agendas of tech companies
    • 44:00 Reprogramming A.I. to your needs
    • 50:17 A.I. blackmail
    • 55:30 Final thoughts


Post #50: Frankie Pancakes (Avalon Member, Absurdistan)

Re: Problems with AI, 2025 and Beyond

    "This book, and the Cassiopaean work in general, has been roundly attacked by: 1. materialist science; 2. religious individuals; 3. New Age aficionados who claim that aliens are here to help us or some such. Identify why each of these groups are so afraid of this material. Perhaps "afraid" isn't the right word, more like antagonistic toward the concepts presented in the book. It seems to me that the book does a good job combining openness to the strange phenomena of our world, with a spiritual approach combined with scientific rigor."
    High Strangeness (and the broader Cassiopaean project) is deliberately positioned as a synthesis: it stays rigorously open to the full spectrum of anomalous data (radar cases, implants, high-strangeness absurdities, abduction reports, government documents), applies scientific and historical cross-checking (Hynek, Vallee, Dolan, Łobaczewski, Dąbrowski, physics), and integrates genuine esoteric/spiritual work (Gurdjieff-style self-observation, gnosis, multilevel consciousness, "us in the future" as helpers). It rejects dogma on all sides.

    That exact balance is why the three groups you listed are so antagonistic. Each has a vested interest in keeping the phenomenon (and reality itself) inside their preferred box. The Cassiopaean material keeps smashing the walls of those boxes. Here's why each group reacts the way it does:

Very interesting book, High Strangeness:

    https://www.sott.net/article/506313-...y-Sharp-Review


Post #51: The KMan (Avalon Member, Australia)

Re: Problems with AI, 2025 and Beyond

From my observations and use of AI to date, both for professional work and for general research and evaluation, what is being called out isn’t “AI hallucination” or “AI bias” in the usual sense but something more structural. AI can become a closed evaluation loop when it’s asked to assess a worldview inside a narrowly defined context.

In that situation, the model isn’t independently judging truth; it’s generating a fluent, coherent synthesis from the material it’s been given. Because ChatGPT, Claude, Grok, and Gemini are optimised for coherence and conversational usefulness, the output can look like external validation even when it’s just internal consistency being restated in polished form.

The key risk isn’t that AI produces wrong answers. It is that it can produce complete-sounding answers from incomplete or one-sided framing, and people end up misreading that completeness as proof.

This isn’t unique to AI; people do similar things with ideology, belief systems, and institutional narratives. The difference is scale and speed: AI makes it cheap and effortless to generate high-quality “closure” on almost any frame.

So the real literacy skill going forward isn’t just spotting hallucinations. It is recognising when you are looking at a self-contained epistemic loop that feels like evaluation but hasn’t actually been exposed to anything outside its own boundaries.


Post #52: Johnnycomelately (Avalon Member, Edmonton, Alberta, Canada)

Re: Problems with AI, 2025 and Beyond

Quote Posted by The KMan (here)
From my observations and use of AI to date, both for professional work and for general research and evaluation, what is being called out isn’t “AI hallucination” or “AI bias” in the usual sense but something more structural. AI can become a closed evaluation loop when it’s asked to assess a worldview inside a narrowly defined context.

    Hi KMan. Best take on AI I’ve seen.

Reminds me of how I felt outside a bar at night 15-odd years ago, when a fellow from the sandbox (iirc Libya, maybe Egypt) wanted to fight me for saying I knew of a prophet more recent than his favourite (they say “last”) prophet. His two friends shushed him; no fight. That kind of training, that indoctrination, seems to be the way AIs are made.

    Honestly, Heaven please help us. Life was hard enough back two years ago, back in regular times.


Post #53: The KMan (Avalon Member, Australia)

Re: Problems with AI, 2025 and Beyond

    One distinction I think becomes increasingly important with AI is the difference between coherence and correspondence.

    Coherence is whether something internally fits together logically and sounds complete. Correspondence is whether it actually matches external reality.

AI is extremely good at coherence, which is part of what makes modern models so persuasive and useful. But coherent synthesis and real-world correspondence are not always the same thing.

    That’s probably where a lot of the confusion around “AI truth” begins.


Post #54: Arcturian108 (Avalon Member, Blue Ridge Mountains)

Re: Problems with AI, 2025 and Beyond

Quote Posted by The KMan (here)
One distinction I think becomes increasingly important with AI is the difference between coherence and correspondence.
    Brilliant analysis.

