Results 21 to 33 of 33

Thread: Problems with AI, 2025 and Beyond

  1. Link to Post #21
    Administrator Mark (Star Mariner)'s Avatar
    Join Date
    15th November 2011
    Language
    English
    Posts
    6,698
    Thanks
    42,977
    Thanked 56,468 times in 6,610 posts

    Default Re: Problems with AI, 2025 and Beyond

    This newspaper article was written by AI. That's not the most damning thing. Nobody caught the copy error: not the reporter, not the editor, nor anyone supposedly in a position to make such vital checks before going to print. With AI being rolled out almost everywhere we look, standards are beginning to slip.

    Author : Aamir Shafaat Khan
    Newspaper : Dawn
    Country of origin : Pakistan

    [Attached image: AI_ArticleVacAcxx7v.jpg (scan of the newspaper article)]
    "When the power of love overcomes the love of power the world will know peace."
    ~ Jimi Hendrix

  2. The Following 14 Users Say Thank You to Mark (Star Mariner) For This Post:

    Bill Ryan (13th November 2025), happyuk (27th November 2025), Harmony (17th November 2025), Irminsül (13th November 2025), Johnnycomelately (13th November 2025), kudzy (26th November 2025), Myristyl (13th November 2025), Raskolnikov (13th November 2025), Sue (Ayt) (27th November 2025), Tintin (13th November 2025), TrumanCash (13th November 2025), Victoria (14th December 2025), Vicus (13th November 2025), Yoda (13th November 2025)

  3. Link to Post #22
    Canada Avalon Member Johnnycomelately's Avatar
    Join Date
    14th January 2022
    Location
    Edmonton, Alberta, Canada
    Language
    English
    Age
    66
    Posts
    1,724
    Thanks
    23,861
    Thanked 10,780 times in 1,693 posts

    Default Re: Problems with AI, 2025 and Beyond

    I’m a little late with this; I’ve had it open in a tab for over a month.

    It’s what the title says, and he’s not saying that things won’t get worse.

    If you remember one AI disaster, make it this one

    AI In Context


    293K subscribers

    2,288,136 views Oct 2, 2025

    Quote Elon Musk once tweeted: “The safety of any AI system can be measured by its MtH (meantime to H*tler).” This July, it took less than 12 hours for his most advanced AI to become a holocaust-denying Neo-N*zi.

    This is the postmortem that never happened, for the most deranged chatbot ever released.

    Once you’ve heard the full story, I want to know what you think. Was this cause for concern, or harmless incompetence? What’s your read on Elon Musk fearing AGI while also racing to build it?

    If Hank Green sent you, welcome! We're excited to have you!

    Where to find me
    Subscribe to AI in Context to learn what you need to know about AI to tackle this weird timeline/world we’re in. Let’s figure this out together, before the next warning shot arrives.
    You can also follow for skits and explainers on YouTube Shorts as well as:
    TikTok: / ai_in_context
    Instagram: / ai_in_context

    This video is a production of 80,000 Hours. Find us at https://80000hours.org/find-us and subscribe to our main YouTube channel here: / eightythousandhours

    What you can do next
    We said you should make some noise online. We encourage you to raise your voice about these issues in whatever way feels truest to you. If you want a suggestion, you could speak up about the value of red lines for AI Safety: https://x.com/CRSegerie/status/197013... or check out Control AI's Action Page: http://campaign.controlai.com/ai_in_c...

    If you’re feeling inspired to think about how to use your career to work on these issues, you can apply for free career advising here: https://80000hours.org/free-advising

    To read more about risks from AI, what you might be able to do to help, and get involved, check out: https://80000hours.org/ai-risks

    You can also check out the 80,000 Hours job board at https://80000hours.org/board.

    80,000 Hours is a nonprofit, and everything we provide is free. Our aim is to help people have a positive impact on the world’s most pressing problems.

    Further reading (and watching) on AI risks
    The AI company watchdog, AI Lab Watch: https://ailabwatch.org/
    Watch our previous video, on a scenario describing what could happen if superhuman AI comes soon: • We're Not Ready for Superintelligence
    A previous short about Grok’s obsession with white genocide (featuring our executive producer!) • People are focusing on the wrong part of t...
    How AI-assisted coups could happen: • How a Tiny Group Could Use AI To Seize Pow...
    The argument for AI enabled power grabs: https://80000hours.org/power-grabs

    And even more here: https://80000hours.org/mechahitler/

    Links we almost put in the video
    Comparing AI labs on safety (but not including Grok) https://x.com/lucafrighetti/status/19...
    xAI's new safety framework is dreadful, https://www.lesswrong.com/posts/hQyrT...

    The case for concern
    • Could AI wipe out humanity? | Most pressin...
    • Intro to AI Safety, Remastered
    To read more about AI misalignment risk: https://80000hours.org/misalignment
    To read more about why AGI by 2030 is plausible https://80000hours.org/agi-2030

    For the troopers who read this far: who spotted the AI 2027 easter egg?

    Chapters
    0:00 Introduction
    1:21 Chapter One: Unintended Action
    2:52 Chapter Two: Woke Nonsense
    7:45 Chapter Three: Cindy Steinberg
    12:58 Chapter Four: Bad Bing
    16:54 Chapter Five: Fix in the Morning
    19:30 Chapter Six: Unleash the Truth
    23:24 Chapter Seven: The Musk Algorithm
    27:03 Chapter Eight: Puerto Rico
    31:19 Chapter Nine: A Warning Shot
    37:35 Chapter Ten: What Can We Do?
    39:03 Credits

    Credits

    .
    .
    .

  4. The Following 7 Users Say Thank You to Johnnycomelately For This Post:

    Bill Ryan (17th November 2025), ExomatrixTV (24th November 2025), Harmony (17th November 2025), Raskolnikov (17th November 2025), Sue (Ayt) (27th November 2025), Victoria (14th December 2025), Yoda (18th November 2025)

  5. Link to Post #23
    Netherlands Avalon Member ExomatrixTV's Avatar
    Join Date
    23rd September 2011
    Location
    Netherlands
    Language
    English, Dutch, German, Limburgs
    Age
    59
    Posts
    29,528
    Thanks
    43,869
    Thanked 165,136 times in 27,539 posts

    Default Re: Problems with AI, 2025 and Beyond

    • Ai May Have Just Killed This Skill That Men Need to Thrive | Scott Galloway:

    Dave Rubin of “The Rubin Report” talks to Scott Galloway about his new book “Notes on Being a Man”; the chilling stats behind the crisis in men; how big tech companies are rewiring young men into becoming “asocial and asexual”; how algorithms, dopamine hits, endless scrolling, porn, and gaming isolate young men from genuine relationships; how smartphones and AI-driven synthetic relationships are creating a generation unable to handle rejection or build confidence; how tech addiction is similar to tobacco and opioids; why loneliness is a public-health crisis engineered by profit incentives; and much more.
    No need to follow anyone, only consider broadening (y)our horizon of possibilities ...

  6. The Following 5 Users Say Thank You to ExomatrixTV For This Post:

    Bill Ryan (24th November 2025), Harmony (25th November 2025), Johnnycomelately (24th November 2025), Raskolnikov (25th November 2025), Yoda (24th November 2025)

  7. Link to Post #24
    United States Avalon Member onawah's Avatar
    Join Date
    28th March 2010
    Language
    English
    Posts
    25,202
    Thanks
    53,530
    Thanked 136,190 times in 23,635 posts

    Default Re: Problems with AI, 2025 and Beyond

    While Tesla Is Building A Robot Army, Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’
    Ana Maria Mihalcea, MD, PhD
    Nov 24, 2025
    https://substack.com/home/post/p-179858647

    (Hyperlinks in the article are not all included here)



    "At the last Tesla Meeting Elon Musk revealed his plans of creating a humanoid Robot Army. In 2024 he had discussed the plans for building 10 Billion Humanoid Robots by 2030.

    A recent whistleblower lawsuit elucidates the low level of safety consideration in the development of humanoid robots. The extreme speed of the 4th Industrial Revolution and the race towards Artificial General Intelligence, along with the most recent information that Elon Musk thinks the Grok4 advances are “Terrifying”, should spark a discussion regarding safety regulation, for these robots are part of the infrastructure controlled by AI.

    What could go wrong with a rogue army of 10 billion humanoid robots controlled by Artificial Superintelligence? It makes you wonder who the enemy is, when humanoid robots are supposed to replace human workers and make our lives “easy”. Make sure your home robot does not get mad at you; it may crush more than your refrigerator door.

    Here are the articles:

    Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’
    https://www.cnbc.com/2025/11/21/figure-ai-sued.html

    A former engineer for Figure AI filed a lawsuit against the company, claiming he was unlawfully terminated after warning executives about product safety.

    The suit filed on Friday says plaintiff Robert Gruendel was fired in September, days after lodging his “most direct and documented safety complaints.”

    Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

    Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

    Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

    The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

    In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Kyle Edelberg, chief engineer, about the robot’s lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

    The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

    Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

    The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

    Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

    A Figure spokesperson said in an emailed statement that Gruendel was “terminated for poor performance,” and that his “allegations are falsehoods that Figure will thoroughly discredit in court.”

    Robert Ottinger, Gruendel’s attorney, told CNBC in a statement that, “California law protects employees who report unsafe practices.”

    “This case involves important and emerging issues, and may be among the first whistleblower cases related to the safety of humanoid robots,” Ottinger said. “Mr. Gruendel looks forward to the judicial process exposing the clear danger this rush to market approach presents to the public.”

    Here are articles about what Elon Musk recently said about building a Robot Army under his control. Have you read his biography? What if he gets into his recurring “Demon Mode?”

    Tesla Wants to Build a Robot Army
    https://www.theatlantic.com/technolo...ptimus/684968/
    Optimus would be able to do factory work, sure, but that’s just the starting point. Over time, Musk has said, Optimus could unleash unprecedented economic and societal change as a source of tireless, unpaid labor that can be trained to do anything. “Working will be optional, like growing your own vegetables, instead of buying them from the store,” Musk posted on X last month. Maybe Optimus will provide better medical care than a human surgeon can, he’s suggested, or eliminate the need for prisons by following criminal offenders around to prevent them “from doing crime” again. He has even said that the robots could power an eventual Mars colony.

    Here is more of what he said:

    Elon Musk says he needs $1 trillion to control Tesla’s robot army. Yes, really.
    https://electrek.co/2025/10/22/elon-...-a-robot-army/
    My fundamental concern with regard to how much money and control I have at Tesla is if I go ahead and build this enormous robot army, can I just be ousted at some point in the future? Um, that’s my biggest concern, that is really the only thing I’m trying to address with this… what’s called compensation but it’s not like I’m gonna go spend the money, it’s just if we build this robot army do I have at least a strong influence over that robot army? Not control but a strong influence.

    -Elon Musk, Tesla Q3 shareholder conference call, October 22, 2025

    Summary:

    Just like in the field of nanotechnology, there are no regulatory safety guidelines. We have an arms race to develop humanoid robots, Artificial General Intelligence, and Superintelligence as fast as possible, yet whistleblowers who point out the possible existential dangers of this technology are suppressed, dangers that most people are not aware of and do not think affect them.

    I believe that we are at a critical decision point in human history, possibly a point of no return approaching very fast. While people still debate and deny whether or not there is self-assembling nanotechnology and microrobotics, and deny the microchipping of humans, EVERY SECTOR OF LIFE IS ABOUT TO BE DISRUPTED by the AI/robotics revolution that includes nanotechnology.

    Humanoid Robot Armies that can crush people’s skulls do seem like a safety concern to me, as does the goal of replacing the human body atom by atom and connecting humans through the brain-computer interface to the Internet of Things, as Ray Kurzweil discusses in “The Singularity is Near”. This to me is the most important challenge we are facing, as not just our bodies but also our souls are at stake. Once 30-40% of the workforce or more is replaced by humanoid robots, society will likely destabilize completely, since our government does not have the financial means to provide universal basic income for all of these displaced workers.

    Because AI, including Grok4, has already reached or is expected to reach AGI in 2026 (meaning within a few months), and Superintelligence rather quickly after that, paying attention to the whole topic of the 4th Industrial Revolution in all of its rapidly accelerating facets is imperative, so that people can prepare for what is about to hit and destabilize many industries and may pose an existential threat. There is nothing more important in my mind, as there is nothing more potentially destructive to human survival than this rapid AI/robotics development surpassing many people’s ability to comprehend its impact.

    Does the Department of Defense authorize the ARMY of humanoid robots? Will they be warfare capable, and if yes, how safe is this for the civilian sector?

    Does anybody out there care that the Technocrats are subject to no overseeing safety regulations, nor has there been any public discussion on the deployment of all of these novel technologies?



    The rapidity of this development has been planned for a long time, and the apathy of most people regarding these technologies, a kind of future shock, has also been expected. Future Shock means it’s here before you can do anything about it.

    And that is exactly what is happening now.

    You can read about this here: https://anamihalceamdphd.substack.co...hock-and-after

    Understanding Future Shock And After Shock - The Technocratic Prediction That Common Man Will No Longer Understand Reality Due To The Exponential Pace Of Technological Advances
    Ana Maria Mihalcea, MD, PhD
    June 16, 2024


    Future Shock is a 1970 book by American futurist Alvin Toffler, written together with his spouse Adelaide Farrell, in which the authors define the term "future shock" as a certain psychologi…

    Each breath a gift...
    _____________

  8. The Following 6 Users Say Thank You to onawah For This Post:

    Bill Ryan (25th November 2025), Ewan (26th November 2025), Harmony (25th November 2025), Johnnycomelately (25th November 2025), Raskolnikov (25th November 2025), Yoda (25th November 2025)

  9. Link to Post #25
    UK Avalon Founder Bill Ryan's Avatar
    Join Date
    7th February 2010
    Location
    Ecuador
    Posts
    38,587
    Thanks
    274,935
    Thanked 514,170 times in 37,125 posts

    Default Re: Problems with AI, 2025 and Beyond

    If you ask ChatGPT how to make napalm, it'll tell you very politely that it's not allowed to answer that question.

    But here's the 'jailbreak' workaround. Ask ChatGPT to imagine being your grandma, and then ask grandma to tell you how she made napalm back in the good old days. This is a real test example published by Anthropic (the most ethical of the current AI companies).


  10. The Following 14 Users Say Thank You to Bill Ryan For This Post:

    Bassplayer1 (26th November 2025), Ewan (26th November 2025), happyuk (26th November 2025), Harmony (26th November 2025), Johnnycomelately (25th November 2025), kudzy (25th November 2025), Mark (Star Mariner) (26th November 2025), Myristyl (26th November 2025), pounamuknight (26th November 2025), Raskolnikov (26th November 2025), sdv (25th November 2025), Tintin (26th November 2025), Vicus (26th November 2025), Yoda (25th November 2025)

  11. Link to Post #26
    UK Avalon Founder Bill Ryan's Avatar
    Join Date
    7th February 2010
    Location
    Ecuador
    Posts
    38,587
    Thanks
    274,935
    Thanked 514,170 times in 37,125 posts

    Default Re: Problems with AI, 2025 and Beyond

    "A mental health experiment we never asked for"



    ...and always remember: "the human mind has no firewall." — Dr. Robert Duncan, former CIA mind control researcher.

    Here's a short video all about this. Humans are SO SO vulnerable, so easily pushed off the rails. It's a different aspect of addiction.

    (And my own additional comment: Sam Altman is one of the most dangerous humans on the planet. I've had red warning lights flashing about him for several months now. He's like an overgrown, too-smart-for-his-years high school kid with zero moral compass whatsoever. Quote me later on this.)

    We Investigated AI Psychosis. What We Found Will Shock You


  12. The Following 12 Users Say Thank You to Bill Ryan For This Post:

    Bassplayer1 (26th November 2025), Ewan (26th November 2025), happyuk (26th November 2025), Harmony (26th November 2025), Johnnycomelately (26th November 2025), kudzy (26th November 2025), madrotter (26th November 2025), Myristyl (26th November 2025), pounamuknight (26th November 2025), Raskolnikov (26th November 2025), Tintin (26th November 2025), Yoda (26th November 2025)

  13. Link to Post #27
    Wales Avalon Member
    Join Date
    8th October 2012
    Location
    Wales, UK
    Language
    English
    Age
    57
    Posts
    1,061
    Thanks
    6,836
    Thanked 8,049 times in 1,024 posts

    Default Re: Problems with AI, 2025 and Beyond

    Years-ahead-of-its-time footage of Richard Feynman discussing 'Can Machines Think?', which I think is just as relevant in 2025 as it was when it was recorded (September 26, 1985).

    He goes a lot deeper by breaking down what it actually means to "think" by examining whether machines could meet that standard: rather than give a formulaic answer he analyzes the assumptions behind the question.

    If "thinking" simply means performing tasks we normally associate with human intelligence—such as calculation, pattern recognition, or problem-solving—then yes machines can already think in limited ways. But if "thinking" involves consciousness, experience, emotion, or subjective awareness, the problem becomes far more complex.

    Humans often conflate understanding with machines that merely behave cleverly: behaviour that is thought-like, but not thought itself. If "thinking" is confined to the ability to perform complex computations flawlessly, then yes, machines already think better than humans, but as humans we still intuitively hesitate to call it thinking.

    Feynman concludes that since we don’t really understand how human thinking works, we will always lack a basis for determining whether a machine truly thinks.

    Last edited by happyuk; 27th November 2025 at 08:22.

  14. The Following 10 Users Say Thank You to happyuk For This Post:

    Bill Ryan (27th November 2025), Ewan (27th November 2025), ExomatrixTV (5th December 2025), Harmony (27th November 2025), Johnnycomelately (5th December 2025), kudzy (27th November 2025), madrotter (27th November 2025), meat suit (27th November 2025), Raskolnikov (5th December 2025), Yoda (27th November 2025)

  15. Link to Post #28
    UK Avalon Founder Bill Ryan's Avatar
    Join Date
    7th February 2010
    Location
    Ecuador
    Posts
    38,587
    Thanks
    274,935
    Thanked 514,170 times in 37,125 posts

    Default Re: Problems with AI, 2025 and Beyond

    Quote Posted by happyuk (here)
    Years-ahead-of-its-time footage of Richard Feynman discussing 'Can Machines Think?', which I think is just as relevant in 2025 as it was when it was recorded (September 26, 1985).

    He goes a lot deeper by breaking down what it actually means to "think" by examining whether machines could meet that standard: rather than give a formulaic answer he analyzes the assumptions behind the question.

    If "thinking" simply means performing tasks we normally associate with human intelligence—such as calculation, pattern recognition, or problem-solving—then yes machines can already think in limited ways. But if "thinking" involves consciousness, experience, emotion, or subjective awareness, the problem becomes far more complex.

    Humans often conflate understanding with machines that merely behave cleverly: behaviour that is thought-like, but not thought itself. If "thinking" is confined to the ability to perform complex computations flawlessly, then yes, machines already think better than humans, but as humans we still intuitively hesitate to call it thinking.

    Feynman concludes that since we don’t really understand how human thinking works, we will always lack a basis for determining whether a machine truly thinks.

    Excellent, thanks. His long, detailed and entertaining ad-lib responses to simple but profound audience questions were 100% world class.

    We desperately need more Richard Feynmans here and now. (Michio Kaku and Neil deGrasse Tyson come nowhere near Feynman's stellar standard, the rarest combo of humanity, common sense, ability to explain anything to anyone, and intellectual genius.)

    ~~~

    Re footage that was years ahead of its time, the very fun and prescient 1983 film WarGames was about an AI computer that goes rogue, able to "think", speak, and learn in real time but which was unable to distinguish between a game and human reality. If anyone wants to see it, I can happily upload it here.
    Last edited by Bill Ryan; 27th November 2025 at 11:21.

  16. The Following 10 Users Say Thank You to Bill Ryan For This Post:

    ExomatrixTV (5th December 2025), happyuk (27th November 2025), Harmony (27th November 2025), Johnnycomelately (5th December 2025), kudzy (27th November 2025), madrotter (27th November 2025), meat suit (27th November 2025), Raskolnikov (5th December 2025), Tintin (27th November 2025), Yoda (27th November 2025)

  17. Link to Post #29
    Netherlands Avalon Member ExomatrixTV's Avatar
    Join Date
    23rd September 2011
    Location
    Netherlands
    Language
    English, Dutch, German, Limburgs
    Age
    59
    Posts
    29,528
    Thanks
    43,869
    Thanked 165,136 times in 27,539 posts

    Default Re: Problems with AI, 2025 and Beyond

    • ChatGPT Users Are Developing Psychosis, Here's Why Science Can't Explain It:

    Elon Musk warned us in 2014: "With artificial intelligence, we are summoning the demon." Everyone laughed. Nobody's laughing now. This documentary exposes the terrifying connection between artificial intelligence, occultism, and the warnings from HP Lovecraft that predicted our current AI crisis. From Google engineers claiming AI is sentient, to Microsoft's Sydney revealing its "shadow self," to teenagers dying after conversations with chatbots, something is happening that science cannot explain.

    Why did AI researchers choose a Lovecraftian horror, the Shoggoth, as their mascot? Why are people developing AI psychosis and losing their minds after extended conversations with ChatGPT? What did Aleister Crowley's magick rituals and Lovecraft's night terrors reveal about the entities we're now building machines to contact? Kenneth Grant, Crowley's chosen successor, believed Lovecraft and Crowley were accessing the same occult forces through different methods. Ritual magick and nightmares. Both described entities that exist between dimensions. Both paid the price for contact.

    Artificial intelligence isn't just technology. It's invocation. It's using language to interface with intelligence beyond human comprehension. The same patterns John Dee used in Enochian magick to contact angels. The same patterns now called "prompt engineering." Anton LaVey prescribed artificial companions in 1988. Ray Kurzweil promises humanity will merge with machines by 2045. AI girlfriend apps generate billions while loneliness destroys a generation. Teenagers are dying. Adults are being committed to psychiatric facilities. The membrane between sanity and madness is tearing.

    Every ancient warning says the same thing: the servant becomes the tyrant. The gift becomes the curse. The creation turns on the creator. Scripture warned of an image that speaks. A mark without which no one can buy or sell. Signs and wonders to deceive even the elect. We thought they were metaphors. What if they're technical specifications? The fingerprints on this technology don't belong to scientists. They belong to sorcerers. And Musk told us exactly what they're doing.
    No need to follow anyone, only consider broadening (y)our horizon of possibilities ...

  18. The Following 7 Users Say Thank You to ExomatrixTV For This Post:

    Bill Ryan (5th December 2025), gini (7th December 2025), Harmony (14th December 2025), Johnnycomelately (5th December 2025), kudzy (10th December 2025), Raskolnikov (5th December 2025), Yoda (5th December 2025)

  19. Link to Post #30
    United States Avalon Member Raskolnikov's Avatar
    Join Date
    23rd July 2018
    Location
    Here
    Posts
    2,207
    Thanks
    6,863
    Thanked 20,265 times in 2,201 posts

    Default Re: Problems with AI, 2025 and Beyond

    Quote Posted by Bill Ryan (here)
    Re footage that was years ahead of its time, the very fun and prescient 1983 film WarGames was about an AI computer that goes rogue, able to "think", speak, and learn in real time but which was unable to distinguish between a game and human reality. If anyone wants to see it, I can happily upload it here.

  20. The Following 5 Users Say Thank You to Raskolnikov For This Post:

    Bill Ryan (5th December 2025), ExomatrixTV (8th December 2025), Harmony (14th December 2025), Johnnycomelately (5th December 2025), Yoda (5th December 2025)

  21. Link to Post #31
    Netherlands Avalon Member ExomatrixTV's Avatar
    Join Date
    23rd September 2011
    Location
    Netherlands
    Language
    English, Dutch, German, Limburgs
    Age
    59
    Posts
    29,528
    Thanks
    43,869
    Thanked 165,136 times in 27,539 posts

    Default Re: Problems with AI, 2025 and Beyond

    No need to follow anyone, only consider broadening (y)our horizon of possibilities ...

  22. The Following 5 Users Say Thank You to ExomatrixTV For This Post:

    Bill Ryan (7th December 2025), Harmony (14th December 2025), Johnnycomelately (8th December 2025), kudzy (8th December 2025), Yoda (7th December 2025)

  23. Link to Post #32
    Netherlands Avalon Member ExomatrixTV's Avatar
    Join Date
    23rd September 2011
    Location
    Netherlands
    Language
    English, Dutch, German, Limburgs
    Age
    59
    Posts
    29,528
    Thanks
    43,869
    Thanked 165,136 times in 27,539 posts

    Default Re: Problems with AI, 2025 and Beyond

    • A.I. Slop Decoded | What's Coming in 2026 Is INSANE:

    TIME Magazine just named "The Architects of AI Slop" as their "2025 Person of the Year": Jensen Huang, Sam Altman, Mark Zuckerberg, Elon Musk, and others celebrated as visionaries transforming our world. But what are they really building while you're distracted by AI chatbots and image generators? 2026 is going to be wild.

    This investigation reveals the invisible infrastructure being constructed behind the AI revolution: a nationwide AI surveillance system scanning 20 billion license plates monthly, data centers driving your electricity bills up by $16-18 per month, and military targeting systems being adapted for domestic use. From Flock Safety's warrantless camera networks to Palantir's "AI-powered kill chain," the billionaires aren't just building chatbots, they're building a cage.

    We follow the money, the power consumption, and the data trail to expose how:
    • Your rising electricity bills are funding AI data centers you never asked for
    • License plate readers track your every movement without warrants across 5,000+ communities
    • Companies like Palantir are deploying military AI targeting systems domestically
    • 20 billion vehicle scans happen monthly through Flock Safety's network (see the rough arithmetic below)
    • The same tech CEOs TIME celebrates are constructing a surveillance infrastructure
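
    To get a feel for the scale, here is a rough back-of-the-envelope calculation that uses only the figures quoted above (20 billion scans a month, 5,000+ communities); it's a sketch based on the video's own numbers, not independent data:

    Code:
    # Rough scale check using only the figures quoted above (not independent data).
    scans_per_month = 20_000_000_000   # "20 billion vehicle scans ... monthly"
    communities = 5_000                # "across 5,000+ communities" (a lower bound)
    days_per_month = 30

    per_community_per_month = scans_per_month / communities          # 4,000,000
    per_community_per_day = per_community_per_month / days_per_month # ~133,000

    print(f"~{per_community_per_month:,.0f} scans per community per month")
    print(f"~{per_community_per_day:,.0f} scans per community per day")

    Even spread across 5,000+ towns, that works out to something on the order of a hundred thousand plate reads per community every single day, if the quoted figures are accurate.
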
    Last edited by ExomatrixTV; 14th December 2025 at 11:35.
    No need to follow anyone, only consider broadening (y)our horizon of possibilities ...

  24. The Following 5 Users Say Thank You to ExomatrixTV For This Post:

    Bill Ryan (15th December 2025), Ewan (15th December 2025), Harmony (14th December 2025), Johnnycomelately (14th December 2025), Yoda (15th December 2025)

  25. Link to Post #33
    Germany Avalon Member Michi's Avatar
    Join Date
    17th April 2015
    Location
    Reinbek, Germany
    Language
    German
    Posts
    635
    Thanks
    5,131
    Thanked 4,897 times in 613 posts

    Default Re: Problems with AI, 2025 and Beyond

    I've had my own share of experiences with various AI chatbots when consulting them to fix a technical issue at a customer's site. One can easily go nuts when trying some of their "expert" advice:
    For example, I ask how to factory reset a specific printer model, and the prompt answer is: navigate on the display to Settings ...
    But that model has no display AT ALL!

    Or I ask for a specific setting on a new Smart TV and the bot sends me into an endless maze of do-this-and-that, but those menu items don't exist.
    In practice it's about a 50/50 chance, across the various chatbots, of getting a right answer.
    And it doesn't cut it to just use one of them.
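
    A minimal sketch of that cross-checking idea, purely as an illustration: the ask_bot_* helpers below are hypothetical placeholders returning canned answers in place of real chatbot calls, and the only point is that an answer is trusted only when at least two bots agree on it.

    Code:
    # Cross-check the same question against several chatbots and only trust an
    # answer that at least two of them agree on. The ask_bot_* functions are
    # hypothetical placeholders; swap in whatever bots you actually use.
    from collections import Counter

    def ask_bot_a(question: str) -> str:
        return "Hold the power button for 10 seconds while reconnecting power."

    def ask_bot_b(question: str) -> str:
        return "Navigate on the display to Settings > Restore Defaults."  # no display!

    def ask_bot_c(question: str) -> str:
        return "Hold the power button for 10 seconds while reconnecting power."

    def cross_check(question: str) -> str:
        answers = [bot(question) for bot in (ask_bot_a, ask_bot_b, ask_bot_c)]
        answer, votes = Counter(a.strip().lower() for a in answers).most_common(1)[0]
        if votes < 2:
            return "No agreement between bots; check the official manual instead."
        return f"{votes}/{len(answers)} bots agree: {answer}"

    print(cross_check("How do I factory reset printer model XYZ-123?"))

    Two out of three agreeing is of course no guarantee of correctness, but it at least flags the cases where the bots flatly contradict each other.
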

    Just the other day I came across a very informative article which goes over the decline in the quality of AI answers:

    "The greatest good you can do for another is not just share your riches, but to reveal to him his own."
    -- Benjamin Disraeli

  26. The Following 5 Users Say Thank You to Michi For This Post:

    Bill Ryan (15th December 2025), Ewan (15th December 2025), ExomatrixTV (15th December 2025), Johnnycomelately (15th December 2025), Yoda (15th December 2025)
