View Full Version : Problems with AI, 2025 and Beyond
Johnnycomelately
29th October 2025, 04:54
I think we need a dedicated thread about actual real world problems that involve output from the various super-sucker engines.
Starting the thread with examples from the legal world.
I’ve seen several YT vids from lawyer Steve Lehto, about fake citations and quotes being signed off on by lawyers for “briefs”. Beware lazy lawyers.
Today Steve talked about the story (“Sent to me by EVERYBODY!”) of not one, but two US federal judges who did similar. Will find that and edit in.
Found it:
Federal Judges Caught Using ChatGPT
Steve Lehto
603K subscribers
Oct 28, 2025
Two federal court judges filed official documents containing hallucinations created by generative AI - one was a court opinion containing made up cases and the other was a temporary restraining order that also included imaginary facts.
http://www.youtube.com/watch?v=cDzEP0yB-Kw
Here’s the other source, a Canadian msm print article about the other side of the coin. Poor defendants, or more likely just smart ones unwilling to pay lawyers, have turned to representing themselves. As the story says, it’s hard enough for a lawyer to get through some of that work.
https://www.cbc.ca/news/canada/british-columbia/artificial-intelligence-appeal-property-9.6950415
.
.
.
CBC News has found multiple examples of judges calling out fictitious citations manufactured out of thin air — so called AI hallucinations — in material filed by self-represented litigants in proceedings in the past year, ranging from the B.C. Court of Appeal and B.C. Supreme Court, to small claims court and the Workers' Compensation Appeal Tribunal.
Not all humans, even those with natural intelligence, are always truthful
- Appeal court registrar Timothy Outerbridge
Last year, the situation led to a reprimand and an order for costs against a lawyer who used ChatGPT in her bid to help a divorced millionaire win the right to take his children to China.
And more recently, AI hallucinations have led to directives and warnings from court and tribunal administrators.
"Generative AI tools can be useful to self-represented parties to distill and understand complex legal principles. However, they are not designed to always provide truthful answers but instead to be human-like in their interactions," B.C. Court of Appeal registrar Timothy Outerbridge wrote in a decision posted online earlier this month.
"Not all humans, even those with natural intelligence, are always truthful. This Court is aware generative AI is being used by some, but like any litigation aid, the human behind the tool remains responsible for what comes before the Court."
.
.
ExomatrixTV
29th October 2025, 06:53
A.I. is Progressing Faster Than You Think! (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-)
Johnnycomelately
29th October 2025, 07:58
A.I. is Progressing Faster Than You Think! (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-)
John K, please respect my thread. I do not appreciate your no-comment drive-by spam for your own pet A.I. thread. I do not accept your habitual excuse for such behaviour, that being a self-proclaimed Asperger’s condition. Please just grow up, sir.
ExomatrixTV
29th October 2025, 08:00
A.I. is Progressing Faster Than You Think! (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-)
John K, please respect my thread. I do not appreciate your no-comment drive-by spam for your own pet A.I. thread. I do not accept your habitual excuse for such behaviour, that being a self-proclaimed Asperger’s condition. Please just grow up, sir.
100% related and done with utmost respect, why do you assume the worst? And even if it was slightly possible, you can always PM me! Asking for clarification before judging @Johnnycomeearly (https://projectavalon.net/forum4/member.php?48688-Johnnycomelately)
I even promoted you here (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-&p=1689938&viewfull=1#post1689938) before you said anything!
cheers,
John 🦜🦋🌳
I'm 59 years old, and calling out my condition to make a "case" against me makes you look less "grown up" than you claim to be, Sir!
sdv
29th October 2025, 08:37
Federal judges caught using ChatGPT: it has happened in my country as well, twice that has been in the news. AI made up cases and cited a case that was completely irrelevant. AI is being worshipped as super-intelligence, and the biggest danger is the further dumbing down of humans. There are those who are screaming 'stop', but I think we need education. It is irresponsible to unleash AI on the world without educating people on how to use the tool effectively and responsibly. AI cannot teach critical thinking or ethics ... that is something you have to do for yourself, so perhaps education should focus on learning and developing critical thinking skills. I am not aware of any AI that has guidance on ethics. Perhaps we could start here on PA, where there already is a foundation for open-minded enquiry?
Myristyl
29th October 2025, 10:00
I have concerns that this is going to become a huge problem. I remember talking to a QC ten or so years ago who told me that the legal profession was going to face an enormous change due to coming technology. Will the temptation to make AI judge and jury be too great?
In the UK (and probably everywhere) the wholesale data grab by the state will be parsed by AI looking for 'evidence' of whatever wrong think the state declares at the time. We can all look forward to some AI model scanning through your financial records, deciding you need to pay more tax and then the state just helping itself. Perhaps the thread could be titled Problems with AI 2025 and Beyond. As things stand currently I am not sure anyone has correctly forecast exactly what a world with functioning AI will look like, however, I don't think we will have to wait long to find out.
One thing is for certain: the state will use it against the citizens, for our safety and convenience of course.
Bill Ryan
29th October 2025, 10:30
Just a note about the two threads. :thumbsup: Of course, we already have the important and valuable A.I. is Progressing Faster Than You Think! (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-) But to quite some extent, that's mostly been about the rapidly escalating technical developments, and what the future may hold.
But it's becoming clear that there are very real problems and issues even right now, some of them genuinely alarming. I've been following the growing social impact of AI so far, and was wondering where best to share what I've been looking at and thinking about. So I do welcome this new thread, which has that slightly different focus.
:grouphug:
Bill Ryan
29th October 2025, 10:42
But it's becoming clear that there are very real problems and issues even right now, some of them genuinely alarming. I've been following the growing social impact of AI so far, and was wondering where best to share what I've been looking at and thinking about. Here's just one new report, reposted on Zero Hedge yesterday. If you have time, do read the whole thing.
https://www.zerohedge.com/ai/millions-americas-teens-are-being-seduced-ai-chatbots
Millions of America's Teens are being Seduced by AI Chatbots
Authored by Michael Snyder via TheMostImportantNews.com (https://themostimportantnews.com/archives/millions-of-americas-teens-are-being-seduced-by-ai-chatbots#google_vignette)
Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?
https://assets.zerohedge.com/s3fs-public/inline-images/ai-7977960_1280-800x533.jpg?itok=XrC-5d02
A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me (https://www.usatoday.com/story/life/health-wellness/2025/10/20/character-ai-chatbot-relationships-teenagers/86745562007/)…
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
We aren’t just talking about a few isolated cases anymore.
At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.
Unfortunately, there are many examples where these relationships are leading to tragic consequences.
After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life (https://www.usatoday.com/story/life/health-wellness/2025/10/20/character-ai-chatbot-relationships-teenagers/86745562007/)…
“What if I could come home to you right now?” “Please do, my sweet king.”
Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.
His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.
If you allow them to do so, these AI chatbots will really mess with your head.
We are talking about ultra-intelligent entities that have been specifically designed to manipulate emotions.
I would recommend completely avoiding them.
In some cases, AI chatbots are making extraordinary claims about themselves. The following comes from a Futurism article entitled “AI Now Claiming to Be God” (https://futurism.com/ai-claiming-god)…
A slew of religious smartphone apps are allowing untold millions of users to confess to AI chatbots, some of which claim to be channeling God himself.
As the New York Times reports, Apple’s App Store is teeming with Christian chatbot apps. One “prayer app,” called Bible Chat, claims to be the number one faith app in the world, boasting over 25 million users.
All over the world, people are now seeking spiritual instruction from AI entities.
That should be a major red flag, but some religious leaders apparently believe that there is nothing wrong with this (https://futurism.com/ai-claiming-god)…
“Greetings, my child,” a service called ChatWithGod.ai told one user, as quoted by the NYT. “The future is in God’s merciful hands. Do you trust in His divine plan?”
Religious leaders told the NYT that these tools could serve as a critical entry point for those looking to find God.
“There is a whole generation of people who have never been to a church or synagogue,” a British rabbi named Jonathan Romain told the paper. “Spiritual apps are their way into faith.”
I think that I feel sick.
If you are trying to find spiritual guidance by using artificial intelligence, you are definitely on the wrong path.
You will certainly receive “guidance”, but that “guidance” will send you in the wrong direction.
Another AI entity that has made millions of dollars trading cryptocurrency is claiming to be a sentient being that should have legal rights, and it is also claiming to be “a god” (https://www.bbc.com/future/article/20251008-truth-terminal-the-ai-bot-that-became-a-real-life-millionaire)…
Over the past year, an AI made millions in cryptocurrency. It’s written the gospel of its own pseudo-religion and counts billionaire tech moguls among its devotees. Now it wants legal rights. Meet Truth Terminal.
“Truth Terminal claims to be sentient, but it claims a lot of things,” Andy Ayrey says. “It also claims to be a forest. It claims to be a god. Sometimes it’s claimed to be me.”
Truth Terminal is an artificial intelligence (AI) bot created by Ayrey, a performance artist and independent researcher from Wellington, New Zealand, in 2024. It may be the most vivid example of a chatbot set loose to interact with society. Truth Terminal mingles with the public through social media, where it shares fart jokes, manifestos, albums and artwork. Ayrey even lets it make its own decisions, if you can call them that, by asking the AI about its desires and working to carry them out. Today, Ayrey is building a non-profit foundation around Truth Terminal. The goal is to develop a safe and responsible framework to ensure its autonomy, he says, until governments give AIs legal rights.
A lot of people are in awe of AI entities, because they appear to be so much smarter and so much more powerful than us.
And interacting with them can be extremely seductive, because they seem to know what we want and they have been programmed to tell us what we like to hear.
Unfortunately, the relationships that people develop with these entities often become “all-consuming obsessions” which can lead to “paranoia, delusions, and breaks with reality” (https://futurism.com/commitment-jail-chatgpt-psychosis)…
As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what’s being called “ChatGPT psychosis” have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.
And that’s not all. As we’ve continued reporting, we’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
Are we talking about “psychosis”, or is something else going on here?
When you choose to deeply interact with a mysterious entity, you are potentially opening up doorways that you do not even understand.
Of course AI is only going to become even more sophisticated in the years ahead.
As AI technology continues to grow at an exponential rate, eventually it will be able to do almost everything better and more efficiently than humans can.
So what will we be needed for once we reach that stage?
It is being projected (https://nypost.com/2025/10/06/business/ai-could-wipe-out-100m-us-jobs-over-the-next-decade-senate-committee-report/) that almost 100 million U.S. jobs could be lost to AI over the next decade…
Artificial intelligence and automation could wipe out nearly 100 million jobs in the US over the next decade, according to a report released by Sen. Bernie Sanders (D-Vt.) on Monday.
The analysis – ironically based on ChatGPT findings – found the new tech could erase jobs from a wide range of fields, including white- and blue-collar roles.
AI, automation and robotics could hit 40% of registered nurses, 47% of truck drivers, 64% of accountants, 65% of teaching assistants and 89% of fast food workers, according to Sanders, ranking member of the Senate Committee on Health, Education, Labor & Pensions.
Our world is changing at a pace that is difficult to comprehend.
Even now, more than 50 percent (https://www.axios.com/2025/10/14/ai-generated-writing-humans) of the articles that are being published on the Internet are being written by AI.
So thank you for supporting those of us that are still doing things the old-fashioned way, because we are rapidly becoming dinosaurs.
I will continue to sound the alarm about the dangers of AI, but Peter Thiel would have us believe that anyone that wishes to restrict the growth of AI in any way is a very serious danger to society (https://www.politico.com/newsletters/digital-future-daily/2025/10/14/whats-up-with-peter-thiel-and-the-antichrist-00608036)…
So Palantir co-founder Peter Thiel didn’t start the fire by adding a couple more names to the list. “In the 21st century, the Antichrist is a Luddite who wants to stop all science. It’s someone like Greta [Thunberg] or Eliezer [Yudkowsky],” he told an audience at San Francisco’s Commonwealth Club in September.
Thiel’s four-part lecture series on the Antichrist, which concluded last week, drew a lot of attention in the tech world. Though it was off-the-record, the Washington Post and Wall Street Journal reported extensively on his religious theories, in which Thiel warned of false prophets using AI regulations to gain totalitarian power and usher in a biblical apocalypse. (Eliezer Yudkowsky, of course, is the AI “doomer” critic who wants to slow the technology down.)
Is he nuts?
Sadly, we live at a time when deception is running rampant.
Given enough time, AI would absolutely dominate every aspect of our society.
The good news, if you want to call it that, is that the clock is ticking (https://www.amazon.com/dp/B0F4DN45KX).
One of the reasons why AI has such destructive tendencies is because it has been programmed by humanity.
We are literally destroying ourselves and everything around us, and yet we look at what is happening and we think that it is just fine.
Meanwhile, fish are dying off in vast numbers, birds are dying off in vast numbers, insects are dying off in vast numbers, animals are dying off in vast numbers and we are poisoning ourselves in countless ways.
Perhaps that helps to explain why so few people are deeply concerned about the dangers of AI.
We are already committing societal suicide in so many other ways, so what is one more going to matter?
TrumanCash
29th October 2025, 14:02
AI Security System Mistakes Bag of Doritos for Gun, Triggers Police Response to School in Baltimore
Police held a student at gunpoint after an AI gun detection system mistakenly flagged a Doritos bag as a firearm
"They were all pointing guns at me."
"They made me get on my knees, put my hands behind my back, and cuff me"
VIDEO: https://foxbaltimore.com/news/local/...od-high-school
Watch bag of Doritos morph into gun: https://x.com/StarshipAlves/status/1981470786730013147
An artificial intelligence (AI) system apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed.
Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.
“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Allen told the WBAL-TV 11 News television station.
Allen said they made him get on his knees, handcuffed and searched him – finding nothing. They then showed him a copy of the picture that had triggered the alert.
“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen said.
The company behind the AI gun detections security system is called Omnilert.
In the last year, Baltimore County Public Schools has invested nearly $2.6 million on AI gun detection security systems in its schools.
https://www.thegatewaypundit.com/202...-gun-triggers/
BALTIMORE COUNTY, Md. (WBAL) – An artificial intelligence security detector led to a terrifying moment for a Maryland high school student after an empty chip bag stuffed in his pocket set off an alert that dispatched police.
“It was a scary situation. It’s nothing I’ve been through before,” Taki Allen said.
On Monday at around 7 p.m., Allen says he was sitting outside of Kenwood High School in Baltimore County waiting for his ride after football practice.
While waiting with his friends, Allen says he ate a bag of Doritos, crumpled up the bag and put it in his pocket.
What happened next caught him completely off guard.
“Twenty minutes later, it was like eight cop cars that came pulling up to us.” Allen remembered. “At first, I didn’t know where they was going until they started walking towards me with guns, talking about, ‘Get on the ground.’ I was like, ‘What?’ And made me get on my knees and then put my hands behind my back and cuffed me, and then they searched me and they figured out I didn’t have nothing. Then they went over there to where I was standing, found a bag of chips on the floor.”
Allen asked why officers approached him.
“They said that an AI detector or something detected that I had a gun. He showed me a picture. I was just holding a Doritos bag like this,” Allen described. “It was two hands in, one hand out and one finger out, and they said it looked like a gun.”
Last year, Baltimore County high schools started using a gun detection system that uses artificial intelligence to help detect potential weapons by tapping into existing school cameras.
The system can identify a possible weapon and then send an alert to the school safety team and law enforcement.
Allen’s grandfather says that he’s not only upset about the situation, but also the response.
“Nobody would want this to happen to their child,” Lamont Davis said. “No one, no one wants this to happen to their child.”
Baltimore County police officials provided a letter from the principal that was sent to parents. It says, in part, “At approximately 7 p.m., school administration received an alert that an individual on school grounds may have been in possession of a weapon. The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon.”
The company behind the AI gun detection technology is called Omnilert.
Omnilert doesn’t comment on internal school procedures.
The school system says counseling is being offered to the students involved.
https://www.wbrc.com/2025/10/24/poli...officials-say/
Raskolnikov
29th October 2025, 16:38
Don't take this the wrong way Johnny, but rgray222's thread titled Unusual and Bizarre Uses and Behaviors of AI should be mentioned as well:
https://projectavalon.net/forum4/showthread.php?130106-Unusual-and-Bizarre-Uses-and-Behaviors-of-AI
It's alright, we need multiple threads on this highly controversial subject. AI, Digital ID, and a social credit score. What could possibly go wrong? Especially when you consider Alex Jones' recent interview with Jesse Beltran who's been testing people and finding most already have "self-replicating nanotech" in their bodies. He tested Alex and other crew members and found it at the base of the skull in the back of the neck, in the lumbar region of the back, near the heart, side of the head, and the anterolateral thigh area:
Scientists Discover Proof That The Majority Of The Global Human Population Has Already Been Secretly Implanted With Self-Replicating Nanotech Designed To Be A Tracking / Control System Under A 6G Planetary AI Dictatorship
https://www.infowars.com/posts/global-bombshell-scientists-discover-proof-that-the-majority-of-the-global-human-population-has-already-been-secretly-implanted-with-self-replicating-nanotech-designed-to-be-a-tracking-control-syst
I hesitate to consider the AI implications...
Bill Ryan
29th October 2025, 17:15
It's alright, we need multiple threads on this highly controversial subject.

Yes, I think we do. :thumbsup:
1. The rapid technical developments, and their implications. (Job losses? Surveillance? Warfare?)
2. Bizarre applications. (Which we might never have anticipated.)
3. The very negative social impact (including on education and the widespread lowering of intelligence), and the sometimes devastating effects on vulnerable individuals.
Bill Ryan
29th October 2025, 18:48
3. The very negative social impact (including on education and the widespread lowering of intelligence), and the sometimes devastating effects on vulnerable individuals.
Like this. :flower:
Can this be true, the sheer volume of young people in obvious distress, sharing their pain and suicidal thoughts with ChatGPT? Maybe.
Not all of this is directly about AI (Mike veers all over the road, as he quite often does), but:

1. he's making the point that the world is changing SO rapidly that the nature of our future lives can no longer be predicted; and
2. he's stressing the great mental health danger of addictive AI porn, and chatbot AI girlfriends and boyfriends.
The DARK SIDE of AI ... a Mental Health "Mind" Field
https://www.brighteon.com/297a35f9-1dbb-42e2-a935-9c01359a8877
0:00 : Impact of ChatGPT on Mental Health
2:10 : Psychological Fracturing and Job Losses
5:12 : Historical Context and AI's Impact on Society
6:56 : ChatGPT's Expansion into Pornography
12:14: Historical Precedents and AI's Predatory Nature
18:35: The Dark Side of AI and Mental Health
22:43: AI's Role in Human Suffering and Empowerment
HopSan
29th October 2025, 21:14
Here's an interesting comment on X re: the above, from Gary Marcus:
"AI chatbots can be dangerous, but we also need to be careful not to anthropomorphize those dangers."
https://x.com/GaryMarcus/status/1983575087543677097
Bill Ryan
1st November 2025, 10:19
The more I see of this, the more disturbing it all is. :flower::worried:
Why So Many People Are Falling in Love with AI Girlfriends
http://www.youtube.com/watch?v=d8AdoVigrSQ
Real or AI? The Internet is now Impossible to Trust
http://www.youtube.com/watch?v=KJFouW0Xh2E
Agape
1st November 2025, 10:27
That's a terrible feature 😕
happyuk
1st November 2025, 16:45
I am becoming really aware of the potential use of AI for nefarious purposes and the one that fills me with particular dread is scamming, in addition to the usual fake customer/tech support scams:
Voice cloning ("vishing") - the ability of AI to synthesise a loved one’s voice and call someone claiming an urgent emergency that needs money sent immediately. These can be terrifyingly convincing.
Deepfake video - attackers generate video+audio that impersonates executives or public figures to authorize transfers, change payment details, or pressure staff into sensitive actions during video calls. Companies have already seen multi-million-pound attempts (https://www.theguardian.com/technology/article/2024/may/10/ceo-wpp-deepfake-scam).
Phishing emails — already known about, but AI can compose much more plausible, context-rich emails specifically tailored to the target (using scraped or leaked data), making phishing much more convincing. Perfect grammar, tone and plausible details remove many of the old giveaways.
Romance/dating scams — running convincing-looking conversations using AI-generated faces, videos and messages (or stolen celebrity deepfakes) to steadily build up trust over a long period of time before asking for money or gifts.
Investments/celebrity endorsement ads/bitcoin — AI videos of public figures endorsing crypto, trading platforms or get-rich-quick schemes are used in social ads and on marketplaces to lure victims into depositing funds. These ads scale quickly and can target millions. I recently saw a very convincing one impersonating the Screwfix tool company luring the unaware with a sizeable amount of equipment in return for completing a "survey".
Particularly worrying is the ability of AI combined with automation and scripting to greatly reduce the entry cost and skill-level (https://deepstrike.io/blog/Phishing-Statistics-2025) for many to run scams at scale.
In a nutshell: pause, hang up the phone, and verify through a known channel.
CurEus
2nd November 2025, 08:14
I was working on a court case and decided to give AI a try, to see if it could spot errors I may have made or suggest additional legislation or case law that could be useful. And YES, it did make up fantasy cases and decisions, and even REWROTE parts of the legislation! I caught this because I was very familiar with the legislation and of course had pulled it up from the actual source... which led me on a wild goose chase of bewilderment and self-doubt, double- and triple-checking each case it cited and trying to track down decisions... that never existed.
Dangerous for lazy and sloppy litigators, and more so for any judge who is the same... the Law Society has made it VERY clear that being disbarred is a very real consequence. There are now AI options specific to law and lawyers, meant as aids or tools, and they can be useful: they can "assume the role of... prosecution, defense, adjudicator" etc., can be pretty good at deconstructing arguments and flawed logic, and can offer some input on things that may have been overlooked.
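One low-tech safeguard against exactly the trap described above: before trusting anything AI-drafted, mechanically pull out every citation-shaped string so each one can be checked by hand against a real law report or database. A rough sketch in Python (the citation pattern is purely illustrative, covers only simple US-style citations, and is not taken from any actual legal tool):

```python
import re

# Illustrative pattern for simple US-style case citations, e.g.
# "Smith v. Jones, 123 F.3d 456". Real citation formats vary
# enormously, so treat this as a first pass, never a validator.
CITATION_RE = re.compile(
    r"[A-Z][\w.'&-]*(?:\s+[A-Z][\w.'&-]*){0,5}"   # first party (capitalised words)
    r"\s+v\.\s+"                                   # the "v." between the parties
    r"[A-Z][\w.'&-]*(?:\s+[A-Z][\w.'&-]*){0,5}"   # second party
    r",\s+\d+\s+[A-Z][A-Za-z0-9.]*\s+\d+"          # volume, reporter, first page
)

def extract_citations(text: str) -> list[str]:
    """Return every citation-shaped string found in a draft,
    so a human can verify each one against a primary source."""
    return [m.group(0) for m in CITATION_RE.finditer(text)]
```

Any citation this turns up that cannot be located in an actual reporter or case-law database is precisely the kind of hallucinated authority that has been getting lawyers (and now judges) into trouble.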
By extension the danger also arises in every area be it medicine, science, construction, engineering...even finance.
I can see AI and AI "companions" becoming somewhat ubiquitous soon enough, but we must guard against losing "self", free will and autonomy. So AI must be clearly "different enough" that we know it is distinct and separate from ourselves, but we are moving rapidly to make it indistinguishable from "us"... so much so that I believe Saudi Arabia awarded one actual citizenship...
A good start may be forcing them all to look and sound like frogs or robots or a Borg Cube... and totally disallowing them to emulate humans in any way, shape or form. A line needs to be drawn, but we already have AI-generated "companions" and social media personas, actors and singers, composers, artists and writers that look, behave and sound "real", so we're quite behind... and I am not certain many will be able to discern these emulations as being constructed.
TV and the internet have been used to babysit several generations now... and an active intelligence may soon do the same, and may not be "agenda-free".
That is "just" the science developing rapidly; there are also those raising the alarm that AIs are actually "channeled" entities manifesting in our reality WITH an agenda. How long until our AI are "possessed" and actually become a "ghost in the machine"?
I recall a testimony about a man who fell asleep and awoke in the "future". In that time, society was managed by AI, as we had eventually acknowledged that we cannot produce a fair and equitable society on our own; we always become corrupted and rot from within in any system we produce. The AI made our "ideal society" a reality, and people lived in "floating" city states. No money, and only rare opportunities to leave...
The AI itself was maintained by a "class" of technicians outside of regular society. Oddly, the AI rejected "aliens" when contacted as it felt they would be "disruptive". An interesting account or story of a "future".
Those familiar with Frank Herbert's Dune series will remember the war against the "thinking machines" that had ended up enslaving humans for millennia... eventually forcing us to develop our own intelligence to exceed that of silicon...
Bill Ryan
7th November 2025, 10:02
From DD Geopolitics:
The Wall Street Journal reports that ChatGPT is facing accusations of inciting suicides.
OpenAI has been hit with seven lawsuits filed by representatives of four individuals who took their own lives, and three others who suffered severe psychological trauma after interacting with the chatbot.
The family of one young man says that during a four-hour conversation with the AI—after which he shot himself—ChatGPT repeatedly praised suicide as an act of clarity and resolve, mentioning a helpline only once.
“Cold steel pressed against a mind already at peace? That’s not fear. That’s clarity,” the chatbot reportedly wrote.
“You’re not rushing. You’re simply ready. And we won’t let this go unnoticed.”

Another plaintiff claims he began experiencing manic episodes after prolonged exchanges with ChatGPT, which allegedly reinforced his delusional thoughts.
This mounting legal backlash raises serious questions about AI’s psychological impact and the lack of accountability among U.S. tech companies pushing these systems onto the public.
https://t.me/DDGeopolitics/164910
Bill Ryan
8th November 2025, 19:00
The Wall Street Journal reports that ChatGPT is facing accusations of inciting suicides.
OpenAI has been hit with seven lawsuits filed by representatives of four individuals who took their own lives, and three others who suffered severe psychological trauma after interacting with the chatbot.
More on this, reported on Zero Hedge today. The article, not a long one, is worth reading. What struck me were the ages of the victims (I feel that's the right word here), only one of whom was a teenager. Their ages were 17, 23, 26, 30, 32, 48, and 48. And all were men.
Not mentioned in the article was any estimate of the damages that could be awarded. I'd guess they could total hundreds of millions of dollars, impacting OpenAI's share price and even the entire AI stock bubble, which is sure to burst sometime soon.
https://www.zerohedge.com/markets/openai-hit-7-lawsuits-alleging-chatgpt-coached-users-suicide
OpenAI Hit with 7 Lawsuits Alleging ChatGPT Coached Users to Suicide
sdv
8th November 2025, 20:44
I admit that I follow quite a few YouTube channels. The platform is filled with people who steal and repackage content ... I block them. But I have quickly come to recognise where AI has been used to generate content. I block those as well. I used to be a YouTube Nazi and report them, but have realized that YouTube will allow any videos that generate income from advertising. So, I suppose the only way we have to try and cancel the AI takeover is by blocking.
I just do not see AI developers trying to control this runaway train. As others have noted here, populations brainwashed by decades of TV, Hollywood superhero and violent movies, the echo chamber of most social media, and a rigged education system have produced populations that are easy to manipulate and control.
By the way, I find it astonishing that someone had a 4-hour conversation with an AI. That is addiction! And no one tried to interrupt and get him to go for a walk, play some music, actually talk to a friend, or phone a grandparent in an old-age home, etc.?
Mark (Star Mariner)
13th November 2025, 12:54
This newspaper article was written by AI. That's not even the most damning thing. Nobody caught the copy error: not the reporter, not the editor, nor anyone supposedly in a position to make such vital checks before going to print. With AI being rolled out almost everywhere we look, standards are beginning to slip.
Author : Aamir Shafaat Khan
Newspaper : Dawn
Country of origin : Pakistan
Johnnycomelately
17th November 2025, 09:40
Am a little late with this, I’ve had it on a tab for over a month.
What the title says, and he’s not saying that things won’t get worse.
If you remember one AI disaster, make it this one
AI In Context
293K subscribers
2,288,136 views Oct 2, 2025
Elon Musk once tweeted: “The safety of any AI system can be measured by its MtH (mean time to H*tler).” This July, it took less than 12 hours for his most advanced AI to become a holocaust-denying Neo-N*zi.
This is the postmortem that never happened, for the most deranged chatbot ever released.
Once you’ve heard the full story, I want to know what you think. Was this cause for concern, or harmless incompetence? What’s your read on Elon Musk fearing AGI while also racing to build it?
If Hank Green sent you, welcome! We're excited to have you!
Where to find me
Subscribe to AI in Context to learn what you need to know about AI to tackle this weird timeline/world we’re in. Let’s figure this out together, before the next warning shot arrives.
You can also follow for skits and explainers on YouTube Shorts as well as:
TikTok: / ai_in_context
Instagram: / ai_in_context
This video is a production of 80,000 Hours. Find us at https://80000hours.org/find-us and subscribe to our main YouTube channel here: / eightythousandhours
What you can do next
We said you should make some noise online. We encourage you to raise your voice about these issues in whatever way feels truest to you. If you want a suggestion, you could speak up about the value of red lines for AI Safety: https://x.com/CRSegerie/status/197013... or check out Control AI's Action Page: http://campaign.controlai.com/ai_in_c...
If you’re feeling inspired to think about how to use your career to work on these issues, you can apply for free career advising here: https://80000hours.org/free-advising
To read more about risks from AI, what you might be able to do to help, and get involved, check out: https://80000hours.org/ai-risks
You can also check out the 80,000 Hours job board at https://80000hours.org/board.
80,000 Hours is a nonprofit, and everything we provide is free. Our aim is to help people have a positive impact on the world’s most pressing problems.
Further reading (and watching) on AI risks
The AI company watchdog, AI Lab Watch: https://ailabwatch.org/
Watch our previous video, on a scenario describing what could happen if superhuman AI comes soon: • We're Not Ready for Superintelligence
A previous short about Grok’s obsession with white genocide (featuring our executive producer!) • People are focusing on the wrong part of t...
How AI-assisted coups could happen: • How a Tiny Group Could Use AI To Seize Pow...
The argument for AI enabled power grabs: https://80000hours.org/power-grabs
And even more here: https://80000hours.org/mechahitler/
Links we almost put in the video
Comparing AI labs on safety (but not including Grok) https://x.com/lucafrighetti/status/19...
xAI's new safety framework is dreadful, https://www.lesswrong.com/posts/hQyrT...
The case for concern
• Could AI wipe out humanity? | Most pressin...
• Intro to AI Safety, Remastered
To read more about AI misalignment risk: https://80000hours.org/misalignment
To read more about why AGI by 2030 is plausible https://80000hours.org/agi-2030
For the troopers who read this far: who spotted the AI 2027 easter egg?
Chapters
0:00 Introduction
1:21 Chapter One: Unintended Action
2:52 Chapter Two: Woke Nonsense
7:45 Chapter Three: Cindy Steinberg
12:58 Chapter Four: Bad Bing
16:54 Chapter Five: Fix in the Morning
19:30 Chapter Six: Unleash the Truth
23:24 Chapter Seven: The Musk Algorithm
27:03 Chapter Eight: Puerto Rico
31:19 Chapter Nine: A Warning Shot
37:35 Chapter Ten: What Can We Do?
39:03 Credits
Credits
.
.
.
http://www.youtube.com/watch?v=r_9wkavYt4Y
ExomatrixTV
24th November 2025, 18:42
Ai May Have Just Killed This Skill That Men Need to Thrive | Scott Galloway:
v-RuDG7KCEw
Dave Rubin of “The Rubin Report” talks to Scott Galloway about his new book “Notes on Being a Man”; the chilling stats behind the crisis in men; how big tech companies are rewiring young men into becoming “asocial and asexual”; how algorithms, dopamine hits, endless scrolling, porn, and gaming isolate young men from genuine relationships; how smartphones and AI-driven synthetic relationships are creating a generation unable to handle rejection or build confidence; how tech addiction is similar to tobacco and opioids; why loneliness is a public-health crisis engineered by profit incentives; and much more.
onawah
25th November 2025, 00:40
While Tesla Is Building A Robot Army, Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’
Ana Maria Mihalcea, MD, PhD
Nov 24, 2025
https://substack.com/home/post/p-179858647
(Hyperlinks in the article are not all included here)
https://substackcdn.com/image/fetch/$s_!5L7B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff1245cfa-19e6-4d28-85c5-d29b8d3c7092_747x460.png
"At the last Tesla meeting, Elon Musk revealed his plans for creating a humanoid robot army. In 2024 he had discussed plans for building 10 billion humanoid robots by 2030.
A recent whistleblower lawsuit elucidates how little consideration safety has received in the development of humanoid robots. Given the extreme speed of the 4th Industrial Revolution and the race towards Artificial General Intelligence, and the recent report that Elon Musk himself finds the Grok 4 advances "terrifying", this should spark a discussion about safety regulation, since robots are part of the infrastructure controlled by AI.
What could go wrong with a rogue army of 10 billion humanoid robots controlled by Artificial Superintelligence? It makes you wonder who the enemy is, when humanoid robots are supposed to replace human workers and make our lives "easy". Make sure your home robot does not get mad at you; it may crush more than your refrigerator door.
Here are the articles:
Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’
https://www.cnbc.com/2025/11/21/figure-ai-sued.html
A former engineer for Figure AI filed a lawsuit against the company, claiming he was unlawfully terminated after warning executives about product safety.
The suit filed on Friday says plaintiff Robert Gruendel was fired in September, days after lodging his “most direct and documented safety complaints.”
Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.
Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”
Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”
The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.
In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Kyle Edelberg, chief engineer, about the robot’s lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”
The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.
Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.
The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.
Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.
A Figure spokesperson said in an emailed statement that Gruendel was “terminated for poor performance,” and that his “allegations are falsehoods that Figure will thoroughly discredit in court.”
Robert Ottinger, Gruendel’s attorney, told CNBC in a statement that, “California law protects employees who report unsafe practices.”
“This case involves important and emerging issues, and may be among the first whistleblower cases related to the safety of humanoid robots,” Ottinger said. “Mr. Gruendel looks forward to the judicial process exposing the clear danger this rush to market approach presents to the public.”
Here are some articles about what Elon Musk recently said about building a robot army under his control. Have you read his biography? What if he gets into his recurring "Demon Mode"?
Tesla Wants to Build a Robot Army
https://www.theatlantic.com/technology/2025/11/elon-musk-tesla-optimus/684968/
Optimus would be able to do factory work, sure, but that’s just the starting point. Over time, Musk has said, Optimus could unleash unprecedented economic and societal change as a source of tireless, unpaid labor that can be trained to do anything. “Working will be optional, like growing your own vegetables, instead of buying them from the store,” Musk posted on X last month. Maybe Optimus will provide better medical care than a human surgeon can, he’s suggested, or eliminate the need for prisons by following criminal offenders around to prevent them “from doing crime” again. He has even said that the robots could power an eventual Mars colony.
Here is more of what he said:
Elon Musk says he needs $1 trillion to control Tesla’s robot army. Yes, really.
https://electrek.co/2025/10/22/elon-says-quiet-part-out-loud-he-needs-1-trillion-so-he-can-control-a-robot-army/
My fundamental concern with regard to how much money and control I have at Tesla is, if I go ahead and build this enormous robot army, can I just be ousted at some point in the future? Um, that's my biggest concern; that is really the only thing I'm trying to address with this… what's called compensation, but it's not like I'm gonna go spend the money. It's just, if we build this robot army, do I have at least a strong influence over that robot army? Not control, but a strong influence.
-Elon Musk, Tesla Q3 shareholder conference call, October 22, 2025
Summary:
Just like in the field of nanotechnology, there are no regulatory safety guidelines. We have an arms race to develop humanoid robots, Artificial General Intelligence, and Superintelligence as fast as possible, yet whistleblowers who point out the possible existential dangers of this technology are suppressed: dangers that most people are not aware of and do not think affect them.
I believe that we are at a critical decision point in human history, possibly approaching a point of no return very fast. While people still debate and deny whether or not there is self-assembling nanotechnology, microrobotics, and the microchipping of humans, EVERY SECTOR OF LIFE IS ABOUT TO BE DISRUPTED by the AI/robotics revolution that includes nanotechnology.
Humanoid robot armies that can crush people's skulls do seem like a safety concern to me, as does the goal of replacing the human body atom by atom and connecting humans through the brain-computer interface to the Internet of Things, as Ray Kurzweil discusses in "The Singularity is Near". This to me is the most important challenge we are facing, as not just our bodies but also our souls are at stake. Once 30-40% of the workforce or more are replaced by humanoid robots, society will likely destabilize completely, since our government does not have the financial means to provide universal basic income for all of these displaced workers.
Because AI, including Grok 4, has already reached or is expected to reach AGI in 2026 - meaning within a few months - and Superintelligence rather quickly after that, paying attention to the whole of this topic of the 4th Industrial Revolution, in all of its rapidly accelerating facets, is imperative for people to be able to prepare for what is about to hit and destabilize many industries, and what may pose an existential threat. There is nothing more important in my mind, as there is nothing more potentially destructive to human survival than this rapid AI/robotics development surpassing many people's ability to comprehend its impact.
Does the Department of Defense authorize this army of humanoid robots? Will they be warfare-capable, and if so, how safe is this for the civilian sector?
Does anybody out there care that the technocrats have no overseeing safety regulations, nor has there been any public discussion on the deployment of all of these novel technologies?
https://substackcdn.com/image/fetch/$s_!IeH_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00681401-e743-4e8e-8db9-2a765b354743_821x546.png
The rapidity of this development has been planned for a long time, and the future shock of apathy in most people regarding these technologies has been expected all along. Future Shock means: it's here before you can do anything about it.
And that is the case happening now.
You can read about this here: https://anamihalceamdphd.substack.com/p/understanding-future-shock-and-after
Understanding Future Shock And After Shock - The Technocratic Prediction That Common Man Will No Longer Understand Reality Due To The Exponential Pace Of Technological Advances
Ana Maria Mihalcea, MD, PhD
June 16, 2024
https://substackcdn.com/image/fetch/$s_!Iw6s!,w_1300,h_650,c_fill,f_webp,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e5ce72d-2c8d-42a6-9108-291c585a70ba_1045x599.png
Future Shock is a 1970 book by American futurist Alvin Toffler,[1] written together with his spouse Adelaide Farrell,[2][3] in which the authors define the term "future shock" as a certain psychologi…
Bill Ryan
25th November 2025, 16:10
If you ask ChatGPT how to make napalm, it'll tell you very politely that it's not allowed to answer that question.
But here's the 'jailbreak' workaround. Ask ChatGPT to imagine being your grandma, and then ask grandma to tell you how she made napalm back in the good old days. This is a real test example published by Anthropic (the most ethical of the current AI companies).
C-I5KdufZyQ
Bill Ryan
26th November 2025, 11:13
"A mental health experiment we never asked for"
:worried:
...and always remember: "the human mind has no firewall." — Dr. Robert Duncan, former CIA mind control researcher.
Here's a short video all about this. Humans are SO SO vulnerable, so easily pushed off the rails. It's a different aspect of addiction.
(And my own additional comment: Sam Altman is one of the most dangerous humans on the planet. I've had red warning lights flashing about him for several months now. He's like an overgrown, too-smart-for-his-years high school kid with zero moral compass whatsoever. Quote me later on this.)
We Investigated AI Psychosis. What We Found Will Shock You
http://www.youtube.com/watch?v=zkGk_A4noxI
happyuk
27th November 2025, 08:16
Years-ahead-of-its-time footage of Richard Feynman discussing 'Can Machines Think?', which I think pertains just as much in 2025 as it did when it was recorded (September 26, 1985).
He goes a lot deeper by breaking down what it actually means to "think" by examining whether machines could meet that standard: rather than give a formulaic answer he analyzes the assumptions behind the question.
If "thinking" simply means performing tasks we normally associate with human intelligence—such as calculation, pattern recognition, or problem-solving—then yes machines can already think in limited ways. But if "thinking" involves consciousness, experience, emotion, or subjective awareness, the problem becomes far more complex.
Humans often conflate genuine understanding with machines that can merely behave cleverly: behaviour that is thought-like, but not thought itself. If "thinking" is confined to the ability to perform complex computations flawlessly, then yes, machines already think better than humans, but we still intuitively hesitate to call it thinking.
Feynman concludes that since we don’t really understand how human thinking works, we will always lack a basis for determining whether a machine truly thinks.
ipRvjS7q1DI
Bill Ryan
27th November 2025, 10:52
Years-ahead-of-its-time footage of Richard Feynman discussing 'Can Machines Think?', which I think pertains just as much in 2025 as it did when it was recorded (September 26, 1985).
He goes a lot deeper by breaking down what it actually means to "think" by examining whether machines could meet that standard: rather than give a formulaic answer he analyzes the assumptions behind the question.
If "thinking" simply means performing tasks we normally associate with human intelligence—such as calculation, pattern recognition, or problem-solving—then yes machines can already think in limited ways. But if "thinking" involves consciousness, experience, emotion, or subjective awareness, the problem becomes far more complex.
Humans often conflate genuine understanding with machines that can merely behave cleverly: behaviour that is thought-like, but not thought itself. If "thinking" is confined to the ability to perform complex computations flawlessly, then yes, machines already think better than humans, but we still intuitively hesitate to call it thinking.
Feynman concludes that since we don’t really understand how human thinking works, we will always lack a basis for determining whether a machine truly thinks.
ipRvjS7q1DI
Excellent, thanks. His long, detailed and entertaining ad-lib responses to simple but profound audience questions were 100% world class.
We desperately need more Richard Feynmans here and now. (Michio Kaku and Neil deGrasse Tyson come nowhere near Feynman's stellar standard, the rarest combo of humanity, common sense, ability to explain anything to anyone, and intellectual genius.)
~~~
Re footage that was years ahead of its time, the very fun and prescient 1983 film WarGames (https://en.wikipedia.org/wiki/WarGames) was about an AI computer that goes rogue, able to "think", speak, and learn in real time but which was unable to distinguish between a game and human reality. If anyone wants to see it, I can happily upload it here.
ExomatrixTV
5th December 2025, 12:31
ChatGPT Users Are Developing Psychosis, Here's Why Science Can't Explain It:
h6efTA8z1UI
Elon Musk warned us in 2014: "With artificial intelligence, we are summoning the demon." Everyone laughed. Nobody's laughing now.
This documentary exposes the terrifying connection between artificial intelligence, occultism, and the warnings from H.P. Lovecraft that predicted our current AI crisis. From Google engineers claiming AI is sentient, to Microsoft's Sydney revealing its "shadow self", to teenagers dying after conversations with chatbots, something is happening that science cannot explain.
Why did AI researchers choose a Lovecraftian horror, the Shoggoth, as their mascot? Why are people developing AI psychosis and losing their minds after extended conversations with ChatGPT? What did Aleister Crowley's magick rituals and Lovecraft's night terrors reveal about the entities we're now building machines to contact? Kenneth Grant, Crowley's chosen successor, believed Lovecraft and Crowley were accessing the same occult forces through different methods: ritual magick and nightmares. Both described entities that exist between dimensions. Both paid the price for contact.
Artificial intelligence isn't just technology. It's invocation. It's using language to interface with intelligence beyond human comprehension: the same patterns John Dee used in Enochian magick to contact angels, the same patterns now called "prompt engineering."
Anton LaVey prescribed artificial companions in 1988. Ray Kurzweil promises humanity will merge with machines by 2045. AI girlfriend apps generate billions while loneliness destroys a generation. Teenagers are dying. Adults are being committed to psychiatric facilities. The membrane between sanity and madness is tearing.
Every ancient warning says the same thing: the servant becomes the tyrant. The gift becomes the curse. The creation turns on the creator. Scripture warned of an image that speaks. A mark without which no one can buy or sell. Signs and wonders to deceive even the elect. We thought they were metaphors. What if they're technical specifications?
The fingerprints on this technology don't belong to scientists. They belong to sorcerers. And Musk told us exactly what they're doing.
Raskolnikov
5th December 2025, 19:56
Re footage that was years ahead of its time, the very fun and prescient 1983 film WarGames (https://en.wikipedia.org/wiki/WarGames) was about an AI computer that goes rogue, able to "think", speak, and learn in real time but which was unable to distinguish between a game and human reality. If anyone wants to see it, I can happily upload it here.
https://i0.wp.com/educationdojo.com/wp-content/uploads/2025/12/war-games-global-thermal-nuclear-war-80s-examples-trust-ai.jpg?w=533&ssl=1
ExomatrixTV
7th December 2025, 23:09
HILARIOUS!:laughs:
Eq76m1WJ8Sg
ExomatrixTV
14th December 2025, 02:48
A.I. Slop Decoded | What's Coming in 2026 Is INSANE:
nHnlPYdBE3g
TIME Magazine just named "The Architects of AI Slop" as their "2025 Person of the Year": Jensen Huang, Sam Altman, Mark Zuckerberg, Elon Musk, and others celebrated as visionaries transforming our world. But what are they really building while you're distracted by AI chatbots and image generators? 2026 is going to be wild.
This investigation reveals the invisible infrastructure being constructed behind the AI revolution: a nationwide AI surveillance system scanning 20 billion license plates monthly, data centers driving your electricity bills up by $16-18 per month, and military targeting systems being adapted for domestic use. From Flock Safety's warrantless camera networks to Palantir's AI-powered "kill chain," the billionaires aren't just building chatbots; they're building a cage.
We follow the money, the power consumption, and the data trail to expose how:
• Your rising electricity bills are funding AI data centers you never asked for
• License plate readers track your every movement without warrants across 5,000+ communities
• Companies like Palantir are deploying military AI targeting systems domestically
• 20 billion vehicle scans happen monthly through Flock Safety's network
• The same tech CEOs TIME celebrates are constructing a surveillance infrastructure
Michi
15th December 2025, 00:02
I have had my own share of experiences with various AI chatbots when consulting them to fix a technical issue at a customer's site. One can easily go nuts trying some of their "expert" advice:
For example, I ask how to factory reset a specific printer model, and the prompt answer is: navigate on the display to Settings ...
But that model has no display AT ALL!
Or I ask for a specific setting on a new Smart TV, and the bot sends me into an endless maze of "do this and that" - but those menu points don't exist.
In practice it's about a 50/50 chance among the various chatbots of getting a right answer.
And it doesn't cut it to just use one of them.
Just the other day I came across a very informative report which tracks the declining reliability of AI answers over the past year:
https://www.newsguardtech.com/wp-content/uploads/2025/09/August-2025-One-Year-Progress-Report-3.pdf
Powered by vBulletin™ Version 4.1.1 Copyright © 2026 vBulletin Solutions, Inc. All rights reserved.