View Full Version : New A.I. Search Engine: Perplexity
John Hilton
6th March 2024, 14:01
A friend just came across this search engine that provides "intelligent" answers and sent me the following message:-
Perplexity https://www.perplexity.ai
"This AI is bound to replace Google as a search engine. It is so much more powerful. Try it out for yourself (I just used it this morning to help me find a new accountant for my taxes, the present one is getting too ole, doesn't answer emails, etc.) It can even read PDF documents you send it and make a summary or answer question about it. Here is a review:"
https://dhruvirzala.com/perplexity-ai-review/
lake
6th March 2024, 17:15
Well I just tried it and I guess that it is better than google (which I never use) but it still promotes the same narrative!
I don't know why we even call these 'programs' AI .... yes they are artificial but they have no actual intelligence!
My conversation (it follows a thread of questions as you ask follow-ups) went like this (paraphrased):
Me: What is the current level of co2 in the atmosphere?
Pai: 412 ppm which is 0.04% which is more than should be naturally and that is the fault of humans.
me: Of the 0.04% how much is human made?
pai: Of the 0.04% a significant percent is human made and humans have increased the co2 by 50% in less than 200 years.
me: But what is the human made % of the 0.04%?
pai: A significant part which is the reason for climate change extremes.
me: What % of the 0.04% is man made?
pai: Human activities have contributed about one third since 1850 and that has caused the climate to change.
me: The human made co2 is 3% of the 0.04%.
pai: Apologies for the oversight. Yes it is 3% of the 0.04% since 1850, so a relatively small percent but it has a significant impact on global climate change and environmental conditions.
me: How can 3% of 0.04% since 1850 indicate a significant impact?
pai: Because co2 is a heat-trapping gas and human activities play a key role in global environmental changes.
me: From the current 412 ppm what should co2 be lowered to?
pai: It should be reduced significantly. The specific target depends on factors and goals related to climate change mitigation.
me: What happens to life on Earth if co2 drops below 200 ppm?
pai: It would have an impact. Plant life would die. Animals would die. There may be another ice age but some microbes may survive. So that highlights the critical role of co2 in maintaining the planet's climate and supporting life as we know it.
me: What was the ppm of co2 1000 years ago?
pai: It was significantly higher. It ranged from 3000 to 9000 ppm so this shows a substantial decrease over time showing the importance of understanding and monitoring co2 for the planet's health and sustainability.
me: So 1000 ppm would not destroy life on Earth but a reduction below 200 ppm would?
pai: 1000 ppm would be fine but 200 ppm would have significant consequences and may place Earth in a permanent ice age.
me: Should we increase co2 levels then to protect life on Earth?
pai: No! Although co2 is essential for plant growth and life, increased co2 may lead to respiratory issues in school children when they are indoors. So a balance should be found which meets the need of defined goals.
me: What should that balance be?
pai: Levels of co2 should be reduced .... blah, blah, blah .... because humans bad ok??
https://i.postimg.cc/5NT8yk69/game-over.jpg
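For anyone who wants to sanity-check the percentages being thrown around, here is the plain arithmetic on the figures quoted above (a quick sketch only, using the bot's own numbers; the ~275 ppm baseline is just what its "50% increase" claim implies, not a figure it actually stated):

current_ppm = 412

# 412 ppm expressed as a percentage of the atmosphere
print(current_ppm / 1_000_000 * 100)  # 0.0412 -> the "0.04%" figure

# A "50% increase in under 200 years" implies a starting level of roughly:
print(current_ppm / 1.5)              # ~275 ppm

# "About one third contributed since 1850" of the current level would be:
print(current_ppm / 3)                # ~137 ppm

# ...whereas "3% of the 0.04%" is a much smaller amount:
print(current_ppm * 0.03)             # ~12 ppm

The "one third" and the "3%" figures the bot offered translate into very different ppm amounts, which is exactly where the conversation kept going in circles.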
grapevine
6th March 2024, 17:59
Thanks for the link, John. I also tried Perplexity from your link and found it responded very quickly, although when I referred to it as "you" it thought I meant "I" and we went down the rabbit hole.
I use AiGPT-3 in preference to Wiki, and also ask it for recipes and nutrition now and then, but I recently watched this video from Connor Leahy:
http://www.youtube.com/watch?v=YZjmZFDx-pA&t=98s (26:30)
Lauded for his groundbreaking work in reverse-engineering OpenAI's large language model, GPT-2, AI expert Connor Leahy tells Imran Garda why he is now sounding the alarm.
Leahy is a hacker, researcher and expert on Artificial General Intelligence. He is the CEO of Conjecture, a company he co-founded to focus on making AGI safe. He shares the view of many leading thinkers in the industry, including the godfather of AI Geoffrey Hinton, who fear what they have built. They argue that recent rapid unregulated research combined with the exponential growth of AGI will soon lead to catastrophe for humanity - unless an urgent intervention is made.
00:00 AI guru
02:11 What is AGI, and what are the risks it could bring?
03:51 "Nobody knows why AI does what it does"
05:41 From an AI enthusiast to advocating for a more cautious approach to AI development
07:58 What does Connor expect to happen when we lose control to AGI?
11:40 "People like Sam Altman are rockstars"
14:38 Connor's vision of a safe AI future
15:24 Imran: "One year left?"
17:26 "Normal people do have the right intuition about AI"
20:58 ChatGPT, limitations of AI, and a story about a frog
24:53 Control AI
Edit
What Connor said about us losing control was that it would happen gradually and we might not even realise it, as there would just be so much information and data continually multiplying that we wouldn't know what was true and what wasn't - it's a bit like that now, isn't it?
ExomatrixTV
6th March 2024, 20:38
see follow-up questions here (https://www.perplexity.ai/search/Could-AGI-access-kfJSFk9VR.WuK_Q5xEqlqg)
It seems we have 2 worlds: 01. what we are allowed to experience, and 02. the more than 5,000 US patent innovations/inventions that are kept secret (classified!) for "security" reasons without proper oversight ... We have been lied to so many times on so many levels that even "scientists" are conditioned to believe the false tunnel-vision narratives.
cheers,
John 🦜🦋🌳
--o-O-o--
A.I. is Progressing Faster Than You Think! (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-) :bowing:
halcyon026
7th March 2024, 01:18
Hello friends!
Perplexity is a great tool for what it has access to. I use it for work & personal projects.
Of course it's biased, like most LLMs are.
However, you can train on your own data using GPT4All, Ollama, LangChain, Chat with RTX, etc., and do it all locally.
For example, if I had access to all of Tesla's work or Otis T. Carr's work, I could train a custom model on that data & get results more accurate than, say, Claude or ChatGPT on anything they detailed in their work that isn't taught in the mainstream.
Imagine a chatbot that would only answer based on lab reports: it could take into account that a report only tested 1 ingredient out of the 20+ that are in a shot, and would know to weight that report very low on a reliability/accuracy scale for how safe that shot is known to be.
A Chatbot that reads and prints in Cuneiform...
It's possible to train custom models & create this type of Chatbot on your own GPU at home.
We could even take all the files ProjectAvalon has ever saved - PDFs of books, audio files, video files, forum posts, etc. - and train a custom model on that data & create a ProjectAvalon Chatbot. :)
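To make that concrete, here is a minimal sketch of one way to do it at home - not actual fine-tuning, but retrieval-augmented generation, where a local model is only allowed to answer from your own documents. It assumes LangChain 0.1.x with the langchain-community package plus pypdf and chromadb installed, a local Ollama server running, and the llama2 and nomic-embed-text models already pulled; the avalon_pdfs/ folder is just a placeholder name:

from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# 1. Load every PDF in the (placeholder) folder and split it into chunks
docs = PyPDFDirectoryLoader("avalon_pdfs/").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks into a local vector store - nothing leaves your machine
store = Chroma.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))

# 3. Wire a local LLM to the retriever so answers are grounded in those documents
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama2"),
    retriever=store.as_retriever(search_kwargs={"k": 4}),
)

print(qa.invoke({"query": "What did Otis T. Carr describe in his patents?"})["result"])

Swap the model names for whatever you have pulled in Ollama; the point is that the documents, the embeddings and the answers all stay on your own GPU/machine.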
palehorse
7th March 2024, 07:28
Taking into consideration the publicly available AGI systems we have today - bing/google/openai/perplexity/etc. - they are all essentially the same, perhaps with tweaks here and there: a copy of a copy of a copy, just like anything novel that arrives first and is followed by hundreds if not thousands of copies (see the correlation with blockchain technology, which started cool and got over-copied into thousands of variations).
I tested GPT 4.0 with programming, translations and general inquiries and in my opinion it sucks.
I wrote an algo in pseudo-code and then wrote it in C. I asked GPT to write the same pseudo-code in C and it could not get it right; it wrote everything I needed in minutes, but it does not work! So it is not as perfect as some out there are claiming, not even close; try something very complex and you will see for yourself.
In translation, GPT can't identify gender; it uses neutral gender for everything, even in cases where it is obvious, so it feels like there is a push for gender neutrality (perhaps because of the "nature" of these systems).
General research is even worse: the output is completely biased, and I am better off digging for answers myself.
So my personal verdict is simple: I will not use it for anything important. I see it as entertainment for now; who knows if it will get better, but what they have today is just a huge database of common-knowledge, biased information for the sheeple to dig around in ("Oh my God, it is so cool"). I am out of this ****.
I understand one can train these systems with unbiased data, but I doubt that will make its way to the masses; so whether it is public and free or paid (does not matter actually, they steal your money right in your face), it is still biased. Private AGI may be different; I don't know, I have no access to that.
From now on we will all experience an increasing number of users having access to these systems and turning completely blind to any external opinions, even to their own internal knowledge, ancestral/perennial knowledge, etc. Part of this AGI was created to manipulate the masses, and they did a great job on that; the worst of it is still coming out. The majority can't see that, and what they see they "believe" is what it is in its totality; they blindly follow. It has no in-sight (it does have over-sight, if you know what I mean), no consciousness, no soul, no spirit; it has no access to God the creator, no idea what nature's laws mean, no feelings (it can't feel love, hate, etc.), and finally it exists only in the digital realm.
So take it as it is, baby; don't get caught and become an addict. It will challenge the great majority because of their own ignorance. Guards up, folks: the war and the battles are in the brain.
(quoting lake's Perplexity conversation above)
Friggin brilliant... and I'm not talking about the AI. It's pretty evident why censoring is so prevalent. The goal is the destruction of carbon-based life forms on this planet. Not so very different from the world Sauron would have created in the Lord of the Rings trilogy, or the depiction of the world of the machines in the Matrix films.
I am rereading Bruce Charlton's most insightful thread:
The nature of evil in the modern world
He describes the impulse of "Sorathic evil":
But from about 2000; there was a further move towards the purest, most absolutely negative form of evil - which could be named Sorathic (adapting this from Rudolf Steiner's identification of Sorath as the most extremely evil of beings).
Sorathic evil is neither about pleasure nor about control; it tends towards the purely destructive.
If Luciferic evil is motivated by short-termist pleasure; while Ahrimanic evil is motivated by God-denial, spiritual blindness and reductionism towards a meaningless world of mechanical procedures; then the Sorathic impulse is driven by negative impulses - primarily fear, resentment and hatred.
Sorathic evil will therefore tend to destroy both the lustful pleasures of Luciferic evil, and the complex functional bureaucracies of Ahrimanic evil.
I believe that is what we are dealing with here. I find it very comforting, and even humorous in a way, that this evil force that would destroy all life is so ridiculously stupid when confronted with human ingenuity. A job well done, wind!
jaybee
7th March 2024, 20:42
(quoting lake's Perplexity conversation above)
Great stuff.... thanks... so finally 'IT' admitted that.... high levels of CO2 = ok or good...
And low levels, like under 200 ppm = very bad...like death of the planet bad but you had to coax it out of the bloody thing -
Reverse reality being the name of the game - it will probably turn out that global warming and climate change are generally a good thing - :)... and the very people who are fear-mongering about it all the time will destroy the planet if they carry on like they are - imagine their disappointment if it ends up that emissions from cars running on petrol aren't so terrible after all - :shocked:
If high(er) levels of CO2 produce higher yields when it comes to plant growth and farming.... this must be a big concern for the nutters who are desperate to depopulate Earth ... (even though a cyclical mega natural disaster is almost certainly round the corner)
More CO2 is GOOD for Earth - Seeing is Believing - Time Lapse Video of 2 Plants Growing (2:12)
http://www.youtube.com/watch?v=32fNESgmzuI
clip from video description...
“Many people don’t realize that over geological time, we’re really in a CO2 famine now. Almost never has CO2 levels been as low as it has been in the Holocene (geologic epoch) – 280 (parts per million – ppm) – that’s unheard of. Most of the time [CO2 levels] have been at least 1000 (ppm) and it’s been quite higher than that,” Happer told the Senate Committee.
East Sun
8th March 2024, 17:01
With AI, world depopulation could accelerate very quickly. We could become robots to the robots.
Musk could become OutMusked by AI robots. That is probably why he is warning us about the dangers of AI and the intelligence of robots increasing to the point where it could quickly become TOO LATE and we end up as slaves or become extinct.
I believe that THAT is a real possibility. And we thought we had problems before.