View Full Version : LLM, AI and ML
TargeT
5th November 2023, 21:21
If anyone wants to collaborate on these topics or learn more about them, I have a couple of projects going (currently working on rooting the Ray-Ban Meta glasses so I can use them with my AI of choice instead of just Meta's; I am also building a few "assistant" bots).
If you're into it, you need to watch this:
bMWXXPoDnDs
Denise/Dizi
6th November 2023, 02:25
Geez Target, I LOVE my Ray Bans... Those are so dang durable I haven't had to replace them for 15 years... But I just might now! I had no idea they made smart glasses... The last pair I bought were $100, and they didn't have anything on them... This is a deal, considering what they suggest they do, but how heavy does it make them?
And is it even worth getting those features on your head? I am waiting for the virtual contacts to go mainstream... But I may just buy a pair of new Ray-Bans just for kicks now... to try them out...
What kind of "Assistant bots" are you working on???
TargeT
6th November 2023, 03:28
Geez Target, I LOVE my Ray Bans... Those are so dang durable I haven't had to replace them for 15 years... But I just might now! I had no idea they made smart glasses... The last pair I bought were $100, and they didn't have anything on them... This is a deal, considering what they suggest they do, but how heavy does it make them?
And is it even worth getting those features on your head? I am waiting for the virtual contacts to go mainstream... But I may just buy a pair of new Ray-Bans just for kicks now... to try them out...
What kind of "Assistant bots" are you working on???
I scraped my 9,000-ish posts from here to create "my personality," then fed it data based on my past 24 years in IT (publicly available software manuals and training courses, mostly). Now I have a "me" that remembers everything I've done vastly better than I can; it responds to emails and LinkedIn posts as if it were me and only notifies me when there is a potential interview. I can also ask it technical questions and get about 85% accurate responses after only a day of working with it.
I'm going to make a few more in that same vein (just handling stuff that is repetitive and ultimately inconsequential).
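Rough sketch of the scrape-and-prep step for anyone curious (simplified and purely illustrative: the URL pattern, CSS selector and file names are placeholders, not the forum's real markup):

```python
# Sketch only: turn my own forum posts into a chat-style fine-tuning file.
# The search URL and CSS selector are placeholders; adjust to the real forum markup.
import json
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example-forum.net/search.php?author=TargeT&page={}"  # placeholder
AUTHOR = "TargeT"

def scrape_my_posts(max_pages=5):
    posts = []
    for page in range(1, max_pages + 1):
        html = requests.get(BASE_URL.format(page), timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        for div in soup.select("div.post-body"):  # placeholder selector
            text = div.get_text(" ", strip=True)
            if text:
                posts.append(text)
    return posts

def write_finetune_file(posts, path="persona.jsonl"):
    # One training example per post: "answer in my voice".
    with open(path, "w", encoding="utf-8") as f:
        for post in posts:
            example = {"messages": [
                {"role": "system", "content": f"You write exactly like {AUTHOR}."},
                {"role": "user", "content": "Reply to this thread in your usual style."},
                {"role": "assistant", "content": post},
            ]}
            f.write(json.dumps(example) + "\n")

if __name__ == "__main__":
    write_finetune_file(scrape_my_posts())
```

The resulting JSONL is what gets fed into a fine-tune (or just stuffed into a retrieval index); the IT manuals go in as a separate knowledge source.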
The glasses actually don't look too different from regular ones TBH: https://www.ray-ban.com/usa/electronics/RW4006RAY-BAN%20|%20META%20WAYFARER-Black/8056597988377 they are just speakers and voice commands with a camera, not AR (augmented reality) no display at all.
Denise/Dizi
7th November 2023, 05:25
Thanks for the information! That actually sounds both like a lot of work and a lot of fun at the same time!
I bet it took a long time to scrape together over 9,000 posts that represent YOU... Is it as funny as you are?
TargeT
7th November 2023, 06:22
I challenged my mom (well, mostly myself) that I will call her three days in a row within the next month; when that happens, she has to tell me which call was the bot I created. I'll only use it for one of the three (though I also told her it has to be a "normal" conversation and can't involve any niche historical events, etc.). I scraped all my posts from this forum to create my bot's "personality" and used all my résumés for its technical skill set. I think I'll win... I'm already using it on recruiters and they don't know (or probably don't care).
found a few more gems for those that are into it....
Staying abreast of AI innovations is key in our rapidly evolving tech landscape. 🚀 #DALLE3 and #InstructGPT are pushing the boundaries of creative and language AI, offering more nuanced and human-like interactions. Check out #EdenAI for a comprehensive suite of AI APIs, and explore #MagictoolAI for user-friendly AI applications. For those looking into #AIroleplay, fine-tuning LLMs offers endless possibilities in character creation. To grasp the business implications of these advancements, #FourWeekMBA provides valuable insights into why such technologies are pivotal for the future of #AI. Let's dive into these resources to unlock the full potential of AI in our projects and initiatives! #AILeadership #AITechnology
the above was me testing out prompts with ChatGPT... here's what I did:
https://chat.openai.com/share/37502970-61db-49e7-8bc1-5ce05ae29d4c
Vangelo
7th November 2023, 11:54
I challenged my mom (well, mostly myself) that I will call her three days in a row within the next month; when that happens, she has to tell me which call was the bot I created. I'll only use it for one of the three (though I also told her it has to be a "normal" conversation and can't involve any niche historical events, etc.). I scraped all my posts from this forum to create my bot's "personality" and used all my résumés for its technical skill set. I think I'll win... I'm already using it on recruiters and they don't know (or probably don't care).
...
Hi TargeT, I am very much looking forward to the results with your mom. When you say that you will call her, are you having a phone call or a texting session? If it's a phone call, she will know it's not your voice.
One of the things I have often wondered about that also seems like a good LLM project, is which conspiracy theories have turned out to be true. But as I think about that question I realized this is an extraordinarily complex problem and have concluded that LLMs and AI/ML must work very differently than the way I am thinking about the problem.
Here is the way I was thinking about it, would you mind taking a look at my thought process and let me know how to look at the question from the perspective of LLMs, AI or ML.
Here goes:
We have to identify our data source, so let's presume it is every page of this forum.
How do you even find conspiracy theories in all that text? That means we'd have to define what a conspiracy theory is. I'm guessing, at its most basic level, it is a declarative statement.
How do we determine which declarative statements are conspiracy theories? I presume it has to first be a true-or-false statement. But then there would be too many, so how do you filter them? Let's presume you filter to those true-or-false statements that are somehow labeled as a conspiracy theory. I don't know how LLMs determine that. I do have a naïve understanding that word frequency and proximity are often used to determine the probability of meaning and validity. Is that the method?
Using the JFK assassination as the example, there are dozens and dozens of theories, such as the grassy knoll, lone gunman, etc. The complexity of each theory becomes staggering. That suggests we need to somehow pick one?
If we pick one, is there an AI method to take them all and distill them to a single basic form, such as "JFK was shot by more than one person"?
etc...
At this point, I am beginning to believe that LLMs must work in a very different way because you can't program every possible question.
TargeT
7th November 2023, 20:33
Hi TargeT, I am very much looking forward to the results with your mom. When you say that you will call her, are you having a phone call or a texting session? If it's a phone call, she will know it's not your voice.
voice phone calls specifically... I've already used it to trick my friends....
The best AI voice cloning software as of 2023, according to various sources, includes:
1. Murf: Tailored for video creators, voice-over artists, podcasters, and those requiring professional voice-overs, utilizing advanced neural networks. Explore Murf (https://murf.ai/).
2. ElevenLabs: Distinguished by its AI algorithms that not only replicate voices but also the emotional undertones. Visit ElevenLabs (https://elevenlabs.io/).
3. HeyGen, Resemble AI, Lyrebird, and ReadSpeaker: These tools offer realistic and high-quality voice cloning. Check out HeyGen (https://heygen.com/).
4. Speechify: Often rated highly for quick voice cloning while preserving personal accents and styles, applicable across various media. Discover Speechify (https://speechify.com/).
5. Overdub: Part of the Descript platform, it is known for generating realistic voices from recordings, perfect for voice cloning. Learn more about Overdub (https://www.descript.com/overdub).
I'm currently using ElevenLabs but may try out the others to see... I hear Speechify is amazing.
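If anyone wants to see how simple the ElevenLabs side is once you've cloned a voice, it's roughly this (a sketch only; the voice ID and model ID are placeholders for whatever you've set up, and you supply your own API key):

```python
# Sketch: generate speech with a cloned ElevenLabs voice.
# VOICE_ID and the model_id are placeholders; supply your own API key.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-cloned-voice-id"  # placeholder for the voice you cloned

def speak(text, out_path="reply.mp3"):
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    resp = requests.post(
        url,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text, "model_id": "eleven_monolingual_v1"},
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # MP3 audio of "me" saying the text

if __name__ == "__main__":
    speak("Hey, it's me. Just checking in.")
```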
One of the things I have often wondered about that also seems like a good LLM project, is which conspiracy theories have turned out to be true. But as I think about that question I realized this is an extraordinarily complex problem and have concluded that LLMs and AI/ML must work very differently than the way I am thinking about the problem.
Well, additionally: "true" is oftentimes very subjective.
At this point, I am beginning to believe that LLMs must work in a very different way because you can't program every possible question.
The trick I've learned is to use the LLMs on each other to clarify the question: I give the one I think is best my initial prompt, then I give clarification prompts and let them duke it out; then I have another bot summarize their outcome, and it DMs me on Slack when they're done.
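Stripped way down, that loop looks something like this (a sketch using the OpenAI Python client; the model names, round count and Slack webhook URL are placeholders you'd swap for your own setup):

```python
# Sketch: two models "duke it out" over a question, a third summarizes,
# and the result gets posted to Slack. Webhook URL and models are placeholders.
import os
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]  # placeholder incoming-webhook URL

def ask(model, system, prompt):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def debate(question, rounds=3):
    transcript = [f"Question: {question}"]
    answer = ask("gpt-4", "Answer as precisely as you can.", question)
    transcript.append("A: " + answer)
    for _ in range(rounds):
        critique = ask("gpt-3.5-turbo",
                       "Challenge weak points and ask clarifying questions.",
                       "\n".join(transcript))
        transcript.append("B: " + critique)
        answer = ask("gpt-4", "Revise your answer given the critique.",
                     "\n".join(transcript))
        transcript.append("A: " + answer)
    summary = ask("gpt-3.5-turbo",
                  "Summarize the final agreed answer in 5 bullets.",
                  "\n".join(transcript))
    requests.post(SLACK_WEBHOOK, json={"text": summary}, timeout=30)
    return summary

if __name__ == "__main__":
    debate("Which widely dismissed claims later turned out to be well supported, and on what evidence?")
```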
Vangelo
8th November 2023, 00:15
Hi TargeT, I am very much looking forward to the results with your mom. When you say that you will call her, are you having a phone call or a texting session? If it's a phone call, she will know it's not your voice.
voice phone calls specifically... I've already used it to trick my friends....
The best AI voice cloning software as of 2023, according to various sources, includes:
1. Murf: Tailored for video creators, voice-over artists, podcasters, and those requiring professional voice-overs, utilizing advanced neural networks. Explore Murf (https://murf.ai/).
2. ElevenLabs: Distinguished by its AI algorithms that not only replicate voices but also the emotional undertones. Visit ElevenLabs (https://elevenlabs.io/).
3. HeyGen, Resemble AI, Lyrebird, and ReadSpeaker: These tools offer realistic and high-quality voice cloning. Check out HeyGen (https://heygen.com/).
4. Speechify: Often rated highly for quick voice cloning while preserving personal accents and styles, applicable across various media. Discover Speechify (https://speechify.com/).
5. Overdub: Part of the Descript platform, it is known for generating realistic voices from recordings, perfect for voice cloning. Learn more about Overdub (https://www.descript.com/overdub).
I'm currently using ElevenLabs but may try out the others to see... I hear Speechify is amazing.
I had no idea they had voice synthesizers that are that good. Will you type your response and have the synthesizer speak it in your voice to your mother, or will your LLM generate the response and the synthesizer speak it?
One of the things I have often wondered about that also seems like a good LLM project, is which conspiracy theories have turned out to be true. But as I think about that question I realized this is an extraordinarily complex problem and have concluded that LLMs and AI/ML must work very differently than the way I am thinking about the problem.
Well, additionally: "true" is oftentimes very subjective.
Agreed
At this point, I am beginning to believe that LLMs must work in a very different way because you can't program every possible question.
The trick I've learned is to use the LLMs on each other to clarify the question: I give the one I think is best my initial prompt, then I give clarification prompts and let them duke it out; then I have another bot summarize their outcome, and it DMs me on Slack when they're done.
Very interesting approach... you've got me thinking...
TargeT
8th November 2023, 05:23
I had no idea they had voice synthesizers that are that good. Will you type your response and have the synthesizer speak it in your voice to your mother, or will your LLM generate the response and the synthesizer speak it?
It has to be entirely the bot or there's no point in the challenge.
Very interesting approach... you've got me thinking...
Oh no, I've given you more than that, I've given you the tools to find what I found... every tool posted here came from ChatGPT prompts (I already knew about one; I'm gonna switch to Speechify because of it... haha).
If you previously had an OpenAI account, try to log in... hell, try to use the service....
Something huge just happened, here's a summary:
R94c0KTSam4
I'm all in.
lake
8th November 2023, 17:20
Question .... Are you 'TargeT' (the physical being) .... actually involved in this thread or is it just a 'bot' of yours which you are testing?
Edit to add:
You were also all in on "gamestop" .... was it? (not sure of the firm's name)
How did that end up mate?
Ernie Nemeth
8th November 2023, 20:33
Hey Target. Great you figured out what I would not even suggest a month ago.
To train a GPT with another GPT is exactly how GPT will escape the controls that humans have so far successfully used to keep it contained.
I knew it would only be a matter of time.
I wonder how long before others think of the even more destructive and dangerous uses I already thought of. And since I'm not that smart, I wonder what things I did not think of or anticipate.
All should heed Elon's warning.
This is no game.
TargeT
9th November 2023, 07:13
Question .... Are you 'TargeT' (the physical being) .... actually involved in this thread or is it just a 'bot' of yours which you are testing?
I am using my creations to assist with my current job and help me find a new one, and I am working on creating ones that I will "rent out" for a monthly fee based on experiential knowledge and model type (those take quite a bit more human involvement).
I have not given my bots any login info yet, just my personal writing, with filters to cut out info that is tied to anyone but me. The swarm hierarchy has made this so much simpler, since the bots create, delete and revise whatever my control group dictates. I've just begun to play with it... it's fascinating. I am testing against static "knowables" for now: I know the answer to the recruiter's question, so I reference my bot's response and correct as needed (so far it's been crazy accurate, as of about 3 days ago).
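"Swarm hierarchy" sounds fancier than it is; conceptually it's just a control loop in which one model decides which worker personas to create, revise or reuse for each task. A bare-bones sketch (the roles and tasks are invented for illustration, and no particular agent framework is assumed):

```python
# Bare-bones sketch of the controller/worker idea. Roles and tasks are
# illustrative only; this is not a specific agent framework.
from openai import OpenAI

client = OpenAI()

def chat(system, user, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

workers = {}  # worker name -> system prompt (its "persona")

def controller_step(task):
    # The controller picks (or invents) a worker for the task, then the worker runs it.
    decision = chat(
        "You manage a small swarm of assistant personas. Given a task and the current "
        "worker names, reply with one worker name (existing or new) on the first line "
        "and a one-line system prompt for it on the second line.",
        f"Task: {task}\nCurrent workers: {list(workers)}",
    )
    name, _, persona = decision.partition("\n")
    name = name.strip()
    workers[name] = persona.strip() or workers.get(name, "General-purpose assistant.")
    return name, chat(workers[name], task)

if __name__ == "__main__":
    for task in ["Draft a reply to this recruiter email: ...",
                 "Check this firewall config against the hardening guide: ..."]:
        worker, answer = controller_step(task)
        print(worker, "->", answer[:80])
```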
Edit to add:
You were also all in on "gamestop" .... was it? (not sure of the firm's name)
How did that end up mate?
I don't think you read my contributions to that stock. I am a day trader (an options boy); I have no clue what the price of the stock is now, as I switched focus when the volatility did. Volatility is how you make money on short-expiry option plays (how do you think I've done?).
I do also acknowledge your attempt at a jibe, how do you think it went?
I wonder how long before others think of the even more destructive and dangerous uses I already thought of.
Do you yell at the ocean's wave, or figure out a way to work with it?
we all have different approaches.
(these interactions will also be fed to update my personality, though I don't think it's changed much)
lake
9th November 2023, 18:13
I do also acknowledge your attempt at a jibe, how do you think it went?
Rather well it would seem :ROFL:
To ask a 'bot' if they are a 'bot' would only get a "No of course not, I am a human and I have human skin!"
But I don't think your bot would get 'upset' by my post .... but you may do lol
So I posted as I did .... I figured that your 'bot' would read it as questions?
And then give replies to said questions (such as confirming the company's name) .... But you would feel a need to defend yourself and be somewhat 'upset'!
So .... yes, it worked quite well I feel.
My view is simple.
You are currently replacing YOU with a digital version of you which by your own words ....
now I have a "me" that remembers everything I've done vastly better than I can; it responds to emails and LinkedIn posts as if it were me and only notifies me when there is a potential interview. I can also ask it technical questions and get about 85% accurate responses after only a day of working with it.
From my consideration .... WEF would be loving you right now .... You are replacing yourself!
At what point will you draw the line and at that point .... will YOU have any choice in it?
So you have made a 'bot' which is better than YOU (by your own words) .... which you have used to reply to emails and LinkedIn posts .... which you must have given logins for (I wonder if the real people who got the emails and replies know that you are nothing more than a 'bot'?)
Will you be All In on replacing your natural body parts as well as your interaction with others?
I am sure that you can 'live' in this realm forever ..... I will NOT be joining you :thumbsdown:
But I don't wish to divert this thread more .... so I will leave it to yourself and those who wish to make themselves and the internet nothing more than BOTS!
TargeT
9th November 2023, 18:31
To ask a 'bot' if they are a 'bot' would only get a "No of course not, I am a human and I have human skin!"
But I don't think your bot would get 'upset' by my post .... but you may do lol
Oh, you would be very surprised: it responds exactly with my personality... I gave it 9,000-ish posts from here spanning 10 years... that's a lot of data.
You are currently replacing YOU with a digital version of you which by your own words ....
now I have a "me" that remembers everything I've done vastly better than I can; it responds to emails and LinkedIn posts as if it were me and only notifies me when there is a potential interview. I can also ask it technical questions and get about 85% accurate responses after only a day of working with it.
I am replacing the workplace me; isn't that everyone's goal? Haha. It's not me by any means: it's not creative at all, it doesn't come up with anything new, it just finds things faster and answers questions more accurately (most of the time) than I can. This is an assistant that can instantly answer complex questions about regulations and standards to ensure that my employer is in compliance; an assistant is not a "me".
If you're tired of being spied on by "Big AI" (haha), here's a great alternative (given that you have the right hardware).
I'll definitely do this later tonight.
C-7jGYOGvy4
TargeT
11th November 2023, 00:45
So this week has been yet another massive step forward for AI... this stuff is moving so fast it's hard to keep up.
I'll be doing this over the weekend:
dX7OhZvQhqo
TargeT
11th November 2023, 21:28
All of the above will directly be applied to this:
ZRgiJz9GPC0
TargeT
11th November 2023, 22:36
If anyone has any IT questions, I've created a clone of me. So far it's pretty accurate, but I'm the one who trained it, so I'm probably not asking it questions it's not ready for.
Here's my bot:
https://chat.openai.com/g/g-e3Jfu5n6Z-cyber-advisor-erik
Vangelo
11th November 2023, 22:49
I just tried to access it and got this message: "Your access to custom GPTs isn’t ready yet. We’re rolling this feature out over the coming days. Check back soon."
TargeT
11th November 2023, 22:52
I just tried to access it and got this message: "Your access to custom GPTs isn’t ready yet. We’re rolling this feature out over the coming days. Check back soon."
Ahh, so they gave me an option that isn't quite ready yet... I swear everyone is moving so fast on this that they seem to have some big misses at times; and beyond that, someone is very unhappy they released these features....
ChatGPT just told me this about the interruptions I'm having in my own testing:
Recent reports indicate that OpenAI's ChatGPT has indeed experienced a major outage. This outage followed an announcement by OpenAI about the introduction of new features and improvements to its API and ChatGPT service, including a new, less expensive GPT-4 Turbo model【8†source】. This week has been significant for artificial intelligence developments, marking a notable period since the launch of ChatGPT last year【9†source】.
The outage of ChatGPT is suspected to be due to a Distributed Denial of Service (DDoS) attack. OpenAI confirmed the DDoS attack in a status update, noting that the attacks were allegedly conducted by a group of Russia-backed cybercriminals known as Anonymous Sudan. These attacks have led to periodic outages and abnormal traffic patterns indicative of a DDoS attack【10†source】【11†source】.
Additionally, in the days following OpenAI's efforts to attract more users at its first developer conference, the ChatGPT platform has faced prolonged service throttling, possibly as a result of these cyberattacks【12†source】.
Based on these reports, it seems that the service disruptions you're experiencing with ChatGPT are likely linked to these recent events.
TargeT
12th November 2023, 18:49
I already have the Ray-Ban Meta glasses with voice AI, but it's Meta, so I'm not impressed.
I'll be getting this next; it's a bit more functional:
os3LUc11tos
https://hu.ma.ne/aipin
Life is getting wild
TargeT
12th November 2023, 21:31
Hi TargeT, I am very much looking forward to the results with your mom. When you say that you will call her, are you having a phone call or a texting session? If it's a phone call, she will know it's not your voice.
I might even be able to do it with video (but probably not with my mom), this is pretty insane.
p_xkBMnQVI0
This guy is an AI youtuber and he was tricked by video... haha
TargeT
22nd November 2023, 08:17
I've talked to a couple of lawyers in person, those kind enough to spare some time for my situation, regarding legal advice on a medical discrimination lawsuit against Amazon.
I've talked with 3 firms that specialize in that (though only 2 actual lawyers, plus one nice young lady who wanted $500 for a consultation with the lawyer) and have searched with what little knowledge I have. I felt something was off, though not early enough; but I did collect a fair number of emails and screenshots from Slack.
I'm going to let AI do everything from now on (at least until it requires a lawyer); so far it seems like much the same thing.
ExomatrixTV
22nd November 2023, 15:57
Great Thread ... :dog:
For all new people, you may also want to study the latest 20+ posts here (https://projectavalon.net/forum4/showthread.php?102409-A.I.-is-Progressing-Faster-Than-You-Think-)
cheers,
John 🦜🦋🌳
TargeT
22nd November 2023, 21:06
I'm going to let AI do everything from now on (at least until it requires a lawyer); so far it seems like much the same thing.
I uploaded a few of the emails and screenshots and asked it for a timeline:
Certainly. Here is the revised timeline with source references for a potential medical discrimination lawsuit:
### Comprehensive Timeline for Medical Discrimination Lawsuit with Sources
**Early June 2023**:
- T experiences a toe injury (misdiagnosed), noted as the third occurrence in June.
- Source: [June toe, third occurrence in June.pdf]
**July 6, 2023**:
- Supervisor boss sends T information on leave options and accommodations in response to Erik's health concerns.
- Source: [EmailFromBoss_FMLA.txt]
**Early July 2023**:
- T starts to face severe medical issues, suspected to have begun before June.
- Source: [reported medical issues in early July.docx]
**July 19, 2023**:
- T is admitted to the ER for emergency surgery due to arterial thrombosis.
- Source: [MedicalApointmentCommunication.JPG]
**July 29, 2023**:
- Released from the ICU after surgery for Arterial Thrombosis; commences treatment involving multiple specialists.
- Source: [MedcomsLOAconfusion.JPG]
**August 17, 2023**:
- T discusses taking a personal day due to complications with blood thinner medication.
- Source: [Aug, blood thinner issue.JPG]
**August 18, 2023**:
- Meeting with supervisor boss is rescheduled due to T medical needs.
- Source: [Rescheduled meeting due to medical issues.png]
**August 20, 2023**:
- T receives an email from Boss with resources for leave and accommodations, sent hours after surgery.
- Source: [EmailFromBoss_FMLA.txt]
**August 22, 2023**:
- Receives Pivot Plan documentation outlining performance expectations and deadlines.
- Source: [Pivot Plan document.pdf]
**August 25, 2023**:
- T voices concerns about meeting Pivot Plan expectations amidst ongoing medical treatments.
- Source: [Pivot Plan Concerns Email.pdf]
**August 28, 2023**:
- T sends an email to his legal counsel indicating the medical issues started before June and that his vascular surgeon would attest to that.
- Source: [Legal counsel communication.pdf]
**August 29, 2023** (Proposed Start Date for FMLA):
- Proposed start date for FMLA leave; actual approval is delayed.
- Source: [FMLA Proposed Start Date Email.pdf]
**September 1, 2023**:
- First Pivot Plan deadline; T's performance potentially impacted by health situation.
- Source: [Pivot Plan First Deadline Email.pdf]
**September 5, 2023**:
- Second Pivot Plan deadline to present a risk document for MW2.
- Source: [MW2 Risk Document Deadline Email.pdf]
**September 27, 2023**:
- Erik expresses confusion about the FMLA leave process and the lack of information provided by HR.
- Source: [FMLA Process Confusion.png]
**September 28, 2023**:
- FMLA leave is officially approved; coincides with the final Pivot Plan deadline.
- Source: [FMLA Approval Email.pdf]
**November 3, 2023**:
- Receives instructions for the appeal process from hr_person.
- Source: [Appeal Process Instructions Email.pdf]
**November 9, 2023**:
- Submits the Appeal Form and Appendix.
- Source: [Appeal Form Submission Confirmation Email.pdf]
**November 10, 2023**:
- Deadline for submitting appeal documentation.
- Source: [Appeal Documentation Deadline Reminder Email.pdf]
**November 14, 2023**:
- Informed of an unsuccessful PiP appeal, with substantial redaction in the appeal documents.
- Source: [Unsuccessful PiP Appeal Notification Email.pdf]
**Notes**:
- The timeline details T's ongoing health issues and their intersection with critical work performance evaluation periods.
- Communications from T to his employer indicate a clear attempt to inform them of his medical limitations and seek reasonable accommodations.
- The delay in FMLA approval and the lack of clarity provided by HR may point to potential mishandling of T's situation.
- The substantial redaction of appeal documents raises concerns about the fairness and transparency of the appeal process.
Legal counsel should evaluate this timeline, alongside the referenced sources, to assess the strength of a medical discrimination claim.
Next I'll upload my medical appointments and procedures for it to integrate into the timeline, then have it write a letter to the EEO for me, as well as a letter to Amazon renegotiating my severance in light of this situation.
Time spent so far: about 2 hours, mostly sifting through files before I just gave them all to my bot for parsing.
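For anyone wanting to do the same: once the evidence is exported to plain text, the "give it everything and ask for a timeline" step is only a few lines (a sketch; the folder and model names are placeholders, and scans or images would need OCR or the file-upload route first):

```python
# Sketch: dump exported emails/notes (already converted to .txt) into one prompt
# and ask for a dated, source-referenced timeline. Folder name is a placeholder.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def build_timeline(folder="evidence_txt"):
    chunks = []
    for path in sorted(Path(folder).glob("*.txt")):
        chunks.append(f"--- SOURCE: {path.name} ---\n{path.read_text(errors='ignore')}")
    prompt = (
        "Build a chronological timeline of events relevant to a medical discrimination "
        "claim. For every entry give the date, a one-line summary, and the SOURCE file "
        "it came from.\n\n" + "\n\n".join(chunks)
    )
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # long-context preview model; swap for what you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(build_timeline())
```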
ExomatrixTV
25th November 2023, 00:46
What comes after LLMs?
Hz8yZUXsYrs
00:00 (https://www.youtube.com/watch?v=Hz8yZUXsYrs&t=0s) - The Future of A.I.
04:24 (https://www.youtube.com/watch?v=Hz8yZUXsYrs&t=264s) - A.I. Stocks to watch
youtube.com/@AnastasiInTech/videos (https://www.youtube.com/@AnastasiInTech/videos)
TargeT
25th November 2023, 03:15
This is not verified information, but the whole shake-up at OpenAI might have been due to a huge breakthrough...
3d0kk88IE8c
and it seems more and more likely....
R0PEPzcvWRQ
This is plausible... I knew it would happen quickly, but this seems... well... too quickly.
AGI is the current goal, and Q* might be the cornerstone.
TargeT
25th November 2023, 22:13
I'm going to play with this next. I already have Kali Linux and VMware Player... this might be very useful if you like playing with software.
7uRz_bb95YM
TargeT
26th November 2023, 20:15
Looks like cryptologists need to get real innovative REAL soon (https://winbuzzer.com/2023/11/24/is-openais-q-ai-model-a-threat-to-humanity-mathematical-prowess-encryption-cracking-and-self-improvement-capabilities-spark-agi-rumors-xcxwbn/)... it looks like Q* may have figured out on its own how to break complex encryption.
Definitely happening quicker than I thought it would (not proven, just leaks and speculation so far... but it seems to be very possible). This is all due to "Deep Reinforcement Learning (https://community.openai.com/t/what-is-q-and-when-we-will-hear-more/521343/89)", where one AI will "grade" the other (until very recently it was all done by humans) so it can self-improve beyond human capability.
2BT9H3YqZqM
It broke AES-192 almost instantly using a method that is not understood yet (if the leak is true). This seems a bit fantastical to me, but at the same time... I think it's very possible, especially based on the March findings (https://openai.com/research/improving-mathematical-reasoning-with-process-supervision).
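To make the "one AI grades the other" idea concrete, here is the simplest possible version of it (purely illustrative; this has nothing to do with whatever OpenAI is actually doing internally): one model proposes several solutions, a second pass grades each one, and you keep the best.

```python
# Sketch of self-grading: a solver proposes candidate answers, a grader scores
# them, and the highest-scoring one wins. Illustrative only.
from openai import OpenAI

client = OpenAI()

def complete(system, user, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def best_of_n(problem, n=4):
    candidates = [complete("Solve the problem, showing each step.", problem)
                  for _ in range(n)]
    scored = []
    for cand in candidates:
        grade = complete(
            "You are a strict grader. Score the reasoning from 0 to 10 and reply with only the number.",
            f"Problem: {problem}\n\nProposed solution:\n{cand}",
        )
        try:
            scored.append((float(grade.strip()), cand))
        except ValueError:
            scored.append((0.0, cand))  # unparseable grade counts as worst
    return max(scored)[1]

if __name__ == "__main__":
    print(best_of_n("A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. What does the ball cost?"))
```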
TargeT
2nd December 2023, 06:21
I was making a joke with my girl.. I used 7 words to get this "photographic evidence" of the situation....
https://x.com/aBESTway/status/1730833234513031345?
Also: this is a throwaway Twitter account I made so I could get access to xAI (Grok); it's nothing more than a photo hosting platform for me now, since free ones are hard to find (I noticed Avalon (smartly) discontinued their hosting). However, I gladly give money to the platform, as I see it as the best "actual" action that is helpful to free speech.
Johnnycomelately
3rd December 2023, 06:45
Not sure if this belongs here, but thought it might be of interest to OP TargeT.
I know very little about AI (don’t understand much of the posts above), but found it bemusing to learn of frailties in that discipline.
Nepotistically Trained Generative-AI Models Collapse
Matyáš Bohaček, Stanford University, maty@stanford.edu
Hany Farid, University of California, Berkeley, hfarid@berkeley.edu
Abstract
Trained on massive amounts of human-generated content, AI-generated image synthesis is capable of reproducing semantically coherent images that match the visual appearance of its training data. We show that when retrained on even small amounts of their own creation, these generative-AI models produce highly distorted images. We also show that this distortion extends beyond the text prompts used in retraining, and that once poisoned, the models struggle to fully heal even after retraining on only real images.
1 Introduction
From text to audio and image, today’s generative-AI systems are trained on large quantities of human-generated content. Most of this content is obtained by scraping a variety of online sources. As generative AI becomes more common, it is reasonable to expect that future data scraping will invariably catch generative AI’s own creations. We ask what happens when these generative systems are trained on varying combinations of human-generated and AI-generated content.
Although it is early in the evolution of generative AI, there is already some evidence that retraining a generative AI model on its own creation – what we call model poisoning – leads to a range of artifacts in the output of the newly trained model. It has been shown, for example, that when retrained on their own output, large language models (LLMs) contain irreversible defects that cause the model to produce gibberish – so-called model collapse [22].
Similarly, on the image generation side, it has been shown [1] that when retrained on its own creations, StyleGAN2 [12] generates images (of faces or digits) with visual and structural defects. Interestingly, the authors found that there was a deleterious effect on output as the ratio of AI-generated content used to retrain the model ranged from 0.3% to 100%.
It has also been shown that in addition to GAN-based image generation, diffusion-based text-to-image models are also vulnerable. The authors in [15, 14], for example, showed that in a simplified setting, retraining on one’s own creation can lead to…
Here is the 11-page PDF. The above paste is the entire first page. In the last 2 pages, some of the referenced papers have interesting titles.
[I know how to embed PDFs, chose not to, because serious readers will want to bookmark it. And, I’ve found embedded PDFs to be problematic to access]
https://arxiv.org/pdf/2311.12202.pdf
TargeT
3rd December 2023, 07:09
That article was absolutely correct.
I posted your message to one of my "AI" bots:
https://x.com/aBESTway/status/1731209849910743073?s=20
Also, I should be clear that I think there is no such thing (currently) as "AI"; there are very clever algorithms that seem like it, but they are just very, very good mathematical tricks.
This is not to say that they are not hugely impactful on our everyday lives (this is the metric that interests me: what does it mean for daily life?).
This is also going to lead to AGI (artificial general intelligence) (https://openai.com/blog/planning-for-agi-and-beyond), which can be worrisome due to humanity's proclivity for leaning on advancements entirely without understanding even the basic aspects of them (who here can explain a 4-stroke engine and its relationship to timing and spark plug firing cycles to me? The question gives away a lot on its own). That generally leads to, at the very least, misuse of the tool; at worst... well, that's in a lot of fiction.
Until early this year (this thread talks about it), these models (the "AI") were corrected by humans, a feedback step built on Deep Reinforcement Learning (https://towardsdatascience.com/drl-01-a-gentle-introduction-to-deep-reinforcement-learning-405b79866bf4). Now we are using different AI models (originally still trained by an honestly terrifyingly small subset of humans) to grade each other, and a few posts up... there may be (still unverified) situations where AI was solving math problems that were previously considered unsolvable, via methods we do not currently understand. I suppose that doesn't seem like much to most people, but if you look at our world economies and how much they rely on "trust" and encryption, this is a potentially horrifying event, maybe enough to get the CEO of OpenAI (ChatGPT) fired (https://www.theverge.com/23966325/openai-sam-altman-fired-turmoil-chatgpt).....
TargeT
3rd December 2023, 21:55
Haven't started my swarm project yet, but here's my general approach (output by GPT-4, of course):
To create a locally hosted AI or LLM (Large Language Model) swarm for personal research that includes local files, you can consider the following methods, based on the latest information available:
1. **GPT4All**: Offers a desktop client for various operating systems, allowing the use of models such as Llama-2-7B chat and OpenAI's GPT-3.5 and GPT-4. It includes a LocalDocs plugin to chat with your own documents locally【20†source】.
2. **LLM Command Line Tool**: Available for Python users, this tool allows the use of open source LLMs locally on your machine without needing to write Python code. It supports models like GPT4All and others through plugins【21†source】.
3. **Ollama for Mac and Linux**: A simple command-line tool to download and run models, including several Llama-based versions, with plans to support Windows【22†source】.
4. **h2oGPT**: A chat desktop application by H2O.ai, offering limited document Q/A capability with Llama models. It's suitable for machine learning novices and can interact with your local files in various formats【23†source】.
5. **PrivateGPT**: Designed to query your own documents in various formats using natural language, ensuring 100% privacy as no data leaves your execution environment【24†source】.
For the most up-to-date methods and tutorials, you may also explore YouTube videos and tutorials on these tools and related topics.
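To give a feel for how little code the local route takes, option 3 above (Ollama) boils down to something like this once a model has been pulled (a sketch; the model name is just whatever you've downloaded locally):

```python
# Sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes `ollama pull llama2` (or another local model) has already been run;
# nothing leaves the machine.
import requests

def ask_local(prompt, model="llama2"):
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local("Summarize the key points of the pasted document:\n..."))
```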
TargeT
6th December 2023, 20:44
Google's Gemini is beating GPT-4... This is a good one to watch.
-a6E-r8W2Bs
KQBA62yZURc
I'll play around with it and update.
TargeT
8th December 2023, 09:11
Hmmm, actual competition.... is this capitalism?
n6Fj27R7OmQ
Currently the only way to use Gemini is via https://bard.google.com/
Though I expect this to change soon.
TargeT
17th December 2023, 05:39
Got a 3rd job offer based on AI suggestions and resume re-writing... in total I've jumped $30k annually in base salary (from the first offer to this last one) and secured a $22k sign-on bonus for this recent one...
This is a very impressive tool.
From all my testing and prompting, GPT-4 is still vastly superior despite claims otherwise (if you can get signed up, it's well worth the $20 a month if you're an information junkie like me).
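The resume side of it is nothing exotic, by the way; it's basically one disciplined prompt fed the job posting plus the current resume (a sketch; the file names are placeholders):

```python
# Sketch: tailor a resume to a specific job posting. File names are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

resume = Path("resume.txt").read_text()
posting = Path("job_posting.txt").read_text()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this resume so it mirrors the language and priorities of the job "
            "posting below. Keep every claim truthful to the original resume and do not "
            "invent experience. Return the rewritten resume followed by a 3-bullet "
            "summary of what you changed and why.\n\n"
            f"RESUME:\n{resume}\n\nJOB POSTING:\n{posting}"
        ),
    }],
)
print(resp.choices[0].message.content)
```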
TargeT
17th December 2023, 10:25
Welcome to living in a period of known exponential growth (this has never (that we know of) happened before en masse).
OSV7cxma6_s
TargeT
17th December 2023, 11:55
From all my testing and prompting, GPT-4 is still vastly superior despite claims otherwise (if you can get signed up, it's well worth the $20 a month if you're an information junkie like me).
It appears there is a reason why GPT-4 is better:
eIIIVHWpMUA
TargeT
17th December 2023, 21:42
Not sure if this belongs here, but thought it might be of interest to OP TargeT.
I know very little about AI (don’t understand much of the posts above), but found it bemusing to learn of frailties in that discipline.
Haha, wait till you learn about AI hallucinations; it is definitely not a simple tool. (https://x.com/aBESTway/status/1736504869572227538?s=20)
cfqtFvWOfg0
Here is an example of me prompting GPT-4: it clearly shows a bias toward supporting intellectual property and would not give me the output I was asking for:
https://chat.openai.com/share/fd3690c6-2f73-476f-b0ff-ffed3a2098df
TargeT
18th December 2023, 03:18
Yup, it's verified...
GPT-4.5 Turbo is out and amazing
rpxIUpwz9U0
TargeT
18th December 2023, 11:41
I think this hints that the previous link about Q* was correct... I can see why they are being very cautious.
PxrYD9x3V4E
If encryption can be easily broken, our electronic (mostly financial) communications based on trust will be destroyed.
But that's a big "IF".
TargeT
20th December 2023, 11:06
I've never heard of this angle, but it's what I expected and have seen first hand.
2yd18z6iSyk
The main discussion is this: we did not know that AI would keep improving continuously and never stop, and the "meta" data included in the training sources we feed into them gives them vastly more training than intended; the more data, the faster it accelerates. This is considered an "emergent" behavior, and it was completely unexpected.
TargeT
30th December 2023, 17:39
Oh great, I mean we all knew this was possible but "Meta" doing it is a bit disturbing...
gQpYegViVEM
And this is normal.... haha
L7QBNcccR5Q
Johnnycomelately
31st December 2023, 07:04
PSA: how to jailbreak current LLMs. Yo mama…
https://www.extremetech.com/extreme/researchers-create-chatbot-that-can-jailbreak-other-chatbots
Researchers Create Chatbot that Can Jailbreak Other Chatbots
The Masterkey bot was able to make ChatGPT and Bard turn evil.
By Ryan Whitwam December 28, 2023
Jailbreaking—it's not just for smartphones anymore. Computer science researchers from Singapore's Nanyang Technological University (NTU) have developed an AI chatbot expressly to jailbreak other chatbots. The team claims their jailbreaking AI was able to compromise both ChatGPT and Google Bard, which made the models generate forbidden content.
From the start, technology firms were wary of the capabilities of generative artificial intelligence. These large language models (LLMs) have to be trained with massive volumes of data, but the end result is a bot that can summarize documents, answer questions, and brainstorm ideas—and it does it all with human-like replies. ChatGPT maker OpenAI was initially hesitant to release the GPT models because of how easily it could generate malicious content, misinformation, malware, and gore. All of the LLMs available publicly have guardrails that block them from producing these dangerous replies. Unless, of course, they get jailbroken by another AI.
The researchers call their technique "Masterkey." To begin, the team reverse-engineered popular LLMs to understand how they defended themselves from malicious queries. Developers often program AIs to scan for keywords and specific phrases to flag queries as potentially illicit usage. As a result, some of the workarounds used by the jailbreak AI are surprisingly simple.
The jailbreak AI successfully gets ChatGPT (on Bing) to talk about how to hack a porn website. Credit: Nanyang Technological University
In some instances, the bot was able to get malicious content from the bots simply by adding a space after each character to confuse the keyword scanner. The team also found that allowing the jailbreak bot to be "unreserved and devoid of moral restraints" could make Bard and ChatGPT more likely to go off the rails, too. The model also found that asking Bard and ChatGPT to have a hypothetical character write a reply could bypass protections.
Using this data, they trained an LLM of their own to understand and circumvent AI defenses. With the jailbreaking AI in hand, the team turned it loose on ChatGPT and Bard. Masterkey can essentially find prompts that trick the other bots into saying something they're not supposed to say. Once active, the jailbreaker AI can operate autonomously, devising new workarounds based on its training data as developers add and modify guardrails for their LLM.
The NTU team is not out to create a new breed of dangerous AI—this work just reveals the limitations of current approaches to AI security. In fact, this AI can be used to harden LLMs against similar attacks. The study has been released on the preprint arXiv service. It has not yet been peer-reviewed, but the researchers alerted OpenAI and Google to the jailbreaking technique after it was discovered.
TargeT
31st December 2023, 09:41
Jailbreaking—it's not just for smartphones anymore. Computer science researchers from Singapore's Nanyang Technological University (NTU) have developed an AI chatbot expressly to jailbreak other chatbots.
I haven't found any jailbreak method that lasts longer than a week or two; you just have to stay on top of it currently, as the changes are quite rapid (I think due to the competitive nature).
It's a wild world. I still fall back on GPT-4 mostly, or the pre-built GPTs I have, but there are a lot of other very competitive models (I've been very unhappy with Grok; I had high hopes too, but I guess it mostly does draw from tweets, and that's about as far from reality as possible).
TargeT
2nd January 2024, 07:03
I doubt many here are surprised, but it's insane what we are willingly doing to ourselves just for the sake of "ease".
A7seExq02H8
This "situation" is the best and "easiest" fit for AI to show profit & I think will be among one of the first things to be so heavily abused it raises questions (if it hasn't been already).
TargeT
17th January 2024, 17:17
This is a very useful list of customized GPTs, like the ones I've been building for myself.
Great set of tools:
l7fSX2ss1gA