I'm teaching a Computer Literacy class for my community. It's open-ended: we learn about whatever the students have questions about. I don't shy away from more technical explanations, since those can give good context and demystify things, but I don't go too in-depth. Here's a summary of what we learned.
Don't trust everything you read on the internet.
Especially if it's coming from AI, since AI is known to just make things up.
Verify claims that you see on the internet.
Look for sources, go to those sources, and find out for yourself.
LLMs are not magic; quite the opposite.
Ultimately, LLMs are fancy neural networks that turn text into numbers, multiply those numbers a whole bunch of times, figure out which words are most likely to come next, and then spit out a likely next word, one word at a time.
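To make "figure out which word is most likely to come next" concrete, here's a toy sketch. It is nowhere near a real LLM (real ones use billions of learned numbers, not a hand-written table), and all the scores in it are made up for illustration:

```python
# A toy "next word" predictor. Real LLMs compute these likelihoods
# with a huge neural network; here the scores are just invented.
next_word_scores = {
    "cat": {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    "sat": {"on": 0.7, "down": 0.3},
}

def most_likely_next(word):
    # Pick whichever word has the highest score after `word`.
    options = next_word_scores[word]
    return max(options, key=options.get)

sentence = ["cat"]
for _ in range(2):
    sentence.append(most_likely_next(sentence[-1]))
print(" ".join(sentence))  # → cat sat on
```

The loop at the bottom is the whole trick: predict one word, tack it on, and predict again from there.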
Neural networks work a bit like the human brain: signals pass between nodes, and each node alters the signals it receives before passing them along to other nodes.
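One of those nodes can be sketched in a few lines. This is a genuine artificial neuron, but the particular weights are invented for illustration:

```python
# One artificial "node" (neuron): it weighs each incoming signal,
# adds them up, and alters the result before passing it on.
def node(signals, weights, bias):
    total = bias + sum(s * w for s, w in zip(signals, weights))
    # The "alteration" here: negative totals get silenced (ReLU).
    return max(0.0, total)

out = node([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(out)  # roughly 0.7
```

A neural network is just thousands (or billions) of these wired together, with the outputs of one layer of nodes feeding in as the signals to the next.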
Human brains are way more sophisticated than LLMs.
How LLM training works.
Human babies are better learners than LLMs because they're always absorbing information. LLMs only learn from what you give them during training.
The big companies have taken a whole bunch of text from books, papers, and the internet, and used it to set the strengths of the connections between those neural network nodes.
LLMs depend on their training data to produce good output, but that training isn't continually updated, so their knowledge can be out of date.
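Here's a toy picture of what "training" means: nudge a connection strength (a weight) a tiny bit at a time so the model's guesses get closer to the right answers. Real training does this for billions of weights at once; the numbers below are illustrative:

```python
# Toy training: learn that the answer is 2 * input by repeatedly
# nudging a single weight to shrink the error.
weight = 0.0
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, right answer)

for _ in range(50):  # many small nudges
    for x, answer in examples:
        guess = weight * x
        error = guess - answer
        weight -= 0.05 * error * x  # nudge the weight toward less error

print(round(weight, 3))  # settles near 2.0
```

Once training stops, the weights are frozen; that's why the model only "knows" what was in the data it was trained on.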
Most LLMs have safeguards to prevent bad actors from doing bad things with them. These safeguards can be circumvented though, and some LLMs have no safeguards.
Searching for results on Google is a skill you can hone.
Advanced Search is a good way to get started with making better search queries.
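Beyond the Advanced Search page, Google also understands a few operators you can type straight into the search box (these are real, long-standing Google operators; the example topics are made up):

```
"solar eclipse 2024"          exact phrase, in quotes
site:nasa.gov eclipse         only results from one site
eclipse -music                exclude results containing a word
eclipse filetype:pdf          only results that are PDF files
```

The Advanced Search form fills these in for you; typing them yourself is just faster once you know them.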
The Dark Web isn't just for bad guys (though it is mostly for bad guys). It's for anyone who wants to remain anonymous on the internet.
That anonymity can be broken in some cases, though.
Anything you put on the internet can and will stay on the internet forever.