Decades ago, please don't ask me how many, in the days before email and bots that auto-dialed cell phones to pester the unwary with questions, some companies hired high school kids to knock on the doors of strangers to ask survey questions during the summer. Yes, there was a time when teens were sent to enter the homes of complete strangers. Mostly we went in pairs. But on one occasion I got a late start. Since everyone else was already out hard at work, I grabbed a list of prospects and set out on my own.
To make a long story short, nothing bad happened to me that day. Nothing except boredom. I was bored and behind schedule, so my human brain made a decision: I used my imagination to fill in forms until I got caught up. Meaning, I got paid. The company was happy with the results I turned in. See, human intelligence solves problems.
Now, I told you about my youthful issue because knowing that might help explain my experiences with ChatGPT. I’m late to the party, again, and only decided to take ChatGPT for a test drive a few weeks ago.
I began simply, inputting a few sentences that described three characters and their situation. ChatGPT showed its brilliance by showering me with praise for the complexity of my characters and their situation.
Then it informed me they needed therapy. That may be true; every character I have ever written would probably benefit from deep therapy. But therapy was not what I wanted from the software. After that, no matter what I did, it wouldn't deviate from the "send them to therapy" line. When I said they had no access to therapy, it added that the community should help. When I mentioned they were teenagers, it said they were fortunate to have lots of time to respond to therapy. Once it got an idea in its "head", ChatGPT refused to change.
I decided to try something else and asked it to tell me about Eugene James Bullard. Just that one sentence. This is a real human being, one I spent a year researching and writing a biography about, a biography that is now out on submission. So I know a lot about the man, and I know that a lot of the information about him in books and on the web is contradictory and incorrect. Finding the truth required me to ignore some of the old biographies written about him and to go all the way back to grade school reports, census records, and foreign wedding and death records.
After thinking for a long time, ChatGPT began its story. And right near the beginning I found something strange: ChatGPT claimed that Bullard and his family left the US and moved to Marseille, France when he was young.
Marseille? In over a year of studying Bullard, and following his journeys across the American South, to Scotland, England, and Paris, then the French Foreign Legion, and eventually Portugal and on to New York, Marseille was never in the picture. Nor did he and his family ever move anywhere together; he ran away from home as an adolescent.
I told the software that it was wrong. ChatGPT was silent for a little bit, apparently rechecking its data, then apologized to me for the mistake and rewrote the information. At least it was polite about being corrected. Not every human being is that nice when caught making things up.
ChatGPT was exhibiting a phenomenon well known in AI circles. I say it used its imagination, but the AI community calls it machine hallucination. Whatever term you want to use, AIs are known to sometimes give phony information instead of simply saying they do not know. Given my own history and that long-ago survey, I'm tempted to call that a sign of intelligence, artificial or otherwise.
Some writers and illustrators fear AI will take away their work, and call its output plagiarism. The mere fact that it has an imagination, okay, hallucinations, and sometimes makes things up shows that something is going on besides copying. When a system reaches a certain level of complexity, not even its maker fully understands what is going on.
FYI, after receiving a Computer Science degree, I worked with AI for a bit; LISP was my computer language of choice. I know enough to know that ChatGPT is only a distant, poor cousin of some of the things out there. Even back then, there were AI systems that could make ChatGPT look like a child next to its doctoral-candidate cousin. Those systems may never be seen in the hands of average people. And they also hallucinate, big time.
If we accept technology with our eyes open, it is possible the next generation will view AI as just another technological advance. Unintelligent automation took over auto assembly lines, and the humans with foresight learned new skills, including managing the machines that had replaced them. Human intelligence will always be needed in many areas, starting with spotting and fixing AI hallucinations.
5 comments:
Very informative, Barbara. And "Wow!" when you told Chat it was wrong it "rethought" its answer? Very interesting. I think if I jump in the AI pond, I'll do it after I've written something and I've a couple of spots I want input on. Or maybe not. I see it as something to look into.
Fascinating post, Barbara!!! I think your description, "They used their imagination", is closer to the truth. Your paragraph about the advanced AI systems that are out there sent a shiver up my spine. Your paragraph about jobs in the future makes sense. There are always disrupters and changes in the job market, some bigger than others.
Thank you for this informative and thoughtful post!
I say ChatGPT thought things over because an amount of time passed before it answered, as if it were a person checking their references. Then it came back, apologized for the mistake (it is so polite it should be obvious it is not a real person), and corrected its mistake. The same thing happened later, when I found another error.
After that, I decided to try the old "two truths and a lie" game. The next time I corrected ChatGPT, I was the one making something up. I figured I had built credibility, and wanted to see if it was just agreeing with me. Once again it was quiet, then came back to say it was sorry but it did not believe me. So during those pauses it really was checking something, not just copying my words. BTW, ChatGPT was still very polite about my untruth. It even asked me for reputable references it could check about my lie.
This is still only machine intelligence, but it was doing more than just regurgitating words at me.
Thanks for the thoughtful post, Barbara. It's a good reminder that AI is fallible, and we can't predict all the ramifications of using it.
Very informative post. AI is definitely up and coming as a serious tool.