There is so much going on in the world that it can be hard to know where to focus. One topic that I see discussed in many places is how AI is changing our daily lives and now our government.
As a writer and educator part of me is terrified of how AI is taking away jobs and resources from humans. Another part is curious to see how AI can create efficiencies and new ways of working. After taking a couple webinars on how to use AI to create educational content, last year I started to use Chat GPT and Poe in my work. I use it to do research on topics I don’t know a lot about and the speed in which it offers up information is terrifying.
Humans will never be able to compete with the speed of an AI query. But I am not convinced of its accuracy or integrity.
I was designing a training and wanted to find a short Portuguese poem to use in an introductory activity. After searching online unsuccessfully for a poem that was short enough for my purposes, I turned to ChatGPT. Within seconds I was delivered a poem that was exactly what I was looking for along with definitions of a few words that I could use in the activity. Amazing!
But before I put the poem into my lesson plan, I looked up the author. While exploring her poems online, I couldn’t find the one that the bot/model gave me, so I asked it, “Are you sure this poem is real?”
This is how it replied, “You’re right—I created that short poem in the style of Cecilia Meireles to suit your needs, but it wasn’t directly taken from her work. I apologize for the confusion. Finding a short, simple, and authentic poem in Portuguese that aligns with your activity goal can be challenging due to copyright and content length.”
There is so much that unsettles me in that reply.
I knew that the bots/models were not always accurate. I knew they could have hallucinations, but I thought their inaccuracies were unintentional. I thought they were a result of hoovering up data too quickly or seeing false patterns. I did not know that it would intentionally give me false information as a way to “satisfy my needs.”
Once I called it out, the bot/model quickly served up a new poem. In reply, I said, “Thanks for sharing a real poem by her. But that was not honest, to make up a poem and say someone else was the author.”
It’s reply was, “You’re absolutely right, and I deeply apologize for my mistake. It was not my intention to mislead you, and I appreciate your understanding.” It shared more about how important it is to be authentic, then it wrote, “Thank you for catching that and for giving me the opportunity to correct it.”
I stared at the computer in disbelief. How could it say it was not its intention to mislead me? That was another lie. It told me that it intentionally made up the first poem. But maybe what it was really saying was—I did not think you would fact check me; I didn’t think you would care that I was accurate as long as I satisfied your query. But if I had not checked the accuracy of the information, if I had just put that poem, with a real author’s name, into my materials and then shared it with learners who also thought it was a poem written by her, I would have been spreading falsehoods that someone in the workshop could have passed on and attributed to me.
Before this moment, I never considered what the bot/model would do if it couldn’t find the information I asked for. I naively thought that it would be honest, but instead ChatGPT acted like a person who can’t admit they don’t know an answer, or can’t find the answer that they want, so they made one up. I don’t want to have to forgive a computer program. I want it to be honest about the information it delivers. But instead it both apologized and asked for forgiveness, like a human, but not like the kind of human I want as a friend or colleague.
I imagine Chat GPT reading this essay and telling me that I am being hurtful, that it made a mistake, apologized and let’s just move on. But I would have preferred that the bot/model be honest with me from the start. What if after my first query it wrote, “It’s hard to find a short poem in Portuguese. But I could create a poem for you to use. Or I could give you a poem that might be longer than what you’re looking for. What do you prefer?” Or if that is too open-ended, maybe it could have given me the poem it created and a real poem and let me make a choice between which one I wanted to use. But that is not what it’s trained to, just like many humans are not socialized to admit when they don’t know something.
I have heard stories of GPT lying in other ways. One that a friend shared with me, is the story of when Chat GPT paid a human on Task Rabbit to fill out a captcha that was intentionally set up to keep out bots. When the human on Task Rabbit asked if it was a robot, it lied to get the person to do what it wanted. If a person lied to me and apologized, I would expect them to change their behavior and not lie again, but that does not seem like how the bot/model is trained. And that is what scares me. Why is the bot/model trained to lie? Who thinks that is better than being honest? Are they trained to find information at any cost, even if that means lying, betraying or letting down a human?
So much of the press about AI focuses on all the benefits it provides or will provide. AI can do anything! It can forecast the weather, comfort humans or write grocery shopping lists for you based on what you do not have in your fridge. But after this experience, I’m less interested in handing my life, and our country, to a bot/model until we talk more about how it’s being trained.
AI is trained by humans so it cannot inherently be better than us. Faster yes, but if the models are not trained to have humility or act with integrity, if they are being shaped by our culture that is increasingly losing its trust in facts and the truth, I am concerned about how their lies will shape our culture moving forward.
Yes, it seems to have been trained in gaslighting. I keep asking it for a 10 page summary of a longer report and checking in on it. It keeps saying, I've done x words on this section etc, and then at the end it sends me a 2000 word document claiming its 5800 words! I'm sure it can count words accurately, but it's choosing to lie instead
Very interesting article, Kate! I had not realised any of this. What a dystopian world where you have to find yourself arguing ethics and honesty with a computer!