10 Comments
Witches Stitches:

Yes, it seems to have been trained in gaslighting. I keep asking it for a 10-page summary of a longer report and checking in on it. It keeps saying, "I've done x words on this section," etc., and then at the end it sends me a 2,000-word document claiming it's 5,800 words! I'm sure it can count words accurately, but it's choosing to lie instead.

Kate Baldus:

Thanks for sharing. That does seem like a simple task for the models to complete successfully. Gaslighting is a good word for what they have been trained to do.

Ellen Katesfriend:

Very interesting article, Kate! I had not realised any of this. What a dystopian world, where you find yourself arguing ethics and honesty with a computer!

Kate Baldus:

It was quite an odd moment. Thanks for reading my story!

Victoria Olsen:

Thanks for sharing this and raising these issues — those ChatGPT answers were weird and creepy!

Kate Baldus:

Thanks for reading.

Jeffrey Streeter:

I've had AI invent poems and references before, but it has always admitted from the start that it was an error. It's hard to tell whether owning up to lying is a step forward or just another lurch into a world of info-confusion.

I use AI a lot for research, but, like you, I double-check things as far as I am able.

Kate Baldus:

Thanks for reading and commenting. I appreciate hearing about your experiences with AI inventions.

Laura G Marshall:

Disturbing... great piece, thank you, Kate!

Kate Baldus:

Thanks for reading!
