Telling an AI chatbot your deepest secrets or revealing your political views is ‘extremely unwise’, Oxford don warns
- Mike Wooldridge warns ChatGPT shouldn’t hear your political views
Complaining about your boss or expressing your political views to ChatGPT is “highly unwise,” according to an Oxford professor.
Mike Wooldridge said the artificial intelligence tool should not be treated as a confidant, because it can mislead you.
Whatever you say to the chatbot will help train future versions, he added, and the technology simply “tells you what you want to hear.”
The professor of artificial intelligence is delivering this year’s Christmas Lectures at the Royal Institution, in which he will explore the “truth” about the technology.
He said humans are hardwired to look for consciousness, but we “attribute it too often.”
Comparing the idea of finding personalities in chatbots to seeing faces in clouds, he said of AI: “It has no empathy. It has no sympathy.
“That is absolutely not what the technology is doing and, crucially, it has never experienced anything. The technology is basically designed to tell you what you want to hear; that is literally all it does.”
Treating it as something more was especially risky because “you have to assume that everything you enter into ChatGPT will just be passed directly into future versions of ChatGPT.”
He said it would be “extremely unwise to initiate personal conversations, complain about your relationship with your boss, or express your political views.”
Professor Wooldridge added that, because of the way artificial intelligence models work, it is almost impossible to get your data back once it is in the system. Earlier this year, OpenAI, the company behind ChatGPT, had to fix a bug that allowed users to see parts of other users’ chat histories.
The company says chat histories are stored for only 30 days and are not used to train the chatbot.
The lectures will be broadcast on BBC Four and iPlayer on 26, 27 and 28 December at 8pm.