Recently, I had a chat with ChatGPT, an AI designed to answer questions in natural language. It does this by drawing on a large amount of information it has been trained on. When I asked about the benefits of meditation and the various practices involved, it handled the questions fairly well.
ChatGPT can be seen as a kind of search engine that rephrases information. Normally, when you use Google, it directs you to websites where you can find answers. In contrast, ChatGPT takes information from these websites and presents it to you without crediting the original sources, which is why I describe it as plagiaristic.
The day before, I interacted with another AI called Bard, developed by Google. Google is a highly trusted search engine in the English-speaking world, and its parent company, Alphabet, is one of the largest tech companies globally. You’d expect Bard to perform well. However, my experience was mixed, and frankly, I wasn’t impressed.
I tested Bard by asking questions about myself to verify its accuracy. Bard stated: “Bodhipaksa is a Buddhist meditation teacher and author. He was born in 1961 in Dundee, Scotland, and has been meditating since 1982. He is a member of the Triratna Buddhist Order and has published several books and guided meditations. He is the founder of Wildmind, an online meditation center.”
The initial biographical details were a paraphrase of bios I’ve posted online, and the section on my meditation teaching was generally accurate. Bard’s description of my teaching as based on the Theravada tradition is debatable, though: while many of the practices I teach are rooted in the early Buddhist scriptures, they aren’t exclusively Theravada, and some of the meditations I teach are definitely not Theravada.
Bard also claimed that I’ve been featured or interviewed in The New York Times, The Wall Street Journal, Forbes, NPR, and ABC News. None of this is true: those features and interviews never happened. Curious how Bard had generated this misinformation, I investigated further.
Bard provided specific references to supposed interviews with me in The New York Times, complete with detailed attributions, but none of those interviews ever took place. When I asked for more information about one of these articles, Bard replied that it couldn’t assist further because it’s a language model. Prompted slightly differently, it cited my own book “Living as a River” but fabricated an interview with Sandra Blakeslee.
This misrepresentation included a nonexistent “Bodhipaksa Retreat Center” in Barre, Massachusetts, complete with a real street address that has nothing to do with me. Bard did get some straightforward facts right, such as my degrees in Buddhism and business from the University of Montana and in veterinary medicine from the University of Glasgow. On other simple questions, though, it made errors, such as listing numerous books I had supposedly written, most of which don’t exist.
I also came across inaccurate articles written by Bard on a music website; that misinformation had even made its way into Wikipedia before being flagged as unreliable, demonstrating how AI-generated falsehoods can spread.
I worry that we’re in the midst of a misinformation crisis. Although AI tools warn that their answers may be inaccurate, many people will ignore those cautions and spread falsehoods. And AI-generated content often cycles back into AI training data, compounding the inaccuracies.
Science fiction author Ted Chiang compared this to repeatedly photocopying photocopies: with each generation, the image degrades. The analogy captures the erosion of information quality we could face with tools like ChatGPT and Bard. As such misinformation proliferates, the landscape of reliable information risks becoming ever blurrier and messier.