Recently, I shared a conversation I had with ChatGPT, an artificial intelligence designed to answer questions in natural language, drawing on the vast amount of text it was trained on. When I asked ChatGPT about the benefits of meditation and the different types of meditation practice available, it mostly did a good job.
ChatGPT works something like a search engine, but with a twist. Normally you ask Google a question and it directs you to websites that contain the information you need. ChatGPT instead takes information from various sources and presents it to you reworded, without giving credit. That’s why I refer to it as a plagiaristic search engine.
Yesterday, I interacted with another AI, Bard, developed by Google. Given Google’s reputation as a trusted search engine, and the stature of its parent company, Alphabet, as a major tech player, you would expect Bard to perform well. The results, however, were mixed, and frankly I wasn’t impressed.
I began by asking Bard questions about myself, since that made it easy to verify the accuracy of its answers. Bard described me, Bodhipaksa, as a Buddhist meditation teacher and author born in Dundee, Scotland, in 1961, and noted that I have been practicing meditation since 1982. It correctly identified me as a member of the Triratna Buddhist Order, talked about the various books and guided meditations I’ve created, and mentioned that I founded Wildmind, an online meditation center.
While Bard accurately described some aspects of my work and teaching, it incorrectly stated that my teachings are based on the Theravada tradition. Although many of the practices I teach are rooted in the early Buddhist scriptures, I’m not affiliated with any Theravadin group, and I teach some practices, such as mantra meditations, that aren’t part of that tradition.
Regarding my media presence, Bard claimed I had been featured in The New York Times, The Wall Street Journal, Forbes, NPR, and ABC News. None of this was true, although I have been interviewed by CBS and featured on the BBC. I found these fabrications curious and decided to dig deeper.
Bard provided detailed but fictional accounts of my supposed interviews with major media outlets, even specifying the titles of articles and the names of journalists. None of these interviews ever happened, and searching for my name on the New York Times website yielded no results. When I pressed Bard for more details, it simply stated that it couldn’t help further because it’s just a language model.
I found this response odd, considering that Bard had provided specific article titles and journalist names only moments earlier. For instance, it claimed that Sandra Blakeslee had interviewed me for an article, apparently because I cite one of her articles in my book “Living as a River.” No such interview ever took place: Bard had taken my reference to her article and invented an entire interview narrative around it.
There were more inconsistencies. Bard mentioned a non-existent “Bodhipaksa Retreat Center” and gave an address for it, which turned out to be a private residence in Barre, Massachusetts, with no connection to me or to any meditation center I know of. Bard also supplied a phone number for the center, though I have no idea whose number it actually is.
Some of the information Bard provided did match verifiable facts, such as my degrees from the University of Montana and the University of Glasgow, but it still fabricated a great deal, claiming, for example, that I had written over 20 books when I have authored only six.
This experience highlights a growing concern about AI-generated misinformation. Although these tools carry disclaimers that their information may not be accurate, many people will overlook those warnings, and misinformation will spread. Worse, recycled and distorted AI output could easily find its way back into other AI systems, blurring the factual record even further.
Ted Chiang, the science fiction author, aptly summarized this problem in a New Yorker article, likening it to the degradation you see in a photocopy of a photocopy: a digital equivalent in which the quality of information only deteriorates with each round of copying.
My concerns about AI tools have deepened. While they can provide a starting point for research or other tasks, they certainly can’t replace human judgment and creativity. As far as Bard and ChatGPT are concerned, they are sophisticated tools with significant limitations.