Recently, I had a conversation with ChatGPT, an AI chatbot designed to answer questions in everyday language. It doesn't look facts up in a database; rather, it generates responses based on patterns in the vast body of text it was trained on. When I asked it about the benefits of meditation and the different types of meditation practice, it gave a mostly decent summary.
However, ChatGPT functions somewhat like a plagiaristic search engine. When you search on Google, you're directed to the websites where the information originated. ChatGPT, by contrast, serves up content drawn from those sites in reworded form, without citing the original sources, which is problematic.
I also tested Bard, an AI developed by Google. Given Google's reputation as the leading search engine, I had high expectations, but the results were mixed and unimpressive. To put Bard through its paces, I asked it questions about myself.
Bard described me as a Buddhist meditation teacher and author born in Dundee, Scotland, in 1961. It went on to detail my teaching background and achievements, some of which were accurate, while others weren’t. For instance, Bard stated I had been featured in several high-profile publications like The New York Times and The Wall Street Journal, which is untrue. This made me curious about the source of its information, so I dug deeper.
When I asked for the sources of its claims, Bard couldn't provide any, explaining only that it was a language model. Even when I rephrased my questions, it kept producing inaccurate information, such as naming a retreat center that doesn't exist and supplying a false address and phone number for it.
Despite this, Bard did get some basic facts right, like my degree in Buddhism and business from the University of Montana and my veterinary degree from the University of Glasgow. However, it also exaggerated my publications, incorrectly stating I had written over 20 books.
I’ve come across websites publishing articles generated by Bard, often riddled with inaccuracies. Although some editors flag these sites as unreliable, the misinformation still spreads easily. This creates a feedback loop: AI-generated content gets fed back into AI systems as training data, compounding the inaccuracies over time.
There’s a broader concern here about the proliferation of misinformation. Although these AI tools carry disclaimers about potential inaccuracies, many people will overlook the warnings and pass along incorrect information. Even this post could contribute to that spread if its contents were quoted out of context.
This is a significant challenge of our digital age, and one we need to take seriously. As the science fiction writer Ted Chiang aptly observed, the process is like making photocopies of photocopies: with each generation, the quality of the information degrades.