Recently, I had a discussion with ChatGPT, an artificial intelligence designed to answer questions posed in natural language. ChatGPT draws on a vast reservoir of information to provide its answers. When I inquired about the benefits and types of meditation, it mostly provided satisfactory responses.
ChatGPT functions like a search engine but presents information reworded from various websites without crediting the sources, which is why I refer to it as “plagiaristic.” Normally, when you use Google, it directs you to websites where you can find answers. ChatGPT, however, extracts this information and presents it as its own.
I recently interacted with another AI called Bard, developed by Google. Given Google’s reputation as a trusted search engine and its parent company Alphabet’s status as a leading tech giant, I expected impressive results. Unfortunately, my experience was mixed and somewhat disappointing.
I started by asking Bard questions about myself to easily verify the accuracy of its responses. For example, Bard stated that I am a Buddhist meditation teacher and author born in Dundee, Scotland, in 1961, which is correct. It also accurately mentioned my involvement with the Triratna Buddhist Order and my founding of the online meditation center, Wildmind.
However, Bard overstepped with claims that I had been featured in The New York Times, The Wall Street Journal, Forbes, NPR, and ABC News, none of which are true. Curious about these fabrications, I probed further, and Bard offered even more specific but equally false information, such as interview details and the titles of articles that supposedly featured me.
I asked Bard where it obtained this misinformation, only to receive unhelpful responses such as, “I can’t assist you with that.” When I rephrased my query, Bard cited a non-existent retreat center and provided incorrect addresses and phone numbers. However, Bard did accurately mention my degrees in Buddhism and veterinary medicine.
This experience highlights the current crisis of misinformation. Even though AI tools often display warnings about potential inaccuracies, individuals may disregard these and spread false information. AI-generated content can easily slip through fact-checking processes, perpetuating falsehoods.
Moreover, while AI interactions might seem intelligent, it’s crucial to remember that these tools do not possess true understanding or consciousness. They merely remix and regurgitate human-written content based on statistical models. Unlike humans, who learn through multisensory experiences and emotional connections, AI learns by processing vast amounts of text data.
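To make the "statistical remixing" above concrete, here is a toy sketch of the idea at its simplest: a bigram model that learns nothing but which word tends to follow which in its training text, then generates sentences by sampling those follow-ups. Real systems like ChatGPT are vastly more sophisticated, but the principle is similar, and the example shows how fluent-sounding output can emerge with no understanding at all. The corpus, the `generate` function, and all names here are hypothetical, for illustration only.

```python
import random
from collections import defaultdict

# A tiny stand-in for the vast text data a real model trains on.
corpus = (
    "meditation calms the mind . "
    "meditation trains the mind . "
    "the mind wanders ."
).split()

# Record, for each word, every word that follows it in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Emit a sequence by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("meditation"))
```

Every sentence this produces is grammatically plausible because each word pair was seen in the training text, yet the model has no idea what meditation is; scaled up enormously, the same dynamic lets an AI produce confident-sounding claims with no grounding in fact.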
In conclusion, while AI can appear knowledgeable, it's essential not to be deceived by its illusion of intelligence. Interactions with AI should always be approached with caution, keeping in mind their potential to spread misinformation.