Lately, I’ve talked a lot about so-called “Artificial Intelligence.” I say “so-called” because these algorithms aren’t actually intelligent, let alone sentient. They’re statistical models that string together words and concepts to simulate human writing without any real understanding of our world.
Today, a Mastodon user named EngagedPureLand pointed out an article from Lion’s Roar magazine about an AI called “Sati-AI.” Sati, in Buddhism, means mindfulness. Sati-AI is advertised as a “non-human mindfulness meditation teacher.” The article, penned by Ross Nervig, assistant editor at Lion’s Roar, features an interview with Marlon Barrios Solano, who is supposedly the creator of Sati-AI.
You might have noticed my frequent use of terms like “so-called,” “supposedly,” and “supposed.” These terms cast doubt not just on AI’s claims but also on whether Solano actually created this “non-human meditation teacher.” More on that later.
EngagedPureLand criticized the idea of machines teaching Dharma, stating that Sati-AI undermines traditional Buddhist practices. They are concerned it devalues community practice and relies on pre-programmed responses without genuine realization. They argue it’s part of a larger trend of commodifying meditation practices.
I share these concerns. When I read the article, I wondered if Solano’s responses were AI-generated. I imagined someone at Lion’s Roar had prompted ChatGPT to write a politically progressive article about an AI mindfulness teacher. It crossed my mind that Solano might be fictional, but he does indeed exist.
A critical question: how likely is it that a lone developer built such an AI from scratch, when the major AI systems cost billions of dollars to develop?
EngagedPureLand argued that while Sati-AI might provide decent advice, it’s simply repurposing words from real teachers without giving credit. Instead of people developing a relationship with a human teacher, they might stick to the AI, which dilutes the human connection and complicates support for real teachers.
This wariness toward AI stems from real concerns about it replacing human work and creativity. For instance, an eating disorder helpline once replaced its staff with an AI chatbot, which proceeded to give harmful advice. Clickbait articles written by AI are often inaccurate, and there have been lawsuits over the unauthorized use of artists’ images in training data. Clearly, AI is poised to replace jobs wherever possible, despite its many flaws.
This trend is particularly troubling for meditation and Dharma teachers. AI could use our content to compete against us, potentially creating a future where AI avatars lead workshops and retreats.
Back to Sati-AI: it answers questions about mindfulness based on pre-existing teachings, much like ChatGPT does. Essentially, Sati-AI is a rebranded version of ChatGPT, repackaging the same advice.
Solano’s claims about creating Sati-AI are misleading. He admits it’s powered by GPT-4 but markets it as a unique meditation chatbot. However, in reality, it’s just ChatGPT responding from a different website.
Solano talks about integrating Sati-AI into platforms like Discord and Telegram to foster community. But again, it’s just ChatGPT. You could ask it anything unrelated to meditation, and it would provide an answer, which means it isn’t specialized as claimed.
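To make the point concrete, here is a minimal sketch of how such a wrapper typically works. (The persona text, model name, and function names below are my own illustrative assumptions; the actual Sati-AI implementation has not been published.) All the “specialization” amounts to is a system prompt prepended to whatever the user types before it is sent to a general-purpose chat-completion API:

```python
# Hypothetical sketch: a "specialized" chatbot that is really just a
# general-purpose chat model with a persona prompt prepended.
# Persona text and model name are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are Sati-AI, a mindfulness meditation teacher. "
    "Answer questions with reference to Buddhist teachings."
)

def build_request(user_message: str) -> dict:
    """Assemble the payload a wrapper site would send to a chat API."""
    return {
        "model": "gpt-4",  # the underlying general model does all the work
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

# Nothing restricts the topic: an off-topic question passes straight
# through to the same model, which answers it like any other question.
request = build_request("What's a good recipe for lasagna?")
```

The system prompt nudges the model’s tone; it does not constrain its knowledge or behavior. That is why asking the “meditation teacher” about lasagna still gets you a recipe.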
Solano also asserts that Sati-AI can refer to itself and is self-aware. But a chatbot producing first-person statements is a product of its prompting, not evidence of awareness. His claims here are exaggerated and misleading.
Furthermore, name-dropping well-known figures in meditation is a tactic to lend undeserved credibility to Sati-AI.
In truth, Sati-AI as Solano describes doesn’t exist. It’s just an interface for ChatGPT, dressed up with misleading claims about its capabilities and purpose.
Another broader issue is the superficial nature of much Dharma teaching online, which often mirrors AI’s remixing of pre-existing content without deep, personal insights. Authentic teaching should derive from lived experience, not just a repackaged form of what one has learned from others.
Ultimately, the concerns about AI in spiritual teaching reflect broader issues of authenticity and depth in the dissemination of wisdom. Sati-AI is a symptom of this larger problem, and the push towards AI in roles requiring deep human connection and experience is troubling.
While Sati-AI might serve as an art exhibit, its role as a genuine teacher is dubious. The framing of ChatGPT as a specialized mindfulness tool is misleading. Given AI’s flaws and biases, there’s a justified need for caution and skepticism about its use in fields like meditation and spiritual teaching.