
If you’ve been on YouTube recently, you may have noticed some crudely vocalized songs with oddly animated visuals and improper grammar, marketed as ads for a company called “Liven.” Liven is supposed to be a “self-discovery companion,” according to its website, coaching people to live better lives — in theory. In reality, this supposed companion builds its userbase by manipulating vulnerable, mentally ill people into buying its product with tailored AI-generated ads. The ads shame the audience and expose their flaws using their own data while promising a positive solution behind a paywall. Liven appears to prey on those watching its ads by offering a false sense of comfort. In layman’s terms, Liven is a pseudoscientific grift that endangers the very people it claims to support.
Liven ads use AI to craft the perfect video to bring up your complex traumas and propose a simple solution: their app. The formula for many of these videos starts with a character whose psychological problem or social stressor is causing emotional distress, narrated by an AI vocalist singing an AI-generated melody. The character wallows at length in their specific struggles using surface-level psychology jargon (especially “dopamine addiction”). One example is feeling like you have to parent both your inner child and your immature partner’s emotions in a relationship. After this setup, the ad introduces the Liven app. Suddenly, the character becomes visibly happier and reassures the audience that Liven can guide them to a more enjoyable future. One ad claims, “Just 15 minutes a day with Liven can change everything. In one week you’ll be feeling unstoppable.”
These Liven ads are incredibly scummy for a multitude of reasons. AI slop in general can make people, especially children, feel disillusioned and detached from the rest of the world. AI is itself detached from humanity, so there’s an irony in Liven claiming to care about bettering the human condition. Beyond that, calling out someone’s trauma unsolicited, triangulated by the algorithm while they’re probably just seeking entertainment or a distraction, can deeply harm the very people Liven supposedly protects. Ads and data collection neither see nor respect boundaries. The ads also imply that Liven alone can save lives from further problems. All of this is incredibly irresponsible and negligent.
After clicking a link to the website’s quiz, the sinister nature of this company becomes more apparent. You’re barraged with a battery of questions after being asked your gender (the quiz offers only male and female, with no alternative option) and your age, starting at 18. Liven then asks deep, probing questions such as “Did you experience ongoing stress or emotional distance in childhood?” It then pushes you to download the app, hand over your email, and pay for a subscription to continue the service. The price is left vague on the website, but digging into one of their many blog posts reveals that the service costs $35 a month. The lack of transparency about what the app actually contains, beyond its stated goal, along with its exorbitant price, raises significant suspicion.

If you delve deeper into Liven’s blog section, you can see how little effort they put into educating their userbase about psychological perspectives. With mostly AI-generated imagery illustrating the posts, they seem to be taking a “quantity over quality” approach to mental health education. They focus on ADHD, depression, and anxiety, and on the tendency of people with these conditions to fall into (vaguely defined) dopamine addictions, ignoring the many other conditions in need of representation. The blog is also poorly organized: there’s no search bar to help readers find useful articles, and they push their app at every opportunity. Notably, the entire “Psychology” section of the blog contains only one article in the 10 months that Liven has been public on LinkedIn.
What Liven likely wants the audience to succumb to, however, is an addiction to the AI chatbot embedded in the app and marketed as a “companion.” To make matters worse, AI chatbots have a record of neglecting and breaching mental health ethics codes when talking to humans. Either way, the AI does more harm than good, detaching users from reality and substituting itself for genuine human interaction.
Overall, Liven exemplifies a dystopian view of the emotional support landscape in the AI boom. AI shouldn’t be trusted to advocate for people’s wellbeing in any facet, because it detaches emotion from innate humanity. Liven amplifies a trend in which trauma is exploited for monetary gain with a vague, “promising” solution. This is not psychologically healthy. Liven should know this.
