Neuro-sama is the most-subscribed channel on Twitch, the streaming platform where people broadcast themselves gaming, talking, creating, or just hanging out while audiences watch, comment, and interact live. But Neuro-sama isn’t a person. It’s an AI-powered character capable of generating real-time commentary, responding to chat, and pulling in serious viewing numbers.
We’re seeing many more AI-generated personalities like this online. The definitions are fuzzy because they aren’t all doing the same thing, and audiences aren’t responding to them for the same reasons. For simplicity’s sake, let’s call them AI characters.
After nearly a year covering AI, I’m sceptical of the idea that interest in AI characters automatically means we all accept them. Based on my reporting, interviews, and time spent watching how people actually interact with these systems, I think something else is going on.
Novelty and the ‘new toy’ effect
Most new technologies go through a sort of spectacle phase. Think bold demos, impressive firsts and “wow” moments. AI characters are no exception, particularly those that look and behave in convincingly human ways.
Which is why I believe a big part of what’s happening here is simply novelty. Many people aren’t committed AI enthusiasts or hardened sceptics. They’re simply curious. Engagement spikes when people encounter something new, then drops once it becomes familiar.
That’s why AI streamers may function less like entertainers people invest in, and more like experiments people peek at. Neuro-sama is a good example. It isn’t just a generic chatbot dropped onto Twitch. It’s a carefully developed, idiosyncratic character built over years by its creator, vedal987. As TechRadar’s Eric Hal Schwartz noted when we covered Neuro-sama earlier this year: “Neuro‑sama is the product of years of development. It’s a specific, idiosyncratic character. A generic chatbot on Twitch would not have any way of replicating that success.”
That level of craft makes it interesting. It’s novel, technically impressive, and unusual enough to draw attention, even from people who have no interest in replacing human streamers with AI ones.
But novelty is only part of the story. Some viewers tune into AI character streams or follow AI influencers to spot the cracks: the slightly-off responses, the strange pacing, the moments where the illusion slips.
This echoes what roboticist Masahiro Mori described as the uncanny valley: when something is almost human but not quite, it attracts attention precisely because it feels wrong.
Many AI characters sit in that middle zone. They behave human-like enough to intrigue us, but not convincingly enough to sustain emotional investment. Once the trick is understood – yes, it can chat; yes, it can stream; yes, it looks lifelike – there’s little left to discover. And as more AI characters enter the same spaces, that sense of novelty or that morbid curiosity is likely to fade even faster.
Why humans still hold the edge
High view counts make for good headlines, but they’re a poor indication of long-term interest. We know people click on unusual things, algorithms amplify novelty, and metrics routinely confuse curiosity with something deeper. It’s why you might like a single raccoon video once and then be shown nothing but raccoon videos for a week.
When we make the same mistake and assume that views equal desire, we risk mistaking short-term spectacle for long-term cultural preference.
Philosopher Jean Baudrillard warned about this decades ago in Simulacra and Simulation, arguing that simulation produces “a real without origin or reality.” Replicas can attract attention while hollowing out meaning. AI characters simulate performance, but without lived context. They can be watched, but they’re harder to care about.
Human creators, especially on platforms like Twitch, remain compelling for messier reasons. They contradict themselves, get bored, tell stories, make mistakes and show us their humanness. Sure, we can’t say the same for all online personalities, but many of us maintain a connection with other humans online because they’re human.
One reason for that is that audience relationships with creators are often parasocial. Media scholars Donald Horton and R. Richard Wohl used this term to describe the one-sided bonds audiences form with performers over time. These bonds depend on perceived memory, growth, vulnerability and spontaneity – qualities that are difficult to fake.
The uncomfortable reality
Of course, this is subjective. In reporting on AI therapy and AI relationships over the past year, I’ve spoken to people who actively prefer AI interaction precisely because it removes humanness, messiness and friction.
There’s no social obligation, no reciprocity, no emotional risk. AI characters fit neatly into that logic. They’re easy to dip into and easy to abandon.
We don’t yet know how people will relate to these kinds of AI characters long-term – especially as distinguishing between what’s human and what’s not becomes harder. But for now, it’s worth resisting the temptation to read AI spectacle as preference. Sometimes a crowd leaning in doesn’t mean it wants to stay. It just wants to see how the trick works.
