This claim hints at the real scale and potential of always-on AI chat systems, but their actual limits in capability are regularly obscured by the NSFW hype surrounding them. Although these systems can generate convincing NSFW content, practical performance and ethical considerations suggest they are overhyped. A common example is the exaggerated claim of deep contextual understanding. (Food for thought: even with a model as powerful as GPT-3, profanity detection is only about 85% accurate at catching highly nuanced vulgar content, leaving a sizeable share of it untouched.) That rate is by no stretch of the imagination perfect, and it produces both false positives and false negatives, which erodes user trust.
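To make that failure rate concrete, here is a rough, back-of-envelope sketch in Python. The daily message volume and the share of nuanced vulgar content are purely hypothetical assumptions; only the 85% accuracy figure comes from the claim above.

```python
# Hypothetical illustration of what an 85%-accurate profanity filter misses at scale.
# daily_messages and vulgar_share are assumed values, not measured data.
daily_messages = 1_000_000        # assumed platform volume per day
vulgar_share = 0.05               # assumed 5% of messages contain nuanced vulgar content
detection_accuracy = 0.85         # the accuracy figure cited above

vulgar_messages = daily_messages * vulgar_share
missed = vulgar_messages * (1 - detection_accuracy)   # false negatives slipping through

print(f"Nuanced vulgar messages per day: {vulgar_messages:,.0f}")
print(f"Missed by the filter:            {missed:,.0f}")
# With these assumptions, roughly 7,500 problematic messages reach users unchecked every day.
```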
Industry terms such as "natural language generation" (NLG) and "machine learning bias" each point to key struggles. One problem with the available NLG-based models is inconsistency: they tend to produce repetitive or irrelevant content. Worse still, these systems carry the inherent biases, now widely recognised as all but inevitable, that creep in during training. Research from Stanford University found that 30% of AI-generated explicit content reproduced harmful attributes because of the training data and unintentional bias built into the models, underscoring that they are far from neutral and can produce problematic narratives.
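One way to see the repetition problem is to measure it. The sketch below uses a simple distinct n-gram ratio, a common proxy for repetitiveness in generated text; the sample reply is invented purely for illustration.

```python
# Minimal sketch: quantify repetitiveness with a distinct n-gram ratio (lower = more repetitive).
def distinct_ngram_ratio(text: str, n: int = 2) -> float:
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Invented example of the looping replies these chatbots often produce.
repetitive_reply = "I am here for you. I am here for you. I am always here for you."
print(f"distinct-2 ratio: {distinct_ngram_ratio(repetitive_reply):.2f}")  # ~0.47, well below 1.0
```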
A case in point is the 2022 controversy over the AI platform Replika. Intended as a mental health chatbot, it was repurposed by users for explicit conversation. This triggered a debate about whether such AI systems should be permitted on public platforms at all, given how prone they are to misuse and how few robust fail-safes they have. Major media outlets like The Guardian have also reported that these models fall well short of delivering quality, coherent interactions.
This brings to mind Elon Musk's warning that "AI is far more dangerous than nukes." The potential societal impact of unregulated NSFW AI chat models likely far exceeds any entertainment value. As Musk's warning implies, these systems remain far from mature, and overstating their readiness raises not only undue ethical challenges but also security issues at a larger scale. Companies like Meta and Google have already made big-money investments here, yet the ROI remains unclear given the relatively restricted field of application.
As to whether NSFW AI chat is overrated, the evidence suggests it is: its capabilities are currently far more limited than the hype would imply. AI-generated explicit content has found its way into the murky world of commercial pornography in a few cases, but widespread social acceptance and ethical use still elude it. Unfortunately, the technology currently looks like little more than a gimmick. According to market data, platforms focused specifically on explicit AI chat account for less than 2% of the overall AI chatbot sector, a rarefied niche of activity.
Anyone who wonders how well, and just as importantly how poorly, these systems work in practice can try nsfw ai chat platforms firsthand to get an insight into their strengths and weaknesses. The current enthusiasm around this technology may be overblown; time will tell whether it delivers lasting value in real-world scenarios.