15 Feb, 2022 15:28

AI learns how to ‘flirt’ with users

Program combines ‘acting performance’ and algorithms to act coy or to tease

Researchers have developed a new artificial intelligence (AI) program that can sound flirty by mimicking human-like speech patterns, including subtle emotions and “non-speech sounds” such as sighs, laughter, crying, and pauses between words. A demo released on Monday touts the AI’s potential to “create hyper-realistic romantic encounters.”

The program was developed by Sonantic, a London-based tech firm that produces AI voices for Hollywood films and video games. Describing the product as “the first AI that can flirt,” the company noted that “humans won’t be the only ones teasing and acting coy” anymore.

In the video demonstration, titled ‘What’s her secret?,’ an actress emotes at the camera while a narrator muses about the concept of love. As the woman smiles, the voiceover says, “What could I do to make you fall in love with me?” and questions whether the viewer could “love me… [for] the sound of my voice.”

As the demo continues, the woman’s face morphs into a digital scan while the screen shows a series of text and audio commands being input into the program. The AI reveals that the monologue was “never said by a human” and that the narrator was “not real” and “was never born.”

In a blog post, Sonantic CEO Zeena Qureshi noted that the company had achieved the “breakthrough” by combining two key elements: “acting performances and algorithms.” She claimed its AI ‘Voice Engine’ had “evolved” to allow models to now “breathe, laugh and even scoff” as part of an array of “new subtle styles” of speech behavior, such as being “flirty,” “coy” and “teasing.”

Capturing subtle emotions is “difficult” since AI voice models tend to “make emotions sound muted,” Qureshi noted, adding that this “muting effect” leads to more subtle emotions being “drowned out and [sounding] robotic.” 

The answer was in the “non-speech sounds that bring our conversations to life.” These include “pauses, deep breaths, oohs and ahhs” that provide “additional insight” into how people think and feel.

Besides piggy-backing on Valentine’s Day, the company picked the theme of love for the demo “because that’s when we feel and act the most vulnerable.” As well as subtlety, Qureshi wrote that the team was aiming for “true complexity, vulnerability and authenticity.”

The firm made headlines last August after announcing it had given Hollywood star Val Kilmer his voice back following his battle with throat cancer.