What would happen if synthetic speech got really good at hacking your emotions?
Sonantic, an AI voice startup, says it’s made a minor breakthrough in its development of audio deepfakes, creating a synthetic voice that can express subtleties like teasing and flirtation. The company says the key to its advance is the incorporation of non-speech sounds into its audio: training its AI models to recreate those small intakes of breath - tiny scoffs and half-hidden chuckles - that give real speech its stamp of biological authenticity.
Examples embedded.
TANGENTIALLY:
The non-speech sounds in the flirty synth-voice are the best bits.
I’m reminded of WaveNet, which was the big breakthrough in computer-generated voices in 2016. They also released examples of “babbling”: what you get when you run the voice model without any words. So you ONLY hear half-breaths, the tack of the tongue in the mouth, the subtle echo of the mouth cavity, and so on. It’s incredible audio.
I posted about babbling in 2017 (there’s a description there of how to listen to the samples).
The article asks this question: what are the ethics of deploying a flirtatious AI? Is it fair to manipulate listeners in this way?
That’s the point: it’s coercive, right? Weaponised flirting has long been used by street fundraisers trying to get you to sign up for charity donations.

People like flirting, which is why it works.
EXAMPLE, this chatbot in China: Xiaoice was first developed by a group of researchers inside Microsoft Asia-Pacific in 2014, before the American firm spun off the bot as an independent business.
And: According to Xiaoice’s creators, the bot has reached over 600 million users.
(Mostly Chinese, mostly male.)
Unlike regular virtual assistants, Xiaoice is designed to set her users’ hearts aflutter. Appearing as an 18-year-old who likes to wear Japanese-style school uniforms, she flirts, jokes, and even sexts with her human partners, as her algorithm tries to work out how to become their perfect companion.
The platform capitalism data-growth-profit flywheel at work:
By forming deep emotional connections with her users, Xiaoice hopes to keep them engaged. This will help her algorithm become evermore powerful, which will in turn allow the company to attract more users and profitable contracts.
Generalising this to emotional engagement… flirtation won’t be the right unlock for everyone.
So it’s easy to imagine extending adtech. Adtech means using tons of datapoints to construct a profile of you, so that you are shown the ads most likely to elicit a response. For example: knowing that other people in your home location are reading content about interior design, the targeting engine can push you ads for home furnishings.
The profile could be extended to add an emotional profile – do you respond best to flirting, or negging, or imperatives, or status flattery, et cetera.
And then ads would be automatically inflected with a sentiment overlay to change the voice or change the copy of the message to increase the likelihood that you respond.
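That selection step is straightforward to sketch. Here’s a minimal, entirely hypothetical version — the profile field, the ad variants, and the copy are all invented for illustration, not any real adtech API:

```python
# Hypothetical sketch: pick the ad copy variant matching a user's
# emotional profile. All field names and data here are invented.

AD_VARIANTS = {
    "flirting": "Hey you. This sofa would look *great* in your place...",
    "negging": "Your living room could be so much better. Start here.",
    "imperative": "Buy the sofa. Today.",
    "status": "The sofa chosen by people with genuinely good taste.",
}

def pick_variant(profile: dict) -> str:
    """Return the ad copy for the sentiment this user responds to best."""
    # In a real system this preference would be learned from past
    # response data; here it's just a stored field on the profile.
    style = profile.get("best_response_style", "imperative")
    return AD_VARIANTS.get(style, AD_VARIANTS["imperative"])

user = {"interests": ["interior design"], "best_response_style": "status"}
print(pick_variant(user))
```

The point is how little machinery it takes: the emotional profile is just one more column in the targeting database.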
When voices are synthesised, it kinda doesn’t matter if they’re only slightly more effective at getting you to convert – because you can robo-call a million people at once.
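The arithmetic of “slightly more effective at scale” is worth spelling out. With illustrative numbers (these conversion rates are made up, not measured):

```python
# Why a tiny persuasion lift matters at robo-call scale.
# All rates here are illustrative, not measured.
calls = 1_000_000
baseline_rate = 0.010    # 1.0% convert with a flat, unmodified voice
synthetic_rate = 0.012   # 1.2% with the slightly-more-persuasive voice

extra_conversions = round(calls * synthetic_rate) - round(calls * baseline_rate)
print(extra_conversions)  # 2000 extra conversions per million calls
```

A 0.2 percentage-point lift is imperceptible in any single call, but it’s thousands of extra conversions when the marginal cost of a call is near zero.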
And if synthesising is too hard (because it means solving for computer-generated conversations), then:
Why not build artificial flirtation into call centre software? Operators speak with whatever accent they have and with flat affect, and the machine automatically inflects their words to get you to agree to the broadband bundle upsell or whatever.
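The shape of that system is a simple pipeline: transcribe the operator’s flat-affect speech, then re-render the same words in whatever style the caller responds to. A sketch — every function here is a stub, and no such off-the-shelf API is assumed to exist:

```python
# Hypothetical call-centre inflection pipeline. Both stages are stubs
# standing in for real speech-to-text and style-controlled synthesis.

def transcribe(operator_audio: bytes) -> str:
    """Speech-to-text on the operator's flat-affect audio (stub)."""
    return "would you like to add the broadband bundle"

def resynthesize(text: str, style: str) -> bytes:
    """Re-render the same words in the target emotional style (stub)."""
    return f"[{style} voice] {text}".encode()

def inflect_call(operator_audio: bytes, caller_profile: dict) -> bytes:
    # The operator's words pass through unchanged; only the delivery
    # is swapped for the style this caller is most susceptible to.
    style = caller_profile.get("best_response_style", "neutral")
    return resynthesize(transcribe(operator_audio), style)

out = inflect_call(b"...", {"best_response_style": "flirting"})
print(out.decode())
```

Note that the operator never needs to know which style the caller is hearing — the persuasion layer sits entirely in the middle.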
(Coercion prosthetics. Could a persuasive voice changer be built into my face mask?)
Inhumanly persuasive centaur deepfakes are going to be wild.
What is the anti-spam analogue in a world of coercive voice manipulation?
I look forward to AirPods with smart transparency mode, a kind of audio firewall (as previously speculated (2021)), with a new “anti-enchantment” filter: you hear voices as normal, but with flirting and charisma automatically deducted.