On Monday 13 May, OpenAI livestreamed an event to launch a fancy new product – a large language model (LLM) dubbed GPT-4o – that the company's chief technology officer, Mira Murati, claimed to be more user-friendly and faster than boring ol' ChatGPT. It was also more versatile and multimodal, which is tech-speak for being able to interact in voice, text and vision. Key features of the new model, we were told, were that you could interrupt it in mid-sentence, that it had very low latency (delay in responding) and that it was sensitive to the user's emotions.
Viewers were then treated to the customary toe-curling spectacle of "Mark and Barret", a brace of tech bros straight out of central casting, interacting with the machine. First off, Mark confessed to being nervous, so the machine helped him to do some breathing exercises to calm his nerves. Then Barret wrote a simple equation on a piece of paper and the machine showed him how to find the value of x, after which he showed it a piece of computer code and the machine was able to cope with that too.
So far, so predictable. But there was something oddly familiar about the machine's voice, as if it were that of a sultry female – called "Sky" – whose conversational repertoire spanned empathy, optimism, encouragement and perhaps even some flirty overtones. It was reminiscent of someone. But who?
It turned out that it reminded many viewers of Scarlett Johansson, the celebrated Hollywood star who provided the female voice in Spike Jonze's 2013 film Her, which is about a man who falls in love with his computer's operating system. This, apparently, is the favourite film of OpenAI's chief executive, Sam Altman, who declared at an event in San Francisco in 2023 that the film had resonated with him more than other sci-fi movies about AI.
The person most surprised by GPT-4o's voice, though, was Johansson herself. It seems that Altman had approached her last September, seeking to hire her as the chatbot's voice. "He told me," she said in a statement, "that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives, and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said that my voice would be comforting to people."
She declined the offer, but after the demo was livestreamed she found herself besieged by "friends, family and the general public" telling her how much GPT-4o sounded like her. And she was even more frustrated to discover that Altman had tweeted the single word "Her" on X, which she interpreted as an insinuation that the similarity between the machine's voice and her own was intentional.
Needless to say, OpenAI vehemently denied any sharp practice. "The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," an OpenAI spokesperson said in a statement that the company attributed to Altman. "We cast the voice actor behind Sky's voice before any outreach to Ms Johansson."
But, the statement goes on: "Out of respect for Ms Johansson, we have paused using Sky's voice in our products. We are sorry to Ms Johansson that we didn't communicate better." Aw, shucks.
Now at one level, of course, you could say that this is a storm in a champagne goblet. It's possible that OpenAI's newfound "respect" for Johansson may have nothing to do with the fact that she is famous and has expensive lawyers. It's also conceivable that Altman wasn't trolling her by tweeting "Her" when he did. Likewise, pigs may fly in close formation.
On a broader level, though, this little fracas, as tech writer Charlie Warzel put it in The Atlantic, shines a useful light on the dark heart of generative AI – a technology that is: built on theft; rationalised by coats of legalistic posturing about "fair use"; and justified by a worldview which says that the hypothetical "superintelligence" tech companies are building is too big, too world-changing, too important for mundane matters such as copyright and attribution. Warzel is right when he says that "the Johansson scandal is merely a reminder of AI's manifest-destiny philosophy: this is happening, whether you like it or not". To which the correct answer is: it is, and most of us don't.
What I've been reading
Liberal ideal
A lovely New Statesman profile of the formidable political reformer Roy Jenkins by Simon Jenkins (no relation).
Notes for notes
Technology expert Om Malik explains how he writes in an interesting interview on the People and Blogs website.
Cyber insecurity
There's a good reflective piece in the Register on how the British Library's communications strategy had to change after a ransomware attack.