Fostering trust and transparency in telco AI solutions is central to Ericsson’s approach
As AI capabilities reach what Ericsson’s Parth Radia called “an inflection point,” the company is focused on delivering AI-enabled solutions built around trust and transparency. Speaking to RCR Wireless News for the Telco AI Deep Dive series, Radia, vice president and general manager of voice and video AI for the Swedish firm, discussed how rich datasets could help the company prevent future misuse.
Ericsson ONE is a venture arm within Ericsson that “makes all sorts of bets…on the future, generally kind of connected to communications, connection and how we speak or talk to one another,” Radia explained. Among his current focuses are a suite of Voice and Video AI solutions, including voice-to-video translation.
“You’ll be able to have a video call like this, you can speak a different language, it’ll sound like you and it’ll also look like you, so your face will be modified to look like it’s actually speaking the target language,” he explained.
He described this as a first step in a longer-term ambition to develop AI-enabled tools to enhance voice and video communications; other potential focus areas include photorealistic avatars and 3D communications.
As to how Ericsson can help prevent the misuse of this type of technology, Radia acknowledged that it is a complex topic, adding, “I think the answer is maybe not even just on the part of the safety and ethics of it, it’s even how you build it from the start.”
For Ericsson, that means assembling a dataset designed to remove bias from foundational algorithms. “When we started this, one of the core kind of tenets of what we were building was that this has to work for everybody, it has to work super well and super fairly,” Radia said. He added that the company is betting that this degree of care, along with the amount of data being collected, will pay dividends in the future.
“We’re kind of betting that putting all this effort in on the front end allows us to do those things more properly or at least empirically more correctly in the future. I think it’s an important thing for us to do but I will be completely frank here—there is no correct answer yet on how to do these things to prevent misuse.”