UW researchers designed a headphone system that translates several people speaking at once, following them as they move and preserving the direction and qualities of their voices. The team built the...
Ok, so this concept is cool, but has a few problems…
Privacy: this is far too complex to run on the headphones themselves, so the system will need to connect to a server to do the heavy lifting. What happens to the data once it's used? For legal purposes I suspect it will need to be saved, meaning anything recorded could be analyzed or monitored.
Trust: AI models have rules in place to make them act in specific ways. The owner of the AI system could tweak those rules to change what is spoken or how it is said, and that could push political agendas into everyday conversations.
Reduced lingual skills: an AI like this would reduce the incentive to learn another language, cutting down on direct international communication and increasing dependency on the AI service, which further erodes our lingual skills.
This is scary…
Check out the Whisper APK on F-Droid. The thing runs locally, and it does just this: the model gets audio in an undetermined language, figures out which one automatically, transcribes it, translates it to English (only English atm), and then speaks it out. It's not using any acceleration and it's a very early build, yet my Pixel 9 only sees about a 3-second delay from input to output, all running locally.
It’s doable.
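The Whisper comment above describes a clear pipeline: detect the language, transcribe, translate to English, speak it out. As an illustration only, here is a minimal sketch of that loop using the open-source whisper Python package and pyttsx3 for speech output; the F-Droid app's actual internals may differ, and the audio file name is a placeholder:

```python
# Minimal local speech -> translated-English-speech loop (illustrative sketch).
# Everything runs on-device; no network calls.
# Requires: pip install openai-whisper pyttsx3  (whisper also needs ffmpeg on PATH)
import whisper
import pyttsx3

model = whisper.load_model("small")  # bigger models: better quality, more latency

# task="translate" makes Whisper emit English text regardless of the input
# language; the source language is auto-detected.
result = model.transcribe("clip.wav", task="translate")  # "clip.wav" is a placeholder
print("Detected language:", result["language"])
print("English text:", result["text"])

# Speak the translation with a local text-to-speech engine.
engine = pyttsx3.init()
engine.say(result["text"])
engine.runAndWait()
```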
For 1, they actually addressed that:
The system then translates the speech and maintains the expressive qualities and volume of each speaker’s voice while running on a device, such as mobile devices with an Apple M2 chip, like laptops and Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change.
The fact that all this can run on a phone is incredible; it sounds very processor-intensive.
I wonder what it would do to your battery life?
If that’s enough power, and you can run it without any internet access, then yes, it would probably address point 1.
I’m with you on 1 and 2, but “reduced lingual skills” is a bit of a stretch, I think. Becoming fluent in another language takes a lot of effort, and people only do it if they have a good long-term reason.
I think it’s more likely this would cover the vacation / short-term business case that’s already served by human interpreters (or by existing apps).