Stephen Hawking wowed the world four decades ago when he started speaking through a computer mounted on his wheelchair.
Now, thanks to AI, people with similar disabilities can go even further – swapping the robotic voice for a life-like digital version of themselves speaking on a screen.
A new invention will allow people with degenerative diseases to create an avatar of themselves that can talk for them, complete with their own voice and face.
They will interact with a screen in front of them on the wheelchair and their responses will appear as an avatar on a screen above their head.
The avatar will look exactly like the user, featuring not only the same voice but the same facial expressions, emotions, tone, and inflections they had before.
It will be trained on their personality and experiences too, including past relationships and WhatsApp chats – and it will learn everything from their sense of humour to details about their family.
When chatting to someone, the AI will listen in with a microphone to the conversation and generate three responses for the user to choose from using just their eyes.
While it took Hawking around five minutes to compose a short sentence or two, the new system allows users to respond in real time within just three seconds.
Over 100 million globally live with severe speech limitations from illnesses such as Motor Neurone Disease (MND), cerebral palsy, traumatic brain injuries, and stroke.
Yet 98 per cent of sufferers don’t have access to devices to help them communicate because the machines are often too expensive.
Launching the software at the AI Summit in New York today, LaVonne Roberts, chief executive of the Scott-Morgan Foundation (SMF), the charity driving the initiative, told the Daily Mail: ‘What I love is that it gives people their voice back.
‘For people who were so funny and witty but now have a face that is immobile, we can capture their personality and help them express it again through the avatar.’
Rather than just using a generic chatbot like ChatGPT, the AI is deeply trained on each user so that it can essentially think like them.
Over time, it will become increasingly attuned to the patient’s preferences and thoughts.
In practice, the AI will listen in to every conversation the user has, work out the context, and then offer three possible answers on the screen for the patient to choose from – allowing them to answer any question within three seconds using just their eyes.
‘With the AI, the idea was to train it using multiple agents to really get it as close as we can to the user’s own personality,’ said Roberts.
‘What many patients with speech issues found frustrating was they were unable to keep up with the flow of a conversation.
‘This technology speeds up their communication so they can now talk in real time.
‘Communication should be a basic right.
‘It’s the simple things – from going into Starbucks and ordering a coffee without having to hold the line up to being able to tell their kids they love them with full emotion.’
In a world first, the software – called SMF VoXAI – was architected entirely via eye-tracking by SMF’s chief technologist Bernard Muller, who is fully paralysed with ALS.
It was created in partnership with Israeli company D-ID, which built the avatars, British firm ElevenLabs, which provided the cloned voice, and US-based Nvidia, which supplied the chips.
Gil Perry, chief executive and co-founder of D-ID, said they usually worked with America’s biggest companies to provide digital assistants for customer service or training videos, but the company had always aimed to have a social impact.
He told the Daily Mail: ‘Even if the person has lost the ability to show emotion, it is no longer a challenge to generate an avatar that looks, talks, and moves exactly like them.
‘It’s been amazing to hear people tell us we brought their smile and their life back.’
The software will be available for free and the final device will be adapted and tailored to the specific needs and abilities of each user.
The SMF has designed a prototype that fixes two screens to the patient’s wheelchair and is now looking for a hardware company that would allow it to scale this up.


