UCL, in collaboration with Intel, Microsoft and IBM, develops UCL MotionInput V3, software for touchless computing.
Paris Baker, a 31-year-old mother of two, was an elite acrobatic gymnast who represented Great Britain and won silver at the European and World championships. At 26, Paris received a diagnosis of motor neurone disease (MND), which causes muscle weakness that gradually worsens over time and leads to disability. Of the many things that changed in her life, one of the most significant was losing the ability to play video games with her children.
That was, until she discovered MotionInput.
Developed by academics and students at University College London’s (UCL) Computer Science department, in collaboration with Intel, Microsoft and IBM, UCL MotionInput V3 enables truly touchless computing. With MotionInput and a common webcam, a user can control a PC by gesturing with their hands, head, face and full body, or by using speech. The software analyzes these interactions and converts them into mouse, keyboard and joystick signals, allowing users to make full use of existing software.
See how technology is made more accessible with UCL MotionInput and touchless computing
Intel has a long-standing relationship with UCL for mentoring computer science projects, says Phillippa Chick, global account director, Health and Life Sciences, Intel UK. “We work with Professor Dean Mohamedally, Professor Graham Roberts, and Mrs. Sheena Visram on mentoring projects as well as providing a support structure for the students. This idea was first proposed by the UCL team in the summer of 2020 as a series of UCL Computer Science IXN [Industry Exchange Network] student projects and stemmed from the need to help healthcare workers during COVID-19 when it was necessary to keep shared computers clean and germ-free.” The team brought on board Dr. Atia Rafiq, an NHS GP, to refine the clinical requirements for frontline healthcare workers.
A University College London student’s eye movement being tracked by UCL MotionInput V3 in the UK on Monday, March 28, 2022. (Credit: Intel Corporation)
University College London students in a mentorship session using Intel-powered laptops in the UK on Monday, March 28, 2022. (Credit: Intel Corporation)
Paris Baker, a former gymnast, using UCL MotionInput V3 in the UK on Wednesday, April 24, 2022. (Credit: Intel Corporation)
MotionInput opens up a world of use cases by combining hand or eye input with speech. Any game can be made accessible, patient movements can be recorded during physical therapy, and, in a hospital setting, surgeons can take notes through hand gestures and speech without having to touch a computer. The solution does not require connectivity or a cloud service, making it that much easier to deploy.
“It has great opportunity to positively impact the lives of people with chronic conditions that affect movement,” Phillippa says.
Intel provides UCL students with mentoring and technology, including hardware and software capabilities like Intel’s OpenVINO™ toolkit. The toolkit eases the development of AI-based applications and helps boost their performance.
The pre-trained models provided by OpenVINO™ enabled faster development of the various components and features of MotionInput, allowing students to move forward without training their own models, typically a lengthy, compute-intensive process.
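As an illustration, the snippet below is a minimal sketch of how a pre-trained Open Model Zoo detection model can be loaded and run locally with OpenVINO’s Python API. The specific model (face-detection-adas-0001), file paths and confidence threshold are illustrative assumptions, not details from the MotionInput codebase.

```python
# Minimal OpenVINO inference sketch: load a pre-trained Open Model Zoo
# detection model and run it on one frame, entirely on the local CPU.
# Model name, paths and threshold are illustrative assumptions.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("models/face-detection-adas-0001.xml")  # hypothetical local path
compiled = core.compile_model(model, "CPU")                     # runs locally, no cloud service
output_layer = compiled.output(0)

frame = cv2.imread("webcam_frame.jpg")                 # stand-in for a live webcam frame
_, _, h, w = compiled.input(0).shape                   # model expects NCHW input
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

detections = compiled([blob])[output_layer]            # shape [1, 1, N, 7]
for _, _, conf, xmin, ymin, xmax, ymax in detections[0][0]:
    if conf > 0.5:                                     # keep confident detections only
        print(f"face at ({xmin:.2f}, {ymin:.2f}) - ({xmax:.2f}, {ymax:.2f})")
```

Because inference runs on the local CPU, no connectivity or cloud service is needed, which matches how MotionInput is deployed.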
Costas Stylianou, technical specialist for Health and Life Sciences at Intel UK, explains that the optimization means MotionInput V3 “has several orders of magnitude improvements in efficiency and an architecture for supporting the growth of touchless computing apps as an ecosystem.” The software engineering and architecture of V3 were led by UCL students Sinead V. Tattan and Carmen Meinson. Together, they directed more than 50 UCL students across various UCL Computer Science courses to build upon the work. The team also worked with mentors from Microsoft and IBM, notably Prof. Lee Stott and Prof. John McNamara.
The solution employs a mix of machine learning and computer vision models to allow for responsive interaction. It is customizable, letting the user choose from a variety of modules (a simplified sketch of how such a module turns camera input into mouse events follows the list):
- Facial navigation: The user can steer the cursor with their nose or eyes and trigger actions such as mouse clicks with facial expressions, or with speech by saying “click.”
- Hand gestures: A selection of hand gestures can be recognized and mapped to specific keyboard commands and shortcuts, mouse movements, native multitouch input, and in-air digital pen strokes with depth.
- Eye-gaze with grid and magnet modes: An auto-calibration method for eye-tracking estimates the user’s gaze and aligns the cursor in accessibility scenarios, offering both a grid mode and a magnet mode.
- Full body tracking: Users can set physical exercises and tag regions in their surrounding space to play existing computer games.
- Speech hotkeys and live captions: Ask-KITA (Know-It-All) lets users interact with the computer through a set of voice commands, live captioning, and keyboard shortcut overrides.
- In-air joypad: Users can play games with the usual ABXY joypad buttons in the air with analog trigger controls.
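To make the conversion from camera input to mouse, keyboard and joystick signals concrete, here is a simplified sketch of a hand-tracking loop. The library choices (OpenCV, MediaPipe Hands, PyAutoGUI) and the pinch-to-click rule are assumptions made for illustration; they are not drawn from the MotionInput V3 source.

```python
# Simplified hand-tracking loop: estimate hand landmarks from webcam frames
# and translate them into mouse events. Illustrative only; not MotionInput code.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()          # target screen resolution
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)                      # any standard webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip, thumb_tip = lm[8], lm[4]    # normalized (0..1) coordinates
        # Move the cursor to where the index fingertip points on screen.
        pyautogui.moveTo(index_tip.x * screen_w, index_tip.y * screen_h)
        # Treat a pinch (thumb touching index tip) as a left click.
        if abs(index_tip.x - thumb_tip.x) + abs(index_tip.y - thumb_tip.y) < 0.05:
            pyautogui.click()
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The same pattern generalizes to the other modules: a vision or speech model produces an estimate each frame, and a mapping layer turns it into the mouse, keyboard or joystick events that existing software already understands.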
“What makes this software so special is that it is fully accessible,” says Phillippa. “The code does not require expensive equipment to run. It works with any standard webcam, including the one in your laptop. It’s just a case of downloading and you are ready to go.”
Because MotionInput enables facial navigation using the nose, eyes and mouth, adds Costas, “it’s ideal for people who suffer from MND.”
What’s next for MotionInput?
“The project will continue and is looking to collaborate with industry sectors. The academics and mentors are looking into what can be done to expand use cases and continuously improve the user experience,” says Phillippa. “We love working with the students and teaching staff at UCL, as it’s inspiring to see what they can do with technology.”
Or as Paris says, while playing a video game with her children, “The potential for UCL MotionInput to change lives is limitless.”
More Context: Download MotionInput Version 3 (June 2022) software and instructions | See demonstrations of MotionInput Version 3 | Touchless Computing: UCL MotionInput 3 (UCL Computer Science news article) | More on LinkedIn
Intel Partner Stories: Intel Customer Spotlight on Intel.com | Partner Stories on Intel Newsroom