AI-Powered System Physically Guides Users Through Unfamiliar Tasks
A new system developed by researchers at the University of Chicago combines artificial intelligence (AI) and electrical muscle stimulation (EMS) to physically guide users through unfamiliar tasks. This development represents a step toward general-purpose, context-aware embodied assistance.
The system, which the researchers describe as "embodied AI," uses modern multimodal AI to integrate visual information, location, and body pose, generating movement guidance dynamically and adapting to the current context rather than following a fixed script.
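The sense-reason-actuate loop described above can be sketched in a few lines. This is an illustrative outline only, not the authors' released code: the names `Context`, `MultimodalModel`, and `EMSDriver` are hypothetical stand-ins, and the model stub simply composes a string where a real system would query a vision-language model.

```python
from dataclasses import dataclass

@dataclass
class Context:
    frame: str      # camera frame (stand-in for image data)
    location: str   # e.g. "kitchen counter"
    pose: str       # coarse body-pose summary

class MultimodalModel:
    """Stand-in for a multimodal AI that maps context to a movement cue."""
    def next_cue(self, ctx: Context, goal: str) -> str:
        # A real system would query a vision-language model here;
        # this stub just composes a plausible instruction string.
        return f"rotate wrist: {goal} given pose={ctx.pose}"

class EMSDriver:
    """Stand-in for an electrical-muscle-stimulation driver."""
    def actuate(self, cue: str) -> str:
        return f"EMS -> {cue}"

def guidance_step(model: MultimodalModel, ems: EMSDriver,
                  ctx: Context, goal: str) -> str:
    """One iteration: sense the context, generate a cue, actuate muscles."""
    cue = model.next_cue(ctx, goal)
    return ems.actuate(cue)

ctx = Context(frame="<frame-0>", location="kitchen",
              pose="hand on pill bottle")
print(guidance_step(MultimodalModel(), EMSDriver(), ctx,
                    "press and twist cap"))
```

The key contrast with earlier scripted systems is that the cue is recomputed from fresh context on every step, rather than replayed from a fixed sequence.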
The system was developed by PhD students Yun Ho and Romain Nith, under the supervision of Associate Professor Pedro Lopes in the Department of Computer Science. Their work received the Best Paper Award at the ACM CHI 2026 conference.
From Fixed Scripts to Adaptive Guidance
Previous EMS research attached electrodes to the body to teach specific pre-programmed tasks, such as piano sequences or sign-language gestures. Those systems were specialized and non-contextual: they could not adapt to situations beyond their programmed scope.
The new system focuses on transmitting "procedural knowledge" (the practical understanding of how to perform a task) directly to the muscles, in contrast to methods that convey only factual information. Context-aware, generative EMS can physically guide users through complex, unfamiliar tasks even when they cannot articulate what they need.
User Experience and Applications
Yun Ho stated that the research explores how individuals interpret and build relationships with devices that communicate through body movements. She noted that having participants verbalize their thoughts while using the system yielded valuable insights.
In a user study, participants successfully completed tasks such as opening pill bottles with locking mechanisms, operating a disposable camera, and using an avocado tool, guided by the dynamically generated muscle cues. When the AI made deliberate errors, participants noticed the mistakes, adapted, and corrected for them.
Pedro Lopes suggested the system could be transformative for highly physical tasks, including learning skills in manufacturing, materials work, or musical instruments. It could also assist individuals who are situationally impaired, such as when multitasking or operating in low-light conditions.
Potential Applications
- Healthcare and rehabilitation: Guiding physical therapy patients or elderly individuals through movements at home.
- Industrial and skilled labor: Physically guiding workers through motions for new equipment, potentially reducing injury risk and training times.
- Accessibility: Providing direct bodily guidance for blind or low-vision users, making environments more accessible by teaching new gestures and tasks hands-on.
- Everyday life: Assisting travelers with unfamiliar appliances or hobbyists with new gadgets.
Current Limitations and Future Work
The researchers acknowledge current limitations, including the need for electrode calibration, the physical sensation of EMS, and the fact that stimulation alone cannot fully replicate muscle memory. They anticipate rapid advances in both AI and EMS hardware.
This system is intended to complement, not replace, audiovisual guidance. The research team has open-sourced their code to encourage further development and critique.
The project emphasizes user control and safety, with the AI activating only when invited and participants maintaining the ability to interrupt or adjust the guidance at any stage. The recognition of this work with the Best Paper Award highlights its contribution to the field of AI-powered interaction and embodied co-pilots for daily tasks.
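The invite-and-interrupt control model described above (guidance activates only when the user asks for it, and can be halted at any stage) can be sketched as a simple gated session. This is a hypothetical illustration of the safety principle, not the team's open-sourced implementation; the class and method names are invented for this sketch.

```python
class SafeGuidanceSession:
    """Delivers EMS cues only while the user has explicitly opted in."""

    def __init__(self):
        self.invited = False
        self.delivered = []

    def invite(self):
        """User explicitly invites guidance; nothing runs before this."""
        self.invited = True

    def interrupt(self):
        """User halts stimulation; takes effect immediately."""
        self.invited = False

    def stimulate(self, cue: str) -> bool:
        """Deliver a cue only if the session is currently authorized."""
        if not self.invited:
            return False        # safety gate: ignore cues when not invited
        self.delivered.append(cue)
        return True

session = SafeGuidanceSession()
session.stimulate("open hand")      # ignored: user has not invited guidance
session.invite()
session.stimulate("open hand")      # delivered
session.interrupt()
session.stimulate("close hand")     # ignored: user interrupted
```

Keeping the gate check inside every actuation call, rather than only at session start, is what lets the user revoke control mid-task.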