Abstract
In recent decades, human-robot interaction and brain-machine interfaces have both advanced as tools for neurorehabilitation, but their integration remains largely unexplored. This workshop demo presents a real-time hierarchical machine-learning approach that detects the onset and offset of motor imagery to control passive reaching with an upper-body exoskeleton. Rather than attempting continuous control, the system supports a more natural sense of functional interaction: movement is initiated when motor imagery begins and terminated when it ends, while detection remains robust to exoskeleton-induced motion and noise.