Development of a multimodal Brain-Computer Interface based on artificial intelligence for rehabilitation of people with motor disorders


Faculty of Medicine, Tel Aviv University

Dr Friedman studies human movement in typical and clinical populations. He builds models to understand how the brain controls movement and how this knowledge can be used to enhance rehabilitation. He began his studies in computer science; during his M.Sc. and Ph.D. at the Weizmann Institute of Science, his research on arm movements and grasping drew him into the field of human motor control.

About the project:

The project aims to develop a multimodal brain-computer interface (BCI) for device control based on a combination of brain signals and residual movement recordings. The system decodes brain and body signals, measured by EEG, accelerometers and gyroscopes, using advanced artificial-intelligence techniques. The technology aims to improve the quality of life of millions of people with severe movement disorders by meeting their need to be independent and to participate in modern life, through a software-hardware system for touchless device control and neuro-rehabilitation.
The developed AI-based classification system runs a real-time cycle of: 1) parallel multimodal acquisition of EEG and IMU signals; 2) signal processing; 3) advanced feature extraction (including principal component analysis, kinematic-landmark analysis, spectral analysis, wavelet transforms and feature selection); 4) decoding of motor commands with machine-learning classifiers; and 5) device control. A detailed block diagram of the developed system is shown in Appendix Fig. 1. A preliminary study demonstrated that the system achieves the performance required for real-time device control, based on decoding voluntary movements from a combination of residual movement recordings and motor imagery (registered in EEG).
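As an illustration, the feature-extraction and decoding stages of such a cycle can be sketched in Python. This is a minimal sketch under assumed parameters, not the project's actual pipeline: the sampling rate (250 Hz), the choice of EEG band powers (mu and beta), the simple IMU statistics, and the nearest-centroid classifier standing in for the machine-learning stage are all illustrative assumptions; the real system additionally uses PCA, wavelet transforms and feature selection.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz; not specified in the project description


def band_power(signal, fs, band):
    """Mean spectral power of a 1-D signal within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()


def extract_features(eeg, imu, fs=FS):
    """Concatenate EEG band powers with simple IMU kinematic features.

    eeg: (channels, samples) array; imu: (axes, samples) accelerometer/gyro array.
    """
    feats = []
    for ch in eeg:
        feats.append(band_power(ch, fs, (8, 12)))    # mu rhythm power
        feats.append(band_power(ch, fs, (13, 30)))   # beta rhythm power
    for axis in imu:
        feats.append(axis.std())                     # overall movement intensity
        feats.append(np.abs(np.diff(axis)).max())    # peak sample-to-sample change
    return np.array(feats)


class NearestCentroid:
    """Minimal stand-in for the machine-learning classifier stage."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each feature vector to the class with the nearest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In a real-time loop, each acquired window of EEG and IMU samples would pass through `extract_features` and the trained classifier's `predict`, and the resulting motor command would drive the controlled device.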