Ghada Zamzmi

Research Fellow at the National Institutes of Health (NIH)



STATS:

AGE:

33


NATIONALITY:

Saudi Arabia


EDUCATION:

Research Fellow at the National Institutes of Health (NIH)



Ghada is a researcher interested in using her expertise to design and develop intelligent solutions to real-world problems that affect the most vulnerable among us. Her research centers on computer vision, machine/deep learning, mathematical modeling, and affective/cognitive computing.

Her invention is an intelligent system that provides continuous monitoring and assessment of neonatal pain to prevent delayed intervention and mitigate the inconsistency that results from caregivers’ subjectivity. The system takes behavioral signals (facial expression, sounds, body/head movements) and physiological signals as input, analyzes them to detect pain and its intensity, and sends an alert whenever a neonate experiences pain. Video and audio are recorded with a single RGB camera, while the physiological signals (e.g., body temperature, heart rate, respiratory patterns) are recorded with sensors attached to the neonate’s body. The system can also estimate these physiological signals from analysis of the RGB channels and trunk-wall motion.

The first stage of the system pre-processes the signals to remove noise and detect the regions of interest (ROIs). She modified a spatial transformer network (STN) and used it for localization and alignment. This STN was placed in a customized 3D neonatal convolutional neural network (3D N-CNN) with three branches: video, audio, and physiological. The multimodal network extracts spatio-temporal features from the video and audio branches; the output of both branches is then fed to deep belief networks (DBNs) for pain classification, and a DBN likewise takes the signals from the physiological branch as input to generate an output label. To provide context-sensitive assessment, she added different kinds of contextual information (e.g., medication dose) as coded labels provided by doctors.
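The data flow through a branch of this pipeline can be sketched as follows. This is a toy illustration only: the feature extractor and classifier below are simple stand-in functions, not the actual 3D N-CNN or deep belief networks, and all input values are made up.

```python
# Toy data-flow sketch of one branch: clip -> spatio-temporal features -> label.
# The extractor and "DBN" are stubs standing in for the real networks.
def extract_spatiotemporal(frames):
    """Stub for a 3D-CNN branch: collapse a clip (frames x channels)
    to one feature per channel by averaging over time."""
    return [sum(channel) / len(channel) for channel in zip(*frames)]

def dbn_classify(features):
    """Stub classifier: threshold the mean feature (0 = no pain, 1 = pain)."""
    return 1 if sum(features) / len(features) > 0.5 else 0

# Toy "clips": two frames, two channels each (values are illustrative).
video_clip = [[0.9, 0.8], [0.7, 0.6]]
audio_clip = [[0.2, 0.1], [0.3, 0.2]]

video_label = dbn_classify(extract_spatiotemporal(video_clip))  # pain
audio_label = dbn_classify(extract_spatiotemporal(audio_clip))  # no pain
```

In the real system each branch produces its own decision or score, which is what makes the score-level fusion of the branches possible.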
Finally, she performed score fusion to combine the contextual information with the outputs of the network’s branches and generate the final label for a given sequence/episode. To provide prediction, the proposed network was trained to learn a function that maps a sequence of past observations to an output observation.
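These two final steps can be sketched in a few lines. The branch weights, context prior, and signal values below are illustrative assumptions, not values from the actual system; score fusion is shown as a weighted average of per-branch class probabilities, modulated by a context prior derived from the coded labels.

```python
# Sketch of (1) score-level fusion of branch outputs with context and
# (2) the sliding-window setup for mapping past observations to the next.
def fuse_scores(branch_probs, branch_weights, context_prior=None):
    """Weighted average of per-branch class probabilities; an optional
    context prior (from coded contextual labels) re-weights the result."""
    n_classes = len(branch_probs[0])
    total = sum(branch_weights)
    fused = [sum(w * p[c] for w, p in zip(branch_weights, branch_probs)) / total
             for c in range(n_classes)]
    if context_prior is not None:
        fused = [f * cp for f, cp in zip(fused, context_prior)]
        s = sum(fused)
        fused = [f / s for f in fused]
    return fused

# [no-pain, pain] probabilities from the video, audio, physiological branches
fused = fuse_scores([[0.30, 0.70], [0.40, 0.60], [0.20, 0.80]],
                    [0.4, 0.3, 0.3])
final_label = max(range(len(fused)), key=fused.__getitem__)  # 1 = pain

def make_windows(signal, k):
    """Pair each window of k past observations with the next observation,
    yielding (input, target) pairs for sequence-to-one prediction."""
    return [(signal[i:i + k], signal[i + k]) for i in range(len(signal) - k)]

# e.g. toy heart-rate samples: learn to map 3 past values to the next one
pairs = make_windows([120, 122, 125, 130, 128, 126], k=3)
# pairs[0] == ([120, 122, 125], 130)
```

The `make_windows` helper shows only how the training pairs are formed; in the described system the learned mapping itself is the trained multimodal network.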