PhD Researcher · Old Dominion University
I build AI systems that understand human emotion — bridging physiological signals, multimodal data, and human-computer interaction to help people & machines communicate and thrive.
I am Lawrence Obiuwevwi — an engineer, researcher, and builder working at the intersection of AI and human experience. I am a third-year PhD student in the Department of Computer Science at Old Dominion University in Virginia.
I hold a background in Electrical Engineering from the University of Nigeria, Nsukka (UNN), where I graduated in the top 10 of my class. This hardware foundation informs how I approach software: with systems thinking, precision, and a deep respect for the physical world that computation must ultimately serve.
My research integrates multimodal data fusion, physiological signal analysis, eye tracking, and speech processing to build AI systems that understand and respond to human emotion — with a particular focus on helping differently-abled people communicate more effectively and live more independently.
Beyond research, I am the Co-Founder and sole developer of SolversBoard, an EdTech startup applying AI to personalized mathematics education.
Neuro-Inspired Research & Development in Systems Lab, ODU. Multimodal biosignal research and eye tracking.
Web Science and Digital Libraries group, Old Dominion University.
Virginia Institute of Modeling, Analysis & Simulation Center (VMASC). Event modeling and simulation.
O-RAN / 5G security research, real-time AI inference on FlexRIC testbeds.
Detecting and classifying human emotional states using physiological signals — GSR, EEG, eye gaze — to enable AI that genuinely understands users.
Combining speech, text, physiological, and behavioral data streams into unified representations for richer, more robust human-AI collaboration.
Designing and evaluating interfaces that adapt to user cognitive and emotional state — enabling more natural and effective interaction with AI systems.
Real-time AI inference on Open RAN testbeds using FlexRIC and xApps; interference detection for 5G-Radar coexistence scenarios.
Leveraging AI and physiological computing to help differently-abled individuals communicate more effectively and live more independently.
Building intelligent tutoring systems and adaptive learning platforms — applied through SolversBoard, an EdTech startup focused on mathematics.
Department of Electrical Engineering, University of Nigeria, Nsukka
Government Science College, Izom, Nigeria
Featured · Most Recent
📄 arXiv Preprint · 2026. Presents the Cognitive Prosthetic Multimodal System (CPMS) — a proof-of-concept integrating speech, GSR, and eye gaze for structured episodic capture and natural-language retrieval in knowledge-work settings.
Multimodal AI
L. Obiuwevwi et al. Real-time hypoglycemia detection using only inexpensive wearable sensors — a pathway toward accessible glucose monitoring in underserved communities.
L. Obiuwevwi et al. GSR-based hypoglycemia classification using an LSTM (achieving perfect recall) and XGBoost on the OhioT1DM dataset. 3 citations.
K. Thennakoon, L. Obiuwevwi et al. ACM/IEEE JCDL 2025. Eye-tracking analysis of cognitive load across different reading devices.
Co-Founder & sole developer. AI tutor (Meg), digital whiteboard, XP/streaks gamification, two-step AI grading pipeline. Live at solversboard.com.
I am open to research collaborations at the cutting edge of AI, emotion detection, modeling & simulation, and hybrid intelligence. The capabilities are limitless — let's build something that matters.
Collaboration Areas