Research

My research explores how humans and technology interact through sound, neurophysiological signals, and adaptive AI. By combining expertise in auditory interfaces, brain-computer interfaces (BCIs), affective computing, and adaptive learning, I develop novel ways to enhance accessibility, emotion-driven interaction, and intelligence amplification. My work spans wearable devices, mixed reality, and AI-driven learning systems, with applications in health, education, and human-computer interaction (HCI).

1. Auditory Intelligence: Sonification, Perceptualization, and Data-Driven Sound Design – Using sound to represent data, enhance accessibility, and improve human-computer interaction.

2. Multimodal Interaction and Neuroadaptive Interfaces in Mixed Reality – Developing hands-free, voice-free, and brain-driven interfaces for AR/VR and wearable technologies.

3. Affective Computing and Humanized AI: Emotion, Empathy, and Social Connection – Exploring emotionally intelligent AI, biometric-driven empathy, and affective interfaces.

4. Adaptive Learning, Cognitive Augmentation, and Intelligence Amplification – Creating neuroadaptive, AI-driven learning systems to personalize and enhance human skill acquisition.

Each of these areas represents my commitment to building more intuitive, adaptive, and human-centered technology.



Auditory Intelligence: Sonification, Perceptualization, and Data-Driven Sound Design

Sound is more than just a sensory experience—it is a powerful tool for making information more intuitive, accessible, and emotionally engaging. My research in sonification and auditory intelligence transforms complex data into sound, making it perceptible in new and meaningful ways. From representing astronomical data in Solar System Sonification to improving melanoma diagnosis through AI-driven auditory feedback, I explore how auditory displays can extend human perception.
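At its core, much of this work rests on parameter-mapping sonification: each data value is mapped onto an audible parameter such as pitch, so that trends in the data become audible contours. The sketch below illustrates the idea in a minimal, self-contained form; the function names, frequency range, and tone settings are illustrative choices, not code from any of the projects mentioned here.

```python
import math

def map_to_pitch(values, f_min=220.0, f_max=880.0):
    """Linearly map each data value onto a frequency range (Hz).

    Classic parameter-mapping sonification: low values become low
    pitches, high values become high pitches.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

def tone(freq, duration=0.25, rate=44100):
    """Render one sine tone as a list of audio samples."""
    n = int(duration * rate)
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# Example: sonify a small data series as a short rising/falling melody.
data = [1.0, 3.0, 2.0, 5.0, 4.0]
freqs = map_to_pitch(data)
audio = [s for f in freqs for s in tone(f)]
```

The resulting sample buffer could be written to a WAV file or streamed to a sound card; the essential step is the mapping itself, which turns a numeric series into a perceivable auditory shape.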

Beyond scientific applications, I develop auditory interfaces that enhance accessibility and human-computer interaction (HCI). My work on Sonification of Facebook Reactions redefines how we engage with social media by translating digital interactions into sound. Similarly, my research on Expressive Gesture Sonification enables motion and performance analysis through audio feedback, fostering a deeper understanding of movement and expression.

Auditory intelligence also plays a crucial role in real-world decision-making and safety-critical environments. In the automotive sector, I have contributed to research on how auditory displays in Highly Automated Vehicles improve driver awareness in self-driving systems, using sound to convey critical situational information. By advancing AI-powered sonification and auditory interfaces, I aim to create more perceptive, accessible, and emotionally rich interactions between humans and technology.


Multimodal Interaction and Neuroadaptive Interfaces in Mixed Reality

As we move toward a future of wearable and immersive computing, traditional input methods—keyboards, touchscreens, and voice—are not always viable. My research focuses on developing neuroadaptive, hands-free, and multimodal interfaces that enable seamless interaction in Mixed Reality (MR) and wearable devices.

One of my core areas of exploration is Brain-Computer Interfaces (BCIs) for real-world applications. I have worked on integrating EEG-based adaptive interfaces into AR/VR headsets, allowing brain signals to control AI and interaction models. Additionally, my research in soft, wearable EEG sensors has helped bridge the gap between neuroscience and consumer technology, making neural interfaces more practical and comfortable.

Beyond BCIs, I develop multimodal interaction systems that integrate biometric and motion-based control. My work on tongue-gesture and silent speech recognition has introduced new ways to interact with head-worn devices without relying on voice or hand gestures. These innovations are particularly useful for accessibility, privacy, and enhanced user experience in Mixed Reality, paving the way for a more intuitive and inclusive computing future.



Affective Computing and Humanized AI: Emotion, Empathy, and Social Connection

Technology is at its most powerful when it understands and responds to human emotions. My work in affective computing and humanized AI explores how biometric signals, sonification, and AI-driven interfaces can foster deeper emotional connections between humans and machines.

One of my key research contributions has been exploring how sound and physiological signals influence empathy. Studies such as Hearing Heartbeats to Induce Empathy demonstrate how real-time biometric audio feedback can heighten emotional awareness and connection. Similarly, my work on Brainwave Synchronization in Dyads investigates how neurophysiological alignment can enhance human communication and relationships.
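The basic mechanism behind real-time heartbeat feedback is simple: a measured heart rate (in BPM) is converted to an inter-beat interval, and a short click or pulse is rendered at each beat onset. The sketch below shows this scheduling step under illustrative assumptions (sample rate, click length, and function names are hypothetical, not taken from the study):

```python
def beat_times(bpm, duration_s):
    """Onset times (seconds) of heartbeat clicks at a given rate."""
    interval = 60.0 / bpm  # seconds between beats
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 6))
        t += interval
    return times

def click_track(bpm, duration_s, rate=8000, click_len=40):
    """Silent buffer with a short unit-amplitude click at each beat."""
    buf = [0.0] * int(duration_s * rate)
    for t in beat_times(bpm, duration_s):
        start = int(t * rate)
        for i in range(start, min(start + click_len, len(buf))):
            buf[i] = 1.0
    return buf
```

In a live system, the BPM value would be updated continuously from a physiological sensor, so a listener hears another person's heartbeat as it changes, which is the cue the empathy studies build on.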

In addition to research on empathy, I have developed emotionally aware AI systems that make digital interactions more engaging and accessible. My work on the Sonification of Facebook Reactions allows users to perceive online interactions in a more intuitive and affective manner. I also explore Synthetic Biomusic, where AI generates real-time musical compositions based on physiological data, creating deeply personalized and emotionally resonant experiences. These advancements in affective computing and human-centered AI aim to make technology more emotionally intelligent and socially aware.



Adaptive Learning, Cognitive Augmentation, and Intelligence Amplification

The future of learning lies in adaptive, AI-driven educational systems that respond to individual needs. My research in cognitive augmentation and neuroadaptive learning interfaces focuses on using brain signals, physiological feedback, and sonification to enhance skill acquisition, retention, and accessibility.

One of my primary areas of study is neuroadaptive learning, where EEG-based adaptive systems personalize education by adjusting training based on cognitive state. My work on EEG-Based Adaptive Training in VR has demonstrated how physiological signals can optimize skill development, allowing for more efficient and individualized learning experiences.
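The closed loop in such systems can be summarized in a few lines: estimate a cognitive-state index from EEG band powers, then nudge task difficulty to keep that index inside a target band. The sketch below uses the textbook engagement index beta / (alpha + theta); the thresholds, difficulty scale, and function names are illustrative assumptions, not the pipeline from any specific study.

```python
def engagement_index(theta, alpha, beta):
    """Classic EEG engagement index: beta / (alpha + theta).

    In practice the band powers would come from a spectral estimate
    over a short EEG window; here they are passed in directly.
    """
    return beta / (alpha + theta)

def adapt_difficulty(level, index, low=0.4, high=0.7, lo_lvl=1, hi_lvl=10):
    """One closed-loop step: adjust difficulty from the engagement index.

    Thresholds are illustrative; real systems calibrate them per user.
    """
    if index > high:
        level += 1   # learner has headroom: make the task harder
    elif index < low:
        level -= 1   # learner is disengaged or overloaded: ease off
    return max(lo_lvl, min(hi_lvl, level))  # clamp to the valid range
```

Running this update once per analysis window yields a training task whose difficulty tracks the learner's cognitive state rather than a fixed curriculum.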

Sonification also plays a critical role in education. My Solar System Sonification project has brought auditory-based learning into science education, making abstract concepts more tangible for diverse learners. Similarly, my work on Expressive Gesture Sonification helps musicians and performers enhance their technique through real-time auditory feedback. By combining AI, neuroscience, and adaptive technology, I aim to create learning systems that amplify human intelligence, enhance accessibility, and reshape the future of education.