Multimodal Human Behavioral Informatics
Shrikanth (Shri) Narayanan (Signal Analysis and Interpretation Laboratory (SAIL), University of Southern California)



Human behavior expression and experience are inherently multimodal and characterized by vast individual and contextual heterogeneity. The confluence of sensing, communication, and computing is providing access to data, in diverse forms and modalities, that is enabling us to understand and model human behavior in ways that were unimaginable even a few years ago. No domain exemplifies these opportunities more than human health and well-being. Advances in behavioral signal processing can enable not only new possibilities for gathering data in a variety of settings, from laboratories and clinics to free-living conditions, but also computational models that advance evidence-driven theory and practice. Consider, for example, the domain of Autism, where crucial diagnostic information comes from manually analyzed audiovisual data of verbal and nonverbal behavior. Likewise, novel personalized metabolic health monitoring and obesity interventions are enabled by gathering, analyzing, and responding to behavioral data in free-living conditions.

This talk will describe our ongoing efforts on multimodal Behavioral Signal Processing: technology and algorithms for quantitatively and objectively understanding typical, atypical, and distressed human behavior, with a specific focus on communicative, affective, and social behavior. Using examples drawn from different domains, the talk will also illustrate Behavioral Informatics applications of these processing techniques that contribute to quantifying higher-level, often subjectively described, human behavior in a domain-sensitive fashion. In particular, we will draw on examples from our work on health domains such as Autism, Addiction, Family Studies, and Obesity to illustrate the challenges and opportunities for multimedia behavioral signal processing.

[Work supported by NIH, DARPA, ONR, and NSF.]


*Shrikanth (Shri) Narayanan* is the Andrew J. Viterbi Professor of Engineering at the University of Southern California (USC), where he holds appointments as Professor of Electrical Engineering, Computer Science, Linguistics, and Psychology. Prior to USC, he was with AT&T Bell Labs and AT&T Research. His research focuses on human-centered information processing and communication technologies. Shri Narayanan is an Editor for the Computer Speech and Language Journal and an Associate Editor for the IEEE Transactions on Multimedia, the IEEE Transactions on Affective Computing, and the Journal of the Acoustical Society of America. He is a Fellow of the Acoustical Society of America, the IEEE, and the American Association for the Advancement of Science (AAAS). He is a recipient of several awards, including Best Paper awards from the IEEE Signal Processing Society in 2005 (with Alex Potamianos) and in 2009 (with Chul Min Lee), and was a Distinguished Lecturer for the IEEE Signal Processing Society for 2010-11. He has published over 475 papers and has twelve granted U.S. patents.