Prof. James Pustejovsky delivered the opening keynote at COLING 2018, the 27th International Conference on Computational Linguistics, in Santa Fe, New Mexico, on August 21st, 2018. Titled “Visualizing Meaning: Modeling Communication through Multimodal Simulations,” the talk covered the need for situational grounding for deeper understanding in human-computer communication, demonstrated the use of voxemes, and showcased a multimodal human-avatar interaction using the latest version of VoxSim.
Although the talk was not recorded, it was live-tweeted by conference PC co-chair Emily M. Bender of the University of Washington here and by Dan Simonson here.
The slide deck is available here. It contains sound, animated GIFs, and links to videos, so we recommend viewing it in Adobe Acrobat Reader. Videos are external links accessible via the “link” button on the relevant slide.
The following publications are central to the VoxML framework as of March 2017:
Generating Simulations of Motion Events from Verbal Descriptions (Pustejovsky and Krishnaswamy, 2014) – *SEM 2014, co-located with COLING 2014, Dublin, Ireland
VoxML: A Visualization Modeling Language (Pustejovsky and Krishnaswamy, 2016) – LREC 2016, Portorož, Slovenia
Multimodal Semantic Simulations of Linguistically Underspecified Motion Events (Krishnaswamy and Pustejovsky, 2016) – Spatial Cognition 2016, Philadelphia, PA, USA
The Development of Multimodal Lexical Resources (Pustejovsky et al., 2016) – GramLex Workshop, COLING 2016, Osaka, Japan
VoxSim: A Visual Platform for Modeling Motion Language (Krishnaswamy and Pustejovsky, 2016) – COLING 2016, Osaka, Japan