The rules of improvisation apply beautifully to life. Never say no, you have to be interested to be interesting, and your job is to support your partners.
I am fascinated by the role artificial intelligence (AI) can play to enable and augment human creativity. I practice a human-centered AI research methodology — applying a combination of design, human-computer interaction, and AI/machine learning methods — to explore questions about AI and creativity. My current research at Microsoft Research Cambridge focuses on understanding how AI can best support designers and developers with the creation of engaging game agents (non-player characters and bots) in real-world commercial games.
I received a Ph.D. in Computer Science (2019) from the Georgia Institute of Technology (Atlanta, USA) with the Expressive Machinery Lab. My dissertation investigated the effects of creative arc negotiation — a novel real-time decision-making paradigm for improvisation between people and computers — on player experience within VR games for improvisational theatre. I have previously studied the application of human-computer co-creativity in problems ranging from improvisational dance and pretend play to music recommendation. I received an M.S. in Computer Science (2013) from the Georgia Institute of Technology and a B.E. in Computer Science Engineering (2011) from the Manipal Institute of Technology (Manipal, India).
For a (relatively) quick overview of my research over (nearly) the last decade, you can watch this talk.
In my current role at Microsoft Research Cambridge, I am researching human-in-the-loop machine learning techniques, user workflows, and interaction design to best support commercial game developers and designers in creating engaging game agents.
The EarSketch Co-creative AI research project is exploring how autonomous agents can collaborate with students (potentially as peers) on the online Earsketch platform to increase student exploration of ideas, both musically and computationally.
Previously, researched, designed, and implemented a recommendation engine for musical samples in EarSketch to increase the diversity, novelty, and serendipity of students' sample usage.
The research produced two peer-reviewed publications.
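To illustrate the idea (this is a simplified sketch, not the EarSketch implementation; all names and numbers here are hypothetical): a hybrid recommender can blend a collaborative score with a content-based score for each sample, then boost samples the student has not yet used, nudging results toward diversity and novelty.

```python
def hybrid_scores(collab, content, used, alpha=0.5, novelty_boost=0.2):
    """Blend per-sample scores; boost samples the user hasn't tried.

    `collab` and `content` map sample name -> score in [0, 1];
    `used` is the set of samples the student has already placed.
    (Illustrative only; parameter names and weights are assumptions.)
    """
    scores = {}
    for s in collab:
        # Weighted combination of collaborative and content-based evidence.
        score = alpha * collab[s] + (1 - alpha) * content[s]
        # Favour unfamiliar samples to encourage exploration.
        if s not in used:
            score += novelty_boost
        scores[s] = score
    # Return sample names ranked from most to least recommended.
    return sorted(scores, key=scores.get, reverse=True)

collab = {"drum_a": 0.9, "bass_b": 0.6, "synth_c": 0.3}
content = {"drum_a": 0.9, "bass_b": 0.5, "synth_c": 0.9}
print(hybrid_scores(collab, content, used={"drum_a"}))
# -> ['drum_a', 'synth_c', 'bass_b']
```

The novelty boost is the key design choice for serendipity: without it, the ranking would simply repeat whatever is already popular with the student.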
The Robot Improv Circus is a virtual reality (VR) installation where non-expert users can play the Props game, i.e. improvise open-ended movement-based vignettes with a virtual character using abstract props.
The CARNIVAL (Creative Arc Negotiating Intelligent Virtual Agent pLatform) architecture enables the virtual character, improvising with the human in VR, to select actions along a given ‘creative arc’ over the course of the improvised performance, evolving the user’s experience over time.
Previously, researched, designed, and implemented the CARNIVAL intelligent agent architecture, which performs affordance-based deep neural action generation; improvisational reasoning using various reasoning strategies; real-time strategy selection to follow a given creative arc; and evaluation of agent and human creativity using computational models of novelty, unexpectedness, and quality.
Research results are summarised in this video. More detail can be found in my dissertation, available here.
The research was awarded a Creative Curricular Initiatives (CCI) grant, completed two invited installations, was featured on the cover of the Georgia Institute of Technology Annual Report 2018, and produced three peer-reviewed publications as well as my dissertation.
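One minimal way to picture creative arc negotiation (a deliberately simplified sketch under my own assumptions, not the CARNIVAL implementation): score each candidate action for novelty, and pick the one whose score best matches the arc's target value at the current point in the performance.

```python
def arc_target(t: float) -> float:
    """Hypothetical creative arc over normalised time t in [0, 1]:
    novelty ramps up, peaks mid-performance, then resolves."""
    return 4 * t * (1 - t)  # parabola peaking at t = 0.5

def select_action(candidates: dict, t: float) -> str:
    """Pick the candidate action whose (assumed precomputed) novelty
    score lies closest to the arc's target at time t."""
    target = arc_target(t)
    return min(candidates, key=lambda a: abs(candidates[a] - target))

# Hypothetical actions with precomputed novelty scores in [0, 1].
actions = {"mirror_user": 0.1, "vary_gesture": 0.5, "invent_prop_use": 0.9}
print(select_action(actions, 0.05))  # early: low target -> "mirror_user"
print(select_action(actions, 0.5))   # peak: high target -> "invent_prop_use"
```

The point of following an arc rather than always maximising novelty is pacing: the same agent produces safe, legible responses early on and riskier, more surprising ones at the climax.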
The TuneTable is a tangible computing museum installation for informal learning of computational thinking concepts using open-ended sample-based music composition.
Previously, researched, designed, and constructed two iterations of the interactive tabletop using computer vision.
Two iterations of TuneTable were installed for field experiments at the Museum of Science and Industry, Chicago, with peer-reviewed results pending.
LuminAI (formerly Viewpoints AI) was an installation that explored how to create a truly open-ended human-AI improvisational embodied interaction experience with minimal pre-authored content knowledge for the AI character.
The system used interactive learning techniques, reasoning strategies from human improvisers, and the Viewpoints framework from theatre and dance to learn, procedurally represent, and reason about movement, and to improvise contemporary movement and dance performances with non-expert users.
Previously, researched, designed, and implemented case-based and imitation learning methods to teach the agent movement improvisation through observation and interaction, as well as procedural reasoning about improvisational response generation within the Soar cognitive architecture.
The research was a Field Experiment grant finalist, successfully completed a collaboration with the T Lang Dance Company (Atlanta) as a hybrid improvised-choreographed dance performance piece called Post, was selected to the ACCelerate Festival at the Smithsonian Institution National Museum of American History, completed numerous international (and domestic) invited and peer-reviewed installations, was the winner of the Neukom Institute Turing Test in Creative Arts 2017: DanceX Prize, and produced six peer-reviewed publications.
The Computational Representations of Play project studied human cognition during pretend play and used the results to computationally model playful behavior in software agents and robots for playful task execution and pretend play with toys.
Previously, researched, designed, and implemented conceptual blending with objects for object-based pretense within a toy-based pretend play experience between human and robot (or embodied virtual agent).
The project included the design of a conceptual cognitive architecture called the Co-creative Cognitive Architecture (CoCoA) extending Soar with an emphasis on co-creativity and real-time improvisation.
The research produced three peer-reviewed publications.
The Digital Improv Project was part of an effort to understand improvisational cognition, enabling humans and agents to co-creatively perform improvisational theatre.
Researched computational reasoning about, and representation of, character status in improv theatre.
Researched, designed, and implemented a crowd-sourcing workflow for collecting data to construct cognitive scripts for improv theatre.
The Game Adaptive Intelligent Agents (GAIA) project was an agent design system that created agents containing knowledge-based models of their own behavior in order to localize the source of reasoning failures, adapt their reasoning to the current situation, and enable rapid agent prototyping.
Previously, researched new adaptations in the agent design domain (Tic Tac Toe) and implemented models for example game-playing agents through domain variations (Misère Tic Tac Toe and Drawbridge).
Jacob, M., Devlin, S., & Hofmann, K. (2020). “'It’s Unwieldy and It Takes a Lot of Time' — Challenges and Opportunities for Creating Agents in Commercial Games.” In the Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2020. Alberta, Canada. pdf
AIIDE 2020 Best Paper Award Winner.
Jacob, M. (2019). “Improvisational artificial intelligence for embodied co-creativity.” Doctoral dissertation, Georgia Institute of Technology 2019. Atlanta, USA. pdf
Long, D., Jacob, M., & Magerko, B. (2019). “Designing co-creative AI for public spaces.” In the Proceedings of the 12th Conference on Creativity and Cognition (C&C) 2019. San Diego, USA. pdf
Smith, J., Jacob, M., Freeman, J., Magerko, B., & Mcklin, T. (2019). “Combining Collaborative and Content Filtering in a Recommendation System for a Web-based DAW.” In the Proceedings of the 5th International Web Audio Conference (WAC) 2019. Trondheim, Norway. pdf
Smith, J., Weeks, D., Jacob, M., Freeman, J., & Magerko, B. (2019). “Towards a hybrid recommendation system for a sound library.” In IUI Workshops: Proceedings of the 2nd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC) 2019. Los Angeles, USA. pdf
Jacob, M., Chawla, P., Douglas, L., He, Z., Lee, J., Sawant, T., & Magerko, B. (2019). “Affordance-based generation of pretend object interaction variants for human-computer improvisational theater.” In the Proceedings of the 10th International Conference on Computational Creativity (ICCC) 2019, Charlotte, USA. pdf
Jacob, M. & Magerko, B. (2018). “Creative Arcs in Improvised Human Computer Embodied Performances.” In the Proceedings of the 1st Curiosity in Games Workshop at the International Conference on the Foundations of Digital Games (FDG) 2018. Malmö, Sweden. pdf
Jacob, M. (2017). “Towards Lifelong Interactive Learning for Open-ended Embodied Co-creative Narrative Improvisation.” In the Proceedings of the Doctoral Consortium at the Eighth International Conference on Computational Creativity (ICCC) 2017, Atlanta, USA. pdf
Long, D., Jacob, M., Davis, N., & Magerko, B. (2017). “Designing for Socially Interactive Systems.” In the Proceedings of the 11th Conference on Creativity and Cognition (C&C) 2017, Singapore. pdf
Jacob, M. (2017). “Towards Lifelong Interactive Learning for Open-ended Embodied Narrative Improvisation.” In the Proceedings of the Graduate Student Symposium at the 11th Conference on Creativity and Cognition (C&C) 2017, Singapore. pdf
Singh, K.Y., Davis, N., Hsiao, C.-P., Jacob, M., Patel, K., Magerko, B. (2016). “Recognizing Actions in Motion Trajectories using Deep Neural Networks.” In the Proceedings of the 12th Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2016, Burlingame, USA. pdf
Jacob, M. & Magerko, B. (2015). “Interaction-based Authoring for Scalable Co-creative Agents.” In the Proceedings of the 6th International Conference on Computational Creativity (ICCC) 2015, Provo, USA. pdf
Davis, N., Comerford, M., Jacob, M., Hsiao, C.-P., & Magerko, B. (2015). “An Enactive Characterization of Pretend Play.” In the Proceedings of the 10th ACM Conference on Creativity and Cognition (C&C) 2015. Glasgow, Scotland. pdf
Jacob, M. & Magerko, B. (2015). “Viewpoints AI.” In the Proceedings of the Artwork Exhibition at the 10th ACM Conference on Creativity and Cognition (C&C) 2015. Glasgow, Scotland. pdf
Magerko B., Permar, J., Jacob, M., Comerford, M., & Smith, J. (2014). “An Overview of Computational Co-creative Pretend Play with a Human.” In the Proceedings of the 1st Workshop on Playful Virtual Characters at the 14th Annual Conference on Intelligent Virtual Agents (IVA) 2014, Boston, USA. pdf
Jacob, M., Coisne, G., Gupta, A., Sysoev, I., Verma, G., & Magerko, B. (2013). “Viewpoints AI.” In the Proceedings of the 9th Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2013, Boston, USA. pdf
Jacob, M., Coisne, G., Gupta, A., Sysoev, I., Verma, G., & Magerko, B. (2013). “Viewpoints AI: Demonstration.” In the Proceedings of the 9th Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2013, Boston, USA. pdf
Jacob, M., Zook, A., & Magerko, B. (2013). “Viewpoints AI: Procedural Representation and Reasoning on Gesture Meaning.” In the Proceedings of the Digital Games Research Association (DiGRA) 2013, Atlanta, USA. pdf