Workshop on Interactive Learning for Natural Language Processing

Interactive machine learning (IML) studies algorithms that learn from data collected through interaction with a computational agent or human in a shared environment, typically via feedback on model decisions. In contrast to the standard supervised learning paradigm, IML does not assume access to pre-collected labeled data, thereby reducing data costs.
Although most downstream applications of NLP involve interaction with humans (e.g., via labels, demonstrations, corrections, or evaluations), common NLP models are not built to learn from or adapt to users through interaction. A large research gap must be closed before NLP systems can adapt on the fly, through interaction, to the changing needs of humans and dynamic environments.


We leverage the foundation built in prior workshops to continue growing the community of researchers whose long-term goal is to develop NLP models that learn from interaction with humans and the world. This workshop aims to bring together researchers to:

  1. Create a forum for the exchange and synthesis of recent findings and forward-looking ideas in IML, with a focus on NLP.
  2. Broaden the community of researchers working on IML, both generally and in the context of NLP, through showcasing top-tier research and discussions with experts in the field.
  3. Surface key challenges and problems in IML motivated by examples in NLP, including evaluation methodology, experimental scenarios, and algorithms.
  4. Deepen the focus of the community on human and societal aspects of IML, both generally and for applications in NLP.

We aim to bring together researchers to share insights on interactive learning from a wide range of NLP-related fields, including, but not limited to, dialogue systems, question answering, summarization, and educational applications. As an emerging sub-field across the NLP and ML communities, a workshop provides the ideal focus and audience size for a vibrant exchange of ideas to help grow the community of interest.

We encourage submissions investigating various dimensions of interactive learning, such as (but not restricted to):

  • Interactive machine learning methods: the wide range of topics discussed above, from active learning with a user in the loop to methods that extract, interpret, and aggregate user feedback or preferences from complex interactions, such as natural language instructions.
  • User effort: the amount of user effort required for different types of feedback. Explicit labels demand more effort than feedback inferred from user behavior (e.g., clicks, view time); also of interest is how users cope when the system misinterprets their instructions.
  • Feedback types: different types of feedback require different techniques for incorporating them into a model. For example, explicit labels can be used directly for training, whereas natural language instructions must first be interpreted.
  • Evaluation methods: approaches for assessing interactive methods, such as low-effort, easily reproducible studies with real users, and simulated user models for automated evaluation.
  • Reproducibility: procedures for documenting user evaluations and ensuring they are reproducible.
  • Data: novel datasets for training and evaluating interactive models.
  • Applications: empirical results for applications of interactive methods.
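To make the "simulated user models for automated evaluation" idea concrete, here is a minimal, hypothetical sketch (all names and numbers are illustrative, not from any particular system): a simulated user gives binary accept/reject feedback on a model's chosen response, standing in for implicit signals such as clicks or view time, and the learner updates its response scores online rather than training on pre-collected labels.

```python
import random

random.seed(0)

# Hypothetical toy setup: two intents, two canned responses each, and a
# simulated user with a fixed preferred response per intent.
INTENTS = ["greet", "farewell"]
RESPONSES = {"greet": ["hi", "bye"], "farewell": ["hi", "bye"]}
TRUE_PREF = {"greet": "hi", "farewell": "bye"}  # the simulated user's preferences

# Per-(intent, response) scores, updated online from interaction feedback.
scores = {(i, r): 0.0 for i in INTENTS for r in RESPONSES[i]}

def choose(intent, epsilon=0.2):
    # Epsilon-greedy policy: usually exploit the best-scoring response,
    # occasionally explore a random one.
    if random.random() < epsilon:
        return random.choice(RESPONSES[intent])
    return max(RESPONSES[intent], key=lambda r: scores[(intent, r)])

def simulated_user_feedback(intent, response):
    # Binary accept/reject signal from the simulated user.
    return 1.0 if response == TRUE_PREF[intent] else -1.0

# Interaction loop: no labeled dataset, only feedback on model decisions.
for _ in range(500):
    intent = random.choice(INTENTS)
    response = choose(intent)
    scores[(intent, response)] += 0.1 * simulated_user_feedback(intent, response)

# Automated evaluation: compare the learner's greedy policy to the
# simulated user's preferences.
learned = {i: max(RESPONSES[i], key=lambda r: scores[(i, r)]) for i in INTENTS}
print(learned)
```

Because the simulated user is deterministic and cheap to query, such a loop can be rerun with a fixed seed, which is one way to make evaluations of interactive methods reproducible before running costlier studies with real users.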