Second Workshop on Interactive Learning for Natural Language Processing

Workshop at NeurIPS 2022

Contact the organizers: internlp2022@googlegroups.com

The 2nd Workshop on Interactive Learning for Natural Language Processing (InterNLP 2022) will be co-located with NeurIPS 2022 and held on December 3, 2022.

Motivation

Interactive machine learning (IML) studies algorithms that learn from data collected through interaction with a computational or human agent in a shared environment, via feedback on model decisions. In contrast to the common paradigm of supervised learning, IML does not assume access to pre-collected labeled data, thereby reducing data costs; instead, systems improve over time as non-expert users provide feedback. IML has seen wide success in areas such as video games and recommendation systems.
Although most downstream applications of NLP involve interaction with humans (e.g., via labels, demonstrations, corrections, or evaluation), common NLP models are not built to learn from or adapt to users through interaction. There remains a large research gap that must be closed to enable NLP systems that adapt on the fly, through interaction, to the changing needs of humans and to dynamic environments.

Goals

We build on the foundation laid by the prior workshop, InterNLP 2021, to continue growing the community of researchers whose long-term goal is to develop NLP models that learn from interaction with humans and the world. This workshop aims to bring together researchers to:

  1. Create a forum for the exchange and synthesis of recent findings and forward-looking ideas in IML, with a focus on NLP.
  2. Broaden the community of researchers working on IML, both generally and in the context of NLP, by showcasing top-tier research and hosting discussions with experts in the field.
  3. Surface key challenges and problems in IML motivated by examples in NLP, including through discussion of evaluation, experimental scenarios, and algorithms.
  4. Deepen the focus of the community on human and societal aspects of IML, both generally and for applications in NLP.

Previous work has been split across different tracks, task-focused workshops (e.g., the Visually Grounded Interaction and Language (ViGIL) workshop at NAACL), and conference venues. This fragmentation has made it hard to disentangle applications from broadly applicable methodologies or to establish common evaluation practices. The NeurIPS 2021 workshop on Human-Centered AI (HCAI) indicates growing interest in interactive AI, but it is very broad in scope and does not focus on NLP or language-related interaction. We aim to bring together researchers to share insights on interactive learning from a wide range of NLP-related fields, including, but not limited to, dialogue systems, question answering, summarization, and educational applications. As interactive learning is an emerging sub-field across the NLP and ML communities, a workshop provides the ideal focus and audience size for a vibrant exchange of ideas that helps grow the community of interest.

We encourage submissions investigating various dimensions of interactive learning, such as (but not restricted to):

  • Interactive machine learning methods: the wide range of topics discussed above, from active learning with a user to methods that extract, interpret, and aggregate user feedback or preferences from complex interactions, such as natural language instructions.
  • User effort: the amount of user effort required by different types of feedback; for example, explicit labels demand more effort than feedback inferred from user interaction (e.g., clicks, view time); and how users cope when the system misinterprets their instructions.
  • Feedback types: different types of feedback require different techniques to incorporate them into a model; for example, explicit labels can be used to train a model directly, whereas user instructions must first be interpreted.
  • Evaluation methods: approaches to assessing interactive methods, such as low-effort, easily reproducible evaluations with real-world users, as well as simulated user models for automated evaluation.
  • Reproducibility: procedures for documenting user evaluations and ensuring they are reproducible.
  • Data: novel datasets for training and evaluating interactive models.
  • Applications: empirical results for applications of interactive methods.