Second Workshop on Interactive Learning for Natural Language Processing

Call for Papers

The 2nd Workshop on Interactive Learning for Natural Language Processing (InterNLP 2022) will be co-located with NeurIPS 2022 and will be held on December 3, 2022.

Important Dates (preliminary)

  • First Call for Workshop Papers: July 31, 2022
  • Second Call for Workshop Papers: August 30, 2022
  • Submission deadline: September 22, 2022
  • Notification of acceptance: October 14, 2022
  • Camera-ready papers due: TBD, 2022
  • Workshop: December 3, 2022

All deadlines are 11:59 pm UTC-12 (anywhere on Earth).

Motivation

As the impact of machine learning on all aspects of our lives continues to grow, the need for systems that learn through interaction with users and the world becomes more and more pressing. Unfortunately, much of the recent success of NLP relies on large datasets and extensive compute resources to train and fine-tune models, which then remain fixed. This leaves a research gap for systems that adapt to the changing needs of individual users or allow users to continually correct errors as they emerge. Learning from user interaction is crucial for tasks that require a high degree of personalization and for rapidly changing or complex, multi-step tasks where collecting and annotating large datasets is not feasible, but an informed user can provide guidance.

Topics of Interest

We define Interactive Learning for NLP as training, fine-tuning or otherwise adapting an NLP model to inputs from a human user or teacher. Relevant approaches range from active learning with a human in the loop to training with implicit user feedback (e.g., clicks), adapting dialogue systems to user utterances, and training with new forms of human input. Interactive learning contrasts with passive learning from datasets collected offline, with no human input during the training process.

We encourage submissions on the following topics, including but not limited to:

  • Interactive machine learning methods, theory and practice: from active learning with a user to methods that extract, interpret and aggregate user feedback or preferences from complex interactions, such as natural language instructions.
  • User effort: the amount of user effort required for different types of feedback and how users cope with the system misinterpreting instructions.
  • Different kinds of user feedback: beyond providing training labels, users may influence and interrogate NLP models in different ways, such as providing natural language instructions or implicit feedback through mouse clicks.
  • Evaluation methods: approaches to assessing interactive methods, such as low-effort, easily reproducible methods with real-world users and simulated user models for automated evaluation.
  • Reproducibility: procedures for documenting user evaluations and ensuring they are reproducible.
  • Data: novel datasets for training and evaluating interactive models.
  • Empirical results that investigate scenarios where interactive learning is effective.

Submission

Detailed submission information will follow soon.

Invited Speakers and Panelists (confirmed, in alphabetical order)

  • Anca Dragan (Associate Professor, University of California–Berkeley)
  • Raquel Fernandez (Professor, University of Amsterdam)
  • Karthik Narasimhan (Assistant Professor, Princeton University)
  • Daniel Weld (Professor, University of Washington and General Manager, Semantic Scholar)
  • Qian Yang (Assistant Professor, Cornell University)

Organizers (confirmed, in alphabetical order)

  • Yoav Artzi (Cornell University)
  • Kianté Brantley (Cornell University)
  • Soham Dan (University of Pennsylvania)
  • Ji-Ung Lee (Technical University of Darmstadt)
  • Khanh Nguyen (University of Maryland–College Park)
  • Edwin Simpson (University of Bristol)
  • Alane Suhr (Cornell University)