Alane Suhr


*, ** indicate equal contribution.
Continual learning for instruction following from realtime feedback. Alane Suhr and Yoav Artzi.
We're afraid language models aren't modeling ambiguity. Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi.
Do embodied agents dream of pixelated sheep?: Embodied decision making using language guided world modelling. Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, and Roy Fox. In ICML.

To appear at the Reincarnating RL workshop at ICLR 2023.
Minding language models' (lack of) theory of mind: A plug-and-play multi-character belief tracker. Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. In ACL.
Abstract visual reasoning with tangram puzzles. Anya Ji, Noriyuki Kojima*, Noah J. Rush*, Alane Suhr*, Wai Keen Vong, Robert Hawkins, and Yoav Artzi. In EMNLP.
Best Long Paper Award
Analysis of language change in collaborative instruction following. Anna Effenberger, Rhia Singh*, Eva Yan*, Alane Suhr, and Yoav Artzi. In Findings of EMNLP.

Also appeared at SCiL 2022.
Crowdsourcing beyond annotation: Case studies in benchmark data collection. Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel R. Bowman, and Yoav Artzi. Tutorial presented at EMNLP.
Continual learning for grounded instruction generation by observing human following behavior. Noriyuki Kojima, Alane Suhr, and Yoav Artzi. In TACL. code
Exploring unexplored generalization challenges for cross-database semantic parsing. Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. In ACL. code
Executing instructions in situated collaborative interactions. Alane Suhr, Claudia Yan, Charlotte Schluger*, Stanley Yu*, Hadi Khader**, Marwa Mouallem**, Iris Zhang, and Yoav Artzi. In EMNLP. data
A corpus for reasoning about natural language grounded in photographs. Alane Suhr*, Stephanie Zhou*, Ally Zhang, Iris Zhang, Huajuan Bai, and Yoav Artzi. In ACL.

Also appeared at the 2017 AAAI Fall Symposium on Natural Communication for Human-Robot Collaboration.
Touchdown: Natural language navigation and spatial reasoning in visual street environments. Howard Chen*, Alane Suhr*, Dipendra Misra, Noah Snavely, and Yoav Artzi. In CVPR. code
Situated mapping of sequential instructions to actions with single-step reward observation. Alane Suhr and Yoav Artzi. In ACL. code
Neural semantic parsing. Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, and Luke Zettlemoyer. Tutorial presented at ACL.
Learning to map context-dependent sentences to executable formal queries. Alane Suhr, Srinivasan Iyer, and Yoav Artzi. In NAACL.
Outstanding Paper Award

A corpus of natural language for visual reasoning. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. In ACL.
Best Resource Paper Award

Featured in AI Magazine and NLP Highlights.