Alane Suhr

/əˈleɪn ˈsuəɹ/

PhD Candidate
Computer Science
Cornell University
suhr@cs.cornell.edu



About

I am a final-year PhD candidate in Computer Science at Cornell University, based at Cornell Tech in New York, NY. My advisor is Yoav Artzi, and my research area is natural language processing. In 2016, I graduated from The Ohio State University with a BS in Computer Science and Engineering and a minor in Linguistics.

My research spans natural language processing, machine learning, and computer vision. I build systems that use language to interact with people, e.g., in collaborative interactions (like CerealBar). I design models and datasets that represent and address problems in language grounding (e.g., NLVR). I also develop learning algorithms for systems that learn language through interaction.

I will be on the academic job market this year! :)

Here are links to my application materials. All are PDFs.

CV research teaching DEI

*These materials are general, so they may differ slightly from what I submitted with each application.


News

9 Dec, 2021 Talk at Cornell CS Colloquium (Ithaca, NY)

17 Nov, 2021 Talk at NLP with Friends (virtual)

10 Nov, 2021 Tutorial at EMNLP: Crowdsourcing Beyond Annotation: Case Studies in Benchmark Data Collection (virtual)

26 Aug, 2021 New paper: Analysis of language change in collaborative instruction following (Effenberger et al.) will appear in Findings of EMNLP

5 Aug, 2021 New paper: Continual learning for grounded instruction generation by observing human following behavior (Kojima et al.) will appear in TACL

10 Jun, 2021 Workshop at NAACL: ViGIL: Visually Grounded Interaction and Language (virtual)

21 May, 2021 Talk at DeepMind's NLP reading group (virtual)


Publications

*, ** indicate equal contribution

Analysis of language change in collaborative instruction following.
Anna Effenberger, Rhia Singh*, Eva Yan*, Alane Suhr, and Yoav Artzi. To appear in Findings of EMNLP, 2021.
pdf code
Continual learning for grounded instruction generation by observing human following behavior.
Noriyuki Kojima, Alane Suhr, and Yoav Artzi. To appear in TACL, 2021.
pdf web
Exploring unexplored generalization challenges for cross-database semantic parsing.
Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. In ACL, 2020.
pdf code talk
Executing instructions in situated collaborative interactions.
Alane Suhr, Claudia Yan, Jack Schluger*, Stanley Yu*, Hadi Khader**, Marwa Mouallem**, Iris Zhang, and Yoav Artzi. In EMNLP, 2019.
pdf appendix code and data demo web
A corpus for reasoning about natural language grounded in photographs.
Alane Suhr*, Stephanie Zhou*, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. In ACL, 2019.
pdf appendix data slides poster web
Touchdown: Natural language navigation and spatial reasoning in visual street environments.
Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. In CVPR, 2019.
pdf appendix code
Learning to map context-dependent sentences to executable formal queries.
Alane Suhr, Srinivasan Iyer, and Yoav Artzi. In NAACL, 2018.
pdf appendix code talk slides
Outstanding Paper Award
A corpus of natural language for visual reasoning.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. In ACL, 2017.
pdf appendix data talk slides interview web
Best Resource Paper Award

Tutorials, workshop papers, etc.
Crowdsourcing beyond annotation: Case studies in benchmark data collection.
Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel R. Bowman, and Yoav Artzi. Tutorial to appear at EMNLP, 2021.
web
Neural semantic parsing.
Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, and Luke Zettlemoyer. Tutorial at ACL, 2018.
slides
Evaluating visual reasoning through grounded language understanding.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. In AI Magazine, 2018.
Visual reasoning with natural language.
Stephanie Zhou*, Alane Suhr*, and Yoav Artzi. In AAAI Fall Symposium on Natural Communication for Human-Robot Collaboration, 2017.
pdf