Hi! I'm a Ph.D. student at the University of Southern California, where I work with Jesse Thomason on language grounding and human-robot interaction. I build robots that interact with their environment and with other humans, and I study how humans perceive their actions.

I was a visiting scholar at Cornell with Tapo Bhattacharjee, working on assistive robotics for people with mobility limitations. Before that, at UT Austin, I worked on human-robot interaction in the Building-Wide Intelligence project with Justin Hart and Peter Stone. I also completed an honors thesis with Chandrajit Bajaj.

I previously interned at Sandia National Labs in the Neural Exploration and Research Lab. Over three years, I had the pleasure of working with James Aimone, Craig Vineyard, and many others!

abrar.anwar [at] usc [dot] edu

news

Oct 2024 Gave a talk at the NVIDIA Jetson AI Lab on ReMEmbR
Sept 2024 My paper on contrast sets for language-guided robot evaluation got accepted to CoRL 2024. See you in Munich!
June 2024 I'll be presenting my paper on the role of context in multi-view language grounding at NAACL 2024 in Mexico City!
May 2024 Started my internship at NVIDIA working on foundation models for autonomous mobile robots with Prof. Joydeep Biswas and Dr. Yan Chang
Mar 2024 Glad to have helped Rajat and his team on robot-assisted inside-mouth bite transfer. Our work was nominated for best systems paper at HRI 2024!
Nov 2023 My work on efficient robot evaluation was accepted as an oral presentation at the CoRL 2023 LangRob Workshop!
Feb 2023 I was awarded a $100k fellowship as a Horatio Alger Graduate Scholar.
Dec 2022 I will be presenting my work on Human-Robot Commensality at CoRL 2022!
July 2022 My work on social bite timing was featured in New Scientist!
May 2022 I will be a Visiting Scholar at Cornell University with Tapo Bhattacharjee to work on robot-assisted dining in social settings

2021 and Earlier
June 2021 I will be presenting our work on gaze for social navigation at ICRA 2021
May 2021 I defended my honors thesis on deep reinforcement learning for mesh refinement and finally graduated from UT Austin
May 2021 I will be working with Tapo Bhattacharjee from Cornell this summer thanks to the Google exploreCSR program
May 2021 I will be joining the University of Southern California for my PhD to work on HRI and language grounding with Jesse Thomason
Jan 2021 I attended the AAAI-21 Undergraduate Consortium, and met so many wonderful people while presenting my research on spiking weight-agnostic neural networks
Jan 2021 I will be participating in Google Research's CS Research Mentorship program
July 2020 I (virtually) attended my first conference, ICONS, and presented a poster on spiking neural networks

research

2024

ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation
Abrar Anwar, John Welsh, Joydeep Biswas, Soha Pouya, Yan Chang
We introduce ReMEmbR, which builds a long-horizon spatio-temporal memory of what a robot observes as it navigates, then reasons over that memory to answer questions and guide navigation.

Preprint, 2024
arxiv code website blog twitter

Contrast Sets for Evaluating Language-Guided Robot Policies
Abrar Anwar*, Rohan Gupta*, Jesse Thomason
The space of language commands a robot can execute grows combinatorially with scene complexity, so exhaustively evaluating a robot over this domain is impractical and time-consuming. We instead introduce contrast sets for robot evaluation, which apply small perturbations to test instances, yielding good estimates of test-set performance with less experimenter effort.

CoRL, 2024
arxiv twitter

Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding
Chancharik Mitra*, Abrar Anwar*, Rodolfo Corona, Dan Klein, Trevor Darrell, Jesse Thomason
We present MAGiC, a model that selects an object referent from language meant to distinguish between two similar objects, reasoning jointly over both objects from multiple vantage points.

NAACL, 2024
arxiv

Generating Contextually-Relevant Navigation Instructions for Blind and Low Vision People
Zain Merchant, Abrar Anwar, Emily Wang, Souti Chattopadhyay, Jesse Thomason
Navigating unfamiliar environments presents significant challenges for blind and low-vision (BLV) individuals. We investigate how grounded instruction generation methods can provide contextually-relevant navigational guidance to BLV users.

RO-MAN 2024 Late Breaking Report (LBR), 2024
RO-MAN 2024 Interactive AI Workshop (Best Paper), 2024

arxiv

Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control
Rajat Jenamani, Daniel Stabile, Ziang Liu, Abrar Anwar, Katherine Dimitropoulou, Tapomayukh Bhattacharjee
We design a system that feeds people with disabilities via inside-mouth bite transfer, using real-time mouth perception and physical interaction-aware control.

HRI (Best Paper nominee), 2024
arxiv video website

2023

Exploring Strategies for Efficient VLN Evaluation
Abrar Anwar*, Rohan Gupta*, Elle Szabo, Jesse Thomason
Evaluation in the real world is often time-consuming and expensive, so we propose a targeted contrast set-based evaluation strategy to efficiently evaluate the linguistic and visual capabilities of an end-to-end VLN policy.

Workshop on Language and Robot Learning (LangRob) @ CoRL (Oral Presentation), 2023
paper

2022

Human-Robot Commensality: Bite Timing Prediction for Robot-Assisted Feeding in Groups
Jan Ondras*, Abrar Anwar*, Tong Wu*, Fanjun Bu, Malte Jung, Jorge Jose Ortiz, Tapomayukh Bhattacharjee
We develop data-driven models to predict when a robot should feed during social dining scenarios. We build a dataset of human-human commensality, develop novel models to learn social dynamics of when to feed, and conduct a human-robot commensality study.

CoRL, 2022
paper arxiv

2021

Deep Reinforcement Learning for Optimal Refinement of Cross-Sectional Mesh Sequence Finite Elements
Abrar Anwar
We develop the first deep reinforcement learning framework for mesh refinement, using soft actor-critic to produce high-quality surface reconstructions of cross-sectional contours.

Honors Thesis, 2021

Watch Where You're Going! Gaze and Head Orientation as Predictors for Social Robot Navigation
Blake Holman, Abrar Anwar, Akash Singh, Mauricio Tec, Justin Hart, Peter Stone
We leverage virtual reality to collect gaze and position data, and build a predictive model and a mixed-effects model showing that gaze orientation precedes other predictive features.

ICRA, 2021
paper video

2020

Evolving Spiking Circuit Motifs using Weight Agnostic Neural Networks
Abrar Anwar, Craig Vineyard, William Severa, Srideep Musuvathy, Suma Cardwell
We use an evolutionary, weight-agnostic method to generate spiking neural networks for classification, control, and various other tasks.

AAAI-21 Undergraduate Consortium, 2021
Computer Science Research Institute Summer Proceedings, 2020
International Conference on Neuromorphic Systems (poster), 2020

paper tech report poster

2019

BrainSLAM: Robust Autonomous Navigation in Sensor-Deprived Contexts
Felix Wang, James B. Aimone, Abrar Anwar, Srideep Musuvathy
We explore brain-inspired approaches to navigation and localization in a noisy, data-sparse environment for a hypersonic glide vehicle. Rotation-invariant feature representations increase accuracy and reduce map storage.

Sandia National Labs Technical Report, 2019
tech report


Design and source code from Jon Barron's website