Social Navigation

This is a simple simulation environment I built. The circles are pedestrians whose trajectories are taken from the ETH Pedestrian Dataset, and the goal is to navigate the space efficiently and without collisions. The robot is the black star; it tries to stay as close as possible to the green path while avoiding the human obstacles. In the clip shown here, the robot is just taking random actions. I built the environment on top of OpenAI Gym, which makes it easy to interface with for deep RL experiments, currently in the works.
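To make the setup concrete, here is a minimal sketch of what a Gym-style crowd-navigation environment looks like. It mirrors the classic Gym reset/step API without importing gym itself, and the state layout, reward terms, and class name are illustrative assumptions, not the actual environment:

```python
import random

class CrowdNavEnv:
    """Sketch of a Gym-style crowd-navigation environment (assumed names).

    Follows the OpenAI Gym reset()/step() convention. Pedestrian positions
    would normally be played back from the ETH dataset; random placeholders
    are used here.
    """

    GOAL = (10.0, 0.0)

    def __init__(self, n_pedestrians=3):
        self.n_pedestrians = n_pedestrians
        self.reset()

    def reset(self):
        self.robot = [0.0, 0.0]
        # Placeholder pedestrians; a real env replays dataset trajectories.
        self.peds = [[random.uniform(0, 10), random.uniform(-3, 3)]
                     for _ in range(self.n_pedestrians)]
        return self._obs()

    def step(self, action):
        dx, dy = action
        self.robot[0] += dx
        self.robot[1] += dy
        # Illustrative reward: progress toward goal minus a collision penalty.
        dist = ((self.robot[0] - self.GOAL[0]) ** 2 +
                (self.robot[1] - self.GOAL[1]) ** 2) ** 0.5
        collided = any((self.robot[0] - px) ** 2 +
                       (self.robot[1] - py) ** 2 < 0.25
                       for px, py in self.peds)
        reward = -dist - (10.0 if collided else 0.0)
        done = dist < 0.5
        return self._obs(), reward, done, {}

    def _obs(self):
        return [tuple(self.robot)] + [tuple(p) for p in self.peds]


# A random-action rollout, as in the video above.
env = CrowdNavEnv()
obs = env.reset()
for _ in range(5):
    action = (random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
    obs, reward, done, info = env.step(action)
```

Because the environment exposes the standard step/reset interface, any Gym-compatible deep RL library can be dropped on top of it with no glue code.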

I spent a portion of this past summer implementing RRTX, an asymptotically optimal, sampling-based planning and replanning algorithm. It builds a dense graph through the state space, finds the minimum-cost path to the goal, and rewires the graph as obstacles are discovered and as future obstacle positions are predicted. This is a simple real-time test of a robot updating its path when new obstacles are discovered.
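The replanning idea can be illustrated with a toy example. RRTX incrementally repairs only the affected part of its shortest-path tree when an edge is invalidated; the sketch below captures just the core behavior (edge removal followed by a cost-to-goal update) by recomputing with Dijkstra over a tiny hand-built graph, which is an assumption for clarity, not how RRTX actually performs the repair:

```python
import heapq

def cost_to_goal(graph, goal):
    """Dijkstra from the goal: cost-to-goal for every reachable node.

    graph maps node -> {neighbor: edge_cost}.
    """
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy undirected graph over sampled states (illustrative only).
graph = {
    "start": {"a": 1.0, "b": 2.5},
    "a": {"start": 1.0, "goal": 1.0},
    "b": {"start": 2.5, "goal": 1.0},
    "goal": {"a": 1.0, "b": 1.0},
}

cost = cost_to_goal(graph, "goal")
assert cost["start"] == 2.0  # cheap route runs through 'a'

# A newly discovered obstacle blocks the edge a--goal. RRTX would repair
# only the invalidated subtree; this sketch simply recomputes.
del graph["a"]["goal"]
del graph["goal"]["a"]
cost = cost_to_goal(graph, "goal")
assert cost["start"] == 3.5  # path now reroutes through 'b'
```

The payoff of the incremental repair in real RRTX is that replanning cost scales with the size of the invalidated region rather than with the whole graph, which is what makes real-time operation possible.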

About this research

If autonomous robots are to mesh into daily human life, socially aware and socially responsible behavior is a necessity. There has been significant research into human-robot interaction, but efficient and non-disruptive navigation through populated areas remains unsolved. My research focuses on developing path planning algorithms for traversing dense human crowds.

This work began with an investigation and extension of state-of-the-art planning and replanning algorithms for navigation in human crowds. I implemented RRTX, an asymptotically optimal dynamic replanning algorithm, and showed its effectiveness at avoiding humans given a predictive model of their motion. The work has since transitioned to using deep reinforcement learning to learn socially-aware path planning policies.

Toward a publishable result, we hope to learn a policy on top of RRTX so that we can replan quickly given predictions of human movement, and deviate from the planned path when cooperating with humans (influencing their behavior) would yield a more efficient route.
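One way to picture the combination is a policy that tracks the RRTX waypoint by default and learns when to deviate from it. The sketch below is purely hypothetical: the function names, the single learned gain theta, and the deviation rule are all stand-ins for a neural network policy trained with deep RL:

```python
def rrtx_waypoint(robot, path):
    """Nearest waypoint on a precomputed RRTX path (placeholder lookup)."""
    return min(path, key=lambda p: (p[0] - robot[0]) ** 2 +
                                   (p[1] - robot[1]) ** 2)

def policy_action(robot, waypoint, humans, theta):
    """Hypothetical learned policy: steer toward the RRTX waypoint, plus a
    learned lateral deviation that grows as humans get closer."""
    dx, dy = waypoint[0] - robot[0], waypoint[1] - robot[1]
    nearest = min(((h[0] - robot[0]) ** 2 +
                   (h[1] - robot[1]) ** 2) ** 0.5 for h in humans)
    # theta is a single learned gain here; the real policy would be a
    # network whose output trades off path-following against cooperation.
    deviation = theta / (nearest + 1e-6)
    return (dx, dy + deviation)

# Usage: far from humans the action tracks the path; nearby humans
# induce a sideways deviation.
robot = (0.0, 0.0)
wp = rrtx_waypoint(robot, [(1.0, 0.0), (2.0, 0.0)])
action = policy_action(robot, wp, humans=[(1.0, 1.0)], theta=0.5)
```

The appeal of this structure is that RRTX supplies a globally consistent, collision-aware backbone, so the learned component only has to handle the local, social part of the problem.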