Arunkumar Byravan

Master’s Student in the MEAM Dept (Graduated in May 2011)

Contact: barun@alumni.upenn.edu, arun.sriram2008@gmail.com

I was a member of the Kodlab from May '10 to Sep '11, where I worked primarily on vision-based attitude estimation for dynamic mobile robots. Currently, I am an Associate Manager at Procter & Gamble India, based in Baddi. Prior to Penn, I did my bachelor's in mechanical engineering at Anna University, Chennai.

A copy of my resume can be found here.

Research Interests:

My general interests are in enhancing the capability of robots to act autonomously in dynamic environments, focusing specifically on perception, navigation, and control. Other interests in computer vision include structure from motion, 3D reconstruction, and camera calibration.


A Robust Visual Compass for Highly Dynamic Legged Robots
The objective of this project was to develop a system for vision-based tracking of the orientation of RHex. First, an algorithm for detecting straight lines in fisheye images was developed using the unit-sphere representation. From these detected lines, an EM algorithm estimated the vanishing points. Matching vanishing points across frames, a Boolean belief model accumulated evidence on the strength of each vanishing direction. The McKenzie Wahba estimator then computed orientation from the matches with strong belief, and a Kalman filter tracked it. Online testing of the system indoors yielded RMS errors of 2–4 degrees in yaw, pitch and roll at frame rates of 14 Hz. A sample result from a test run in a VICON bay can be seen here, where the red, green & blue arrows are the vanishing points with high belief tracked across multiple frames, & here, where red is the VICON ground truth & blue is the estimate. Tests conducted outdoors and in environments with ramps and loops also produced good results.
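The orientation-from-matches step above is an instance of Wahba's problem: finding the rotation that best aligns matched unit vectors (here, vanishing directions) between two frames. The exact estimator used on RHex isn't reproduced on this page; the following is a minimal sketch of the standard SVD solution, with illustrative example vectors.

```python
import numpy as np

def wahba_rotation(body_dirs, world_dirs, weights=None):
    """Solve Wahba's problem: find R minimizing sum_i w_i ||world_i - R @ body_i||^2.

    body_dirs, world_dirs: (N, 3) arrays of matched unit vectors, e.g.
    vanishing directions observed in the camera frame vs. a fixed frame.
    """
    if weights is None:
        weights = np.ones(len(body_dirs))
    # Attitude profile matrix B = sum_i w_i * world_i @ body_i^T
    B = (weights[:, None] * world_dirs).T @ body_dirs
    U, _, Vt = np.linalg.svd(B)
    # Enforce det(R) = +1 so the result is a proper rotation, not a reflection
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt
```

With noise-free matches the recovered rotation is exact; with noisy vanishing directions it is the weighted least-squares optimum, which is why it pairs naturally with a belief-weighted set of matches.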
Fisheye Camera Calibration
In this project, I proposed a new model for representing fisheye lens projection. Incorporating this model into a calibration routine using checkerboard images, I successfully recovered the camera parameters of an OmniTech Robotics ORIFL-190 lens with a 190-degree FOV, achieving re-projection errors of 0.3 pixels in a 640 × 480 image. A presentation detailing the calibration model can be found here.
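The proposed model itself is in the linked presentation; for illustration, here is a sketch of the classical equidistant fisheye model that such calibrations typically start from (the focal length and principal point below are hypothetical values, not the ORIFL-190's calibrated parameters).

```python
import numpy as np

def project_equidistant(points_cam, f, cx, cy):
    """Project 3-D camera-frame points with the equidistant fisheye model r = f * theta.

    theta is the angle between a ray and the optical axis; unlike the pinhole
    model (r = f * tan(theta)), this stays finite out to and beyond 90 degrees,
    which is what makes a 190-degree FOV representable at all.
    """
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * theta                           # equidistant radial mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
```

A calibration routine then adjusts the model parameters to minimize the distance between such projected checkerboard corners and their detected pixel locations, which is the re-projection error quoted above.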
A turn-in-place gait for the RHex robot
This was my first project at the Kodlab, in Summer 2010. I worked on a parametrized gait to enable RHex to turn efficiently and accurately. Using feedback from a VICON system, I tuned the parameters with the Nelder-Mead algorithm, using an objective function based on power usage and turn velocity. A video of the maneuver obtained with the best tuned parameters can be found here. The gait parameters can also be modified for accurate turns ranging from 5 to 70 degrees.
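The tuning loop above can be sketched with SciPy's Nelder-Mead implementation. The real objective was evaluated by running the gait on the robot under VICON; the gait parameters, weights, and the quadratic surrogate below are purely illustrative stand-ins for that experiment.

```python
import numpy as np
from scipy.optimize import minimize

def gait_cost(params):
    """Toy surrogate for the experimental objective: each evaluation of the
    real system ran the turning gait and scored power draw against turn
    velocity. Parameter names and weights here are assumptions."""
    sweep_angle, leg_offset = params
    power = 2.0 * (sweep_angle - 1.2) ** 2 + (leg_offset - 0.4) ** 2 + 1.0
    turn_rate = 1.5 - 0.5 * (sweep_angle - 1.2) ** 2
    return power - 2.0 * turn_rate   # lower is better

# Derivative-free search: Nelder-Mead only needs cost evaluations, which is
# why it suits objectives measured on hardware rather than in closed form.
result = minimize(gait_cost, x0=[0.8, 0.1], method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-4})
```

Because Nelder-Mead never needs gradients, each simplex step maps directly to a small batch of physical trial runs, at the cost of slower convergence than gradient-based methods would give on a smooth analytic objective.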
SLAM with Kinect
This project was done in two stages. First, a quaternion-based Unscented Kalman Filter (based on this paper) was implemented to track the robot's pitch and roll. Using a horizontal LIDAR, an IMU & wheel encoders, a grid-based map of the environment was generated & ramps in the environment were detected. Videos of the results can be seen here & here. In the second stage, RGB-D SLAM was implemented using a Kinect mounted on the robot. The ground plane was detected from the Kinect data and its color was projected onto the grid map. Results of the floor mapping can be seen here & here. A report on the system can be found here.
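The ground-plane detection step can be sketched as a RANSAC plane fit on the Kinect point cloud. The report linked above has the actual pipeline; this is a minimal, self-contained version with assumed iteration and tolerance values.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, tol=0.02, rng=None):
    """Fit the dominant plane in a point cloud with RANSAC.

    Returns (unit normal n, offset d) such that n . p = d for in-plane
    points p. Only the plane-fitting step is sketched here; the full system
    also projected the plane's color onto the grid map.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_n, best_d = None, None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (near-collinear) sample
            continue
        n = n / norm
        d = n @ sample[0]
        inliers = np.abs(points @ n - d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_n, best_d = inliers, n, d
    if best_n[2] < 0:                # orient the normal upward by convention
        best_n, best_d = -best_n, -best_d
    return best_n, best_d
```

Because the floor dominates a downward-looking Kinect frame, the largest consensus plane is almost always the ground, and the pitch/roll estimate from the UKF can be used to sanity-check the recovered normal.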
Learning planning costs
This project aimed to develop an imitation-based planner that learns to plan optimally on satellite imagery from reference plans given to it. Using the concepts of Maximum Margin Planning & Learning to Search, the planner (based on this paper & this paper) was implemented and tested on aerial images of the Penn campus. A report on the supervised-learning planner, along with results, can be found here.
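The core of Maximum Margin Planning is a subgradient update that lowers the learned cost of the demonstrated path and raises the cost of the best loss-augmented competitor. The sketch below is a toy version that enumerates candidate paths; the actual planner searches a grid with an optimal planner instead, but the update rule is the same in structure.

```python
import numpy as np

def learn_costs(candidates, demo_idx, losses, steps=50, lr=0.5):
    """Maximum-margin cost learning over an enumerated set of candidate paths.

    candidates: (P, F) feature counts per path; demo_idx: index of the
    demonstrated path; losses: per-path loss (0 for the demonstration).
    A toy sketch with illustrative step size; not the paper's full algorithm.
    """
    w = np.ones(candidates.shape[1])
    for _ in range(steps):
        # Loss-augmented planning: find the most "dangerous" competing path,
        # i.e. one that is both cheap under w and very different from the demo
        aug_costs = candidates @ w - losses
        p_star = int(np.argmin(aug_costs))
        if p_star == demo_idx:
            break                      # demo beats every loss-augmented rival
        # Subgradient step: make demo features cheaper, rival features dearer
        w = w - lr * (candidates[demo_idx] - candidates[p_star])
        w = np.maximum(w, 0.0)         # keep costs non-negative
    return w
```

After training, planning with the learned weights `w` reproduces the demonstrated behavior on the training terrain and, if the features generalize, imitates it on unseen imagery.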
Planner for robots with complex body structure
In this project, I worked on an efficient real-time planner for robots whose body shape is not uniform over their height. Collision-free paths were generated by projecting the 3D environment into discrete elevations and checking for collisions in those planes. The planner was implemented in ROS and its performance was compared to that of existing methods. A report detailing the planner can be found here. A test of ROS's existing navigation function in a constrained environment can be found here. Using the proposed planner, the robot successfully backs out without hitting the obstacle (click here for the video).
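The per-elevation collision check can be sketched as follows. The real planner uses the robot's actual footprint polygon at each elevation; this sketch approximates the body as a stack of discs to stay short, and the band heights and radii are illustrative.

```python
import numpy as np

def collides(robot_xy, obstacle_pts, bands):
    """Check a height-varying robot body against 3-D obstacle points.

    bands: list of (z_min, z_max, radius) slices approximating the body as a
    stack of discs. Testing each obstacle only against the slice at its own
    height avoids the false positives that a single full-height footprint
    (e.g. the widest radius applied everywhere) would report for overhangs.
    """
    dx = obstacle_pts[:, :2] - np.asarray(robot_xy)
    dist = np.hypot(dx[:, 0], dx[:, 1])
    for z_min, z_max, radius in bands:
        in_band = (obstacle_pts[:, 2] >= z_min) & (obstacle_pts[:, 2] < z_max)
        if np.any(dist[in_band] <= radius):
            return True
    return False
```

This is what lets the planner reverse under an overhanging obstacle: the overhang falls in a band where the robot is narrow, so the pose is correctly judged collision-free even though a flat 2D footprint check would reject it.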
The RASC-AL Competition
RASC-AL (Revolutionary Aerospace Systems Concepts - Academic Linkage) is a NASA competition in which teams are challenged to build a tele-operated rover capable of traversing rough terrain to collect objects of interest. As part of Penn's team for the competition, I developed the control architecture, GUI, perception and localization modules for the robot. I also served as the pilot for the final run, where we placed joint third. The report on the RASC-AL robot can be found here, while a detailed report on a few subsystems can be seen here.


Copyright Kodlab, 2017