This summer at Kodlab, I worked on writing a camera motion tracking program, constructing a virtual 3D model of the robot testing facility, and creating a GPS-guided gait optimization program for outdoor testing. The projects were challenging and educational; I learned a great deal about Edubot, Python programming, projective camera geometry, and image processing.
The objective of the robot motion tracking project was to track a moving robot with fixed cameras. I first attempted a background subtraction method with a continuously updated background frame. The program was tested on a video clip of Edubot climbing a flight of stairs indoors. Shadows and the robot's moving legs produced substantial errors in the estimated robot position, and an outdoor environment would only make the performance worse. I then switched to the Lucas-Kanade (LK) tracking method, which tracks corner features. The LK tracker indeed performed better, reducing the effect of moving legs and shadows. I improved the program further by enabling it to recover good features, i.e. the corners being tracked, when one is lost. Further improvement could come from adding a Kalman filter to the tracker.
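At its core, LK tracking finds the displacement of a small window between frames by solving a least-squares problem on the image gradients. The sketch below is a minimal NumPy illustration of that single step on a synthetic translated blob (the image, shift, and window here are illustrative, not the actual tracker, which used pyramidal LK over detected corners):

```python
import numpy as np

def lk_step(I1, I2):
    """One Lucas-Kanade step: for a small shift d, I1 - I2 ~ Ix*dx + Iy*dy,
    so solve [Ix Iy] d = (I1 - I2) in the least-squares sense."""
    Iy, Ix = np.gradient(I1)          # axis 0 = rows (y), axis 1 = cols (x)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = (I1 - I2).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                          # estimated (dx, dy)

# Synthetic test: a Gaussian blob translated by a known sub-pixel amount.
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((x - cx)**2 + (y - cy)**2) / 50.0)
I1 = blob(32.0, 32.0)
I2 = blob(32.2, 31.7)                 # blob moved by (+0.2, -0.3)

d = lk_step(I1, I2)
print(d)                              # close to [0.2, -0.3]
```

In practice the same least-squares step is applied per feature window and iterated over image pyramids; OpenCV wraps all of this in its pyramidal LK optical-flow routine.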
I constructed a simple camera projective model in MATLAB. The forward projection is described by the equation x = KR[I | -C]X, where x is the image point in pixel coordinates, K is the intrinsic camera parameter matrix, R is the camera rotation, C is the position of the camera center, and X is the point in the world coordinate frame. Conversely, the 3D position of an object in world coordinates can be calculated given its pixel positions in two or more cameras. This back-projection is described by the equation X(t) = P+x + tC, where P+ is the pseudo-inverse of the projection matrix P = KR[I | -C]; as t varies, X(t) traces the projective ray through the camera center and the point of interest in the world coordinate frame. Because of ever-present instrumental errors, two back-projected rays do not usually intersect, so it is necessary to find the point of minimum distance between the two lines. This point is the midpoint of the segment that is perpendicular to both rays; since that segment has zero dot product with each ray direction, the point can be found by solving a small linear system.
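The midpoint construction above reduces to a 2x2 linear system in the two ray parameters. The original model was in MATLAB; the sketch below is a NumPy version of just the midpoint step, assuming the camera centers and back-projected ray directions have already been computed (the example rays are illustrative):

```python
import numpy as np

def triangulate_midpoint(C1, d1, C2, d2):
    """Midpoint of the common perpendicular between rays C1 + s*d1 and C2 + t*d2.

    The segment joining the closest points is perpendicular to both rays,
    so its dot product with each ray direction is zero -- two linear
    equations in the two unknowns s and t."""
    w0 = C1 - C2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    P1 = C1 + s * d1                      # closest point on ray 1
    P2 = C2 + t * d2                      # closest point on ray 2
    return 0.5 * (P1 + P2)

# Two rays aimed at the same world point from different camera centers.
X = np.array([1.0, 2.0, 3.0])
C1, C2 = np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])
est = triangulate_midpoint(C1, X - C1, C2, X - C2)
print(est)  # [1. 2. 3.]
```

With noisy rays the two closest points P1 and P2 no longer coincide, and the returned midpoint is the estimate of the world point.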
I was given the task of converting the current gait optimization program into a GPS-guided gait optimization program. The initial objective was to have the robot run back and forth along a track. I wrote a small algorithm to convert the GPS data into flat-earth Cartesian coordinates; it gives a very accurate estimate because it uses a spheroid Earth model. Once everything is represented in Cartesian coordinates, the difference between the robot's actual heading and its desired heading is calculated, and this difference determines the turning input to the robot. A simulation test of the program was successful, and it also produced good results when I carried the robot by hand and followed the turning directions it gave. When I ran the program on Penn1, however, the robot walked erratically. I realized that GPS has roughly 10 meters of measurement error when the receiver is stationary; the measurements only become accurate enough to guide the robot once it is in motion. Consequently, the robot moves randomly at the starting point, or whenever it comes to a halt. This problem remains unresolved.
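The two computations described above, the spheroid flat-earth conversion and the heading difference that drives the turning input, can be sketched as follows. This is a minimal illustration, not the gait optimization code itself; the function names are mine, and the conversion uses the standard WGS-84 spheroid radii of curvature about a reference point:

```python
import math

A = 6378137.0                 # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3         # WGS-84 first eccentricity squared

def to_flat_earth(lat, lon, lat0, lon0):
    """Convert (lat, lon) in degrees to local (east, north) meters about a
    reference point (lat0, lon0), using spheroid radii of curvature."""
    phi0 = math.radians(lat0)
    s2 = math.sin(phi0) ** 2
    Rm = A * (1 - E2) / (1 - E2 * s2) ** 1.5   # meridional radius
    Rn = A / math.sqrt(1 - E2 * s2)            # prime-vertical radius
    north = math.radians(lat - lat0) * Rm
    east = math.radians(lon - lon0) * Rn * math.cos(phi0)
    return east, north

def heading_error(actual_deg, desired_deg):
    """Signed heading difference wrapped to (-180, 180]; this is the
    quantity that determines the turning input."""
    return (desired_deg - actual_deg + 180.0) % 360.0 - 180.0

east, north = to_flat_earth(1.0, 0.0, 0.0, 0.0)   # one degree of latitude
print(round(north))            # ~110575 m near the equator
print(heading_error(350, 10))  # 20.0 (turn right across north)
```

The wrap in heading_error matters for exactly the back-and-forth case: near the 0/360 boundary a naive subtraction would command a nearly full turn in the wrong direction.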
Copyright Kodlab, 2017