Complex manipulation tasks, such as rearrangement planning of numerous objects, are combinatorially hard problems. Existing algorithms either do not scale well or assume a great deal of prior knowledge about the environment, and few offer any rigorous guarantees. In this paper, we propose a novel hybrid control architecture for achieving such tasks with mobile manipulators. On the discrete side, we enrich a temporal logic specification with mobile manipulation primitives such as moving to a point, and grasping or moving an object. Such specifications are translated to an automaton representation, which orchestrates the physical grounding of the task to mobility or manipulation controllers. The grounding from the discrete plan to the continuous reactive controller is online and can respond to the discovery of unknown obstacles, or decide to push movable objects out of the way when they prohibit task accomplishment. Despite the problem complexity, we prove that, under specific conditions, our architecture enjoys provable completeness on the discrete side and provable termination on the continuous side, while avoiding all obstacles in the environment. Simulations illustrate the efficiency of our architecture, which can handle tasks of increasing complexity while also responding to unknown obstacles or unanticipated adverse configurations.
2020. IEEE Robotics and Automation Letters (RA-L). Authors: Vasileios Vasilopoulos, Georgios Pavlakos, Sean L. Bowman, J. Diego Caporale, Kostas Daniilidis, George J. Pappas, and Daniel E. Koditschek. This letter presents a reactive planning system that enriches the topological representation of an environment with a tightly integrated semantic representation, achieved by incorporating and exploiting advances in deep perceptual learning and probabilistic semantic reasoning. Our architecture combines object detection with semantic SLAM, affording robust, reactive logical as well as geometric planning in unexplored environments. Moreover, by incorporating a human mesh estimation algorithm, our system is capable of responding in real time to semantically labeled human motions and gestures. New formal results allow tracking of suitably non-adversarial moving targets, while maintaining the same collision avoidance guarantees. We suggest the empirical utility of the proposed control architecture with a numerical study including comparisons with a state-of-the-art dynamic replanning algorithm, and physical implementation on both wheeled and legged platforms in different settings with both geometric and semantic goals.
2020. 3rd IEEE International Conference on Soft Robotics (RoboSoft), April 6-9, 2020. Authors: Wei-Hsi Chen, Shivangi Misra, J. Diego Caporale, Daniel E. Koditschek, Shu Yang, Cynthia R. Sung. Our prior work demonstrated that a REBO structure could be used to juggle (repeatedly loft and catch) a 1kg load. Here, we modify that design to achieve actual locomotion: translation of the mechanism’s mass center through a two degree-of-freedom workspace.
2020. IEEE Robotics and Automation Letters. Authors: Wei-Hsi Chen, Shivangi Misra, Yuchong Gao, Young-Joo Lee, Daniel E. Koditschek, Shu Yang, Cynthia R. Sung. We present an approach to overcoming challenges in dynamical dexterity for robots through programmably compliant origami mechanisms.
2019. The International Symposium on Robotics Research (ISRR 2019). Authors: T. Turner Topping, Vasileios Vasilopoulos, Avik De, and Daniel E. Koditschek. We document the reliably repeatable dynamical mounting and dismounting of wheeled stools and carts, and of fixed ledges, by the Minitaur robot. Because these tasks span a range of length scales that preclude quasi-static execution, we use a hybrid dynamical systems framework to variously compose and thereby systematically reuse a small lexicon of templates (low degree of freedom behavioral primitives). The resulting behaviors comprise the key competences beyond mere locomotion required for robust implementation on a legged mobile manipulator of a simple version of the warehouseman’s problem.
2019. Authors: Garrett Wenger, Avik De, Daniel Koditschek. The Jerboa, a tailed bipedal robot with two hip-actuated, passive-compliant legs and a doubly actuated tail, has been shown both formally and empirically to exhibit a variety of stable hopping and running gaits in the sagittal plane. In this paper we take the first steps toward operating Jerboa as a fully spatial machine by addressing the predominant mode of destabilization away from the sagittal plane: body roll. We develop a provably stable controller for underactuated aerial stabilization of the coupled body roll and tail angles, that uses just the tail torques. We show that this controller is successful at reliably reorienting the Jerboa body in roughly 150 ms of freefall from a large set of initial conditions. This controller also enables (and appears intuitively to be crucial for) sustained empirically stable hopping in the frontal plane by virtue of its substantial robustness against destabilizing perturbations and calibration errors. The controller as well as the analysis methods developed here are applicable to any robotic platform with a similar doubly-actuated spherical tail joint.
2018. Authors: Omur Arslan and Daniel E. Koditschek. We construct a sensor-based feedback law that provably solves the real-time collision-free robot navigation problem in a compact convex Euclidean subset cluttered with unknown but sufficiently separated and strongly convex obstacles. Our algorithm introduces a novel use of separating hyperplanes for identifying the robot’s local obstacle-free convex neighborhood, affording a reactive (online-computed) continuous and piecewise smooth closed-loop vector field whose smooth flow brings almost all configurations in the robot’s free space to a designated goal location, with the guarantee of no collisions along the way. Specialized attention to planar navigable environments yields a necessary and sufficient condition on convex obstacles for almost global navigation towards any goal location in the environment. We further extend these provable properties of the planar setting to practically motivated limited range, isotropic and anisotropic sensing models, and the nonholonomically constrained kinematics of the standard differential drive vehicle. We conclude with numerical and experimental evidence demonstrating the effectiveness of the proposed sensory feedback motion planner.
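The "move-to-projected-goal" idea underlying this construction admits a compact planar sketch. The following Python rendering is our own illustration, not the paper's implementation: the disk-obstacle model, the function names, and the use of Dykstra's algorithm for the projection are all assumptions. Each sensed disk contributes a separating halfplane tangent to its boundary; the intersection of these halfplanes stands in for the robot's local obstacle-free convex neighborhood; and the robot flows toward the Euclidean projection of the goal onto that set.

```python
import math

def separating_halfplanes(x, obstacles):
    """For each sensed disk obstacle (center c, radius r), build the tangent
    halfplane {q : a . q <= b} that separates the robot at x from the disk.
    Assumes the robot lies strictly outside every disk (a simplification)."""
    planes = []
    for (cx, cy), r in obstacles:
        dx, dy = x[0] - cx, x[1] - cy
        d = math.hypot(dx, dy)
        nx, ny = dx / d, dy / d            # unit normal pointing at the robot
        px, py = cx + r * nx, cy + r * ny  # nearest point on the disk boundary
        # keep the robot's side: n . (q - p) >= 0  <=>  (-n) . q <= -(n . p)
        planes.append(((-nx, -ny), -(nx * px + ny * py)))
    return planes

def project_to_halfspaces(q, planes, iters=200):
    """Dykstra's algorithm: Euclidean projection of q onto the intersection
    of halfplanes a . q <= b (the local obstacle-free convex neighborhood)."""
    y = (q[0], q[1])
    incr = [(0.0, 0.0) for _ in planes]
    for _ in range(iters):
        for i, ((ax, ay), b) in enumerate(planes):
            zx, zy = y[0] + incr[i][0], y[1] + incr[i][1]
            viol = ax * zx + ay * zy - b
            if viol > 0.0:                 # outside: project onto the boundary
                n2 = ax * ax + ay * ay
                px, py = zx - viol * ax / n2, zy - viol * ay / n2
            else:
                px, py = zx, zy
            incr[i] = (zx - px, zy - py)
            y = (px, py)
    return y

def reactive_step(x, goal, obstacles, gain=1.0, dt=0.1):
    """One Euler step of the vector field x_dot = -gain * (x - Proj(goal))."""
    proj = project_to_halfspaces(goal, separating_halfplanes(x, obstacles))
    return (x[0] + dt * gain * (proj[0] - x[0]),
            x[1] + dt * gain * (proj[1] - x[1]))
```

With a single disk of radius 0.5 at (2, 0) blocking a robot at the origin from a goal at (4, 0), the separating halfplane is q_x <= 1.5, so the projected goal is (1.5, 0) and the flow stalls safely at the local free-space boundary rather than colliding.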
2018. The efficiency of sampling-based motion planning algorithms is dependent on how well a steering procedure is capable of capturing both system dynamics and configuration space geometry to connect sample configurations. This paper considers how metrics describing local system dynamics may be combined with convex subsets of the free space to describe the local behavior of a steering function for sampling-based planners. Subsequently, a framework for using these subsets to extend the steering procedure to incorporate this information is introduced. To demonstrate our framework, three specific metrics are considered: the LQR cost-to-go function, a Gram matrix derived from system linearization, and the Mahalanobis distance of a linear-Gaussian system. Finally, numerical tests are conducted for a second-order linear system, a kinematic unicycle, and a linear-Gaussian system to demonstrate that our framework increases the connectivity of sampling-based planners and allows them to better explore the free space.
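To make the role of such metrics concrete, here is a minimal, hedged Python sketch (our own illustration, not the paper's code) of a nearest-neighbor query for a sampling-based planner that swaps the Euclidean metric for a Mahalanobis distance derived from a 2x2 covariance, as a linear-Gaussian system would supply:

```python
import math

def mahalanobis2(p, q, cov):
    """Mahalanobis distance between 2D points p, q under covariance
    cov = ((a, b), (b, c)): the quadratic form with the inverse covariance,
    expanded in closed form for the 2x2 case."""
    (a, b), (_, c) = cov
    det = a * c - b * b
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.sqrt((c * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det)

def nearest(vertices, sample, cov):
    """Nearest-neighbor query using the system-aware metric in place of
    straight-line Euclidean distance."""
    return min(vertices, key=lambda v: mahalanobis2(v, sample, cov))
```

With cov = ((9, 0), (0, 1)), motion along x is "cheap," so the query prefers a vertex 2 units away along x over one 1 unit away along y, reversing the choice a Euclidean metric would make.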
A challenge of pan/tilt/zoom (PTZ) camera networks for efficient and flexible visual monitoring is automated active network reconfiguration in response to environmental stimuli. In this paper, given an event/activity distribution over a convex environment, we propose a new provably correct reactive coverage control algorithm for PTZ camera networks that continuously (re)configures camera orientations and zoom levels (i.e., angles of view) in order to locally maximize their total coverage quality. Our construction is based on careful modeling of visual sensing quality that is consistent with the physical nature of cameras, and we introduce a new notion of conic Voronoi diagrams, based on our sensing quality measures, to solve the camera network allocation problem: that is, to determine where each camera should focus in its field of view given all the other cameras’ configurations. Accordingly, we design simple greedy gradient algorithms for both continuous- and discrete-time first-order PTZ camera dynamics that asymptotically converge to a locally optimal coverage configuration. Finally, we provide numerical and experimental evidence demonstrating the effectiveness of the proposed coverage algorithms.
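The greedy coverage idea can be sketched in a few lines of Python. The snippet below is a hedged illustration under our own simplifying assumptions (a cosine-falloff quality model, pan angles only, coordinate ascent that accepts a small rotation only when total coverage strictly improves), not the paper's algorithm or sensing-quality measure:

```python
import math

def coverage_quality(cameras, events, aov=math.pi / 3):
    """Total coverage: each event point is scored by the best camera that sees
    it within its angle of view `aov`, with an illustrative cosine-falloff
    quality that also decays with distance."""
    total = 0.0
    for ex, ey in events:
        best = 0.0
        for (cx, cy), pan in cameras:
            bearing = math.atan2(ey - cy, ex - cx)
            # wrapped angular error between bearing and the camera's pan
            err = math.atan2(math.sin(bearing - pan), math.cos(bearing - pan))
            if abs(err) < aov:  # event inside this camera's field of view
                dist = math.hypot(ex - cx, ey - cy)
                best = max(best, math.cos(err) / (1.0 + dist))
        total += best
    return total

def greedy_pan_ascent(cameras, events, step=0.05, iters=100):
    """Coordinate ascent on pan angles: each camera tries a small rotation in
    either direction and keeps it only if total coverage improves, so the
    objective is monotonically non-decreasing."""
    cams = list(cameras)
    for _ in range(iters):
        for i in range(len(cams)):
            (cx, cy), pan = cams[i]
            base = coverage_quality(cams, events)
            for cand in (pan + step, pan - step):
                trial = list(cams)
                trial[i] = ((cx, cy), cand)
                if coverage_quality(trial, events) > base:
                    cams = trial
                    break
    return cams
```

Because a rotation is accepted only when it strictly raises the total quality, the iteration mimics the local (rather than global) optimality guarantee of greedy gradient coverage schemes.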
2018. We construct a sensor-based feedback law that provably solves the real-time collision-free robot navigation problem in a compact convex Euclidean subset cluttered with unknown but sufficiently separated and strongly convex obstacles. Our algorithm introduces a novel use of separating hyperplanes for identifying the robot’s local obstacle-free convex neighborhood, affording a reactive (online-computed) piecewise smooth and continuous closed-loop vector field whose smooth flow brings almost all configurations in the robot’s free space to a designated goal location, with the guarantee of no collisions along the way. We further extend these provable properties to practically motivated limited range sensing models.
2018. This paper presents a provably correct method for robot navigation in 2D environments cluttered with familiar but unexpected non-convex, star-shaped obstacles as well as completely unknown, convex obstacles. We presuppose a limited range onboard sensor, capable of recognizing, localizing and (leveraging ideas from constructive solid geometry) generating online from its catalogue of the familiar, non-convex shapes an implicit representation of each one. These representations underlie an online change of coordinates to a completely convex model planning space wherein a previously developed online construction yields a provably correct reactive controller that is pulled back to the physically sensed representation to generate the actual robot commands. We extend the construction to differential drive robots, and verify the proposed control architecture using both formal proofs and numerical simulations.
2018. Sampling-based algorithms offer computationally efficient, practical solutions to the path finding problem in high-dimensional complex configuration spaces by approximately capturing the connectivity of the underlying space through a (dense) collection of sample configurations joined by simple local planners. In this paper, we address a long-standing bottleneck associated with the difficulty of finding paths through narrow passages. Whereas most prior work considers the narrow passage problem as a sampling issue (and the literature abounds with heuristic sampling strategies), very little attention has been paid to the design of new effective local planners. Here, we propose a novel sensory steering algorithm for sampling-based motion planning that can “feel” a configuration space locally and significantly improve the path planning performance near difficult regions such as narrow passages. We provide computational evidence for the effectiveness of the proposed local planner through a variety of simulations which suggest that our proposed sensory steering algorithm outperforms the standard straight-line planner by significantly increasing the connectivity of random motion planning graphs.
2018. We present the first fully spatial hopping gait of a 12 DoF tailed biped driven by only 4 actuators. The control of this physical machine is built up from parallel compositions of controllers for progressively higher DoF extensions of a simple 2 DoF, 1 actuator template. These template dynamics are still not themselves integrable, but a new hybrid averaging analysis yields a conjectured closed form representation of the approximate hopping limit cycle as a function of its physical and control parameters. The resulting insight into the role of the machine’s kinematic and dynamical design choices affords a redesign leading to the newly achieved behavior.
We demonstrate the physical rearrangement of wheeled stools in a moderately cluttered indoor environment by a quadrupedal robot that autonomously achieves a user’s desired configuration. The robot’s behaviors are planned and executed by a three-layer hierarchical architecture consisting of: an offline symbolic task and motion planner; a reactive layer that tracks the reference output of the deliberative layer and avoids unanticipated obstacles sensed online; and a gait layer that realizes the abstract unicycle commands from the reactive module through appropriately coordinated joint level torque feedback loops. This work also extends prior formal results about the reactive layer to a broad class of nonconvex obstacles. Our design is verified both by formal proofs as well as empirical demonstration of various assembly tasks.
2018. This paper applies an extension of classical averaging methods to hybrid dynamical systems, thereby achieving formally specified, physically effective and robust instances of all virtual bipedal gaits on a quadrupedal robot. Gait specification takes the form of a three parameter family of coupling rules mathematically shown to stabilize limit cycles in a low degree of freedom template: an abstracted pair of vertical hoppers whose relative phase locking encodes the desired physical leg patterns. These coupling rules produce the desired gaits when appropriately applied to the physical robot.
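The phase-locking mechanism at the heart of such coupling rules can be illustrated with two abstract vertical-hopper phase oscillators. The sketch below is a Kuramoto-style stand-in of our own devising, not the paper's three-parameter family: the coupling drives the relative phase toward a commanded offset (0 for in-phase "pronking," pi for alternating, bounding-like patterns).

```python
import math

def couple_hoppers(phase_offset, gain=2.0, dt=0.01, steps=2000, omega=6.0):
    """Integrate two coupled phase oscillators (abstract vertical hoppers)
    whose relative phase is driven toward `phase_offset`. Returns the wrapped
    residual between the achieved and the commanded relative phase."""
    p1, p2 = 0.0, 1.0  # arbitrary initial phases
    for _ in range(steps):
        # symmetric coupling: d(p2 - p1)/dt = -2 * gain * sin((p2-p1) - offset)
        dp1 = omega + gain * math.sin((p2 - p1) - phase_offset)
        dp2 = omega + gain * math.sin((p1 - p2) + phase_offset)
        p1 += dt * dp1
        p2 += dt * dp2
    # residual relative-phase error, wrapped into (-pi, pi]
    return (p2 - p1 - phase_offset + math.pi) % (2 * math.pi) - math.pi
```

The relative phase obeys a one-dimensional gradient flow with a stable fixed point at the commanded offset, so from almost all initial conditions the residual decays to zero, mirroring (in caricature) the limit-cycle stabilization proved for the template.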
2018. This paper considers the problem of completing assemblies of passive objects in nonconvex environments, cluttered with convex obstacles of unknown position, shape and size that satisfy a specific separation assumption. A differential drive robot equipped with a gripper and a LIDAR sensor, capable of perceiving its environment only locally, is used to position the passive objects in a desired configuration. The method combines the virtues of a deliberative planner generating high-level, symbolic commands, with the formal guarantees of convergence and obstacle avoidance of a reactive planner that requires little onboard computation and is used online. The validity of the proposed method is verified both with formal proofs and numerical simulations.
This paper demonstrates a fully sensor-based reactive homing behavior on a physical quadrupedal robot, using onboard sensors, in simple (convex obstacle-cluttered) unknown, GPS-denied environments. Its implementation is enabled by our empirical success in controlling the legged machine to approximate the (abstract) unicycle mechanics assumed by the navigation algorithm, and our proposed method of range-only target localization using particle filters.
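A bootstrap particle filter for range-only target localization can be sketched briefly. The code below is an illustrative assumption-laden stand-in (workspace bounds, Gaussian range noise, diffusion constants, and all names are ours, not the paper's implementation): particles are weighted by how well their predicted range matches each measurement taken along the robot's path, then resampled.

```python
import math
import random

def range_only_pf(robot_path, target, sigma=0.1, n=3000, seed=0):
    """Estimate a static target from noisy range-only measurements taken at a
    sequence of robot positions, via a bootstrap particle filter."""
    rng = random.Random(seed)
    # particles uniform over an assumed 10 m x 10 m workspace
    parts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n)]
    for rx, ry in robot_path:
        # simulated noisy range measurement to the (unknown) target
        z = math.hypot(target[0] - rx, target[1] - ry) + rng.gauss(0, sigma)
        # weight particles by Gaussian range likelihood
        w = []
        for px, py in parts:
            err = math.hypot(px - rx, py - ry) - z
            w.append(math.exp(-0.5 * (err / sigma) ** 2))
        total = sum(w)
        # systematic resampling, with small diffusion to avoid degeneracy
        cum, acc = [], 0.0
        for wi in w:
            acc += wi / total
            cum.append(acc)
        new, u, j = [], rng.random() / n, 0
        for k in range(n):
            while j < n - 1 and cum[j] < u + k / n:
                j += 1
            px, py = parts[j]
            new.append((px + rng.gauss(0, 0.05), py + rng.gauss(0, 0.05)))
        parts = new
    # posterior mean as the target estimate
    ex = sum(p[0] for p in parts) / n
    ey = sum(p[1] for p in parts) / n
    return ex, ey
```

Because a single range measurement only constrains the target to an annulus, the estimate sharpens as the robot observes from several distinct vantage points, which is why motion and localization are coupled in range-only homing.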
2017. We document empirically stable bounding using an actively powered spine on the Inu quadrupedal robot, and propose a reduced-order model to capture the dynamics associated with this additional, actuated spine degree of freedom. This model is sufficiently accurate as to roughly describe the robot’s mass center trajectory during a bounding limit cycle, thus making it a potential option for low dimensional representations of spine actuation in steady-state legged locomotion.
This video documents our field experiments at White Sands National Monument with the RHex robot, in March 2016. It demonstrates the great potential for RHex to assist aeolian scientists in desert research. By collecting data through sensors mounted on RHex, we gather transformative datasets that are required to calibrate and verify existing and future dune dynamics and sand transport models. This work is produced by Nicholas Lancaster, Desert Research Institute, and will be presented at the 2016 Geological Society of America Conference.
The Penn Jerboa showcases new leaping behaviors and demonstrates an innovative method of describing and categorizing these leaps across robot platforms.
http://www.ghostrobotics.io Ghost Minitaur™ is a patent-pending medium-sized legged robot highly adept at perceiving tactile sensations. Its high torque motors, motor controllers, and specialized leg design allow this machine to run and jump over difficult terrain, climb fences and stairs, and even open doors. High-speed and high-resolution encoders let the robot see and feel the ground through the motors and adapt faster than the blink of an eye. Minitaur was developed in Kod*lab.
Playlist of all of Kod*lab Research Videos