International Journal of Robotics Research

Featured scientific publications

* Data shown is for reference only

Homography-based 2D Visual Tracking and Servoing
International Journal of Robotics Research - Volume 26, Issue 7, pp. 661-676 - 2007
Selim Benhimane, Ezio Malis
The objective of this paper is to propose a new homography-based approach to image-based visual tracking and servoing. The visual tracking algorithm proposed in the paper is based on a new efficient second-order minimization method. Theoretical analysis and comparative experiments with other tracking approaches show that the proposed method has a higher convergence rate than standard first-order minimization techniques. Therefore, it is well adapted to real-time robotic applications. The output of the visual tracking is a homography linking the current and the reference image of a planar target. Using the homography, a task function isomorphic to the camera pose has been designed. A new image-based control law is proposed which does not need any measure of the 3D structure of the observed target (e.g. the normal to the plane). The theoretical proof of the existence of the isomorphism between the task function and the camera pose and the theoretical proof of the stability of the control law are provided. The experimental results, obtained with a 6 d.o.f. robot, show the advantages of the proposed method with respect to the existing approaches.
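As a minimal illustration of the tracker's output described above (a sketch, not code from the paper): the estimated 3×3 homography maps reference-image points into the current image via homogeneous coordinates. The matrix values below are illustrative only.

```python
# Minimal sketch: mapping a reference-image point into the current image
# through a 3x3 homography (homogeneous coordinates). Values illustrative.

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography H (row-major nested lists)."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# The identity homography leaves points unchanged.
H_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_homography(H_id, (10.0, 20.0)))  # (10.0, 20.0)
```

In the paper's pipeline such a homography, estimated by second-order minimization, links the current and reference views of the planar target at every frame.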
Generalized reciprocal collision avoidance
International Journal of Robotics Research - Volume 34, Issue 12, pp. 1501-1514 - 2015
Daman Bareiss, Jur van den Berg
Reciprocal collision avoidance has become a popular area of research over recent years. Approaches have been developed for a variety of dynamic systems ranging from single integrators to car-like, differential-drive, and arbitrary, linear equations of motion. In this paper, we present two contributions. First, we provide a unification of these previous approaches under a single, generalized representation using control obstacles. In particular, we show how velocity obstacles, acceleration velocity obstacles, continuous control obstacles, and LQR-obstacles are special instances of our generalized framework. Secondly, we present an extension of control obstacles to general reciprocal collision avoidance for non-linear, non-homogeneous systems where the robots may have different state spaces and different non-linear equations of motion from one another. Previous approaches to reciprocal collision avoidance could not be applied to such systems, as they use a relative formulation of the equations of motion and can, therefore, only apply to homogeneous, linear systems where all robots have the same linear equations of motion. Our approach allows for general mobile robots to independently select new control inputs while avoiding collisions with each other. We implemented our approach in simulation for a variety of mobile robots with non-linear equations of motion: differential-drive, differential-drive with a trailer, car-like, and hovercrafts. We also performed physical experiments with a combination of differential-drive, differential-drive with a trailer, and car-like robots. Our results show that our approach is capable of letting a non-homogeneous group of robots with non-linear equations of motion safely avoid collisions at real-time computation rates.
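The velocity-obstacle concept that the paper generalizes can be sketched for two disc-shaped robots. The test below is a simplified stand-in (not the authors' control-obstacle formulation): it checks whether the current relative velocity leads to a future collision between two discs.

```python
import math

def in_velocity_obstacle(p, v, r):
    # p: vector from robot A to robot B; v: velocity of A relative to B;
    # r: sum of the two disc radii. Returns True if the relative velocity
    # drives the robots into collision at some future time.
    px, py = p
    vx, vy = v
    vv = vx * vx + vy * vy
    if vv == 0.0:
        return math.hypot(px, py) <= r  # not moving: only unsafe if overlapping
    t = max((px * vx + py * vy) / vv, 0.0)  # time of closest approach
    dx, dy = px - t * vx, py - t * vy
    return math.hypot(dx, dy) <= r
```

A reciprocal planner would sample candidate velocities and keep only those for which this test is false, splitting the avoidance effort between the robots.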
The DEXMART hand: Mechatronic design and experimental evaluation of synergy-based control for human-like grasping
International Journal of Robotics Research - Volume 33, Issue 5, pp. 799-824 - 2014
Gianluca Palli, Claudio Melchiorri, Gabriele Vassura, Umberto Scarcia, Lorenzo Moriello, Giovanni Berselli, Alberto Cavallo, Giulia Maria De Benedictis, Ciro Natale, Salvatore Pirozzi, Chris May, Fanny Ficuciello, Bruno Siciliano
This paper summarizes recent activities carried out for the development of an innovative anthropomorphic robotic hand called the DEXMART Hand. The main goal of this research is to face the problems that affect current robotic hands by introducing suitable design solutions aimed at achieving simplification and cost reduction while possibly enhancing robustness and performance. While certain aspects of the DEXMART Hand development have been presented in previous papers, this paper is the first to give a comprehensive description of the final hand version and its use to replicate human-like grasping. In this paper, particular emphasis is placed on the kinematics of the fingers and of the thumb, the wrist architecture, the dimensioning of the actuation system, and the final implementation of the position, force and tactile sensors. The paper focuses also on how these solutions have been integrated into the mechanical structure of this innovative robotic hand to enable precise force and displacement control of the whole system. Another important aspect is the lack of suitable control tools that severely limits the development of robotic hand applications. To address this issue, a new method for the observation of human hand behavior during interaction with common day-to-day objects by means of a 3D computer vision system is presented in this work together with a strategy for mapping human hand postures to the robotic hand. A simple control strategy based on postural synergies has been used to reduce the complexity of the grasp planning problem. As a preliminary evaluation of the DEXMART Hand’s capabilities, this approach has been adopted in this paper to simplify and speed up the transfer of human actions to the robotic hand, showing its effectiveness in reproducing human-like grasping.
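The postural-synergy control described above amounts to expressing a high-dimensional hand posture as a mean posture plus a low-dimensional linear combination of synergy basis vectors. A minimal sketch with made-up numbers (not the DEXMART Hand's actual synergy data):

```python
def synergy_posture(mean, synergies, coeffs):
    # Joint posture = mean posture + sum_i coeffs[i] * synergies[i].
    # A few synergy coefficients thus command many joints at once,
    # which is what reduces the grasp-planning complexity.
    q = list(mean)
    for c, s in zip(coeffs, synergies):
        for j in range(len(q)):
            q[j] += c * s[j]
    return q

# Illustrative: 3 joints, 2 synergies.
posture = synergy_posture([0.0, 0.0, 0.0],
                          [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
                          [2.0, 3.0])
```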
Incremental learning of full body motion primitives and their sequencing through human motion observation
International Journal of Robotics Research - Volume 31, Issue 3, pp. 330-345 - 2012
Dana Kulić, Christian Ott, Dongheui Lee, Junichi Ishikawa, Yoshihiko Nakamura
In this paper we describe an approach for on-line, incremental learning of full body motion primitives from observation of human motion. The continuous observation sequence is first partitioned into motion segments, using stochastic segmentation. Next, motion segments are incrementally clustered and organized into a hierarchical tree structure representing the known motion primitives. Motion primitives are encoded using hidden Markov models, so that the same model can be used for both motion recognition and motion generation. At the same time, the temporal relationship between motion primitives is learned via the construction of a motion primitive graph. The motion primitive graph can then be used to construct motions, consisting of sequences of motion primitives. The approach is implemented and tested during on-line observation and on the IRT humanoid robot.
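The motion primitive graph can be sketched as transition counts between recognized primitive labels, from which likely successors are read off when sequencing motions. The labels below are illustrative, not from the paper:

```python
from collections import defaultdict

def build_primitive_graph(sequence):
    # Count observed transitions between motion primitive labels to form
    # a motion primitive graph (edge weights = transition frequencies).
    graph = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        graph[a][b] += 1
    return graph

def likely_successor(graph, primitive):
    # Most frequently observed successor of a primitive, or None.
    succ = graph.get(primitive)
    if not succ:
        return None
    return max(succ, key=succ.get)
```

In the paper each node would be an HMM-encoded primitive usable for both recognition and generation; here the labels are plain strings for brevity.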
Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning
International Journal of Robotics Research - Volume 36, Issue 13-14, pp. 1595-1618 - 2017
Sanjay Krishnan, Animesh Garg, Sachin Patil, Colin Lea, Gregory D. Hager, Pieter Abbeel, Ken Goldberg
Demonstration trajectories collected from a supervisor in teleoperation are widely used for robot learning, and temporally segmenting the trajectories into shorter, less-variable segments can improve the efficiency and reliability of learning algorithms. Trajectory segmentation algorithms can be sensitive to noise, spurious motions, and temporal variation. We present a new unsupervised segmentation algorithm, transition state clustering (TSC), which leverages repeated demonstrations of a task by clustering segment endpoints across demonstrations. TSC complements any motion-based segmentation algorithm by identifying candidate transitions, clustering them by kinematic similarity, and then correlating the kinematic clusters with available sensory and temporal features. TSC uses a hierarchical Dirichlet process Gaussian mixture model to avoid selecting the number of segments a priori. We present simulated results to suggest that TSC significantly reduces the number of false-positive segments in dynamical systems observed with noise as compared with seven probabilistic and non-probabilistic segmentation algorithms. We additionally compare algorithms that use piecewise linear segment models, and find that TSC recovers segments of a generated piecewise linear trajectory with greater accuracy in the presence of process and observation noise. At the maximum noise level, TSC recovers the ground truth 49% more accurately than alternatives. Furthermore, TSC runs 100× faster than the next most accurate alternative autoregressive models, which require expensive Markov chain Monte Carlo (MCMC)-based inference. We also evaluated TSC on 67 recordings of surgical needle passing and suturing. We supplemented the kinematic recordings with manually annotated visual features that denote grasp and penetration conditions. On this dataset, TSC finds 83% of needle passing transitions and 73% of the suturing transitions annotated by human experts.
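A drastically simplified sketch of the transition-state idea, using a 1-D trajectory and a distance threshold in place of the paper's hierarchical Dirichlet-process GMM: candidate transitions are points of large state change, and transitions that recur across demonstrations are grouped by proximity.

```python
def candidate_transitions(traj, jump):
    # Indices where consecutive states change by more than `jump`
    # (stand-in for a motion-based segmentation algorithm).
    return [i for i in range(1, len(traj)) if abs(traj[i] - traj[i - 1]) > jump]

def cluster_1d(points, tol):
    # Greedy 1-D clustering by distance threshold -- a simple stand-in
    # for clustering transition states by kinematic similarity.
    clusters = []
    for p in sorted(points):
        if clusters and p - clusters[-1][-1] <= tol:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters
```

The nonparametric model in the paper serves the same role as the threshold here: it avoids fixing the number of segments a priori.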
Quantifying teaching behavior in robot learning from demonstration
International Journal of Robotics Research - Volume 39, Issue 1, pp. 54-72 - 2020
Aran Sena, Matthew Howard
Learning from demonstration allows for rapid deployment of robot manipulators to a great many tasks, by relying on a person showing the robot what to do rather than programming it. While this approach provides many opportunities, measuring, evaluating, and improving the person’s teaching ability has remained largely unexplored in robot manipulation research. To this end, a model for learning from demonstration is presented here that incorporates the teacher’s understanding of, and influence on, the learner. The proposed model is used to clarify the teacher’s objectives during learning from demonstration, providing new views on how teaching failures and efficiency can be defined. The benefit of this approach is shown in two experiments ([Formula: see text] and [Formula: see text], respectively), which highlight the difficulty teachers have in providing effective demonstrations, and show how [Formula: see text]–180% improvement in teaching efficiency can be achieved through evaluation and feedback shaped by the proposed framework, relative to unguided teaching.
Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments
International Journal of Robotics Research - Volume 25, Issue 5-6, pp. 449-483 - 2006
Alonzo Kelly, Anthony Stentz, Omead Amidi, M. F. Bode, David M. Bradley, Antonio Díaz-Calderón, Mike Happold, Herman Herman, Robert Mandelbaum, Tom Pilarski, Peter Rander, Scott Thayer, Nick Vallidis, Randy Warner
The DARPA PerceptOR program has implemented a rigorous evaluative test program which fosters the development of field relevant outdoor mobile robots. Autonomous ground vehicles were deployed on diverse test courses throughout the USA and quantitatively evaluated on such factors as autonomy level, waypoint acquisition, failure rate, speed, and communications bandwidth. Our efforts over the three year program have produced new approaches in planning, perception, localization, and control which have been driven by the quest for reliable operation in challenging environments. This paper focuses on some of the most unique aspects of the systems developed by the CMU PerceptOR team, the lessons learned during the effort, and the most immediate challenges that remain to be addressed.
Fast loop-closure detection using visual-word-vectors from image sequences
International Journal of Robotics Research - Volume 37, Issue 1, pp. 62-82 - 2018
Loukas Bampis, Angelos Amanatiadis, Αντώνιος Γαστεράτος
In this paper, a novel pipeline for loop-closure detection is proposed. We base our work on a bag of binary feature words and we produce a description vector capable of characterizing a physical scene as a whole. Instead of relying on single camera measurements, the robot’s trajectory is dynamically segmented into image sequences according to its content. The visual word occurrences from each sequence are then combined to create sequence-visual-word-vectors and provide additional information to the matching functionality. In this way, scenes with considerable visual differences are firstly discarded, while the respective image-to-image associations are provided subsequently. With the purpose of further enhancing the system’s performance, a novel temporal consistency filter (trained offline) is also introduced to advance matches that persist over time. Evaluation results prove that the presented method compares favorably with other state-of-the-art techniques, while our algorithm is tested on a tablet device, verifying the computational efficiency of the approach.
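A sequence-visual-word-vector can be sketched as the summed visual-word histogram of an image sequence, with candidate loop closures ranked by cosine similarity. The "words" below are placeholders for the binary feature words used in the paper:

```python
import math
from collections import Counter

def sequence_word_vector(images):
    # Sum per-image visual-word occurrences over a sequence to form
    # a single sequence-visual-word-vector.
    vec = Counter()
    for words in images:
        vec.update(words)
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Matching sequences first, then images within matched sequences, is what lets visually dissimilar scenes be discarded cheaply.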
Sensor Models and Multisensor Integration
International Journal of Robotics Research - Volume 7, Issue 6, pp. 97-113 - 1988
Hugh Durrant‐Whyte
We maintain that the key to intelligent fusion of disparate sensory information is to provide an effective model of sensor capabilities. A sensor model is an abstraction of the actual sensing process. It describes the information a sensor is able to provide, how this information is limited by the environment, how it can be enhanced by information obtained from other sensors, and how it may be improved by active use of the physical sensing device. The importance of having a model of sensor performance is that capabilities can be estimated a priori and, thus, sensor strategies developed in line with information requirements. We describe a technique for modeling sensors and the information they provide. This model treats each sensor as an individual decision maker, acting as a member of a team with common goals. Each sensor is considered as a source of uncertain geometric information, able to communicate to, and coordinate its activities with, other members of the sensing team. We treat three components of this sensor model: the observation model, which describes a sensor's measurement characteristics; the dependency model, which describes a sensor's dependence on information from other sources; and the state model, which describes how a sensor's observations are affected by its location and internal state. We show how this mechanism can be used to manipulate, communicate, and integrate uncertain sensor observations. We show that these sensor models can deal effectively with cooperative, competitive, and complementary interactions between different disparate information sources.
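The flavor of integrating uncertain geometric observations can be sketched with classic inverse-variance fusion of two independent measurements of the same quantity; this is a textbook building block, not the paper's full team-theoretic model:

```python
def fuse(z1, var1, z2, var2):
    # Inverse-variance (maximum-likelihood) fusion of two independent
    # uncertain observations z1, z2 of the same quantity. The fused
    # estimate weights each sensor by its reliability, and the fused
    # variance is always smaller than either input variance.
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Two equally reliable sensors: the fused estimate is their average.
print(fuse(10.0, 1.0, 14.0, 1.0))  # (12.0, 0.5)
```

The paper's observation, dependency, and state models generalize this: weights depend on each sensor's measurement characteristics, its correlations with other sources, and its location and internal state.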
Leader-Follower Formation Control of Multiple Non-holonomic Mobile Robots Incorporating a Receding-horizon Scheme
International Journal of Robotics Research - Volume 29, Issue 6, pp. 727-747 - 2010
Jian Chen, Dong Sun, Jie Yang, Haoyao Chen
In this paper we present a receding-horizon leader-follower (RH-LF) control framework to solve the formation problem of multiple non-holonomic mobile robots with a rapid error convergence rate. To maintain the desired leader-follower relationship, we propose a separation-bearing-orientation scheme (SBOS) for two-robot formations and a separation-separation-orientation scheme (SSOS) for three-robot formations in deriving the desired postures of the followers. Unlike other leader-follower approaches in the existing literature, the orientation deviations between the leaders and followers are explicitly controlled in our framework, which enables us to solve formation control when robots move backwards, termed the formation backwards problem in this paper. Further, we incorporate the receding-horizon scheme into our leader-follower controller to yield a fast convergence rate of the formation tracking errors. Experiments are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed formation control framework.
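The separation-bearing part of such a scheme can be sketched as computing a follower's desired pose from the leader's pose; this is a simplified reading (bearing measured in the leader's frame, desired orientation matching the leader's), not the paper's full SBOS controller:

```python
import math

def desired_follower_pose(xl, yl, thl, sep, bearing):
    # Separation-bearing sketch: the follower's desired position lies at
    # distance `sep` from the leader at angle `bearing` in the leader's
    # frame; the desired orientation here simply matches the leader's.
    xd = xl + sep * math.cos(thl + bearing)
    yd = yl + sep * math.sin(thl + bearing)
    return xd, yd, thl
```

A formation controller then drives the follower's actual posture toward this desired posture; explicitly controlling the orientation term is what allows the backwards-motion case discussed above.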
Total: 43