Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots that learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming: path-planners are represented as Python code, which is optimized via Pareto evolution, and the resulting planners are encouraged to explore curiously and efficiently. This research asks two questions: “How can robots exhibit life-long learning, where they adapt to changing environments in a robust way?” and “How can robots learn to be curious?”
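To make the selection mechanism concrete, below is a minimal sketch of a Pareto-evolution loop in Python. It assumes a hypothetical two-objective fitness, such as (path cost, negated curiosity score), with both objectives minimized; the `evaluate` and `mutate` callables and the objective names are illustrative placeholders, not the thesis's actual implementation.

```python
import random

def dominates(fa, fb):
    """True if fitness vector fa Pareto-dominates fb (all objectives minimized)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(scored):
    """Keep (individual, fitness) pairs whose fitness no other pair dominates."""
    return [(ind, f) for i, (ind, f) in enumerate(scored)
            if not any(j != i and dominates(g, f)
                       for j, (_, g) in enumerate(scored))]

def evolve(population, evaluate, mutate, generations=50):
    """Minimal Pareto-evolution loop: score, keep the front, refill by mutation."""
    for _ in range(generations):
        scored = [(ind, evaluate(ind)) for ind in population]
        front = pareto_front(scored)
        parents = [ind for ind, _ in front]
        survivors = list(parents)
        # Refill the population by mutating random members of the front.
        while len(survivors) < len(population):
            survivors.append(mutate(random.choice(parents)))
        population = survivors
    return population
```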

In this work, I combined existing dynamical models of collective transport in ants to create a stochastic model that describes these behaviors and can be used to control multi-robot systems performing collective transport. In this model, each agent transitions stochastically between roles based on the force it senses the other agents applying to the load, and its motion is governed by a proportional controller that updates its applied force based on the load velocity. I developed agent-based simulations of this model in NetLogo and explored leader-follower scenarios in which agents receive information about the transport destination when a newly informed agent (a leader) joins the team. From these simulations, I derived the mean allocations of agents between “puller” and “lifter” roles and the mean forces applied by the agents throughout the motion.
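As a rough illustration of the model's two coupled rules, here is a minimal per-agent update step in Python. The gains `K_P`, `V_DESIRED`, and `SWITCH_GAIN`, and the linear map from sensed force to switching probability, are placeholder assumptions for this sketch, not the fitted parameters of the model described above.

```python
import random
import numpy as np

K_P = 0.5          # assumed proportional gain on the load-speed error
V_DESIRED = 1.0    # assumed target load speed toward the destination
SWITCH_GAIN = 0.1  # assumed scaling of sensed force into a switching probability

def update_agent(role, sensed_force, load_velocity, goal_dir):
    """One update step for a single transport agent.

    The agent stochastically switches between 'puller' and 'lifter' with a
    probability that grows with the force it senses the other agents applying,
    and a puller adjusts its applied force with a proportional controller on
    the load velocity. goal_dir is a unit vector toward the destination.
    """
    # Stochastic role transition driven by the sensed net force.
    p_switch = min(1.0, SWITCH_GAIN * np.linalg.norm(sensed_force))
    if random.random() < p_switch:
        role = 'lifter' if role == 'puller' else 'puller'

    # Pullers pull along the goal direction in proportion to the shortfall
    # between desired and current load speed; lifters support the load and
    # contribute no horizontal force.
    if role == 'puller':
        speed_error = V_DESIRED - np.dot(load_velocity, goal_dir)
        applied_force = K_P * speed_error * goal_dir
    else:
        applied_force = np.zeros_like(goal_dir)
    return role, applied_force
```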
From the simulation results, I show that the mean ratio of lifter to puller populations is approximately 1:1. I also show that agents using the force-based role-update procedure need to exert less force than agents that select their role based on their position on the load, although both strategies achieve similar transport speeds.

Robust camera pose estimation is fundamental to autonomous navigation, robotic perception, and non-line-of-sight (NLOS) tracking. While conventional visual odometry and Simultaneous Localization and Mapping (SLAM) techniques rely heavily on discriminative feature correspondences in texture-rich environments, they often fail in feature-poor conditions, such as low-light, foggy, or textureless scenes. This dissertation proposes novel methodologies to improve pose estimation robustness in these challenging environments by leveraging multi-modal sensor fusion, geometric constraints, and learning-based feature matching.
First, this dissertation presents a Visual-Inertial Odometry (VIO) framework that integrates 3D points, lines, and planes as geometric primitives in an Extended Kalman Filter (EKF) pipeline. By directly incorporating structural elements into pose estimation, this framework mitigates the limitations of sparse visual features in degraded conditions. The approach is validated in real-world experiments with an instrumented unmanned aerial vehicle (UAV), demonstrating superior pose accuracy compared to traditional feature-based methods.
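For reference, the core of any such pipeline is the standard EKF measurement update, written below in generic form so that a point, line, or plane observation enters only through its residual and Jacobian. This is a textbook sketch, not the framework's actual state layout; for example, a plane landmark with unit normal n and offset d would contribute the scalar residual n·p + d for each measured in-plane point p.

```python
import numpy as np

def ekf_update(x, P, residual, H, R):
    """Generic EKF measurement update.

    x: state mean, P: state covariance, residual: z - h(x),
    H: measurement Jacobian at x, R: measurement noise covariance.
    Points, lines, and planes differ only in how residual and H are formed.
    """
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ residual              # corrected state mean
    P = (np.eye(len(x)) - K @ H) @ P  # corrected state covariance
    return x, P
```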
Second, this dissertation introduces a Stereo Visual Odometry technique with an Attention Graph Neural Network, designed to enhance feature matching under adverse weather and dynamic lighting conditions. By incorporating a deep-learning-based point and line matching mechanism, this approach significantly improves robustness in low-visibility scenarios. Experimental results on synthetic and real-world datasets confirm its effectiveness in reducing trajectory drift.
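As a toy stand-in for the learned matcher, the sketch below computes an attention-style soft assignment between two descriptor sets and keeps only mutual best matches; the dissertation's network learns its matching scores with graph attention layers rather than the raw dot products assumed here.

```python
import numpy as np

def soft_match(desc_a, desc_b, temperature=0.1):
    """Toy attention-style matcher between two descriptor arrays.

    Scaled dot products between L2-normalized descriptors are turned into a
    row-wise soft assignment, then pruned to mutual nearest neighbors.
    Returns (index_a, index_b, confidence) triples.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    scores = (a @ b.T) / temperature
    # Row-wise softmax: each feature in A gets a distribution over B.
    probs = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Keep only mutual best matches to suppress ambiguous pairs.
    best_ab = probs.argmax(axis=1)
    best_ba = scores.argmax(axis=0)
    return [(i, j, probs[i, j]) for i, j in enumerate(best_ab) if best_ba[j] == i]
```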
Finally, these methodologies are extended to dynamic NLOS tracking, where a mobile robot estimates the trajectory of an object outside its camera’s field of view using scattered-light information. The proposed approach includes a novel transformer-based NLOS-Patch Network, which extracts geometric priors from relay surfaces and refines object trajectories using an optimization-based inference pipeline. The tracking framework is evaluated on both synthetic and real-world datasets and validated on in-the-wild scenes with a UAV, showing its potential for applications in surveillance, search-and-rescue, and autonomous exploration.
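To give a flavor of the optimization-based refinement stage, the sketch below smooths a noisy per-frame trajectory by least squares with a frame-to-frame smoothness prior. The actual pipeline couples relay-surface priors with the network's outputs and is substantially more involved; this is only an assumed, simplified objective.

```python
import numpy as np

def refine_trajectory(z, smooth_weight=5.0):
    """Least-squares trajectory refinement.

    Fits positions x_t to noisy per-frame estimates z_t (an (n, d) array)
    while penalizing frame-to-frame jumps, i.e. it minimizes
    sum ||x_t - z_t||^2 + w * sum ||x_{t+1} - x_t||^2,
    whose optimality condition is the linear system (I + w * L) x = z,
    with L the Laplacian of the path graph over the n frames.
    """
    n = len(z)
    L = np.zeros((n, n))
    for t in range(n - 1):
        L[t, t] += 1.0
        L[t + 1, t + 1] += 1.0
        L[t, t + 1] -= 1.0
        L[t + 1, t] -= 1.0
    A = np.eye(n) + smooth_weight * L
    return np.linalg.solve(A, z)  # solved independently per coordinate column
```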
Together, these contributions advance the field of robust camera pose estimation by enabling reliable localization in visually challenging scenarios. The proposed techniques pave the way for more resilient robotic perception systems capable of operating in real-world conditions where conventional methods often fail.