Matching Items (14)
Filtering by
- Genre: Doctoral Dissertation

Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework which establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping. In particular, it is established that HRI can be viewed through the lens of recursive filtering in time and space. In turn, this relationship allows one to leverage techniques from an existing, mature field and develop a powerful new formulation which enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through the development of exact and approximate variations of this method, it is shown in this work that it is possible to learn complex real-world interactions in a wide variety of settings, including tasks such as handshaking, cooperative manipulation, catching, hugging, and more.
ContributorsCampbell, Joseph (Author) / Ben Amor, Heni (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Yamane, Katsu (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created2021

Description
Autonomous systems powered by Artificial Neural Networks (NNs) have shown remarkable capabilities in performing complex tasks that are difficult to formally specify. However, ensuring the safety, reliability, and trustworthiness of these NN-based systems remains a significant challenge, especially when they encounter inputs that fall outside the distribution of their training data. In robot learning applications, such as lower-leg prostheses, even well-trained policies can exhibit unsafe behaviors when faced with unforeseen or adversarial inputs, potentially leading to harmful outcomes. Addressing these safety concerns is crucial for the adoption and deployment of autonomous systems in real-world, safety-critical environments. To address these challenges, this dissertation presents a neural network repair framework aimed at enhancing safety in robot learning applications. First, a novel layer-wise repair method utilizing Mixed-Integer Quadratic Programming (MIQP) is introduced that enables targeted adjustments to specific layers of a neural network to satisfy predefined safety constraints without altering the network’s structure. Second, the practical effectiveness of the proposed methods is demonstrated through extensive experiments on safety-critical assistive devices, particularly lower-leg prostheses, to ensure the generation of safe and reliable neural policies. Third, the integration of predictive models is explored to enforce implicit safety constraints, allowing for anticipation and mitigation of unsafe behaviors through a two-step supervised learning approach that combines behavioral cloning with neural network repair. By addressing these areas, this dissertation advances the state-of-the-art in neural network repair for robot learning. The outcome of this work promotes the development of robust and secure autonomous systems capable of operating safely in unpredictable and dynamic real-world environments.
ContributorsMajd, Keyvan (Author) / Ben Amor, Heni (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Srivastava, Siddharth (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2024

Description
Autonomous Vehicles (AVs) have the potential to significantly transform transportation. AVs are expected to make transportation safer by avoiding accidents caused by human error. When AVs become connected, they can exchange information with the infrastructure or with other Connected Autonomous Vehicles (CAVs) to efficiently plan their future motion, thereby increasing road throughput and reducing energy consumption. Cooperative algorithms for CAVs will not be deployed in real life unless they are proven to be safe, robust, and resilient under different failure models. Since intersections are crucial areas where most accidents happen, this dissertation first focuses on making existing intersection management algorithms safe and resilient against network and computation-time delays, bounded model mismatches and external disturbances, and the presence of a rogue vehicle. Then, a generic algorithm for conflict resolution and cooperation of CAVs is proposed that ensures the safety of vehicles even when other vehicles suddenly change their plans. The proposed approach can also detect deadlock situations among CAVs and resolve them through a negotiation process. A testbed consisting of 1/10th-scale model CAVs is built to evaluate the proposed algorithms. In addition, a simulator is developed to perform tests at a large scale. Results from the conducted experiments indicate the robustness and resilience of the proposed approaches.
ContributorsKhayatian, Mohammad (Author) / Shrivastava, Aviral (Thesis advisor) / Fainekos, Georgios (Committee member) / Ben Amor, Heni (Committee member) / Yang, Yezhou (Committee member) / Lou, Yingyan (Committee member) / Iannucci, Bob (Committee member) / Arizona State University (Publisher)
Created2021

Description
A swarm describes a group of interacting agents exhibiting complex collective behaviors. Higher-level behavioral patterns of the group are believed to emerge from simple low-level rules of decision making at the agent level. With the potential application of swarms of aerial drones, underwater robots, and other multi-robot systems, there has been increasing interest in approaches for specifying complex, collective behavior for artificial swarms. Traditional methods for creating artificial multi-agent behaviors inspired by known swarms analyze the underlying dynamics and hand-craft the low-level control logic that constitutes the emergent behaviors. Deep learning methods offer an approach to approximating these behaviors through optimization with little human intervention.
This thesis proposes a graph-based neural network architecture, SwarmNet, for learning the swarming behaviors of multi-agent systems. Given observations of only the trajectories of an expert multi-agent system, SwarmNet is able to learn sensible representations of the internal low-level interactions, in addition to approximating the high-level behaviors and making long-term predictions of the motion of the system. Challenges in scaling SwarmNet, and graph neural networks in general, are discussed in detail, and measures to alleviate the scaling issue in generalization are proposed. Using the trained network as a control policy, it is shown that the combination of imitation learning and reinforcement learning improves the policy more efficiently. To some extent, it is shown that the low-level interactions are successfully identified and separated, and that this separated functionality enables finely controlled custom training.
ContributorsZhou, Siyu (Author) / Ben Amor, Heni (Thesis advisor) / Walker, Sara I (Thesis advisor) / Davies, Paul (Committee member) / Pavlic, Ted (Committee member) / Presse, Steve (Committee member) / Arizona State University (Publisher)
Created2020

Description
A complex social system, whether artificial or natural, can possess macroscopic properties as a collective, which may change in real time as a result of local behavioral interactions among a number of agents in it. If a reliable indicator is available to abstract the macrolevel states, decision makers could use it to take proactive action, whenever needed, so that the entire system avoids unacceptable states or converges to desired ones. In realistic scenarios, however, there can be many challenges in learning a model of dynamic global states from the interactions of agents, such as 1) high complexity of the system itself, 2) absence of holistic perception, 3) variability of group size, 4) biased observations on the state space, and 5) identification of salient behavioral cues. In this dissertation, I introduce useful applications of macrostate estimation in complex multi-agent systems and explore effective deep learning frameworks to address these inherent challenges. First, Remote Teammate Localization (ReTLo) is developed for multi-robot teams, in which an individual robot can use its local interactions with a nearby robot as an information channel to estimate the holistic view of the group. Within this problem, I will show that (a) learning a model of a modular team can generalize to all others to gain global awareness of teams of variable sizes, and (b) active interactions are necessary to diversify training data and speed up the overall learning process. The complexity of the next focal system escalates to a colony of over 50 individual ants undergoing 18 days of social stabilization after a chaotic event. I will utilize this natural platform to demonstrate, in contrast to (b), that (c) monotonic samples only from “before chaos” can be sufficient to model the panicked society, and (d) the model can also be used to discover salient behaviors to precisely predict macrostates.
ContributorsChoi, Taeyeong (Author) / Pavlic, Theodore (Thesis advisor) / Richa, Andrea (Committee member) / Ben Amor, Heni (Committee member) / Yang, Yezhou (Committee member) / Liebig, Juergen (Committee member) / Arizona State University (Publisher)
Created2020

Description
Autonomous systems should satisfy a set of requirements that guarantee their safety, efficiency, and reliability when operating under uncertain circumstances. These requirements can have financial or legal implications, or they can describe the tasks assigned to autonomous systems. As a result, the system controller needs to be designed to comply with these - potentially complicated - requirements, and the closed-loop system needs to be tested and verified against them.
However, as the complexity of the system and its requirements increases, designing a requirement-based controller for the system and analyzing the closed-loop system against the requirements become very challenging. In such cases, existing design and test methodologies based on trial and error fail, and hence disciplined scientific approaches should be considered.
To address some of these challenges, in this dissertation, I present different methods that facilitate efficient testing and control design based on requirements:
1. Gradient-based methods for improved optimization-based testing,
2. Requirement-based learning for the design of neural-network controllers,
3. Methods based on barrier functions for designing control inputs that ensure the satisfaction of safety constraints.
ContributorsYaghoubi, Shakiba (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Committee member) / Bertsekas, Dimitri (Committee member) / Pedrielli, Giulia (Committee member) / Sankaranarayanan, Sriram (Committee member) / Arizona State University (Publisher)
Created2021

Description
Automated driving systems are under intensive research and development, and the companies developing these systems aim to deploy them on public roads in the very near future. Guaranteeing safe operation of these systems is crucial, as they are intended to carry passengers and share the road with other vehicles and pedestrians. Yet, there is no agreed-upon approach on how, and in what detail, those systems should be tested. Different organizations have different testing approaches, and one common approach is to combine simulation-based testing with real-world driving.
One of the expectations from fully-automated vehicles is never to cause an accident. However, an automated vehicle may not be able to avoid all collisions, e.g., the collisions caused by other road occupants. Hence, it is important for the system designers to understand the boundary case scenarios where an autonomous vehicle can no longer avoid a collision. Besides safety, there are other expectations from automated vehicles such as comfortable driving and minimal fuel consumption. All safety and functional expectations from an automated driving system should be captured with a set of system requirements. It is challenging to create requirements that are unambiguous and usable for the design, testing, and evaluation of automated driving systems. Another challenge is to define useful metrics for assessing the testing quality because in general, it is impossible to test every possible scenario.
The goal of this dissertation is to formalize the theory for testing automated vehicles. Various methods for automatic test generation for automated-driving systems in simulation environments are presented and compared. The contributions presented in this dissertation include (i) new metrics that can be used to discover the boundary cases between safe and unsafe driving conditions, (ii) a new approach that combines combinatorial testing and optimization-guided test generation methods, (iii) approaches that utilize global optimization methods and random exploration to generate critical vehicle and pedestrian trajectories for testing purposes, and (iv) a publicly available simulation-based automated vehicle testing framework that enables application of the existing testing approaches in the literature, including the new approaches presented in this dissertation.
ContributorsTuncali, Cumhur Erkan (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Committee member) / Kapinski, James (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2019

Description
Models that learn from data are widely and rapidly being deployed today for real-world use, and have become an integral and embedded part of human lives. While these technological advances are exciting and impactful, such data-driven computer vision systems often fail in inscrutable ways. This dissertation seeks to study and improve the reliability of machine learning models from several perspectives including the development of robust training algorithms to mitigate the risks of such failures, construction of new datasets that provide a new perspective on capabilities of vision models, and the design of evaluation metrics for re-calibrating the perception of performance improvements. I will first address distribution shift in image classification with the following contributions: (1) two methods for improving the robustness of image classifiers to distribution shift by leveraging the classifier's failures into an adversarial data transformation pipeline guided by domain knowledge, (2) an interpolation-based technique for flagging out-of-distribution samples, and (3) an intriguing trade-off between distributional and adversarial robustness resulting from data modification strategies. I will then explore reliability considerations for semantic vision models that learn from both visual and natural language data; I will discuss how logical and semantic sentence transformations affect the performance of vision-language models and my contributions towards developing knowledge-guided learning algorithms to mitigate these failures. Finally, I will describe the effort towards building and evaluating complex reasoning capabilities of vision-language models towards the long-term goal of robust and reliable computer vision models that can communicate, collaborate, and reason with humans.
ContributorsGokhale, Tejas (Author) / Yang, Yezhou (Thesis advisor) / Baral, Chitta (Thesis advisor) / Ben Amor, Heni (Committee member) / Anirudh, Rushil (Committee member) / Arizona State University (Publisher)
Created2023

Description
This dissertation explores the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics. The aim of the research is to enable prosthetics to predict future states and control biomechanical properties in both linear and nonlinear fashions, with a particular focus on ergonomics.
The research is motivated by the need to provide amputees with prosthetic devices that not only replicate the functionality of the missing limb, but also offer a high level of comfort and usability. Traditional prosthetic devices lack the sophistication to adjust to a user’s movement patterns and can cause discomfort and pain over time. The proposed solution involves the development of machine learning-based controllers that can learn from user movements and adjust the prosthetic device’s movements accordingly.
The research involves a combination of simulation and real-world testing to evaluate the effectiveness of the proposed approach. The simulation involves the creation of a model of the prosthetic device and the use of machine learning algorithms to train controllers that predict future states and control biomechanical properties. The real-world testing involves the use of human subjects wearing the prosthetic device to evaluate its performance and usability.
The research focuses on two main areas: the prediction of future states and the control of biomechanical properties. The prediction of future states involves the development of machine learning algorithms that can analyze a user’s movements and predict the next movements with a high degree of accuracy. The control of biomechanical properties involves the development of algorithms that can adjust the prosthetic device’s movements to ensure maximum comfort and usability for the user.
The results of the research show that the use of artificial intelligence and machine learning techniques can significantly improve the performance and usability of prosthetic devices. The machine learning-based controllers developed in this research are capable of predicting future states and adjusting the prosthetic device’s movements in real time, leading to a significant improvement in ergonomics and usability. Overall, this dissertation provides a comprehensive analysis of the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics.
ContributorsClark, Geoffrey M (Author) / Ben Amor, Heni (Thesis advisor) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Ward, Jeffrey (Committee member) / Arizona State University (Publisher)
Created2023

Description
Robot learning aims to enable robots to acquire new skills and adapt to their environment through advanced learning algorithms. As embodiments of AI, robots continue to face the challenges of precisely estimating their state across varied environments and of executing actions based on these state estimates. Although many approaches focus on developing end-to-end models and policies, they often lack explainability and do not effectively integrate algorithmic priors to understand the underlying robot models. This thesis addresses the challenges of robot learning through the application of state-space models, demonstrating their efficacy in representing a wide range of robotic systems within a differentiable Bayesian framework that integrates states, observations, and actions. It establishes that foundational state-space models possess the adaptability to be learned through data-driven approaches, enabling robots to accurately estimate their states from environmental interactions and to use these estimated states to execute more complex tasks. Additionally, the thesis shows that state-space modeling can be effectively applied in multimodal settings by learning latent state representations for sensor fusion. Furthermore, it demonstrates that state-space models can be utilized to impose conditions on robot policy networks, thereby enhancing their performance and consistency. The practical implications of deep state-space models are evaluated across a variety of robot manipulation tasks in both simulated and real-world environments, including pick-and-place operations and manipulation in dynamic contexts. The state estimation methods are also applied to soft robot systems, which present significant modeling challenges. In the final part, the thesis discusses the connection between robot learning and foundation models, exploring whether state-space agents based on large language models (LLMs) serve as a more conducive reasoning framework for robot learning. It further explores the use of foundation models to enhance data quality, demonstrating improved success rates for robot policy networks with enriched task context.
ContributorsLiu, Xiao (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Seifi, Hasti (Committee member) / Zhang, Wenlong (Committee member) / Ikemoto, Shuhei (Committee member) / Arizona State University (Publisher)
Created2024