Matching Items (22)
Description

Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming. Path-planners are represented as Python code, which is optimized via Pareto evolution. These planners are encouraged to explore curiously and efficiently. This research asks the questions: “How can robots exhibit life-long learning where they adapt to changing environments in a robust way?”, and “How can robots learn to be curious?”.
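
The abstract does not reproduce the planner code itself; as a rough illustration of the Pareto-evolution step it describes, the sketch below scores hypothetical candidate planner programs on two assumed objectives (path efficiency and curiosity) and keeps only the non-dominated set. All names and numbers here are invented for illustration, not taken from the thesis.

```python
# Hypothetical sketch of a Pareto-selection step for evolving path planners:
# candidates are scored on two objectives and only non-dominated ones survive.
from dataclasses import dataclass

@dataclass
class Candidate:
    program: str          # path-planner represented as Python source code
    efficiency: float     # higher is better (e.g., shorter paths found)
    curiosity: float      # higher is better (e.g., novel states visited)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on both objectives and better on one."""
    return (a.efficiency >= b.efficiency and a.curiosity >= b.curiosity
            and (a.efficiency > b.efficiency or a.curiosity > b.curiosity))

def pareto_front(population: list[Candidate]) -> list[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in population
            if not any(dominates(o, c) for o in population if o is not c)]

# One generation's survivors; mutation and crossover of the surviving programs
# would follow in a full genetic-programming loop.
population = [
    Candidate("planner_a", efficiency=0.9, curiosity=0.2),
    Candidate("planner_b", efficiency=0.5, curiosity=0.8),
    Candidate("planner_c", efficiency=0.4, curiosity=0.1),
]
print([c.program for c in pareto_front(population)])  # planner_a, planner_b
```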

Contributors Saldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created 2021-05
Description
Not enough students are earning bachelor’s degrees in Computer Science, which is shocking as computing jobs are growing by the thousands (Zampa, 2016). These jobs have high salaries and are not going to disappear any time soon, which is why the falling number of computer science graduates is alarming. The working hypothesis on why so few college students major in computer science is that most think it is too hard to learn (Wang, 2017). I believe the real reason is that computer science is not taught as a subject before university, which is too late for most students: by ages 12 to 13 (about seventh to eighth grade) they have already decided that computer science concepts are “too difficult” for them to learn (Learning, 2022). Introducing computer science education at an earlier age can possibly circumvent this development, in which students begin to lose confidence and doubt their ability to learn computer science. This can be done by integrating computer science into academic subjects already taught in elementary schools, such as science, math, and language arts, since computer science draws on logic, syntax, and other broadly applicable skills. Thus, I have created an introductory lesson plan for an elementary school class that pairs learning to code with robotics, in order to promote computer science principles and dispel the idea that the subject is “too hard” to learn in university.
Contributors Wong, Erika (Author) / Hedges, Craig (Thesis director) / Fischer, Adelheid (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created 2023-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or disrupt processes and daily operations. One of the most important parts is being able to predict and anticipate failures in the system, in order to make sure that those are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system that is designed to meet all these criteria and more. The EPMS comprises a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set in order to encompass all normal ranges of output. If the sensor data falls outside the margins, a warning and the deviation are recorded and a severity level is calculated. In addition to the per-component models, there is also a vehicle-wide model, which predicts how urgently the vehicle needs maintenance. All of these results are analyzed by a simple heuristic algorithm, and a decision is made about the vehicle's health status, which is sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that determines maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used on a variety of different vehicle platforms, including autonomous underwater and aerial vehicles.
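
As a hedged illustration of the margin-check and severity logic described above (not the thesis's actual implementation), the following Python sketch compares sensor readings against invented operating margins, computes a normalized severity for out-of-range values, and applies a simple heuristic roll-up to a health status. Component names, bounds, and thresholds are assumptions for illustration only.

```python
# Illustrative margin-check logic: each component's reading is compared to
# pre-set operating bounds, deviations yield a severity score, and a simple
# heuristic rolls the results up into a vehicle health status.
OPERATING_MARGINS = {          # hypothetical example bounds per component
    "battery_temp_c": (10.0, 45.0),
    "oil_pressure_kpa": (170.0, 550.0),
}

def check_component(name: str, reading: float) -> float:
    """Return 0.0 if the reading is within margins, else a severity score
    proportional to how far the reading falls outside the allowed range."""
    low, high = OPERATING_MARGINS[name]
    if low <= reading <= high:
        return 0.0
    deviation = (low - reading) if reading < low else (reading - high)
    return deviation / (high - low)   # normalized severity

def vehicle_health(readings: dict[str, float]) -> str:
    """Heuristic roll-up: the worst component severity decides the status."""
    worst = max(check_component(n, r) for n, r in readings.items())
    if worst == 0.0:
        return "OK"
    return "SCHEDULE_MAINTENANCE" if worst < 0.25 else "NEEDS_SERVICE"

print(vehicle_health({"battery_temp_c": 52.0, "oil_pressure_kpa": 300.0}))
```
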
Contributors Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created 2016-05
Description
Education in computer science is a difficult endeavor, with learning a new programming language being a barrier to entry, especially for college freshmen and high school students. Learning a first programming language requires understanding the syntax of the language, the algorithms to use, and any additional complexities the language carries. Oftentimes this becomes a deterrent to learning computer science at all. Especially in high school, students may not want to spend a year or more simply learning the syntax of a programming language. In order to overcome these issues, as well as to mitigate the issues caused by Microsoft discontinuing their Visual Programming Language (VPL), we have decided to implement a new VPL, ASU-VPL, based on Microsoft's VPL. ASU-VPL provides an environment where users can focus on algorithms and worry less about syntactic issues. ASU-VPL was built with the concepts of Robot as a Service and workflow-based development in mind. As such, ASU-VPL is designed with the intention of allowing web services to be added to the toolbox (e.g., WSDL and REST services). ASU-VPL has strong support for multithreaded operations, including event-driven development, and is built with Microsoft VPL users in mind. It provides support for many different robots, including Lego's third-generation robots, i.e., the EV3, and any open-platform robots. To demonstrate the capabilities of ASU-VPL, this paper details the creation of an Intel Edison-based robot and the use of ASU-VPL for programming both the Intel-based robot and an EV3 robot. This paper will also discuss differences between ASU-VPL and Microsoft VPL, as well as differences between developing for the EV3 and for an open-platform robot.
Contributors De Luca, Gennaro (Author) / Chen, Yinong (Thesis director) / Cheng, Calvin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created 2015-12
Description
Robots programmed to follow human commands have struggled to accurately interpret human intentions down to the motion level. There has been growing research in robotics to integrate descriptive natural language commands into robot actions. Existing correction methods often rely on less intuitive approaches like direct manipulation, requiring full user attention. To address this challenge, I present a framework that uses natural language adverbial corrections to bridge the gap between human intent and robotic execution. Integrating these natural language corrections with robot demonstrations allows for iterative policy adjustments without sacrificing performance. This leverages the flexibility of natural language to convey complex goals, potentially surpassing the limitations of reward functions and expert demonstrations. Formulating the problem with parameterized robot trajectories, the method was tested in two robotic domains: a 7-degree-of-freedom Franka robotic arm and a synthetic 1-degree-of-freedom arm performing reaching and waving tasks. The action generation model employs a regressive transformer and an action chunking approach. The experiments demonstrate that robots can modulate their trajectories according to adverbial instructions such as "move faster" (86% of the time) and "move slower" (79% of the time). This enhances human-robot collaboration by enabling robots to understand and execute tasks with language-informed precision, potentially improving assistive robotics and making human-robot interactions safer.
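
A minimal sketch of the kind of trajectory modulation being measured, assuming a simple lookup table of speed scales stands in for the learned transformer mapping; the adverbs and scaling factors below are illustrative only and are not taken from the thesis.

```python
# Hypothetical re-timing of a parameterized trajectory in response to the
# adverbial corrections "move faster" / "move slower".
import numpy as np

ADVERB_SPEED_SCALE = {"move faster": 1.5, "move slower": 0.67}  # assumed values

def retime(timestamps: np.ndarray, correction: str) -> np.ndarray:
    """Scale waypoint timing: faster = shorter duration, slower = longer."""
    return timestamps / ADVERB_SPEED_SCALE[correction]

t = np.linspace(0.0, 4.0, 5)          # waypoint times of a reaching motion
print(retime(t, "move faster"))       # same waypoints, reached sooner
```
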
Contributors Kondepudi, Naga Suresh Krishna (Author) / Gopalan, Nakul (Thesis advisor) / Zhang, Wenlong (Committee member) / Senanayake, Ransalu (Committee member) / Arizona State University (Publisher)
Created 2024
Description
Robots are increasingly integrated into our daily routines, yet their current tasks are mostly specific and pre-programmed, lacking flexibility and scalability. In anticipation of a future where robots will handle diverse household chores, from basic tasks like picking and placing items to more complex activities such as cooking, there is a critical need for them to master long-term planning and motion challenges. Current methods addressing this demand typically rely on manually crafted abstractions and expert-guided task planning. This work takes a novel approach: developing strategies to learn relational abstractions directly from raw trajectory data. These learned abstractions are then used to invent symbolic vocabularies and action models. The learned action models are then used to solve complex long-horizon task and motion planning problems that are not seen in the training demonstrations. The results show that the approach is robust and capable of learning the model from just a few demonstrations. Additionally, this work discusses an interactive AI platform aimed at making advanced robot planning accessible to users without extensive computer science backgrounds. Such platforms play a crucial role as AI and robotics increasingly intertwine with everyday life, offering intuitive interfaces that teach users the basics of robot planning.
Contributors Nagpal, Jayesh (Author) / Srivastava, Siddharth (Thesis advisor) / Pedrielli, Giulia (Committee member) / Gopalan, Nakul (Committee member) / Arizona State University (Publisher)
Created 2024
Description
Bimanual robot manipulation, involving the coordinated control of two robot arms, holds great promise for enhancing the dexterity and efficiency of robotic systems across a wide range of applications, from manufacturing and healthcare to household chores and logistics. However, enabling robots to perform complex bimanual tasks with the same level of skill and adaptability as humans remains a challenging problem. The control of a bimanual robot can be tackled through various methods, such as an inverse dynamics controller or reinforcement learning, but each of these methods has its own problems. An inverse dynamics controller cannot adapt to a changing environment, whereas reinforcement learning is computationally intensive and may require weeks of training for even simple tasks, and reward formulation for reinforcement learning is often challenging and still an open research topic. Imitation learning leverages human demonstrations to enable robots to acquire the skills necessary for complex tasks; it can be highly sample-efficient and reduces exploration. Given these advantages, we explore the application of imitation learning techniques to bridge the gap between human expertise and robotic dexterity in the context of bimanual manipulation. In this thesis, an examination of the Implicit Behavioral Cloning imitation learning algorithm is conducted. Implicit Behavioral Cloning aims to capture the fundamental behavior or policy of the expert by utilizing energy-based models, which frequently demonstrate superior performance compared to explicit behavior cloning policies. The assessment encompasses an investigation of the impact of the quality of expert demonstrations on the efficacy of the acquired policies. Furthermore, computational and performance metrics of diverse training and inference techniques for energy-based models are compared.
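
To illustrate the inference step that distinguishes implicit from explicit behavioral cloning, the sketch below minimizes an assumed energy function over sampled candidate actions. The quadratic "energy" is only a stand-in for a trained energy-based model, and the derivative-free sampler is one of several possible inference schemes; none of this is the thesis's code.

```python
# Minimal sketch of implicit-policy inference: instead of predicting an action
# directly, a learned energy function E(observation, action) is minimized over
# candidate actions.
import numpy as np

def energy(observation: np.ndarray, action: np.ndarray) -> float:
    # Placeholder energy; a real implementation would be a trained network.
    return float(np.sum((action - 0.5 * observation) ** 2))

def infer_action(observation: np.ndarray, action_dim: int = 2,
                 num_samples: int = 1024, seed: int = 0) -> np.ndarray:
    """Derivative-free inference: sample candidate actions uniformly and
    return the one with the lowest energy for this observation."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(num_samples, action_dim))
    energies = [energy(observation, a) for a in candidates]
    return candidates[int(np.argmin(energies))]

obs = np.array([0.4, -0.2])
print(infer_action(obs))  # should land near 0.5 * obs = [0.2, -0.1]
```
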
Contributors Rayavarapu, Ravi Swaroop (Author) / Ben Amor, Heni (Thesis advisor) / Gopalan, Nakul (Committee member) / Senanayake, Ransalu (Committee member) / Arizona State University (Publisher)
Created 2023
Description
As robots become increasingly integrated into our environments, they need to learn how to interact with the objects around them. Many of these objects are articulated with multiple degrees of freedom (DoF). Multi-DoF objects have complex joints that require specific manipulation orders, but existing methods only consider objects with a single joint. To capture the joint structure and manipulation sequence of any object, I introduce "Object Kinematic State Machines" (OKSMs), a novel representation that models the kinematic constraints and manipulation sequences of multi-DoF objects. I also present Pokenet, a deep neural network architecture that estimates OKSMs from sequences of point cloud data from human demonstrations. I conduct experiments on both simulated and real-world datasets to validate my approach. First, I evaluate the modeling of multi-DoF objects on a simulated dataset, comparing against the current state-of-the-art method. I then assess Pokenet's real-world usability on a dataset collected in my lab, comprising 5,500 data points across 4 objects. Results show that my method can successfully estimate joint parameters of novel multi-DoF objects with over 25% more accuracy on average than prior methods.
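
The OKSM representation itself is the thesis's contribution and is not reproduced here; the sketch below is only a hypothetical illustration of the underlying idea, with an invented cabinet example whose joints must be manipulated in a particular order.

```python
# Toy illustration of an object kinematic state machine: states are joint
# configurations of a multi-DoF object, and transitions encode which joint
# may be manipulated next. Object, states, and ordering are invented.
from typing import NamedTuple

class Transition(NamedTuple):
    joint: str          # which joint is actuated
    joint_type: str     # e.g. "prismatic" or "revolute"
    next_state: str

# Example: a cabinet whose door must be opened before the inner drawer moves.
OKSM_SKETCH = {
    "closed":     [Transition("door_hinge", "revolute", "door_open")],
    "door_open":  [Transition("drawer_slide", "prismatic", "drawer_out"),
                   Transition("door_hinge", "revolute", "closed")],
    "drawer_out": [Transition("drawer_slide", "prismatic", "door_open")],
}

def allowed_joints(state: str) -> list[str]:
    """Joints that may legally be manipulated from the given state."""
    return [t.joint for t in OKSM_SKETCH[state]]

print(allowed_joints("closed"))      # ['door_hinge']
print(allowed_joints("door_open"))   # ['drawer_slide', 'door_hinge']
```
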
Contributors Gupta, Anmol (Author) / Gopalan, Nakul (Thesis advisor) / Zhang, Yu (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created 2024
Description
Manipulator motion planning has conventionally been solved using sampling- and optimization-based algorithms that are agnostic to embodiment and environment configurations. However, these algorithms plan on a fixed environment representation approximated using shape primitives, and hence struggle to find solutions for cluttered and dynamic environments. Furthermore, these algorithms fail to produce solutions for complex unstructured environments under real-time bounds. Neural Motion Planners (NMPs) are an appealing alternative to algorithmic approaches as they can leverage parallel computing for planning while incorporating arbitrary environmental constraints directly from raw sensor observations. Contemporary NMPs successfully transfer to different environment variations; however, they fail to generalize across embodiments. This thesis proposes "AnyNMP", a generalist motion planning policy for zero-shot transfer across different robotic manipulators and environments. The policy is conditioned on a semantically segmented 3D point cloud representation of the workspace, thus enabling implicit sim2real transfer. In the proposed approach, templates are formulated for manipulator kinematics and ground truth motion plans are collected for over 3 million procedurally sampled robots in randomized environments. The planning pipeline consists of a state validation model for differentiable collision detection and a sampling-based planner for motion generation. AnyNMP has been validated on 5 different commercially available manipulators and showcases successful cross-embodiment planning, achieving an 80% average success rate on baseline benchmarks.
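
As a hedged sketch of the two-stage pipeline described above, the code below pairs a stand-in state-validity function (in place of the learned differentiable collision model) with a tiny sampling-based planning routine. The toy scene, thresholds, and function names are assumptions for illustration, not the thesis's implementation.

```python
# A learned validity model screens configurations; a sampling-based routine
# uses it to find a collision-free path, here via a single sampled waypoint.
import numpy as np

def state_valid(q: np.ndarray) -> bool:
    # Stand-in for a learned validity/collision model over configurations.
    return bool(np.linalg.norm(q) > 0.3)     # toy obstacle near the origin

def edge_valid(a: np.ndarray, b: np.ndarray, resolution: int = 20) -> bool:
    """Check a straight-line motion by validating interpolated configurations."""
    return all(state_valid(a + t * (b - a))
               for t in np.linspace(0.0, 1.0, resolution))

def plan_via_waypoint(start, goal, rng, max_samples: int = 1000):
    """Tiny sampling-based planner: if the direct edge is blocked, sample
    intermediate waypoints until one yields two valid edges."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    if edge_valid(start, goal):
        return [start, goal]
    for _ in range(max_samples):
        w = rng.uniform(-1.0, 1.0, size=start.shape)
        if state_valid(w) and edge_valid(start, w) and edge_valid(w, goal):
            return [start, w, goal]
    return None   # no plan found within the sampling budget

rng = np.random.default_rng(0)
print(plan_via_waypoint([-0.9, -0.9], [0.9, 0.9], rng))
```
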
Contributors Rath, Prabin Kumar (Author) / Gopalan, Nakul (Thesis advisor) / Yu, Hongbin (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created 2024
Description
Learning longer-horizon tasks is challenging with techniques such as reinforcement learning and behavior cloning. Previous approaches have split these long tasks into shorter tasks that are easier to learn by using statistical change point detection methods. However, classical changepoint detection methods function only with low-dimensional robot trajectory data and not with high-dimensional inputs such as vision. In this thesis, I split long-horizon tasks, represented by trajectories, into short-horizon sub-tasks with the supervision of language. These shorter-horizon tasks can be learned using conventional behavior cloning approaches. I draw comparisons between techniques from the video moment retrieval problem and changepoint detection on high-dimensional robot trajectory data. The proposed moment retrieval-based approach shows a more than 30% improvement in mean average precision (mAP) for identifying trajectory sub-tasks with language guidance compared to without language. Several ablations are performed to understand the effects of domain randomization, sample complexity, views, and sim-to-real transfer of this method. The data ablation shows that with just 100 labeled trajectories, a 42.01 mAP can be achieved, demonstrating the sample efficiency of this approach. Further, behavior cloning models trained on the segmented trajectories outperform a single model trained on the whole trajectory by up to 20%.
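
For readers unfamiliar with the moment-retrieval framing, the short sketch below shows temporal IoU, the standard overlap measure used when matching a predicted sub-task segment to ground truth before computing mAP; the segments and instruction are illustrative, and this is not the thesis's evaluation code.

```python
# Temporal IoU between a predicted and a ground-truth (start, end) segment.
def temporal_iou(pred: tuple[int, int], gt: tuple[int, int]) -> float:
    """Overlap / union of two index ranges within a trajectory."""
    inter = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# e.g. segment predicted for the instruction "pick up the block"
print(temporal_iou((10, 50), (15, 55)))  # ~0.78
```
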
Contributors Raj, Divyanshu (Author) / Gopalan, Nakul (Thesis advisor) / Baral, Chitta (Committee member) / Senanayake, Ransalu (Committee member) / Arizona State University (Publisher)
Created 2024