Description
Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), in which software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores the cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture the context-driven spatio-temporal aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or in more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes.
Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties.
ContributorsBanerjee, Ayan (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Poovendran, Radha (Committee member) / Fainekos, Georgios (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created2012
Description
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical devices, is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of humans-in-the-loop, and environmental changes result in a myriad of safety concerns, not all of which can be comprehensively tested before deployment, and some of which may not even be detected during the design and testing phases. This dissertation identifies major challenges of safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems:
1. Given input/output operational traces collected from sensors/actuators, automatically learn a hybrid automata (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning hybrid automata models from time-series traces collected from the operation of the AI-enabled CPS in the real world, for both linear and non-linear CPS. The learned model allows operational safety to be stringently evaluated by comparing the learned HA model against a reference specification model of the system. The proposed techniques are evaluated on the artificial pancreas control system.
ContributorsLamrani, Imane (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Zhang, Yi (Committee member) / Runger, George C. (Committee member) / Rodriguez, Armando (Committee member) / Arizona State University (Publisher)
Created2020
Description
Power systems are transforming into more complex and stressed systems each day. These stressed conditions can lead to a slow decline in the power grid's voltage profile and sometimes to a partial or total blackout. This phenomenon can be identified either by solving a power flow problem or by using measurement-based real-time monitoring algorithms. The first part of this thesis focuses on proposing a robust power flow algorithm for ill-conditioned systems. While preserving the stable nature of the fixed point (FP) method, a novel distributed FP equation is proposed to calculate the voltage at each bus. The proposed algorithm's performance is compared with existing methods, showing that the proposed method can correctly find the solutions when other methods fail due to high-condition-number matrices. It is also empirically shown that the FP algorithm is more robust to bad initialization points. The second part of this thesis focuses on identifying the voltage instability phenomenon using real-time monitoring algorithms. This work proposes a novel distributed measurement-based monitoring technique called the voltage stability index (VSI). With the help of phasor measurement units (PMUs) and communication of voltage phasors between neighboring buses, the processors embedded at each bus in the smart grid perform simultaneous online computations of VSI. VSI enables real-time identification of the system's critical bus with minimal communication infrastructure. Its benefits include interpretability, fast computation, and low sensitivity to noisy measurements. Furthermore, this work proposes the "local static voltage stability index" (LS-VSI), which removes the minimal communication requirement in VSI by requiring only one PMU at the bus of interest. LS-VSI also solves the issue of Thevenin equivalent parameter estimation in the presence of noisy measurements.
Unlike VSI, LS-VSI incorporates ZIP load models and load tap changers (LTCs) and successfully identifies the bifurcation point, accounting for the impact of ZIP loads on voltage stability. Both VSI and LS-VSI are useful for monitoring voltage stability margins in real time using PMU measurements from the field. However, they cannot indicate the onset of voltage emergency situations. A third index, LD-VSI, uses the dynamic measurements of the power system to identify the onset of a voltage emergency situation with an alarm. Compared to existing methods, it is shown to be more robust to PMU measurement noise and able to identify the voltage collapse point, where existing methods struggle.
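The thesis's distributed FP equation is not reproduced in this abstract; as background, a conventional fixed-point (Z-bus style) iteration for a toy two-bus feeder, written under the standard formulation rather than the proposed distributed variant, can be sketched as:

```python
import numpy as np

def fixed_point_power_flow(v_slack, z_line, s_load, tol=1e-10, max_iter=100):
    """Standard fixed-point (Z-bus style) iteration for a two-bus feeder:
    the load-bus voltage satisfies V = V_slack + Z * conj(S_load / V),
    where S_load is the complex power injection (negative for a load)."""
    v = v_slack + 0j  # flat start, the robustness of which the thesis studies
    for _ in range(max_iter):
        v_new = v_slack + z_line * np.conj(s_load / v)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```

For a lightly loaded line this iteration is a contraction and converges from a flat start; the ill-conditioned cases the thesis targets are precisely those where such classical formulations degrade.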
ContributorsGuddanti, Kishan Prudhvi (Author) / Weng, Yang (Thesis advisor) / Banerjee, Ayan (Committee member) / Zhang, Baosen (Committee member) / Vittal, Vijay (Committee member) / Arizona State University (Publisher)
Created2021
Description
In recent years, brain signals have gained attention as a potential trait for biometric security systems, and laboratory systems have been designed around them. A real-world brain-based security system must be usable, accurate, and robust. While there have been developments in these aspects, challenges remain. With regard to usability, users need to provide lengthier recordings of data than for other traits such as fingerprints and faces to get authenticated. Furthermore, the majority of works use medical-grade sensors, which are more accurate than commercial ones but have a tedious setup process and are not mobile. Performance-wise, the current state of the art can provide acceptable accuracy on a small pool of users whose data is collected in a few sessions close to each other, but it still falls behind on a large pool of subjects over a longer time period. Finally, a brain security system should be robust against presentation attacks that would let adversaries gain access to the system. This dissertation proposes E-BIAS (EEG-based Identification and Authentication System), a brain-mobile security system that makes contributions in three directions. First, it provides high performance on signals of shorter lengths collected by commercial sensors and processed with lightweight models that meet the computation/energy capacity of mobile devices. Second, to evaluate the system's robustness, a novel presentation attack was designed that challenged the literature's presumption of an intrinsic liveness property for brain signals. Third, to bridge the gap, I formulated and studied the brain liveness problem and proposed two solution approaches (model-aware and model-agnostic) to ensure liveness and enhance robustness against presentation attacks. Under each of the two solution approaches, several methods were suggested and evaluated against both synthetic and manipulative classes of attacks (a total of 43 different attack vectors).
Methods in both the model-aware and model-agnostic approaches were successful in achieving an error rate of zero (0%). More importantly, such error rates were reached in the face of unseen attacks, which provides evidence of the generalization potential of the proposed solution approaches and methods. I suggested an adversarial workflow to facilitate attack and defense cycles, allowing enhanced generalization capacity for domains in which the decision-making process is non-deterministic, such as cyber-physical systems (e.g., biometric/medical monitoring, autonomous machines, etc.). I utilized this workflow for the brain liveness problem and was able to iteratively improve the performance of both the designed attacks and the proposed liveness detection methods.
ContributorsSohankar Esfahani, Mohammad Javad (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Santello, Marco (Committee member) / Dasgupta, Partha (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created2021
Description
The advancement of Artificial Intelligence (AI) and Deep Learning networks has opened doors to many applications, especially in the computer vision domain. Image-based techniques now perform at the human level or above, thanks to millions of images available for training and GPU-based computing power. One challenge with these modern approaches is their reliance on vast labeled datasets and the prohibitive cost of acquiring suitable datasets for training. Although techniques like transfer learning allow models to be fine-tuned with much smaller datasets, data collection remains a tedious and costly task for many applications. Another challenge is the wide-ranging deployment of such AI systems in human-facing applications and the black-box nature of current deep learning techniques. There is a need for greater transparency and for designing systems with explainability in mind. Given the enormous impact AI may have on human lives and livelihoods, AI systems need to develop trust with their human users and provide adequate feedback. Considering these inherent challenges, previous work in this research focused on the specific case of gestural language, particularly American Sign Language (ASL). With most of the industry's interest directed at wide public-facing applications or topics like autonomous cars and large language models (LLMs), there is a need to design frameworks and advance fundamental research for communities and applications that are vastly underserved. One such community is the Deaf and Hard of Hearing (DHH), which uses gestural languages like ASL to communicate. ASL datasets tend to be limited and expensive to collect, even though the language is complex and used by millions worldwide.
This dissertation presents a gesture comprehension framework that decomposes gestures into conceptual semantic trees, enabling the incorporation of human-level concepts and semantic rules to improve recognition tasks with limited data, synthesize new concepts, and enhance explainability. The framework is evaluated through zero-shot recognition of previously unseen gestures, automated feedback generation for ASL learners, and testing against military, aviation, human activity, ASL, and other gestural language datasets. The results show improved accuracy over some state-of-the-art methods without large datasets, while facilitating human-level concepts, recognizing unseen examples, generating understandable feedback, and enhancing explainability.
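The zero-shot idea behind the decomposition can be illustrated with a deliberately simplified attribute-matching sketch; the gesture definitions, attribute names, and functions below are invented for illustration and are not the dissertation's actual concept trees:

```python
# Hypothetical attribute vocabulary; a real system would predict these
# attributes from video with trained per-attribute classifiers.
GESTURE_DEFS = {
    "HELLO": {"handshape:flat", "location:forehead", "movement:outward"},
    "THANK-YOU": {"handshape:flat", "location:chin", "movement:outward"},
    "YES": {"handshape:fist", "location:neutral", "movement:nod"},
}

def zero_shot_match(predicted_attrs, defs=GESTURE_DEFS):
    """Rank gesture classes by Jaccard overlap between the predicted
    semantic attributes and each class definition; an unseen class needs
    only a definition, not training examples."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(defs, key=lambda g: jaccard(predicted_attrs, defs[g]))
```

The key design point is that recognition of a gesture never seen in training reduces to matching its predicted concepts against a symbolic definition, which is also what makes the decisions explainable.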
ContributorsKamzin, Azamat (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Lee, Kookjin (Committee member) / Paudyal, Prajwal (Committee member) / Arizona State University (Publisher)
Created2024
Description
To optimize solar cell performance, it is necessary to properly design the doping profile in the absorber layer of the solar cell. For CdTe solar cells, Cu is used for providing p-type doping. Hence, an estimator that, given the diffusion parameter set (time and temperature) and the doping concentration at the junction, gives the junction depth of the absorber layer is essential in the design process of CdTe solar cells (and other cell technologies). In this work, this is called the forward (direct) estimation process. The backward (inverse) problem is then the one in which, given the junction depth and the desired concentration of Cu doping at the CdTe/CdS heterointerface, the estimator gives the time and/or the temperature needed to achieve the desired doping profiles; this is called the backward (inverse) estimation process. Such estimators, both forward and backward, do not exist in the literature for solar cell technology. To train the machine learning (ML) estimator, it is necessary to first generate a large dataset using the PVRD-FASP Solver, which has been validated via comparison with experimental values. Note that this big dataset needs to be generated only once. Next, one uses machine learning, deep learning (DL), and artificial intelligence (AI) to extract the actual Cu doping profiles that result from the processes of diffusion, annealing, and cool-down in the fabrication sequence of CdTe solar cells. Two deep learning neural network models are used to predict the Cu doping profiles for different temperatures and durations of the annealing process: (1) a Multilayer Perceptron Artificial Neural Network (MLPANN) model using the Keras Application Programmable Interface (API) with a TensorFlow backend, and (2) a Radial Basis Function Network (RBFN) model. Excellent agreement between the simulated results obtained with the PVRD-FASP Solver and the predicted values is obtained.
It is important to mention that generating the Cu doping profiles from the initial conditions with the PVRD-FASP Solver takes a significant amount of time, because solving the drift-diffusion-reaction model is mathematically a stiff problem and leads to numerical instabilities if the time steps are not small enough, which, in turn, affects the time needed to complete one simulation run. Generation of the same profiles with ML is almost instantaneous, so the estimator can serve as an excellent simulation tool to guide future fabrication of optimal doping profiles in CdTe solar cells.
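To make the second model type concrete, a minimal RBFN regressor, fit in closed form by least squares on synthetic data, is sketched below; the class, the toy inputs, and the target function are assumptions for illustration, not the thesis's trained model:

```python
import numpy as np

class RBFN:
    """Minimal radial basis function network: Gaussian features centered
    at fixed points, output weights fit in closed form by least squares."""
    def __init__(self, centers, gamma=1.0):
        self.centers = np.asarray(centers, dtype=float)
        self.gamma = gamma

    def _features(self, X):
        # Squared distances from each sample to each center -> Gaussian features.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        Phi = self._features(np.asarray(X, dtype=float))
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._features(np.asarray(X, dtype=float)) @ self.w
```

With the training points themselves as centers, the fit interpolates the data; an application-shaped usage would map normalized (temperature, time) pairs to a junction-depth-like quantity.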
ContributorsSalman, Ghaith (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M. (Thesis advisor) / Ringhofer, Christian (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created2021
Description
The significance of visual gesture recognition is growing in our digital era, particularly in human-computer interactions (HCI) that utilize hand gestures. It plays a vital role in ubiquitous HCI applications like sign language recognition, monitoring hand hygiene practices, and gesture-based smart home interfaces. These applications often rely on supervised machine learning algorithms, trained on labeled data, to continuously recognize hand gestures. However, accurately segmenting static or dynamic gestures and reliably detecting hand gestures within a continuous stream remains challenging, especially in real-world testing scenarios. Challenges include the complexity of background noise, varying speeds of hand gestures, and co-articulation, all of which can hinder continuous hand gesture recognition. This dissertation presents a novel approach for enhancing cross-domain gesture recognition performance in deep learning architectures through the Grammar-Driven Machine Learning (GramML) framework. The focus is on meticulously identifying frames corresponding to specific gestures within continuous signing streams, based on key characteristics such as hand morphology, spatial positioning, and dynamic movement patterns. The GramML method utilizes a predefined syntactic structure of tokens to capture spatio-temporal features that closely align with the semantic meaning of individual hand gestures. The effectiveness of this approach is evaluated through an analysis of performance degradation in an Inflated 3D ConvNet (I3D) model under varying data distributions. Furthermore, the study underscores the importance of robust classification methodologies in practical scenarios, exemplified by the validation of gesture sequence compliance, such as hand-washing routines.
By integrating GramML into deep learning architectures, this research aims to enhance the reliability, adaptability, and compliance of gesture recognition systems across diverse sign language contexts.
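The hand-washing compliance example suggests how a token grammar can validate a recognized gesture sequence. The sketch below uses an invented token alphabet and a regular-expression grammar, a far simpler formalism than GramML's syntactic structure, purely to show the validation step:

```python
import re

# Hypothetical token alphabet for a hand-washing routine; a real GramML
# grammar would carry richer spatial/temporal constraints per token.
TOKEN_CODES = {"WET": "W", "SOAP": "S", "SCRUB": "R", "RINSE": "N", "DRY": "D"}

# Required order: wet, soap, one or more scrubs, rinse, dry.
ROUTINE = re.compile(r"WSR+ND")

def is_compliant(tokens):
    """Check a recognized gesture-token sequence against the routine grammar."""
    encoded = "".join(TOKEN_CODES[t] for t in tokens)
    return bool(ROUTINE.fullmatch(encoded))
```

A sequence that skips the soap step fails the grammar even if every individual gesture was recognized correctly, which is exactly the compliance signal a frame-level classifier alone cannot provide.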
ContributorsAmperayani, Venkata Naga Sai Apurupa (Author) / Gupta, Sandeep K S (Thesis advisor) / Banerjee, Ayan (Committee member) / Yang, Yezhou (Committee member) / Paudyal, Prajwal (Committee member) / Arizona State University (Publisher)
Created2024
Description
Deaf and Hard of Hearing (DHH) students' access to technical education is impeded by difficulty in communicating technical terms effectively and by the scarcity of resources to learn, assess, and adopt standard gestures for these technical terms. While only 20% of DHH individuals attend postsecondary education institutions each year, an even smaller subset will enroll in a computer science course. Only 0.19% of DHH students attend any postgraduate education, as opposed to nearly 15% of hearing individuals. This reduces the access of DHH individuals to high-quality skilled jobs in technological fields that require postgraduate education, where they may earn 31% more. I identified significant variance in the accessibility requirements of DHH students for STEM education based on their respective disability profiles. I focus on deaf students who rely on American Sign Language (ASL), and based on my interviews with expert educators and interpreters, I derive the unique accessibility requirements for them. In this thesis, I present a framework, CASE, that consists of tools that aid DHH students in technical higher education to communicate, assess, and standardize technical gestures, and I provide different methodologies to evaluate these tools. I present a Computer Science Accessible Virtual Education (CSAVE) platform consisting of a crowd-sourced gesture learning and generation tool, CSignGen. CSignGen is designed to aid in building a stronger ASL user base to facilitate communication and a robust database of standard technical gestures. The standardization tool is based on the concept of iconicity rating. In order to build a repository of new standard gestures, I need to ensure that the newly generated gestures are highly iconic, so that they are easily recognizable and hence adopted by a higher number of users. I evaluated learners' recognition using retention and execution tests.
Results from gesture recognition based on iconicity supported my hypothesis that highly iconic gestures are more recognizable. Performance evaluation of the automated gesture standardization showed that the accuracy of the automated iconicity rating assigner is 80.76%. The second-step evaluation by an expert in technical gestures showed little discernible difference between the iconicity ratings assigned by the automated assigner and manual observational assignment. The expert agreed with the automated iconicity assigner 62% of the time.
ContributorsHossain, Sameena (Author) / Gupta, Sandeep K. S. (Thesis advisor) / Azuma, Tamiko (Committee member) / Banerjee, Ayan (Committee member) / Paudyal, Prajwal (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created2024
Description
With the advent of new advanced analysis tools and access to related published data, it is getting more difficult for data owners to suppress private information from published data while still providing useful information. This dual problem of providing useful, accurate information and protecting it at the same time has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset with validated statistical values for usability. Differential privacy (DP) has gained a lot of attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that can protect the data from adversarial observation while still supporting the intended usage. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying source data intent. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The mechanism of IDC depends on a set of privacy protection metrics called differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed to utilize IDC. DPPA is a hub that facilitates a market for DP data security mechanisms.
DPPA works by incorporating standalone IDC mechanisms and provides automation, IDC-protected published datasets, and a statistically verified IDC dataset diagnostic report. DPPA currently runs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test a couple of health datasets; the test results further validate the IDC mechanism as feasible.
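IDC itself is the dissertation's novel mechanism and is not reproduced here; as background on the DP guarantee it builds toward, the classical Laplace mechanism, the textbook example of an ε-differentially-private release, can be sketched as:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Classic epsilon-DP Laplace mechanism: add noise with scale
    sensitivity/epsilon, so the presence or absence of any single record
    changes the output distribution by at most a factor of e**epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)
```

For example, releasing a patient count (a query with sensitivity 1) at epsilon = 0.5 adds Laplace noise of scale 2; averaged over many releases the answer stays unbiased, while any single release protects individual membership.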
ContributorsThomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2023
Description

Artificial intelligence (AI) systems are increasingly being deployed in safety-critical real-world environments involving human participants, which may pose risks to human safety. The unpredictable nature of real-world conditions and the inherent variability of human behavior often lead to situations that were not anticipated during the system's design or verification phases. Moreover, the inclusion of AI components such as large language models (LLMs), often regarded as "black boxes", adds complexity to these systems, heightening the likelihood of encountering unforeseen challenging scenarios, or "unknown-unknowns".

Unknown-unknowns present a significant challenge because their causes and impacts on the system are often not identified, or are not known to the human-in-the-loop at the time of the error. These errors often precipitate a chain of events over time: errors lead to faults, which may escalate into hazards and ultimately into accidents or safety violations, adversely affecting the human participants. To address these challenges, this thesis considers a conformal inference-based detection framework for identifying unknown-unknowns. This framework relearns operational models using physics-guided surrogate models. The incorporation of physics into the framework ensures that it detects unknown-unknowns preemptively, before they cause any harm or safety violation. Unlike traditional rare-class detection and anomaly detection methods, this approach does not rely on predefined error traces or definitions, since unknown-unknowns are completely new scenarios not present during training or validation.
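A minimal sketch of the detection step, assuming a split-conformal calibration on known-safe traces and a generic surrogate model (the function names and thresholds are illustrative, not the thesis's implementation):

```python
import numpy as np

def calibrate_threshold(residuals, alpha=0.05):
    """Split-conformal threshold: the (1 - alpha)-quantile (with the usual
    finite-sample correction) of nonconformity scores collected during
    known-safe operation."""
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(residuals)[min(k, n) - 1]

def flags_unknown(surrogate, state, observed, threshold):
    """Flag an observation whose deviation from the physics-guided
    surrogate's prediction exceeds the calibrated threshold."""
    return abs(surrogate(state) - observed) > threshold
```

The point of the conformal wrapper is the coverage guarantee: under exchangeability, safe operation is flagged with probability at most alpha, without assuming any error model for the unknown-unknowns themselves.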

Lastly, to distinguish unknown-unknowns from traditional anomalies or rare events, this thesis proposes categorizing them into two main subclasses: those stemming from predictive model errors and those arising from other physical dynamics and human interactions. Additionally, this thesis investigates the effects of LLM-based agents in real-world scenarios and their role in introducing more unknown-unknowns into the overall system.

This research aims to make the interactions between AI-enabled assistive devices and humans safer in the real world, which is essential for the widespread deployment of such systems. By addressing the problem of unknown-unknowns associated with these safety-critical systems, this research contributes to increased trust and acceptance in diverse sectors such as healthcare, daily planning, transportation, and industrial automation.

ContributorsMaity, Aranyak (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Lee, Kookjin (Committee member) / Gupta, Vivek (Committee member) / Lamrani, Imane (Committee member) / Arizona State University (Publisher)
Created2025