devices is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of a human-in-the-loop, and environmental changes result in a myriad of safety concerns, not all of which can be comprehensively tested before deployment, and some of which may not even be detected during the design and testing phases. This dissertation identifies major challenges in the safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems:
1. Given input/output operational traces collected from sensors and actuators, automatically learn a hybrid automaton (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning hybrid automata models from time-series traces collected during the real-world operation of linear and nonlinear AI-enabled CPS. The learned model allows operational safety to be rigorously evaluated by comparing it against a reference specification model of the system. The proposed techniques are evaluated on the artificial pancreas control system.
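As a rough illustration of the kind of trace-based model learning summarized above, the following Python sketch segments a one-dimensional time-series trace wherever its locally fitted dynamics change and groups the segments into discrete modes, a loose analogue of recovering the modes and transitions of a hybrid automaton. The segmentation window, thresholds, slope quantization, and synthetic trace are all illustrative assumptions, not the dissertation's actual algorithm or evaluation data.

```python
# Sketch: infer a hybrid-automaton-like model from a trace by segmenting it
# where the locally fitted linear dynamics change, then grouping segments
# with similar dynamics into modes. Thresholds and data are illustrative.

import numpy as np

def fit_affine(t, x):
    """Least-squares fit x(t) ~ a*t + b over a window; return (slope, intercept)."""
    A = np.vstack([t, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef[0], coef[1]

def segment_trace(t, x, window=20, slope_jump=0.5):
    """Split the trace where the local slope changes by more than slope_jump."""
    boundaries = [0]
    prev_slope, _ = fit_affine(t[:window], x[:window])
    for start in range(window, len(t) - window, window):
        slope, _ = fit_affine(t[start:start + window], x[start:start + window])
        if abs(slope - prev_slope) > slope_jump:
            boundaries.append(start)
        prev_slope = slope
    boundaries.append(len(t))
    return list(zip(boundaries[:-1], boundaries[1:]))

def learn_modes(t, x, segments, slope_tol=0.5):
    """Assign each segment a discrete mode by quantizing its fitted slope."""
    modes, labels = [], []
    for (i, j) in segments:
        slope, _ = fit_affine(t[i:j], x[i:j])
        for m, ref in enumerate(modes):
            if abs(slope - ref) < slope_tol:
                labels.append(m)
                break
        else:
            modes.append(slope)
            labels.append(len(modes) - 1)
    return modes, labels

if __name__ == "__main__":
    # Synthetic trace: a signal that rises, holds, then falls (three modes).
    t = np.linspace(0, 30, 600)
    x = np.piecewise(t, [t < 10, (t >= 10) & (t < 20), t >= 20],
                     [lambda s: 2.0 * s,
                      lambda s: 20.0,
                      lambda s: 20.0 - 1.5 * (s - 20)])
    x += np.random.normal(0, 0.05, size=t.shape)

    segments = segment_trace(t, x)
    modes, labels = learn_modes(t, x, segments)
    print("learned mode slopes:", np.round(modes, 2))
    print("mode sequence along the trace:", labels)
    # Transitions between consecutive distinct labels approximate the HA's edges.
```

In an actual verification workflow, the learned mode sequence and per-mode dynamics would then be checked against a reference specification model; the sketch stops at mode identification.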




Forward and Backward Machine Learning for Modeling Copper Diffusion in Cadmium Telluride Solar Cells



Artificial intelligence (AI) systems are increasingly being deployed in safety-critical real-world environments involving human participants, which may pose risks to human safety. The unpredictable nature of real-world conditions and the inherent variability of human behavior often lead to situations that were not anticipated during the system's design or verification phases. Moreover, the inclusion of AI components such as large language models (LLMs), often regarded as "black boxes," adds complexity to these systems, heightening the likelihood of encountering unforeseen challenging scenarios, or "unknown-unknowns".
Unknown-unknowns present a significant challenge because their causes and impacts on the system are often not identified, or are not known to the human-in-the-loop, at the time of the error. Such errors often precipitate a chain of events over time: errors lead to faults, faults may escalate into hazards, and hazards ultimately result in accidents or safety violations that adversely affect the human participants. To address these challenges, this thesis considers a conformal inference-based detection framework for identifying unknown-unknowns. The framework relearns operational models using physics-guided surrogate models. Incorporating physics into the framework ensures that it detects unknown-unknowns preemptively, before they cause any harm or safety violation. Unlike traditional rare-class detection and anomaly detection methods, this approach does not rely on predefined error traces or definitions, since unknown-unknowns are, by definition, completely new scenarios not present during training or validation.
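To make the detection idea concrete, the following Python sketch shows a standard split-conformal detection scheme: prediction residuals of a surrogate model on calibration data serve as nonconformity scores, and an operational observation is flagged when its conformal p-value falls below a chosen significance level. The trivial linear surrogate, the synthetic data, and the 0.05 threshold are assumptions for illustration; the thesis's physics-guided surrogate models and operational traces are not reproduced here.

```python
# Sketch: conformal-inference-based detection, with surrogate-model prediction
# residuals as nonconformity scores. Surrogate, data, and threshold are
# illustrative stand-ins, not the thesis's actual models or traces.

import numpy as np

def nonconformity(model, X, y):
    """Nonconformity score = absolute prediction residual of the surrogate."""
    return np.abs(model(X) - y)

def conformal_p_values(calib_scores, test_scores):
    """p-value = fraction of calibration scores at least as extreme as the test score."""
    calib_scores = np.sort(calib_scores)
    n = len(calib_scores)
    # count of calibration scores >= each test score, +1 for the test point itself
    ranks = n - np.searchsorted(calib_scores, test_scores, side="left")
    return (ranks + 1) / (n + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Illustrative surrogate: y ~ 2*x, standing in for a physics-guided model.
    surrogate = lambda X: 2.0 * X

    # Calibration data drawn from the nominal regime the surrogate was fit on.
    X_cal = rng.uniform(0, 10, size=500)
    y_cal = 2.0 * X_cal + rng.normal(0, 0.3, size=500)
    calib_scores = nonconformity(surrogate, X_cal, y_cal)

    # Operational data: mostly nominal, plus a few points where the dynamics shift.
    X_op = rng.uniform(0, 10, size=10)
    y_op = 2.0 * X_op + rng.normal(0, 0.3, size=10)
    y_op[-3:] += 5.0   # unmodeled behavior the surrogate has never seen

    p_vals = conformal_p_values(calib_scores, nonconformity(surrogate, X_op, y_op))
    alarms = p_vals < 0.05   # low p-value: observation does not conform to calibration
    for x, y, p, a in zip(X_op, y_op, p_vals, alarms):
        print(f"x={x:5.2f}  y={y:6.2f}  p={p:.3f}  {'ALARM' if a else 'ok'}")
```

Because the detector only asks whether an observation conforms to the calibration data, it needs no predefined error traces or labels for the scenarios it flags, which is what makes this style of test a natural fit for unknown-unknowns.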
Lastly, to distinguish unknown-unknowns from traditional anomalies or rare events, this thesis proposes categorizing them into two main subclasses: those stemming from predictive model errors and those arising from other physical dynamics and human interactions. Additionally, this thesis investigates the effects of LLM-based agents in real-world scenarios and their role in introducing more unknown-unknowns into the overall system.
This research aims to make the interactions between AI-enabled assistive devices and humans safer in the real world, which is essential for the widespread deployment of such systems. By addressing the problem of unknown-unknowns associated with these safety-critical systems, this research contributes to increased trust and acceptance in diverse sectors such as healthcare, daily planning, transportation, and industrial automation.