Artificial intelligence (AI) systems are increasingly deployed in safety-critical, real-world environments involving human participants, where they may pose risks to human safety. The unpredictable nature of real-world conditions and the inherent variability of human behavior often lead to situations that were not anticipated during the system's design or verification phases. Moreover, the inclusion of AI components such as large language models (LLMs), often regarded as "black boxes," adds complexity to these systems and heightens the likelihood of encountering unforeseen challenging scenarios, or "unknown-unknowns".
Unknown-unknowns present a significant challenge because their causes and impacts on the system are often not identified, or are not known to the human-in-the-loop at the time of the error. These errors often precipitate a chain of events over time: errors lead to faults, which may escalate into hazards and ultimately result in accidents or safety violations that adversely affect the human participants. To address these challenges, this thesis considers a conformal inference-based detection framework for identifying unknown-unknowns. The framework relearns operational models using physics-guided surrogate models. Incorporating physics into the framework ensures that unknown-unknowns are detected preemptively, before they cause any harm or safety violation. Unlike traditional rare-class detection and anomaly detection methods, this approach does not rely on predefined error traces or definitions, since unknown-unknowns are entirely new scenarios not present during training or validation.
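As a rough, illustrative sketch of the detection principle described above, and not the dissertation's actual implementation, the Python snippet below shows a split-conformal test: nonconformity scores, here assumed to be absolute residuals between a physics-guided surrogate model's predictions and the observed system states, are calibrated on nominal runs, and a new observation is flagged when its conformal p-value falls below a chosen significance level. All function names and the residual-based score are assumptions made for illustration.

    import numpy as np

    def calibrate(nonconformity_scores):
        # Store sorted nonconformity scores from a calibration set of nominal runs.
        return np.sort(np.asarray(nonconformity_scores))

    def conformal_p_value(calibration_scores, test_score):
        # Split-conformal p-value: fraction of calibration scores at least as
        # extreme as the test score, counting the test point itself.
        n = len(calibration_scores)
        rank = np.sum(calibration_scores >= test_score)
        return (rank + 1) / (n + 1)

    def is_unknown_unknown(calibration_scores, test_score, alpha=0.05):
        # Flag the observation if its p-value falls below the significance level.
        return conformal_p_value(calibration_scores, test_score) < alpha

    if __name__ == "__main__":
        # Nonconformity here is the absolute residual between a surrogate model's
        # prediction and the observed state under nominal operating conditions.
        rng = np.random.default_rng(0)
        cal = calibrate(np.abs(rng.normal(0.0, 1.0, size=500)))
        print(is_unknown_unknown(cal, test_score=0.8))  # consistent with nominal runs
        print(is_unknown_unknown(cal, test_score=5.0))  # flagged as anomalous

In this sketch the calibration set plays the role of previously observed, nominal operation; anything whose surrogate-model residual is implausibly large relative to that history is flagged without needing a predefined error trace.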
Lastly, to distinguish unknown-unknowns from traditional anomalies or rare events, this thesis proposes categorizing them into two main subclasses: those stemming from predictive model errors and those arising from other physical dynamics and human interactions. Additionally, this thesis investigates the effects of LLM-based agents in real-world scenarios and their role in introducing more unknown-unknowns into the overall system.
This research aims to make interactions between AI-enabled assistive devices and humans safer in the real world, which is essential for the widespread deployment of such systems. By addressing the problem of unknown-unknowns in these safety-critical systems, this research contributes to increased trust and acceptance across diverse sectors such as healthcare, daily planning, transportation, and industrial automation.