Matching Items (9)

Description
As people begin to live longer and the population shifts to having more older adults on Earth than young children, radical solutions will be needed to ease the burden on society. It will be essential to develop technology that can age with the individual. One solution is to keep older adults in their homes longer through smart home and smart living technology, allowing them to age in place. People have many options when choosing where to age in place, including their own homes, assisted living facilities, nursing homes, or the homes of family members. No matter where people choose to age, they may face isolation and financial hardship, so it is crucial to keep finances in mind when developing smart home technology. Smart home technologies seek to allow individuals to stay in their homes for as long as possible, yet little work examines how technology can be used across different life stages. Robots are poised to impact society and to ease burdens both at home and in the workforce, and special attention has been given to social robots as a means of easing isolation. As social robots become accepted into society, researchers need to understand how these robots should mimic natural conversation. My work addresses this question within social robotics by investigating how to make conversational robots natural and reciprocal. I investigated this through a 2x2 Wizard of Oz between-subjects user study that ran for four months and tested four different levels of interactivity with the robot. Unexpectedly, none of the levels differed significantly from the others. I then investigated how the robot's personality, the participants' trust, and the participants' acceptance of the robot influenced these results.
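As a rough illustration of how a 2x2 between-subjects design like this one is commonly analyzed, the minimal Python sketch below runs a two-way ANOVA on simulated ratings. The factor names (`verbal`, `gesture`), cell sizes, and data are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical 2x2 between-subjects data: two crossed binary factors,
# one rating per participant. Names are illustrative, not from the study.
n_per_cell = 10
df = pd.DataFrame({
    "verbal": np.repeat(["low", "high"], 2 * n_per_cell),
    "gesture": np.tile(np.repeat(["low", "high"], n_per_cell), 2),
    "rating": rng.normal(5.0, 1.0, 4 * n_per_cell),
})

# Two-way between-subjects ANOVA: main effects and interaction.
model = ols("rating ~ C(verbal) * C(gesture)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```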
Contributors: Miller, Jordan (Author) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Cooke, Nancy (Committee member) / Bryan, Chris (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The widespread usage of technology has led to an increase in cyber threats. Organizations use indices to measure, understand, and make decisions in response to cybersecurity threats. However, the same tools do not exist to help individuals make informed cybersecurity decisions. This work aims to understand the impact of cyber threats on individuals and to take steps toward developing a composite indicator that engages them in conversations around cybersecurity. A composite indicator consolidates single indicators around a complex topic, such as cybersecurity, into one, thereby providing a means of measuring a non-trivial topic. Such a tool will help individuals make better cybersecurity policy decisions and enable researchers to benchmark cybersecurity consequences for the general public. However, more data and information are needed to create it.

To this end, this work presents semi-structured interviews with people about their exposure to cyber threats and documents some of the challenges and harms of cyber-related incidents. Based on the interviews and a literature survey, this work proposes a Cyber Harm Framework for Citizens that reflects the dimensions of harm experienced by users. This framework provides a conceptual starting point for building a composite indicator. In order to develop a human-centered cyber indicator, this work explores the potential social, ethical, and design challenges that must be considered. Future work will focus on integrating the framework into a cyber-harm composite indicator, enabling individuals to make informed cybersecurity decisions.
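To make the notion of a composite indicator concrete, here is a minimal Python sketch of the standard construction: min-max normalization of each single indicator followed by a weighted sum. The indicator names and weights are invented for illustration and are not from this work.

```python
import numpy as np

def composite_indicator(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Min-max normalize each indicator column, then take a weighted sum."""
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    normalized = (scores - lo) / (hi - lo)          # each column scaled to [0, 1]
    return normalized @ (weights / weights.sum())   # convex combination per row

# Hypothetical single indicators per person: financial loss, hours lost,
# self-reported distress. Neither the names nor the weights come from the thesis.
raw = np.array([[120.0, 3.0, 2.0],
                [  0.0, 1.0, 4.0],
                [450.0, 8.0, 5.0]])
print(composite_indicator(raw, weights=np.array([0.5, 0.25, 0.25])))
```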
Contributors: Jacobs, Danielle R (Author) / McDaniel, Troy (Thesis advisor) / Li, Baoxin (Committee member) / Bryan, Chris (Committee member) / Michael, Katina (Committee member) / Gall, Melanie (Committee member) / Bao, Tiffany (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Data visualization is essential for communicating complex information to diverse audiences. However, a gap persists between visualization design objectives and the understanding of non-expert users with limited experience. This dissertation addresses four challenges in designing for non-experts, referred to as the D.U.C.K. bridge: (i) user unfamiliarity with DATA analysis domains, (ii) variation in user UNDERSTANDING mechanisms, (iii) catering to individual differences in CREATING visualizations, and (iv) promoting KNOWLEDGE synthesis and application. By developing human-driven principles and tools, this research aims to enhance visualization creation and consumption by non-experts. Leveraging linked interactive visualizations, this dissertation explores the iterative education of non-experts as they navigate unfamiliar DATA realms. VAIDA guides crowd workers in creating better NLP benchmarks through real-time visual feedback. Similarly, LeaderVis allows users to interactively customize AI leaderboards and select model configurations suited to their applications. Both systems demonstrate how visual analytics can flatten the learning curve associated with complex data and technologies. Next, this dissertation examines how individuals internalize real-world visualizations, either as images or as information. Experimental studies investigate the impact of design elements on perception across visualization types and styles, and an LSTM model predicts the framing of the recall process. The findings reveal mechanisms that shape the UNDERSTANDING of visualizations, enabling the design of tailored approaches to improve recall and comprehension among non-experts. This research also investigates how known design principles apply to CREATING visualizations for underrepresented populations. Findings reveal that multilingual individuals prefer varying text volumes depending on the annotation language, and that older age groups engage more emotionally with affective visualizations than younger age groups. Additionally, underlying cognitive processes, like mind wandering, affect recall focus. These insights guide the development of more inclusive visualization solutions for diverse user demographics. The dissertation concludes by presenting projects aimed at preserving cognitive and affective KNOWLEDGE synthesized through visual analysis. The first project examines the impact of data visualizations in VR on personal viewpoints about climate change, offering insights for using VR in public scientific education. The second project introduces LINGO, which enables the creation of diverse natural language prompts for generative models across multiple languages, potentially facilitating custom visualization creation via streamlined prompting.
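As a hedged sketch of what an LSTM-based recall-framing predictor could look like, the Python snippet below defines a generic binary sequence classifier; the architecture, vocabulary size, and class labels are assumptions, not the dissertation's actual model.

```python
import torch
import torch.nn as nn

class RecallFramingLSTM(nn.Module):
    """Binary classifier over token sequences (e.g., free-recall transcripts):
    predicts whether a recall is framed as 'image' or 'information'.
    A generic sketch, not the dissertation's exact architecture."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # two framing classes

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.head(h_n[-1])               # logits: (batch, 2)

model = RecallFramingLSTM()
dummy = torch.randint(0, 5000, (4, 20))         # 4 fake transcripts, 20 tokens each
print(model(dummy).shape)                       # torch.Size([4, 2])
```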
Contributors: Arunkumar, Anjana (Author) / Bryan, Chris (Thesis advisor) / Maciejewski, Ross (Committee member) / Baral, Chitta (Committee member) / Bae, Gi-Yeul (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Machine Learning and Artificial Intelligence algorithms are deeply embedded in our everyday experiences, from the moment we wake up to tracking our REM sleep through wearable devices. These technologies are not only applied to a wide array of challenges but have also become ubiquitous, with many people interacting with and relying on AI decisions, often without fully understanding how these models work.

Despite the widespread use of AI, the reasoning behind its decisions is frequently opaque. This lack of transparency can lead users to either underestimate or overestimate the capabilities of AI systems. Even in scenarios where AI models provide explanations for their decisions, the impact of these justifications on end users' perceptions remains unclear. These issues raise important questions about how to improve model transparency, aid decision making, and understand the impact of such explanations on users' trust. To address these issues, this thesis focuses on:

1. Explainability of Reinforcement Learning for non-expert users: From 2019 to 2023, HCI conferences such as IEEE VIS, EuroVis, and CHI published 154 explainable ML papers; by comparison, the field of explainable RL (XRL) remains underdeveloped. I contributed two novel visualization-driven systems for explainable RL: PolicyExplainer, which provides visual explanations, and PolicySummarizer, which provides policy summaries for novice users.

2. Accessibility for downstream decision-making tasks: As AI becomes more accessible, users may struggle to leverage these tools within their limits. I studied how to design effective interfaces that help users do so: NewsKaleidoscope helps users identify coverage biases across different news organizations, and PromptAid offers AI-based prompt recommendations for better task performance.

3. User perceptions and the impact of machine-generated rationales: Finally, explanations can be a double-edged sword, potentially increasing trust in flawed models. I explored how user perceptions are affected by rationales generated by ChatGPT, especially when it justifies incorrect predictions due to hallucinations, and ways to circumvent this issue.
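For readers unfamiliar with policy summarization, the following Python sketch shows one common heuristic: surfacing the states where the choice of action matters most, measured by the spread of Q-values. The Q-table and the importance criterion are illustrative; PolicyExplainer and PolicySummarizer may work quite differently.

```python
import numpy as np

def summarize_policy(q_table: np.ndarray, k: int = 3):
    """Pick the k most 'important' states to show a novice user, where
    importance is the gap between the best and worst action values
    (a common heuristic in the policy-summarization literature; the
    dissertation's systems may use different criteria)."""
    importance = q_table.max(axis=1) - q_table.min(axis=1)
    top_states = np.argsort(importance)[::-1][:k]
    greedy = q_table.argmax(axis=1)
    return [(int(s), int(greedy[s]), float(importance[s])) for s in top_states]

rng = np.random.default_rng(1)
q = rng.normal(size=(10, 4))   # hypothetical 10-state, 4-action Q-table
for state, action, score in summarize_policy(q):
    print(f"state {state}: show greedy action {action} (importance {score:.2f})")
```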
Contributors: Mishra, Aditi (Author) / Bryan, Chris (Thesis advisor) / Biswas, Ayan (Committee member) / Baral, Chitta (Committee member) / Seifi, Hasti (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
In today's world, artificial intelligence (AI) is increasingly becoming a part of our daily lives. For this integration to be successful, it is essential that AI systems can effectively interact with humans. This means making the AI system's behavior more understandable to users and allowing users to customize the system's behavior to match their preferences. However, there are significant challenges associated with achieving this goal. One major challenge is that modern AI systems, which have shown great success, often make decisions based on learned representations. These representations, often acquired through deep learning techniques, are typically inscrutable to users, inhibiting the explainability and customizability of the system. Additionally, since each user may have unique preferences and expertise, the interaction process must be tailored to each individual. This thesis addresses the challenges that arise in human-AI interaction scenarios, especially in cases where the AI system is tasked with solving sequential decision-making problems. It does so by introducing a framework that uses a symbolic interface to facilitate communication between humans and AI agents. This shared vocabulary acts as a bridge, enabling the AI agent to provide explanations in terms that are easy for humans to understand and allowing users to express their preferences in this common language. To address the need for personalization, the framework provides mechanisms that allow users to expand the shared vocabulary, enabling them to express their unique preferences effectively. Moreover, the AI systems are designed to take the user's background knowledge into account when generating explanations tailored to their specific needs.
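A minimal Python sketch of the shared-vocabulary idea: symbolic predicates are evaluated on the agent's internal state, and a user preference is expressed in those same terms. All predicate names and state features here are hypothetical, not drawn from the thesis.

```python
from typing import Callable, Dict

State = Dict[str, float]  # stand-in for an agent's internal state features

# Shared symbolic vocabulary: human-readable predicates over agent states.
vocabulary: Dict[str, Callable[[State], bool]] = {
    "near_obstacle": lambda s: s["obstacle_dist"] < 1.0,
    "carrying_item": lambda s: s["gripper_load"] > 0.0,
}

def state_to_symbols(state: State) -> Dict[str, bool]:
    """Ground a raw state into the shared vocabulary."""
    return {name: pred(state) for name, pred in vocabulary.items()}

# A user preference stated in the shared language:
# never move fast while near an obstacle.
def violates_preference(state: State) -> bool:
    symbols = state_to_symbols(state)
    return symbols["near_obstacle"] and state["speed"] > 0.5

s = {"obstacle_dist": 0.4, "gripper_load": 1.0, "speed": 0.9}
print(state_to_symbols(s), violates_preference(s))
```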
Contributors: Soni, Utkarsh (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Geospatial machine learning (ML) models and their applications have recently gained significant attention due to the rising availability of raster and spatiotemporal datasets. Three important limitations of ML in the geospatial domain are the following. First, real-world geospatial datasets are often very large, and many geospatial ML algorithms represent the geographical region as a grid. If the granularity of the grid is too fine, the result is a large number of grid cells, leading to long training times and high memory consumption during model training. Second, current machine learning systems are mainly designed for text, image, audio, and video data, and they often fall short of adequately supporting geospatial datasets. This is because machine learning and data preprocessing techniques in this domain fail to capture spatial autocorrelation, a key characteristic of geospatial datasets. Third, many real-world inference workflows in this domain involve preprocessing steps that join data from multiple data silos to assemble feature vectors; these geospatial joins are often expensive and become bottlenecks in the inference process. In this dissertation, I will discuss novel solutions to these three major concerns of spatiotemporal machine learning. In particular, the dissertation includes three main research components: the first presents a machine learning-aware technique for re-partitioning geospatial data to shorten the training duration of spatial machine learning models; the second introduces an end-to-end framework for deep learning and data preprocessing with spatiotemporal vector and raster datasets; the third presents a strategy to co-optimize a preprocessing and inference pipeline consisting of costly join queries and model inference. Additionally, I will present experimental evaluation results on a variety of real-world datasets to demonstrate the effectiveness of all three solutions.
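To illustrate the grid-granularity trade-off described above, the Python sketch below assigns points to a uniform grid at two cell sizes and counts the resulting cells. The data and bounds are synthetic, and the thesis's ML-aware re-partitioning is more sophisticated than this uniform scheme.

```python
import numpy as np

def grid_partition(points: np.ndarray, bounds, cell_size: float):
    """Assign 2-D points (x, y) to cells of a uniform grid.
    A coarser cell_size means fewer cells, shorter training, and less
    memory; shown only to motivate ML-aware re-partitioning."""
    (min_x, min_y), (max_x, max_y) = bounds
    cols = int(np.ceil((max_x - min_x) / cell_size))
    ix = ((points[:, 0] - min_x) // cell_size).astype(int)
    iy = ((points[:, 1] - min_y) // cell_size).astype(int)
    return iy * cols + ix   # flat cell id per point

rng = np.random.default_rng(2)
pts = rng.uniform([0, 0], [10, 10], size=(1000, 2))
for size in (1.0, 0.1):    # a 10x finer grid yields ~100x more cells
    ids = grid_partition(pts, ((0, 0), (10, 10)), size)
    print(f"cell_size={size}: {np.unique(ids).size} occupied cells")
```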
Contributors: Chowdhury, Kanchan (Author) / Sarwat, Mohamed (Thesis advisor) / Zou, Jia (Thesis advisor) / Davulcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Artificial Intelligence (AI) technology has advanced significantly, enabling AI models to learn from diverse data and automate tasks previously performed solely by humans. This capability has excited people from various domains outside the AI research community and has driven technical experts from non-AI backgrounds to leverage AI for their domain-specific tasks. However, when these experts attempt to use AI, they face several challenges: understanding AI models' behavior and results in intuitive ways, adapting pre-trained models to their own datasets, and finding comprehensive guidelines for AI integration practices. This dissertation takes a user-centered approach to these challenges by designing and developing interactive systems and frameworks that make AI more interpretable and accessible. It focuses on three key areas:

Explaining AI Behavior: For domain experts from non-AI backgrounds, understanding AI models is challenging. Automated explanations alone often fall short, as users need an iterative approach to form, test, and refine hypotheses. We introduce two visual analytics systems, ConceptExplainer and ASAP, designed to provide intuitive explanations and analysis tools that help users better comprehend and interpret AI's inner workings and outcomes.

Simplifying AI Workflows: Adapting pre-trained AI models to specific downstream tasks can be challenging for users with limited AI expertise. We present InFiConD, an interactive no-code interface that streamlines the knowledge distillation process, allowing users to easily adapt large models to their specific needs.

Integrating AI in Domain-Specific Tasks: The integration of AI into domain-specific visual analytics systems is growing, but clear guidance is often lacking; users with non-AI backgrounds face challenges in selecting appropriate AI tools and avoiding common pitfalls amid the vast array of available options. Our survey, AI4DomainVA, addresses this gap by reviewing existing practices and developing a roadmap for AI integration. This guide helps domain experts understand the synergy between AI and visual analytics, choose suitable AI methods, and effectively incorporate them into their workflows.
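As context for the knowledge distillation that InFiConD streamlines, here is the textbook response-based distillation loss (Hinton et al.) in a minimal PyTorch sketch; InFiConD's actual recipe and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Classic response-based knowledge distillation: blend hard-label
    cross-entropy with KL divergence between temperature-softened
    teacher and student distributions. Textbook form, not InFiConD's
    exact implementation."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    return alpha * (temperature ** 2) * kd + (1 - alpha) * ce

student = torch.randn(8, 10, requires_grad=True)   # hypothetical logits
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels).item())
```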
Contributors: Huang, Jinbin (Author) / Bryan, Chris (Thesis advisor) / Maciejewski, Ross (Committee member) / Seifi, Hasti (Committee member) / Kwon, Bum Chul (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Machine learning models are increasingly being deployed in real-world applications where their predictions are used to make critical decisions in a variety of domains. The proliferation of such models has led to a burgeoning need to ensure their reliability and safety, given the potential negative consequences of model vulnerabilities. The complexity of machine learning models, along with the extensive datasets they analyze, can result in unpredictable and unintended outcomes. Model vulnerabilities may manifest due to errors in data input, algorithm design, or model deployment, which can have significant implications for both individuals and society. To prevent such negative outcomes, it is imperative to identify model vulnerabilities at an early stage of the development process. This helps guarantee the integrity, dependability, and safety of the models, mitigating potential risks and enabling the full potential of these technologies to be realized. However, enumerating vulnerabilities can be challenging due to the complexity of the real-world environment. Visual analytics, situated at the intersection of human-computer interaction, computer graphics, and artificial intelligence, offers a promising approach for achieving high interpretability of complex black-box models, thus reducing the cost of obtaining insights into potential model vulnerabilities. This research is devoted to designing novel visual analytics methods to support the identification and analysis of model vulnerabilities. Specifically, generalizable visual analytics frameworks are instantiated to explore vulnerabilities in machine learning models concerning security (adversarial attacks and data perturbation) and fairness (algorithmic bias). Finally, a visual analytics approach is proposed to enable domain experts to explain and diagnose model improvements that address identified vulnerabilities in a human-in-the-loop fashion. The proposed methods hold the potential to enhance the security and fairness of machine learning models deployed in critical real-world applications.
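As one concrete example of a security vulnerability in this space, the Python sketch below implements the canonical Fast Gradient Sign Method (FGSM) perturbation on a toy classifier; the model and epsilon are placeholders, not artifacts from the dissertation.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Fast Gradient Sign Method: a canonical adversarial perturbation
    of the kind a vulnerability-analysis tool would surface."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(4, 1, 28, 28)                                 # fake image batch
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max().item())   # perturbation bounded by epsilon
```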
Contributors: Xie, Tiankai (Author) / Maciejewski, Ross (Thesis advisor) / Liu, Huan (Committee member) / Bryan, Chris (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Data integration involves the reconciliation of data from diverse data sources in order to obtain a unified data repository, upon which an end user such as a data analyst can run analytics sessions to explore the data and obtain useful insights. Supervised Machine Learning (ML) for data integration tasks such as ontology (schema) or entity (instance) matching requires many training examples in the form of manually curated, pre-labeled matching and non-matching schema concept or entity pairs, which are hard to obtain. Along similar lines, an analytics system without predictive capabilities about the impending workload can incur huge query latencies, while leaving on the user the onus of understanding the underlying database schema and writing a meaningful query at every step of a data exploration session. In this dissertation, I will describe the human-in-the-loop Machine Learning (ML) systems that I have built for data integration and predictive analytics. I alleviate the need for extensive prior labeling by utilizing active learning (AL) for data integration: in each AL iteration, I detect the unlabeled entity or schema concept pairs that would most strengthen the ML classifier and selectively query the human oracle for those labels in a budgeted fashion. Thus, I make use of human assistance for ML-based data integration. On the other hand, when the human is an end user exploring data through Online Analytical Processing (OLAP) queries, my goal is to proactively assist them by predicting the top-K next queries they are likely to be interested in. I will describe my proposed SQL predictor, a Business Intelligence (BI) query predictor, and a geospatial query cardinality estimator, with an emphasis on schema abstraction, query representation, and how I adapt the ML models to these tasks. For each system, I will discuss the evaluation metrics and how the proposed systems compare to state-of-the-art baselines on multiple datasets and query workloads.
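A minimal sketch of the budgeted active-learning loop described above, using uncertainty sampling with a scikit-learn classifier; the features, oracle, batch size, and budget are synthetic stand-ins rather than the dissertation's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical similarity features for entity pairs (e.g., name/address
# similarities), with hidden match labels standing in for the human oracle.
X = rng.random((500, 4))
oracle = (X.mean(axis=1) > 0.5).astype(int)

labeled = list(rng.choice(500, size=20, replace=False))   # small seed set
for _ in range(5):                                        # budgeted AL rounds
    clf = LogisticRegression().fit(X[labeled], oracle[labeled])
    probs = clf.predict_proba(X)[:, 1]
    uncertainty = -np.abs(probs - 0.5)                    # closest to 0.5 first
    candidates = [i for i in np.argsort(uncertainty)[::-1] if i not in labeled]
    labeled.extend(candidates[:20])                       # query the oracle
print(f"{len(labeled)} labels used, accuracy {clf.score(X, oracle):.2f}")
```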

Contributors: Meduri, Venkata Vamsikrishna (Author) / Sarwat, Mohamed (Thesis advisor) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Ozcan, Fatma (Committee member) / Popa, Lucian (Committee member) / Arizona State University (Publisher)
Created: 2022