Member of: ASU Electronic Theses and Dissertations

Description
Working memory plays an important role in human activities across academic, professional, and social settings. Working memory is defined as the memory extensively involved in goal-directed behaviors, in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual's working memory. A study was conducted with eight neutral images depicting situations relatable to daily life, such that each image could have a positive or a negative description associated with the outcome of the situation in the image. The study consisted of three rounds: the first and second rounds involved two parts each, and the third round consisted of one part. Each image was captioned a total of five times across the entire study. The findings highlighted that only 25% of participants were able to recall the captions they had written for an image after a span of 9-15 days; 50% of participants were able to recall, in the present round, the caption they had given an image in the previous round; and of the positive and negative descriptions associated with the images, 65% of participants recalled the former rather than the latter. The conclusions drawn from the study are that participants tend to retain information for longer periods than the expected duration for working memory, which may be because participants were able to relate the images to their everyday life situations, and that, given a situation with both positive and negative information, the human brain is aligned toward the positive information over the negative.
Contributors: Uppara, Nithiya Shree (Author) / McDaniel, Troy (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2021

Description
While significant qualitative, user-study-focused research has been done on augmented reality, relatively few studies have examined multiple co-located, synchronously collaborating users in augmented reality. Recognizing the need for more collaborative user studies in augmented reality and the value such studies present, a user study of collaborative decision-making in augmented reality was conducted to investigate the following research question: “Does presenting data visualizations in augmented reality influence the collaborative decision-making behaviors of a team?” This user study evaluates how viewing data visualizations with augmented reality headsets impacts collaboration in small teams, compared to viewing them together on a single 2D desktop monitor as a baseline. Teams of two participants performed closed- and open-ended evaluation tasks to collaboratively analyze data visualized both in augmented reality and on a desktop monitor. Multiple means of collecting and analyzing data were employed to develop a well-rounded context for results and conclusions, including software logging of participant interactions, qualitative analysis of video recordings of participant sessions, and pre- and post-study participant questionnaires. The results indicate that augmented reality does not significantly change the quantity of team-member communication but does impact the means and strategies participants use to collaborate.
Contributors: Kintscher, Michael (Author) / Bryan, Chris (Thesis advisor) / Amresh, Ashish (Thesis advisor) / Hansford, Dianne (Committee member) / Johnson, Erik (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Augmented Reality (AR) has progressively demonstrated its helpfulness for novices to learn highly complex and abstract concepts by visualizing details in an immersive environment. However, some studies show that similar results could also be obtained in environments that do not involve AR. To explore the potential of AR in advancing transformative engagement in education, I propose modeling facial expressions as implicit feedback while one is immersed in the environment. I developed a Unity application to record and log the users' application operations and facial images. A neural network-based model, Visual Geometry Group 19 (VGG19; Simonyan and Zisserman, 2014), is adopted to recognize emotions from the captured facial images. A within-subject user study was designed and conducted to assess the differences in sentiment and user engagement between AR and non-AR tasks. To analyze the collected data, Dynamic Time Warping (DTW) was applied to identify the emotional similarities between AR and non-AR environments. The results indicate that users showed more varied emotion patterns and more application operations during the AR tasks than during the non-AR tasks. The emotion patterns observed in the analysis show that non-AR tasks provide less implicit feedback than AR tasks. The DTW analysis reveals that users' emotion-change patterns are more distant from neutral emotions in AR tasks than in non-AR tasks. Succinctly put, users in the AR task demonstrated more active use of the application and exhibited a wider range of emotions while operating it.
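To make the analysis step concrete, here is a minimal sketch of classic Dynamic Time Warping of the kind used to compare emotion time series against a neutral baseline; the per-frame scores, the emotion label, and the interpretation are hypothetical placeholders, not data or code from the study.

```python
# Minimal DTW sketch for comparing emotion-intensity time series.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical per-frame "happiness" scores from the emotion classifier.
ar_task = np.array([0.1, 0.4, 0.7, 0.8, 0.6, 0.9])
non_ar_task = np.array([0.1, 0.2, 0.2, 0.3, 0.2, 0.2])
neutral = np.zeros(6)

# A larger distance from the neutral baseline indicates emotion patterns
# that deviate more from neutral, as reported for the AR condition.
print(dtw_distance(ar_task, neutral), dtw_distance(non_ar_task, neutral))
```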
Contributors: Papakannu, Kushal Reddy (Author) / Hsiao, Ihan (Thesis advisor) / Bryan, Chris (Committee member) / Glenberg, Mina Johnson (Committee member) / Arizona State University (Publisher)
Created: 2021

Description
Component-based models are commonly employed to simulate discrete dynamical systems. These models lend themselves to formalizing the structures of systems at multiple levels of granularity. Visual development of component-based models serves to simplify the iterative and incremental model-specification activities. The Parallel Discrete Event System Specification (DEVS) formalism offers a flexible yet rigorous approach for decomposing a whole model into its components or, alternatively, composing a whole model from components. While different concepts, frameworks, and tools offer a variety of visual modeling capabilities, most pose limitations, such as the inability to visualize multiple model hierarchies at any level with arbitrary depths. Ideally, the visual and persistent layout of any number of hierarchy levels of models should be maintained and navigated seamlessly. Persistent storage is another capability needed for the modeling, simulating, verifying, and validating lifecycle. These are important features for improving the demanding task of creating and changing modular, hierarchical simulation models. This thesis proposes a new approach and develops a tool for the visual development of models. The tool supports storing and reconstructing graphical models using a NoSQL database. It offers unique capabilities important for developing the increasingly larger and more complex models essential for analyzing, designing, and building Digital Twins.
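As an illustration of the persistence idea, the sketch below encodes a small hierarchical coupled model as a nested document of the kind a NoSQL document store such as MongoDB can hold and reconstruct directly; the schema, field names, and example model are assumptions for illustration, not the tool's actual format.

```python
# A minimal sketch (not the tool's actual schema) of a hierarchical DEVS
# coupled model persisted as a single nested document.
import json

# Hypothetical two-level model: a coupled "Factory" with atomic components.
factory_model = {
    "name": "Factory",
    "type": "coupled",
    "layout": {"x": 80, "y": 40},  # persisted visual position
    "components": [
        {"name": "Generator", "type": "atomic", "layout": {"x": 20, "y": 20}},
        {"name": "Processor", "type": "atomic", "layout": {"x": 140, "y": 20}},
    ],
    "couplings": [
        {"from": "Generator.out", "to": "Processor.in"},
    ],
}

# With a running MongoDB instance, the document could be stored as-is, e.g.:
#   from pymongo import MongoClient
#   MongoClient().models.devs.insert_one(factory_model)
print(json.dumps(factory_model, indent=2))
```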
Contributors: Mohite, Sheetal Chandrakant (Author) / Sarjoughian, Hessam S (Thesis advisor) / Bryan, Chris (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2023

Description
As people begin to live longer and the population shifts to having more older adults on Earth than young children, radical solutions will be needed to ease the burden on society. It will be essential to develop technology that can age with the individual. One solution is to keep older adults in their homes longer through smart home and smart living technology, allowing them to age in place. People have many choices of where to age in place, including their own homes, assisted living facilities, nursing homes, or with family members. No matter where people choose to age, they may face isolation and financial hardships, so it is crucial to keep finances in mind when developing smart home technology.

Smart home technologies seek to allow individuals to stay inside their homes for as long as possible, yet little work looks at how we can use technology across different life stages. Robots are poised to impact society and ease burdens at home and in the workforce, and special attention has been given to social robots as a way to ease isolation. As social robots become accepted into society, researchers need to understand how these robots should mimic natural conversation. My work attempts to answer this question within social robotics by investigating how to make conversational robots natural and reciprocal.

I investigated this through a 2x2 Wizard of Oz between-subjects user study. The study lasted four months and tested four different levels of interactivity with the robot. None of the levels were significantly different from the others, an unexpected result. I then investigated the robot's personality, the participants' trust, and the participants' acceptance of the robot, and how these factors influenced the study results.
Contributors: Miller, Jordan (Author) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Cooke, Nancy (Committee member) / Bryan, Chris (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
Visual Question Answering (VQA) is an increasingly important multi-modal task in which models must answer textual questions based on visual image inputs. Numerous VQA datasets have been proposed to train and evaluate models. However, existing benchmarks exhibit a unilateral focus on textual distribution shifts rather than joint shifts across modalities, which is suboptimal for properly assessing model robustness and generalization. To address this gap, a novel multi-modal VQA benchmark dataset is introduced that combines both visual and textual distribution shifts across training and test sets. Using this challenging benchmark exposes vulnerabilities in existing models that rely on spurious correlations and overfit to dataset biases. The dataset advances the field by enabling more robust model training and rigorous evaluation of generalization under multi-modal distribution shifts. In addition, a new few-shot multi-modal prompt fusion model is proposed to better adapt models for downstream VQA tasks. The model incorporates a prompt encoder module and a dual-path design to align and fuse image and text prompts, representing a novel prompt-learning approach tailored for multi-modal learning across vision and language. Together, the introduced benchmark dataset and prompt fusion model address key limitations in evaluating and improving VQA model robustness, and the work expands the methodology for training models resilient to multi-modal distribution shifts.
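For intuition, the schematic sketch below shows one plausible shape for a dual-path prompt fusion module in PyTorch: learnable prompt tokens for each modality pass through a shared prompt encoder and are prepended to that modality's features before fusion. The dimensions, layers, and fusion strategy are assumptions for illustration, not the architecture proposed in the thesis.

```python
# Schematic dual-path prompt fusion sketch (illustrative, not the thesis model).
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    def __init__(self, dim: int = 512, prompt_len: int = 8):
        super().__init__()
        # Learnable prompt tokens, one set per modality (dual-path design).
        self.img_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.txt_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Shared prompt encoder aligning the two prompt paths.
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, img_feats: torch.Tensor, txt_feats: torch.Tensor):
        b = img_feats.size(0)
        ip = self.encoder(self.img_prompt).expand(b, -1, -1)
        tp = self.encoder(self.txt_prompt).expand(b, -1, -1)
        # Prepend encoded prompts to each modality, then fuse by concatenation.
        img = torch.cat([ip, img_feats], dim=1)
        txt = torch.cat([tp, txt_feats], dim=1)
        return torch.cat([img, txt], dim=1)  # fused sequence for a VQA head

fused = PromptFusion()(torch.randn(2, 50, 512), torch.randn(2, 20, 512))
print(fused.shape)  # torch.Size([2, 86, 512])
```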
Contributors: Jyothi Unni, Suraj (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023

Description
Mid-air ultrasound haptic technology can enhance user interaction and immersion in extended reality (XR) applications through contactless touch feedback. However, existing design tools for mid-air haptics primarily support the creation of static tactile sensations (tactons), which lack adaptability at runtime. These tactons do not offer the required expressiveness in interactive scenarios where a continuous closed-loop response to user movement or environmental states is desirable. This thesis proposes AdapTics, a toolkit featuring a graphical interface for the rapid prototyping of adaptive tactons—dynamic sensations that can adjust at runtime based on user interactions, environmental changes, or other inputs. A software library and a Unity package accompany the graphical interface to enable integration of adaptive tactons in existing applications. The design space provided by AdapTics for creating adaptive mid-air ultrasound tactons is presented, along with evidence that the design tool enhances Creativity Support Index ratings for Exploration and Expressiveness, as demonstrated in a user study involving 12 XR and haptic designers.
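The toy sketch below illustrates the closed-loop idea behind adaptive tactons: the parameters of a mid-air sensation are recomputed every tracked frame from the user's state. The parameter names and the update rule are illustrative assumptions, not the AdapTics API.

```python
# Toy closed-loop adaptation of a mid-air tacton to hand distance.
from dataclasses import dataclass

@dataclass
class Tacton:
    intensity: float  # 0..1 ultrasound amplitude scale (hypothetical unit)
    radius_m: float   # radius of the circular focal-point path, in meters

def adapt(hand_dist_m: float) -> Tacton:
    """Closer hand -> stronger, tighter sensation (illustrative rule)."""
    closeness = max(0.0, min(1.0, 1.0 - hand_dist_m / 0.3))  # 0.3 m range
    return Tacton(intensity=0.2 + 0.8 * closeness,
                  radius_m=0.03 - 0.02 * closeness)

# Each tracked frame would re-render the tacton with adapted parameters.
for dist in (0.30, 0.15, 0.02):
    print(dist, adapt(dist))
```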
Contributors: John, Kevin (Author) / Seifi, Hasti (Thesis advisor) / Bryan, Chris (Committee member) / Schneider, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2024

Description
The widespread usage of technology has led to an increase in cyber threats. Organizations use indices to measure, understand, and make decisions in response to cybersecurity threats, but the same tools do not exist to help individuals make informed cybersecurity decisions. This work aims to understand the impact of cyber threats on individuals and take steps toward developing a composite indicator that engages them in conversations around cybersecurity. A composite indicator consolidates single indicators around a complex topic, such as cybersecurity, into one, thereby providing a means for measuring a non-trivial topic. Such a tool will help individuals make better cybersecurity policy decisions and enable researchers to benchmark cybersecurity consequences for the general public. However, more data and information are needed to create such a tool. To this end, this work presents semi-structured interviews with people about their exposure to cyber threats and documents some of the challenges and harms of cyber-related incidents. Based on the interviews and a literature survey, this work proposes a Cyber Harm Framework for Citizens that reflects the dimensions of harm experienced by users. This framework provides a conceptual starting point for building a composite indicator. In order to develop a human-centered cyber indicator, this work explores the potential social, ethical, and design challenges that must be considered. Future work will focus on integrating the framework into a cyber-harm composite indicator, enabling individuals to make informed cybersecurity decisions.
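For intuition, a composite indicator can be built by normalizing each single indicator to a common scale and combining them with weights, as in the minimal sketch below; the indicator names, bounds, and weights are hypothetical, not the framework's actual harm dimensions.

```python
# Minimal composite-indicator sketch: normalize, then weighted aggregation.
def min_max(value: float, lo: float, hi: float) -> float:
    """Rescale a raw indicator value to the 0..1 range."""
    return (value - lo) / (hi - lo)

# Hypothetical single indicators of cyber harm: (raw value, lower, upper bound).
indicators = {
    "financial_loss_usd": (250.0, 0.0, 5000.0),
    "hours_to_recover": (12.0, 0.0, 100.0),
    "distress_score": (6.0, 0.0, 10.0),
}
weights = {"financial_loss_usd": 0.4, "hours_to_recover": 0.3,
           "distress_score": 0.3}

composite = sum(weights[k] * min_max(v, lo, hi)
                for k, (v, lo, hi) in indicators.items())
print(round(composite, 3))  # one 0..1 score summarizing cyber harm (0.236)
```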
Contributors: Jacobs, Danielle R (Author) / McDaniel, Troy (Thesis advisor) / Li, Baoxin (Committee member) / Bryan, Chris (Committee member) / Michael, Katina (Committee member) / Gall, Melanie (Committee member) / Bao, Tiffany (Committee member) / Arizona State University (Publisher)
Created: 2024

Description
This research project seeks to develop an innovative data visualization tool tailored for beginners to enhance their ability to interpret and present data effectively. Central to the approach is creating an intuitive, user-friendly interface that simplifies the data visualization process, making it accessible even to those with no prior background in the field. The tool will introduce users to standard visualization formats and expose them to various alternative chart types, fostering a deeper understanding and a broader skill set in data representation. I plan to leverage novel visualization techniques to ensure the tool is compelling and engaging. An essential aspect of my research will involve conducting comprehensive user studies and surveys to assess the tool's impact on enhancing data visualization competencies among the target audience. Through this, I aim to gather valuable insights into the tool's usability and effectiveness, enabling further refinements. The intended outcome of this project is a powerful and versatile tool that will be an invaluable asset for students, researchers, and professionals who regularly engage with data. By democratizing data visualization skills, I envisage empowering a broader audience to comprehend and creatively present complex data in a more meaningful and impactful manner.
Contributors: Narula, Jai (Author) / Bryan, Chris (Thesis advisor) / Seifi, Hasti (Committee member) / Bansal, Srividya (Committee member) / Arizona State University (Publisher)
Created: 2024

Description
Data visualization is essential for communicating complex information to diverse audiences. However, a gap persists between visualization design objectives and the understanding of non-expert users with limited experience. This dissertation addresses four challenges in designing for non-experts, collectively referred to as the D.U.C.K. bridge: (i) user unfamiliarity with DATA analysis domains, (ii) variation in user UNDERSTANDING mechanisms, (iii) catering to individual differences in CREATING visualizations, and (iv) promoting KNOWLEDGE synthesis and application. By developing human-driven principles and tools, this research aims to enhance visualization creation and consumption by non-experts. Leveraging linked interactive visualizations, this dissertation explores the iterative education of non-experts when navigating unfamiliar DATA realms. VAIDA guides crowd workers in creating better NLP benchmarks through real-time visual feedback. Similarly, LeaderVis allows users to interactively customize AI leaderboards and select model configurations suited to their application. Both systems demonstrate how visual analytics can flatten the learning curve associated with complex data and technologies. Next, this dissertation examines how individuals internalize real-world visualizations, either as images or as information. Experimental studies investigate the impact of design elements on perception across visualization types and styles, and an LSTM model predicts the framing of the recall process. The findings reveal mechanisms that shape the UNDERSTANDING of visualizations, enabling the design of tailored approaches to improve recall and comprehension among non-experts. This research also investigates how known design principles apply to CREATING visualizations for underrepresented populations. Findings reveal that multilingual individuals prefer varying text volumes depending on the annotation language, and that older age groups engage more emotionally with affective visualizations than younger age groups. Additionally, underlying cognitive processes, like mind wandering, affect recall focus. These insights guide the development of more inclusive visualization solutions for diverse user demographics. This dissertation concludes by presenting projects aimed at preserving the cognitive and affective KNOWLEDGE synthesized through visual analysis. The first project examines the impact of data visualizations in VR on personal viewpoints about climate change, offering insights for using VR in public scientific education. The second project introduces LINGO, which enables the creation of diverse natural language prompts for generative models across multiple languages, potentially facilitating custom visualization creation via streamlined prompting.
Contributors: Arunkumar, Anjana (Author) / Bryan, Chris (Thesis advisor) / Maciejewski, Ross (Committee member) / Baral, Chitta (Committee member) / Bae, Gi-Yeul (Committee member) / Arizona State University (Publisher)
Created: 2024