Matching Items (756)

Description

Critical flicker fusion thresholds (CFFTs) describe the frequency at which rapid amplitude modulations of a light source become undetectable, and they are thought to underlie a number of visual processing skills, including reading. Here, we compare the impact of two vision-training approaches, one involving contrast sensitivity training and the other directional dot-motion training, against an active control group trained on Sudoku. The three training paradigms were compared on their effectiveness for altering CFFT. Directional dot-motion and contrast sensitivity training resulted in significant improvement in CFFT, while the Sudoku group did not yield significant improvement. This finding indicates that dot-motion and contrast sensitivity training transfer similarly to effect changes in CFFT. The results, combined with prior research linking CFFT to higher-order cognitive processes such as reading ability, and with studies showing a positive impact of both dot-motion and contrast sensitivity training on reading, suggest a possible mechanistic link for how these different training approaches impact reading abilities.

Contributors: Zhou, Tianyou (Author) / Nanez, Jose (Author) / Zimmerman, Daniel (Author) / Holloway, Steven (Author) / Seitz, Aaron (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2016-10-26
Description

Although autism spectrum disorder (ASD) is a serious lifelong condition, its underlying neural mechanism remains unclear. Recently, neuroimaging-based classifiers for ASD and typically developed (TD) individuals were developed to identify the abnormality of functional connections (FCs). Due to over-fitting and interferential effects of varying measurement conditions and demographic distributions, no classifiers have been strictly validated for independent cohorts. Here we overcome these difficulties by developing a novel machine-learning algorithm that identifies a small number of FCs that separates ASD versus TD. The classifier achieves high accuracy for a Japanese discovery cohort and demonstrates a remarkable degree of generalization for two independent validation cohorts in the USA and Japan. The developed ASD classifier does not distinguish individuals with major depressive disorder and attention-deficit hyperactivity disorder from their controls but moderately distinguishes patients with schizophrenia from their controls. The results leave open the viable possibility of exploring neuroimaging-based dimensions quantifying the multiple-disorder spectrum.

Contributors: Yahata, Noriaki (Author) / Morimoto, Jun (Author) / Hashimoto, Ryuichiro (Author) / Lisi, Giuseppe (Author) / Shibata, Kazuhisa (Author) / Kawakubo, Yuki (Author) / Kuwabara, Hitoshi (Author) / Kuroda, Miho (Author) / Yamada, Takashi (Author) / Megumi, Fukuda (Author) / Imamizu, Hiroshi (Author) / Nanez, Jose (Author) / Takahashi, Hidehiko (Author) / Okamoto, Yasumasa (Author) / Kasai, Kiyoto (Author) / Kato, Nobumasa (Author) / Sasaki, Yuka (Author) / Watanabe, Takeo (Author) / Kawato, Mitsuo (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2016-04-14
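
The record above describes a classifier built from a small number of functional connections (FCs). As a purely illustrative stand-in (not the authors' actual algorithm), the sketch below shows how an L1-penalized logistic regression retains only a sparse subset of FC features; the data, dimensions, and regularization strength are synthetic assumptions.

```python
# Illustrative stand-in only: select a sparse set of functional connections (FCs)
# with an L1-penalized logistic classifier. Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_fcs = 180, 9730                   # e.g., 140 regions -> 140*139/2 FC pairs
X = rng.standard_normal((n_subjects, n_fcs))    # stand-in FC correlation features
y = rng.integers(0, 2, n_subjects)              # 0 = TD, 1 = ASD (synthetic labels)

# The L1 penalty drives most FC weights to exactly zero, leaving a sparse classifier.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} FCs retained out of {n_fcs}")
```
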
Description
Food waste is a prominent issue today in both environmental and economic terms, with households among the top contributors. The rise of artificial intelligence (AI) in consumer technology has created new opportunities to address everyday challenges such as food waste, and solutions are being developed to bridge the gap between sustainable living and accessible, everyday tools. This thesis explores the market viability and societal impact of FridgeScan AI, an AI-driven mobile application designed to help users manage the contents of their refrigerators. The application was designed to use image recognition and machine learning to scan food items, manage fridge contents, generate shopping lists, and provide recipe suggestions based on what users have in their fridge. This research builds on the FridgeScan AI capstone project developed at Arizona State University. It combines a literature review and a user survey to assess the current market for smart kitchen technologies, specifically user expectations and concerns; key themes include sustainability, cost savings, and data privacy. The survey was distributed to 53 individuals aged 18 and over to collect insights on the value, features, and ethical considerations of such an application. The survey results then informed the development of a simplified business model that analyzes potential revenue and deployment strategies. Ultimately, the goal is to assess user interest in tools that may help reduce food waste and improve household organization, and to understand how concerns around data collection and camera use could affect adoption. This thesis concludes by exploring both the potential and the challenges of adopting AI-based home applications and offers possible directions for further development and evaluation.
Contributors: Farias, Sabrina (Author) / Chavez Echeagaray, Maria Elena (Thesis director) / Lee, Quak Foo (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2025-05
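
As a purely illustrative aside on the recipe-suggestion feature mentioned in the abstract above, the sketch below ranks hypothetical recipes by how few ingredients are missing from the fridge; the recipes, items, and ranking rule are invented and are not FridgeScan AI's implementation.

```python
# Purely illustrative: suggest recipes from scanned fridge contents and note
# what would go on the shopping list. All data here is made up for the sketch.
RECIPES = {
    "veggie omelette": {"eggs", "bell pepper", "onion", "cheese"},
    "pasta marinara": {"pasta", "tomato", "garlic", "onion"},
    "fruit smoothie": {"banana", "yogurt", "strawberry"},
}

def suggest_recipes(fridge_items: set[str], max_missing: int = 1) -> list[tuple[str, set[str]]]:
    """Rank recipes by how few ingredients the user still needs to buy."""
    suggestions = []
    for name, ingredients in RECIPES.items():
        missing = ingredients - fridge_items
        if len(missing) <= max_missing:
            suggestions.append((name, missing))
    return sorted(suggestions, key=lambda s: len(s[1]))

fridge = {"eggs", "cheese", "onion", "tomato", "garlic", "pasta"}
for name, missing in suggest_recipes(fridge):
    print(name, "- add to shopping list:", missing or "nothing")
```
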
Description
Advancements in large language models (LLMs) have enabled the development of intelligent educational tools that support inquiry-based learning across technical domains. In cybersecurity education, where accuracy and safety are paramount, systems must go beyond surface-level relevance to provide information that is both trustworthy and domain-appropriate. To address this challenge, we introduce CyberBOT, a question-answering chatbot that leverages a retrieval-augmented generation (RAG) pipeline to incorporate contextual information from course-specific materials and validate responses using a domain-specific cybersecurity ontology. The ontology serves as a structured reasoning layer that constrains and verifies LLM-generated answers, reducing the risk of misleading or unsafe guidance. CyberBOT has been deployed in a large graduate-level course at Arizona State University (ASU), where more than one hundred students actively engage with the system through a dedicated web-based platform. Computational evaluations in lab environments highlight the potential capacity of CyberBOT, and a forthcoming field study will evaluate its pedagogical impact. By integrating structured domain reasoning with modern generative capabilities, CyberBOT illustrates a promising direction for developing reliable and curriculum-aligned AI applications in specialized educational contexts.
Contributors: De Maria, Riccardo (Author) / Liu, Huan (Thesis director) / Agrawal, Garima (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2025-05
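
The following sketch illustrates the general retrieval-plus-ontology-validation pattern the abstract above describes; the documents, ontology terms, and helper functions are hypothetical placeholders, not CyberBOT's actual pipeline or its LLM integration.

```python
# Minimal sketch of a RAG answer path gated by an ontology check.
# All content and names here are hypothetical, not CyberBOT's implementation.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    concepts: set[str]   # cybersecurity concepts this passage covers

COURSE_DOCS = [
    Document("Nmap performs network discovery and port scanning.", {"port scanning", "reconnaissance"}),
    Document("A firewall filters traffic according to an access-control policy.", {"firewall", "access control"}),
]
ONTOLOGY = {"port scanning", "reconnaissance", "firewall", "access control", "encryption"}

def retrieve(question: str, k: int = 1) -> list[Document]:
    """Toy retriever: rank course passages by word overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(COURSE_DOCS, key=lambda d: -len(words & set(d.text.lower().split())))
    return scored[:k]

def validate(answer_concepts: set[str]) -> bool:
    """Ontology layer: accept an answer only if every concept exists in the ontology."""
    return answer_concepts <= ONTOLOGY

docs = retrieve("How does a firewall control traffic?")
answer_concepts = set().union(*(d.concepts for d in docs))
if validate(answer_concepts):
    print("Answer grounded in:", docs[0].text)
else:
    print("Answer rejected by ontology check.")
```
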
Description
The efficient solution of large-scale linear systems, particularly those arising from sparse matrices, is fundamental to numerous applications in science, engineering, and machine learning. Direct methods, such as LU decomposition, offer robustness but face challenges related to computational cost and memory usage when applied naively to sparse problems, primarily due to the phenomenon of fill-in. This thesis investigates the practical performance characteristics of LU decomposition for sparse matrices, focusing on MATLAB's widely used built-in lu() function. A baseline comparison is first established by contrasting a custom LU implementation (myLU) designed for dense matrices with MATLAB's lu() applied to both dense and sparse inputs. Performance is evaluated based on execution time, numerical accuracy, and factor sparsity (fill-in). Subsequently, the thesis explores a key enabler of sparse algorithm efficiency: data storage schemes. The impact of Coordinate (COO), Compressed Sparse Row (CSR), and Compressed Sparse Column (CSC) formats on the performance of the fundamental Sparse Matrix-Vector Multiplication (SpMV) operation is experimentally analyzed and compared against MATLAB's optimized built-in SpMV. Results demonstrate that MATLAB's sparsity-aware lu() function significantly outperforms dense approaches in both speed and memory efficiency, via controlled fill-in, when handling sparse matrices. Furthermore, the SpMV analysis confirms the superior performance of compressed storage formats (CSR/CSC) over COO, while highlighting the exceptional optimization of MATLAB's internal routines. Collectively, these findings underscore the critical importance of utilizing both sparsity-aware algorithms and efficient underlying data structures for tackling large-scale sparse linear systems effectively.
Contributors: Vincentsundar, Vishal (Author) / Osburn, Steven (Thesis director) / Zhou, Ben (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2025-05
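
The thesis above works in MATLAB, but the same comparison can be sketched in Python with SciPy: a sparsity-aware LU factorization (SuperLU, which permutes columns to limit fill-in) plus a CSR-based sparse matrix-vector product. The matrix below is a random placeholder, not the thesis's test set.

```python
# Python/SciPy sketch of the ideas above (the thesis itself uses MATLAB's lu()):
# sparse LU with fill-reducing column permutations, and CSR-based SpMV.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 2000
A = (sp.random(n, n, density=0.002, random_state=0) + sp.identity(n)).tocsc()
b = np.ones(n)

# Sparse LU (SuperLU) permutes columns to limit fill-in in the L and U factors.
lu = splu(A)
x = lu.solve(b)
print("nnz(A) =", A.nnz, " nnz(L)+nnz(U) =", lu.L.nnz + lu.U.nnz)

# CSR SpMV touches only the stored nonzeros, unlike a dense matrix-vector product.
A_csr = A.tocsr()
print("residual norm:", np.linalg.norm(A_csr @ x - b))
```
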
Description
This thesis addresses the need to introduce foundational computer science concepts (particularly coding and data structures) to elementary and middle school students through an AI-enhanced curriculum. Existing educational resources often neglect structured teaching of data structures, which are a key element of computational thinking and problem-solving. This work bridges the educational gap by developing a curriculum that uses adaptive AI methodologies, gamification, and personalized instruction to create engaging, individualized learning experiences. The curriculum integrates interactive digital activities, adaptive learning paths, real-time feedback, and project-based scenarios. A proof-of-concept prototype demonstrates the practical application of AI-driven instructional techniques and highlights how adaptive challenges and personalized feedback can significantly enhance student engagement and conceptual understanding. The thesis also discusses practical implementation strategies, assessment methodologies, and ethical considerations surrounding the use of AI in educational contexts. By highlighting both potential benefits and limitations, this research provides valuable insights and recommendations for educators, curriculum developers, and policymakers aiming to integrate advanced technological tools effectively in early computer science education.
Contributors: Young, Danika (Author) / Zhu, Haolin (Thesis director) / Millman, Steven (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / School of Applied Sciences and Arts (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created: 2025-05
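
As a purely illustrative sketch of what an adaptive learning path can look like in code, the snippet below picks the next exercise level from recent quiz accuracy; the thresholds, topics, and levels are invented, not the curriculum's actual rules.

```python
# Illustrative only: a tiny rule-based "adaptive learning path" of the kind the
# record above describes. Thresholds and level names are assumptions.
def next_activity(topic: str, recent_scores: list[float]) -> str:
    """Pick the next exercise level for a topic from recent quiz accuracy (0-1)."""
    if not recent_scores:
        return f"{topic}: intro"
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 0.5:
        level = "intro"        # re-teach the concept with guided examples
    elif avg < 0.8:
        level = "practice"     # more exercises with real-time feedback
    else:
        level = "challenge"    # gamified, project-based scenario
    return f"{topic}: {level}"

print(next_activity("stacks and queues", [0.6, 0.7, 0.9]))
```
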
Description
This paper outlines the development of an efficient glossary navigation system that uses artificial intelligence to provide instant results through a dynamic search bar. The project uses several tools, including Python, Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript, and the Google Gemini API. Market research was conducted to ensure the glossary stands out from other products, as few competitors integrate AI. Unlike the traditional long, continuous pages of most competitors, this project displays only the words, showing definitions only when needed. The integration of the Google Gemini-powered dynamic search engine was successful, achieving the goal of eliminating manual navigation and simplifying the user experience with a streamlined, quick-access layout.
Contributors: Patel, Samir (Author) / Osburn, Steven (Thesis director) / Pokidaylo, Boris (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2025-05
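
A minimal sketch of the "words only, definitions on demand" behavior described above; the glossary entries are placeholders, and the definition lookup is stubbed out at the point where the project calls the Google Gemini API.

```python
# Sketch only: instant term matching for a dynamic search bar, with definitions
# fetched on demand. Glossary content and lookup_definition() are placeholders.
GLOSSARY = ["array", "API", "algorithm", "argument", "assembler"]

def instant_results(query: str) -> list[str]:
    """Return matching terms only, so the page never renders one long list of definitions."""
    q = query.strip().lower()
    return [term for term in GLOSSARY if q and q in term.lower()]

def lookup_definition(term: str) -> str:
    """Placeholder for the on-demand definition fetch (Gemini-backed in the project)."""
    return f"Definition of {term!r} would be generated here."

for term in instant_results("ar"):
    print(term, "->", lookup_definition(term))
```
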
Description
Maps are crucial in emergency response operations, offering vital geographical data that aids teams in navigation and strategy. However, during catastrophic events like forest fires or floods, which can sever major routes, maps can quickly become outdated. In such scenarios, search and rescue teams and other first responders may find themselves scrambling to navigate and plan without dependable maps, complicating their mission. Emergency response teams, specifically aerial imaging companies and wildfire teams, have a clear need to generate maps from aerial imagery. The conventional approach is to capture aerial images and accurately stitch them into a single composite, called an orthomosaic. Unfortunately, this process is both expensive and slow: the highly accurate sensors and cameras required for real-time map creation can cost hundreds of thousands of dollars, and where this equipment is not available, producing maps can require overnight processing. The result is inaccessibility and information that arrives too slowly for effective emergency response. This study investigates a cost-effective, efficient solution that uses standard cameras mounted on drones to capture aerial images and projects them over satellite data in real time. Using homography estimation, we evaluate two image-transformation methods: standard keypoint-based matching and camera-extrinsics estimation from EXIF data. Both methods have strengths and weaknesses that inform future developments, including a hybrid model that leverages the strengths of both. Ultimately, this work serves as a foundation for future research into real-time, cost-effective orthomosaic creation. Successfully implementing these methods would give emergency responders situational awareness, enhancing the speed and accuracy of critical missions.
Contributors: Kattenbraker, Luke (Author) / Nayak, Samik (Co-author) / Osburn, Steven (Thesis director) / O'Connor, Peter (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2025-05
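
The keypoint-based branch described above can be sketched with OpenCV as follows; the file names are placeholders, the parameter choices are illustrative, and the EXIF camera-extrinsics branch is not shown.

```python
# Hedged sketch: estimate a homography between a drone frame and a satellite
# reference tile with OpenCV, then project the frame into the tile's coordinates.
import cv2
import numpy as np

drone = cv2.imread("drone_frame.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder paths
satellite = cv2.imread("satellite_tile.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(drone, None)
kp2, des2 = orb.detectAndCompute(satellite, None)

# Brute-force Hamming matching for ORB's binary descriptors; keep the best matches.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the 3x3 homography.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the drone frame onto the satellite tile's frame of reference.
warped = cv2.warpPerspective(drone, H, (satellite.shape[1], satellite.shape[0]))
cv2.imwrite("drone_over_satellite.png", warped)
```
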
Description
Athletic Breakdown is an interactive and educational website detailing the nuances of the current NFL landscape, built with relevant computer science technologies and practices. The platform helps break down the strengths and weaknesses of each player and team in an easy-to-understand format for those looking to expand their knowledge of the game. It consists of a home page, player pages, and team pages. The home page contains relevant news and game scores. Each player and team page offers a high-level overview of strengths and weaknesses, with each aspect broken down into easier-to-understand terms. The front end is built with TypeScript, React, Vite, and TailwindCSS; the back end uses a PostgreSQL database with Supabase as a backend-as-a-service.
Contributors: Will, Adrian (Author) / Atkinson, Robert (Thesis director) / Osburn, Steven (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2025-05
Description
As interest in food and beverage education grows among consumers and professionals alike, there is increasing demand for learning tools that are accessible, interactive, and tailored to individual needs. Traditional methods such as static tasting guides or instructor-led classes often lack personalization and real-time feedback, limiting their impact on learners with varying experience levels. In response to this gap, this thesis presents the design, development, and evaluation of the Sip & Savor Study chatbot: an AI-powered virtual sommelier embedded within a mobile application that supports personalized beverage education. Built using React Native, FastAPI, Firebase, Render and OpenAI’s GPT-4 API, the chatbot delivers real-time recommendations and interactive learning experiences across four major beverage categories: wine, beer, saké, and cocktails. It dynamically adapts to user preferences (e.g., dietary needs, favorite drink types, and taste profiles) and supports contextual conversations that simulate expert guidance. The system architecture was developed through modular backend/frontend integration, and iteratively refined through usability feedback and internal testing cycles. A user study involving thirty participants at Arizona State University was conducted to evaluate the chatbot’s effectiveness. Results from post-interaction surveys showed high user satisfaction in areas such as response clarity, beverage recommendation accuracy, and conversational tone. Most users found the chatbot easy to use, educational, and engaging, while personalization features were well-received—though opportunities for refinement in response speed and interface clarity were identified. Updates made based on this feedback included onboarding instructions, improved preference visibility, and backend optimizations to reduce latency. This work demonstrates how generative AI models can be applied meaningfully in experiential learning contexts, particularly those requiring nuanced guidance and dynamic user engagement. The findings contribute to ongoing discussions about the role of large language models in education and present a scalable model for future AI-driven learning applications within lifestyle and hospitality domains.
Contributors: Lin, Waley (Author) / Echeagaray, Maria (Thesis director) / Ortiz, Michael (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2025-05
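
A minimal sketch of how a FastAPI backend might fold stored user preferences into a GPT-4 prompt, in the spirit of the system described above; the endpoint name, request schema, and prompt wording are assumptions, not the Sip & Savor Study app's actual code.

```python
# Sketch only: a FastAPI endpoint that personalizes a GPT-4 prompt with user
# preferences. Schema and prompt are assumptions; requires OPENAI_API_KEY.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()

class TastingRequest(BaseModel):
    category: str            # "wine", "beer", "sake", or "cocktail"
    preferences: list[str]   # e.g., ["dry", "citrus-forward", "no dairy"]
    question: str

@app.post("/recommend")
def recommend(req: TastingRequest) -> dict:
    """Fold the stored user preferences into the prompt so answers stay personalized."""
    system = ("You are a friendly virtual sommelier. "
              f"The user prefers {', '.join(req.preferences)} when choosing a {req.category}.")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": req.question}],
    )
    return {"answer": resp.choices[0].message.content}
```
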