Description
This project aspires to develop an AI capable of playing on a variety of maps in a Risk-like board game. While AI has been successfully applied to many other board games, such as Chess and Go, most research is confined to a single board and is inflexible to topological changes. Further, almost all of these games are played on a rectangular grid. In contrast, this project develops an AI player, referred to as GG-net, for the online strategy game Warzone, which is based on the classic board game Risk and is played on a wide variety of irregularly shaped maps. Prior research has struggled to create an effective AI for Risk-like games due to the immense branching factor. The most successful attempts tended to rely on manually restricting the set of actions the AI considered while also engineering useful features for it to weigh. GG-net uses no human knowledge; instead, it combines a genetic algorithm with a graph neural network. Together, these methods allow GG-net to perform competitively across a multitude of maps. GG-net outperformed the built-in rule-based AI by 413 Elo (representing an 80.7% chance of winning) and an AlphaZero-based approach using graph neural networks by 304 Elo (representing a 74.2% chance of winning). This advantage holds across both seen and unseen maps. GG-net appears to be a strong opponent on small and medium maps; however, on large maps with hundreds of territories, its inefficiencies become more significant and it struggles against the rule-based approach. Overall, GG-net successfully learned the game and generalized across maps of similar size, although further work is required for it to succeed on large maps.
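The abstract does not detail the training loop, but the core idea of evolving a network's weights with a genetic algorithm can be sketched as below. This is a minimal sketch under stated assumptions: the population size, mutation scale, selection rule, and fitness function (e.g., win rate with the weights loaded into the graph neural network) are illustrative placeholders, not GG-net's actual settings.

```python
import numpy as np

def evolve_weights(fitness_fn, dim, pop_size=50, generations=100, sigma=0.1):
    """Evolve a flat vector of network weights by selection and Gaussian mutation.

    fitness_fn(w) is assumed to return a score to maximize, e.g., a win rate
    measured by playing games with weights w (placeholder, not shown here).
    """
    pop = np.random.randn(pop_size, dim)
    for _ in range(generations):
        scores = np.array([fitness_fn(w) for w in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]  # keep the fittest quarter
        parents = elite[np.random.randint(len(elite), size=pop_size - len(elite))]
        children = parents + sigma * np.random.randn(*parents.shape)  # mutate copies
        pop = np.vstack([elite, children])
    scores = np.array([fitness_fn(w) for w in pop])
    return pop[np.argmax(scores)]
```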
Contributors: Bauer, Andrew (Author) / Yang, Yezhou (Thesis director) / Harrison, Blake (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description

The aim of this project is to understand the basic algorithmic components of the transformer deep learning architecture. At a high level, a transformer is a machine learning model that forgoes recurrence in favor of a self-attention mechanism, which can weigh the significant parts of sequential input data; this makes it very useful for solving problems in natural language processing and computer vision. Other architectures have been applied to these problems in the past (e.g., convolutional and recurrent neural networks), but they suffer from the vanishing gradient problem when an input becomes too long (which essentially means the network loses its memory and halts learning) and are slow to train in general. The transformer architecture's features enable a much better "memory" and a faster training time, making it the stronger choice for these problems. Most of this project will be spent producing a survey that captures the current state of research on the transformer, along with the background material needed to understand it. First, I will do a keyword search of the most well-cited and up-to-date peer-reviewed publications on transformers to understand them conceptually. Next, I will investigate the programming frameworks required to implement the architecture, and use them to implement a simplified version or to follow an accessible guide or tutorial. Once the programming aspect of the architecture is understood, I will implement a transformer based on the academic paper "Attention Is All You Need" and then slightly tweak this model, using my understanding of the architecture, to improve performance. Once finished, the details of the implementation (i.e., successes, failures, process, and inner workings) will be evaluated and reported, along with the fundamental concepts surveyed. The motivation behind this project is to explore the rapidly growing area of AI algorithms; the transformer in particular was chosen because it is a major milestone for engineering with AI and software. Since their introduction, transformers have provided a very effective way of solving natural language processing problems, allowing related applications to succeed with high speed while maintaining accuracy. The same type of model has since been applied to more cutting-edge applications, such as extracting semantic information from a text description and generating an image to satisfy it.
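The mechanism at the heart of that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch follows; the function name and array shapes are my own illustration, not the thesis's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) per "Attention Is All You Need".

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # attention-weighted sum of values
```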

Contributors: Cereghini, Nicola (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor)
Created: 2023-05
Description
This thesis project focused on determining the primary causes of flight delays within the United States and then building a machine learning model, using the collected flight data, to determine a more efficient flight route from Phoenix Sky Harbor International Airport in Phoenix, Arizona to Harry Reid International Airport in Las Vegas, Nevada. In collaboration with Honeywell Aerospace as part of the Ira A. Fulton Schools of Engineering Capstone Course, CSE 485 and 486, this project used open-source data from FlightAware and the United States Bureau of Transportation Statistics to identify five primary causes of flight delays and determine whether any of them could be addressed using machine learning. The machine learning model was a three-layer feedforward neural network focused on reducing the impact of late-arriving aircraft on the Phoenix-to-Las Vegas route. Evaluation metrics used to determine the efficiency and success of the model include Mean Squared Error (MSE), Mean Absolute Error (MAE), and R-squared score. The benefits of this project are wide-ranging for both consumers and corporations. Consumers can arrive at their destinations earlier than expected, giving them a better experience with the airline. In turn, the airline can take credit for the customer's satisfaction while also reducing fuel usage, making its flights more environmentally friendly. This project represents a meaningful contribution to the field of aviation, as it demonstrates that flights can be made more efficient through the use of open-source data.
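As a rough illustration of the kind of model described (a three-layer feedforward regressor evaluated with MSE, MAE, and R-squared), here is a minimal Keras sketch. The synthetic data, layer widths, and epoch count are placeholders standing in for the project's engineered flight features and actual configuration:

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Placeholder data standing in for engineered flight features and delay targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=1000)
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),   # predicted delay (e.g., in minutes)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=20, verbose=0)

pred = model.predict(X_test).ravel()
print(mean_squared_error(y_test, pred),
      mean_absolute_error(y_test, pred),
      r2_score(y_test, pred))
```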
Created: 2024-05
Description
The performance of modern machine learning algorithms depends upon the selection of a set of hyperparameters. Common examples of hyperparameters are the learning rate and the number of layers in a dense neural network. AutoML is a branch of optimization that has produced important contributions in this area. Within AutoML, multi-fidelity approaches, which eliminate poorly performing configurations after evaluating them at low budgets, are among the most effective. However, the performance of these algorithms depends strongly on how effectively they allocate the computational budget across hyperparameter configurations. We first present Parameter Optimization with Conscious Allocation 1.0 (POCA 1.0), a Hyperband-based algorithm for hyperparameter optimization that adaptively allocates the input budget to the hyperparameter configurations it generates, following a Bayesian sampling scheme. We then present its successor, Parameter Optimization with Conscious Allocation 2.0 (POCA 2.0), which follows POCA 1.0's successful philosophy while using a time-series model to reduce wasted computational cost and provide a more flexible framework. We compare POCA 1.0 and 2.0 to their nearest competitor, BOHB, at optimizing the hyperparameters of a multilayer perceptron, and find that both POCA algorithms outperform BOHB in low-budget hyperparameter optimization while performing similarly in high-budget scenarios.
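POCA's adaptive allocator is not reproduced here, but the baseline multi-fidelity scheme it builds on (evaluate many configurations at a small budget, keep the best fraction, and re-evaluate the survivors at a larger budget, as in successive halving, which Hyperband runs in brackets) can be sketched as follows. `sample_config` and `evaluate(config, budget)` are placeholders the caller supplies:

```python
def successive_halving(sample_config, evaluate, n=27, min_budget=1, eta=3):
    """Generic successive-halving allocation, a baseline multi-fidelity scheme.

    Evaluates n sampled configurations at min_budget, keeps the best 1/eta,
    multiplies the budget by eta, and repeats until one configuration remains.
    """
    configs = [sample_config() for _ in range(n)]
    budget = min_budget
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))  # lower loss first
        configs = ranked[: max(1, len(configs) // eta)]              # keep top 1/eta
        budget *= eta                                                # promote survivors
    return configs[0]
```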
Contributors: Inman, Joshua (Author) / Sankar, Lalitha (Thesis director) / Pedrielli, Giulia (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description

For my Honors Thesis, I created an artificial intelligence project to predict fantasy NFL football points for players and team defenses. I built a TensorFlow Keras regression model, a Flask API that serves the model, and a Django try-it page that lets the user exercise it; these services are hosted on ASU's AWS infrastructure. The Flask API actively gathers data from Pro-Football-Reference and then calculates the fantasy points. If the current year is 2022, for example, the model analyzes each player by training on all available data from 2000 to 2020, testing on the 2021 data, and predicting for the 2022 season. The Django website asks the user to input the current year; clicking the submit button runs the AI model and the process described above. The user then enters a player's name for the point prediction, and the website displays the last five rows, the first four being the player's previous fantasy points and the fifth being the prediction.
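A minimal sketch of how a trained Keras model might be served from a Flask API, as described; the route name, payload shape, and saved-model path are hypothetical, not the project's actual interface:

```python
import numpy as np
from flask import Flask, jsonify, request
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("fantasy_points.keras")  # hypothetical model path

@app.route("/predict", methods=["POST"])
def predict():
    # Hypothetical payload: {"features": [[...], ...]}, one numeric row per player.
    features = np.asarray(request.get_json()["features"])
    points = model.predict(features).ravel().tolist()
    return jsonify({"predicted_points": points})

if __name__ == "__main__":
    app.run()
```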

Contributors: Panikulam, Caleb (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-12
Description

With the rapid increase of technological capabilities, particularly in processing power and speed, the usage of machine learning is becoming increasingly widespread, especially in fields where real-time assessment of complex data is extremely valuable. This surge in popularity gives rise to an abundance of potential research projects that further broaden the applications of artificial intelligence, and from these opportunities comes the purpose of this thesis. Our work seeks to meaningfully increase our understanding of the current capabilities of machine learning and the problems it can solve. One extremely popular application of machine learning is data prediction, as machines are capable of finding trends that humans often miss. Our effort to this end was to examine the CVE dataset and attempt to predict future entries with Random Forests. The second area of interest lies in the great promise demonstrated by neural networks in the field of autonomous driving. We sought to understand the research being put out by the most prominent bodies in this field and to implement a model on one of the largest standing datasets, Berkeley DeepDrive 100k. This thesis describes our efforts to build, train, and optimize a Random Forest model on the CVE dataset and a convolutional neural network on the Berkeley DeepDrive 100k dataset, documenting these efforts with the goal of growing our knowledge of (and facility with) machine learning in these areas.
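For illustration, a minimal scikit-learn Random Forest of the kind described; the synthetic tabular features and numeric target are placeholders, since the thesis's actual CVE feature engineering is not specified here:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder data standing in for features derived from CVE entries
# (e.g., counts per period) and a numeric prediction target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 2 * X[:, 0] + rng.normal(size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # R^2 on held-out data
```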

Contributors: Selzer, Cora (Author) / Smith, Zachary (Co-author) / Ingram-Waters, Mary (Thesis director) / Rendell, Dawn (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
The goal of this project is to measure the effects of dynamic circuit technology within quantum neural networks. Quantum neural networks are neural networks that use quantum encoding and manipulation techniques to learn to solve a problem using quantum or classical data. In their current form, these networks are linear in nature, allowing no alternative execution paths; with dynamic circuits, they can be made nonlinear and can execute different paths. We measured the effects of these dynamic circuits on the training time, accuracy, and effective dimension of the quantum neural network across multiple trials to see the impact of the nonlinear behavior.
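For context, a dynamic circuit is one in which a mid-circuit measurement can steer which gates run next. A minimal Qiskit sketch of that branching primitive follows; it illustrates the mechanism only, not the thesis's quantum neural network:

```python
from qiskit import QuantumCircuit

# A measurement partway through the circuit feeds a classical condition
# that decides whether the next gate runs: the "dynamic" part.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.measure(0, 0)                        # mid-circuit measurement
with qc.if_test((qc.clbits[0], 1)):     # classically controlled branch
    qc.x(1)                             # runs only if qubit 0 measured as 1
qc.measure(1, 1)
```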
Contributors: Lynch, Brian (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-12
Description
Food safety is vital to the well-being of society; therefore, it is important to inspect food products to ensure that minimal health risks are present. A crucial phase of food inspection is the identification of foreign particles found in the sample, such as insect body parts. The presence of certain species of insects, especially storage beetles, is a reliable indicator of possible contamination during storage and food processing. However, the current approach to identifying species is visual examination by human analysts; this method is subjective and time-consuming, and confident identification requires extensive experience and training. To aid this inspection process, we have developed, in collaboration with FDA analysts, image-analysis-based machine intelligence that achieves species identification with up to 90% accuracy. The current project continues this development effort. Here we present an image analysis environment that allows practical deployment of the machine intelligence on computers with limited processing power and memory. Using this environment, users can prepare input sets by selecting images for analysis and can inspect these images through the integrated pan, zoom, and color analysis capabilities. After species analysis, a results panel lets the user compare the analyzed images with reference images of the proposed species. Future additions to this environment should include a log of previously analyzed images and, eventually, interaction with a central cloud repository of images through a web-based interface. Additional issues to address include standardization of image layout, extension of the feature-extraction algorithm, and use of image classification to build a central search engine for widespread use.
Contributors: Martin, Daniel Luis (Author) / Ahn, Gail-Joon (Thesis director) / Doupé, Adam (Committee member) / Xu, Joshua (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Feedback represents a vital component of the learning process and is especially important for Computer Science students. With class sizes that are often large, it can be challenging to provide individualized feedback to students. Consistent, constructive, supportive feedback through a tutoring companion can scaffold the learning process for students.

This work contributes to the construction of a tutoring companion designed to provide this feedback to students. It aims to bridge the gap between the messages the compiler delivers and the support a novice student needs to understand the problem and fix their code. In particular, it supports students learning about recursion in a beginning university Java programming course. Besides providing affective support, a tutoring companion may be more effective when it is embedded in the environment the student is already using, rather than being an additional tool the student must learn. The proposed Tutoring Companion is embedded in the Eclipse Integrated Development Environment (IDE).

This thesis focuses on the reasoning model for the Tutoring Companion, which is developed as a neural network. While a student uses the IDE, the Tutoring Companion collects 16 data points, including the presence of certain keywords, cyclomatic complexity, and error messages from the compiler, every time it detects an event in the IDE, such as a run attempt, a debug attempt, or a request for help. These data points serve as inputs to the neural network, which produces a single output code corresponding to the feedback to be provided to the student, displayed in the IDE.
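As an illustration of the reasoning model's shape (16 inputs mapped to one categorical feedback code), a minimal Keras sketch follows; the number of feedback codes and the layer sizes are assumptions, not the thesis's actual architecture:

```python
from tensorflow import keras

N_FEEDBACK_CODES = 10  # assumption; the abstract does not state the number of codes
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),  # 16 IDE-derived features
    keras.layers.Dense(N_FEEDBACK_CODES, activation="softmax"),    # one feedback code out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```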

The effectiveness of the approach is examined with 38 Computer Science students, who solve a programming assignment while the Tutoring Companion assists them. Data is collected from these interactions, including all inputs and outputs for the neural network, and students are surveyed regarding their experience. Results suggest that students feel supported while working with the Companion and show promising potential for using a neural network with an embedded companion in the future. Challenges in developing an embedded companion are discussed, as well as opportunities for future work.
Contributors: Day, Melissa (Author) / Gonzalez-Sanchez, Javier (Thesis advisor) / Bansal, Ajay (Committee member) / Mehlhase, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms and to determine whether the resulting systems prove successful. Even today, many research initiatives continue to develop new models in the hope of discovering solutions to problems such as autonomous driving or determining the emotional value of a single sentence. One currently popular research topic in Machine Learning is the development of Facial Expression Recognition systems: models that classify images of human faces by the emotions their facial expressions convey. To develop effective models for Facial Expression Recognition, researchers have turned to Deep Learning models, a more advanced form of Machine Learning known as Neural Networks. More specifically, Convolutional Neural Networks, Deep Learning models capable of processing visual data such as images and videos, have proven the most effective at classifying images of various facial expressions. For this project, I focused on learning the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network previously developed in a recommended research paper.
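A minimal convolutional network of the kind used for facial expression recognition, sketched in Keras; the 48x48 grayscale input and seven emotion classes follow the common FER-2013 setup and are assumptions, not necessarily the configuration of the paper this project reproduced:

```python
from tensorflow import keras

# Small CNN: two conv/pool stages extract spatial features from the face
# image, then dense layers classify it into one of seven emotion classes.
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(7, activation="softmax"),  # 7 basic emotions (assumption)
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```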
Contributors: Frace, Douglas R (Author) / Demakethepalli Venkateswara, Hemanth Kumar (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05