For my Honors Thesis, I decided to create an Artificial Intelligence project to predict fantasy NFL football points for players and team defenses. I created a TensorFlow Keras regression model, a Flask API that serves the model, and a Django Try-It page that lets the user run it. These services are hosted on ASU's AWS service. The Flask API actively gathers data from Pro-Football-Reference and then calculates the fantasy points. If the current year is 2022, for example, the model analyzes each player, trains on all available data from 2000 to 2020, tests on the 2021 data, and predicts for the 2022 season. The Django website asks the user to input the current year; clicking the submit button runs the AI model through the process described above. Next, the user enters a player's name for the point prediction, and the website displays the last five rows, with the first four showing previous fantasy points and the fifth showing the prediction.
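The year-based train/test/predict split described above can be sketched in a few lines. This is an illustrative sketch, not the thesis code: the function name and the row format (dicts with a "year" key) are assumptions.

```python
def temporal_split(rows, current_year, start_year=2000):
    """Split season rows into train/test/predict sets by year.

    Train on start_year..current_year-2, test on current_year-1,
    and predict for current_year, mirroring the split described above.
    """
    train = [r for r in rows if start_year <= r["year"] <= current_year - 2]
    test = [r for r in rows if r["year"] == current_year - 1]
    predict = [r for r in rows if r["year"] == current_year]
    return train, test, predict

rows = [{"year": y} for y in range(2000, 2023)]
train, test, predict = temporal_split(rows, 2022)
print(len(train), len(test), len(predict))  # → 21 1 1
```

For current year 2022 this yields twenty-one training seasons (2000 through 2020), one test season (2021), and one season to predict (2022), matching the example in the abstract.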
Narrative generation is an important field due to the high demand for stories in video game design and in learning tools used in the classroom. Because these stories should contain depth, they should ideally be more descriptive. There are tools that help with the creation of these stories, such as planning, which requires a domain as input, or GPT-3, which requires an input prompt to generate the stories. However, other aspects to consider are the coherence and variation of stories. To save time and effort and to create multiple possible stories, we combined planning and the Large Language Model (LLM) GPT-3, similar to how they were used in TattleTale, to generate such stories while examining whether descriptive input prompts to GPT-3 affect the outputted stories. The stories generated are readable to the general public, and overall the prompts do not consistently affect the descriptiveness of outputs across all stories tested. For this work, three stories with three variants each were created and tested for descriptiveness. To do so, adjectives, adverbs, prepositional phrases, and subordinating conjunctions were counted using the Natural Language Processing (NLP) tool spaCy for Part Of Speech (POS) tagging. This work has shown that descriptiveness is highly correlated with the number of words in the story in general, so running GPT-3 to obtain longer stories is a feasible option for obtaining more descriptive stories. The limitations of GPT-3 affect the descriptiveness of the resulting stories due to GPT-3's inconsistency and transformer architecture, and other methods of narrative generation, such as simple planning, could be more useful.
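The counting step above can be sketched as follows. The thesis's exact procedure is not reproduced here; this sketch counts spaCy's Universal POS categories for adjectives, adverbs, adpositions (as a stand-in for prepositional-phrase heads), and subordinating conjunctions over (token, tag) pairs such as a spaCy pipeline would produce. The example sentence and function name are invented.

```python
# Universal POS tags (as used by spaCy) that mark descriptive language:
# adjectives, adverbs, adpositions (heads of prepositional phrases),
# and subordinating conjunctions.
DESCRIPTIVE_TAGS = {"ADJ", "ADV", "ADP", "SCONJ"}

def descriptiveness(tagged_tokens):
    """Count descriptive tokens in a list of (text, pos) pairs.

    With spaCy, the pairs would come from a tagged Doc, e.g.:
        nlp = spacy.load("en_core_web_sm")
        tagged_tokens = [(t.text, t.pos_) for t in nlp(story)]
    """
    return sum(1 for _, pos in tagged_tokens if pos in DESCRIPTIVE_TAGS)

tagged = [("The", "DET"), ("dark", "ADJ"), ("forest", "NOUN"),
          ("slowly", "ADV"), ("faded", "VERB"), ("into", "ADP"),
          ("mist", "NOUN")]
print(descriptiveness(tagged))  # → 3 (dark, slowly, into)
```

Counting ADP tokens only approximates prepositional phrases (each phrase has one adposition head); dependency parsing would be needed to count the phrases themselves.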
Artistic expression can be made more accessible through the use of technological interfaces such as auditory analysis, generative artificial intelligence models, and the simplification of complicated systems, providing a way for human-driven creativity to serve as an input that allows users to creatively express themselves. Studies and testing were done with industry-standard performance technology and protocols to create an accessible interface for creative expression. Artificial intelligence models were created to generate art based on simple text inputs. Users were then invited to display their creativity using the software, and a comprehensive performance showcased the potential of the system for artistic expression.
In the age of information, collecting and processing large amounts of data is an integral part of running a business. From training artificial intelligence to driving decision making, the applications of data are far-reaching. However, many types of data are difficult to process, namely unstructured data. Unstructured data is “information that either does not have a predefined data model or is not organized in a pre-defined manner” (Balducci & Marinova 2018). Such data are difficult to put into spreadsheets and relational databases due to their lack of numeric values and often come in the form of text fields written by consumers (Wolff, R. 2020). The goal of this project is to help in the development of a machine learning model to aid CommonSpirit Health and ServiceNow, which is why this approach using unstructured data was selected. This paper provides a general overview of the process of unstructured data management and explores some existing implementations and their efficacy. It then discusses our approach to converting unstructured cases into usable data, which were used to develop an artificial intelligence model estimated to be worth $400,000 and to save CommonSpirit Health $1,200,000 in organizational impact.
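The project's actual conversion pipeline is not detailed in this abstract, but one common way to turn unstructured text fields into model-ready numeric features is TF-IDF vectorization, sketched below with scikit-learn. The example case descriptions are invented stand-ins for real support tickets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented free-text case descriptions standing in for real tickets.
cases = [
    "printer offline in radiology wing",
    "password reset needed for nurse portal",
    "portal login fails after password reset",
]

# Each case becomes a row of numeric TF-IDF weights, one column per
# distinct term, making the text usable by a downstream ML model.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cases)
print(X.shape)  # (3 cases, number of distinct terms)
```

In practice this is only the first step; the resulting matrix would feed a classifier or clustering model that routes or categorizes cases.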
With the increasing presence and importance of machine learning, artificial intelligence, and big data in our daily lives, there comes the necessity to re-evaluate how magical, enchanted lines of thinking may or may not survive alongside the turn of the century. There exists a set of connections between magic and the aforementioned field of technology, in that this field has the potential to become sufficiently advanced and complex as to cause unpredictable problems down the line. This discussion explores several topics, ranging from comparisons between magic and technology to the dangers of these systems being “black box” and ambiguous in how they turn data input into prediction output. All of these topics are central to the idea that this increasingly tech-focused world should be thought about in a magical, re-enchanted way, especially as legislation is drafted and decided upon that will determine how these impressive new technologies are regulated going forward.
This work focuses on combining multiple technologies to produce a scalable, full-stack music generation and sharing application meant to be deployed to a cloud environment while keeping operating costs as low as possible. The key feature of this app is that it allows users to generate tracks from scratch by providing a text description, or to customize existing tracks by supplying both an audio file and a track description. Users can share these tracks with other users via this app so that they can collaborate, jumpstart their creative process, and produce more content for their fans. A web app, Contak, was developed. This application requires a database, a REST API, object storage, music generation artificial intelligence models, and a web front end (GUI) to interact with the user. To identify the best music generation model, a small exploratory study was conducted to compare the quality of different music generation models, including MusicGen, MusicLM, and Riffusion. Results found that MusicGen, the model selected for this work, outperformed the competing MusicLM and Riffusion models. This exploratory study ranked the three models on how well each adhered to a text description of a track. The purpose was to test the hypothesis that MusicGen produces higher quality music that adheres to text descriptions better than other models because it encodes audio at a higher sampling rate (32 kHz). While the web app generates high quality tracks with above-average text adherence, the main limitation of this work is the response time needed to generate tracks from existing audio on the currently available backend infrastructure, as this can take up to 7 minutes. In the future, this app can be deployed to a cloud environment with GPU acceleration to improve response times and throughput.
Additionally, new methods of input besides text and audio input can be implemented using MIDI instructions and the Magenta music model, providing increased track generation precision for advanced music creators with MIDI experience.
Navigation for the visually impaired and blind remains a major barrier to
independence. Existing assistive tools like guide dogs or white mobility canes provide
limited, immediate information within a range of about 5 feet. Alternatively, assistive
applications for navigation only provide static, generalizable information about a
broader area that could be a few hundred feet radius to miles. Currently, no solution
effectively covers the 5 to 20 feet range, leaving users without crucial information
about their surroundings in this mid-distance area. This project explores the potential
of state-of-the-art vision-language models (VLMs) to provide new navigation solutions
for the visually impaired and blind that bridge the aforementioned gap in information
about the environment. VLMs prove capable of identifying key objects and reasoning
from corresponding text and images in real time, making them the ideal candidate for
assistive technology. Leveraging these capabilities, these models may be integrated
into wearable or extendable devices that allow users to receive continuous support in
unfamiliar environments, improving their independence and maintaining safety. This
project investigates the practical application of VLMs in real-world scenarios, with
an emphasis on ease of use and reliability. This work has the potential to expand the
role of assistive technology in daily life and complement existing solutions for more
intuitive and responsive understanding.
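The mid-distance gap described above can be illustrated with a minimal filtering step: of the objects a VLM identifies (paired with some distance estimate), announce only those in the under-served 5-to-20-foot band. This is an illustrative sketch, not the project's pipeline; the function name, data format, and example detections are assumptions.

```python
MIN_FT, MAX_FT = 5.0, 20.0  # the mid-range gap existing tools leave uncovered

def mid_range_objects(detections):
    """Keep detections whose estimated distance falls in the 5-20 ft band.

    Each detection is a (label, distance_ft) pair, as might be produced by
    pairing a VLM's object descriptions with a depth or range estimate.
    Nearer objects are already covered by a cane or guide dog, and farther
    ones by map-based navigation apps.
    """
    return [(label, d) for label, d in detections if MIN_FT <= d <= MAX_FT]

detections = [("curb", 3.0), ("bench", 8.5), ("doorway", 18.0), ("bus stop", 120.0)]
print(mid_range_objects(detections))  # → [('bench', 8.5), ('doorway', 18.0)]
```

A wearable implementation would run this continuously over the camera stream and convert the surviving detections to audio or haptic feedback.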
The rapid growth of published research has increased the time and energy researchers invest in
literature review to stay updated in their field. While existing research tools assist with organizing papers, providing basic summaries, and improving search, there is a need for an assistant that copilots researchers to drive innovation. In response, we introduce buff, a research assistant framework employing large language models to summarize papers, identify research gaps and trends, and recommend future directions based on semantic analysis of the literature landscape, Wikipedia, and the broader internet. We demo buff through a user-friendly chat interface, powered by a citation network encompassing over 5600 research papers, amounting to over 133 million tokens of textual information. buff utilizes a network structure to fetch and analyze factual scientific information semantically. By streamlining the literature review and scientific knowledge discovery process, buff empowers researchers to concentrate their efforts on pushing the boundaries of their fields, driving innovation, and optimizing the scientific research landscape.
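To make the citation-network idea concrete, the sketch below builds a toy directed citation graph and ranks papers by how often they are cited, a simple proxy for which papers to fetch and analyze first. This is not buff's implementation; the paper IDs are invented, and buff's real network spans over 5600 papers.

```python
import networkx as nx

# A toy citation network: an edge A -> B means paper A cites paper B.
G = nx.DiGraph()
G.add_edges_from([
    ("paper_A", "paper_C"), ("paper_B", "paper_C"),
    ("paper_A", "paper_D"), ("paper_B", "paper_D"),
    ("paper_C", "paper_D"),
])

# Rank papers by in-degree (citation count within the network) as a
# rough measure of influence when prioritizing literature to analyze.
ranked = sorted(G.nodes, key=G.in_degree, reverse=True)
print(ranked[0])  # → paper_D (cited three times)
```

A production system would combine such structural signals with the semantic analysis the abstract describes, rather than relying on citation counts alone.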
Machine learning continues to grow in applications, and its influence is felt across the world. This paper builds on the foundations of machine learning for sports analysis, and its specific implementations in tennis, by attempting to predict the winner of ATP men’s singles tennis matches. Tennis provides a unique challenge due to the individual nature of singles and the varying career lengths, experiences, and backgrounds of players from around the globe. Related work has explored prediction with features such as rank differentials, physical characteristics, and past performance. This work expands on those studies by including raw player statistics and relevant environment features. State-of-the-art models such as LightGBM and XGBoost, as well as a standard logistic regression, are trained and evaluated against a dataset containing matches from 1991 to 2023. All models surpassed the baseline, and each has its own strengths and weaknesses. Future work may involve expanding the feature space to include more robust features such as player profiles and ELO ratings, as well as utilizing deep neural networks to improve understanding of past player performance and better comprehend the context of a given match.
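The logistic regression baseline mentioned above can be sketched on synthetic data. This is not the paper's dataset or feature set: the single rank-differential feature and the win-probability relationship are invented to show the shape of the approach, where a lower relative rank makes a win more likely.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic feature: rank differential (player 1 rank minus player 2 rank).
# Label: 1 if player 1 won. Wins are more likely when the differential is
# negative (player 1 is ranked better), plus some noise for upsets.
rank_diff = rng.normal(0.0, 50.0, size=(500, 1))
y = (rank_diff[:, 0] + rng.normal(0.0, 30.0, size=500) < 0).astype(int)

model = LogisticRegression().fit(rank_diff, y)
accuracy = model.score(rank_diff, y)
print(round(accuracy, 2))  # well above the 0.5 coin-flip baseline
```

In the actual study, the same fit/score pattern would run over real match features with a proper train/test split, and gradient-boosted models such as LightGBM and XGBoost would be evaluated alongside it.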