Matching Items (151)

Description

Speedsolving, the art of solving twisty puzzles like the Rubik's Cube as fast as possible, has recently benefitted from the arrival of smartcubes, which have special hardware for tracking the cube's face turns and transmitting them via Bluetooth. However, due to their embedded electronics, existing smartcubes cannot be used in competition, reducing their utility in personal speedcubing practice. This thesis proposes a sound-based design for tracking the face turns of a standard, non-smart speedcube, consisting of an audio-processing receiver in software and a small physical speaker configured as a transmitter. Special attention has been given to ensuring that installing the transmitter requires only a reversible centercap replacement on the original cube. This allows the cube to benefit from smartcube features during practice while still maintaining compliance with competition regulations. Within a controlled test environment, the software receiver perfectly detected a variety of transmitted move sequences. Furthermore, all components required for the physical transmitter were demonstrated to fit within the centercap of a Gans 356 speedcube.
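The abstract does not specify the audio encoding, but a tone-per-move scheme is one plausible design. Below is a minimal, hedged sketch of a software receiver for such a scheme: each face turn is assumed to be transmitted as a short pure tone, and the receiver picks the dominant FFT bin per frame. The frequency table, sample rate, and window size are all hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical tone table: each face turn is assumed to be sent as a short
# pure tone. The real encoding used in the thesis is not specified here.
MOVE_TONES = {2000.0: "U", 2500.0: "R", 3000.0: "F",
              3500.0: "D", 4000.0: "L", 4500.0: "B"}
SAMPLE_RATE = 44100   # Hz (assumed)
WINDOW = 2048         # samples per analysis frame (assumed)

def dominant_tone(frame):
    """Return the strongest frequency (Hz) in one windowed audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return freqs[int(np.argmax(spectrum))]

def decode_moves(audio, tolerance=100.0):
    """Map each frame's dominant tone to the nearest known move tone.
    Debouncing of repeated frames is omitted for brevity."""
    moves = []
    for start in range(0, len(audio) - WINDOW + 1, WINDOW):
        f = dominant_tone(audio[start:start + WINDOW])
        nearest = min(MOVE_TONES, key=lambda t: abs(t - f))
        if abs(nearest - f) < tolerance:  # ignore frames with no clear tone
            moves.append(MOVE_TONES[nearest])
    return moves
```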

Contributors: Hale, Joseph (Author) / Heinrichs, Robert (Thesis director) / Li, Baoxin (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2022-05
Description

The scientific manuscript review stage is a key part of the modern scientific process. It involves rigorous evaluation of new papers by peers to assess the significance of contributions in a particular area of study and to ensure that papers meet high standards. This process helps maintain the quality and credibility of research. However, some reviews can be toxic or overly discouraging, leading to unintentional psychological damage (such as anxiety or depression) for paper authors and detracting from the constructive tone of the review space. This Thesis/Creative Project was completed alongside a capstone project that aims to address this issue. The goal is to fine-tune a Large Language Model (LLM) that can first accurately identify toxic sentences within a paper review. The LLM then revises any toxic sentences in a way that maintains the criticism but delivers it in a friendlier, more encouraging tone. To be used effectively, the LLM requires a Graphical User Interface (GUI) so that end users (such as editors, associate editors, and reviewers) can easily interact with it. This allows them to update the wording of the review in an effective manner while maintaining scientific integrity. While the GUI provides a user-friendly interface for interacting with the LLM, there are technical challenges in running an LLM application in a web-based framework. LLMs are computationally expensive to run: they require significant GPU RAM, which can be a limiting factor, especially in a web-based framework with limited resources. One potential solution is model quantization, which can reduce the memory footprint of the model. However, quantization introduces the problem of model drift, as the model's performance may degrade when quantized; this drift must be measured to ensure the model continues to provide accurate results.
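The memory problem described above is commonly tackled with post-training quantization. As an illustration only, not the capstone's actual code, here is how a causal LLM might be loaded in 4-bit precision with the Hugging Face transformers and bitsandbytes libraries; the model name is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "your-org/review-detox-llm"  # placeholder, not the project's model

# 4-bit quantization roughly quarters the weight memory relative to fp16,
# at some risk of the "model drift" the abstract warns about.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)
```

Whether the quantized model still flags and rewrites toxic sentences accurately is exactly the drift measurement the abstract calls for.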
Contributors: Ramalingame, Hari (Author) / Banerjee, Imon (Thesis director) / Li, Baoxin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description

Cornhole, traditionally seen as tailgate entertainment, has risen rapidly in popularity since the launch of the American Cornhole League (ACL) in 2016. However, the sport lacks robust quality control over large tournaments, since many matches are scored and refereed by the players themselves. In the past, entire competition brackets have had to be scrapped and replayed because scores were not handled correctly. The sport needs a supplementary scoring solution that can provide quality control and accuracy in large tournaments where there aren't enough referees present to score every game. Drawing from the ACL regulations as well as personal experience and testimony from ACL Pro players, a list of requirements was generated for a potential automatic scoring system. A market analysis of existing scoring solutions was then conducted, and it found that no solution on the market can automatically score a cornhole game. Using the problem requirements and previous attempts to solve the scoring problem, a list of concepts was generated and the concepts were evaluated against one another to determine which scoring system design should be developed. After determining that the chosen concept was the best approach to the problem, the problem requirements and cornhole rules were further refined into a set of physical assumptions and constraints about the game itself. These informed the choice, structure, and implementation of the algorithms that score the bags. The prototype was tested on its own, and areas of improvement were found. Lastly, based on the test results and lessons from the engineering process, a roadmap was set out for developing the automatic scoring system into a full, market-ready product.
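The abstract does not reproduce the scoring algorithms themselves, but the ACL's cancellation scoring rule (3 points for a bag in the hole, 1 for a bag on the board, with the teams' round totals offsetting each other) reduces to a short function. The following sketch assumes the per-bag classifications are already supplied by the detection system.

```python
IN_HOLE, ON_BOARD, OFF = 3, 1, 0  # ACL-style point values per bag

def round_score(team_a_bags, team_b_bags):
    """Cancellation scoring: the teams' raw round totals offset each
    other, and only the difference is awarded, to the higher team."""
    a, b = sum(team_a_bags), sum(team_b_bags)
    return (a - b, 0) if a >= b else (0, b - a)

# Example round: A puts two bags in the hole and one on the board (7 raw
# points); B puts one in the hole (3 raw points). A nets 4, B nets 0.
print(round_score([IN_HOLE, IN_HOLE, ON_BOARD, OFF],
                  [IN_HOLE, OFF, OFF, OFF]))  # -> (4, 0)
```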

Contributors: Gillespie, Reagan (Author) / Sugar, Thomas (Thesis director) / Li, Baoxin (Committee member) / Barrett, The Honors College (Contributor) / Engineering Programs (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created: 2023-05
Description

My thesis focuses on improving enemy intelligence in 3D games. The development of reactive yet unpredictable agents is vital to creating interactive and immersive gameplay. I attempted to achieve this through two approaches: a machine-learning model and fuzzy logic that simulates enemy personalities. The machine-learning model I developed aimed to create adaptive agents that learn from their environment, while the fuzzy-logic state machine adds variance to enemy behaviors, creating more challenging opponents. My machine-learning approach involved implementing a Python-based machine-learning package within the Unity game engine to learn various games. Fuzzy logic was integrated by giving each enemy instance a personality matrix that governs the flow of its state machine. I encountered a variety of problems when trying to train my machine-learning model but was still able to learn about its potential applications. My work with fuzzy logic showed great promise in creating a better gaming experience through more dynamic enemies. I conclude by emphasizing the potential of these approaches to enhance the gaming experience and the importance of continued research into improving enemy intelligence.
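As a hedged illustration of the personality-matrix idea, the sketch below biases a state machine's transitions by per-enemy trait weights. Strictly, this is a simplified weighted-random stand-in for fuzzy inference; the state names and trait values are invented for this example, and the thesis's Unity implementation is not reproduced here.

```python
import random

# Illustrative traits in [0, 1]; not the thesis's actual personality matrix.
personality = {"aggression": 0.8, "caution": 0.3, "curiosity": 0.5}

def next_state(current, player_visible):
    """Weight candidate transitions by personality traits, then sample,
    so two enemies in the same situation can behave differently."""
    if current == "patrol" and player_visible:
        weights = {"attack": personality["aggression"],
                   "take_cover": personality["caution"],
                   "investigate": personality["curiosity"]}
    else:
        weights = {"patrol": 1.0}
    states, w = zip(*weights.items())
    return random.choices(states, weights=w, k=1)[0]

print(next_state("patrol", player_visible=True))
```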
Contributors: Shaw, Nicholas (Author) / Li, Baoxin (Thesis director) / Selgrad, Justin (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description

Standardization is sorely lacking in the field of musical machine learning. This thesis project endeavors to contribute to this standardization by training three machine learning models on the same dataset and comparing them using the same metrics. The music-specific metrics utilized provide more relevant information for diagnosing the shortcomings of each model.

Contributors: Hilliker, Jacob (Author) / Li, Baoxin (Thesis director) / Libman, Jeffrey (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2021-12
Description

With the advent of the Internet, the amount of data added online is increasing at an enormous rate. Though search engines use information retrieval (IR) techniques to serve users' search requests, the results often do not match the intent of the query, and users have to wade through several web pages before reaching the one they wanted. This problem of information overload can be addressed with automatic text summarization. Summarization is the process of producing an abridged version of a document so that the user can quickly see what the document is about. Email threads from the W3C corpus are used in this system. Apart from common IR features like term frequency and inverse document frequency, the system implements Term Rank, a variation of PageRank based on a graph model that can cluster words with respect to word ambiguity. Term Rank also considers the possibility of word co-occurrence within the corpus and ranks each word accordingly. The sentences of an email thread are ranked according to these features, and summaries are generated. The system applies the concept of pyramid evaluation to content selection, and can be considered a framework for unsupervised learning in text summarization.
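The Term Rank variant is specific to this thesis, but the feature-based sentence ranking it builds on can be sketched with a plain TF-IDF scorer. The tokenization and scoring details below are illustrative assumptions, not the thesis's implementation.

```python
import math
import re
from collections import Counter

def summarize(text, n_sentences=3):
    """Rank sentences by mean TF-IDF of their words (a baseline only;
    the thesis additionally uses a graph-based Term Rank not shown here)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    docs = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency

    def score(doc):
        if not doc:
            return 0.0
        tf = Counter(doc)
        return sum((tf[w] / len(doc)) * math.log(n / df[w]) for w in tf) / len(tf)

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    keep = sorted(ranked[:n_sentences])  # restore original sentence order
    return " ".join(sentences[i] for i in keep)
```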
Contributors: Nadella, Sravan (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

One of the most remarkable outcomes of the web's evolution into Web 2.0 has been the propelling of blogging into a widely adopted and globally accepted phenomenon. While the unprecedented growth of the Blogosphere has added diversity and enriched the media, it has also added complexity. To cope with the relentless expansion, many enthusiastic bloggers have embarked on voluntarily writing, tagging, labeling, and cataloguing their posts in hopes of reaching the widest possible audience. Unbeknownst to them, this reaching-for-others process triggers the generation of a new kind of collective wisdom, a result of shared collaboration and the exchange of ideas, purposes, and objectives through the formation of associations, links, and relations. A mastery of the Blogosphere can greatly help meet the needs of this ever-growing number of users, as well as producers, service providers, and advertisers, by facilitating the categorization and navigation of this vast environment. This work explores a novel method to leverage the collective wisdom embedded in the label space for blog search and discovery. The work demonstrates that this wisdom space provides a unique and desirable framework for discovering the highly sought-after background information that can aid in building classifiers. This insight is incorporated into the construction of a better clustering of blogs, which boosts the performance of classifiers in identifying more relevant labels for blogs and offers a mechanism for replacing spurious labels and mislabels in a multi-labeled space.
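As a rough sketch of the clustering-over-labels idea, the snippet below groups blogs by the similarity of their user-supplied tags using scikit-learn. The sample tag sets and cluster count are invented, and the thesis's actual clustering method may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented label sets; in the thesis these come from bloggers' own tags.
blog_labels = [
    "python programming tutorials open-source",
    "recipes baking desserts cooking",
    "machine-learning data-science python",
    "travel food restaurants recipes",
]

X = TfidfVectorizer().fit_transform(blog_labels)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # blogs grouped by shared label vocabulary
```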
Contributors: Galan, Magdiel F (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Blur is an important attribute in the study and modeling of the human visual system. In this work, 3D blur discrimination experiments are conducted to measure the just-noticeable additional blur required to differentiate a target blur from a reference blur level. Past studies on blur discrimination measured the sensitivity of the human visual system to blur using 2D test patterns. In this dissertation, subjective tests are performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. The results indicate that, in the symmetric stereo viewing case, binocular disparity does not affect the blur discrimination thresholds for the selected 3D test patterns. In the asymmetric viewing case, the blur discrimination thresholds decreased, and the decrease is found to be dominated by the eye observing the higher blur.

The second part of the dissertation focuses on texture granularity in the context of 2D images. A texture granularity database referred to as GranTEX, consisting of textures with varying granularity levels, is constructed. A subjective study is conducted to measure the perceived granularity level of the textures in the GranTEX database. An objective index that automatically measures the perceived granularity level of textures is also presented. It is shown that the proposed granularity metric correlates well with the subjective granularity scores and outperforms the other methods presented in the literature.

A subjective study is conducted to assess the effect of compression on textures with varying degrees of granularity. A logarithmic function model is proposed as a fit to the subjective test data. It is demonstrated that the proposed model can be used for rate-distortion control by allowing the automatic selection of the needed compression ratio for a target visual quality. The proposed model can also be used for visual quality assessment by providing a measure of the visual quality for a target compression ratio.
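The abstract names a logarithmic model but not its form; one plausible parameterization, stated here only as an assumption, relates perceived quality $Q$ to compression ratio $r$ as

```latex
Q(r) = a \ln(r) + b
```

where $a$ and $b$ would be fitted to the subjective scores per granularity level. Inverting the fit, $r = e^{(Q - b)/a}$, then yields the compression ratio needed to hit a target quality, which is the rate-distortion control use described above.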

The effect of texture granularity on the quality of synthesized textures is also studied. A subjective study is presented to assess the quality of synthesized textures with varying levels of texture granularity, using different types of texture synthesis methods. This work also proposes a reduced-reference visual quality index, referred to as the delta texture granularity index, for assessing the visual quality of synthesized textures.
Contributors: Subedar, Mahesh M (Author) / Karam, Lina (Thesis advisor) / Abousleman, Glen (Committee member) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Understanding the complexity of the temporal and spatial characteristics of gene expression over brain development is one of the crucial research topics in neuroscience. An accurate description of the locations and expression status of the relevant genes requires extensive experimental resources. The Allen Developing Mouse Brain Atlas provides a large number of in situ hybridization (ISH) images of gene expression across seven mouse brain developmental stages. Studying mouse brain models helps us understand gene expression in human brains. The atlas covers thousands of genes, which are currently annotated manually by biologists. Given the high labor cost of manual annotation, an efficient approach to automated gene expression annotation of mouse brain images is needed. In this thesis, a novel and efficient approach based on a machine learning framework is proposed. Features are extracted from raw brain images, and both binary and multi-class classification models are built with supervised learning methods. One of the most widely adopted feature-generation methods in current research is the bag-of-words (BoW) algorithm; however, neither its efficiency nor its accuracy is outstanding when dealing with large-scale data. Thus, an augmented sparse coding method called Stochastic Coordinate Coding is adopted to generate high-level features in this thesis. In addition, a new multi-label classification model is proposed, with a label hierarchy built from the given brain ontology structure. Experiments conducted on the atlas show that this approach is efficient and classifies the images with relatively high accuracy.
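At a high level, the pipeline is features-then-classifier. The sketch below substitutes a random feature matrix for the Stochastic Coordinate Coding output (which is not reproduced here) and fits a binary expressed/not-expressed classifier with scikit-learn; all shapes and labels are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for features produced by sparse coding over ISH image patches;
# shapes and labels are invented, not taken from the Allen atlas.
rng = np.random.default_rng(0)
X = rng.random((1000, 256))    # 1000 images, 256-dim sparse codes
y = rng.integers(0, 2, 1000)   # binary: expressed / not expressed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The thesis's hierarchical multi-label model would replace the single binary classifier with one per node of the brain ontology.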
Contributors: Zhao, Xinlin (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

The quality of real-world visual content is typically impaired by many factors including image noise and blur. Detecting and analyzing these impairments are important steps for multiple computer vision tasks. This work focuses on perceptual-based locally adaptive noise and blur detection and their application to image restoration.

In the context of noise detection, this work proposes perceptual-based full-reference and no-reference objective image quality metrics by integrating perceptually weighted local noise into a probability summation model. Results are reported on both the LIVE and TID2008 databases. The proposed metrics consistently achieve good performance across noise types and across databases compared to many of the best recent quality metrics, and they predict with high accuracy the relative amount of perceived noise in images of different content.
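The abstract does not give the pooling formula, but probability summation over $N$ local detection probabilities $p_i$ is conventionally written as

```latex
P_{\text{detect}} = 1 - \prod_{i=1}^{N} (1 - p_i)
```

The exact perceptual weighting used in this dissertation may differ, so this should be read only as the standard form of the model.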

In the context of blur detection, existing approaches are either computationally costly or cannot perform reliably when dealing with the spatially-varying nature of defocus blur. In addition, many existing approaches do not take human perception into account. This work proposes a blur detection algorithm that is capable of detecting and quantifying the level of spatially-varying blur by integrating directional edge spread calculation, probability of blur detection, and local probability summation. The proposed method generates a blur map indicating the relative amount of perceived local blurriness. In order to detect the flat and near-flat regions that do not contribute to perceivable blur, a perceptual model based on the Just Noticeable Difference (JND) is further integrated into the proposed blur detection algorithm to generate perceptually significant blur maps. We compare the proposed method with six other state-of-the-art blur detection methods. Experimental results show that the proposed method performs the best both visually and quantitatively.

This work further investigates the application of the proposed blur detection methods to image deblurring. Two selective perceptual-based image deblurring frameworks are proposed to improve deblurring results and reduce restoration artifacts. In addition, an edge-enhanced super-resolution algorithm is proposed and shown to achieve better reconstruction in edge regions.
Contributors: Zhu, Tong (Author) / Karam, Lina (Thesis advisor) / Li, Baoxin (Committee member) / Bliss, Daniel (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created: 2016