Description
Culturally responsive teaching refers to an approach to teaching and learning that facilitates the achievement of all students by including content that is relatable to all cultures and creating a culturally supported and learner-centered environment. The CSE 110 course at ASU would greatly benefit from the incorporation of culturally relevant learning, as it would help students thrive in their chosen field of study while upholding and valuing cultural relevance. The incorporation of culturally relevant pedagogy would further help students from marginalized communities feel more accepted and capable of thriving in STEM education. We began our research by first understanding the foundations of culturally responsive pedagogy, including how it is currently being used in classrooms. Concurrently, we studied the CSE 110 curriculum to see where we could implement this teaching strategy. Our research helped us develop a set of worksheets. In the second semester of our research, we distributed these worksheets along with a set of control worksheets. Students were randomly assigned to an experimental or control group in each of the four weeks of the study. We then analyzed this information to quantitatively assess how culturally responsive pedagogy affects student outcomes. To follow up, we also conducted a survey to gather qualitative feedback about the student experience. Our final findings consisted of an analysis of how culturally responsive pedagogy affects learning outcomes in an introductory computer science course.
ContributorsSathe, Isha (Author) / Tripathi, Tejal (Co-author) / Mane, Rhea (Co-author) / Tadayon-Navabi, Farideh (Thesis director) / Nkrumah, Tara (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created2024-05
Description
This thesis details a Python-based software designed to calculate the Jones polynomial, a vital mathematical tool from Knot Theory used for characterizing the topological and geometrical complexity of curves in 3-space, which is essential in understanding physical systems of filaments, including the behavior of polymers and biopolymers. The Jones polynomial serves as a topological invariant capable of distinguishing between different knot structures. This capability is fundamental to characterizing the architecture of molecular chains, such as proteins and DNA. Traditional computational methods for deriving the Jones polynomial have been limited by closure-schemes and high execution costs, which can be impractical for complex structures like those that appear in real life. This software implements methods that significantly reduce calculation times, allowing for more efficient and practical applications in the study of biological polymers. It utilizes a divide-and-conquer approach combined with parallel computing and applies recursive Reidemeister moves to optimize the computation, transitioning from an exponential to a near-linear runtime for specific configurations. This thesis provides an overview of the software’s functions, detailed performance evaluations using protein structures as test cases, and a discussion of the implications for future research and potential algorithmic improvements.
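As a rough illustration of the divide-and-conquer, parallel-computing pattern described above (and not the thesis software's actual Jones polynomial routine), the Python sketch below splits a 3-space polygonal curve into sub-chains and counts projected crossings between pairs of sub-chains in parallel; the function names, the crossing-count quantity, and the use of multiprocessing are assumptions made purely for illustration.

```python
# Illustrative sketch only: parallel divide-and-conquer over sub-chains of a
# 3D polygonal curve. It counts crossings in the xy-projection, a quantity
# used when building knot diagrams; it is NOT the thesis software's actual
# Jones polynomial computation.
from itertools import combinations
from multiprocessing import Pool

import numpy as np


def segments_cross(p1, p2, q1, q2):
    """True if the xy-projections of segments p1-p2 and q1-q2 intersect."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))


def crossings_between(block_pair):
    """Count projected crossings between two sub-chains (arrays of 3D points)."""
    a, b = block_pair
    count = 0
    for i in range(len(a) - 1):
        for j in range(len(b) - 1):
            if segments_cross(a[i], a[i + 1], b[j], b[j + 1]):
                count += 1
    return count


def parallel_crossing_count(points, n_blocks=8, n_workers=4):
    """Divide the curve into sub-chains; count inter-block crossings in parallel."""
    blocks = np.array_split(np.asarray(points), n_blocks)
    pairs = list(combinations(blocks, 2))
    with Pool(n_workers) as pool:
        return sum(pool.map(crossings_between, pairs))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    curve = rng.standard_normal((200, 3))   # a random open polygonal curve
    print(parallel_crossing_count(curve))
```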
ContributorsMusfeldt, Caleb (Author) / Panagiotou, Eleni (Thesis director) / Richa, Andrea (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Historical, Philosophical & Religious Studies, Sch (Contributor)
Created2024-05
Description

This thesis explores how large-scale cyber exercises work in the 21st century, going in-depth on Exercise Cyber Shield, the Department of Defense’s largest unclassified cyber defense exercise, run by the Army National Guard. It highlights why these cyber exercises are so relevant, reviewing several large-scale cyber attacks that have occurred in the past year and the impact they caused. This research aims to illuminate the intricacies of cyber exercise assessment, particularly manual versus automated scoring systems; these insights are then applied to the creation of an automated scoring engine for Exercise Cyber Shield. This thesis provides an inside look behind the scenes of the operations of the largest unclassified cyber defense exercise in the United States, including conversations with the Exercise Officer-In-Charge of Cyber Shield and with a cyber exercise expert working on assessment of Exercise Cyber Shield; the research also draws on information from past final reports for Cyber Shield. Issues that these large-scale cyber exercises have faced over the years are brought to light, and attempts at solutions are discussed.

ContributorsZhao, Henry (Author) / Chavez Echeagaray, Maria Elena (Thesis director) / Rhodes, Brad (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2021-12
Description

Among classes in the Computer Science curriculum at Arizona State University, Automata Theory is widely considered to be one of the most difficult. Many Computer Science concepts have strong visual components that make them easier to understand: binary trees, Dijkstra's algorithm, pointers, and even more basic concepts such as arrays are all easy to picture, and resources for them are abundantly available online. Automata Theory, on the other hand, is the first Computer Science course students encounter that has a significant focus on deep theory. Many of the concepts can be difficult to visualize, or at least take considerable effort to do so, and visualizers for finite state machines are hard to come by. Because I thoroughly enjoyed learning about Automata Theory and parsers, I wanted to create a program that involved the two. Additionally, I thought creating a program for visualizing automata would help students who struggle with Automata Theory develop a stronger understanding of it.
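For readers unfamiliar with the objects such a visualizer works with, the minimal Python sketch below shows one conventional way a deterministic finite automaton can be represented as a transition table and simulated on input strings; the class and example are illustrative assumptions, not the thesis program's code.

```python
# Minimal DFA representation and simulator; an illustrative sketch, not the
# thesis program's implementation.
class DFA:
    def __init__(self, states, alphabet, transitions, start, accepting):
        self.states = states              # set of state names
        self.alphabet = alphabet          # set of input symbols
        self.transitions = transitions    # dict: (state, symbol) -> state
        self.start = start
        self.accepting = accepting        # set of accepting states

    def accepts(self, string):
        state = self.start
        for symbol in string:
            state = self.transitions[(state, symbol)]
        return state in self.accepting


# Example: DFA accepting binary strings with an even number of 1s.
even_ones = DFA(
    states={"even", "odd"},
    alphabet={"0", "1"},
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
print(even_ones.accepts("1101"))  # False: the string contains three 1s
```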

ContributorsSmith, Andrew (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2021-12
Description
Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter which could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line of particles to count to and store values much larger than their individual capacities. We then present an algorithm that takes a matrix and a vector as input and then sets up and uses a rectangular block of particles to compute the matrix-vector multiplication. This setup also utilizes the counter implementation to store the resulting vector from the matrix-vector multiplication. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or to analyze the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar types of computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
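To make the counter construction concrete, here is a hedged, centralized Python simulation of the idea that a line of constant-memory particles, each holding a single bit and passing carries to its neighbor, can jointly store and increment a value far larger than any individual particle could; it is an illustrative sketch, not the distributed algorithm from the thesis.

```python
# Illustrative simulation: a line of "particles", each holding one bit, jointly
# stores a binary counter far larger than any individual particle's memory.
# This sketches the idea only, not the thesis's distributed algorithm.
class ParticleLineCounter:
    def __init__(self, length):
        self.bits = [0] * length   # bits[0] is the least significant particle

    def increment(self):
        carry = 1
        for i in range(len(self.bits)):        # carry propagates particle to particle
            total = self.bits[i] + carry
            self.bits[i] = total % 2
            carry = total // 2
            if carry == 0:
                break
        if carry:
            raise OverflowError("line of particles is too short")

    def value(self):
        return sum(bit << i for i, bit in enumerate(self.bits))


line = ParticleLineCounter(length=8)   # 8 particles can count up to 255
for _ in range(200):
    line.increment()
print(line.value())  # 200
```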
ContributorsPorter, Alexandra Marie (Author) / Richa, Andrea (Thesis director) / Xue, Guoliang (Committee member) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
In this paper, I will show that news headlines of global events can predict changes in stock price by using machine learning and eight years of data from r/WorldNews, a popular forum on Reddit.com. My data is confined to the top 25 daily posts on the forum, and due to the implicit filtering mechanism in the online community, these 25 posts are representative of the most popular news headlines and influential global events of the day. Hence, these posts shine a light on how large-scale social and political events affect the stock market. Using a Logistic Regression and a Naive Bayes classifier, I am able to predict a binary change in stock price with approximately 85% accuracy using term-feature vectors gathered from the news headlines. The accuracy, precision, and recall results closely rival the best models in this field of research. In addition to the results, I will also describe the mathematical underpinnings of the two models, preceded by a general investigation of the intersection between the multiple academic disciplines related to this project, ranging from social science to computer science and from statistics to philosophy. The goal of this additional discussion is to further illustrate the interdisciplinary nature of the research and, hopefully, to inspire a non-monolithic mindset when further investigations are pursued.
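The modeling pipeline described here can be sketched with standard scikit-learn components; the snippet below is an illustrative assumption using toy headlines and labels, not the r/WorldNews dataset or the paper's tuned models.

```python
# Illustrative sketch of the headline-classification pipeline with scikit-learn;
# the headlines and labels below are placeholders, not the r/WorldNews data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

headlines = [
    "markets rally after trade agreement announced",
    "tensions escalate amid new sanctions",
    "central bank signals rate cut",
    "oil supply disrupted by regional conflict",
]
labels = [1, 0, 1, 0]   # 1 = index rose that day, 0 = it fell (toy labels)

# Term-feature vectors: each headline becomes a vector of word counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(headlines)

log_reg = LogisticRegression().fit(X, labels)
naive_bayes = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["markets rise on rate cut hopes"])
print(log_reg.predict(test), naive_bayes.predict(test))
```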
Created2016-12
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations whose size is close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
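The coverage property that the post-optimization must preserve can be checked by brute force for small parameters; the Python sketch below is an illustrative assumption about that check (does a set of permutations cover every length-t sequence of distinct symbols?), not the thesis's optimized machinery.

```python
# Brute-force coverage check for a sequence covering array: does every ordered
# t-subsequence of distinct symbols appear in at least one permutation of the
# set? Illustrative sketch only; practical instances need faster checks.
from itertools import permutations


def covers_subsequence(perm, subseq):
    """True if subseq appears (in order, not necessarily contiguously) in perm."""
    pos = 0
    for symbol in perm:
        if symbol == subseq[pos]:
            pos += 1
            if pos == len(subseq):
                return True
    return False


def is_covering(perm_set, symbols, t):
    """True if every length-t sequence of distinct symbols is covered."""
    return all(
        any(covers_subsequence(p, sub) for p in perm_set)
        for sub in permutations(symbols, t)
    )


# A permutation and its reverse together cover every pair (t = 2).
symbols = range(5)
perm_set = [tuple(symbols), tuple(reversed(tuple(symbols)))]
print(is_covering(perm_set, symbols, 2))   # True
```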
ContributorsMurray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2014-12
Description
Polar ice masses can be valuable indicators of trends in global climate. In an effort to better understand the dynamics of Arctic ice, this project analyzes sea ice concentration anomaly data collected over gridded regions (cells) and builds graphs based upon high correlations between cells. These graphs offer the opportunity to use metrics such as clustering coefficients and connected components to isolate representative trends in ice masses. Based upon this analysis, the structure of sea ice graphs differs at a statistically significant level from random graphs, and several regions show erratically decreasing trends in sea ice concentration.
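A hedged sketch of the graph construction and the two metrics mentioned, using NumPy and NetworkX on synthetic anomaly series, is given below; the correlation threshold and the data are assumptions for illustration, not the project's actual values.

```python
# Illustrative sketch: build a graph on grid cells whose anomaly time series are
# highly correlated, then examine clustering and connected components.
# Synthetic data and an arbitrary threshold are used, not the project's.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_cells, n_timesteps = 50, 120
anomalies = rng.standard_normal((n_cells, n_timesteps))   # one series per grid cell

corr = np.corrcoef(anomalies)          # pairwise correlation between cells
threshold = 0.3                        # assumed cutoff for "high" correlation

G = nx.Graph()
G.add_nodes_from(range(n_cells))
for i in range(n_cells):
    for j in range(i + 1, n_cells):
        if corr[i, j] > threshold:
            G.add_edge(i, j)

print("average clustering coefficient:", nx.average_clustering(G))
print("connected components:", nx.number_connected_components(G))
```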
ContributorsWallace-Patterson, Chloe Rae (Author) / Syrotiuk, Violet (Thesis director) / Colbourn, Charles (Committee member) / Montgomery, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2013-05
Description
The areas of cloud computing and web services have grown rapidly in recent years, resulting in software that is more interconnected and widely used than ever before. As a result of this proliferation, there needs to be a way to assess the quality of these web services in order to ensure their reliability and accuracy. This project explores different ways in which services can be tested and evaluated through the design of various testing techniques and their implementation in a web application, which can be used by students or developers to test their web services.
ContributorsHilliker, Mark Paul (Author) / Chen, Yinong (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Many systems in the world, such as cellular networks, the postal service, or transportation pathways, can be modeled as networks or graphs. The practical applications of graph algorithms generally seek to achieve some goal while minimizing some cost, such as money or distance. While the minimum linear arrangement (MLA) problem has been widely studied among graph ordering and embedding problems, there have been no developments into versions of the problem involving degrees higher than 2. An application of our problem can be seen in overlay networks in telecommunications. An overlay network is a virtual network that is built on top of another network. It is a logical network where the links between nodes represent the physical paths connecting the nodes in the underlying infrastructure. The underlying physical network may be incomplete, but as long as it is connected, we can build a complete overlay network on top of it. Since some nodes may be overloaded by traffic, we can reduce the strain on the overlay network by limiting the communication between nodes. Some edges, however, may have more importance than others, so we must be careful in selecting which nodes are allowed to communicate with each other. The balance of reducing the degree of the network while maximizing communication forms the basis of our d-degree minimum arrangement problem. In this thesis we will look at several approaches to solving the generalized d-degree minimum arrangement (d-MA) problem, in which we embed a graph onto a subgraph of a given degree. We first look into the requirements and challenges of solving the d-MA problem. We then present a polynomial-time heuristic and compare its performance with the optimal solution derived from integer linear programming. We show that a simple (d-1)-ary tree construction provides the optimal structure for uniform graphs with large request sets. Finally, we present experimental data gathered from running simulations on a variety of graphs to evaluate the efficiency of our heuristic and tree construction.
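The (d-1)-ary tree construction mentioned above can be sketched directly; the snippet below builds a balanced (d-1)-ary tree with NetworkX and checks the resulting degree bound, as an illustrative assumption about that structure rather than the thesis's heuristic or ILP formulation.

```python
# Illustrative sketch: build a balanced (d-1)-ary tree on n nodes, the structure
# identified as optimal for uniform request sets, and verify its degree bound.
# This is not the thesis's heuristic or its ILP formulation.
import networkx as nx


def d_ma_tree(n, d):
    """Balanced (d-1)-ary tree on n nodes; every node has degree at most d."""
    return nx.full_rary_tree(d - 1, n)


n, d = 40, 3
overlay = d_ma_tree(n, d)
max_degree = max(deg for _, deg in overlay.degree())
print("nodes:", overlay.number_of_nodes(), "max degree:", max_degree)  # max degree <= d
```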
ContributorsWang, Xiao (Author) / Richa, Andrea (Thesis director) / Nakamura, Mutsumi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05