Matching Items (14)

Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one’s palm. However, if an application requires neither extensive computation power nor accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
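
The abstract does not restate the local rule itself, so the following is only a minimal Python sketch of the kind of local, gradient-following behavior such a convergence algorithm might use: each simulated particle senses a target signal at its own position (stronger closer to the target), compares readings with neighbors inside a small communication radius, and steps toward the best-reporting neighbor. The signal model, radius, and step size are illustrative assumptions, not the thesis's algorithm.

```python
# Minimal illustrative sketch (assumed details, not the thesis algorithm):
# particles sense a target "signal" locally, exchange readings with nearby
# neighbors, and drift toward whichever neighbor reports the strongest signal.
import math
import random

TARGET = (0.0, 0.0)    # assumed fixed target location
COMM_RADIUS = 2.0      # assumed local communication range
STEP = 0.5             # assumed per-round movement distance

def signal(pos):
    """Assumed sensing model: the reading grows as a particle nears the target."""
    return 1.0 / (1.0 + math.dist(pos, TARGET))

def step(particles):
    """One synchronous round of the local rule, applied to every particle."""
    updated = []
    for i, p in enumerate(particles):
        # Gather readings from neighbors within communication range.
        neighbors = [q for j, q in enumerate(particles)
                     if j != i and math.dist(p, q) <= COMM_RADIUS]
        best = max(neighbors, key=signal, default=None)
        if best is not None and signal(best) > signal(p):
            # Move toward the best-reporting neighbor without overshooting it.
            d = math.dist(p, best)
            move = min(STEP, d)
            p = (p[0] + move * (best[0] - p[0]) / d,
                 p[1] + move * (best[1] - p[1]) / d)
        updated.append(p)
    return updated

def mean_distance(particles):
    return sum(math.dist(p, TARGET) for p in particles) / len(particles)

random.seed(1)
swarm = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(50)]
print("initial mean distance:", round(mean_distance(swarm), 2))
for _ in range(200):
    swarm = step(swarm)
print("final mean distance:  ", round(mean_distance(swarm), 2))
```

Over repeated rounds the swarm drifts toward whichever particles currently sense the target most strongly, which is the kind of collective convergence the thesis evaluates in simulation.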

ContributorsMartin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

Bad actor reporting has recently grown in popularity as an effective method for social media attacks and harassment, but many mitigation strategies have yet to be investigated. In this study, we created a simulated social media environment of 500,000 users, and let those users create and review a number of posts. We then created four different post-removal algorithms to analyze the simulation, each algorithm building on previous ones, and evaluated them based on their accuracy and effectiveness at removing malicious posts. This thesis work concludes that a trust-reward structure within user report systems is the most effective strategy for removing malicious content while minimizing the removal of genuine content. This thesis also discusses how the structure can be further enhanced to accommodate real-world data and provide a viable solution for reducing bad actor online activity as a whole.
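
The abstract names a trust-reward structure without fixing its exact form; below is a minimal, hedged Python sketch of one plausible variant, in which each reporter carries a trust score, a post is removed once the trust-weighted sum of reports crosses a threshold, and reporters are later rewarded or penalized depending on whether the removal matched ground truth. The class name, thresholds, and adjustment values are assumptions for illustration only.

```python
# Minimal sketch of a trust-reward report system (illustrative assumptions only).
REMOVAL_THRESHOLD = 3.0      # assumed trust-weighted score needed to remove a post
REWARD, PENALTY = 0.1, 0.2   # assumed trust adjustments applied after review

class ReportSystem:
    def __init__(self):
        self.trust = {}      # user id -> trust score
        self.reports = {}    # post id -> list of reporting user ids

    def report(self, user, post):
        """Record a report; signal removal if trust-weighted reports suffice."""
        self.trust.setdefault(user, 1.0)
        self.reports.setdefault(post, []).append(user)
        score = sum(self.trust[u] for u in self.reports[post])
        return score >= REMOVAL_THRESHOLD   # True -> the post is removed

    def resolve(self, post, was_malicious):
        """After review, reward reporters of malicious posts, penalize the rest."""
        for u in self.reports.pop(post, []):
            delta = REWARD if was_malicious else -PENALTY
            self.trust[u] = max(0.0, self.trust[u] + delta)

system = ReportSystem()
for user in ("a", "b", "c"):
    removed = system.report(user, post="p1")
print("removed after 3 reports:", removed)
system.resolve("p1", was_malicious=True)
print("trust scores:", system.trust)
```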

ContributorsYang, Lucas (Author) / Atkinson, Robert (Thesis director) / O'Neil, Erica (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

This paper explores the inner workings of algorithms that computers may use to play Chess. First, we discuss the classical Alpha-Beta algorithm and several improvements, including Quiescence Search, Transposition Tables, and more. Next, we examine the state-of-the-art Monte Carlo Tree Search algorithm and relevant optimizations. After that, we consider a recent algorithm that transforms Alpha-Beta into a “Rollout” search, blending it with Monte Carlo Tree Search under the rollout paradigm. We then discuss our C++ Chess Engine, Homura, and explain its implementation of a hybrid algorithm combining Alpha-Beta with MCTS. Finally, we show that Homura can play master-level Chess at a strength currently exceeding that of our backtracking Alpha-Beta.
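
Homura is a C++ engine and its hybrid rollout search is not reproduced here; as a point of reference for the classical starting point the paper describes, here is a minimal Python sketch of Alpha-Beta in negamax form over an abstract game interface (the toy game and evaluation are placeholders, not Chess).

```python
# Minimal sketch of classical Alpha-Beta (negamax form) on an abstract game;
# this is the textbook algorithm the paper builds on, not Homura's hybrid search.
def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Return the best score for the side to move, searching `depth` plies."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = float("-inf")
    for move in legal:
        child = apply_move(state, move)
        score = -alphabeta(child, depth - 1, -beta, -alpha,
                           evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:      # beta cutoff: the opponent will avoid this line
            break
    return best

# Toy game: a running total; each move adds 1 or 2; the evaluation rewards even
# totals (chosen purely to exercise the search, not to be meaningful).
score = alphabeta(
    0, depth=4, alpha=float("-inf"), beta=float("inf"),
    evaluate=lambda s: 1 if s % 2 == 0 else -1,
    moves=lambda s: [1, 2],
    apply_move=lambda s, m: s + m,
)
print("root score:", score)
```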

ContributorsMoore, Evan (Author) / Kobayashi, Yoshihiro (Thesis director) / Kambhampati, Subbarao (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

For my Thesis Project, I worked to operationalize an algorithmic trading application called Trading Dawg. Over the year, I was able to implement several analysis models, including accuracy, performance, volume, and hyperparameter analysis. With these improvements, we are in a strong position to create valuable tools in the algorithmic trading space.

ContributorsPayne, Colton (Author) / Shakarian, Paulo (Thesis director) / Brandt, William (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Finance (Contributor)
Created2023-05
Description
Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter which could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line of particles to count to and store values much larger than their individual capacities. We then present an algorithm that takes a matrix and a vector as input and then sets up and uses a rectangular block of particles to compute the matrix-vector multiplication. This setup also utilizes the counter implementation to store the resulting vector from the matrix-vector multiplication. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or to analyze the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar types of computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
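
The thesis's exact constructions are not restated in the abstract; the sketch below only illustrates, under assumed details, the core idea behind the line-of-particles counter: each particle stores one digit in a small base reflecting its constant memory, and an increment ripples a carry along the line, so n particles can jointly count to base^n.

```python
# Minimal sketch of the idea behind a particle-line counter (assumed details):
# each particle stores one digit in a tiny base, and an increment propagates a
# carry to the next particle, so a line of n particles can count to BASE**n.
BASE = 4  # assumed per-particle capacity (constant memory per particle)

class Particle:
    def __init__(self):
        self.digit = 0
        self.successor = None   # next particle along the line

    def increment(self):
        """Add one to this particle's digit, passing any carry down the line."""
        self.digit += 1
        if self.digit == BASE:
            self.digit = 0
            if self.successor is None:
                raise OverflowError("line of particles is too short")
            self.successor.increment()

def make_line(n):
    particles = [Particle() for _ in range(n)]
    for a, b in zip(particles, particles[1:]):
        a.successor = b
    return particles

def read_value(particles):
    """Interpret the line as a little-endian base-BASE number."""
    return sum(p.digit * BASE**i for i, p in enumerate(particles))

line = make_line(5)
for _ in range(137):
    line[0].increment()
print(read_value(line))   # 137, far beyond any single particle's capacity
```
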
ContributorsPorter, Alexandra Marie (Author) / Richa, Andrea (Thesis director) / Xue, Guoliang (Committee member) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Computer Science and Dance are choice-driven disciplines. The outputs of their processes are compositions of experience. Dancers are not computers and computers are not people, but there are comparable traces of humanity in the way each interprets and interacts with its respective inputs, outputs, and environments. These overlaps are perhaps not obvious, but in an increasingly specialized world it is important to discuss them. Dynamic Programming and improvisational movement exist within exclusive corners of their respective fields and are characterized by their inherent adaptation to change. Inspired by the work of Ivar Hagendoorn, John Cage, and other interdisciplinary artists, complexMovement is motivated by the need to create space for intersections between these two powerful groups and to find overlaps in the questions they ask to achieve their goals. Dance and Computer Science are just one example of a hidden partnership between fields. The two disciplines allow for ample side-by-side comparisons, but for the purposes of this work we focus on two smaller sectors of their studies: improvisational movement and the design of Dynamic Programming algorithms.
ContributorsOhlsen, Lai Yi Ni (Author) / Britt, Melissa (Thesis director) / Crissman, Angel (Committee member) / Standley, Eileen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Art (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
While network problems have been addressed using a central administrative domain with a single objective, the devices in most networks are actually not owned by a single entity but by many individual entities. These entities make their decisions independently and selfishly, and may cooperate with a small group of other entities only when this form of coalition yields a better return. The interaction among multiple independent decision-makers necessitates the use of game theory, including economic notions related to markets and incentives. In this dissertation, we are interested in modeling, analyzing, and addressing network problems caused by the selfish behavior of network entities. First, we study how the selfish behavior of network entities affects system performance while users compete for limited resources. In this resource allocation domain, we study the selfish routing problem in networks with fair queuing on links, the relay assignment problem in cooperative networks, and the channel allocation problem in wireless networks. Another important aspect of this dissertation is the design of efficient mechanisms to incentivize network entities to achieve certain system objectives. In this incentive mechanism domain, we aim to motivate wireless devices to serve as relays for cooperative communication and to recruit smartphones for crowdsourcing. In addition, we apply different game-theoretic approaches to problems in the security and privacy domain. Here, we analyze how a user could defend against a smart jammer, who can quickly learn about the user's transmission power. We also design mechanisms to encourage mobile phone users to participate in location privacy protection, in order to achieve k-anonymity.
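
As a small, self-contained illustration of how selfish behavior degrades system performance in a routing setting, the sketch below computes equilibrium versus optimal average latency in the classic two-link Pigou example; this textbook example is ours and is not drawn from the dissertation's models.

```python
# Classic Pigou-style illustration of selfish routing (our textbook example,
# not the dissertation's model): one unit of traffic chooses between a link
# with constant latency 1 and a link whose latency equals its load x.
def avg_latency(x):
    """Average latency when fraction x of the traffic uses the load-dependent link."""
    return x * x + (1 - x) * 1.0   # x users see latency x, the rest see latency 1

# Selfish equilibrium: everyone uses the variable link (its latency never
# exceeds 1, so no user can improve by switching), giving average latency 1.
equilibrium = avg_latency(1.0)

# Social optimum: minimize average latency over all splits of the traffic.
optimum = min(avg_latency(x / 1000) for x in range(1001))

print("equilibrium cost:", equilibrium)                       # 1.0
print("optimal cost:    ", round(optimum, 3))                 # 0.75 at x = 0.5
print("price of anarchy:", round(equilibrium / optimum, 3))   # ~1.333
```
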
ContributorsYang, Dejun (Author) / Xue, Guoliang (Thesis advisor) / Richa, Andrea (Committee member) / Sen, Arunabha (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
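
The abstract mentions the bias parameter lambda without restating the move rule; the fragment below is a schematic Python sketch of the Metropolis-style acceptance step that such stochastic compression algorithms typically use, accepting a proposed local move with probability min(1, lambda^Δ), where Δ is the change in a local compression measure (for instance, the number of neighboring particle pairs). The details are illustrative rather than the thesis's exact rule.

```python
# Schematic Metropolis-style acceptance step for a biased compression chain
# (illustrative; the thesis's exact move rule and measure are not restated here).
import random

def accept_move(lmbda, delta_neighbors):
    """
    Accept a proposed local move with probability min(1, lmbda**delta_neighbors),
    where delta_neighbors is the change in the local compression measure
    (e.g., the number of adjacent particle pairs gained by the move).
    With lmbda > 1 the system favors compression; with lmbda < 1, expansion.
    """
    return random.random() < min(1.0, lmbda ** delta_neighbors)

random.seed(0)
trials = 100_000
# A move that loses two neighboring pairs is accepted rarely when lambda = 4.
rate = sum(accept_move(4.0, -2) for _ in range(trials)) / trials
print("acceptance rate for a -2 move at lambda=4:", round(rate, 4))  # ~ 1/16
```
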
ContributorsDaymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This dissertation studies routing in small-world networks such as grids plus long-range edges and real networks. Kleinberg showed that geography-based greedy routing in a grid-based network takes an expected number of steps polylogarithmic in the network size, thus justifying the empirical efficiency observed beginning with Milgram's experiments. A counterpart for the grid-based model is provided; it creates all edges deterministically and shows an asymptotically matching upper bound on the route length. The main goal is to improve greedy routing through a decentralized machine learning process. The two methods considered are based on weighted majority and on an algorithm of de Farias and Megiddo, both learning from feedback using ensembles of experts. Tests are run on both artificial and real networks, with decentralized spectral graph embedding supplying geometric information for real networks where it is not intrinsically available. An important measure analyzed in this work is overpayment, the difference between the cost of the method and that of the shortest path. Adaptive routing overtakes greedy routing after about a hundred or fewer searches per node, consistently across different network sizes and types. Learning stabilizes, typically at an overpayment of one third to one half of that incurred by greedy routing. The problem is made more difficult by eliminating the knowledge of neighbors' locations or by introducing uncooperative nodes. Even under these conditions, the learned routes are usually better than the greedy routes. The second part of the dissertation is related to the community structure of unannotated networks. A modularity-based algorithm of Newman is extended to work with overlapping communities (including considerably overlapping communities), where each node locally decides which potential communities it belongs to. To measure the quality of a cover of overlapping communities, a notion of a node's contribution to modularity is introduced, and subsequently the notion of modularity is extended from partitions to covers. The final part considers a problem of network anonymization, mostly by means of edge deletion. The point of interest is utility preservation. It is shown that a concentration on the preservation of routing abilities might damage the preservation of community structure, and vice versa.
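
As background for the greedy baseline that the dissertation improves upon, the sketch below implements Kleinberg-style greedy routing on a small grid augmented with one random long-range contact per node, chosen with probability proportional to d^-2: each node forwards toward whichever neighbor is geographically closest to the destination. This is the standard randomized construction, not the dissertation's deterministic counterpart or its learned routing.

```python
# Greedy geographic routing on a grid with random long-range links
# (standard Kleinberg-style setup; the dissertation's deterministic
# construction and adaptive routing are not reproduced here).
import random

N = 20                      # grid is N x N
random.seed(0)

def grid_neighbors(v):
    x, y = v
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < N and 0 <= b < N]

def manhattan(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

# One random long-range contact per node, chosen ~ d^{-2} (Kleinberg's exponent).
nodes = [(x, y) for x in range(N) for y in range(N)]
long_range = {}
for v in nodes:
    others = [u for u in nodes if u != v]
    weights = [manhattan(v, u) ** -2 for u in others]
    long_range[v] = random.choices(others, weights=weights)[0]

def greedy_route(src, dst):
    """Forward to the neighbor (grid or long-range) closest to dst; count hops."""
    hops, cur = 0, src
    while cur != dst:
        neighbors = grid_neighbors(cur) + [long_range[cur]]
        cur = min(neighbors, key=lambda u: manhattan(u, dst))
        hops += 1
    return hops

print("hops from (0,0) to (19,19):", greedy_route((0, 0), (19, 19)))
```
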
ContributorsBakun, Oleg (Author) / Konjevod, Goran (Thesis advisor) / Richa, Andrea (Thesis advisor) / Syrotiuk, Violet R. (Committee member) / Czygrinow, Andrzej (Committee member) / Arizona State University (Publisher)
Created2011
Description
We live in a networked world with a multitude of networks, such as communication networks, the electric power grid, transportation networks, and water distribution networks, all around us. In addition to such physical (infrastructure) networks, recent years have seen a tremendous proliferation of social networks, such as Facebook, Twitter, LinkedIn, Instagram, Google+ and others. These powerful social networks are not only used for harnessing revenue from the infrastructure networks, but are also increasingly being used as “non-conventional sensors” for monitoring the infrastructure networks. Accordingly, nowadays, analyses of social and infrastructure networks go hand-in-hand. This dissertation studies resource allocation problems encountered in this set of diverse, heterogeneous, and interdependent networks. Three of the problems studied arise in the physical network domain, while the other three arise in the social network domain.

The first problem, from the infrastructure network domain, relates to a distributed file storage scheme whose goal is to enhance the robustness of data storage by making it tolerant to large-scale geographically correlated failures. The second problem relates to the placement of relay nodes in a deployment area containing multiple sensor nodes, with the goal of augmenting the connectivity of the resulting network while staying within a budget specifying the maximum number of relay nodes that can be deployed. The third problem studied in this dissertation relates to the complex interdependencies that exist between infrastructure networks, such as the power grid and the communication network. We study the progressive recovery problem in an interdependent network, whose goal is to maximize system utility over time as failed entities are recovered in a sequential manner.

The three problems studied in the social network domain relate to influence propagation in an adversarial environment and to political sentiment assessment across the states of a country, with the goal of creating a “political heat map” of the country. In the first influence propagation problem, the goal of the second player is to restrict the influence of the first player, while in the second problem the goal of the second player is to capture a larger market share with the least amount of initial investment.
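
The abstract does not fix a diffusion model, so the sketch below simulates a standard competitive independent-cascade process of the kind often used for such two-player influence problems: both players seed nodes, and influence spreads probabilistically along edges, with each node keeping the label that reaches it first. The model choice, network, and parameters are all assumptions for illustration.

```python
# Schematic competitive independent-cascade simulation (model choice and all
# parameters are our assumptions, not the dissertation's formulation).
import random

def competitive_cascade(adj, seeds_a, seeds_b, p=0.1, rng=random):
    """Spread two competing influences; a node keeps the label that reaches it first."""
    label = {v: None for v in adj}
    frontier = [(v, "A") for v in seeds_a] + [(v, "B") for v in seeds_b]
    rng.shuffle(frontier)             # random tie-breaking between the players
    for v, side in frontier:
        label[v] = side
    while frontier:
        next_frontier = []
        for v, side in frontier:
            for u in adj[v]:
                if label[u] is None and rng.random() < p:
                    label[u] = side
                    next_frontier.append((u, side))
        rng.shuffle(next_frontier)
        frontier = next_frontier
    return label

# Toy network: a ring of 100 nodes with a few random chords.
random.seed(0)
adj = {v: {(v - 1) % 100, (v + 1) % 100} for v in range(100)}
for _ in range(50):
    a, b = random.sample(range(100), 2)
    adj[a].add(b)
    adj[b].add(a)

result = competitive_cascade(adj, seeds_a=[0], seeds_b=[50], p=0.3)
counts = {s: sum(1 for lab in result.values() if lab == s) for s in ("A", "B")}
print("influenced nodes:", counts)
```
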
ContributorsMazumder, Anisha (Author) / Sen, Arunabha (Thesis advisor) / Richa, Andrea (Committee member) / Xue, Guoliang (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created2016