Previous studies have shown engineering self-efficacy, engineering identity, and coping in engineering to be highly important to one's development in the field. Through a 17-question survey, undergraduate and first-year master's students were asked to provide information on their engagement at their university and their demographic information, and to rank their level of agreement with 22 statements relating to these constructs. Exploratory factor analysis was then performed on the collected data to identify underlying factors and any correlations among them. No statistically significant correlations were found between the three identified factors and the demographic or engagement information; a significantly larger sample would be needed to detect such effects. Future work is also needed to create an engagement measure that successfully reflects the level and impact of participation in engineering activities beyond traditional coursework.
In this creative thesis project, I use digital “scrollytelling” (an interactive, scroll-based storytelling format) to investigate diversity and inclusion at big tech companies. I wanted to know why diversity numbers were flatlining at Facebook, Apple, Amazon, Microsoft, and Google, and took a data journalism approach to explore the relationship between what these corporations were saying and what they were doing. Finally, I critiqued diversity and inclusion efforts by giving examples of how the current approach to D&I is not fixing the problem.
Neoliberal feminism has gained significant popularity in fourth-wave feminist media. In this paper, I analyze the 2017 limited television series "Big Little Lies" to uncover the intricacies of neoliberal feminist theory in practice, particularly how it speaks to gender, race, and class relations.
Affective video games are still a relatively new field of research and entertainment. Even so, as a form of entertainment media, video games rely heavily on emotion. This project seeks to understand which emotions are most prominent during gameplay. From there, a system is created in which the game records the player's facial expressions and interprets them as emotions, allowing the game to adjust its difficulty to create a more tailored experience.
The first portion of this project, understanding the relationship between emotions and games, was carried out by recording myself as I played three games of different genres for thirty minutes each. These recordings were evaluated with the same emotion-recognition system later used in the game I created.
After interpreting the data, I created three versions of the same game, based on a template by Stan's Assets that recreates the arcade game Stacker. In the first version, no changes were made to the gameplay experience; it simply recorded the player's face and extrapolated emotions from that recording. In the second, the game's speed increased in an attempt to maintain a certain level of positive emotions. In the third, the game both increased and decreased its speed, attempting to minimize negative emotions as well.
Together, these tests show that a player's emotional experience depends heavily on how closely the game is tailored to a particular emotion. They also show that, when building a system meant to interact with these emotions, it is easier to create a one-dimensional system focused on a single emotion (or range of emotions) than a more complex one: as complexity grows, the system becomes unstable and can produce undesirable gameplay effects.
In collaboration with Moog Broad Reach and Arizona State University, a team of five undergraduate students designed a hardware solution for protecting flash memory data in a space-based radiation environment. Team Aegis worked on the research, design, and implementation of a Verilog- and Python-based error correction code using a Reed-Solomon method to identify bit errors. As an additional senior design project, Python code was written to run a statistical analysis of whether the error correction code is more effective than a triple-redundancy check, and to determine whether the presence of errors can be modeled by a regression model.
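The regression question mentioned above can be illustrated with a minimal ordinary least-squares fit. This is a sketch only: the function and the (exposure, error-count) pairs are hypothetical, not the team's actual data or analysis code.

```python
# Illustrative sketch: fit a least-squares line y = a + b*x to hypothetical
# (exposure hours, observed bit errors) data, the kind of model the abstract
# asks about when testing whether error presence follows a regression model.

def ols_fit(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# hypothetical, perfectly linear data: intercept 0, slope 2
hours = [1, 2, 3, 4, 5]
errors = [2, 4, 6, 8, 10]
a, b = ols_fit(hours, errors)
```

In practice the fit's residuals, not just the coefficients, would decide whether a linear model adequately describes the error behavior.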
This thesis proposes hardware and software security enhancements to the robotic explorer of a capstone team, in collaboration with the NASA Psyche Mission Student Collaborations program. The NASA Psyche Mission, launching in 2022 and reaching the metallic asteroid of the same name in 2026, will explore from orbit what is hypothesized to be remnant core material of an early planet, potentially providing key insights to planet formation. Following this initial mission, it is possible there would be scientists and engineers interested in proposing a mission to land an explorer on the surface of Psyche to further document various properties of the asteroid. As a proposal for a second mission, an interdisciplinary engineering and science capstone team at Arizona State University designed and constructed a robotic explorer for the hypothesized surfaces of Psyche, capable of semi-autonomously navigating simulated surfaces to collect scientific data from onboard sensors. A critical component of this explorer is the command and data handling subsystem, and as such, the security of this system, though outside the scope of the capstone project, remains a crucial consideration. This thesis proposes the pairing of Trusted Platform Module (TPM) technology for increased hardware security and the implementation of SELinux (Security Enhanced Linux) for increased software security for Earth-based testing as well as space-ready missions.
Through this creative project, I analyzed how COVID-19 has affected the theatre industry. I created a mini-documentary following ASU's production of Runaways, which was performed without an audience; the final product was a combination of pre-filmed and self-taped scenes. I documented how students were still able to learn and cultivate their skills during a time when most things are virtual. In addition, I analyzed how the shift to filmed theatre has changed the definition of live theatre, including its increased accessibility. I also explored the importance of theatre by analyzing the themes of musical theatre performances such as Rent and Runaways. During a time when people cannot gather, artists are still finding ways to create and tell stories.
Lossy compression slightly degrades a signal, ideally in ways not detectable to the human ear, in contrast to lossless compression, in which the signal is not degraded at all. While lossless compression may seem like the better option, lossy compression, which is used for most audio and video, reduces transmission time and produces much smaller files. However, too much compression noticeably affects quality: the more a waveform is compressed, the more it degrades, and once a file has been lossy-compressed the process cannot be reversed. This project observes the degradation of an audio signal after applying Singular Value Decomposition (SVD) compression, a lossy method that eliminates the smallest singular values from a signal's matrix.
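The SVD truncation idea can be sketched in a few lines. This is a minimal illustration, not the project's code: the function name and the toy matrix are assumptions, and a real audio signal would first be reshaped into a matrix.

```python
# Sketch of SVD-based lossy compression: decompose a signal matrix, keep only
# the k largest singular values, and rebuild a lower-rank approximation.
import numpy as np

def svd_compress(signal, k):
    """Keep only the k largest singular values of `signal`."""
    U, s, Vt = np.linalg.svd(signal, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]   # rank-k reconstruction

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank-1 toy "signal"
approx = svd_compress(A, 1)               # exact here, since A is rank 1
```

For a matrix whose true rank exceeds k, the discarded singular values are gone for good, which is precisely the irreversibility noted above.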
Radiation hardening of electronic devices is generally necessary when designing for the space environment. Non-volatile memory technologies are of particular concern when designing for the mitigation of radiation effects. Among other radiation effects, single-event upsets can create bit flips in non-volatile memories, leading to data corruption. In this paper, a Verilog implementation of a Reed-Solomon error-correcting code is considered for its ability to mitigate the effects of single-event upsets on non-volatile memories. This implementation is compared with the simpler procedure of using triple modular redundancy.
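The triple modular redundancy baseline compared against Reed-Solomon above can be illustrated compactly. This is a hedged Python sketch of the general technique, not the thesis's Verilog implementation.

```python
# Illustrative triple modular redundancy: store three copies of each word and
# take a bitwise 2-of-3 majority vote on readout, so a single-event upset
# that flips bits in only one copy is out-voted.

def tmr_vote(a, b, c):
    """Bitwise majority vote over three copies of a stored word."""
    return (a & b) | (a & c) | (b & c)

word = 0b10110010
upset = word ^ 0b00001000         # a single-event upset flips one bit in copy a
recovered = tmr_vote(upset, word, word)  # the two clean copies out-vote it
```

The sketch also hints at TMR's weakness: upsets striking the same bit in two copies defeat the vote, whereas a Reed-Solomon code can correct multiple symbol errors per block.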
Every communication system has a transmitter and a receiver, whether wired or wireless. The future of wireless communication involves massive numbers of transmitters and receivers, which raises the question: can computer vision help wireless communication? Satisfying high data-rate requirements demands large antenna arrays, and the devices that employ them often carry other sensors such as RGB cameras, depth cameras, or LiDAR. These vision sensors can help overcome non-trivial wireless communication challenges such as beam blockage prediction and hand-over prediction. This is further motivated by recent advances in deep learning and computer vision, which can extract high-level semantics from complex visual scenes, and by the growing interest in applying machine/deep learning tools to wireless communication problems [1].

This research focused on technologies such as 3D cameras, object detection, and object tracking using computer vision, along with compression techniques. The main objective of using computer vision was to make millimeter-wave communication more robust and to collect more data for machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used. An algorithm was developed that uses 3D cameras and machine learning models such as YOLOv3 to track moving objects using servo motors and low-powered computers like the Raspberry Pi or the Jetson Nano; in other words, the receiver could track a highly mobile transmitter in one dimension using a 3D camera. Furthermore, the transmitter was mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to track the highly mobile drone. To build this machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step.
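The one-dimensional tracking described above can be sketched as a proportional controller: given the pixel column of a detected object's bounding-box center (e.g. from YOLOv3), compute how far to turn a pan servo. Everything here is a placeholder, not the project's actual code; the frame width and field of view are assumed values.

```python
# Hedged sketch of a 1-D pan-tracking correction. The hardware call that
# actually moves the servo is omitted; this only computes the angle.

def pan_correction(center_x, frame_width=640, fov_deg=62.2, gain=0.8):
    """Degrees to rotate the pan servo so the detected object re-centers."""
    # pixel offset from frame center, converted to an angle via the
    # camera's assumed horizontal field of view
    offset_px = center_x - frame_width / 2
    return gain * offset_px * (fov_deg / frame_width)

# object detected left of center -> negative (counter-clockwise) correction
angle = pan_correction(160)
```

Running this once per detection frame keeps the transmitter near the image center, which is the behavior the receiver-side tracker needs.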
GPS coordinates from the DJI M600 were also pulled and successfully plotted in Google Earth. This proved very useful during data collection with the drone and for future applications of drone position estimation using machine learning.

Initially, the transmitter camera captured an image every second, and each frame was converted to a text file of hexadecimal values. Each text file was transmitted from the transmitter to the receiver, where a Python program converted the hexadecimal back to a JPG, giving the effect of real-time video transmission. Toward the end of the research, however, industry-standard real-time video was streamed using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Pi Camera. More details are discussed later in this report.
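The early frame-by-frame scheme amounts to a hexadecimal round trip, which can be sketched directly with Python's built-in byte conversions. The function names and the sample bytes are illustrative, not the project's actual code.

```python
# Sketch of the initial transmission scheme: serialize a captured JPEG to
# hexadecimal text for transmission, then decode it back to binary on the
# receiver side.

def encode_frame(jpeg_bytes):
    """Camera side: JPEG bytes -> hexadecimal text for transmission."""
    return jpeg_bytes.hex()

def decode_frame(hex_text):
    """Receiver side: hexadecimal text -> the original JPEG bytes."""
    return bytes.fromhex(hex_text)

frame = b"\xff\xd8\xff\xe0JFIF"   # the opening bytes of a JPEG stream
sent = encode_frame(frame)        # text safe to write to a file and transmit
```

The round trip is lossless, but the hex text doubles the payload size, which helps explain why the project later moved to FFmpeg-based streaming.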