
For waste management in Asunción, Paraguay, to improve, public recycling participation must increase. However, because public waste management infrastructure is minimal, recycling solutions in the city fall to individual citizens and the private sector. One social enterprise, Soluciones Ecológicas (SE), has deployed a system of drop-off recycling stations called ecopuntos, where residents can deposit paper and cardboard, plastic, and aluminum. For SE to maximize the use of its ecopuntos, it must understand the perceived barriers to, and benefits of, their use. To identify these barriers and benefits, a doer/non-doer survey based on the behavioral determinants outlined in the Designing for Behavior Change framework was distributed among Asunción residents. Results showed that perceived self-efficacy, perceived social norms, and perceived positive consequences, as well as age, were influential in shaping ecopunto use. Other determinants, such as perceived negative consequences, access, and universal motivators, were significantly associated with gender and age. SE and other institutions looking to improve recycling can use these results to design effective behavior change interventions.
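The abstract does not state which statistical test was used; a common analysis for doer/non-doer surveys is to compare response frequencies between the two groups per determinant. The sketch below illustrates that approach with a chi-square test; the column names and toy data are hypothetical, not the thesis's instrument.

```python
# A minimal sketch of a doer/non-doer comparison, assuming responses are
# coded as binary indicators per behavioral determinant. All column names
# and data here are illustrative.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey data: one row per respondent.
df = pd.DataFrame({
    "is_doer": [1, 1, 0, 0, 1, 0, 1, 0],            # uses ecopuntos?
    "self_efficacy": [1, 1, 0, 0, 1, 0, 1, 1],       # "I know how to use them"
    "social_norms": [1, 0, 0, 0, 1, 0, 1, 0],        # "people I know approve"
    "positive_consequences": [1, 1, 1, 0, 1, 0, 0, 0],
})

# For each determinant, test whether doers and non-doers answer differently.
for determinant in ["self_efficacy", "social_norms", "positive_consequences"]:
    table = pd.crosstab(df["is_doer"], df[determinant])
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{determinant}: chi2={chi2:.2f}, p={p:.3f}")
```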

Machine learning is a rapidly growing field, no doubt in part due to its countless applications in other fields, including pedagogy and the creation of computer-aided tutoring systems. To extend the functionality of FACT, an automated teaching assistant, we want to predict, using metadata produced by student activity, whether a student is capable of fixing their own mistakes. Logs were collected from previous FACT trials with middle school math teachers and students. The data were converted into time-series sequences for deep learning, and conventional features were extracted for statistical machine learning. Ultimately, deep learning models attained an accuracy of 60%, while tree-based methods attained 65%, indicating that a small but measurable correlation exists between how a student goes about fixing a mistake and whether the fix is correct.
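The abstract does not name the specific tree-based method; as a rough illustration of that branch of the pipeline, the sketch below trains a random forest on per-attempt features. The feature names and synthetic data are assumptions standing in for the extracted FACT log features.

```python
# A minimal sketch of the tree-based approach, assuming per-attempt features
# have already been extracted from FACT logs. The random forest choice and
# the features are illustrative, not the thesis's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: time on step, edit count, hint requests,
# prior correctness rate.
X = rng.random((n, 4))
y = rng.integers(0, 2, n)  # 1 = student fixed the mistake on their own

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```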
The pandemic that began in 2020 has accelerated the growth of online learning, including the boom in Massive Open Online Courses (MOOCs). In this situation, tools that help students choose between courses and help instructors understand what students need are valuable. One such tool is an online course rating predictor. Using the predictor, instructors can learn which qualities the majority of course takers deem important and adjust their lesson plans accordingly, while students can compare predicted ratings when choosing a course to take. This research aims to find the best way to predict the rating of online courses using machine learning (ML). The model's inputs are different combinations of the course's length, the number of materials it contains, its price, the number of students taking it, its difficulty level, the use of jargon or technical terms in its description, the instructors' rating, the number of reviews the instructors have received, and the number of classes the instructors have created on the same platform; the output is the course's average rating. Data from 350 courses are used, with 280 for training, 35 for testing, and 35 for validation. After trying out different machine learning models, a wide neural network consistently gave the best training results, while a medium tree model gave the best testing results. However, further research is needed, as none of the results are sufficiently accurate; the tree model achieved an R-squared of only 0.51 on the test set.
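For readers who want the shape of the setup, the sketch below approximates the "medium tree" model (the names suggest MATLAB Regression Learner presets, here approximated with a depth-limited scikit-learn decision tree) on the nine listed features and the 280/35/35 split. The synthetic data is a placeholder; the actual dataset is not reproduced.

```python
# A minimal sketch of the rating-prediction setup, assuming the nine
# course/instructor features are available numerically. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 350
X = rng.random((n, 9))      # length, materials, price, enrollment, etc.
y = 1 + 4 * rng.random(n)   # average rating on a 1-5 scale

# 280 training / 35 testing / 35 validation, as in the study.
X_train, y_train = X[:280], y[:280]
X_test, y_test = X[280:315], y[280:315]
X_val, y_val = X[315:], y[315:]

model = DecisionTreeRegressor(max_depth=5, random_state=0)  # "medium tree" stand-in
model.fit(X_train, y_train)
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```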

For this thesis, behavior metrics obtained from the Organic Practice Environment (OPE) LMS at Arizona State University were compared to student performance in Dr. Ian Gould’s Organic Chemistry I course. Each metric gathered was generic enough to be collectable by any LMS, making the results relevant to a larger number of classrooms. By using a combination of bivariate correlation analysis, group mean comparisons, linear regression model generation, and outlier analysis, the metrics that correlate best with exam performance were identified. The results indicate that total usage of the LMS, the amount of cramming done before exams, and the correctness and duration of submitted responses all demonstrate a strong correlation with exam scores.
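As an illustration of the bivariate correlation step, the sketch below computes Pearson correlations between per-student metrics and exam scores. The column names are hypothetical stand-ins for the OPE metrics, and the data is a toy example.

```python
# A minimal sketch of bivariate correlation analysis, assuming per-student
# LMS metrics and exam scores in a DataFrame. Names and values are illustrative.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "total_usage_minutes":  [300, 120, 450, 80, 600],
    "pre_exam_cramming":    [0.8, 0.3, 0.5, 0.9, 0.2],
    "response_correctness": [0.7, 0.5, 0.9, 0.4, 0.95],
    "exam_score":           [82, 65, 91, 58, 97],
})

# Correlate each behavior metric with exam performance.
for metric in ["total_usage_minutes", "pre_exam_cramming", "response_correctness"]:
    r, p = pearsonr(df[metric], df["exam_score"])
    print(f"{metric}: r={r:.2f}, p={p:.3f}")
```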