Matching Items (34)
Description
This dissertation addresses access management problems that occur in both emergency and outpatient clinics, with the objective of allocating the available resources to improve performance measures while considering the relevant trade-offs. Two main settings are considered, supported by statistical estimation of patient willingness-to-wait (WtW) behavior for outpatient appointments: allocation of a limited booking horizon to patients of different priorities by using time windows in an outpatient setting that accounts for patient behavior, and allocation of hospital beds to admitted Emergency Department (ED) patients. For each chapter, a different approach based on the problem context is developed, and its performance is analyzed by implementing analytical and simulation models. Real hospital data is used in the analyses to provide evidence that the methodologies introduced are beneficial in addressing real-life problems, and that real improvements can be achieved by using the suggested policies.

This dissertation starts by studying an outpatient clinic context to develop an effective resource allocation mechanism that can improve patient access to clinic appointments. I first identify patient behavior in terms of willingness to wait for an outpatient appointment. Two statistical models are developed to estimate the patient WtW distribution using data on booked appointments and appointment requests. Several analyses are conducted on simulated data to assess the effectiveness and accuracy of the estimations.
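As a rough illustration of the estimation task (not the dissertation's actual statistical models), a WtW curve can be read off appointment-offer outcomes: if offered delays are assigned independently of a patient's WtW, the acceptance rate at delay d is a pointwise estimate of P(WtW >= d). The data layout below is an assumption made for the sketch.

```python
# Minimal sketch: estimate a willingness-to-wait (WtW) curve from
# appointment-offer data. Field layout (offered delay, accepted flag)
# and the estimator are illustrative assumptions.

from collections import defaultdict

def wtw_curve(offers):
    """Estimate P(WtW >= d) as the acceptance rate among offers at delay d."""
    shown = defaultdict(int)     # delay -> offers made
    accepted = defaultdict(int)  # delay -> offers accepted (patient booked)
    for delay, booked in offers:
        shown[delay] += 1
        accepted[delay] += booked
    return {d: accepted[d] / shown[d] for d in sorted(shown)}

# Toy usage: (offered delay in days, patient booked?)
data = [(1, True), (1, True), (3, True), (3, False),
        (7, False), (7, True), (14, False), (14, False)]
for d, p in wtw_curve(data).items():
    print(f"P(WtW >= {d:2d} days) ~ {p:.2f}")
```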

Then, this dissertation introduces a time-window-based policy that utilizes patient behavior to improve access by using appointment delay as a lever. The policy improves patient access by allocating the available capacity to patients of different priorities, dividing the booking horizon into time intervals that can be used by each priority group and thereby strategically delaying lower-priority patients.
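A minimal sketch of the time-window mechanism follows; the horizon length and window boundaries are made-up illustrative values, not the dissertation's optimized ones.

```python
# Minimal sketch: split the booking horizon into windows and let each
# priority class book only into its assigned portion, which strategically
# delays lower-priority patients. Boundaries below are illustrative.

HORIZON = 14  # booking horizon in days

# earliest day (inclusive) each priority class may book;
# priority 1 = most urgent and may use the whole horizon
WINDOW_START = {1: 0, 2: 3, 3: 7}

def bookable_days(priority, open_slots):
    """Days with remaining capacity that this priority class may book."""
    start = WINDOW_START[priority]
    return [d for d in range(start, HORIZON) if open_slots.get(d, 0) > 0]

# Toy usage: one open slot on days 1, 5 and 9.
slots = {1: 1, 5: 1, 9: 1}
for p in (1, 2, 3):
    print(f"priority {p} may book days {bookable_days(p, slots)}")
```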

Finally, patient routing between the ED and inpatient units is studied to improve patient access to hospital beds. The strategy that captures the trade-off between patient safety and quality of care is characterized as a threshold type. Through simulation experiments driven by real data collected from a hospital, the achievable improvement from implementing such a strategy that considers the safety versus quality-of-care trade-off is illustrated.
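A minimal sketch of a threshold-type routing rule of the kind characterized here; the state variables and threshold value are illustrative assumptions, not the dissertation's calibrated policy.

```python
# Minimal sketch: an admitted ED patient waits for a bed in the medically
# preferred inpatient unit unless ED congestion (a patient-safety proxy)
# exceeds a threshold, in which case the patient is overflowed to an
# alternative unit (a quality-of-care compromise). Values are illustrative.

def route(boarding_count, primary_free, overflow_free, threshold=4):
    """Return the unit an admitted ED patient should be sent to."""
    if primary_free > 0:
        return "primary"   # preferred unit has a bed
    if boarding_count >= threshold and overflow_free > 0:
        return "overflow"  # safety outweighs the care mismatch
    return "wait"          # keep boarding in the ED

print(route(boarding_count=2, primary_free=0, overflow_free=3))  # wait
print(route(boarding_count=6, primary_free=0, overflow_free=3))  # overflow
```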
Contributors: Kilinc, Derya (Author) / Gel, Esma (Thesis advisor) / Pasupathy, Kalyan (Committee member) / Sefair, Jorge (Committee member) / Sir, Mustafa (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Graphs are commonly used visualization tools in a variety of fields. Algorithms have been proposed that claim to improve the readability of graphs by reducing edge crossings, adjusting edge length, or some other means. However, little research has been done to determine which of these algorithms best suit human perception for particular graph properties. This thesis explores four different graph properties: average local clustering coefficient (ALCC), global clustering coefficient (GCC), number of triangles (NT), and diameter. For each of these properties, three different graph layouts are applied to represent three different approaches to graph visualization: multidimensional scaling (MDS), force-directed (FD), and tsNET. In a series of studies conducted through the crowdsourcing platform Amazon Mechanical Turk, participants are tasked with discriminating between two graphs in order to determine their just noticeable differences (JNDs) for the four graph properties and three layout algorithms. These results are analyzed using previously established methods presented by Rensink et al. and Kay and Heer. The average JNDs are analyzed using a linear model that determines whether the property-layout pair seems to follow Weber's law, and the individual JNDs are run through a log-linear model to determine whether it is possible to model the individual variance of the participants' JNDs. The models are evaluated using the R² score to determine whether they adequately explain the data, and compared using the pairwise Mann-Whitney U-test to determine whether the layout has a significant effect on the perception of the graph property. These tests indicate that the data collected in the studies cannot always be modeled well with either the linear or the log-linear model, which suggests that some properties may not follow Weber's law. Additionally, the layout algorithm is not found to have a significant impact on the perception of some of these properties.
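A minimal sketch of the Weber's-law check: regress average JND on the base stimulus level and inspect the R² of the linear fit (under Weber's law, the JND grows linearly with the base level). The data points below are fabricated placeholders, not results from the study.

```python
# Minimal sketch: ordinary least squares fit of JND vs. base level,
# with R² as a goodness-of-fit check. Data are fabricated placeholders.

def fit_line(xs, ys):
    """OLS for y = a + b*x; returns (a, b, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Toy data: base graph-property value vs. average measured JND.
base = [0.1, 0.2, 0.3, 0.4, 0.5]
jnd = [0.021, 0.039, 0.062, 0.078, 0.103]
a, b, r2 = fit_line(base, jnd)
print(f"JND ~ {a:.3f} + {b:.3f}*base, R2 = {r2:.3f}")  # near-linear -> Weber-like
```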
Contributors: Clayton, Benjamin (Author) / Maciejewski, Ross (Thesis advisor) / Kobourov, Stephen (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis develops a chance-constrained knapsack optimization model, which breeders can use to make better decisions about seed progression and to reduce the level of risk in their selections. The model's objective is to select seed varieties out of a larger pool of varieties and maximize the average yield of the “knapsack” subject to meeting a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to reduce the risk of a bad yield while still maximizing the total yield. The second model considers the possibility of adverse environmental effects and seeks to mitigate the negative impact they could have on the total yield. In practice, breeders can use these models to better quantify uncertainty in selecting seed varieties.
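A minimal sketch of the chance-constrained selection idea: assuming independent, normally distributed yields, the constraint P(total yield >= T) >= 1 - alpha has the deterministic equivalent mu - z*sigma >= T. All numbers and the brute-force search below are illustrative; the thesis formulates a solver-based knapsack model.

```python
# Minimal sketch of chance-constrained seed selection. Assuming
# independent normal yields, P(total >= T) >= 1 - alpha reduces to
#   mu_total - z * sigma_total >= T.
# Data, knapsack size, and the exhaustive search are illustrative.

import itertools, math

Z = 1.645    # z-score for alpha = 0.05
T = 140.0    # minimum acceptable total yield
K = 3        # number of varieties to progress

# (name, mean yield, yield std dev) -- fabricated data
varieties = [("A", 60, 12), ("B", 55, 5), ("C", 70, 25),
             ("D", 50, 4), ("E", 65, 15)]

best, best_mu = None, -1.0
for combo in itertools.combinations(varieties, K):
    mu = sum(v[1] for v in combo)
    sigma = math.sqrt(sum(v[2] ** 2 for v in combo))  # independence assumed
    if mu - Z * sigma >= T and mu > best_mu:          # chance constraint
        best, best_mu = combo, mu

if best:
    print("progress:", [v[0] for v in best], "expected yield:", best_mu)
else:
    print("no selection satisfies the risk criterion")
```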
Contributors: Ozcan, Ozkan Meric (Author) / Armbruster, Dieter (Thesis advisor) / Gel, Esma (Thesis advisor) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In the first chapter, I consider a capacity- and price-bounded profit maximization problem in which a firm determines the prices of multiple substitutable products when the supply or capacity of the products is limited and the prices are bounded. This problem applies broadly to many pricing decision settings, such as hotel rooms, airline seats, fashion, or other seasonal retail products, as well as any product line with shared production capacity. In this chapter, I characterize structural properties of the constrained profit maximization problem under the Multinomial Logit (MNL) model and of the optimal pricing solutions, and present efficient solution approaches. In the second chapter, I consider a data-driven profit maximization problem in which a firm determines the prices of multiple substitutable products. This problem likewise applies broadly to settings such as hotel rooms, airline seats, fashion, or other seasonal retail products. A typical data-driven optimization problem takes a two-step approach of parameter estimation followed by optimization for decisions. However, this often returns a suboptimal solution because the estimation error due to variability in the data affects the quality of the optimal solution. I present the relationship between estimation error and the quality of the optimal solution and provide a possible way to reduce the impact of the error on the optimal pricing decision under the MNL model. In the last chapter, I consider a facility layout design problem for a semiconductor fabrication facility (FAB). In designing a facility layout, the traditional approach has been to minimize the flow-weighted distance of materials through the automated material handling system (AMHS). However, the distance-focused approach sometimes yields one major issue, traffic congestion, raising the question of whether distance is truly a good criterion for designing a layout. In this study, I analyze the system dynamics to understand what causes such congestion and propose an alternative approach based on the concept of ``balancing the flow," which focuses on resolving congestion. Finally, I compare the performance of the two methods through simulation of semiconductor FAB layouts.
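A minimal sketch of the MNL profit function that both pricing chapters build on; the utility parameters, costs, bounds, and grid search are illustrative stand-ins for the structural analysis and efficient solution approaches in the chapters.

```python
# Minimal sketch: under the MNL model a customer buys product i with
# probability exp(a_i - b*p_i) / (1 + sum_j exp(a_j - b*p_j)), and the
# firm picks bounded prices to maximize expected profit. All parameters
# and the grid search are illustrative.

import itertools, math

a = [3.0, 2.5]      # product attractiveness
b = 0.8             # price sensitivity
c = [1.0, 0.8]      # unit costs
LO, HI = 1.0, 6.0   # price bounds

def mnl_profit(p):
    w = [math.exp(a[i] - b * p[i]) for i in range(len(p))]
    denom = 1.0 + sum(w)  # the 1 is the no-purchase option
    return sum((p[i] - c[i]) * w[i] / denom for i in range(len(p)))

grid = [LO + k * 0.05 for k in range(int((HI - LO) / 0.05) + 1)]
best = max(itertools.product(grid, grid), key=mnl_profit)
print(f"best bounded prices ~ {best}, profit ~ {mnl_profit(best):.3f}")
```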
Contributors: YU, GWANGJAE (Author) / Li, Hongmin (Thesis advisor) / Webster, Scott (Thesis advisor) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Researchers and practitioners have widely studied road network traffic data in different areas such as urban planning, traffic prediction, and spatial-temporal databases. For instance, researchers use such data to evaluate the impact of road network changes. Unfortunately, collecting large-scale, high-quality urban traffic data requires tremendous effort because participating vehicles must install Global Positioning System (GPS) receivers and administrators must continuously monitor these devices. Some urban traffic simulators attempt to generate such data with different features, but they suffer from two critical issues: (1) scalability: most offer only a single-machine solution, which is not adequate to produce large-scale data, and those that can generate traffic in parallel do not balance the load well among machines in a cluster; (2) granularity: many simulators do not consider microscopic traffic behavior, including traffic lights, lane changing, and car following. This paper proposes GeoSparkSim, a scalable traffic simulator that extends Apache Spark to generate large-scale road network traffic datasets with microscopic traffic simulation. The proposed system seamlessly integrates with a Spark-based spatial data management system, GeoSpark, to deliver a holistic approach that allows data scientists to simulate, analyze, and visualize large-scale urban traffic data. To implement microscopic traffic models, GeoSparkSim employs a simulation-aware vehicle partitioning method that partitions vehicles among different machines such that each machine has a balanced workload. The experimental analysis shows that GeoSparkSim can simulate the movements of 200 thousand cars over an extensive road network (250 thousand road junctions and 300 thousand road segments).
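A minimal sketch of the kind of microscopic car-following update such a simulator must evaluate for every vehicle at every time step, shown here with the textbook Intelligent Driver Model; GeoSparkSim's actual traffic models and Spark-based partitioning are described in the paper.

```python
# Minimal sketch: one Intelligent Driver Model (IDM) acceleration update
# for a following vehicle. Parameters are textbook-style placeholders.

import math

def idm_accel(v, v_lead, gap,
              v0=15.0,   # desired speed (m/s)
              T=1.5,     # desired time headway (s)
              a=1.0,     # max acceleration (m/s^2)
              b=1.5,     # comfortable deceleration (m/s^2)
              s0=2.0):   # minimum gap (m)
    """IDM acceleration for one follower given its leader's state."""
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a * b))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Toy usage: follower at 12 m/s, leader at 10 m/s, 20 m gap, 0.5 s step.
v, dt = 12.0, 0.5
v += idm_accel(v, v_lead=10.0, gap=20.0) * dt
print(f"follower speed after one step: {v:.2f} m/s")
```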
Contributors: Fu, Zishan (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The shift in focus of manufacturing systems to high-mix, low-volume production poses a challenge to both efficient scheduling of manufacturing operations and effective assessment of production capacity. This thesis considers the problem of scheduling a set of jobs that require machine and worker resources to complete their manufacturing operations. Although planners in manufacturing contexts typically focus solely on machines, schedules that only consider machining requirements may be problematic during implementation because machines need skilled workers and cannot run unsupervised. The model used in this research will be beneficial to these environments, as planners would be able to determine more realistic assignments and operation sequences to minimize the total time required to complete all jobs. This thesis presents a mathematical formulation for concurrent scheduling of machines and workers that can optimally schedule a set of jobs while accounting for changeover times between operations. The mathematical formulation is based on disjunctive constraints that capture the conflict between operations when trying to schedule them on the same machine or worker. An additional formulation extends the previous one to consider how cross-training may impact production capacity and, for a given budget, provides training recommendations for specific workers and operations to reduce the makespan. If training a worker is advantageous for increasing production capacity, the model recommends the best time window in which to complete it such that overlaps with work assignments are avoided. It is assumed that workers can perform tasks involving the recently acquired skills as soon as training is complete. As an alternative to the mixed-integer programming formulations, this thesis provides a math-heuristic approach that fixes the order of some operations based on Largest Processing Time (LPT) and Shortest Processing Time (SPT) procedures, while allowing the exact formulation to find the optimal schedule for the remaining operations. Computational experiments include the use of the solution of the no-training problem as a starting feasible solution to the training problem. Although the models provided are general, the manufacturing of printed circuit boards is used as a case study.
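A minimal sketch of the disjunctive structure mentioned above, for two operations i and j that share a machine or worker; the notation is illustrative, not the thesis's exact formulation.

```latex
% Standard big-M disjunctive pair for a shared resource. Notation
% (illustrative): $s_i$ = start time, $p_i$ = processing time,
% $c_{ij}$ = changeover time, $y_{ij} \in \{0,1\}$ selects the order.
\begin{align}
  s_j &\ge s_i + p_i + c_{ij} - M\,(1 - y_{ij}) \quad \text{($i$ before $j$)}\\
  s_i &\ge s_j + p_j + c_{ji} - M\,y_{ij} \quad \text{($j$ before $i$)}
\end{align}
```

Exactly one of the two constraints is binding depending on y_ij, so the pair forbids the two operations from overlapping on the shared machine or worker.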
Contributors: Adams, Katherine Bahia (Author) / Sefair, Jorge (Thesis advisor) / Askin, Ronald (Thesis advisor) / Webster, Scott (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Platform business models have become pervasive in many aspects of the economy, particularly in areas experiencing rapid growth such as retailing (e.g., Amazon and eBay) and last-mile transportation (e.g., Instacart and Amazon Flex). The popularity of platform business models is, in part, due to their asset-light nature, which allows businesses to maintain flexibility while scaling up their operations. Yet this ease of growth may not necessarily be conducive to viable outcomes. Because scalability in a platform depends on the intermediary role it plays in facilitating matching between users on each side of the platform, the efficiency of matching can be eroded as growth increases search frictions and matching costs. This phenomenon is demonstrated in recent studies on platform growth (e.g., Fradkin, 2017; Lian and Van Ryzin, 2021; Li and Netessine, 2020). To sustain scalability during growth, platforms must rely on effective platform design to mitigate the challenges that arise in facilitating efficient matching. Market design differs in its focus between retail and last-mile transportation platforms. In retail platforms, the emphasis of platform design is on helping consumers navigate a variety of product offerings to match their needs while connecting vendors to a large consumer base (Dinerstein et al., 2018; Bimpikis et al., 2020). Because these platforms exist to manage two-sided demand, scalability depends on the realization of indirect network economies, where the benefits for users of participating on the platform are commensurate with the number of users on the other side (Parker and Van Alstyne, 2005; Armstrong, 2006; Rysman, 2009). Thus, platform design plays a critical role in the realization of indirect network economies on retail platforms. Last-mile transportation platforms manage independent drivers on one side and retailers on the other, with both parties retaining the flexibility to switch between platforms. High demand for independent drivers, along with their flexibility in work participation, induces platforms to use subsidies to incentivize retention. This leads to short-term improvements in retention at the expense of significant increases in platforms' compensation costs. Acute challenges to driver retention call for effective compensation strategies to better coordinate labor participation from these drivers (Nikzad, 2017; Liu et al., 2019; Guda and Subramanian, 2019). In addition to driver turnover, retailers' withdrawal can undermine the operating efficiency of last-mile transportation platforms (Borsenberger et al., 2018). This dissertation studies platforms' scalability and the operational challenges platforms face during growth.
Contributors: Wang, Lina (Author) / Rabinovich, Elliot (Thesis advisor) / Richards, Timothy (Committee member) / Webster, Scott (Committee member) / Guda, Harish (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
To maintain long-term success, a manufacturing company should be managed and operated under the guidance of properly designed capacity, production, and logistics plans that are formulated in coordination with its manufacturing footprint, so that its managerial goals on both the strategic and tactical levels can be fulfilled. In particular, sufficient flexibility and efficiency should be ensured so that future customer demand can be met at a profit. This dissertation is motivated by an automobile manufacturer's mid-term and long-term decision problems, but applies to any multi-plant, multi-product manufacturer with evolving product portfolios and significant fixed and variable production costs. By introducing the concepts of effective capacity and product-specific flexibility, two mixed-integer programming (MIP) models are proposed to help manufacturers shape their mid-term capacity plans and long-term product allocation plans. With fixed tooling flexibility, production and logistics considerations are integrated into a mid-term capacity planning model to develop well-informed and balanced tactical plans, which utilize various capacity adjustment options to coordinate production, inventory, and shipping schedules throughout the planning horizon so that overall operational and capacity adjustment costs are minimized. For long-term product allocation planning, strategic tooling configuration plans that enable the production of multi-generation products at minimal configuration and operational costs are established for all plants throughout the planning horizon, considering product-specific commonality and compatibility. New product introductions and demand uncertainty over the planning horizon are incorporated. As a result, potential production sites for each product and the corresponding process flexibility are determined. An efficient heuristic method is developed and shown to perform well in solution quality and computational requirements.
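A minimal sketch of the product-allocation structure described above, with illustrative notation rather than the dissertation's actual model.

```latex
% Illustrative notation: $x_{jp}=1$ if plant $p$ is tooled for product $j$,
% $q_{jpt}$ = production quantity, $f_{jp}$ = tooling-configuration cost,
% $v_{jp}$ = unit operating cost, $K_{pt}$ = effective capacity,
% $d_{jt}$ = demand.
\begin{align}
  \min\quad & \sum_{j,p} f_{jp}\, x_{jp} + \sum_{j,p,t} v_{jp}\, q_{jpt} \\
  \text{s.t.}\quad & \sum_{p} q_{jpt} \ge d_{jt} \quad \forall j,t
      && \text{(meet demand)}\\
  & \sum_{j} q_{jpt} \le K_{pt} \quad \forall p,t
      && \text{(effective capacity)}\\
  & q_{jpt} \le K_{pt}\, x_{jp} \quad \forall j,p,t
      && \text{(tooling required)}\\
  & x_{jp} \in \{0,1\}, \quad q_{jpt} \ge 0
\end{align}
```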
Contributors: Yao, Xufeng (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Escobedo, Adolfo (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Computer vision and tracking have become areas of great interest for many reasons, including self-driving cars, identification of vehicles and drivers on roads, and security camera monitoring, all of which are expanding in the modern digital era. When working with practical systems that are constrained in multiple ways, such as video quality or viewing angle, algorithms that work well theoretically can have a high error rate in practice. This thesis studies several ways in which that error can be minimized. It describes an application in a practical system: detecting, tracking, and counting people entering different lanes at an airport security checkpoint, using CCTV videos as the primary source. This thesis improves an existing algorithm that is not optimized for this particular problem and has a high error rate when its counts are compared with the true volume of users. The high error rate is caused by many people crowding into security lanes at the same time. The camera from which footage was captured is located at a poor angle, so many of the people occlude each other and cause the existing algorithm to miss them. One solution is to count only heads: since heads are smaller than a full body, they occlude less, and because the camera is angled from above, the heads in back appear higher and are not occluded by people in front. One of the primary improvements to the algorithm is therefore to combine person detections and head detections to improve accuracy. The proposed algorithm also improves the accuracy of the detections themselves. The existing algorithm used the COCO training dataset, which works well in scenarios where people are visible and not occluded. However, the available video quality in this project was not very good, with people often blocking each other from the camera's view. Thus, a different training set was needed that could detect people even in poor-quality frames and with occlusion. The new training set is the first algorithmic improvement and, although occasionally performing worse, reduced the error by 7.25% on average.
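A minimal sketch of the person-plus-head combination step; the box format and overlap rule are illustrative assumptions, not the thesis's exact algorithm.

```python
# Minimal sketch: count detected person boxes, then add any head box whose
# center falls outside every person box, so a person occluded from the body
# detector is still counted once. Box format and the rule are illustrative.

def inside(head, person):
    """True if the head box's center lies within the person box."""
    hx = (head[0] + head[2]) / 2
    hy = (head[1] + head[3]) / 2
    return person[0] <= hx <= person[2] and person[1] <= hy <= person[3]

def count_people(person_boxes, head_boxes):
    count = len(person_boxes)
    for head in head_boxes:
        if not any(inside(head, p) for p in person_boxes):
            count += 1  # head with no matching body: occluded person
    return count

# Toy usage: boxes are (x1, y1, x2, y2).
persons = [(10, 10, 50, 120)]
heads = [(20, 12, 32, 26),   # belongs to the detected person
         (60, 15, 72, 30)]   # occluded person, head only
print(count_people(persons, heads))  # -> 2
```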
Contributors: Larsen, Andrei (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Coastal areas are susceptible to man-made disasters, such as oil spills, which not only have a dreadful impact on the lives of coastal communities and businesses but also have lasting and hazardous consequences. The United States coastal areas, especially the Gulf of Mexico, have witnessed devastating oil spills of varied sizes and durations that resulted in major economic and ecological losses. These disasters affected the oil, housing, forestry, tourism, and fishing industries, with overall costs exceeding billions of dollars (Baade et al. (2007); Smith et al. (2011)). Extensive research has been done with respect to oil spill simulation techniques, spatial optimization models, and innovative strategies to deal with spill response and planning efforts. However, most of the research in those areas has been done independently, leaving a conceptual void between them.

In the following work, this thesis presents a Spatial Decision Support System (SDSS), which efficiently integrates the independent facets of spill modeling techniques and spatial optimization to enable officials to investigate and explore the various options to clean up an offshore oil spill and make a more informed decision. This thesis utilizes the Blowout and Spill Occurrence Model (BLOSOM) developed by Sim et al. (2015) to simulate hypothetical oil spill scenarios, followed by the Oil Spill Cleanup and Operational Model (OSCOM) developed by Grubesic et al. (2017) to spatially optimize the response efforts. The results of this combination are visualized in the SDSS, featuring geographical maps, so the boat ramps from which the response should be launched can be easily identified, along with the amount of oil that hits the shore, thereby visualizing the intensity of the impact of the spill in the coastal areas for various cleanup targets.
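A minimal sketch of the kind of response-siting decision the SDSS supports, using a simple greedy rule as a stand-in for OSCOM's richer spatial optimization; the coordinates and capacities below are placeholders.

```python
# Minimal sketch: given a simulated spill location and candidate boat
# ramps, open the nearest ramps until cleanup capacity covers the target
# volume. A greedy stand-in for OSCOM; all data are placeholders.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_ramps(ramps, spill_xy, target_volume):
    """Open the nearest ramps until capacity covers the cleanup target."""
    chosen, capacity = [], 0.0
    for name, xy, cap in sorted(ramps, key=lambda r: dist(r[1], spill_xy)):
        if capacity >= target_volume:
            break
        chosen.append(name)
        capacity += cap
    return chosen, capacity

# Toy usage: (name, (x, y), cleanup capacity in barrels/day).
ramps = [("R1", (0, 2), 300), ("R2", (5, 1), 500), ("R3", (9, 4), 400)]
print(pick_ramps(ramps, spill_xy=(4, 3), target_volume=600))
```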
Contributors: Pydi Medini, Prannoy Chandra (Author) / Maciejewski, Ross (Thesis advisor) / Grubesic, Anthony (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018