Matching Items (25)

Description
Image segmentation is of great importance and value in many applications. In computer vision, image segmentation is the process of locating objects and boundaries within images, and the segmentation result can provide more meaningful image data. Generally, image segmentation algorithms fall into two fundamental categories: discontinuity and similarity. The idea behind discontinuity is locating abrupt changes in image intensity, as are often seen at edges or boundaries. Similarity subdivides an image into regions that fit pre-defined criteria. The algorithm developed in this thesis belongs to the second category.
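For orientation, a minimal illustrative sketch (not from the thesis) of the two categories on a grayscale image stored as a NumPy array; the threshold values are arbitrary:

```python
import numpy as np

def discontinuity_segmentation(img, grad_thresh=0.2):
    """Flag pixels where intensity changes abruptly (edge/boundary pixels)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > grad_thresh

def similarity_segmentation(img, lo=0.4, hi=0.8):
    """Group pixels into a region that satisfies a pre-defined intensity criterion."""
    return (img >= lo) & (img <= hi)
```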

This study addresses the problem of particle image segmentation by measuring the similarity between a sampled region and an adjacent region, based on the Bhattacharyya distance and an image feature extraction technique that uses the distribution of local binary patterns and pattern contrasts. A boundary smoothing process is developed to improve the accuracy of the segmentation. The novel particle image segmentation algorithm is tested on four different cases of particle image velocimetry (PIV) images. The experimental results show that the algorithm partitions the objects with an error rate below 10 percent. Ground-truth segmentation data, i.e., manually segmented images from each case, are used to calculate the error rate of the segmentations.
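The abstract names the two core ingredients but not their implementation; as a hedged sketch (not the author's code), an 8-neighbor local binary pattern histogram can serve as the region feature and the Bhattacharyya distance as the similarity measure between a sampled region and an adjacent region:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbor local binary pattern code (0-255) for each interior pixel."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                 img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                 img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=int)
    for bit, nb in enumerate(neighbors):
        code += (nb >= c).astype(int) << bit
    return code

def lbp_histogram(codes, n_bins=256):
    """Normalized distribution of LBP codes over a region."""
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two normalized histograms (0 = identical)."""
    bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient
    return -np.log(max(bc, eps))
```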
Contributors: Han, Dongmin (Author) / Frakes, David (Thesis advisor) / Adrian, Ronald (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature-free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and is executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations.

With solution variables projected onto a k-th order polynomial basis, a (k+1)-order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and the deformation of columns and spheres in periodic vortices, are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field.
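As an aside on how the reported (k+1)-order rate is typically verified with the method of manufactured solutions (a generic check, not the dissertation's code), the observed order follows from the errors on two successively refined grids:

```python
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Observed order of accuracy from errors on two grid spacings h."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Example: halving h should give roughly k+1 for a k-th order DG basis,
# e.g. observed_order(1.6e-3, 2.0e-4, 0.02, 0.01) ~ 3.0 for k = 2.
```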

Accelerating advection via GPU hardware is found to provide a 30x speedup factor when comparing a 2.0 GHz Intel Xeon E5-2620 CPU running in serial against an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and, as a result, fills them more quickly and produces a smaller speedup of 18x.
Contributors: Jibben, Zechariah J (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Adrian, Ronald (Committee member) / Chen, Kangping (Committee member) / Treacy, Michael (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The application of novel visualization and modeling methods to the study of cardiovascular disease is vital to the development of innovative diagnostic techniques, including those that may aid in the early detection and prevention of cardiovascular disorders. This dissertation focuses on the application of particle image velocimetry (PIV) to the study of intracardiac hemodynamics. This is accomplished primarily through the use of ultrasound-based PIV, which allows for in vivo visualization of intracardiac flow without the optical access required by traditional camera-based PIV methods.

The fundamentals of ultrasound PIV are introduced, including experimental methods for its implementation as well as a discussion on estimating and mitigating measurement error. Ultrasound PIV is then compared to optical PIV, a highly developed technique with proven accuracy that has become, through rigorous examination, the “gold standard” of two-dimensional flow visualization. Results show good agreement between the two methods.

Using a mechanical left heart model, a multi-plane ultrasound PIV technique is introduced and applied to quantify a complex, three-dimensional flow that is analogous to the left intraventricular flow. Changes in ventricular flow dynamics due to the rotational orientation of mechanical heart valves are studied; the results demonstrate the importance of multi-plane imaging techniques when trying to assess the strongly three-dimensional intraventricular flow.

The potential use of ultrasound PIV as an early diagnosis technique is demonstrated through the development of a novel elasticity estimation technique. A finite element analysis routine is coupled with an ensemble Kalman filter to allow for the estimation of material elasticity using forcing and displacement data derived from PIV. Results demonstrate that it is possible to estimate elasticity using forcing data derived from a PIV vector field, provided the vector density is sufficient.
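The abstract does not detail the coupling, but the analysis step of a stochastic ensemble Kalman filter, fed by a finite element forward model and PIV-derived displacement observations, might look like the following sketch (array names and shapes are assumptions, not the author's implementation):

```python
import numpy as np

def enkf_update(X_f, y, H, R, rng=np.random.default_rng(0)):
    """Ensemble Kalman filter analysis step.

    X_f : (n_state, n_ens) forecast ensemble (e.g., elasticity parameters / FE state)
    y   : (n_obs,) observation vector (e.g., PIV-derived displacements)
    H   : (n_obs, n_state) observation operator
    R   : (n_obs, n_obs) observation error covariance
    """
    n_ens = X_f.shape[1]
    A = X_f - X_f.mean(axis=1, keepdims=True)        # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)                # P_f H^T from the ensemble
    K = P_HT @ np.linalg.inv(H @ P_HT + R)            # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X_f + K @ (Y - H @ X_f)                    # analysis ensemble
```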
Contributors: Westerdale, John Curtis (Author) / Adrian, Ronald (Thesis advisor) / Belohlavek, Marek (Committee member) / Squires, Kyle (Committee member) / Trimble, Steve (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Cerebral aneurysms are pathological balloonings of blood vessels in the brain, commonly found in the arterial network at the base of the brain. Cerebral aneurysm rupture can lead to a dangerous medical condition, subarachnoid hemorrhage, that is associated with high rates of morbidity and mortality. Effective evaluation and management of cerebral aneurysms are therefore essential to public health. The goal of treating an aneurysm is to isolate the aneurysm from its surrounding circulation, thereby preventing further growth and rupture. Endovascular treatment for cerebral aneurysms has gained popularity over traditional surgical techniques due to its minimally invasive nature and shorter associated recovery time. The hemodynamic modifications effected by the treatment can promote thrombus formation within the aneurysm, leading to eventual isolation. However, different treatment devices can effect very different hemodynamic outcomes in aneurysms with different geometries.

Currently, cerebral aneurysm risk evaluation and treatment planning in clinical practice are largely based on geometric features of the aneurysm, including the dome size, dome-to-neck ratio, and parent vessel geometry. Hemodynamics, on the other hand, although known to be deeply involved in cerebral aneurysm initiation and progression, are considered to a lesser degree. Previous work in the field of biofluid mechanics has demonstrated that geometry is a driving factor behind aneurysmal hemodynamics.

The goal of this research is to develop a more combined geometric/hemodynamic basis for informing clinical decisions. Geometric main effects were analyzed to quantify contributions made by geometric factors that describe cerebral aneurysms (i.e., dome size, dome-to-neck ratio, and inflow angle) to clinically relevant hemodynamic responses (i.e., wall shear stress, root mean square velocity magnitude and cross-neck flow). Computational templates of idealized bifurcation and sidewall aneurysms were created to satisfy a two-level full factorial design, and examined using computational fluid dynamics. A subset of the computational bifurcation templates was also translated into physical models for experimental validation using particle image velocimetry. The effects of geometry on treatment were analyzed by virtually treating the aneurysm templates with endovascular devices. The statistical relationships between geometry, treatment, and flow that emerged have the potential to play a valuable role in clinical practice.
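For illustration only (not the study's analysis scripts), the main effect of each geometric factor in a two-level full factorial design is simply the difference between the mean response at the factor's high and low levels:

```python
import numpy as np

def main_effects(levels, response):
    """Main effects from a two-level full factorial design.

    levels   : (n_runs, n_factors) array coded -1 (low) / +1 (high),
               e.g., columns = dome size, dome-to-neck ratio, inflow angle
    response : (n_runs,) hemodynamic response, e.g., wall shear stress
    """
    levels = np.asarray(levels, dtype=float)
    response = np.asarray(response, dtype=float)
    return np.array([response[levels[:, j] > 0].mean()
                     - response[levels[:, j] < 0].mean()
                     for j in range(levels.shape[1])])
```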
Contributors: Nair, Priya (Author) / Frakes, David (Thesis advisor) / Vernon, Brent (Committee member) / Chong, Brian (Committee member) / Pizziconi, Vincent (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The flow around a golf ball is studied using direct numerical simulation (DNS). An immersed boundary approach is adopted in which the incompressible Navier-Stokes equations are solved using a fractional step method on a structured, staggered grid in cylindrical coordinates. The boundary conditions on the surface are imposed using momentum forcing in the vicinity of the boundary. The flow solver is parallelized using a domain decomposition strategy and message passing interface (MPI), and exhibits linear scaling on as many as 500 processors. A laminar flow case is presented to verify the formal accuracy of the method. The immersed boundary approach is validated by comparison with computations of the flow over a smooth sphere. Simulations are performed at Reynolds numbers of 2.5 × 10⁴ and 1.1 × 10⁵ based on the diameter of the ball and the freestream speed and using grids comprised of more than 1.14 × 10⁹ points. Flow visualizations reveal the location of separation, as well as the delay of complete detachment. Predictions of the aerodynamic forces at both Reynolds numbers are in reasonable agreement with measurements. Energy spectra of the velocity quantify the dominant frequencies of the flow near separation and in the wake. Time-averaged statistics reveal characteristic physical patterns in the flow as well as local trends within dimples. A mechanism of drag reduction due to the dimples is confirmed, and metrics for dimple optimization are proposed.
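As a generic post-processing sketch (not part of the solver described here), the dominant frequencies can be extracted from a velocity probe signal with a one-sided FFT energy spectrum; the probe name and sampling interval below are assumptions:

```python
import numpy as np

def velocity_spectrum(u, dt):
    """One-sided energy spectrum of a velocity time series sampled at interval dt."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()                  # remove the mean-flow contribution
    n = len(u)
    U = np.fft.rfft(u)
    freq = np.fft.rfftfreq(n, d=dt)
    E = (np.abs(U) ** 2) / n          # discrete energy per frequency bin
    return freq, E

# Dominant frequency near separation or in the wake (hypothetical probe u_probe):
# freq, E = velocity_spectrum(u_probe, dt); f_peak = freq[1:][np.argmax(E[1:])]
```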
Contributors: Smith, Clinton E (Author) / Squires, Kyle D (Thesis advisor) / Balaras, Elias (Committee member) / Herrmann, Marcus (Committee member) / Adrian, Ronald (Committee member) / Stanzione, Daniel C (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A cerebral aneurysm is an abnormal ballooning of the blood vessel wall in the brain that occurs in approximately 6% of the general population. When a cerebral aneurysm ruptures, the resulting damage is lethal in nearly 50% of cases. Over the past decade, endovascular treatment has emerged as an effective treatment option for cerebral aneurysms that is far less invasive than conventional surgical options. Nonetheless, the rate of successful treatment is as low as 50% for certain types of aneurysms. Treatment success has been correlated with favorable post-treatment hemodynamics. However, current understanding of the effects of endovascular treatment parameters on post-treatment hemodynamics is limited. This limitation is due in part to current challenges in in vivo flow measurement techniques. Improved understanding of post-treatment hemodynamics can lead to more effective treatments. However, the effects of treatment on hemodynamics may be patient-specific, and thus accurate tools that can predict hemodynamics on a case-by-case basis are also required for improving outcomes.

Accordingly, the main objectives of this work were 1) to develop computational tools for predicting post-treatment hemodynamics and 2) to build a foundation of understanding on the effects of controllable treatment parameters on cerebral aneurysm hemodynamics. Experimental flow measurement techniques, using particle image velocimetry, were first developed for acquiring flow data in cerebral aneurysm models treated with an endovascular device. The experimental data were then used to guide the development of novel computational tools, which consider the physical properties, design specifications, and deployment mechanics of endovascular devices to simulate post-treatment hemodynamics. The effects of different endovascular treatment parameters on cerebral aneurysm hemodynamics were then characterized under controlled conditions. Lastly, application of the computational tools for interventional planning was demonstrated through the evaluation of two patient cases.
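The PIV measurements referenced here ultimately rest on cross-correlating interrogation windows between successive images; a minimal FFT-based sketch of the per-window displacement estimate (generic, not the author's implementation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement (dy, dx) between two interrogation windows via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the (periodic) peak indices into the range [-N/2, N/2)
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx
```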
Contributors: Babiker, M. Haithem (Author) / Frakes, David H (Thesis advisor) / Adrian, Ronald (Committee member) / Caplan, Michael (Committee member) / Chong, Brian (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and ideal temperatures for many of these processes are unknown. Through the use of fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes such as BODIPY, rose Bengal, Rhodamine dyes 6G, 700, and 800, and Nile Blue A and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while particles are heated from 25 to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. Photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots are generated in MATLAB to describe each dye's intensity vs temperature. Regression curves are created to describe change in fluorescence intensity over temperature. Dyes are compared as nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R2 values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G only yielded an R2 value of 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve yielded a much different shape from the other dyes. It is recommended that future work describe a broader range of dyes and nanoparticle sizes, and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. Further research into the effects of nanoparticle size on fluorescence intensity levels should be considered as the particles used here greatly exceed 2 μm. In addition, Nile Blue A should be further investigated as to why its fluorescence-temperature curve did not take on a characteristic shape for a temperature-sensitive dye in these experiments.
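The regression step described here reduces to a polynomial fit of intensity against temperature plus an R² check; a minimal sketch (illustrative, not the thesis code, which used MATLAB):

```python
import numpy as np

def fit_intensity_curve(temperature_C, intensity, order=2):
    """Fit fluorescence intensity vs. temperature with a polynomial and report R^2."""
    coeffs = np.polyfit(temperature_C, intensity, order)
    fitted = np.polyval(coeffs, temperature_C)
    ss_res = np.sum((intensity - fitted) ** 2)
    ss_tot = np.sum((intensity - np.mean(intensity)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot   # regression coefficients, R^2
```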
Contributors: Tomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study performs numerical modeling for the climate of semi-arid regions by running a high-resolution atmospheric model constrained by large-scale climatic boundary conditions, a practice commonly called climate downscaling. These investigations focus especially on precipitation and temperature, quantities that are critical to life in semi-arid regions. Using the Weather Research and Forecasting (WRF) model, a non-hydrostatic geophysical fluid dynamical model with a full suite of physical parameterizations, a series of numerical sensitivity experiments is conducted to test how the intensity and spatial/temporal distribution of precipitation change with grid resolution, time step size, the resolution of lower boundary topography, and surface characteristics. Two regions, Arizona in the U.S. and the Aral Sea region in Central Asia, are chosen as the test-beds for the numerical experiments: the former for its complex terrain and the latter for the dramatic man-made changes in its lower boundary conditions (the shrinkage of the Aral Sea). Sensitivity tests show that the parameterization schemes for rainfall are not resolution-independent, so a refinement of resolution is no guarantee of a better result. However, simulations at all resolutions do capture the inter-annual variability of rainfall over Arizona, and temperature is simulated more accurately with refinement in resolution. Results show that both seasonal mean rainfall and the frequency of extreme rainfall events increase with resolution. For the Aral Sea, sensitivity tests indicate that while the shrinkage of the Aral Sea has a dramatic impact on the precipitation over the confines of the (former) sea itself, its effect on the precipitation over greater Central Asia is not necessarily greater than the inter-annual variability induced by the lateral boundary conditions in the model and large-scale warming in the region. The numerical simulations in the study are cross-validated against observations to address the realism of the regional climate model. The findings of this sensitivity study are useful for water resource management in semi-arid regions. Such high spatio-temporal resolution gridded data can be used as input for hydrological models for regions such as Arizona with complex terrain and sparse observations. Results from simulations of the Aral Sea region are expected to contribute to ecosystem management for Central Asia.
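The cross-validation mentioned here boils down to comparing simulated and observed series; a minimal sketch (variable names assumed, not the study's scripts) of checking whether a run captures the inter-annual rainfall variability and its mean bias:

```python
import numpy as np

def interannual_skill(sim_annual, obs_annual):
    """Correlation and mean bias of simulated vs. observed annual-mean rainfall series."""
    sim = np.asarray(sim_annual, dtype=float)
    obs = np.asarray(obs_annual, dtype=float)
    corr = np.corrcoef(sim, obs)[0, 1]   # does the run track year-to-year variability?
    bias = sim.mean() - obs.mean()       # systematic wet/dry bias
    return corr, bias
```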
Contributors: Sharma, Ashish (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Herrmann, Marcus (Committee member) / Phelan, Patrick E. (Committee member) / Vivoni, Enrique (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Assessments of the threats posed by volcanic eruptions rely in large part on the accurate prediction of volcanic plume motion over time. That predictive capacity is currently hindered by a limited understanding of volcanic plume dynamics. While eruption rate is considered a dominant control on volcanic plume dynamics, the effects of variable eruption rates on plume rise and evolution are not well understood. To address this aspect of plume dynamics, I conducted an experimental investigation wherein I quantified the relationship between laboratory jet development and highly variable discharge rates under conditions analogous to those which may prevail in unsteady, short-lived explosive eruptions. I created turbulent jets in the laboratory by releasing pressurized water into a tank of still water. I then measured the resultant jet growth over time using simple video images and particle image velocimetry (PIV). I investigated jet behavior over a range of jet Reynolds numbers which overlaps with estimates of Reynolds numbers for short-duration volcanic plumes. By analyzing the jet boundary and velocity field evolution, I discovered a direct relationship between changes in vent conditions and jet evolution. Jet behavior evolved through a sequence of three stages - jet-like, transitional, and puff-like - that correlate with three main injection phases - acceleration, deceleration, and off. While the source was off, jets were characterized by relatively constant internal velocity distributions and flow propagation followed that of a classical puff. However, while the source was on, the flow properties - both in the flows themselves and in the induced ambient flow - changed abruptly with changes at the source. On the basis of my findings for unsteady laboratory jets, I conclude that variable eruption rates with characteristic time scales close to the eruption duration have first-order control over volcanic plume evolution. Prior to my study, the significance of this variation was largely uncharacterized, as the volcanology community predominantly uses steady eruption models for interpretation and prediction of activity. My results suggest that unsteady models are necessary to accurately interpret behavior and assess threats from unsteady, short-lived eruptions.
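For reference (an illustrative calculation, not the experimental procedure), the jet Reynolds number follows from the discharge rate and vent diameter; the default fluid properties below assume water near room temperature:

```python
import math

def jet_reynolds_number(Q, D, rho=998.0, mu=1.0e-3):
    """Jet Reynolds number from volumetric discharge rate Q (m^3/s) and vent diameter D (m)."""
    U = Q / (math.pi * D ** 2 / 4.0)   # mean exit velocity from the vent
    return rho * U * D / mu
```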
Contributors: Chojnicki, Kirsten (Author) / Clarke, Amanda (Thesis advisor) / Williams, Stanley (Committee member) / Adrian, Ronald (Committee member) / Phillips, Jeremy (Committee member) / Fernando, Harindra (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Numerical climate models have provided scientists, policy makers, and the general public with crucial information for climate projections since the mid-20th century. An international effort to compare and validate the simulations of all major climate models is organized by the Coupled Model Intercomparison Project (CMIP), which has gone through several phases since 1995, with CMIP5 being the state of the art. In parallel, an organized effort to consolidate all observational data from the past century has culminated in the creation of several "reanalysis" datasets that are considered the closest representation of the true observations. This study compared the climate variability and trends in climate model simulations and observations on timescales ranging from interannual to centennial. The analysis focused on the dynamic climate quantities of zonal-mean zonal wind and global atmospheric angular momentum (AAM), and incorporated multiple datasets from reanalysis and the most recent CMIP3 and CMIP5 archives. For the observations, the validation of AAM against the length-of-day (LOD) and the intercomparison of AAM revealed good agreement among reanalyses on the interannual and decadal-to-interdecadal timescales, respectively, but the most significant discrepancies among them are in the long-term mean and long-term trend. For the simulations, the CMIP5 models produced a significantly smaller bias and a narrower ensemble spread of the climatology and trend of AAM in the 20th century compared to CMIP3, while CMIP3 and CMIP5 simulations consistently produced a positive trend for the 20th and 21st centuries. Both CMIP3 and CMIP5 models produced a wide range of magnitudes of the decadal and interdecadal variability of the wind component of AAM (MR) compared to observation. The ensemble means of CMIP3 and CMIP5 are not statistically distinguishable for either the 20th- or 21st-century runs. In-house atmospheric general circulation model (AGCM) simulations were carried out, forced by sea surface temperatures (SST) taken from the CMIP5 simulations as lower boundary conditions. The zonal wind and MR in the CMIP5 simulations are well reproduced by the AGCM simulations, confirming SST as an important mediator in regulating the global atmospheric changes due to the GHG effect.
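The wind (relative) component of AAM referred to here as MR is conventionally MR = (2πa³/g) ∫∫ [u] cos²φ dφ dp, with [u] the zonal-mean zonal wind; a minimal sketch of evaluating it (array layout assumed, not the study's code):

```python
import numpy as np

A_EARTH = 6.371e6   # Earth radius (m)
G = 9.81            # gravitational acceleration (m/s^2)

def relative_aam(u_zonal_mean, lat_deg, p_pa):
    """Wind (relative) component of global atmospheric angular momentum, MR.

    u_zonal_mean : (n_lev, n_lat) zonal-mean zonal wind (m/s)
    lat_deg      : (n_lat,) latitudes in degrees, increasing south to north
    p_pa         : (n_lev,) pressure levels in Pa, increasing toward the surface
    """
    phi = np.deg2rad(lat_deg)
    f = u_zonal_mean * np.cos(phi) ** 2
    # trapezoidal integration: first over latitude, then over pressure
    lat_int = np.sum(0.5 * (f[:, 1:] + f[:, :-1]) * np.diff(phi), axis=1)
    return 2.0 * np.pi * A_EARTH ** 3 / G * np.sum(
        0.5 * (lat_int[1:] + lat_int[:-1]) * np.diff(p_pa))
```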
Contributors: Paek, Houk (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Wang, Zhihua (Committee member) / Anderson, James (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013