Matching Items (104)
Filtering by
- Genre: Doctoral Dissertation
- Creators: Liu, Huan

Description
Glycosaminoglycans (GAGs) are a class of complex biomolecules composed of linear, sulfated polysaccharides whose presence on cell surfaces and in the extracellular matrix involves them in many physiological phenomena as well as in interactions with pathogenic microbes. Decorin binding protein A (DBPA), a Borrelia surface lipoprotein involved in the infectivity of Lyme disease, is responsible for binding GAGs found on decorin, a small proteoglycan present in the extracellular matrix. Different DBPA strains have notable sequence heterogeneity that results in varying levels of GAG-binding affinity. In this dissertation, the structures and GAG-binding mechanisms of three strains of DBPA (B31 and N40 DBPAs from B. burgdorferi and PBr DBPA from B. garinii) are studied to determine why each strain has a different affinity for GAGs. These three strains have similar topologies consisting of five α-helices held together by a hydrophobic core as well as two long flexible segments: a linker between helices one and two and a C-terminal tail. This structural arrangement facilitates the formation of a basic pocket below the flexible linker, which is the primary GAG-binding epitope. However, this GAG-binding site can be occluded by the flexible linker, which makes the linker a negative regulator of GAG binding. ITC and NMR titrations provide KD values showing that PBr DBPA binds GAGs with higher affinity than B31 and N40 DBPAs, while N40 binds with the lowest affinity of the three. Work in this thesis demonstrates that much of the discrepancy in the GAG affinities of the three DBPAs can be explained by the amino acid composition and conformation of the linker. Mutagenesis studies show that B31 DBPA overcomes the pocket obstruction with the BXBB motif in its linker, while PBr DBPA has a retracted linker that exposes the basic pocket as well as a secondary GAG-binding site.
N40 DBPA, however, does not have any evolutionary modifications to its structure to enhance GAG binding, which explains its lower affinity for GAGs. GMSA and ELISA assays, along with NMR PRE experiments, confirm that structural changes in the linker do affect GAG binding and that, as a result, the linker is responsible for regulating GAG affinity.
Contributors: Morgan, Ashli M (Author) / Wang, Xu (Thesis advisor) / Allen, James (Committee member) / Yarger, Jeffery (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
The purpose of the information source detection problem (also called rumor source detection) is to identify the source of information diffusion in networks based on available observations, such as the states of the nodes and the timestamps at which nodes adopted the information (i.e., became infected). The solution to the problem can be used to answer a wide range of important questions in epidemiology, computer network security, etc. This dissertation studies the fundamental theory of the information source detection problem and the design of efficient and robust algorithms for it.
For tree networks, the maximum a posteriori (MAP) estimator of the information source is derived under the independent cascades (IC) model with a complete snapshot, and a Short-Fat Tree (SFT) algorithm is proposed for general networks based on the MAP estimator. Furthermore, the following possibility and impossibility results are established on the Erdos-Renyi (ER) random graph: $(i)$ when the infection duration is $<\frac{2}{3}t_u,$ SFT identifies the source with probability one asymptotically, where $t_u=\left\lceil\frac{\log n}{\log \mu}\right\rceil+2$ and $\mu$ is the average node degree; $(ii)$ when the infection duration is $>t_u,$ the probability of identifying the source approaches zero asymptotically under any algorithm; and $(iii)$ when the infection duration $
In practice, other than the nodes' states, side information such as partial timestamps may also be available. Such information provides important insights into the diffusion process. To utilize the partial timestamps, the information source detection problem is formulated as a ranking problem on graphs, and two ranking algorithms, cost-based ranking (CR) and tree-based ranking (TR), are proposed. Extensive experimental evaluations on synthetic data from different diffusion models and on real-world data demonstrate the effectiveness and robustness of CR and TR compared with existing algorithms.
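As a minimal illustration of the detection threshold stated above, the following sketch evaluates t_u = ceil(log n / log mu) + 2 for an Erdos-Renyi graph (the graph parameters in the example are hypothetical, chosen only for illustration):

```python
import math

def detection_threshold(n, mu):
    """t_u = ceil(log n / log mu) + 2 for an Erdos-Renyi graph
    with n nodes and average node degree mu (mu > 1 assumed)."""
    return math.ceil(math.log(n) / math.log(mu)) + 2

# Hypothetical example: a million-node graph with average degree 20.
t_u = detection_threshold(n=1_000_000, mu=20)
print(t_u)  # 7 -- per result (i), SFT succeeds asymptotically when the
            # infection ran for fewer than 2/3 * t_u time steps
```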
Contributors: Zhu, Kai (Author) / Ying, Lei (Thesis advisor) / Lai, Ying-Cheng (Committee member) / Liu, Huan (Committee member) / Shakarian, Paulo (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often we have very few or no labeled data from the test (target) distribution, but we may have plenty of labeled data from one or multiple related sources with different distributions. Due to its capability of migrating knowledge from related domains, transfer learning has been shown to be effective for cross-domain learning problems. In this dissertation, I carry out research along this direction with a particular focus on designing efficient and effective algorithms for bioimaging and bilingual applications. Specifically, I propose deep transfer learning algorithms that combine transfer learning and deep learning to improve image annotation performance. First, I propose to generate deep features for the Drosophila embryo images via pretrained deep models and to build linear classifiers on top of the deep features. Second, I propose to fine-tune the pretrained model with a small amount of labeled images. The time complexity and performance of the deep transfer learning methodologies are investigated. Promising results demonstrate the knowledge transfer ability of the proposed deep transfer algorithms. Moreover, I propose a novel Robust Principal Component Analysis (RPCA) approach to process the noisy images in advance. In addition, I also present a two-stage re-weighting framework for general domain adaptation problems. The distribution of the source domain is mapped towards the target domain in the first stage, and an adaptive learning model is proposed in the second stage to incorporate label information from the target domain if it is available. The proposed model is then applied to tackle the cross-lingual spam detection problem on LinkedIn's website. Our experimental results on real data demonstrate the efficiency and effectiveness of the proposed algorithms.
Contributors: Sun, Qian (Author) / Ye, Jieping (Committee member) / Xue, Guoliang (Committee member) / Liu, Huan (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
One of the most remarkable outcomes of the evolution of the web into Web 2.0 has been the propelling of blogging into a widely adopted and globally accepted phenomenon. While the unprecedented growth of the Blogosphere has added diversity and enriched the media, it has also added complexity. To cope with the relentless expansion, many enthusiastic bloggers have embarked on voluntarily writing, tagging, labeling, and cataloguing their posts in hopes of reaching the widest possible audience. Unbeknownst to them, this reaching-for-others process triggers the generation of a new kind of collective wisdom, a result of shared collaboration and the exchange of ideas, purposes, and objectives through the formation of associations, links, and relations. Mastering an understanding of the Blogosphere can greatly help meet the needs of the ever-growing number of these users, as well as of producers, service providers, and advertisers, by facilitating the categorization and navigation of this vast environment. This work explores a novel method to leverage the collective wisdom from the infused label space for blog search and discovery. The work demonstrates that the wisdom space can provide a unique and desirable framework with which to discover the highly sought-after background information that can aid in the building of classifiers. This work incorporates this insight into the construction of a better clustering of blogs, which boosts the performance of classifiers in identifying more relevant labels for blogs, and offers a mechanism for replacing spurious labels and mislabels in a multi-labeled space.
Contributors: Galan, Magdiel F (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
With the rise of social media, user-generated content has become available at an unprecedented scale. On Twitter, 1 billion tweets are posted every 5 days, and on Facebook, 20 million links are shared every 20 minutes. These massive collections of user-generated content have introduced big data on human behavior.
This big data has brought about countless opportunities for analyzing human behavior at scale. However, is this data enough? Unfortunately, the data available at the individual-level is limited for most users. This limited individual-level data is often referred to as thin data. Hence, researchers face a big-data paradox, where this big-data is a large collection of mostly limited individual-level information. Researchers are often constrained to derive meaningful insights regarding online user behavior with this limited information. Simply put, they have to make thin data thick.
In this dissertation, how human behavior's thin data can be made thick is investigated. The chief objective of this dissertation is to demonstrate how traces of human behavior can be efficiently gleaned from the often limited individual-level information, hence introducing an all-inclusive user behavior analysis methodology that considers social media users with different levels of information availability. To that end, the absolute minimum information, in terms of both link and content data, that is available for any social media user is determined. Utilizing only minimum information in different applications on social media, such as prediction or recommendation tasks, allows for solutions that are (1) generalizable to all social media users and (2) easy to implement. However, are applications that employ only minimum information as effective as, or comparable to, applications that use more information?
In this dissertation, it is shown that common research challenges such as detecting malicious users or friend recommendation (i.e., link prediction) can be effectively performed using only minimum information. More importantly, it is demonstrated that unique user identification can be achieved using minimum information. Theoretical boundaries of unique user identification are obtained by introducing social signatures. Social signatures allow for user identification in any large-scale network on social media. The results on single-site user identification are generalized to multiple sites and it is shown how the same user can be uniquely identified across multiple sites using only minimum link or content information.
The findings in this dissertation allow finding the same user across multiple sites, which in turn has multiple implications. In particular, by identifying the same users across sites, (1) patterns that users exhibit across sites are identified, (2) how user behavior varies across sites is determined, and (3) activities that are observed only across sites are identified and studied.
Contributors: Zafarani, Reza, 1983- (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Xue, Guoliang (Committee member) / Leskovec, Jure (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Traditionally, visualization is one of the most important and commonly used methods of generating insight into large-scale data. Particularly for spatiotemporal data, the translation of such data into a visual form allows users to quickly see patterns, explore summaries, and relate domain knowledge about underlying geographical phenomena that would not be apparent in tabular form. However, several critical challenges arise when visualizing and exploring these large spatiotemporal datasets. While the underlying geographical component of the data lends itself well to univariate visualization in the form of traditional cartographic representations (e.g., choropleth, isopleth, dasymetric maps), as the data becomes multivariate, cartographic representations become more complex. To simplify the visual representations, analytical methods such as clustering and feature extraction are often applied as part of the classification phase. The automatic classification can then be rendered onto a map; however, one common issue in data classification is that items near a classification boundary are often mislabeled.
This thesis explores methods to augment the automated spatial classification by utilizing interactive machine learning as part of the cluster creation step. First, this thesis explores the design space for spatiotemporal analysis through the development of a comprehensive data wrangling and exploratory data analysis platform. Second, this system is augmented with a novel method for evaluating the visual impact of edge cases for multivariate geographic projections. Finally, system features and functionality are demonstrated through a series of case studies, with key features including similarity analysis, multivariate clustering, and novel visual support for cluster comparison.
Contributors: Zhang, Yifan (Author) / Maciejewski, Ross (Thesis advisor) / Mack, Elizabeth (Committee member) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
Deoxyribonucleic acid (DNA) has emerged as an attractive building material for creating complex architectures at the nanometer scale that simultaneously affords versatility and modularity. Particularly, the programmability of DNA enables the assembly of basic building units into increasingly complex, arbitrary shapes or patterns. With the expanding complexity and functionality of DNA toolboxes, a quantitative understanding of DNA self-assembly in terms of thermodynamics and kinetics, will provide researchers with more subtle design guidelines that facilitate more precise spatial and temporal control. This dissertation focuses on studying the physicochemical properties of DNA tile-based self-assembly process by recapitulating representative scenarios and intermediate states with unique assembly pathways.
First, DNA double-helical tiles of increasing flexibility were designed to investigate dimerization kinetics. The higher dimerization rates of more rigid tiles result from the opposing effects of higher activation energies and higher pre-exponential factors in the Arrhenius equation, with the pre-exponential factor dominating. Next, the thermodynamics and kinetics of single-tile attachment to preformed “multitile” arrays were investigated to test the fundamental assumptions of tile assembly models. The results offer experimental evidence that double crossover tile attachment is determined by the electrostatic environment and the steric hindrance at the binding site. Finally, the assembly of double crossover tiles within a rhombic DNA origami frame was employed as a model system to investigate the competition between unseeded, facet, and seeded nucleation. The results revealed that the preferred nucleation type can be tuned by controlling the rate-limiting nucleation step.
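The interplay of activation energy and pre-exponential factor described above can be sketched numerically with the Arrhenius equation, k = A * exp(-Ea / (R*T)). The rate parameters below are hypothetical, chosen only to show how a larger pre-exponential factor can outweigh a higher activation energy:

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # temperature, K

def arrhenius_rate(A, Ea):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical parameters: the rigid tile has a HIGHER activation
# energy but a much larger pre-exponential factor.
k_flexible = arrhenius_rate(A=1e6, Ea=40_000)   # Ea = 40 kJ/mol
k_rigid    = arrhenius_rate(A=1e9, Ea=50_000)   # Ea = 50 kJ/mol

# The pre-exponential factor dominates: the rigid tile dimerizes faster.
print(k_rigid > k_flexible)  # True
```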
The work presented in this dissertation will be helpful for refining the DNA tile assembly model for future designs and simulations. Moreover, the work presented here could also be helpful in understanding how individual molecules interact and how more complex cooperative binding occurs in chemistry and biology. Future work will focus on the characterization of tile assembly at the single-molecule level and the development of error-free tile assembly systems.
Contributors: Jiang, Shuoxing (Author) / Yan, Hao (Thesis advisor) / Liu, Yan (Thesis advisor) / Hayes, Mark (Committee member) / Wang, Xu (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
The rapid growth of social media in recent years provides a large amount of user-generated visual objects, e.g., images and videos. Advanced semantic understanding approaches on such visual objects are desired to better serve applications such as human-machine interaction, image retrieval, etc. Semantic visual attributes have been proposed and utilized in multiple visual computing tasks to bridge the so-called "semantic gap" between extractable low-level feature representations and high-level semantic understanding of the visual objects.
Despite years of research, there are still unsolved problems in semantic attribute learning. First, real-world applications usually involve hundreds of attributes, making it costly to acquire a sufficient amount of labeled data for model learning. Second, existing attribute learning work for visual objects focuses primarily on images, with semantic analysis on videos left largely unexplored.
In this dissertation I conduct innovative research and propose novel approaches to tackling the aforementioned problems. In particular, I propose robust and accurate learning frameworks on both attribute ranking and prediction by exploring the correlation among multiple attributes and utilizing various types of label information. Furthermore, I propose a video-based skill coaching framework by extending attribute learning to the video domain for robust motion skill analysis. Experiments on various types of applications and datasets and comparisons with multiple state-of-the-art baseline approaches confirm that my proposed approaches can achieve significant performance improvements for the general attribute learning problem.
Contributors: Chen, Lin (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2016

Description
With the rise of social media, hundreds of millions of people all over the globe spend countless hours on social media to connect, interact, share, and create user-generated data. This rich environment provides tremendous opportunities for many different players to easily and effectively reach out to people, interact with them, influence them, or get their opinions. Two pieces of information attract the most attention on social media sites: user preferences and interactions. Businesses and organizations use this information to better understand and therefore provide customized services to social media users. This data can be used for different purposes, such as targeted advertisement, product recommendation, or even opinion mining. Social media sites use this information to better serve their users.
Despite the importance of personal information, in many cases people do not reveal this information to the public. Predicting the hidden or missing information is a common response to this challenge. In this thesis, we address the problem of predicting user attributes and future or missing links using an egocentric approach. The current research proposes novel concepts and approaches to better understand social media users in two respects: a) their attributes, preferences, and interests, and b) their future or missing connections and interactions. More specifically, the contributions of this dissertation are (1) proposing a framework to study social media users through their attributes and link information, (2) proposing a scalable algorithm to predict user preferences, and (3) proposing a novel approach to predict attributes and links with limited information. The proposed algorithms use an egocentric approach to improve on state-of-the-art algorithms in two directions: first, by improving prediction accuracy, and second, by increasing the scalability of the algorithms.
Contributors: Abbasi, Mohammad Ali, 1975- (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Agarwal, Nitin (Committee member) / Arizona State University (Publisher)
Created: 2014

Description
Social media platforms such as Twitter, Facebook, and blogs have emerged as valuable - in fact, the de facto - virtual town halls for people to discover, report, share and communicate with others about various types of events. These events range from widely-known events such as the U.S. Presidential debate to smaller scale, local events such as a local Halloween block party. During these events, we often witness a large amount of commentary contributed by crowds on social media. This burst of social media responses surges with the "second-screen" behavior and greatly enriches the user experience when interacting with the event and people's awareness of an event. Monitoring and analyzing this rich and continuous flow of user-generated content can yield unprecedentedly valuable information about the event, since these responses usually offer far richer and more powerful views of the event than mainstream news could achieve. Despite these benefits, social media also tends to be noisy, chaotic, and overwhelming, posing challenges to users in seeking and distilling high-quality content from that noise.
In this dissertation, I explore ways to leverage social media as a source of information and to analyze events based on their social media responses collectively. I develop, implement, and evaluate EventRadar, an event analysis toolbox that is able to identify, enrich, and characterize events using massive amounts of social media responses. EventRadar contains three automated, scalable tools to handle three core event analysis tasks: Event Characterization, Event Recognition, and Event Enrichment. More specifically, I develop ET-LDA, a Bayesian model, and SocSent, a matrix factorization framework, for handling the Event Characterization task, i.e., characterizing an event in terms of its topics and its audience's response behavior (via ET-LDA) and the sentiments regarding its topics (via SocSent). I also develop DeMa, an unsupervised event detection algorithm, for handling the Event Recognition task, i.e., detecting trending events from a stream of noisy social media posts. Last, I develop CrowdX, a spatial crowdsourcing system, for handling the Event Enrichment task, i.e., gathering additional first-hand information (e.g., photos) from the field to enrich the given event's context.
Enabled by EventRadar, it becomes more feasible to uncover patterns that have not been explored previously and to re-validate existing social theories with new evidence. As a result, I am able to gain deep insights into how people respond to the events that they are engaged in. The results reveal several key insights into people's varied response behavior over an event's timeline, such as the fact that the topical context of people's tweets does not always correlate with the timeline of the event. In addition, I also explore the factors that affect a person's engagement with real-world events on Twitter and find that people engage in an event because they are interested in the topics pertaining to that event; and while engaging, their engagement is largely affected by their friends' behavior.
Contributors: Hu, Yuheng (Author) / Kambhampati, Subbarao (Thesis advisor) / Horvitz, Eric (Committee member) / Krumm, John (Committee member) / Liu, Huan (Committee member) / Sundaram, Hari (Committee member) / Arizona State University (Publisher)
Created: 2014