I am currently based at the Computer Science Department of University College London. My primary research focus is the analysis of user-generated content published on the Web (social media, search query logs, blogs and so on). I am also interested in interdisciplinary research tasks that bring together Computer Science, Healthcare, Statistics and Social Sciences.
Recent news snippets
- Paper that improves influenza prevalence modelling from search query logs published in Nature Scientific Reports [ blog post ]
- Paper that proposes a method for estimating the impact of a health intervention via user-generated web data published in Data Mining and Knowledge Discovery [ blog post ]
- $50,000 sponsorship by Google to advance research on influenza modelling based on user-generated resources
- Paper where we infer the occupational class of a user based on Twitter activity presented at ACL '15 [ blog post ]
- Invited talks at the universities of Cambridge and Warwick [ slides ]
Research highlights
Machine Learning; Natural Language Processing; User-Generated Content; Social Media; Big Data
Advances in the modelling of influenza from search query logs (Scientific Reports, 2015)
Previous attempts to model influenza from the time series of search query frequencies (such as Google Flu Trends) suffered from high error rates during recent flu seasons. In this paper, we revisit the modelling of influenza rates, proposing a novel nonlinear approach based on a composite Gaussian Process that operates on top of search query clusters. We also extend the proposed query-only models with an autoregressive component that incorporates the most recent knowledge about flu prevalence in the population. Our rigorous experimental analysis spans 10 US flu seasons, reveals the pitfalls of the previous approach and provides a qualitative perspective on this research task. See my blog post for a brief summary.
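The composite-kernel idea can be illustrated with a minimal numpy sketch on hypothetical toy data (the actual model also learns kernel hyperparameters and includes the autoregressive component, neither of which is shown here):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over a subset of query features."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def composite_kernel(X1, X2, clusters):
    """Additive composite: one RBF kernel per search query cluster."""
    return sum(rbf(X1[:, c], X2[:, c]) for c in clusters)

def gp_predict(X_train, y_train, X_test, clusters, noise=0.1):
    """GP regression posterior mean under the composite kernel."""
    K = composite_kernel(X_train, X_train, clusters) + noise * np.eye(len(X_train))
    Ks = composite_kernel(X_test, X_train, clusters)
    return Ks @ np.linalg.solve(K, y_train)

# toy data: 8 query frequencies grouped into two clusters of 4
rng = np.random.default_rng(0)
X = rng.random((50, 8))
y = np.sin(X[:, :4].sum(axis=1)) + 0.5 * X[:, 4:].sum(axis=1)
clusters = [[0, 1, 2, 3], [4, 5, 6, 7]]
pred = gp_predict(X, y, X, clusters)
```

Summing one kernel per cluster lets each group of related queries contribute its own smooth nonlinear effect to the flu-rate estimate.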
Modelling voting intention from Twitter content - Bilinear text regression (ACL '13)
Instead of just modelling word frequencies (or, more generally, word characteristics) by learning a weight vector w, we also learn a weight for each user, forming a vector u. Thus, a linear regression model becomes bilinear. We applied this idea to predicting voting intention polls from tweets in two countries (Austria and the UK), but it is also applicable to various other NLP tasks, such as the extraction of socioeconomic patterns from the news. You may download a beta version of Bilinear Elastic Net (BEN) for MATLAB; relevant slides are available.
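A sketch of the bilinear idea on toy data, where each instance t is a (users × words) matrix X_t and the prediction is uᵀ X_t w (note: the released BEN tool uses elastic-net regularisation, whereas this illustration uses plain ridge steps):

```python
import numpy as np

def bilinear_fit(Xs, y, n_iter=50, lam=1e-2):
    """Alternating ridge steps for y_t ≈ uᵀ X_t w,
    where X_t is a (users × words) frequency matrix."""
    n_users, n_words = Xs[0].shape
    u = np.ones(n_users) / n_users
    w = np.zeros(n_words)
    for _ in range(n_iter):
        # fix u: each instance collapses to the word features uᵀ X_t
        A = np.stack([u @ X for X in Xs])
        w = np.linalg.solve(A.T @ A + lam * np.eye(n_words), A.T @ y)
        # fix w: each instance collapses to the user features X_t w
        B = np.stack([X @ w for X in Xs])
        u = np.linalg.solve(B.T @ B + lam * np.eye(n_users), B.T @ y)
    return u, w

# toy data generated from a true bilinear model
rng = np.random.default_rng(1)
u_true, w_true = rng.random(5), rng.random(12)
Xs = [rng.random((5, 12)) for _ in range(80)]
y = np.array([u_true @ X @ w_true for X in Xs])
u, w = bilinear_fit(Xs, y)
pred = np.array([u @ X @ w for X in Xs])
```

Fixing one weight vector reduces the problem to an ordinary linear regression in the other, which is why alternating between the two works.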
Modelling user impact on Social Media with Gaussian Processes (EACL '14)
What are the most important factors for determining user impact on Social Media platforms? Can we identify user actions that have a significant effect on their impact? In this work, we propose a set of nonlinear models based on Gaussian Processes for inferring user impact on Twitter. Our modelling is based on actions under the direct control of a user, including textual features such as word or topic (word-cluster) frequencies. Given the strong inference performance, we then dig further into our models and qualitatively analyse their properties from a variety of angles in an effort to discover the specific user behaviours that are decisive impact-wise. A brief summary of this work is given in this blog post.
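One way to probe which user actions matter, sketched here on toy data with a hypothetical sensitivity measure (the paper's actual qualitative analysis differs), is to fit a GP and observe how much predictions move when each feature is nudged:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_mean(X, y, Xq, noise=0.1):
    """GP regression posterior mean with an RBF kernel."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

def feature_sensitivity(X, y, eps=0.1):
    """Mean |change| in predicted impact when nudging one feature:
    a crude proxy for how decisive each user action is."""
    base = gp_mean(X, y, X)
    out = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        out.append(np.abs(gp_mean(X, y, Xp) - base).mean())
    return np.array(out)

# toy data: 4 user-controlled actions; only the first two drive impact
rng = np.random.default_rng(3)
X = rng.random((80, 4))
y = np.sin(3 * X[:, 0]) + 2 * X[:, 1]
sens = feature_sensitivity(X, y)
```

Features that the fitted function genuinely depends on show larger sensitivities than the irrelevant ones, which is the kind of signal a qualitative analysis can build on.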
Affective patterns in books
What happens if we quantify affective expression in millions of books? We can probably identify periods with dominant emotions, extract temporal emotion patterns across the century and come up with interesting scenarios that may explain them (PLOS ONE, 2013). Additionally, we could explain those patterns by looking at their reflection in real-world tendencies, such as indices of the main driving factor of the system we live in, the economy (PLOS ONE, 2014).
An effort to assess the statistical robustness of the above findings, together with comparative figures across different emotion detection tools, is presented in this paper (Big Data '13).
Nowcasting events from the Social Web (ACM TIST, 2012)
Can we exploit text generated by Social Media (e.g. Twitter) users to quantify the magnitude of events, such as an infectious disease outbreak (e.g. flu) or even rainfall, by applying Machine Learning methods?
Tracking flu rates from Twitter content (CIP '10 & ECML/PKDD '10)
This is the first work showing that Social Media content can be used to track the level of an infectious disease, such as influenza-like illness (ILI), in the population. To achieve that, we collected geolocated tweets from 54 UK cities and used them in a regularised regression model, which was trained and evaluated against ILI rates from the Health Protection Agency. Flu Detector was a demonstration (now discontinued) that used the content of Twitter to nowcast the level of flu-like illness in several UK regions on a daily basis. We recently came up with an improved visualisation of predicted flu rates from Twitter data, but it is still in its alpha version.
Press release: Computer Science Department, University of Bristol
Media coverage: MIT Technology Review, New Scientist
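The nowcasting pipeline can be sketched on toy data as a regularised regression from daily text-marker frequencies to disease rates (here plain ridge with invented marker data; the original work used a different regulariser and actual tweet n-gram frequencies):

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression with an unpenalised intercept."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    I = np.eye(Xb.shape[1])
    I[-1, -1] = 0.0                               # do not shrink the intercept
    return np.linalg.solve(Xb.T @ Xb + lam * I, Xb.T @ y)

def predict(theta, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ theta

# toy data: 120 days × 30 hypothetical flu-marker frequencies
rng = np.random.default_rng(2)
X = rng.random((120, 30))
beta = np.zeros(30)
beta[:5] = [2.0, 1.5, 1.0, 0.8, 0.5]              # only a few markers are informative
y = X @ beta + 0.3 + 0.05 * rng.standard_normal(120)
theta = fit_ridge(X[:100], y[:100], lam=0.5)      # train on the first 100 days
pred = predict(theta, X[100:])                    # nowcast the remaining 20
```

Regularisation is what keeps such models usable when the candidate marker vocabulary is far larger than the number of days of ground-truth rates.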
Extracting collective mood patterns from Twitter on a daily (WWW '12) or an hourly (arXiv, 2013) basis
Mood of the Nation was a demonstration (now discontinued) that used more than half a million geolocated tweets per day to detect mood and affect trends in the UK population, focusing on four categories of affect: joy, sadness, anger and fear. A simple assessment of those patterns reveals quite interesting results. Check this out for example!