
Tuesday, October 11, 2016

Entity/Event-level Sentiment Detection and Inference

Summary

In this talk, Dr. Deng summarized her group’s research in the field of sentiment analysis, which aims to understand the opinions expressed in text. As she discussed, sentiment can generally be divided into three categories (positive, negative, and neutral), and the positive and negative categories are of the most interest. Much work in sentiment analysis and opinion mining has been done at the general document level as well as at finer-grained levels such as the sentence, phrase, and aspect levels. Deng and her colleagues contribute at the entity/event level.

Dr. Deng indicated that some implicit sentiments cannot be recognized by relying merely on sentiment lexicons. For example, consider the following sentence:

Sentence: It is great that the bill was defeated.

A positive sentiment is explicitly expressed via the word “great”, and the target of this sentiment is the fact that “the bill was defeated”. It is not difficult to detect that the writer or speaker holds a positive attitude toward this fact. It is more complicated, however, to identify the writer’s attitude toward the target “bill”: since the writer is pleased that the bill was defeated, the implicit sentiment toward the bill itself is negative. Dr. Deng stated that, compared to detecting explicit sentiment, recognizing such implicit sentiment requires inference.

To achieve this inference goal, she and her colleagues developed a set of inference rules (Table 1) and conducted extensive experiments to demonstrate the rules’ inference ability. In addition, they developed computational models that infer sentiments automatically; these models take into consideration not only information from sentiment analysis but also information from other Natural Language Processing tasks. Results show that their framework simultaneously beats the baselines by more than 10 points in F-measure on sentiment detection and by more than 7 points in accuracy on GoodFor/BadFor (gfbf) polarity disambiguation.
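
To make the flavor of such a rule concrete, here is a minimal sketch in Python of the kind of inference discussed above (my own illustration, not Dr. Deng’s actual rule set or code): if the writer is positive toward an event that is BadFor its theme, the writer can be inferred to be negative toward that theme, and vice versa. The function name and representation below are made up for illustration.

    def infer_sentiment_toward_theme(sentiment_toward_event, event_polarity):
        """Illustrative gfbf-style inference rule (hypothetical helper).
        sentiment_toward_event: 'positive' or 'negative' -- the explicit
            sentiment toward the whole event (e.g. from the word "great").
        event_polarity: 'goodfor' or 'badfor' -- the gfbf polarity of the
            event with respect to its theme (e.g. "defeated" is badfor
            its theme, "the bill").
        """
        flip = {'positive': 'negative', 'negative': 'positive'}
        if event_polarity == 'badfor':
            return flip[sentiment_toward_event]
        return sentiment_toward_event

    # "It is great that the bill was defeated."
    # Explicit: positive toward the event; "defeated" is badfor "the bill".
    print(infer_sentiment_toward_theme('positive', 'badfor'))  # -> negative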

Details about talk:
Title: Entity/Event-level Sentiment Detection and Inference
Speaker: Lingjia Deng

Sunday, October 9, 2016

Link prediction in dynamic networks: Statistical models and evaluation metrics

Summary:

Dr. Xu’s work focuses mainly on link prediction in a dynamic setting where networks are constantly evolving, so edges may be added and deleted over time. For example, in an academic community, two authors who collaborate on a paper this year might not continue their co-author relationship next year. In the real world, people meet new friends and form new links, while at the same time they may lose contact with old friends, so existing edges are removed. Such a link prediction problem is more complicated than the one on static networks, because researchers not only need to predict whether a new edge will appear but also whether an existing edge will disappear.

This interesting topic has been studied extensively in the prior literature across many disciplines. As Dr. Xu stated, most statistical models for such dynamic networks assume a hidden Markov-like structure: they can accurately replicate the structure of a network at a particular time, but they cannot accurately replicate how these structures persist or evolve over time, resulting in poor link prediction accuracy. In this talk, Dr. Xu therefore presented their model, the Stochastic Block Transition Model (SBTM), which is an extension of the stochastic block model (SBM). Unlike in the hidden Markov SBM, where edges are formed independently and identically with probabilities given by a block probability matrix, the SBTM allows edges to form according to two block transition matrices: one denotes the probability of new edges forming within blocks, and the other denotes the probability of existing edges re-occurring within blocks.

Figure 1 shows a comparison of estimation accuracies, measured by adjusted Rand indices, among three algorithms: the SBTM presented in this talk, the HM-SBM (Xu and Hero III, 2014), and a static SBM (SSBM). It reveals that the SSBM does not improve as more information is provided, that the HM-SBM has the poorest accuracy, and that the SBTM is more accurate than the two baselines.

Figure 1: Adjusted Rand indices with 95% confidence bands for the stochastic block transition model (SBTM), hidden Markov stochastic block model (HM-SBM), and static stochastic block model (SSBM) on 50 runs of the simulated networks experiment.
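
The two-matrix idea can be illustrated with a short simulation sketch (my own illustration, not the authors’ code): at each time step, a potential edge is sampled with a probability that depends on whether it existed at the previous step and on the blocks of its endpoints. The parameter values below are hypothetical.

    import numpy as np

    # Illustrative sketch of SBTM-style edge dynamics: whether an edge is
    # present at time t depends on whether it existed at time t-1, through
    # two block-level probability matrices.
    rng = np.random.default_rng(0)

    n_nodes = 6
    blocks = np.array([0, 0, 0, 1, 1, 1])   # block membership of each node

    # Hypothetical parameters: probability that a new edge forms, and
    # probability that an existing edge re-occurs at the next time step.
    pi_new = np.array([[0.05, 0.01],
                       [0.01, 0.05]])
    pi_reoccur = np.array([[0.80, 0.50],
                           [0.50, 0.80]])

    def step(adj_prev):
        """Sample the adjacency matrix at time t given the one at time t-1."""
        adj_next = np.zeros_like(adj_prev)
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                a, b = blocks[i], blocks[j]
                p = pi_reoccur[a, b] if adj_prev[i, j] else pi_new[a, b]
                adj_next[i, j] = adj_next[j, i] = int(rng.random() < p)
        return adj_next

    adj = np.zeros((n_nodes, n_nodes), dtype=int)
    for t in range(3):
        adj = step(adj)
        print(f"t={t}: {adj.sum() // 2} edges")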
In addition, Dr. Xu discussed the clear disagreement among current metrics (F-score, AUC, PRAUC, etc.) for dynamic link prediction accuracy. Considering the severe class imbalance in link prediction, they propose a novel and more appropriate metric called GMAUC: the geometric mean of two quantities, PRAUC and AUC, after a baseline correction.
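
Based on that description, the sketch below shows one way such a baseline-corrected geometric mean could be computed; the exact correction terms and the split between newly forming and previously observed links are my reading of the metric, not details quoted from the talk.

    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    def gmauc(y_new, s_new, y_prev, s_prev):
        """Sketch of a GMAUC-style metric: geometric mean of a baseline-
        corrected PRAUC (on links that did not exist before, where class
        imbalance is severe) and a baseline-corrected AUC (on previously
        observed links).  y_*: binary labels, s_*: predicted scores."""
        prauc = average_precision_score(y_new, s_new)   # approximates PRAUC
        auc = roc_auc_score(y_prev, s_prev)
        base_prauc = np.mean(y_new)        # PRAUC of a random predictor
        prauc_corr = (prauc - base_prauc) / (1.0 - base_prauc)
        auc_corr = 2.0 * (auc - 0.5)       # a random predictor has AUC = 0.5
        return np.sqrt(max(prauc_corr, 0.0) * max(auc_corr, 0.0))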


Details about talk:
Title: Link prediction in dynamic networks: Statistical models and evaluation metrics
Speaker: Kevin S. Xu, Assistant Professor, University of Toledo

URLs: https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxwaXR0YmlnZGF0YXxneDo3N2JkNDg5ZWIzMWVhYWU4