Publications

Welcome to the MIRROR Project’s Publications!

 

Unravelling Social Perceptions & Behaviors towards Migrants on Twitter

We draw insights from the social psychology literature to identify two facets of Twitter deliberations about migrants: perceptions about migrants and behaviors towards migrants. Our theoretical anchoring helped us identify two prevailing perceptions (sympathy and antipathy) and two dominant behaviors (solidarity and animosity) of social media users towards migrants. We employed both unsupervised and supervised approaches to identify these perceptions and behaviors. In the domain of applied NLP, our study offers a nuanced understanding of migrant-related Twitter deliberations. Our proposed transformer-based model, BERT + CNN, reports an F1-score of 0.76 and outperforms the other models. Additionally, we argue that tweets conveying antipathy or animosity can be broadly considered hate speech towards migrants, but the two are not the same. Thus, our approach refines the binary hate speech detection task by highlighting the granular differences between the perceptual and behavioral aspects of hate speech.
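
To make the modeling concrete, here is a minimal, illustrative sketch of how a BERT + CNN tweet classifier of this kind could be assembled in PyTorch with the Hugging Face transformers library; the model name, the four-class label set, and all hyper-parameters are placeholders rather than the configuration used in the paper.

```python
# Illustrative sketch of a BERT + CNN tweet classifier (not the paper's exact model).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCnnClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=4,
                 num_filters=128, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # 1-D convolutions over the sequence of BERT token embeddings.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d
        embeddings = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = embeddings.transpose(1, 2)
        # Max-pool each feature map over the sequence dimension and concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCnnClassifier()
batch = tokenizer(["Refugees welcome in our city"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, num_labels)
```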

Summarizing Videos Using Concentrated Attention and Considering the Uniqueness and Diversity of the Video Frames

In this work, we describe a new method for unsupervised video summarization. Existing unsupervised approaches suffer from the unstable training of generator-discriminator architectures, from the use of RNNs for modeling long-range frame dependencies, and from the limited ability to parallelize the training of RNN-based architectures. To overcome these limitations, the developed method relies solely on a self-attention mechanism to estimate the importance of video frames. Instead of simply modeling frame dependencies with global attention, our method integrates a concentrated attention mechanism that focuses on non-overlapping blocks along the main diagonal of the attention matrix, and enriches the available information by extracting and exploiting knowledge about the uniqueness and diversity of the associated video frames. In this way, our method makes better estimates about the significance of different parts of the video and drastically reduces the number of learnable parameters. Experimental evaluations on two benchmark datasets (SumMe and TVSum) show the competitiveness of the proposed method against other state-of-the-art unsupervised summarization approaches and demonstrate its ability to produce video summaries that are very close to human preferences. An ablation study focusing on the introduced components, namely the use of concentrated attention in combination with attention-based estimates of frame uniqueness and diversity, shows their relative contributions to the overall summarization performance.
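
As an illustration of the core idea, the following sketch shows one way a "concentrated", block-diagonal attention pattern over the frame-to-frame attention matrix could be implemented in PyTorch; the block size, feature dimensions, and scoring function are assumptions for illustration, not the authors' implementation.

```python
# Sketch of "concentrated" attention: each frame attends only to frames inside
# its own non-overlapping block along the main diagonal (illustrative only).
import torch

def block_diagonal_mask(num_frames: int, block_size: int) -> torch.Tensor:
    """Boolean mask: True where attention is allowed."""
    idx = torch.arange(num_frames)
    blocks = idx // block_size                           # block id of each frame
    return blocks.unsqueeze(0) == blocks.unsqueeze(1)    # (num_frames, num_frames)

def concentrated_attention(frame_features: torch.Tensor, block_size: int = 30):
    """frame_features: (num_frames, dim). Returns attention-weighted features and weights."""
    scores = frame_features @ frame_features.T / frame_features.shape[1] ** 0.5
    mask = block_diagonal_mask(frame_features.shape[0], block_size)
    scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)              # attention concentrated per block
    return weights @ frame_features, weights

feats = torch.randn(120, 1024)                           # e.g. 120 frames, 1024-d features
attended, attn = concentrated_attention(feats)
```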

The “should” and “could” Dilemmas of Using Open Source Information for Border Security

When it comes to the security field, there seems to be an ongoing tension between what is possible, e.g. the available technology for intelligence collection and analysis – in this case the "could" – and what is necessary, proportionate and legal in a certain field – summarized as the "should". This paper explores this struggle as reflected in the debate surrounding the use of open source information to better understand migrants' attitudes and identify perception-related threats as part of border security risk assessment procedures.

Detecting Computer-generated Disinformation

Modern neural language models can be used by malicious actors to automatically produce textual content that looks as if it had been written by genuine human users. Due to progress in the controllability of computer-generated text, there is a risk that state-sponsored actors may start using such methods to conduct large-scale information operations. Various detection algorithms have been suggested in the research literature to identify texts produced by language-model-based generators, but these are often evaluated mainly on test data from the same distribution as their training data. We evaluate promising Transformer-based detection algorithms in a large variety of experiments involving both in-distribution and out-of-distribution test data, as well as evaluation on more realistic in-the-wild data. We show that the generalizability of the detectors can be questioned, especially when they are applied to short social media posts. Moreover, the best-performing (RoBERTa-based) detector is shown to be non-robust even to basic adversarial attacks, illustrating how easy it is for malicious actors to evade the current state-of-the-art detection algorithms.

My EU = Your EU? Differences in the Perception of European Issues Across Geographic Regions

Our perception of the situation in a country or a region is strongly influenced by the reflection of this situation in mass and social media channels. This effect is even more pronounced for geographically and culturally distant regions, for which no firsthand experience is available. To avoid information overload, news outlets typically filter the available news from foreign countries based on the expected interest of the target audiences. Such filtering imposes an inherent bias in the reporting and can create a distorted perception of a region among the consumers of news of other regions. This might lead to misunderstandings between countries and unsubstantiated political and individual decisions (e.g., in the context of migration). In this article, we systematically analyze the bias created in news reports. We consider Europe, or more precisely the European Union (EU), as our zone of concern, and examine its image in the media (news outlets) of other regions: non-EU Europe, Africa, Asia, the Middle East, America, and Oceania. An analysis of news published in those regions during 2018 (January–December) reveals marked differences in editorial policies and in the presented narrative when dealing with EU-related news. We observe a significant variation in the sentiment polarity of the reported EU-related stories between European and other regional news outlets. We further analyze the polarity variation among different subregions of large geographical areas, such as Africa, Asia, and America, and observe a contrasting difference in their editorial policies. This trend also holds for news related to different topics, such as politics, business, economy, health, and international relations.
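
The kind of region-wise polarity comparison described above can be illustrated with a small, hypothetical example; here VADER from NLTK stands in for the article's actual sentiment analysis pipeline, and the articles and region labels are made up.

```python
# Illustrative region-wise sentiment comparison for EU-related articles
# (VADER is a stand-in for the article's actual sentiment analysis pipeline).
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

articles = pd.DataFrame({
    "region": ["Africa", "Asia", "America", "EU", "Oceania"],
    "text": [
        "EU pledges new development aid package.",
        "Trade talks with the EU stall again.",
        "EU imposes tariffs on imported steel.",
        "European Parliament passes landmark climate law.",
        "EU visa rules tightened for seasonal workers.",
    ],
})

sia = SentimentIntensityAnalyzer()
articles["polarity"] = articles["text"].apply(lambda t: sia.polarity_scores(t)["compound"])

# Mean polarity of EU-related coverage per publishing region.
print(articles.groupby("region")["polarity"].mean().sort_values())
```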

Towards an Interpretable Approach to Classify and Summarize Crisis Events from Microblogs

Microblogging platforms like Twitter are heavily leveraged to report and exchange information about natural disasters. The real-time data on these sites is highly helpful for gaining situational awareness and planning aid efforts. However, disaster-related messages are immersed in a high volume of irrelevant information. The situational data of disaster events also varies greatly in terms of information types, ranging from general situational awareness (caution, infrastructure damage, casualties) to individual needs or content not related to the crisis. Efficient methods are thus required to handle data overload and prioritize various types of information. This paper proposes an interpretable classification-summarization framework that first classifies tweets into different disaster-related categories and then summarizes those tweets. Unlike existing work, our classification model can provide explanations or rationales for its decisions. In the summarization phase, we employ an Integer Linear Programming (ILP) based optimization technique, together with the rationales, to generate summaries of event categories. Extensive evaluation on large-scale disaster events shows that (a) our model can classify tweets into disaster-related categories with an 85% Macro F1 score and high interpretability, and (b) the summarizer achieves a 5-25% improvement in ROUGE-1 F-score over most state-of-the-art approaches.
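
A much-simplified sketch of an ILP-based extractive summarizer of this general kind is shown below, using PuLP; the objective (covering content words under a tweet budget) is illustrative and omits the paper's rationale-aware terms.

```python
# Minimal ILP sketch for extractive tweet summarization with PuLP
# (simplified objective; the paper's rationale-aware formulation is richer).
import pulp

tweets = {
    0: "bridge collapsed near downtown several injured",
    1: "volunteers needed at the central shelter",
    2: "power outage reported in the northern district",
}
content_words = {i: set(t.split()) for i, t in tweets.items()}
vocab = set().union(*content_words.values())
max_tweets = 2

prob = pulp.LpProblem("tweet_summary", pulp.LpMaximize)
x = {i: pulp.LpVariable(f"tweet_{i}", cat="Binary") for i in tweets}   # pick tweet i
y = {w: pulp.LpVariable(f"word_{w}", cat="Binary") for w in vocab}     # word w is covered

prob += pulp.lpSum(y.values())                       # maximize covered content words
prob += pulp.lpSum(x.values()) <= max_tweets         # length budget
for w in vocab:                                      # a word counts only if some chosen tweet has it
    prob += y[w] <= pulp.lpSum(x[i] for i in tweets if w in content_words[i])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
summary = [tweets[i] for i in tweets if x[i].value() == 1]
print(summary)
```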

Automatic and Semi-automatic Augmentation of Migration Related Semantic Concepts for Visual Media Retrieval

Understanding the factors related to migration, such as perceptions about routes and target countries, is critical for border agencies and for society as a whole. A systematic analysis of communication and news channels, such as social media, can improve our understanding of such factors. Videos and images play a critical role in social media, as they have a significant impact on perception manipulation and misinformation campaigns. However, more research is needed on the identification of semantically relevant visual content for specific queried concepts. Furthermore, an important problem to overcome in this area is the lack of annotated datasets that could be used to create and test accurate models. A recent study proposed a novel video representation and retrieval approach that effectively bridges the gap between a substantiated domain understanding – encapsulated into textual descriptions of Migration Related Semantic Concepts (MRSCs) – and the expression of such concepts in a video. In this work, we build on this approach and propose an improved procedure for the crucial step of textually augmenting the concept labels, which contributes towards the full automation of the pipeline. We assemble the first, to the best of our knowledge, dataset of migration-related videos and images, and we experimentally assess our method on it.
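
As a toy illustration of textual augmentation of concept labels, the snippet below expands a label with WordNet synonyms; this is a simple stand-in and not the improved augmentation procedure proposed in the paper.

```python
# Illustrative automatic augmentation of a concept label with WordNet synonyms
# (a simple stand-in, not the paper's actual augmentation procedure).
from nltk.corpus import wordnet  # requires nltk.download("wordnet")

def augment_label(label: str) -> set:
    """Return the label plus variants with each word replaced by its synonyms."""
    expanded = {label}
    for word in label.split():
        for synset in wordnet.synsets(word):
            for lemma in synset.lemma_names():
                expanded.add(label.replace(word, lemma.replace("_", " ")))
    return expanded

print(augment_label("border crossing"))
```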

Combining Adversarial and Reinforcement Learning for Video Thumbnail Selection

This paper presents a new method for unsupervised video thumbnail selection. The developed network architecture selects video thumbnails based on two criteria: the representativeness and the aesthetic quality of their visual content. Training relies on a combination of adversarial and reinforcement learning. The former is used to train a discriminator, whose goal is to distinguish the original video from a version reconstructed on the basis of a small set of candidate thumbnails. The discriminator's feedback is a measure of the representativeness of the selected thumbnails. This measure is combined with estimates of the aesthetic quality of the thumbnails (made using a state-of-the-art Fully Convolutional Network) to form a reward and train the thumbnail selector via reinforcement learning. Experiments on two datasets (OVP and YouTube) show the competitiveness of the proposed method against other state-of-the-art approaches. An ablation study with respect to the adopted thumbnail selection criteria documents the importance of considering aesthetics, and the contribution of this information when used in combination with measures of the representativeness of the visual content.

Exploiting Out-of-Domain Datasets and Visual Representations for Image Sentiment Classification

Visual sentiment analysis has recently gained attention as an important means of opinion mining, with many applications. It involves a high level of abstraction and subjectivity, which makes it a challenging task. The most recent works are based on deep convolutional neural networks and exploit transfer learning from other image classification tasks. However, transferring knowledge from tasks other than image classification has not been investigated in the literature. Motivated by this, in this work we examine the potential of transferring knowledge from several pre-trained networks, some of which are out-of-domain. We show that by simply concatenating these diverse feature vectors we construct a rich image representation that can be used to train a classifier with state-of-the-art performance on image sentiment analysis. We also evaluate a Mixture of Experts approach for learning from this combination of representations and highlight its performance advantages. We compare against the top-performing recently published methods on four popular benchmark datasets and report new state-of-the-art results on three of the four.
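
A minimal sketch of the feature-concatenation idea is shown below: features from two frozen pre-trained networks are concatenated and fed to a simple classifier. The choice of backbones, the preprocessing, and the classifier are illustrative assumptions, not the exact setup evaluated in the paper.

```python
# Sketch: concatenate features from several pre-trained networks and train a
# classifier on the combined representation (backbones chosen for illustration).
import torch
from torchvision import models, transforms
from sklearn.linear_model import LogisticRegression

# Two pre-trained backbones used as frozen feature extractors.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()                      # 2048-d features
densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
densenet.classifier = torch.nn.Identity()            # 1024-d features
for net in (resnet, densenet):
    net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(images):                                  # images: list of PIL images
    batch = torch.stack([preprocess(img) for img in images])
    return torch.cat([resnet(batch), densenet(batch)], dim=1).numpy()  # (N, 3072)

# Hypothetical usage with an existing labelled image set:
# X_train = extract(train_images); y_train = sentiment_labels
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```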

Hard-negatives or Non-negatives? A hard-negative selection strategy for cross-modal retrieval using the improved marginal ranking loss

Cross-modal learning has recently gained a lot of interest, and many applications of it, such as image-text retrieval, cross-modal video search, and video captioning, have been proposed. In this work, we deal with the cross-modal video retrieval problem. The state-of-the-art approaches are based on deep network architectures and rely on mining hard-negative samples during training to optimize the selection of the network's parameters. Starting from a state-of-the-art cross-modal architecture that uses the improved marginal ranking loss function, we propose a simple strategy for hard-negative mining that identifies which training samples are hard negatives and which, although presently treated as hard negatives, are likely not negative samples at all and should not be treated as such. Additionally, to take full advantage of network models trained with different design choices for hard-negative mining, we examine model combination strategies and design a hybrid one that effectively combines large numbers of trained models.
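
The following sketch illustrates the general idea of a max-violation ("improved") marginal ranking loss combined with a simple filter that excludes suspected non-negatives from mining; the filtering rule (a similarity-gap threshold) is an assumption for illustration, not the selection strategy proposed in the paper.

```python
# Sketch of hard-negative mining with a max-violation ranking loss, plus a simple
# filter that ignores suspected non-negatives (illustrative, not the paper's rule).
import torch

def ranking_loss_with_filtering(video_emb, text_emb, margin=0.2, non_neg_gap=0.05):
    """video_emb, text_emb: (batch, dim), L2-normalized; row i is a matching pair."""
    sims = video_emb @ text_emb.T                     # pairwise cosine similarities
    pos = sims.diag().unsqueeze(1)                    # similarity of each true pair
    diag_mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)

    # Negatives whose similarity is within `non_neg_gap` of the positive are treated
    # as suspected non-negatives and excluded from mining (an assumed heuristic).
    suspected_non_neg = sims > pos - non_neg_gap
    cost = (margin + sims - pos).clamp(min=0)
    cost = cost.masked_fill(diag_mask | suspected_non_neg, 0)

    # Max-violation ("improved") ranking loss: only the hardest remaining negative counts.
    return cost.max(dim=1).values.mean()

v = torch.nn.functional.normalize(torch.randn(8, 512), dim=1)
t = torch.nn.functional.normalize(torch.randn(8, 512), dim=1)
print(ranking_loss_with_filtering(v, t))
```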

Analyzing European Migrant-related Twitter Deliberations

Machine-driven topic identification of online content is a prevalent task in the natural language processing (NLP) domain. Social media deliberations reflect society's opinion, and a structured analysis of this content allows us to decipher it. We employ an NLP-based approach for investigating migration-related Twitter discussions. Besides traditional deep learning-based models, we also consider pre-trained transformer-based models for analyzing our corpus. We successfully classify multiple strands of public opinion related to European migrants. Finally, we use BertViz to visually explore the interpretability of the better-performing transformer-based models.

EUDETECTOR: Leveraging Language Model to Identify EU-Related News

News media reflects the present state of a country or region to its audiences. Thousands of news articles are posted and consumed by a large and diverse audience across the world. Media outlets of a region post different kinds of news for their local and global audiences. In this paper, we focus on Europe (precisely, the EU) and propose a method to identify news that has an impact on Europe from any aspect, such as finance, business, crime, or politics. Predicting the location of the news is itself a challenging task. Most approaches restrict themselves to named entities or hand-crafted features. In this paper, we try to overcome that limitation: instead of focusing only on named entities (European locations, politicians, etc.) and some hand-crafted rules, we also explore the context of news articles with the help of the pretrained language model BERT. The BERT-based European news detector shows about a 9-19% improvement in F-score over baseline models. Interestingly, we observe that such models automatically capture named entities, their origin, etc.; hence, no separate information is required. We also evaluate the role of such entities in the prediction and explore the tokens that BERT really looks at when deciding the news category. Entities such as persons, locations, and organizations turn out to be good rationale tokens for the prediction.

ObjectGraphs: Using Objects and a Graph Convolutional Network for the Bottom-up Recognition and Explanation of Events in Video

In this paper, we propose ObjectGraphs, a novel bottom-up video event recognition approach that utilizes a rich frame representation and the relations between objects within each frame. Following the application of an object detector (OD) on the frames, graphs are used to model the object relations and a graph convolutional network (GCN) performs reasoning on the graphs. The resulting object-based frame-level features are then forwarded to a long short-term memory (LSTM) network for video event recognition. Moreover, the weighted in-degrees (WiDs) derived from the graph's adjacency matrix at frame level are used to identify the objects that were considered most (or least) salient for event recognition and that contributed the most (or least) to the final decision, thus providing an explanation for the latter. The experimental results show that the proposed method achieves state-of-the-art performance on the publicly available FCVID and YLI-MED datasets.
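
The explanation mechanism can be illustrated with a tiny example of computing weighted in-degrees from an adjacency matrix and ranking objects by them; the object labels and edge weights below are random placeholders.

```python
# Sketch: weighted in-degrees (WiDs) from a frame's object-graph adjacency matrix,
# used to rank objects by their contribution (random placeholder values).
import numpy as np

object_labels = ["person", "car", "backpack", "traffic light"]
adjacency = np.random.rand(4, 4)          # edge weights between detected objects
np.fill_diagonal(adjacency, 0.0)          # no self-loops

wids = adjacency.sum(axis=0)              # weighted in-degree of each object node
ranking = sorted(zip(object_labels, wids), key=lambda p: p[1], reverse=True)
for label, wid in ranking:
    print(f"{label}: {wid:.2f}")
```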

Explain and Predict, and then Predict Again

A desirable property of learning systems is to be both effective and interpretable. Towards this goal, recent models have been proposed that first generate an extractive explanation from the input text and then make a prediction based on just that explanation; these are called explain-then-predict models. These models primarily consider the task input as a supervision signal when learning an extractive explanation, and do not effectively integrate rationale data as an additional inductive bias to improve task performance. We propose a novel yet simple approach, ExPred, that uses multi-task learning in the explanation generation phase, effectively trading off explanation and prediction losses. We then use another prediction network on just the extracted explanations to optimize task performance. We conduct an extensive evaluation of our approach on three diverse language datasets — fact verification, sentiment classification, and QA — and find that we substantially outperform existing approaches.
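
A schematic sketch of the multi-task trade-off between explanation and prediction losses is given below; the loss weighting, tensor shapes, and heads are placeholders and do not reproduce ExPred's actual architecture.

```python
# Sketch of a multi-task objective trading off explanation and prediction losses,
# in the spirit of explain-then-predict training (weights and shapes are placeholders).
import torch
import torch.nn.functional as F

def expred_style_loss(token_logits, rationale_labels, task_logits, task_labels,
                      explanation_weight=0.5):
    """token_logits: (batch, seq_len) scores for selecting each token as explanation;
    rationale_labels: (batch, seq_len) 0/1 human rationales;
    task_logits: (batch, num_classes); task_labels: (batch,)."""
    explanation_loss = F.binary_cross_entropy_with_logits(token_logits, rationale_labels)
    prediction_loss = F.cross_entropy(task_logits, task_labels)
    return explanation_weight * explanation_loss + (1 - explanation_weight) * prediction_loss

token_logits = torch.randn(2, 10)
rationales = torch.randint(0, 2, (2, 10)).float()
task_logits = torch.randn(2, 3)
labels = torch.tensor([0, 2])
print(expred_style_loss(token_logits, rationales, task_logits, labels))
```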

VERGE in VBS 2021

This paper presents VERGE, an interactive video search engine that supports efficient browsing and searching of a collection of images or videos. The framework involves a variety of retrieval approaches as well as re-ranking and fusion capabilities. A Web application enables users to create queries and view the results in a fast and friendly manner.

Qualitative Interviews with Irregular Migrants in Times of COVID-19: Recourse to Remote Interview Techniques as a Possible Methodological Adjustment

Research designs require flexibility, and adjustments made to the designs do not always have to lead exclusively to disadvantages. In this research note, we would like to share our reflections on the impact of COVID-19 on the conduct of qualitative interviews with irregular migrants. Since these considerations were developed in close connection with one of our own projects, in which fieldwork is currently in the planning phase, we believe they may be relevant to similar projects. We include a brief remark on the current situation of irregular migrants in different (European) countries, as well as an assessment of the methodological feasibility of qualitative face-to-face interviews with irregular migrants and possible alternatives to this method, such as remote and online interview formats. We conclude with insights on our recommendation to rely on a mixed-mode approach, which allows us to use various remote interview modes, thus providing the necessary flexibility to adapt to profound health and social crises such as COVID-19.

Migration-Related Semantic Concepts for the Retrieval of Relevant Video Content

Migration, and especially irregular migration, is a critical issue for border agencies and society in general. Migration-related situations and decisions are influenced by various factors, including the perceptions about migration routes and target countries. An improved understanding of such factors can be achieved by systematic automated analyses of media and social media channels, and the videos and images published in them. However, the multifaceted nature of migration and the variety of ways migration-related aspects are expressed in images and videos make the finding and automated analysis of migration-related multimedia content a challenging task. We propose a novel approach that effectively bridges the gap between a substantiated domain understanding – encapsulated into a set of Migration-Related Semantic Concepts – and the expression of such concepts in a video, by introducing an advanced video analysis and retrieval method for this purpose.

Automated Text Analysis for Intelligence Purposes: A Psychological Operations Case Study

With the abundance of data available through the Internet, the premises for solving some intelligence analysis tasks have changed for the better. The study presented herein sets out to examine whether and how a data-driven approach can contribute to solving intelligence tasks. During a full-day observational study, an ordinary military intelligence unit was divided into two uniform teams. Each team was independently asked to solve the same realistic intelligence analysis task. Both teams were allowed to use their ordinary set of tools, but one team was additionally given access to a novel text analysis prototype tool specifically designed to support data-driven intelligence analysis of social media data. The results, obtained from a case study with high ecological validity, suggest that the prototype tool provided valuable insights by bringing forth information from a more diverse set of sources, specifically from private citizens, that would not have been easily discovered otherwise. Also, regardless of its objective contribution, the capabilities and the usage of the tool were embraced and subjectively perceived as useful by all involved analysts.

 

 

Towards an Aspect-based Ranking Model for Clinical Trial Search

Clinical trials are crucial for the practice of evidence-based medicine. They provide updated and essential health-related information for patients. Sometimes, clinical trials are the first source of information about new drugs and treatments. Different stakeholders, such as trial volunteers, trial investigators, and meta-analysis researchers, often need to search for trials. In this paper, we propose an automated method to retrieve relevant trials based on the overlap of UMLS concepts between the user query and the clinical trials. Since different stakeholders may have different information needs, we rank the retrieved clinical trials based on four aspects: Relevancy, Adversity, Recency, and Popularity. We aim to develop a clinical trial search system that covers multiple disease classes, instead of focusing only on the retrieval of oncology-related clinical trials. We follow a rigorous annotation scheme and create an annotated retrieval set for 25 queries across five disease categories. Our proposed method performs better than the baseline model in almost 90% of cases. We also measure the correlation between the different aspect-based ranking lists and observe a highly negative Spearman's rank correlation coefficient between popularity and recency.
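
The aspect-correlation analysis can be illustrated with scipy's Spearman rank correlation on two hypothetical rankings of the same retrieved trials; the ranks below are made up purely to mirror the reported negative correlation between popularity and recency.

```python
# Example: Spearman's rank correlation between two aspect-based rankings of the
# same retrieved clinical trials (the ranks below are made up).
from scipy.stats import spearmanr

# Rank position of five retrieved trials under two aspects.
popularity_rank = [1, 2, 3, 4, 5]
recency_rank    = [5, 4, 3, 2, 1]

rho, p_value = spearmanr(popularity_rank, recency_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")   # strongly negative correlation
```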

 

Summarizing Situational Tweets in Crisis Scenarios: An Extractive-Abstractive Approach

Microblogging platforms such as Twitter are widely used by eyewitnesses and affected people to post situational updates during mass convergence events such as natural and man-made disasters. These crisis-related messages are spread across multiple classes/categories, such as infrastructure damage, shelter needs, and information about missing, injured, and dead people. At the same time, we observe that people sometimes post information about their missing relatives and friends, with details like name, last known location, etc. Such information is time-critical in nature, and its pace and quantity do not match those of other, more generic situational updates. Also, the requirements of different stakeholders (government, NGOs, rescue workers, etc.) vary a lot. This brings two-fold challenges: (i) extracting important high-level situational updates from these messages, assigning them appropriate categories, and finally summarizing the large trove of information in each category, and (ii) extracting small-scale, time-critical, sparse updates related to missing or trapped persons. In this paper, we propose a classification-summarization framework that first assigns tweets to different situational classes and then summarizes those tweets. In the summarization phase, we propose a two-stage extractive-abstractive summarization framework. In the first step, it extracts a set of important tweets from the whole set of information, develops a bigram-based word-graph from those tweets, and generates paths by traversing the word-graph. Next, it uses an Integer Linear Programming (ILP) based optimization technique to select the most important tweets and paths based on different optimization parameters, such as informativeness and coverage of content words. Apart from general class-wise summarization, we also show how our summarization model can be customized to address time-critical, sparse information needs (e.g., missing relatives). Our proposed method is time and memory efficient and performs better than state-of-the-art methods in terms of both quantitative and qualitative evaluation.
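
The word-graph step can be illustrated with a small example that builds a bigram graph over a few tweets with networkx and enumerates candidate word paths; this is a simplified stand-in for the paper's path generation and does not include the subsequent ILP selection.

```python
# Sketch: bigram word-graph over extracted tweets, from which candidate paths can
# be generated (a simplified stand-in for the paper's abstractive step).
import networkx as nx

tweets = [
    "bridge collapsed near downtown",
    "bridge collapsed injuring several people",
    "rescue teams near downtown area",
]

graph = nx.DiGraph()
for tweet in tweets:
    words = tweet.split()
    for a, b in zip(words, words[1:]):               # consecutive word pairs (bigrams)
        weight = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        graph.add_edge(a, b, weight=weight)

# Enumerate a few candidate word paths between two anchor words.
for path in nx.all_simple_paths(graph, source="bridge", target="downtown", cutoff=6):
    print(" ".join(path))
```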

 

Going Beyond Content Richness: Verified Information Aware Summarization of Crisis-Related Microblogs

High-impact catastrophic events (bomb attacks, shootings) trigger the posting of large volumes of information on social media platforms such as Twitter. Recent works have proposed content-aware systems for summarizing this information, thereby facilitating post-disaster services. However, a significant proportion of the posted content is unverified, which restricts the practical usage of the existing summarization systems. In this paper, we work on the novel task of generating verified summaries of information posted on Twitter during disasters. We first jointly learn representations of content classes and expression classes of tweets posted during disasters using a novel LDA-based generative model. These representations of content and expression classes are used in conjunction with pre-disaster user behavior and temporal signals (replies) to train a Tree-LSTM based tweet-verification model. The model infers tweet verification probabilities, which are used, along with the information content of the tweets, in an Integer Linear Programming (ILP) framework for generating the desired verified summaries. The summaries are fine-tuned using the class information of the tweets as obtained from the LDA-based generative model. Extensive experiments are performed on a publicly available labeled dataset of man-made disasters, which demonstrate the effectiveness of our tweet-verification (3-13% gain over baselines) and summarization (12-48% gain in verified content proportion, 8-13% gain in ROUGE score over the state of the art) systems.

 

Identifying Deceptive Reviews: Feature Exploration, Model Transferability and Classification Attack

The temptation to influence and sway public opinion most certainly increases with the growth of open online forums where anyone can anonymously express their views and opinions. Since online review sites are a popular venue for opinion-influencing attacks, there is a need to automatically identify deceptive posts. The main focus of this work is the automatic identification of deceptive reviews, both positively and negatively biased. With this objective, we build an SVM-based classification model for deceptive reviews and explore the performance impact of using different feature types (TF-IDF, word2vec, PCFG). Moreover, we study the transferability of trained classification models applied to review data sets of other types of products, as well as classifier robustness, i.e., the accuracy impact, against attacks by stylometry obfuscation through machine translation. Our findings show that (i) we achieve an accuracy of over 90% using different feature types, (ii) the trained classification models do not perform well when applied to other data sets containing reviews of different products, and (iii) machine translation only slightly impacts the results and cannot be used as a viable attack method.
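
A minimal sketch of the TF-IDF + SVM variant of such a classifier, using scikit-learn, is given below; the four toy reviews and their labels are placeholders, and the paper's word2vec and PCFG feature types are not shown.

```python
# Minimal sketch of the TF-IDF + SVM variant of a deceptive-review classifier
# (toy placeholder data; the paper uses real review corpora and more feature types).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "Absolutely wonderful stay, the staff treated us like royalty every minute",
    "Room was clean, breakfast was average, location convenient for the station",
    "This is the best product ever made, life changing, buy ten of them now",
    "Battery lasts about a day with normal use, charger included in the box",
]
labels = ["deceptive", "truthful", "deceptive", "truthful"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["An unbelievable masterpiece, everyone must own this immediately"]))
```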

 

Extracting Account Attributes for Analyzing Influence on Twitter

Recent years have witnessed a surge of auto-generated content on social media. While many uses are legitimate, bots have also been deployed in influence operations to manipulate election results, shift public opinion in a desired direction, or divert attention from a specific event or phenomenon. Today, many approaches exist to automatically identify bot-like behaviour in order to curb illegitimate influence operations. While progress has been made, existing models are exceedingly complex and non-transparent, rendering validation and model testing difficult. We present a transparent and parsimonious method to study influence operations on Twitter. We define nine different attributes that can be used to describe and reason about different characteristics of a Twitter account. The attributes can be used to group accounts that have similar characteristics, and the result can be used to identify accounts that are likely to be used to influence public opinion. The method has been tested on a Twitter data set consisting of 66,000 accounts. Clustering the accounts based on the proposed features shows promising results for separating different groups of reference accounts.
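
The attribute-based grouping can be illustrated with a small clustering example in scikit-learn; the three account attributes and the values below are illustrative and are not the nine attributes defined in the paper.

```python
# Sketch: cluster Twitter accounts on simple per-account attributes with k-means
# (three illustrative attributes, not the nine defined in the paper).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: accounts; columns: tweets per day, share of retweets, followers/friends ratio.
accounts = np.array([
    [120.0, 0.95, 0.10],
    [110.0, 0.90, 0.20],
    [3.0,   0.20, 1.50],
    [5.0,   0.30, 2.00],
    [150.0, 0.99, 0.05],
])

features = StandardScaler().fit_transform(accounts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)   # accounts with bot-like posting behaviour tend to group together
```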

 

Veracity assessment of online data

Fake news, malicious rumors, fabricated reviews, and generated images and videos are today spread at an unprecedented rate, making it a daunting task to manually assess data veracity for decision-making purposes. Hence, it is urgent to explore possibilities for performing automatic veracity assessment. In this work we review the literature in search of methods and techniques representing the state of the art in computerized veracity assessment. We study what others have done within the area of veracity assessment, especially targeted towards social media and open source data, to understand research trends and determine needs for future research.

The most common veracity assessment method among the studied papers is text analysis using supervised learning. Regarding machine learning methods, much has happened in the last couple of years thanks to the advancements made in deep learning. However, very few papers make use of these advancements. Also, the papers in general tend to have a narrow scope, as they focus on solving a small task with only one type of data from one main source. The overall veracity assessment problem is complex, requiring a combination of data sources, data types, indicators, and methods. Only a few papers take on such a broad scope, thus demonstrating the relative immaturity of the veracity assessment domain.

 
