Semantic Web Evaluation

Accepted Papers

Open Knowledge Extraction (OKE-2015) challenge

  • Michael Röder, Ricardo Usbeck and Axel-Cyrille Ngonga Ngomo: CETUS -- A Baseline Approach to Type Extraction  (task 1-2)
  • Julien Plu, Giuseppe Rizzo and Raphaël Troncy: A Hybrid Approach for Entity Recognition and Linking  (task 1)
  • Sergio Consoli and Diego Reforgiato: Using FRED for Named Entity Resolution, Linking and Typing for Knowledge Base population  (task 1-2)
  • Jie Gao and Suvodeep Mazumdar: Exploiting Linked Open Data to Uncover Entity Type  (task 2)

 

Semantic Publishing (SemPub2015) challenge

  • Martin Milicka and Radek Burget: Information Extraction from Web Sources based on Multi-aspect Content Analysis
  • Dominika Tkaczyk and Lukasz Bolikowski: Extracting contextual information from scientific literature using CERMINE system
  • Stefan Klampfl and Roman Kern: Machine Learning Techniques for Automatically Extracting Contextual Information from Scientific Publications
  • Andrea Giovanni Nuzzolese, Silvio Peroni, and Diego Reforgiato Recupero: MACJa: Metadata And Citations Jailbreaker
  • Bahar Sateli and Rene Witte: Automatic Construction of a Semantic Knowledge Base from CEUR Workshop Proceedings
  • Maxim Kolchin, Eugene Cherny, Fedor Kozlov, Alexander Shipilo and Liubov Kovriguina: CEUR-WS-LOD: Conversion of CEUR-WS Workshops to Linked Data
  • Liubov Kovriguina, Alexander Shipilo, Fedor Kozlov, Maxim Kolchin and Eugene Cherny: Metadata Extraction From Conference Proceedings Using Template-Based Approach
  • Pieter Heyvaert, Anastasia Dimou, Ruben Verborgh, Erik Mannens, and Rik Van de Walle: Semantically Annotating CEUR-WS Workshop Proceedings with RML
  • Francesco Ronzano, Beatriz Fisas, Gerard Casamayor del Bosque, and Horacio Saggion: On the automated generation of scholarly publishing Linked Datasets: the case of CEUR-WS Proceedings

 

Schema-agnostic Queries over Linked Data challenge: SAQ-2015

  • Zareen Syed, Lushan Han, Muhammad Rahman, Tim Finin, James Kukla and Jeehye Yun: UMBC_Ebiquity-SFQ: Schema Free Querying System
  • Andre Freitas, Christina Unger and Siegfried Handschuh: Evaluating Schema-agnostic Queries: The SAQ-2015 Test Collection

 

Sentiment Analysis challenge

  • Kim Schouten and Flavius Frasincar: The Benefit of Concept-Based Features for Sentiment Analysis
  • Giulio Petrucci and Mauro Dragoni: An Information Retrieval-based System For Multi-Domain Sentiment Analysis
  • Andrea Giovanni Nuzzolese and Misael Mongiovi: Detecting sentiment polarities with Sentilo 
  • Francesco Corcoglioniti, Alessio Palmero Aprosio and Marco Rospocher: Opinion frame extraction from news corpus

 

Open Knowledge Extraction Challenge
Abstract:

The vision of the Semantic Web (SW) is to populate the Web with machine-understandable data so that intelligent agents can automatically interpret its content, just as humans do by inspecting it, and assist users in performing a significant number of tasks, relieving them of cognitive overload. The Linked Data movement bootstrapped this vision by publishing machine-understandable information drawn mainly from structured data (typically databases) or semi-structured data (e.g., Wikipedia infoboxes). However, most Web content consists of natural language text (Web sites, news, blogs, micro-posts, etc.); hence a main challenge is to extract as much relevant knowledge as possible from this content and publish it in the form of Semantic Web triples.

The Open Knowledge Extraction Challenge focuses on the production of new knowledge aimed at either populating and enriching existing knowledge bases or creating new ones. The defined tasks therefore focus on extracting concepts, individuals, properties, and statements that do not necessarily already exist in a target knowledge base, and on representing them according to Semantic Web standards so that they can be directly injected into linked datasets and their ontologies. The OKE challenge has the ambition to advance a reference framework for research on Knowledge Extraction from text for the Semantic Web by re-defining a number of tasks (typically from information and knowledge extraction) while taking into account specific SW requirements. The Challenge is open to everyone from industry and academia.
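
As a rough illustration of the kind of output these tasks aim at (a sketch only, not the challenge's official annotation format), the following Python snippet uses the rdflib library to turn a recognised entity mention into RDF triples that link and type it. The sentence, the example.org namespace and the choice of DBpedia/DOLCE terms are assumptions made for this example.

    # Minimal sketch (assumed rdflib 6.x): emit triples that link and type a
    # recognised entity mention so they could be injected into a knowledge base.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    DBR = Namespace("http://dbpedia.org/resource/")
    DUL = Namespace("http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#")
    EX = Namespace("http://example.org/oke/")   # hypothetical local namespace

    sentence = "Florence May Harding studied at a school in Sydney."
    mention = "Sydney"                          # entity recognised in the text

    g = Graph()
    g.bind("dul", DUL)
    g.bind("dbr", DBR)
    g.bind("owl", OWL)

    entity = EX[mention]                        # locally minted IRI for the mention
    g.add((entity, RDFS.label, Literal(mention, lang="en")))
    g.add((entity, OWL.sameAs, DBR.Sydney))     # link to an existing individual
    g.add((entity, RDF.type, DUL.Place))        # type it with a DOLCE class

    print(g.serialize(format="turtle"))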

Website:
 

Concept-Level Sentiment Analysis Challenge
Abstract:

Concept-level sentiment analysis focuses on the semantic analysis of text through the use of web ontologies, semantic resources, or semantic networks, allowing the identification of opinion data that would be very difficult to detect with natural language techniques alone. By relying on large semantic knowledge bases, concept-level sentiment analysis steps away from the blind use of keywords and word co-occurrence counts and relies instead on the implicit features associated with natural language concepts. Unlike purely syntactic techniques, concept-based approaches can also detect sentiments that are expressed in a subtle manner, e.g., through concepts that do not explicitly convey any emotion but are implicitly linked to other concepts that do. Systems must have a semantic flavor (e.g., by making use of Linked Data or known semantic networks within their core functionalities), and authors need to show how the introduction of semantics yields valuable information, functionality, or performance. Existing natural language processing methods or statistical approaches can be used as well, as long as semantics plays a main role within the core approach (engines based merely on syntax/word counts will be excluded from the competition). The Challenge is open to everyone from industry and academia.
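
To make the contrast with keyword counting concrete, here is a purely illustrative Python sketch; the toy concept graph, the polarity lexicon and the decay factor are invented for the example and are not taken from any participating system. An emotion-neutral concept inherits attenuated polarity from the concepts it is semantically linked to.

    # Illustrative only: concepts carrying no explicit emotion inherit
    # (attenuated) polarity from concepts they are linked to in a toy network.
    from collections import deque

    # Hypothetical mini semantic network: concept -> linked concepts.
    LINKS = {
        "hotel_room": ["cleanliness", "comfort"],
        "cleanliness": ["hygiene"],
        "comfort": [],
        "hygiene": [],
    }

    # Hypothetical polarity lexicon over concepts (not words), in [-1.0, 1.0].
    POLARITY = {"hygiene": 0.8, "comfort": 0.6}

    def concept_polarity(concept, decay=0.5):
        """Propagate polarity along links breadth-first with exponential decay."""
        seen, total = {concept}, POLARITY.get(concept, 0.0)
        queue = deque([(concept, 1.0)])
        while queue:
            node, weight = queue.popleft()
            for neighbour in LINKS.get(node, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    total += weight * decay * POLARITY.get(neighbour, 0.0)
                    queue.append((neighbour, weight * decay))
        return total

    # "hotel_room" expresses no emotion itself, yet inherits positive polarity
    # from the concepts it is implicitly linked to: 0.5*0.6 + 0.25*0.8 = 0.5
    print(concept_polarity("hotel_room"))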

Website:
 

Semantic Publishing Challenge
Abstract:

This is the next iteration of the successful Semantic Publishing Challenge of ESWC 2014. We continue pursuing the objective of assessing the quality of scientific output, evolving the dataset bootstrapped in 2014 to take into account the wider ecosystem of publications. This year's challenge focuses on refining and enriching an existing linked open dataset about workshops, their publications, and their authors. A combination of technologies broadly investigated in the Semantic Web field, such as Information Extraction (IE), Natural Language Processing (NLP), Named Entity Recognition (NER), and link discovery, is therefore required to address the challenge's tasks. The Challenge is open to everyone from industry and academia.
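
As an illustration of the kind of question such a refined dataset should be able to answer (a sketch with an invented example.org vocabulary and made-up data, not the actual challenge dataset or queries), the following Python/rdflib snippet builds a tiny workshop graph and interrogates it with SPARQL.

    # Minimal sketch (assumed rdflib 6.x); the ex: vocabulary and the data are
    # invented stand-ins for a refined workshop/publication/author dataset.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DC, RDF

    EX = Namespace("http://example.org/sempub/")   # hypothetical vocabulary

    g = Graph()
    workshop = EX["workshop/WS2015"]
    paper = EX["paper/1"]
    g.add((workshop, RDF.type, EX.Workshop))
    g.add((paper, RDF.type, EX.Paper))
    g.add((paper, EX.presentedAt, workshop))
    g.add((paper, DC.title, Literal("An Example Paper")))
    g.add((paper, DC.creator, Literal("A. Author")))

    q = """
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    PREFIX ex: <http://example.org/sempub/>
    SELECT ?title ?author WHERE {
      ?p ex:presentedAt ?ws ;
         dc:title ?title ;
         dc:creator ?author .
    }
    """
    for row in g.query(q):
        print(row.title, "-", row.author)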

Website:
 

Schema-agnostic Queries over Large-schema Databases Challenge
Abstract:

The increase in the size and semantic heterogeneity of database schemas brings new requirements for users querying and searching structured data. At this scale it can become infeasible for data consumers to be familiar with the representation of the data in order to query it. At the center of this discussion is the semantic gap between users and databases, which becomes more pronounced as the scale and complexity of the data grow. Addressing this gap is a fundamental part of the Semantic Web vision. Schema-agnostic query mechanisms aim to abstract users away from the representation of the data by supporting the automatic matching between queries and databases. This challenge aims to emphasize the role of schema-agnosticism as a key requirement for contemporary database management by providing a test collection for evaluating flexible query and search systems over structured data in terms of their level of schema-agnosticism, i.e., their ability to map a query issued in the user's terminology and structure to the dataset vocabulary. The challenge is instantiated in the context of Semantic Web datasets. The Challenge is open to everyone from industry and academia.
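
As a deliberately naive baseline for the matching described above (not one of the participating systems), the following Python sketch maps a user's query term to the lexically closest property label in a dataset vocabulary; the vocabulary, the synonym table and the example terms are invented, and the synonym table stands in for the semantic similarity model a real system would use.

    # Naive illustrative baseline: map user terminology to a (hypothetical)
    # dataset vocabulary via a synonym table plus string similarity.
    from difflib import SequenceMatcher

    # Hypothetical dataset vocabulary: property IRI -> human-readable label.
    VOCABULARY = {
        "http://dbpedia.org/ontology/author": "author",
        "http://dbpedia.org/ontology/birthPlace": "birth place",
        "http://dbpedia.org/ontology/spouse": "spouse",
    }

    # Stand-in for a semantic matching model: rewrite query phrasings into
    # vocabulary-like wording before lexical comparison.
    SYNONYMS = {"wrote": "author", "married to": "spouse", "born in": "birth place"}

    def best_property(user_term):
        """Return the (IRI, score) pair whose label best matches the user's term."""
        term = SYNONYMS.get(user_term.lower(), user_term.lower())
        scored = (
            (iri, SequenceMatcher(None, term, label).ratio())
            for iri, label in VOCABULARY.items()
        )
        return max(scored, key=lambda pair: pair[1])

    # A user asking "Who wrote The Hobbit?" need not know that the dataset
    # models authorship with dbo:author.
    print(best_property("wrote"))      # ('http://dbpedia.org/ontology/author', 1.0)
    print(best_property("born in"))    # ('http://dbpedia.org/ontology/birthPlace', 1.0)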

Website: