Visual Concept Detection and Linked Open Data at the TIB AV-Portal

Formal Metadata

Title
Visual Concept Detection and Linked Open Data at the TIB AV-Portal
Title of Series
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt, copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content, including adaptations, is shared only under the terms of this license.
Identifiers
Publisher
Release Date
2017
Language
English

Content Metadata

Subject Area
Abstract
The German National Library of Science and Technology (TIB) researches and develops methods of automated content analysis and semantic web technologies to improve access to its library holdings and to allow for advanced methods of information retrieval (e.g. semantic and cross-lingual search). For scientific videos in the TIB AV-Portal, spatio-temporal metadata is extracted by several algorithms that analyse (1) superimposed text, (2) speech, and (3) visual content. In addition, the results are mapped against common authority files and knowledge bases in a process of automated Named Entity Linking and published as Linked Open Data to facilitate reuse and interlinking of information. Against this background, the TIB constantly aims to improve its automated content analysis and the quality of its Linked Open Data. Currently, extensive research in the field of deep learning is being conducted to significantly enhance visual concept detection in the AV-Portal, both in terms of detection rates and coverage of subject-specific concepts. Our solution applies a state-of-the-art deep residual learning network, built on the popular TensorFlow framework, to predict and link visual concepts in audio-visual media. The resulting predictions are mapped against authority files and expressed as RDF triples. In our presentation we would therefore like to demonstrate how research in the field of machine learning can be combined with semantic web technologies and transferred to library services such as the AV-Portal to improve functionality and provide added value for users. In addition, we would like to address the question of data quality assessment and present scenarios for metadata reuse.
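
The concept-detection step described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a ResNet50 pre-trained on ImageNet via TensorFlow/Keras as a stand-in for the portal's subject-specific residual network; the frame file name, label set and confidence threshold are placeholders, not the AV-Portal's actual configuration.

```python
# Illustrative sketch: score an extracted key frame with a pre-trained ResNet50.
# The AV-Portal uses its own subject-specific model and vocabulary; ImageNet
# labels and the threshold below are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

model = ResNet50(weights="imagenet")  # stand-in for the domain-specific network

def predict_concepts(frame_path, top_k=5, threshold=0.3):
    """Return (label, confidence) pairs for one extracted video frame."""
    img = tf.keras.preprocessing.image.load_img(frame_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))
    preds = model.predict(x)
    return [(label, float(score))
            for _, label, score in decode_predictions(preds, top=top_k)[0]
            if score >= threshold]

# Prints the (label, score) pairs that exceed the confidence threshold.
print(predict_concepts("frame_1989.jpg"))
```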
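The second step, expressing the predictions as RDF triples, could look like the following sketch using rdflib. The namespaces, property names and authority-file URIs are hypothetical and only indicate how per-frame concepts might be linked to authority files and published as Linked Open Data; the AV-Portal's actual data model differs in detail.

```python
# Minimal sketch: serialise per-frame concept predictions as RDF triples.
# All URIs and property names below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

AV = Namespace("https://example.org/av/")       # hypothetical base URI
DCT = Namespace("http://purl.org/dc/terms/")

def concepts_to_rdf(video_id, frame_no, concepts):
    """concepts: list of (label, score, authority_uri) tuples."""
    g = Graph()
    g.bind("av", AV)
    g.bind("dct", DCT)
    segment = URIRef(AV[f"{video_id}/frame/{frame_no}"])
    for i, (label, score, authority_uri) in enumerate(concepts):
        ann = URIRef(AV[f"{video_id}/frame/{frame_no}/annotation/{i}"])
        g.add((ann, RDF.type, AV.ConceptAnnotation))
        g.add((ann, AV.annotates, segment))
        g.add((ann, DCT.subject, URIRef(authority_uri)))   # link to authority file
        g.add((ann, AV.conceptLabel, Literal(label)))
        g.add((ann, AV.confidence, Literal(score, datatype=XSD.float)))
    return g

# Example call with a placeholder concept and authority URI.
g = concepts_to_rdf("video123", 1989,
                    [("microscope", 0.87, "https://example.org/authority/microscope")])
print(g.serialize(format="turtle"))
```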