MEDIAFI (FIWARE Media&Content) Enablers

a)     3D-Map Tiles

http://lab.mediafi.org/discover-3dmaptiles-overview.html

The 3D-Map Tiles SE supplies OSM-style map tiles; however, these tiles are a 3D representation of the physical geometry of locations, in contrast to the image tiles of OpenStreetMap. The SE also supports various backend data providers and offers various kinds of tiles, such as projected OSM tiles and laser-scanned elevation data with textures. It incorporates the GIS-DP GE from FIWARE.
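
Because the tiles follow the OSM addressing scheme, a client can locate them with the standard slippy-map tile arithmetic. The sketch below assumes a hypothetical {z}/{x}/{y} tile endpoint purely for illustration; the actual URL scheme and payload format are described in the overview page above.

    import math
    import urllib.request

    def latlon_to_tile(lat, lon, zoom):
        """Standard OSM 'slippy map' tile numbering for a given coordinate."""
        lat_rad = math.radians(lat)
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    # Hypothetical tile endpoint; the real 3D-Map Tiles URL scheme may differ.
    TILE_URL = "http://example.org/3dmaptiles/{z}/{x}/{y}"

    def fetch_tile(lat, lon, zoom):
        x, y = latlon_to_tile(lat, lon, zoom)
        with urllib.request.urlopen(TILE_URL.format(z=zoom, x=x, y=y)) as resp:
            return resp.read()  # 3D geometry payload rather than a PNG image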

 

b)    App Generator

http://lab.mediafi.org/discover-appgenerator-overview.html

This service enables the deployment of a complete application ecosystem on-the-fly: custom mobile apps with content (app name, icons, data), custom web-apps and backends. For example: the editor of a city guide application can now develop a web-app, backend and mobile apps and then simply apply this template to other cities. Using this SE, deploying a new city is a matter of minutes: provide new data, images and text, and the Generator takes care of the rest, deploying your web-app instances, creating your datasets and readying your new mobile apps for deployment.

 

c)     ARTool

http://lab.mediafi.org/discover-artool-overview.html

The ARTool platform enables the simple and fast creation of AR applications using a user-friendly design platform (ARTool Creator), and the subsequent deployment of these applications through the ARTool Deploy platform.

 

d)    Asset Storage

http://lab.mediafi.org/discover-assetstorage-overview.html

The Asset Storage SE is a system for storage and conversion of polygonal 3D models. It offers a REST interface to add and retrieve models, where HTTP content negotiation is used to determine the input and output format(s). Its current primary use is to import 3D models into its own storage format, and export them to something usable on the web (i.e. compatible with the 3D-UI-XML3D GE).
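
The description above implies a typical content-negotiated REST workflow: the Content-Type header declares the uploaded model's format and the Accept header selects the export format. The following is a minimal sketch of that pattern; the host, the /assets path and the media type names are assumptions for illustration, not the SE's documented API.

    import urllib.request

    BASE = "http://assetstorage.example.org/assets"   # hypothetical host and path

    # Upload a model; the Content-Type header tells the service the input format.
    with open("chair.obj", "rb") as f:
        req = urllib.request.Request(
            BASE, data=f.read(), method="POST",
            headers={"Content-Type": "model/obj"},    # assumed media type
        )
        asset_url = urllib.request.urlopen(req).headers.get("Location")

    # Retrieve the same model; the Accept header selects the output format,
    # e.g. an XML3D-compatible representation for the 3D-UI-XML3D GE.
    req = urllib.request.Request(asset_url, headers={"Accept": "model/xml3d"})
    xml3d_model = urllib.request.urlopen(req).read()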

 

e)     Audio Mining

http://lab.mediafi.org/discover-audiomining-overview.html

Audio Mining analyses German or English-language audio/video files (e.g. content from a TV news show) and returns textual information suitable for indexing (e.g. for search engines). Audio Mining performs speech and speaker segmentation as well as speech recognition in order to render speech into text. The SE delivers segments, speaker identification, characteristic keywords and additional metadata in XML and JSON. Finally, the SE builds an index for multimedia search.
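
In practice the SE is driven by submitting a media file and consuming the structured result. The sketch below assumes a hypothetical endpoint and result field names (segments, speaker, transcript, keywords) to illustrate how the JSON output could feed a search index; the actual API is documented on the page above.

    import json
    import urllib.request

    # Hypothetical endpoint and result field names, for illustration only.
    ANALYZE_URL = "http://audiomining.example.org/analyze?lang=de"

    with open("news_show.mp4", "rb") as f:
        req = urllib.request.Request(
            ANALYZE_URL, data=f.read(), method="POST",
            headers={"Content-Type": "video/mp4", "Accept": "application/json"},
        )
    result = json.load(urllib.request.urlopen(req))

    # A result of this shape could then be indexed segment by segment.
    for segment in result.get("segments", []):
        print(segment.get("speaker"), segment.get("start"),
              segment.get("keywords"), segment.get("transcript"))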

 

f)      Augmented Reality - Fast Feature Tracking

http://lab.mediafi.org/discover-augmentedreality.fastfeaturetracking-overview.html

All specific enablers of the Augmented Reality (AR) group provide various tracking methods to enable AR applications. The Fast Feature Tracking SE learns targets by colour and then matches the centre of a colour area (for example a coloured football or road sign) in the camera image to retrieve the relative camera pose information. This allows an application to apply the matching transformation to 3D scene content and render it onto the respective targets.

 

g)     Content Enrichment

http://lab.mediafi.org/discover-contentenrichment-overview.html

Content Enrichment provides functions to create, distribute and play interactive video content across platforms and devices by making video objects clickable. It also provides interfaces to incorporate Web 2.0 capabilities and community functionalities. The enabler acts as a common building block for future video and multimedia infrastructures. It allows seamless, platform-independent and convenient enrichment of any type of video content using any type of device for a variety of application cases.

 

h)    Content Optimisation

http://lab.mediafi.org/discover-contentoptimisation-overview.html

Content Optimisation processes incoming textual content (e.g. from the Audio Mining SE) and extracts characteristic keywords. Subsequently, semantic enrichment based on natural language processing (NLP) is performed to connect the transcripts and keywords with additional, contextual information. The SE integrates and harmonises additional content from diverse sources. The software is intended for SMEs wanting to build second screen applications (e.g. for TV documentaries), but can also be used for various other purposes.

 

i)      Context Aware Recommendation

http://lab.mediafi.org/discover-contextawarerecommendation-overview.html

This Specific Enabler consists of two server modules: an Activity and Context Recognition module, which uses gathered contextual and sensory data to classify user activity and context, and a Recommendation Matrix Preparation module. A demo Android application is also provided for collecting contextual/sensory data and presenting POI recommendation results.

 

j)      Flexible and Adaptive Text To Speech

http://lab.mediafi.org/discover-flexibleandadaptivetexttospeech-overview.html

The Flexible and Adaptive Text To Speech (FA-TTS) SE is a Text To Speech server that enables simple and fast creation of synthetic speech based on a text input. The technology used allows the manipulation of various acoustic and linguistic parameters in order to obtain the synthetic voice that is most suitable for a specific situation. Pitch/rhythm modifications and a vocal tract scaler can be used to generate more expressive speech.
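
As a rough illustration of how such a server is typically called, the sketch below sends a text string together with a few prosody controls and stores the returned audio. The endpoint and parameter names (pitch_shift, speech_rate, vocal_tract_scale) are assumptions, not the FA-TTS SE's documented interface.

    import urllib.parse
    import urllib.request

    # Hypothetical endpoint and parameter names; the real FA-TTS API may differ.
    TTS_URL = "http://fatts.example.org/synthesize"

    params = {
        "text": "Welcome to the city guide.",
        "voice": "en_female",          # assumed voice identifier
        "pitch_shift": "1.1",          # assumed expressive-speech controls
        "speech_rate": "0.9",
        "vocal_tract_scale": "1.05",
    }
    with urllib.request.urlopen(TTS_URL + "?" + urllib.parse.urlencode(params)) as resp:
        audio = resp.read()            # e.g. a WAV stream

    with open("welcome.wav", "wb") as f:
        f.write(audio)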

 

k)    Fusion Engine

http://lab.mediafi.org/discover-fusionengine-overview.html

The Fusion Engine (FE) merges Points of Interest (POIs) from various data sources. The main objective is to build Open City Databases (OCDs) with different POIs obtained from different data sources (OSM, DBPedia, etc.). Duplicate POIs will be removed. Categories of POIs can be set up in order to merge and retrieve only specific POIs. The FE Specific Enabler is implemented as a backend service - interaction is with administrator only, rather than with users.
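
At the heart of such a fusion step is deciding when two POIs from different sources describe the same place. The sketch below shows one common heuristic, spatial proximity combined with name similarity; the thresholds and matching rule are illustrative assumptions, not the FE's documented algorithm.

    import difflib
    import math

    def distance_m(a, b):
        """Approximate distance in metres between two (lat, lon) points."""
        dlat = math.radians(b[0] - a[0])
        dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371000 * math.hypot(dlat, dlon)

    def is_duplicate(poi_a, poi_b, max_dist_m=50, min_name_sim=0.8):
        """Treat two POIs as the same place if they are close and similarly named."""
        close = distance_m(poi_a["location"], poi_b["location"]) <= max_dist_m
        name_sim = difflib.SequenceMatcher(
            None, poi_a["name"].lower(), poi_b["name"].lower()).ratio()
        return close and name_sim >= min_name_sim

    def fuse(sources):
        """Merge POI lists from several sources, dropping duplicates."""
        merged = []
        for poi in (p for source in sources for p in source):
            if not any(is_duplicate(poi, kept) for kept in merged):
                merged.append(poi)
        return merged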

 

l)      Game Synchronization

http://lab.mediafi.org/discover-gamesynchronization-overview.html

The Game Synchronization SE provides functionality to synchronise the game world using the RTS (Real-Time Strategy) lockstep mechanism. It offers an efficient way to synchronise many objects by sending their actions instead of streaming their positions.
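
A minimal sketch of the lockstep idea follows: each client queues the actions issued during a turn, exchanges only those actions with its peers, and then all clients apply them deterministically in the same order, so object positions never need to be streamed. The data structures and the network object are placeholders, not the SE's API.

    from dataclasses import dataclass, field

    @dataclass
    class Action:
        object_id: int
        command: str                  # e.g. "move", "attack"
        params: tuple

    @dataclass
    class LockstepClient:
        world: dict                   # object_id -> state, identical on every client
        pending: list = field(default_factory=list)

        def issue(self, action):
            # Local input is only queued; it is not applied immediately.
            self.pending.append(action)

        def end_turn(self, network):
            # Exchange this turn's actions; 'network' is a placeholder transport.
            network.broadcast(self.pending)
            all_actions = network.collect()   # same set of actions on every peer
            self.pending.clear()
            # A deterministic ordering keeps every client's simulation identical.
            for action in sorted(all_actions, key=lambda a: (a.object_id, a.command)):
                self.apply(action)

        def apply(self, action):
            state = self.world.setdefault(action.object_id, {})
            state[action.command] = action.params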

 

m)  Geospatial - POI Interface

http://lab.mediafi.org/discover-geospatial.poiinterface-overview.html

The POI Interface SE implements an interface to the POI Data Provider GE or POI Storage SE APIs for Unity3d. It provides access to all the POI Data Provider GE methods and wraps the POI data structures into C# objects.

 

n)    HbbTV Application Toolkit

http://lab.mediafi.org/discover-hbbtvapplicationtoolkit-overview.html

Due to the lack of tools for content creators and developers, developing HbbTV applications can be demanding, time-consuming and expensive. The HbbTV Application Toolkit SE provides a powerful tool set enabling broadcasters, program editors and TV app developers to quickly and easily create HbbTV-compliant TV apps.

 

o)     Leaderboard

http://lab.mediafi.org/discover-leaderboard-overview.html

The Leaderboard SE provides storage of high scores and their retrieval as a sorted list. In addition, it can connect to the Social Network Enabler and automatically post a message when a player breaks the high score.
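
Conceptually the SE is used as a small REST service: submit a score, then read back the sorted list. The base URL, resource names and parameters below are hypothetical, chosen only to illustrate the pattern.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical base URL and resource layout, for illustration only.
    BASE = "http://leaderboard.example.org/games/space-race"

    def submit_score(player, score):
        """POST a new score for the given player."""
        data = urllib.parse.urlencode({"player": player, "score": score}).encode()
        urllib.request.urlopen(urllib.request.Request(BASE + "/scores", data=data))

    def top_scores(limit=10):
        """GET the high scores as a sorted list of {player, score} entries."""
        with urllib.request.urlopen(f"{BASE}/scores?limit={limit}&order=desc") as resp:
            return json.load(resp)

    submit_score("alice", 4200)
    print(top_scores())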

 

p)    Open City Database

http://lab.mediafi.org/discover-opencitydatabase-overview.html

The Open City Database (OCDB) SE is an open source database management system for any smart city related data (e.g. points of interest, open city data and related media from various sources). Besides its database functionality the OCDB provides a comprehensive API to create, modify and request data sets for their integration with smart city guide apps or any other application or service that may take advantage of open city data.

 

q)    OpenDataSoft

http://lab.mediafi.org/discover-opendatasoft-overview.html

The ODS SE has been specifically designed for non-technical business users to share, publish and reuse structured data in order to create interactive data visualizations and to feed external applications with data via a rich set of REST APIs.

 

r)     Phenomobile Character Manager

http://lab.mediafi.org/discover-phenomobilecharactermanager-overview.html

The phenomobile CharacterManager allows the simple and fast creation of characters in story-based games. The character manager handles the states of all game characters in every game scene, including the emotion and other states of each character.

 

s)     Phenomobile Dialog Manager

http://lab.mediafi.org/discover-phenomobiledialogmanager-overview.html

The phenomobile DialogManager allows the simple and fast creation of text dialogues in story-based games. The dialogue manager handles all of the dialogues of all game characters in every game scene, including the emotion and other states of each character.

 

t)      POIProxy

http://lab.mediafi.org/discover-poiproxy-overview.html

The POIProxy SE is a service to retrieve Points of Interest from almost any public remote service that exposes geolocated data through a REST API or static files. Examples of the kinds of services that POIProxy can interact with include: open data portals (static files, OData APIs, REST APIs, ...), social networks (Flickr, Panoramio, Instagram, 500px, Twitter, Facebook, Foursquare, ...), event services (LastFM, Nvivo, SongKick, Meetup, Eventbrite, ...), and other services, including real-time services (Wikilocation, Geonames, OpenWeatherMap, CityBikes, ...).
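
A typical interaction is a single geolocated query against one of the registered services. The route and parameter names in the sketch below (browse/{service}.json with lat, lon and dist) are assumptions used only to illustrate the idea; the real POIProxy API may differ.

    import json
    import urllib.request

    # Illustrative request pattern; the real POIProxy routes and parameters may differ.
    POIPROXY = "http://poiproxy.example.org/browse/{service}.json?lat={lat}&lon={lon}&dist={dist}"

    url = POIPROXY.format(service="wikilocation", lat=39.47, lon=-0.37, dist=500)
    with urllib.request.urlopen(url) as resp:
        pois = json.load(resp)         # typically a GeoJSON-like feature collection

    for feature in pois.get("features", []):
        props, geometry = feature["properties"], feature["geometry"]
        print(props.get("name"), geometry["coordinates"])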

 

u)    Reality Mixer - Augmented Audio

http://lab.mediafi.org/discover-realitymixer.augmentedaudio-overview.html

This aurally oriented enabler of the Reality Mixer group measures sensor location properties and adapts the virtual sound sources to the audio environment. The Augmented Audio enabler makes use of the POI Interface enabler to provide correctly located spatial sounds. Audio is thus incorporated into the physical environment in a seamless fashion, leading to a more realistic sound experience for mixed reality applications.

 

v)     Reality Mixer - Camera Artifact Rendering

http://lab.mediafi.org/discover-realitymixer.cameraartifactrendering-overview.html

All visually oriented SEs of the Reality Mixer group measure camera properties and adapt the virtual objects so that they visually fit the camera image background. This client-side code modifies the rendered virtual content to match the camera image more closely in an AR context, providing a more realistic appearance.

 

w)   Reality Mixer - Reflection Mapping

http://lab.mediafi.org/discover-realitymixer.reflectionmapping-overview.html

All visually oriented SEs of the Reality Mixer group measure camera properties and adapt the virtual objects so that they visually fit the camera image background. The Reflection Mapping SE utilizes a light probe to extract a sphere map from the camera image, which captures the environmental lighting conditions. This sphere map is used to apply an appropriate lighting model to rendered virtual objects. The added virtual objects are thus incorporated into the resulting image in a seamless fashion, leading to a more realistic mixed reality experience.
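
The essential operation of sphere mapping is converting each reflection vector into texture coordinates of the captured sphere map, where the environmental lighting is then sampled. The sketch below shows that standard lookup as an illustration; it is not the SE's actual shader code.

    import math

    def sphere_map_uv(reflection):
        """Map a view-space reflection vector (rx, ry, rz) to sphere-map UV coordinates."""
        rx, ry, rz = reflection
        m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
        return rx / m + 0.5, ry / m + 0.5

    # The environmental lighting sampled at this UV is combined with the object's
    # material, so rendered virtual objects pick up the real scene's lighting.
    u, v = sphere_map_uv((0.3, 0.1, -0.95))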

 

x)     Second-Screen Framework

http://lab.mediafi.org/discover-secondscreenframework-overview.html

The Second Screen Framework (SSF) provides web applications running on a smart TV with all the crucial functionality to establish a persistent, bi-directional communication path to a web app running in the browser of a mobile device such as a tablet or smartphone running Android, iOS or another mobile operating system. Moreover, the SSF makes it possible to launch apps from a television on a second screen.

 

y)     SLAMflex

http://lab.mediafi.org/discover-slamflex-overview.html

SLAMflex provides detection and tracking of dominant planes for smartphone devices. A plane is detected in the field of view of the smartphone camera and tracked in subsequent frames; the interface returns the plane's position and orientation, which can then be used to show AR content relative to the plane's orientation.

 

z)     Social Network

http://lab.mediafi.org/discover-socialnetwork-overview.html

The Social Network SE Core (or SNE) is a REST service with a web interface that gives end users the possibility of communicating with each other. Unlike monolithic infrastructures (such as Facebook), the SNE not only gives users full autonomy over their data but can also be run as a federated service.

 

aa) TV Application Layer

http://lab.mediafi.org/discover-tvapplicationlayer-overview.html

TAL was developed internally within the BBC as a way of vastly simplifying TV application development whilst increasing the reach of BBC TV applications such as iPlayer. Today all of the BBC's HTML-based TV applications (such as BBC News and BBC Sport) are built using TAL.

 

bb) Unusual Database-Event Detection

http://lab.mediafi.org/discover-unusualdatabaseeventdetection-overview.html

The Unusual Database-Event Detection SE provides a database monitoring service. It regularly checks database values and, if any value is outside the expected range, sends an email alert.
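
The behaviour described above amounts to a poll-check-alert loop. The sketch below illustrates that loop with an assumed SQLite table, assumed value ranges and a local mail relay; none of these reflect the SE's actual configuration.

    import smtplib
    import sqlite3
    import time
    from email.message import EmailMessage

    # Illustrative thresholds and schema; the real SE's configuration will differ.
    EXPECTED = {"temperature": (-10.0, 45.0), "humidity": (0.0, 100.0)}
    POLL_SECONDS = 60

    def check_once(db_path="sensors.db"):
        """Return a list of out-of-range findings from the latest readings."""
        conn = sqlite3.connect(db_path)
        alerts = []
        for column, (low, high) in EXPECTED.items():
            row = conn.execute(
                f"SELECT {column} FROM readings ORDER BY ts DESC LIMIT 1").fetchone()
            if row is not None and not (low <= row[0] <= high):
                alerts.append(f"{column} = {row[0]} outside [{low}, {high}]")
        conn.close()
        return alerts

    def send_alert(lines):
        """E-mail the findings through a local mail relay (assumed to exist)."""
        msg = EmailMessage()
        msg["Subject"] = "Unusual database event"
        msg["From"] = "monitor@example.org"
        msg["To"] = "ops@example.org"
        msg.set_content("\n".join(lines))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    while True:
        problems = check_once()
        if problems:
            send_alert(problems)
        time.sleep(POLL_SECONDS)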