
MediaEval 2011 Benchmark Evaluation

MediaEval is a benchmarking initiative that offers tasks promoting research and innovation on multimodal approaches to multimedia annotation and retrieval. MediaEval 2011 focuses on speech, language, context and social aspects of multimedia, in addition to visual content. Participants carry out one or more tasks and submit runs to be evaluated. Results are written up and presented at the MediaEval 2011 workshop.

For each task, participants receive a task definition, task data and accompanying resources (dependent on task) such as shot boundaries, keyframes, visual features, speech transcripts and social metadata. In order to encourage participants to develop techniques that push forward the state-of-the-art, a “required reading” list of papers will be provided for each task. Participation is open to all interested research groups. In order to participate, please sign up by 31 May via http://www.multimediaeval.org

Choose one or more of the following tasks:

Genre Tagging
Given a set of genre tags (how-to, interview, review, etc.) and a video collection, participants are required to automatically assign genre tags to each video based on a combination of modalities, i.e., speech, metadata, audio and visual. (Data: Creative Commons internet video, multiple languages, mostly English)

Rich Speech Retrieval
Given a set of queries and a video collection, participants are required to automatically identify relevant jump-in points in the video based on a combination of modalities, i.e., speech, metadata, audio and visual. The task can be approached as a multimodal task, but also strictly as a speech search task. (Data: Creative Commons internet video, multiple languages, mostly English)

Spoken Web Search
This task involves searching for audio content within audio content using an audio content query. It is particularly interesting for speech researchers working in the area of spoken term detection. (Data: Audio from four different Indian languages: English, Hindi, Gujarati and Telugu. Each of the ca. 400 data items is an 8 kHz audio file 4-30 seconds in length.)

Affect Task: Violent Scenes Detection
This task requires participants to deploy multimodal features to automatically detect portions of movies containing violent material. Any features automatically extracted from the video, including the subtitles, can be used by participants. (Data: A set of ca. 15 Hollywood movies that must be purchased by the participants.)

Social Event Detection Task
This task requires participants to discover events and detect media items that are related to either a specific social event or an event class of interest. By social events we mean events that are planned by people, attended by people, and whose associated social media are captured by people. (Data: A large set of URLs of videos and images, together with their associated metadata)

Placing Task
This task involves automatically assigning geo-coordinates to Flickr videos using one or more of the following: Flickr metadata, visual content, audio content, and social information. (Data: Creative Commons Flickr data, predominantly English language)

MediaEval 2011 Timeline
March-May register and return usage agreements
1 June release of development/training data
1 July release of test data
8 August run submission
22 August working notes paper submission
1-2 September MediaEval 2011 Workshop in Pisa

The MediaEval 2011 Workshop is an official satellite event of Interspeech 2011 (http://www.interspeech2011.org).

MediaEval 2011 Coordination
Martha Larson, Delft University of Technology
Gareth Jones, Dublin City University

MediaEval 2011 Organization Committee
Claire-Helene Demarty, Technicolor
Maria Eskevich, Dublin City University
Guillaume Gravier, IRISA/CNRS
Pascal Kelm, Technical University of Berlin
Florian Metze, CMU
Vasileios Mezaris, ITI CERTH
Vanessa Murdock, Yahoo! Research
Roeland Ordelman, University of Twente and Netherlands Institute for Sound & Vision
Adam Rae, Yahoo! Research
Nitendra Rajput, IBM Research India
Sebastian Schmiedeke, Technical University of Berlin
Pavel Serdyukov, Yandex
Mohammad Soleymani, University of Geneva
Raphael Troncy, Eurecom

Contact
For questions or additional information, please contact Martha Larson (m.a.larson@tudelft.nl).

MediaEval 2011 is coordinated by PetaMedia, an FP7 EU Network of Excellence, and by the OpenSem project of EIT ICT Labs. Many other projects make individual contributions to the organization, including: AXES, Chorus+, Glocal, Quaero and WeKnowIt.
