| Time  | Session                             |
|-------|-------------------------------------|
| 10:45 | Jason J. Corso                      |
| 11:45 | Image Annotation Challenge Summary  |
| 1:00  | Poster session in the main ballroom |
| 2:00  | Jeffrey Mark Siskind                |
| 5:15  | Discussion and Wrap Up              |
The interaction between language and vision, despite gaining traction recently, is still largely unexplored. The topic is particularly relevant to the vision community because humans routinely perform tasks that involve both modalities, largely without even noticing. Every time you ask for an object, ask someone to imagine a scene, or describe what you are seeing, you are performing a task that bridges a linguistic and a visual representation. The importance of vision-language interaction can also be seen in the many approaches that cross domains, such as the popularity of image grammars. More concretely, we have recently seen renewed interest in one-shot learning of object and event models. Our linguistic abilities let us go further still: we perform zero-shot learning, without seeing a single example. You can recognize a picture of a zebra after hearing the description "horse-like animal with black and white stripes" without ever having seen one.
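To make the zebra example concrete, here is a minimal sketch of how a verbal description can stand in for training examples, in the spirit of attribute-based zero-shot recognition (e.g., direct attribute prediction). The attribute list, class signatures, and detector scores below are hypothetical placeholders, and the attribute-independence assumption is a deliberate simplification.

```python
# Sketch: zero-shot recognition from a linguistic description.
# Attribute detectors score an image; an unseen class ("zebra") is
# recognized purely from its described attribute signature.
import numpy as np

ATTRIBUTES = ["horse-like", "striped", "black-and-white"]  # hypothetical

# Binary attribute signatures derived from linguistic descriptions.
# "zebra" has no training images; only its description is known.
CLASS_SIGNATURES = {
    "horse": np.array([1, 0, 0]),
    "panda": np.array([0, 0, 1]),
    "zebra": np.array([1, 1, 1]),  # "horse-like, black and white stripes"
}

def classify(attribute_probs: np.ndarray) -> str:
    """Pick the class whose signature best explains the detector outputs."""
    def score(sig: np.ndarray) -> float:
        # Probability of the detector outputs under the signature,
        # assuming independent attribute detectors (a simplification).
        return float(np.prod(np.where(sig == 1, attribute_probs, 1 - attribute_probs)))
    return max(CLASS_SIGNATURES, key=lambda c: score(CLASS_SIGNATURES[c]))

# Hypothetical detector outputs for an image of a never-before-seen zebra.
print(classify(np.array([0.9, 0.8, 0.85])))  # -> "zebra"
```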
Furthermore, integrating language with vision opens up new horizons and tasks for the vision community. We have seen significant growth in image- and video-to-text tasks, but many other potential applications of such integration, among them question answering, dialog systems, and grounded language acquisition, remain unexplored. Beyond such novel tasks, language can make a deeper contribution to vision: it provides a prism through which to understand the world. A major difference between human and machine vision is that humans form a coherent, global understanding of a scene. This process is facilitated by our ability to shape perception with high-level knowledge, which provides resilience against errors in low-level perception. Language also provides a framework for learning about the world: it can describe many phenomena succinctly, thereby helping to filter out irrelevant details.
The workshop will also include a challenge tied to the 4th edition of the Scalable Concept Image Annotation Challenge, one of the tasks of ImageCLEF. The Scalable Concept Image Annotation task aims to develop techniques that allow computers to reliably describe images, localize the different concepts depicted in them, and generate a description of the scene. The task most directly related to this workshop is Generation of Textual Descriptions of Images.
We are calling for one- to two-page extended abstracts to be showcased at a poster session. Abstracts are not archival and will not be included in the Proceedings of CVPR 2015. We welcome both novel and previously published work.
Contributions to the Generation of Textual Descriptions challenge will also be showcased at the poster session, and a summary of the results will be presented at the workshop.