CVPR Language & Vision Workshop: list of accepted 2019 papers

Individual emails are coming by Monday. Some submissions will be invited to present longer talks. All submissions are invited to present spotlights. A schedule, instructions, and details are coming in the next week. Thanks!

1. A Coherence Approach to Data-Driven Inference in Visual Communication
2. Answer Them All! Toward Universal Visual Question Answering Models
3. Answering Questions about Data Visualizations using Efficient Bimodal Fusion
4. Baby steps towards few-shot learning with multiple semantics
5. ClearGAN: Photo-Realistic High-Resolution Text-to-Image Synthesis via Joint Inter-modal and Intra-modal Attention Modeling
6. Counterfactually Resilient Visual Grounding with a Modular Design
7. Curvature: A signature for Action Recognition in Video Sequences
8. Deep Sentiment Features of Context, Faces and Text for Affective Video Analysis
9. Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning
10. Generating Diverse and Informative Natural Language Fashion Feedback
11. Image Captioning with Integrated Bottom-Up and Multi-level Residual Top-Down Attention for Game Scene Understanding
12. Image captioning with weakly-supervised attention penalty
13. Indian Language Optical Character Recognition Through Bharati Script based Deep Convolutional Neural Networks
14. Integration of Text-maps in Convolutional Neural Networks for Region Detection among Different Textual Categories
15. Joint Visual-Textual Embedding for Multimodal Style Search
16. Let's Transfer Text Transformations to Images
17. Leveraging Unpaired Data for Image Captioning upon Scarce Supervised Data
18. Neural Sign Language Translation based on Human Keypoint Estimation
19. Language Guided Life-long Visual Learning
20. Referring Expression Object Segmentation with Caption-Aware Consistency
21. Scene Perspective Framing with Visual Question Answering Dialog
22. Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions
23. Sometimes you just need to ask: Situation recognition via VQA
24. SuperChat: Dialogue Generation by Transfer Learning from Vision to Language using Two-dimensional Word Embedding and Pretrained ImageNet
25. Tripping through time: Efficient Temporal Localization of Activities in Videos
26. Understanding Art through Multi-Modal Retrieval in Paintings
27. Using Language to Evaluate Image Enhancement Tasks
28. Video Object Segmentation with Language Referring Expressions
29. Visual Discourse Parsing
30. Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities
31. VizWiz-Priv: A Dataset for Recognizing the Presence and Purpose of Private Visual Information in Images Taken by Blind People
32. Which generates better jokes, hand-crafted features or deep features
33. Multitask Text-to-Visual Embedding with Titles and Clickthrough Data