The Sixth International Workshop on Egocentric Perception, Interaction and Computing
Online, June 15 (AM), 2020

Attending the Workshop

Thank you to everyone who was able to attend the workshop live!

A recording of the workshop is now available as a YouTube playlist, and links to the individual videos can be found in the program.


You can find information about the upcoming ECCV edition here

MTurk Proposals

Alongside the CVPR 2020 edition of the EPIC workshop, we called for proposals to create new egocentric data or annotations on existing data. Thanks to a generous donation from Amazon’s Mechanical Turk and Augmented AI (A2I) teams, we were able to offer $10K in funding. We received a number of high-quality proposals, so the funding was split among the following recipients:

Classifying Cycling Hazards in Egocentric Data
Jayson Haebich (City University of Hong Kong); Christian Sandor (City University of Hong Kong); Alvaro Cassinelli (City University of Hong Kong)
This dataset contains 5-10 second egocentric video segments of hazardous cycling situations, together with associated IMU data. Each video is annotated with the cause of the hazard and the type of surface the cyclist is travelling on.

Understanding Dyadic Interactions from Egocentric Multi-Views
Cristina Palmero (Universitat de Barcelona)*; Javier Selva (Universitat de Barcelona); Zejian Zhang (Universitat de Barcelona); Julio Cezar S. Silveira Jacques Junior (Universitat Oberta de Catalunya (UOC) & Computer Vision Center (CVC)); David Leiva (Universitat de Barcelona); Sergio Escalera (Computer Vision Center (UAB) & University of Barcelona)
A multi-view egocentric dataset of non-scripted, face-to-face dyadic interactions. It consists of recordings and profiling data of 147 subjects, distributed across 188 dyadic sessions, performing competitive and collaborative tasks with varying behavior elicitation and cognitive workload.

Pixel-Wise Labelling of RGB-D THU-READ dataset
Ester Gonzalez-Sosa (Nokia Bell Labs)*; Diego Gonzalez-Morin (Nokia Bell Labs); Andrija Gajic (Universidad Autonoma de Madrid); Marcos Escudero-Viñolo (Universidad Autónoma de Madrid); Alvaro Villegas (Nokia Bell Labs)
Pixel-Wise THU-READ Labelling contains segmentation masks for a representative subset of the original 960 RGB-D egocentric videos, covering 35 classes: the human body and 34 different objects. For more information, please contact ester.gonzalez@nokia-bell-labs.com

Aims and Scope

The EPIC series of events aims to bring together the different communities relevant to egocentric perception, including Computer Vision, Machine Learning, Multimedia, Augmented and Virtual Reality, Human-Computer Interaction, and Visual Sciences. The main goal of the workshop is to provide a discussion forum and to facilitate interaction between researchers from different areas of expertise. We invite researchers interested in these topics to join the EPIC@CVPR2020 workshop by submitting ongoing and recently published ideas, demos, and applications in support of human performance through egocentric sensing.

Egocentric perception introduces a series of challenging questions for computer vision, as motion, real-time responsiveness, and generally uncontrolled in-the-wild interactions are more frequently required or encountered. Questions such as what to interpret and what to ignore, how to efficiently represent egocentric actions, and how captured information can be turned into useful data for guidance or log summaries become central.

EPIC Community

If you are interested in learning about Egocentric Perception, Interaction and Computing, including future calls for papers, code, datasets, and jobs, subscribe to the mailing list: epic-community@bristol.ac.uk

Instructions to subscribe: