Workshop on Cross Modal Person Reidentification


About CM-PRID'19 Workshop

Person re-identification has received significant attention recently due to its potential in visual surveillance. Most existing work focuses on the visible spectrum and uses a single modality. With widespread surveillance and increasingly stringent constraints, the current requirement is to develop techniques that address the cross-modal nature of the captured data. In addition to the visible-spectrum re-identification challenges such as pose, illumination and scale variations, and occlusion, cross-modal datasets also pose the challenge of domain or spectrum variation, which makes cross-modal re-identification very challenging in practice. This motivates the two goals of the workshop: first, to generate cross-modal datasets such as text-image, RGB-IR, image-video and RGB-Depth datasets; second, to develop novel techniques that bridge the domain gap between the two modalities. Although some preliminary datasets and techniques exist, there is substantial scope for contributions toward both goals.
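To make the second goal concrete, one common formulation is to map features from each modality into a shared embedding space and rank gallery identities by similarity. The sketch below is purely illustrative: the projection matrices are random stand-ins for learned modality-specific encoders, and all dimensions and names are hypothetical.

```python
import numpy as np

# Illustrative sketch: project features from two modalities (e.g. RGB
# and IR) into a shared embedding space, then match an IR query
# against an RGB gallery by cosine similarity. W_rgb and W_ir are
# random placeholders for learned, modality-specific encoders.
rng = np.random.default_rng(0)

DIM_RGB, DIM_IR, DIM_SHARED = 512, 256, 128
W_rgb = rng.standard_normal((DIM_RGB, DIM_SHARED))  # hypothetical learned projection
W_ir = rng.standard_normal((DIM_IR, DIM_SHARED))    # hypothetical learned projection

def embed(features, W):
    """Map modality-specific features into the shared space, L2-normalized."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A query captured in IR, a gallery of 10 candidates captured in RGB.
query_ir = embed(rng.standard_normal(DIM_IR), W_ir)
gallery_rgb = embed(rng.standard_normal((10, DIM_RGB)), W_rgb)

# On L2-normalized embeddings, cosine similarity is a dot product.
scores = gallery_rgb @ query_ir
ranked = np.argsort(-scores)  # gallery indices, best match first
```

In a real system the projections would be trained (e.g. with a metric-learning or adversarial objective) so that embeddings of the same identity align across modalities; the ranking step, however, looks essentially like this.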

We invite novel, high-quality papers addressing issues related to cross-modal person re-identification (re-id), including but not limited to:

  • Models for cross-modal re-id
  • Adversarial attacks on re-id
  • Image-to-video or video-to-image re-id
  • Text-to-image re-id
  • RGB-IR re-id
  • Cross-modal data generation for augmentation
  • Modality-invariant representation learning

Important Dates

Submission: 11:59 PM PST, July 23, 2019 (extended from July 10, 2019)
Notification of Paper Acceptance: July 31, 2019
Camera Ready: August 5, 2019

For submission instructions, please refer to the main conference site.

Organizing Team

Program Committee
