Second Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation

Collocated with ACL 2022

May 26th - 28th, 2022


  • We will soon be releasing the training/validation set for “US Politics domain”. Stay tuned!
  • The training/validation dataset for “Covid19” domain is now released.


  • Task: Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities
    Given a meme and an entity, determine the role of the entity in the meme: hero vs. villain vs. victim vs. other. The meme is to be analyzed from the perspective of the author of the meme.
    • Role labelling for memes: This task focuses on detecting which entities are glorified, vilified, or victimized within a meme. Taking the meme author's perspective as the frame of reference, the objective is to classify, for a given pair of a meme and an entity, whether the entity is referenced as Hero vs. Villain vs. Victim vs. Other within that meme.

    • Definition of the entity classes:
      • Hero: The entity is presented in a positive light and is glorified for its actions, as conveyed by the meme or gathered from background context.
      • Villain: The entity is portrayed negatively, e.g., in an association with adverse traits like wickedness, cruelty, hypocrisy, etc.
      • Victim: The entity is portrayed as suffering the negative impact of someone else's actions, whether stated explicitly or conveyed implicitly within the meme.
      • Other: The entity is not a hero, a villain, or a victim.

      Example 1

      Corresponding JSON Line input:
      {
        "image": "image_1.png",
        "OCR": "When you've got a 98% chance of surviving\nthe China virus\nMSM:\njust get in\nthe coffin.\nVISIT PATRIOTPOST.US FOR THE BEST HUMOR AND MEMES\n",
        "hero": [],
        "villain": ["mainstream media (msm)"],
        "victim": ["people"],
        "other": ["coronavirus"]
      }

      Example 2

      Corresponding JSON Line input:
      {
        "image": "image_2.png",
        "OCR": "Whole world to\nPutin:\n*Putin\nVaccine\nWell done putin. i'm so proud of you\nVaccine\nVaccine\n",
        "hero": ["Vladimir Putin"],
        "villain": [],
        "victim": [],
        "other": ["the world", "Salman Khan", "vaccine"]
      }
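      Since each JSON line annotates several entities at once, a natural preprocessing step is to expand it into one classification instance per (meme, entity) pair. The sketch below assumes the field names shown in the examples above; treating the data as pairwise instances is an implementation choice, not part of the official specification.

      ```python
      import json

      # Role keys as they appear in the dataset JSON lines.
      ROLES = ("hero", "villain", "victim", "other")

      def to_instances(json_line: str):
          """Yield one (image, entity, role) tuple per annotated entity."""
          record = json.loads(json_line)
          for role in ROLES:
              for entity in record.get(role, []):
                  yield (record["image"], entity, role)

      # Example 2 from above, serialized as one JSON line (OCR omitted for brevity).
      line = json.dumps({
          "image": "image_2.png",
          "hero": ["Vladimir Putin"],
          "villain": [],
          "victim": [],
          "other": ["the world", "Salman Khan", "vaccine"],
      })
      print(list(to_instances(line)))
      ```

      Each yielded tuple can then be fed to a classifier that conditions on the meme (image plus OCR text) and the candidate entity.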

    • Evaluation Metric: The official evaluation measure for the shared task is the weighted F1 score for the multi-class classification.
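    The weighted F1 score averages per-class F1, with each class weighted by its number of true instances. A minimal pure-Python sketch (the official evaluation script may differ in details such as tie handling) that matches the usual definition:

    ```python
    from collections import Counter

    def weighted_f1(y_true, y_pred):
        """Per-class F1 averaged with weights proportional to class support."""
        classes = set(y_true) | set(y_pred)
        support = Counter(y_true)
        total = 0.0
        for c in classes:
            tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
            fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
            fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
            f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
            total += support[c] / len(y_true) * f1
        return total

    # Example: one "villain" instance misclassified as "hero".
    score = weighted_f1(
        ["hero", "villain", "villain", "victim"],
        ["hero", "hero", "villain", "victim"],
    )
    ```

    This is equivalent to scikit-learn's `f1_score(..., average="weighted")`.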
    • Contest and Dataset:
    • Submission: Each team should submit the output in JSON line format for the final evaluation:

       {
         "image": "image_name",
         "hero": [entity_list],
         "villain": [entity_list],
         "victim": [entity_list],
         "other": [entity_list]
       }

      In case of multiple submissions by a team, we shall consider the best submission prior to the deadline for the final evaluation. No exceptions shall be made.
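      Before submitting, it is worth checking that every line of the output file parses as JSON and carries exactly the required keys. A minimal validation sketch, assuming the field names shown above (the organizers' own checker, if any, is not published here):

      ```python
      import json

      # Keys every submission line must carry, per the format above.
      REQUIRED_KEYS = {"image", "hero", "villain", "victim", "other"}

      def check_submission_line(json_line: str) -> bool:
          """Return True iff the line has exactly the required keys
          and every role field is a list of entities."""
          record = json.loads(json_line)
          if set(record) != REQUIRED_KEYS:
              return False
          return all(isinstance(record[k], list) for k in REQUIRED_KEYS - {"image"})

      good = json.dumps({"image": "image_1.png", "hero": [],
                         "villain": ["mainstream media (msm)"],
                         "victim": ["people"], "other": ["coronavirus"]})
      bad = json.dumps({"image": "image_1.png", "hero": []})
      ```

      Running the checker over the whole JSON-lines file before the deadline guards against a submission being discarded for a formatting error.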
    • Best paper and Task Winner Awards: There will be three awards. Each one also includes a discounted registration for ACL.
      • Winner of the Role labelling shared task
      • Best paper award (main track)
      • Best paper award for the shared task (based on analysis, writing, methodology)


  • Important Dates:
    • January 6, 2022: Release of the training set
    • March 8, 2022: Release of the test set
    • March 12, 2022: Deadline for submitting final results
    • March 25, 2022: System paper submission deadline
    • April 5, 2022: Notification of acceptance
    • April 10, 2022: Camera-ready papers due


