StanceNakba Shared Task co-located with LREC 2026

Actor and Topic-Aware Stance Detection in Public Discourse

Advancing stance detection in polarized social media discourse on the Palestinian-Israeli conflict through dual-framework analysis of actor-level alignments and cross-topic patterns.

2 Subtasks
2,606 Annotated Samples
2 Languages

Research Overview

The StanceNakba 2026 Shared Task addresses stance detection in polarized social media discourse on the Palestinian-Israeli conflict and related regional issues. This task introduces a dual-framework approach that distinguishes between actor-level political alignments and cross-topic stance patterns across two conflict-related subjects.

Participants will develop models to analyze social media posts across two subtasks. Subtask A (Actor-Level Stance Detection) identifies whether authors express a Pro-Palestine, Pro-Israel, or Neutral orientation in their general position toward the Palestinian-Israeli conflict. Subtask B (Cross-Topic Stance Detection) detects Favor, Against, or Neither stances toward two specific conflict-related topics: normalization with Israel and refugee presence in Jordan.

This dual framework enables investigation of two fundamental questions: How do general political alignments (actors) relate to positions on specific issues (targets)? Can models learn generalizable stance patterns that transfer across different topics?

Task Descriptions

Subtask A

Actor-Level Stance Detection

Build a single model to classify the author's general political stance toward the Palestinian-Israeli conflict.

Labels: Pro-Palestine, Pro-Israel, Neutral

Dataset Details

1,401 Total Samples
English Language
70/15/15 Train/Dev/Test Split

Examples

Pro-Palestine "The systematic displacement of Palestinian families from their ancestral homes represents a clear violation of international law and the right of return."
Pro-Israel "Israel's defensive measures are necessary responses to existential threats, ensuring the safety of its citizens against terrorism."
Neutral "The conflict involves competing territorial claims, with both populations having deep historical connections to the region."
Subtask B

Cross-Topic Stance Detection

Build a unified model that predicts stance across multiple conflict-related topics.

Labels: Favor, Against, Neither

Dataset Details

1,205 Total Samples
Arabic Language
2 Topics Covered

Topics

1. Normalization with Israel (577 samples)

2. Refugee/Immigrant Presence in Jordan (628 samples)

Examples

Favor (Normalization) الجامعة العربية قالت إنها لا ترى أن #التطبيع مع #إسرائيل خطوة ضد القضية الفلسطينية
English translation: "The Arab League said it does not consider #normalization with #Israel a step against the Palestinian cause."
Against (Normalization) إن قدرة الدولة على التطبيع جهارًا نهارًا مع النظام الإسرائيلي تتماشى مع قوة واستقرار نظامها المستبد
English translation: "The state's ability to normalize openly and in broad daylight with the Israeli regime goes hand in hand with the strength and stability of its authoritarian rule."
Favor (Refugees) لا مكان للعنصرية بالأردن اي شخص داخل الأردن يعامل معاملة ابن البلد وهاذا الشخص يمثل كل أردني شريف
English translation: "There is no place for racism in Jordan; anyone inside Jordan is treated like a native of the country, and that person represents every honorable Jordanian."
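Since a single model must cover both topics, one common design is to make the input target-aware, for example by prepending the topic string to each post so the classifier can learn topic-specific cues while sharing parameters across topics. The sketch below illustrates this idea; the file name, column names, and label strings are assumptions, not the official data format, and character n-grams are merely one reasonable option for noisy Arabic social media text.

```python
# Target-aware single-model sketch for Subtask B. Illustrative assumptions
# throughout: file name, column names "topic"/"text"/"stance", label strings.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def make_input(topic: str, text: str) -> str:
    # Prepending the topic conditions the classifier on the target,
    # letting one model serve both topics.
    return f"{topic} [SEP] {text}"

train = pd.read_csv("train_b.csv")  # assumed columns: topic, text, stance
train["input"] = [make_input(t, x) for t, x in zip(train["topic"], train["text"])]

# Character n-grams are a robust choice for morphologically rich, noisy
# Arabic social media text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(train["input"], train["stance"])  # labels: Favor / Against / Neither
```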

Evaluation Criteria

Models will be evaluated using multiple metrics, and each subtask will have a separate leaderboard.

Primary Metric: Macro-averaged F1-score
Secondary Metrics: Accuracy, Precision, Recall
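For local validation, all four metrics can be computed with scikit-learn, as in the sketch below (the labels are toy values for illustration, and macro-averaging the secondary precision and recall is an assumption on our part). Macro averaging weights all three classes equally, which matters when the label distribution is skewed.

```python
# Computing the reported metrics locally with scikit-learn (toy labels).
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

gold = ["Favor", "Against", "Neither", "Against"]  # toy gold labels
pred = ["Favor", "Against", "Against", "Against"]  # toy system output

print("Macro-F1 :", f1_score(gold, pred, average="macro"))
print("Accuracy :", accuracy_score(gold, pred))
print("Precision:", precision_score(gold, pred, average="macro", zero_division=0))
print("Recall   :", recall_score(gold, pred, average="macro", zero_division=0))
```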

Important Dates

January 1, 2026
Call for Participation
January 10, 2026
Training Set Release
February 10, 2026
Blind Test Set Release
February 17, 2026
System Submission Deadline
February 21, 2026
Release of Results
March 1, 2026
Paper Submission Deadline
March 15, 2026
Notification of Acceptance
March 21, 2026
Camera-Ready Deadline
May 11–16, 2026
Workshop Date (TBC)

Who Should Participate

This shared task will attract participants from stance detection, Arabic NLP, and computational social science communities by offering well-curated datasets and clear evaluation metrics for two distinct challenges.

Researchers can participate in either English actor-level stance detection (accessible to the general NLP community) or Arabic cross-topic stance detection (suited to Arabic NLP specialists). Both subtasks address real-world applications in understanding polarized discourse and conflict narratives on social media.

🎯 Stance Detection Researchers
🌍 Arabic NLP Community
📊 Computational Social Scientists

Guidelines for Participating Teams

Who can participate in shared tasks?

Anyone can participate, either independently or as part of a team.

Is there a fee to participate in a task?

Participation in the shared task itself is free, but at least one team member must register (and pay the registration fee) for the conference.

How can I sign up to participate in a task?

Please see instructions on the website for the task (e.g., on CodaLab or other evaluation platforms).

Can I participate in multiple shared tasks (separate competitions)?

Yes, you can participate in multiple different tasks.

Can I participate in multiple teams under the same shared task (same competition)?

This depends on the task. Check with the task organizers to see whether they allow an individual to be part of multiple teams within the task.

What does "evaluation period" mean?

The evaluation period is the time window in which the official part of each task competition takes place. Task organizers will release evaluation data to participants at the beginning of the evaluation period, and system outputs will be due before the end of the evaluation period.

How many runs can be submitted for a task?

This depends on the task. Task organizers often specify how many submissions by each team will be evaluated. Contact your task organizers if these details are not already stated on the task webpage.

Where can I find datasets from past shared tasks?

Task datasets are distributed by the task organizers. Check the task website or contact the organizers. Many recent tasks archive their datasets on platforms such as GitHub.

Do I need to write a paper?

Yes. By registering and requesting the dataset, you agree to submit a paper. This is not just encouraged; it is a binding commitment you make when receiving the data.

What happens if I register but don't submit a paper?

This violates the participation agreement: when requesting the dataset, you agree to register and submit a paper. Abandoning the task after receiving the data may result in a ban from future participation.

What is the paper format?

System papers must follow the short-paper format for NakbaNLP 2026: use the official LaTeX or Word templates, with a limit of 4 pages of content (references are unlimited and do not count toward the limit). Check the dedicated paper submission guidelines:
https://lrec2026.info/authors-kit
https://softconf.com/lrec2026/nakbanlp2026

How should the paper title be formatted?

Use this format: <Team Name> at <Shared Task Name>: <Title of the work>
Example: "Scholarly at XYZ Shared Task: LLMs in Detection for Identifying Manipulative Strategies"

Must I cite the task overview paper?

Yes, each team is required to cite the shared task overview paper in their system paper.

Must someone from my team attend the conference?

Yes, at least one of the shared task team members should register and attend the conference.

Must papers be presented at the conference?

Yes, all accepted papers must be presented at the conference (in-person or virtual). Papers without at least one presenting author registered by the early registration deadline may be subject to desk rejection.

Do I need a Softconf account?

Yes, all participants and organizers are required to create a Softconf account.

Are the papers anonymous?

No, shared task papers (both system and overview papers) are not anonymous.

What happens if I exceed the page limit or miss required sections?

Submissions that exceed length requirements will be desk rejected.

Will I need to review other papers?

Yes, shared task teams are expected to serve as reviewers, and their reviews will be checked by a member of the shared task organizing committee.

Can I post my paper on arXiv?

Yes, authors are free to submit papers to arXiv at any time. Since papers are not anonymous, this won't interfere with the review process.

Can I make changes for the camera-ready version?

No. Content additions, revisions, and author name changes are not allowed for the camera-ready version after the review period.

What if my research raises ethical concerns?

If your research raises ethical considerations (e.g., potential for misuse), these should be discussed in the paper.

Should I release my code?

You are highly encouraged to release the code or data used and include the URL in the paper. This promotes reproducibility.

My question isn't answered here. Who should I contact?

For task-specific questions: Consult the task website and contact task organizers
For workshop logistics: Contact the shared task organizers
For Softconf technical issues: Contact Softconf support

Organizing Committee

For inquiries, please contact: stancenakba@gmail.com

Kholoud K. Aldous

Northwestern University in Qatar

Md. Rafiul Biswas

Hamad Bin Khalifa University, Doha, Qatar

Mabrouka Bessghaier

Northwestern University in Qatar

Kais Attia

Individual Researcher

Shimaa Ibrahim

Northwestern University in Qatar

Wajdi Zaghouani

Northwestern University in Qatar

Ready to Participate?

Join us in advancing stance detection research and understanding polarized discourse in social media.