Advancing stance detection in polarized social media discourse on the Palestinian-Israeli conflict through dual-framework analysis of actor-level alignments and cross-topic patterns.
The StanceNakba 2026 Shared Task addresses stance detection in polarized social media discourse on the Palestinian-Israeli conflict and related regional issues. This task introduces a dual-framework approach that distinguishes between actor-level political alignments and cross-topic stance patterns across two conflict-related subjects.
Participants will develop models to analyze social media posts across two subtasks. Subtask A (Actor-Level Stance Detection) identifies whether an author's general orientation toward the Palestinian-Israeli conflict is Pro-Palestine, Pro-Israel, or Neutral, while Subtask B (Cross-Topic Stance Detection) detects Favor, Against, or Neither stances toward specific conflict-related topics: normalization with Israel and refugee presence in Jordan.
This dual framework enables investigation of two fundamental questions: How do general political alignments (actors) relate to positions on specific issues (targets)? Can models learn generalizable stance patterns that transfer across topics?
Build a single model to classify the author's general political stance toward the Palestinian-Israeli conflict.
Build a unified model that predicts stance across multiple conflict-related topics.
1. Normalization with Israel (577 samples)
2. Refugee/Immigrant Presence in Jordan (628 samples)
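As a starting point for either subtask, here is a minimal baseline sketch in Python: TF-IDF features with a logistic regression classifier over the three stance labels. The file names, column names, and data layout are illustrative assumptions, not the official data format.

# Minimal stance baseline: TF-IDF features + logistic regression.
# File names, column names, and splits below are illustrative
# assumptions; check the official data release for the real format.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

train = pd.read_csv("train.csv")  # assumed columns: "text", "label"
dev = pd.read_csv("dev.csv")

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(train["text"], train["label"])
print(classification_report(dev["label"], model.predict(dev["text"])))

For Subtask B's Arabic text, character n-grams (analyzer="char_wb" with ngram_range=(2, 5)) are often more robust than word n-grams; training on one topic and evaluating on the other is also a simple way to probe the cross-topic transfer question raised above.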
Models will be evaluated using multiple metrics to ensure a comprehensive assessment. Each subtask will have a separate leaderboard.
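The exact metric set is the organizers' choice; macro-averaged F1 is the de facto standard for class-imbalanced stance detection, so the sketch below (with assumed gold and predicted label lists) shows how a leaderboard score of that kind can be computed with scikit-learn.

from sklearn.metrics import accuracy_score, f1_score

gold = ["Favor", "Against", "Neither", "Against"]  # illustrative labels only
pred = ["Favor", "Neither", "Neither", "Against"]

# Macro-F1 averages the per-class F1 scores, weighting all three
# classes equally even when one class (e.g., "Neither") dominates.
print("accuracy:", accuracy_score(gold, pred))
print("macro-F1:", f1_score(gold, pred, average="macro"))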
This shared task aims to attract participants from the stance detection, Arabic NLP, and computational social science communities by offering well-curated datasets and clear evaluation metrics for two distinct challenges.
Researchers can participate in either English actor-level stance detection (accessible to the general NLP community) or Arabic cross-topic stance detection (for Arabic NLP specialists), with both subtasks addressing real-world applications in understanding polarized discourse and conflict narratives on social media.
Anyone may participate, either individually or as part of a team.
No. Participation in the shared task itself is free; however, at least one team member must register for the conference, and registration fees apply.
Please see instructions on the website for the task (e.g., on CodaLab or other evaluation platforms).
Yes, you can participate in multiple tasks.
This depends on the task. Check with the task organizers to see whether they allow an individual to be part of multiple teams within the task.
The evaluation period is the time window in which the official part of each task competition takes place. Task organizers will release evaluation data to participants at the beginning of the evaluation period, and system outputs will be due before the end of the evaluation period.
This depends on the task. Task organizers often specify how many submissions by each team will be evaluated. Contact your task organizers if these details are not already stated on the task webpage.
Task datasets are distributed by the task organizers. Check the task website or contact the organizers. Many recent tasks archive their datasets on platforms such as GitHub.
Yes. By registering and requesting the dataset, you agree to submit a paper. This is not merely encouraged; it is a binding commitment you make when receiving the data.
This is strictly prohibited. By requesting the dataset, you agree to register and submit a paper. Abandoning the task after receiving the data may result in a ban from future participation.
System papers must follow the short-paper format for NakbaNLP 2026: use the official LaTeX or Word templates, with a limit of 4 pages (excluding references, which are unlimited). Check the dedicated paper submission guidelines:
https://lrec2026.info/authors-kit
https://softconf.com/lrec2026/nakbanlp2026
Use this format: <Team Name> at <Shared Task Name>: <Title of the work>
Example: "Scholarly at XYZ Shared Task: LLMs in Detection for Identifying Manipulative Strategies"
Yes, each team is required to cite the shared task overview paper in their system paper.
Yes, at least one of the shared task team members should register and attend the conference.
Yes, all accepted papers must be presented at the conference (in-person or virtual). Papers without at least one presenting author registered by the early registration deadline may be subject to desk rejection.
Yes, all participants and organizers are required to create a Softconf account.
No, shared task papers (both system and overview papers) are not anonymous.
Submissions that exceed length requirements will be desk rejected.
Yes, shared task teams are expected to serve as reviewers, and their reviews will be checked by a member of the shared task organizing committee.
Yes, authors are free to submit papers to arXiv at any time. Since papers are not anonymous, this won't interfere with the review process.
No additions or revisions to the author list, including name changes, are allowed for the camera-ready version after the review period.
If your research raises ethical considerations (e.g., potential for misuse), these should be discussed in the paper.
You are highly encouraged to release the code or data used and include the URL in the paper. This promotes reproducibility.
• For task-specific questions: Consult the task website and contact task organizers
• For workshop logistics: Contact the shared task organizers
• For Softconf technical issues: Contact Softconf support
For inquiries, please contact: stancenakba@gmail.com
Northwestern University in Qatar
Hamad Bin Khalifa University, Doha, Qatar
Northwestern University in Qatar
Individual Researcher
Northwestern University in Qatar
Northwestern University in Qatar
Join us in advancing stance detection research and understanding polarized discourse on social media.