OBSIDIAN
Model Overview
OBSIDIAN is a fine-tuned AraBERT-based model for Arabic tweet and short-text classification.
It predicts one of five labels:
- Threat
- Violence
- Distress
- Complaint
- Neutral
This model is part of the broader OBSIDIAN — Real-Time Social Media Intelligence and Threat Detection System project. The system is designed to support Arabic social media monitoring, institutional awareness, early-warning workflows, and decision-support dashboards.
Project Context
OBSIDIAN was developed as part of a graduate project at King Fahd University of Petroleum & Minerals (KFUPM). The broader system is intended to analyze public social media discourse, especially Arabic-language content, and transform noisy online text into structured insights.
The project vision includes:
- Arabic social media classification,
- real-time or near-real-time content monitoring,
- dashboard-based situational awareness,
- alert-level insights for high-risk content,
- future expansion toward trend tracking, entity-aware search, and influencer/account monitoring.
The current Streamlit application integrates this Hugging Face model for:
- single-text prediction,
- batch CSV/XLSX classification,
- live-monitoring simulation,
- optional n8n webhook-based tweet retrieval,
- alert-level dashboards,
- CSV and Excel result export.
Labels
Threat
Text containing direct or indirect threats, intimidation, or intent to cause harm.
Violence
Text describing physical aggression, assault, attacks, or violent incidents.
Distress
Text expressing fear, panic, emotional suffering, helplessness, or need for help.
Complaint
Text expressing dissatisfaction, frustration, or criticism, or reporting a service problem.
Neutral
Text without strong threat, violence, distress, or complaint signals.
Intended Use
This model is intended for:
- Arabic tweet classification,
- Arabic short-text classification,
- research and demonstration use,
- social media monitoring prototypes,
- public-discourse intelligence workflows,
- Streamlit-based dashboards and reporting tools.
The model is best used as a decision-support component rather than a fully autonomous decision system.
Not Intended For
This model should not be used:
- as the sole basis for enforcement, disciplinary, or legal decisions,
- to monitor private, encrypted, or direct-message content,
- as a substitute for trained human analysts in high-stakes settings,
- for non-Arabic content without additional validation,
- for production deployment without independent testing and governance review.
Training / Fine-Tuning Context
The model was fine-tuned as part of the OBSIDIAN project using Arabic social media text prepared for five-class classification.
The broader project work included:
- large-scale Arabic social media data collection,
- manual dataset construction and validation,
- cleaning and filtering of noisy Arabic text,
- baseline experimentation with classical machine learning,
- transition to transformer-based modeling,
- robustness testing on noisy real-world Arabic social media content.
The project progress report notes that transformer-based modeling was selected after classical baselines showed architectural limitations for semantic Arabic content classification. The transformer model was reported to achieve strong controlled evaluation results and more stable behavior under noisy, large-scale Arabic social media streams.
Users should still evaluate the model independently on their own target domain before relying on it for operational use.
Usage
Example with Hugging Face Transformers:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "SoftALL/OBSIDIAN"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# "The service is very bad and the app crashes every time"
text = "الخدمة سيئة جدًا والتطبيق يتعطل كل مرة"
inputs = tokenizer(
    text,
    return_tensors="pt",
    truncation=True,
    padding=True,
    max_length=128,
)

with torch.no_grad():
    outputs = model(**inputs)

probs = torch.softmax(outputs.logits, dim=1)[0]
pred_id = int(torch.argmax(probs).item())
label = model.config.id2label[pred_id]
confidence = float(probs[pred_id])

print(label, confidence)
```
Example Application Integration
The model is integrated into the OBSIDIAN Streamlit application, which supports:
Single Text Mode
Classify one Arabic text and display:
- predicted label,
- confidence,
- probability chart,
- class probability table.
Batch Upload Mode
Classify CSV/XLSX files and display:
- uploaded file preview,
- selected text-column preview,
- classified result preview,
- label / keyword / minimum-confidence filters,
- label distribution chart,
- downloadable filtered CSV output.
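The batch workflow above can be sketched as a small helper that applies any single-text classifier to each row of a CSV file. The `classify_csv` helper, its arguments, and the added output columns are illustrative assumptions, not the application's actual code; in the OBSIDIAN app, `classify` would wrap the Hugging Face model shown in the Usage section.

```python
import csv
from typing import Callable, Dict, List, Tuple

def classify_csv(path: str, text_column: str,
                 classify: Callable[[str], Tuple[str, float]]) -> List[Dict[str, str]]:
    """Apply a single-text classifier to every row of a CSV file.

    `classify` is any callable returning (label, confidence). Each row
    gains `label` and `confidence` columns for later filtering/export.
    """
    results = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            label, confidence = classify(row[text_column])
            row["label"] = label
            row["confidence"] = f"{confidence:.3f}"
            results.append(row)
    return results
```

The returned rows can then be filtered by label, keyword, or minimum confidence before export, mirroring the filters listed above.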
Live Monitor Mode
Simulate or fetch near-real-time Arabic tweets and display:
- fetched tweet preview,
- KSA-formatted timestamp,
- author ID,
- predicted label,
- confidence,
- alert level,
- label / alert-level / keyword / minimum-confidence filters,
- high/medium alert table,
- label distribution pie chart,
- tweets-per-class bar chart,
- confidence distribution histogram,
- alert-level chart,
- downloadable filtered CSV and Excel outputs.
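One way an alert level can be derived from a prediction is a simple label-and-confidence rule. The level names, risk groupings, and thresholds below are illustrative assumptions for a sketch, not OBSIDIAN's actual alert logic.

```python
# Illustrative alert rule: high-risk labels are promoted when the model
# is confident. The 0.80 threshold and the groupings are assumptions.
HIGH_RISK = {"Threat", "Violence"}
MEDIUM_RISK = {"Distress"}

def alert_level(label: str, confidence: float) -> str:
    """Map a (label, confidence) prediction to a coarse alert level."""
    if label in HIGH_RISK and confidence >= 0.80:
        return "High"
    if label in HIGH_RISK or (label in MEDIUM_RISK and confidence >= 0.80):
        return "Medium"
    if label in MEDIUM_RISK:
        return "Low"
    return "None"
```

A rule like this keeps the alert tables and alert-level charts cheap to compute on top of raw model outputs.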
The live-monitoring mode can optionally use an n8n webhook with the following parameters:
- postLimit
- timeWindowHours
- xQuery
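A minimal sketch of calling such a webhook with the Python standard library, assuming a GET endpoint that accepts the three parameters as a query string and returns JSON (the endpoint URL and response shape are assumptions):

```python
import json
import urllib.parse
import urllib.request

def build_webhook_url(base_url: str, post_limit: int,
                      time_window_hours: int, x_query: str) -> str:
    """Encode the three webhook parameters as a query string."""
    params = {
        "postLimit": post_limit,
        "timeWindowHours": time_window_hours,
        "xQuery": x_query,
    }
    return base_url + "?" + urllib.parse.urlencode(params)

def fetch_tweets(base_url: str, post_limit: int = 20,
                 time_window_hours: int = 6, x_query: str = "") -> list:
    """Call the n8n webhook and return the decoded JSON payload."""
    url = build_webhook_url(base_url, post_limit, time_window_hours, x_query)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)
```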
Files in This Repository
This Hugging Face model repository includes:
- config.json
- model.safetensors
- tokenizer.json
- tokenizer_config.json
The Streamlit application code is hosted separately in the SoftALL GitHub organization under the OBSIDIAN repository.
Limitations
- The model is intended primarily for Arabic tweet and short-text classification.
- Performance may degrade on long texts, heavy code-switching, OCR noise, or text far from the training distribution.
- Some categories may overlap semantically, especially:
- Threat vs. Distress
- Complaint vs. Neutral
- Violence vs. Threat
- The model may misinterpret sarcasm, implicit references, jokes, or highly localized dialect expressions.
- The model does not replace human review in sensitive or high-risk contexts.
- Live monitoring depends on external data retrieval workflows and is not part of the model itself.
Ethical and Responsible Use
This model should be used responsibly and with human oversight.
Recommended safeguards include:
- using predictions as decision-support signals only,
- reviewing high-risk predictions manually,
- documenting data sources and legal basis for collection,
- avoiding private or unauthorized data monitoring,
- validating performance on the intended deployment domain,
- considering false positives and false negatives before acting on results.
Project Information
Project
OBSIDIAN — Real-Time Social Media Intelligence and Threat Detection System
Researcher / Project Owner
Abdullah Saeed Ali Al-Malki
Academic Affiliation
- University: King Fahd University of Petroleum & Minerals (KFUPM)
- College: College of Computing and Mathematics
- Department: Department of Computer Engineering / Computer Networks Program
- Degree Program: Master’s Degree in Computer Engineering
- Academic Year: 2024–2026
Supervisor
Dr. Saad Ezzini
Assistant Professor — Software Engineering
Information & Computer Science Department
Institutional / Professional Context
- Sector: Ministry of Interior — Public Security
- Department: Eastern Province Police
- Current Role: Director of Communications and Information Technology Division
Contact
- Email: ASALMALKI@POLICE.MOI.GOV.SA
Citation / Acknowledgment
If you use this model or application in demonstrations, reports, or research, please acknowledge the OBSIDIAN project and its academic context at King Fahd University of Petroleum & Minerals.