# Model Card: Sentiment Classifier (DistilBERT - SST-2)

## Overview

This model is a fine-tuned version of `distilbert-base-uncased` on the SST-2 dataset, designed for binary sentiment classification: labeling text as either positive or negative.

It's fast, compact, and suitable for real-time inference tasks such as social media monitoring, customer feedback triage, and lightweight embedded NLP.


## Use Cases

- Detecting sentiment in tweets, reviews, or comments
- Routing customer support tickets by tone
- Analyzing product sentiment in e-commerce or app stores
- Monitoring brand perception over time

## Examples

Input: "This new update is amazing - so much faster!"
Output: Positive

Input: "This feature is broken and support isn't helping."
Output: Negative
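The examples above can be reproduced with the Hugging Face `transformers` pipeline API. A minimal sketch, assuming `transformers` and a backend such as PyTorch are installed (the checkpoint downloads on first use); `to_card_label` is a hypothetical helper added here to map the checkpoint's raw `POSITIVE`/`NEGATIVE` labels to the class names used in this card:

```python
def to_card_label(raw_label: str) -> str:
    # Map the checkpoint's raw labels ("POSITIVE"/"NEGATIVE")
    # to the class names used in this card.
    return {"POSITIVE": "Positive", "NEGATIVE": "Negative"}[raw_label]


if __name__ == "__main__":
    # Lazy import so the helper above stays importable even where the
    # (heavy) transformers dependency is not installed.
    from transformers import pipeline

    clf = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    for text in [
        "This new update is amazing - so much faster!",
        "This feature is broken and support isn't helping.",
    ]:
        result = clf(text)[0]  # a dict like {"label": ..., "score": ...}
        print(f"{to_card_label(result['label'])}: {text}")
```

The pipeline also returns a confidence score per prediction, which can be used to route low-confidence inputs to human review.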

---

## Strengths

- Extremely lightweight: good for mobile and low-latency use
- Fine-tuned on a benchmark sentiment dataset (SST-2)
- Strong out-of-the-box performance for informal English

---

## Limitations

- Binary only (positive/negative): no neutral class or nuanced emotion detection
- Trained on English movie reviews, so it may misinterpret sarcasm, cultural tone, or domain-specific feedback
- Not ideal for clinical, legal, or safety-critical sentiment tasks

---

## Model Details

- Architecture: DistilBERT
- Base model: `distilbert-base-uncased`
- Fine-tuning dataset: SST-2 (Stanford Sentiment Treebank)
- Max input: 512 tokens
- Classes: `Positive`, `Negative`
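The 512-token cap means longer documents must be truncated or chunked before inference. A minimal sliding-window chunker sketch (whitespace-split words stand in for subword tokens here, so this is an approximation; a real deployment would count tokens with the model's own tokenizer and leave headroom below the limit):

```python
def chunk_words(text: str, max_len: int = 512, stride: int = 256):
    """Split text into overlapping word windows of at most max_len words.

    Overlap (max_len - stride words) keeps sentences that straddle a
    window boundary visible in at least one chunk.
    """
    words = text.split()
    if len(words) <= max_len:
        return [" ".join(words)] if words else []
    chunks = []
    for start in range(0, len(words), stride):
        chunks.append(" ".join(words[start:start + max_len]))
        if start + max_len >= len(words):
            break  # last window already reaches the end of the text
    return chunks
```

Classifying each chunk and aggregating the results (e.g. majority vote or averaged scores) is one common way to extend a fixed-length classifier to long documents.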

---

## License

MIT License: free to use, adapt, and deploy commercially.

---

## Authorship Note

This model card was written by [Sarah Mancinho](https://huggingface.co/Sarah-h-h) as part of a public AI/LLM contribution series on Hugging Face.

Original model: [`distilbert-base-uncased-finetuned-sst-2-english`](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)

---
