
Model Card for SageDetox_detox_classification_contrastive

This model (Voronin et al., 2025, TBA) is one of four model variants developed for the CLEF-2025 Multilingual Text Detoxification shared task. The idea was to apply a Sage-T5-like approach to the text detoxification task. The main model combines three loss functions:

  • seq2seq loss for paraphrase generation,
  • classification loss for token-level toxicity detection,
  • contrastive loss for improved semantic representation learning.

To evaluate the approach, an mT0-large backbone was taken and four models were trained: with the seq2seq loss only, with seq2seq and classification losses, with seq2seq and contrastive losses, and with all three losses. This model is the final variant, trained with all three losses.
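
Since the accompanying paper is TBA, the exact objective is not reproduced here. As a rough, hypothetical sketch only (the loss weights and the triplet form of the contrastive term are assumptions, not the authors' published formulation), the three terms could be combined like this:

import torch.nn.functional as F

def combined_loss(seq2seq_loss, token_logits, token_labels,
                  anchor_emb, positive_emb, negative_emb,
                  alpha=1.0, beta=1.0):
    # Token-level toxicity detection, framed as per-token classification
    cls_loss = F.cross_entropy(token_logits.view(-1, token_logits.size(-1)),
                               token_labels.view(-1))
    # Contrastive term for semantic representation learning, sketched here
    # as a triplet loss over sentence embeddings (an illustrative choice)
    contr_loss = F.triplet_margin_loss(anchor_emb, positive_emb, negative_emb)
    # alpha and beta are illustrative weights, not the trained values
    return seq2seq_loss + alpha * cls_loss + beta * contr_loss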

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Alexandr Voronin, Nikita Sushko, Daniil Moskovsky
  • Model type: mT0-large
  • Language(s) (NLP): am, ar, de, en, es, fr, he, hi, hin, it, ja, ru, tt, uk, zh
  • License: MIT
  • Finetuned from model: mT0-large

Uses

This model is intended for text detoxification in 15 languages: Amharic, Arabic, German, English, Spanish, French, Hebrew, Hindi, Hinglish, Italian, Japanese, Russian, Tatar, Ukrainian, and Chinese.

Direct Use

The model may be directly used for text detoxification tasks.
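
A minimal direct-use sketch with the standard 🤗 transformers Auto classes (max_new_tokens=128 is an illustrative setting, and <toxic text> is a placeholder):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = 'alexandro767/SageDetox_detox_classification_contrastive'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prompt format as in the pipeline example below
inputs = tokenizer('Rewrite in non-toxic way in English: <toxic text>',
                   return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))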

How to Get Started with the Model

import transformers

# Load the detoxification model as a text2text-generation pipeline
pipe = transformers.pipeline('text2text-generation', 'alexandro767/SageDetox_detox_classification_contrastive')

# Prompts follow the format used in this card's example:
# "Rewrite in non-toxic way in <Language>: <text>"
pipe('Rewrite in non-toxic way in Russian: Ненавижу блять C-GAN')
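
For more control over decoding, standard generation keyword arguments can be passed through the pipeline; the values below are illustrative, not tuned for this model:

pipe('Rewrite in non-toxic way in Russian: Ненавижу блять C-GAN',
     max_new_tokens=128, num_beams=4)
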
Safetensors

  • Model size: 1.23B params
  • Tensor type: F32
