SPLADE-BERT-Tiny-Distil

This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-tiny using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Number of Parameters: ~4.42M (F32 safetensors)
  • Language: en
  • License: mit

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
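Concretely, SpladePooling turns the 30522-way MLM logits at each token position into a single sparse vector per input: a log-saturated ReLU followed by max pooling over the sequence. A minimal sketch of that computation (illustrative tensor names; not the library's internal code):

import torch

def splade_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # mlm_logits: (batch, seq_len, 30522) from BertForMaskedLM
    # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    scores = torch.log1p(torch.relu(mlm_logits))    # log(1 + ReLU(x)) keeps weights sparse and saturated
    scores = scores * attention_mask.unsqueeze(-1)  # padding positions must not win the max
    return scores.max(dim=1).values                 # (batch, 30522): one weight per vocabulary term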

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Tiny-distil-v5")
# Run inference
queries = [
    "lists of narcotic medications",
]
documents = [
    'The following list of narcotics is just a sample of some of the names you may hear either in a medical setting or on the streets: 1  Heroin. 2  Opium. 3  Oxycontin. Oxycodone. 4  Hydrocodone. Hydromorphone. 5  Fentanyl. Buprenorphine. 6  Levorphanol. 7  Codeine. Lorcet. 8  Lortab. 9  Norco. 10  Oncet. Procet. 11  Vicodin.  Xodol. Zydone.',
    'When used in a legal context in the U.S., a narcotic drug is simply one that is totally prohibited, or one that is used in violation of governmental regulation, such as heroin or cannabis. In the medical community, the term is more precisely defined and generally does not carry the same negative connotations.',
    'Tomb is a vault for the dead (an enclosed grave). Raider means someone who attacks the enemy or steals. The term applies to grave robbers. Or treasure hunters. It is also the name of a popular multi platform video game(Tomb Raider). Which features a main character who is a explorer/treasure hunter(Lara Croft).',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[14.6490, 16.3028,  1.8537]])
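Because the non-zero dimensions correspond to vocabulary terms, the embeddings are directly interpretable. A quick sketch using the decode helper exposed by SparseEncoder to inspect the heaviest terms (the top_k value and printed output are illustrative):

# Map the sparse query embedding back to (token, weight) pairs
decoded_query = model.decode(query_embeddings[0], top_k=10)
print(decoded_query)
# e.g. [('narcotic', ...), ('medications', ...), ('list', ...), ...]  (illustrative)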

Evaluation

Metrics

Sparse Information Retrieval

Metric                  Value
dot_accuracy@1          0.4618
dot_accuracy@3          0.7832
dot_accuracy@5          0.8856
dot_accuracy@10         0.954
dot_precision@1         0.4618
dot_precision@3         0.267
dot_precision@5         0.184
dot_precision@10        0.1001
dot_recall@1            0.4473
dot_recall@3            0.7684
dot_recall@5            0.8766
dot_recall@10           0.9486
dot_ndcg@10             0.7104
dot_mrr@10              0.6365
dot_map@100             0.6323
query_active_dims       18.0406
query_sparsity_ratio    0.9994
corpus_active_dims      90.4874
corpus_sparsity_ratio   0.997
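The sparsity ratios follow directly from the active-dimension counts: ratio = 1 - active_dims / 30522. A quick check of the figures above:

# sparsity_ratio = 1 - active_dims / vocab_size
vocab_size = 30522
print(1 - 18.0406 / vocab_size)  # 0.99940... -> query_sparsity_ratio 0.9994
print(1 - 90.4874 / vocab_size)  # 0.99703... -> corpus_sparsity_ratio 0.997

On average a query activates only ~18 of the 30522 vocabulary dimensions and a document ~90, which is what makes inverted-index style retrieval with this model cheap.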

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,400,000 training samples
  • Columns: query, positive, negative, and label
  • Approximate statistics based on the first 1000 samples:
    • query: string, min 4 / mean 8.98 / max 40 tokens
    • positive: string, min 20 / mean 80.67 / max 298 tokens
    • negative: string, min 17 / mean 76.49 / max 238 tokens
    • label: list of 1 element
  • Samples:
    • query: what was the congressional reconstruction act?
      positive: On Mar. 2, 1867, Congress enacted the Reconstruction Act, which, supplemented later by three related acts, divided the South (except Tennessee) into five military districts in which the authority of the army commander was supreme.y Aug., 1868, six states (Arkansas, North Carolina, South Carolina, Louisiana, Alabama, and Florida) had been readmitted to the Union, having ratified the Fourteenth Amendment as required by the first Reconstruction Act.
      negative: Reconstruction Acts of 1867-1868. Johnson s vetoes of these measures were overridden by Congress, repeating a familiar pattern. Nearly two years following the end of the Civil War, Congress finally forged a complete plan for reconstruction.Three measures were passed in 1867 as well as additional legislation the following year.early two years following the end of the Civil War, Congress finally forged a complete plan for reconstruction. Three measures were passed in 1867 as well as additional legislation the following year.
      label: [0.25]
    • query: what are two similarities of atm and debit cards
      positive: Similarities of ATM Card and Debit Card ATM card and debit card are made of plastic and both have the same appearance. Both are issued by the bank and provide the facility like balance inquiry, withdrawal of money or make payment online and much more.
      negative: Debit cards offer the convenience of a credit but work in a different way. Debit cards draw money directly from your checking account when you make the purchase. They do this by placing a hold on the amount of the purchase.
      label: [5.547402381896973]
    • query: who makes runway enduro tires
      positive: Who makes Runway Enduro tires? Reference.com https://www.reference.com/vehicles/runway-enduro-tires-e6fad1dc190d5183 Runway Enduro tires are manufactured by GITI Tire, one the largest tire manufacturing companies in Asia and the 10th largest in the world. GITI Tire is based in …
      negative: Runway Tires Global Passenger Car, 4×4/SUV & Light Truck/Van …
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMarginMSELoss",
        "document_regularizer_weight": 0.3,
        "query_regularizer_weight": 0.5
    }
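Reconstructed in code, this is roughly how the loss would be configured with the sentence-transformers v5 sparse-encoder API (a sketch under that assumption; dataset and trainer setup omitted):

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

model = SparseEncoder("yosefw/SPLADE-BERT-Tiny-distil-v5")

# SpladeLoss wraps a ranking loss (here MarginMSE distillation against
# teacher scores, hence the float labels above) and adds FLOPS-style
# regularizers that push query and document vectors toward sparsity.
loss = SpladeLoss(
    model=model,
    loss=SparseMarginMSELoss(model),
    document_regularizer_weight=0.3,
    query_regularizer_weight=0.5,
)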
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • gradient_accumulation_steps: 2
  • learning_rate: 8e-05
  • num_train_epochs: 6
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.025
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
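With a per-device batch size of 24 and 2 gradient-accumulation steps, the effective batch size is 48; over the 1,400,000 training samples that gives 1,400,000 / 48 ≈ 29,167 optimizer steps per epoch, matching the step counts in the training logs below.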

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 8e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.025
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step     Training Loss   dot_ndcg@10
1.0     29167    9918.757        0.6805
2.0     58334    13.1599         0.6956
3.0     87501    11.9647         0.7034
4.0     116668   10.5555         0.7076
5.0     145835   9.6642          0.7089
6.0     175002   9.2451          0.7104

  • The epoch 6 row denotes the saved checkpoint; its dot_ndcg@10 of 0.7104 matches the Evaluation section above.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 4.0.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMarginMSELoss

@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}