# SPLADE-BERT-Tiny-Distil

This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-tiny using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details

### Model Description

- **Model Type:** SPLADE Sparse Encoder
- **Base model:** prajjwal1/bert-tiny
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Language:** en
- **License:** mit

### Model Sources

- **Documentation:** Sentence Transformers Documentation
- **Documentation:** Sparse Encoder Documentation
- **Repository:** Sentence Transformers on GitHub
- **Hugging Face:** Sparse Encoders on Hugging Face
### Full Model Architecture

```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
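Conceptually, `SpladePooling` with `'max'` pooling and `'relu'` activation turns the MLM head's per-token logits into one term-weight vector over the vocabulary. A minimal NumPy sketch of the standard SPLADE formulation (an illustration with toy numbers, not the library's actual implementation):

```python
import numpy as np

def splade_pooling(mlm_logits: np.ndarray) -> np.ndarray:
    """SPLADE max pooling: w_j = max_i log(1 + relu(logit_ij)).

    mlm_logits: (seq_len, vocab_size) MLM head outputs for one sequence.
    Returns a (vocab_size,) non-negative term-weight vector; terms with no
    positive logit anywhere in the sequence get weight exactly 0.
    """
    activated = np.log1p(np.maximum(mlm_logits, 0.0))  # relu, then log(1 + x)
    return activated.max(axis=0)                        # max over token positions

# Toy check: 3 token positions over a 5-term "vocabulary".
logits = np.array([
    [ 1.0, -2.0, 0.0,  3.0, -1.0],
    [ 0.5,  4.0, 0.0, -1.0, -1.0],
    [-1.0, -1.0, 0.0,  0.2, -0.5],
])
weights = splade_pooling(logits)  # columns 2 and 4 stay at 0.0
```

The ReLU plus log-saturation is what keeps the 30522-dimensional output sparse and non-negative, so it can be indexed like term weights in an inverted index.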
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Tiny-distil-v5")

# Run inference
queries = [
    "lists of narcotic medications",
]
documents = [
    'The following list of narcotics is just a sample of some of the names you may hear either in a medical setting or on the streets: 1 Heroin. 2 Opium. 3 Oxycontin. Oxycodone. 4 Hydrocodone. Hydromorphone. 5 Fentanyl. Buprenorphine. 6 Levorphanol. 7 Codeine. Lorcet. 8 Lortab. 9 Norco. 10 Oncet. Procet. 11 Vicodin. Xodol. Zydone.',
    'When used in a legal context in the U.S., a narcotic drug is simply one that is totally prohibited, or one that is used in violation of governmental regulation, such as heroin or cannabis. In the medical community, the term is more precisely defined and generally does not carry the same negative connotations.',
    'Tomb is a vault for the dead (an enclosed grave). Raider means someone who attacks the enemy or steals. The term applies to grave robbers. Or treasure hunters. It is also the name of a popular multi platform video game(Tomb Raider). Which features a main character who is a explorer/treasure hunter(Lara Croft).',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[14.6490, 16.3028, 1.8537]])
```
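Because both embeddings are non-negative and mostly zero, the dot-product similarity above only accumulates over terms that are active in *both* vectors, which is what makes sparse retrieval inverted-index friendly. A toy sketch with made-up token weights (these are illustrative, not actual model outputs):

```python
# Hypothetical sparse vectors as {token: weight} maps, for illustration only.
query_vec = {"narcotic": 1.8, "medications": 1.2, "list": 0.6}
doc_vec = {"narcotic": 2.1, "heroin": 1.5, "list": 0.9, "opium": 1.1}

def sparse_dot(a: dict, b: dict) -> float:
    # Iterate over the smaller vector and sum products of shared terms only.
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    return sum(w * large[t] for t, w in small.items() if t in large)

score = sparse_dot(query_vec, doc_vec)  # 1.8*2.1 + 0.6*0.9 = 4.32
```

Terms that appear in only one of the two vectors ("medications", "heroin", "opium") contribute nothing to the score.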
## Evaluation

### Metrics

#### Sparse Information Retrieval

- Evaluated with `SparseInformationRetrievalEvaluator`
| Metric               | Value   |
|:---------------------|:--------|
| dot_accuracy@1       | 0.4618  |
| dot_accuracy@3       | 0.7832  |
| dot_accuracy@5       | 0.8856  |
| dot_accuracy@10      | 0.954   |
| dot_precision@1      | 0.4618  |
| dot_precision@3      | 0.267   |
| dot_precision@5      | 0.184   |
| dot_precision@10     | 0.1001  |
| dot_recall@1         | 0.4473  |
| dot_recall@3         | 0.7684  |
| dot_recall@5         | 0.8766  |
| dot_recall@10        | 0.9486  |
| dot_ndcg@10          | 0.7104  |
| dot_mrr@10           | 0.6365  |
| dot_map@100          | 0.6323  |
| query_active_dims    | 18.0406 |
| query_sparsity_ratio | 0.9994  |
| corpus_active_dims   | 90.4874 |
| corpus_sparsity_ratio | 0.997  |
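The sparsity metrics relate to the active-dimension counts as `sparsity_ratio = 1 - active_dims / vocab_size`. A quick check reproducing the reported values with the model's 30522-term vocabulary:

```python
VOCAB_SIZE = 30522  # output dimensionality of this model

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    # Fraction of dimensions that are zero, on average.
    return 1.0 - active_dims / vocab_size

query_ratio = sparsity_ratio(18.0406)   # ≈ 0.9994, as reported
corpus_ratio = sparsity_ratio(90.4874)  # ≈ 0.9970, as reported
```

In other words, a query activates about 18 of 30522 terms on average, and a document about 90.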
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 1,400,000 training samples
- Columns: `query`, `positive`, `negative`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | query | positive | negative | label |
  |:--------|:------|:---------|:---------|:------|
  | type    | string | string | string | list |
  | details | min: 4 tokens<br>mean: 8.98 tokens<br>max: 40 tokens | min: 20 tokens<br>mean: 80.67 tokens<br>max: 298 tokens | min: 17 tokens<br>mean: 76.49 tokens<br>max: 238 tokens | size: 1 elements |

- Samples:

  | query | positive | negative | label |
  |:------|:---------|:---------|:------|
  | what was the congressional reconstruction act? | On Mar. 2, 1867, Congress enacted the Reconstruction Act, which, supplemented later by three related acts, divided the South (except Tennessee) into five military districts in which the authority of the army commander was supreme.y Aug., 1868, six states (Arkansas, North Carolina, South Carolina, Louisiana, Alabama, and Florida) had been readmitted to the Union, having ratified the Fourteenth Amendment as required by the first Reconstruction Act. | Reconstruction Acts of 1867-1868. Johnson s vetoes of these measures were overridden by Congress, repeating a familiar pattern. Nearly two years following the end of the Civil War, Congress finally forged a complete plan for reconstruction. Three measures were passed in 1867 as well as additional legislation the following year.early two years following the end of the Civil War, Congress finally forged a complete plan for reconstruction. Three measures were passed in 1867 as well as additional legislation the following year. | [0.25] |
  | what are two similarities of atm and debit cards | Similarities of ATM Card and Debit Card ATM card and debit card are made of plastic and both have the same appearance. Both are issued by the bank and provide the facility like balance inquiry, withdrawal of money or make payment online and much more. | Debit cards offer the convenience of a credit but work in a different way. Debit cards draw money directly from your checking account when you make the purchase. They do this by placing a hold on the amount of the purchase. | [5.547402381896973] |
  | who makes runway enduro tires | Who makes Runway Enduro tires? | Reference.com https://www.reference.com/vehicles/runway-enduro-tires-e6fad1dc190d5183 Runway Enduro tires are manufactured by GITI Tire, one the largest tire manufacturing companies in Asia and the 10th largest in the world. GITI Tire is based in … Runway Tires Global Passenger Car, 4×4/SUV & Light Truck/Van … | |

- Loss: `SpladeLoss` with these parameters:

  ```json
  {
      "loss": "SparseMarginMSELoss",
      "document_regularizer_weight": 0.3,
      "query_regularizer_weight": 0.5
  }
  ```
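A minimal NumPy sketch of what this objective combines, assuming the standard formulations from the papers cited at the end of this card (margin-MSE distillation against a teacher's score margins, plus a FLOPS sparsity regularizer on query and document vectors). The batch values are toy numbers, not real training data:

```python
import numpy as np

def margin_mse(s_pos, s_neg, teacher_margin):
    # Match the student's (pos - neg) score margin to the teacher's margin.
    return np.mean(((s_pos - s_neg) - teacher_margin) ** 2)

def flops_reg(embs):
    # FLOPS regularizer: sum over vocab dims of the squared mean activation.
    return np.sum(np.mean(np.abs(embs), axis=0) ** 2)

# Toy batch: 2 queries with one positive / one negative doc each, vocab of 4.
q = np.array([[1.0, 0.0, 0.5, 0.0], [0.0, 2.0, 0.0, 0.0]])
d_pos = np.array([[2.0, 0.0, 1.0, 0.0], [0.0, 1.5, 0.0, 0.0]])
d_neg = np.array([[0.0, 1.0, 0.0, 0.5], [1.0, 0.0, 0.0, 0.0]])
teacher_margin = np.array([2.0, 3.0])  # labels, as in the dataset above

s_pos = np.sum(q * d_pos, axis=1)  # dot-product scores q · d+
s_neg = np.sum(q * d_neg, axis=1)  # dot-product scores q · d-
docs = np.concatenate([d_pos, d_neg])

# Regularizer weights 0.3 / 0.5 taken from the loss parameters above.
loss = margin_mse(s_pos, s_neg, teacher_margin) \
    + 0.3 * flops_reg(docs) + 0.5 * flops_reg(q)
```

This is why each training sample carries a scalar `label`: it is the teacher margin that the distillation term tries to reproduce, while the FLOPS terms push both query and document vectors toward sparsity.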
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 24
- `per_device_eval_batch_size`: 24
- `gradient_accumulation_steps`: 2
- `learning_rate`: 8e-05
- `num_train_epochs`: 6
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.025
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `push_to_hub`: True
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 24
- `per_device_eval_batch_size`: 24
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.025
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch   | Step       | Training Loss | dot_ndcg@10 |
|:-------:|:----------:|:-------------:|:-----------:|
| 1.0     | 29167      | 9918.757      | 0.6805      |
| 2.0     | 58334      | 13.1599       | 0.6956      |
| 3.0     | 87501      | 11.9647       | 0.7034      |
| 4.0     | 116668     | 10.5555       | 0.7076      |
| 5.0     | 145835     | 9.6642        | 0.7089      |
| **6.0** | **175002** | **9.2451**    | **0.7104**  |

The bold row denotes the saved checkpoint.
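The logged step counts are consistent with the hyperparameters above, assuming training ran on a single device (effective batch size 24 × 2 gradient-accumulation steps = 48):

```python
import math

dataset_size = 1_400_000                                 # training samples
effective_batch = 24 * 2                                 # per-device batch × grad accumulation
steps_per_epoch = math.ceil(dataset_size / effective_batch)  # 29167, matching epoch 1.0
total_steps = steps_per_epoch * 6                            # 175002, matching epoch 6.0
warmup_steps = int(0.025 * total_steps)                      # warmup_ratio 0.025 → 4375 steps
```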
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.0.0
- Transformers: 4.53.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss

```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMarginMSELoss

```bibtex
@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
}
```
#### FlopsLoss

```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```