---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:197462
- loss:MSELoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:who sings the song i don''t want to work'
  sentences:
  - The Invisible Man Griffin is the surname of the story's protagonist. His name is not mentioned until about halfway through the book. Consumed with his greed for power and fame, he is the model of science without humanity. A gifted young student, he becomes interested in the science of refraction. During his experiments, he accidentally discovers chemicals (combined with an unspecified kind of radiation) that would make living tissue invisible. Obsessed with his discovery, he tries the experiment on himself and becomes invisible. However, he does not know how to reverse the process, and he slowly discovers that the advantages of being invisible do not outweigh the disadvantages and the problems he faces. Thus begins his downfall as he takes the road to crime for his survival, revealing in the process his lack of conscience, inhumanity and complete selfishness. He progresses from obsession to fanaticism, to insanity, and finally to his fateful end.
  - 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:who did the united states become independent from'
  - Jordan Belfort Jordan Ross Belfort (/ˈbɛlfɔːrt/; born July 9, 1962) is an American author, motivational speaker, and former stockbroker. In 1999, he pleaded guilty to fraud and related crimes in connection with stock-market manipulation and running a boiler room as part of a penny-stock scam. Belfort spent 22 months in prison as part of an agreement under which he gave testimony against numerous partners and subordinates in his fraud scheme.[5] He published the memoir The Wolf of Wall Street, which was adapted into a film and released in 2013.
- source_sentence: London water supply infrastructure Most of London's water comes from non-tidal parts of the Thames and Lea, with the remainder being abstracted from underground sources.[22]
  sentences:
  - 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:what is the number on the hogwarts express'
  - 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:when did roughing the kicker become a rule'
  - Agora Early in Greek history (18th century–8th century BC), free-born citizens would gather in the agora for military duty or to hear statements of the ruling king or council. Later, the Agora also served as a marketplace where merchants kept stalls or shops to sell their goods amid colonnades. This attracted artisans who built workshops nearby.[2]
- source_sentence: 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:what is meant by lagging and leading current in ac circuit'
  sentences:
  - .org The domain name org is a generic top-level domain (gTLD) of the Domain Name System (DNS) used in the Internet. The name is truncated from organization. It was one of the original domains established in 1985, and has been operated by the Public Interest Registry since 2003. The domain was originally intended for non-profit entities, but this restriction was not enforced and has been removed. The domain is commonly used by schools, open-source projects, and communities, but also by some for-profit entities. The number of registered domains in org has increased from fewer than one million in the 1990s, to ten million as of June 2013.
  - 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:how many episode in season 1 game of thrones'
  - 'Instruct: Given a web search query, retrieve relevant passages that answer the query Query:when is season 11 of doctor who coming out'
- source_sentence: Gabriel Vlad (born April 9, 1969) in Bucharest, is a former Romanian former rugby union football player.
  sentences:
  - As of May 2013, The Jewish Tribune had a circulation of 60,500 copies a week which made it, for a time, the largest Jewish weekly publication in Canada.
  - Cunjamba Dima is a city and commune of Angola, located in the province of Cuando Cubango.
  - He also acted in the National award winning Tamil movie Vazhakku Enn 18/9, directed by Balaji Sakthivel.
- source_sentence: The actress was thirteen when she was offered the role of Annie.
  sentences:
  - All profits from the sale and streaming of the song go to music education supported by the CMA Foundation.
  - Narsingh Temple is situated at the across of the village just across confluence of Magri State village.
  - Contrasting significantly from other soccer leagues in the U.S., WLS intends to be an open entry, promotion and relegation competition.
datasets:
- sentence-transformers/natural-questions
- sentence-transformers/gooaq
- sentence-transformers/wikipedia-en-sentences
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- negative_mse
model-index:
- name: SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: NanoMSMARCO
      type: NanoMSMARCO
    metrics:
    - type: cosine_accuracy@1
      value: 0.26
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.54
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.62
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.74
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.26
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.18
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.124
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07400000000000001
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.26
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.54
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.62
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.74
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.49705652353860524
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.4194365079365079
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.43104169907220663
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: NanoNFCorpus
      type: NanoNFCorpus
    metrics:
    - type: cosine_accuracy@1
      value: 0.32
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.44
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.46
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.56
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.32
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2533333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.192
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.15600000000000003
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.029912973644699657
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.04555227289257262
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.05270229388942461
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.07692701147361766
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.20504617696332558
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.3906269841269841
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.07524365929088889
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: NanoNQ
      type: NanoNQ
    metrics:
    - type: cosine_accuracy@1
      value: 0.24
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.46
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.62
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.72
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.24
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.15333333333333332
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.12400000000000003
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07600000000000001
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.23
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.45
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.58
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.68
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.44938843799218575
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.3822460317460316
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.3789963914205589
      name: Cosine Map@100
  - task:
      type: nano-beir
      name: Nano BEIR
    dataset:
      name: NanoBEIR mean
      type: NanoBEIR_mean
    metrics:
    - type: cosine_accuracy@1
      value: 0.2733333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.48
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.5666666666666668
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.6733333333333333
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.2733333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.19555555555555557
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1466666666666667
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10200000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.17330432454823322
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.34518409096419084
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.41756743129647483
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.4989756704912059
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.3838303794980389
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.39743650793650787
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.2950939165945515
      name: Cosine Map@100
  - task:
      type: knowledge-distillation
      name: Knowledge Distillation
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: negative_mse
      value: -0.04732320085167885
      name: Negative Mse
---

# SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [nq](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [nq](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen3Model
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/Qwen3-Embedding-0.6B-10-layers")
# Run inference
sentences = [
    'The actress was thirteen when she was offered the role of Annie.',
    'Contrasting significantly from other soccer leagues in the U.S., WLS intends to be an open entry, promotion and relegation competition.',
    'Narsingh Temple is situated at the across of the village just across confluence of Magri State village.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
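For retrieval, the evaluation below prepends an instruction to the queries but not to the passages, so asymmetric usage should do the same. A minimal sketch reusing the `model` loaded above; the query and document texts are illustrative placeholders:

```python
# Queries get the instruction prefix used during evaluation; documents do not.
query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"

queries = ["when did roughing the kicker become a rule"]  # illustrative query
documents = [
    "Roughing the kicker is a penalty in gridiron football ...",  # illustrative passages
    "Agora Early in Greek history, free-born citizens would gather in the agora ...",
]

query_embeddings = model.encode(queries, prompt=query_prompt)
document_embeddings = model.encode(documents)

# Rank documents by cosine similarity (the model's similarity function)
scores = model.similarity(query_embeddings, document_embeddings)
print(scores)
```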
## Evaluation

### Metrics

#### Information Retrieval

* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "query_prompt": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"
  }
  ```

| Metric              | NanoMSMARCO | NanoNFCorpus | NanoNQ     |
|:--------------------|:------------|:-------------|:-----------|
| cosine_accuracy@1   | 0.26        | 0.32         | 0.24       |
| cosine_accuracy@3   | 0.54        | 0.44         | 0.46       |
| cosine_accuracy@5   | 0.62        | 0.46         | 0.62       |
| cosine_accuracy@10  | 0.74        | 0.56         | 0.72       |
| cosine_precision@1  | 0.26        | 0.32         | 0.24       |
| cosine_precision@3  | 0.18        | 0.2533       | 0.1533     |
| cosine_precision@5  | 0.124       | 0.192        | 0.124      |
| cosine_precision@10 | 0.074       | 0.156        | 0.076      |
| cosine_recall@1     | 0.26        | 0.0299       | 0.23       |
| cosine_recall@3     | 0.54        | 0.0456       | 0.45       |
| cosine_recall@5     | 0.62        | 0.0527       | 0.58       |
| cosine_recall@10    | 0.74        | 0.0769       | 0.68       |
| **cosine_ndcg@10**  | **0.4971**  | **0.205**    | **0.4494** |
| cosine_mrr@10       | 0.4194      | 0.3906       | 0.3822     |
| cosine_map@100      | 0.431       | 0.0752       | 0.379      |
#### Nano BEIR

* Dataset: `NanoBEIR_mean`
* Evaluated with [NanoBEIREvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "query_prompts": {
          "msmarco": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:",
          "nfcorpus": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:",
          "nq": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"
      }
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.2733     |
| cosine_accuracy@3   | 0.48       |
| cosine_accuracy@5   | 0.5667     |
| cosine_accuracy@10  | 0.6733     |
| cosine_precision@1  | 0.2733     |
| cosine_precision@3  | 0.1956     |
| cosine_precision@5  | 0.1467     |
| cosine_precision@10 | 0.102      |
| cosine_recall@1     | 0.1733     |
| cosine_recall@3     | 0.3452     |
| cosine_recall@5     | 0.4176     |
| cosine_recall@10    | 0.499      |
| **cosine_ndcg@10**  | **0.3838** |
| cosine_mrr@10       | 0.3974     |
| cosine_map@100      | 0.2951     |
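The mean scores above can be recomputed with the evaluator and parameters shown; a minimal sketch, assuming `model` is the SentenceTransformer loaded in the Usage section and default metric settings are acceptable:

```python
from sentence_transformers.evaluation import NanoBEIREvaluator

query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"
evaluator = NanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    query_prompts={name: query_prompt for name in ["msmarco", "nfcorpus", "nq"]},
)
results = evaluator(model)
# Result keys follow the metric names in this card, e.g.:
print(results["NanoBEIR_mean_cosine_ndcg@10"])
```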
#### Knowledge Distillation

* Evaluated with [MSEEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator)

| Metric           | Value       |
|:-----------------|:------------|
| **negative_mse** | **-0.0473** |
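Negative MSE measures how closely the student's embeddings match the teacher's (closer to zero is better). A sketch of setting up such an evaluator; the held-out sentences are illustrative:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import MSEEvaluator

teacher = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
held_out = [
    "The actress was thirteen when she was offered the role of Annie.",
    "Cunjamba Dima is a city and commune of Angola, located in the province of Cuando Cubango.",
]

# Source sentences are embedded by the teacher, target sentences by the student.
mse_evaluator = MSEEvaluator(
    source_sentences=held_out,
    target_sentences=held_out,
    teacher_model=teacher,
)
results = mse_evaluator(model)  # returns a dict including the negative MSE metric
```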
## Training Details

### Training Dataset

#### nq

* Dataset: [nq](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 197,462 training samples
* Columns: text and label
* Approximate statistics based on the first 1000 samples:
  | | text | label |
  |:--------|:-------|:------|
  | type | string | list |
  | details | | |
* Samples:
  | text | label |
  |:-----|:------|
  | Instruct: Given a web search query, retrieve relevant passages that answer the query<br>Query:the movie bernie based on a true story | [-0.05126953125, -0.0020294189453125, 0.00152587890625, 0.060791015625, 0.022216796875, ...] |
  | College World Series The College World Series, or CWS, is an annual June baseball tournament held in Omaha, Nebraska. The CWS is the culmination of the National Collegiate Athletic Association (NCAA) Division I Baseball Championship tournament—featuring 64 teams in the first round—which determines the NCAA Division I college baseball champion. The eight participating teams are split into two, four-team, double-elimination brackets, with the winners of each bracket playing in a best-of-three championship series. | [0.033935546875, -0.0908203125, -0.010498046875, 0.0625, -0.01263427734375, ...] |
  | Instruct: Given a web search query, retrieve relevant passages that answer the query<br>Query:does the femoral nerve turn into the saphenous nerve | [0.052978515625, -0.0028228759765625, -0.0022430419921875, 0.0732421875, 0.044677734375, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
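Each training row pairs a text with the teacher's precomputed embedding, and MSELoss regresses the student's embedding onto that target. A minimal sketch of this distillation setup, assuming a hypothetical student checkpoint path and two illustrative rows (the actual run used the 197,462-sample dataset above):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MSELoss

teacher = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
student = SentenceTransformer("path/to/layer-pruned-student")  # hypothetical path

texts = [
    "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:the movie bernie based on a true story",
    "College World Series The College World Series, or CWS, is an annual June baseball tournament ...",
]
# "label" holds the teacher embedding that the student should reproduce.
train_dataset = Dataset.from_dict({
    "text": texts,
    "label": teacher.encode(texts).tolist(),
})

loss = MSELoss(student)
trainer = SentenceTransformerTrainer(model=student, train_dataset=train_dataset, loss=loss)
trainer.train()
```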
### Evaluation Datasets

#### nq

* Dataset: [nq](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 3,000 evaluation samples
* Columns: text and label
* Approximate statistics based on the first 1000 samples:
  | | text | label |
  |:--------|:-------|:------|
  | type | string | list |
  | details | | |
* Samples:
  | text | label |
  |:-----|:------|
  | Instruct: Given a web search query, retrieve relevant passages that answer the query<br>Query:who was the heir apparent of the austro-hungarian empire in 1914 | [0.0262451171875, 0.0556640625, -0.0, -0.03076171875, -0.05712890625, ...] |
  | Instruct: Given a web search query, retrieve relevant passages that answer the query<br>Query:who played tommy in coward of the county | [-0.00848388671875, -0.02294921875, -0.00182342529296875, 0.060546875, -0.021240234375, ...] |
  | Vertebra The vertebral arch is formed by pedicles and laminae. Two pedicles extend from the sides of the vertebral body to join the body to the arch. The pedicles are short thick processes that extend, one from each side, posteriorly, from the junctions of the posteriolateral surfaces of the centrum, on its upper surface. From each pedicle a broad plate, a lamina, projects backwards and medialwards to join and complete the vertebral arch and form the posterior border of the vertebral foramen, which completes the triangle of the vertebral foramen.[6] The upper surfaces of the laminae are rough to give attachment to the ligamenta flava. These ligaments connect the laminae of adjacent vertebra along the length of the spine from the level of the second cervical vertebra. Above and below the pedicles are shallow depressions called vertebral notches (superior and inferior). When the vertebrae articulate the notches align with those on adjacent vertebrae and these form the openings of the int... | [0.062255859375, -0.005706787109375, -0.009765625, 0.035400390625, -0.0125732421875, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)

#### gooaq

* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 3,000 evaluation samples
* Columns: text and label
* Approximate statistics based on the first 1000 samples:
  | | text | label |
  |:--------|:-------|:------|
  | type | string | list |
  | details | | |
* Samples:
  | text | label |
  |:-----|:------|
  | Instruct: Given a web search query, retrieve relevant passages that answer the query<br>Query:what essential oils are soothing? | [-0.025146484375, 0.06591796875, -0.0025634765625, 0.0732421875, -0.046630859375, ...] |
  | Titles of books should be underlined or put in italics . (Titles of stories, essays and poems are in "quotation marks.") Refer to the text specifically as a novel, story, essay, memoir, or poem, depending on what it is. | [-0.006988525390625, -0.050537109375, -0.007476806640625, -0.07177734375, -0.049560546875, ...] |
  | Dakine Cyclone Wet/Dry 32L Backpack. Born from the legacy of our most iconic surf pack, the Cyclone Collection is a family of super-technical and durable wet/dry packs and bags. | [0.0016632080078125, 0.04150390625, -0.01324462890625, 0.0234375, 0.03173828125, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)

#### wikipedia

* Dataset: [wikipedia](https://huggingface.co/datasets/sentence-transformers/wikipedia-en-sentences) at [4a0972d](https://huggingface.co/datasets/sentence-transformers/wikipedia-en-sentences/tree/4a0972dcb781b5b5d27799798f032606421dd422)
* Size: 3,000 evaluation samples
* Columns: text and label
* Approximate statistics based on the first 1000 samples:
  | | text | label |
  |:--------|:-------|:------|
  | type | string | list |
  | details | | |
* Samples:
  | text | label |
  |:-----|:------|
  | The daughter of Vice-admiral George Davies and Julia Hume, she spent her younger years on board the ship he was stationed, the Griper. | [0.0361328125, 0.01904296875, -0.003662109375, 0.0247802734375, 0.0140380859375, ...] |
  | The impetus for the project began when Amalgamated Dynamics, hired to provide the practical effects for The Thing, a prequel to John Carpenter's 1982 classic film-renowned for its almost exclusive use of practical effects-became disillusioned upon discovering the theatrical release had the bulk of their effects digitally replaced with computer-generated imagery. | [-0.0106201171875, -0.0439453125, -0.01104736328125, 0.00946044921875, 0.0322265625, ...] |
  | Lost Angeles, his second feature film, starring Joelle Carter and Kelly Blatz, had its world premiere at the Oldenburg International Film Festival in 2012. | [0.0272216796875, 0.0263671875, -0.007110595703125, 0.0294189453125, 0.01129150390625, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
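Expressed in code, the non-default values above correspond roughly to the following training arguments; `output_dir` is an assumption, as the card does not list one:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # illustrative; not listed in this card
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=1e-4,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
)
```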
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs

| Epoch | Step | Training Loss | nq loss | gooaq loss | wikipedia loss | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 | negative_mse |
|:----------:|:--------:|:-------------:|:----------:|:----------:|:--------------:|:--------------------------:|:---------------------------:|:---------------------:|:----------------------------:|:------------:|
| -1 | -1 | - | - | - | - | 0.0 | 0.0111 | 0.0 | 0.0037 | -0.1948 |
| 0.0162 | 100 | 0.0018 | - | - | - | - | - | - | - | - |
| 0.0324 | 200 | 0.0013 | - | - | - | - | - | - | - | - |
| 0.0486 | 300 | 0.0012 | - | - | - | - | - | - | - | - |
| 0.0648 | 400 | 0.0012 | - | - | - | - | - | - | - | - |
| 0.0810 | 500 | 0.0011 | 0.0010 | 0.0012 | 0.0011 | 0.0 | 0.0250 | 0.0791 | 0.0347 | -0.1091 |
| 0.0972 | 600 | 0.001 | - | - | - | - | - | - | - | - |
| 0.1134 | 700 | 0.0009 | - | - | - | - | - | - | - | - |
| 0.1296 | 800 | 0.0008 | - | - | - | - | - | - | - | - |
| 0.1458 | 900 | 0.0007 | - | - | - | - | - | - | - | - |
| 0.1620 | 1000 | 0.0006 | 0.0006 | 0.0008 | 0.0008 | 0.3983 | 0.1100 | 0.3080 | 0.2721 | -0.0706 |
| 0.1783 | 1100 | 0.0006 | - | - | - | - | - | - | - | - |
| 0.1945 | 1200 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.2107 | 1300 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.2269 | 1400 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.2431 | 1500 | 0.0005 | 0.0005 | 0.0007 | 0.0006 | 0.4665 | 0.1554 | 0.3481 | 0.3233 | -0.0593 |
| 0.2593 | 1600 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.2755 | 1700 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.2917 | 1800 | 0.0005 | - | - | - | - | - | - | - | - |
| 0.3079 | 1900 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.3241 | 2000 | 0.0004 | 0.0004 | 0.0006 | 0.0006 | 0.4292 | 0.1827 | 0.4041 | 0.3387 | -0.0541 |
| 0.3403 | 2100 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.3565 | 2200 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.3727 | 2300 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.3889 | 2400 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.4051 | 2500 | 0.0004 | 0.0004 | 0.0006 | 0.0006 | 0.4780 | 0.1915 | 0.4106 | 0.3600 | -0.0515 |
| 0.4213 | 2600 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.4375 | 2700 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.4537 | 2800 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.4699 | 2900 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.4861 | 3000 | 0.0004 | 0.0004 | 0.0006 | 0.0005 | 0.4937 | 0.1937 | 0.4117 | 0.3664 | -0.0498 |
| 0.5023 | 3100 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.5186 | 3200 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.5348 | 3300 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.5510 | 3400 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.5672 | 3500 | 0.0004 | 0.0004 | 0.0005 | 0.0005 | 0.4939 | 0.1955 | 0.4533 | 0.3809 | -0.0489 |
| 0.5834 | 3600 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.5996 | 3700 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.6158 | 3800 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.6320 | 3900 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.6482 | 4000 | 0.0004 | 0.0004 | 0.0005 | 0.0005 | 0.4948 | 0.2011 | 0.4373 | 0.3777 | -0.0482 |
| 0.6644 | 4100 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.6806 | 4200 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.6968 | 4300 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.7130 | 4400 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.7292 | 4500 | 0.0004 | 0.0004 | 0.0005 | 0.0005 | 0.4909 | 0.2049 | 0.4515 | 0.3824 | -0.0477 |
| 0.7454 | 4600 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.7616 | 4700 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.7778 | 4800 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.7940 | 4900 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.8102 | 5000 | 0.0004 | 0.0004 | 0.0005 | 0.0005 | 0.4875 | 0.2022 | 0.4448 | 0.3782 | -0.0475 |
| 0.8264 | 5100 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.8427 | 5200 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.8589 | 5300 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.8751 | 5400 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.8913 | 5500 | 0.0004 | 0.0004 | 0.0005 | 0.0005 | 0.4943 | 0.2043 | 0.4519 | 0.3835 | -0.0474 |
| 0.9075 | 5600 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.9237 | 5700 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.9399 | 5800 | 0.0004 | - | - | - | - | - | - | - | - |
| 0.9561 | 5900 | 0.0004 | - | - | - | - | - | - | - | - |
| **0.9723** | **6000** | **0.0004** | **0.0004** | **0.0005** | **0.0005** | **0.4971** | **0.205** | **0.4494** | **0.3838** | **-0.0473** |
| 0.9885 | 6100 | 0.0004 | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | - | - | 0.4971 | 0.2050 | 0.4494 | 0.3838 | -0.0473 |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.11.10
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.51.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MSELoss

```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
    title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2004.09813",
}
```