# Enhanced-BGE-M3-with-CLP-and-MoE ([paper](https://arxiv.org/abs/2412.17364), [code](https://github.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE))

## Contrastive Learning Penalty (CLP)

CLP is a novel loss function designed to address the limitations of existing contrastive learning methods for improved performance in information retrieval tasks. It incorporates a penalty term that encourages the model to learn more discriminative representations by considering the similarity between negative samples and their corresponding queries.

The CLP loss function is defined as follows:

<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/clpl_formula.PNG" width="1000"/>

where:

The difference between Contrastive Learning Loss and Contrastive Learning Penalty Loss:

<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/figure1.PNG" width="1000"/>
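The exact formulation is given in the formula image above. For readers who prefer code, the following is a rough, non-authoritative sketch of the general shape suggested by the description: an InfoNCE-style contrastive term plus a penalty tied to how far each negative drifts from the query it originally answers. The function name, the `neg_src_q` tensor, and the penalty weight `alpha` are illustrative assumptions, not this repository's implementation.

```python
import torch
import torch.nn.functional as F

def clp_style_loss(q, pos, negs, neg_src_q, temperature=0.02, alpha=0.1):
    """Illustrative sketch only, not the repository's implementation.
    q:         (d,)   query embedding
    pos:       (d,)   positive passage embedding
    negs:      (n, d) negative passage embeddings
    neg_src_q: (n, d) the query each negative passage originally answers
    """
    q = F.normalize(q, dim=-1)
    pos = F.normalize(pos, dim=-1)
    negs = F.normalize(negs, dim=-1)
    neg_src_q = F.normalize(neg_src_q, dim=-1)

    # Standard contrastive (InfoNCE) term over the positive and the negatives.
    sims = torch.cat([(q * pos).sum().unsqueeze(0), negs @ q]) / temperature
    contrastive = -F.log_softmax(sims, dim=0)[0]

    # Assumed penalty term: discourage the representations of negatives from
    # drifting away from the queries they actually answer.
    penalty = (1.0 - (negs * neg_src_q).sum(dim=-1)).mean()

    return contrastive + alpha * penalty

# Quick shape check with random embeddings.
d, n = 8, 4
print(clp_style_loss(torch.randn(d), torch.randn(d), torch.randn(n, d), torch.randn(n, d)))
```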
## Specs
Performing negative sampling using the ANCE methodology and generating negative samples.

| [fa_CLPL_train_data](https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE/blob/main/data/fa_CLPL_train_data.jsonl) | MIRACL Persian CLPL training dataset |
| [hi_CLPL_train_data](https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE/blob/main/data/hi_CLPL_train_data.jsonl) | MIRACL Hindi CLPL training dataset |
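The training files above are JSON Lines. As a quick way to check what each record contains before training (the exact schema is defined by this repository's data-generation scripts; nothing is assumed here beyond standard JSONL, and the path is a placeholder for wherever you downloaded the file):

```python
import json

# Inspect the first record of a CLPL training file.
with open("data/fa_CLPL_train_data.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(sorted(record))  # top-level field names, e.g. query / positive / negative fields
for key, value in record.items():
    preview = str(value)
    print(f"{key}: {preview[:80]}{'...' if len(preview) > 80 else ''}")
```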
## Usage
Install:

- train

      git clone https://github.com/Dream-Forge-Studios/Enhanced-BGE-M3-with-CLPL-and-MoE.git
      cd Enhanced-BGE-M3-with-CLPL-and-MoE
      pip install -e .
      pip install transformers==4.45.2
      pip install sentencepiece
      pip install protobuf
      pip install simple_parsing

- evaluation

      pip install -U FlagEmbedding
      pip install sentencepiece
      pip install protobuf
      pip install faiss-cpu
      pip install faiss-gpu
      pip install nmslib
      pip install pyserini==0.22.1
      pip install peft
      pip install "numpy<2"
      pip install --upgrade datasets
      pip install simple_parsing

Execution:

- train

      python run.py --output_dir CreaLabs/bge-m3-fa-CLPL-outputMoE --model_name_or_path BAAI/bge-m3 --train_data ./train_data --learning_rate 1e-5 --fp16 y --num_train_epochs 2 --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --dataloader_drop_last True --normlized True --temperature 0.02 --query_max_len 128 --passage_max_len 512 --train_group_size 5 --logging_steps 10 --same_task_within_batch True --unified_finetuning False --use_self_distill False --only_train intermediate --moe intermediate --num_experts 2 --num_experts_per_tok 1

  (What the `--moe`, `--num_experts`, and `--num_experts_per_tok` flags control is illustrated in the sketch at the end of this section.)

- evaluation

      python step0-generate_embedding.py --encoder CreaLabs/bge-m3-fa-CLPL-outputMoE --languages ko --index_save_dir ./corpus-index --max_passage_length 8192 --batch_size 4 --fp16 --pooling_method cls --normalize_embeddings True --moe intermediate
      python step1-search_results.py --encoder CreaLabs/bge-m3-fa-CLPL-outputMoE --languages ko fa hi --index_save_dir ./corpus-index --result_save_dir /data/js/search_results --threads 4 --hits 20 --pooling_method cls --normalize_embeddings True --add_instruction False --moe intermediate
      python step2-eval_dense_mldr.py --encoder CreaLabs/bge-m3-fa-CLPL-outputMoE --languages ko --search_result_save_dir ./search_results --qrels_dir ./qrels --eval_result_save_dir ./eval_results --metrics ndcg@5 ndcg@10 --pooling_method cls --normalize_embeddings True
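For a quick sanity check outside the evaluation scripts, a fine-tuned checkpoint can be loaded with FlagEmbedding and used to score a query against a passage. This is a minimal sketch assuming the checkpoint remains compatible with the standard `BGEM3FlagModel` loader; the MoE layers added by this repository may instead require loading through the repository's own code.

```python
from FlagEmbedding import BGEM3FlagModel

# The model path matches the --output_dir used in the training command above;
# substitute your own checkpoint, or BAAI/bge-m3 for the unmodified baseline.
model = BGEM3FlagModel("CreaLabs/bge-m3-fa-CLPL-outputMoE", use_fp16=True)

queries = ["What does the contrastive learning penalty add to contrastive loss?"]
passages = ["CLP adds a penalty that keeps negatives close to the queries they answer."]

q_vecs = model.encode(queries, max_length=128)["dense_vecs"]
p_vecs = model.encode(passages, max_length=512)["dense_vecs"]

# Dense vectors are normalized by default, so the inner product is their cosine similarity.
print(q_vecs @ p_vecs.T)
```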
87 |
|
88 |
## Evaluation
<img src="https://raw.githubusercontent.com/CreaLabs/Enhanced-BGE-M3-with-CLP-and-MoE/main/imgs/table4.PNG" width="1000"/>

## Citation

    @misc{yu2024efficientfinetuningmethodologytext,
          title={Efficient fine-tuning methodology of text embedding models for information retrieval: contrastive learning penalty (clp)},
          author={Jeongsu Yu},
          year={2024},
          eprint={2412.17364},
          archivePrefix={arXiv},
          primaryClass={cs.IR},
          url={https://arxiv.org/abs/2412.17364},
    }