Muennighoff committed · Commit 045fa25 · verified · 1 Parent(s): 34eb4b6

Scheduled Commit

data/retrieval_individual-35e094d9-c3d4-447e-b2f4-7dd3f5d1d585.jsonl CHANGED
@@ -267,3 +267,5 @@
267
  {"tstamp": 1723479387.8137, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1723479385.6347, "finish": 1723479387.8137, "ip": "", "conv_id": "5ce0883bd69b403eafd37655e748cffb", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "when did the broncos win the super bowl", "output": [["when did the broncos win the super bowl", "Super Bowl XXXIII\n\nSuper Bowl XXXIII was an American football game played between the American Football Conference (AFC) champion and defending Super Bowl XXXII champion Denver Broncos and the National Football Conference (NFC) champion Atlanta Falcons to decide the National Football League (NFL) champion for the 1998 season. The Broncos defeated the Falcons by the score of 34–19, winning their second consecutive Super Bowl, and becoming the first franchise to record consecutive Super Bowl victories & defeats. The game was played on January 31, 1999, at Pro Player Stadium in Miami, Florida (now part of the suburb of Miami Gardens, which became a separate city in 2003)."]], "corpus": "wikipedia"}
268
  {"tstamp": 1723479399.1326, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723479396.9082, "finish": 1723479399.1326, "ip": "", "conv_id": "63017f2afd1d419dabd96baaada8418d", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Are there any Deberta-V3 pre-trained for other languages than English?", "output": [["Are there any Deberta-V3 pre-trained for other languages than English?", "Title: DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing\n\nAbstract: This paper presents a new pre-trained language model, DeBERTaV3, which improves the original DeBERTa model by replacing mask language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance. This is because the training losses of the discriminator and the generator pull token embeddings in different directions, creating the \"tug-of-war\" dynamics. We thus propose a new gradient-disentangled embedding sharing method that avoids the tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model. We have pre-trained DeBERTaV3 using the same settings as DeBERTa to demonstrate its exceptional performance on a wide range of downstream natural language understanding (NLU) tasks. Taking the GLUE benchmark with eight tasks as an example, the DeBERTaV3 Large model achieves a 91.37% average score, which is 1.37% over DeBERTa and 1.91% over ELECTRA, setting a new state-of-the-art (SOTA) among the models with a similar structure. Furthermore, we have pre-trained a multi-lingual model mDeBERTa and observed a larger improvement over strong baselines compared to English models. For example, the mDeBERTa Base achieves a 79.8% zero-shot cross-lingual accuracy on XNLI and a 3.6% improvement over XLM-R Base, creating a new SOTA on this benchmark. We have made our pre-trained models and inference code publicly available at https://github.com/microsoft/DeBERTa."]], "corpus": "arxiv"}
269
  {"tstamp": 1723479399.1326, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723479396.9082, "finish": 1723479399.1326, "ip": "", "conv_id": "8797dc6ee9304811a8eab615b2d260ff", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Are there any Deberta-V3 pre-trained for other languages than English?", "output": [["Are there any Deberta-V3 pre-trained for other languages than English?", "Title: DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task\n\nAbstract: This paper presents an approach for adapting the DebertaV3 XSmall model pre-trained in English for Brazilian Portuguese natural language processing (NLP) tasks. A key aspect of the methodology involves a multistep training process to ensure the model is effectively tuned for the Portuguese language. Initial datasets from Carolina and BrWac are preprocessed to address issues like emojis, HTML tags, and encodings. A Portuguese-specific vocabulary of 50,000 tokens is created using SentencePiece. Rather than training from scratch, the weights of the pre-trained English model are used to initialize most of the network, with random embeddings, recognizing the expensive cost of training from scratch. The model is fine-tuned using the replaced token detection task in the same format of DebertaV3 training. The adapted model, called DeBERTinha, demonstrates effectiveness on downstream tasks like named entity recognition, sentiment analysis, and determining sentence relatedness, outperforming BERTimbau-Large in two tasks despite having only 40M parameters."]], "corpus": "arxiv"}
270
+ {"tstamp": 1723479649.0945, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723479646.7838, "finish": 1723479649.0945, "ip": "", "conv_id": "b21bd9546b464f63a1dacdb7118ae7dd", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
271
+ {"tstamp": 1723479649.0945, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1723479646.7838, "finish": 1723479649.0945, "ip": "", "conv_id": "976f86af61e74932b3783f65d1ac06e7", "model_name": "embed-english-v3.0", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}