Scheduled Commit
data/retrieval_individual-89f7d27e-df7d-414d-a213-dcd1c0b2f2a2.jsonl
CHANGED
@@ -229,3 +229,5 @@
{"tstamp": 1730294609.34, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1730294605.8484, "finish": 1730294609.34, "ip": "", "conv_id": "5bb15fc2d0a74d59bf38ec415bc4b463", "model_name": "BM25", "prompt": "Seeking a study that evaluates how well pretrained models like BERT capture real-world knowledge using a zero-shot fact completion task.", "output": [["Seeking a study that evaluates how well pretrained models like BERT capture real-world knowledge using a zero-shot fact completion task.", "Title: Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model\n\nAbstract: Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average 2.7 F1 improvements and a standard fine-grained entity typing dataset (i.e., FIGER) with 5.7 accuracy gains."]], "corpus": "arxiv"}
{"tstamp": 1730295965.9479, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1730295963.7538, "finish": 1730295965.9479, "ip": "", "conv_id": "65eb716360304c03bf479df3f13e42dd", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Gis Stackexchange\n\nQ: Filling voids in DEM SRTM 1-arc second using QGIS by resampling and inserting another DEM I'd like to use the DEM SRTM 1-arc second (Version 4, 30m) from USGS for Europe. Unfortunately there are some voids. My idea is to fill those voids using the void-filled version 3 (90m) by resampling it to 30m and inserting the missing data in the version 4 DEM.\nAs I'm new to QGIS I don't know the proper steps. I already got a 30m map by reprojecting the dimensions times three but I'm not sure if that's the right thing to do and I have no idea how to fill the voids afterwards. \n"]], "corpus": "stackexchange"}
{"tstamp": 1730295965.9479, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1730295963.7538, "finish": 1730295965.9479, "ip": "", "conv_id": "d94c748c74134476b473190e99e37d95", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why do the machines allow/initiate the building of a new Zion? In the Matrix Reloaded, the Architect says:\n\nThe function of the One is now to return to the Source, allowing a temporary dissemination of the code you carry, reinserting the prime program. After which, you will be required to select from the Matrix 23 individuals - 16 female, 7 male - to rebuild Zion. Failure to comply with this process will result in a cataclysmic system crash, killing everyone connected to the Matrix, which, coupled with the extermination of Zion, will ultimately result in the extinction of the entire human race.\n\nWhy do the machines initiate the re-building of Zion?\n\nA: As was discussed elsewhere on the site, the machines are not interested in humans being wiped out, they (as shown in Animatrix) just want to co-exist with humans.\nAs such, the machines allow the building of new Zion so that humans would not - as the quote you provided says - would not go extinct.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1730301679.2041, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1730301676.8383, "finish": 1730301679.2041, "ip": "", "conv_id": "739bfbd0043b45298bd97fe597a7a42b", "model_name": "embed-english-v3.0", "prompt": "כתוב סיפור בן 100 מילים או יותר על לטאה פחדנית, קיפוד ביישן ונמרה שחורה.", "output": [["כתוב סיפור בן 100 מילים או יותר על לטאה פחדנית, קיפוד ביישן ונמרה שחורה.", "100\n\nThe U.S. hundred-dollar bill has Benjamin Franklin's portrait; the \"Benjamin\" is the largest U.S. bill in print. American savings bonds of $100 have Thomas Jefferson's portrait, while American $100 treasury bonds have Andrew Jackson's portrait.\nIn sports\nThe number of yards in an American football field (not including the end zones).\nThe number of points required for a snooker player to score a century break, a significant milestone.\nThe record number of points scored in one NBA game by a single player, set by Wilt Chamberlain of the Philadelphia Warriors on March 2, 1962.\nThe 100-yard dash and the 100-metre dash are sprint track events.\nThe 100-kilometre walk is a racewalking event.\nThe Hundred (cricket), a 100-ball cricket competition.\nIn other fields\nOne hundred is also:\nThe number of years in a century.\nThe number of centimeters in a meter.\nThe number of pounds in an American short hundredweight.\nIn Greece, India, Israel and Nepal, 100 is the police telephone number.\nIn Belgium, 100 is the ambulance and firefighter telephone number.\nIn United Kingdom, 100 is the operator telephone number.\nThe HTTP status code indicating that the client should continue with its request.\nThe age at which a person becomes a centenarian."]], "corpus": "wikipedia"}
+{"tstamp": 1730301679.2041, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1730301676.8383, "finish": 1730301679.2041, "ip": "", "conv_id": "0e623ba438c04364bc16954edbd23065", "model_name": "text-embedding-004", "prompt": "כתוב סיפור בן 100 מילים או יותר על לטאה פחדנית, קיפוד ביישן ונמרה שחורה.", "output": [["כתוב סיפור בן 100 מילים או יותר על לטאה פחדנית, קיפוד ביישן ונמרה שחורה.", "Centenarian\n\nA centenarian is a person who has reached the age of 100 years. Because life expectancies worldwide are below 100, the term is invariably associated with longevity. The United Nations in 2012 estimated that there were 316,600 living centenarians worldwide.\nAs world population and life expectancy continue to increase, the number of centenarians is expected to increase substantially in the twenty-first century. According to the Office of National Statistics in the United Kingdom, one-third of babies born in the country in 2013 are expected to live to 100.\nThe United Nations estimates that currently there are 573,000 centenarians, almost quadruple the estimate of 151,000 made in 2000. According to a 1998 United Nations demographic survey, Japan is expected to have 272,000 centenarians by 2050; other sources suggest that the number could be closer to 1 million. The incidence of centenarians in Japan was one per 3,522 people in 2008.\nIn Japan, the number of centenarians is highly skewed towards females. Japan in fiscal year 2016 had 57,525 female centenarians, while there were 8,167 males, a ratio of 7:1. The increase of centenarians was even more skewed at 11.6:1."]], "corpus": "wikipedia"}
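
Each line of this file is a standalone JSON object, so the appended records can be inspected with a short script. A minimal sketch follows; the path is taken from the commit above, and the field names are read off the records shown in the diff:

import json

# Path as it appears in this commit; adjust if the file lives elsewhere.
path = "data/retrieval_individual-89f7d27e-df7d-414d-a213-dcd1c0b2f2a2.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record logs one retrieval call: the embedding model queried,
        # the user prompt, and the retrieved (query, passage) pairs stored
        # under "output".
        print(record["model_name"], "|", record["corpus"])
        for query, passage in record["output"]:
            print("   ", passage[:80].replace("\n", " "))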