Muennighoff committed (verified)
Commit 8243a1d · Parent: 64be2a9

Scheduled Commit

data/clustering_individual-3403941f-f995-496f-9668-ac196a27ebc6.jsonl CHANGED
@@ -100,3 +100,5 @@
100
  {"tstamp": 1742335583.8786, "task_type": "clustering", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1742335581.973, "finish": 1742335583.8786, "ip": "", "conv_id": "23a84a8f9e1646aea0084f667210ee2d", "model_name": "text-embedding-3-large", "prompt": ["Main Interests:\nQuantum Computing (0.95): Demonstrates a very strong passion for cutting-edge computational paradigms and theoretical advancements.\nNeural Networks (0.88): High interest in machine learning models and their applications in AI development.\nAstrophysics (0.82): Deep fascination with cosmic phenomena, space exploration, and theoretical physics.\nGenetic Algorithms (0.75): Strong enthusiasm for optimization techniques inspired by evolutionary biology.\n", "\nModerate Interests:\nCybersecurity (0.55): Moderate curiosity about digital security practices, likely for practical or professional relevance.\nAugmented Reality (0.48): Casual interest in AR technology, possibly for entertainment or niche applications.\n", "\nDisliked Topics:\nCryptocurrency (0.92): Strong aversion due to skepticism about its volatility, ethics, or environmental impact.\nReality TV (0.85): Dislikes trivialized entertainment, preferring intellectually stimulating content.\n", "\nMinimal Interest:\nStamp Collecting (0.02): Indifference toward traditional hobbies lacking technological or intellectual engagement.\nPop Music (0.06): Minimal connection to mainstream music, favoring niche or intellectually aligned genres.\n", "\nKeywords of Advanced Interest:\nQuantum Computing:\n\nSubtopics: Quantum Supremacy, Qubit Error Correction, Topological Quantum States\nNeural Networks:\n\nSubtopics: Deep Learning Architectures, Backpropagation Optimization, Neuromorphic Engineering\nAstrophysics:\n\nSubtopics: Exoplanet Detection, Dark Matter Theories, Gravitational Wave Analysis\nGenetic Algorithms:\n\nSubtopics: Evolutionary Robotics, Fitness Function Design, Multi-Objective Optimization", "convex", "toric", "progressive", "concave", "prismatic", "geothermal", "biomass", "hydroelectric", "tidal", "solar", "wind", "Leo", "Capricorn", "Virgo", "Aquarius", "Libra", "Gemini", "brioche", "pumpernickel", "sourdough", "ciabatta", "focaccia", "clarinet", "saxophone", "oboe", "bassoon", "trumpet", "trombone", "flute", "SSD", "GPU", "motherboard", "RAM", "CPU", "power supply", "hard drive", "spaghetti", "penne", "ravioli", "fusilli", "lasagna", "fettuccine", "bistro", "buffet", "fast casual", "sushi bar", "steakhouse", "cafe", "period", "comma", "hyphen", "semicolon", "question mark", "loafers", "flats", "sandals", "high heels", "boots", "sneakers"], "ncluster": 5, "output": "", "ndim": "2D (press for 3D)", "dim_method": "PCA", "clustering_method": "KMeans"}
101
  {"tstamp": 1742378210.2952, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1742378209.9663, "finish": 1742378210.2952, "ip": "", "conv_id": "2dd5f64b212b4e8d97495ab9a10b9dbc", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["watering can", "trowel", "rake", "shovel", "pruning shears", "basketball", "swimming", "baseball", "tennis", "cricket", "soccer"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
102
  {"tstamp": 1742378210.2952, "task_type": "clustering", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1742378209.9663, "finish": 1742378210.2952, "ip": "", "conv_id": "4a5b409fc1974941b7e49bc56c760ede", "model_name": "voyage-multilingual-2", "prompt": ["watering can", "trowel", "rake", "shovel", "pruning shears", "basketball", "swimming", "baseball", "tennis", "cricket", "soccer"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
103
+ {"tstamp": 1742392821.2509, "task_type": "clustering", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1742392821.0997, "finish": 1742392821.2509, "ip": "", "conv_id": "40548b9a40b44fbd8503e875c2666232", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": ["maple", "cedar", "pine", "oak", "birch", "surprise", "joy", "happiness", "disgust", "anger", "sadness", "fear", "Hindu", "Roman", "Norse", "Egyptian", "Celtic", "Oracle Cloud", "Google Cloud", "IBM Cloud", "AWS", "semi-arid", "hot and dry", "coastal", "cold", "polar"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
104
+ {"tstamp": 1742392821.2509, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1742392821.0997, "finish": 1742392821.2509, "ip": "", "conv_id": "c5dcce5b82204fe3a9e82e879ffb0a8d", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["maple", "cedar", "pine", "oak", "birch", "surprise", "joy", "happiness", "disgust", "anger", "sadness", "fear", "Hindu", "Roman", "Norse", "Egyptian", "Celtic", "Oracle Cloud", "Google Cloud", "IBM Cloud", "AWS", "semi-arid", "hot and dry", "coastal", "cold", "polar"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
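The clustering records added in this hunk all share the same shape: a list of prompts, an embedding "model_name", an "ncluster" count, and PCA/KMeans settings. Below is a minimal sketch (not the arena's own code) of how such a record could be re-run locally. It assumes the obvious reading of the fields — "ncluster" as the KMeans k and the leading "2D"/"3D" of "ndim" as the PCA output dimensionality — and uses a sentence-transformers model that actually appears in the log; whether the arena clusters on the reduced or the full-dimensional embeddings is not stated here.

```python
# Minimal sketch: re-run one clustering record from
# data/clustering_individual-*.jsonl with the pipeline its fields suggest.
# Assumptions: "ncluster" is the KMeans k, and the "2D"/"3D" prefix of "ndim"
# is the PCA output dimensionality applied before clustering.
import json

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def cluster_record(record: dict) -> list[int]:
    """Embed the record's prompts, reduce with PCA, and assign KMeans labels."""
    model = SentenceTransformer(record["model_name"])  # works for the HF models in the log
    embeddings = model.encode(record["prompt"])
    n_dims = 3 if record["ndim"].startswith("3D") else 2
    reduced = PCA(n_components=n_dims).fit_transform(embeddings)
    labels = KMeans(n_clusters=record["ncluster"], random_state=0).fit_predict(reduced)
    return labels.tolist()


with open("data/clustering_individual-3403941f-f995-496f-9668-ac196a27ebc6.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if rec["model_name"] == "sentence-transformers/all-MiniLM-L6-v2":
            print(rec["conv_id"], cluster_record(rec))
```

For the API-backed models in the same file (e.g. text-embedding-3-large, voyage-multilingual-2), the embedding call would have to be swapped for the corresponding client.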
data/retrieval_battle-3403941f-f995-496f-9668-ac196a27ebc6.jsonl CHANGED
@@ -139,3 +139,6 @@
139
  {"tstamp": 1742382015.9775, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5ab5eb669d0445199e264be59505e515", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "autyzm hiperempatia", "0_output": [["autyzm hiperempatia", "Autism\n\nThus, there has been a recent shift to acknowledge that autistic people may simply respond and behave differently than people without ASD. So far, research has identified two unconventional features by which autistic people create shared understanding (intersubjectivity): \"a generous assumption of common ground that, when understood, led to rapid rapport, and, when not understood, resulted in potentially disruptive utterances; and a low demand for coordination that ameliorated many challenges associated with disruptive turns.\" Autistic interests, and thus conversational topics, seem to be largely driven by an intense interest in specific topics (monotropism).\nHistorically, autistic children were said to be delayed in developing a theory of mind, and the empathizing–systemizing theory has argued that while autistic people have compassion (affective empathy) for others with similar presentation of symptoms, they have limited, though not necessarily absent, cognitive empathy. This may present as social naïvety, lower than average intuitive perception of the utility or meaning of body language, social reciprocity, and/or social expectations, including the habitus, social cues, and/or some aspects of sarcasm, which to some degree may also be due to comorbid alexithymia. But recent research has increasingly questioned these findings, as the \"double empathy problem\" theory (2012) argues that there is a lack of mutual understanding and empathy between both non-autistic persons and autistic individuals."]], "0_corpus": "wikipedia", "1_conv_id": "b4c6c497e2fc4cf7b496b7361bd969dd", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "autyzm hiperempatia", "1_output": [["autyzm hiperempatia", "Autism\n\nAutism, also called autism spectrum disorder (ASD) or autism spectrum condition (ASC), is a neurodevelopmental disorder marked by deficits in reciprocal social communication and the presence of restricted and repetitive patterns of behavior. Other common signs include difficulty with social interaction, verbal and nonverbal communication, along with perseverative interests, stereotypic body movements, rigid routines, and hyper- or hyporeactivity to sensory input. Autism is clinically regarded as a spectrum disorder, meaning that it can manifest very differently in each person. For example, some are nonspeaking, while others have proficient spoken language. Because of this, there is wide variation in the support needs of people across the autism spectrum."]], "1_corpus": "wikipedia"}
140
  {"tstamp": 1742386726.802, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "eed51ed2b83a4e92a76fa65d90548be1", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "meaning of song do your ears hang low", "0_output": [["meaning of song do your ears hang low", "Do Your Ears Hang Low?\n\n\"Do Your Ears Hang Low?\" is a children's novelty song often sung in schools and at camps. The melody of this song is usually a shorter version of \"Turkey in the Straw\", but it can also be sung to the tune of the \"Sailor's Hornpipe\". It has a Roud Folk Song Index number of 15472.\nHistory\nThe origin of the song is most likely George Washington Dixon's \"Zip Coon\", penned in 1838. Variant versions with vulgar lyrics include \"Do Your Balls Hang Low?\" and \"Do Your Boobs Hang Low?\". Some authors regard these as parody versions of the campfire song, but, according to folklorists such as Ed Cray, the evidence strongly suggests that \"Do Your Balls Hang Low?\" came first, and that \"Do Your Ears Hang Low?\" is a sanitized version."]], "0_corpus": "wikipedia", "1_conv_id": "b32c478373b041d7b46f5689477f107e", "1_model_name": "BM25", "1_prompt": "meaning of song do your ears hang low", "1_output": [["meaning of song do your ears hang low", "Do Your Ears Hang Low?\n\n\"Do Your Ears Hang Low?\" is a children's novelty song often sung in schools and at camps. The melody of this song is usually a shorter version of \"Turkey in the Straw\", but it can also be sung to the tune of the \"Sailor's Hornpipe\". It has a Roud Folk Song Index number of 15472.\nHistory\nThe origin of the song is most likely George Washington Dixon's \"Zip Coon\", penned in 1838. Variant versions with vulgar lyrics include \"Do Your Balls Hang Low?\" and \"Do Your Boobs Hang Low?\". Some authors regard these as parody versions of the campfire song, but, according to folklorists such as Ed Cray, the evidence strongly suggests that \"Do Your Balls Hang Low?\" came first, and that \"Do Your Ears Hang Low?\" is a sanitized version."]], "1_corpus": "wikipedia"}
141
  {"tstamp": 1742386806.9074, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "768c8c6fb6134c54a189c8a82f0bef34", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "what is the difference between last and latest?", "0_output": [["what is the difference between last and latest?", "Meta Stackexchange\n\nQ: What's the difference between \"recent\" and \"newest\" sort options in my user page? When I go to my user page to look through all the questions that I have asked, I am always presented with the following choice of sorting options: \"recent\", \"views\", \"newest\", \"votes\". While I am okay with the second and the forth options, I still don't get the difference between \"recent\" and \"newest\". Can anyone here explain it to me, please?\n\nA: The tooltips when you mouse over the sort selection boxes explains:\n\n[recent] sort by recent activity\n[newest] sort by creation date\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "37f68cdb307c4427ba5f8ea12e53ce25", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "what is the difference between last and latest?", "1_output": [["what is the difference between last and latest?", "Stats Stackexchange\n\nQ: Pairwise chi-squared tests with Bonferroni correction I have two categorical datasets, say, $A$ and $B$, which are sparse. I would like to apply pairwise $\\chi^2$ tests to a certain categories, which are sufficiently populated (say, have expected values $>5$) in the following way. Let $k_{A}$ be number of elements of $A$ which are $k$, and $\\neg k_{A}$ be number of elements which are not $k$. So, I have a contingency table of the form:\n$$\n\\begin{array}{|r|r|}\n\\hline\nk_{A}&k_{B}\\\\\n\\hline\n\\neg k_{A}&\\neg k_{B}\\\\\n\\hline\n\\end{array}\n$$\nAfter applying $\\chi^2$ test, I multiply each $p$ by a number of tests I run, and this is a Bonferroni correction. And then I can conclude, that certain categories (having $p<0.05$ after applying correction) are significantly different in datasets $A$ and $B$.\nIs this a correct? If, for example, I would simply collapse categories with low expected values into a catch-all category 'others', and run a single $\\chi^2$ test, then I don't need to apply any corrections?\n"]], "1_corpus": "stackexchange"}
142
+ {"tstamp": 1742392702.0712, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b7e25ab1a1224f5c8e3343fb30019c71", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "how long ago the discovery of asthma was found?", "0_output": [["how long ago the discovery of asthma was found?", "Asthma\n\nHistory Asthma was recognized in ancient Egypt and was treated by drinking an incense mixture known as kyphi. It was officially named as a specific respiratory problem by Hippocrates circa 450 BC, with the Greek word for \"panting\" forming the basis of our modern name. In 200 BC it was believed to be at least partly related to the emotions. In the 12th century the Jewish physician-philosopher Maimonides wrote a treatise on asthma in Arabic, based partly on Arabic sources, in which he discussed the symptoms, proposed various dietary and other means of treatment, and emphasized the importance of climate and clean air. Chinese Traditional Medicine also offered medication for asthma, as indicated by a surviving 14th century manuscript curated by the Wellcome Foundation.\nIn 1873, one of the first papers in modern medicine on the subject tried to explain the pathophysiology of the disease while one in 1872, concluded that asthma can be cured by rubbing the chest with chloroform liniment.\nMedical treatment in 1880 included the use of intravenous doses of a drug called pilocarpine.\nIn 1886, F. H. Bosworth theorized a connection between asthma and hay fever.\nAt the beginning of the 20th century, the focus was the avoidance of allergens as well as selective beta-2 adrenoceptors agonists were used as treatment strategies.\nEpinephrine was first referred to in the treatment of asthma in 1905.\nOral corticosteroids began to be used for this condition in the 1950."]], "0_corpus": "wikipedia", "1_conv_id": "3e7b49cfd2b94c9081104712e3f59198", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "how long ago the discovery of asthma was found?", "1_output": [["how long ago the discovery of asthma was found?", "Asthma\n\nHistory Asthma was recognized in ancient Egypt and was treated by drinking an incense mixture known as kyphi. It was officially named as a specific respiratory problem by Hippocrates circa 450 BC, with the Greek word for \"panting\" forming the basis of our modern name. In 200 BC it was believed to be at least partly related to the emotions. In the 12th century the Jewish physician-philosopher Maimonides wrote a treatise on asthma in Arabic, based partly on Arabic sources, in which he discussed the symptoms, proposed various dietary and other means of treatment, and emphasized the importance of climate and clean air. Chinese Traditional Medicine also offered medication for asthma, as indicated by a surviving 14th century manuscript curated by the Wellcome Foundation.\nIn 1873, one of the first papers in modern medicine on the subject tried to explain the pathophysiology of the disease while one in 1872, concluded that asthma can be cured by rubbing the chest with chloroform liniment.\nMedical treatment in 1880 included the use of intravenous doses of a drug called pilocarpine.\nIn 1886, F. H. Bosworth theorized a connection between asthma and hay fever.\nAt the beginning of the 20th century, the focus was the avoidance of allergens as well as selective beta-2 adrenoceptors agonists were used as treatment strategies.\nEpinephrine was first referred to in the treatment of asthma in 1905.\nOral corticosteroids began to be used for this condition in the 1950."]], "1_corpus": "wikipedia"}
143
+ {"tstamp": 1742392740.0904, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "be46bbd2c8784b7ea9da16d97c17e8ba", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A paper introducing a state-of-the-art BERT model specific to the Greek language.", "0_output": [["A paper introducing a state-of-the-art BERT model specific to the Greek language.", "Title: GREEK-BERT: The Greeks visiting Sesame Street\n\nAbstract: Transformer-based language models, such as BERT and its variants, have achieved state-of-the-art performance in several downstream natural language processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQUAD, RACE). However, these models have mostly been applied to the resource-rich English language. In this paper, we present GREEK-BERT, a monolingual BERT-based language model for modern Greek. We evaluate its performance in three NLP tasks, i.e., part-of-speech tagging, named entity recognition, and natural language inference, obtaining state-of-the-art performance. Interestingly, in two of the benchmarks GREEK-BERT outperforms two multilingual Transformer-based models (M-BERT, XLM-R), as well as shallower neural baselines operating on pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we make both GREEK-BERT and our training code publicly available, along with code illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We expect these resources to boost NLP research and applications for modern Greek."]], "0_corpus": "arxiv", "1_conv_id": "c4109c221c644c5d8de2592a2d4ef124", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "A paper introducing a state-of-the-art BERT model specific to the Greek language.", "1_output": [["A paper introducing a state-of-the-art BERT model specific to the Greek language.", "Title: GREEK-BERT: The Greeks visiting Sesame Street\n\nAbstract: Transformer-based language models, such as BERT and its variants, have achieved state-of-the-art performance in several downstream natural language processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQUAD, RACE). However, these models have mostly been applied to the resource-rich English language. In this paper, we present GREEK-BERT, a monolingual BERT-based language model for modern Greek. We evaluate its performance in three NLP tasks, i.e., part-of-speech tagging, named entity recognition, and natural language inference, obtaining state-of-the-art performance. Interestingly, in two of the benchmarks GREEK-BERT outperforms two multilingual Transformer-based models (M-BERT, XLM-R), as well as shallower neural baselines operating on pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we make both GREEK-BERT and our training code publicly available, along with code illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We expect these resources to boost NLP research and applications for modern Greek."]], "1_corpus": "arxiv"}
144
+ {"tstamp": 1742392768.0749, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "08f305420a804aa9a52d395d6e7e3c2f", "0_model_name": "embed-english-v3.0", "0_prompt": "High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "0_output": [["High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "Title: Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes\n\nAbstract: Whilst diffusion probabilistic models can generate high quality image content, key limitations remain in terms of both generating high-resolution imagery and their associated high computational requirements. Recent Vector-Quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional as they generate tokens via element-wise autoregressive sampling from the prior. By contrast, in this paper we propose a novel discrete diffusion probabilistic model prior which enables parallel prediction of Vector-Quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of Vector-Quantized token prediction in turn facilitates unconditional generation of globally consistent high-resolution and diverse imagery at a fraction of the computational expense. In this manner, we can generate image resolutions exceeding that of the original training set samples whilst additionally provisioning per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of both computation and reduced training set requirements."]], "0_corpus": "arxiv", "1_conv_id": "86d1ba1d2c8a4b40b8cb405df9102386", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "1_output": [["High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "Title: Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes\n\nAbstract: Whilst diffusion probabilistic models can generate high quality image content, key limitations remain in terms of both generating high-resolution imagery and their associated high computational requirements. Recent Vector-Quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional as they generate tokens via element-wise autoregressive sampling from the prior. By contrast, in this paper we propose a novel discrete diffusion probabilistic model prior which enables parallel prediction of Vector-Quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of Vector-Quantized token prediction in turn facilitates unconditional generation of globally consistent high-resolution and diverse imagery at a fraction of the computational expense. In this manner, we can generate image resolutions exceeding that of the original training set samples whilst additionally provisioning per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of both computation and reduced training set requirements."]], "1_corpus": "arxiv"}
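Each battle record pairs two anonymous sides ("0_*"/"1_*") with a vote type; this hunk adds "tievote" entries and the context above also shows "leftvote". Below is a minimal sketch for tallying per-model wins from this file; treating "rightvote" as a win for side 1 is an assumption by symmetry, since that type does not appear in this commit.

```python
# Minimal sketch: count per-model wins in the retrieval battle log.
# "leftvote" and "tievote" appear in this diff; crediting "rightvote" to the
# side-1 model (and ignoring ties) is an assumption based on the field naming.
import json
from collections import Counter

wins = Counter()

with open("data/retrieval_battle-3403941f-f995-496f-9668-ac196a27ebc6.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if rec["type"] == "leftvote":
            wins[rec["0_model_name"]] += 1
        elif rec["type"] == "rightvote":
            wins[rec["1_model_name"]] += 1
        # ties (and any other vote types) credit neither side here

for model, count in wins.most_common():
    print(f"{count:3d}  {model}")
```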
data/retrieval_individual-3403941f-f995-496f-9668-ac196a27ebc6.jsonl CHANGED
@@ -615,3 +615,9 @@
615
  {"tstamp": 1742386652.3521, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1742386647.3902, "finish": 1742386652.3521, "ip": "", "conv_id": "b32c478373b041d7b46f5689477f107e", "model_name": "BM25", "prompt": "meaning of song do your ears hang low", "output": [["meaning of song do your ears hang low", "Do Your Ears Hang Low?\n\n\"Do Your Ears Hang Low?\" is a children's novelty song often sung in schools and at camps. The melody of this song is usually a shorter version of \"Turkey in the Straw\", but it can also be sung to the tune of the \"Sailor's Hornpipe\". It has a Roud Folk Song Index number of 15472.\nHistory\nThe origin of the song is most likely George Washington Dixon's \"Zip Coon\", penned in 1838. Variant versions with vulgar lyrics include \"Do Your Balls Hang Low?\" and \"Do Your Boobs Hang Low?\". Some authors regard these as parody versions of the campfire song, but, according to folklorists such as Ed Cray, the evidence strongly suggests that \"Do Your Balls Hang Low?\" came first, and that \"Do Your Ears Hang Low?\" is a sanitized version."]], "corpus": "wikipedia"}
616
  {"tstamp": 1742386750.2753, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1742386748.183, "finish": 1742386750.2753, "ip": "", "conv_id": "768c8c6fb6134c54a189c8a82f0bef34", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "what is the difference between last and latest?", "output": [["what is the difference between last and latest?", "Meta Stackexchange\n\nQ: What's the difference between \"recent\" and \"newest\" sort options in my user page? When I go to my user page to look through all the questions that I have asked, I am always presented with the following choice of sorting options: \"recent\", \"views\", \"newest\", \"votes\". While I am okay with the second and the forth options, I still don't get the difference between \"recent\" and \"newest\". Can anyone here explain it to me, please?\n\nA: The tooltips when you mouse over the sort selection boxes explains:\n\n[recent] sort by recent activity\n[newest] sort by creation date\n\n"]], "corpus": "stackexchange"}
617
  {"tstamp": 1742386750.2753, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1742386748.183, "finish": 1742386750.2753, "ip": "", "conv_id": "37f68cdb307c4427ba5f8ea12e53ce25", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "what is the difference between last and latest?", "output": [["what is the difference between last and latest?", "Stats Stackexchange\n\nQ: Pairwise chi-squared tests with Bonferroni correction I have two categorical datasets, say, $A$ and $B$, which are sparse. I would like to apply pairwise $\\chi^2$ tests to a certain categories, which are sufficiently populated (say, have expected values $>5$) in the following way. Let $k_{A}$ be number of elements of $A$ which are $k$, and $\\neg k_{A}$ be number of elements which are not $k$. So, I have a contingency table of the form:\n$$\n\\begin{array}{|r|r|}\n\\hline\nk_{A}&k_{B}\\\\\n\\hline\n\\neg k_{A}&\\neg k_{B}\\\\\n\\hline\n\\end{array}\n$$\nAfter applying $\\chi^2$ test, I multiply each $p$ by a number of tests I run, and this is a Bonferroni correction. And then I can conclude, that certain categories (having $p<0.05$ after applying correction) are significantly different in datasets $A$ and $B$.\nIs this a correct? If, for example, I would simply collapse categories with low expected values into a catch-all category 'others', and run a single $\\chi^2$ test, then I don't need to apply any corrections?\n"]], "corpus": "stackexchange"}
618
+ {"tstamp": 1742392690.1332, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1742392687.7767, "finish": 1742392690.1332, "ip": "", "conv_id": "b7e25ab1a1224f5c8e3343fb30019c71", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "how long ago the discovery of asthma was found?", "output": [["how long ago the discovery of asthma was found?", "Asthma\n\nHistory Asthma was recognized in ancient Egypt and was treated by drinking an incense mixture known as kyphi. It was officially named as a specific respiratory problem by Hippocrates circa 450 BC, with the Greek word for \"panting\" forming the basis of our modern name. In 200 BC it was believed to be at least partly related to the emotions. In the 12th century the Jewish physician-philosopher Maimonides wrote a treatise on asthma in Arabic, based partly on Arabic sources, in which he discussed the symptoms, proposed various dietary and other means of treatment, and emphasized the importance of climate and clean air. Chinese Traditional Medicine also offered medication for asthma, as indicated by a surviving 14th century manuscript curated by the Wellcome Foundation.\nIn 1873, one of the first papers in modern medicine on the subject tried to explain the pathophysiology of the disease while one in 1872, concluded that asthma can be cured by rubbing the chest with chloroform liniment.\nMedical treatment in 1880 included the use of intravenous doses of a drug called pilocarpine.\nIn 1886, F. H. Bosworth theorized a connection between asthma and hay fever.\nAt the beginning of the 20th century, the focus was the avoidance of allergens as well as selective beta-2 adrenoceptors agonists were used as treatment strategies.\nEpinephrine was first referred to in the treatment of asthma in 1905.\nOral corticosteroids began to be used for this condition in the 1950."]], "corpus": "wikipedia"}
619
+ {"tstamp": 1742392690.1332, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1742392687.7767, "finish": 1742392690.1332, "ip": "", "conv_id": "3e7b49cfd2b94c9081104712e3f59198", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "how long ago the discovery of asthma was found?", "output": [["how long ago the discovery of asthma was found?", "Asthma\n\nHistory Asthma was recognized in ancient Egypt and was treated by drinking an incense mixture known as kyphi. It was officially named as a specific respiratory problem by Hippocrates circa 450 BC, with the Greek word for \"panting\" forming the basis of our modern name. In 200 BC it was believed to be at least partly related to the emotions. In the 12th century the Jewish physician-philosopher Maimonides wrote a treatise on asthma in Arabic, based partly on Arabic sources, in which he discussed the symptoms, proposed various dietary and other means of treatment, and emphasized the importance of climate and clean air. Chinese Traditional Medicine also offered medication for asthma, as indicated by a surviving 14th century manuscript curated by the Wellcome Foundation.\nIn 1873, one of the first papers in modern medicine on the subject tried to explain the pathophysiology of the disease while one in 1872, concluded that asthma can be cured by rubbing the chest with chloroform liniment.\nMedical treatment in 1880 included the use of intravenous doses of a drug called pilocarpine.\nIn 1886, F. H. Bosworth theorized a connection between asthma and hay fever.\nAt the beginning of the 20th century, the focus was the avoidance of allergens as well as selective beta-2 adrenoceptors agonists were used as treatment strategies.\nEpinephrine was first referred to in the treatment of asthma in 1905.\nOral corticosteroids began to be used for this condition in the 1950."]], "corpus": "wikipedia"}
620
+ {"tstamp": 1742392735.404, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1742392733.1064, "finish": 1742392735.404, "ip": "", "conv_id": "be46bbd2c8784b7ea9da16d97c17e8ba", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A paper introducing a state-of-the-art BERT model specific to the Greek language.", "output": [["A paper introducing a state-of-the-art BERT model specific to the Greek language.", "Title: GREEK-BERT: The Greeks visiting Sesame Street\n\nAbstract: Transformer-based language models, such as BERT and its variants, have achieved state-of-the-art performance in several downstream natural language processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQUAD, RACE). However, these models have mostly been applied to the resource-rich English language. In this paper, we present GREEK-BERT, a monolingual BERT-based language model for modern Greek. We evaluate its performance in three NLP tasks, i.e., part-of-speech tagging, named entity recognition, and natural language inference, obtaining state-of-the-art performance. Interestingly, in two of the benchmarks GREEK-BERT outperforms two multilingual Transformer-based models (M-BERT, XLM-R), as well as shallower neural baselines operating on pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we make both GREEK-BERT and our training code publicly available, along with code illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We expect these resources to boost NLP research and applications for modern Greek."]], "corpus": "arxiv"}
621
+ {"tstamp": 1742392735.404, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1742392733.1064, "finish": 1742392735.404, "ip": "", "conv_id": "c4109c221c644c5d8de2592a2d4ef124", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "A paper introducing a state-of-the-art BERT model specific to the Greek language.", "output": [["A paper introducing a state-of-the-art BERT model specific to the Greek language.", "Title: GREEK-BERT: The Greeks visiting Sesame Street\n\nAbstract: Transformer-based language models, such as BERT and its variants, have achieved state-of-the-art performance in several downstream natural language processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQUAD, RACE). However, these models have mostly been applied to the resource-rich English language. In this paper, we present GREEK-BERT, a monolingual BERT-based language model for modern Greek. We evaluate its performance in three NLP tasks, i.e., part-of-speech tagging, named entity recognition, and natural language inference, obtaining state-of-the-art performance. Interestingly, in two of the benchmarks GREEK-BERT outperforms two multilingual Transformer-based models (M-BERT, XLM-R), as well as shallower neural baselines operating on pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we make both GREEK-BERT and our training code publicly available, along with code illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We expect these resources to boost NLP research and applications for modern Greek."]], "corpus": "arxiv"}
622
+ {"tstamp": 1742392761.266, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1742392758.9496, "finish": 1742392761.266, "ip": "", "conv_id": "08f305420a804aa9a52d395d6e7e3c2f", "model_name": "embed-english-v3.0", "prompt": "High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "output": [["High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "Title: Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes\n\nAbstract: Whilst diffusion probabilistic models can generate high quality image content, key limitations remain in terms of both generating high-resolution imagery and their associated high computational requirements. Recent Vector-Quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional as they generate tokens via element-wise autoregressive sampling from the prior. By contrast, in this paper we propose a novel discrete diffusion probabilistic model prior which enables parallel prediction of Vector-Quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of Vector-Quantized token prediction in turn facilitates unconditional generation of globally consistent high-resolution and diverse imagery at a fraction of the computational expense. In this manner, we can generate image resolutions exceeding that of the original training set samples whilst additionally provisioning per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of both computation and reduced training set requirements."]], "corpus": "arxiv"}
623
+ {"tstamp": 1742392761.266, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1742392758.9496, "finish": 1742392761.266, "ip": "", "conv_id": "86d1ba1d2c8a4b40b8cb405df9102386", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "output": [["High-resolution image generation with parallel token prediction and Vector-Quantized codes.", "Title: Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes\n\nAbstract: Whilst diffusion probabilistic models can generate high quality image content, key limitations remain in terms of both generating high-resolution imagery and their associated high computational requirements. Recent Vector-Quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional as they generate tokens via element-wise autoregressive sampling from the prior. By contrast, in this paper we propose a novel discrete diffusion probabilistic model prior which enables parallel prediction of Vector-Quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of Vector-Quantized token prediction in turn facilitates unconditional generation of globally consistent high-resolution and diverse imagery at a fraction of the computational expense. In this manner, we can generate image resolutions exceeding that of the original training set samples whilst additionally provisioning per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of both computation and reduced training set requirements."]], "corpus": "arxiv"}
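The "0_conv_id"/"1_conv_id" values in the battle file match "conv_id" values in the individual retrieval records added above (for example, b7e25ab1a1224f5c8e3343fb30019c71 appears in both hunks), so each battle side can be joined back to its full per-model record. A minimal sketch of that join, under the assumption that conv_id is the intended key:

```python
# Minimal sketch: join battle records to the per-side retrieval records by
# conv_id. The key is inferred from the shared IDs visible in this commit,
# not from a documented schema.
import json


def load_jsonl(path: str) -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]


SUFFIX = "3403941f-f995-496f-9668-ac196a27ebc6.jsonl"
individual = {r["conv_id"]: r for r in load_jsonl(f"data/retrieval_individual-{SUFFIX}")}

for battle in load_jsonl(f"data/retrieval_battle-{SUFFIX}"):
    left = individual.get(battle["0_conv_id"])
    right = individual.get(battle["1_conv_id"])
    if left and right:
        print(battle["type"], left["model_name"], "vs", right["model_name"], "|", left["prompt"])
```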