Scheduled Commit
- data/clustering_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl +2 -0
- data/clustering_side_by_side-76065722-116b-41cc-961b-aebfed552f79.jsonl +1 -0
- data/retrieval_battle-76065722-116b-41cc-961b-aebfed552f79.jsonl +1 -0
- data/retrieval_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl +6 -0
- data/retrieval_side_by_side-76065722-116b-41cc-961b-aebfed552f79.jsonl +2 -0
data/clustering_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl
ADDED
@@ -0,0 +1,2 @@
{"tstamp": 1726553846.6926, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1726553831.896, "finish": 1726553846.6926, "ip": "", "conv_id": "0625821eff6d4ce799744510d21ba3f3", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["Spanish", "English", "pencil", "quill", "pen", "hydrogen", "sodium", "nitrogen", "iron", "carbon", "oxygen"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
{"tstamp": 1726553846.6926, "task_type": "clustering", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1726553831.896, "finish": 1726553846.6926, "ip": "", "conv_id": "9837a4c4b61746099fb454e3d107a5c0", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": ["Spanish", "English", "pencil", "quill", "pen", "hydrogen", "sodium", "nitrogen", "iron", "carbon", "oxygen"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/clustering_side_by_side-76065722-116b-41cc-961b-aebfed552f79.jsonl
ADDED
@@ -0,0 +1 @@
{"tstamp": 1726553884.6623, "task_type": "clustering", "type": "rightvote", "models": ["sentence-transformers/all-MiniLM-L6-v2", "nomic-ai/nomic-embed-text-v1.5"], "ip": "", "0_conv_id": "0625821eff6d4ce799744510d21ba3f3", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": ["Spanish", "English", "pencil", "quill", "pen", "hydrogen", "sodium", "nitrogen", "iron", "carbon", "oxygen"], "0_ncluster": 3, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "9837a4c4b61746099fb454e3d107a5c0", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": ["Spanish", "English", "pencil", "quill", "pen", "hydrogen", "sodium", "nitrogen", "iron", "carbon", "oxygen"], "1_ncluster": 3, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
data/retrieval_battle-76065722-116b-41cc-961b-aebfed552f79.jsonl
CHANGED
@@ -1 +1,2 @@
{"tstamp": 1726551342.6114, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c768f02af74e4cd88a50b39b81a99a28", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Why did the machines in The Matrix keep humans around?", "0_output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why use full brained humans as batteries? \nPossible Duplicate:\nIs the basic premise of humans as a power source in The Matrix reasonable?\nWhy Did the Machines Even Bother With a Matrix? \n\nThe machines in The Matrix series use humans as batteries. That's OK. But why did the machines not alter their livestock such that the matrix itself would be unnecessary? \nConsider the recently proposed headless chicken farm. Something similar for the human batteries would seem to be an ideal solution for logical machines. No longer would they need to spend resources maintaining a virtual reality for their batteries. The energy, physical hardware and programs running the matrix could then be put to who knows what other uses.\n\nA: The explanation for the use of human brains would seem to be the fact that our complex thoughts lead to higher level of brain activity, thus resulting in exponentially more synapses firing (i.e. more electrical activity to harvest).\nThat makes human brains far more efficient than the brains of animals, even with the overhead of maintaining the matrix. Plus, that movie would have really sucked... dumb cows and chickens revolt against the machines! ;)\n"]], "0_corpus": "stackexchange", "1_conv_id": "71bb5b455d984d4c9990293bf764592b", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Why did the machines in The Matrix keep humans around?", "1_output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why do the machines allow/initiate the building of a new Zion? In the Matrix Reloaded, the Architect says:\n\nThe function of the One is now to return to the Source, allowing a temporary dissemination of the code you carry, reinserting the prime program. After which, you will be required to select from the Matrix 23 individuals - 16 female, 7 male - to rebuild Zion. Failure to comply with this process will result in a cataclysmic system crash, killing everyone connected to the Matrix, which, coupled with the extermination of Zion, will ultimately result in the extinction of the entire human race.\n\nWhy do the machines initiate the re-building of Zion?\n\nA: As was discussed elsewhere on the site, the machines are not interested in humans being wiped out, they (as shown in Animatrix) just want to co-exist with humans.\nAs such, the machines allow the building of new Zion so that humans would not - as the quote you provided says - would not go extinct.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1726553692.9398, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "21952991ae7249189e59002ed5c8a67d", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "can opaque objects refract light?", "0_output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f44baa149970471694b1dfb3f815a9e4", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "can opaque objects refract light?", "1_output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "1_corpus": "stackexchange"}
data/retrieval_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl
CHANGED
@@ -1,2 +1,8 @@
{"tstamp": 1726551267.1147, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1726551266.6266, "finish": 1726551267.1147, "ip": "", "conv_id": "c768f02af74e4cd88a50b39b81a99a28", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why use full brained humans as batteries? \nPossible Duplicate:\nIs the basic premise of humans as a power source in The Matrix reasonable?\nWhy Did the Machines Even Bother With a Matrix? \n\nThe machines in The Matrix series use humans as batteries. That's OK. But why did the machines not alter their livestock such that the matrix itself would be unnecessary? \nConsider the recently proposed headless chicken farm. Something similar for the human batteries would seem to be an ideal solution for logical machines. No longer would they need to spend resources maintaining a virtual reality for their batteries. The energy, physical hardware and programs running the matrix could then be put to who knows what other uses.\n\nA: The explanation for the use of human brains would seem to be the fact that our complex thoughts lead to higher level of brain activity, thus resulting in exponentially more synapses firing (i.e. more electrical activity to harvest).\nThat makes human brains far more efficient than the brains of animals, even with the overhead of maintaining the matrix. Plus, that movie would have really sucked... dumb cows and chickens revolt against the machines! ;)\n"]], "corpus": "stackexchange"}
{"tstamp": 1726551267.1147, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1726551266.6266, "finish": 1726551267.1147, "ip": "", "conv_id": "71bb5b455d984d4c9990293bf764592b", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why do the machines allow/initiate the building of a new Zion? In the Matrix Reloaded, the Architect says:\n\nThe function of the One is now to return to the Source, allowing a temporary dissemination of the code you carry, reinserting the prime program. After which, you will be required to select from the Matrix 23 individuals - 16 female, 7 male - to rebuild Zion. Failure to comply with this process will result in a cataclysmic system crash, killing everyone connected to the Matrix, which, coupled with the extermination of Zion, will ultimately result in the extinction of the entire human race.\n\nWhy do the machines initiate the re-building of Zion?\n\nA: As was discussed elsewhere on the site, the machines are not interested in humans being wiped out, they (as shown in Animatrix) just want to co-exist with humans.\nAs such, the machines allow the building of new Zion so that humans would not - as the quote you provided says - would not go extinct.\n"]], "corpus": "stackexchange"}
{"tstamp": 1726553615.512, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1726553614.9023, "finish": 1726553615.512, "ip": "", "conv_id": "31b8e424b83d4076a75f1a2656f03183", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "output": [["Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "Title: Syntax-Aware Aspect Level Sentiment Classification with Graph Attention Networks\n\nAbstract: Aspect level sentiment classification aims to identify the sentiment expressed towards an aspect given a context sentence. Previous neural network based methods largely ignore the syntax structure in one sentence. In this paper, we propose a novel target-dependent graph attention network (TD-GAT) for aspect level sentiment classification, which explicitly utilizes the dependency relationship among words. Using the dependency graph, it propagates sentiment features directly from the syntactic context of an aspect target. In our experiments, we show our method outperforms multiple baselines with GloVe embeddings. We also demonstrate that using BERT representations further substantially boosts the performance."]], "corpus": "arxiv"}
{"tstamp": 1726553615.512, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1726553614.9023, "finish": 1726553615.512, "ip": "", "conv_id": "f21cb692f6ea4a7fa1e90b799d055086", "model_name": "text-embedding-3-large", "prompt": "Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "output": [["Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "Title: Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks\n\nAbstract: Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanism and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on it, a novel aspect-specific sentiment classification framework is raised. Experiments on three benchmarking collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure."]], "corpus": "arxiv"}
{"tstamp": 1726553660.1884, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1726553659.6805, "finish": 1726553660.1884, "ip": "", "conv_id": "21952991ae7249189e59002ed5c8a67d", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "can opaque objects refract light?", "output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "corpus": "stackexchange"}
{"tstamp": 1726553660.1884, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1726553659.6805, "finish": 1726553660.1884, "ip": "", "conv_id": "f44baa149970471694b1dfb3f815a9e4", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "can opaque objects refract light?", "output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "corpus": "stackexchange"}
{"tstamp": 1726553767.2524, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1726553766.3648, "finish": 1726553767.2524, "ip": "", "conv_id": "332ba3587df54791a6cfb979351a14c0", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "output": [["A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "Title: Language Models are Few-shot Multilingual Learners\n\nAbstract: General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models."]], "corpus": "arxiv"}
{"tstamp": 1726553767.2524, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1726553766.3648, "finish": 1726553767.2524, "ip": "", "conv_id": "29dd7bb6ca044e16a0187831e9823601", "model_name": "text-embedding-3-large", "prompt": "A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "output": [["A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "Title: Language Models are Few-shot Multilingual Learners\n\nAbstract: General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models."]], "corpus": "arxiv"}
data/retrieval_side_by_side-76065722-116b-41cc-961b-aebfed552f79.jsonl
ADDED
@@ -0,0 +1,2 @@
{"tstamp": 1726553747.7866, "task_type": "retrieval", "type": "rightvote", "models": ["nomic-ai/nomic-embed-text-v1.5", "text-embedding-3-large"], "ip": "", "0_conv_id": "31b8e424b83d4076a75f1a2656f03183", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "0_output": [["Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "Title: Syntax-Aware Aspect Level Sentiment Classification with Graph Attention Networks\n\nAbstract: Aspect level sentiment classification aims to identify the sentiment expressed towards an aspect given a context sentence. Previous neural network based methods largely ignore the syntax structure in one sentence. In this paper, we propose a novel target-dependent graph attention network (TD-GAT) for aspect level sentiment classification, which explicitly utilizes the dependency relationship among words. Using the dependency graph, it propagates sentiment features directly from the syntactic context of an aspect target. In our experiments, we show our method outperforms multiple baselines with GloVe embeddings. We also demonstrate that using BERT representations further substantially boosts the performance."]], "0_corpus": "arxiv", "1_conv_id": "f21cb692f6ea4a7fa1e90b799d055086", "1_model_name": "text-embedding-3-large", "1_prompt": "Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "1_output": [["Research on aspect-based sentiment analysis that incorporates syntax structure through graph-based neural networks.", "Title: Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks\n\nAbstract: Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanism and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on it, a novel aspect-specific sentiment classification framework is raised. Experiments on three benchmarking collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure."]], "1_corpus": "arxiv"}
{"tstamp": 1726553787.6053, "task_type": "retrieval", "type": "tievote", "models": ["nomic-ai/nomic-embed-text-v1.5", "text-embedding-3-large"], "ip": "", "0_conv_id": "332ba3587df54791a6cfb979351a14c0", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "0_output": [["A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "Title: Language Models are Few-shot Multilingual Learners\n\nAbstract: General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models."]], "0_corpus": "arxiv", "1_conv_id": "29dd7bb6ca044e16a0187831e9823601", "1_model_name": "text-embedding-3-large", "1_prompt": "A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "1_output": [["A paper evaluating language models like GPT and T5 for multilingual classification tasks with few-shot examples.", "Title: Language Models are Few-shot Multilingual Learners\n\nAbstract: General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models."]], "1_corpus": "arxiv"}