Muennighoff committed
Commit 80d91ac · verified · 1 Parent(s): a115b67

Scheduled Commit

data/retrieval_battle-2cea24b1-39e9-480a-ba22-c617ea05c1fe.jsonl CHANGED
@@ -71,3 +71,4 @@
  {"tstamp": 1724230419.0313, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f5ebd2d068d94a24bff0f6ebc380efc7", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "6c3dcd7c8cf043a98fddd9819ac32fe8", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
  {"tstamp": 1724245661.8687, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "36c3a469a23049a196c940e31674305c", "0_model_name": "embed-english-v3.0", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "f419997054d14754bd93bde714810d25", "1_model_name": "text-embedding-004", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
  {"tstamp": 1724250288.9053, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fb6a010b01324a1d8e76b603020f2a05", "0_model_name": "text-embedding-3-large", "0_prompt": "The first man to walk on the moon", "0_output": [["The first man to walk on the moon", "Neil Armstrong\n\nFirst Moon walk The flight plan called for a crew rest period before leaving the module, but Armstrong asked for this to be moved to earlier in the evening, Houston time. When he and Aldrin were ready to go outside, Eagle was depressurized, the hatch was opened, and Armstrong made his way down the ladder. At the bottom of the ladder, while standing on a Lunar Module landing pad, Armstrong said, \"I'm going to step off the LM now\". He turned and set his left boot on the lunar surface at 02:56 UTC July 21, 1969, then said, \"That's one small step for [a] man, one giant leap for mankind.\" The exact time of Armstrong's first step on the Moon is unclear.\nArmstrong prepared his famous epigram on his own. In a post-flight press conference, he said that he chose the words \"just prior to leaving the LM.\" In a 1983 interview in Esquire magazine, he explained to George Plimpton: \"I always knew there was a good chance of being able to return to Earth, but I thought the chances of a successful touch down on the moon surface were about even money—fifty–fifty... Most people don't realize how difficult the mission was. So it didn't seem to me there was much point in thinking of something to say if we'd have to abort landing.\" In 2012, his brother Dean Armstrong said that Neil showed him a draft of the line months before the launch. Historian Andrew Chaikin, who interviewed Armstrong in 1988 for his book A Man on the Moon, disputed that Armstrong claimed to have conceived the line during the mission."]], "0_corpus": "wikipedia", "1_conv_id": "3b9a30227c4249f09fc662b96a592145", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "The first man to walk on the moon", "1_output": [["The first man to walk on the moon", "Neil Armstrong\n\nOn July 20, 1969, Armstrong and Apollo 11 Lunar Module (LM) pilot Buzz Aldrin became the first people to land on the Moon, and the next day they spent two and a half hours outside the Lunar Module Eagle spacecraft while Michael Collins remained in lunar orbit in the Apollo Command Module Columbia. When Armstrong first stepped onto the lunar surface, he famously said: \"That's one small step for man, one giant leap for mankind.\" It was broadcast live to an estimated 530 million viewers worldwide. Apollo 11 was a major U.S. victory in the Space Race, by fulfilling a national goal proposed in 1961 by President John F. Kennedy \"of landing a man on the Moon and returning him safely to the Earth\" before the end of the decade. Along with Collins and Aldrin, Armstrong was awarded the Presidential Medal of Freedom by President Richard Nixon and received the 1969 Collier Trophy. President Jimmy Carter presented him with the Congressional Space Medal of Honor in 1978, he was inducted into the National Aviation Hall of Fame in 1979, and with his former crewmates received the Congressional Gold Medal in 2009.\nAfter he resigned from NASA in 1971, Armstrong taught in the Department of Aerospace Engineering at the University of Cincinnati until 1979. He served on the Apollo 13 accident investigation and on the Rogers Commission, which investigated the Space Shuttle Challenger disaster. 
In 2012, Armstrong died due to complications resulting from coronary bypass surgery, at the age of 82."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1724266545.4981, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "795ba18d3fc54fd3a4b153d8d7b7aa45", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Visual Sound Recovery", "0_output": [["Visual Sound Recovery", "Title: Quantitative Constraint Logic Programming for Weighted Grammar Applications\n\nAbstract: Constraint logic grammars provide a powerful formalism for expressing complex logical descriptions of natural language phenomena in exact terms. Describing some of these phenomena may, however, require some form of graded distinctions which are not provided by such grammars. Recent approaches to weighted constraint logic grammars attempt to address this issue by adding numerical calculation schemata to the deduction scheme of the underlying CLP framework. Currently, these extralogical extensions are not related to the model-theoretic counterpart of the operational semantics of CLP, i.e., they do not come with a formal semantics at all. The aim of this paper is to present a clear formal semantics for weighted constraint logic grammars, which abstracts away from specific interpretations of weights, but nevertheless gives insights into the parsing problem for such weighted grammars. Building on the formalization of constraint logic grammars in the CLP scheme of Hoehfeld and Smolka 1988, this formal semantics will be given by a quantitative version of CLP. Such a quantitative CLP scheme can also be valuable for CLP tasks independent of grammars."]], "0_corpus": "arxiv", "1_conv_id": "f08d64859bdd458c85e33f8922eaec1b", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Visual Sound Recovery", "1_output": [["Visual Sound Recovery", "Title: Saving the Sonorine: Photovisual Audio Recovery Using Image Processing and Computer Vision Techniques\n\nAbstract: This paper presents a novel technique to recover audio from sonorines, an early 20th century form of analogue sound storage. Our method uses high resolution photographs of sonorines under different lighting conditions to observe the change in reflection behavior of the physical surface features and create a three-dimensional height map of the surface. Sound can then be extracted using height information within the surface's grooves, mimicking a physical stylus on a phonograph. Unlike traditional playback methods, our method has the advantage of being contactless: the medium will not incur damage and wear from being played repeatedly. We compare the results of our technique to a previously successful contactless method using flatbed scans of the sonorines, and conclude with future research that can be applied to this photovisual approach to audio recovery."]], "1_corpus": "arxiv"}
data/retrieval_individual-2cea24b1-39e9-480a-ba22-c617ea05c1fe.jsonl CHANGED
@@ -252,3 +252,5 @@
  {"tstamp": 1724256882.7484, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1724256880.5502, "finish": 1724256882.7484, "ip": "", "conv_id": "55aede8cdd1d41be8df54a5feeb359c5", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "return the top paper from CVPR 2023", "output": [["return the top paper from CVPR 2023", "Title: State-of-the-Art in Smart Contact Lenses for Human Machine Interaction\n\nAbstract: Contact lenses have traditionally been used for vision correction applications. Recent advances in microelectronics and nanofabrication on flexible substrates have now enabled sensors, circuits and other essential components to be integrated on a small contact lens platform. This has opened up the possibility of using contact lenses for a range of human-machine interaction applications including vision assistance, eye tracking, displays and health care. In this article, we systematically review the range of smart contact lens materials, device architectures and components that facilitate this interaction for different applications. In fact, evidence from our systematic review demonstrates that these lenses can be used to display information, detect eye movements, restore vision and detect certain biomarkers in tear fluid. Consequently, whereas previous state-of the-art reviews in contact lenses focused exclusively on biosensing, our systematic review covers a wider range of smart contact lens applications in HMI. Moreover, we present a new method of classifying the literature on smart contact lenses according to their six constituent building blocks, which are the sensing, energy management, driver electronics, communications, substrate and the interfacing modules. Based on recent developments in each of these categories, we speculate the challenges and opportunities of smart contact lenses for human-machine interaction. Moreover, we propose a novel self-powered smart contact lens concept with integrated energy harvesters, sensors and communication modules to enable autonomous operation. Our review is therefore a critical evaluation of current data and is presented with the aim of guiding researchers to new research directions in smart contact lenses."]], "corpus": "arxiv"}
  {"tstamp": 1724259962.7753, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1724259962.5628, "finish": 1724259962.7753, "ip": "", "conv_id": "435e8496b59e4262873847fa94d937a7", "model_name": "GritLM/GritLM-7B", "prompt": "ranking loss", "output": [["ranking loss", "Title: Learning Rank Functionals: An Empirical Study\n\nAbstract: Ranking is a key aspect of many applications, such as information retrieval, question answering, ad placement and recommender systems. Learning to rank has the goal of estimating a ranking model automatically from training data. In practical settings, the task often reduces to estimating a rank functional of an object with respect to a query. In this paper, we investigate key issues in designing an effective learning to rank algorithm. These include data representation, the choice of rank functionals, the design of the loss function so that it is correlated with the rank metrics used in evaluation. For the loss function, we study three techniques: approximating the rank metric by a smooth function, decomposition of the loss into a weighted sum of element-wise losses and into a weighted sum of pairwise losses. We then present derivations of piecewise losses using the theory of high-order Markov chains and Markov random fields. In experiments, we evaluate these design aspects on two tasks: answer ranking in a Social Question Answering site, and Web Information Retrieval."]], "corpus": "arxiv"}
  {"tstamp": 1724259962.7753, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1724259962.5628, "finish": 1724259962.7753, "ip": "", "conv_id": "c2acc9107e674e1596ad65f3835eaca7", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "ranking loss", "output": [["ranking loss", "Title: Learning Rank Functionals: An Empirical Study\n\nAbstract: Ranking is a key aspect of many applications, such as information retrieval, question answering, ad placement and recommender systems. Learning to rank has the goal of estimating a ranking model automatically from training data. In practical settings, the task often reduces to estimating a rank functional of an object with respect to a query. In this paper, we investigate key issues in designing an effective learning to rank algorithm. These include data representation, the choice of rank functionals, the design of the loss function so that it is correlated with the rank metrics used in evaluation. For the loss function, we study three techniques: approximating the rank metric by a smooth function, decomposition of the loss into a weighted sum of element-wise losses and into a weighted sum of pairwise losses. We then present derivations of piecewise losses using the theory of high-order Markov chains and Markov random fields. In experiments, we evaluate these design aspects on two tasks: answer ranking in a Social Question Answering site, and Web Information Retrieval."]], "corpus": "arxiv"}
+ {"tstamp": 1724266528.389, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1724266526.2142, "finish": 1724266528.389, "ip": "", "conv_id": "795ba18d3fc54fd3a4b153d8d7b7aa45", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Visual Sound Recovery", "output": [["Visual Sound Recovery", "Title: Quantitative Constraint Logic Programming for Weighted Grammar Applications\n\nAbstract: Constraint logic grammars provide a powerful formalism for expressing complex logical descriptions of natural language phenomena in exact terms. Describing some of these phenomena may, however, require some form of graded distinctions which are not provided by such grammars. Recent approaches to weighted constraint logic grammars attempt to address this issue by adding numerical calculation schemata to the deduction scheme of the underlying CLP framework. Currently, these extralogical extensions are not related to the model-theoretic counterpart of the operational semantics of CLP, i.e., they do not come with a formal semantics at all. The aim of this paper is to present a clear formal semantics for weighted constraint logic grammars, which abstracts away from specific interpretations of weights, but nevertheless gives insights into the parsing problem for such weighted grammars. Building on the formalization of constraint logic grammars in the CLP scheme of Hoehfeld and Smolka 1988, this formal semantics will be given by a quantitative version of CLP. Such a quantitative CLP scheme can also be valuable for CLP tasks independent of grammars."]], "corpus": "arxiv"}
+ {"tstamp": 1724266528.389, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1724266526.2142, "finish": 1724266528.389, "ip": "", "conv_id": "f08d64859bdd458c85e33f8922eaec1b", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Visual Sound Recovery", "output": [["Visual Sound Recovery", "Title: Saving the Sonorine: Photovisual Audio Recovery Using Image Processing and Computer Vision Techniques\n\nAbstract: This paper presents a novel technique to recover audio from sonorines, an early 20th century form of analogue sound storage. Our method uses high resolution photographs of sonorines under different lighting conditions to observe the change in reflection behavior of the physical surface features and create a three-dimensional height map of the surface. Sound can then be extracted using height information within the surface's grooves, mimicking a physical stylus on a phonograph. Unlike traditional playback methods, our method has the advantage of being contactless: the medium will not incur damage and wear from being played repeatedly. We compare the results of our technique to a previously successful contactless method using flatbed scans of the sonorines, and conclude with future research that can be applied to this photovisual approach to audio recovery."]], "corpus": "arxiv"}