We're excited to introduce MemoryCode, a novel synthetic dataset designed to rigorously evaluate LLMs' ability to track and execute coding instructions across multiple sessions. MemoryCode simulates realistic workplace scenarios where a mentee (the LLM) receives coding instructions from a mentor amidst a stream of both relevant and irrelevant information.
But what makes MemoryCode unique? The combination of the following (a toy sketch of the setup follows the list):
- Multi-Session Dialogue Histories: MemoryCode consists of chronological sequences of dialogues between a mentor and a mentee, mirroring real-world interactions between coworkers.
- Interspersed Irrelevant Information: Critical instructions are deliberately interspersed with unrelated content, replicating the information overload common in office environments.
- Instruction Updates: Coding rules and conventions can be updated multiple times throughout the dialogue history, requiring LLMs to track and apply the most recent information.
- Prospective Memory: Unlike previous datasets that cue information retrieval, MemoryCode requires LLMs to spontaneously recall and apply relevant instructions without explicit prompts.
- Practical Task Execution: LLMs are evaluated on their ability to use the retrieved information to perform practical coding tasks, bridging the gap between information recall and real-world application.
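To make this concrete, here is a minimal, hypothetical sketch of how a MemoryCode-style item could be framed: a dialogue history with an instruction, filler, and a later update, followed by a task the model must complete using only the most recent rules. The session text, task, and checking logic are illustrative assumptions, not the actual dataset code.

```python
# Hypothetical sketch of a MemoryCode-style evaluation item (not the real dataset code).

sessions = [
    "Mentor: Welcome aboard! Remember, all function names should use camelCase.",
    "Mentor: The cafeteria is closed on Fridays, by the way.",  # irrelevant filler
    "Mentor: Update: we switched style guides, use snake_case for function names now.",
    "Mentor: Also, every function needs a docstring.",
]

task = "Write a function that returns the sum of a list of numbers."

# The prompt concatenates the whole dialogue history plus the task, with no explicit
# cue about which instructions to retrieve (prospective memory).
prompt = "\n\n".join(sessions) + "\n\nMentee task: " + task

def follows_latest_instructions(code: str) -> bool:
    """Toy check: the latest naming rule (snake_case) and the docstring rule must both be applied."""
    has_snake_case = "def " in code and not any(
        ch.isupper() for ch in code.split("def ")[1].split("(")[0]
    )
    has_docstring = '"""' in code or "'''" in code
    return has_snake_case and has_docstring

# generation = your_llm.generate(prompt)          # model under evaluation (placeholder)
# print(follows_latest_instructions(generation))
```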
Our Findings
1. While even small models can handle isolated coding instructions, the performance of top-tier models like GPT-4o dramatically deteriorates when instructions are spread across multiple sessions.
2. This performance drop isn't simply due to the length of the context. Our analysis indicates that LLMs struggle to reason compositionally over sequences of instructions and updates. They have difficulty keeping track of which instructions are current and how to apply them.
I've got my hands on an AMD Instinct MI100. Used, it costs about the same as a V100, but on paper it has more TOPS (14 TOPS for the V100 vs 23 TOPS for the MI100), and its HBM runs at a higher clock, giving 1.2 TB/s of memory bandwidth. For quantized inference it's a beast (the MI50 was also surprisingly fast).
For LoRA training, in this quick test I could not get the bitsandbytes (bnb) config to work, so I'm running the fine-tune on the full-size model.
I'll share everything I've learned about the install, setup and settings in a blog post, together with the 3D design for the cooling shroud.
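For context, a minimal LoRA setup with Hugging Face PEFT that skips the bitsandbytes quantization config entirely and attaches adapters to the full-precision model might look like the sketch below. The checkpoint name, target modules, and hyperparameters are placeholder assumptions, not the exact settings from this test.

```python
# Minimal LoRA sketch on a full-precision (non-quantized) model, assuming transformers,
# peft and accelerate are installed; model name and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # full-size weights, no bnb 4/8-bit config
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```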
Introducing the Synthetic Data Generator, a user-friendly application that takes a no-code approach to creating custom datasets with Large Language Models (LLMs). The best part: a simple step-by-step process makes dataset creation a non-technical breeze, so anyone can build datasets and models in minutes without writing any code.
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute!
How? By combining step-wise reward models with tree search algorithms :)
We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"
We're open sourcing the full recipe and sharing a detailed blog post.
In our blog post we cover:
- Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.
- Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets (a toy sketch of the verifier-guided idea follows this list).
- Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.
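As a rough illustration of the verifier-guided idea (not the actual Search and Learn code, and much simpler than DVTS), the sketch below samples several candidate solutions and keeps the one a reward model scores highest. The generator and scorer are stubbed placeholders standing in for an LLM (e.g. served with vLLM) and a step-wise process reward model.

```python
# Toy best-of-N selection with a verifier/reward model; generator and scorer are
# placeholders, not the actual recipe (which uses step-wise rewards and tree search).
import random
from typing import Callable, List

def best_of_n(
    problem: str,
    generate: Callable[[str], str],      # samples one candidate solution
    score: Callable[[str, str], float],  # verifier: higher = more likely correct
    n: int = 16,
) -> str:
    candidates: List[str] = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda sol: score(problem, sol))

# --- stand-ins so the sketch runs end to end ---
def fake_generate(problem: str) -> str:
    return f"solution draft #{random.randint(0, 999)} for: {problem}"

def fake_score(problem: str, solution: str) -> float:
    return random.random()  # a real PRM would score each reasoning step

if __name__ == "__main__":
    print(best_of_n("What is 13 * 17?", fake_generate, fake_score, n=8))
```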
Last Week in Medical AI: Top Research Papers/Models (December 7 – December 14, 2024)
Medical LLM & Other Models
- PediaBench: Chinese Pediatric LLM
  - Comprehensive pediatric dataset
  - Advanced benchmarking platform
  - Chinese healthcare innovation
- BiMediX: Bilingual Medical LLM
  - Multilingual medical expertise
  - Diverse medical knowledge integration
  - Cross-cultural healthcare insights
- MMedPO: Vision-Language Medical LLM
  - Clinical multimodal optimization
  - Advanced medical image understanding
  - Precision healthcare modeling
Frameworks and Methodologies
- TOP-Training: Medical Q&A Framework
- Hybrid RAG: Secure Medical Data Management
- Zero-Shot ATC Clinical Coding
- Chest X-Ray Diagnosis Architecture
- Medical Imaging AI Democratization
Benchmarks & Evaluations
- KorMedMCQA: Korean Healthcare Licensing Benchmark
- Large Language Model Medical Tasks
- Clinical T5 Model Performance Study
- Radiology Report Quality Assessment
- Genomic Analysis Benchmarking
Medical LLM Applications
- BRAD: Digital Biology Language Model
- TCM-FTP: Herbal Prescription Prediction
- LLaSA: Activity Analysis via Sensors
- Emergency Department Visit Predictions
- Neurodegenerative Disease AI Diagnosis
- Kidney Disease Explainable AI Model
Ethical AI & Privacy
- Privacy-Preserving LLM Mechanisms
- AI-Driven Digital Organism Modeling
- Biomedical Research Automation
- Multimodality in Medical Practice
self.brag(): Kokoro finally got 300 votes in Pendrokar/TTS-Spaces-Arena after @Pendrokar was kind enough to add it 3 weeks ago. Discounting the small sample size of votes, I think it is safe to say that hexgrad/Kokoro-TTS is currently a top 3 model among the contenders in that Arena. This is notable because:
- At 82M params, Kokoro is one of the smaller models in the Arena
- MeloTTS has 52M params
- F5 TTS has 330M params
- XTTSv2 has 467M params
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.
- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL!
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook!
- SmolVLM can be fine-tuned on a Google Colab, or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models in video benchmarks, despite not even being trained on videos!
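For anyone who wants to try it, a minimal inference sketch with transformers might look like the following; the checkpoint name and the chat-template usage are assumptions based on the standard Hugging Face VLM API rather than an official snippet.

```python
# Sketch of SmolVLM inference with transformers; checkpoint name and image path are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("example.jpg")  # any local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image briefly."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```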