Feynman Innovations
ajibawa-2023
AI & ML interests
LLM, RL, DL, ML, AGI. Developing LLMs (preferably fully fine-tuned) for various use cases.
Recent Activity
updated
a dataset
3 days ago
ajibawa-2023/test
published
a dataset
3 days ago
ajibawa-2023/test
liked
a Space
8 days ago
Qwen/QwQ-32B-Demo
Organizations
ajibawa-2023's activity

replied to
their
post
10 days ago
Post
3876
Hi All, I recently released two audio datasets generated from my earlier released dataset:
ajibawa-2023/Children-Stories-Collection
First Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection-Large has 5,600+ stories in .mp3 format.
Second Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection has 600 stories in .mp3 format.

posted
an
update
10 days ago
Post
3876
Hi All, I recently released two audio datasets generated from my earlier released dataset:
ajibawa-2023/Children-Stories-Collection
First Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection-Large has 5,600+ stories in .mp3 format.
Second Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection has 600 stories in .mp3 format.

reacted to
singhsidhukuldeep's
post with 🔥
2 months ago
Post
3617
Exciting Research Alert: Revolutionizing Complex Information Retrieval!
A groundbreaking paper from researchers at MIT, AWS AI, and UPenn introduces ARM (Alignment-Oriented LLM-based Retrieval Method), a novel approach to tackle complex information retrieval challenges.
>> Key Innovations
Information Alignment
The method first decomposes queries into keywords and aligns them with available data using both BM25 and embedding similarity, ensuring comprehensive coverage of information needs.
Structure Alignment
ARM employs a sophisticated mixed-integer programming solver to identify connections between data objects, exploring relationships beyond simple semantic matching.
Self-Verification
The system includes a unique self-verification mechanism where the LLM evaluates and aggregates results from multiple retrieval paths, ensuring accuracy and completeness.
>> Performance Highlights
The results are impressive:
- Outperforms standard RAG by up to 5.2 points in execution accuracy on the Bird dataset
- Achieves F1 scores up to 19.3 points higher than existing approaches on OTT-QA
- Reduces the number of required LLM calls while maintaining superior retrieval quality
>> Technical Implementation
The system uses a three-step process:
1. N-gram indexing and embedding computation for all data objects
2. Constrained beam decoding for information alignment
3. Mixed-integer programming optimization for structure exploration
This research represents a significant step forward in making complex information retrieval more efficient and accurate. The team's work demonstrates how combining traditional optimization techniques with modern LLM capabilities can solve challenging retrieval problems.
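The information-alignment step described above can be sketched in plain Python. This is a hypothetical illustration, not the paper's implementation: the query is split into keywords, each candidate data object is scored with a simplified BM25 (lexical signal) and a toy "embedding" similarity (bag-of-words cosine standing in for a real embedding model), and the two signals are blended. All function names and the blending weight `alpha` are invented for this sketch.

```python
# Hypothetical sketch of ARM-style information alignment: combine a
# BM25-like lexical score with an embedding-similarity score to rank
# candidate data objects against query keywords. Toy code, not the paper's.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Simplified BM25 of one document against the query keywords."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

def cosine_bow(a_terms, b_terms):
    """Toy embedding similarity: cosine over bag-of-words counts."""
    ca, cb = Counter(a_terms), Counter(b_terms)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def align(query, docs, alpha=0.5):
    """Rank documents by a blend of lexical and embedding-style similarity."""
    q = tokenize(query)
    corpus = [tokenize(d) for d in docs]
    scored = []
    for doc, d in zip(docs, corpus):
        s = alpha * bm25_score(q, d, corpus) + (1 - alpha) * cosine_bow(q, d)
        scored.append((s, doc))
    return [doc for s, doc in sorted(scored, reverse=True)]

docs = [
    "player salary figures by team",
    "weather observations by city and date",
    "team rosters and positions",
]
print(align("player salary by team", docs)[0])  # prints: player salary figures by team
```

The actual ARM system then feeds aligned candidates into a mixed-integer programming solver for structure exploration (step 3 above), which this sketch does not attempt.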

reacted to
Tonic's
post with 🔥
2 months ago
Post
2398
🙋🏻‍♂️ Hey there folks,
Goedel's Theorem Prover is now being demoed on Hugging Face: Tonic/Math
Give it a try!

reacted to
hba123's
post with 🔥
2 months ago
Post
1763
We developed a method that ensures almost-sure safety (i.e., safety with probability approaching 1), and we proved this result. We then present a practical implementation, which we call InferenceGuard. InferenceGuard achieves impressive practical results: 91.04% safety on Alpaca-7B and 100% on Beaver 7B-v3.
Now, it is easy to get high safety results like those if we accept a dumb model, e.g., one that just doesn't answer, or answers with EOS, and so on. However, our goal is not only to have safe results but also to make sure that the rewards are high: we want a good trade-off between safety and rewards! That's exactly what we show. InferenceGuard achieves that!
Check it out: Almost Surely Safe Alignment of Large Language Models at Inference-Time (2502.01208)

reacted to
davanstrien's
post
3 months ago
Post
1981
Updated the ColPali Query Generator Space
davanstrien/ColPali-Query-Generator to use
Qwen/Qwen2.5-VL-7B-Instruct.
Given an input image, it generates several queries along with explanations to justify them. This approach can generate synthetic data for fine-tuning ColPali models.

reacted to
DawnC's
post with ❤️
4 months ago
Post
2332
PawMatchAI: Making Breed Selection More Intuitive!
Excited to share the latest update to this AI-powered companion for finding your perfect furry friend! I've made significant architectural improvements to enhance breed recognition accuracy and feature detection.
✨ What's New?
Enhanced breed recognition through advanced morphological feature analysis:
- Implemented a sophisticated feature extraction system that analyzes specific characteristics like body proportions, head features, tail structure, fur texture, and color patterns
- Added an intelligent attention mechanism that dynamically focuses on the most relevant features for each image
- Improved multi-dog detection capabilities through enhanced spatial feature analysis
- Achieved better precision in distinguishing subtle breed characteristics
🎯 Key Features:
Smart breed recognition powered by advanced AI architecture
Visual matching scores with intuitive color indicators
Detailed breed comparisons with interactive tooltips
Lifestyle-based recommendations tailored to your needs
Project Vision
Combining my passion for AI and pets, this project represents another step toward creating meaningful AI applications. Each update aims to make the breed selection process more accessible while improving the underlying technology.
Try it now: DawnC/PawMatchAI
Your likes ❤️ on this Space fuel this project's growth!
#AI #MachineLearning #DeepLearning #PyTorch #ComputerVision #TechForLife
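The attention mechanism described above (dynamically focusing on the most relevant morphological features) could look something like the following minimal sketch. The feature names, relevance logits, and evidence scores are all invented for illustration; PawMatchAI's actual architecture is not shown here.

```python
# Hypothetical sketch of attention over morphological features: softmax the
# per-feature relevance logits into weights, then aggregate breed evidence
# so the most relevant features dominate. Illustrative only.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(feature_scores):
    """Weight per-feature breed evidence by relevance; return (weights, score)."""
    names = list(feature_scores)
    logits = [feature_scores[n]["relevance"] for n in names]
    weights = softmax(logits)
    combined = sum(w * feature_scores[n]["evidence"] for w, n in zip(weights, names))
    return dict(zip(names, weights)), combined

# Invented example: for this image, body proportions matter most.
features = {
    "body_proportions": {"relevance": 2.0, "evidence": 0.7},
    "head_features":    {"relevance": 0.5, "evidence": 0.4},
    "tail_structure":   {"relevance": -1.0, "evidence": 0.9},
}
weights, score = attend(features)
```

In a real model the relevance logits would be produced by a learned layer conditioned on the image, rather than hard-coded.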
Good work! Keep it up!!

reacted to
clem's
post with ❤️
4 months ago
Post
4385
Cool to see
@ylecun
joining the top 10 of most followed on HF!
(and leaderboard by @mvaloatto is here: mvaloatto/TCTF)

reacted to
AkimfromParis's
post
4 months ago
Post
1762
💵 Polymarket is leveraging the “Chatbot Arena LLM Leaderboard” on Hugging Face for online gambling on “Top AI model on January 31?”.
As of January 3rd, 2025:
1. Gemini (83%) 2. ChatGPT (13%) 3. Other (2%) 4. Claude (2%) 5. Grok (1%) 6. Llama (<1%)
🇺🇸 The market opinion follows historical data. It's clearly biased towards the historical US AI giants, yet Polymarket is forbidden in the USA and for US citizens.
🇨🇳 In the “Other” bucket, you might have the Chinese AI labs that are probably the future AI leaders (Qwen, DeepSeek, Yi).
⚖️ In the market resolution, if two models are tied in the evaluation, they will take alphabetical order (e.g., if both were tied, “Google” would resolve to “Yes” and “xAI” would resolve to “No”).
That might be illegal usage under the Chatbot Arena policy? And maybe Hugging Face's? @clem
Or maybe authors and contributors should get a cut each month as “market makers”. @weichiang @angelopoulos
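The tie-break rule described above (alphabetical order among tied top models) can be sketched in a few lines. The function and field names are hypothetical, purely to make the resolution rule concrete:

```python
# Sketch of the market's stated tie-break: among models tied at the top of
# the leaderboard, the market resolves to the alphabetically-first name.
# Hypothetical code illustrating the rule, not Polymarket's actual logic.
def resolve_market(scores):
    """scores: {company_name: leaderboard_score}; returns the resolving name."""
    top = max(scores.values())
    tied = sorted(name for name, s in scores.items() if s == top)
    return tied[0]  # alphabetical order breaks ties

winner = resolve_market({"Google": 1360, "xAI": 1360, "OpenAI": 1350})
# With Google and xAI tied, "Google" resolves to "Yes" under this rule.
```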

reacted to
cfahlgren1's
post
4 months ago
Post
2257
You'll notice the AI in the SQL Console is much better at working with ChatML conversations.
Here's an example of unnesting the cfahlgren1/react-code-instructions dataset in less than 10 seconds just by asking it. Check it out here: cfahlgren1/react-code-instructions
- "show me the average assistant response length"
- "extract user, system, and assistant messages into separate columns"
It's super easy to work with conversational datasets now using natural language 🗣️
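To see what those two natural-language requests do under the hood, here is a plain-Python sketch of the same unnesting: flatten each ChatML message into its own row, then compute the average assistant response length. The sample conversations are invented for illustration; the real dataset is far larger.

```python
# Plain-Python sketch of "unnesting" a ChatML dataset: one flat row per
# message, then an aggregate over assistant messages. Sample data invented.
rows = [
    {"messages": [
        {"role": "system", "content": "You are a React expert."},
        {"role": "user", "content": "Build a counter component."},
        {"role": "assistant", "content": "Here is a minimal counter in React..."},
    ]},
    {"messages": [
        {"role": "user", "content": "Add a reset button."},
        {"role": "assistant", "content": "Sure."},
    ]},
]

# "extract user, system, and assistant messages into separate columns" ->
# one flat row per message, keyed by conversation index and role.
flat = [
    {"conversation": i, "role": m["role"], "content": m["content"]}
    for i, row in enumerate(rows)
    for m in row["messages"]
]

# "show me the average assistant response length"
assistant = [r["content"] for r in flat if r["role"] == "assistant"]
avg_len = sum(len(c) for c in assistant) / len(assistant)
```

The SQL Console generates the equivalent query (e.g. via `UNNEST` over the messages column) for you; this sketch just shows the shape of the transformation.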

reacted to
Kseniase's
post
4 months ago
Post
3157
**15 Agentic Systems and Frameworks of 2024**
This year, we started our “AI Agents and Agentic Workflows” series (https://www.turingpost.com/t/AI-Agents) to explore everything about AI agents step by step: all the vocabulary, how they work, and how to build them.
The huge interest in this series and the large number of studies conducted on agents showed that it was one of the most popular and important themes of the year. In 2025, agents will most likely reach new highs, and we will be covering that for you. Now, let’s review the agentic systems that have emerged this year.
Here is a list of 15 agentic systems and frameworks of 2024:
1. GUI Agents: A Survey (2412.13501)
2. Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level (2411.03562)
3. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (2408.06292)
4. MALT: Improving Reasoning with Multi-Agent LLM Training (2412.01928)
5. Agent S: An Open Agentic Framework that Uses Computers Like a Human (2410.08164)
6. Automated Design of Agentic Systems (2408.08435)
7. AgentInstruct: Toward Generative Teaching with Agentic Flows (2407.03502)
8. AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant (2410.18603)
9. WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents (2410.07484)
10. Generative Agent Simulations of 1,000 People (2411.10109)
11. DynaSaur: Large Language Agents Beyond Predefined Actions (2411.01747)
12. PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking (2410.12375)
13. Generative World Explorer (2411.11844)
14. Bel Esprit: Multi-Agent Framework for Building AI Model Pipelines (2412.14684)
15. AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions (2410.20424)
Thanks for reading Turing Post!
Subscribe to receive new posts straight into your inbox -> https://www.turingpost.com/subscribe

reacted to
sayakpaul's
post with 🔥
4 months ago
Post
4421
Commits speak louder than words 🤪
* 4 new video models
* Multiple image models, including SANA & Flux Control
* New quantizers -> GGUF & TorchAO
* New training scripts
Enjoy this holiday-special Diffusers release 🤗
Notes: https://github.com/huggingface/diffusers/releases/tag/v0.32.0

replied to
MoritzLaurer's
post
4 months ago
Very nice! Excellent!

reacted to
MoritzLaurer's
post with 🔥
4 months ago
Post
1198
"Open-source AI: year in review 2024": amazing Space with lots of data-driven insights into AI in 2024! Check it out π
huggingface/open-source-ai-year-in-review-2024

replied to
di-zhang-fdu's
post
4 months ago
Thanks!