daavoo

AI & ML interests

None yet

Organizations

Multi🤖Transformers, Mozilla.ai, Hugging Face Discord Community

daavoo's activity

replied to their post 3 days ago

Currently, we only support two patterns that can be implemented (almost) consistently across frameworks:
single agent, and multi-agent in the form of a "manager" + "managed agents".

Don't hesitate to open an issue at https://github.com/mozilla-ai/any-agent/issues to discuss what other patterns would be useful.
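
For reference, a rough sketch of both patterns using the same API shown in the posts below (the framework, model ids, and tools here are just examples):

from any_agent import AgentConfig, AgentFramework, AnyAgent

framework = AgentFramework("smolagents")  # any supported framework

# Pattern 1: single agent
single_agent = AnyAgent.create(
    framework,
    AgentConfig(model_id="gpt-4o-mini"),
)

# Pattern 2: multi-agent ("manager" + "managed agents")
manager_agent = AnyAgent.create(
    framework,
    AgentConfig(
        model_id="gpt-4.1-mini",
        instructions="You are the main agent. Use the other available agents to find an answer",
    ),
    managed_agents=[
        AgentConfig(
            name="search_web_agent",
            description="An agent that can search the web",
            model_id="gpt-4.1-nano",
            tools=["any_agent.tools.search_web"],
        ),
    ],
)
manager_agent.run("Which Agent Framework is the best??")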

reacted to Xenova's post with 🤗🚀🚀 4 days ago
Reasoning models like o3 and o4-mini are advancing faster than ever, but imagine what will be possible when they can run locally in your browser! 🤯

Well, with 🤗 Transformers.js, you can do just that! Here's Zyphra's new ZR1 model running at over 100 tokens/second on WebGPU! ⚡️

Giving models access to browser APIs (like File System, Screen Capture, and more) could unlock an entirely new class of web experiences that are personalized, interactive, and run locally in a secure, sandboxed environment.

For now, try out the demo! 👇
webml-community/Zyphra-ZR1-WebGPU
posted an update 5 days ago
New release in https://github.com/mozilla-ai/any-agent 🤖

You can now use "managed_agents" in langchain and llama_index too, in addition to the other frameworks:

from any_agent import AgentConfig, AgentFramework, AnyAgent
from any_agent.tracing import setup_tracing

framework = AgentFramework("langchain")  # also works with AgentFramework("llama_index") and the rest of the frameworks
setup_tracing(framework)

agent = AnyAgent.create(
    framework,
    AgentConfig(
        model_id="gpt-4.1-mini",
        instructions="You are the main agent. Use the other available agents to find an answer",
    ),
    managed_agents=[
        AgentConfig(
            name="search_web_agent",
            description="An agent that can search the web",
            model_id="gpt-4.1-nano",
            tools=["any_agent.tools.search_web"]
        ),
        AgentConfig(
            name="visit_webpage_agent",
            description="An agent that can visit webpages",
            model_id="gpt-4.1-nano",
            tools=["any_agent.tools.visit_webpage"]
        )
    ]
)
agent.run("Which Agent Framework is the best??")
reacted to stefan-french's post with 😎 7 days ago
reacted to etemiz's post with 👀 10 days ago
It looks like the Llama 4 team gamed the LMArena benchmarks by making their Maverick model output emojis, longer responses, and ultra-high enthusiasm! Is that ethical or not? They could certainly do a better job by working with teams like llama.cpp, just like the Qwen team did with Qwen 3 before releasing the model.

In 2024 I started playing with LLMs, just before the release of Llama 3. I think Meta has contributed a lot to this field and is still contributing. Most LLM fine-tuning tools are based on their models, and the inference tool llama.cpp has their name on it. Llama 4 is fast and maybe not the greatest in real performance, but it still deserves respect. But my enthusiasm towards Llama models is probably because they rank highest on my AHA Leaderboard:

https://sheet.zoho.com/sheet/open/mz41j09cc640a29ba47729fed784a263c1d08

Looks like they did a worse job compared to Llama 3.1 this time. Llama 3.1 has been on top for a while.

Ranking high on my leaderboard is not correlated with technological progress or parameter size. In fact, if LLM training is drifting away from human alignment thanks to synthetic datasets or something else (?), it could easily be inversely correlated with technological progress. There does seem to be a correlation with the location of the builders (West or East): Western models rank higher. This has become more visible as the leaderboard progressed; in the past there was less correlation. And Europeans seem to be in the middle!

Whether you like positive vibes from AI or not, maybe the time is getting closer when humans will be susceptible to being gamed by an AI? What do you think?
posted an update 11 days ago
Wondering how the new Google Agent Development Kit (ADK) compares against other frameworks? 🤔 You can try it in any-agent 🚀

https://github.com/mozilla-ai/any-agent

from any_agent import AgentConfig, AgentFramework, AnyAgent

agent = AnyAgent.create(
    AgentFramework("google"),
    AgentConfig(
        model_id="gpt-4o-mini"
    )
)
agent.run("Which Agent Framework is the best??")
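
Since the API is the same across frameworks, one (hypothetical) way to compare them side by side is to run the same prompt in a loop; the framework names below are the ones used elsewhere in these posts, and I'm assuming run() returns the final answer:

from any_agent import AgentConfig, AgentFramework, AnyAgent

prompt = "Which Agent Framework is the best??"
for name in ["google", "smolagents", "openai", "langchain", "llama_index"]:
    agent = AnyAgent.create(
        AgentFramework(name),
        AgentConfig(model_id="gpt-4o-mini"),
    )
    result = agent.run(prompt)  # assuming run() returns the final answer as text
    print(f"{name}: {result}")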

posted an update 13 days ago
After working on agent evaluation 🔍🤖 over the last few weeks, we started to accumulate code that makes trying different agent frameworks easier. From that code, we have built and just released a small library called any-agent.


Give it a try and a ⭐: https://github.com/mozilla-ai/any-agent

from any_agent import AgentConfig, AgentFramework, AnyAgent

agent = AnyAgent.create(
    AgentFramework("smolagents"),  # or openai, langchain, llama_index
    AgentConfig(
        model_id="gpt-4o-mini"
    )
)
agent.run("Which Agent Framework is the best??")
reacted to stefan-french's post with 🔥🚀 about 1 month ago
reacted to sharpenb's post with 🔥 about 1 month ago
We open-sourced the pruna package, which can be easily installed with pip install pruna :) It allows you to easily compress and evaluate AI models, including transformers and diffusers.

- Github repo: https://github.com/PrunaAI/pruna
- Documentation: https://docs.pruna.ai/en/stable/index.html

With open-sourcing, people can now inspect and contribute to the code. Beyond the code, we provide a detailed README, tutorials, benchmarks, and documentation to make compression, evaluation, and saving/loading/serving of AI models transparent.

Happy to share it with you and always interested in collecting your feedback :)
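
A minimal usage sketch (assuming pruna's smash/SmashConfig API; the specific pipeline and the "cacher" option below are assumptions, so check the docs):

# Minimal sketch, assuming the smash/SmashConfig API; config keys are assumptions.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"  # assumed compression option, see docs.pruna.ai
smashed_pipe = smash(model=pipe, smash_config=smash_config)
smashed_pipe("a photo of a cat").images[0]  # assuming the smashed pipeline is called like the original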
posted an update about 1 month ago
🤖 🗺️ Mapped all(?) the swimming pools 🏊 around another town with https://github.com/mozilla-ai/osm-ai-helper.

This time, I have mapped and contributed more than 100 swimming pools around my wife's hometown to https://www.openstreetmap.org. It only took about 20 min to find them all (+ ~3 min verification) on a free Colab GPU 🚀

Try it yourself around a single point: mozilla-ai/osm-ai-helper
reacted to chansung's post with 👍 about 1 month ago
Gemma 3 Release in a nutshell
(it seems function calling is not supported, whereas the announcement said it was)
posted an update about 1 month ago
🤖 🗺️ Pushed an update to support processing entire areas (i.e. a city) in https://github.com/mozilla-ai/osm-ai-helper.

I have mapped and contributed all(?) the swimming pools around my hometown to https://www.openstreetmap.org, taking about 1 h to process (+ 15 min verification) on a free Colab GPU 🚀

Try it yourself: mozilla-ai/osm-ai-helper

And check https://github.com/mozilla-ai/osm-ai-helper to find the demo notebooks.
posted an update about 1 month ago