angarrido 1 day ago [-]
Local inference is getting solved pretty quickly.
What still seems unsolved is how to safely use it on real private systems (large codebases, internal tools, etc.) where you can't risk leaking context, even accidentally.
In our experience that constraint changes the problem much more than the choice of runtime or SDK.
WillAdams 2 days ago [-]
Do you really mean/want to say:
>...and without permission on any device.
I would be much more interested in a tool which only allows AI to run within the boundaries which I choose and only when I grant my permission.
elchiapp 2 days ago [-]
That line means that you don't need to create an account and get an API key from a provider (i.e. "asking for permission") to run inference. The main advantage is precisely that local AI runs on your terms, including how data is handled, and provably so, unlike cloud APIs where there's still an element of trust with the operator.
(Disclaimer: I work on QVAC)
WillAdams 2 days ago [-]
OIC.
Should it be re-worded so as to make that unambiguous?
sull 2 days ago [-]
thoughts on mesh-llm?
mafintosh 2 days ago [-]
The modular philosophy of the full stack is also to give you the building blocks for exactly this :)
WillAdams 2 days ago [-]
Looking through the rest of the material, I can see that, but at first glance this point is easy to misread.
moffers 2 days ago [-]
This is all very ambitious. I am not exactly sure where someone is supposed to start. With the connections to Pear and Tether I can see where the lines meet, but is the idea that someone takes this and builds…Skynet? AI Cryptocurrency schemes? Just a local LLM chat?
Although an LLM chat is the starting point for many, there are plenty of other use cases. We've had people build domotics systems that control their house using natural language, vision-based assistants for surveillance (e.g. sending a notification describing what's happening instead of a classic "Movement detected"), etc., and everything stays on your device / in your network.
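A minimal sketch of that "descriptive notification" flow, assuming a local vision-language model is available behind a `describe_frame` callable (hypothetical name; QVAC's actual API may differ):

```python
def notify(describe_frame, frame, min_words=3):
    """Turn a motion event into a descriptive notification.

    describe_frame: a callable taking an image frame and returning a
    natural-language description (e.g. a locally running vision model).
    Falls back to the classic generic message if the description is
    too short to be useful.
    """
    description = describe_frame(frame).strip()
    if len(description.split()) < min_words:
        return "Movement detected"  # generic fallback
    return f"Camera alert: {description}"


# Stand-in for a local model, just to show the shape of the call:
fake_model = lambda frame: "a delivery person is leaving a package at the door"
print(notify(fake_model, frame=None))
```

Since the model runs locally, the frame and its description never leave the device; only the final notification string needs to go out over the network.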
elchiapp 2 days ago [-]
Hey folks, I'm part of the QVAC team. Happy to answer any questions!
knocte 1 days ago [-]
Are there incentives for nodes to join the swarm (become a seeder)? If so, how exactly do they get paid in a decentralized way? Is there a URL with more info about this?
mafintosh 24 hours ago [-]
It's through the Holepunch stack (I'm the original creator). Incentives for sharing are social, like in BitTorrent: if I use a model with my friends and family, I can help rehost it for them.