They have a public GitHub repo with code examples; that'd be a good place to start. But you could also check out the Electrosmith Daisy Seed, a little audio dev board. Someone has made a pedal enclosure for it, so you can DIY something VERY similar to this (minus the vibe-coding tools).
I'm on a deep dive into fine-tuning how I organize and manage my personal knowledge base, focused on entity extraction and strategic information retrieval. It's based on the AgREE paper from Apple[0], with everything persisted in Memgraph.
I've got a nice ingest/extract/enrich pipeline going for the graph, and I'm currently working on a fork of claude-mem[1] that uses the graph as a contextual backend for agentic coding workflows.
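The persistence step is less code than it sounds. A minimal sketch, assuming Memgraph's Bolt compatibility with the neo4j Python driver; the triple schema and names here are illustrative, not from the AgREE paper:

```python
# Minimal sketch: persist extracted entities/relations into Memgraph.
# Assumes Memgraph at bolt://localhost:7687 and the `neo4j` driver
# (Memgraph speaks Bolt). The extraction step that produces the
# triples is whatever LLM-based extractor you use upstream.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

def persist_triple(tx, subj: str, rel: str, obj: str, source: str):
    # MERGE keeps the graph idempotent across repeated ingests.
    tx.run(
        "MERGE (a:Entity {name: $subj}) "
        "MERGE (b:Entity {name: $obj}) "
        "MERGE (a)-[r:RELATES {type: $rel}]->(b) "
        "SET r.source = $source",
        subj=subj, obj=obj, rel=rel, source=source,
    )

def ingest(doc_id: str, triples: list[tuple[str, str, str]]):
    with driver.session() as session:
        for subj, rel, obj in triples:
            session.execute_write(persist_triple, subj, rel, obj, doc_id)

ingest("note-42", [("Memgraph", "IMPLEMENTS", "Cypher")])
```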
If you're like me, you're doing it to establish a greater level of trust in generated code. It feels easier to draw the hard guard-rails and have something fill in the middle, giving both you and the models a reference point, or contract, for what's "correct".
This is a great use case for sub-agents, IMO. By default, sub-agents use Sonnet. You can have Opus orchestrate the various agents and get (close to) the best of both worlds.
In this case I don't think the controller needs to be the smartest model. I use Sonnet as the main driver and pass the heavy thinking (via Zen MCP) to Gemini Pro, for example, but I could use OpenAI or Opus, or all of them via OpenRouter.
Subagents seem pretty similar to using Zen MCP with OpenRouter, but maybe better, or at least more turnkey? I'll be checking them out.
Amp (ampcode.com) uses Sonnet as its main model and has OpenAI's o3 as a special-purpose tool/subagent, which it can call into when it needs particularly advanced reasoning.
Interestingly, I found that prompting it to ask the o3 submodel (which they call The Oracle) to check Sonnet's working on a debugging solution was helpful. Even more interesting: Sonnet appeared to do a better job once I'd prompted that. Like chain-of-thought prompting, asking it to put forward an explanation to be checked may actually have triggered more effective thinking.
Is there a way to get persistent sub-agents? I'd love to have a bunch of YAML files in my repository, one per sub-agent, and have those automatically used across all the Claude Code instances I have on multiple machines (I dev on a laptop and a desktop), or across the team.
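For what it's worth, Claude Code supports pretty much this for project-level sub-agents: Markdown files with YAML frontmatter checked into the repo (my recollection is they live under `.claude/agents/`, with `~/.claude/agents/` for user-level ones). The file below is an illustrative sketch, not an official example:

```markdown
<!-- .claude/agents/code-reviewer.md (committed to the repo, so every
     machine and teammate picks up the same sub-agent definition) -->
---
name: code-reviewer
description: Reviews diffs for correctness and style. Use proactively
  after code changes. Returns only a short list of findings.
tools: Read, Grep, Glob, Bash
---
You are a code reviewer. Examine the current diff, run the linters,
and report ONLY actionable findings with file and line references.
Do not echo file contents back to the caller.
```

Since the directory is committed, every clone on your laptop, desktop, or a teammate's machine gets the same definitions.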
In my experience the best use for subagents is saving context.
Example: you need to review some code to see if it has proper test coverage.
If you use the "main" context, it'll waste tokens on reading the codebase and running tests to see coverage results.
But if you launch an agent (pretty much a subprocess), it can use a "disposable" context to do that and return only the relevant data: which bits of the code need more tests.
Now you can either use the main context to implement the tests or, if you're feeling really fancy, launch another sub-agent to do it.
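For example, the distilled result the coverage sub-agent hands back can be a few dozen tokens even though it read the whole report. A sketch of that reduction step, assuming coverage.py's `coverage json` output format (the 80% threshold is arbitrary):

```python
# Sketch: boil a full coverage run down to the only data the main
# context needs. Assumes a coverage.json produced by `coverage json`
# (coverage.py); the 80% threshold is an arbitrary choice.
import json

with open("coverage.json") as f:
    report = json.load(f)

needs_tests = [
    path
    for path, info in report["files"].items()
    if info["summary"]["percent_covered"] < 80.0
]

# This short list is all that returns to the orchestrating context.
print("\n".join(needs_tests))
```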
AFAIK subagents inherit the default model since v1.0.64. At least that's the case for me with the Claude Code SDK — not providing a specific model makes subagents use claude-opus-4-1-20250805.
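If you'd rather not rely on inheritance either way, you can pin the top-level model explicitly. A minimal sketch, assuming the Python claude-code-sdk's `query`/`ClaudeCodeOptions` API as I recall it (worth double-checking against the SDK docs):

```python
# Sketch: pin the top-level model (which sub-agents then inherit)
# instead of relying on the SDK default. Option and function names
# are from my recollection of claude-code-sdk; verify before use.
import anyio
from claude_code_sdk import query, ClaudeCodeOptions

async def main():
    options = ClaudeCodeOptions(model="claude-sonnet-4-20250514")
    async for message in query(prompt="Summarize this repo's layout",
                               options=options):
        print(message)

anyio.run(main)
```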
Very cool indeed. I started building something similar, relying on Auto Export [https://apps.apple.com/us/app/health-auto-export-json-csv/id...] to export my health data to an endpoint that stores it in a SQLite database. I never got as far as building an MCP server around the data, but that's certainly the direction I was heading: the initial idea was to use my health data as context for a health/fitness agent that would recommend workouts, check in on things, etc.
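The receiving side is pleasantly small. A sketch assuming Flask and the payload shape I recall Auto Export POSTing (a `data.metrics` array of named series); verify the format against the app before relying on it:

```python
# Sketch: receive Health Auto Export's periodic JSON POST and persist
# the samples in SQLite. The payload shape (data.metrics[].data[]) is
# my recollection of the app's format -- verify before relying on it.
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB = "health.db"

with sqlite3.connect(DB) as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS samples "
        "(metric TEXT, date TEXT, qty REAL, units TEXT)"
    )

@app.post("/ingest")
def ingest():
    payload = request.get_json()
    rows = [
        (m["name"], point["date"], point.get("qty"), m.get("units"))
        for m in payload["data"]["metrics"]
        for point in m["data"]
    ]
    with sqlite3.connect(DB) as conn:
        conn.executemany("INSERT INTO samples VALUES (?, ?, ?, ?)", rows)
    return {"stored": len(rows)}
```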
It is wild to me that products like this don't let you easily export all your data into SQLite (or DuckDB) natively. It's 2025, and you frequently have to page through hundreds or thousands of API calls to get a trivial amount of data (or use third-party services).
This isn't some bespoke API/format that they made up to make it harder for you to get your data. Apple did the right thing here and implemented HL7 standards like CDA and FHIR. This is a win for interoperability, and there's already a wealth of tools for working with these standards.
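Concretely, a FHIR export is structured JSON you can work with using nothing but the standard library. A sketch reading one vital sign from an Observation resource (field paths follow the FHIR R4 spec; the sample document is fabricated for illustration):

```python
# Sketch: read one vital sign from a FHIR Observation resource using
# only the standard library. Field paths follow the FHIR R4 spec;
# the sample values below are made up.
import json

observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org",
                       "code": "8867-4",
                       "display": "Heart rate"}]},
  "effectiveDateTime": "2025-01-15T08:30:00Z",
  "valueQuantity": {"value": 62, "unit": "beats/minute"}
}
""")

name = observation["code"]["coding"][0]["display"]
value = observation["valueQuantity"]["value"]
unit = observation["valueQuantity"]["unit"]
print(f"{name}: {value} {unit} at {observation['effectiveDateTime']}")
```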
Why would a malicious actor not be willing to set up the same infra as you, with an app on the iOS App Store to mine data once the user consents? I don't see how API-usage difficulty is a real security feature…
I haven't experimented much with MCP because I have some reservations about it, but I decided to go MCP-first for this to see how it feels to prototype around. My typical flow would have been sqlite + SvelteKit.
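To its credit, the MCP-first route stays small with the official `mcp` Python SDK. A sketch exposing the SQLite data as a single tool; the table schema matches the hypothetical ingest sketch above:

```python
# Sketch: a minimal MCP server over the health SQLite DB, using the
# official `mcp` Python SDK's FastMCP helper. Table/column names match
# the hypothetical ingest sketch earlier in the thread.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("health-data")

@mcp.tool()
def metric_summary(metric: str, since: str) -> list[dict]:
    """Daily totals for a metric (e.g. 'step_count') since an ISO date."""
    with sqlite3.connect("health.db") as conn:
        rows = conn.execute(
            "SELECT date(date) AS day, SUM(qty) AS total FROM samples "
            "WHERE metric = ? AND date >= ? GROUP BY day ORDER BY day",
            (metric, since),
        ).fetchall()
    return [{"day": d, "total": t} for d, t in rows]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```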
Claude will do this. I've seen it create "migration scripts" to make wholesale file changes, botch them, and have no recourse. It's obviously _not great_ when this happens. You can mitigate it by running these agents in sandboxed environments and/or frequently checkpointing your code, ideally in an SCM like git.
Do you use APIs with n8n? Just curious.
If you do, you might want to factor in the cost of those calls as well, along with the privacy implications.
In my limited testing, I found n8n to be heavily focused on cloud API use, from the onboarding quick-start tutorial to the collection of provided nodes. Adapting it to strictly local use was something of a chore.
There are a lot of nodes pre-built to interact with cloud APIs, but you also effectively have a generic HTTP client that can reach any endpoint. In my case, yes, I make use of cloud APIs and accept the trade-offs w.r.t. privacy, but you can hit any internal service you like, assuming it's reachable from your n8n server.
It doesn’t cost a dime to host n8n as long as you are only using it internally. If you offer it commercially, then you will have to negotiate a commercial license.
We have tools today that are uniquely good at wading through disparate sources and aggregating them into a format we can easily digest. The worry, of course, is that these tools are generally on offer from huge tech giants (Google, OpenAI, etc.). The good news is that we have open-source versions that perform almost as well as the closed-source ones for these kinds of categorization and aggregation tasks.
I would agree that information is now more scattered than ever (like bread for ducks, as the author notes), but we also have an unprecedented ability to wrangle it ourselves.