outofpaper's comments | Hacker News

A harness is a collection of stubs and drivers configured to assist with automation or testing. It's a standard term often used in QA, as they'd been automating things for ages before Gen AI came onto the scene.
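The stubs-and-drivers idea can be sketched in a few lines. Everything here is illustrative (the payment-gateway names are invented): the stub stands in for an external dependency, and the driver feeds inputs to the unit under test and checks the outputs.

```python
# Minimal test-harness sketch: a stub replaces a real dependency,
# and a driver exercises the unit under test.

def payment_gateway_stub(amount):
    """Stub: stands in for a real payment API during tests."""
    return {"status": "ok", "charged": amount}

def checkout(cart_total, gateway):
    """Unit under test: charges via whatever gateway it is given."""
    if cart_total <= 0:
        raise ValueError("empty cart")
    return gateway(cart_total)

def run_harness():
    """Driver: feeds inputs to the unit and verifies the outputs."""
    result = checkout(42.0, payment_gateway_stub)
    assert result["status"] == "ok"
    assert result["charged"] == 42.0
    return "all checks passed"

print(run_harness())
```

Swap the stub for the real gateway in production; the unit under test never knows the difference.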

Yes, it is also a device used to control the movement of work animals, which farmers had been using for ages before QA came onto the scene.

It's technically an IDE, but "harness" makes it sound new and fancy.

??? I'm pretty sure you know what the differences are. Go touch grass and tell me it's the same as looking at a plant on a screen.

Dealing with organic and natural systems will, most of the time, have a variable reward. The real issue comes from systems and services designed to only be accessible through intermittent variable rewards.

Oh, and don't confuse Claude's artifacts working most of the time with them actually being optimized to work that way. They're optimized to drive token usage, i.e. LLMs have been fine-tuned to default to verbose responses. Those responses are impressive to less experienced developers, make certain types of errors easier to spot (e.g. improper typing), and will make you use more tokens.


So gambling is fine as long as I'm doing it outside. Poker in a casino? Bad. Poker in a foresty meadow, good. Got it.

Basically true tbqh. Poker is maybe the one exception, but you're almost always better off gambling "in the wild" e.g. poker night with your buds instead of playing slots or anything else where "the house" is always winning in the long run. Are your losses still circulating in your local community, or have they been siphoned off by shareholders on the other side of the world? Gambling with friends is just swapping money back and forth, but going to a casino might as well be lighting the money on fire.

Exactly! The whole point of personal agents is that the data is yours and it's where you want it, not in someone else's cloud. What harness you use to work with it should be a matter of preference, not one of lock-in.

The future will be ownership of our memories and data. AI companies will fight tooth and nail to keep that data walled in and impossible to export.

I agree. This is why I think Google has the long term advantage. They already have so much data. I can ask Gemini a question and it'll reference an email I sent a month ago.

It's an edge but I think it's going to become hard to gate data as they do. Soon our AI assistants will see and hear everything we see and hear in real-time. All of that will be ingested somewhere. Google can't prevent us from recording the things we see and hear.

Perhaps the competitive moat of the future will be time critical access to data. Google likely gets new data faster than everyone else, and they could use this time arbitrage in products like news, finance, research, etc.


If regulators force the capability of exporting to exist, what ya gonna do?

I continue to find it amusing that people really think corporations are the ones holding power. No: they are holding power granted to them by the government of the state.

Remind me why Zuck et al had to kiss the ring.


Very often, the regulators don't. Here in the US, half the country would refinance their mortgage for iMessage interoperability... if it were possible. Any time regulators reach for the "stop monopoly" button, Tim Cook screeches like a rhesus monkey and drops a press release about how many terrorists Apple stops.

If lobbying was illegal then you might have a point here, but alas.


Since it's already not walled-in in most cases, I don't see this happening very effectively.

Using OpenRouter + Kilocode, I can simply switch between different providers' models and not miss out on anything.


Lol, it was an admirable attempt at something new. I loved the interesting blend of messaging and document creation. The code still lives on as an archived open-source project, btw.

https://github.com/apache/incubator-retired-wave


It's misleading. Lots of their marketing push, or at least the ClawBros, pitch it as running locally on your Mac mini.


To be fair, you do keep significantly more control of your own data from a data portability perspective! A MEMORY.md file presents almost zero lock-in compared to some SaaS offering.

Privacy-wise, of course, the inference provider sees everything.


To be clear: keeping a local copy of some data provides no control over how the remote system treats that data once it's sent.


Which is what I said in my second sentence.


It’s worse than “[they] can see everything.” They can share it.


Is it not a given that anyone that gets access to a piece of information is also capable of sharing it?


Horrible. Just because you have code that runs not in a browser doesn't mean you have something that's local. This goes double when the code requires API calls. Your net goes down and this stuff does nothing.


For a web developer, local-first only describes where the state of the program lives. In the case of this app, that's in local files. If Anthropic's API was down, you would just use something else. Something like OpenRouter would support model fallbacks out of the box.
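The fallback logic is simple enough to sketch. This is a hedged, self-contained illustration, not the OpenRouter API itself: `call_model` is a placeholder for a real provider call, and the model IDs are just examples. The point is that the state (the prompt and the reply) stays with you, while the provider is interchangeable.

```python
# Sketch of client-side model fallback: try providers in order and
# return the first successful response. call_model is a stand-in for
# a real API call; here it simulates one provider being down.

def call_model(model, prompt):
    """Placeholder for a real provider call; raises when 'down'."""
    if model == "anthropic/claude-sonnet":
        raise ConnectionError("provider unavailable")
    return f"[{model}] answered: {prompt}"

def complete_with_fallback(models, prompt):
    last_err = None
    for model in models:
        try:
            return call_model(model, prompt)
        except ConnectionError as err:
            last_err = err  # provider down, try the next one
    raise RuntimeError("all providers failed") from last_err

reply = complete_with_fallback(
    ["anthropic/claude-sonnet", "openai/gpt-4o"], "hello")
print(reply)
```

Services like OpenRouter do this server-side, but nothing stops a local-first app from doing it itself.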


Not to mention that you can actually have something that IS local AND runs in a browser :D


In a world where IT doesn't mean anything, crypto doesn't mean anything, AI doesn't mean anything, AGI doesn't mean anything, End-to-end encryption doesn't mean anything, why should local-first mean anything? We must unite against the tyranny of distinction.


What's dumb, on top of everything, is needing to store non-special, standard operating procedures in specific AI folders and files when you want to work with AI tooling.


Copilot today supports the top-level AGENTS.md approach as well, which seems to be the cross-tool "standard".


It is a standard in the sense that they will all read it (although last I checked you still need to adjust the default config with Gemini). But feature support varies between different tooling. For example, only Claude supports @including other files.


The "standard" AGENTS.md suggestion for that is [regular markdown links](./like-this.md)


The problem is that it doesn't actually include the referenced file in the context. The model will only see what's in it if it deigns to read it, but that's not a given in all circumstances where it might need to.

I use this feature often in Claude to bring specific files so that they are in context at all times. E.g. when working on a parser, I will often put the grammar to be always in context. Or if working on a web app, all the model types.


Like needing to store IDE specific files?


A real test is synthesizing 100,000 sentences like this, selecting random ones, and then injecting the traits you want the LLM to detect and describe. E.g. have a set of words or phrases that may represent spells, and have them used so that they do something. Then have the LLM find these random spells in the random corpus.
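The setup above is easy to sketch. All names here are invented for illustration: build a filler corpus, inject "spell" phrases into a random subset, and keep the ground truth so the model's findings can be scored against it.

```python
# Sketch of the proposed benchmark: generate filler sentences, inject
# marker phrases ("spells") into a random subset, and record ground
# truth for scoring a detector afterward.
import random

def build_corpus(n_sentences, spells, n_injections, seed=0):
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    corpus = [f"Filler sentence number {i}." for i in range(n_sentences)]
    targets = rng.sample(range(n_sentences), n_injections)
    truth = {}
    for idx in targets:
        spell = rng.choice(spells)
        corpus[idx] += f" Then someone whispered '{spell}'."
        truth[idx] = spell
    return corpus, truth

corpus, truth = build_corpus(1000, ["lumos", "glacius"], 25)
# Score a detector by comparing its reported (index, spell) pairs
# against `truth`.
```

Precision and recall against `truth` then give a much harder signal than eyeballing a handful of outputs.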


Seek? In the grand scheme of things asking forgiveness only applies if you're going to not be that transformative and something like YouTube's automated copyright strikes might affect you. "Ask Forgiveness" is often a better option.

Fair use is a defense, not a requirement - you don't need permission to claim fair use; it's a legal defense if you're sued.

Seeking permission can backfire - copyright holders may deny permission even when fair use would apply, creating unnecessary barriers.

This is especially true for parody and commentary.


> Seek? In the grand scheme of things asking forgiveness only applies if you're going to not be that transformative

That’s not what I said.

> something like YouTube's automated copyright strikes might affect you. "Ask Forgiveness" is often a better option.

Now you’re talking about platform quirks rather than copyright law.

> Fair use is a defense, not a requirement

Actually, “fair use” is defined by law in a lot of countries. It’s not a defence, it’s a legal right.

> You don't need permission to claim fair use;

That’s not what I said.

> it's a legal defense if you're sued Seeking permission can backfire

It’s spelt “defence” (with a C, not an S), and the point is you seek permission before distribution, not after.

Not everything in life follows the “ask for forgiveness not permission” rule ;)

> Copyright holders may deny permission even when fair use would apply, creating unnecessary barriers.

Copyright holders cannot deny fair use in jurisdictions where fair use laws apply.


The difference between Seek and Ask Forgiveness in the situation outlined is that Seek lays out costs beforehand, and generally they are minimal, whereas Ask Forgiveness lets the person sampled determine the costs, or remove the work from circulation completely.


The less juggling of concepts, the more effective any problem solver can be.

