This is an ostrich-head-in-the-sand outlook. If I gave you a great-tasting cake made of boiled plastic, would you still eat it?
It's important to know what goes into building *claws because of how pervasive they are; with all the hype, sooner or later they'll be used everywhere, and if people take your attitude, it's a disaster waiting to happen.
Giving them unfettered access to your life without even wondering whether the foundation is solid is concerning, imho.
> If I gave you a great tasting cake made of boiled plastic would you still eat it?
How is that analogy in any way relevant?
The OpenClaw I control is extremely useful to me. I've never been more excited about technology than right now. If it's not for you, I really don't care. Go do something you enjoy. Turning it into Chicken Little doomerism is completely uncalled for.
This doomer attitude is something I have towards all software products these days, not just *claws.
People use dependencies willy-nilly and avoid proper auditing of LLM output; all of that has had disastrous consequences over the past few years: NPM supply-chain attacks, prompt injection causing data exfiltration, etc.
I am simply saying it's imperative to UNDERSTAND the platform before making it a core part of your life. If wanting a proper understanding of vibe-coded projects with dependency hell is Chicken Little doomerism, oh well.
A better analogy: the clawcake has no new ingredients, we've been cooking with the same ones for years now, and it's a shame people are such terrible bakers that they're so impressed.
I discovered plop while grokking Cloudflare's kumo-ui library[0], and having used it to create some generators for a large Svelte project, it just seems so much better than a swath of documents listing out how-tos for onboarding new contributors.
A quick Algolia search also didn't show any references to this which left me pretty surprised, so was wondering what you all use to tackle the problem Plop solves.
Yes, I had to. The v20.14.43 I patched a month ago broke just today, but updating it was pretty easy: just update[0] and repatch 20.14.43 with an updated GMS patch.
I loved learning Computer Engineering in college because it de-mystified the black box that was the PC I used growing up. I learned how it worked holistically, from physics to logic gates to processing units to kernels/operating systems to networking/applications.
It's sad to think we may be going backwards and introducing more black boxes, our own apps.
I personally don't "hate" LLMs but I see the pattern of their usage as slightly alarming; but at the same time I see the appeal of it.
Offloading your thinking: typing all the garbled thoughts in your head about a problem into a prompt and getting a coherent, tailored solution almost instantly. A superpowered crutch that helps you coast through tiring work.
That crutch soon transforms into dependence and before you know it you start saying things like "Once you vibe code, you don't look at the code".
I think a lot of people, regardless of whether they vibe code or not, are going to be replaced by a cheaper solution. A lot of software that would've required programmers before can now be created by tech-savvy employees in their respective fields. Sure, it'll suck, but that doesn't matter for a lot of software. Software Engineering and Computer Science aren't going away, but I suspect a lot of programming is.
I've been around for a while. The closest we ever got was probably RPA. This time it's different. In my organisation we have non-programmers writing software that brings them business value on quite a large scale. Right now it's mainly through the chat framework we provide them so that they aren't just spamming data into chatGPT or similar. A couple of them figured out how to work the API and set up their own agents though.
Most of it is rather terrible, but a lot of the times it really doesn't matter. At least most of it scales better than Excel, and for the most part they can debug/fix their issues with more prompts. The stuff that turns out to matter eventually makes it to my team, and then it usually gets rewritten from scratch.
I think you underestimate how easy it is to get something to work well enough with AI.
I assume he’s mostly joking but… how often do you look at the assembly of your code?
To the AI optimist, the idea of reading code line by line will seem as antiquated as inspecting CPU registers one by one: something you do when needed, but typically you can just trust your tooling to do the right thing.
I wouldn’t say I am in that camp, but that’s one thought on the matter. That natural language becomes “the code” and the actual code becomes “machine language”.
And you could say that the difference is that high-level languages are deterministically transformed down, but in practice the compiler is so complex you'd have no idea what it's doing, and most people don't look at the machine code anyway. You may as well look at the LLM's prompt and make assumptions about the high-level code it spits out.
"Messages are stored on our servers and are technically accessible at the database level; we won't pretend otherwise. Kloak doesn't require email, phone, or personal info to create an account, so your identity isn't tied to your messages the way it would be on other platforms.
Our goal is to implement end-to-end encryption for DMs so that even we can't read message content. But we're not there yet, since after all we need to make sure the platform is safe and doesn't shield illegal content being sent."
This is a message from one of their founders I found while exploring the app.