So in these cases of the government seizing bitcoin, do the people they seize from have unencrypted private keys?
The article just says "private keys the defendant had in his possession." Does this mean he was holding onto private keys with no password or encryption at all that unlocked $15B?
Or does the government have an alternative way of "seizing" bitcoin? I remember years ago people throwing around conspiracy theories that bitcoin was invented by the NSA or other three-letter agencies, with a backdoor to allow easy tracking / seizure of criminal assets.
I'm not a conspiracy theorist, but stories like these, where the government seizes such incredibly large amounts of money so easily, seem to suggest some other mechanisms that aren't public.
ChatGPT 5 is still pathetically bad at Roman numerals. I asked it to find the longest Roman numeral in a range. Its first guess was the highest number in the range, despite that being a short numeral. Its second guess, after help, was a longer numeral but outside the range. Its last guess was the correct longest numeral, but it miscounted how many characters it contained.
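For reference, the whole task fits in a dozen lines of Python. A minimal sketch (the range 1..100 here is made up for illustration, not the range I actually gave it):

    PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def to_roman(n):
        # greedily peel off the largest value until n is exhausted
        out = []
        for value, symbol in PAIRS:
            while n >= value:
                out.append(symbol)
                n -= value
        return "".join(out)

    # longest numeral in 1..100 is 88 -> LXXXVIII (8 characters)
    longest = max((to_roman(n) for n in range(1, 101)), key=len)
    print(longest, len(longest))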
I wish Cursor would let you see how much usage, in dollar terms, you've accrued for the month. It's really hard to see in the dashboard: it shows the individual charges and tokens, but there is no cumulative total. I haven't been able to find a way to see how much of my included usage has been consumed besides downloading the CSV and manually summing it. They just give you a very unhelpful "You will use your included credits by X date."
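Until there's a running total, the manual summing can at least be scripted. A throwaway sketch, assuming the export is named cursor-usage.csv and has a dollar "Cost" column (both names are guesses; check your actual export's headers):

    import csv

    total = 0.0
    with open("cursor-usage.csv") as f:  # file name is an assumption
        for row in csv.DictReader(f):
            # "Cost" column name is an assumption; values look like "$0.04"
            cost = (row.get("Cost") or "0").lstrip("$")
            total += float(cost or 0)
    print(f"${total:.2f} of included usage consumed")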
I suppose this is by design so you don't know how much you have left and will need to buy more credits.
That's on-demand usage, not your plan usage. You get Y credits every month before you start using on-demand usage. That $XX/$YYY is how much of your on-demand usage limit you've used.
I think the worst part of the autocomplete is when you actually just want to tab to indent a line and it tries to autocomplete something at the end of the line.
OK, call me a spoiled Go programmer, but I've had an allergy to manually formatting code ever since getting used to gofmt on save. I highly recommend setting up an autoformatter so you can write nasty, unindented code down the left margin and have it snap into place when you save the file, like I do, and never touch Tab for indentation. Unless you're writing Python of course haha
Format on save is my worst enemy. It may work fine for Go, but you'll eventually run into code that isn't formatted with the formatter your editor is configured for. Then you end up reformatting the whole file or having to remember how to disable format on save. I check formatting in a git hook on commit instead.
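If anyone wants the hook approach, here's a minimal sketch in Python, assuming Go and gofmt since that's the example upthread (drop it in .git/hooks/pre-commit and make it executable):

    #!/usr/bin/env python3
    # Fail the commit if any staged Go file isn't gofmt-clean.
    # gofmt -l prints the names of files that differ from gofmt's output.
    import subprocess, sys

    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    go_files = [f for f in staged if f.endswith(".go")]
    if go_files:
        unformatted = subprocess.run(
            ["gofmt", "-l", *go_files], capture_output=True, text=True
        ).stdout.split()
        if unformatted:
            print("not gofmt-clean:", *unformatted, file=sys.stderr)
            sys.exit(1)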
If you're checking it in git hooks, then it's even safer to have format on save. Personally I default to format on save, and if I'm making a small edit to a file that is formatted differently, and format on save would cause a messy commit, then I simply "undo" and then "save without formatting" in the VSCode command palette.
You can also add those non-logic commits to a .git-blame-ignore-revs file, and any client that supports it will ignore those commits when surfacing git blame annotations. GitHub's blame view supports this; I think VSCode does too…
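The file is just full commit hashes, one per line, with optional comments (the hash below is a placeholder):

    # .git-blame-ignore-revs
    # Reformatted entire repo with gofmt
    2fd4e1c67a2d28fced849ee1bb76e7391b93eb12

For plain command-line git you also have to point blame at it once, with git config blame.ignoreRevsFile .git-blame-ignore-revs; GitHub picks the file up by name automatically.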
The problem with autoformatting is that whitespace is a part of the program. I use whitespace to separate logical entities that cannot be made their own functions in an elegant way. Using autoformatters often breaks this, changing my code to something that looks good but doesn't actually make sense.
The gentler autoformatters actually do their job correctly, but the more aggressive ones make code harder to read. And BTW, I hate golang with a passion. It's a language designed to get fifty thousand bootcamp grads from developing countries to somehow write coherent code. I just don't identify with that, although I do understand why it needs to exist.
Reminds me of criticisms of Python decades ago: that you wouldn't understand what the "real code" was doing since you were using a scripting language. But then over the years it showed tremendous value, and many unicorns were built by focusing on higher-level details and not lower-level code.
Comparing LLMs to programming languages is a false equivalence. I don't have to write assembly because LLVM will do that for me correctly in 100% of cases, while AI might or might not (especially the further I move away from template CRUD apps).
That is a myth. CPU time is time your users spend waiting around while the CPU takes seconds to do something that could be instant. If you have millions of users and that happens every day, it quickly adds up to many years' worth of time.
It might be true if you just look at development cost, but not if you look at value as a whole. And even on development cost alone it's often not true, since time the developer spends waiting around for tests to run and things to start also slows everything down; taking a bit of time to reduce CPU time is well worth it just to get things done faster.
Yeah, it's time spent by the users. Maybe it's an inefficiency of the market because the software company doesn't feel the negative effect enough; maybe it really is cheaper in aggregate than doing three different native apps in C++. But if CPU time is so valuable, why aren't we arguing for hand-written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
> But if CPU time is so valuable, why aren't we arguing for hand-written C or even assembly code instead of the layers upon layers of abstraction
Maybe we should. All it took was Figma taking performance seriously and working at a lower level to make every other competitor feel awful and clunky next to it, and it went on to dominate the market.
> But if CPU time is so valuable, why aren't we arguing for hand-written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
Many of us do frequently argue for something similar. Take a look at Casey Muratori's Performance-Aware Programming series if you care about the arguments.
> But if CPU time is so valuable, why aren't we arguing for hand-written C or even assembly code instead of the layers upon layers of abstraction in even native modern software?
That is an extreme case, though. I didn't mean that all optimizations are always worth it, but if we look at the marginal value gained from optimizations today, the payback is usually massive.
It isn't done enough since managers tend to undervalue user and developer time. But users don't undervalue user time: if your program wastes their time, many of them will stop using it. Users are pretty rational about that aspect and prefer faster products or sites unless those are very lacking. If a website is slow a few times in a row I start looking for alternatives, and the data says most users do the same.
I even stopped my JetBrains subscription when the editor got so much slower in an update, so I just use the version I can keep forever, as I don't want their patched editor. If it hadn't gotten slower I'd gladly have kept subscribing, since I liked some of the new features, but the slowdown was enough to make me go back.
Also, while managers can obviously agree that making developers spend less time waiting is a good thing, it is very rare for a manager to tell you to optimize compilation times or the like, even though pretty simple optimizations there can often make that part of the work massively faster. If you profile your C++ build, look at which files the compiler spends its time on, and then dig into why those files are so slow, you can find weird things whose fixes speed compilation up 10x, so what took 30 seconds now takes 3. That is obviously very helpful, and if you are used to this sort of thing you can do it in a couple of hours.
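One way to do that "look at which files" step, as a sketch: compile with Clang's -ftime-trace, which writes a Chrome-trace JSON next to each object file, then rank translation units by total compile time. The build directory and event name here are assumptions that may vary by setup:

    import json, glob

    totals = []
    for path in glob.glob("build/**/*.json", recursive=True):
        with open(path) as f:
            events = json.load(f).get("traceEvents", [])
        # "ExecuteCompiler" is the top-level event in -ftime-trace output;
        # durations are in microseconds
        secs = sum(e.get("dur", 0) for e in events
                   if e.get("name") == "ExecuteCompiler") / 1e6
        totals.append((secs, path))

    for secs, path in sorted(totals, reverse=True)[:10]:
        print(f"{secs:8.2f}s  {path}")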
No, I wouldn't. That would require me to be proficient in this, and I am not, so I am pretty sure I would not write better assembly optimisations unless I actually became better at it.
The difference is that there is no point (that I know of or would encounter) at which a compiler would not actually be able to do the job, leaving me to write manual assembly to fix the parts the compiler could not compile. Yes, a proficient programmer could probably do that to optimise the code, but the code would run and do the job regardless. That is not the case for LLMs: there is a non-zero chance you get to a point where the LLM agents are stuck and it makes more sense to get your hands dirty than to keep iterating with agents.
That's not the same thing. LLMs don't just obscure low-level technical implementation details like Python does, they also obscure your business logic and many of its edge cases.
Letting a Python interpreter manage your memory is one thing because it's usually irrelevant, but you can't say the same thing about business logic. Encoding those precise rules and considering all of the gnarly real-world edge cases is what defines your software.
There are no "higher level details" in software development, those are in the domain of different jobs like project managers or analysts. Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession. Our jobs won't morph into something different - this is our job.
>There are no "higher level details" in software development, those are in the domain of different jobs like project managers or analysts. Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession. Our jobs won't morph into something different - this is our job.
I'm the non-software type of Engineer. I've always kind of viewed code as a way to bridge mathematics and control logic.
When I was at university I was required to take a first-year course called "Introduction to Programming and Algorithms". It essentially taught us how to think about problem solving from a computer programming perspective. One example I still remember from the course was learning how you can use a computer to solve something like Newton's Method.
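For anyone who hasn't seen it, the whole method fits in a few lines. A minimal sketch, using the classic square-root-of-2 example (the function, derivative, and starting point are just illustrations):

    # Find a root of f by iterating x <- x - f(x)/f'(x)
    def newton(f, df, x, tol=1e-10, max_iter=100):
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # sqrt(2) as the positive root of x^2 - 2, starting from x = 1
    print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))  # ~1.4142135623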
I don't really hear a lot of software people talk about algorithms, but for me that is where the real power of programming lives. I can see some idealized future where you write programs just by mixing and matching algorithms, and almost every problem becomes essentially a state machine: to move from state A to state B, I apply these transformations, which map to these well-known algorithms. I could see an AI being capable of that sort of pattern matching.
The hard thing is to define what State A and State B mean. Also to prepare for States C and D, so that it doesn't cost more to add them to the mix. And to find that State E everyone is failing to mention…
> "Once AI can reliably translate fuzzy natural language into precise and accurate code, software development will simply die as a profession."
One-shotting anything like this is a non-starter for any remotely complex task. The reason is that fuzzy language is ambiguous and poorly defined, so even in this scenario you enter a domain that requires iterative cycling and refinement. And I'm not even considering the endless meta-factors that further complicate this, like performance considerations that depend on how you plan to deploy.
And even if language were perfectly well defined, you'd end up with 'prompts' that would essentially be source code in their own right. I have a friend who is rather smart, but not a tech type, and he's currently building a very simple project using LLMs. It's still a "real" project, though, in that there are certain edge cases you need to consider, various cross-functionality in the UI that needs to be carried out, interactions with some underlying systems, and so on.
His 'prompt' is gradually turning into just a natural-language program of comparable length and complexity. And given the amount of credits he's churning through making it, in the end he may well have been better off just hiring some programmers on one of those 'gig programming' sites.
------
And beyond all of this, even if you can surmount these issues - which I think may be inherently impossible - you have another one. The reason people hire software devs is not that they can't do the work themselves, but that they want to devote their attention to other things. E.g., most everybody could do janitorial work, yet companies still hire millions of janitors. So the 'worst case' scenario is that you dramatically lower the barriers to entry to software development, and wages plummet accordingly.
But working with AI isn’t really a higher level of abstraction. It’s a completely different process. I’m not hating on it, I love LLMs and use em constantly, but it doesn’t go assembly > C > python > LLMs
It would be a higher level of abstraction if there were no need to handhold the LLMs. You'd just let one agent build the UI and another the backend, just as you would with humans (you wouldn't validate their entire body of work, including their testing and documentation).
At that point yeah, a project manager would be able to build everything.
Engineers aren't the target audience for Lovable. I see it being used by designers and product managers, and also solopreneur type people, or non-technical folks wanting to build websites or start companies.
One PM I know uses it to design prototypes, then hands them off to the engineering team, who continue them in Claude Code etc.
So it's sort of competing with Wix, Squarespace, and WordPress, and also with prototyping tools like Figma.