This level of detail isn't really helpful. I work with AI and am genuinely interested in learning more, but this offers very little.
More concrete examples to illustrate the core points would have been helpful. As-is the article doesn't offer much - sorry.
For one, I'm not sure what kind of code he writes. How does he write tests? Are these unit tests or property-based tests? How does he quantify success? It leaves a lot to be desired.
What works for me is that, often on a Monday, I restart my machine. If something is truly important, I save it before the restart. There is something good about starting from an empty state.
As an aside, this is the type of problem that I think model checkers can't help with. You can write perfect and complicated TLA+/Lean/FizzBee models, and even if these models could somehow generate code for you from your correct specifications, you could still run into bugs like these due to platform/compiler/language issues. But, thankfully, such bugs are rare.
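To make the point above concrete, here is a classic illustration of my own choosing (not an example from the thread): a binary search midpoint that is provably correct over unbounded integers, as a TLA+ or Lean model would treat it, but that overflows under 32-bit arithmetic, as in C or Java. The simulation of 32-bit wraparound below is a sketch for demonstration purposes.

```python
# A midpoint computation that a model over unbounded integers would
# verify as correct, but that overflows on a real 32-bit machine.

def mid_naive(low, high):
    # Spec-level midpoint: always correct over unbounded integers.
    return (low + high) // 2

def mid_32bit(low, high):
    # The same expression under 32-bit two's-complement wraparound,
    # simulating what C or Java would actually compute.
    s = (low + high) & 0xFFFFFFFF
    if s >= 2**31:        # reinterpret the bit pattern as signed
        s -= 2**32
    return s // 2

low, high = 2**30, 2**31 - 2       # large but valid array indices
print(mid_naive(low, high) > 0)    # True: the model sees no problem
print(mid_32bit(low, high) > 0)    # False: the real machine overflows
```

The abstract model and the concrete implementation disagree only because of the platform's fixed-width integers, which is exactly the category of bug a high-level model never sees.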
Writing is one of the best ways to learn something. Maybe non-experts learn something by writing about it?
I don't think the entire internet is repeating inaccuracies. :) I also believe there are readers who try to learn beyond a blog post. A blog post can inspire you to learn more about a topic; I'm speaking from personal experience.
If there were no blog posts, maybe there would be no HN.
There should be a place for non-experts. One can remain skeptical while reading blog posts without hating posts about complex topics written by non-experts.