The best way to get an LLM to follow style is to make sure that style is already evident in the codebase. Excessive instructions (whether through memories or AGENT.md) do not help nearly as much.
Personally, I absolutely hate instructing agents to make corrections. It's like pushing a wet noodle. If there is a lot to correct, fix one or two cases manually and tell the LLM to follow that pattern.
Isn't that example the exact opposite of mixing content and presentation? The * notation applies the strong emphasis tag; the show rule (re-)defines the presentation. Ideally you would of course separate the two into separate files (template + content).
In my time using Typst, I found that it makes it possible/easy to make content even more abstract: write the content as a "data structure" and then present parts of it in various places around your document. For instance, to list the quantity/weight from a parts description again in a parts index at the end.
> Ideally you would of course separate the two into separate files (template + content).
Exactly: If instructions for how to style the content are in the same file as the content, then that is mixing content _with_ presentation logic. Avoiding this approach to documentation is what I alluded to in writing, "Presumably, Typst allows including styles from external sources."
I used vim for a decade, discovered Helix, installed it on all my systems, and haven't looked back (at least not voluntarily; I'm always bewildered by the old-school CAD-like command-then-subject paradigm when I get thrown into vi on a random machine).
Yes! One of the worst bugs to debug in my entire career boiled down to a piece of Java code mutating a HashSet that it received from another component. That other component had independently made the decision to cache these HashSet instances. Boom! Spooky failure scenarios where requests only start to fail if you previously made an unrelated request that happened to mutate the cached object.
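In spirit, the bug looked something like this (all class and method names here are made up for illustration, not the actual code):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical component A: caches HashSet instances and hands them out directly.
class PermissionCache {
    private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    Set<String> permissionsFor(String user) {
        // Returns the cached instance itself, not a copy.
        return cache.computeIfAbsent(user, u -> new HashSet<>(Set.of("read")));
    }
}

// Hypothetical component B: thinks it is just filtering its own set...
class RequestHandler {
    void handle(String user, PermissionCache cache) {
        Set<String> perms = cache.permissionsFor(user);
        // ...but actually mutates the cached object, silently breaking
        // every later request for this user.
        perms.remove("read");
    }
}
```

Nothing in the signature of permissionsFor tells the caller whether it owns that set or merely borrows it.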
This is an example where ownership semantics would have prevented that bug: references to the cached HashSets could only have been handed out as shared/immutable references, so the mutation of the cached HashSet could never have happened.
The ownership model is about much more than just memory safety. This is why I tell people: spending a weekend to learn Rust will make you a better programmer in any language (because you will start thinking about proper ownership even in GC-ed languages).
Yeah, that's definitely optimistic. More like 1-6 months, depending on how intensively you learn. It's still worth it, though. It easily takes as long to learn C++, and nobody says that's too much.
Yes. I learned Rust in a weekend. Basic Rust isn't that complicated, especially when you listen to the compiler's error messages (which are 42x as helpful as C++ compiler errors).
Damn. You are a smart person. It's taken me months and I'm still not confident. But I was coming from interpreted languages (+ a little experience with C).
> This is an example where ownership semantics would have prevented that bug.
It's also a bug prevented by basic good practices in Java: you don't cache mutable data and you don't mutate shared data. Yes, it's a shame that Java won't help you enforce that, but I honestly never see mistakes like this except in code review of very junior developers.
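The usual defensive idioms look roughly like this (class and method names are made up; Collections.unmodifiableSet and Set.copyOf are real JDK methods):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache that never hands out (or keeps) a mutable reference.
class PermissionCache {
    private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    Set<String> permissionsFor(String user) {
        // Hand out an unmodifiable view, so callers cannot mutate the cached instance.
        return Collections.unmodifiableSet(
                cache.computeIfAbsent(user, u -> Set.of("read")));
    }

    void store(String user, Set<String> permissions) {
        // Defensive copy on the way in; Set.copyOf (Java 10+) returns an immutable set.
        cache.put(user, Set.copyOf(permissions));
    }
}
```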
The whole point is that languages like Java won't keep track of what's "shared" or "mutable" for you. And no, it doesn't just trip up "very junior developers" in code review; quite the opposite. It typically comes up as surprising cross-module interactions in evolving code bases that no "code review" process can feasibly catch.
Speak for yourself. I haven't seen any bug like this in Java for years. You think you know better and my experience is not valid? Ha. Ok. Keep living in your dreams.
Using gen AI for anything artistic (illustrations, music, video, creative writing) is a dead end. The results are soulless and bland. People notice immediately.
Code completions are fine. Driving code through chat is a complete waste of time (never saves time for me; always ends up taking longer). Agentic coding (where the LLM works autonomously for half an hour) still holds some promise, but my employer isn't ready for that.
Research/queries only for very low-stakes, well-established things (e.g., how do I achieve X in git).
I found Helix much easier to get into than vim because you can see and experiment with the selection before committing to an action. It's also much more familiar to users of modern software (where you generally first select an object and then apply an action).
Yes, all the time. For reference-level information, I don't trust AI summaries. If I need to know facts, I cannot have even the possibility of a lying auto-complete machine between me and the facts.
Exploratory/introductory/surface-level queries are the ones that get handed to auto-complete.
I like how Kagi lets me control whether AI should be involved by adding or omitting a question mark from my search query. Best of both worlds.
As someone responsible for login/registration at a large online retailer, I see so much bot traffic and so many attacks. Attackers try to enumerate registered users, try to mass-login with credentials from password dumps, and try to register accounts controlled by bots.
Login forms are a war zone. Looking for patterns that indicate the other party is a bot and serving them (and only them) a captcha is a technique that is quite effective. But it is not perfect: business customers, especially, often get forced to solve captchas in our system.
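Very roughly, the gating looks something like this; every signal, weight and threshold below is invented for illustration, not our actual scoring:

```java
// Hypothetical sketch of risk-gated captchas: score a few bot signals and
// only challenge attempts that look suspicious.
class LoginRiskGate {
    private static final double CAPTCHA_THRESHOLD = 0.7;

    boolean requiresCaptcha(LoginAttempt attempt) {
        double score = 0.0;
        if (attempt.failedAttemptsFromIpLastHour() > 20) score += 0.4; // credential-stuffing pattern
        if (attempt.userAgentLooksHeadless())            score += 0.3;
        if (attempt.ipOnKnownProxyList())                score += 0.2;
        if (attempt.formFillTimeMillis() < 50)           score += 0.2; // form filled implausibly fast
        return score >= CAPTCHA_THRESHOLD;
    }
}

// Minimal data carrier so the sketch is self-contained (Java 16+ record).
record LoginAttempt(int failedAttemptsFromIpLastHour,
                    boolean userAgentLooksHeadless,
                    boolean ipOnKnownProxyList,
                    long formFillTimeMillis) {}
```

The downside is exactly what I described: legitimate traffic that happens to match bot-ish patterns (corporate proxies, for example) ends up getting challenged too.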
If you know of a better solution (other than: don't be a big online shop), I'm all ears.
I'd guess that their problem is data pollution (marketing unhappy, ad impressions unaligned, data needs to be cleaned anyway before PowerPoint presentations for shareholders are made). And technically: unnecessary database growth, which impacts migration efficiency, backup size and duration, and stuff like that.
They don't seem to care about ad impressions being unaligned when their ads hit people who consider all forms of advertising to be a form of offensive and unauthorized graffiti on the mind, AKA vandalism.
Keyboards are such a good hobby project. The scope is comparatively small, yet within that scope you come into contact with many different and highly interesting subjects and challenges. And you can more or less pick and choose which ones you engage with (wireless vs. wired, soldering vs. hand-wiring, custom firmware vs. ZMK/QMK, split vs. traditional).
You can buy ZSA split keyboards with labels on the keycaps. It's great while you are still learning to type on these rather exotic keyboards. As you get more proficient, you start to rely more and more on the "central" keys (using layer toggles to put, say, arrow keys on the home row). Muscle memory is often more than enough.
That said, I have kept the number row labelled. These keys are not obscured by your hands and they can give you the necessary frame of reference. The ideal trade-off for me.
https://www.humanlayer.dev/blog/writing-a-good-claude-md