Is there a session start hook? I don't think so, unless it was added recently.
I've mostly been working on smaller projects so I never need to compact. And skills are definitely not working even on the initial prompt of a new session.
Instead of including all these instructions in CLAUDE.md, have you considered using custom Skills? I’ve implemented something similar, and Skills works really well. The only downside is that it may consume more tokens.
Yes, sometimes skills are more reliable, but not always. That's been the biggest problem for me so far: the fact that you can't reliably trust these LLMs to follow steps or instructions makes them unsuitable for my applications.
Another thing that helps is adding a session hook that triggers on startup|resume|clear|compact to remind Claude about your custom skills. It keeps things consistent, especially when you're using it for a long time without clearing context.
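For reference, a minimal sketch of that hook in .claude/settings.json, assuming the SessionStart event and matcher syntax work as documented (the reminder text and skills path are placeholders):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "startup|resume|clear|compact",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Reminder: project-specific skills live in .claude/skills/ - check them before acting.'"
          }
        ]
      }
    ]
  }
}
```

As I understand it, whatever the command prints to stdout gets added to Claude's context at session start, which is what makes it work as a reminder.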
The matching logic for a skill is pretty strict. I wonder whether mentioning 'git' in the front matter would be enough for the skill to trigger when the prompt says 'gitlab'.
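I haven't tested it, but since the description in the front matter is what gets matched, spelling out 'gitlab' explicitly seems safer than hoping 'git' covers it. A hypothetical SKILL.md front matter (name and wording are made up):

```yaml
---
name: gitlab-helper
description: Use for GitLab work, including merge requests, CI pipelines, issues, and anything mentioning gitlab.com
---
```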
I ended up getting an ASRock X870E Taichi Lite. The main reason I got it was that it has two CPU-attached x8 slots spaced perfectly for an Nvidia NVLink bridge, and they're Gen5 PCIe.
"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."
Not sure if the author has tried any other AI-assistants for coding.
People who haven't tried coding AI assistants underestimate their capabilities (though, unfortunately, those who use them often overestimate what they can do too). Having used Claude for some time, I find the report's assertions quite plausible.
The article doesn't talk about the implausibility of the tool doing the stated task. It talks about the report, and how it doesn't have any details that would make us believe the tool actually did it. Maybe the thing they're describing could happen; that doesn't mean we have any evidence that it did.
If you know what to look for, the report actually has quite a few details on how they did it. In fact, when the report came out, all it did was confirm my suspicions.
The author's argument explicitly doesn't dispute plausibility. It accurately states that mere plausibility is a misleading basis for a report like this, that the report provides nothing but plausibility, and that it is therefore of low quality and dubious motivation.
Pointing out Anthropic's lack of any evidence for their claims doesn't require taking any position on AI agent capability at all.
Yup. One recent thing I started using it for is debugging network issues (or whatever) inside actual servers. Just give it permission to SSH into the box and investigate for itself.
Super useful to see it isolate the problem using tcpdump, investigating route tables, etc.
There are lots of use cases where this shines, but you need to know its limits and, perhaps even more importantly, be able to jump in when you see it going down the wrong path.
> Personally, I don’t use it but that is besides the point.
This jumped out at me, too. This pattern shows up a lot on HN, where commenters proudly declare that they don't use something but then write as if they know it better than anyone else.
The pattern is common in AI threads where someone proudly declares that they don't use any of the tools but then positions themselves as an expert on those tools, like this article does. The same thing happens in every thread about Apple products: people proudly declare they haven't used Apple products in years, then write about how bad modern Apple products are, despite having just told us they aren't familiar with them.
I think these takes are catnip to contrarians, but I always find it unconvincing when someone tells me they're not familiar with a topic yet also wants me to believe they have unique insights into it.
Whether the author uses any AI tools or not (to talk of using Claude specifically) is quite literally completely beside the point, which is readily apparent from actually reading the article versus going into it with your hackles raised ready to "defend AI".
You most likely know and just suffered autocorrect, but given the context of using it to point out a similar mistake I feel the need to correct you: it should be “sic”, not “sick”.
I'm not sure if it contains exactly what you're looking for, but it includes several resources and notebooks related to fine-tuning LLMs (including LoRA) that I found useful.
My experience has been just the opposite. We've moved all our apps to Elixir now. It has one of the best developer experiences I've worked with, especially for concurrent programming.
I suspect OP is using an umbrella app as a shared library or something. That's the only explanation I can think of for the compilation-order issue.
As for documentation, I'm not quite sure what the OP is talking about. Elixir and Erlang both have really good documentation.
Anyway, to truly appreciate Elixir (and, for that matter, Erlang), one needs to understand OTP and the philosophy behind it. It is not just a language but a framework for building concurrent applications.
tl;dr: the base ModernBERT was trained with code in mind, unlike most encoder-only models (so presumably it was also trained on JSON/YAML objects), and it includes a custom tokenizer to support that. That's why I mention that indentation is important: different levels of indentation map to different single tokens.
This is mostly theoretical and would require a deeper dive to confirm.
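A quick way to start that dive, assuming the tokenizer is the one published under the Hugging Face ID answerdotai/ModernBERT-base (a sketch; I haven't verified the output):

```python
from transformers import AutoTokenizer

# Model ID assumed: the base ModernBERT checkpoint on the Hugging Face Hub.
tok = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")

# Tokenize the same YAML-ish line at different indentation depths.
for snippet in ["key: value", "  key: value", "      key: value"]:
    ids = tok.encode(snippet, add_special_tokens=False)
    print(repr(snippet), "->", tok.convert_ids_to_tokens(ids))

# If each run of leading spaces collapses into a single token, that supports
# the claim that different indentation levels get dedicated tokens.
```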