It can be very device-specific, unfortunately. ThinkPads tend to work quite well. I had a Framework that my wife took from me, and it's truly fantastic; works out of the box.
I don't remember Vim's Markdown support being anything special, either; I do a lot of Markdown work, and tended to use Markdown-specific editors on the Mac like Ulysses and iA Writer, while doing my technical writing in BBEdit. (I never found Vim to fit me particularly well for prose of any kind, even though I was pretty experienced with it. Apparently my writing brain is not modal.)
Semi-ironically given the Org mode discussion, the markdown-mode package for Emacs makes it one of the best Markdown editors I've used!
I too have found this. However, I absolutely love being able to mock up a larger idea in 30 minutes to assess feasibility as a proof of concept before I sink a few hours into it.
They both fit Gaussians, just different ones! OLS fits a 1D Gaussian to the set of errors in the y coordinates only, whereas TLS (PCA) fits a 2D Gaussian to the set of all (x,y) pairs.
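Concretely, here's a quick numpy sketch (the toy data and noise scale are made up) showing the two fits side by side:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)

# OLS: minimize squared errors in y only (a 1D Gaussian on the y-residuals)
X = np.column_stack([x, np.ones(n)])
slope_ols, intercept_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# TLS via PCA: first principal direction of the centered (x, y) point cloud
pts = np.column_stack([x, y])
pts_centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts_centered, full_matrices=False)
dx, dy = vt[0]            # direction of largest variance
slope_tls = dy / dx

print(slope_ols, slope_tls)
```

The two slopes will generally differ because they minimize different residuals: OLS penalizes vertical distances to the line, TLS penalizes perpendicular distances.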
Yes, and if I remember correctly, you get the Gaussian because it's the maximum entropy (fewest additional assumptions about the shape) continuous distribution given a certain variance.
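In symbols, that's the standard constrained maximization (sketched here in general form, not specific to the regression discussion above):

```latex
\max_{p}\; -\!\int_{\mathbb{R}} p(x)\,\ln p(x)\,dx
\quad \text{subject to} \quad
\int p = 1,\;\; \mathbb{E}_p[x] = \mu,\;\; \operatorname{Var}_p(x) = \sigma^2
\;\;\Longrightarrow\;\;
p^{*}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2 / (2\sigma^2)}.
```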
Both of these do, in a way. They just differ in which Gaussian distribution they're fitting to.
And in how, I suppose. PCA is effectively moment matching; least squares is max likelihood. These correspond to the two ways of minimizing the Kullback–Leibler divergence, to or from a Gaussian distribution.
Even documented libraries can be a struggle, especially if they are not particularly popular. I'm doing a project with WiFi/LoRa/MQTT on an ESP32. The WiFi code was fairly decent, but the MQTT and especially LoRa library code was nearly useless.
Sonnet 3.5 fails to generate basic Jetpack Compose library properties properly. Maybe if somebody tried really hard to scrape all the documentation and force-feed it, then it could work. But I don't know if there are examples of this.
Like a general LLM, but with the complete Android/Kotlin documentation pushed into it to fix the synapses.
Of course, why wouldn't it? It's a generative model, not a lookup table. Show it the library headers, and it'll give you decent results.
Obviously, if the library or code using it wasn't part of the training data, and you don't supply either in the context of your request, then it won't generate valid code for it. But that's not the LLM's fault.
This is something that actually concerned me, so I started using a lot of `bash` scripts to emulate the behaviour I like in Notable. That way I could use Notable without the fear of being locked in, while adapting its behaviour to better suit my workflow.
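For example, the kind of helper I mean (a hypothetical sketch, in Python here rather than my actual bash; the notes directory and front-matter fields are assumptions about a Notable-style setup):

```python
#!/usr/bin/env python3
"""Create a Notable-style note: a Markdown file with YAML front matter.
Hypothetical sketch; the notes directory and field names are assumptions."""
import sys
from datetime import datetime, timezone
from pathlib import Path

NOTES_DIR = Path.home() / "notes"   # assumption: wherever Notable is pointed at

def new_note(title: str) -> Path:
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    body = (
        "---\n"
        f"title: {title}\n"
        f"created: {now}\n"
        f"modified: {now}\n"
        "---\n\n"
    )
    NOTES_DIR.mkdir(parents=True, exist_ok=True)
    path = NOTES_DIR / f"{title}.md"
    path.write_text(body, encoding="utf-8")
    return path

if __name__ == "__main__":
    print(new_note(" ".join(sys.argv[1:]) or "Untitled"))
```

Because the notes stay as plain Markdown files on disk, Notable picks them up like any other note, and nothing about the script locks you into the app.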
I did a similar thing to replace the personal workflow I'd been running on a home JIRA instance for some time.
I thought about opening it up, but TBH there's loads of these things already, and the real value for me is that I built it myself and it conforms perfectly to what I need. I looked at a few others, but the overhead of grokking someone else's tool was just too much of a hassle for me.
The ergonomics of these things are really close to perfect when you decide yourself what features you want and how they should be implemented.
I couldn't agree more about the difficulty of grokking somebody else's tool.
The thing is, though, it took me a long time to get to the level where I could put something like this together, and I wish I'd had some sort of guidance earlier on.
That's why I've tried to make my implementation modular, so others can take the pieces they haven't figured out yet from mine and implement them in their own workflow.
I should get around to documenting it so others can replicate, imitate, or fork it.
I think you meant to link to your GitHub and Reddit post, but all your links seem to point to the same HN thread. Since I'm interested in the matter, would you mind posting the correct links?