Seems nice, but I'm afraid it wouldn't be compatible with my main work setup: VS Code on my main monitor, my web browser on my external monitor, and my eyes going back and forth between the two windows every few seconds to either read code or check the effects in the hot-reloading app. If one of the windows were dimmed, it would be painful.
Then you probably don't need a focus app. That said, HazeOver has some "rules" that could help you. I agree it really only makes sense for my case: a single big display.
When I'm on the 14" MBP, I keep everything maximized anyway...
Noob question (I only learned how to use ollama a few days ago): what is the easiest way to run this DeepSeek-R1-Distill-Qwen-32B model, which isn't listed on ollama (or any other unlisted model), on my computer?
If you are specifically running it for coding, I'm satisfied with using it via continue.dev in VS Code. You can download a bunch of models with ollama, configure them in Continue, and then use its drop-down to switch between them. I find myself swapping to smaller models for syntax reminders and larger models for beefier questions.
I only use it for chatting about the code: while this setup also lets the AI edit your code, I don't find the output good enough to risk that. I get more value from reading the thought process, evaluating it, and then cherry-picking the bits of its code I actually want.
In any case, if that sounds like the experience you want and you already run ollama, you just need to install the continue.dev VS Code extension and then open its settings to configure which models appear in the drop-down.
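For what it's worth, the setup above boils down to a couple of commands. A minimal sketch, assuming you use the `code` CLI; the model tags and the extension ID are my assumptions, not something this thread prescribes, so adapt them to taste:

```shell
# Small/large pair to switch between in Continue's drop-down
# (example tags from the ollama library; pick your own).
SMALL="qwen2.5-coder:1.5b"   # quick syntax reminders
LARGE="qwen2.5-coder:14b"    # beefier questions

# Pull the models if ollama is available.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$SMALL"
  ollama pull "$LARGE"
fi

# Install the continue.dev extension (ID assumed from the marketplace).
if command -v code >/dev/null 2>&1; then
  code --install-extension Continue.continue
fi
```

After that, the models you pulled can be added to Continue's settings so they show up in its drop-down.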
Search for a GGUF on Hugging Face and look for a "use this model" menu, then click the Ollama option and it should give you something to copy and paste that looks like this:
ollama run hf.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF:IQ1_M
Whenever they have an alias like this, they usually (always?) also publish the model under a more descriptive name with the same checksum; e.g. the checksum 38056bbcbb2d corresponds to both of these:
I prefer to use the longer name so I know exactly which model I'm running. In this particular case, it's confusing that they grouped the Qwen and Llama fine-tunes under R1, because they're not R1.
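To see those checksums locally, `ollama list` prints a short ID next to each name; two entries sharing an ID are the same underlying model. A sketch, assuming ollama is installed and reusing the Mistral example from above (the short alias is hypothetical):

```shell
# The long hf.co name from the earlier example; the alias is made up.
LONG="hf.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF:IQ1_M"
ALIAS="mistral-7b-instruct-iq1_m"

if command -v ollama >/dev/null 2>&1; then
  ollama list                 # the ID column is the checksum
  ollama cp "$LONG" "$ALIAS"  # second name for the same blob, effectively free
fi
```

`ollama cp` just adds another name pointing at the same data, so keeping a descriptive local name costs nothing.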
Many people here seem impressed by the speed/performance. I have been using all sorts of terminals/emulators over the past 20 years, and it never occurred to me that a terminal could be slow. When I type a command, I get the result instantaneously in any terminal. What are the use cases that can make a terminal slow?
> What are the use cases that can make a terminal slow?
Rendering huge stdout/stderr can be a bottleneck. Try running a program that writes half a million lines of text to stdout without redirecting to a file, and a lot of terminal emulators will struggle to keep up.
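A quick way to feel this yourself is a rough one-liner benchmark (the line count is arbitrary; I'm assuming a bash-like shell for the `time` keyword):

```shell
# Print half a million lines straight to the terminal and time it.
time seq 500000

# Same output redirected to a file for comparison; this isolates
# the terminal's rendering cost from the cost of generating the text.
time seq 500000 > /tmp/seq.out
```

On a fast terminal both finish quickly; on a slow one, the first command's wall-clock time is dominated by rendering.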
I think this mostly plays a role when using modal text editors like vim in your terminal. Speed matters so very much then!
Give it a try if you want ;)
Simon Halvdansson runs "Harmonic", an Android client for Hacker News. I have been using it daily for over two years and I sincerely recommend it.
I even asked him for a feature (marking a story as read), and he implemented it shortly after.
Shout-out to you Simon!
https://github.com/SimonHalvdansson/Harmonic-HN
https://play.google.com/store/apps/details?id=com.simon.harm...