Hacker News | lis's comments

I found out about it on LinkedIn, because Christian Kroll wrote about it:

https://www.linkedin.com/posts/christian-kroll_bigtech-europ...

I've no clue why they don't have a blog post or anything up yet.


Not necessarily these words, but a lot of these translations stem from linguistic purism:

https://en.wikipedia.org/wiki/Linguistic_purism


Unfortunately no. Here is a full list of countries: https://support.teufel.de/hc/en-us/articles/22903502875282-W...


We forked excalidraw a while ago to allow running it without Firebase as a backend. It can already be self-hosted. It needs some love, but it's a good starting point:

  * https://github.com/b310-digital/excalidraw
  * https://github.com/b310-digital/excalidraw-room/
  * https://gitlab.com/kiliandeca/excalidraw-fork
  * https://gitlab.com/kiliandeca/excalidraw-storage-backend
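A rough idea of how the pieces fit together, as a compose sketch. The service names, build paths, and ports below are illustrative assumptions, not taken from the repos' actual deployment configs:

```yaml
# Hypothetical sketch -- service names, paths, and ports are assumptions.
services:
  excalidraw:
    build: ./excalidraw                  # forked frontend
    ports:
      - "8080:80"
  excalidraw-room:
    build: ./excalidraw-room             # websocket collaboration server
    ports:
      - "8081:80"
  storage-backend:
    build: ./excalidraw-storage-backend  # replaces the Firebase backend
    ports:
      - "8082:8080"
```

Check each repo's README for the actual ports and environment variables before wiring them together.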


Yes, I agree. I've just run the model locally and it's making a good impression. I tested it with some Ruby/RSpec gotchas, which it handled nicely.

I'll give it a try with aider to test the large context as well.


In ollama, how do you set up the larger context, and figure out what settings to use? I've yet to find a good guide. I'm also not quite sure how I should figure out what those settings should be for each model.

There's context length, but then, how does that relate to input length and output length? Should I just make the numbers match? 32k is 32k? Any pointers?


For aider and ollama, see: https://aider.chat/docs/llms/ollama.html

Just for ollama, see: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-c...

I’m using llama.cpp though, so I can’t confirm these methods.
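As a concrete sketch of the per-request approach from the ollama FAQ: the REST API takes an "options" object, and "num_ctx" sets the context window for that call. The model name is a placeholder, and nothing is sent here, so no running server is needed:

```python
import json

# Build a request body for ollama's POST /api/generate endpoint.
# "num_ctx" sets the context window (in tokens) for this one request;
# actually sending it requires an ollama server on localhost:11434.
payload = {
    "model": "llama3",                # placeholder model name
    "prompt": "Why is the sky blue?",
    "options": {"num_ctx": 32768},    # context window, in tokens
}

body = json.dumps(payload)
print(body)
```

Without an option like this, ollama falls back to its small default context, which is why a long prompt to a 32k model can get silently truncated.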


Are you using it with aider? If so, how has your experience been?


Ollama breaks for me: if I manually set the context higher, the next API call from the client resets it back.

And ollama keeps taking it out of memory every 4 minutes.

LM studio with MLX on Mac is performing perfectly and I can keep it in my ram indefinitely.

Ollama's keep-alive is broken, as a new REST API call resets it afterwards. I'm surprised it's this glitchy with longer-running calls and custom context lengths.
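If the problem is the default five-minute unload, the ollama FAQ documents server-side overrides via environment variables. The variable names below are from the FAQ (OLLAMA_CONTEXT_LENGTH only exists in recent releases); the values are illustrative:

```shell
# Keep loaded models in memory indefinitely instead of unloading
# after the default five minutes, and raise the default context
# window for all requests. Set these before starting `ollama serve`.
export OLLAMA_KEEP_ALIVE=-1
export OLLAMA_CONTEXT_LENGTH=32768
```

Individual API requests can also pass a "keep_alive" field, but as described above, each call overrides the previous setting, which is exactly the resetting behavior being complained about here.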


> In a Discord vocal on March 13th 2025 Michel Becker revealed that a documentary film explaining the solution would be shown in cinemas around France on May 2nd 2025. He hoped a broadcast in other countries would follow.


Yes, I'm really impressed by the speed as well.

A bit more about the collaboration can be found here:

https://cerebras.ai/blog/mistral-le-chat


Same here. It's also one of the reasons I loved building castles in Stronghold so much. It's so much more fun to build than to destroy.

That's also why I love Anno 1800 (as well as its predecessors). You can (mostly) avoid combat.


For a lot of people in Germany, it's not about the risk of an accident, but rather the cost of building and decommissioning nuclear reactors.


At least not intentionally, though I agree that the domain sounds spammy. I found it on Mastodon via Mathy Vanhoef: https://infosec.exchange/@vanhoefm/112440635423432857

