Hacker News | kukkukb's comments

Surely, at some point in the near future, they'll be producing solar panels using solar energy?


Well, in about two years their solar production trend will catch up with their electricity consumption trend...

So either that, or they'll deploy electric-arc sculptures all over the country for the population to see, hear, and smell.


There's a very fine line between Just In Time and Just Too Late. We advocate for Just In Case inventory, which is much more resilient.


Can you please share a little more about what Just In Case inventory looks like?


When the end user asks a question, is it sent to OpenAI? Or is this an LLM you built yourself?


Right now it's hitting OpenAI, but we want to try out other LLMs in the future.


Thanks. That might be a privacy issue, especially if the requester is from the EU.


We run Julia code in production. These are compute-heavy services called by other parts of the app. Version upgrades are normally easy. Upgrade, run the unit tests, build the Docker images and deploy to K8s. Maybe we've been lucky, but since 1.3 we've never had big issues with version upgrades.


That's wild. I've seen huge performance regressions from changes to Base, commands being dropped, irreconcilable breakages with basics of the ecosystem, spurious bugs in the networking libraries that are nearly impossible to chase down, and compatibility with essential packages breaking over and over. I stopped caring around 1.7 when I realized this was the norm and wasn't going to change. You must be homed in on a specific set of packages with a lot of custom code.


Oh, how the mighty Slashdot has fallen!


+1 for Linux, please


Congrats on launching! This looks really cool. Any plans for i18n?


We support i18n through dynamic variables. In short, you can define your flow in Frigade and, instead of hardcoding strings in one language, simply pass in variables from your codebase.

Do you use any i18n platform? We're thinking of perhaps creating some integrations.
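To illustrate the idea of dynamic variables, here is a minimal sketch of the caller's side. The `FlowStep` shape, `renderStep` helper, and `{{...}}` template syntax are all hypothetical stand-ins, not Frigade's actual API; the point is just that the flow stores placeholders and the host app supplies already-translated strings:

```typescript
// Hypothetical flow step: holds a template with placeholders instead of
// hardcoded English strings.
type FlowStep = { titleTemplate: string };

// Fill the template with translated strings coming from the host app's
// own i18n layer (e.g. Rails i18n, i18next).
function renderStep(step: FlowStep, vars: Record<string, string>): string {
  return step.titleTemplate.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "");
}

const step: FlowStep = { titleTemplate: "{{welcomeTitle}}" };

// The app passes in a German string it translated itself.
const rendered = renderStep(step, { welcomeTitle: "Willkommen!" });
console.log(rendered); // → Willkommen!
```

Since the flow only ever sees opaque variables, the translation pipeline stays entirely inside the customer's codebase.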


We use Rails's built-in i18n. So this should work, as we could just send in the translated strings.


It's me!


Because when the browser un-gzips it, it takes a lot more memory than the minified version. And everyone seems to complain about browser bloat.
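The point can be checked with Node's built-in zlib (a sketch; the exact byte counts depend on the input, so none are claimed here):

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// A verbose, unminified snippet vs. a minified equivalent, repeated to
// mimic a larger bundle.
const verbose =
  "function addNumbers(firstNumber, secondNumber) {\n" +
  "  return firstNumber + secondNumber;\n" +
  "}\n";
const unminified = verbose.repeat(50);
const minified = "function a(b,c){return b+c}\n".repeat(50);

// On the wire, gzip shrinks the verbose source dramatically...
const wireBytes = gzipSync(Buffer.from(unminified)).length;

// ...but after decompression the browser holds the full unminified text,
// which is larger than the minified source would have been.
const inMemoryBytes = gunzipSync(gzipSync(Buffer.from(unminified))).length;

console.log({ wireBytes, inMemoryBytes, minifiedBytes: Buffer.byteLength(minified) });
```

So gzip helps transfer size, but the decompressed (unminified) text is what sits in memory, which is the parent comment's point.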


Johannesburg, South Africa. 100 Mbit/s home fibre:

  ping 1.1.1.1
  PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=1.36 ms
  64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=1.32 ms
  64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=1.34 ms
  64 bytes from 1.1.1.1: icmp_seq=4 ttl=58 time=1.38 ms
  64 bytes from 1.1.1.1: icmp_seq=5 ttl=58 time=1.37 ms

  ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.33 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.38 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=1.35 ms
  64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=1.36 ms
  64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=1.35 ms

