Hacker News | 7moritz7's comments

> Chief microcontroller rival Adafruit

They are PCB brands. The microcontrollers are made by the usual manufacturers like ST, Renesas, Infineon...


I agree the writing is imprecise.

But of course Arduino historically also didn't make the Atmel or Pico chips, so I can sort of see what they were going for.


The writing is precise. Just inaccurate.


FOSS just does not have the aggressive scaling mindset. Even success stories like Linux's game compatibility and Chromium can be traced back to regular tech companies, as opposed to nonprofits.


Maybe this is ok?

Many non-open-source apps do reach critical mass, but they eventually go bust. Emacs, git, Linux, and I think even Mastodon have slower uptake but do not seem to carry such a high risk of collapse. While YouTube, Facebook, et al. seem to have an insurmountable moat and user base, the reality is that recent history is littered with boom-to-bust failures:

MySpace, Vine, Yahoo all the way back to GeoCities.

I would be patient and only worry if Mastodon is actively dying.

For me it's the only social media app I have installed.


What is the aggressive mindset of Bluesky?

I have both Mastodon and Bluesky accounts, and in my experience Bluesky is just simpler to use, which attracted more of the types of accounts I wanted to follow. Nothing aggressive about that, just good UX resulting in a richer pool of accounts.


It is quite remarkable just how frequently people on tech forums underestimate reasoning models. Same story on several large technology subreddits. Wouldn't have been my guess for who would get caught off guard by AI progress.


The least volatile dataset, businesses with 1-4 employees, is steadily climbing in adoption. I feel like as long as the smallest businesses (i.e. the most agile, non-enterprise ones) increase in adoption, other sizes will follow.


They got big with MySQL optimizations (maybe MariaDB?). Neon would be Postgres as a service.


Vitess (sharded MySQL) is how they became relevant. But broadly, they've spent a lot of time building a great DBaaS. Their plan is to do the same with Postgres.


Can someone explain Neon's pricing to me?

5 minutes of inactivity makes it idle.

If I get one query every 5 minutes, and each query takes 100 ms, over a whole month, do I get charged for 720 hours or for 14 minutes (total compute time)?


Update: it's 720 hours of compute cost. Not really serverless. It's just a managed database service that can scale to zero. That's it.
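My reading of the thread, sketched as arithmetic (how the idle timer resets is an assumption about scale-to-zero behavior, not something taken from Neon's docs):

```python
HOURS_PER_MONTH = 720

def billed_hours(query_interval_min: float, idle_timeout_min: float = 5) -> float:
    """Rough compute-hours billed per month for a scale-to-zero instance.

    Assumption: each query resets the idle timer, and the instance only
    suspends after idle_timeout_min with no activity at all.
    """
    if query_interval_min <= idle_timeout_min:
        return HOURS_PER_MONTH  # never idle long enough to suspend
    # active for idle_timeout_min out of every query_interval_min
    return HOURS_PER_MONTH * idle_timeout_min / query_interval_min

print(billed_hours(5))   # one query every 5 min -> 720 hours (always on)
print(billed_hours(60))  # one query an hour -> 60 hours
```

Under that assumption, one query every 5 minutes keeps the instance warm forever, so you pay for wall-clock hours, not the 14 minutes of actual query time.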


5 bucks gets you 8 GB RAM, 4 vCPUs, and 75 GB NVMe at Contabo, actually.

I know this is apples and oranges, but that's 16 times the RAM.


You get all of those resources except what you actually need: a managed PostgreSQL.

The difference in price is really the value added by having someone else manage PostgreSQL for you.


What is there to manage on a single instance, on a single VM?


PITR (point-in-time recovery), setting up a replica, observability, performance reports, etc.
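For a sense of what that covers, here is a sketch of the WAL settings a self-managed PITR setup needs (the archive path is hypothetical, and a real setup also needs base backups and a tested restore procedure):

```ini
# postgresql.conf -- minimal WAL settings for PITR and a streaming replica
wal_level = replica                        # enough WAL detail for PITR/replicas
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'  # ship each WAL segment somewhere safe
max_wal_senders = 3                        # allow streaming replicas to connect
```

That is before you get to monitoring, failover, vacuum tuning, and upgrades, which is roughly the gap a managed service is charging for.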


When I first saw their LLM integration on Facebook I thought the screenshot was fake and a joke


Greater China is never used to describe a region. It means China, Tibet, Macao, Hong Kong and Taiwan according to Apple.


That has been solved with RAG, OCR-ish image encoding (DeepSeek recently), and just long context windows in general.


RAG is like constantly reading your notes instead of integrating experiences into your processes.
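The analogy can be made concrete with a toy retrieval loop (the notes, query, and bag-of-words "embedding" here are all made up; a real system would use a learned embedding model and a vector store):

```python
from collections import Counter
import math

# Toy RAG: the "notes" are re-read on every query instead of being
# integrated into the model's weights.
notes = [
    "postgres supports point-in-time recovery via wal archiving",
    "mastodon is a federated social network",
    "neon suspends idle compute after five minutes",
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]

context = retrieve("how does postgres recovery work")
prompt = f"Context: {context[0]}\nQuestion: how does postgres recovery work"
print(context[0])
```

Every query repeats the lookup from scratch, which is exactly the "reading your notes each time" framing of the comment.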


Not really. For example we still can’t get coding agents to work reliably, and I think it’s a memory problem, not a capabilities problem.


On the other hand, test-time weight updates would make model interpretability much harder.


Hasn't RLHF, and RL with LLM feedback, been around for years now?


Large latent flow models are unbiased. On the other hand, if you purely use policy optimization, RLHF will be biased towards short horizons. If you add in a value network, the value estimate has some bias (e.g. an MSE loss on the value implies a Gaussian bias). Also, most RL has some adversarial loss (how do you train your preference network?), which makes the loss landscape fractal, which SGD smooths incorrectly. So basically, there are a lot of biases that show up in RL training, which can make it both hard to train and, even if successful, not necessarily optimizing what you want.
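The short-horizon point can be illustrated with a toy discounted-return calculation (the discount factor and reward numbers are made up for illustration):

```python
# With a discount factor gamma < 1, the objective prefers a small
# immediate reward over a larger but delayed one -- the short-horizon
# bias the comment mentions.

def discounted_return(rewards: list[float], gamma: float = 0.9) -> float:
    return sum(gamma ** t * r for t, r in enumerate(rewards))

short = [1.0]               # reward 1.0 right now
long_ = [0.0] * 10 + [1.5]  # reward 1.5 after ten steps

print(discounted_return(short))  # 1.0
print(discounted_return(long_))  # 1.5 * 0.9**10, roughly 0.523

assert discounted_return(short) > discounted_return(long_)
```

So even though the delayed trajectory earns 50% more raw reward, the discounted objective ranks it below the immediate one.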


We might not even need RL, as DPO has shown.
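For reference, a sketch of the pairwise DPO objective (the log-probabilities and beta here are made-up illustrative numbers, not output from any real model):

```python
import math

# DPO trains directly on (chosen, rejected) preference pairs, with no
# reward model or RL rollouts: push the policy's log-ratio for the
# chosen answer above its log-ratio for the rejected one.

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1 / (1 + math.exp(-beta * margin)))  # -log sigmoid

# Policy already prefers the chosen answer more than the reference does:
print(dpo_loss(-10.0, -14.0, -12.0, -13.0))  # margin = +3, smaller loss
# Policy prefers the rejected answer: larger loss.
print(dpo_loss(-14.0, -10.0, -12.0, -13.0))  # margin = -5
```

The whole thing is a supervised classification-style loss, which is why it sidesteps the RL training biases discussed above.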


> if you purely use policy optimization, RLHF will be biased towards short horizons

> most RL has some adversarial loss (how do you train your preference network?), which makes the loss landscape fractal which SGD smooths incorrectly

