Hacker News | wngr's favorites

It's being reported elsewhere that future new Teslas will not have basic Autopilot (the name Tesla uses for the standard lane-keep assist it offers) at all; the only way to get any form of lane-keep assist will be to subscribe to FSD. The wording in the Ars article linked here does a terrible job of explaining the change. Existing Teslas which already have basic Autopilot will continue to have the feature.

New Teslas will now only have "Traffic-Aware Cruise Control" as standard, without lane assist, i.e. the car keeps pace with traffic and can stop/start, but the driver still has to provide steering input.


Honest question (from a parent): why was your six-year-old using YouTube unrestricted and unsupervised?

I feel kinda bad for the writer, because it's a good question: no, curing patients is not a good business model, just like public transit is not a good business model.

What a lot of folks neglect are the N+1-order effects, because those are harder to quantify and fail to reach the predetermined decision some executive or board or shareholder has already made. Is curing patients a bad business model? Sure, for the biotech company it is, but those cured patients are far more likely to go on living longer, healthier lives, and in turn contribute additional value to society - which will impact others in ways that may also create additional value. That doesn't even get into the jobs and value created through the R&D process, testing, manufacturing, logistics of delivery, ongoing monitoring, etc. As long as the value created is more than the cost of the treatment, then it's a net gain for the economy even if it's a net loss for that singular business.

If all you're judging is the first-order impacts on a single business, you're missing the forest for the trees.


Anyone else notice the obvious misspelling of “climate” in the AI-generated hero image?

Multi-version approaches to developing software aren't as good at reducing common-mode failures as many people expect[1].

[1] J. C. Knight and N. G. Leveson, “An experimental evaluation of the assumption of independence in multiversion programming,” IEEE Trans. Software Eng., vol. SE-12, no. 1, pp. 96–109, Jan. 1986, doi: 10.1109/TSE.1986.6312924.


I work at a bank with old, monstrous SQL queries. I thought I could make use of TigerBeetle to simplify the system, but sadly I just couldn't figure out how to make it work. Transactions require a lot of business logic, and I couldn't convert that into a combination of an RDBMS + TigerBeetle. I wish there were some real-world examples I could get insight from for using TigerBeetle.
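
The kind of split I imagine it wants is something like this: the RDBMS keeps all the business rules and customer data, and TigerBeetle only ever sees debit/credit pairs. A rough sketch of that idea - the `tb_client.create_transfers` call below is a hypothetical stand-in, not the real client API:

    import sqlite3
    import uuid

    def post_payment(db: sqlite3.Connection, tb_client,
                     from_customer: int, to_customer: int, amount: int) -> None:
        # 1. Business logic stays in the RDBMS: account status, limits, etc.
        row = db.execute(
            "SELECT ledger_account_id, daily_limit FROM accounts "
            "WHERE customer_id = ? AND status = 'open'",
            (from_customer,)).fetchone()
        if row is None:
            raise ValueError("source account closed or missing")
        debit_account, daily_limit = row
        if amount > daily_limit:
            raise ValueError("over daily limit")

        credit_account = db.execute(
            "SELECT ledger_account_id FROM accounts WHERE customer_id = ?",
            (to_customer,)).fetchone()[0]

        # 2. Only the resulting money movement goes to the ledger:
        #    one immutable transfer between two account ids.
        tb_client.create_transfers([{
            "id": uuid.uuid4().int,  # unique 128-bit transfer id
            "debit_account_id": debit_account,
            "credit_account_id": credit_account,
            "amount": amount,
        }])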

These are the kind of claims that make some Linux users tiresome to talk to. (Full disclosure: I am also a Linux user).

I'm not defending Microsoft (they are not necessarily my cup of tea), but these claims are only true of the pre-Nadella era (part of 2014 and earlier).

Feel free to express your opinions, but don't be hateful!


It’s hard to evaluate setups like this without knowing how the resulting code is being used.

Standalone vibe-coded apps for personal use? Pretty easy to believe.

Writing high quality code in a complex production system? Much harder to believe.


"After covid" is an illusion; there is no such thing as "after" for an ongoing pandemic...

and the company you work for seems rather incompetent


You may have just forgotten the pain of the learning curve? Admittedly Postfix & Dovecot are way more sane than rspamd. But their whole default config (and something like 50% of the options and documentation) is oriented around UNIX system accounts for each of your mail users, which seems insane and 80s-era to me (let's go dial up to the mainframe at 300 baud and see if we have any mail). It takes dozens of pages of documentation to orient yourself away from all that, understand Postfix's "address classes", that you generally want "virtual mailboxes", etc. No support for DKIM, except through sendmail-invented "milters", for which Postfix heartily points you to OpenDKIM, a project which hasn't been touched in 10+ years, doesn't support EC signing, is not packaged on most distros, is documented on an outdated non-HTTPS site with sparse, even more out-of-date plaintext documentation, referring you to a defunct FTP site to download the code, etc. And milter requires setting up a UNIX or inet socket and tedious configuration, etc. etc.
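
For reference, once you've dug all of that out of the docs it boils down to a handful of main.cf settings; the parameter names below are real Postfix options, the values are just illustrative:

    # /etc/postfix/main.cf (illustrative values)
    virtual_mailbox_domains = example.org
    virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
    virtual_transport       = lmtp:unix:private/dovecot-lmtp

    # DKIM signing via a milter listening on an inet socket
    smtpd_milters         = inet:127.0.0.1:8891
    non_smtpd_milters     = $smtpd_milters
    milter_default_action = accept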

Poor support for SASL, at least for mail users looking to (god forbid) send an email and relay it to the internet while password-protecting against random spammers doing the same; Postfix refers you instead to Dovecot SASL. That's also legacy cruft (partly the SASL protocol designers' fault): SASL has numerous "mechanisms", but nearly everybody just uses the PLAIN mechanism after ensuring a TLS channel is established first, which is about 10 lines of code to implement.
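
To show how little is going on in that common path: a PLAIN initial response (RFC 4616) is just a base64-encoded "authzid NUL authcid NUL password" string. A minimal sketch of parsing it:

    import base64

    def parse_plain(initial_response: str) -> tuple[str, str]:
        """Return (username, password) from a SASL PLAIN initial response."""
        decoded = base64.b64decode(initial_response).decode("utf-8")
        authzid, authcid, password = decoded.split("\x00")
        return authcid, password

    # parse_plain(base64.b64encode(b"\x00alice\x00hunter2").decode())
    #   -> ("alice", "hunter2")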

Just a ton of unnecessary legacy cruft IMHO.


I had enough problems with new Seagate Exos drives (actually new, not remanufactured or whatever these folks ended up with) that I've taken to buying used Western Digital Ultrastar drives on Amazon for my NAS. They're cheaper, and so far, reliable enough. I wrote a little more about my rationale[0], but, basically:

1. With RAID-6, I can take two drive failures, and it is quicker to get a replacement off Amazon than wait for an OEM to RMA a drive under warranty

2. The Ultrastars have been pretty reliable in Backblaze's published data

3. The reseller I went through seems reliable enough

4. There's at least some evidence these "remanufactured" drives are coming from the OEMs, and based on past experience working at a few hardware manufacturers, the no-trouble-found rate for RMA'd hardware is typically quite high - to the point that there is likely nothing wrong with a product that has been returned under warranty.

I guess a side benefit of this is at least I know I'm buying used drives.

0 - https://marcusb.org/posts/2024/03/used-hard-drives-from-tech...


> even if your body has had more than enough food to sustain itself for a whole week.

For reference, that amount of food is zero. You're already set for the week.

Animals don't operate like cars where they're constantly on the verge of death.


I am physically at C3 right now, and one of the prevalent themes this year is "being nice didn't work". You can see it in this year's tagline: "illegal instructions".

Definitely not abandoned, but it’s a free-time project for myself and another developer. At the end of last year we released version 0.5 with a new protocol design, and roughly a month ago released 0.5.9 with link cost changes to dramatically improve network latency.

I've "riced" Linux machines, Windows machines, different editors, terminals, file browsers, shells, web browsers, even commandline tools, to fully customize my own work machine.

I used to install cool tools, new non-standard programs and made edits to config files.

Now I basically just install Arch (personal machines) or Debian (servers), and leave almost everything at default. I have a handful of necessary tweaks for i3, mostly keybinds (Meta+O for emoji keyboard, a different runner, etc.) which I can reasonably remember, look up, or copy-paste to new machines. I used to have an intricate kickstart.nvim-based neovim setup, but I don't use it anymore.

I like tools which have configs, but I try not to touch them, so I don't have to care about which machine I'm on too much. I can ssh into any Linux or Unix-adjacent machine and just get work done. Visual Studio Code and Zed/Zeditor are wonderful with good defaults, which I don't need to change.

I adjust the font of all my terminals and editors to Fira Code, but that's pretty much it. The defaults are usually sane, and, if they're not, I look for a different program.

This is why I appreciate ArchLinux so much, too; they keep the default configs for most tools, and (almost) only make sane adjustments if any. I've given up on customizing the hell out of my machine(s). If customizing your own machine(s) is your hobby, go for it, but if you want to be productive, consider getting used to default keybinds, default naming, typing out `ls -l` instead of `ll`, and getting the job done. You can own and fully control your machine without exercising this control just because you can, everywhere.


I don't understand how anybody can still claim LLMs show "complex reasoning".

It's been shown time and time again that they'll produce a correct chain of reasoning when given a problem (e.g. wolf, goat, cabbage crossing a river; 3 guards and a door; etc.) that is roughly similar to what's in the training data but will fail when given a sufficiently novel modification _while still producing output that is confidently incorrect_.

My own recent experience was asking ChatGPT 3.5 to encode an x86 instruction into binary. It produced the correct result and a page of reasoning which was mostly correct, except for two errors which, if made by a human, would be described as canceling each other out.

But GPT didn't make two errors; that's anthropomorphizing it. A human would start from the input and use other information plus logical steps to produce the output. An LLM produces a stream of text that is statistically similar to what a human would produce. In this particular case, its statistics just weren't able to cover the middle of the text well enough but happened to cover the end. There was no "complex reasoning" linking the statements of the text to each other through logical inferences; there was simply text that is statistically likely to be arranged in that way.


This architecture is roughly how HashiCorp's Nomad, Consul, and Vault are built (I'm one of the maintainers of Nomad). While it's definitely a "weird" architecture, the developer experience is really nice once you get the hang of it.

The in-memory state can be whatever you want, which means you can build up your own application-specific indexing and querying functions. You could just use sqlite with :memory: for the Raft FSM, but if you can build/find an in-memory transaction store (we use our own go-memdb), then reading from the state is just function calls. Protecting yourself from stale reads or write skew is trivial; every object you write has a Raft index, so you can write APIs like "query a follower for object foo and wait till it's at least at index 123". It sweeps away a lot of "magic" that normally you'd shove into an RDBMS or other external store.

That being said, I'd be hesitant to pick this kind of architecture for a new startup outside of the "infrastructure" space... you are effectively building your own database here though. You need to pick (or write) good primitives for things like your inter-node RPC, on-disk persistence, in-memory transactional state store, etc. Upgrades are especially challenging, because the new code can try to write entities to the Raft log that nodes still on the previous version don't understand (or worse, misunderstand because the way they're handled has changed!). There's no free lunch.
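
Not the actual Nomad/Consul code, but a rough sketch of that "wait until the follower has caught up" read guard, assuming the Raft layer calls apply() in log order on every node:

    import threading

    class FSM:
        def __init__(self):
            self._lock = threading.Condition()
            self._applied_index = 0
            self._state = {}  # key -> (raft_index, value)

        def apply(self, index: int, key: str, value) -> None:
            """Called by the Raft layer, in log order, on every node."""
            with self._lock:
                self._state[key] = (index, value)
                self._applied_index = index
                self._lock.notify_all()

        def read(self, key: str, min_index: int, timeout: float = 5.0):
            """Serve a read only once this node has applied at least min_index."""
            with self._lock:
                caught_up = self._lock.wait_for(
                    lambda: self._applied_index >= min_index, timeout)
                if not caught_up:
                    raise TimeoutError("follower is behind the requested index")
                return self._state.get(key)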


Same here, m&m's are nefarious indeed.

Thanks for the link. Looking through the Github, this appears to be close to what I'm looking for on my own project. Namely, being able to share files automatically with nearby devices when they move within range.

I thought Bluetooth might be viable, as beacons and GattServers seem to be designed for this purpose. Android's implementation was painful, yet doable. However, after wading into Microsoft's codebase, I suspect it will be a slog.

Question: With an agreed-upon network name or configuration, and agreed-upon access parameters, how much change do you think it would take for FlyingCarpet to instead be used as a semi-anonymous, dynamic-membership mesh network?

Ex: A laptop and a phone get near each other; they recognize their proximity and that they are both offering the known network, then check for the presence of known filetypes in a known folder location and exchange small updates with each other. The same with any number of desktops, laptops, or phones that get near each other. If I were in a mall with 11 other similarly configured phones nearby, and 5 people working on laptops, then my phone would call the 16 other devices, and they would each get a file from my phone, and I would get a file from their phones.


Audiobooks while running / cooking / other activity where reading doesn’t make sense.

Ebook elsewhere.


It's lost the initial momentum but it's not dead yet. I'm still holding my breath that the critical mass makes the leap eventually; "it's just twitter but you can write and use custom clients and feed algorithms" is a compelling proposition

For me and a lot of others, it's the only twitter alternative we ever signed up for. A few never came back to twitter, but most did, mostly for social reasons. But twitter as a platform gets worse each day, and if it ever truly breaks or dies, bsky will be the Schelling point for a whole bunch of people.


>the McKinsey framework only measures effort or output, not outcomes and impact, which misses half of the software developer lifecycle.

>“The McKinsey framework will most likely do far more harm than good to organizations – and to the engineering culture at companies. Such damage could take years to undo,” Orosz and Beck argue.

I would go further and say it will likely damage user experience too. Cranking out features to game productivity scores and ignoring outcomes is a good way to get bloat, bugs, cluttered and overly complex UI, and an overall degraded user experience.


Hey, thanks for trying Spacedrive! The bug you're experiencing is a known issue when browsing before adding a "Location". We index Locations ahead of time to generate a cache that makes browsing super fast; Spacedrive is, contrary to some replies here, designed for big data - we cache and virtualize everything.

In the next update we'll fix the bug when browsing non-Locations, since it seems those who open the app tend to try browsing first, before adding a Location. It's alpha software, so I hope you give us time to iron it all out!


One potential readability improvement: right-aligning all the numbers in the tables. A more controversial opinion: standardizing all of the sizes onto either TiB or GiB.

Anyway, very cool. I am shocked at how many header files are present.

Another idea: how many unique files are there between releases, and how many unique files are there in total? Take a SHA hash of every file in every commit. Calculate how many SHAs are shared between releases vs. novel (i.e. the number of files churned per release). You can then also calculate global uniqueness over time. Of course, this means calculating billions of SHA sums, so it could take forever, unless you had some cute trick to rip the values out of the git repos directly. Maybe you could even beat the odds and find that one-in-a-quadrillion hash collision.
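
Actually, git already stores a blob SHA for every file in every tree, so `git ls-tree -r` can rip them straight out without re-hashing anything yourself. A rough sketch, with hypothetical release tags:

    import subprocess

    def blobs(rev: str) -> set[str]:
        """Blob SHAs for every file reachable from a tag/commit's tree."""
        out = subprocess.run(["git", "ls-tree", "-r", rev],
                             capture_output=True, text=True, check=True).stdout
        # each line looks like: "<mode> blob <sha>\t<path>"
        return {line.split()[2] for line in out.splitlines()}

    old, new = blobs("v5.0"), blobs("v6.0")   # hypothetical release tags
    print("blobs shared between releases:", len(old & new))
    print("blobs new in this release:", len(new - old))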


> its people so removed from the tech that they fail to RTFM

I find it highly amusing to think anything resembling an average user of tech has actually RTFM.

Heck, I still haven't finished the manual on Firefox and here I am using it as my daily driver. And it has people who actually understand how it all works writing the "manual".

EDIT: How many landlords have read the instructions on how RealPage's rent pricing software works, and how it sets values? 10%? Less?


Spot on.

Comparing to Apple (I run both Lenovo and Apple kit): I've lost two bits of hardware to Apple for multiple days on repairs in the UK. A friend just got told that there's a two-month turnaround on his custom M1 MBP 16, which has a logic board problem.

That scared the shit out of me enough to throw the money in on a Lenovo T14 gen 3 with NBD on-site repairs. If I sell my Apple kit I can afford to buy another identical machine and stick it on the shelf. I'm still £120 up on my Mac and mitigated a whole bunch of risks.

A couple of T490s is probably good enough for me, to be fair.


I’d like to offer an alternative framing: don’t debate at all, just listen.

While high school debate did teach me many positive lessons and I am thankful that I spent four years of my life doing it, many years later I have come to understand that it also taught me something truly negative: that the point of a conversation is to win.

I have put a lot of time in my adult life unlearning that trait, and reflecting on the harm it did to my relationships with other people.

If you want to grow like OP here suggests—which I think is a valuable, worthwhile goal—you will do yourself a great service in learning to listen. I know we all think that we do this, but I don’t think many of us actually do.

When you talk to others, take note of how much of the time you spend formulating a response. I find that, frequently, I'm already generating my rebuttal before the other person finishes speaking. I am effectively listening to respond, not to hear what they have to say. I'm much, much better at listening to hear today than I was a decade ago, but I still have to correct myself on this routinely.

It’s important to note that you can listen to hear and still be free to respond; if you want to have an interesting conversation you will definitely need to put in some effort too. Just make sure that you’re internalizing what they’ve said before you form a response to it. It will almost certainly slow down the pace of a conversation but stands a decent chance of making each exchange a lot more interesting for both of you.

I really believe that the most profound realizations of my life have come when I shut up and put in the effort to internalize what other people around me were doing and saying.


Like Apple, they have a vision for $$$

> I wonder how much other information exists on the web that is extremely valuable yet unnoticed

Basically every scientific advance that gets published in peer-reviewed journals (a lot of them open). Some books are also online.


As a sibling commenter said, those proprietary apps have a server part that sends notifications to Google's server.

If UnifiedPush were to gain enough traction (say, if it were adopted on FireOS, HuaweiOS and whatever other platforms don't use Google services), those apps would likely start to support it directly.

Another possible avenue is to add UnifiedPush support to the push libraries used by these apps on both the server and client side. Multiple libraries (Google FCM included; I linked at least two others from Amazon and Huawei in another comment here) abstract push notifications away so the developer gets a single server to talk to and a unified API across iOS, Android, Web, etc. If these libraries started supporting UnifiedPush, adoption could increase a lot without involving any developer effort from these proprietary apps.
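
To make that concrete, a rough sketch of what such a library-level fallback could look like server-side. The `device` dict and `send_via_fcm` helper are hypothetical, but the UnifiedPush half is essentially just an HTTP POST to the endpoint the device registered:

    import requests

    def send_via_fcm(token: str, payload: bytes) -> None:
        """Hypothetical stand-in for the library's existing FCM code path."""
        raise NotImplementedError

    def send_push(device: dict, payload: bytes) -> None:
        endpoint = device.get("unifiedpush_endpoint")
        if endpoint:
            # UnifiedPush: POST the payload to the distributor-provided URL;
            # the distributor delivers it to the device.
            requests.post(endpoint, data=payload, timeout=10)
        else:
            send_via_fcm(device["fcm_token"], payload)  # existing Google path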

