Hacker News | rcv's comments

I have no idea about his credentials or suitability for the job, but Warsh is the son in law of Trump's buddy Ronald Lauder. I get the feeling he's not picking people he doesn't think he can control.

> I get the feeling he's not picking people he doesn't think he can control.

The irony in wanting to control people when you lack self control.


Do you really not see the difference here? It's amazing how hard people will try to bOtH sIdEs this administration.

1) Obama was not in office when this advance was given. What favors could Penguin Random House have been trying to bribe him for?

2) The Obamas were paid a $65M advance on their books, which, while a huge sum, was actually seen as a reasonable investment at the time given the expected popularity of those books[1]. Both books were insane hits and sold like crazy. "A Promised Land" sold ~900k copies _on the first day_[2]. They almost certainly earned out their advance and are probably continuing to rake in more from sales.

3) This Melania movie is widely expected to sell very poorly. While making unpopular movies isn't in itself a crime, the amount paid to her in royalties does not look to any reasonable person like a sound investment. At least, not if you expect your return to come from ticket sales or streaming fees.

[1] https://www.vox.com/culture/2017/3/2/14779892/barack-michell... [2] https://thehill.com/blogs/in-the-know/in-the-know/526599-bar...


Looks like your login is busted. I get the following when trying to log in with Google or Github:

```
{
  "code": "REDIRECT_URL_NOT_WHITELISTED",
  "error": "Redirect URL not whitelisted. Did you forget to add this domain to the trusted domains list on the Stack Auth dashboard?"
}
```


I remember reading a paper back in grad school where the researchers put a dead salmon in the magnet and got statistically significant brain activity readings using whatever the analysis method à la mode was. It felt like a great candidate for the Ig Nobel awards.


That was our paper! We showed that you can get false positives (significant brain activity in this case) in fMRI if you don't use the proper statistical corrections. We did win an Ig Nobel for that work in 2012 - it was a ton of fun.


This is one for https://news.ycombinator.com/highlights!

(I mention this so more people can know the list exists, and hopefully email us more nominations when they see an unusually great and interesting comment.)

p.s. more on the salmon paper in this thread:

https://news.ycombinator.com/item?id=46291600

https://news.ycombinator.com/item?id=46288560

https://news.ycombinator.com/item?id=46288557


Interesting -- I just use https://news.ycombinator.com/best?h=168 for a weekly roundup, but that only tracks posts. Might need to supplement it with highlights or similar.

Reviewing the HN docs, https://news.ycombinator.com/bestcomments?h=168 might also be a good summary link.


> so anyone who didn't get vaccinated before Nov. 2021 is considered unvaccinated?

The answer is in the first paragraph of the "Design, Study Populations, and Outcomes" section:

Exposure to COVID-19 vaccination was defined as the administration of a first dose of an mRNA vaccine between May 1 and October 31, 2021 (inclusion period), which was the mass vaccination period for adults in France, who primarily received mRNA vaccines. Multiple vaccinations in exposed individuals were not considered. The unvaccinated group was defined as individuals who remained unvaccinated as of November 1, 2021. Individuals vaccinated before May 1, 2021 (12.0%), or who received a first dose of another (ie, non-mRNA–based) COVID-19 vaccine during the inclusion period (1.4%) were excluded.

> why is everyone so keen to defend big pharma? i thought we were supposed to hate them?

Are we? Says who? Certainly there are bad actors who profit off of the misfortune of others. There are also brilliant people who work hard to bring about access to lifesaving treatments. There have certainly been examples of fraud in the past, and there have also been examples of truly amazing public health benefits.

Do I personally think the US health system could be better structured to disincentivize the former and promote the latter? Definitely! Is that evidence of a global conspiracy? Nope.

> had COVID and got hit by a bus? that was a COVID death

There's a good analysis of that here: https://www.astralcodexten.com/p/the-evidence-that-a-million...

TLDR is that all-cause deaths increased in line with reported COVID deaths, which strongly refutes the "had covid, got hit by a bus" theory.


I know you're joking, but the most interesting finding here is that they got more financially intelligent and tuned in after ascending to higher positions within congress.


... until you get accused of generating that video with another AI.


No fair, i was born with 11 fingers!


> 2025-09-29T16:55:10.367Z is the date. Write a haiku about it.

what in the world?


That's just a dynamic bogus prompt used to trace and extract the system prompt.

Here's how it works in detail: https://mariozechner.at/posts/2025-08-03-cchistory/


> I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.

Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system that's composed of ~45 unique services. Those services are managed by different teams with slightly different language/library/etc needs and preferences. Before we containerized everything it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments) but it's not so great if two services require a different version of Boost.

With Docker, each team is just responsible for making sure their own containers build and run. Use whatever you need to get your job done. Our containers get built in CI, so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have very good confidence it will run on production.


OK, this seems like an absolutely valid use case. Big enterprise microservice architecture, I get it. If you have islands of dev teams, and a dedicated CI/CD dev ops team, then this makes more sense.

But this puts you in a league with some pretty advanced deployment tools, like high-level Kubernetes, Ansible, and cloud orchestration work, and nobody thinks those tools are really appropriate for the majority of dev teams.

People are out here using docker for like... make install.


Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

Also docker has the network effect. If there were a good lightweight tool that was sufficiently better, people would absolutely use it.

But it doesn’t exist.

In an ideal world it wouldn’t exist, but we don’t live there.


> Having a reproducible dev environment is great when everyone’s laptop is different and may be running different OSes, libraries, runtimes, etc.

Docker and other containerization solved the “it works on my machine” issue


almost. there is still an issue with selinux. i just had that case. because the client develops with selinux turned off, the docker containers don't run on my machine if i have selinux turned on.


you miss an intermediate environment (staging, pre-prod, canary, whatever you want to call it) with selinux turned on.


i don't. the customer does. and they don't seem to care. turning selinux off works for them and they are not paying me to fix that or work around it.


Docker is that lightweight tool, isn’t it? It doesn’t seem that complex to me. Unfamiliar to those who haven’t used it, but not intrinsic complexity.


Imagine you have a team of devs, some using macOS, some using Debian, some using NixOS, some on Windows + WSL. Go ahead and try to make sure that everyone's development environment works by simply running "git pull" and "make dev".


Ha, I've written a lot of these Makefiles and the "make dev" command even became a personal standard that I added to each project. I don't know if I read about that, or if it just developed into that because it just makes sense. In the last few years, these commands very often started a docker container, though. I do tend to work on Windows with WSL and most of my colleagues use macOS or Linux, so that's definitely one of the reasons why docker is just easier there.


15 years ago i had a customer who ran a dozen different services on one machine, php, python, and others. a single dev team. upgrading was a nightmare. you upgraded one service, it broke another. we hadn't yet heard about docker, and used proxmox. but the principle is the same. this is definitely not just big enterprise.


That is wild. I have been maintaining servers with many services and upgrading never broke anything, funnily enough: on Arch Linux. All the systems where an upgrade broke something were Ubuntu-based ones. So perhaps the issue was not so much about the services themselves, but the underlying Linux distribution and its presumably shitty package manager? I do not know the specifics so I cannot say, but in my experience that was always the issue. Since then I do not touch any distribution that is not pacman-based; in fact, I use Arch Linux exclusively, with OpenBSD here and there.


i used "broken" generously. it basically means that for example for multiple php based services, we had to upgrade them all at once, which led to a large downtime until everything was up and running again. services in containers meant that we could deal with them one at a time and dramatically reduce the downtime and complexity of the upgrade process.


Would there still have been a problem if you were able to install multiple php versions side-by-side? HPC systems also have to manage multiple combinations of toolchains and environments and they typically use Modules[1] for that.

[1] https://hpc-wiki.info/hpc/Modules


probably not, but it wasn't just php, and also one of the goals was the ability to scale up. and so having each service in its own container meant that we could move them to different machines and add more machines as needed.


Oh, I see what you mean now, okay, that makes sense.

I would use containers too, in such cases.


> so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine.

It seems you never had to deal with timezone-dependent tests.


What are timezone-dependent tests? Sounds like a bummer


I once had to work with a legacy Java codebase and they hardcoded the assumption their software would run in the America/New_York timezone, except some parts of the codebase and tests used the _system_ timezone, so they would fail if run in a machine with a different timezone.
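The original codebase was Java, but the failure mode is easy to reproduce in any language. Here's a minimal Python sketch (names and dates are illustrative, not from the actual codebase) of a test that encodes the author's local timezone and therefore passes in New York but fails on a UTC CI runner:

```python
from datetime import datetime, timezone, timedelta

# Like the legacy code, this helper formats a timestamp in the *system*
# local timezone -- the hidden dependency that makes tests tz-sensitive.
def local_day(ts: float) -> str:
    return datetime.fromtimestamp(ts).strftime("%Y-%m-%d")

# 2021-01-01 03:00 UTC is still Dec 31 in America/New_York (UTC-5 in
# winter), so a test written on a New York machine asserts "2020-12-31"...
ts = datetime(2021, 1, 1, 3, 0, tzinfo=timezone.utc).timestamp()

# ...which only holds when the system timezone happens to be New York's:
ny = timezone(timedelta(hours=-5))
assert datetime.fromtimestamp(ts, tz=ny).strftime("%Y-%m-%d") == "2020-12-31"
assert datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d") == "2021-01-01"

print(local_day(ts))  # output depends on the machine running this
```

Running the test suite inside a container with TZ pinned makes results reproducible, which is the point the parent comment was making.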


[flagged]


I like how you didn't even ask for any context that would help you evaluate whether or not their chosen architecture is actually suitable for their environment before just blurting out advice that may or may not be applicable (though you would have no idea, not having enquired).


Much like parallel programming, distributed systems have a very small window of requirement.

Really less than 1% of systems need to be distributed. Are you Google? No? Then you probably don't need it.

The rest is just for fun. Or, well, pain. Usually pain.


I like how you didn't even enquire as to what size organisation they worked in order to determine if it might actually be applicable in their case.


I never said it was applicable to them, in fact I said the opposite:

> Obviously that ship has sailed for you, but I mean in the general sense.

In the general sense, no, you don't need a distributed system. Even if you have billions of dollars worth of revenue - no, you don't need a distributed system. I know, because I've worked on monoliths that service hundreds of thousands of users and generate billions in revenue.

If you're making YouTube, maybe you need a distributed system. Are you making YouTube? Probably not.

You can, of course, choose to make a distributed system anyway. If you want to decrease your development velocity 1000x and introduce unbelievable amounts of complexity and risk.


Were there at least 1000 engineers working on that system you worked on?


Yes, 1500 or so.


what they described is a fairly common setup in damn near most enterprises


Yeah, most enterprise software barely works and is an absolute maintenance nightmare because it's a sprawling distributed system.

Ask yourself: how does an enterprise with 1000 engineers manage to push a feature out 1000x slower than two dudes in a garage? Well, processes, but also architecture.

Distributed systems slow down your development velocity by many orders of magnitude, because they create extremely fragile systems and maintenance becomes extremely high risk.

We're all just so used to the fragility and risk we might think it's normal. But no, it's really not, it's just bad. Don't do that.


Both can be true


Enterprises are frequently antipattern zoos. If you have many teams you can use the modular monolith pattern instead of microservices, that way you have the separation but not the distributed system.


Wherefore art thou IBM


> The fly-by-wire flight software for the Saab Gripen (a lightweight fighter) went a step further...

I would love to hear some war stories about the development of flight software. A lot of it is surely classified, but I'm fascinated by how those systems are put together.

