Head End Power (HEP) is the electrical power supplied from the locomotive to the passenger cars for lighting, heating, air conditioning and other amenities - essentially the "hotel load" that keeps your private car functioning while attached to the train.
Power generated on a train is probably significantly more expensive than power you can pull from the grid. Most of Amtrak's network is not electrified, so I assume they rely on generators on the train.
It’s also called “hotel” power and is provided by the locomotive, but separate from “needed to run” power. A train can run with just the air hoses and the physical coupling; hotel power comes with the big “other cable” connected.
Some private cars do NOT use it and instead have their own generator. In theory you could have one with no lights, etc at all.
I’ve been on an Amtrak train that lost hotel power; there was nothing but emergency lighting until they got to a station where they could swap the locomotive.
But the train kept running, and the conductor had to walk the entire train announcing stops verbally, since there was no PA system.
It's from the loco, which in the US almost exclusively uses diesel-electric propulsion (mostly for capex vs. opex balance-sheet reasons), with the power sourced from medium-speed diesel generators housed in the loco. The exceptions are in and around NYC (tunnels) and some very recent electrification efforts; I think Brightline in FL was looking at electrifying some trains, or something recently did and improved performance that way.
Way back in the day of steam, heating was via open-cycle steam, and electric lighting was via generators on passenger-car axles, with a local battery to keep the lights on while stopped.
Eventually, with the end of steam, they switched to electric heating, and the electric lights could conveniently be run off that supply as well.
The people doing this at this point are mostly rich rail enthusiasts. No one is doing this to actually get around. The most popular routes are the more scenic ones, like through the mountains. They’re not hitching a car onto the Acela to go from NYC to Boston.
And rail car enthusiast associations, which usually consist of passionate but not very rich people - they will pool money together to afford a trip like this every now and then, so usually they'll go "ok we got 20k in membership fees this quarter, where can we go with this money" - so yeah, it will absolutely matter to them.
Tangentially, someone I knew from school worked at the Franklin Mint for a while and he told me their collectibles customers were mostly moderately well-off empty-nesters who now had this money to spend but really weren't into second homes or fancy cars.
I'm not sure that is true — I mean the rich part is true, but not necessarily the rail enthusiast part. One of the times we took the California Zephyr there was a private car on the end that I understood to be some sports-team tycoon who was more or less afraid of flying.
You do see these cars up in South Station occasionally attached to the Regional. I've always assumed more of a Boston -> DC routing for those. Entertain some guests, get business done, etc.
I think the Cardinal is a popular route for a lot of those guys. It’s the scenic way to Chicago. Instead of going from NYC and sort of hugging the south shore of the Great Lakes, it goes south to DC, then to Charlottesville and over the old C&O route over the Appalachians through Charleston, WV, and on along the Ohio River to Cincinnati and then eventually Chicago.
Those prices seem in reach for a dream vacation that you save up for. You can rent railcars that are already approved. Buying a custom rail car is possible, but likely out of budget for normal people.
The nice ones are almost all old business cars. The business car was used by the railroads for senior executives to move around their systems and hold meetings. They usually contain a couple of executive bedrooms and a staff bedroom (they typically carried a cook and a steward, although the roles were sometimes combined). The rear half or so of the car is an open-plan lounge/meeting room.
The cars were usually built by a company like Pullman, usually from a time frame of roughly 1900 +/- 20 years.
Huge money pits, with tons of (often quite ornate) wood, etc. Then add the cost of restoration (again, almost all of these cars are 100+ years old), retrofitting modern electrical systems, air conditioning... It could easily be a million-dollar project.
Not really, you just need to get more people. The fanciest car holds 8; other cars hold 20 to 70 people. So if you divide the price by the number of people, it's not that bad.
The first time I realized this kind of thing was on a tour of a baseball stadium. They showed us the suites. I forget how much they cost, but if you got a bunch of friends together to fill one, they were in the same range as medium-good seats.
I'm confused. Usually a "natural experiment" is a chance event that affects some random subset of a population. Here, they seem to be using "natural experiment" to refer to the event that someone decides to move to a different city. But obviously the subset of people in Amarillo, TX who decide to move to New York, NY are going to be somewhat different than the subset who don't. So isn't this confounded?
It's really strange that they just jump into the paper and keep saying "natural experiment" over and over again without any justification that they actually have one. They do eventually get to this in the "Selection effects in relocation and mobile app usage" section, but I think they really downplay the seriousness of the issue.
Thanks for pointing this out. I submitted a request to spamhaus but it's an auto-responder black hole that tells me to contact my "IT department". (My personal blog, oddly enough, does not have one.) They don't explain why this domain which has no ads, sells nothing, and never sends email would be listed.
Just to be clear, these are hidden prompts put in papers by authors meant to be triggered only if a reviewer (unethically) uses AI to generate their review. I guess this is wrong, but I find it hard not to have some sympathy for the authors. Mostly, it seems like an indictment of the whole peer-review system.
AI "peer" review of scientific research without a human in the loop is not only unethical, I would also consider it wildly irresponsible and downright dangerous.
I consider it a peer review of the peer review process
Back in high school, a few kids would be tempted to insert a sentence such as "I bet you don't actually read all these papers" into an essay to see if the teacher caught it. I never tried it, but the rumors were that some kids had gotten away with it. I just used it to worry less that my work was rushed and not very good; I told myself, "the teacher will probably just be skimming this anyway; they don't have time to read all these papers in detail."
Aerosmith (e: Van Halen) banned brown M&Ms from their dressing room for shows and wouldn’t play if they were present. It was a sign that the venue hadn’t read the rider thoroughly and thus possibly an unsafe one (what else had they missed?)
> As lead singer David Lee Roth explained in a 2012 interview, the bowl of M&Ms was an indicator of whether the concert promoter had actually read the band's complicated contract. [1]
I wonder if they had to change that as the word leaked out. I can just see the promoter pointing out the bowl of M&Ms and then Roth saying "great, thank you, but the contract didn't say anything about M&Ms, now where is the bowl of tangerines we asked for?"
To add to this, sometimes people would approach the band and ask about the brown M&Ms thing as soon as they received the contract. Roth would respond that the color wasn’t important, and that he was glad they had read the contract.
This reminds me of the tables-flipped version of this. A multiple choice test with 10 questions and a big paragraph of instructions at the top. In the middle of the instructions was a sentence: "skip all questions and start directly with question 10."
Question 10 was: "check 'yes' and put your pencil down, you are done with the test."
Because it would end up favoring research that may or may not be better than the honestly submitted alternative which doesn't make the cut, thereby lowering the quality of the published papers for everyone.
It ends up favoring research that may or may not be better than the honestly reviewed alternative, thereby lowering the quality of published papers in journals where reviewers tend to rely on AI.
That can't happen unless reviewers dishonestly base their reviews on AI slop. If they are using AI slop, then it ends up favoring random papers regardless of quality. This is true whether or not authors decide to add countermeasures against slop.
Only reviewers can ensure that higher quality papers get accepted and no one else.
I expect a reviewer using AI tools to query papers to do a half decent job even if they don’t check the results… if we assume the AI hasn’t been prompt injected. They’re actually pretty good at this.
Which is to say, if there were four selections to be made from ten submissions, I expect humans and AI reviewers would select the same winning 4 quite frequently. I share the outrage at reviewers deferring their expertise to AI, on grounds of dishonesty among other reasons. But I concur with the people who do it that it would mostly work, most of the time, in selecting the best papers of a bunch.
I do not expect there to be any positive correlation between papers that are important enough to publish and papers which embed prompt injections to pass review. If anything I would expect a negative correlation—cheating papers are probably trash.
Doesn't feel wrong to me. Cheeky, maybe, but not wrong. If everyone does what they're supposed to do (i.e. no LLMs, or at least not lazy prompts "rate this paper" and then c/p the reply) then this practice makes no difference.
The basic incentive structure doesn’t make any sense at all for peer review. It is a great system for passing around a paper before it gets published, and detecting if it is a bunch of totally wild bullshit that the broader research community shouldn’t waste their time on.
For some reason we decided to use it as a load-bearing process for career advancement.
These back-and-forths, halfassed papers and reviews (now halfassed with AI augmentation) are just symptoms of the fact that we’re using a perfectly fine system for the wrong things.
I have a very simple maxim, which is: If I want something generated, I will generate it myself. Another human who generates stuff is not bringing value to the transaction.
I wouldn't submit something to "peer review" if I knew it would result in a generated response and peer reviewers who are being duplicitous about it deserve to be hoodwinked.
Maybe they have some light to show if they're on. But is everyone supposed to just know that? Pointing a video camera at everyone you talk to is an... interesting social choice. Glasses like these should be designed with a physical mechanism to cover the cameras.
(author here) This wraps JAX and JAX's version of NumPy. It would surely require some development to keep up, although it's quite short and simple (only 700 lines), so I don't think it would be a big burden. That said, I should be clear that my goal here is just to show that this is possible/easy, and possibly inspire existing array packages to consider adding this kind of syntax.
I like your alternatives! I agree that having to write
    X = dp.Slot()
before assigning to X is unfortunate. I settled on the current syntax mostly just because I thought it was the choice that made it most "obvious" what was happening under the hood. If you really want to, you could use the walrus operator and write
    (X := dp.Slot())['i','j'] = ...
but this cure seems worse than the disease...
Actually, doing something like
    X = new['i','j'] = ...
could simplify the implementation. Currently, a Slot has to act both like an Array and a Slot, which is slightly awkward. If new is a special object, it could just return an Array, which would be a bit nicer.
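For anyone curious about the mechanics: the indexed-assignment syntax works because Python routes `X['i','j'] = expr` through `__setitem__`. Here's a toy sketch of that capture pattern (my own illustration, not DumPy's actual implementation):

```python
class Slot:
    """Toy sketch of an object that captures labeled-index assignment.

    Not DumPy's real implementation; it just shows what Python hands
    to __setitem__ when you write X['i','j'] = value.
    """

    def __init__(self):
        self.indices = None
        self.value = None

    def __setitem__(self, indices, value):
        # X['i','j'] = value calls this with indices=('i', 'j').
        # A single label like X['i'] arrives unwrapped, so normalize it.
        self.indices = indices if isinstance(indices, tuple) else (indices,)
        self.value = value


X = Slot()
X['i', 'j'] = 42
print(X.indices)  # ('i', 'j')
print(X.value)    # 42
```

One wrinkle with the `new` spelling: in a chained assignment like `X = new['i','j'] = rhs`, Python evaluates `rhs` once and assigns it to each target, so `X` is bound to the raw right-hand side while `new.__setitem__` fires only for its side effect. The right-hand side itself would therefore need to already be (or evaluate to) the desired Array.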
From the title, I read this expecting another lame observational study which I would probably distrust on the basis that it doesn't show anything causal. It's not that! Rather, if I understand it, they (1) took mice and introduced leukemia cells and (2) took human leukemia cell lines. In both cases, they found biomarkers related to leukemia growth.
(I welcome corrections to that understanding from experts!)
Personally, this seems far from convincing evidence that taurine in energy drinks is actually causing cancer. But it is suggestive and it seems like one might reasonably avoid taurine out of an "abundance of caution".
The inverse of this study, in which a nutrient that helps cells (including cancer cells) grow is "the cause of cancer," would be that every substance they find that kills cells, but is slightly more likely to kill cancer cells than regular cells, is suddenly "the cure for cancer."
The paper (an extremely difficult one to comprehend, reminding us how complex this field of research has become) only glancingly mentions energy drinks because apparently they are sometimes used to offset the effects of chemotherapy. That is, the context in which they are mentioned is people who already have leukemia. The entire rest of the paper is about how taurine produced by the body's own cells contributes to the advancement of the established disease.
Taurine deficiency has been claimed to be a driver of aging [1]. The claim from the news article about it possibly being related to cancer seems like it needs a much stronger justification.
I think the example of how to "correctly" cite a paper actually makes this issue seem smaller than it is. In reality, these conferences have very complicated (and unstated) "rules" for how a paper is supposed to look. If an "outsider" wanders in and submits a paper with new ideas, it will be very obvious that they are not a "member of the community" and their paper will usually be treated much more harshly as a result. This adds a huge amount of friction to research.
And what's particularly frustrating is that many organizers will try to combat this by writing calls for papers saying they "particularly encourage" submissions that are interdisciplinary, or focused on less fashionable topics, etc. It's good that they are trying to change things, but I think the main effect in practice is to encourage people to spend their time writing papers that have little chance of being accepted.
This issue isn't at all unique to computer science, though. Try publishing a paper in a top economics journal as an outsider!
I am fairly certain this rule was there to discourage an obnoxious citation style like "The lambda calculus [1] was intended as a foundation for mathematics". It is especially obnoxious in the case of CS, because when you cite e.g. "as Johns comments in his article about future developments of the programming languages [1963a]", it is quite important to know that this paper is actually from 1963 and can be mostly disregarded except as a historic curiosity; yet I've seen people vehemently defending this "[1]" style.
Is citation style really an issue? Even if they don't state which style they expect, surely you can tell their expected style from their existing publications? With proper tooling (e.g. LaTeX+BibTeX) it's pretty painless to switch styles.
Here's that rant of a blog post from D.J. Bernstein [0] about how the "[3, 7, 42]" citation style is superior and promotes scientific progress that I was thinking about when I wrote my comment. I personally find most of his reasoning pretty unconvincing; and so while I understand Meyer's irritation, I have to say I have to side with OOPSLA here. After all, you'd also probably want the submitted papers to be written in somewhat better than a fifth-grader's English, and not to have way too many typos (I'm talking ~15 typos per page).
Opinions vary on citation styles, my point was that it seems reasonable for a publication to standardise on one citation style, i.e. to require its use. I'm not sure it's what dynm meant by very complicated (and unstated) "rules" for how a paper is supposed to look. That article by DJB mentions that every author really ought to be using a citation-management solution like BibTeX, so that regardless of your preferences, it's easy to change your whole paper to a different citation style.
There's a slight (but only slight) irony in your use of the HackerNews convention for handling multiple links without breaking up the body of the main text. In this short-form medium it works great. I see someone made this same point in the thread you linked, at https://news.ycombinator.com/item?id=40673426
By complicated and unstated rules, I meant the sort of norms about how you talk about previous work, how you caption figures, what order the sections go in, how self-congratulatory vs. humble you are, etc. Theoretically, there are no rules for these things. But in practice, insiders seem to adopt a (deliberately?) complex ruleset and use it to signal to each other that they are insiders.
The only time I like numbers is when writing proposals, and I only like them because they save space. Other than that, I much prefer (name, year), if I am to have a preference at all.
Adding to the frustration, these shibboleths (or the lack of them) partially undermine double-blind reviewing, which is on the rise in prestigious conferences. A reviewer from the in-group may immediately spot that a submission comes from the out-group.
The amount of text in books is surprisingly finite. My best estimate was that there are ~10¹³ tokens available in all books (https://dynomight.net/scaling/#scaling-data), which is less than frontier models are already being trained on. On the other hand, book tokens are probably much "better" than random internet tokens. Wikipedia for example seems to get much higher weight than other sources, and it's only ~3×10¹⁰ tokens.
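The orders of magnitude here are easy to sanity-check. In the sketch below, the book and Wikipedia figures are the comment's own estimates; the frontier-training figure is my rough assumption, used only for illustration:

```python
# Sanity-checking the orders of magnitude in the comment above.
book_tokens = 1e13        # estimated tokens in all books (the comment's estimate)
wiki_tokens = 3e10        # estimated tokens in Wikipedia (the comment's estimate)
frontier_tokens = 1.5e13  # ASSUMED size of a frontier model's training set

# All books together amount to only a few hundred Wikipedias...
books_per_wikipedia = book_tokens / wiki_tokens
print(round(books_per_wikipedia))  # ~333

# ...and no longer exceed what frontier models are trained on.
print(book_tokens <= frontier_tokens)  # True (under the assumption above)
```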
If that 20% never had a factory job before, it is not a reliable indicator. It just means their current job is already shitty. They may get a factory job and realize that they were better off flipping burgers, even with less pay.
From TFA:
> When I first went to China as a naive 24 year old, I told my supplier I was going to “work a day in his factory!” I lasted 4 hours.
This poll is being propped up as evidence that people don't actually want to work in a factory, yet more people voiced interest in doing so than are currently, by an order of magnitude. If you believe there's a disconnect between perception and reality, that's fair, but it would have to be off by an order of magnitude on the positive side to support the premise, and an anecdote about a Chinese factory is very weak evidence of that. I would posit that many people would be happier and more fulfilled working in a factory than being stuck doing gig work or packing foreign products for Amazon or even bullshit desk work, but I'm not elitist enough to pretend to know what blue-collar workers in stagnant towns actually feel, let alone argue that they actually want the opposite of what they say. Personally, I wish I had the chance to work in a factory at 16 years old instead of a call center.
- Cost per mile: $4.72
- Minimum charge: $2296
There are also a huge number of other fees that I can't tell if you'd need to pay in practice, e.g.:
- Additional Locomotive Fee (per loco mile): $7.54
- Amtrak Locomotive Daily Charge: $2513
- Head End Power Daily Charge: $3433
- Annual Administrative Fee: $574
https://www.amtrak.com/content/dam/projects/dotcom/english/p...
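A rough sketch of how these figures might combine for a trip. This is my own arithmetic from the numbers quoted above; as noted, it's unclear which fees actually apply in practice, so treat it as illustrative only:

```python
# Figures from the tariff quoted above; which apply in practice is unclear.
COST_PER_MILE = 4.72     # $ per car-mile
MINIMUM_CHARGE = 2296.0  # $ minimum per move
HEP_DAILY = 3433.0       # $ Head End Power daily charge, if it applies


def haulage_cost(miles, days_with_hep=0):
    """Greater of mileage cost and the minimum charge, plus optional HEP days."""
    base = max(COST_PER_MILE * miles, MINIMUM_CHARGE)
    return base + HEP_DAILY * days_with_hep


print(haulage_cost(300))      # short trip: minimum charge dominates -> 2296.0
print(haulage_cost(1000, 2))  # 4720 mileage + 6866 HEP -> 11586.0
```

So even before the administrative and locomotive fees, a modest trip starts in the low thousands of dollars, which is consistent with the "split it among 20 people" math upthread.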