legerdemain's comments

No, unless by "wealthy" you mean that the average student at Stanford/Berkeley/MIT doesn't come from a poor family.

My experience is primarily with Palantir. Almost all the founders I know worked together at Palantir, went into business to work on problems they first encountered at Palantir, got angel investment from wealthy early Palantir employees, and got connected to institutional funding through those wealthy Palantir investors.


I almost took an introductory course on archeology with Glenn Schwartz, many years ago, but dropped it after the first class. I remember having very different emotional responses to faculty members as a student. Schwartz struck me as elegant, diffident, blue-blooded, and completely uninterested in teaching a bunch of young morons who were just taking the course as a distribution requirement. I'm glad to see that he and his former students are an influential force in this area of study.

I always say a professor has (at least) 3 personalities, each of which need not be similar to the others:

  - Teacher
  - Advisor
  - Faculty
I've seen professors who are great faculty members, liked by everyone, but who are abusive advisors and terrible teachers. I've seen professors who are amazing advisors and terrible teachers, and whom most faculty members hate (this often happens when there's a lot of department politics and the person is highly research-focused). I've also seen people who are good at all 3 and people who are terrible at all 3. The current chair of my department is the latter and was unanimously voted against for chair, but no one else ran, so he won by default lol.

In 2021, my employer was a major customer of Fivetran. Our Postgres syncs routinely broke and required time-consuming resyncs from scratch.

Dan's essay is dated 2022. It is now 2024, so maybe something has changed since then on the code path between Postgres and Fivetran to allow backtracking.


I get that the title of the post is wildly overstated. The author mentions a couple of meetups he has organized or heard of. Calling that a "reawakening" is... well, maybe if the author specifically means the NYC tech scene.

That said, damn I wish the meetup scene would come back to the South Bay, or even just SF, or anywhere at all within driving distance.

I get that grad students organize a lot of mini-conferences, talk series, and forums with other grad students, but I'm not in grad school.


This is yet another editorial that rehashes cliches about "toxic masculinity" and the "loneliness epidemic." Not to say these cliches don't have a basis in reality, but let's remember that this is an editorial whose purpose is to drive sales of the guest author's book.

The author, Ruth Whippman, is a "current affairs" journalist and documentary maker. In other words, she has made a career of picking trending pop-psychology topics and writing fluff commentary on them. Her books have subtitles like "Why are we driving ourselves crazy and how can we stop?"

Having said this, one pattern in how adult men behave emerges very clearly, at least to me, living in a middle-class US suburb. Men don't have a habit of building community. Women do.

For example, my suburb and neighboring ones have subreddit communities. Those subs regularly get posts from new arrivals who have trouble finding friends.

On average, female posters mostly get responses from other women, and most of the responses are about making plans to exchange contacts, get together as a group, and try activities together.

On average, male posters get grouchy responses from other men telling them that they aren't trying hard enough or creatively enough.

Women in my area have built social organizations that reach out to female newcomers and try to pair them up with other women looking for activity partners. Men have not.


If you're a man living in the suburbs and don't like "man" things (sports, cars, hunting and/or fishing in the South), you're in for a very rough time. 2x difficulty if you don't have kids.


One of my past employers hired the author of the post and put him in charge of our kitchens and facilities. He lasted a couple of years in that job and then quit for greener pastures. I'm not sure why he had accepted the job, and I never noticed any compounding transformative effects of his time with us.


You can 100% have multiple mutable references to the same variable in the same scope, without the first one getting dropped. For example:

   fn main() {
      let mut s = String::from("s");
      let mut1: &mut _ = &mut s;
      let mut2: &mut _ = &mut *mut1;
      *mut2 = String::from("t");
      println!("using mut2: {mut2}");
      println!("using mut1: {mut1}");
   }
Some people use a mental model of reference lifetimes that allows "discontinuous lifetimes" where the valid region has holes. I don't think that's how the compiler models reborrowing, and even under that model, `mut1` is created before `mut2` and gets dropped after `mut2`.


Very interesting, thanks for showing me. Messing with the code, it fails to compile if you access mut1 before the mutation through mut2, because of the mutable borrow.
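
Something like this is what I mean (a minimal variant of the example above; if I'm reading the borrow checker right, it's rejected because mut1 is read while the reborrow mut2 is still live):

  fn main() {
      let mut s = String::from("s");
      let mut1: &mut _ = &mut s;
      let mut2: &mut _ = &mut *mut1; // reborrow of mut1
      println!("using mut1: {mut1}"); // error: mut1 accessed while mut2 is still live
      *mut2 = String::from("t");
      println!("using mut2: {mut2}");
  }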

I knew but had forgotten that all variables are dropped at the end of a scope, not earlier. Borrows, however, can last for shorter times, as this example illustrates. A value can be borrowed multiple times, as long as each borrow ends before the parent borrow is used again.


> and even under that model, `mut1` is created before `mut2` and gets dropped after `mut2`

I think what you're calling "holes" is what I'd just consider "lazy" borrowing, where the reference doesn't really matter until the first time it's dereferenced. The only use of mut1 that happens before the end of mut2's lifetime is to create mut2 itself; in other words, mut1 is never actually dereferenced while mut2 exists. I don't consider `&mut *` to be dereferencing, because it's just syntax for "make a new mutable reference to the same thing", which is necessary because mutable references don't implement Copy or Clone. If you instead used `let mut2 = &mut s;`, it would be abundantly clear: they don't really overlap in any way that matters, because nothing would change if mut1 didn't exist at all until after mut2 was no longer needed.


I don't understand what you're saying. Are you saying that "lazy borrowing" is how the compiler works, or that it's just how you mentally model Rust for your own convenience? There is no "lazy borrowing" in the Rust compiler, as far as I know.

You can create a mutable reference `mut1`, mutate through it, create a new mutable reference `mut2`, mutate through it, then mutate through `mut1` again. Here is an example to illustrate:

  fn main() {
    let mut s = String::from("1"); // create value
  
    let mut1: &mut _ = &mut s; // create a ref mut
    *mut1 = String::from("2"); // write through mut1
    println!("using mut1: {mut1}"); // print mut1
  
    let mut2: &mut _ = &mut *mut1; // create a second ref mut
    *mut2 = String::from("3"); // write through mut2
    println!("using mut2: {mut2}"); // print mut2
    
    *mut1 = String::from("4"); // write through mut1 again
    println!("using mut1: {mut1}"); // print mut1
  }
Lifetime analysis can infer shorter lifetimes for unused refs. But creating refs in the wrong order matters even if you never use them, and not all orders are valid Rust programs. Here's a very small example that doesn't compile:

  fn main() {
    let mut s = String::from("1");

    let mut1: &mut _ = &mut s;
    let mut2: &mut _ = &mut s; // never used!
  
    println!("Using mut1: {mut1}");
  }


The "lazy borrowing" was just a personal mental model, not any claim about how things are implemented under the hood. That said, I think all your examples are hinging on very specific behavior from the `Deref` and `DerefMut` traits that might not apply generally.

Looking at the generated MIR output[0] from your second example changed to use the deref syntax from the first one, it seems like the value of `mut1` is silently getting downgraded to `&[&str]` under the hood, presumably due to the fact that the rules for `Deref`[1] and `DerefMut`[2] allow the compiler to liberally substitute dereferences with calls to those trait methods, which crucially allows the compiler to change the result of dereferencing `&mut` to return a shared reference. This is actually almost exactly what was being suggested in the sibling comment here: https://news.ycombinator.com/item?id=39557697.

[0]: https://play.rust-lang.org/?version=nightly&mode=debug&editi...

[1]: https://doc.rust-lang.org/std/ops/trait.Deref.html#deref-coe...

[2]: https://doc.rust-lang.org/std/ops/trait.DerefMut.html#mutabl...

(edited shortly after posting to rephrase without needing to use deref operator in the in-line snippets because it messed up the italics of all of the text after like this)


Would it be easier to convince you if I replaced String with a type that doesn't implement Deref or DerefMut? I do wish you had tried it yourself, because I hope your goal is to learn something about Rust and not just to argue down someone on the Internet.

    fn main() {
        #[derive(Debug)]
        struct X(i32);
    
        let mut s = X(1); // create value
      
        let mut1: &mut _ = &mut s; // create a ref mut
        *mut1 = X(2); // write through mut1
        println!("using mut1: {mut1:?}"); // print mut1
      
        let mut2: &mut _ = &mut *mut1; // create a second ref mut
        *mut2 = X(3); // write through mut2
        println!("using mut2: {mut2:?}"); // print mut2
        
        *mut1 = X(4); // write through mut1 again
        println!("using mut1: {mut1:?}"); // print mut1
    }


> Would it be easier to convince you if I replaced String with a type that doesn't implement Deref or DerefMut?

`&T` and `&mut T` implement Deref (and `&mut T` implements DerefMut) for all types[0].

> I do wish you had tried it yourself, because I hope your goal is to learn something about Rust and not just to argue down someone on the Internet.

I'm pretty confused about the sharp turn you took here. I happen to think your explanation of what's going on in the code you posted is incorrect, and if you truly are hoping that others here are trying to learn, you'd be more successful in helping them by not acting as if anyone who doesn't think you're correct is acting in bad faith. From my perspective, you completely misread what I said about Deref in my last comment, and then you didn't follow the advice you're giving about trying out the change I suggested with your own code, because the same error occurs with the struct you define if you don't use the deref operator[1]. I'm open to the idea that I'm wrong about `Deref` being the cause of all this, but you don't really seem to be open to the idea that you might be incorrect, which comes across as fairly arrogant given that you haven't really demonstrated that you understand that I'm talking about the `Deref` implementation on the _reference_, not the underlying value.

[0]: https://doc.rust-lang.org/src/core/ops/deref.rs.html#164

[1]: https://play.rust-lang.org/?version=nightly&mode=debug&editi...


I am confused by your responses because the "reborrowing" mechanism in the Rust compiler is pretty widely discussed.

https://smallcultfollowing.com/babysteps/blog/2013/11/20/par...

https://haibane-tenshi.github.io/rust-reborrowing/

https://stackoverflow.com/questions/62960584/do-mutable-refe...

Meanwhile, you are insisting on reasoning about Rust from first principles and making lists of links to documentation that is, at best, tangentially relevant.

For example, the fact that `&T` implements Deref is not relevant here. Applying Deref::deref to a &T returns a &T. For most intents, it's a no-op.
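
As a quick toy sketch of what I mean (my own example, not something from the links above): calling Deref::deref on a &T just hands back a reference to the same T, so it doesn't change anything about what the borrow checker will let you do.

  use std::ops::Deref;

  fn main() {
      let x = 5i32;
      let r: &i32 = &x;
      // Deref on a shared reference returns a reference to the same value.
      let d: &i32 = Deref::deref(&r);
      assert_eq!(*r, *d);
  }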


I read this as mut1 and mut2 both being downgraded to shared references because they aren't used to mutate anymore. I'd imagine that's not what's actually happening though?


No - that they are both `&mut _` indicates that a mutable reference is being acquired, regardless of whether or not they're used for any mutation. Possibly the compiler could automatically lower to a shared reference if it detects no mutating access locally, but there may be design reasons why that's impossible (plus you can't have a mutable and a shared reference live simultaneously anyway, so downgrading to shared would still be disallowed).
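
For example (a minimal sketch of my own), keeping a shared borrow alive across a mutable borrow is already rejected, so a silent downgrade to shared wouldn't help:

  fn main() {
      let mut s = String::from("s");
      let shared = &s;           // shared borrow
      let exclusive = &mut s;    // error: `s` is also borrowed as shared below
      println!("{shared}");      // shared borrow still in use here, so the two overlap
      *exclusive = String::from("t");
  }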


Somewhat tangential, but RustConf last year had a talk[1] about a guy picking up Rust to get better times at Battle Snake[2].

[1] https://www.youtube.com/watch?v=-I1BfSpoWM0

[2] https://play.battlesnake.com/


Bob McGrew from Palantir is OpenAI's research chief? That's one hell of a move up for him!


How widespread is Slurm?


Slurm is absolutely ubiquitous in the high-performance computing (HPC) community. I believe its only similar competitors in the HPC space are the SGE [1] and Torque/PBS [2] resource schedulers.

I'm not sure of the exact numbers, but I would guess that an overwhelming majority of the Top 500 Supercomputers [3] are running Slurm. As others have noted, research computing centers in academia mostly run Slurm as well, and Slurm also dominates at the DoE national labs in the US.

Oh, and as a [potentially apocryphal] fun fact, the name "Simple Linux Utility for Resource Management (SLURM)" is a backronym from the soda in Futurama! [4]

[1] https://en.wikipedia.org/wiki/Oracle_Grid_Engine

[2] https://github.com/adaptivecomputing/torque

[3] https://www.top500.org/

[4] https://futurama.fandom.com/wiki/Slurm


According to Wikipedia, "Slurm is the workload manager on about 60% of the TOP500 supercomputers." I have used it as a job manager front end for most computational clusters in the last 10 years or so.


Llama 2 models were trained on Slurm.


Related: has anyone had success moving from Slurm to Kubernetes for a physical (non-cloud) cluster primarily used for training large models on lots of GPUs?


It's used in most high-performance computing clusters (except for the folks that are still on Torque, I guess).


I see, so it's limited to HPC contexts? I'm just surprised that as a data engineer, I've never seen it in real life.


Definitely! I was in academia for ten years and SLURM is everywhere. It's free! Now outside academia, SLURM is nowhere. AWS and Snowflake are king.


Both of my last two companies used Slurm. Probably just comes down to if the company maintains its own internal compute cluster.


> Now outside academia, SLURM is nowhere

Do you mean outside of academia _and_ HPC? Industry HPC clusters using Slurm are quite common.


Many compute clouds provide a Slurm environment - AWS and GCP definitely do.

