
The easy way out for you is getting into a PhD program. You should have an easy way in if your research background is good.

Now you might not want a PhD for various reasons, but tech jobs are a bit more tricky to navigate nowadays. I'd honestly not hire someone in your position; there is really no easy way to do it.

Given that, the difficult way out is starting a company, which has an entirely different set of challenges.


Yeah, a PhD would be overkill. On top of that, PhDs at top labs are super-competitive.

Thanks for some check & balance.


Having credible research experience before applying will get you into most PhD programs.

(I have a PhD from a top 4-5 university.)


There is a rematerialization pass; there is no real reason to couple it with register allocation. LLVM's regalloc is already somewhat subpar.

What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.

I can understand this is easier said than done of course.


> There is a rematerialization pass; there is no real reason to couple it with register allocation

The reason to couple it to regalloc is that you only want to remat if it saves you a spill.
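
To make the trade-off concrete, here's a small C++ sketch of the classic case (my own illustration, not LLVM's actual heuristic; `lots_of_register_pressure` is a hypothetical helper): a trivially recomputable value is live across a high-pressure region, and rematerializing it at its use replaces a spill/reload pair.

    // Illustrative sketch only, not anything LLVM actually emits.
    extern void lots_of_register_pressure(float *v);  // hypothetical helper

    float scale(float *v, int i) {
        const float k = 0.5f;          // trivially rematerializable: an immediate
        lots_of_register_pressure(v);  // register pressure peaks here
        return v[i] * k;               // k is needed again after the pressure point
    }

    // Without remat, the allocator keeps k alive by spilling it:
    //     store k -> [sp+12]          ; spill before the call
    //     call  lots_of_register_pressure
    //     load  k <- [sp+12]          ; reload (extra memory traffic)
    //
    // With remat, it simply re-creates k at the use:
    //     call  lots_of_register_pressure
    //     mov   k, #0.5               ; recomputed from an immediate, no spill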


Remat can produce a performance boost even when everything has a register.

Admittedly, this comes up more often in non-CPU backends.


> Remat can produce a performance boost even when everything has a register.

Can you give an example?


Rematerializing 'safe' computation from across a barrier or thread sync/wait works wonders.

Also loads, stores, and function calls, but those are a bit finicky to tune. We usually tell people to update their programs when this is needed.
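
A minimal CPU-flavoured sketch of the barrier case, using C++20 std::barrier as a stand-in for a GPU sync like __syncthreads() (my own example, not tied to any particular backend): instead of holding the cheap value live across the wait, recompute it afterwards; this is only legal if its inputs aren't written across the sync.

    #include <barrier>
    #include <cmath>

    void worker(std::barrier<> &sync, const float *in, float *out, int i) {
        // Naive form: 'scaled' is computed before the barrier and stays
        // live (holding a register) across the wait:
        //     float scaled = in[i] * 0.25f;
        //     sync.arrive_and_wait();
        //     out[i] = std::sqrt(scaled);

        // Rematerialized form: nothing of ours is live across the wait;
        // the cheap multiply is simply redone afterwards. This is only
        // 'safe' if nobody writes in[i] across the barrier, which is why
        // the load case is finicky and sometimes needs source changes.
        sync.arrive_and_wait();
        float scaled = in[i] * 0.25f;
        out[i] = std::sqrt(scaled);
    }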


> Rematerializing 'safe' computation from across a barrier or thread sync/wait works wonders.

While this is literally "rematerialization", it's such a different case of remat from what I'm talking about that it should be a different phase. It's optimizing for a different goal.

Also feels very GPU specific. So I'd imagine this being a pass you only add to the pipeline if you know you're targeting a GPU.

> Also loads, stores, and function calls, but those are a bit finicky to tune. We usually tell people to update their programs when this is needed.

This also feels like it's gotta be GPU specific.

No chance that doing this on a CPU would be a speed-up unless it saved you reg pressure.


Also try LXDE and LXQt if you would like a 'lighter KDE' vibe instead of the 'lighter GNOME 2' vibe of XFCE.

Yep LXQt is a beast, super snappy and complete. I use it on an old laptop (2012) and it still works great with a very low memory footprint (much lower than XFCE when I tested a bunch of them).

LXQt is great, except for the fact that it can only do 'regular, italic, bold, bold italic' for font weights, even when a font supports medium (my preferred font weight; regular just seems so dainty now that I've gotten used to medium).

I also like the fact that it allows use of any window manager and even supports Wayland now (so Wayfire is an option).


If I want something light, I tend to gravitate towards fluxbox, icewm, i3/sway, windowmaker or twm, depending on my mood and the paradigm I am looking for.

There are many other options though.


Maybe std::make_movable would have been a slightly better name, but it's so much simpler to write std::move.

Split the difference with std::moveable().

Also signals it doesn't actually move, while remaining just as fast to type.



But that misses too much of the semantics. It also implies ownership transfer, even if copied.

Thanks to the incredible advances in developer tooling over the last 50 years (i.e. tab-autocompletion), there should be no difference in writing those two.

There is a difference: lots of stuff starts with make_, so there are lots of possible completions.

std::rvalue
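
For context on the naming debate: std::move doesn't move anything; it's just a cast to an rvalue reference, roughly equivalent to this sketch (simplified; the real definition lives in <utility>):

    #include <type_traits>

    // Moves nothing; it only casts its argument to an rvalue reference so
    // that overload resolution can later pick a move constructor/assignment.
    template <class T>
    constexpr std::remove_reference_t<T>&& move_(T&& t) noexcept {
        return static_cast<std::remove_reference_t<T>&&>(t);
    }

Which is why all the proposed names are really just different spellings for "cast to rvalue reference".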

> he thinks he knows better than the entire medical establishment

I think you have missed the part about why we are in this situation.

People are absolutely fed up with the medical establishment. There is no way to twist this.


The solution is to fix the medical establishment, not to appoint a person trained by Facebook moms and natural food blogs.

Yes, I agree.

Now, everyone trying to fix the medical establishment is immediately called an anti vaxxer, science denier, etc.

At some point it was inevitable that we get someone who can shrug these labels off because they do not have a scientific reputation that can be killed with these labels.

My point is, again, we are in this situation because sane attempts to fix things have not worked, to the point that people will literally try anything.


> everyone trying to fix the medical establishment is immediately called an anti vaxxer, science denier, etc.

That's because the thought leaders who are fed up with the medical establishment are gaining traction by spreading anti-vax and science-denial ideas and not calling out specific medical establishment failings (other than "big pharma is a boogeyman!"). So it's hard to take their position seriously (even though I, too, am anti medical establishment).


> Now, everyone trying to fix the medical establishment is immediately called an anti vaxxer, science denier, etc.

Well, they keep showing up with shitty unverified claims... are we supposed to treat their shitty claims as valid just because they're against the grain?

It's also good to keep an eye on the graft. It's funny how pretty much every big personality in the alt-med space has totally awesome products to sell you that Big Science won't let you know about.


If your “fixes” for the medical establishment include spreading unsubstantiated fear mongering about vaccines and science denial then you would be right to be classified as an anti vaxxer and a science denier.

I think you might be missing the poster's point. He agrees with you on nearly every point you are making. He is, however, expanding on that, saying that the problem is something of a self-own by the combination of science, science reporting, and science-driven policy. Trust was so thoroughly lowered that there was almost no avoiding an event like Trump/RFK. It can be true both that 1) RFK is not qualified and is likely to make things worse, and 2) this is partly the responsibility of the establishment for squandering the trust that the public put in it.

When the conclusions don't match your predictions, examine your priors.

Do not "do not mistake..." ...


The amount of code was relatively low.

Not the million-line codebases we have today; 50-100 lines was the usual program or script.


IIRC they were initially using actual ttys (as in typewriters), and the input delay was hell, which is the reason so many UNIX commands are two letters.

So likely they would work on the printout:

   1,$n
And then input the corrections into ed(1).

That was one generation before this. In Unix v4 times, input latency was on the order of ~100 ms, basically limited by the serial port.

Pretty advanced terminals were starting to show up too - https://en.wikipedia.org/wiki/VT100


There's a whole lot more; check `third_party/` if you work at Google.

(Disclaimer: I used to work at Google a long time ago.)


There were directories there for sure, but I honestly never saw anything get used from there (except I think TensorFlow was in there?).

My personal experience was I never used any OSS code (that wasn't Google Open Sourcing its own code) except Linux & LLVM.

It definitely didn't feel meaningful to the company besides the ones I called out.


A lot of it also comes from acquired projects/companies that are brought into google3, with plans to deprecate and get rid of them eventually.

You should be allowed to do whatever you want, by default. Preventing things only makes sense if there's a good reason to.

Otherwise, for everything you do, you first have to think about whether you are allowed to, like a slave.


> or have the resources or time to setup an LLC.

Costs about $200 at most places and can be done online.

