The jobs in algo trading are very interesting for technically and mathematically inclined people. It's really one of those fields where you have a direct impact on the results of your work - measurable in additional dollars made.
There is 0 social impact. That's the downside of course - but hey, how many jobs out there really have any kind of positive social impact? Not 0, but close to it.
That impact is not made by HFT - finding the right risk premia for different investments is very valuable but that is a signal measured over days/weeks/months because actual capital investment decisions take that long.
Intraday financial games are zero-sum. What HFTs gain, they leech away from mutual funds and pension funds and retail investors and market makers who operate over a longer horizon.
I work on very interesting technical problems with very smart colleagues for excellent pay. All my career progression is in compensation, so I can remain an IC forever and no one thinks that's a negative. I'm subject to no politics whatsoever, and there's very little politics in the company as a whole. The work I do every day has a direct material impact on the company and I'm rewarded proportionally to my impact. My WLB is so-so, but it's better than it was in grad school so I'll take it.
Regarding social impact, the world does have some demand for liquidity and price discovery. Providing those services is both essential and extremely difficult. It's definitely not the most social good I could be doing with my talents, but I think it's weakly positive.
I thought the H100s were ~$30-40k each. But they're not widely available, and you usually buy multiple boxes from vendors that also come with expensive CPUs/RAM, etc.
H100s are intended to be retailed piecemeal. The big enterprise model is DGX GH200, at $10 mil each.
It's just that current supply is extremely short, so the H100s end up only available to big buyers. But that will be resolved in time. Nvidia wants every university lab to have an H100 so no competitor sneaks in there.
The deltas between the military/technology of the world's top economies might not even be noteworthy to a species capable of interstellar travel. From their perspective we might all just look the same.
And are most of the people she's well-connected to not annoyed that she lost their money?
And isn't "attractive" just another way of saying good genetics?
And is she really that smart, given that she first dropped out, then her company failed, now she's been imprisoned, and the company only "worked" anyway because of fraud?
> And are most of the people she's well-connected to not annoyed that she lost their money?
Ya, admittedly one would have to assess that, and also make some estimate of how much her notoriety will positively or negatively affect her offspring's outcomes. I tend to lean toward it being a net positive, since fame seems valuable almost regardless of how it's acquired these days.
> And isn't "attractive" just another way of saying good genetics?
It also includes height, medical history, longevity, intelligence, ambition/drive, and work ethic. I admittedly don't know much about her medical history, but I might be willing to gamble.
> And is she really that smart,
No, she's not really that smart; I'd estimate 1-2 standard deviations above the mean. In my opinion that's smart enough for a potential spouse with other positive attributes, particularly if you think that you're smarter still.
The distribution is different. The median programmer provides much more value than the median artist. Additionally, math/CS skills apply much more broadly than art skills. The entertainment industry would be much less leveraged without the tools we built for them.
You decide you want to optimize for the number of lives saved. You decide that future lives, those of people yet to be born, are worth as much as those currently alive. You place a small, but importantly not zero, probability on existential risks to humanity, so that when you do the expected value calculation, even an infinitesimally small risk of humanity's complete extinction results in negative infinite utility. You're also very smart and realize that smart people can do damage if their objectives are misaligned, and you start to worry about something much, much smarter than you with objectives misaligned with humanity's more broadly. In their defense, reward specification is indeed a hard problem; RL agents find unexpected policies that maximize reward even in toy settings. At this point you're down the rabbit hole and no other problem seems to compare. Climate change will leave some people alive, pandemics leave some people alive, AIs have no such kindness.
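To spell out the expected value step (my notation, not the original commenter's): with a small but nonzero extinction probability p, and the loss of all future lives valued as unboundedly negative, the arithmetic looks roughly like this:

```latex
% Illustrative sketch only: a total-utilitarian expected value over all future lives.
% p          = probability of an existential catastrophe (assumed > 0)
% U_survive  = utility if humanity survives (all future lives counted)
% U_extinct  = utility if humanity goes extinct, treated as unboundedly negative
\[
\mathbb{E}[U] \;=\; (1 - p)\,U_{\text{survive}} \;+\; p\,U_{\text{extinct}},
\qquad
U_{\text{extinct}} \to -\infty \;\Rightarrow\; \mathbb{E}[U] \to -\infty
\;\text{ for any } p > 0 .
\]
% Under these assumptions, reducing p dominates every other term, which is
% why no other cause area seems to compare once you accept the premises.
```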