
If you're interested in privacy or protecting your email address and you're still using Chrome or Gmail, you need to start by eliminating those and de-googleifying your life as much as possible.

That said, it seems like a lot of people are forced into Google products by work, so I can see the value of this for them.


This extension is also available on Firefox. You don’t have to use Google to use it; it works well with my ProtonMail address.


Open-pit copper mines are among the most horrible forms of environmental extraction imaginable. I grew up next to one, and it was pretty unpleasant. Not quite as nasty as uranium mining, but it's getting up there.


As far as I can tell, nobody has reported on the "neighborhood portal" offered to police yet. Looking at how it works, along with the bizarre, giddy culture around how empowering this is for law enforcement, is just disturbing.


Major corporations used to launch speculative research projects and risky ventures, but that rarely happens any more. Resources are being shifted to stock buybacks; everything else is a secondary priority.

Some companies, like Chevron and Texas Instruments, have even committed in writing to "returning 100% of all cash flow in perpetuity" via buybacks and dividends. So yes, they have written commitments saying they won't invest in future growth or research no matter how much money they make; that is how committed they are to the buyback-first model.

Google is a true leader in the buyback wars: they were the first company to commit to $25B in buybacks per quarter, which used to be a shockingly large number for these programs. Lots of other companies have followed that pattern since.


In this podcast, Mark Zuckerberg takes it for granted that the most important things in the universe are productivity growth and the number of people citing your publications, and talks about ways we can make those numbers go up.

Are those really our biggest priorities right now? Should they be? How many more decades are we supposed to pretend that the pie getting bigger will make all problems magically trickle-solve themselves, when there is plenty of evidence that this is not happening?


Tyler Cowen has a book with basically that premise - that in the long run, economic growth is the only thing that matters. It's called Stubborn Attachments.

It's not an intuitive position for most people, but it's a pretty convincing argument. It's certainly been the most important thing of the last 100 years, in my mind (think of the incredible increases in health, standards of living, etc., which, even if they aren't spread anywhere close to equally, have benefited almost everyone).


Endless economic growth is only a convincing argument if you don't understand basic math.

Things that appear exponential locally usually turn out to be logistic or cyclic.

I would like to say that I'm surprised that an economist doesn't understand math, but I can't.


Questions of endless economic growth are a red herring/off topic.

If exponential economic growth is possible now, then it should be invested in now. If yields change in the future, investment should be reassessed.


"If exponential economic growth is possible now..."

It's not.

You can arrive at many false conclusions by assuming a false antecedent.


I'm not sure what you think is happening then. What do you measure economic growth in, Dollars or percentages?


Regardless of the units you use, you won't get an exponential curve. You can fit an exponential curve to the data. You can also fit a logistic curve to the data.

Heck, you can even fit a linear curve to the data.


This line of thinking can lead to catastrophic consequences if the long-term implications are not well understood. Short-term exponential growth can lead to a catastrophic crash down the road if the debt, resources, or social fractures are not sustainable in the long run.


Yeah, most of that was before we discovered (or we just didn't care enough) that we have been destroying natural resources in a completely unsustainable way, from the extraction processes (mining) to disposal (oceans filled with trash).


Sounds like there are externalities that need to be internalized, but not like we need to stop growing.


His book argues for a more nuanced form of growth maximization. It’s basically: growth is the most important thing, but there must also be human-rights constraints, along with some others that prevent destroying the world.

It’s worth reading (and it’s short). The only thing that bothered me about it was what I suspect was a poorly disguised justification of religion via “faith” that I found unnecessary and out of place.


Arguably the worst stewards of the environment and other resources have been the economies that didn't focus on growth in a capitalistic sense. Consider the corrupt banana-republic politicians of Brazil who cheerfully support burning the rain forests, or the apocalyptic levels of environmental disregard displayed by the Soviets.

Yes, it's true that here in the US, we set a river on fire that one time. That sucked. We tried to do better, and largely succeeded -- which feeds back into the larger question of why it takes 40x as much time and money to build a subway in the Y2K era as it did in the early 1900s.

Point being, with a healthy economy, at least you have the option to do right by your neighbors, your countrymen, and the planet as a whole. That may be what Tyler is getting at.

Although in one part of the interview, his words sent chills down my spine ("I’ve often suggested for graduate school, instead of taking a class, everyone should be sent to a not-so-high-income village...") All three of the participants spent a lot of time and verbiage signaling their historical awareness, but apparently the Cultural Revolution escaped their notice because it didn't happen in Vienna or Edinburgh. I'd like to assume good faith... but holy hell, dude, how'd you think that would sound?


Maybe a growing pie is not perfect, but it is greatly preferable to a shrinking one. For a simple example, take a company that loses some contracts and stops growing: you get new conflicts, or old ones become critical, people start leaving, etc. As long as the pie is growing, it can hide or solve by default many existing problems which would be very difficult or costly to fix otherwise.


> As long as the pie is growing, it can hide or solve by default many existing problems

I think one issue with this analogy is that companies have central points of authority, but Western economies mostly don't. For example, GDP growth doesn't magically fix rising homelessness, and in the absence of central authority it becomes very difficult to address. People often don't like the idea of their "taxpayer money" being spent on other people who aren't (currently) working.


>For example GDP growth doesn't magically fix rising homelessness

Nothing magically fixes homelessness, but GDP correlates well with a host of public benefits; for instance, "poverty reduction and per capita income growth performances are correlated" [1]. These correlations are detailed more rigorously in papers on Google Scholar.

In the US, homelessness appears to have decreased greatly since it was first investigated in the 1860s as people moved to cities. During the Great Depression, around 1930, there were about 2M homeless out of a total population of 120M; now it's around 500k out of a population of 320M.

I'd suspect that GDP rise over those periods helped reduce homelessness quite a bit by providing more resources per capita to address it.

[1] https://www.kiwiblog.co.nz/2011/02/gdp_correlations.html


Homelessness has been increasing rapidly in recent years across North America even while GDP growth has been adequate. You actually need to devote resources to fight homelessness.


>Homelessness has been increasing rapidly in recent years across North America even while GDP growth has been adequate

I don't think that's true. For example, in the US, the Department of Housing and Urban Development has this [1], which shows the total number of homeless decreasing even as the total underlying population increases.

[1] https://en.wikipedia.org/wiki/Homelessness_in_the_United_Sta...


As you can see, the number is going up. I would also be careful with the HUD data, as it is based on a yearly count of shelters conducted in January [1]. Counting homeless people is tricky, as only about 10% are on the street, and the methods used to estimate populations are fairly unsophisticated.

From the same surveys, it's fairly clear that homeless populations have been rising in recent years; look at exhibit 2.5 on page 25 [2]. Worse, transitional housing is being replaced with shelter beds, meaning those who are homeless are more likely to stay so.

I should also be clear that looking at the big picture like this misses details: things like Los Angeles's homeless population increasing by 16% [3], a 5% rise in Seattle [4], and similarly in Vancouver [5]. All of these cities have great GDP growth. Experts across the political spectrum are fairly consistent that the cause is related to affordable housing.

Now, I can't make you change your mind about something you want to believe, but when you really engage with what's going on, suggesting causality between GDP and homelessness is a very shallow, superficial analysis. I think the data makes it pretty clear what's going on. Whatever position you take, I'd encourage you to really engage with the details.

1. https://www.washingtonpost.com/business/2019/09/18/surprisin...

2. https://files.hudexchange.info/resources/documents/2018-AHAR...

3. https://www.latimes.com/local/lanow/la-me-ln-homeless-count-...

4. https://www.city-journal.org/seattle-homelessness

5. https://www.vancourier.com/news/vancouver-s-record-breaking-...


Isn't this a prime example of correlation, not causation?


Given that GDP per capita is not only correlated with but causally related to many wellness factors in society, it is much more likely not simply a correlation. These connections are well studied; check Google Scholar if you really want to dig into it.

More GDP/capita means more resources to fight things like homelessness.


Obviously more GDP means more resources, the question is whether it solves issues like homelessness "by default" as suggested by the OP I responded to. My opinion is that it greatly depends on the dominant cultural narratives of the general public and of the ruling party.

In the UK we've seen fairly reasonable GDP growth in the last decade, but we've also seen a large overall increase in statutory homelessness acceptances over the same period [0]. That's just the official records: the Crisis charity, drawing on government estimates, says the number of "hidden homeless" single adults has increased by a third over roughly the same period [1]. The reasons generally given are the lack of affordable housing being built and welfare reforms such as capping the housing allowance.

The cultural narratives in play here are that a prolonged austerity was required in order to salvage the economy, and that our welfare is subject to significant levels of stress from deliberate "benefit cheats" or "welfare mummies". Both of these are contentious, as far as I've been able to gather.

[0] https://researchbriefings.parliament.uk/ResearchBriefing/Sum...

[1] https://www.london.gov.uk/sites/default/files/london_assembl...


Is a growing pie sustainable?


Why wouldn't it be? Humans have infinite wants and needs, and these change over time in taste and perception as well. So the counterpoint would have to show why this isn't the case, and why a "growing pie" is actually against human nature.


It's the compound interest problem. If you keep compounding growth, say GDP at 1-2% a year, then within a few thousand years you've outgrown all the resources within a sphere that many light years in radius. So clearly, growth is not sustainable and must eventually slow until it approaches zero.

To be clear, we're far away from that today. But the parent is mathematically correct, the best kind of correct.
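The compounding arithmetic is easy to sanity-check. A rough sketch (the 1e80 figure is the commonly cited estimate of atoms in the observable universe; everything else is just logarithms):

```python
import math

def years_to_multiply(rate, factor):
    """Years of compound growth at `rate` per year needed to grow by `factor`."""
    return math.log(factor) / math.log(1.0 + rate)

# At 2%/year, how long until output has multiplied by 1e80 --
# roughly the estimated number of atoms in the observable universe?
t = years_to_multiply(0.02, 1e80)
print(round(t))  # on the order of ten thousand years
```

Even starting from a single atom's worth of output, a 2% compounding economy runs out of universe in under ten millennia.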


A lot of people nitpicking at the argument here, about resource substitutions, efficiency improvements, etc.

So let's simplify the argument a little. It's not possible to have infinite growth in a finite universe. By definition, infinity > not-infinity for arbitrarily large values of not-infinity.

Therefore the limit of the growth rate as time approaches infinity is 0.

At some point you reach the physical limit of the system and exponential growth ends. This is true even for digital systems if the digital good involves some physical resource like energy, storage space, bandwidth, etc.

For example, if we keep growing energy usage at the historical rate of 3% a year, we would cook ourselves with the waste heat within 400 years. It doesn't matter where the energy comes from. Now we can get around that by moving off the earth, which buys time, but eventually, also in short order, we'd use all the available energy from our star. We could get around that through nuclear energy sources or widening the area to encompass more stars. But there is only so much matter and energy in the universe. Eventually the show stops.
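The waste-heat claim can be sanity-checked with round numbers (world primary power of roughly 18 TW and a solar constant of ~1361 W/m² are standard estimates; the rest is arithmetic):

```python
import math

WORLD_POWER_W = 18e12    # rough current world primary power consumption
EARTH_RADIUS_M = 6.371e6
SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere

# Total sunlight intercepted by Earth: solar constant times cross-section.
solar_intercept_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS_M ** 2

growth = 1.03 ** 400     # 3%/year compounded for 400 years
future_power_w = WORLD_POWER_W * growth

# By then, humanity's energy use (all of which ends up as waste heat)
# would exceed the total sunlight hitting the planet.
print(future_power_w > solar_intercept_w)
```

Under those assumptions, 400 years of 3% growth puts our waste heat above the entire solar input to Earth, which is one way to operationalize "cook ourselves."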

I challenge you to find a theoretical counter example.


> At some point you reach the physical limit of the system and exponential growth ends. This is true even for digital systems if the digital good involves some physical resource like energy, storage space, bandwidth, etc.

At some point well before that, the sun will go red giant and swallow earth, so that's sort of beside the point. In the meantime, if we continue to fuel GDP growth via inefficient use of non-renewable natural resources, we'll have big problems far sooner than any astrophysical limit sets in.


You are assuming that increased resource consumption is always tied to increased GDP. Others have argued that once GDP hits a threshold, both incremental and total resource consumption decrease.

For example, a richer economy might replace physical goods with digital goods which have higher value and lower resource demand.

[1] Check out Andrew McAfee on More from Less


I think that's only a temporary reprieve. Replacing your resource-intensive good with a more efficient one is a one-time gain; it doesn't give you growth forever, because exponential growth with any nonzero level of resource consumption can't be reconciled with a finite universe.

Sooner or later all exponential growth must end. That includes all forms of compound growth, such as GDP increasing at any percent you want year after year.


>because any level of exponential growth with any level of resource consumption can't be reconciled with a finite universe.

I think you misread my post. Real world growth with negative or zero resource consumption can be reconciled with a finite universe.

On a mathematical level, there is no reason GDP cannot grow without bound in a finite universe, as GDP does not need material inputs.


Maybe I was being too generous to your post. There is no way to fully decouple GDP from material inputs. Even for digital goods and services the inputs are quite small, but they're still there, which means they're still finite, which means GDP is finite.


So perhaps the issue is that there are two cases being conflated.

In the practical sense, there is ample room for GDP growth on Earth in the immediate future. A huge portion of the population is underdeveloped, while only a fraction of the world's natural resources have been extracted. Data also suggests that GDP in developed economies can increase while total resource consumption decreases, once they reach a certain level of development.

If someone is going to make a hypothetical mathematical argument at universal scale, I think it's worth pointing out that GDP is an accounting construct with no inherent tie to material reality. For example, two people could sell each other arbitrary services (e.g. silence) and charge arbitrary sums.


>like growing GDP at 1-2% a year, within some few thousand years you've grown bigger than all the resources

Increased GDP does not require consuming more resources. Using them more efficiently adds significantly to GDP, so this argument doesn't provide a rationale to claim infinite GDP growth is impossible.

A good example: many advanced economies have increased per capita GDP significantly while lowering per capita energy consumption at the same time (along with per capita consumption of many other materials).


More notable people than you have been incorrect with this argument before. You're making what's called a Malthusian argument.

Anyhow, let’s say everything you say is correct for today’s wants and needs. But if it is true that society changes and needs change etc, then your argument doesn’t hold for tomorrow.

Suppose for a moment there is only 10 years worth of X left, who is to say that one of these things doesn’t happen:

* X thing is no longer needed because Y thing is no longer in fashion

* X thing gets replaced by other comparable Z material

... and this goes on forever.

That’s the point.


> If you keep compounding growth, like growing GDP at 1-2% a year, within some few thousand years you've grown bigger than all the resources within a sphere the same number of light years in radius.

Would be super interested to see your math. What are the inputs and equation that get you to these specific outputs (a few thousand years, all resources, a few thousand light years)?


If the Pharaoh Tutankhamun had invested the equivalent of $1, and earned a real rate of return of 1%, today's value of that investment would be more than $350 trillion, which is a quantity larger than Credit Suisse's estimate of "global net worth".
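This one is easy to reproduce. A sketch (Tutankhamun died around 1323 BC; the year count and the flat "real 1%" are of course stylized assumptions):

```python
# $1 invested around Tutankhamun's death (~1323 BC), compounding at a
# 1% real annual return until 2019.
years = 2019 + 1323
value = 1.0 * 1.01 ** years
print(f"${value:,.0f}")
```

That lands in the hundreds of trillions of dollars, the same order as the entire world's net worth today.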


The calculation isn't so simple -- a lot of growth is efficiency growth: getting a lot more out of the resources you're using. Switching all energy production to nuclear, for example, would lower resource usage while drastically raising GDP.


But why should growth of the GDP growth rate be the target? I don't think anyone believes that GDP growth can accelerate every year, but you can reasonably assume that GDP itself can grow every year.


I think his point is that, for example, a flat 2% yearly GDP growth is a yearly increase in absolute growth due to compounding, and most economists absolutely do believe that's possible somewhat indefinitely (with significant historical basis for that belief).


> Is growing pie sustainable?

Humans are encountering the limits of connection and communication. This means that even if the pie grows in the aggregate, we're going to fracture.


Are you Mark Zuckerberg?


>Sorry, this site requires JavaScript to be enabled.

For a flat blog post? Like, seriously?


The OP's message falls flat when the site can't even be bothered to server-render.

Also, most offensive is the use of third-party JS hosting. Given the tiny size of the third-party libraries relative to the first-party code, just bundle them into the code served from your own site (which you're already bundling up anyway, SMH).

[from jsdelivr.net]

polyfill.min.js (31.8kb/96.8kb) (wire/resource)

url-polyfill.min.js (1.9kb/6.1kb)

react.production.min.js (4.7kb/12.3kb)

react-dom.production.min.js (36kb/116kb)

[from liaison.dev]

bundle.a7c7d78d57.immutable.js (108kb/375kb)

actual website code: https://github.com/liaisonjs/liaison/tree/master/website


I don't understand why it is wrong to use CDNs to distribute common libraries.

As for the fact that I used Liaison to build its own website, I agree that it might not have been the best choice. :)

For now, Liaison doesn't support server-side rendering, but it is something that could come eventually.


The actual text of the article (not including the table and image) is about 2700 characters long.


I’m glad that the JS world is shifting from SPAs to SSR. Using a SPA for a blog is just bad on multiple levels.


IMO SPAs aren't that bad, but loading large assets from third parties is.


examples?


welcome to the web 2.0

coming soon: the WebAssembly web 3.0, which (I fear) might feel a lot like Flash web pages from the early '00s. I hope I'm wrong.


Web 2.0 was like 15 years ago now. Web 2.0 was "supposed" to be about separation of style & content and gracefully degrading JavaScript.

I don't know what to call what we have now.


Probably the "Web App" or "Web SPA" era, since everything seems to need to be a single page app that exists mostly in the client.

There are some benefits, of course. Stable and standardized UI is finally available as a baseline for devs (much like early Windows/iOS software) and all roads are clearly leading toward a unified design model for phone apps/web/desktop. Bloat and needless complexity are the biggest issues at the moment, but all the frameworks seem to be (finally) focusing on becoming more snappy.


Webasm doesn't enable anything new that couldn't already be done right now.


but it will improve the ability to obfuscate webpages


I was replying to someone saying that webasm will bring flash style webpages.


I'd probably rather sift through a disassembler than obfuscated JS.


got to get the fuck out of the industry professionally at this point


I had to disable AdGuard to display it.


Life expectancy is going down every year. These are good times for a few privileged elite, everyone else is moving backwards.


The data tell a different story. I've offered it in another comment, but see increasing household income for every quintile here [1] and decreasing poverty here [2]. While income inequality is increasing, that doesn't mean quality of life is decreasing or "moving backwards."

[1] https://www.advisorperspectives.com/dshort/updates/2019/11/2...

[2] https://www3.nd.edu/~jsulliv4/2017%20Consumption%20Poverty%2...


Poor people are literally dying younger, and you're trying to argue that they are somehow better off? This is some Monty Burns level of delusion.


Great review, but the highlight for me is that Windows Task Manager view of CPU utilization. It's like looking at a spreadsheet:

https://images.anandtech.com/doci/15044/3970X%20Task%20Manag...


I remember Linus playing around with a Xeon Phi CPU with a few hundred threads. The task manager was all percent signs.

https://youtu.be/fBxtS9BpVWs?t=200

Looks like Microsoft has already got 1000+ cores on Windows: https://techcommunity.microsoft.com/t5/Windows-Kernel-Intern...

Can we get Bruce Dawson[0] one of those? I wonder how many more bugs he'll run into.

[0] https://randomascii.wordpress.com/


If somebody gives me a Threadripper, I promise to do some performance investigations. I keep meaning to update my fractal program (Fractal eXtreme) to support more than 64 threads, and that would give me a good excuse (processor group support is needed).

Anyone? Got a spare one lying around?


I would write to AMD marketing. They will likely loan you a machine, or perhaps even give you one if you give them tangible reasons.


I'm really curious about the machine it's running on. 896 physical cores is an odd number - 32 x 28, 16 x 56 or 8 x 112 are the likely combinations. The picture identifies it as a Xeon Platinum 8180 which is a 28C/56T CPU. Are there systems that support 32 Intel CPUs in one host? I thought quad socket was the practical limit these days.


Here's one, "HPE Superdome" https://h20195.www2.hpe.com/V2/getpdf.aspx/A00036491ENW.pdf

See the diagram on page 6, they have a custom routing chip to link up 8 boards over UPI.


It says right there: Xeon Phi 7210.

Knights Landing supports 4-way threading per core, so you get 256 threads, which is exactly what it shows in Task Manager under "logical processors".


I was talking about the Microsoft article, not the LTT video, which are using different CPUs.

The HP 32 socket chassis (8x4 socket boards) seems to be the answer.


> Can we get Bruce Dawson[0] one of those? I wonder how many more bugs he'll run into.

Oh yeah, especially because core affinities in Windows get all wonky once you go above 64 threads.


> Oh yeah, especially because core affinities in Windows get all wonky once you go above 64 threads.

Can you elaborate? I haven't noticed any particular "wonkiness" happening.


Thread affinities are tied to 64-bit numbers, so logical processors are lumped into groups of at most 64. A thread can only be assigned to a single group at a time, and by default all threads in a process are locked to one group.

https://bitsum.com/general/the-64-core-threshold-processor-g...

https://docs.microsoft.com/en-us/windows/win32/procthread/pr...
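The group arithmetic itself is simple to illustrate. A hypothetical sketch of the bookkeeping (mirroring how logical processors get packed into 64-bit group masks, per the links above; this is not a real Windows API):

```python
def group_affinity(cpu_index):
    """Map a logical-processor index to its (group, mask) pair, the way
    processor groups carve large core counts into 64-bit affinity masks."""
    group = cpu_index // 64
    mask = 1 << (cpu_index % 64)
    return group, mask

# Logical processor 200 on a 256-thread box: group 3, bit 8 of that group's mask.
print(group_affinity(200))  # (3, 256)
```

A thread whose desired affinity spans two groups simply can't express that in one mask; it has to pick a group, which is exactly the wonkiness that shows up above 64 threads.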


Well, each thread being schedulable on only 64 of the cores hasn't been a huge issue so far. Usually you want your threads to stay in the same NUMA region anyway, because cross-socket communication is expensive.

It is annoying if a processor group spans two NUMA regions, leaving just a few processors on the other side...


Huge issue for whom?

Hypervisor developers across all cloud providers are livid about this.

In practical terms, the baseline overhead of a Windows OS is rather high, so high-density compute throughput gets you more for your money.

i.e.: you pay a license cost per CPU, and at minimum 1 physical CPU core and 1 GiB of memory per Windows machine.


Well, I wouldn't advocate using Windows in such a setting...

I/O layer overhead in Windows is considerable. As any Windows kernel driver developer knows, passing IRPs (I/O request packets) through a long stack does not come for free. Not just drivers for filesystems, networking stacks, etc. and devices, but there are usually also filter drivers. IRPs go through the stack and then bubble back up.

Starting threads and especially processes is also rather sluggish. As is opening files.

There's no completely unified I/O API in Windows. You can't consider SOCKETs (partially userland objects!) as I/O subsystem HANDLEs in all scenarios and anonymous pipes for process stdin/stdout redirection are always synchronous (no "overlapped I/O" or IOCP possible).

For compute Windows is fine, all this overhead doesn't matter much. But I don't understand why some insist using Windows as a server.

But when someone pays me for making Windows dance, so be it. :-) You can usually work around most issues with some creativity and a lot of working hours.


The arguments I’ve heard are that I/O completion ports are less “brain dead” than epoll or select/poll, and that Visual Studio is a great IDE. Otherwise I’m not sure either.


IOCP is great, just annoying you can't use it with process stdin/stdout/stderr HANDLEs, at least if you do things "by the book". Thick driver sandwiches in Windows... not so great.

Visual Studio a great IDE... well, the debugger isn't amazing unless you're dealing with a self-contained bug (often find myself using windbg instead). Basic operations (typing, moving caret, changing tab, find, etc.) are slow at least on my Xeon gold workstation.


> I remember Linus playing around with a Xeon Phi CPU with a few hundred threads. The task manager was all percent signs.

The last Sun chips (Niagara / Ultrasparc Tx) also had pretty high count, IIRC they had 64 threads / socket, and were of course multi-socket systems. At 1.6GHz they were clocked pretty low for 2007 though.


If you think that's crazy, remember the CM-5 machines in Jurassic Park? A real '90s supercomputer.

Anyway, each of those lights was "on" when a core was utilized, and "off" when it was idle.

https://upload.wikimedia.org/wikipedia/commons/6/61/Frostbur...


For anyone else interested who, unlike me, doesn't want to spend a few seconds swearing at how useless Google's become, OP meant 'CM-5'.

https://en.wikipedia.org/wiki/FROSTBURG

https://en.wikipedia.org/wiki/Connection_Machine

Although this particular bout of verbalised obscenity did just make me change my default browser search engine to duck duck go as Google crossed out 'supercomputer' as missing from its query, which was annoying as it was 50% of my sodding query and, technically, the only correct bit. Google search is totally crap these days.


> For anyone else interested who, unlike me, doesn't want to spend a few seconds swearing at how useless Google's become, OP meant 'CM-5'.

Oh, sorry for the typo. I just fixed it. It used to say "CX-5", I must have been thinking about Mazda cars or something when I wrote the post...


Oh, absolutely no worries! In older, smarter days (less smart, or at least less presumptive, perhaps?), I'd like to think that Google would've been clever enough to correct the incorrect aspect of my search term. As I say, this was the final nudge to move my standard search over to something else :)


That was the case for the original CMs (1/2/200), but the CM-5 switched from the original single-bit processors to a smaller number of RISC SPARC cores.

While it retained the LEDs, I don't think they had the same 1:1 correspondence to cores as the previous models: the CM-1 and CM-2 had up to 65,536 cores (the CM-2 also had an FP coprocessor per 32 cores, for an additional 2,048 special-purpose cores), whereas the CM-5 had "only" 1,056 processors.


The CM-5 LEDs had different settings: on, disabled, and also a “random and pleasing” mode. So in one respect, modern RGB LED motherboards utilize trickle-down supercomputer technology in the form of... blinkenlights.


Well, it would be even more so if they laid out logical cores as columns and physical cores as rows; they missed a nice UI trick there!


A nicer UI trick would be to change how Windows Task Manager reports CPU usage on the "Details" tab, because with, e.g., 64 virtual cores, a single-threaded process's CPU utilization can only ever show 00, 01, or (assuming the display rounds up) 02, because the displayed value is

  sum of core % utilization
  -------------------------
       number of cores
In a system with hundreds of cores, (predominantly) single-threaded processes' CPU utilization will therefore appear to be identically zero.

The "Processes" tab improves on this slightly by adding a single decimal digit, but this display still becomes less and less useful as core count increases because its precision remains inversely proportional to the number of (virtual) cores in the system.
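The rounding problem is easy to demonstrate. A sketch assuming the integer display and the averaging formula above (truncation assumed; rounding up would only shift the result by one):

```python
def details_tab_percent(core_utilizations):
    """CPU% as the Details tab shows it: average utilization across all
    logical processors, truncated to a whole number."""
    return int(sum(core_utilizations) / len(core_utilizations))

# One thread pegging a single core at 100%:
print(details_tab_percent([100.0] + [0.0] * 63))   # 64 cores  -> 1
print(details_tab_percent([100.0] + [0.0] * 255))  # 256 cores -> 0
```

On a 256-logical-processor machine, a fully busy single-threaded process displays as 0% CPU.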


Chrome changed their task manager to make the CPU percentages be relative to a single core instead of relative to the total CPU power available. This makes the numbers more comparable (100% on one machine is roughly the same as 100% on another, regardless of core count) and it avoids the problem of increasingly tiny numbers. Microsoft should follow that lead.

This change means that Chrome's task manager can show CPU usage percentages that are higher than 100%, but that is fine. The percentage is simply the percentage of a logical processor that is being used and 200% means that two logical processors are being used. Simple, easy to understand and explain.


macOS's Activity Monitor, and the task manager on Ubuntu (forgot its name, something like System Monitor maybe?) when in “Solaris mode” do sum of core % utilisation without dividing by number of cores. So a single core at 100% shows up as 100%, four cores at 100% shows up as 400%.


For a weird trick, change the decimal symbol and the digit grouping symbol to the same value in the Region settings in the old Control Panel. Then everything always uses 0% CPU :)


Works with the current 1:1 assumptions, but the early Windows performance problems with Ryzen's NUMA design show what happens when those assumptions are wrong. Their design, while less cool, can handle pretty much any configuration.


The core explosion AMD has started has quickly made current CPU-monitor UIs cease to even make sense. It's great!


Can't wait for a scrollbar in htop...

[EDIT: 256 threads in htop on ARM] https://www.servethehome.com/wp-content/uploads/2018/05/Cavi...


Check out winocm's view of a 112-core server: https://twitter.com/winocm/status/1183533208605990913


This is honestly my favorite part of my new 3700x!


What a nice visual summary of why most people do not need this many cores.


These are not mainstream CPUs; people who do rendering and run server farms for a living will find this level of parallel computing a godsend.


And yet AnandTech writes articles as though "home users" and "gamers" have a use for these cores, despite benchmarks being the most demanding workloads those users will ever run on these machines.

In the "HEDT" space, the media coverage and forum chitchat are mostly aimed at users who buy these chips for decorative purposes, not work. People buying for real work need realistic measurements on realistic workloads.


Yeah, benchmarking the games is a little embarrassing. Outside of Twitch streamers who do software encoding on-device (which is a minority of a minority), almost everyone interested in game performance would be better served by a different market segment.

I guess it's driven by the types of hardware they normally review - their audience is interested in game benchmarks, so they might as well report them.


It’s not so unreasonable. I imagine there are plenty of users with HEDT workloads who also want to play games on the side, and want some assurance that their HEDT box can do both.


I mean, NVENC has already made a huge difference for streamers using Nvidia cards; even with games gradually moving to support more cores, you don't really need more than a 'consumer' 6C/12T or 8C/16T if you have an Nvidia card.


Prior to the very latest generation of Nvidia GPUs, the quality of NVENC was significantly worse than software x264. This, along with the small but real performance hit, is why major streamers still use a dual-PC setup with capture cards.

If you're just streaming casually this doesn't matter at all. If this is your day job, though, you probably want all the quality & control you can get, and NVENC isn't quite there.


Where I live people give their phones and unlock code to others to drive under their name. They usually do this for drivers who can't pass a background check or even don't have a driver's license, but still need money.

The legit driver almost always gets a cut for this, free money on your vacation days or while you're working other, better jobs.


Back in 2015-17 or so, yes: you'd be immediately dogpiled here, on Reddit, and on most other sites for daring to criticize Google. I was trying to warn people about them and was always banned or downvoted into some shadowban realm immediately.

Google isn't the actual problem, though; advertising is. The backlash needs to be primarily about ads, not the companies that just lie repeatedly in order to bend all tech infrastructure toward the needs of advertisers.


We've never banned people for "daring to criticize $bigco". That's half of HN. If we banned you, it was likely because there was something else abusive about your posts.


Ads are the worst, but aren't the only issue. Apple and (older?) Microsoft are problematic too without being advertising businesses.

