> Every time I listed a new product for a different TV remote, my item would be flagged, but then I would go in and make a small edit, and that seemed to trigger some sort of review, and everything would be great.
They may have a filter for people who exceed a certain number of flags?
That’s the first thing that jumped out at me. He was flagged multiple times, tweaked the listing, and passed. But it may not be a free pass every time: eventually the flags added up to something, a ban. So it may not be Samsung per se doing anything special, just him hitting some flag limit.
Throughout the vast arc of human existence, you worked or you died. Nature does not hand you food, shelter, clothing, and flush toilets for free. Most of us, if dropped naked into the wilderness, will die within 24 hours.
Bone evidence from colonial America shows the colonists worked like dogs and died young. Bone evidence from the Indians shows repeated famines.
Generally speaking, the clan you worked to survive with didn't cancel you because a seashell-currency accountant saw some tangential benefit for a moment. That impulse might just be their own survival need to look busy or be a performative hard-ass, to avoid the same thing.
The loyalty you are dismissing as unnatural was painstakingly discovered by our genes, precisely because it maximized survival in our natural environment.
In other arid areas, people use terracing on hills so the water runoff is slowed and the water can soak into the aquifers. Also, dikes are built around fields to hold the water and also let it soak into the ground.
> In other arid areas, people use terracing on hills so the water runoff is slowed and the water can soak into the aquifers. Also, dikes are built around fields to hold the water and also let it soak into the ground.
> Are these done in California?
People terrace where the only arable land is in hills or mountains. The vast majority of California's farmland is flat as a board.
California's central valley also has one of the most massive systems of water control (aqueducts, levees, etc) in the world.
The problem with water and ag in California is caused by massive disparities in water rights, which make water extremely cheap for some users and expensive for others.
You need some outflow to carry away salts. Otherwise, the water just evaporates and leaves the salt there, and eventually the land becomes unable to grow crops. This has historically been a serious problem in Mesopotamia, for example.
There has been quite a lot of investment in spreading grounds, aquifer recharging, and stormwater capture. Last year, LA county recaptured enough water to meet the yearly water needs of 2.4 million people.
Does this mean 2.4 million people's personal drinking and cleaning use in a typical residential area?
Or are they including all the industrial water use those people contribute to by existing? That would then include industrial agriculture and all the industrial water use that enables modern life.
I suspect they mean only the first one, which is at least misleading, and at worst a lie intended to deceive.
I've always thought the 8087 was a marvelous bit of engineering. I never understood why it didn't get much respect in the software business.
For example, when Microsoft was making Win64, I caught wind that they were not going to save the x87 state during a context switch, which would have made use of the x87 impractical with Win64. I got upset about that, and contacted Microsoft and convinced them to support it.
But the deprecation of the x87 continued, as Microsoft C did not provide an 80 bit real type.
Back in the late 80's, Zortech C/C++ was the first compiler to fully implement NaN in the C math library.
I’d agree that the engineering was brilliant (but 68882 gang represent!). Its ISA was very un-x86-like, though, as it was basically an RPN calculator: x86 had devs manipulating registers, while the x87 had them pushing operands and running ops that implicitly popped them and pushed the result back onto the stack.
That’s not better or worse, just different. However, I can imagine devs of the days saying hey, uh, Intel, can we do math the same way we do everything else? (Which TBH is how you’d end up with an opcode for a hardware-accelerated bubble sort or something, because Intel sure does love them some baroque ISAs.)
> Its ISA was so un-x86-like, though, as it was basically an RPN calculator
Yeah I remember when I first came across floating point stuff when trying to reverse engineer some assembly - I wasn't expecting something stack-based.
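For anyone who hasn't run into it, here is a rough hand-written sketch of that stack discipline (illustrative only, not from any real program; `a`, `b`, `c`, and `result` are assumed memory operands), computing `(a + b) * c` on the x87:

```asm
fld   a          ; push a              -> st0 = a
fld   b          ; push b              -> st0 = b, st1 = a
faddp st1, st0   ; st1 += st0, pop     -> st0 = a + b
fld   c          ; push c              -> st0 = c, st1 = a + b
fmulp st1, st0   ; st1 *= st0, pop     -> st0 = (a + b) * c
fstp  result     ; store st0, pop      -> stack empty
```

Every operation works against an eight-slot register stack rather than named flat registers, which is exactly what makes it feel like an RPN calculator.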
Eh, as far as compiler backends go, the RPN stack was worse.
I thought the x86-64 instruction set was a giant kludge-fest, so I was looking forward to implementing the AArch64 code generator. It turns out to be just as kludgy, but at right angles. For example, all the wacky ways of simply loading a constant into a register!
Excel needed the x87 as well, as it cared about maintaining the 80-bit precision in some places to get exactly the same recalc results. So Microsoft most likely would have fixed it eventually.
What do you mean by respect? Here's a layperson's perspective, at least.
Up through the 486 (with its built-in x87), the x87 was always a niche product. You had to know about it, need it, buy it, and install it, over and above buying a PC in the first place. So by definition it was relegated to the periphery of the industry. Most people didn't even know the x87 was a possibility. (I distinctly remember a PC World article having to explain why there was an empty socket next to the 8088 socket in the IBM PC.)
However, in the periphery where it mattered, it gained acceptance within a matter of a few years of being available. Lotus 1-2-3, AutoCAD, and many compilers (including yours, IIRC) had support for x87 early on. I would argue that this is one of the better examples of marginal hardware being appropriately supported.
The other argument I'd make is that (thanks to William Kahan), the 8087 was the first real attempt at IEEE-754 support in hardware. Given that IEEE-754 is still the standard, I'd suggest that the x87's place in history is secure. While we may not be executing x87 opcodes, our floating-point data is still in a format first used in the x87. (Not the 80-bit type, but do we really care? If the 80-bit type were truly important, I'd have thought that in the intervening 45 years there'd have been a material attempt to bring it back. Instead, what we have is a push toward narrower floating-point types for GPGPU, etc.... fp8 and fp16, sure... fp80, not so much.)
The disinterest programmers have in using 80 bit arithmetic.
A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors. More bits would put off the cliff where the results turned into gibberish.
I know there are techniques to minimize this problem. But they aren't simple or obvious. It's easier to go to higher precision. After all, you have the chip in your computer.
Yes, Kahan's argument in favor of the 80-bit precision has always been that it allows ordinary programmers, like the expected users of the IBM PC, who lack the knowledge and experience of a numerical analyst, to write programs that perform floating-point computations without subtle bugs caused by unexpected rounding behavior.
> The disinterest programmers have in using 80 bit arithmetic.
I don't know, other than to say there's often a tendency in this industry to overlook the better in the name of the standard. 80-bit probably didn't offer enough marginal value to enough people to be worth the investment and complexity. I also wonder how much of an impact there is from the fact that you can't align 80-bit quantities on 64-bit boundaries. Not to mention that memory bandwidth costs are 25% higher when moving 80-bit quantities instead of 64-bit ones, and floating-point work is very often bandwidth-constrained. There's more precision in 80-bit, but it's not free, and as you point out, there are techniques for managing the lack of precision.
> A bit of background - I wrote my one numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors.
This sort of thing shows up in even the most prosaic places, of course:
The 80-bit format has been included in the IEEE standard since the beginning.
The IEEE standard included almost all of what the Intel 8087 had implemented, the main exception being the projective extension of the real number line. Because of this deviation in the standard, the Intel 80387 also dropped the feature.
Where you are right is that most other implementers of the standard have chosen to not provide this extended precision format, due to the higher cost in die area, power consumption and memory usage, the latter being exacerbated by the alignment issue. The same was true for Intel when defining SSE, SSE2 and later ISA extensions. The main cost issue is the superlinear growth of the multiplier size with precision, a 64-bit multiplier is not a little bigger than a 53-bit multiplier, but much bigger.
Nowadays, the FP arithmetic standard also includes 128-bit floating-point numbers, which are preferable to 80-bit numbers and do not have alignment problems. However, few processors implement this format in hardware, and on the processors where it would need to be implemented in a software library one can obtain a higher performance by using double-double precision numbers, instead of quadruple precision numbers (unless there is a risk of overflow/underflow in intermediate results, when using the range of double-precision exponents).
In general, on the most popular CPUs, e.g. x86-64 based or Aarch64 based, one should use a double-double precision library for all the arithmetic computations where the traditional 80-bit Intel 8087 format would have been appropriate.