Absolutely this. And when you look through the bug history of those low-level APIs there's a lot of evidence.
That said, the other big difference with Haskell is the low-level API actually provides functionality to solve the exact problem of an async exception being raised anywhere via `mask`. It's still hard to use correctly but at least it's possible.
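To make that concrete, here's a minimal sketch of the pattern `mask` enables (essentially the shape underlying `bracket` and `withMVar`; the `withLock` helper is hypothetical, not from any particular codebase):

```haskell
import Control.Concurrent.MVar (MVar, newMVar, takeMVar, putMVar)
import Control.Exception (mask, onException)

-- Take a lock, run an action, and guarantee the lock is released even if
-- an async exception arrives. Async exceptions are only deliverable inside
-- `restore`, where the `onException` handler is already armed, so the lock
-- can never be leaked.
withLock :: MVar () -> IO a -> IO a
withLock lock action = mask $ \restore -> do
  takeMVar lock                                      -- acquire
  r <- restore action `onException` putMVar lock ()  -- run; release on failure
  putMVar lock ()                                    -- release
  return r
```

Getting the interruptibility details right (e.g. `takeMVar` is still interruptible while blocked) is where it remains hard, but at least the primitive exists.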
It's hard to build systems with hard real-time requirements with GHC Haskell.
The vast majority of systems people build do not have hard real-time requirements. People have built video games, real-time audio, and all sorts of other soft-real-time systems with Haskell.
No, they aren’t soft real-time systems either; most applications in general don’t satisfy soft real-time requirements.
Most people hack around it, but doing so in Haskell requires rewriting your pure functional code into a form that strongly resembles procedural languages like C. This defeats the purpose of a lazy, pure functional programming language.
If most applications don't satisfy soft real time requirements either, then maybe those requirements aren't really important for most cases.
There's only one industrial Haskell codebase I've worked on, and although parts of it were very procedural, that was probably less than half the codebase, even including the IO-bound code. Sure it's one way to use the language, but it's certainly not the only one.
> If most applications don't satisfy soft real time requirements either, then maybe those requirements aren't really important for most cases.
Most applications are buggy and crappy. Sure, if you’re churning out minimally passable garbage for a paycheck, then don’t worry about correctness or timing requirements and pray for the best. Cf. the Slack desktop client. If you are producing something that necessarily has a higher quality bar, then these things matter, e.g. an aircraft control interface. Haskell isn’t up to the challenge (imagine indefinitely waiting, without feedback, for a long-lived, heavily nested thunk to evaluate when you press “land”. Ouch.)
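The thunk problem above can be shown in miniature (a toy sketch; the names are mine, not from the thread):

```haskell
import Data.List (foldl')

-- Lazy `foldl` builds a chain of unevaluated thunks; nothing is computed
-- until the final result is demanded, at which point the whole chain is
-- forced at once: an unpredictable pause, and possibly a stack overflow.
-- The strict `foldl'` forces the accumulator at each step, keeping the
-- latency of any single demand bounded.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- thunk chain, forced only on demand
strictSum = foldl' (+) 0   -- accumulator evaluated as it goes
```

Scale that chain up across a long-lived program and "press a button, wait for an unbounded pile of deferred work" is exactly what you get.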
I think you're reading the description incorrectly. It is not meant to be a specific instance that may happen differently for other instances. "You" is an abstraction over all players of the game show, there's an implicit "forall you:" in front of the sentence.
I agree that this is not as precisely stated as it could be. And unfortunately there's a type of logic puzzle that relies on people understanding these implicit quantifiers and being prepared to ignore them, which sometimes makes it hard to determine if they're supposed to be assumed present or not. But the most common interpretation is to include it.
> I think you're reading the description incorrectly. It is not meant to be a specific instance that may happen differently for other instances. "You" is an abstraction over all players of the game show, there's an implicit "forall you:" in front of the sentence.
But the problem described a specific instance. You are reading it wrong. Right now you are just digging deeper into a hole.
If the problem were actually told as you describe, then you would be right, but it isn't. It just describes a single instance: you are at the show, the host opens a door, what do you do?
You completely missed the point of my comment. The wording describes a specific instance. And additionally, the rhetorical convention implies that there is no instance of the problem for which the description doesn't hold. This part is unwritten; the reader is meant to apply it from context.
It seems like this is why there's a big disagreement about this particular write-up. Some people are applying this additional constraint, and others (yourself) aren't.
If you counter that this constraint isn't written, and therefore it shouldn't apply, I'm sorry but you're wrong. The author clearly intended it.
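The "forall" reading can be made concrete by enumerating every instance of the game (a toy sketch, assuming the standard rules: the host always opens a goat door and always offers the switch):

```haskell
-- Enumerate all 9 equally likely (car position, initial pick) instances.
-- If your pick is the car, staying wins; otherwise the host's reveal
-- leaves the car behind the one remaining door, so switching wins.
outcomes :: [(Bool, Bool)]          -- (staying wins, switching wins)
outcomes =
  [ (pick == car, pick /= car)
  | car  <- [1 .. 3 :: Int]         -- door hiding the car
  , pick <- [1 .. 3]                -- the player's initial choice
  ]
-- Switching wins in 6 of the 9 instances (2/3); staying wins in 3 (1/3).
```

Under the single-instance reading, by contrast, none of this enumeration is licensed, which is exactly where the disagreement lives.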
Have you seen BNFC (http://bnfc.digitalgrammars.com/)? I've used it with the Haskell bindings and really appreciated it. Parser, pretty-printer, and AST skeleton all auto-generated. I've only used it with Haskell though, can't speak to the other languages.
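For anyone who hasn't seen it, the input is an LBNF grammar; a tiny hypothetical example (just to show the flavour) from which BNFC generates the parser, pretty-printer, and AST:

```
-- Arithmetic expressions, with precedence via indexed categories.
EAdd.  Exp  ::= Exp  "+" Exp1 ;
EMul.  Exp1 ::= Exp1 "*" Exp2 ;
EInt.  Exp2 ::= Integer ;
coercions Exp 2 ;
```

The label on each rule (`EAdd`, etc.) becomes an AST constructor, and `coercions` generates the parenthesisation rules between precedence levels.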
You're missing the key piece: Toys R Us didn't go into debt for any good business reason, it was bought by a private equity firm and loaded with debt it didn't need. This is how private equity works:
1. A PE firm uses a combination of other people's money (limited partners a.k.a investors) and debt to buy a company.
2. The PE transfers the debt to the company's books. This way, if the company goes bankrupt the PE fund isn't liable for that debt.
3. The PE firm charges millions of dollars in fees for providing management services. This way the PE firm makes money regardless what happens to the business.
Toys R Us was profitable before it was bought and saddled with debt that was essentially used to finance its own purchase.
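The three steps above can be sketched with made-up round numbers (every figure here is invented for illustration, not taken from the actual deal):

```haskell
-- Hypothetical LBO arithmetic; all figures are invented.
purchasePrice, sponsorEquity, debtOnCompany, annualFees :: Double
purchasePrice = 5.0e9                         -- price paid for the target
sponsorEquity = 1.0e9                         -- fund capital at risk (step 1)
debtOnCompany = purchasePrice - sponsorEquity -- 4.0e9 on the target's books (step 2)
annualFees    = 5.0e7                         -- yearly fees to the PE firm (step 3)
```

Even if the equity is wiped out in bankruptcy, years of fees offset part of the loss, and most of that equity was the limited partners' money to begin with.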
“In 2004, after years of flat sales and falling profits, the Toys R Us board of directors put the company up for sale” [1]. Then, over “the next five years, sales at Amazon quadrupled to $34 billion”.
Toys ‘R’ Us was bought as meagre profits fell and right before Amazon went for them. Blaming this outcome on the debt load is inaccurate.
> To compete, Toys R Us would have had to invest significantly in its website and stores. But the retailer was using most of its available cash to pay back its debt.
Yes, profits were falling crazily, but the company was still profitable. Without the debt load, they could've spent some time losing money while they pivot to a new business strategy. The debt load really prevented them from trying anything except surviving as long as they could.
> they could've spent some time losing money while they pivot to a new business strategy
There is zero evidence, in the history of Toys 'R' Us or their failed competitors, that another strategy would have worked. More likely, it would have limped along until the next recession. In any case, I see no reason to blame capital structure when a simpler explanation exists: Amazon taught people to buy toys online.
I don't understand why people keep repeating this story. Bain Capital had to provide huge amounts of collateral (upwards of a billion dollars), all of which was lost in the bankruptcy. Do people really believe that PE firms will write off a loss of hundreds of millions in order to make a few million in management fees?
The PE fund's capital is partly other people's money too. In particular, fund limited partners (the fund investors) put up most of the initial money. The PE firm itself also puts in money but not nearly that much. This makes it easier for the fund managers to make a profit themselves quickly, even if the fund investors end up losing.
I'm familiar with this narrative, I'm saying that the evidence in the article, though the article is written by someone sympathetic to this point of view, contradicts it. The PE firms, despite their huge fees, lost a boatload of money. No one wanted to buy the business out of bankruptcy for more than it was worth piecemeal. There are easily identifiable reasons (Walmart, Amazon) why the competitive landscape is much tougher than when it was a profitable business. It sounds to me like the PE firms did a lot of harm by keeping the company from bankruptcy for so long.
In general, if a business is clearly profitable it should continue no matter how thoroughly you screw up the capital structure: the worst case is a bankruptcy in which the shareholders get nothing, the debt holders get very little, and a new capital structure gets created. When this doesn't happen, it's because the business is worthless or nearly so.
The financials aren't publicly available, but I'd expect PE firms still made money here. The funds probably lost money, but funds are an investment vehicle separate from the managing entity.
I disagree that profitable businesses can continue no matter the capital structure. You're assuming that there's a liquid market for businesses, or parts of them. For a huge retailer that's almost certainly not true. Additionally, the current owners have to actually want the business to continue. I would believe there's more value to a short-term owner in liquidating assets than accruing profits over a relatively short timeframe.
Also, a similar "technique" (an LBO, to be exact) was used to break giant companies into smaller ones, effectively shutting down monopolies at the expense of a small number of workers, often making big gains for investors. After the failure of the RJR Nabisco LBO, investors think twice about this kind of venture nowadays. If the Toys 'R' Us failure costs the investors a large amount of money, maybe this won't happen anymore with big businesses like this.
Anyway, if you take part in a venture like this, you'd better be the law firm or the executives, especially if your target is a big company.
The question was more: why does the law allow this? They are using one company to benefit another, then leaving it in debt without financial liability. Seems like a gaping loophole in the law there.
Some resources are scarce, but food in the U.S. is not. We have allocation and distribution issues, with political choices as a major factor, not supply constraints.
As far as I recall, you can download code to your desktop but not your laptop, unless you're doing something like iOS development that would be difficult to do over, say, `ssh -Y`.
I'm not a googler, but from my readings about their dev culture: it wouldn't be practical, as such, for a lot of what that dev server is really used for. Much development, however, is API-driven and based on isolated logic. Those individual components or spike solutions would make perfect sense on the laptop.
More importantly: with the kinds of operations Google is running these days, you'd really expect lots of the APIs you'd need for product development to be readily accessible.
So, yeah, you'd be hacking on your laptop separately, and maybe doing some kinds of work. They run a monorepo, though, so it'd be of a somewhat peripheral nature. Proper dev is what the dev station is for.
You're claiming that 40% of their total earnings are lost to federal tax? In this example, what types and how much income are you proposing? Doesn't seem realistic to me.
If by "anyone" you mean "my tech savvy friends at google" then yes. I've seen many friends end up on a google-hosted AMP version of a webpage, unable to complete whatever ticket purchase, etc., they needed, because whatever auto-AMP-ifier the origin site uses produced half-broken pages. These users have no idea what AMP is or why the page doesn't work (or that they could have reached the origin site directly if they clicked around on several unlabeled, half-invisible icons in the fake address bar on top). In fact, they don't even realize they aren't browsing the origin site. They just give up.
Conceptually this isn't all that different from having a website that works in browser A but not B due to insufficient testing. Why did they not fix it?
Because how many website operators continuously google their own web site on mobile devices to click through and experience their google-hosted AMP editions? I would wager it's a fairly small percentage of the number of websites running an auto-AMP-ifying wordpress plugin, for example.
I've seen plenty of badly designed mobile websites over the years and these sites seem to have turned out okay - most mobile browsers keep the option of "Request Desktop Site" very accessible for a good reason.
The worst case I've seen was a site on which every page crashed Mobile Safari without fail, regardless of which version you asked for. It was eventually fixed, but I never figured out why. If the sites are just running some script without checking, then the admins have failed in their duty.
Well, I suggest you start giving AMP content that isn't in Google's cache the same preferred treatment in search results that you give cached content.
I'm just going to file another complaint to the EU Antitrust committee otherwise, as this is a simple and clear violation. (Although I doubt my own complaint would be very relevant — all large publishers already have filed such complaints, and Google will be fined for it).
My pages actually increased their loadtime tenfold when I tried them with AMP — they're tested on a HUAWEI IDEOS X3 on 64kbps GPRS. AMP increases load time to over a minute. (On a modern phone, with a modern connection, my pages are obviously also faster — I'm also testing with a Nexus 5X on 100Mbps 4G)
I have the choice between a massively worse user experience, or worse search ranking. And that's a choice that's just not acceptable to me.