It's not a "massive blow" at all. Consumers will only vaguely remember this in a month. Netflix got a lot of new signups and got to test out their streaming infrastructure to figure out what needs work.
The fight itself was lame which worked in their favor. No one really cared about not being able to see every second of the "action". It's not like it was an NBA game that came down to the last second.
The challenger to these will solve for a different problem. Not every transaction needs complex fraud detection or the ability for the customer to do chargebacks.
For a 3% discount, would customers agree to use something that worked just like cash, where the transfer was instant and couldn't be undone? Then you don't have to worry about fraud, chargebacks, etc.
You are missing the opposite side of the fraud picture: where it's not the business scamming you, but someone taking your credentials and spending up to the limit at a store that does no chargebacks. This is, if anything, the larger share of the fraud losses for the Stripes of the world. Fake businesses use the cards either to test whether the creds are good, or to charge cards the owners obtained from some other malicious actor.
So it's not that I get 3% off by not supporting chargebacks, but whether I want to keep a dollar under a payment system that supports someone emptying me out without recourse... and the answer is often no.
Or someone further abusing your weak password on a site and then racking up a ton of charges on a product that they can launder into money for themselves at any ratio.
There's an issue that you're not addressing: what happens when someone who isn't me spends my money? I think people would be happy for the theoretical 3% discount until their account is drained and sent to North Korea with no recourse.
It is fantasy to think they'd get a 3% discount. The goods in stores that take only cash do not tend to be cheaper than those that do.
They know what people are willing to pay and will charge that price. If they see people are willing to pay $99 with a credit card, they'll assume people are willing to pay that with cash too.
I think the issue here is who is paying the fee and where the fee is surfaced. A free market solution would work here, but it requires some regulation to create the required transparency.
Everyone pays their own credit card fee as a line item, and merchants are required to print it on the receipt. If customers actually had to pay their own fees on each swipe, you'd see a lot fewer people reaching for the Platinum card and more for the no-frills local bank credit card. You'd also see immense downward market pressure on swipe fees, as card issuers would now have to compete against each other.
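To make the mechanics concrete, here's a toy sketch of what surfacing the swipe fee as a receipt line item could look like; the fee rates and card names are entirely hypothetical, just for illustration:

    # Toy sketch: surface each card's swipe fee as a line item on the receipt.
    # All fee rates here are made up for illustration.
    SWIPE_FEES = {
        "platinum_rewards": 0.030,  # premium rewards card: high interchange
        "local_bank_basic": 0.008,  # no-frills local bank card
        "debit":            0.005,
    }

    def receipt(subtotal: float, card: str) -> str:
        fee = subtotal * SWIPE_FEES[card]
        return "\n".join([
            f"Subtotal:  ${subtotal:7.2f}",
            f"Card fee:  ${fee:7.2f}  ({card})",
            f"Total:     ${subtotal + fee:7.2f}",
        ])

    print(receipt(100.00, "platinum_rewards"))  # fee: $3.00
    print(receipt(100.00, "local_bank_basic"))  # fee: $0.80

With the fee visible per swipe, the $3.00 vs. $0.80 difference becomes a price signal the customer actually sees.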
Technically the merchant is paying the fee, and they are perhaps passing some or all of it on to you.
The reason merchants might not pass it all to you is that they get a lot more sales volume when they support credit cards, so they can still be more profitable while eating some of those fees.
I know I'm going to get hated for saying this, but the businesses that charge extra for credit card use under $10 are trying to extract as much out of you as they can; they're aiming to get the best of both worlds. The price of their goods is still set on the assumption that you'll pay with a card.
At the end of the day a business has several costs: rent, shipping, utilities, etc. When these go up, so do the prices of goods. Credit card fees are no different in that regard. If they hated them that much, they wouldn't support credit card payments. They do support them because they know it brings in more revenue than going without, and it easily pays for itself and more.
The issue is there's a huge disparity in the fees for certain payment methods.
Some cards cost merchants much more than others, but merchants are contractually forbidden from differentiating their prices based on that. It's anticompetitive. Lots of "buy now, pay later" schemes work similarly: when Afterpay was (or is) a big thing, it charged 7% and forbade merchants from including that cost in their prices.
If the consumer had to bear the cost of their payment choice, no problem, but the reality is that consumers with low-fee payment methods pay slightly more than they should for everything, and those with high-fee methods pay less than they should for everything.
The reason merchants don't pass fees onto credit card customers only is that the credit card network prohibits them from doing so. If they were to charge a credit card fee, they'd get banned from processing credit cards at all.
> The goods in stores that take only cash do not tend to be cheaper than those that do.
In NYC they most definitely do. A lot of the corner stores will charge you less with cash. I'm not sure whether it's the card fees or that they're keeping the sale off the books, but something that might cost me $18.50 by card, I'll pay $18 for in cash.
When I wrote that comment I knew someone would come out and use New York City as a counterexample.
The reality is that, except for a few of the really major cities, those types of stores are usually more expensive than their larger counterparts in virtually every other city in the US.
In my city I'm not going to get cheaper groceries by going to the smaller stores. They are more expensive regardless of whether they support credit cards or not. They may be superior in certain other respects, but price is not one of them.
My guess is the opposite may be true only in places where owning a car is expensive or inconvenient.
In Poland, the default for computer shops in the 2000s-2010s was to offer a 2% discount for paying in cash. (The displayed prices assumed cash, so if you paid by card, you'd pay more.)
I didn't see this anywhere else though. It probably made sense for computer shops because most transactions one would do there would be sporadic, big, and planned.
(Since then, Mastercard/Visa fees have gone down to 0.2-0.3% due to EU rules, so those discounts are probably less popular now.)
In the US offering different prices when paying by cash vs card was a violation of the agreement with Visa, as is putting a minimum price threshold for card usage.
It's still fairly widespread though, and occasionally makes the news. Might explain why you didn't see it often.
I believe the Visa merchant agreement never forbade cash discounts, only credit card surcharges. I'm not sure, but the current rules are different due to a legal settlement.
In the US, not only does Visa now allow cash discounts and minimum price thresholds up to US$10, but they also allow, in most states, credit card surcharges (sometimes subject to specific state-law legal requirements).
Visa still officially disallows minimum price thresholds outside the US and certain related territories like Guam, and credit card surcharges outside the US - but I nevertheless see them plenty often here in Germany in small shops. I think the permission to offer cash discounts is global.
And how would that work accounting wise? Would they just claim that a bunch of PCs "fall off" a truck?
I'm not sure that subjecting everyone to poorly regulated (even in the EU it's far from ideal) monopolies/oligopolies that are legally entitled to literally tax every single transaction in the economy (in addition to the complete loss of anonymity and all the implications of that) isn't too high a price to pay for some reduction in tax fraud...
“Shrinkage” is the generic term I have heard for stock losses of all kinds in retail and distribution channels.
In many jurisdictions, cash payments can allow the retailer to avoid remitting sales tax or VAT, as well as to mark stock shrinkage as a loss for their own tax purposes.
Countering this would require very careful auditing of electronic till records and paper receipt processes, which are in most cases trivial to evade if well prepared.
And you can’t always be sure that the shrinkage - without the cash - is reported to the manager of the retailer by the person on the till, especially if an unofficial handwritten receipt is provided by the cashier.
I recall seeing a situation involving a very large champagne purchase on New Year’s Eve in cash for 25% off and a “till receipt problem”.
This is the purpose of Zelle, Venmo, money wires, and checks. But there are many problems they don’t solve, that customers and sellers prefer to be solved and are willing to pay for.
I would use my debit card even if it behaved exactly like cash, i.e., once the recipient has the money, my only way of getting it back is to sue them or call the police.
Obviously any electronic payment system needs to be secure internally but society lasted a long time and made fine progress when having your wallet stolen meant losing your money.
It would be fine to require a person to charge their debit card with a finite amount, rather than have it funded up to the limit of the supporting account; that would solve the last problem relative to cash.
I understand that Europe is more secure with chip+pin, but in the US, debit cards do exactly what you describe. If fraud happens, you are out money until it is resolved.
The key difference from cash, in the US, is the ability to abuse cards at a later date without the physical card. For someone to steal your wallet, they have to be colocated with you and can only steal as much as you're walking around with.
As long as debit cards have a magnetic stripe and their full number printed on them, and that information alone is enough to charge the card, this problem remains.
I don’t believe SCA is enforced by the bank; it’s voluntary for the merchant. It acts as a liability shift, but it won’t save you from someone ignoring it and emptying your account (temporarily, until the chargeback goes through). I don’t think any bank offers an option to “allow SCA-only transactions”, and I don’t think it would even be possible (I’m not even sure there is any token/session identifier tying the SCA request to the actual subsequent transaction).
When adding a card to a taxi app for example I get SCA prompt for a zero amount, but then they can charge me for any amount without subsequent SCA flows.
Presumably those subsequent transactions wouldn’t have a liability shift to the issuer but it still means that they can at least temporarily steal all your money until your chargeback claim goes through.
The whole concept of “card number” is rotten. What’s needed is an OAuth2-type system where every payment redirects to the bank (an actual redirect, not a stupid hacky iframe like SCA/3DSecure uses), where you can see the merchant and set the max amount (and whether it’s one-off or recurring). The bank records that and keeps a list of authorized merchants so you can revoke them at any time. The merchant then must use this token to pull money, and can't pull more than what the token allows - just like your usual OAuth2 scopes.
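To illustrate the idea (this is not any real bank or card-network API; every name, field, and flow here is hypothetical), a minimal sketch of such a scoped, revocable payment grant might look like:

    # Minimal sketch of an OAuth2-style payment grant. Purely illustrative;
    # all names and fields are invented, not a real banking API.
    from dataclasses import dataclass
    import secrets

    @dataclass
    class PaymentGrant:
        merchant_id: str
        max_amount: float   # per-charge cap the customer set at the bank
        recurring: bool     # one-off vs. subscription-style pulls
        revoked: bool = False

    grants: dict[str, PaymentGrant] = {}  # the bank's list of authorized merchants

    def authorize(merchant_id: str, max_amount: float, recurring: bool) -> str:
        """Customer approves this on the bank's own site (a real redirect,
        not an iframe); the merchant only ever sees the opaque token."""
        token = secrets.token_urlsafe(16)
        grants[token] = PaymentGrant(merchant_id, max_amount, recurring)
        return token

    def charge(token: str, merchant_id: str, amount: float) -> bool:
        g = grants.get(token)
        if g is None or g.revoked or g.merchant_id != merchant_id:
            return False      # unknown token, revoked, or wrong merchant
        if amount > g.max_amount:
            return False      # pull exceeds the customer-set scope
        if not g.recurring:
            g.revoked = True  # one-off grants die after the first use
        return True

    t = authorize("acme-taxi", max_amount=50.0, recurring=True)
    assert charge(t, "acme-taxi", 30.0)       # within scope: approved
    assert not charge(t, "acme-taxi", 500.0)  # over the cap: declined

The key property is that the merchant never holds anything like a card number; it holds a token scoped to itself, capped, and revocable from the bank's side at any time.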
This is not right at all (it's mandatory for all banks and merchants in the EEA), although you're correct that SCA still has loopholes (like a US merchant... just trying, although a bank could simply mandate 3DS to solve that).
How do you explain the example I gave where the taxi app only has to SCA me once and not upon every transaction? This is in the EU.
What I suspect is that the "mandatory" bit is by law (and the law has flexibility, which covers this taxi app scenario) but there is no technical solution to make it mandatory, thus a non-compliant merchant can still drain your account until your chargeback claim goes through.
You're right that it's not fully enforced technically. It's complicated, and I don't think it's really solvable by technology (this scenario being roughly equivalent to direct debiting). Banks can check whether a particular merchant has already been used by a customer and block them from debiting the account, but since SCA has exceptions for recurring debiting, this is not really enforceable once the customer has authorized the merchant for any debiting.
> If you attempt an exemption and the bank returns a decline code indicating that the payment failed due to missing authentication, you’ll have to reattempt the payment with your customer but this time utilizing SCA.
Yeah, Europe is ahead on this; I hedged my earlier statements heavily.
It's not a difficult technological problem to solve. A card's chip should be able to guarantee that the card is physically present for any transaction.
Obviously online payments would pose a problem, people would need to either own USB card chip readers or banks would need to do something new and special.
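For intuition, here's a toy sketch of the chip's card-present guarantee as a challenge-response MAC; real EMV cryptography is far more involved, and everything here is a simplification:

    # Toy sketch of chip-style "card present" proof: the terminal sends a
    # fresh challenge, and the chip answers with a MAC computed from a key
    # that never leaves the chip. (Real EMV is far more involved.)
    import hashlib, hmac, secrets

    CARD_SECRET = secrets.token_bytes(32)  # provisioned into the chip, never exported

    def chip_sign(challenge: bytes) -> bytes:
        # Runs inside the chip's secure element.
        return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

    def issuer_verify(challenge: bytes, response: bytes) -> bool:
        # The issuer holds the same key and checks the response.
        expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = secrets.token_bytes(16)  # fresh per transaction, so no replays
    assert issuer_verify(challenge, chip_sign(challenge))

Because the secret never leaves the chip, skimming the stripe or copying the printed number yields nothing that can answer a fresh challenge.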
In Germany (/ the EU?) we have electronic ID cards that can be used for a few online services.
The physical card can communicate via NFC, and there's a smartphone app you can use with it.
For PCs, you can buy some fancy NFC interface if you want, but you can also have your phone act as a reader, the PC connects to it over the local network.
Maybe something similar could work for banking cards. They all have NFC anyway.
On the other hand, you might as well just have an app that is registered with the bank on your computer/phone (like how it works for smartphone NFC payments) and skip the card.
Online payments are done using pretty much the same system. Instead of the chip, you get either a 2nd authentication mechanism, or start out with a strong token (be it the strength of the token itself, or the stability of it).
An older example was transaction authorisation numbers (TANs). You would either get a long indexed list on paper, or you could receive them over the phone (voice or text). This was then mostly replaced (about 10 years ago) with hardware (H/T)OTP-type tokens that required your card to be inserted into the token and PIN-authenticated. Later that too was replaced by a cardless version, which in turn was replaced (for consumers) by mobile apps.
The combination of minimum software versions, online authentication, transaction limits, daily limits, and time-locked temporary limit increases (so you can buy a car with your phone, but you have to up the limit a couple of hours ahead of time for it to take effect) make it pretty safe with acceptable risk for the bank. And then there's of course the standard fraud detection and prevention departments, so if you do something unusual that also involves a lot of money, you're likely going to get a call.
For business use, there are other systems, generally of two types: EU-wide smartcards or bank-specific smartcards that can be used to authenticate and authorise. You'd use a USB- or NFC-connected reader for that. Sometimes that involves entering a PIN on the device itself before the computer can talk to it, but that does make the OTP exchange very fast. You'd still have limits or multiparty authorisation set up in your organisation so you don't end up with one person moving a couple hundred thousand around on their own.
And then there are some overlapping systems; apparently this one is going EU-wide: https://en.wikipedia.org/wiki/EIDAS, and apparently some implementations include useful things: https://www.idin.nl/en/businesses/ like age confirmation, where the business doesn't need to know who, what, or where you are, just that you're of age (and not even a specific age). Granted, nothing is perfect, but it's a whole lot better than finding some S3 bucket somewhere with JPEGs of ID cards. As long as they don't do dumb stuff like trying to MITM TLS, it's progress. The overlap is in the concept that you can use some electronic means to prove who you are to get something done.
If you have an unprotected vector, fraudsters will find and exploit it. They're literally paid to do so.
I've seen fraudsters who were ridiculously persistent just to make $2,000 in a year. They just keep poking at it, and if at a certain point they're able to ramp that up to $80,000 in a month, it was completely worth it to them for several years.
It's like how I've seen people spend hundreds of hours to generate a few hundred dollars' worth of in-game currency or on-site reward points.
> Not every transaction needs complex fraud detection or the ability for the customer to do chargebacks.
Well, not until you get hacked.
We might be happy with instant, no-undo transactions until our device gets hacked and our bank account with many thousands of dollars gets drained, through no fault of our own.
Then suddenly, complex fraud detection and transaction reversals seems like an awfully good idea.
Because the issue here isn't about chargebacks where you genuinely made the transaction but the business failed to deliver, and maybe you lose a couple hundred dollars. The issue here is about when you never authorized transactions at all, and you lose all your savings.
The EU caps interchange fees at 0.3%, which is probably still too much. The 3% mostly finances the various gimmick programs that make naive people think they're "gaming the system" with the 20th card in their wallet (and exists because they can charge it, of course).
Have they talked about how local laws and elections will work? The backers are putting up a ton of capital and bootstrapping it with $1B+ in community benefits and housing subsidies.
When it starts being populated, is it going to be run like a company town[0] where they control the stores and restaurants? How much control will they have over local elections?
Couldn't they simply incorporate, in the municipal sense?
A quick search indicates this is the process in California:
> Today, incorporation means going through a rigorous and complicated process with the Local Agency Formation Commission (LAFCO) in the county where the community sits (each census-designated place must be contained within a single county). Before a community can even apply for incorporation, at least 25 percent of registered voters there (a community must have a minimum of 500 registered voters to qualify at all) must sign a petition stating their desire to make their community a city.
Key problem is that the average consumer is with their health insurance provider for about 3 years. This means that when you have drugs that provide significant health and cost benefits in the long term, the patient's current insurer pays the cost but another company receives most of the benefits.
It would help tremendously if the US changed the rules and made it so most people got insurance directly and not through their employer.
That's not really a key problem, and it's not how insurance companies evaluate cost/benefit. We all want a healthier population overall, and if you think hard enough about it, each company loses some subscribers and gains some, so it works out as a win-win across health insurance companies.
My understanding is that insurance companies have profit margins fixed by law: essentially a cost-plus model (just like California power companies). So why don't health insurers want the least healthy and most expensive general population in the long run?
That's not accurate. The Medical Loss Ratio (MLR) rule created by the ACA requires that 80% of revenue collected be spent paying benefits. So 20% of revenue goes toward all operational expenses and profit. If they retain more than that 20%, they need to refund the excess to policyholders.
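A toy sketch of that 80/20 arithmetic (simplified for illustration; real MLR rules involve many adjustments):

    # Toy sketch of the ACA's 80/20 Medical Loss Ratio arithmetic.
    # Simplified for illustration; real MLR rules have many adjustments.
    def mlr_outcome(premiums: float, benefits_paid: float) -> dict:
        required = 0.80 * premiums                    # minimum benefit spend
        rebate = max(0.0, required - benefits_paid)   # refund if under-spent
        retained = premiums - benefits_paid - rebate  # opex + profit pool
        return {"rebate_owed": rebate, "opex_and_profit": retained}

    print(mlr_outcome(100.0, 85.0))  # {'rebate_owed': 0.0, 'opex_and_profit': 15.0}
    print(mlr_outcome(200.0, 90.0))  # under 80%: {'rebate_owed': 70.0, 'opex_and_profit': 40.0}

Note how doubling premiums also doubles the maximum retainable 20%, which is the perverse incentive discussed downthread.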
So maximize medical costs and minimize opex for best returns.
At least opex is in the calculation; California utilities don't even have that. It goes a long way toward explaining why I pay 50¢/kWh in SF (private CA utility) vs. 15¢ in Sacramento (municipal) or Nevada (private, without cost-plus).
I think my larger point is that jacking up premiums increases revenue, thus increasing the opex/profit pool (regardless of the percentages), which is a perverse incentive to jack up premiums as much as possible.
As in, premiums doubling over 3-5 years (which is about what I've seen with ACA premiums), thus doubling the potential profit pool without increasing opex in any way. In fact, the use of third parties to provide "pre-approval" for an increasing number of drugs, tests, and procedures reduces opex, leaving more of that 20% for profit.
In fact, my insurer has consistently raised premiums while squeezing providers, who now charge separate fees (meaning additional co-pays for me) for things that were once included in a single fee (one example: charging an "outpatient facilities fee" on top of the copay for seeing a doctor), increasing my costs while the insurance company just shrugs and says, "that's not covered, suck it up!"
Meanwhile, doctors (especially GPs) are over-scheduled (my GP is scheduled to see every patient in 20 minutes or less), allowing the practices to charge for seeing more patients -- increasing their profits -- at the expense of patient health.
And heaven forfend having multiple related issues which require more than one specialty -- you're just shunted from specialist to specialist without much (if any) communication and a shrug if something unrelated to the specialist's area comes up.
The point is that the whole industry is fraught with perverse incentives that drive up costs, reduce the quality of care, and focus attention on the wrong things (i.e., services provided vs. holistic health outcomes).
It's disgusting and harms people. You'd think that by now we'd have decent healthcare. But the perverse incentives pushing toward financialization of, well, pretty much everything medically related, are actually impacting the average lifespans of Americans.
While I agree there are still some perverse incentives, and I'm largely a pro-universal-healthcare (or at least public option) kind of person, they can't just jack up premiums and thus increase their profits. They'd have to also increase benefit payouts if they're near the cap. And FWIW, it's generally a good thing for them to reduce opex, so long as quality doesn't suffer (although the two often do go hand in hand in practice).
One could point out though, it kind of incentivizes them to no longer negotiate prices as hard. This makes benefits more expensive, growing the total size of their 20%.
But on the other hand, picking up the idea of holistic health outcomes: increasing benefit payouts toward those kinds of processes is something this rule incentivizes. Adding more "fringe" holistic health benefits, like telehealth nutritionists, gym membership subsidies, and whatnot, also grows the benefit payout side and then lets them take more total profit. But there's no free lunch here; those benefits are largely coming out of the premiums being collected.
Also, a lot of these insurers don't even end up getting any of that 20% some years. The first few years after the ACA pretty much every insurance company had some big losses. It has been a while since I looked at the industry, but they're not always making massive money margin-wise.
Don't take my comment as an endorsement of the current system. It's dumb and broken and I hate it.
>they can't just jack up premiums and thus increase their profits. They'd have to also increase benefit payouts if they're near the cap.
Maybe I'm missing something here, but other things being equal, just increasing premiums allows for a larger opex/profit pool.
20% of $100 is $20 and 20% of $200 is $40. Yes, they'd still need to spend (using my made-up numbers) $160 instead of $80 on care/benefits, but the profit pool still increases -- a perverse incentive to raise premiums.
>One could point out though, it kind of incentivizes them to no longer negotiate prices as hard. This makes benefits more expensive, growing the total size of their 20%.
Absolutely. That points up the perverse incentive to raise premiums and highlights the lack of incentive to reduce costs.
>But on the other hand, picking up the idea of holistic health outcomes: increasing benefit payouts toward those kinds of processes is something this rule incentivizes. Adding more "fringe" holistic health benefits, like telehealth nutritionists, gym membership subsidies, and whatnot, also grows the benefit payout side and then lets them take more total profit. But there's no free lunch here; those benefits are largely coming out of the premiums being collected.
I guess I should have been more specific when using the word "holistic." I meant it in the sense of treating the whole person (with a focus on the outcome of such treatment) rather than just specific symptoms. I wasn't talking (or even thinking) about nutritionists and gym memberships and stuff like that. Those aren't bad ideas, but are geared toward healthy people.
Those with specific conditions (e.g., vascular disease, cancer, hepatitis, HIV, etc.) manifest other issues that, while exacerbated by the specific condition, aren't successfully treated by addressing that condition alone. What's needed is addressing those ancillary issues (which may be at least as debilitating, if not more so, than the condition being treated by any particular specialist) along with the "primary" issue in a holistic fashion, rather than specific treatment for specific issues while ignoring other issues (which is one of the perverse incentives of paying by the procedure rather than the outcome).
I won't get into details here, but recent events in my own life have shown me how fractured medical care is and how difficult it is to navigate (especially when your GP is allotted 15-20 minutes with you every six months or so) the side effects of various treatments/procedures/drugs, with specialists shrugging their shoulders and saying, "that's not my specialty. I did my job. If you're not recovering/getting healthy, that's not my problem. Go see someone else."
There are no incentives for medical practices to provide comprehensive management of health issues because the only person who is (theoretically) paid to do so, isn't given the time or the resources to do so. Everyone else just wants to do their specialist thing and walk away.
That's the wrong way to do medicine.
My apologies if I wasn't clear about that in my previous comment.
>Also, a lot of these insurers don't even end up getting any of that 20% some years. The first few years after the ACA pretty much every insurance company had some big losses. It has been a while since I looked at the industry, but they're not always making massive money margin-wise.
That's absolutely correct. In fact, most insurers (at least in my area, a large, dense, urban area) have left the ACA market and those that are left struggle to maintain their viability -- mostly by screwing over their customers.
So yeah. I mostly agree with you. But that doesn't help me or the millions of others who pay exorbitant premiums for mediocre medical care.
I'm pretty angry about it, but short of moving somewhere with universal health care, I'm not sure how to address that.
> Yes, they'd still need to spend (using my made-up numbers) $160 instead of $80 on care/benefits, but the profit pool still increases -- a perverse incentive to raise premiums.
So they're still not just increasing profit by jacking up premiums; they have to actually find another $80 of benefits to pay out. If subscribers don't have those costs, they can't just increase premiums. If they raise prices to $200 but only end up with $90 of benefits to spend, they have to cut a refund.
>So they're still not just increasing profit by jacking up premiums; they have to actually find another $80 of benefits to pay out. If subscribers don't have those costs, they can't just increase premiums. If they raise prices to $200 but only end up with $90 of benefits to spend, they have to cut a refund.
While that's true, such a situation isn't very common[0].
Only a small percentage of people receive rebates, and those that do don't necessarily get much of a rebate.
It's anecdata, but my premiums have nearly doubled in the past four years, with no additions to coverage (in fact, my insurer informed me that while my premiums were going up by 15% in 2024, I would receive less coverage than in 2023). The only reason I've kept my coverage with them is that I have an ongoing issue and I'd prefer not to be forced to change providers until it's fully addressed.
So yes, just raising premiums doesn't guarantee more profit, but it certainly enables it and, at least in my case (which, again is just anecdata) means higher premiums and less coverage. Something doesn't seem right here.
And that something is how we've implemented healthcare in the US.
> average consumer is with their health insurance provider for about 3 years
Are they? That sounds right for how long they're with an employer, but if I move companies I'm probably going with the same insurance carrier under the new company's plan. The total list of carriers [0] is pretty dang small (and not every licensed company is doing new policies).
Even if what you say is true it seems like reciprocity would make up for it - Company A pays and Company B benefits like you say, but for every situation like that there's a situation where Company B pays and Company A benefits.
> That sounds right for how long they're with an employer, but if I move companies I'm probably going with the same insurance carrier under the new company's plan.
At least mid-sized companies seem to change insurers commonly enough that both my wife and I have had our employers change insurers in the middle of our employment.
I think decoupling health insurance from one’s employer could greatly change the entire healthcare industry. It would go a very long way to reduce cost obfuscation, among many other things. So many people think their health insurance costs $30 a month because that’s their “contribution”.
Unfortunately it will never happen, as it’s insanely politically unviable as almost no one wants that to happen. It’s the ultimate free market approach, but then people would have to pay for something they “get for free”. And once they realized how insane the system is, and how much everything actually costs, you might see knock on effects from that. Some bad, some good. It would be an experiment for sure.
Single payer is more realistic, even if it doesn’t do much to affect many of the underlying issues.
I can't think of any important things Mozilla has created since pushing Brendan Eich out 9 years ago. That's almost a decade and billions in revenue they've burned through.
There are now almost no programmers on the board or in senior leadership positions. The interim CEO they picked is an MBA who ran a business line at Airbnb.
I was executive sponsor of Rust, which Graydon Hoare was doing as a personal project while working with me on ES4. Rob Sayre, JS team manager, agreed Rust should be an official project, so Graydon went full time on Rust at Mozilla. This was in 2008.
Later, others, notably Niko Matsakis and Patrick Walton (apologies for leaving yet other folks' names out), took Rust to 1.0.
Mitchell didn’t know what Rust was until I explained it, wasn’t CEO when we made it an official project, but was CEO for this:
Holy shit, this is actually Eich. Comment deserves more attention
Huge fan; you were pushed out unfairly. It's wild that you were fired for donating money to a ballot measure that passed. It was a normal and popular opinion at the time.
Brendan, I'm a big fan of a lot of your work, but I really think it's a poor look commenting in this thread at all. Most folks on HN are well aware of Mitchell's record, and those that aren't...well it's willful ignorance on their part.
I think it comes off as petty to punch down, and wanted to let Mr. Eich know (out of respect) in case he didn't realize it. But if he realizes it and doesn't care, or just disagrees, that's his decision.
Punching down implies I’m up. How do you figure that? I’m not at Mozilla, not paying myself a seven figure salary, not ever engaging with the Davos set.
In any event, my comment laid out salient bits of Rust history, which, however much you might not like them, do not "punch" anyone.
You're a chief executive of a successful web technology and browser organization, while it looks as though Baker is being removed from the one you previously left.
I have no issue with the Rust facts, just that the context makes it look like you're being petty by further highlighting Baker's failure in a thread about what is effectively her removal, and I thought maybe you didn't realize that and would like to know. If you know and don't care, or disagree, then just disregard me.
>I’m not at Mozilla, not paying myself a seven figure salary, not ever engaging with the Davos set.
Mozilla's a dying organization kept on life support by Google and playing make-believe hero of the free web these days. Certainly tough to get much lower than that.
You are assuming facts not in evidence or provably false per public IRS 990 forms:
1. Mitchell was not as far as I know removed.
2. She has extracted over $20M gross pay including bonuses for the last several years. I’ll do the exact math later.
Let’s see how much her comp goes down in this year’s IRS Form 990, which will come out at the earliest in late 2025.
3. Mozilla has a ton of cash in the bank while Brave is still building a new business model. To say I’m doing better in any financial sense is cheeky. If you use Brave, thanks for your support.
Last thing: Mozilla will take years to die, and it could perdure as an NGO, even after Firefox. Don’t assume it will die quickly. We are all dying, in this world.
"Punching down" is a sophomoric dunce-phrase in any event, but even with the most charitable interpretation of that phrase, it's wrong. I was not punching, nor is your asserted direction "down".
Mitchell (along with all leadership) should not be immune from criticism, even (or especially, if the leader in part caused the downfall) if you wrongly believe that they were fired, underpaid, or running a "dying" outfit -- all of which as far as I know are false.
You are only getting Brendan's story, and why is Brendan going out of his way to attack Mitchell Baker? It strongly implies some powerful drive besides sharing information.
Whatever white knighting is, I'm interested in the truth, in the dangers of Internet mobs, and in fairness to anyone (including you). All are essential. Look at what our world has become as people disparage all that.
> the person who has destroyed firefox’s marketshare?
One aspect of that mob mentality: You skipped past having evidence and reasoning for that assertion; it's just assumed. And then you act out in anger that anyone would question the mob's assumptions.
Don’t ignore context. The comment to which my first comment above replied implied that Mitchell (for Mozilla) was due credit for Rust.
Your (1) is still planting a falsehood: Mitchell’s role when I sponsored Rust was not CEO, she did not have to approve or reject Rust, but she did assent to my advice that we make Rust an official project.
This is not an attack, it’s simply what happened. You are the one who keeps concern trolling, or whatever it is you are doing, to give Mitchell undue credit or to shield her from anything that could be taken as criticism.
> I can't think of any important things Mozilla has created since pushing Brendan Eich out 9 years ago.
Comment to which I replied, which you wrote:
> Rust, for example.
Let's recap, since you seem to have a very short context window or memory. Someone wrote they couldn't think of anything important Mozilla created after I left. You cited Rust. I testified that Rust started many years before I left and I was Rust's C-level sponsor and immediate colleague of its creator.
You then reappear after several nesting replies to imply I'm lying and have bad motives. After this, here we are with you ignoring your own false claim that Mozilla created Rust after I left. It seems to me, without ascribing motive, that you are the one with a weak grasp on the truth here, even the truth of what you wrote in prior comments on this page.
I think the public and personal contexts are getting mixed together here, to bad effect (as always).
It's a personal situation for you and I can't imagine how much s-t you have heard and taken over the years. You have the misfortune to be personally invested in a public issue, and I'm glad I'm not in your shoes. If we were at a dinner party, of course I wouldn't say a word about it - it would be rude to you and you know infinitely more about it.
But we're not at a private dinner; we're in a public context. People discuss public issues without being experts or researching every detail; they will get some things wrong or be imprecise. Also, they are just not as focused as you are - understandably - on the same things and at the same level of detail. When I credited Mozilla with Firefox and Rust, I didn't specifically credit Mitchell Baker with it, nor did I care about that detail of who did what (also, I didn't talk about creation; much of the Rust development happened after you - but I only say that because you care; I don't). That's really important to you, and so that's what you focused on, and I can see where you got that impression - it just wasn't important enough to me to clarify. It was a bit sloppy, but I'm not writing a dissertation or a contract.
You did inject yourself personally into a 'public context'; I don't think the anger is appropriate, nor your bullshit about my motives. What I wrote was a genuine compliment to everyone at Mozilla, including you: it was reminding the world that what Mozilla has done so far is spectacular - unreal, heroic achievements that changed the world, twice over. Mozilla is just Mozilla to me, not one person or another.
Still, I apologize that I wasn't more polite when I remarked about potential bias. I didn't respond directly to you, but I should have been careful to make it respectful - not because you are a big deal on some scale, but just the opposite: you're a human being. I knew you were around and regardless of context, I don't buy that public figures are free game for abuse. Good luck with Brave, another great idea that I hope changes the world.
I already said I was not going to speculate on your motives, after easily showing your false claim: that Mozilla created Rust after I left.
So the only "bullshit" about motives has come from you. And you are the only one who seemed emotional, if not angry, to the point you threw "attack" as a typical DARVO move. This is pure projection.
It matters who did what, when. Especially in view of Mitchell's power of the purse at Mozilla. Yes, she could have killed Rust, but it would have blown up in her face and she had no need to kill it. No, she does not get credit for creating Rust even after I left. The Rust team was working on spin-out well before they all left. As for Servo, it is better off out of Mozilla: https://news.itsfoss.com/servo-rust-web-engine/.
Netscape, on its deathbed (and I think due to Baker's efforts, in part), open-sourced the Netscape code. Mozilla was created by ex-Netscapers, developed that code, and released a few versions of a Mozilla browser, which followed Netscape's idea of integrating browser, mail client, webpage editor, other stuff (maybe IRC client?).
Sometime later, a few Mozillians decided the web and Mozilla needed a simple, sleek, fast browser, and built Firefox.
Mozilla was created by Netscape itself, before its "deathbed". They were possibly the first "open-core" project of significant scale: they wanted to do a Big Rewrite (Netscape 4 code was unmanageable and crusty), and hoped that doing it as opensource would speed things up. The original Mozilla suite was supposed to be the experimental/rough version, which Netscape would then polish and sell as its own. Unfortunately, by the time this happened (and it did happen - Netscape released a few Mozilla-based versions), the browser market was entirely commoditized and there was no path to profitability for Netscape (which had been absorbed by AOL by then). The Big Rewrite took way longer than expected, and the open setup introduced even more development friction.
The Mozilla suite never got anything else beyond browser/mail/editor. I think the AOL version shipped a bunch of extra bookmarks and that's it. They had already built some infrastructure for extensions and themes though, and that's effectively what Firefox took to its logical extreme: a skunkworks group of Mozilla devs stripped the suite down to the lone browser and forced everything else to be an extension. That was Phoenix (rebirth and all that), which became Firebird (because people can't spell Phoenix, and the other surviving products could be aligned as Thunderbird and Sunbird), which became Firefox (because oops, in IT there's a Firebird already, a database with angry lawyers).
He achieved a lot.
He launched a new browser with full privacy protections by default, an independent search engine, a private LLM service, and more, while Mozilla just rebranded things from others.
If anything, it shows how Mozilla, with its resources, is just wasting money. Brave was bootstrapped with a much smaller team and less capital, yet they defended privacy in a way Firefox never could.
> they defended privacy in a way Firefox never could
I have some familiarity with Brave. They use Chromium (or some components of it?) which is probably more secure from attackers. It has some built-in privacy, but how is it better than Firefox's?
Because it blocks ads and all trackers with no exceptions? The reason is that they aren't afraid of Google. In addition, they launched a fully independent search engine and added new isolation features to Chromium, like localhost access controls and cookie isolation.
I don't understand why the author dismisses carbon steel and cast iron pans. They're my favorite. They last forever and cook great. Go into any restaurant and you'll see mostly cheap, carbon steel pans.
Because they require maintenance, and people who aren't even sure they enjoy cooking don't want to spend one minute more on it than absolutely necessary. I understand that carbon steel is a lot less maintenance than cast iron, but it does require some.
By comparison, you can soak a stainless steel pan in water for weeks with basically zero negative consequences.
Also, if you live in a culture that frequently cooks acidic dishes, then you're working against yourself by dissolving the seasoning constantly. I also understand that it's fine in moderation.
The author gives several reasons for their decision. I love my Lodge cast iron pan, but I also agree with the reasons that the author gives when choosing to not recommend them for "most" people.
One of their reasons was that they couldn't find any evidence regarding the long-term effects of eating out of seasoned cast iron pans. That rubbed me the wrong way, because it seems to suggest that, until proven otherwise, the absence of evidence implies there may be a problem.
Well… yeah? We know that smoking oil is carcinogenic, and it forms this weird polymer layer when a pan is seasoned. Would not be surprised if further research shows that Teflon is actually safer.
Stainless steel pans are basically as durable as cast iron pans, while being lighter in weight and quicker to heat. I am finding it difficult to justify keeping my cast iron pan around.
Cast iron does wash differently from other things, but in my experience it's less work. Most of my cooking is on cast iron, and when I'm done I rinse it under the tap, rubbing a bit with a scrubber if food is stuck to it. I don't use salt. Then I put it back on the stove for a minute until it's dry. It takes me about 30s to clean a pot, less time than something I need to rinse-soap-rinse, and I end up with a clean pot on my stove ready for the next use.
Our stove almost always has a 10" dutch oven, 6" pan, and 10" pan sitting on it, ready to use.
The only reason people prefer cast iron is an affectation for "old school" things. There is no practical, objective reason to use them over stainless steel. Advocates seem to enjoy the new hobby of cookware maintenance, similar to how some Reddit guys are really into shaving with straight razors as a hobby, even though they perform worse than a Mach 3.
I'm fully on the cast iron train but it definitely heats less evenly. Nothing 1-2 minutes pre-heating can't solve, though. That said, as you mentioned, it will hold temperature a lot better.
Ah, maybe I was confusing the two effects. Good explanation, thanks. I guess holding the temperature better makes it feel like it heats more evenly, but only after it's well preheated. It's definitely my go-to for recipes where keeping a steady temperature despite introducing cold ingredients matters.
I find cast iron extremely easy to care for - I basically just heat it up then scrub it under hot water. If you cook meat in stainless, it seems to take a hell of a lot more effort to clean off any residue.
When cast iron is brand new it can take a bit of work, but not a terrible amount - most of the "seasoning" happens just by using it. The two main downsides are things I actually like - the weight, and the lack of even heat distribution.
I have a well-seasoned Lodge cast iron pan, and a 5-ply All-Clad pan. Despite the fact that Lodge is in no way a high-end cookware brand, I can consistently get a perfect sear without sticking in it. The All-Clad is hit or miss.
Maybe I’m just not used to it and need to adjust something, but the cast iron at this point is dead easy to use.
"Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4–6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope, however previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants when aged 4, 7 and 11 months, as well as adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram."
The data is well established: social media use by teens leads to worse mental health outcomes. I'm a parent, and as my kids near the age at which social media becomes a thing, I started digging into it. I had assumed the evidence would be vague and filled with underpowered studies, but it's not. Social media is bad for kids, and the data is very clear on it.
This post has a list of some of the better studies and gives a good synthesis of the results:
That's a good analysis, but I don't find it convincing. He's trying really hard to disprove Haidt's post by poking holes in many of the studies. If you look at 386 studies in the social sciences, of course you'll find issues with the analysis or design of many of them.
The larger trends ("most of the effect is driven by teens who use no social media", etc.) aren't supported by the data he presents (look at the table of "social media time" -> depression, for example).
Are the researchers who look into this problem predisposed to finding a connection? Probably. But I do think the open, community based analysis Haidt led was done well and if you look at what they found digging through 386 studies, it's compelling.
> He's trying really hard to disprove Haidt's post by poking holes in many of the studies.
Because this is how evidence based reasoning works. If the evidence that is supposed to support the hypothesis is fundamentally flawed, then our hypothesis doesn't actually have any support.
The fact that we can apparently so easily find flaws in what is supposed to be empirical evidence should make us more cautious about drawing firm conclusions. In fact, low quality literature is something of a plague in many social sciences at the moment (e.g. the replication crisis in social psychology).
It has been studied mostly on US adolescents. There is some research on Dutch teens too.
I would say rather that this effect is better studied in US teens, so we can say more conclusive things about US teens. In addition, there is some indication that the effect may also be present in non-US teens and should be studied further.
A lot of the research says that social media isn't a cause, but more of a catalyst in the presence of other factors like cyberbullying.
This reminds me of similar effects where people attributed online misinformation mostly to right-wing people, when being right-wing was actually only a catalyst when a predisposition for chaos was also present; or when de-policing and federal investigations were blamed for a rise in crime, when crime only rose when a particular district also had a "viral" event.
I don't think social media is a causal factor in itself, but it is definitely a catalyst factor in the presence of other things like wealth inequality, clout chasing, and cyberbullying.