I can't read this article, but I live in a market that Waymo is just beginning to develop (St. Louis). I have seen two Waymos in our area in the last week. I am really, really looking forward to having it available, but I think it is several years away.
It will be useful for me and my wife, but it would have been transformative for my kids in the pre-driving teen years. We do so much ferrying them around, and to be able to put them in a safe car and get them where they are going would have made such a difference for us. I think it might be here in time to make a difference with my younger child.
For that use case, trust and safety are really, really important to me. That plan only works for me because I suspect the Waymo driver will be safer than I am. I think for companies developing self-driving cars, safety reputation matters a lot. Right now Waymo is the only one I would consider, because their rollout has been so methodical and competent.
Speaking as a small business that started with one-time/upgrade-based pricing and moved to a recurring model, I don't think it is too late for Tailwind to do the same for future upgrades/improvements. I am saddened that they laid people off before trying. I understand doing that is a leap of faith/risk, but that is what you need to do.
The key thing they need to recognize is that some percentage of their customers are serious businesses that want them to continue developing/maintaining the software, and that these businesses will be supportive as long as the deal is the same for everyone (you can't ask them to pay out of the goodness of their hearts, as then they feel they will be taken advantage of by people who don't pay).
When we switched to a recurring pricing model, I thought it was going to be a disaster. In fact, I got an angry call from exactly one customer (who then remained a customer despite threatening to leave). I got subtly expressed approval/relief from many more.
The book "How to Sell at Margins Higher Than Your Competitors" was helpful to me, and might be helpful here as well. The key is to realize that you want to sell to people who really value your product and will pay for it. You don't want to maximize volume, you want to maximize revenue × margin.
You already have an installed base of people who value your product enough to pay for it once, you just have to create a system that enables them to sustain the technology they value in order to get ongoing support/upgrades/fixes/etc. The people who are going to complain on hacker news about recurring pricing aren't the people you want as customers anyway.
If the majority of your customers don't value it that much, then you are pretty cooked. But you may as well find that out directly. If people really don't want to pay for the software, don't waste time creating it for them.
We made the switch about 20 years ago. Since that time, about 70% of our lifetime revenue has come from recurring payments. Had I not had the courage to make the switch, I would be writing now that the business has been an unsustainable mistake, but that would have been false.
>If the majority of your customers don't value it that much, then you are pretty cooked.
cries in gamedev
Sadly my options are to either sell a few thousand copies on PC and deal with complaints about how my game isn't an 80-hour-long timesink, or go into mobile and employ all the dark patterns I hate about marketing.
This article is the second time I have seen a news outlet try to 'break' the vending machine experiment. That is definitely really entertaining. In this case, they convinced the AI that it lived in a communist country and it was part of an experiment in capitalism. That's funny!
But I really wish Anthropic would give the technology to a journalist that tries working with it productively. Most business people will try to work with AI productively because they have an incentive to save money/be efficient/etc.
Anyway, I am hoping someone at Anthropic will see this on HN, and relay this message to whatever team sets up these experiments. I for one would be fascinated to see the vending machine experiment done sincerely, with someone who wants to make it work.
The reality is that most customers are smart enough to realize that driving a business they rely on out of business isn't in their interest. In a B2B context especially, I think that is often the case. Thanks.
I am surprised this hasn't gotten more attention. I feel like HN used to love nothing more than complaining about patent trolls. Anyway, this article suggests an action through regulations.gov which, based on the content of the page, seemed worth taking to me.
The current internet zeitgeist is ironically anti-tech and pro AI doomer communism.
Patent trolls hurt tech, so that's now a good thing on the new HN (now filled with normies, like most tech companies these days). The enemy of my enemy is my friend, sort of thing.
Nobody has any real values or beliefs anymore. We’re all just swimming in vibes.
AI has rather superseded intellectual property, since it's apparently fine to distill a derived work of everything on the planet if you have enough investor money.
I think this is an interesting idea, but the details on how the incentives get aligned, and how graduated students support the university are unclear to me.
I am a fan of the idea that universities need a new pricing model that is correlated to career results. Like, maybe universities are only eligible for students to receive gov't loans in proportion to the increase in W2 income of their past students, etc.
But I am not convinced that the model in this article, as described, scales. It might be a model for attracting very high-value students and thriving. But in that case it might just be about selection effects, and not about delivering a value-add in education.
I also think that the article fails to recognize that one of the main tasks of major universities is research. Quite possibly the research and education functions should be separated, but I wish the article addressed some of these issues more explicitly.
Still, $100m toward an effort is interesting. Maybe it will evolve in an even more interesting direction.
>The incentive structure of higher education is broken. My gift to the University of Austin is meant to change that—by tying its success to the real-world achievements of its students. By Jeff Yass.
I agree. I read an article a few months ago about how frequent MAID (medical assistance in dying) is in Canada. I am surprised that that has not led to larger scale studies about the dying process.
In this particular case, the press release notes "Scientifically, it's very difficult to interpret the data because the brain had suffered bleeding, seizures, swelling...". That does seem to limit how much can be generalized from this one case. A larger study of MAID patients would be more useful.
Edit: Maybe the issue is that the MAID itself would alter the brain state. That actually seems pretty plausible.
I think the quote (usually attributed to Bill Gates) was something like "Measuring a software project's progress by lines of code is like measuring an airplane project's progress by weight."
I have been reading the book "Apple in China" after hearing the author on a podcast. It has fundamentally altered my view of Apple as a company. From a consumer perspective, I thought it was an amazing company. But looking behind the scenes, I came to understand how morally compromised it has been for a very long time. In retrospect, I feel complicit in things I didn't understand I was part of.
Anything looks worse when you see behind the curtain. The question is in comparison - who produces technology you want without that behind the scenes behavior (or being dependent on someone else's behind the scenes behavior!)?
I read your comment before reading the article, and I disagree. Maybe it is because I am less actively involved in AI development, but I thought it was an interesting experiment, and documented with an appropriate level of detail.
The section on the identity crisis was particularly interesting.
Mainly, it left me with more questions. In particular, I would have been really interested to experiment with having a trusted human in the loop to provide feedback and monitor progress. Realistically, it seems like these systems would be grown that way.
I once read an article about a guy who had purchased a Subway franchise, and one of the big conclusions was that running a Subway franchise was _boring_. So, I could see someone being eager to delegate the boring tasks of daily business management to an AI at a simple business.