
> When creation was hard, skill was the differentiator: you had to actually be good to make something worth showing. Now the barrier is near zero, so you need reach. Reach costs money or it costs years. Probably both.

Creation has been getting progressively easier since the invention of the computer; it is not a new phenomenon. This naturally raises the bar on what needs to be delivered in order to find paying customers. In other words, creation is still "hard" if you want to succeed.

> I launched something last week. 14 people signed up — no ads, just a couple of posts. 14 real people who didn't have to. That number is tiny and it felt like something. Then I sat down to think about what it would take to grow it and I couldn't look at that math for very long.

This applies to 90+% of founders who have ever launched something. The hard part is continuing to push forward when you experience this (which you will, over and over). It sounds like the author expected that what was hard would suddenly be easy.


sounds like the author is discovering business.

I didn't read the article, but yes, going from 0 to 1 and from 1 to 10 is really hard and really rewarding. And it got easier with the Internet. Going from 10 to 1k and from 1k to 1M is a different ball game. Always was.

The dream of running my own company got me to learn programming. 20 years later, I'm an employee at a company, still dreaming of running my own company. But now I realize that reality and dreams are not the same, and that's ok. As in, I probably really don't want to run my own company. We'll see =P


>> The dream of running my own company got me to learn programming. 20 years later, I'm an employee at a company, still dreaming of running my own company. But now I realize that reality and dreams are not the same, and that's ok. As in, I probably really don't want to run my own company. We'll see =P

To me, having one's own company was just a means to an end: making enough money to live comfortably without needing to get a job ever again for the rest of my life. I too learned programming as a means to achieve that end, but eventually realized that I don't need a company if I can short-circuit the path to money by switching to the right domain: finance, where what I learn might eventually be put to use directly by investing capital into profitable trading strategies.

Back to OP's article: if there's a domain where money as a moat is not a problem, it's definitely finance: https://www.visualcapitalist.com/all-of-the-worlds-money-and...

I've worked in this domain for almost 20 years and can tell you, no one's gonna risk a billion dollars on crap vibe-coded by AI. As I wrote before, I don't know what crack these AI people are smoking, but when there are real stakes at play, they don't play around with toys. And AI in programming is a toy. It's the unlikely triumph of the "Can I haz teh codez?" Ctrl+C / Ctrl+V "prompt expert" strategy (mocking it, lol) from Stack Overflow, along with the people who employ it.

I'm not worried about MY particular future in this industry. I'm not worried that AI is gonna replace me, or us, or write anything significant here at all in the foreseeable future, until it fucking evolves into AGI, which is somewhere around 5,000 years from now, optimistically.

The party's gotta come to an end really soon, along with the figures on how much money AI makes versus its real utility, which is, simply stated, that of "a toy".

Not necessary, but here's the mood I was in while writing this comment, listening to this song: https://www.youtube.com/watch?v=6EWqTym2cQU&list=RD6EWqTym2c...


It's absolutely wild to see how unevenly distributed the future we're living in is.


That's almost the famous William Gibson quote: "The future is already here — it's just not very evenly distributed."


Is it any surprise when wealth inequality has been increasing in the USA for the past 50 years? Globally the picture is even more bleak.


In the game of business, quitting = losing.


Something I've noticed a lot with Twitch streamers and YouTubers is that many of them want the outcome and are prepared to do the work, but want some sort of guarantee of success. It's very difficult to really sell to people that you will work very hard for a while and there's absolutely no promise that you'll ever have more than 14 subscribers. That's simply the core risk of entrepreneurship.


Also, as someone whose main source of income is a YouTube channel: there is a kind of threshold effect, where your videos are not good enough to watch until one day they suddenly are.

This means that until you reach that threshold, it feels like you're not making progress, because every video gets the same result (no views), even though, below the surface, you're slowly inching closer to the moment when your videos will actually be watched.


I have the impression it's more of a "bait and switch" thing. People get into streaming thinking they can just play games and make money, but then they realize nobody is going to watch someone play game X; they all want to watch game Y, because game Y is popular right now among the people who have the most time to play games and watch streams (kids). So they enter the industry thinking they can just do whatever they want, and quickly realize they have to do endless things they don't want to do to reach the level of popularity they need.


The thing is, the barrier isn't near zero. The time to reach an MVP has just decreased. But you still very much need expertise, strategy, etc. to deliver something worthwhile. The bar has just increased.


You've basically chosen to ignore the whole AI argument, as if it were just another tool and it was business as usual. Given how pervasive and fast-developing it is, there should be an argument for why it can be dismissed so easily.


> Creation has been getting progressively easier since the invention of the computer; it is not a new phenomenon. This naturally raises the bar on what needs to be delivered in order to find paying customers. In other words, creation is still "hard" if you want to succeed.

Only for developers. Outside of software, creation is still hard. Global markets giving access to excellent manufacturing certainly help, but software is a bubble.


Creating marketing material has certainly gotten easier as well. It used to require a lot of work to create those spam pamphlets and company documents, but today it's trivial. Of course, those are worthless to society, so it didn't help GDP, but it filled our society with advertisements and spam, and filled companies with worthless documents, since now nobody thinks before making one.


Seriously, you can ship in a week things that FAANGs would have paid billions for 10 years ago.

LLMs are just glue between pieces of your code; you still need to be able to plug them into a coherent architecture to do something impressive.


Apple Pay (launched around 12 years ago, and likely costing billions) is exactly the counter-example here. The 'code' was the easy part. The moat was the decade of hardware R&D and the leverage to force banks to adopt a new standard. An LLM might write the API wrapper in seconds, but it can't hallucinate a relationship with Visa and Mastercard. You literally cannot create a new Apple Pay in a week or even years, no matter how much you vibe code.

I'm sometimes baffled by what people think can pass as a product in a real sense.


To the extent that's true, it has much, much more to do with AWS, open-source libraries, and collective knowledge than it does with LLMs.

But I honestly can't think of anything you could do in a week that a company in 2015 would have paid billions for, unless it's something like tweaking an LLM. But in that case it's the original model, not the 1 week of work you put in.


OK, say you build a Whatsapp clone in a week. How many Whatsapp users will switch to your app?


> things that FAANGs would have paid billions for 10 years ago

such as?


I can only assume the grandparent means Google+


I mean literally anything that leverages modern APIs.

WYSIWYG site builders, text chat bots, audio transcription, voice synthesis.

Yeah, building from scratch would take longer, but you can slap a UI and a DB/schema around modern APIs and output something that would have been science fiction 10 years ago.


WYSIWYG site builders existed 10 years ago and did not cost billions. Chat bots were a novelty back then, since the NFT bubble hadn't popped yet and NFTs were being invented to stake the economy on instead. Audio transcription and synthesis existed and did not cost billions.


You think people would pay billions for a site builder??? Or that it would have been SF 10 years ago? I take it you were not around much 10 years ago.


I would argue that reach has already been the biggest limiting factor for the last 10, even 20, years.


It's just another way of saying "Ideas are worthless, execution is what matters" which has always been largely true.

Yes, you need the idea first, of course. But that's truly the easy part. 99% of "ideas" rely on great execution to be worth even looking at, much less paying for, to anyone else.


That isn't what it's saying and I don't think the idea that "execution is what matters" is even true, other than to point out that ideas aren't valuable by themselves.

This is about marketing, about getting people to know and care that the thing you built exists. You can execute perfectly (in terms of making a great product) and not get a single eyeball.


> You can execute perfectly (in terms of making a great product) and not get a single eyeball.

That's a tautological statement: if not a single eyeball is on the product, then you obviously didn't make a great product. After all, who determines a product is great? It's those eyeballs, not the creator.


"It never gets easier, you just go faster" - Greg LeMond


That quotation pops up on cycling subreddits occasionally and I've always disliked it because I think it discourages people from casual bike riding.

I've been biking to work occasionally for a few years now and it definitely gets easier.


Yeah, the quote assumes you're riding without speed limits. On a typical commute it does get easier, once your cardiovascular ability exceeds what the route's effective speed limit demands.


No, the quote assumes you want to go faster. I don't really. I enjoy my ride and if I wanted to get to work 5 minutes faster, I would leave 5 minutes earlier.


I read that quote as speaking more to the human condition and less about cycling. Humanity has a tendency to keep pushing to the edge of its current abilities no matter how much they expand.


Only if you continue to push yourself while training. In endurance training, what used to be difficult absolutely gets easier.


> This requires calling native APIs (e.g., Win32), which is not feasible from Electron.

That is not correct. One of the major selling points of Electron is precisely that you can call native APIs.


This thesis has existed since Cursor first started, and the gap between them and VSCode has only widened since then. It’s worth spending some time thinking about why that may be before having such strong conviction about their demise.


> the gap between them and VSCode has only widened since then

What is in this gap? Do you know of any good resources that outline the features that Cursor provides over VSCode with Copilot?


You can't really name a list of features that Cursor has that Copilot doesn't. It's more like this: Cursor appears to heavily dogfood its features, while VS Code's Copilot seems to check the feature boxes, but each one sucks to use. The autocomplete popups are jarring. The Copilot agent doesn't seem to gather the correct context. They still haven't figured out tool calling. It's really something you have to try rather than compare on a checklist of features.


I think your knowledge is a bit outdated? Cursor definitely still has an edge, but VS Code's GitHub Copilot UI has come a long way, and using the same underlying models in both, the results are fairly similar, differing only in UX niceties.

For stuff like background agents, Cursor is way ahead.

Zed Editor is a nice contender too.


I tried copilot agent like 3 weeks ago. If that much has changed since then, props to Microsoft.

Zed is very nice, it’s just a totally different workflow. I think people who work in a domain where AI is not particularly strong would be better off with Zed, since Cursor’s way of reviewing edits is a little clumsy.


Yeah, tbh Copilot is really not that great compared to either Cursor or Zed.

But I think that's UX polish; they could fix it if they cared.

We'll see, I guess. Maybe MS prefers to just buy them out?

Cursor is getting priced out of reach, though.


What about on the speed front? VS Code's biggest problem is how slow it is. I'd already be done and on to the next thing (and maybe the thing after that) by the time it finally gets around to things. I like the concept, but I only have so much time in a day.


If you find VS Code to be slow, you might give Zed a try. I have been using Zed with my Claude API key and it's really something.


You can literally download and try it for free. Cursor is just better; it's insane that Microsoft screwed up AGAIN!


have you tried using either of them?


You mean the gap in vscode compatibility?


Yeah, idk what "gap" every Cursor user talks about. I installed Cursor, it didn't work on WSL, so I closed that chapter asap. Went to Windsurf, enjoyed it, but its credit usage scheme was very confusing; I'd nearly pressed the buy button when I went back to try Copilot.

Copilot is good enough; even the free tier gets done whatever annoying tasks I don't want to do. For anything more complex, I already have Gemini and ChatGPT subscriptions, so I just do the old copy-paste.


What are your thoughts on why it might be?


* a small, focused team moves faster

* Cursor has great taste, and that's hard to replicate at MS scale

* Microsoft had allegiance to OpenAI early on, which reduced their experimentation with other models


Haven't touched Tauri because of the cross-platform issues. The major appeal of Electron to me is the exact control over the browser. I'm curious about the Rust integration though. I'm guessing they're doing something that provides better DX than something like https://github.com/napi-rs/napi-rs?


> Haven't touched Tauri because of the cross platform issues.

You were wise. That's the biggest issue plaguing the project right now.

> curious about Rust integration though

Tauri is written in 100% native Rust, so you write Rust for the entire application backend. It's like a framework: you write eventing, handlers, and whatever other logic you want in Rust, and cross-talk with your JavaScript/TypeScript frontend.

It feels great working in Rust, but the webviews kill it. They're inferior browsers and super unlike one another.

If Tauri swapped OS webviews for Chromium, they'd have a proper Electron competitor on their hands.


Sounds easier/more reasonable the other way around. Aren't there already specific libs/bridges for Rust/Electron/Node for performance-heavy computations?


It's beyond shameful that Chrome on Android doesn't support shared workers yet. We decided to simply nix multi-tab support on Android because of it.
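For what it's worth, the usual workaround is to feature-detect `SharedWorker` and fall back to one dedicated worker per tab; a minimal sketch (the function name and fallback strategy are illustrative, not from the comment):

```typescript
// SharedWorker lets every tab of an origin share a single worker (and,
// say, a single WebSocket connection). Chrome on Android doesn't
// implement it, so feature-detect the constructor and fall back to a
// per-tab dedicated worker instead of dropping multi-tab support.
function pickWorkerStrategy(scope: Record<string, unknown>): "shared" | "dedicated" {
  return typeof scope["SharedWorker"] === "function" ? "shared" : "dedicated";
}

// In a browser, the call sites then diverge roughly like this:
//   const port = pickWorkerStrategy(window) === "shared"
//     ? new SharedWorker("sync.js").port   // one worker for all tabs
//     : new Worker("sync.js");             // one worker per tab
console.log(pickWorkerStrategy(globalThis as Record<string, unknown>));
```

Detecting the constructor rather than sniffing the user agent means the fallback disappears automatically if Chrome on Android ever ships support.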


Either you care about being correct or you don't. If you don't care, then it doesn't matter whether you made it up or the AI did. If you care, then you'll fact-check before publishing. I don't see why this changes.


When things are easy, you're going to take the easy path even if it means quality goes down. It's about trade-offs. If you had to do it yourself, perhaps quality would have been higher because you had no other choice.

Lots of kids don’t want to do homework. That said, previously many would, because there wasn't another choice. But now they can just ask ChatGPT for the answers and write them down verbatim, with zero learning taking place.

Caring isn't a binary thing, nor does it work in isolation.


"Lots of kids don’t want to do homework"

Sure, but if you're a professional you have to care about your reputation. Presenting hallucinated cases from ChatGPT didn't go very well for that lawyer: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-...


That's a lawyer in an adversarial situation. Business consultants tell their clients what they want to believe, the facts be damned.


It sounds like AI doesn't really change that situation.


But the point is that it does, if you count making it worse as changing the situation.


I don't think it follows that taking an easier path means quality goes down.


what about tests?


Because maybe you want to, but you have a boss breathing down your neck and KPIs to meet, and you haven't slept properly in days and just need a win, so you get the AI to put together some impressive-looking graphs and stats for that client showcase that's due in a few hours.

Things aren't quite so black and white in reality.


I mean, those same conditions already lead humans to cut corners and make stuff up themselves. You're describing the problem where bad incentives/conditions lead to sloppy work, and that happens with or without AI.

Catching errors/validating work is obviously a different process when it comes from an AI rather than a human, but I don't see how it's fundamentally that different here. If the outputs are heavily cited, that might go some way toward making slip-ups easier to catch and correct.


Making it easier and cheaper to cut corners and make stuff up will result in more cut corners and more made-up stuff. That's not good.

Same problem I have with code models, honestly. We already have way too much boilerplate and bad code; machines to generate more boilerplate and bad code aren't going to help.


The technology also makes it easier and cheaper to make good things, so the direction of the outcome isn't guaranteed.


Yep, I agree with this to some extent, but I think the difference in the future is that all that stress will be bypassed and people will reach for the AI from the start.

Previously there was a lot of stress/pressure, which might or might not have led to sloppy work (some consultants are of high quality). With this, there will be no stress, which will (always?) lead to sloppy work. Perhaps there's an argument for the high-quality consultants using the tools to produce accurate, high-quality work. There will obviously be a sliding scale here. Time will tell.

I'd wager the end result will be sloppy work, at scale :-)


I think a lot about how differentiating facts and quality content is like separating signal from noise in electronics. The signal-to-noise ratio on many online platforms was already quite low. Tools like this will absolutely add more noise, and arguably the nature of the tools themselves makes the noise harder to separate.

I think this is a real problem for these AI tools. If you can't separate the signal from the noise, they don't provide any real value, like an out-of-range FM radio station.
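For what it's worth, the electronics measure the analogy borrows is easy to state; a quick sketch of the standard decibel formula (my illustration, not from the comment):

```typescript
// Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
// 0 dB means signal and noise carry equal power; negative values mean
// the noise dominates, i.e. the "more slop than content" regime.
function snrDb(signalPower: number, noisePower: number): number {
  return 10 * Math.log10(signalPower / noisePower);
}

console.log(snrDb(1, 1));  // equal power: 0 dB
console.log(snrDb(1, 10)); // ten times as much noise: about -10 dB
```

The practical point of the analogy: a receiver can recover a weak station with filtering, but below some SNR the cost of filtering exceeds the value of the signal, which is the failure mode the comment describes for platforms flooded with generated noise.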


Not only that: by publishing noise, you’re lowering the signal/noise ratio.


People are much less scrupulous using LLM output than making up stuff themselves, because then they can blame the LLM.


It's possible that you care, but the person next to you doesn't, and external pressures force you to keep up with the person who's willing to shovel AI slop. Most of us don't have a complete luxury of the moral high ground at our jobs.


It's the higher-ups' fault, then, for not caring about quality. Either you assimilate into that low-quality management culture of AI slop, or you change jobs.


It looks like the moral high ground just became more in demand.


It's a bit like saying "my kids are going to hit themselves anyway, so it doesn't matter if I give them foam rods or metal rods".


Maybe this would make sense if you saw the whole world as "kids" that you had to protect. As an adult who lives in an adult world, I would like adults to have access to metal tools and not just foam ones.


I guess I can replace "kid" with "toddler" and add "unsupervised" at the end.


How hard it is to produce credible-looking bullshit makes a really big difference in these scenarios.

Consultants aren't the ones doing the fact-checking; that falls to the client, who ironically tends to assume the consultants did it.


Don't you think the problem of checking for correctness then becomes more insidious? We can now generate hundreds of reports that look very professional on the surface. The usual things that would tip you off that a person was careless aren't there: typos, poor sentence construction, missing references. Just more noise to pick through for signal.


> If you care then you'll fact check before publishing.

Doing a proper fact check is as much work as doing the entire research by hand; therefore, this system is useless to anyone who cares about the result being correct.

> I don't see why this changes.

And because of the above this system should not exist.


If 20% of people don't care about being correct, everyone else can deal with that. If 80% of people don't care about being correct, the rest of us will not be able to deal with that.

Same thing as misinformation. A sufficient quantitative difference becomes a qualitative difference at some point.


That is not at all how things work at Meta. The impact of the things you deliver as an engineer has a direct effect on your performance review. For better or for worse, that also means that engineers have a ton of leverage on deciding what to work on. It's highly unlikely that the engineers working on this were laughing at it while doing so.

Don't assume that you can simply pattern match because you've been at another big company. I've been at three, Meta being one of them, and they have all operated very differently.


How do you think it happened, then? Having also worked there, I find the OP's story makes total sense lol. If you're on a team with the charter to "make AI profiles in IG work," then you're just inevitably going to turn off your better judgement and make some cringy garbage.


I think the incorrect premise here was that engineers always know what a good product is. :) And I say that as an engineer myself. It's fully possible that the whole team was aligned on a product idea that was bad, it happens all the time. From my experience though, if there's any company where engineers don't just mindlessly follow the PMs and have a lot of agency to set direction, it's Meta. Might differ between orgs but generally that was my experience.


I suspect they wanted to be able to say "worked on AI at Facebook" on their resume and this was their way of doing it


I don’t think anyone took this seriously while building it, if that’s what you’re implying.

I’ve been at companies like this where you are told to build X, you laugh with your co-workers, and then get to work because you’re paid disgusting amounts of money to build stupid shit like this.

That’s part of why I quit to start my own company. It’s such an awful waste of resources.


Aider operates on your file tree/repo and edits and creates files in place, so it at least drastically reduces the copy/paste. This is a very different experience from using ChatGPT or Claude on the web. Still not ideal UX compared to having it in the IDE, to be clear.


When you do something that is extraordinarily hard, sometimes it takes longer than you expect. But now we're here: https://techcrunch.com/2024/08/20/waymo-is-now-giving-100000...


To be fair, is Waymo "only" AI? I'm guessing it's a composite of GPS (car on a rail), some highly detailed mapping, and then, yes, some "AI" involved in recognition and decision making. But the car isn't AGI, so to speak; it wouldn't know how to change a tyre, fix the engine, or drive somewhere the mapping data isn't yet available.


Where did I say that it's AGI? I was addressing the parent's comment:

> "Reminds me of autonomous vehicles a couple of years back".

I don't think any reasonable interpretation of "autonomous vehicle" includes the ability to change a tyre. My point is that sometimes hype becomes reality. It might just take a little longer than expected.


Ok, maybe I just never saw the hype; to me it was just another engineering and data challenge that was going to be solved one way or another.

