This thesis has existed since Cursor first started, and the gap between it and VSCode has only widened since then. It's worth spending some time thinking about why that may be before having such strong conviction about its demise.
You can't really name a list of features that Cursor has that Copilot doesn't. It's more that Cursor appears to heavily dogfood its features, while VSCode's Copilot seems to check the feature boxes, but each one sucks to use. The autocomplete popups are jarring. The Copilot agent doesn't seem to gather the correct context. They still haven't figured out tool calling. It's really something you have to try rather than judge from a checklist of features.
I think your knowledge is a bit outdated? Cursor definitely still has an edge, but VSCode's GitHub Copilot UI has come a long way, and using the same underlying models for both, the results are fairly similar and differ only in UX niceties.
I tried copilot agent like 3 weeks ago. If that much has changed since then, props to Microsoft.
Zed is very nice, it’s just a totally different workflow. I think people who work in a domain where AI is not particularly strong would be better off with Zed, since Cursor’s way of reviewing edits is a little clumsy.
What about on the speed front? VS Code's biggest problem is with how slow it is. I'd already be done and on to the next (and maybe the next thing after that) by the time it finally gets around to things. I like the concept, but I only have so much time in a day.
Yeah, idk what "gap" every Cursor user talks about. I installed Cursor, it didn't work on WSL, so I closed that chapter ASAP. Went to Windsurf, enjoyed it, but its credit usage scheme was very confusing; I nearly pressed the buy button until I went back to try Copilot.
Copilot is good enough; even the free tier gets whatever annoying tasks I don't want to do done. For anything more complex I already have a Gemini and ChatGPT subscription, so I just do the old copy-paste.
Haven't touched Tauri because of the cross platform issues. The major appeal of Electron to me is the exact control over the browser. I'm curious about the Rust integration though. I'm guessing they're doing something that provides better DX than something like https://github.com/napi-rs/napi-rs?
> Haven't touched Tauri because of the cross platform issues.
You were wise. That's the biggest issue plaguing the project right now.
> curious about Rust integration though
Tauri is written in 100% native Rust, so you write Rust for the entire application backend. It's like a framework. You write eventing and handlers and whatever other logic you want in Rust and cross-talk to your JavaScript/TypeScript frontend.
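For concreteness, a minimal sketch of what that cross-talk looks like (the `greet` command here is just a hypothetical example, roughly the shape of Tauri's standard boilerplate):

```rust
// src-tauri/src/main.rs -- the Rust "backend" of a Tauri app.
// A #[tauri::command] function becomes callable from the JS/TS frontend,
// e.g. invoke("greet", { name: "World" }) using @tauri-apps/api.
#[tauri::command]
fn greet(name: String) -> String {
    format!("Hello, {name}! Greetings from Rust.")
}

fn main() {
    tauri::Builder::default()
        // Register the commands the frontend is allowed to invoke.
        .invoke_handler(tauri::generate_handler![greet])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```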
It feels great working in Rust, but the webviews kill it. They're inferior browsers and super unlike one another.
If Tauri swapped OS webviews for Chromium, they'd have a proper Electron competitor on their hands.
Sounds easier/more reasonable the other way around. Aren't there already dedicated libs/bridges for Rust/Electron/Node for performance-heavy computations?
Either you care about being correct or you don't. If you don't care then it doesn't matter whether you made it up or the AI did. If you care then you'll fact check before publishing. I don't see why this changes.
When things are easy, you're going to take the easy path even if it means quality goes down. It's about trade-offs. If you had to do it yourself, perhaps quality would have been higher because you had no other choice.
Lots of kids don't want to do homework. That said, previously many would do it because there wasn't another choice. But now they can just ask ChatGPT for the answers and write them down verbatim, with zero learning taking place.
Caring isn't a binary thing, and it doesn't work in isolation.
Because maybe you want to, but you have a boss breathing down your neck and KPIs to meet and you haven't slept properly in days and just need a win, so you get the AI to put together some impressive-looking graphs and stats for that client showcase that's due in a few hours.
Things aren't quite so black and white in reality.
I mean, those same conditions already lead humans to cut corners and make stuff up themselves. You're describing the problem where bad incentives/conditions lead to sloppy work; that happens with or without AI.
Catching errors/validating work is obviously a different process when it's coming from an AI vs a human, but I don't see how it's fundamentally different here. If the outputs are heavily cited, that might go some way toward making it easier to catch and correct slip-ups.
Making it easier and cheaper to cut corners and make stuff up will result in more cut corners and more made up stuff. That's not good.
Same problem I have with code models, honestly. We already have way too much boilerplate and bad code; machines to generate more boilerplate and bad code aren't going to help.
Yep, I agree with this to some extent, but I think the difference is that in the future all that stress will be bypassed and people will reach for the AI from the start.
Previously there was a lot of stress/pressure, which might or might not have led to sloppy work (some consultants are of a high quality). With this, there will be no stress, which will (always?) lead to sloppy work. Perhaps there's an argument for the high-quality consultants using the tools to produce accurate and high-quality work. There will obviously be a sliding scale here. Time will tell.
I'd wager the end result will be sloppy work, at scale :-)
I think a lot about how differentiating facts and quality content is like differentiating signal from noise in electronics. The signal to noise ratio on many online platforms was already quite low. Tools like this will absolutely add more noise, and arguably the nature of the tools themselves make it harder to separate the noise.
I think this is a real problem for these AI tools. If you can’t separate the signal from the noise, it doesn’t provide any real value, like an out of range FM radio station.
It's possible that you care, but the person next to you doesn't, and external pressures force you to keep up with the person who's willing to shovel AI slop. Most of us don't have a complete luxury of the moral high ground at our jobs.
Maybe this would make sense if you saw the whole world as "kids" that you had to protect. As an adult who lives in an adult world, I would like adults to have access to metal tools and not just foam ones.
Don't you think the problem of checking for correctness becomes more insidious, then? We can now generate hundreds of reports that look very professional on the surface. The usual things that would tip you off that this person was careless aren't there: typos, poor sentence construction, missing references. Just more noise to pick through for signal.
> If you care then you'll fact check before publishing.
Doing a proper fact check is as much work as doing the entire research by hand, and therefore, this system is useless to anyone who cares about the result being correct.
> I don't see why this changes.
And because of the above this system should not exist.
If 20% of people don't care about being correct, the rest of everyone can deal with that. If 80% of people don't care about being correct, the rest of us will not be able to deal with that.
Same thing as misinformation. A sufficient quantitative difference becomes a qualitative difference at some point.
That is not at all how things work at Meta. The impact of the things you deliver as an engineer has a direct effect on your performance review. For better or for worse, that also means that engineers have a ton of leverage on deciding what to work on. It's highly unlikely that the engineers working on this were laughing at it while doing so.
Don't assume that you can simply pattern match because you've been at another big company. I've been at three, Meta being one of them, and they have all operated very differently.
How do you think it happened, then? Having also worked there the OP’s story makes total sense to me lol. If you’re on a team with the charter to “make AI profiles in IG work” then you’re just inevitably going to turn off your better judgement and make some cringy garbage.
I think the incorrect premise here was that engineers always know what a good product is. :) And I say that as an engineer myself. It's fully possible that the whole team was aligned on a product idea that was bad, it happens all the time. From my experience though, if there's any company where engineers don't just mindlessly follow the PMs and have a lot of agency to set direction, it's Meta. Might differ between orgs but generally that was my experience.
I don’t think anyone took this seriously while building it, if that’s what you’re implying.
I’ve been at companies like this where you are told to build X, you laugh with your co-workers, and then get to work because you’re paid disgusting amounts of money to build stupid shit like this.
That’s part of why I quit to start my own company. It’s such an awful waste of resources.
Aider operates on your file tree/repo and edits and creates files in place. So it at least lessens the copy/paste drastically. This is a very different experience than using ChatGPT or Claude on the web. Still not ideal UX compared to having it in the IDE, though, to be clear.
To be fair, is Waymo "only" AI? I'm guessing it's a composite of GPS (car on a rail), some highly detailed mapping, and then yes, some "AI" involved in recognition and decision making of course, but the car isn't an AGI so to speak? Like it wouldn't know how to change a tyre or fix the engine or drive somewhere the mapping data isn't yet available?
Where did I say that it's AGI? I was addressing the parent's comment:
> "Reminds me of autonomous vehicles a couple of years back".
I don't think any reasonable interpretation of "autonomous vehicle" includes the ability to change a tyre. My point is that sometimes hype becomes reality. It might just take a little longer than expected.
What's your point? Is it that one shouldn't attempt to enter a market just because it's difficult? Or are you trying to educate the founders about something obvious that they likely have already spent 1000x more time thinking about than you?
> Caffeine is a stimulant, which means it increases activity in your brain and nervous system. It also increases the circulation of chemicals such as cortisol and adrenaline in the body.
Extremely unlikely. Caffeine is one of the most studied substances on earth. It's not a secret that it causes a clear physiological response. There are tons of double blind placebo studies on this.
Hi Alan. Our target audience is people that write long form content. In order for the product to provide a great user experience for that audience we feel that we need to build an opinionated editor and integrate AI in an opinionated way into that editor. It’s hard for me to see how that is a product inside GitHub or Atlassian.
Notion (and others) is undeniably an incumbent that as you say can’t be hand waved away. With that said we are already getting feedback from paying customers that feel we’re better at some things, including a snappier editing experience and a more natural way to interact with the AI portion. I also think our chat integrates in a way that sets us apart.
If you really think you are in the target audience I would encourage you to give it a chance. We’re very open to feedback and if there’s anything specific that you think other products are doing better, we want to address it.
When I read the description I immediately thought of Ben Evans talking about what happens when your entire company is rendered into a feature by an incumbent [0] (FWIW I disagree with his attack on antitrust).
So, how are you going to make sure Type survives Microsoft adding ChatGPT to Word, or Substack offering something similar, etc.?