
I would love to hear from other people here, but I tried it for a while and dropped my subscription. For short things it was nice, but the larger the suggestion, the worse it got. I found myself constantly mentally debugging the output it suggested. I do not know if it was faster, but I was mentally exhausted by it, unable to go for long periods of time. Dumb code completion is predictable: I know I'll need to press down three times before it's even shown to me, enabling me to think ahead. With "smart" code completion I need to constantly stop my train of thought to process whatever it throws at me. I even tried to just trust the system, playing it fast and loose, not double-checking everything, but then it just produced bugs.

I'm a developer with 16 years of experience, currently working mainly with Python for API work.

I'd love to hear accounts from other people; please add your background if you feel comfortable. I want to see if there is some correlation with experience, programming languages or use cases.



I currently use Copilot, and 80% of the benefit I get from it is boilerplate and refactoring; the rest is using it as smart autocomplete, where I can zip through adding a bunch of properties or arguments. I use it for a lot of greenfield stuff, though, which is really where it shines, since a lot of that work is just standing up all the essential bits and pieces before you have to do anything complex. It's helped me massively with going from an idea to a working implementation, both by getting rid of a lot of the boring typing and by keeping me going when I started to get a bit run down or uninspired.

I can understand why you might not like it if you were using it for critical things that needed to be well planned and debugged before running. I don't find it very good at intricate work but that's ok with me since I want to really slow down at those points and think about what I'm doing.

As a side note, using ChatGPT with GPT-4 or even just GPT turbo is an amazing unblocker for projects where you need to use unfamiliar packages, APIs or languages. You can just talk to it about what you're trying to do and it'll provide you with great examples and explanations. It won't be right 100% of the time but it's right enough to get you unstuck and a lot faster than searching through docs or stackoverflow for a good answer. It helps to be very precise with your problem statement as well, like specifying the version of the package you want to use or a time frame. Those little prompt tricks remind me a lot of the Google-fu we had to learn to search effectively. I'm excited that Copilot is going to be moving to GPT-4 with chat built in, it'll unify the whole process.


(Software dev for 15+ years)

I'm using it for Typescript + NodeJS development. I find Copilot most valuable when it's something I'd need to Google anyway, like how to format a date string or how to do X in selenium. 8 out of 10 times the answer is right, and the other times it is at least interesting (gives me an idea of what to look for).

This quick feedback is _way_ faster than googling and keeps me in the IDE, and also just makes it more enjoyable to code when there is this "pair programming" partner that I can interact with via code/comments and it will generate ideas for me, even if they aren't all perfect.


I mostly code in JS/TS and Ruby, and find it pretty handy. Especially when I need to write some unit tests: I usually only write it("should work like this and like that"), and 90% of the time it generates a useful unit test [0].
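
As a rough illustration of that workflow (the module under test and the generated bodies are invented here, and it assumes a Jest-style runner; what Copilot actually produces varies):

    // You write the describe/it descriptions; Copilot tends to fill in the bodies.
    import { slugify } from "./slugify"; // hypothetical module under test

    describe("slugify", () => {
      it("should lowercase the input and replace spaces with dashes", () => {
        // A body like this is the kind of thing it suggests from the description:
        expect(slugify("Hello World")).toBe("hello-world");
      });

      it("should strip characters that are not alphanumeric or dashes", () => {
        expect(slugify("Hello, World!")).toBe("hello-world");
      });
    });

The more descriptive the it(...) string, the better the suggested body tends to be.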

Also, I find it very useful when I code in an unfamiliar environment, for example Python. While I usually know what I want to do, I'm not exactly familiar with the details of the syntax, or what a specific library API looks like. This is where it shines, in my experience. Huge time-saver.

[0] https://www.strictmode.io/articles/using-github-copilot-for-...


That's been my (limited) experience: you have to debug its output, and if you don't, you have problems later. Pasting Copilot's code into ChatGPT was sort of interesting but not really a time-saver, although pretty useful for understanding new concepts. Ultimately I'd still have to go read the documentation to actually understand how to use something new correctly, however. Not really sure it's worth the money.

Where it is pretty useful, I think, is in examining large chunks of poorly commented code, where you use Copilot to generate comments describing what the code is supposed to be doing, e.g. '# here we ...'.
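
To make that concrete (a contrived sketch; both the function and the completed comment are invented here), you type the comment stub above the opaque block and let it finish the explanation:

    function mystery(xs: number[]): number[] {
      // here we ...   <- you type the stub; Copilot completes something like:
      // here we walk the array in pairs and keep the larger element of each pair
      const out: number[] = [];
      for (let i = 0; i + 1 < xs.length; i += 2) {
        out.push(Math.max(xs[i], xs[i + 1]));
      }
      return out;
    }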


My trial ran out yesterday and I cancelled the subscription. It feels limiting on its own. There are three use cases, off the top of my head:

- autocomplete. This one is amazing, but I’m not willing to spend $10/month for only that

- generate code inline. The main purpose of copilot. It works okay but too often I feel like I’m faster if I google things myself. Perhaps I’m just a very fast googler and reader. Wouldn’t surprise me

- use a solution and adapt it to my own code. I don’t know how this could work without copilot having access to my browser and knowing what I just read somewhere. I’m very excited about this but right now copilot does not seem the right tool

But above all I cancelled for two reasons: it’s too slow and I can’t trust my privacy and code IP to be respected

Edit: I also feel there is a lot of secondary information lost. If I google I have multiple tabs and windows and (temporal) structure. I also learn about neighboring concepts via stackoverflow comments, or I learn about how to navigate the docs for whatever I’m doing right now.

With Copilot I am not exposed to all of this. Not yet.


> Edit: I also feel there is a lot of secondary information lost. If I google I have multiple tabs and windows and (temporal) structure. I also learn about neighboring concepts via stackoverflow comments, or I learn about how to navigate the docs for whatever I’m doing right now.

This! I think what we lose with these auto-completion systems is a diversity of opinions and options, and the ability to learn and evolve our ideas.

I don't always want the most commonly weighted average answer. I want to see, learn and try new things.

> But above all I cancelled for two reasons: it’s too slow and I can’t trust my privacy and code IP to be respected

I'm starting to look for GitHub alternatives for this reason. It's no surprise where they're getting all the great training data from: it's our code. Which might be OK, but now you won't even be able to innovate to make some money, because Copilot will be reading all your code and serving it back as suggestions.

I wonder if Microsoft trains it on their own proprietary code? :)


I haven't found it very useful. It's been useful for writing tests, but in most other stuff it's like an annoying person who's always finishing your sentences for you... wrong. It'll autocomplete stuff like file paths and just make up garbage, which messes up my IntelliSense autocomplete, which would actually get it right.

I'm on the trial and I'll probably cancel it before I have to pay. If it were easier to turn it on and off, or to use it only at specific moments, I might keep it, but as it is, it hasn't been great for me. I have 25 years of experience. Maybe if I were younger I'd like it, but at this point I'm usually not typing something unless I know what I want to type, so it's just a distraction.


Late reply, but I'm in the same boat with VSCode. God help anyone not using a typed language to more easily catch Copilot's BS; it regularly just makes up methods and interfaces that don't exist.

It interferes too much with the standard autocomplete/IntelliSense. I want to use Copilot in an on-demand fashion, for help with libs I'm not familiar with and for scaffolding, but the team seems dead set on having it always on and in your face; there are no settings to make normal autocomplete the default instead.

Maybe I can try toggling it on and off more aggressively. Perhaps Copilot X's prompt will be more useful for me. I don't know, but the experience is disappointing, especially compared to the potential.

Edit: Also, as I alluded to above, it's just plain wrong a lot. The inline suggestions are like 75% wrong, and I'm fighting to get my autocomplete to show up instead.


I used it for a while, but I found that too many suggestions are worthless, and having to consider them wastes more time than just writing the code myself. For the things it is useful for, like snippets, I found that ChatGPT is better anyway.


I don't know about the new developments, but with last year's Copilot, it's great as long as you don't have it on all the time: just turn it on when you want it to complete something. Unfortunately, VSCode didn't/doesn't make that easy -- you have to hack something together yourself:

https://github.com/orgs/community/discussions/7553#discussio...


Totally agree. I cancelled my subscription because the way the extension works (worked?) is just way too distracting, shoving (often wrong) suggestions in my face all the time.

It should have a mode where it only ever suggests a single best guess when I press a certain shortcut.


At this point, Copilot is as natural to me as autocomplete and syntax highlighting. Of course I can still write code without them, but it would feel really off and counterproductive.


I haven't tried it yet, but this is how I assumed I would feel.

Something I'm actually more optimistic/curious about is the potential for code analysis, like an advanced linter: "Tell me if you think you see any bugs in this code." That's something I would totally use even as an experienced programmer working in an older, slow-going codebase (maybe especially in those).
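
For what it's worth, you can already sketch that with the plain chat API. A minimal example in TypeScript, assuming the official openai npm package and an OPENAI_API_KEY in the environment; the model name, prompt, and file path are placeholders, not a recommendation:

    import OpenAI from "openai";
    import { readFileSync } from "node:fs";

    const client = new OpenAI(); // picks up OPENAI_API_KEY from the environment

    // Ask the model to act as an "advanced linter" over one source file.
    async function reviewFile(path: string): Promise<string> {
      const source = readFileSync(path, "utf8");
      const response = await client.chat.completions.create({
        model: "gpt-4", // placeholder; use whichever model you have access to
        messages: [
          {
            role: "system",
            content:
              "You are a code reviewer. Point out likely bugs with line references; " +
              "if you see none, say so.",
          },
          {
            role: "user",
            content: "Tell me if you think you see any bugs in this code:\n\n" + source,
          },
        ],
      });
      return response.choices[0]?.message?.content ?? "(no answer)";
    }

    reviewFile("src/legacy-module.ts").then(console.log); // hypothetical path

The obvious limitation is context length, which is why the larger-context models matter for bigger files.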


That's exactly what I'm waiting for, either with Copilot X or GPT-4 32k API access.


It mostly just gets in my way by being frequently wrong and disabling my IDE's autocomplete. If I weren't getting it for free, I wouldn't pay for it.


I also didn't find it very helpful. It would constantly suggest code that didn't even compile, e.g. calling setters for fields that weren't there. I spent so much time reading the suggestions, or deleting them after using them and finding out they were bullshit, that I didn't save any time at all.


Started using Copilot this week as a trial, and my experience so far echoes yours. I'm a Ruby dev doing lots of API work, and I'm spending way more time mentally debugging all the hallucinated variables and functions it vomits out, and fixing incorrect interfaces, than I'd spend just writing the code myself. I find it way more exhausting.

IntelliJ's inspections and refactorings blow Copilot out of the water; it's not even a contest.


Copilot is fantastic for generating TS prop types on React components, or for guessing the right type for a library object.
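
A rough sketch of the first case (the component and its props are invented for illustration): you write the component with destructured props and a bare interface line, and Copilot usually proposes the field types from how the props are used:

    import React from "react";

    // Copilot will typically suggest an interface like this, one field at a
    // time, inferred from how the props are used in the component body below.
    interface UserCardProps {
      name: string;
      email: string;
      onSelect?: (email: string) => void;
    }

    export function UserCard({ name, email, onSelect }: UserCardProps) {
      return (
        <div onClick={() => onSelect?.(email)}>
          <strong>{name}</strong> ({email})
        </div>
      );
    }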


I haven't used copilot but your experience sounds exactly like what I would expect. Since AI is based on prediction, it makes sense that broader predictions would be less accurate. I think stringing together output from a lot of smaller predictions would yield better results. Which, at the end of the day, means that a human + AI will always be more productive than AI on its own. At least for the foreseeable future.



