My experience is that Copilot is basically a better autocomplete, but anything beyond a three-liner drifts away from the current context, making the answer useless: it doesn't follow the codebase's conventions, uses packages that aren't present, misses the big picture, and so on.
In contrast, Cursor is eerily aware of its surroundings: it can point out that your choice of name conflicts with one used elsewhere, or that your test is failing because of a weird config in a completely different place leaking into your suite, and so on.
I use Cursor without bringing my own keys, so it defaults to claude-3.5-sonnet, and I always use it in composer mode. I can't tell you with full certainty why it performs better, but I strongly suspect it's related to how it searches the codebase for context to feed the model.
It's gotten to the point that I frequently start tasks by dropping a Jira description, plus some extra info, directly into it and watching it work. It won't do the job by itself in one shot, but it surfaces entry points, issues, and small details in such a way that it's more useful to start there than from a blank slate, which is already a big plus.
It also works as a rubber-duck colleague: you can ask it whether a design is sound, where there's potential for refactoring, what the bottlenecks are, which boy-scout cleanups are worth making, and so on.