Maybe they want their own protocol and standard for file editing for training and fine-tuning their own models, instead of relying on Anthropic's standard.
Or it could be sunk cost: Cursor already has terabytes of training data collected with the old edit tool.
Maybe this is a flippant response, but I guess they are more of a UI company and want to avoid competing with the frontier model companies?
They also can’t get at the models directly enough, so anything they layer in would seem guaranteed to underperform and/or consume context instead of potentially relieving that pressure.
Any LLM-adjacent infrastructure they invest in risks being obviated before they can get users to notice/use it.
Why can I not authenticate into Google Antigravity?
Google Antigravity is currently available for non-Workspace personal Google accounts in approved geographies. If you're having trouble with a Workspace Google account (even one used for personal purposes), please try an @gmail.com email address instead.
Completely agree, and I've started writing more Roslyn analyzers to provide quick feedback to the LLM (assuming you're using it in something like VS Code that exposes the `problems` tool to the model).
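To make that concrete, a minimal analyzer looks roughly like this (a sketch; the rule ID and the `.Result` check are purely illustrative, not from a real codebase):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

// Flags ".Result" on Task so the warning lands in the `problems` list the
// model sees right away, instead of surfacing later at review time.
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class NoTaskResultAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "EX0001", title: "Avoid blocking on Task.Result",
        messageFormat: "Use 'await' instead of '.Result'",
        category: "Usage", defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(Analyze, SyntaxKind.SimpleMemberAccessExpression);
    }

    private static void Analyze(SyntaxNodeAnalysisContext context)
    {
        var access = (MemberAccessExpressionSyntax)context.Node;
        if (access.Name.Identifier.Text != "Result") return;

        // Only flag when the receiver really is a System.Threading.Tasks.Task.
        var type = context.SemanticModel.GetTypeInfo(access.Expression).Type;
        if (type is { Name: "Task" } &&
            type.ContainingNamespace?.ToDisplayString() == "System.Threading.Tasks")
            context.ReportDiagnostic(Diagnostic.Create(Rule, access.Name.GetLocation()));
    }
}
```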
I also want C# semantics even more closely integrated with the LLM. I'm imagining a stronger version of Structured Model Outputs that knows all the valid tokens that could be generated following a "." (including instance methods, extension properties, etc.) and prevents invalid code from even being generated in the first place, rather than needing a roundtrip through a Roslyn analyzer or the compiler to feed more text back to the model. (Perhaps there's some leeway to allow calls to not-yet-written methods to be generated.) Or maybe this idea is just a crutch I'm inventing for current frontier models and future models will be smart enough that they don't need it?
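Roslyn already has the building block for that allow-list, for what it's worth. A rough sketch of asking the semantic model which members may legally follow a dot (the code and names here are just to illustrate the idea, not a constrained decoder):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class DotAllowList
{
    static void Main()
    {
        var code = "class C { void M(string s) { var n = s.Length; } }";
        var tree = CSharpSyntaxTree.ParseText(code);
        var compilation = CSharpCompilation.Create("demo", new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        // Find the receiver "s" and ask the semantic model which symbols are
        // accessible on its type right after the dot -- i.e. the set of
        // member names a constrained decoder would be allowed to emit there.
        var receiver = tree.GetRoot().DescendantNodes()
            .OfType<MemberAccessExpressionSyntax>().First().Expression;
        var type = model.GetTypeInfo(receiver).Type;

        var allowed = model
            .LookupSymbols(receiver.Span.End, container: type,
                           includeReducedExtensionMethods: true)
            .Select(s => s.Name).Distinct().OrderBy(n => n);

        Console.WriteLine(string.Join(", ", allowed)); // Contains, Length, Substring, Trim, ...
    }
}
```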
Storing issues in a local SQLite database reminded me of Fossil SCM, which natively supports "tickets" as a source code artifact (and which uses a SQLite database under the hood): https://fossil-scm.org/home/doc/trunk/www/bugtheory.wiki.
We self-host GitHub using GitHub Enterprise Server. It is a mature product that requires next-to-no maintenance and is remarkably stable. (We did have a period of downtime caused by running it on a VM that was underprovisioned for our needs, but since resolving that it hasn't had problems.)
Of course we have a small and mostly unchanging number of users, don't have to deal with DDoS attacks, and can schedule the fairly-infrequent updates during maintenance windows that are convenient for us (since we don't need 100% availability outside of US working hours).
I don't have the metrics in front of me, but I would say we've easily exceeded github.com's uptime in the last 12 months.
> Things start to go sideways when you have tens of thousands of users.
If that’s really the case, run another GitHub instance then. Not all tens of thousands of users need access to the same codebases. In the kind of environment described someone would want identity boundaries established around each project anyway…
It’s fairly stable, but with a large codebase I’ve seen it take a day-plus to rebuild the search index, not to mention that GHES relies on GitHub.com for the allowed-actions-list functionality, which is a huge PITA. It should not rely on the cloud-hosted version for any functionality. That having been said, I don’t think there’s much of an alternative, and I quite like it.
You don't have to manage access to Actions that way.
On GHES you can use https://github.com/actions/actions-sync/ to pull the actions you want down to your local GHES instance, turn off the ability to automatically use actions from github.com via GitHub Connect, and use the list of actions you sync locally as your whitelist.
My employer did this for years. It worked very well. Once a day, pull each action that we had whitelisted into GHES and the runners would use those instead of the actions on github.com.
I would have thought if you had tens of thousands of developers all needing access to the same git repos, then you'd probably have a follow-the-sun team of maybe 50 or 100 engineers working on your git infra.
> Things start to go sideways when you have tens of thousands of users.
Hm, not really. I manage the GHES instance at my employer and we have 15k active users. We haven't needed to scale horizontally yet.
GHES is amazingly reliable. Every outage we have ever had has been self-inflicted; either we were too cheap to give it the resources it needed to handle the amount of users who were using it, or we tried to outsmart the recommended and supported procedures by doing things in a non-supported way.
Along the way we have learned to never deviate from the supported ways to do things, and to keep user API quota as small as possible (the team which managed this service prior to my team would increase quota per user anytime anyone asked, which was a capital-M Mistake.)
I was the administrator of a GitHub Enterprise Server instance back in 2015-2016 (I think 2014 too).
Rock-solid stability, for a company with 300+ microservices, 10+ big environments, 50+ microenvironments, and who knows how many Jenkins pipelines (more than 900, I’ll tell you that). We deployed several times a day; each service averaged about 3 deployments a week.
I think GitHub (the public github.com) should, as a company, do better, much better, given this is happening more frequently of late. But if big companies (even medium ones) don’t have their own package caches, they are all in for a ride.
At a previous startup we had GitHub + GitHub Actions, and we were on AWS. We set up an OCI image cache. Sure, if GitHub went down we could not deploy new stuff, but at least it wouldn’t take us down. If we really needed the pipelines, I suppose we could have set up some backup CLI or AWS CodePipeline (eww) workflows.
I'm in Sydney, and it's the first column in the UI by default for me. However, I still can't add Auckland, Johannesburg, or any other Australian cities I've tried.
> [Dr. Jonathan] Anomaly is a well-known figure in a growing transatlantic movement that promotes development of genetic selection and enhancement tools.
There's an ongoing issue with ranking in Google Search that's affecting a large number of search results. We've identified the root cause and this issue is unrelated to the ongoing core update rollout.
I built my own simple coding agent six months ago, and I implemented str_replace_based_edit_tool (https://platform.claude.com/docs/en/agents-and-tools/tool-us...) for Claude to use; it wasn't hard to do.
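The core of the str_replace command is just an exact-match, single-occurrence text replacement. Roughly something like this (a simplified sketch, not the full tool; error messages are illustrative):

```csharp
using System;
using System.IO;

static class StrReplaceEdit
{
    // Applies the tool's str_replace command: old_str must match exactly once,
    // which is what keeps the model's edits unambiguous.
    public static string Apply(string path, string oldStr, string newStr)
    {
        var text = File.ReadAllText(path);

        int first = text.IndexOf(oldStr, StringComparison.Ordinal);
        if (first < 0)
            return "Error: old_str was not found in the file.";
        if (text.IndexOf(oldStr, first + 1, StringComparison.Ordinal) >= 0)
            return "Error: old_str matched more than once; include more surrounding context.";

        File.WriteAllText(path, text.Remove(first, oldStr.Length).Insert(first, newStr));
        return "OK: replacement applied.";
    }
}
```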