Starlink has been deployed on JSX for almost a year now, and I've taken quite a few flights on their Bay Area to LA and Vegas routes. Despite there being about 20 people on the plane, no one has ever been on a video conference, though I could see that becoming an issue with a broader consumer base.
I just did an Antarctic cruise that had Starlink. While it sucked to have internet access - like, I wish I could've been fully disconnected - it did allow me to run my stuff remotely and ultimately spend more time out there.
Hi, CISO at Anthropic here. Sorry that we didn't respond to your BAA request. I am accountable for our response to BAA requests, and I'd like to dig into what happened here. If you are comfortable, would you please reach out to me at j@anthropic.com and let me know how you sent in your request?
Hi, Anthropic is a three-year-old company that, until last week's release of GPT-4o from a company that is almost ten years old, had the most capable model in the world, Opus, for a period of two months. With regard to availability, we had a huge amount of inbound interest in our first-party (1P) API, but our model was consistently available on Amazon Bedrock throughout the last year. The 1P API has been open to everyone for the last few months.
No open-weights model is currently in the performance class of the frontier models: GPT-4*, Opus, and Gemini Pro 1.5, though it's possible that could change.
We are structured as a public benefit corporation formed to ensure that the benefits of AI are shared by everyone; safety is our mission, and we have a board structure that puts the Responsible Scaling Policy (RSP) and our public benefit mission at the fore. We have consistently communicated publicly about safety since our inception.
We have shared all of our safety research openly and consistently; our dictionary learning work, in particular, is a cornerstone of that sharing.
The ASL-3 benchmark discussed in the blog post concerns potential future harms such as bioweapons development and offensive cyber capabilities. We agree that information already available through web searches is not a harm that LLMs increase, and we state that explicitly in the RSP.
I’d encourage you to read the blog post and the RSP.
> We are structured as a public benefit corporation formed to ensure that the benefits of AI are shared by everyone; safety is our mission, and we have a board structure that puts the Responsible Scaling Policy (RSP) and our public benefit mission at the fore. We have consistently communicated publicly about safety since our inception.
Nothing against Anthropic, but as we all watch OpenAI become not so open, this statement has to be taken with a huge grain of salt. How do you stay committed to safety when your shareholders are focused on profit? At the end of the day, you have a business to run.
Our consistent position has been that testing and evaluations are the best way to govern actual risks: no measured risk, no restrictions. The White House Executive Order set the threshold for models of concern at 10^26 FLOPs of training compute, and there are no open-weights models at that threshold to consider. We support open-weights models, as we've outlined here: https://www.anthropic.com/news/third-party-testing . That document also talks specifically about how to avoid regulatory capture and how to build open, third-party evaluators. One thing we've been advocating for, in particular, is a National Research Cloud; the US has one such effort, the National AI Research Resource, which needs more investment and fair, open accessibility so that all of society has input into the discussion.
I just read that document and, I'm sorry, but there's no way it was written in good faith. You support open weights only as long as they pass tests that no open-weights model could possibly pass. I hope you are unsuccessful in stopping open weights from proliferating.