Hacker News | jasondclinton's comments

This is false.


Thanks for the report! We're addressing it urgently.


Seems to be working well now (in Claude Code 1.0.2). Thanks for the quick fix!


Starlink has been deployed on JSX for almost a year now, and I've taken quite a few flights on their Bay Area to LA and Vegas routes. Despite 20 people on each plane, no one has ever been on a video conference, though I could see it becoming an issue with a broader consumer base.


Tons of the Hawaiian flights have it now too. It's such a game changer. Most people are on YouTube, it looks like -- no video calls I've seen.


I just did an Antarctic cruise that had StarLink. While it sucked to have internet access - like, I wish I could’ve been fully disconnected - it did allow me to run my stuff remotely and ultimately have more time out there.

Pretty wild how well it works.


Hi, CISO at Anthropic here. Sorry that we didn't respond to your BAA request. I am accountable for our response to BAA requests and I'd like to dig into what happened here. If you are comfortable, would you please reach out to me at j@anthropic.com to let me know how you sent your request in?


It's available now! Sorry for the delay.


I don't see it yet on us-west-2 (bedrock -> model access). Do you mean another region? Or does the account have to be in a special canary group?


Have you tried our prompt generator? https://docs.anthropic.com/en/docs/build-with-claude/prompt-... . We've seen it improve performance.


The benchmark is intended to be zero-shot, so no tweaking.


Got it, thanks for the feedback!


It's live now! Sorry for the delay.


I still get

{"message":"Could not resolve the foundation model from the provided model identifier."}

on us-west-2.
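For anyone hitting the same "could not resolve the foundation model" error, a quick sanity check is to list which model IDs the region actually exposes before calling the model. This is just a sketch, assuming the boto3 SDK; the live AWS call is commented out so the snippet runs offline, and the sample response below is illustrative, not real output from us-west-2:

```python
# Sketch: check which Anthropic model IDs a Bedrock region exposes.
# Assumes boto3 is installed and credentials are configured. The live
# call is commented out so this runs without network access:
#
#   import boto3
#   resp = boto3.client("bedrock", region_name="us-west-2").list_foundation_models()

def anthropic_model_ids(resp):
    """Return the model IDs whose provider is Anthropic."""
    return [
        m["modelId"]
        for m in resp.get("modelSummaries", [])
        if m.get("providerName") == "Anthropic"
    ]

# Illustrative sample using the response shape list_foundation_models returns:
sample = {
    "modelSummaries": [
        {"modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
         "providerName": "Anthropic"},
        {"modelId": "amazon.titan-text-express-v1",
         "providerName": "Amazon"},
    ]
}
print(anthropic_model_ids(sample))
```

If the model ID you're invoking isn't in that list, the problem is region availability or account-level model access rather than your request.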


Hi, Anthropic is a 3-year-old company that, until the release of GPT-4o last week from a company that is almost 10 years old, had the most capable model in the world, Opus, for a period of two months. With regard to availability, we had a huge amount of inbound interest in our 1P API, but our model was consistently available on Amazon Bedrock throughout the last year. The 1P API has been available to all for the last few months.

No open weights model is currently within the performance class of the frontier models: GPT-4*, Opus, and Gemini Pro 1.5, though it’s possible that could change.

We are structured as a public benefit corporation formed to ensure that the benefits of AI are shared by everyone; safety is our mission, and we have a board structure that puts the Responsible Scaling Policy and our policy mission at the fore. We have consistently communicated publicly about safety since our inception.

We have shared all of our safety research openly and consistently. Dictionary learning, in particular, is a cornerstone of this sharing.

The ASL-3 benchmark discussed in the blog post concerns upcoming harms, including bioweapons and offensive cybersecurity capabilities. We agree that information already available via web search is not a harm increased by LLMs, and we state that explicitly in the RSP.

I’d encourage you to read the blog post and the RSP.


> We are structured as a public benefit corporation formed to ensure that the benefits of AI are shared by everyone; safety is our mission, and we have a board structure that puts the Responsible Scaling Policy and our policy mission at the fore. We have consistently communicated publicly about safety since our inception.

Nothing against Anthropic, but as we all watch OpenAI become not so open, this statement has to be taken with a huge grain of salt. How do you stay committed to safety, when your shareholders are focused on profit? At the end of the day, you have a business to run.


That’s what the Long Term Benefit Trust solves: https://www.anthropic.com/news/the-long-term-benefit-trust . No one on that board has a financial interest in Anthropic.


Our consistent position has been that testing and evaluations would best govern actual risks: no measured risk, no restrictions. The White House Executive Order set the models of concern at those trained with 10^26 FLOPs of compute, and there are no open weights models at this threshold to consider. We support open weights models, as we've outlined here: https://www.anthropic.com/news/third-party-testing . We also talk specifically about how to avoid regulatory capture and how to have open, third-party evaluators. One thing we've been advocating for, in particular, is a National Research Cloud; the US has one such effort, the National AI Research Resource, which needs more investment and fair, open accessibility so that all of society has input into the discussion.


I just read that document and, I'm sorry, but there's no way it was written in good faith. You support open weights only as long as they pass impossible tests that no open weights model could pass. I hope you are unsuccessful in stopping open weights from proliferating.


You're the first person I've run into who has heard the podcast. Thank you for listening! Glad that it was informative.


Oh hey, you're the guy! Thanks for doing the pod; I found it informative. I can't listen to enough about this stuff. Are there any others that you recommend?

