Hacker News | error9348's comments

Would make interaction easier for AI agents on the web.


And screen readers.


It would be great if all NeurIPS talks were accessible for free like this one. I understand they generate some revenue from online ticket sales, but it would be a great resource. Maybe some big org could sponsor it.


They are, on a one-month delay: https://slideslive.com/neurips-2023 was last year's.

So if you have the patience, wait; if not, pay. Fair?

We did this for ai.engineer too, except we believe in YouTube a bit more for accessibility/discoverability. https://www.youtube.com/@aiDotEngineer/videos


Wow, thanks for the correction. Didn't know this existed; to be fair, when I tried last year I only found a preview, and paid up.


sweet! thanks for posting


> Documents reviewed by 60 Minutes show OpenAI agreed to pay SAMA $12.50 an hour per worker, much more than the $2 the workers actually got, though SAMA says what it paid is a fair wage for the region.

Let me guess. SAMA had the classic SKU/$/hr SaaS pricing.


Post hoc ergo propter hoc


Original source seems to be https://archive.is/wwoQZ


Devs need to build security in pervasively (the way ops has been built into deployments).

  * Canonical, RedHat and others have confirmed the severity, a 9.9, check screenshot.
  * Devs are still arguing about whether or not some of the issues have a security impact.
> I've spent the last 3 weeks of my sabbatical working full time on this research, reporting, coordination and so on with the sole purpose of helping and pretty much only got patronized because the devs just can't accept that their code is crap - responsible disclosure: no more.

With a confirmed 9.9 there's no need to argue: get the top priorities done, and work on the others on the chance that they need to be released as well. The act of working on them will usually give a clear answer as to whether there could be a security impact. Don't have armchair debates. You can't find loopholes if your mindset is that there are none.


Will these notifications be paused (or have they been already) while the decision is being reconsidered?


> by using CSS for displaying the numbers, you put the validity of the documents in the hands of the rendering engine.

Any automation tool could introduce a regression that generates invalid numbering. At any rate, I doubt a court would ignore mens rea.
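
To be concrete, a cheap guard against that kind of regression (a hypothetical sketch, not anything from the actual documents) would be to check the text extracted from the final render for sequential numbering:

    import re

    def numbering_is_valid(rendered_text: str) -> bool:
        """Return True if paragraph numbers in the rendered output are
        strictly sequential starting at 1 (hypothetical sanity check)."""
        numbers = [
            int(m.group(1))
            for line in rendered_text.splitlines()
            if (m := re.match(r"\s*(\d+)\.\s", line))  # lines like "12. ..."
        ]
        return numbers == list(range(1, len(numbers) + 1))

    # Run against text extracted from the final rendered document.
    assert numbering_is_valid("1. First claim\n2. Second claim\n3. Third claim\n")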


The interface looks very Apple as well. It looks like you create a config file for a model you already have in mind, set the hyperparameters, and it gives you a simple interface. How useful is this to researchers trying to hack the model architecture?

One example: https://github.com/apple/corenet/tree/main/projects/clip#tra...
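
For context, the config-driven pattern generally looks something like this (a hypothetical Python sketch of the pattern, not CoreNet's actual API):

    # Hypothetical sketch: hyperparameters live in a config (in practice a
    # YAML file) and the framework maps them onto registered builders.
    cfg = {
        "model": {"name": "clip_vit_b16", "embed_dim": 512},
        "optimizer": {"name": "adamw", "lr": 5e-4},
        "training": {"epochs": 30, "batch_size": 1024},
    }

    def build_and_train(cfg: dict) -> None:
        """Hypothetical entry point: picks builders by name from the config
        and runs the training loop behind a simple interface."""
        print(f"building {cfg['model']['name']} with lr={cfg['optimizer']['lr']}")
        # ... framework internals hidden behind the config interface ...

    build_and_train(cfg)

Tweaking hyperparameters then means editing the config; changing the architecture means going behind that interface, which is the question above.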


Not much. But if you just want to adapt/optimize hyperparams, this is a useful approach, so I can certainly see a possible, less technical audience for it. If you actually want to hack and adapt architectures, it's probably not worth it.


Jax trends on Papers with Code:

https://paperswithcode.com/trends


Was gonna ask "What's that MindSpore thing that seems to be taking the research world by storm?" but I Googled and it's apparently Huawei's open-source AI framework. Going from 1% to 7% market share in 2 years is nothing to sneeze at - that's a growth rate similar to Chrome or Facebook in their heyday.

It's telling that Huawei-backed MindSpore can go from 1% to 7% in 2 years, while Google-backed Jax is stuck at 2-3%. Contrary to popular narrative in the Western world, Chinese dominance is alive and well.


>It's telling that Huawei-backed MindSpore can go from 1% to 7% in 2 years, while Google-backed Jax is stuck at 2-3%. Contrary to popular narrative in the Western world, Chinese dominance is alive and well.

MindSpore has an advantage there because of its integrated support for Huawei's Ascend 910B, the only Chinese GPU that comes close to matching the A100. Given the US banned export of A100 and H100s to China, this creates artificial demand for the Ascend 910B chips and the MindSpore framework that utilises them.


No, MindSpore is rising because of the chip embargo.

No one is going to use stuff whose supply could be cut off one day.

This is one signal of why Huawei was listed by Nvidia as a competitor in 4 out of 5 categories in Nvidia's earnings report.


Its meteoric rise started well before the chip embargo. I've looked into it: it liberally borrows ideas from other frameworks, both PyTorch and Jax, and adds some of its own. You lose some of the conceptual purity, but it makes up for it in practical usability, assuming it does what it says on the tin, which it may or may not.

PyTorch also has Ascend support as far as I can tell (https://github.com/Ascend/pytorch), so that support does not necessarily explain MindSpore's relative success. Why MindSpore is rising so rapidly is not entirely clear to me. Could be something as simple as preferring a domestic alternative that is adequate to the task and has better documentation in Chinese. Could be cost of compute. Could be both.

Nowadays, however, I do agree that the various embargoes would help it (as well as Huawei) a great deal. As a side note, I wish Huawei could export its silicon to the West. I bet that'd result in dramatically cheaper compute.


This data might just be unreliable. It had a weird spike in Dec 2021 that looks unusual compared to all the other frameworks.


China publishes a looooootttttt of papers. A lot of it is careerist crap.

To be fair, a lot of US papers are also crap, but Chinese crap research is on another level. There's a reason a lot of top US researchers are Chinese - there's brain drain going on.


When I looked into a random sampling of these uses, my impression was that a common kind of project in China is to take a well-known paper (or another repo) and reimplement it in MindSpore. That accounted for the vast majority of the implementations.


Note that most of Jax’s minuscule share is Google.


Q3-7 & Q3-5d get at the workability. I don't think OpenAI responds to that part of the RFC. Meta's comment on that issue seems fairly clear: they oppose the proposed KYC rules for IaaS and are "not aware of technical capabilities that could not be overcome by determined, well-resourced, and capable actors".

https://www.ntia.gov/sites/default/files/publications/open_m...

https://about.fb.com/wp-content/uploads/2024/03/NTIA-RFC-Met...


The fact is, though, that every corporate actor in this entire landscape is just playing their hand. Anybody's stance on anything at any given moment doesn't mean they're more or less ethical; the moment they perceive a strategic benefit to walling everything off that would surpass the PR cost, they will. They've probably already got PR folks workshopping angles for the press release.


This is true to a degree, though there are high-profile actors such as Yann LeCun who have ethical boundaries. Yann wants AI to be open source and available to all, and he's straight up said that he won't work for a company that doesn't follow this principle. Zuck might not have a hand to play in terms of AI products, but even if he did, he'd have to tread carefully, because the guy who sets his whole AI direction and stewards all their research would 100% walk if he wasn't happy with the ethical direction of the company.


Same could be said for Ilya, once upon a time?

