It could be key spraying, perhaps targeting a particular organization with distributed infrastructure for which the attacker already has some keys, but more likely it's groups blasting default keys (e.g. keys shipped in the firmware of some crappy IoT devices) to build a quick botnet.
The term 1day is not uncommon, not only among exploit vendors on either side of the law (that is, those selling to crime groups and those selling to governments) but also in the general threat intelligence and broader cybersecurity community. It doesn't stand out as strange at all, really. However, I don't believe their particular definition aligns with the typical/colloquial usage of the term, which is usually more direct: a very new vulnerability that is unlikely to have been broadly patched yet.
As far as the term 0day goes, I don't think there's much debate or many contrarian opinions to be had. The only room for argument I see is between defining it as an undisclosed vulnerability unknown to the vendor versus a vulnerability known to the vendor but not yet patched, which is basically what this article defines a 1day as.
Either way, it comes down to nitpicking the nuances of how commonly used terms are defined. I don't think there's much meaningful discussion to be had.
I would love it if (_consistently available_) $15/day street parking were a thing in Manhattan; it'd be a good deal cheaper than garages and obviously a lot more convenient than keeping your car elsewhere. There isn't much benefit to having a car in Manhattan for day-to-day life, but it would be nice to have for things like day trips. Right now I park my car about 45 minutes away in another borough (at my family's house), so when I do need to drive, picking the car up and dropping it off adds a fixed ~90 minutes to the trip.
FWIW, Manhattan commercial real estate goes for about $80/ft²/yr. A parking space therefore costs about $40/day in real estate rent, so $15/day is well below costs.
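For a rough sanity check of that $40/day figure, here's the arithmetic as a small sketch; the ~180 ft² per space is my own assumption (roughly a standard stall, ignoring aisle and ramp overhead), not a figure from the comment above.

```python
# Back-of-the-envelope check of the ~$40/day figure above.
# Assumption (mine): ~180 ft^2 per parking space, roughly a standard
# 9' x 18' stall plus a little margin, ignoring aisle/ramp share.
RENT_PER_SQFT_YEAR = 80   # $/ft^2/yr, the Manhattan commercial rate cited above
SQFT_PER_SPACE = 180      # ft^2 per space (assumed)

annual_rent = RENT_PER_SQFT_YEAR * SQFT_PER_SPACE  # $14,400/yr
daily_rent = annual_rent / 365                      # ~$39/day

print(f"~${daily_rent:.0f}/day in rent per space")  # -> ~$39/day in rent per space
```

With aisle and ramp space included the per-space footprint (and hence the daily cost) would be even higher, so the point stands either way.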
Is that $80/ft²/yr raw land value? Indoor space cannot be directly compared against a spot in a garage that's not climate controlled and is more vulnerable to vandalism.
The current models would presumably remain accessible to customers regardless of OpenAI's state. If OpenAI were to hypothetically vanish into thin air, products and features built on its models could still be supported by Azure's offering.
Sure, but what's the point of building a product on top of a stable API that exposes a technology that won't evolve because its actual creators have imploded? It remains to be seen whether OpenAI will implode, but at this point it seems the dream team isn't getting back together.
The waiver still allows prompts to be logged for the specific purpose of abuse monitoring, with some limited retention period, right? How difficult is it to have that waived as well?
I don't really understand what safety work is or entails here, given OpenAI will surely not be the only group to achieve AGI (assuming any group does). What stops other companies from offering similar models with no (or just less) regard for safety/alignment, which may even be seen as a sort of competitive edge against other providers? Would the "safety work" being done or thought about somehow affect other eventual players in the market? Even regulation faces the same challenge, just with nations instead of companies, and AFAIK that was more Sam's domain than Ilya's.

It almost seems like acceleration for the sake of establishing a monopolistic presence in the market, preventing other players from becoming viable, and then layering safety in afterwards would give a better chance of safety long-term... but that of course also seems very unrealistic.

More broadly, if we're concerned with the safety of humanity as a species, we can't think about the safety problem on the timescale of individual companies or people, or even governments. I do wonder how Ilya and team are thinking about this.
The board, like any, is a small group of people, and in this case a small group divided into two sides with conflicting ideological perspectives. I imagine these board members have much broader and longer-term perspectives and considerations factoring into their decision making than the vast majority of other companies/boards. Generalizing doesn't seem particularly helpful.
Generalizing is how we reason, and having been on boards and worked with them closely, I can straight up tell you that's not how it works.
In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.
I am also no stranger to board positions. However, I have never been on the board of a non-profit that is developing technology with genuinely deep, and as-of-now unknown, implications for the status quo of the global economy and (at least as the OpenAI board clearly believes) the literal future and safety of humanity. I haven't been on a board where a semi-idealist engineer board member has played a pivotal (if not _the_ pivotal) role in arguably the most significant technical development in recent decades, and holds ideals and opinions completely orthogonal to the CEO's.
Yes, generalizing is how we reason, because it lets us strip away information that is irrelevant in most scenarios, reducing complexity and depth without losing much in most cases. My point is that this is not a scenario that fits in the set of "most cases." This is actually probably one of the most unique, corner-casey examples of board dynamics in tech. Adhering to generalizations without considering their applicability and the corner cases doesn't make sense.
I've been wondering if there's a chance the inevitable explosion of hyper-realistic disinformation and manipulation content (brought on by genAI significantly reducing the cost of, and barrier to entry for, very high-volume realistic multimedia production) could make the public digital information landscape so obviously polluted and cacophonous that even the most oblivious media consumers begin to lose trust in information from purely online sources (or at least social media). That would basically solve the problem of online disinformation efforts by destroying confidence in the medium altogether.
Disinformation and manipulation are already such a pervasive problem across all media and all sides of the political spectrum that I can't imagine the majority of people ever changing in the way you describe.