Notion is horrendous for this. Hiding every control behind an invisible hover target. No, I don't want my company documentation to have a minimalist aesthetic. I just want to use it.
This is a good story and something everyone should experience in their career, even if just for the lesson in humility. That said:
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
What? The point of cascading foreign keys is referential integrity. If you just leave dangling references everywhere, your data will either be horribly dirty or require inconsistent manual cleanup.
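For illustration, here's a minimal sketch of the tradeoff in SQLite (the customers/orders schema is made up): ON DELETE CASCADE silently removes the child rows, while ON DELETE SET NULL keeps them around with a null parent reference that something else has to clean up or at least tolerate.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked

    conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);

    -- CASCADE: deleting a customer silently deletes all of their orders.
    CREATE TABLE orders_cascade (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id) ON DELETE CASCADE
    );

    -- SET NULL: deleting a customer orphans the orders instead.
    CREATE TABLE orders_setnull (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id) ON DELETE SET NULL
    );

    INSERT INTO customers VALUES (1, 'alice');
    INSERT INTO orders_cascade VALUES (1, 1);
    INSERT INTO orders_setnull VALUES (1, 1);

    DELETE FROM customers WHERE id = 1;
    """)

    print(conn.execute("SELECT COUNT(*) FROM orders_cascade").fetchone())     # (0,)     rows gone
    print(conn.execute("SELECT customer_id FROM orders_setnull").fetchone())  # (None,)  row kept, reference nulled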
As I'm sure others have said: just use a test/staging environment. It isn't hard to set up even if you are in startup mode.
It reduces the vertical space by vastly increasing the horizontal space and inserting line noise into the syntax. I don't even understand what that code would do.
> Since February 2024, the National Institute of Standards and Technology’s (NIST) National Vulnerability Database (NVD) has encountered delays in processing vulnerabilities... caused by factors such as software proliferation, budget cuts and changes in support... NIST, an agency within the United States Commerce Department, saw its budget cut by nearly 12% this year.
Reading that article closely, it says nothing about an NVD budget cut, only a NIST one. They were tracking the changes after NIST's budget was cut, not NVD's. As pointed out below, CISA announced a cut, and then NIST more than made up for it by reallocating funds, resulting in an NVD funding increase even though NIST's overall budget was cut.
One of your references has budget numbers that are two orders (?!) of magnitude higher than the CISA number. Hopefully someone can chime in with granular historical data for NIST NVD and MITRE-via-NIST CVE funding.
It's not like they say: it's at least three different implementations, and I don't think any of them were Cloudflare, because I've been running into those pages for years and they've got captchas (functional or not). One of them was indeed Akamai, I think.
Yeah, I definitely don't want to pivot this thread into a product pitch, as the important thing is helping the open-source projects, but we can work with the maintainers to tune the systems to be as strict/lax as preferred. I'm sure the other services can too, to be fair.
The underlying issue is that many sites aren't going to get feedback from the real people they've blocked, so their operators won't actually know that tuning is required (also, the stricter the system, the higher the percentage of requests that will be marked as bots, which might lead an operator to want things to be even stricter...)
I will say -- a higher-end bot detection service should provide paper trails on the block actions it takes (this may not be available for freemium tiers, depending on the vendor).
But to your point, the real kicker is the "many sites aren't going to get feedback from the real people they've blocked" part, since those tools have inherently decided that the traffic was not human. You start getting into Westworld "doesn't look like anything to me" territory.
I'm not into Westworld so I can't speak to the latter paragraph, but as for the "high-end" vendors' paper trail: how do log files help uncover false blocks? Any vendor will be able to look up the request IDs printed on the blocking page, but how does that help?
You don't know if each entry in the log is a real customer until they buy products proportional to some fraction of their page load rate, or a real person until they submit useful content or whatever your site is about. Many people just read information without contributing to the site itself, and that's okay, too. A list of blocked systems won't help. I run a server myself, and I see legit-looking user agent strings doing hundreds of thousands of requests, crawling past every page in sequence; but if there weren't this inhuman request pattern and I just saw that user agent, IP address, and other metadata in a list of blocked access attempts, I'd have no clue whether the ban was legit or not.
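To make the pattern concrete, here's a rough sketch of spotting that kind of volume in an access log (the log path, format assumptions, and threshold are all made up): no single entry looks suspicious, but the per-client totals do.

    from collections import Counter

    # Toy scan of a combined-format access log: count requests per
    # (IP, user agent) pair and list the heaviest hitters.
    FLAG_AT = 100_000  # arbitrary "no human does this" threshold

    counts = Counter()
    with open("access.log") as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 6:
                continue  # not a combined-format line, skip it
            ip = parts[0].split()[0]
            user_agent = parts[5]
            counts[(ip, user_agent)] += 1

    for (ip, user_agent), n in counts.most_common(10):
        marker = "  <-- inhuman volume" if n >= FLAG_AT else ""
        print(f"{n:>8}  {ip}  {user_agent}{marker}")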
With these protection services, you can't know how much frustration is hiding in that paper trail, so I'm not blocking anyone from my sites; I'm making the system stand up to crawling. You have to do that anyway for search engines and traffic spikes like the ones from HN.
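A minimal sketch of what I mean by "stand up to crawling", assuming a Flask-style app (the route, cache policy, and numbers are just illustrative, not what any real site runs): repeated hits on the same page should mostly never reach the expensive rendering path.

    from functools import lru_cache
    from flask import Flask, make_response

    app = Flask(__name__)

    @lru_cache(maxsize=1024)
    def render_page(slug: str) -> str:
        # Stand-in for whatever expensive DB/template work the page needs.
        return f"<html><body>{slug}</body></html>"

    @app.route("/page/<slug>")
    def page(slug):
        resp = make_response(render_page(slug))
        # Let browsers, CDNs, and well-behaved crawlers reuse the response,
        # so a crawl or an HN-style spike mostly hits caches, not the app.
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp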