Hacker News | too_pricey's comments

This isn't true. I'm one of those people who tested remarkably well, and back in college would do fine on exams despite frantically copying all of my own (non-comp-sci) assignments. Better than my peers who knew more and helped me cram. Test anxiety is real.


I was a great test taker. I used to make a sort of game out of finishing tests in half the time of almost everybody else while acing them at the same time. I also never crammed, never attended pre-test study groups, and sometimes made a show of drinking beers right before the test just to annoy the people cramming at the last minute.

But I'm not particularly brilliant, in fact I wouldn't be terribly surprised if I have undiagnosed ADHD. My test taking performance trick, which I freely told everybody to their annoyance, was very simple. I knew the material! Read the assigned texts, do the optional homework, pay attention in class. If you know the material you don't have to try to cram it into your brain in the last half hour before the test. If you know the material you don't have to try to reason it out from first principles during the test. You just go in, fill out the easy answers straight away, go back and do a second pass for the tricky questions, and that's it. If you have to sit there wracking your brain for 30 minutes on a single problem it's because you already fucked up with how you approached the course weeks ago.

Again, I'm not special for this. There were a handful of other students who were as fast as me. We'd sit in the hall waiting for our friends, look at each other and say "you knew all this stuff too, huh?" "yeah of course"


Multiple concert venues in my city use these, so I interact with them all the time. They have replaced standard metal detectors, bag searches, and manual patdowns w/ hands and/or metal-detecting wands. Security checkpoints are the primary point of delay for getting into venues, and places that have rolled these out process people through about 95% faster. It's a huge difference. If it does trigger, you just get the manual patdown you would have gotten anyway, so a false positive doesn't cost any extra time.

The article and settlement seem to only mention the false positive rate, which is a bad thing to focus on. Every true negative is a much faster experience. Only subjecting 110 out of 3000 people to a longer search is a big improvement. Given the negative outcomes of a gun slipping through and the negligible cost of a false positive, we probably want it tuned to be more false-positive-prone anyway. We don't need these to detect guns THAT well, we just need them to weed out people who definitely don't have them.
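Rough back-of-envelope math on why the per-person time drops so much. Only the 110-of-3000 figure comes from the article; the per-person screening times below are invented for illustration:

```python
# Back-of-envelope checkpoint math using the 110-of-3000 figure.
# The per-person times are made-up illustrative values, not measurements.
flagged, total = 110, 3000

walkthrough_s = 3    # assumed: seconds for a clean walk-through
patdown_s = 45       # assumed: extra seconds for a manual patdown
old_search_s = 30    # assumed: seconds for the old search-everyone flow

# Old process: everyone gets roughly the same manual screening.
old_avg = old_search_s

# New process: most people walk straight through; only flagged
# people get the patdown on top of the walk-through.
new_avg = (flagged * (walkthrough_s + patdown_s)
           + (total - flagged) * walkthrough_s) / total

print(f"old ~{old_avg}s/person, new ~{new_avg:.2f}s/person")
print(f"throughput improvement: {1 - new_avg / old_avg:.0%}")
```

Even with a fairly trigger-happy scanner, the average per-person time collapses because the slow path only applies to the small flagged fraction.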

I do have concerns about what its false negative rate is relative to the standard practice it replaces. I do not really trust whatever pseudo-AI they're bolting to their metal detectors; it's probably easier to get a gun through. That said, the false negative rate probably wasn't good to begin with. TSA isn't great on their false positive rate, does more intense screening, and isn't being staffed by hungover 20-somethings. So maybe the false negative rate didn't actually increase by much?


>TSA isn't great on their false positive rate, does more intense screening, and isn't being staffed by hungover 20-somethings. So maybe the false negative rate didn't actually increase by much?

TSA is abysmal on the false negative rate for things that actually matter. The FNR for actual weapons and explosives is somewhere between 80 and 95%[1]. It's because they waste all of their attention looking for nail clippers and water bottles.

Even an FNR of 50% would be a massive improvement.

[1] https://abcnews.go.com/US/tsa-fails-tests-latest-undercover-...


An acceptable false negative rate really depends on the consequences of getting caught.

If someone's nefarious plan depends on smuggling a gun in, they want to be confident they won't be arrested or shot at the entrance. Even failing to detect 20% of firearms means there's an 80% chance they'll be caught before they can do whatever it is they plan on doing. This is also why it's important to have armed guards alongside the scanners. Scanners aren't very useful if the only armed person is the bad guy.

If the consequences of getting caught are negligible (as is the case for anyone trying to bring a box cutter through airport security), then the attacker can try as many times as they want without issue. Even if the false negative rate is low, they only have to get lucky once.
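A quick sketch of that compounding effect (the rates here are made up for illustration):

```python
# Probability an attacker gets a weapon through at least once, given a
# per-attempt false negative rate (chance a weapon is missed) and the
# number of attempts they can make without consequence.
def p_ever_through(fnr: float, attempts: int) -> float:
    # Each attempt is independent; detected with probability (1 - fnr).
    # P(at least one miss in n attempts) = 1 - (1 - fnr)^n
    return 1 - (1 - fnr) ** attempts

# One try against a scanner that misses 20% of weapons: caught 80% of
# the time, which is a real deterrent if getting caught means arrest.
print(round(p_ever_through(0.20, 1), 2))    # 0.2

# Ten consequence-free tries against the same scanner: the attacker
# eventually slips through almost 90% of the time.
print(round(p_ever_through(0.20, 10), 2))   # 0.89
```

So the same miss rate is either fine or useless depending entirely on whether a failed attempt costs the attacker anything.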

Annoyingly, I can't find any info about false positive/negative rates for various scanners. There doesn't seem to be the equivalent of Consumer Reports or Underwriters Labs for scanners. My guess is that the numbers must be pretty bad if companies aren't willing to go through public 3rd party testing.


The TSA's own security tests clearly show a significant percentage of guns, knives and explosives regularly get through. This is further confirmed by the number of travelers who, after arriving, discover the handgun they accidentally left in some pouch in their suitcase that was never detected.

> There doesn't seem to be the equivalent of Consumer Reports or Underwriters Labs for scanners. My guess is that the numbers must be pretty bad if companies aren't willing to go through public 3rd party testing.

Of course they are, but the main reason there's no publicly available objective testing isn't only that sellers don't want it. In reality, no stakeholder in the security market wants it. The vast majority of high-volume public security like airports, concerts and sporting events is largely unnecessary and mostly ineffective, but our current political/media environment requires appearing to "do something" to "make it safe". The Vice-President of "Make it (Seem) Safe" knows that their shareholders, politicians and the public aren't willing to pay more or be even more inconvenienced than they already are for 800% better "Make it (Seem) Safe"-ness.

Metaphorically speaking, the tiger repellent is working just fine, thank you. Those truly worried about tiger attacks feel safer and those being well-paid for preventing tiger attacks can claim virtually 100% effectiveness. So, if you start the world's best Tiger Repellent Testing Laboratory, you'll find a shocking lack of interest in buying your test data from both sellers and buyers in this brisk, profitable and growing market. Much like the lack of interest in objective testing data for lie detectors, astrology readings and placebo pills. The smaller minority of customers actually willing to pay more for improved detection (like Tel Aviv airport) do their own in-context performance testing anyway. In fact, a good proxy for doing your own effectiveness testing is available for free: just look at what those under constant active threat with real consequences actually pay for and do.


I unknowingly transported ammo both to and from Mexico. I used an old backpack that I had used as a range bag years ago. I found several .223 rounds while in Mexico, then, when I got back, even more .22 LR.



Failing to detect 20% of firearms is probably a really big deal. I would bet the majority of findings are just ordinary people who forgot to leave their gun in the car (akin to how TSA used to take grandma's knitting needles away every flight).

So, if your process really only detects people not trying to bypass it and people not even in the wrong, then it's a problem.

Although I mean with the long lines at security you might as well just gun everybody down outside the stadium in that nice open area they are all packed into ...


I guess on a more general level, I'm confused why a metal detector with a bit of machine learning on top wouldn't make a better widget? You'd think that different-shaped metal objects produce different magnetic flux, and there are probably more dimensions to that than just size?
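For what it's worth, here's a toy sketch of that idea. Everything in it (the features, the labels, the numbers) is invented for illustration; real detector signals would be far messier:

```python
import math

# Pretend each object produces a signal summarized by two invented
# features: (peak flux disturbance, pulse width in ms).
training = {
    "keys":    [(0.20, 1.0), (0.30, 1.2), (0.25, 0.9)],
    "phone":   [(0.50, 2.0), (0.60, 2.2), (0.55, 1.8)],
    "handgun": [(0.90, 3.5), (1.00, 3.8), (0.95, 3.6)],
}

def centroid(points):
    # Mean feature vector for a class.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(sample):
    # Nearest-centroid classifier: pick the label whose average
    # signature is closest to the observed signal.
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

print(classify((0.92, 3.4)))   # closest to the "handgun" signature
print(classify((0.28, 1.1)))   # closest to the "keys" signature
```

If different objects really do have distinguishable flux signatures, even something this simple separates them; the hard part is whether the signatures are actually distinguishable at walking speed through a gate.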


If it's just a gate that you are walking through then it's all or nothing. Basically just a metal detector.

If you have a conveyor belt system and a CT scanner like setup then yes, you could build a better metal detector.


Theme parks are switching to these too, and it's oh so much nicer to just walk through and, if you're unlucky, have your bag searched.


> Security checkpoints are the primary point of delay for getting into venues, and places that have rolled these out process people through about 95% faster. It's a huge difference.

I assume expediting peak crowd throughput at low labor cost is the primary, if not entire, value of the device. I hate that it's being marketed dishonestly but I also assume most concert venue buyers know (or suspect) it probably doesn't work all that well in practice. However, in a concert context accurate detection isn't their main priority. They need to get more bodies per minute into the venue at lower cost while appearing to conduct security checks 'real' enough to act as a deterrent, so that those who care about getting 'caught' leave their knife or concealed carry handgun (or whatever) in the car.

The only hard and fast requirement is meeting the contractual security requirements of the venue and promoter's insurance carriers - because no insurance = no concert. It's a bonus if the 'security' also looks plausible enough to reassure the small fraction of perpetually fearful people statistically challenged enough to actually worry about terrorists or an active shooter killing them while at a Taylor Swift concert (as opposed to the infinitely more likely chance of dying in a car crash on the way to the concert).

In a perfect world, everyone would be rational and numerate enough that we wouldn't need to maintain the pretense of 'security theater' in contexts where actual security isn't necessary. But in the imperfect world we live in, I prefer having concert (and airport) security be as minimally disruptive and inexpensive as possible regardless of effectiveness (since it's unnecessary and mostly ineffective in those contexts anyway). I just wish companies would sell these products as 'security placebos' instead of lying about it because fraud is wrong.


I don't know anything about the devices that are being sold, but it doesn't seem impossible to me that with better signal processing from a metal detector, you could reduce false positives a bit while maintaining or slightly improving the false negative rate.


Sure, I agree that's an interesting and likely solvable technical problem. However, the vast majority of the addressable market in today's over-secured society don't really need improved detection. Concert venues, sports arenas and similar customers buy massive volume and they are much more concerned with faster throughput enabled by shorter cycle time and minimal false positives. Of course the head of security at Madison Square Garden can never publicly admit they don't care about better detection enough to pay more for it, but I'm confident the sales managers at these security vendors understand exactly what their largest market segments really care about.

Customers like Tel Aviv International Airport, who actually care to some meaningful extent about improved detection, are a small minority segment of the overall market. Creating new technical measures able to demonstrate improved performance in rigorous objective tests on the metrics these customers care about (some sweeter spot on the matrix of false pos, false neg, true pos, true neg, net throughput, cost) would be valuable but only to that small segment.


Note that I said lower false positives and equal or slightly better false negatives, which aligns with what you say customers want.

Of course I suspect venues really don't care about false negative rates much at all, so there's a big temptation for everyone to just turn sensitivity down.
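A tiny simulation of that sensitivity knob (the score distributions are completely made up): lowering the alarm threshold trades false negatives for false positives, and raising it does the opposite.

```python
import random

random.seed(0)
# Invented "threat score" distributions for harmless items vs weapons.
harmless = [random.gauss(0.3, 0.15) for _ in range(10_000)]
weapons  = [random.gauss(0.7, 0.15) for _ in range(10_000)]

def rates(threshold):
    # Alarm fires when score >= threshold.
    fp = sum(s >= threshold for s in harmless) / len(harmless)
    fn = sum(s <  threshold for s in weapons)  / len(weapons)
    return fp, fn

for t in (0.4, 0.5, 0.6):
    fp, fn = rates(t)
    print(f"threshold {t}: false positives {fp:.1%}, false negatives {fn:.1%}")
```

A vendor who is only graded on line speed has every incentive to sit at the high-threshold end of this curve, where false positives look great and nobody measures the misses.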


Very likely, but who's going to pay for the engineering efforts? If the customer doesn't care about having a better device, or a better device would actually make their job harder, then it's wasted effort on the part of the manufacturer.

Engineering exists to solve a problem. It's entirely likely that your definition of the "problem" differs from that of the paying customer.


> get more bodies per minute into the venue at lower cost while appearing to conduct security checks 'real' enough to act as a deterrent

Or they just need to convince most of their customers it will be safe to attend, while covering their ass by following "best practices" if something slips through, people get hurt and they get sued.


> convince most of their customers it will be safe to attend

Apparently, you've never met my Aunt Sue. She has a graduate degree in innumeracy with a minor in illiteracy and a specialization in worrying about whatever the media tells her to worry about. However, she always votes.

More seriously, it's not cost-effective to "convince most customers it will be safe enough to attend." The game theory around fallacious public perception makes it a losing proposition for a politician or company to ever appear to reduce security requirements because as soon as "rare bad thing happens", they will be blamed - even though their reduction in pointless measures had no bearing on it.

Most independent experts agree that securing cockpit doors in 2002 made subjecting every passenger to the TSA's increased security measures unnecessary and, objectively, a very poor ROI in both cost and disruption. However, the TSA will never, ever go away - even though it could and should. Not only is reducing security politically costly, the TSA is now a multi-billion dollar federal bureaucracy, paying hundreds of vendors with lobbyists and employing tens of thousands of unionized workers spread across the most populous congressional districts. Yes, this is frustrating.


They don't need to convince most of the public, just enough so they can fill up their venue. There are some Aunt Sues who are too afraid to go to concerts, but concert venues are still able to fill up when they have a popular act so it stands to reason that they're managing public perception of the risk well enough for their own needs.


> The only hard and fast requirement is meeting the contractual security requirements of the venue and promoter's insurance carriers

I think it would be a good idea to create an explicit carve out in the law saying that there is no premises liability for a property owner or event organizer due to a third party committing a crime.


So, do away with all negligent security cases?


Yes. In general, business owners aren't expected to prevent crimes against their customers. If someone attacks me at a bar or grocery store, I probably won't get very far trying to sue the owner for failing to check everyone for weapons on entry. I'm not sure I'd have more success with a concert venue, but it appears insurance companies perceive enough risk to demand certain procedures.

Codifying that expectation in law would reduce costly and obnoxious security theater. Of course, a business advertising a certain level of security could be sued for failing to provide it.


Ok, but it seems like a bit of a non-sequitur to say “business owners aren't expected to prevent crimes against their customers” when there's a body of law to the contrary.


Is there? In most US states, the concept of premises liability seems to be derived entirely from case law, not statute. Some states appear to have statutes limiting its scope, such as https://colorado.public.law/statutes/crs_13-21-115

Edit: to be clear, I don't think there's anything actually stopping someone from attempting to sue a bar or grocery store over a crime committed there, but it usually doesn't happen and would likely be an uphill battle for the plaintiff.


So what? It's not like common law has less effect. "Body of law" is understood by lawyers to include both common and statutory law.


The point is that it can easily be overridden with statute

"business owners meeting definition X are only liable in conditions Y"


This is a stupid point. “The current law can easily be overridden by passing a new law.”

No shit.


I think a better argument here is that common law/case law here is ambiguous enough to create a situation where there's an unreasonable and unpredictable risk for certain kinds of businesses.

Another problematic case this sort of liability leads to is hotels in Las Vegas routinely searching guest rooms after a lawsuit following the 2017 shooting. I don't think it's desirable to expect hotels to search rooms or to call the police if someone has "too much" luggage. That's paranoid, an invasion of privacy, and unlikely to prevent a future mass murder.

I don't want a world where I have to submit to searches to go anywhere or do anything, and I hope that's not a fringe position.


You say that as if statutes are any less ambiguous.


I was only responding to GP's statement that common law can be overridden by statute.

If I wanted to respond to the idea that premises liability should be eliminated then I would have responded to your first post.

And I actually do think that most people would call your position a fringe position once you actually start talking details like "but what about guns in schools?" If you truly believe that you shouldn't have to submit to a search to go ANYWHERE or do ANYTHING then you hold a fringe position.


My position is that it is undesirable for premises liability to lead to an increase in private businesses searching their guests, not that nobody should be searched anywhere for any reason. I do think it should be rare in practice: airports, probably; concerts, probably not absent some unusual threat; hotel rooms pretty much never.

It appears routinely searching students in public schools is fairly rare in the USA; under 8% use metal detectors[0]. That certainly does not mean they're allowed to bring guns, just that they probably won't be discovered if they do.

[0] https://nces.ed.gov/fastfacts/display.asp?id=334


No, your position is: "I think it would be a good idea to create an explicit carve out in the law saying that there is no premises liability for a property owner or event organizer due to a third party committing a crime." and "I don't want a world where I have to submit to searches to go anywhere or do anything."

This means that nobody should be searched when they go anywhere or do anything, and if they aren't and someone gets shot by a third party, stabbed by a third party, or mugged by a third party then there is no liability to the business/landowner in any case. Ever. Searches will not be rare in practice, they will not occur. Airports? Never. Concerts? Never. Hotel rooms? Never. Schools? Never.

And hence we've already established the problem with your position and why it's fringe. Not even you can realistically argue for your own positions without caveats. This is a great example of a motte and bailey fallacy.


I see the miscommunication now. It's possible to interpret what I wrote as "no places/activities should require searches", but what I meant was "not all places/activities should require searches". That's a bit hyperbolic of course since the resources don't exist to search everyone, everywhere, all the time. There has been an increase in recent years, and I would like it reversed.

I am opposed to premises liability being a motivation for anyone to conduct searches. Liability isn't the reason searches are conducted at airports or courthouses to give a couple examples, so eliminating it would not eliminate those searches. Businesses also might have other motivations, such as making their customers feel safer; if that outweighs customers finding it annoying or offensive, some of those would likely continue.

> someone gets shot by a third party, stabbed by a third party, or mugged by a third party then there is no liability to the business/landowner in any case. Ever.

This does correctly state my position.


I will concede the technical point: there is a body of law that sometimes expects business owners to prevent crimes and sometimes doesn't, with a whole lot of ambiguity about exactly what any given owner is actually expected to do. I think that ambiguity should be reduced by putting criminal acts out of scope.


The Phoenix Project has been very influential on me in my security career, at least partially because I share the name of the ineffectual CISO and want so desperately to avoid the link.

I think the book is still very applicable, and every security practitioner needs to be hit over the head with it (or at least The DevOps Handbook or Accelerate). Security generally is decades behind engineering operations, even though security is basically just a more paranoid lens for doing engineering ops; the ideas from Phoenix are still depressingly revolutionary in my field.


I actually wrote blog posts about two of my (least) favorites: [VPNs](https://securityis.substack.com/p/security-is-not-a-vpn-prob...) and [Encryption](https://securityis.substack.com/p/security-is-not-an-encrypt...). Thank you for pointing out that I don't link to them in the original post.

Password resets are definitely one; every single day I still have to tell prospects and customers that I can't both comply with NIST 800-63 and periodically rotate my passwords. Other ones I often counter include aggressive login requirements, WAFs, database isolation, weird single-tenancy or multi-tenancy asks, and anti-virus in places it doesn't need to be.


I wrote this! I'm excited to see this get attention here. I'll be responding to folks' comments where I feel like I have something to add, but please let me know if you have any questions or feedback!


There's certainly a lot of cargo cult security controls out there. One of the big issues is simply that it is very hard to change established practices. It takes a lot of effort, and senior people who are not security experts have to sign off on the "risk" of not doing what all their peers are doing.

There is one word I would change in your post title. Security has a useless controls problem, not security is a useless controls problem.


You're completely right re Drata as a company (we use a different compliance vendor, but very similar setup re the agent).

You're a bit off on whether this would fail a SOC2 audit, thankfully. As the OP said, they don't have access to production systems, which basically means you can treat that employee however you want from a SOC2 (and ISO, and most other control framework) perspective. The company OP is working for can state "We do not require these controls on contractors without production access" and that is totally fine for SOC2. Pushing back on the agent requirement is totally reasonable!


That depends on how they wrote their policies. If they were careful, they left themselves room in their policies to be flexible about people who don't have access to prod. If they weren't --- and lots of teams aren't --- then it's tricky to go back and say "oops I got that part of the policy wrong, the new policy says we can do whatever we want in this case". Again: the real thing SOC2 is assessing is consistent enforcement and monitoring. It's not a "security audit".


Do you think? I wasn't sure because although he doesn't have access to production systems a lot of controls are around access to the code, e.g. Github.

But quite possibly you are right.


I've been through SOC2 (sat in with auditors and walked them through pretty much all of our stuff around source code and testing and building things). SOC2 is very much a "do you have policies for x, y and z" and "are you actually implementing those policies", with a VERY HEAVY emphasis on "are you doing what you say you'll do". There's nothing that says "You must monitor any place your source code could exist", but there's plenty that says "You must have a policy for change management" and stuff like that. And you'll get dinged hard if you have a policy that says "We monitor every device that has our source code on it" and then turn around and have contractors you don't monitor.

That said, it's also completely trivial (on the auditor side) for them to say "Oh, we're changing this policy to 'We monitor devices with production access'". Good luck pushing for that to happen as a contractor, though...


My understanding is that it's not completely trivial to make these kinds of policy changes once you get past your Type 1. This would be a nitpick except that it implies something important about how you should handle SOC2: don't be ambitious or expansive in your Type 1 audit, and leave yourself room to see what's going to work long term. This is something I've seen a lot of people mess up.


I'm with you. The focus on this line also ignores context.

Gladwell is directly quoting Nassim Taleb, and openly says how little he follows the math Taleb discusses earlier in the same chapter. The point is "Look at how smart Taleb is, I don't even know what half these words mean", he was never trying to understand the math or imply he did. In that context mis-transcribing eigenvalues doesn't feel nearly as damning as it's made out to be around these parts?


They're a non-profit, so their financials are publicly disclosed. ProPublica's most recent filing for them is from 2018, but here were the financials then: https://projects.propublica.org/nonprofits/display_990/82450...


That’s helpful, thanks.

So they’re 4mil in the hole? How is it possible that they’re still running?


They have a $100 million donation from Acton


That wasn't a donation, it was a loan.


(A zero-interest 50-year loan, not a bona fide "I want my money back" loan.)


A fifty-year, interest-free loan is functionally a donation, IMO


You're not allowed to mention that, we have to pretend it's a charity.


Weird that it isn't listed in the data in that document.

I have no idea why anyone would donate $20 when they're sipping on $100m ...


$100 million given current growth won't last as long. Telegram at 4 years had a run rate of $1 million per month for servers and dev costs. At that time they had about 200 million users.

Signal is using AWS & GCP (for cloud fronting); they could be approaching that spend level.


> $100 million given current growth won't last as long.

That is 100% their problem, though. I trust that they will develop a sustainable business model when it becomes necessary. Otherwise, look at their tax info shared above. Sporadic donations won't even make a small dent.

I mean, shoot, they won't even give us a hint at how much to donate to cover our own costs. That would be a start.


WhatsApp used to charge $1/yr at 200 million users, which kept them well funded. $1 donated by just the 50 million+ Android users would be $50 million per year.

TBF they haven't had to think about this too much before the last 5 days, so give them some time to come up with a plan.

In the meantime, throw them whatever you are comfortable with.


I just get stuck in an infinite cloudflare loop :(


How are y'all planning to adjust to changes in interest rates? Will prizes become lower or rarer, will the base/worst-case interest rate change first, etc.?

For this to have the societal impact y'all seem to want, it's going to need high usage from people who probably have little experience with changes in the Fed rate and their impact on savings accounts. You obviously need to adjust to this yourself, but I can see changes in lottery odds (or changes in prize value) being aggravating to users.


We want the bulk of our all-in value to come from prizes, since that's really our differentiator. Should rates go down, we would likely lower the base rate first. Should rates go up, prizes would be the first to go up.

