I think it's a little different across sects, but I grew up around a lot of Baptists, and the line was that anyone who asks is forgiven at any point. So I think this falls into the repentant category, though I don't think it actually requires repentance. There are lots of tough questions around someone who lives a virtuous life and asked forgiveness at an early age versus an evil person asking for forgiveness on their deathbed, with both of them ending up in the same place.
Not an ML person, but I generally understand the concept. Why do you care about ceilings here? I get that the camera is pointed in weird ways, but if you're trying to detect nudity, presumably a model trained on a bunch of images of that subject would just call ceilings False.
I think I understand: your training set basically needs to include anything the production model would presumably see for it to work well? You can't just say "here are a bunch of positives and a bunch of negatives"; the negatives have to be actual things the model will see.
If you think about it, this can't be otherwise. Positive/negative is misleading, since that distinction is the entire thing we are trying to teach. Let's call the two labels A and B. You tell the model:
This long penis is A. This short penis is A. This cat is B. This dog is B. Now, what is this ceiling?
The model, looking at the ceiling, discovers a long fluorescent tube. It is long! Neither cat nor dog is long (the model has yet to discover longcat), and while penises come in long and short varieties, all penises seem long-ish. The ceiling is A.
In the typical ML task formulation, closed-set classification, the model is forced to choose between the provided categories, even if the input isn't really any of them. And neural networks tend to perform erratically outside of the training data distribution.
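To make the forced choice concrete, here is a tiny sketch (Python; the class names and logits are made up for illustration, not taken from any real model). Softmax always produces a probability distribution over the provided labels, so even a completely out-of-distribution input gets split between them:

    # Illustration only: hand-picked logits for a hypothetical two-class nudity model.
    import numpy as np

    def softmax(logits):
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    classes = ["nudity", "not_nudity"]

    in_distribution = np.array([4.0, -1.0])  # an image like those in the training set
    ceiling = np.array([0.6, 0.2])           # an input unlike anything seen in training

    print(dict(zip(classes, softmax(in_distribution))))  # ~99% "nudity"
    print(dict(zip(classes, softmax(ceiling))))          # ~60% "nudity", ~40% "not_nudity"
    # Both outputs are valid probability distributions over the two labels;
    # the output format has no way of saying "this is neither".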
Adding common inputs to the training (or at least validation and test) sets is a good solution. It's hard data work, but it will pay off. There are some techniques outside of closed-set classification that can help reduce the problems, or make the process of improving things more effective:
- Couple the classifier with an out-of-distribution (novelty/anomaly) detector. Samples that score high are considered "Unknown" and can be flagged for review.
- Learn a distance metric for "nudity" instead of a classifier, potentially with unsupervised or self-supervised learning (no labels needed). This has a higher chance of doing well on novel examples, but it still needs to be validated/monitored.
- Use a one-class classifier, trained only on positive samples of nudity. This has the disadvantage that novel nudity is very likely to be classified as "not nudity", which could be an issue.
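To make that last bullet concrete, here is a minimal sketch of the one-class idea (Python/scikit-learn), assuming you already have fixed-length feature vectors for each image, e.g. embeddings from a pretrained network. The random arrays, the 128-dimensional size, and the nu value are placeholders, not recommendations:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    nudity_features = rng.normal(size=(500, 128))  # embeddings of known-positive images
    new_features = rng.normal(size=(10, 128))      # embeddings of incoming frames (ceilings, etc.)

    # Train only on positives; nu is roughly the expected fraction of outliers among them.
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    detector.fit(nudity_features)

    labels = detector.predict(new_features)             # +1 = looks like the nudity training data, -1 = not
    scores = detector.decision_function(new_features)   # lower = more anomalous
    for label, score in zip(labels, scores):
        print(label, round(float(score), 3))

The same thresholding idea applies to the out-of-distribution detector in the first bullet: anything scoring below a cutoff gets flagged as "Unknown" for review instead of being forced into a class.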
Yes, with emphasis on "well". It does work without ceilings, but your training set should include everything you will frequently encounter in the test set, and ceilings are too frequent to miss.
"But does your social circle not include other female friends that you could discuss the optics of this situation with?"
Respectfully, this shouldn't be on the person from whom advice is being sought, but on the asker.
"Or better yet, be honest with the female friend that you wanted to advice, pointing out what she had done before and expressing your apprehension in providing critical feedback?"
If someone asked me this, I'd probably think they used hashtags like #redpill or were into bashing Ellen Pao or something. It comes across as, "you can't say anything these days without being offensive, men in tech are soooo mistreated."
>If someone asked me this, I'd probably think they used hashtags like #redpill or were into bashing Ellen Pao or something. It comes across as, "you can't say anything these days without being offensive, men in tech are soooo mistreated."
Wise moderates don't even join the conversation on social media. That's a wise decision for an individual, but it's harmful to all of us collectively. One negative side effect is that people assume everyone is an extremist of one stripe or another.
The problems genders face are interrelated; invalidating every man who complains about something, just because of a history of specifically disenfranchising women, will not help your cause.
> If someone asked me this, I'd probably think they used hashtags like #redpill or were into bashing Ellen Pao or something. It comes across as, "you can't say anything these days without being offensive, men in tech are soooo mistreated."
Interesting. I hesitate to accept your interpretation as the universal one, because my experience has been that there are ways to express these concerns without coming off as a bigot. Perhaps more education, awareness or discussion is required.
> my experience has been that there are ways to express these concerns without coming off as a bigot
Please share with us. I can't think of a single way to express these concerns without looking like a "red piller". The mere fact someone even has concerns marks them as suspicious and harmful.
I've been objectively very high performing at aerobic exercise since high school and have kept it up. I have seen quantitatively higher performance when taking 3-5 mg/kg of caffeine before exercise and at regular intervals during it. This applies to anything high-intensity and aerobic, from swimming to running to cycling.
This is the best collection of research I have seen on the topic, and it's where I arrived at my 3-5 number above many years ago. Perhaps it will be useful; it also seems to speak to the broader topic at hand: https://fellrnr.com/wiki/Caffeine
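For scale (70 kg is just an example weight), 3-5 mg/kg works out to roughly 210-350 mg of caffeine for a 70 kg (154 lb) athlete.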
When the last time _my_ tweet reached millions of users is not the right question. We see tweets go viral and reach millions every day; some things are seen by a billion people (though perhaps not virally in a short period of time). If people are pumping Bitcoin with rampant speculation to get other people on the train and increase its value, just a handful of tweets need to go viral to move the price, not necessarily yours or mine specifically.
It's like the Birthday Paradox. The chance that someone in a room shares my birthday is small, but the chance that some two people in the room share a birthday is large. It just takes a few of the millions of tweets to go viral.
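A quick back-of-the-envelope version of that argument (the per-tweet odds and tweet volume here are assumptions for illustration, not real figures):

    # Per-tweet virality is rare, but across millions of tweets "at least one goes
    # viral" is nearly certain -- same shape as the birthday paradox.
    p_viral = 1e-6        # assumed chance that any single pump tweet goes viral
    n_tweets = 5_000_000  # assumed number of such tweets

    print(f"P(at least one viral tweet) = {1 - (1 - p_viral) ** n_tweets:.4f}")  # ~0.9933

    # Classic birthday paradox for comparison: 23 people, ~50% chance of a match.
    p_all_distinct = 1.0
    for k in range(23):
        p_all_distinct *= (365 - k) / 365
    print(f"P(shared birthday among 23 people) = {1 - p_all_distinct:.3f}")  # ~0.507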
I won't address your second point at length, except to say that it's naive to think that, given how easy broadcasting has become, the barrier is suddenly rebroadcasting, which can be done with no more than a click and is significantly easier than creating the original message.
I read the article. It sounds more like practically everyone is stuck in a steady state of believing in the same "tells" they have been told about forever, like fidgeting, averted gaze, and stuttering, and those things don't work. That's not really like a GAN at all.
Additionally, what you say about being able to spot the average liar as an average person is contrary to the article. I would encourage you to read the article more carefully to distill the main points.
I usually got it from brew cask, so I'm not sure which "version" it downloaded. I never saw any ads in it myself, but either way, I'd rather just not deal with a scummy project anymore.
I don't get the impression that several hundred or thousand individuals had anything to do with this being solved by this person, except perhaps by virtue of not solving it themselves and thereby leaving it an open problem with the opportunity to be solved.
After reading the article, I didn't get the sense that this was a "standing on the shoulders of giants" situation, but rather that this person happened upon the problem by chance, found it interesting enough to work on, and happened to try the right path to solve it.
Right, but if the FBI goes and hires a code breaker, they don't get one of the three who solved it. They get one of the thousands with the credentials to have a reasonable chance of success. So if the FBI wants to put the three who solved it on the job, they have to deploy the thousands who didn't, because they can't tell the difference between them until it's solved.
How about the FBI builds a time machine, goes to the future to find out who solved it, then hires those people? I call it self-fulfilling cryptography.
Still survivorship bias. We tend to assume that because they cracked it, anyone like them could have done it in the past. In reality, maybe even the author had only a 0.001% chance of cracking it and just got lucky in choosing the right paths and having the right intuitions.
It's the same in many areas of research. You obviously would have had no chance without their expertise, but just because a researcher discovers something new does not mean there's no luck involved.
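To put rough numbers on that (assumed figures, purely for illustration): even if each person who seriously attacked the cipher had only a tiny individual chance of cracking it, the chance that someone eventually does is substantial, and you can't tell in advance which one it will be.

    p_individual = 1e-5  # assumed 0.001% chance that any one person finds the solution
    n_people = 50_000    # assumed number of people who seriously tried over the decades

    p_somebody = 1 - (1 - p_individual) ** n_people
    print(f"P(at least one person cracks it) = {p_somebody:.2f}")  # ~0.39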
I think the point of the person to whom you're replying is that the fact that it was solved doesn't mean it was as easy as it now looks. When we see it done and see that the methods are not all that novel, we are biased to think that. But the fact that it went unsolved until now and was a subject of fascination for so many actually implies the opposite. We're looking at an incredibly rare event and asking why a few specific investigators couldn't have achieved it decades ago.
The "decades ago" caveat is especially key, because they used a computer system with raw performance well within an order of magnitude of today's Top500 supercomputers. I like to point out government incompetence too, but spending that kind of resources banging away at a message from a killer we don't believe has been active for several decades isn't exactly a prudent move.