AI will scrape your blog and your personal philosophy will eventually become a part of collective Human Intelligence. That's a pretty good reason to blog imo.
That reminds me of a gimmick from a while ago where GitHub would collect your repositories into an Arctic Code Vault. That was, IMO, a bit of an incentive for me to upload random bits of git repositories from my PC, just so I could say my code will last 1,000 years somewhere in the Arctic.
I remember it vaguely, but there used to be a badge awarded for being among the first 100 people to solve the problem. I was so obsessed with getting that badge that I spent an obscene amount of time solving the then-recently released problem even though my final exams were the following day. I did manage to get the badge, though. This was circa 2013. Fun times!
That would be something that is intelligent to you. I believe the author (or anyone in general) should focus on pinning down what intelligence objectively is.
The best we will ever do is create a model of intelligence that meets some universal criterion for "good enough", but it will almost certainly never be an objective definition of intelligence, since it is impossible to objectively measure the system we exist in without affecting the system itself. We will only ever have "intelligence as defined by N", never "intelligence".
Perhaps it was because English is not my first language, but it took me an embarrassingly long time to learn that probability and likelihood are different concepts. Concretely, we talk about the probability of observing data given that an underlying assumption (model) is true, while we talk about the likelihood of the model being true given that we observe some data.
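A toy coin-flip example (my own numbers, not from the article) makes the split concrete:

```latex
% Probability: the model is fixed (p = 0.5), the data vary.
P(k = 7 \text{ heads in } n = 10 \mid p = 0.5)
  = \binom{10}{7} (0.5)^7 (0.5)^3 \approx 0.117

% Likelihood: the data are fixed (7 heads in 10), the parameter varies.
\mathcal{L}(p \mid k = 7, n = 10)
  = \binom{10}{7} \, p^7 (1 - p)^3, \qquad p \in [0, 1]
```

Same formula both times; the only difference is which part you hold fixed and which part you let vary.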
Yeah, it was a poor choice of nomenclature, since, in common, nontechnical parlance, "probable" and "likely" are very close semantically. Though I'm not sure which came first, the choice of "likelihood" for the mathematical concept or the casual use of "likely" as more or less synonymous with probable.
But the article makes it crystal clear (I had never seen it explained so clearly!):
"For conditional probability, the hypothesis is treated as a given, and the data are free to vary. For likelihood, the data are treated as a given, and the hypothesis varies."
The likelihood function returns a probability (or, for continuous models, a probability density). Specifically, it tells you, for some parametric model, how the joint probability of the data in your data set varies as you change the parameters of the model.
If that sentence doesn't make sense, it helps to just write out the likelihood function. You will notice that it is in fact just the joint probability density of your model.
The only thing that makes it a "likelihood function" is that you fix the data and vary the parameters, whereas a probability is normally a function of the data with the parameters fixed.
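A minimal sketch of that "fix the data, vary the parameters" view, using hypothetical Bernoulli (coin-flip) data; the function name and numbers here are mine, purely for illustration:

```python
import numpy as np

# Fixed, observed data: 10 coin flips, 7 heads (hypothetical).
data = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

def log_likelihood(p, data):
    # The joint (log) probability of the observed data under a
    # Bernoulli(p) model -- the exact same formula as the probability,
    # but here `data` is held fixed while `p` varies.
    return np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

# Evaluate the same joint density at different parameter values.
for p in [0.3, 0.5, 0.7, 0.9]:
    print(f"p={p}: log-likelihood = {log_likelihood(p, data):.3f}")
```

Running this, the log-likelihood peaks at p = 0.7, the sample mean, which is exactly the maximum-likelihood estimate for this data.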
If you think about it, this has evolutionary advantages as well. There's no time to feel pain when your life may be in peril due to starvation. Finding food for sustenance easily supersedes recovery.
Especially if you haven't done this before, you start experiencing very strong hunger about 8-12 hours after your last meal. This is very, very much in advance of any kind of threat to your life or health from starvation. In fact, the sensation of hunger typically dulls after another 12h or so, so that if you make it past 24h of not eating, you'll typically feel less hunger than you did your first night of skipping dinner.
Reminds me of Simulated Annealing. Some randomness has always been part of optimization processes that seek a better equilibrium than a local one: Genetic Algorithms have mutation, Simulated Annealing has temperature, and Gradient Descent similarly has random batches.
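As a rough sketch of the temperature mechanism (the toy 1-D objective and all parameter values are my own, just for illustration):

```python
import math
import random

def objective(x):
    # Toy multimodal function: global minimum near x = -1.3,
    # with another local minimum near x = 3.8 (hypothetical).
    return x * x + 10 * math.sin(x)

def simulated_annealing(x0, t0=10.0, cooling=0.995, steps=5000):
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)          # random proposal
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if objective(x) < objective(best):
            best = x
        t *= cooling                                 # cool down
    return best

print(simulated_annealing(x0=8.0))  # tends to escape the local minimum
```

Early on, the high temperature lets the search accept bad moves and hop between basins; as it cools, the acceptance probability collapses and it behaves like plain greedy descent.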
Yes, explaining the "why / how did the SAT solver produce this answer?" can be more challenging than explaining some machine learning model outputs.
You can literally watch the excitement and faith of the execs drain away when the issue of explainability arises, as blaming the solver is not sufficient to save their own hides. I've seen it hit a dead end at multiple $bigcos this way.