There is a Supreme Court ruling (Griggs v. Duke Power Co., 1971) that IQ tests are presumed to unfairly disadvantage minority candidates. This was in the context of employment rather than college admissions, but I don't think anyone has been bold enough to test whether it applies there.
Broadly speaking, IQ tests tend to bake in a lot of cultural assumptions, so members of the in-group will test higher than members of out-groups of the same aptitude. IQ tests are therefore treated as discriminatory (disparate impact) "by default." The administration of a particular test for a specific purpose can usually be OK'd either by showing that the test is not discriminatory, or by showing that the test has a measurable correlation to the specific purpose for which it's used. My understanding is that "specific purpose" in the employment context means the individual job description, not just hiring in general.
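To make the "disparate impact by default" idea concrete, here is a minimal sketch of the EEOC's informal four-fifths (80%) rule, a common first screen for disparate impact in selection procedures. All applicant numbers below are made up for illustration:

```python
# Sketch of the EEOC "four-fifths" (80%) rule, a common first screen
# for disparate impact in hiring tests. All numbers are hypothetical.

def selection_rate(passed, applied):
    """Fraction of applicants from a group who pass the test."""
    return passed / applied

def impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference (highest) rate.
    A ratio below 0.8 is conventionally treated as evidence of disparate
    impact, shifting the burden to validating the test for the job."""
    return rate_group / rate_reference

# Hypothetical results: 48 of 80 in-group applicants pass,
# 20 of 50 out-group applicants pass.
rate_in = selection_rate(48, 80)    # 0.6
rate_out = selection_rate(20, 50)   # 0.4
ratio = impact_ratio(rate_out, rate_in)
print(f"impact ratio = {ratio:.2f}")  # 0.67 < 0.8 -> presumptively discriminatory
```

Passing the screen doesn't make a test legal by itself, and failing it doesn't make it illegal; it just determines who has to justify what.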
> There is a Supreme Court ruling that IQ tests are assumed to unfairly disadvantage minority candidates
I wonder if there's an "IQ test" that unfairly disadvantages non-minority candidates. It would be interesting to see the results if the same pool of people (minority and non-minority) take both tests.
The SAT presumably tests that you have learned certain things in high school and so are ready for university. E.g., you won't show up to your first university math class needing to learn all of high school math, and thus be unable to complete the class.
I’d say it proves that cryptocurrencies that do not allow fractional reserve banking are just rehashes (no pun intended) of old & discredited bimetallic/gold standard/specie/mercantilist ways of thinking.
I'll leave this here, since many comments are about the transition from physics to data science.
"For now, however, in hard-core physical science at least, there is little evidence of any major BD-driven breakthroughs, at least not in fields where insight and understanding rather than zerosales resistance is the prime target: physics and chemistry do not succumb readily to the seduction of BD/ML/AI. It is extremely rare for specialists in these domains to simply go out and collect vast quantities of data, bereft of any guiding theory as to why it should be done. There are some exceptions, perhaps the most intriguing of which is astronomy, where sky scanning telescopes scrape up vast quantities of data for which machine learning has proved to be a powerful way of both processing it and suggesting interpretations of recorded measurements. In subjects where the level of theoretical understanding is deep, it is deemed aberrant to ignore it all and resort to collecting data in a blind manner. Yet, this is precisely what is advocated in the less theoretically grounded disciplines of biology and medicine, let alone social sciences and economics. The oft-repeated mantra of the life sciences, as the pursuit of ‘hypothesis driven research’, has been cast aside in favour of large data collection activities [7]. And, if the best minds are employed in large corporations to work out how to persuade people to click on online advertisements instead of cracking hard-core science problems, not much can be expected to change in the years to come. An even more delicate story goes for social sciences and certainly for business, where the burgeoning growth of BD, more often than not fuelled by bombastic claims, is a compelling fact, with job offers towering over the job market to anastonishing extent. But, as we hope we have made clear in this essay, BD is by no means the panacea its extreme aficionados want to portray to us and, most importantly, to funding agencies. It is neither Archimedes’ fulcrum, nor the end of insight."
>And, if the best minds are employed in large corporations to work out how to persuade people to click on online advertisements instead of cracking hard-core science problems, not much can be expected to change in the years to come.
You can merge ML and theory in at least one way. I attended a talk by Prof. Karen Willcox of the University of Texas at Austin (I'm a PhD student in mechanical engineering there) where she argued that in fluid dynamics and combustion at least, it's better to use "model order reduction" instead of machine learning. The problem with many models (e.g., Navier-Stokes equations) in these fields is that they are computationally expensive. Model order reduction looks for ways to reduce the computational cost of the model while maintaining accuracy, and it uses many of the same techniques as machine learning. Based on the examples she gave it seemed to be the closest thing I've seen to merge the two.
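As one concrete illustration of the overlap (this is not her specific method, just a standard ingredient of model order reduction), here is a minimal sketch of proper orthogonal decomposition (POD): take the SVD of a matrix of simulation snapshots and keep the dominant modes as a reduced basis. All sizes and data below are synthetic:

```python
import numpy as np

# Minimal POD sketch: the snapshot data is fake, generated from a hidden
# 3-dimensional structure plus small noise, standing in for expensive
# full-order simulations (e.g., of a flow field).
rng = np.random.default_rng(0)

basis_true = rng.standard_normal((200, 3))          # hidden low-rank structure
coeffs = rng.standard_normal((3, 50))
snapshots = basis_true @ coeffs + 1e-3 * rng.standard_normal((200, 50))
# 200 spatial degrees of freedom, 50 time samples.

# POD = SVD of the snapshot matrix; keep the r dominant left singular vectors.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
modes = U[:, :r]                                    # reduced basis (200 x 3)

# Project a full state down to r coordinates and lift it back up.
x_full = snapshots[:, 0]
x_reduced = modes.T @ x_full                        # 3 numbers instead of 200
x_approx = modes @ x_reduced
rel_err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
print(f"relative reconstruction error: {rel_err:.2e}")
```

The connection to ML is visible in the machinery: the SVD here is the same computation as PCA, but the reduced coordinates are then evolved with the projected physical equations rather than a learned black box.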
> For now, however, in hard-core physical science at least, there is little evidence of any major BD-driven breakthrough
> Yet, this is precisely what is advocated in the less theoretically grounded disciplines of biology and medicine, let alone social sciences and economics. The oft-repeated mantra of the life sciences, as the pursuit of ‘hypothesis driven research’, has been cast aside in favour of large data collection activities
The thing is, I've just spent two years working for molecular neurobiologists in the field of single-cell RNA sequencing, and large data collection has definitely led to tons of breakthroughs there.
We can now classify cell types based on gene activation, on top of the previously existing criteria of morphology and the location where the cells are found. That can then be used to discover new subtypes, trace the origins of cells during embryonic development, and even predict which cells will differentiate into others[0][1][2][3]. All of this requires vast amounts of data to ensure there is enough statistical power. In fact, the insistence on using unbiased samples before applying clustering algorithms is a big part of overcoming biases rooted in pre-existing expectations.
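For a flavor of that unbiased-clustering step, here is a toy sketch in the spirit of a single-cell pipeline (real tools like Scanpy or Seurat do far more: QC, normalization, graph-based clustering). Everything below is synthetic: reduce the expression matrix with PCA, then cluster cells without reference to any prior labels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "expression matrix": 300 cells x 100 genes, drawn from three
# hidden cell types with different mean expression profiles.
centers = rng.standard_normal((3, 100)) * 3
labels_true = rng.integers(0, 3, size=300)
X = centers[labels_true] + rng.standard_normal((300, 100))

# Step 1: dimensionality reduction via PCA (SVD of the centered matrix).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T  # keep 10 principal components per cell

# Step 2: plain k-means on the reduced representation, with
# farthest-point initialization to avoid degenerate starts.
k = 3
centroids = [Z[0]]
for _ in range(k - 1):
    dists = np.min([np.linalg.norm(Z - c, axis=1) for c in centroids], axis=0)
    centroids.append(Z[dists.argmax()])
centroids = np.array(centroids)

for _ in range(50):
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    centroids = np.array([Z[assign == j].mean(axis=0) if np.any(assign == j)
                          else centroids[j] for j in range(k)])

print("recovered cluster sizes:", np.bincount(assign, minlength=k))
```

The point of the sketch: the clusters fall out of the data alone, which is why unbiased sampling matters so much upstream.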
(Also, may I request that you edit your comment and break up that block of text into sub-paragraphs, for the sake of readability?)
> And, if the best minds are employed in large corporations to work out how to persuade people to click on online advertisements instead of cracking hard-core science problems, not much can be expected to change in the years to come.
The lack of a guiding theory is something I've talked about with regard to big data and ML in general. That said, if you look closely at the math behind ML, it looks, not too surprisingly, like statistical mechanics.
So if you take some of the thought processes behind Stat Mech (or S&M, as we used to call it in grad school) and squint hard to blur out the less robust discussions you read, you get the sense that ML is more about the "thermodynamics of information" than anything else.
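One place the analogy is exact: the Boltzmann distribution from stat mech is the softmax used everywhere in ML, with temperature controlling how sharply the distribution concentrates. A minimal sketch:

```python
import math

def boltzmann(energies, temperature):
    """Boltzmann distribution p_i proportional to exp(-E_i / T).
    In ML terms this is softmax over logits z_i = -E_i / T."""
    weights = [math.exp(-e / temperature) for e in energies]
    Z = sum(weights)  # partition function = the softmax normalizer
    return [w / Z for w in weights]

energies = [0.0, 1.0, 2.0]

# Low temperature: probability concentrates on the ground state
# (in ML terms, a sharp softmax, near-argmax behavior).
print(boltzmann(energies, 0.1))

# High temperature: the distribution flattens toward uniform
# (soft, high-entropy predictions).
print(boltzmann(energies, 100.0))
```

Entropy, free energy, and phase transitions all carry over through this same correspondence, which is a big part of why the "thermodynamics of information" framing keeps coming up.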
I find this intriguing and definitely want to spend more time on this stuff.