As someone who has the old dead-tree version of Intel’s x86 and 64 architecture instruction set reference (the fat blue books), and in general as someone who carefully reads the data sheets and documentation and looks for guidance from the engineers and staff who wrote said data sheets, I always have reservations when I hear that “intuitively you would expect X, but Y happens.” There’s nothing intuitive about any of this except, maybe, a reasonable understanding of the semiconductive nature of the silicon and the various dopants in the process. Unless you’ve seen the die schematic and the traces, and you know the paths, there is little to no reason to expect that Thing A is faster than Thing B unless the engineering staff and data sheets explicitly tell you.
There are exceptions, but just my 2c. Especially with ARM.
"Intuitively" here should be taken to mean approximately the same as "naively" – as in, the intuition that most of us learn at first that CPUs work ("as if") by executing one instruction at a time, strictly mechanistically, exactly corresponding to the assembly code. The way a toy architecture on a first-year intro to microprocessors course – or indeed a 6502 or 8086 or 68000 – would do it. Which is to say, no pipelining, no superscalar, no prefetching, no out-of-order execution, no branch prediction, no speculative execution, and so on.
Respectfully, I disagree. CPU architecture optimization is in a continuous dance with compiler optimization, where the former tries to adapt to the patterns most commonly produced by the latter, and the latter tries to adjust its optimizations according to what performs fastest on the former.
Therefore, it is not unreasonable to make assumptions based on the premise of "does this code look like something that could be reasonably produced by GCC/LLVM?".
It is true that as cores get simpler and cheaper, they get more edge cases - something really big like Firestorm (A14/M1) can afford to have very consistent and tight latencies for all of its SIMD instructions regardless of the element/lane size and even hide complex dependencies or alignment artifacts wherever possible. But compare that with simpler and cheaper Neoverse N1, and it's a different story entirely, where trivial algorithm changes lead to significant slowdown - ADDV Vn.16B is way slower than Vn.4H, so you have to work around it. This is only exacerbated if you look at much smaller cores.
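To make that concrete, here is a minimal sketch in C with NEON intrinsics of what such a workaround can look like. This is my own illustration, not code from the original post, and the helper names are made up; it assumes each byte lane holds a small value (say 0 or 1, as from a normalized compare mask), so the two versions return the same sum.

```c
#include <arm_neon.h>
#include <stdint.h>

/* Hypothetical helpers, assuming each byte lane holds 0 or 1 so the
 * total fits in a byte and both versions agree. */

/* Direct version: a single across-vector reduction over 16 byte lanes
 * (ADDV Bd, Vn.16B), the form described above as slow on Neoverse N1. */
static inline uint32_t sum_lanes_direct(uint8x16_t v) {
    return vaddvq_u8(v);
}

/* Workaround: pairwise-widen the bytes to 8 halfwords first (UADDLP),
 * then reduce over the fewer, wider lanes (ADDV Hd, Vn.8H). */
static inline uint32_t sum_lanes_widened(uint8x16_t v) {
    uint16x8_t halves = vpaddlq_u8(v);
    return vaddvq_u16(halves);
}
```

On a core where the byte-wide ADDV is the outlier, the second form trades one extra instruction for a much cheaper reduction; on a core like Firestorm the difference may not matter at all.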
LLVM and GCC deal with this by using relatively precise knowledge (via -mtune) of the CPU's fetch, reorder, and load/store queue/buffer depths, as well as the latencies and dependency-penalty costs of the opcodes of the ISA it implements, plus other details like loop alignment requirements and branch predictor limitations.
Generally, it's difficult to do better than such compilers in straight-line code with local data, assuming that whatever you are doing doesn't make concessions that a compiler is not allowed to make.
Nonetheless, the mindset for writing a performant algorithm implementation is going to be the same as long as you are doing so for the same class of CPU cores - loop unrolling, using cmovs, and scheduling operations in advance, or ensuring that should spills happen, the load and store operations have matching arguments - all of that will be profitable on AMD's Zen 4, Intel's Golden Cove, Apple's Firestorm or ARM's Neoverse V3.
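As an entirely illustrative example of that shared mindset (again, not code from the original post), here is a small C sketch of a max reduction written with ternary selects, which mainstream compilers typically lower to CSEL on AArch64 or CMOV on x86, plus a modest 4-way manual unroll so the independent accumulators can be scheduled in parallel. Whether a given compiler actually emits conditional moves depends on the target and its cost model, so treat this as a sketch of the pattern rather than a guarantee.

```c
#include <stddef.h>
#include <stdint.h>

/* Branch-free max reduction with four independent accumulators, so an
 * out-of-order core can work on several elements per cycle. */
uint32_t max_u32(const uint32_t *data, size_t n) {
    uint32_t m0 = 0, m1 = 0, m2 = 0, m3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        /* Ternary selects are usually lowered to csel/cmov, not branches. */
        m0 = data[i]     > m0 ? data[i]     : m0;
        m1 = data[i + 1] > m1 ? data[i + 1] : m1;
        m2 = data[i + 2] > m2 ? data[i + 2] : m2;
        m3 = data[i + 3] > m3 ? data[i + 3] : m3;
    }
    for (; i < n; i++)          /* scalar tail */
        m0 = data[i] > m0 ? data[i] : m0;
    uint32_t a = m0 > m1 ? m0 : m1;
    uint32_t b = m2 > m3 ? m2 : m3;
    return a > b ? a : b;
}
```

The same shape – several independent accumulators, branch-free selects, a predictable trip count – tends to pay off across all of the big out-of-order cores named above.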
"Our hypothesis strongly indicates that the 'pleasure index' or 'adrenaline rush' of relationships is taking more prime importance in the younger generation over long-term stability," he says. "It is alarming that impulsiveness or confusion can lead to instability in the human relation-maintaining behavior, which is actually affecting the normal social behavior in humans."
Social media and dating apps rely on the same reward centers (new relationship energy is a dopamine hit); think of what makes a long-term, healthy emotional partnership. Is that what you're getting from a dating app profile and experience until the relationship is more solid? Or is it closer to OnlyFans? Are your partnering expectations being set by social media instead of reality? Relationships are work, not a fairy tale; social media is everyone's highlight reel and personal marketing/sales effort.
Dating apps gave the illusion of a better partnering path, when really we lost in-person avenues, to much detriment [1] [2] [3] [4] [5]. Go meet people in person if you want success in this regard; optimize for the opportunity and luck to come across another human you might fit with, as inexpensively as possible. Try to be emotionally healthy, available, and empathetic as well. You'll be head and shoulders above other potential partners.
Lots of nuance as the topics of social and dating are deeply intertwined.
I don't think I ever met a single woman in person. Like literally a married woman asked me out at one point (though I had to figure the married part out on my own).
> I don't think I ever met a single woman in person.
That's almost certainly not actually true unless you have some pretty bizarre personal circumstances. If you went to school, you've met single women in person.
But I could see how you might feel that way, given the scale/anonymity, the taboos, and pervasive atomization of modern society.
Yes, it's obviously a bit tongue-in-cheek (though if you add "attractive" it might not be). That said, I am not in school now, and it would probably be a bit impractical to go back for the dating scene, for a few reasons.
Given the same choices the people who voted for Brexit had, who/what would you have voted for? What do you suspect the outcome would have been as it relates to the matter at hand?
I think it is generally accepted that even when Brexit was up for a vote, it was already known to be a universally bad idea - from a rational point of view. Almost every trustworthy source of information was clear that, for example, the economic outcome it would lead to would be ... suboptimal.
So, if you were rational, it was a pretty easy call that Brexit would be the wrong way to go.
Unfortunately, people are not rational (and, even worse, unwilling to admit not having been rational at the time), and many could be "captured" by all kinds of vague BS that politicians came up with - "take back control" - that sounded as if Brexit would be a good idea.
(Actually, the UK has less control now over global matters, since it can't shape EU decisions anymore; the UK has rather given up control - to France and Germany, which now dominate EU decision making.)
FTA
> T-Mobile contradicted that clear promise on a separate FAQ page, which said the only real guarantee was that T-Mobile would pay your final month's bill if the company raised the price and you decided to cancel
Same way that credit card companies and websites change their terms every now and then. It's not what you agreed to when you opened the account, but they just do it. Don't like it? Then cancel. They know that canceling credit cards is a pain, you have to go change all your recurring charges and automatic payments, and also closing and opening credit card accounts can ding your credit rating a bit. So most people just bend over and take it.
What's clickbait about it? It's the title of the article as written by the author. How would one even go about automatically flagging "clickbait" titles? Obviously the term means different things to different people.
* it can be a calculator
* it can be a text editor
* it can be a side scrolling shooter
* it can be a text adventure
* it can be a life simulator
* it can be an algorithm playground
* it can be a way to work on larger programs in smaller pieces in quieter places!
There are tradeoffs for everything. There are a few horror stories about Cloudflare charging an arm and a leg to companies using terabytes of bandwidth or doing legally gray activities, but on the other hand there is no other way to get things like an analytics engine, queues, or stateful WebSocket servers at the price they offer, along with $0 for bandwidth egress.
I don’t agree with this. Quick assessment: what makes someone more susceptible to groupthink? To propaganda? Is it the way they write? That doesn’t sound right. It is not an LLM/ChatGPT-borne ailment. So I would not paint these tools as such a boogeyman.
For those of you who do not suffer from outright aphantasia but simply have poor visualization/memory recall: you may find great benefit in learning to draw, paint, or sculpt, and in immersing yourself in perceptual recall to, in an analogous manner of speaking, strengthen weak muscles. You may go so far as to find the book “The Training of the Memory in Art and the Education of the Artist” by Horace Lecoq de Boisbaudran to be of interest. Do not skip music, and try to find ways to engage physically as best you can. Perception works best in totality, and people who have strong visual recall often have other strong recall functions as well, disabilities notwithstanding.
I don’t personally agree with the term “overcorrecting” because they aren’t correcting anything. The output is already correct according to the input (humans behaving as they are). It is not biased. What they are doing is attempting to bias it, and it’s leading to false outputs as a result.
Having said that, these false outputs have a strange element of correctness to them in a weird roundabout uncanny valley way: we know the input has been tampered with, and is biased, because the output is obviously wrong. So the algorithm works as intended.
If people are discriminatory or racist or sexist, it is not correct to attempt to hide it. The worst possible human behaviours should be a part of a well-formed Turing test. A machine that can reason with an extremist is far more useful than one that an extremist can identify as such.
It really was just trading one bias (the existing world as it stands) for another bias (the preferred biases of SF tech lefties), so that was kind of funny in its own way. It would have been one thing if it just randomly assigned gender/race, but it had certain one-way biases (modifying men to women and/or white to non-white) but not in the opposite direction... and then it was oddly defiant in its responses when people asked for specific demographic outputs.
Obviously a lot of this was done by users for the gotcha screen grabs, but in a real-world product users may realistically want specific demographic outputs - for example, if you are using images for marketing and have specific targeting intent, or to match the demographics of your area/business/etc. Stock image websites allow you to search with demographic terms for this reason.
If the current set of biases can be construed to lead to death, heck yeah I will take another set.
The idea is that this other set of biases will at least have a chance of not landing us in hot water (or hot air as it might be right now).
Now note again that the current set of biases got us into an existential risk and a likely disaster. (Ask Exxon how unbiased they were.)
AI does not optimize for this at all. It cannot tell the logical results of, say, hiring a cutthroat egoist. It cannot detect one from a CV. Which could be a much bigger and more dangerous bias than discrimination against the disabled.
It might well be optimizing for hiring conformists even if told to prefer diversity, as many companies are, and that would ultimately choke any creative industry.
It might be optimizing for short term tactics over long term strategy. Etc.
The idea here is that certain sets of biases go together, even in AI. It's like a culture; we could test for it - in this case, hiring or organizational culture.
You're committing a very common semantic sin (so common because many, many people don't even recognize it): substituting one meaning of "biased" for another.
Sociopolitically, "biased" in this context clearly refers to undue discrimination against people with disabilities or various other marginalized identities.
The meaning of "biased" you are using ("accurately maps input to output") is perfectly correct (to the best of my understanding) within the field of ML and LLMs.
The problem comes when someone comes to you saying, "ChatGPT is biased against résumés that appear disabled", clearly intending the former meaning, and you say, "It is not biased; the output is correct according to the input." Because you are using different domain-specific meanings of the same word, you are liable to each think the other is either wrong or using motivated reasoning when that's not the case.
no assertion about this situation, but be aware that confusion is often deliberate.
there is a group of people who see the regurgitation of existing systemic biases present in training data as a convenient way to legitimize and reinforce interests represented by that data.
"alignment" is only a problem if you don't like what's been sampled.
> there is a group of people who see the regurgitation of existing systemic biases present in training data as a convenient way to legitimize and reinforce interests represented by that data.
Do you have a link to someone stating that they see this as a good thing?
> I don’t personally agree with the term “overcorrecting” because they aren’t correcting anything.
When I think of "correctness" in programming, to me that means the output of the program conforms according to requirements. Presumably a lawful person who is looking for an AI assistant to sift through resumes would consider something that is biased against disabled people to be correct and conform to requirements.
Sure, if the requirements were "an AI assistant that behaves similarly to your average recruiter in all ways", then a discriminatory AI would indeed be correct. But I'd hope we realize by now that people -- including recruiting staff -- are biased in a variety of ways, even when they actively try not to be.
Maybe "overcorrecting" is a weird way to put it. But I would characterize what you call "correct according to the inputs" as buggy and incorrect.
> If people are discriminatory or racist or sexist, it is not correct to attempt to hide it.
I agree, but that has nothing to do with determining that an AI assistant that's discriminatory is buggy and not fit for purpose.
I don't disagree with what you wrote here; however, who gets to decide which "correcting" knobs to turn (and how far)?
The easy obvious answer here is to "Do what's right". However if 21st century political discourse has taught us anything, this is all but impossible for one group to determine.
Agreed. The problem as well is that "do what's right" changes a lot over time.
And while “the arc of the moral universe is long, but it bends toward justice,” it gyrates a lot, overcorrecting in each direction as it goes.
Handing the control dials to an educationally/socially/politically/etc. homogeneous set of San Fran left-wing 20-somethings is probably not the move to make. I might actually vote the same as them 99% of the time, while thinking their views are insane 50% of the time.
> If people are discriminatory or racist or sexist, it is not correct to attempt to hide it.
What is the purpose of the system? What is the purpose of the specific component that the model is part of?
If you're trying to, say, identify people likely to do a job well (after also passing a structured interview), what you want from the model will be rather different than if you're trying to build an artificial romantic partner.
> The output is already correct according to the input (humans behaving as they are). It is not biased.
This makes sense because humans aren’t biased, hence why there is no word for or example of it outside of when people make adjustments to a model in a way that I don’t like.
>> A machine that can reason with an extremist is far more useful than one that an extremist can identify as such.
And a machine that can plausibly sound like an extremist would be a great tool for propaganda. More worryingly, such tools could be used to create and encourage other extremists. Build a convincing and charismatic AI, who happens to be a racist, then turn it loose on twitter. In a year or two you will likely control an online army.
How does a computer decide what's "extreme", "propaganda", "racist"? These are terms taken for granted in common conversation, but when subject to scrutiny, it becomes obvious they lack objective non-circular definitions. Rather, they are terms predicated on after-the-fact rationalizations that a computer has no way of knowing or distinguishing without, ironically, purposefully inserted biases (and often poorly done at that). You can't build a "convincing" or "charismatic" AI because persuasion and charm are qualities that human beings (supposedly) comprehend and respond to, not machines. AI "Charisma" is just a model built on positive reinforcement.
> These are terms taken for granted in common conversation, but when subject to scrutiny, it becomes obvious they lack objective non-circular definitions
This is false. A simple dictionary check shows that the definitions are in fact not circular.
In general, dictionaries are useful in providing a history, and sometimes an origin, of a term's usage. However, they don't provide a comprehensive or absolute meaning. Unlike scientific laws, words aren't discovered, but rather manufactured. Subsequently they are adopted by a larger public, delimited by experts, and at times recontextualized by an academic/philosophical discipline or something of that nature.
Even in the best case, when a term is clearly defined and well-mapped to its referent, popular usage creates a connotation that then supplants the earlier meaning. Dictionaries will sometimes retain older meanings/usages, and in doing so build a roster of "dated", "rare", "antiquated", or "alternative" meanings/usages throughout a term's memetic lifecycle.
It's an issue of correlating semantics with preconceived value-judgements (i.e. the is-ought problem). While this may affect language as a whole, there are (often abstract and controversial) terms/ideas that are more likely to acquire or have already acquired inconsistent presumptions and interpretations than others. The questionable need for weighting certain responses as well as the odd and uncanny results that follow should be proof enough that what is expected of a human being to "just get" by other members of "society" (an event I'm unconvinced happens as often as desired or claimed) is unfalsifiable or meaningless to a generative model.