Ask HN: Concepts that clicked only years after you first encountered them?
579 points by luuuzeta on Jan 1, 2023 | 913 comments
I'm reading Petzold's Code [1], and it dawned on me that I didn't understand logic gates intuitively until now. I took a Computer Architecture course back in college, and I understood what logic gates meant in boolean algebra but not empirically. Petzold clarified this for me by going from the empirical to the theoretical using a lightbulb, a battery, wires, and relays (which he introduces when he talks about the telegraph as a way to amplify a signal).
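
To make that concrete, here is a minimal sketch of the relay intuition (Python, purely illustrative; the relay/gate function names are mine, not Petzold's): two relays wired in series behave like an AND gate, two in parallel like an OR gate.

    # A relay's switch is closed exactly when its coil is energized.
    def relay(coil_energized: bool) -> bool:
        return coil_energized

    def and_gate(a: bool, b: bool) -> bool:
        # Series wiring: current reaches the bulb only if both switches are closed.
        return relay(a) and relay(b)

    def or_gate(a: bool, b: bool) -> bool:
        # Parallel wiring: current reaches the bulb if either switch is closed.
        return relay(a) or relay(b)

    for a in (False, True):
        for b in (False, True):
            print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))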

Another concept is the relationship between current, voltage, and resistance. For example, I always failed to understand why longer wires mean more resistance while thicker wires mean less resistance.
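
The wire relationship boils down to R = ρL/A: resistance grows with length and shrinks with cross-sectional area. A rough sketch of the arithmetic (Python; the copper resistivity constant is the only assumed value):

    import math

    RHO_COPPER = 1.68e-8  # resistivity of copper in ohm*m, approximate

    def wire_resistance(resistivity_ohm_m, length_m, diameter_m):
        # R = rho * L / A, where A is the wire's cross-sectional area.
        area_m2 = math.pi * (diameter_m / 2) ** 2
        return resistivity_ohm_m * length_m / area_m2

    print(wire_resistance(RHO_COPPER, 10.0, 0.001))  # 10 m of 1 mm copper wire
    print(wire_resistance(RHO_COPPER, 20.0, 0.001))  # double the length -> double the resistance
    print(wire_resistance(RHO_COPPER, 10.0, 0.002))  # double the diameter -> a quarter of the resistance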

[1]: https://www.codehiddenlanguage.com/




1. Everyone is the main character in their own story.

This manifests in all sorts of ways - from people not being there when you need them the most, to friendships dying off as soon as proximity changes, to how and why people get promoted in jobs. This isn't necessarily bad, but if you don't know how to navigate this it can be quite painful and confusing.

2. Representation matters.

I knew this for a long time, but it didn't fully click until years had gone by and I realized I had unconsciously held myself back from pursuing a wide range of things because I just didn't see anyone like me there.

3. Rules in life are just constructs that we as humans have created.

Starting a business helped the most on this one. That's when I started to see that "rules" or "procedure" are all made up and exceptions can always be made.

(Edit: typos)


> Everyone is the main character in their own story

For those with anxiety, one of the best pieces of advice I ever received (that also took me years to internalize!) was the corollary to this: nobody is thinking about you in a critical way, at the level that you are criticizing yourself, because they are their own main character. Which is incredibly freeing, because the anxious person’s assumption that one embarrassing moment will turn into them obsessing about your failure… is absolutely nonsensical, because the only person they are obsessing about is the main character to them, themself.


>nobody is thinking about you in a critical way, at the level that you are criticizing yourself

I think there are shades of this, which is to say the level of judgement can vary.

I feel that in a smaller city my social anxiety is not as strong as when I’m in a larger city.

In larger cities, I feel much more evaluated and judged, i.e. eyes on me looking at my clothes, checking me out, etc., despite it being a much more detached and impersonal environment than smaller cities, where I just feel ignored. Maybe it’s just having more people around that puts me more on edge.


Have you ever come across someone who you thought was dressed or acting strangely while riding public transport or walking in the streets? Found one of your fellow travelers extremely attractive? I would assume most of us would answer yes.

Do you remember the faces of these people? I can tell you that I don't.

Nobody really cares about strangers in big cities. They are part of the background. Sometimes you are struck by something then you move along. Even people who might actually be judging you will have forgotten you exist minutes later. That's the beauty of cities.


I find it's the opposite for me. I'm basically anonymous in big cities, whereas smaller cities and towns can be, uh, incestuous when you start getting to know people. They can also be viciously judgmental, but that goes for people anywhere.


They might not judge you more in a large city, or be any more interested in you per se - they might just be a little bit more eager to check whether you might be dangerous, since in places with many encounters with many people, the risk is higher.


Thanks a lot for your comment! This part in particular

> the only person they are obsessing about is the main character to them, themself

seems obvious in retrospect.


You’ll worry less about what people think about you when you realize how seldom they do.


This is the classic platitude. I don't buy it - I do judge other people like that.


In aggregate, I have no doubt! But for a person suffering from social anxiety acute enough to affect their performance, the length of time that you are judging specifically them for a specific thing that they did is almost certainly orders of magnitude less than that person with anxiety fears - because, unless you've developed unheard-of levels of thought parallelism, you will almost certainly have moved on to judging someone else!


Regarding #3, the following David Graeber quote always stuck with me:

“The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.”


Sapiens by Harari covers this extensively. So much of human development is our ability to ascribe fictive boundaries and definitions to things. Rules, authorities, myths, money, culture, countries, companies. What are any of them? They're all made up. We could just stop believing in all of them tomorrow.


I often see Harari and Graeber discussed at the same time, and I am always surprised.

Harari writes pop sci: a grand narrative that takes many liberties with both history and science.

Graeber's books are heavily referenced pieces of work. Debt: The First 5000 Years ends at 73% on my Kindle, because the remaining 27% of the book (144/534 pages) is references and footnotes.

Their books are not in the same intellectual category.


The number of references is not an indication of the quality of a work. Adam Smith wrote a whole book on The Theory of Moral Sentiments, but in Debt, Graeber invents Adam Smith's morality out of whole cloth.

I have no intention of reading Dawn of Everything but reviewers have noticed references that say the opposite of what the authors claim they say.


I am aware that there are problems with the books, even some serious ones. But I find that hardly surprising for a book that references a thousand other scholarly works. And the reason you can critique the book effectively is that he cites where his claims originate.

In the language of this essay [1], Graeber's work is legible, Harari's not at all.

[1] https://acesounderglass.com/2022/02/07/epistemic-legibility/


I find this interesting. I've read Graeber's essays, but the only book of his I tried is “Utopia of Rules”, and I found that it read as if he did exactly what you describe Harari as doing.

It felt like he set up a windmill separate from reality to tilt at.

I do agree that Harari is great for thought piece stuff, I put him in the same category as Gladwell. Entertaining to read, but not to be trusted as a singular source.


Our models and explanations of the world depend on our belief system and the framework through which we look at it. For example, a Marxist's explanation of the economic system is very different from that of a neoliberal. The two would not even agree on an ontology, let alone the mechanism.

Graeber was a self-proclaimed anarchist. His books indeed have a different description of reality than the orthodox description of reality. If you don't agree with his description, then it will indeed look outlandish - the same way an atheist's and a religious person's descriptions sound outlandish to each other. If you do agree with his descriptions, or at least that there are many different conceptions of the world, then his views are more reasonable.

Harari is not making scholarly arguments at all, so we can't even say much about his books.


But it’s all fictive boundaries. Don’t stop at countries and companies - nothing exists until it’s named.


This was my personal big breakthrough. I realized that humans deal with abstractions of things and not things themselves. Accordingly, most of what humans argue about isn’t even real, but rather only what is perceived through layer upon layer of abstraction.


I've had this revelation too, but wouldn't describe it as a big breakthrough, though sometimes it feels like it should matter more. Can I ask how you feel this affects your day-to-day life?


For me the most concrete practical use of this kind of thinking is achieving "mind over matter" when dealing with physical pain. It seems to ease the emotional impact if you focus on the physical sensation of pain itself rather than on the idea of pain, which carries so much more baggage that it adds an extra dimension to the suffering - one that becomes counterproductive in non-survival scenarios like lifting heavy weights at the gym.


It allows me to turn the volume down on things. So many things actually matter very little. Also, it allowed me to see things a bit more clearly. For example, all forms of categorization and measurement are somewhat arbitrary (no matter how useful). Another one I got from this is that meaning is assigned and not found as it exists only in the mind. Or, another favorite of mine, concepts don’t really exist. Like meaning or measurement, they are useful but not real.


Indeed, except conceptually, there is no actual separation between "things."

See: quantum mechanics.


All the values, social constructs, and structures we create as humans may just be part of our human existence (essence). The same way a wildlife photographer watches wild animals, as a passive observer, without any moral judgement or prejudice, we can watch humans through the same lens: all our man-made problems, be they ecological disasters or the economic system, may just be side effects of our existence as a species.


That is on my shelf ready to read. Just got to finish a couple more currently in the queue.


Most of those things are backed up with guns though. Which for me, makes them as real as they need to be.


A convenient stance, I suppose, when discussing what to do with “useless eaters.”


Convenience isn't the point. The rules of society, laws, morality, ethics, compassion as a virtue ... they don't exist as physical laws. They're human constructs, and humans can (and do) choose to ignore them.

It's an important growth moment to realise that the consequences of violating those "laws" are not delivered by the laws of nature but instead by the behaviour of humans. So you can be rude or arrogant, and the only punishment for that comes from people choosing not to work with you or similarly being unpleasant back to you.

Nazis had a lack of compassion and denied some people the (man-made!) rights of humanity based on race, ability, sexual orientation, and other categories. The consequences for that behaviour had to come from other people standing up to them.

I've met people who think that "the universe" will deliver just punishment for violating an ethical or moral code. In reality, every society and set of values is held together by people defending them and meting out punishment to violators. Sometimes formally (police, courts, armies, declarations of war) and sometimes personally.


I understand the argument you’re putting forth, I just view it as the equivalent of the argument made by the Nihilist Germans in The Big Lebowski. It ends in will to power.

Within the constraints of the materialistic assumptions underlying it, I do agree that the ideals of compassion and empathy are examples of best-case behavior but the humans who rise to the top in such systems are usually not encumbered with the capacity for either. They would push back on the constraints of such “objective ideals” anyway because they see no coherent place for them in will to power systems.


At the macroscopic level, human constructs are as valid as nature's.


> “useless eaters.”

Unfamiliar term. Turns out it is Nazi political slang.

https://en.wiktionary.org/wiki/useless_eater


Yes, Harari echoes this term in some of his speeches, asking what to do with the “useless class of people” that emerge in the wake of the AI Revolution. Instead of asking whether we should be evaluating the materialistic “usefulness” of human beings in terms of the purely economic value they generate, or even whether we should be pursuing technologies that will create these conditions, he satisfies himself with questions about how to keep the “useless people” out of the way with drugs and the Metaverse.


That, and the idea that humans are now programmable and the concept of free will is over. He and his son-of-a-Nazi mentor, Klaus Schwab, want to turn you into the Borg. Literally.

Harari is a class A megalomaniac and psychopath. Study him if you like, but never let that fact slip your mind as you read his psychotic musings.


What are useless eaters? You’ve put it in quotes, but I don’t see it elsewhere.


+1 for the Graeber quote. I wasn't familiar with it, but it makes sense he would have originated it. Dovetailing that with this Ask HN theme: when you realize that bog-standard consumer loan amortization formulae are arbitrary and not grounded in any fundamental reality more detailed than "the lender gets something in addition to the amount lent", it's eye-opening. I didn't really recognize that until after reading his Debt: The First 5000 Years (yes, I'm familiar with the flaws in his works that others have commented on), but set against alternatives I learned about in business school, like zero-coupon bonds (special to corporates, bondholders, and banks), it put the standard loan amortization schedule into context.
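
To make concrete which formula I mean by "bog-standard": the usual fixed-payment convention sets the per-period payment to P·r·(1+r)^n / ((1+r)^n - 1) for principal P, period rate r, and n periods - one convention among many a lender could pick. A rough sketch (Python; the loan numbers are just illustrative):

    def amortized_payment(principal, annual_rate, years, periods_per_year=12):
        # Standard fixed-payment amortization: M = P * r * (1+r)^n / ((1+r)^n - 1)
        r = annual_rate / periods_per_year
        n = years * periods_per_year
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    # e.g. a $300,000 loan at 6% over 30 years works out to roughly $1,799/month
    print(round(amortized_payment(300_000, 0.06, 30), 2))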


#3 really hits home when you realize we have rules for killing each other (war). And we have separate legal trials to determine if someone broke those rules while trying to kill someone else. The penalty for breaking the rules about killing someone often is to be killed.


Probably most rules that we have are there to prevent us (admittedly this applies mostly to men) from killing each other.

That's why "rules are just constructs we humans created" rings stupid/horrifying. Obviously they were created by humans but that is also why they matter. I prefer "rules were written by humans, mostly in blood". It took years for OP to get the first part, here's to them getting the second part faster.

One person doing something wrong to another would get families into an infinite loop of mutual revenge--until we got around to creating rules precisely for that reason, and put up a trusted authority to ensure justice without the need for vengeance. Rules are how our civilization functions, and they are its only hope.


Honor killing


> #3 really hits home when you realize we have rules for killing each other (war)

Those are some of the oldest and most important ones


#2 hits home for me, as I’ve begun to realize the same thing. Not seeing any person of color in an activity makes it so I don’t want to be the first.


As a white guy, I'd certainly encourage you not to worry about this -- if we're talking about an upper middle class, professional-ish activity, you can be fairly sure that all those white people feel self-conscious about their group being so white and would welcome you to join them.

Anyway, I'd be interested to hear more about the psychology of this.

I remember when I was growing up in the 90s and 00s in California, people talked about race way less than they do today. When ethnic representation became a common topic of conversation, I had a hard time believing it at first, because it seemed so self-evidently obvious to me that race wasn't a particularly important characteristic of a person. I actually had the experience of thinking back to my time in jr high/high school and thinking "wow, that friend of mine had dark skin, and they weren't from India... I guess they were Black, huh".

I'm not trying to claim that I didn't have subconscious biases related to race as a kid. I'm sure I did. But I do suspect they have become a lot more severe as a result of people talking about race so much -- it has become a much more salient characteristic. (I'm also more aware of trying to mitigate my biases and avoid microaggressions and so on, of course.)

So yeah, I'm curious to compare notes with other 90s kids in this regard. I'm white, but if I was Black, I imagine that I'd be way more self-conscious about it now than I was when I was growing up. (Like, if I'm the only white person in a group, I feel self-conscious about it now in a way that I didn't feel when I was a kid.)


I’m not sure

> As a white guy, I'd certainly encourage you not to worry about this -- if we're talking about an upper middle class, professional-ish activity, you can be fairly sure that all those white people feel self-conscious about their group being so white and would welcome you to join them.

is as helpful as you may have intended. The comment was about not wanting to be the first; even if the group is welcoming, the commenter will still be the first/only person of color in the group, and there is a discomfort inherent to that, even if everyone is trying to be welcoming.

Also re “colorblind” policies of the 90s versus the explicitness of today, I think it’s similar to the current reckoning in journalism. We all have biases, and pretending we don’t/acting like we are capable of pure objectivity simply hides them and makes them more difficult to combat. On the other hand, explicitly acknowledging them allows you to consider how they might be influencing your decisions.


For the colorblindness part, I think my core objection to the cultural changes which have occurred since I was a kid is not to increased awareness of biases per se, but rather this phenomenon: https://ncase.itch.io/wbwwb

My feeling is that the best way to reduce subconscious bias is to have good race relations. And the best way to have good race relations is to have more positive interactions than negative interactions.

A thought experiment: Imagine a non-Mexican person says "I love celebrating Cinco de Mayo. It's an excellent excuse to drink tequila!"

As a culture, we can decide whether that's a positive interaction or a negative interaction.

We can make it a positive interaction by laughing and clinking glasses.

We can make it a negative interaction by calling the person out for cultural appropriation or trivializing an important Mexican holiday.

My intuition is that making it a negative interaction is a big mistake, because it worsens subconscious racial biases and generally makes the world a less pleasant place. I recommend playing this game https://ncase.itch.io/wbwwb to better understand my intuition here. (Based on what I've read about Israel/Palestine, the game is an excellent description of why things have been getting so bad there lately.)


Irony is heavy there, unintentional double entendre of sorts. Mexicans barely recognize or care about the 5th of May. That’s an American Chicano celebration for the most part. Mexicans kind of laugh about its importance in the US.

I say this from Mexico, sitting next to my Mexican wife and her mother.


To loop this back to the OP: that "US Mexicans" are different from "Mexico Mexicans" is something I didn't realize until my adult life. They have their own way of speaking, their own foods, and a more recent immigrant background as a minority, which has a huge effect sociologically. That doesn't make their particular cultural enclave less important, but it is an interesting observation. American Italians are another great example of this, though their immigration ended much longer ago.


I know this is just your example, but I think it demonstrates the larger issue with victim culture. Cinco de Mayo is not actually a very important holiday in Mexico. I agree that it’s not good to turn everything into a negative interaction, but it’s not clear to me that having “cultural appropriation” as a top-of-mind issue is a very healthy thing either.


Cultural appropriation is nonsense. Probably most of what we are is a result of cross-pollination of cultures. Who gets to decide who belongs to one group or another?


I think they are called Progressives.


I find the unatonable original-sin spin placed on white people these days in the USA to be a tad repulsive, personally.


And that’s why the Proud Boys say they’re not apologizing for creating the modern world. And the idea is popular.

To be clear, many Hispanics agree. Many of their ancestors played a role alongside the rest of their fellow whites. Whites are still admired by Asians as well.

This anti-white thing is largely an American phenomenon, along with other far-left folks around the world who are resentful and cannot accept responsibility for their own condition. The rest of them deal with the hand they’re dealt and go to work, as no one has more influence over their lives than they themselves. We just accept it rather than adopt victimhood as a way of life. That’s how you create losers.


Not sure what fantasy you’re living in where a terrorist group (aside from perhaps the U.S. government) is “creating the modern world” lmao

I presume whatever fantasy it is might be a fairly common “master race” one


The "they" when they say "they" created the modern world is not the membership of the organisation, but rather western civilisation/culture.


If it is silly to feel responsible for bad things that other people did, just because they happened to have the same color of skin, it is also silly to take credit for the good things other people did.


Exactly. We can only be proud of what we personally accomplish.

I wasn’t endorsing the Proud Boys, but they aren’t completely without a point either. I’m more concerned victimhood culture is destroying more lives than racism is at this point.

My wife is not a white woman. No one’s going to tell our kids that they’re doomed. I want them to flourish regardless of circumstances, through grit, persistence, determination, and resilience.

At a certain point, we all need to deal with the hand we’re dealt in life. I certainly was not on the fortunate side of the fence myself.

There’s a reason why a lot of families no longer exist. Tough times came and they put a bullet in their head. We’re heading for tough times again and outcomes will be delineated by who has taught their children strong values.


Well, strictly speaking, a) they don't say they're proud of it, they say they "refuse to apologise for it", and b) they have ethnic minority members and their leader was Afro-Cuban (Enrique Tarrio).


This is true. I am not a Proud Boy expert. I don't support the movement. I just know of them and referenced a line of theirs. But their stance is better nuanced than my recollection was.

I do like the minority inclusion because you don't have to be white to be Christian, support a modern law-abiding civilization, or generally do "what works". I'm white but my wife is Mexican. We're both very conservative in most ways. Her family is proud of the Christian conquest of Latin America and don't think they went far enough in the conquest. Far beyond the typical liberal American view of boo hoo for the "natives". Americans are not aware that many hispanics are more conservative than they think. The racial purity thing is not going to be very productive going forward. The Proud Boys were a model for the future in that regard.

What the boat is sailing on now though are Greco-Roman values, which is the basis of western civilization. Which is kind of important since the east tends to rely on oppression. Look at Russia and China today.


To be clear, are you saying that white people are supposed to feel bad for being white because of "bad things" done by white people?


That is the underlying tone that comes across to me whenever these subjects come up. Effectively pushing against the idea makes one a "racist" or a "white supremacist" as far as I can tell. I don't know how to respond to that kind of labeling because I don't find myself disliking people because of something as superficial as their skin color.


> Effectively pushing against the idea makes one a "racist" or a "white supremacist" as far as I can tell.

Your wording is a wee bit vague. There's nothing wrong with pushing back on the notion that if one is white then one is guilty of crimes committed in the name of white superiority (because that's actually racist too).

But it also doesn't discount the fact that racism is still very much a problem.


I am not vague. What is vague is the shadowy threat of being called a racist for stating ones opinion while having the wrong color of skin. It is a problem. Just a wee bit.


I'm sorry -- that wasn't intended as an insult, more of an issue with me being able to ascertain exactly what you were saying. Comprehending other people's writing can be challenging at times, even if you think you know what they're saying.

One's opinion should stand on the words/ideas behind them, not what one looks like. Getting called out for that is bullshit and should be disregarded.

Part of my confusion is also that I don't feel the threat that you mention. I'm also okay with being challenged and with being wrong. I think the key is to try to engage with respect and be prepared to listen to what the other party is trying to say -- and assume best intentions.


I fully agree with you about ideas standing on their merits. I hope we continue to move in that direction as a species.


I certainly didn't mean to invalidate their discomfort -- as I mentioned at the end, I feel the same way sometimes. I apologize if people found it invalidating.

I actually wrote the initial draft of my comment with welcomingness as sort of a background assumption in my mind, and then I thought "wait a minute, even if I'm assuming that people would be welcoming, the person reading my comment might not. and given how hostile online discussions can get, it seems safer to err on the side of emphasizing welcomingness." That still seems roughly correct to me -- I think some topics have so much hostility associated with them that whatever you're going to say about the topic, it probably makes sense to also add in a bunch of welcomingness too, just so the discussion goes well.


> As a white guy, I'd certainly encourage you not to worry about this -- if we're talking about an upper middle class, professional-ish activity, you can be fairly sure that all those white people feel self-conscious about their group being so white and would welcome you to join them.

I've heard similar things, but as a non-minority, you don't see what happens when you aren't looking or when people think they can get away with something without consequences, including social or professional consequences. Someone who is a minority has to live it 24/7, so they get a bigger picture that you don't.

You might be well-intentioned and surrounded by people who you believe are well-intentioned, but not everyone is, and plenty of the people who talk and act as if they're well-intentioned will turn around and betray those purported intentions when you aren't paying attention.

With regard to race-blindness in the 90s and 00s, consider that today's focus on race is a reaction to the willful ignorance of the decades prior. Race mattered just as much back then as it does now; it's just that the majority in the past had patted themselves on the back and convinced themselves that it was a solved problem if they just stopped talking about it.


>You might be well-intentioned and surrounded by people who you believe are well-intentioned, but not everyone is, and plenty of the people who talk and act as if they're well-intentioned will turn around and betray those purported intentions when you aren't paying attention.

Yeah I totally buy that.

>With regard to race-blindness in the 90s and 00s, consider that today's focus on race is a reaction to the willful ignorance of the decades prior. Race mattered just as much back then as it does now; it's just that the majority in the past had patted themselves on the back and convinced themselves that it was a solved problem if they just stopped talking about it.

If we believe racial discrimination is the root cause of racial inequality, then "talk about race less" (as a way to treat people more equally, and reduce discrimination) seems like a reasonable hypothesis to test. So the thing about people patting themselves on the back seems perhaps overly cynical?

I'm also not convinced that what we're doing now is working better than what was done in the 90s and 00s. For example, this Gallup poll indicates that Black adults believe race relations with white people have gotten much worse since 2013: https://news.gallup.com/poll/1687/race-relations.aspx

And I've seen research indicating that corporate bias reduction trainings typically backfire. Apparently if you want your company to have more diverse leadership, one of the most effective ways to do that is to create an official mentorship program which randomly pairs off junior people and senior people without regard to race. It seems that a junior employee of color gets better mentorship if they are regarded as a "junior employee at our company" as opposed to "junior employee of color". The mentorship program just works to overcome activation energy and create relationships between junior and senior people which might otherwise not exist due to awkwardness. (This is all from Chapter 8 of the book Meltdown: Why Our Systems Fail and What We Can Do About It)

The output of the 90s and 00s era was Obama getting elected president in 2008 and serving two terms. By contrast, I do think you can make the argument that Trump is in some sense a product of the woke era.

Trump's popularity exploded around July 2015: https://www.realclearpolitics.com/epolls/2016/president/us/2...

But e.g. the number of NY Times articles on "whiteness" were on an exploding trend by 2014: https://marginalrevolution.com/marginalrevolution/2019/06/th...

(The number of articles with those terms grows even faster after Trump becomes popular. I would postulate that the mechanism is something like this game I linked elsewhere in the thread https://ncase.itch.io/wbwwb )

Anyway, my impression from reading right-wing Twitter feeds is that supposed woke overreach is a major motivation for Trump supporters, playing into the backfire point I mentioned previously.

I'm sure there is room to improve on the 90s and 00s. But I remain skeptical of the politician's syllogism: "We must do something. This is something. Therefore, we must do this."


> "talk about race less" (as a way to treat people more equally, and reduce discrimination) seems like a reasonable hypothesis to test.

This was the status quo for a very long time. Mentioning people's race was like mentioning people's weight. It didn't work.

> I'm also not convinced that what we're doing now is working better than what was done in the 90s and 00s.

The 2005 study "Are Greg and Emily More Employable Than Jamal and Lakisha" applied to jobs 5000 times with a stack of carefully crafted fake resumes that were randomly assigned names that were either stereotypically white (e.g. Greg, Emily) or black (e.g. Jamal, Lakisha.) The same resumes with white sounding names received 50% more callbacks. Over 5000 job applications. What we were doing in the 90s/2000s was not working and the fact that many white people find the polite approach more comfortable is not a good reason to stick to it. This isn't a social problem-- we're talking about people's basic chance of success in life. This idea that we can have equal opportunity by deliberately avoiding intervention is a proven fallacy. Pushing through the discomfort to find what actually works is a worthwhile undertaking. If another race with a far larger population was 50% more likely to get job callbacks than you were, I guarantee you'd agree.

The increase in political tension our culture has seen in the past decade plus is vastly more complex than the most visible catalyst of any given moment. Blaming social progress for the increasing resistance to social progress doesn't make sense. You could just as easily blame fascism or any other activist end of a political blob. As flawed and overly bandwagoned as it may be, "wokeness" is addressing real problems that oppress people in measurable ways every day. "Anti-Wokeness" is just another value signalling position to have that highlights political fault lines far older than any living American.


What makes you think talking more about race works any better or should be considered to be "social progress" rather than the opposite? Would that study have better results if done today? I'm not American, but I don't see why you are so sure today's America is not closer to the Jim Crow Era than 2005 was.

I'm a black person who grew up in Sweden in the 00s and early 10s, and I'm very glad race was never relevant or talked about outside of social science class. Almost everyone at my school was white, and I never felt like I was different because of my skin color. The first person who ever told me that my race or skin color was relevant to anything was an American. The worst thing about American culture (and what keeps me from moving to the US even though I could make much more money there) is that so many Americans are insisting on considering race (and other arbitrary categorizations like ethnicity, sex and gender) to be such a large part of what makes a person, overshadowing that person's unique attributes and identity.


You should read about the experiences of African Americans. Many cultural facets and political tensions held over from slavery are alive and well.

Financially, income inequality in Sweden is vastly lower than in the United States as it is, and in the United States much of the inequality falls on racial boundaries. A recent Federal Reserve Bank of Boston study found that local white households’ median net worth was $247,000 while for African American households it was $8. Yes, eight single dollars. The Black population in Boston has been well established for well over a century, and much of its white population came in waves of western European immigration in the early to mid 20th century.

Not talking about this problem hasn't solved it so far because systemic self-perpetuating inequality doesn't just go away if you ignore it.


I'm not objecting to talking about racism. I'm objecting to seeing race as a significant part of what makes a person. It's possible to do one without the other.

I don't see how it's useful to note that something didn't work when there is no evidence to suggest that the alternative works any better. I don't understand how talking more about race as a characteristic could possibly help to reduce racism or racial inequality. It seems to me much more likely to do the opposite.


That the social construct of race shouldn't contribute significantly to a person's identity is irrelevant. Vast troves of empirical evidence show how dramatically being born on one side or another of a racial boundary in the United States changes your life experience. See the study I cited in another comment, "Are Greg and Emily more employable than Jamal and Lakisha": researchers sent 5000 job applications pairing a handful of fake, equally weighted resumes with stereotypically white- or black-sounding names. White-sounding names got FIFTY PERCENT more callbacks... This is the United States, where having no job, for the already disadvantaged, likely means no home, no food, no health care, no nothing. This isn't an affect people adopt as part of some fanciful cultural identity. Disrupting it isn't about pride or hurt feelings. It is a bin that our society forces people into, and it will not go away by pretending it doesn't exist. Talking about it or not talking about it is the wrong dichotomy. It's directly addressing it VS not addressing it.


You didn't grow up in America, you grew up in Sweden. I don't think any further explanation is required.


I'm not sure the 2005 study tells us much. Our question is which strategy works best for reducing discrimination. To answer that, we'd want to measure the level of discrimination over time to estimate the trend.

Imagine we found an identical study from 1995 that showed that white people received 100% more callbacks at that time, and a study from 1985 that found that white people received 200% more callbacks at that time. And then imagine that when subgroup analysis was done on the 2005 study, it was determined that most of the effect was from older hiring managers who were about to age out of the population. In this hypothetical, the study would strongly validate the strategy being used in the 90s and the 00s.

>This idea that we can have equal opportunity by deliberately avoiding intervention is a proven fallacy.

I didn't claim that. I just want interventions that don't backfire.

For example, how about removing the name from the resume?

With modern technology, it would be easy to conduct interviews remotely over screenshare with a voice changer.

80/20 rule, 20% of the discrimination probably causes 80% of the harm. As you imply, hiring discrimination is one of the most harmful types of discrimination. So anonymized interviews could put a serious dent in racial inequality. As the technology matured, we could hit firms which didn't anonymize interviewees with a punitive tax.

Would certainly be an interesting study, at least. Run it every few years to measure the trend in discrimination, as discussed previously.

>The increase in political tension our culture has seen in the past decade plus is vastly more complex than the most visible catalyst of any given moment.

If political tension is a complex phenomenon, isn't it possible that racial discrimination is also a complex phenomenon, which won't be solved effectively using the social equivalent of a sledgehammer?

>Blaming social progress for the increasing resistance to social progress doesn't make sense.

Imagine a cop responded to criticism by saying: "Blaming crime fighting for the increasing resistance to crime fighting doesn't make sense." Would you find that persuasive?

It's possible to "solve" a problem in a way that superficially looks like progress, but actually makes the underlying issue worse. And if you can find a way to get paid money to do that, you'll never be out of a job. (Trump may not be good for America, but he's good for CBS. NY Times subscriptions exploded in the wake of Trump's election, IIRC.)

>You could just as easily blame fascism or any other activist end of a political blob.

I do in fact blame fascism. Historically speaking, it seems fairly normal for right-wing extremism and left-wing extremism to arise at the same time in the same society. The left-wing extremism is strengthened by the right-wing extremism, and vice versa.


That's a pretty glib dismissal of a peer-reviewed and heavily cited study with a ton of data. Calling any of these interventions a social sledgehammer is laughably hyperbolic. How does any of what I said rely on racism being simple? The name on the resume is obviously a symptom of a larger problem and only one of many ways someone's race shows up in any number of consequential situations far more difficult to spot, like in neural-network-derived decisions. No, I don't blame crime fighting for the increased resistance to crime fighting, because the actions taken by cops that people are actually mad about are crimes and opposing them is crime fighting. That you consider interrogating our culture to highlight racism to be political extremism says a lot. I'm all done here.


>That's a pretty glib dismissal of a peer-reviewed and heavily cited study with a ton of data.

I explained why regardless of study quality, the study can't support the point you want it to. You're not addressing my point, just making a vague appeal to authority.

>Calling any of these interventions a social sledgehammer is laughably hyperbolic.

What interventions are you referring to? You yourself referred to wokeness as "flawed and overly bandwagoned".

>How does any of what I said rely on racism being simple?

You argued, without any supporting evidence, that it couldn't be the case that efforts to reduce racism could make it worse. If racism is complex then we shouldn't be surprised by counterintuitive results like that.

>The name on the resume is obviously a symptom of a larger problem and only one of many ways someone's race shows up in any number of consequential situations far more difficult to spot, like in neural network derived decisions.

I didn't claim that removing names from resumes was a complete solution, but it's suspicious that there seems to be so little interest in it, given that it could be very impactful.

Additionally, reducing hiring discrimination should have positive downstream effects: As Black people get better jobs, they move up in social class and that reduces stereotyping.

I think the fact that you aren't interested in an incremental solution should make you wonder if you're part of the flawed woke bandwagon you referred to. Remember, lots of incremental solutions can add up to a complete solution.

>No I don't blame crime fighting for the increased resistance to crime fighting because the actions taken by cops that people are actually mad about are crimes and opposing them is crime fighting.

By the same token, many people who claim to advance social progress may actually be advancing social regress.

>That you consider interrogating our culture to highlight racism to be political extremism says a lot.

I consider abolishing the police to be political extremism. See this article: "Yes, We Mean Literally Abolish the Police" https://www.nytimes.com/2020/06/12/opinion/sunday/floyd-abol...

>I'm all done here.

I don't mind, it's tiresome to debate people who argue in bad faith.


I don't see how you've rebutted the study at all; you've argued that it would be possible to offset the racism in job applications by scrubbing names from resumes and using voice changers on interview calls (come on). But both of those interventions just mitigate the racism, they don't directly reduce it.

Meanwhile, you've just cited a totally unrelated (and widely dunked-upon) NYT op-ed as evidence for your position, and then accused the person you're arguing with of bad faith. Physician, heal thyself.


>I don't see how you've rebutted the study at all

You're correct, I haven't rebutted the study. Again, I explained that it doesn't make the point he wants it to make. See first 2 paragraphs here: https://news.ycombinator.com/item?id=34214917

>using voice changers on interview calls (come on)

What I describe has already been done for gender: https://interviewing.io/blog/voice-modulation-gender-technic...

And if researchers can change the name on an applicant's resume to do a study, why can't HR departments do the same thing?

"Come on" is not a good faith counterargument.

>But both of those interventions just mitigate the racism, they don't directly reduce it.

I'm not sure that is true. Imagine someone had the experience of interviewing a Black developer using a voice changer, giving them a thumbs up, meeting them in person for the first time, and learning that they were Black -- counter to the stereotype they held. You don't think that experience could be a powerful demonstration of a person's racial bias which could cause them to rethink some stuff?

Past that, I don't see why the distinction between mitigating and reducing is so important. Racism is bad because it causes bad effects. (CF argument earlier in the thread: "If another race with a far larger population was 50% more likely to get job callbacks than you were, I guarantee you'd agree.") If the easiest way to reduce the bad effects doesn't involve reducing racism, we should do it anyway. The point is to help people, not ideologically purify people.

>Meanwhile, you've just cited a totally unrelated (and widely dunked-upon) NYT op-ed as evidence for your position

I cited it as evidence for the position that some extremists want to abolish the police. It's right there in the title of the op-ed. I don't see what dunking has to do with it.

Here's a woman with almost half a million Twitter followers saying the same thing in 2022: https://twitter.com/BreeNewsome/status/1547223384643231744 She tweets something like this every few months. Here's one of her tweets where she explains what she means in more detail: https://twitter.com/BreeNewsome/status/1267138648174137345


Interviewing.io ran an experiment with voice changers, in part for the publicity value (that's not a criticism). Can you cite a tech company with more than 20 employees that uses voice changers in interviews?

There's a bunch of other problems with what you're saying here (for instance, the idea that you can correct for ethnicity with a voice changer), but they're not interesting compared to this question.


>Can you cite a tech company with more than 20 employees that uses voice changers in interviews?

No, I cannot. It seems that despite the big conversation about diversity in tech, people aren't very interested in practical solutions to reducing discrimination.

It comes down to the point I made about helping people vs ideological purification. Many people prefer the latter. This is the sort of left-wing extremism I referred to earlier.

To be honest, I think the main reason companies don't use voice changers is if they do, the ideological purifier types will go after them. Just in this thread you can see how upset you and DrewADesign got, after I suggested the idea.

>the idea that you can correct for ethnicity with a voice changer

If it can work for gender, why can't it work for race?


> If it can work for gender, why can't it work for race?

Because pitch/tone is different than ethnolect, and voice changers can obscure differences of the first but not the second.


Women and men also use language differently. I doubt this is a big issue in practice -- you could mask the ethnolect using AI.


> Women and men also use language differently

Not as significantly.

> I doubt this is a big issue in practice

It absolutely is.

> you could mask the ethnolect using AI.

That’s not just a “voice changer”, and if you are going beyond pronunciation to alter grammar and vocabulary that is ethnically identifying, with even modern AI, you have a non-negligible chance of occasionally whiffing and radically altering semantics.


Also worth pointing out that this ethno-masking AI is a service that, so far as I know, does not currently exist. Which makes citing it a particularly weak response to the studies that show candidates with identifiably Black ethnicity do poorly compared to white-coded candidates with identical backgrounds.


There does seem to be a bit of conflating “this is an idea which might have potential and be worth investment in researching and developing” with “this is an alternative which actually exists, and if there was any concern people would just use it”.


I only meant to make the first claim, FWIW.

The idea was suggested in response to someone who implied I favored "deliberately avoiding intervention". I mentioned the idea to clarify my position, and give an example of the sort of intervention I'd be in favor of.

If someone turns it into a product, they deserve mega kudos as far as I'm concerned.


Let's not forget that you changed the framing of this discussion, from (paraphrased) "there is evidence that race-based preferences are widespread in hiring, suggesting that we should take seriously the idea of race-specific privileges and disadvantages among candidates", to your preferred discussion of "is it possible to engineer a system that would prevent hiring managers from knowing the race of applicants".


How "upset" I got?


Here's a user telling me I'm "desperate to assuage and protect racist beliefs" because (a) I said we should avoid methods that backfire, and (b) I suggested a practical idea for reduction of racial discrimination in hiring: https://news.ycombinator.com/item?id=34226817


What does that have to do with the question I just asked you? Can you just answer it directly?


You're confusing me with the poster you're responding to.


The best way to put out a fire is to ensure that it doesn't start in the first place.

Racism stems from racist beliefs, and your approach to putting out spot fires ignores the raging wildfire that's spitting them out. Confronting the source of racist behavior, racist beliefs, is confronting the root cause of discrimination.

In the 90s and 00s, much of the anti-racism education consisted of the same things you'd cry "woke" about, but they were instead called "politically correct". The education confronted racist beliefs, engaged with how and why those beliefs were wrong, and gave students an intuition about why those beliefs are wrong when they encounter them in their lives.

CRT, for example, is nothing new. Anti-racism education in the 90s and 00s was direct implementations of the CRT school of thought that existed in the decades prior. At the time, the right-wing was losing their minds over how "PC" it was to say "African American", or to teach that the Civil War was fought over slavery, and not the fig leaf of "states' rights", or that they shouldn't say the n-word but black people can.

"Woke" is the new "PC", and the CRT that gets derided as "woke" is the same CRT that was derided as "PC" in the past, and it is the same CRT that influenced anti-racism education during your halcyon days of the 90s and 00s. That same anti-racism education addressed those very same racist beliefs you're desperate to assuage and protect.

Dancing around the problem and pretending racism and racist beliefs don't exist, because pointing that out makes some people uncomfortable, is something we've tried for decades, and it doesn't work, as MLK has pointed out:

> First, I must confess that over the last few years I have been gravely disappointed with the white moderate. I have almost reached the regrettable conclusion that the Negro's great stumbling block in the stride toward freedom is not the White Citizen's Counciler or the Ku Klux Klanner, but the white moderate who is more devoted to "order" than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says "I agree with you in the goal you seek, but I can't agree with your methods of direct action;" who paternalistically feels he can set the timetable for another man's freedom; who lives by the myth of time and who constantly advises the Negro to wait until a "more convenient season."

Right-wing resistance to the results of liberal democracy is as old as time, and not anything new that suddenly appeared in 2015 like you seem to believe. All of the rhetoric you hear now is the same rhetoric espoused by the right in the decades prior. The right has spent decades getting triggered over direct action and society acknowledging racism, and "woke" is just what they're calling it instead of "PC".

These beliefs existed before 2015, but they were mostly only expressed in "good company" among like-minded individuals, because blatant racism became a social faux pas. They were allowed to fester and go unaddressed because addressing them made people uncomfortable. But then Trump comes in and welcomes to the US the far-right renaissance that was happening around the world, a worldwide renaissance that had nothing to do with the NY Times posting articles that upset you. Trump showed them that no, they don't have to speak about those beliefs in hushed tones; you can wear bigotry on your sleeve and millions of people will celebrate it. Since you seem to take their rhetoric seriously, you can find plenty of far-right leaders and ideologues saying just that: that Trump is "their guy" and that he opened the door to mainstreaming their far-right ideas and rhetoric.


I object to "anti-racist" advocacy which has been shown to be actively counterproductive. It's not a firefighting method if it makes the fire worse.

I don't claim we should never confront racism. I claim we should do it in an evidence-based way.

I went through the public education system in the 90s and 00s, in California's Bay Area of all places, and the discussion around race was very different than the discussion today. The idea that we should treat people the same regardless of race has switched from being a liberal position to being a conservative position. It seems to me that whatever was being done in the 90s and 00s "worked" in a way that what we're doing today does not. Probably because what we're doing today is very different, and is the product of an ideological conversation on social media rather than a conversation informed by data or reasoned discussion.

>But then Trump comes in and welcomes the far right renaissance that was happening around the world, a world-wide renaissance that had nothing to do with the NY Times posting articles that upset you, to the US.

Generally speaking the rest of the world takes their cultural cues from the US much more than the US takes their cultural cues from the rest of the world. Hollywood movies are viewed globally. The entire world follows news from the United States closely. Much of the rest of the world uses American social media websites. Etc.

It's a common observation that the current rise of wokism started in the US and then spread elsewhere in the world, which fits the general pattern. So the overall hypothesis that NY Times left-wing extremism is the root cause of the current worldwide far-right renaissance is pretty plausible to me. As you said -- treat the root cause.

>Since you seem to take their rhetoric seriously, you can find plenty of far right leaders and ideologues saying just that, that Trump is "their guy" and that he opened the door to mainstream their far right ideas and rhetoric.

I don't claim otherwise.


You were just confronted with evidence, and invented an AI ethnic voice masker on the spot to dismiss it. Your evidence, meanwhile, is an NYT op-ed about police abolition. I don't necessarily endorse anything this person wrote (I didn't read it that carefully), but in this thread you've been deploying a number of really weak arguments.


I don't think you've been reading my comments very carefully either -- it seems like you are generally missing the structure of my reasoning, and reinterpreting it as a sort of collage of bits of pieces of what I'm saying. (I'm going to stop responding to you in this thread, because I have a feeling neither of us is getting much out of this discussion -- but if you're interested in understanding my position better, I encourage you to read what I've already written more carefully. Please don't attribute positions to me that I don't hold, and keep in mind that I'm a human who makes mistakes.)


> As a white guy, I'd certainly encourage you not to worry about this -- if we're talking about an upper middle class, professional-ish activity, you can be fairly sure that all those white people feel self-conscious about their group being so white and would welcome you to join them.

Although I appreciate the intentions of such people, as a non-white guy this always made me feel uncomfortable. That I'm not being welcomed for being a new member but rather a new member who's somewhat different (in color in this case). Whether this is at work, sports groups etc. I guess you call that a token?


Yeah that's definitely an understandable feeling. But on the other hand, if representation matters (as others in this thread claim), it seems hard to please everyone? Supposing you had a group that's 100% white by accident, and they know representation matters. Because they're 100% white, they're starting from a disadvantage: If you want representation, it helps to have representation already! So from this perspective, being 10% nicer to non-whites who join the group seems like a natural solution, to overcome the chicken-and-egg problem.

Maybe the right compromise is to just be especially careful not to be rude to people who are underrepresented in the group, and not worry about it beyond that?


But why does representation of different skin colors or ethnicities matter? There are very, very few occasions where it does, but in most everyday situations like work, sports, etc., it really doesn't. I'd rather be classified by an attribute that's relevant to the situation, such as skill level or expertise, rather than an arbitrary attribute like my skin color - why not my height or the color of my teeth instead?


Didn't we already establish upthread why it matters? Why are we backtracking 5 comments deep?


Where was that? I must have missed it.


> Not seeing any person of color in an activity makes it so I don’t want to be the first.

https://news.ycombinator.com/item?id=34210289


The better question to ask yourself (you, the white guy) is: would you join a group [insert some hobby, like hiking] composed 100% of Cholitas? If yes, are you sure? If not, why not?

*Not singling out Cholitas, but it's a group that may feel "other" to you.


As I stated, I've had the experience of feeling self-conscious in a group where I was the only white person (or one of just a few). That never happened when I was a kid, but it has happened a few times as an adult.

With regard to the Cholitas thought experiment, I think my major source of discomfort is that at least one of them would look at me and think to herself "he's a patriarchal colonizer," and I'd feel hostility from that. I'd feel more comfortable if I had a friend in the group who could vouch for me. Otherwise I'd worry the Cholitas didn't want me to be there.

I suppose that works in reverse as well -- an ethnic person might look at a group of white people and worry that one of them was present at Jan 6 or what have you.

What I miss from my childhood was the feeling that we're all in this together as human beings and race just isn't important. In the time & place I grew up, if someone was to start denigrating others on the basis of race, everyone would consider them to be an unimportant wacko. In that cultural context I wouldn't feel as much apprehension joining a group of Cholitas.


I strongly agree. Trading in the "shared humanity" approach for one where we self segregate and pre-emptively expect and search for discrimination at all moments seems like the opposite of progress.


If I lived in a majority-Cholita area, I would not give it a second thought. If I lived in an area with not many Cholitas, I would wonder if there was a reason the group had only Cholitas.


Really? I live in Japan but am not Japanese. I speak Japanese and have many Japanese friends. I’d feel quite different about joining a hiking group that was 100% Japanese as opposed to one that had a mix of Japanese and non-Japanese people.


I don't have experience with Japan. But if I were in an English-speaking country I'd not give it a second thought.


An English speaking country like Ireland? Or an English speaking country like Jamaica, Nigeria or Pakistan?


Yep!


Seems likely there's a Japan/US difference here


I have spent large parts of my life living with and interacting with people who are not my "skin color" so, yes, not a problem at all.


I regularly attend yoga classes where I am the only man. It doesn’t bother me.


The better question is would that group let a lone white guy join?


In my experience of being that 'lone white guy' more often than not, the amount of general acceptance I get in any group where I've been the odd man out has been great. Life is very different offline.


I’m a fellow white man, currently surrounded by Mexicans in Mexico. The only white here. No problems at all. I’m treated as an individual and treat others the same.

It's very refreshing. It's like the USA in my youth. Back in Chicago, the racial tension with blacks is far greater, but still not as bad as one would assume. There are a few who think their purpose in life is to scold you. A lot of that is just culture, though; victimhood culture is permeating all racial groups in the US.


Heck yes. But I'm weird that way. It took me until I was in my 40s to realize the majority of women I had dated would be defined as black in most people's eyes. I hadn't even thought about race or defining anyone. I'm super white with blue eyes but my dad has what my mom termed a 'Mediterranean complexion' and my grandma was even 'more Mediterranean' with 'kinky' hair (she wasn't being racist just trying to point something out to me that I failed to understand, even after her pointing them out). My poor mom. She tried to explain my grandparents didn't approve of my dad because he was different. I was like 'oh, because of his long hair'. My mom gave up at that point.

I feel bad for the woman who kept pointing out she was from Oakland and I kept just treating her like the Santa Cruz surf betties I had dated before not realizing what she was hinting at or that I might be taking her out of her comfort zone. I like to think the date we boogie boarded all day then ate pizza in the back of my truck while the porpoises swam by (learn their schedule Las Selva/Aptos peeps, 'running into' porpoises at the beach adds a little extra magic to a date for valleys not used to it) was good and not too uncomfortable.


"As a white guy, I'd certainly encourage you not to worry about this -- if we're talking about an upper middle class, professional-ish activity, you can be fairly sure that all those white people feel self-conscious about their group being so white and would welcome you to join them."

This isn't an attack, and your sentiment seems genuine, but obviously this person knows his fears are not logical. I doubt most people can will themselves into feeling accepted, even if others, like yourself, do their best to make them feel accepted.

That's why representation matters.


[flagged]


MLK also said this:

> I must confess that over the last few years I have been gravely disappointed with the white moderate. I have almost reached the regrettable conclusion that the Negro's great stumbling block in the stride toward freedom is not the White Citizen's Council-er or the Ku Klux Klanner, but the white moderate who is more devoted to "order" than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says "I agree with you in the goal you seek, but I can't agree with your methods of direct action;" who paternalistically feels he can set the timetable for another man's freedom; who lives by the myth of time and who constantly advises the Negro to wait until a "more convenient season."


And in I Have a Dream, he said this:

>There are those who are asking the devotees of civil rights, when will you be satisfied? We can never be satisfied as long as the Negro is the victim of the unspeakable horrors of police brutality. We can never be satisfied as long as our bodies, heavy with the fatigue of travel, cannot gain lodging in the motels of the highways and the hotels of the cities.

>We cannot be satisfied as long as the Negro's basic mobility is from a smaller ghetto to a larger one. We can never be satisfied as long as our children are stripped of their selfhood and robbed of their dignity by signs stating: for whites only.

>We cannot be satisfied as long as a Negro in Mississippi cannot vote and a Negro in New York believes he has nothing for which to vote.

Since then, most of these problems have been greatly reduced.

Consider police brutality, for instance. On the scale of the entire population, it is small. The Washington Post says they know of 7 unarmed Black people killed by police in 2022. Of those, the majority were fleeing the scene of the crime. See https://www.washingtonpost.com/graphics/investigations/polic...

There are about 45 million Black people in the US, so those 7 victims represent around 0.00002% of the Black population.

And we can hardly say that "basic mobility is from a smaller ghetto to a larger one" after a 2-term Black president, who many Black people (including in New York) voted for.

Since injustice has shrunk since King's day, it follows that our concern for that injustice should be smaller as well. Same way it wouldn't be reasonable to murder a clerk who didn't give you the proper amount of change after a purchase.


I agree, and I would extend that to sex and gender as well, and say that we should treat every person equally and according to their character.


Growing up this was a big realization for me the other way. I was raised by evangelicals in Kansas, including going to a mennonite school for most of my education. As a teenager I realized how at my school, at my church, at basically everything in my family's shared life, there was not a single person with brown skin. Not even one. And worse, no one had real awareness of this, and would not understand how it could be any sort of problem if you brought it up. To them it was just the natural order of the universe that their world was purely white people.

There are plenty of evangelicals of every race, but overall the communities are very segregated. It was clear that, whatever their professed faith, in practice the community I grew up in was hostile to PoC.


[flagged]


Obviously I understand my childhood community in far more detail and depth than your rather transparent question allows for. My conclusion about their racism is based on far more than I mention, but yes, it has since proven to be a very clear smoke signal. Other places I've seen the same thing, though not as severely, are snow sports and auto racing.

I have no desire to retread the 101 of how "white" cannot be treated as logically equivalent to other racial labels. Go do some reading on the subject. A good place to start is trying to enumerate exactly what heritages count as white, and how that has changed over time.


> Go do some reading on the subject. A good place to start is trying to enumerate exactly what heritages count as white, and how that has changed over time.

This is clearly from someone who hasn't experienced other cultures. The white, black, Asian, and Latino labels were all made up around the 1700-1800s, and who's included in each has been argued over all over the world even now; they are human-created concepts that have changed over time. Like most Americans, you're clearly making up for the biases of an all-white upbringing and a systemically racist past by being anti-white now.


[flagged]


I answered you clearly. Ethnic homogeneity is a smoke signal. Ethnic whiteness is exceptional.

You're past my line with ascribing what I think and who from in a way that's literally just you ranting your imagination at me. You have no idea my mind on any of these things based on our conversation thus far.

Talk about the self report...


If you were teleported back to the year 0 and went to what is now the US and encountered a large ethnically homogeneous tribe, what exactly would that smoke signal to you?

(Editing: as far as "ascribing what you think" goes, the only things I've ascribed to you are that you think ethnic homogeneity in a post-European white population is morally improper, which is exceedingly clear from the phrasing of the first comment you posted when you called it a "problem", and that you believe your community was racist, which is clear from the second comment you posted.)


Imagine being so reductive you lost the point along the way


"Imagine an absurd hypothetical year zero where no contextual understanding of the history of racism is relevant, then what would you do?"

Sure buddy.


>> Ethnic homogeneity is a smoke signal

The Kenyan soccer team doesn't have any white players. Are they racist?


How on earth do you people think this is some rhetorical slam dunk. You're literally telling all of us who you are. What I want to say to you past that violates the rules of this place, but I'm pretty sure you can get the idea.


"You people". I don't know, but that sounds pretty racist.


> makes it’s so I don’t want to be the first.

This makes me curious. Would you mind answering, from your point of view, why do you think that is? Is it a specific scenario or type of scenario you wish to avoid or is there a generalized concern that comes with it? Is that due to uncertainty or past experience?


There's probably many reasons but one that comes to mind is, if I'm first am I an ambassador for my race now? Probably. The people here might be self selecting in other aspects of their life so they may not actually be around PoC anywhere else.

Another aspect is that I don't have that shared connection or ease of relating to people in the group. The activity has obviously selected for a specific group and there are probably other aspects of their lives that are also different from my own.

Past experience, mostly. Trying to make friends where the majority is white was difficult, though I was in university and far more awkward then. One on one or in smaller settings, it's easier, and at work it's relatively straightforward to become friendly with anyone.


Frankly, if the group is large enough, homogeneity likely means that you, the minority, weren't the first… the first(s) got pushed out.


Similarly this gives me pause about Ivy League MBA programs.

Although it's a small sample size, despite their ambitions the BIPOC people I know haven't been able to reap any professional benefits from it. Whether it's access to executive roles, getting taken seriously by venture capital firms, or their attempts to join venture capital firms, there is a level of discretion in these team-forming situations that is not extended to them. Whether or not it has anything to do with their race, it's pretty clear the upward mobility is not coming from this credential.

For people with their own capital and leverage, it amplifies their ambition if they want. BIPOC don't really have this.

The "average salary" of MBA alumni is not what is interesting about getting one, for me or for them.

Some, or more, examples to the contrary would make it seem less like a total waste of time.


So even with all the BLM advocacy, there is no pipeline for Black people with MBAs from Ivy League schools? That doesn't fit with what I've seen in other industries. Companies fell all over themselves to hire Black people. Is this not happening in MBA-land?


Read it again: it's not about getting hired, it's about getting capital and decision-making roles that skip the corporate ladder.

These have an overrepresentation of Ivy League MBAs in them, with a pedigree that has nothing to do with promotions. People see the Ivy League MBA as key to this and aren't interested in the other things an MBA can do. The jobs these disillusioned alumni have were attainable without the MBA.


There are very typical archetypes that benefit from top MBAs and if your background isn’t one of those, then it won’t help. For example I doubt an associate (who is black) at a major IB firm gets much worse outcomes from an Ivy League MBA than their white peers. If black people are over represented in the people who are admitted but not from those backgrounds that stand to most benefit, it would stand to reason their outcomes would be on average worse. But that doesn’t mean taking an arbitrary white MBA grad and making them black would make their outcomes much worse


my first post covered all of this rehash posted as a rebuttal

"despite the small sample size"

"despite their ambitions"

for people with the typical archetype "it amplifies their ambition if they want"

"whether it has anything to do with their race or not"

"the “average salary” of MBA alumnis is not what is interesting about getting one"

The point is to find more examples showing it's not a waste of time for ambitious BIPOC, and what archetype would help integrate them into the archetype that often happens to also have an MBA. Is it possible to obtain this archetype, or are we really in a caste system?


Your original claim was much broader:

> despite their ambitions the BIPOC people I know haven’t been able to reap any professional benefits from it


because they didn't need the MBA for the roles they did land

it's all part of one contiguous, complete thought


You also say:

> the connections resulted in no improved professional access or ability to exchange time for food and shelter.

When you make such broad statements, then walk them back in comments downstream, it makes conversation unfruitful.


When you ignore such large chunks of context, it makes conversation unfruitful


you’re unwilling to view it as a coherent and related thought

“improved” is the operative word that you seem to ignore or view as contradictory

the MBA did not improve their work prospects; they already had work prospects. You asked for a clarification, and now you call the clarification backtracking.

Do you have an actual opinion on the topic, now that it's clearer for you?


> the BIPOC people I know haven’t been able to reap any professional benefits from it

Probably because there's almost always an Asian who's more qualified or a white person with better connections. The problem is that the key insight you get from a BIPOC is that BIPOC tend not to have much money (setting aside equity and vicious cycles for another day), so unless they're your market, you're not interested in marketing to people who can't afford your products.


But this isn't about marketing, or about needing a BIPOC to point out BIPOC markets.

That's a really odd take; how did you start with connections and end up at consumer marketing?

Is this a personal experience, or is this non sequitur a common sentiment? Because if it is, we need to address it as ignorance.


I'm looking how diverse candidates from Ivies meaningfully differentiate themselves. Ivies admit to discriminating against Asians, so on average, a BIPOC from an Ivy will be less qualified than an Asian. Why would you hire them, then? The pitch for diversity is always that diverse teams make better decisions, but it's not racial diversity that does it, it's different lived experiences. Companies exist to make money, so how can they turn someone's insight from an underprivileged background into sales when poor people buy less? This was also in the context of an MBA program where the whole purpose is making money. It'd be different for a PM of an app where 100% reach is the goal.


Thanks for explaining that. I think some of the assumptions in this decision tree are flawed:

1) Admissions into the Ivy would be biased, but the education should be the same as before, since Ivy admissions were always biased in some way. So whatever their alumni (and dropouts) were known for in that brand would still be the same.

2) Better aggregate decisions due to different lived experiences don't just mean selling consumer products to underprivileged people. You're really going to have to work on what people mean by that: "sales to poor people" isn't the whole universe of different lived experiences; there can be other market inefficiencies with a good TAM, very similar to non-revenue-based reach.


I am not an ivy league MBA person but my understanding of these sorts of things is that it's not the credential itself that's useful, but the connections you make in the process.


yes, all participants were aware of this, and yet the connections resulted in no improved professional access or ability to exchange time for food and shelter.


Are you willing to give some examples? I’m curious in what contexts people feel this way.


A specific example is if you look at popular sailing channels on youtube. I got into sailing last year and I didn't notice it at first but every popular channel is white. They have events where they meet fans and it's crazy the lack of colour in the group shots. I really wish I hadn't noticed it because it's a bit off putting. It's also surprising that I've only begun to care now that I'm much older.


Sounds like a good enough reason to mandate quotas for everything everywhere. Anything else on your wish list?


I've noticed #2 with my 4 year old. She won't do anything unless there's a girl doing it. My other kids don't seem to care though.


There are some interesting studies about girls and women playing chess. When players don't know their opponent's gender, the two sexes perform on average about the same, but when they do know it, girls and women underperform.

It is hypothesised that the reason behind this is that chess is a boys' club, so to speak, and thus there is not a lot of representation.

Absence of representation means it may seem that you are the only one doing XYZ, which in and of itself can be terrifying, because we often feel that the odds are stacked against us (which is a self-fulfilling prophecy), or that we are held to much higher standards than others.

Personally, I enjoy seeing diverse representation even if I am not represented. I want people to not be afraid to pursue their dreams and goals, I don’t want implicit prejudices due to lack of representation either.


I’ll bet this does not replicate, the effect of stereotype threat is notoriously overvalued. Don’t fall for junk science.


Women who dedicate their careers to chess still dramatically underperform men at the professional level. There is even a separate rating threshold and title system for women: the "woman grandmaster" title requires about 200 fewer Elo points than the open grandmaster title. No woman has ever won the World Chess Championship, and there are far fewer female GMs than male GMs.
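
For scale, the standard Elo expected-score formula (an editorial aside, not part of the comment above) implies that a 200-point rating gap corresponds to the stronger player scoring roughly 76% of the points:

    E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A - R_B = 200 \ \Rightarrow\ E_A \approx 0.76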


Some women (Polgar, for example) have argued that the separate women's titles significantly contribute to the gap in skill.


How do women and men do versus AI opponents, i.e. without any gender? Wonder if chess.com has the data?


We all get slaughtered. It's possible Magnus could make one draw with white against a not-current Stockfish on limited depth out of a hundred games, but that would be about the best any of us could do.


Chess engines support the concept of skill level. Pretty much any chess program or app supports this out of the box, such as the aforementioned chess.com. The only times they are run unhindered are for analysis, engine testing, or fun.
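
To make the skill-level idea concrete, here's a minimal sketch (an editorial example, not from the thread) of dialing an engine's strength down via its UCI options; it assumes the python-chess library and a local Stockfish binary, whose path below is a placeholder:

    # Minimal sketch: limiting an engine's strength via Stockfish's "Skill Level" UCI option.
    # Assumes python-chess is installed and a Stockfish binary exists at the path below.
    import chess
    import chess.engine

    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")  # path is an assumption
    engine.configure({"Skill Level": 5})   # 0 = weakest, 20 = full strength
    result = engine.play(board, chess.engine.Limit(time=0.1))  # ask for a move with a small time budget
    print("Engine plays:", result.move)
    engine.quit()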


Badly? AI is vastly superior to human capability; no human wins.


There's a very interesting point in the thread below which would resolve the question - we could use chess puzzles! If there is a mismatch between performance on puzzles with live results, it would be pretty solid proof that the effect is real. It doesn't have to be full on AI.


I've always wondered about this, actually.

With almost all sports males have a dramatic physical advantage[1] so it's not really fair to have males compete with females.

With chess there is no physical advantage, so why keep separate tournaments for males and females? What is the reasoning for preventing males and females from competing against each other in chess?

[1] In the 90s, ISTR reading about a tennis match between a #3-ranked female player and a #5000-something-ranked male, and the male player simply dominated. I wish I could find that article now, but it was a print mag and I don't remember which one it was.


While we shouldn’t fall for junk science, we also shouldn’t fall for cynicism as an excuse to stick to our priors. It might not replicate if you try to do so, but until someone does, we shouldn’t write it off.


Surely the burden of proof should be on the side making the claim?


I’ve in general grown somewhat averse to the “burden of proof points in some direction” line of thinking.

They are both making claims. (And I think usually most people are making some sort of claim, unless they are just nitpicking). One is claiming that they remember a study — pretty vague, and easily proven (as far as we can tell at least) by just showing us any study. The other is claiming that that study doesn’t replicate… without having seen it, which seems like a pretty high bar to clear.


Most studies in this reference class (social science) don't replicate, so I'm fine with that being the default assumption.


It is hard to speculate without knowing much about the original poster or the types of studies they read. Maybe they prefer “Nature Human Behavior,” who knows?



One sort of annoying thing about this site is that it seems to be convention to ask for sources, but conversations quickly fall off the front page and so when you come back with the sources, nobody is left to respond! But this looks at least to me to be some good stuff, the second link appears to be at least somewhat of a replication.


And the claim in this case is "it does not replicate".


https://onlinelibrary.wiley.com/doi/abs/10.1002/ejsp.440

The study. Note that this is about taking equally matched players, and having them compete via the Internet without knowing the gender of their partner, then again knowing the gender.


This reads as needlessly aggressive, and the accompanying downvotes seem unjustified.

In addition to the other studies published by Wiley, I have two more:

https://journals.sagepub.com/doi/10.1177/0956797620924051

https://journals.sagepub.com/doi/10.1177/0956797617736887


@dang

Why is GP flagged but parent isn’t?

Parent adds nothing to the conversation, but apparently GP was worth removing even though it is now supported by 3 different sources.


I think I found the study: https://onlinelibrary.wiley.com/doi/abs/10.1002/ejsp.440

And a related meta study that looks at sports in general: https://www.sciencedirect.com/science/article/pii/S146902921...


> There are some interesting studies about girls and women playing chess. When blind, the two sexes perform on average about the same, but when they know the gender of their opponent, girls and women underperform.

My guess is that it's due to how expectations affect how likely we are to enter flow state. If we subconsciously expect ourselves to enter a challenge in less than good/fair/even conditions we might be less likely to engage fully because we're using our attention for questioning ourselves/the situation.

At least that's how I feel I myself work.


> 3. Rules in life are just constructs that we as humans have created.

> Starting a business helped the most on this one. That's when I started to see that "rules" or "procedure" are all made up and exceptions can always be made.

This is a big one that I learned through the same experience. Everything is arbitrary, the rules are made up and the points don't matter.

It made me appreciate those who recognize this and, in turn, treat others well, even when shit metaphorically hits the fan. And I have absolutely zero tolerance for bullshit hoop-jumping and assholes: I know you don't have to play by that book, so I won't, and I'm happier for it.


Interesting interplay between 2 and 3. Why does representation matter so much if these are just human-created constructs? Genuinely interested to know how the same person had both of these epiphanies.


The most popular humanly constructed narrative these days in the US is that representation is everything so I tend to agree with your point. There is an interesting immediate clash between the two.


A construct being human-created doesn't mean it isn't important, or that we can simply ignore it. Just means it can be changed, unlike a law of nature, for instance.


Ha, interesting observation. They didn't happen at the same time. #2 followed by #3 much later.


why does debt matter so much? human-created construct


Point 3 also hit me a few months ago during an episode of Dan Carlin's Hardcore History podcast about Rome. Caesar got back from his conquests, and half the Senate wanted to punish him for crimes against humanity that he had committed during his wars. However, he had also claimed massive amounts of land for Rome. Long story short: he wasn't punished. And this happened 2000 years ago.


Napoleon wasn't Roman. Do you mean a Caesar?


Yes. Well spotted. Thanks.


Did they actually care about crimes against humanity? I thought they were worried about the amount of power he had?


> 1. Everyone is the main character in their own story.

Here is just one example of the total wrongness of something I tend to be automatically sure of: everything in my own immediate experience supports my deep belief that I am the absolute centre of the universe; the realest, most vivid and important person in existence. We rarely think about this sort of natural, basic self-centredness because it’s so socially repulsive. But it’s pretty much the same for all of us. It is our default setting, hard-wired into our boards at birth. Think about it: there is no experience you have had that you are not the absolute centre of. The world as you experience it is there in front of YOU or behind YOU, to the left or right of YOU, on YOUR TV or YOUR monitor. And so on. Other people’s thoughts and feelings have to be communicated to you somehow, but your own are so immediate, urgent, real.

David Foster Wallace


> 3. Rules in life are just constructs that we as humans have created.

Corollary to this is that human rules aren't like programming rules and that the words that make up the rules get interpreted by a human.

One thing that this means is that you can't "hack" a human rule by picking the semantic meaning of a word which works best for you. You have to actually convince the arbiter of that rule that they agree with your meaning. If they don't, and they have a hundred years of legal ruling behind them that they've read and you haven't, then you're screwed.

And good human rules usually do have exceptions to them ("yelling fire in a crowded theater" being the most well understood). This is also why "the exception that proves the rule" is not a stupid saying.

And this is a feature, not a bug. The worst rules we generate are usually the ones that require the human arbiter to be rigid and mechanical. That tends to produce injustices like "three strikes you're out" and "mandatory minimum sentencing" (or any attempt to make the handball rule in soccer/football objective, which just winds up making it worse).


To further your point 1, I think also “Everyone sees things through their perspective”.

So when you explain something, 10 people hear 10 slightly different things as their own experiences and biases, and even hopes, interpret your statement.

That’s why being able to communicate accurately, clearly, and concisely is a very difficult and important thing. If you can do it tailored to specific groups and with humour, bonus points.


While I do see merit in the idea that representation matters, to me, it all depends on what representation in particular. Racial representation gets all the attention.

But, for example, seeing that a philosophy major can have a successful career in computer science is also a kind of representation that matters.


I don't want to downplay the importance of diverse academic backgrounds, but representation of immutable characteristics (gender identity, race, disability, etc.) usually gets the most attention because those are essentially always the things that have an impact on kids. Your little girl or your autistic child is going to pursue paths they maybe wouldn't otherwise take because of representation. The earlier you get a child to engage with something, the better their outcomes with pursuing it.

Representation is literally about people seeing themselves. If you can't nail down what "people like me" look like when it comes to representation, it might be that representation isn't quite the right way to frame the problem. It's only one aspect of diversity.


Any suggested reading for #1 ("learn how to navigate $THIS")?


It's old and perhaps a little controversial, but "NLP: The New Technology of Achievement" has good examples and abstractions that could help you with #1.


Controversial in that it's generally considered a pseudoscience. I'd advise against anyone looking to figure out how to navigate this wasting their time with pseudosciences.


> it's generally considered a pseudoscience.

Only by people who have no experience with it.

NLP ("Neurolinguistic Programming", not Natural Language Processing) is pre-scientific, yes, but that's not the same as being a pseudoscience, and calling it such is a disservice.

We have pretty much discovered the operating system of the human mind. By applying the principles and techniques discovered and systematized under the rubric of NLP, one can do amazing things.

- - - -

It is strange that academic research psychologists seem to have trouble with it. But that's their problem, it doesn't change the fact that people all over the world are learning and using this stuff every day. If academic psychologists can't understand it that just makes them irrelevant, it doesn't change the facts.


Plenty of people all over the world are learning and using the teachings of Scientology. That doesn't make them right or even useful


> learning and using this stuff every day

Anecdotal evidence isn't evidence, it's anecdote. You're basing this all off what people say works for them. There's a strong possibility that having learned this gives them extra confidence, or makes them try harder, which in turn leads to improvements. It's like giving someone sugar pills and telling them they're amphetamines.


I wasn't cured of clinical depression by "extra confidence" or "trying harder".


This is a case of correlation, plus (again) totally unsubstantiated anecdotal claims. Suggesting people use unproven, non-evidence-based approaches to an illness that leads almost a million people a year to take their own lives is reckless and potentially harmful. You believe NLP "technology" worked for you -- cool. Some people believe taking LSD worked for them -- cool. Some people say L. Ron Hubbard's "technology" has helped them. I won't take that away from them, but if they start pushing it on others, that's deeply wrong.

There are plenty of evidence-based methodologies that work for statistically significant portions of the population, and those are administered by doctors and professionals.

But where you cross the line is when you start telling other people to do this too, and that it will work. I won't take away from you the fact that you found something helpful in this. People find all sorts of different things helpful. But I firmly believe that trying to make this sound like a sure thing is immoral and dangerous.


> This is a case of correlation, plus (again) totally unsubstantiated anecdotal claims.

I went to one of the greatest hypnotherapists in the world for help with depression; he put me in a trance and did something, and when I came out of the trance my depression was gone.

Dismissing that as mere "correlation" seems illogical to me.

In re: "unsubstantiated anecdotal claim" well, you got me there. I'm just some rando on the internet, eh? Nevertheless, it's true. (I've detailed the experience before on HN, use algolia if you're curious, eh?)

> Suggesting people use unproven, non-evidence based approaches,

Ah, but I am the living proof. My life is the evidence. This is primary data.

> [Suggesting NLP] is reckless and potentially harmful.

I don't see that at all. In fact, to me this situation seems exactly the opposite. NLP is a profound breakthrough in human psychology. I feel you're being "potentially harmful" by calling it pseudoscience and warning people away from it.

I don't doubt that your motives are good and that you're sincere, it's just that you're on the wrong side of history on this one. NLP will eventually have a scientific grounding, perhaps after today's scientific community ages out, but it's inevitable.

(As an aside, when you say it's "reckless" to talk about how NLP cured my depression, what exactly are you getting at? It's not like NLP is a drug with dangerous side effects, eh? It's just talking, after all.)

> You believe NLP "technology" worked for you -- cool.

No dude, not cool. I want to know what happened to me. I want us to "do science" to NLP! I want to know how and why it works.

This "science has debunked NLP" line of nonsense is not helpful.

> There's plenty of evidence-based methodologies that work for statistically significant portions of the population; and those are administered by doctors, and professionals.

For depression? Really? That's terrific! Still doesn't change the fact that NLP works. Again, like I said, if scientists can't figure out how and why the models and techniques of NLP work that just makes them look incompetent.

> But where you cross the line is when you start telling other people to do this too, and that it will work.

Well, at risk of sounding snarky, I must say it's a good thing you aren't the one who gets to decide whether I can recommend NLP to people or not!

> I firmly believe that you trying to make this sound like a sure thing is immoral and dangerous.

I apologize for that. NLP is not a panacea. There is a wide variance in NLP therapists, and results. It's not a "sure thing".

(That said, for phobias, for example, there is a concrete, repeatable algorithm that alleviates them, and my understanding is that it's pretty much 100% effective.)

- - - -

I tell you what, it really is puzzling to me why academics have a hard time with NLP. You sound like you have read some of these scientific papers that supposedly "debunk" NLP. If you'll cite some I'll look them over and see if I can detect how or why they failed. I mean, I'm not an expert in NLP nor psychology, but perhaps we can salvage something constructive out of this exchange?

- - - -

edit: I saw when I posted this comment that you have replied to the other sub-thread ( https://news.ycombinator.com/item?id=34226389 ) meanwhile.

I've got to do some other things for a bit but I'll reply in a few hours.

In any event, well met! :)


> pre-scientific

What does that mean?


Nothing. Either something follows the scientific method, or it does not. Pre-scientific is a fancy way of saying "we have no evidence for this". Bear in mind, this field is decades old, and has been subjected already to scientific study. All of which have debunked it.

In the typical style of SCAMS (supplemental, complementary, and alternative medicines), this line really just means they're waiting for the one study to validate them in the face of countless studies that have not.


You don't have any actual experience with NLP do you?


No, but I'm trained as a scientist, so I know how to read research, data, and scientific studies. I don't need to have experience with Scientology to tell you that, based on the available evidence, it's bunk. Nor do I need experience with homeopathy to tell you that water-memory is unscientific and nonsense. Direct, personal experience, isn't how the scientific community collects data, and converts that data into knowledge. It's rigorous study, and testing. NLP has been subjected to that, and it has failed to stand up.


>> You don't have any actual experience with NLP do you?

> No

Thank you. That's really the best I can hope for in conversations about NLP with skeptical people. I'm not going to convince you, you're not going to convince me. But I appreciate your honesty in admitting you do not have first hand experience with NLP.

I am not a scientist myself, and I'm not quite comfortable calling myself an "amateur scientist". However, I am very very pro-science. I consider science to be one of the most important human activities. I also value skepticism. (E.g. IMO James Randi is a hero in the intellectual life of humanity.) However, that said, this is an area where science is out of the loop. This stuff works. It's rigorous and repeatable. It's not even hard to do.

> NLP has been subjected to [rigorous study, and testing], and it has failed to stand up.

That can't be true.

I don't know what the boffins did, but if they can't replicate these simple patterns and techniques they just make themselves look foolish.


You've skipped pretty much most of my argument, but I'll humour you nonetheless. There's no such thing as science being out of the loop. Read my other comment: if you're pushing this unverified "technology" as a cure for depression, you're doing something dangerous and immoral. Just because you found value in this does not mean it's an alternative to evidence-based and tested methods. I also solved my depression by very alternative methods, but I strongly advise people against the methods I used, since they're very much "your mileage will vary" methods.

>If they can't replicate these simple patterns and techniques they just make themselves look foolish

Consider that the scientific community, a group that has studied everything from astral projection to the reaction that created the atom bomb, has been unable to replicate the anecdotal results; it's possible the anecdotal results are caused by confounding factors.

The power of "I want to believe" is very strong.

> That can't be true.

But it is. From a 2010 meta-analysis:

The huge popularity of Neuro-Linguistic Programming (NLP) therapies and training has not been accompanied by knowledge of the empirical underpinnings of the concept. The article presents the concept of NLP in the light of empirical research in the Neuro-Linguistic Programming Research Data Base. From among 315 articles the author selected 63 studies published in journals from the Master Journal List of ISI. Out of 33 studies, 18.2% show results supporting the tenets of NLP, 54.5% - results non-supportive of the NLP tenets and 27.3% brings uncertain results. The qualitative analysis indicates the greater weight of the non-supportive studies and their greater methodological worth against the ones supporting the tenets. Results contradict the claim of an empirical basis of NLP.

18% is statistically insignificant.

Frankly, I'm glad you were able to overcome your depression. Really, the fact that you did this speaks more to your strength as a person than the strength of NLP. Overcoming depression isn't easy -- it's something I have also struggled with. Rather than credit this, I think you should be crediting yourself.


> You've skipped pretty much most of my argument,

Well, it wasn't very good...

Frankly, comparing NLP to Scientology or homeopathy is silly.

> but I'll humour you nonetheless.

I appreciate it.

This isn't my first rodeo talking to scientific skeptics about NLP. Like I said, usually the best I can hope for it that the person I'm talking to is willing to admit they don't have any first-person experience with NLP. (They never do.)

It's not like I'm going to start a formal research program myself, eh?

Nor is reading a science paper purporting to debunk NLP going to cause a relapse of my old depression, eh? (God I hope not! Now that would be some foul magic, eh?)

> There's no such thing as science being out of the loop.

I don't understand what you mean. You're obviously not trying to say that science knows everything already, yeah?

Consider the evolution of the field of Chemistry: it started as the pre-scientific investigation of the properties of matter we call Alchemy (which was shunned and even illegal in many places and times) and gradually became systematized into Chemistry, a proper science. Psychology is in the middle of this transition from an alchemical body of knowledge into a systematic science. NLP is at the forefront of that process, the cutting edge. It's the first systematic, rigorous, repeatable school of psychology in human history. It's kind of a big deal.

> if you're pushing this unverified "technology" as a cure for depression, you're doing something dangerous and immoral.

Ah! Let me be clear: I am not "pushing [NLP] as a cure for depression" and if it seemed that way I misspoke.

One of the primary inventors of this "technology" did cure me of depression in a single session of hypnosis lasting no more than ten minutes. I am aware of how outlandish that sounds. Nevertheless, it's true. (And I'm sure my "miraculous" cure is not even in the top 100 of his most amazing interventions.)

I don't see the danger or immorality of sharing my story.

> ... does not mean it's an alternative to evidence-based, and tested methods.

Sure it is! It works. There's lots of evidence and it's been tested over and over again all over the world.

The problem here isn't that NLP is unverified. I verify it. The problem is that the folks who call themselves scientific psychologists evidently can't even begin to understand it well enough to study it!

>> If they can't replicate these simple patterns and techniques they just make themselves look foolish

> Consider if the scientific community; a group who has studied everything from astral projection, to the reaction that created the atom bomb;

It's not one group, eh? To be clear, I trust physicists (in re: physics) more than I trust my mother.

> has been unable to replicate the anecdotal results, that it's possible the anecdotal results are caused by confounding factors.

Sure it's possible. But it's wildly unlikely.

At this point (2023) we're talking about millions of people using this stuff everyday vs. a handful of "scientists" who somehow can't replicate these simple techniques and patterns that (again!) millions of normal everyday people can! It's very weird.

> The power of "I want to believe" is very strong.

Er, kind of a tangent but one of the discoveries of NLP is the formal subjective structure of belief. The upshot of that is that you can literally rewrite your beliefs as you see fit. So yeah, I don't "want to believe", I believe what I want. Part of the reason I am so committed to empiricism is that I can't rely on belief.

> > That can't be true.

> But it is.

Let's not descend to "yeah huh" and "nah uh", eh? I know that some scientific psychologists have published some papers, but there has to be some error or incompetence. That's the only possibility (in my world view.)

> From a 2010 meta-analysis:

Got a link?

I'm going to tear though the abstract you quoted, and then go look for this paper. (I can do this all day. I am right, and this is important. I have infinite time and patience for this discussion.)

> The huge popularity of Neuro-Linguistic Programming (NLP) therapies and training has not been accompanied by knowledge of the empirical underpinnings of the concept.

Right! That's the problem!

> The article presents the concept of NLP

First error: "NLP" is not a single concept. "Neurolinguistic Programming" is a catch-all term for a whole constellation of models and techniques, as well as the attitude and methods that evolved the models and techniques.

> in the light of empirical research in the Neuro-Linguistic Programming Research Data Base.

I've never heard of "the Neuro-Linguistic Programming Research Data Base". I'll look it up, or do you have a link or more context?

> From among 315 articles

On NLP?

> the author selected 63 studies

By what criteria?

> published in journals from the Master Journal List of ISI.

What's the ISI?

> Out of 33 studies,

Are there links to these individual studies? I don't have access to science journals.

> 18.2% show results supporting the tenets of NLP, 54.5% - results non-supportive of the NLP tenets and 27.3% brings uncertain results.

I'd really like to see the individual papers. Without looking at them there's nothing constructive I can say.

> The qualitative analysis indicates the greater weight of the non-supportive studies and their greater methodological worth against the ones supporting the tenets. Results contradict the claim of an empirical basis of NLP.

Right, so to me that just means that the science was done poorly. That's all. I know NLP works, so from my POV the problem must be with the methods of investigation.

> 18% is statistically insignificant.

I'm not a statistician, but it's not important. If you're willing to stick around and talk about it I'm more than willing to go through each and every one of those 33 or 63 or 315 studies and give my considered opinion of what they might have been doing wrong. Like I said in the other comment, I'm not a scientist nor an NLP expert, but for what it's worth (which might not be much) I'm willing to read the studies and tell you what I think.

> Frankly, I'm glad you were able to overcome your depression.

Cheers! It's hard to overstate how badly off I was. As you no doubt know, it can be hard for people who don't suffer from something like that to understand what you're going through. I remember one point in high school my mom turned to me and said, "I know you're not faking this because no one would ever deliberately be this miserable." (It was more comforting than it sounds.)

> Really, the fact that you did this speaks more to your strength as a person than the strength of NLP.

No. It doesn't. The fact that I was never suicidal speaks to my strength as a person.

The fact that Dr. Bandler cured me of depression in ten minutes speaks to the strength of his invention NLP.

> Overcoming depression isn't easy

Except it was. It took less than ten minutes. I didn't even have to do anything, I was in trance, just sitting there. Subjectively the process took only moments. It was only afterward that someone told that it had been about ten minutes. I've had farts that were more difficult!

It behooves us to pay attention to this fellow, eh?

> it's something I have also struggled with.

I'm so sorry. It's not something I would wish on anyone.

> Rather than credit this, I think you should be crediting yourself.

I know, but that's because you're wrong about NLP being bunk. It really was the world-class hypnotherapist who cured me.

I do credit myself with going to see Dr. Bandler!

Seriously, I had to do a lot of work on myself (using NLP) before I could even go to a therapist at all, and then I went to several in a row, who each helped but not enough. Eventually a relative died and I inherited enough money to afford to go see Bandler. If I hadn't, who knows... It's not worth thinking about.

Well met!


I just found the paper:

https://doi.org/10.2478/V10059-010-0008-0

"Thirty-Five Years of Research on Neuro-Linguistic Programming. NLP Research Data Base. State of the Art or Pseudoscientific Decoration?"

Polish Psychological Bulletin, 2010, vol 41 (2), 58-66

Oh, but I saw that on the paper itself the author's name has an asterisk:

Tomasz Witkowski*

This leads to a footnote at the bottom of the first page that says:

*Klub Sceptyków Polskich [Polish Sceptics Club]

(Followed by a physical address and an email address.)

So he's a Skeptic. In other words, he's probably strongly biased against NLP from the outset, eh? I'm no longer sure that reading this wouldn't be a big waste of my time.


When someone presented with overwhelming evidence to the contrary further entrenches themselves in a belief, we're no longer dealing with a matter of knowledge, but of belief. Reading your response, you're just trying to debunk overwhelming evidence, and rejecting most of it out of hand. Or rejecting it in a way that indicates you don't understand how to interpret it. There's no real point in carrying on.

It is also worth noting that Richard Bandler's claims to hold a PhD are highly dubious. There's no record of a dissertation submitted to the University of San Francisco under his name (see link below). It should go without saying, but my scepticism of someone whose PhD can't even be verified to exist is immense. Again, being called "Doctor" without credentials is common in the S.C.A.Ms world.

https://dissexpress.proquest.com/dxweb/results.html?QryTxt=S...


But you haven't presented "overwhelming evidence". You cited a meta-study paper that was prepared by a member of a Skeptics club. It's like you're not even trying.

(Look, from Shannon's Information Theory we have the result that the unpredictability of a message is a measure of its information content. Knowing only one piece of information about that paper: that the author is a Skeptic, I can already predict that that paper will have a negative result. There's really no information there, eh?)
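
(Editorial aside, for reference: in Shannon's framing the self-information of an outcome x with probability p(x) is

    I(x) = -\log_2 p(x)

so an outcome that was certain in advance, p(x) = 1, carries I(x) = 0 bits -- which is the precise sense in which a fully predictable finding "contains no information".)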

Like I said, if you're willing to pick some actual scientific studies I'm willing to read them with an open mind. I'd like to get to the bottom of this little mystery. In fact, I'm going to read a bunch of these (studies that are referenced from the paper you gave, and others I can find) anyway.


> if you're willing to pick some actual scientific studies

And I have. Meta-reviews are considered important, effective, and valuable ways of probing the state of the literature. Dismissing it out of hand because you dislike who prepared it is fallacious.


> Dismissing it out of hand because you dislike who prepared it is fallacious.

Please, it's like you cited a paper called "Do Ghosts Exist?" prepared by a member of the There's No Such Thing as Ghosts club. You don't feel at least a little bit silly?

- - - -

Anyway, I'm reading the paper now.

He starts out well, with a DB of 315 articles, but then discards all but 63 of them "based on the criterion of whether the journal in which the given articles were published was recorded on the Master Journal List of the Institute for Scientific Information in Philadelphia. This operation does not require justification in more detail."

Okay...

That's about 20%. Didn't you just say that "18% is statistically insignificant"?

But for the sake of discussion, let it pass: we will assume that these 63 papers are a good sample of the available information.

He goes on to select "Thirty-three empirical articles, which tested the tenets of the concept and/or the tenets-derived hypotheses."

He breaks them down into three subcategories:

> 1. Nine works supporting the NLP tenets and the tenets-derived hypotheses (27.3%).

> 2. Eighteen works non-supportive of the NLP tenets and the tenets-derived hypotheses (54.5%).

> 3. Six works with uncertain outcomes (18.2%).

Note that in the abstract the numbers for "supporting" and "uncertain" have been exchanged:

> Out of 33 studies, 18.2% show results supporting the tenets of NLP, 54.5% - results non-supportive of the NLP tenets and 27.3% brings uncertain results.

Is 27.3% statistically insignificant?

Anyway, I'll dig through these papers but I doubt I'm going to find a "smoking gun" that invalidates NLP.

As for this meta-study, it's really just Skeptic propaganda:

> My analysis leads undeniably to the statement that NLP represents pseudoscientific rubbish, which should be mothballed forever.

That doesn't really sound like an objective scientific statement does it?

What I find extremely weird and a bit concerning is where he writes:

> Here I would like to refer to the statement expressed by O’Donohue and Ferguson (2006), who propose that each type of therapy that does not have empirical supportive evidence of its effectiveness should be called experimental. They also put forward a suggestion that each case of performing such therapies without informing the clients about its experimental status should be referred to and treated as criminal activity. I fully agree with this view.

I mean, talk about gatekeeping, eh?

He wants people to be charged with a crime for not prefacing their work with his disclaimer.

He believes that NLP is nothing, yet it's somehow so dangerous that it needs a warning label? That seems irrational.

Anyway, I'm going to look through the referenced papers and see if I can figure out what's going wrong...

Cheers!


I've looked over the "works non-supportive of the NLP tenets and the tenets-derived hypotheses" and, well, in the words of Inigo Montoya, "I do not think it means what you think it means."

Basically, in the mid-80's some researchers badly misunderstood a facet of NLP, researched it badly, and then declared the whole thing to be baloney. Here's a good (brief) paper describing some of the problems with the "research":

Einspruch, E. L., & Forman, B. D. (1985). Observations concerning research literature on Neurolinguistic Programming. Journal of Counseling Psychology, 32, 589-596.

Since then, as far as I can tell, almost no one has done anything like proper science to NLP at all, at all. The NLP people are merrily doing their thing, the research psychologists are doing their thing, and "never the twain shall meet", eh?

It's a sad state of affairs really.


I see you edited your comment to take a swipe at Bandler, eh?

Why are you carrying on if there's no real point? (Sorry. I shouldn't tease you. I do appreciate the opportunity to hash this out in a public forum.)

AFAIK people call him "Dr." out of respect for his genius and contributions to humanity, it's never occurred to me before to wonder whether he actually had a PhD. I mean, it's not really relevant? He doesn't need a piece of paper to do what he does.

(BTW I think he was at UC Santa Cruz, not SF.)


Except that's not at all what Doctor means, not even remotely. Doctor has a specific meaning, and it's understood to mean a set of credentials. If you don't have it, but say you do, you're lying. It's absolutely relevant. If someone is claiming to have credentials they don't have, then they're untrustworthy.


If I promise never to call him "Doctor" again will you look through the referenced papers with me?

I just woke up, I've got my coffee, and I'm ready to go...

- - - -

Gatekeeping the term "Doctor" is uninteresting, and it doesn't make your side of the argument look good. You have nothing stronger to dun him on?

If you really want to focus on character assassination rather than truth and science, well here you go: https://www.latimes.com/archives/la-xpm-1988-01-29-mn-26470-... Have fun.

I'll be busy reading papers. You know where to find me if you change your mind.


> Gatekeeping the term "Doctor" is uninteresting, and it doesn't make your side of the argument look good. You have nothing stronger to dun him on?

I think the fact that he's claiming to have credentials that he does not is pretty damning. You have this utterly backwards. If I claim to have founded something groundbreaking, and it turns out I lied about my credentials, that casts doubt on everything I've done.


So he calls himself "Dr." but he doesn't have a PhD. I get it. So terrible. Much deceptive. Wow.

Do you want to discuss some scientific studies or not?


Not if you're going to sarcastically dismiss valid points that discredit what you're talking about like this.


So if I'm not completely respectful, and if I don't admit defeat even before we begin, you're unwilling to examine the scientific evidence?

The NLP people have developed a body of models and techniques for rapid and durable psychological changes. This body of work is systematic and repeatable, but the NLP folks haven't (yet!) "done science to it" to try to figure out why these techniques work.


I don’t really care about this one way or the other, but you’re talking about this thing exactly the way cultists do. It doesn’t lend it any credence. Why would the “evil scientists” be so against your ideology, specifically? Academics publish papers on all kinds of things. This is the same type of stuff as Gene Ray or flat-Earthers etc.


Please don't cross into personal attack or name-calling.

https://news.ycombinator.com/newsguidelines.html


What name-calling did I do? I said it is how cultists behave.


"Cultists" is the name-calling.


FWIW I wasn't offended.


For someone who doesn't care one way or the other, you're being just a little insulting.

> It doesn’t lend it any credence.

NLP doesn't need "credence". It works. Believe it or not, it's no skin off my teeth.

Here's the thing: I pretend to be a crass cranky curmudgeon but I'm actually a bleedin' heart soft-touch kind of a fellow. I care about my fellow human beings and I wish them well, more-or-less. I know NLP works, because I've worked with it myself, and because I was cured of serious clinical depression by Richard Bandler himself. This stuff really is a turning point in human history.

So, on the one hand, I got my cure and went on with a new normal life, so it doesn't matter to me personally what you or anybody in particular thinks of NLP.

On the other hand, what kind of monster would I be if I kept my mouth shut when some ignorant person calls it "pseudoscience" and warns people away from it? To me it's like I'm one of the first people to hear about and benefit from penicillin, yeah? And some dude is like "penicillin is generally considered a pseudoscience". I'm going to speak up, ya feel me? There's a feeling of obligation, to truth and to the other people out there who are suffering like I was, and who may well find relief in this new not-yet-science of human psychology.

> Why would the “evil scientists” be so against your ideology, specifically?

I never said "evil scientists", and NLP is not an ideology. (Nor a cult for that matter.)

As for why NLP is shunned by researchers, who have a hard time replicating even the simplest discoveries, even though normal everyday people learn and use this stuff successfully every day all over the world, like I said, it's strange.

I have no answer for you.

The obvious answer: "NLP is BS", is a non-starter for the reasons outlined above. (To reiterate: it does work.)

In any event, although it's a puzzle, it's not my problem. My depression was cured.

In sum, check out NLP. Or not, what do I care? (But really, do. It's the real deal.)


What does representation mean in this context?


For #2, I would alter this to say: connections matter.

We are more likely to pursue careers or interests if we know someone (and are friends with them or related to them) that is interested in the same thing or has had experience in the same thing.


That doesn't ring true for me. I was already pursuing careers and interests that my friends were pursuing. That didn't change anything for me mostly because my friends came from a very different background than me. It also doesn't solve issues like my old boss who opted not to have children because she wanted to move up the corporate ladder at a FAANG company. She said that there was no mental model on how to navigate having kids at the company. All of the ones that did were men whose wives stayed home and took care of everything. She (and I) question if it's even possible to do it.

Your comment reminds me a bit of something I heard years ago from a manager at Facebook claiming that the way to solve imposter syndrome was to have people select which teams and projects they worked on because they would be more motivated to work at it. Totally off from what my experience had been, but likely applicable to some people.

However, I'm not saying you are wrong and I'm right. I think what's surfacing here is that what I posted doesn't apply to everyone and what you are saying doesn't apply to everyone either. It likely helps different types of people in different situations.


For me, I didn't become a software developer until my late 20s, because I was largely intimidated by it, despite being relatively geeky and tech oriented as a teen. There was no one in my small town I knew that went on to become a software dev. And I tended to place devs on a pedestal in my own head, about how smart they were or whatever. I definitely had the imposter syndrome in the first few years of my career. Working my way through community college then onto a top tier computer science university, and then participating in a startup where I got to live in Silicon Valley for a while, I learned that a lot of people in the industry aren't exceptional geniuses. Having access to that network of people in Silicon Valley though, especially at the time (less important nowadays I would argue), was crucial to so many people's success at getting good advice, using the right technologies and business tactics, getting positions and raising capital.

There's an old piece of advice I remember hearing once that for almost any intractable problem or area of concern in your life, the answer can usually be found by meeting more people. If you actively work towards meeting people that know the thing you want to know or have experience in the thing you want to learn or be better at, you'll eventually meet them and unlock that piece of wisdom or personal connection that can get you where you need to go.


That it 'doesn't apply to you' or 'everyone' really isn't the point here.

Obviously it doesn't apply to everyone.

Have you seen how many people in Hollywood have famous relatives in the industry?

It's the same for any industry.

It's very common.

Growing up, I wouldn't have fathomed what it meant to be a screen writer. I literally did not know what the job entailed, other than 'writing'. Nothing.

My Uncle was an Engineer, hanging out with him I saw how that worked, became interested a bit, ended up recognizing how important 'Math' etc. was, worked on a couple of related projects etc.. Eventually getting an internship because of a family member. It was a small favour but a big deal.

And even aside from opportunities, there are all of the small things: the habits, perspective and behaviours of a class of people expected to perform certain functions.

Anyone can be whatever but it's far more likely that those who are exposed, who have role models, behind the scenes access, internships etc are going to get opportunities.

Representation does matter, I think; that said, it depends a lot on context. For politics and the judicial system it matters existentially. In pop culture, especially for kids, I think it's important that they 'see themselves' in roles. For a lot of things, however, I don't think it matters at all.


Please excuse the poor grammar, English isn't my first language. For #3, this rule always fascinated/confused me the most. How did we humans (animals formed by evolution/survival of the fittest) create these abstract 'rules' or fictions we all just collectively follow? Was there some sort of accumulation of ideas from the past, woven into novel ones to make an archetypal epoch? Any insight/book recommendation to bridge this knowledge gap is much appreciated.


> because I just didn't see anyone like me there

To me, that would be the other way around: a chance to stand out and be one of a kind, act in a way nobody else had acted before, and reap the benefits of being unique, or the first at least.

(Seeing this as exciting or frightening probably depends on your level of sociopathy.)


> To me, that would be the other way around: a chance to stand out and be one of a kind, act in a way nobody else had acted before, and reap the benefits of being unique, or the first at least

This is a great attitude to have. I know that not everyone has this ability (yet) to function this way, but it really should be the goal of how to operate.

For me, it took me years to get there. Variety of reasons, but I realized much later in life that I was still carrying around traumas of being targeted in public by random strangers at a young age just by being out in public (yes, just walking down the street). That sort of stuff can erode psychological safety that we bring with us and unknown situations can cause us to react rather than respond. This is one of the reasons I think representation matters. Not just that it creates a mental model of what's possible, but proof (hopefully) that it's also safe and allowed.


> 2. Representation matters.

Strange, I had the opposite encounter. I realized the only thing keeping me from doing things was myself. There are definitely real barriers (hiring quotas, affirmative action, etc.) but without artificial constraints the only thing stopping you is you. You might feel a little uncomfortable but that's something easily overcome - and almost like a superpower when you realize you can overcome an external locus of control.


Since you are referring to affirmative action and hiring quotas as “real barriers” I would guess that you haven’t really been in a situation where you were the first/only person of your gender or race (etc) to do something. In that case (since you haven’t lacked representation) I’m not sure how well you can speak to its importance. But please correct me if I’m wrong.


This is actually how it played out for me once I recognized it. I'm of the mindset now that I can jump in and do things. And I think you hit on one of the most important lessons that came out of it for me - the only thing stopping you is you.

With that said, some form of representation helped me greatly with it. It doesn't need to be an exact match, but for me it needed to be enough to make me break my assumptions and see whatever weird walls I had put up in my thinking.


I would tread carefully when saying any kind of discomfort is “easily overcome”. We are human beings after all and it’s a pretty generalized statement.

Being a white male but simply being unique in coming from a poor community in a poor city was enough for me to inflict a lot of unnecessary pain on myself through undergrad by seeing myself as different from the majority of elites in my program.


"without artificial constraints the only thing stopping you is you."

To be a plumber, yes.

To be a Doctor, kind of.

To be anything really competitive, not really, no.

There is a reason startup and regular CEOs come disproportionately from upper middle class families, and not ultra poor ones.

If you don't grow up playing Golf (expensive), it's highly unlikely you're going to the PGA.

For the poor kids to even fathom they could do something, they need to be exposed to the concept in a material way, on the whole. Obviously it's not always the case but representation 'is a thing'. And of course it can be way overstated in importance in many cases.


most of us here have our own drive, but I’ve come to appreciate that other people are inspired by seemingly superficial things

It clicked for me when I let my publicist go wild and she got me on listicles of BIPOC founders. Before, we just had lots of quotes and interviews; only people already interested in the project on its own merits were following along. After, there were lots of people interested in the representation in that kind of niche who otherwise just wouldn't know how to find that representation. Or just wouldn't be able to tell by founder names alone.

and of course we got the amplified engagement from people arguing about “why does race matter” in the LinkedIn comments. so shoutout to the useful idiots, publicists expect that to exist and calculate it.


> inspired by seemingly superficial things

This seems similar to adding an addictive drug to food. It will certainly appeal to a lot of people, but does it add value? Getting more consumers of your product should not be the end goal if we want a healthy, high trust society. Developing a good product should be.

> Only people already interested in the project on its own merits were following along

Isn't this what we want as a society?


save these goals for your non-profit

the rest of us want revenue. there is no "build it and they will come"; it's appealing to people's sentiments, and the people have to know it's there at all.


Ah, questioning the morality of exploiting human nature is for non profits. Got it.

There is a right way and a wrong way to achieve success.

Profit is not the only thing that matters in life.


how did you get that interpretation from any of this?

regardless, a for-profit company should not use its runway on ideals as it won't last very long; it has to build and find ways to be known about beyond its small community. that's what this is about.


I disagree entirely. Ethics in business is the foundation of being civically responsible.

If the business suffers in the short term, so be it.

I do understand this is not a popular view in today's business world, however, and it shows.


It's more than a view; it's survivorship bias.

That doesn't mean be unethical; it means don't try to move the Overton window or “use your platform”. Other organizations exist specifically to do this, and they are not called for-profits.


That reads like a very condescending and cynical response.

People enjoy representation, imho, because it helps them see that they are not alone, that the deck isn’t stacked against them, and so on. It also helps in removing prejudice against certain groups of people.


>it helps them see that they are not alone //

Do they have to consider separation of humans by characteristics to be important in order to get to that position where they feel alone with other humans? Bluntly, if a male says "I can't be inspired by females", isn't that because they're sexist?

It seems extraordinarily damaging to society to say to people, as many seem to, 'here are the people you're allowed to be inspired by: they're superficially similar to you', as opposed to saying 'Payne-Gaposchkin asked the same question as you; see, you're alike as people' (despite maybe being of different sex, race, nationality, era, class, wealth, etc.).


> Bluntly, if a male says I can't be inspired by females isn't that because they're sexist?

Not necessarily. It could be, but I wouldn't attribute it to malice as much as I'd attribute it to difficulty in relating to the person.

> Do they have to consider separation of humans by characteristics to be important in order to get you that position where they feel alone with other humans?

Your error in this is that you assume the behaviour is conscious and deliberate. Our brains do many calculations before we are even aware of them. I recommend Gladwell's book "Blink" on the topic [1].

Humans are by all means social beings, and we use heuristics and mental shortcuts to simplify our world, leading to fast and unconscious decisions, often to our own detriment. We construct in-groups and out-groups before we are aware of them, and said groups affect our decisions. This is why people growing up in mixed communities are far less likely to be racist or exhibit racist behaviour.

I am expressing this only as a testament to our innate fallibility; this isn't targeted towards any specific group of people, because there are all sorts of communities that are isolated and have very few interactions with others.

I experienced awe and inspiration when I realized that I was sitting at the same benches, and studying at the same place as Heisenberg when he came up with QM. But it was only when I stopped and thought about it, when I thought about the place I was sitting and the history behind it.

Sometimes all we have is superficial information, and our brains try to make the most of it. It's in our nature after all, we are all fallible humans, so why not help alleviate it and enable people to become their best selves?

[1] https://www.amazon.com/Blink-Power-Thinking-Without/dp/03160...


It sounds like you agree that it's racism, or whatever, when we demand that inspirational figures have a specific characteristic, but feel that the lack of acceptance of people as people is initially subconscious and so is excusable even when it's being consciously acted on?

I'm thinking


I don't think it's racism or any -ism of that kind any more than it is difficulty relating automatically.

I am fairly certain that this is something that can be trained, but the degree to which I can relate with a man with ABC properties (except gender) will almost always be less than the degree I relate to a woman with ABC properties as gender is also included.

These decisions are always made rapidly, unconsciously, and when we have very little information about a person other than what we can immediately observe.

Changing the image that we have about people with other properties alleviates this.


Feeling uncomfortable is not a barrier. A barrier is an executive sexually harassing you to the point of suicide with no recourse. See: the Blizzard fiasco. The Riot fiasco. The Google fiasco. The Uber fiasco. Etc…


While I do see merit in the idea that representation matters, to me, it all depends on what representation in particular. Racial representation gets all the attention but I find that one less compelling.

But for example, seeing that a philosophy major can have a successful programming career, can encourage others in the humanities to see themselves do it too.

Or seeing someone with ADHD run a business successfully and overcome executive function challenges, can also help others with ADHD.


> Seeing that a philosophy major can have a successful programming career, can encourage others in the humanities to see themselves do it too. Or seeing someone with ADHD run a business successfully and overcome executive function challenges, can also help others with ADHD.

These are great call outs. The definition of representation can be highly nuanced and very personal. This is not to say that issues like race and gender aren't important and don't need support. Just that there are so many things that people bring to the table that can make them feel like "the other", which holds them back. It's surprising how a lack of role models can lead some people to think it's not feasible or not even possible.


#1 is a fallacy called main character syndrome


> 2. Representation matters.

As a well represented person I can tell you this has nothing to do with representation; it's just that the vast majority of humans in the modern world have close to zero agency and/or don't think they can actually change things.


> As a well represented person I can tell you this has nothing to do with representation; it's just that the vast majority of humans in the modern world have close to zero agency and/or don't think they can actually change things.

As a well-represented person, you're probably making assumptions about the effects of representation that are less informed than you realize.


Learning history / literature in school is important.

I was a total STEM math nerd in school. I used to frequently complain about how I didn't get the point of it, or how it was a waste of time and I was learning nothing. I still think the emphasis of school was off, but I get the point of it now.

Stories are like code for humans. You can't tell someone what it means to be good or bad, or give them a course in philosophy, and expect they will become good people. But you can tell them a good story that engages with them emotionally, and it will change their perception. And history shows that, in fact, the stories being told and repeated aren't just an interesting minor curiosity; they have shaped the direction of humanity and they are driving it. A single person with a single story can change history in such a way that it would be completely different without it. And some stories about stories need to be told as a warning, so that people will not fall for those kinds of stories again.


Regarding literature, I'll give an opposite opinion. Being forced to read the classics in school led me to not care about them at all, because I just couldn't relate; I had not had the requisite life experience for them to make an impact on me. Now, being an adult and having the life experience, reading them again makes me think more deeply and I can actually relate. So I think in school we actually shouldn't read high literature, lest we hate reading the classics as adults, which is really when they should be read, not before.


Not only this, but also the impact having to read something has on enjoyment. There are books like To Kill A Mockingbird that I didn’t care for in high school simply because I had to read it. When I revisited it years later of my own volition, I enjoyed it greatly.


I’d say this is by design. Exposure, not enjoyment.

There’s a number of classics I read at the time that I didn’t enjoy until adulthood. Reading them early on gave me a framework for understanding them later on in life.


I rather enjoyed wading through antiquated language in an attempt to understand. Honestly, I don't recall if the prose in To Kill a Mockingbird was difficult to grasp (I suspect it wasn't), but there were so many other books that we were "forced" to read that really were an enjoyable challenge. Shakespeare, for instance.


I don’t think this is always true. I enjoyed reading before school forced me to, and while I mildly resented being forced to read something I didn’t particularly want to read, it never made me enjoy reading less.

I also read a few books I otherwise would’ve never picked up that ended up very interesting.


I don't think they're saying they disliked reading totally, but that they disliked books they were forced to read. Although, if that happened to be every single book in the syllabus, then sure, someone might mistakenly think they don't actually like reading.

Ultimately, sacrifices get made to try and get mass education working; the current system is barely scalable, so anything even more personalized is pretty much out of the question.


So? Many others do enjoy reading even if for school. Should we stop teaching math cuz kids don’t like it in the hopes they’ll teach themselves as adults? School will never be all things to all kids.


I think their comment was, if anything, a hint to check out some of those classics again, even if you didn’t enjoy them when you were made to read them in school.


The vast majority of adults read almost 0 books. I think there’s some value in forcing classic cultural capital when we can even if it’s lost on some.


Maybe the vast majority of adults don’t read books because as kids they were forced to read “classic” works they didn’t identify with nor enjoy and have thus been conditioned to see books as boring and worthless.

Perhaps the solution is to not force specific books on anyone. If kids want to read Twilight for class, let them. Maybe it’s more important (and effective) to have them read anything and enjoy it than cramming “high literature” down their throats.

I posit that a person who enjoys the experience of reading a bad book is likely to later on pick up several good ones, while someone who was forced to read a handful of ostensibly good books they weren’t ready for is likely to never pick up a book again.


> Perhaps the solution is to not force specific books on anyone. If kids want to read Twilight for class, let them. Maybe it’s more important (and effective) to have them read anything and enjoy it than cramming “high literature” down their throats.

We had two separate classes in school, one for English, and one just called "reading". In it, we could read whatever we wanted, as long as we read for the entire period. Sometimes we'd discuss our books. It was great.


I would actually really have enjoyed "reading" class. Free periods where we got to read whatever were always the best in middle school, wish those continued. I'd just read whatever if I could anyway - and hell, sometimes that was the classics after all.


There's always "that book" for everyone. That one they finally click with. And then they can't shut up about it, whether it's The Da Vinci Code... or Outliers (my daughter's first)... but so few books hit you like lightning... I think part of it is just trying to get to that point where one hits you, so expose people to books; who knows what will stick. Worked for my daughter.

For me it was actually the first novel I read. My aunt gave me The Malloreon. I was bored over winter break in the late 80s and felt guilty I hadn't used my aunt's gift. So I read it. Now I recall spending an entire day curled up with a quick trip insulated mug in our breakfast nook, when it was below freezing out, reading the next book next to my mom. It'd take about 3 hours to finish that herbal tea.

Shortly after my dad gave me unlimited allowance for books. It was a smart but mildly costly move.


My dad did the same, and I have to thank him for that. It may have been expensive but no more so than doing some equivalent with video games or the like, none of which would have aided me as much as reading did.


I agree in general, but for me it wasn't so much just having to read them as it was having to do a bunch of silly assignments based on them. My high school completely ruined The Great Gatsby for me that way.


I think most educators and teachers know that students won't get the classics. The goal is to at least make the students comfortable with the difficult language of the classics, and make them realize that analysis of these books yields deep insights.

If they instead taught students easier books in schools, students would never develop the reading skills to tackle the classics, and a much smaller percentage of adults would ever bother rereading the classics or even acknowledge their power.

Of course, the teaching has to be improved so that students never hate it.


Man, my literature classes in high school were a complete joke. The "deep insights" were all just arbitrary memes that had little to no correlation with reality

Stuff like: "What was the meaning of the yellow curtains?"

They pretty much ruined my enjoyment of every book I had to read, even though I did and still do love reading. For books required by classes, I just read the cliff's notes.


That's also true. To be honest, I don't know what the midpoint should be. For Shakespeare, we watched movie adaptations which made a lot more sense than just reading a somewhat dry play.


For plays, performance is an important part of what they are, so a theater program is also good.

The issue is that performance arts, or the arts in general, usually are the first on the chopping block at schools that are either facing budgetary pressures or need to improve standardized test scores.


Well they were movie adaptations, not plays themselves, so cutting performance arts didn't make much of a difference for those in English class. But I agree, it's a shame that they're cut in many schools.


I think people come out of school hating reading because nobody likes being forced to read specific books, as everyone's tastes are different. It sounds like this might have been what happened to you.


I never hated reading, it's just that I didn't get the point of it. What are we even doing here and why?

What's the point? Math has a clear purpose. I do enjoy some books; I just didn't get the point of studying them beyond reading. What are they even babbling about? Is there any use for this information, or is it just memorization torture? Why would I ever care about all these abstract literature terms?

Ironically they missed telling the meta story: why would anyone care about stories.

If stories are just something you read for fun, why would anyone care to teach me how to analyze them?

EDIT: To be clear, these were the questions that weren't answered back then.

Only now, years later with life experience the purpose is clear.


I completely agree with this. I loathed Shakespeare in school. Now I try to visit the Stratford Festival (in Stratford, Canada) every year to see at least one or two Shakespearean plays.

I also remember reading 1984 on my own time in high school- it wasn't in the curriculum the year I might have. It blew me away. But if I'd been forced to read it, I probably would have been bored.


For other HN readers:

I was like this too. I had no interest in ol' Bill. Then I read his 'best' works. And then I read them again. It takes a bit to get really used to iambic pentameter, to the old words, to the places and people he was writing for, to his limitations of candlelight and lack of amps, etc.

Reader, it is totally worth it.

There are very many very good reasons that he is still produced and studied. I encourage you in this new year to give him a solid try. Not just a play or two. Read him and watch the plays at the same time. There are hundreds of productions on YT for every play of his, all free, most very good.


To add to this, Shakespeare might sound dry, but it's the same as if you read a screenplay instead of watching a movie; plays are meant to be seen, not (only) read.

Something else that might impact our current understanding is that we are not in their time period, so many things we don't understand might be specific to their time, even if we get the broad strokes and very human themes, such as love or revenge. It's the same as if someone 400 years from now saw the movies we put out: some things are just very specific to the time and place.

There's also the change in English and pronunciation too, of course.


My issue was the volume we were expected to read, or at least volume on top of books I wasn't interested in reading.


And exams.

Learning is one thing, being tested on facts and trivia is totally something else.


I would get so far ahead on the books I'd forget trivial things they'd ask on the quiz. In high school our teacher made us read The Sword of Shannara (~1991). I remember missing a question on a quiz because it was about a first encounter with Allanon, where I answered about later in the book. But the teacher was a smart person; he just gave me the final exam on the book, which I passed. So I got opted out of the quizzes. He had a whole group of nerds who were well into The Scions of Shannara or later. But this was a private school.

He told me that the reason he chose that book was that it was actually easy and a page-turner, but also different from what most kids read. But most of all he chose it because it's a big honking book. To that point, at the end of the class 90% of the kids had nearly finished the book. Even the laziest kid in class had to agree that no, it wasn't a big chore to read that big of a book if you broke it down, and they weren't afraid of big books anymore.


I was fine with literature, but yeah, history, what a snooze-fest. Why do I care who fought what war 1,500 years ago and who won? It seemed like it mostly came down to a bunch of pompous elites having dick-measuring contests a lot of the time.

Then one day the teacher didn’t want to teach and instead showed an episode of Connections, and I was blown away. Learning about how and why our science and technology became what it is was something I could relate to, and it seemed actually useful. I still don’t care for military history, though.


Even if there is no connection to our own time, history is one of the most effective teachers of the process of empathy out there. It requires you to be able to place yourself in another time, place, and context. I'll also note that military history and "elites having dick-measuring contests" is far from the trendy side of the last four or so decades of history research and writing. Cultural history, microhistory, and even history of science are all hot topics.



> You can't tell someone what it means to be good or bad, or to give them a course in philosophy and they will become good people. But you can tell them a good story, that engages with them emotionally, and it will change their perception.

Is that actually true? Do we have good reason to believe that people who study history/literature behave more ethically?


Additionally, I would say more historical emphasis within STEM itself would be beneficial. Motivate through context. Show students that the concepts arose from people solving problems.


> Learning history / literature in school is important.

Really depends on HOW you learn it IME. If it's just regurgitating dates/names/whatever it isn't helpful at all, at least for me. If you establish that event x led to y because of z, it just clicks and suddenly makes sense.

For example, let's take Hitler's rise to power: "He became the chancellor of Germany in 1933." That is just about useless. "Hitler rose to power with the help of the Nazi party, which was partially formed in response to the Treaty of Versailles' excessively harsh terms, leading to an extreme amount of inflation and a harsh drop in industry. This set the stage for Hitler arguing to use a war as a means of getting rid of the penalties of Versailles and bringing Germany out of the slump."

For me, in school I was mostly taught the first variation.


This is true for literally everything you can possibly learn in school. You can learn math badly. You can learn physics badly. You can learn programming badly. You can learn personal finance badly. Professionals who work on history pedagogy work hard to prevent the sort of "memorize names and dates" approach. This shouldn't be considered a unique problem with history as a discipline.


The power of follow ups (especially in sales)

One thing which held me back for a very long time was not following up with people who didn't show much interest initially.

I wasted so many good leads thinking it was impolite to follow up with people after contacting them once. My whole life changed once I understood the power of follow-ups and realized that most people are so busy that it takes at least 6 reminders before they will take any substantial action.

The reverse is also true. People say a lot of things, and most of the time you never reach the bridge, let alone cross it. Nowadays, I rarely argue about anything and don't act on stuff until a person reminds me once or twice. This small filter can be like a miracle for saving your time and energy.


Agreed. When I was doing sales, I would follow up with people seven times before they replied. I used to feel bad until I realized that it worked; then I just automated that part and didn't think about it further. Once leads replied, they were sent to my CRM, where I could set them up on a call or some other action.
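The automation itself was nothing fancy, roughly a loop like the sketch below (the helper names are made-up placeholders, not any particular email or CRM API): stop as soon as the lead replies and hand them to a human, otherwise send the next templated message every few days, up to a cap.

    # Minimal sketch of an automated follow-up sequence. "has_replied",
    # "send_email" and "push_to_crm" are hypothetical stand-ins for whatever
    # email provider and CRM you actually use.
    import datetime

    MAX_FOLLOW_UPS = 7   # how many touches before giving up
    WAIT_DAYS = 3        # days between touches

    def has_replied(lead):             # placeholder: check your inbox / ESP webhook here
        return lead.get("replied", False)

    def send_email(address, body):     # placeholder: call your email provider here
        print(f"sending to {address}: {body}")

    def push_to_crm(lead):             # placeholder: hand off to a human to book a call
        print(f"pushing {lead['email']} to CRM")

    def step(lead, templates, today=None):
        """Advance one lead by one tick of the follow-up sequence."""
        today = today or datetime.date.today()
        if has_replied(lead):
            push_to_crm(lead)
            return "replied"
        if lead["sends"] >= MAX_FOLLOW_UPS:
            return "exhausted"
        last = lead.get("last_sent")
        if last is None or (today - last).days >= WAIT_DAYS:
            send_email(lead["email"], templates[lead["sends"]])
            lead["sends"] += 1
            lead["last_sent"] = today
        return "waiting"

    # run it once a day over all your open leads
    lead = {"email": "prospect@example.com", "sends": 0}
    templates = [f"follow-up #{i + 1}" for i in range(MAX_FOLLOW_UPS)]
    print(step(lead, templates))

The tooling does the boring part; the salesperson only sees the leads that answered.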


I've automated the opposite: three unsolicited mails, and I automatically send the sender's entire company to spam and reply with a friendly request to stop. Three more, and I send an automated cease and desist, threatening legal action (with receipt confirmation requested). If it still continues after that, I actually involve lawyers.

Most people are only willing to do something after repeated attempts because they're polite and want you to stop. If a website asks you whether you want cookies, you only click "yes" because the alternative is more work. If you constantly pester and threaten someone long enough, you'll always get a "yes" at some point. But that's not consent, which is why it's illegal.

Even if lawyers cost money, if I'm just expensive and stressful enough to stop people like you from sending even a single unsolicited message, it's worth it. Time is the most valuable resource we humans have; you've got no right to waste other people's.
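If anyone wants to copy the idea, the core of it is just a per-sender-domain counter with escalation thresholds. A rough sketch only (the hook into your actual mail filter is up to you, and the names here are made up):

    # Rough sketch of the escalation rule: count unsolicited mails per
    # sender domain and escalate at 3 and 6.
    from collections import Counter

    unsolicited = Counter()

    def escalation_for(sender: str) -> str:
        """Decide what to do with the latest unsolicited mail from this sender."""
        domain = sender.split("@")[-1].lower()
        unsolicited[domain] += 1
        n = unsolicited[domain]
        if n < 3:
            return "ignore"
        if n == 3:
            return "send domain to spam + friendly request to stop"
        if n < 6:
            return "keep in spam"
        if n == 6:
            return "automated cease-and-desist (with receipt confirmation)"
        return "involve lawyers"

    print(escalation_for("sales@example.com"))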


Good for you but you're such a vanishingly small part of the population that it won't stop anyone in sales and marketing from continuing to follow up on people (which, by the way, is not illegal). You effectively don't matter.

And good follow up software will also stop once a reply has been sent from the recipient, which is then forwarded to a salesperson to triage.


> which, by the way, is not illegal

As a business, sending automated messages to individuals without prior authorization is spam. Pre-checked tick boxes are not consent to receive such messages; you need explicit opt-in. If you are informed that these messages are unwanted, and told to stop, and yet you continue, it becomes explicitly and knowingly illegal.

> You effectively don't matter

As long as I cost them enough to make sure they'll never even think of contacting me again, I'm happy.


Correct: if you're following CAN-SPAM in the US you must abide by its rules in order to remain legal; however, explicit opt-in is not required. What is required is:

- No deceptive information

- Valid physical address

- Must have an easy way to unsubscribe, most often an unsubscribe link but asking the recipient to reply to stop being sent emails is also sufficient

- Promptly honor unsubscribe requests within ten days

Other countries have different laws though.

https://www.gmass.co/blog/is-cold-emailing-illegal/

> As long as I cost them enough to make sure they'll never even think of contacting me again, I'm happy.

Honestly if they're following the above rules, you won't cost them anything. And if they're not, and therefore doing it illegally, well, looks like your lawyers will have their work cut out for them.


I'm in Germany, so far stricter rules apply. And often senders aren't even complying with the CANSPAM rules. Far too often have I received an email with sad gifs attached telling me they don't want to stop emailing me.

Recruiters, Sales, shitty spam farms, it's all the same.

My hometown already banned any kind of outdoor ads with lights and heavily regulated ad size, personally I'd appreciate if we banned advertising altogether as society.


That's unfortunate in Germany, I've had people cold email me with good information, I've even gotten a few jobs from that. Similarly, I've had good clients come from cold email as well.

I'm not sure how advertising as a whole could be banned, there are many types, and it could be argued that even word of mouth (or merely even showing a brand image like logos) is a sort of advertising. Also, it would be very difficult for new entrants in an industry to start up, ironically cementing existing players in the market; Coca-Cola doesn't need more advertising since everyone knows about Coke, while a new soda brand would find it extremely hard to grow without any advertising whatsoever.


How would it cement existing players?

Many people already make purchasing decisions by looking at comprehensive tests from non-profit foundations (https://www.test.de/) or one of the public broadcasters' (https://www.ndr.de/fernsehen/sendungen/markt/index.html, https://www1.wdr.de/fernsehen/markt/index.html), and then do price comparisons with one of the many popular price comparison websites (https://www.idealo.de/, https://geizhals.de/, https://www.hardwareschotte.de/, https://www.preispiraten.de/).

If your product is good, it'll do well in the tests and the price comparison, and be bought.

You only need advertising if your product is actually bad and you instead want to convince people to pay for it because you want it associated with a certain image of your customers. But in that case, you're not selling a product, you're selling a status symbol, basically jewellery (e.g., Apple, the sneaker market, etc)

And the market of status symbols is harmful to society anyway, so if it gets destroyed as collateral damage of a potential ad ban, that'd be awesome.


When you say "many people," it sounds more like this is your workflow for purchasing decisions, not many people's. The way you outlined it is so rare that I doubt many people actually do what you describe, unless you want to redefine the word "many." There is a reason advertising exists and works, and you are not realizing that you are a rare, rare exception to that. It is akin to someone using only Gentoo and saying that many people also use it. Yes, some people might use it too, but it is so far outside the range of normality in the population that it does not matter, even if the Gentoo user thinks their bubble looks like many other people's too.


With "many", I'm actually talking about a majority here in Germany. Stiftung Warentest, Markt, Frontal 21 and similar media are extremely popular. Stiftung Warentest alone has a revenue of over 63 million a year. [1]

The consumer magazine NDR Markt had the highest TV ratings of any show in recent years (excluding evening news and special covid news updates), with a market reach of 13.4% in the broadcast area. [2] That's not even counting WDR Markt which has even better ratings.

> so far outside the range of normality in the population that it does not matter

That's where you're provably wrong.

That said, there are bad actors in this market too – especially around the Check24 group (annual revenue exceeding 500 million in 2021) and other actors, which rank products less by quality or trustworthy test reports and more by who pays for ads (see also: Google Shopping).

The EU New Deal For Consumers is actually going to influence this quite a bit. The EU plans to regulate comparison sites more strictly, as comparison sites will have a more important role in a future with more restricted, and overall less prevalent advertising. (See also [3], [4])

    ________________
[1] https://www.test.de/unternehmen/stiftung-5017075-5843365/

[2] https://www.ndr.de/der_ndr/zahlen_und_daten/NDR-Fernsehen-Di...

[3] https://www.bundeskartellamt.de/SharedDocs/Meldung/DE/Presse...

[4] https://www.sueddeutsche.de/wirtschaft/vergleichsportale-kon...


We have organizations like Consumer Reports here too in the US, they also make a large amount of money, but that doesn't mean that most people make their purchasing decisions from such organizations. I guarantee if you did some sort of study, most would say they made a primary purchasing decision via advertising.


According to a GfK Study from April 2021, asking 1001 Germans between 18 and 74, the largest influences on purchasing decisions were:

- personal recommendations from friends (35,5%)

- independent comparison tests (30,5%)

- amazon reviews (23,2%)

- media reports (13,7%)

- corporate information, including advertising (10,0%)


Many of those can include advertising, such as Amazon placement and reviews, media reports (many PR firms exist to push your product onto media outlets), personal recommendations, and even independent comparison tests (people see an ad, and research it more thoroughly including seeing how it fares on comparison tests, and perhaps tell their friends too).

I am not saying that the tests themselves are not independent, nor that people don't use them to make purchasing decisions, but the prevalence of advertising is such that you cannot disentangle its influence from every other. Clearly, even in Germany, ads exist, and if they did not work, then the ads would not exist.


> such as Amazon […] reviews

> many PR firms exist to push your product onto media outlets

> personal recommendations

Half of that is fraud, the other half is at least unethical. Reviews may only be the personal opinion of people who provably purchased the product. False reviews are fraud and explicitly a criminal act. Placing ads in media without explicitly marking it clearly and obviously an ad, so-called “native advertising”, is explicitly a criminal act.

> the prevalence of advertising is such that you cannot disentangle its influence from every other.

My original argument was that advertising isn't necessary. I think after this discussion it's become obvious that there are many other working and prevalent ways to discover products.

In fact, all advertising does is distort and corrupt those existing methods, trying to get me to spend more money than necessary on a worse product. Every cent spent on advertising could've been spent on improving the product instead.


> Half of that is fraud, the other half is at least unethical.

Not really. Yes, ads must be clearly stated, but oftentimes news outlets are just looking for stories to run, which are not compensated with free product or money, and thus are not subject to such advertising laws (in the US). With regards to reviews, I don't mean that reviews are paid, rather that Amazon charges companies to be "sponsored" and if people buy those and review them, their review was at least partially influenced by buying the top sponsored product. Same with personal recommendations, not sure how you can construe it as fraud or unethicality if I see a Coke ad, buy it, and recommend it to my friends.

> Every cent spent on advertising could've been spent on improving the product instead.

It is clear that you don't work on any products that make money, or at least aren't in any of the non-technical sales or marketing roles, or are not a founder who must necessarily do both the technical and business side. I used to think the same way, that ads are bad, but if I build a product and I want people to use it, I must necessarily tell people about it in some way, Twitter posts, telling a local reporter or tech blog, cold outreach to people, all of that is advertising in some respect. If I build it, they do not come, automatically, of their own volition, they don't even know I exist! Us engineers do not exist in a vacuum, we code for the goal of businesses to make money. Such black-and-white thinking as yours is not conducive to actually making and selling a product.


I was co-founder in the past, and I run a few open source projects. My partner specializes in SEO, and my neighbor is in a leading position in an ad agency.

I understand the advertising industry perfectly well, but that's exactly why I hate it so much. All advertising does is help incumbents, because they can afford to waste money on excessive marketing budgets. So many products are sold on buzzwords, false claims and promoting a lifestyle instead of on merit.

Advertising, especially false advertising, needs to be at least heavily regulated if not banned entirely. Even the slightest exaggeration from the truth needs to be punished harshly. It's impossible to compete honestly in a market when your competitors are selling based on AI Cloud Blockchain Smart Industry 4.0 IoT buzzwords.


Sure, I hate false advertising as much as the next person, but by banning all advertising, I'd make no money from my product if I couldn't even tell people my product existed, which is the effect such bans you advocate for would have.

To get back to a prior question you asked (which I did not answer it seems) as to why banning advertising would protect incumbents, it's because people already know about them while they wouldn't know about upstarts. Again with the Coke example, a new soda brand would find it exceedingly difficult to gain a foothold if everyone simply defaults to Coke normally. Same as in software, imagine if a new search engine competing with Google came about, but since everyone already uses Google and since the new search engine could not advertise themselves (with such a ban as yours), they will likely fail, even if their product is superior.


Google did not grow through advertising, but through word-of-mouth because they were actually better than the competition. I discovered the soda brands I'm now primarily drinking (Fritz, Premium) the same way.

So much of human society works through word-of-mouth. In fact, I'd argue that the type of megacorporation that can only exist and grow through advertising is in itself damaging to society. Small, localised SMBs are how we should go into the future.


Small, localized SMBs were the ones most hurt by advertising changes such as between Apple and Facebook. But, let's agree to disagree, I don't think we'll change each other's minds in this conversation. Have a good day and happy new year.

https://arstechnica.com/gadgets/2022/08/small-businesses-cou...


From a typical starting pool of 100 leads, what percentage respond positively having received (up to) seven emails?


It depends on the campaign, sometimes it's as high as 50%. However the amount that explicitly say no or to stop messaging them is less than one percent, most people just ignore them.


See, you don't think like salespeople do. It doesn't matter how much this practice harms you, the only externality that matters is how much it benefits them.


I block people on LinkedIn or report them as spam on Gmail if I get more than 3 unsolicited messages. 7 just feels borderline ridiculous.


You (and many technical people) do, but most people don't, that's why it works. Reminds me of this comment I saw a while ago: https://news.ycombinator.com/item?id=25825917#25828439

> On HN, people hate cold emails. In real life, I've found that most people will respond or ignore. Like a tiny minority will act like you killed their mother, but that's life.

> I know you know this, if you're in sales, but I, like many other engineers who read this forum was overly cautious when I first started speaking to people because I anticipated that 99/100 would be upset at having to talk to me.

> The truth was that 99/100 were willing to speak to me and listening to HN and Reddit set me back farther than I expected until I unlearned that lesson.

> So I'm saying this for the benefit of all those other engineers like me.


That's good to know, thanks for the tip

Man oh man, it really disgusts me that people are so stupid though. They just let companies blast them with popups, e-mails, text messages, etc all day when it takes so little effort to cripple the spammers and cut them off completely.


I wouldn't say they're stupid. If they find value in someone's cold outreach, they'll reply; otherwise they'll ignore it. But the point is that many engineers think any cold outreach is automatically spam, while most people don't. Just one more reason not to make engineer-specific tools that require sales; it's an uphill battle.


This reminds me of the parable of the widow and the unjust judge from Luke 18, where the widow perseveres in asking the judge for a judgement in her favor and the judge eventually accedes because he's so tired of seeing her.

(The point of the parable, as I understand it, is not "One Weird Social Engineering Trick That Will Always Get You What You Want!!!11" but rather that we should persevere in prayer and petitions to God because, if even the unjust judge was eventually moved, how much more likely is our heavenly Father to grant us our heart's desire because he isn't unjust like the judge. This is an example of an "a fortiori" argument, https://en.wikipedia.org/wiki/Argumentum_a_fortiori , which are fairly common in scripture.)


This comment makes my LinkedIn make a lot more sense


Yes, I used to use some LinkedIn sales tools that had automated follow ups. Same with recruiter emails, if you think they're handcrafting each email, I have bad news for you. It's all templated, they use your first name and maybe put some effort into the intro using your specific company and experience, but generally, the rest of the intro email and the subsequent emails are all automated.


How much time do you allow to pass before each recontact?


Around 3 days.


That sounds a bit too pushy for me. I would expect at least two weeks before recontacting.


That's pretty long, they'll have forgotten about you in the meantime.


You were not trained well.

The principal element in sales is asking for the order. Everything, and I mean everything, follows from that. If you had been trained this way, following up would be second nature.


Some products don't fit well with a one-call close and are centered around building long-term relationships with your clients. This is especially true in B2B. Maybe you were trained too narrowly.


You can have your great “relationship with your clients.” You will see how illusory that is when a competitor appears with a 50% better value proposition. Oh, and SOMEONE has to ask for the order.


I am not trained at all in sales. Can you expand upon this?


Narcissism.

My grandmother was a terrible narcissist. I loved her dearly and she had a lot of wonderful qualities, but the quality that stood out the most, sadly, was narcissism.

My mother was also a narcissist, to a somewhat lesser degree. It didn't occur to me that I too was a narcissist until I was about 35 years old. It took waking up in the corner of the living room in my friend's one-bedroom apartment early one morning to see it.

I had pushed away my wife and kids because in my mind all of my problems were their fault. I had blamed others for everything that had ever happened to me or every feeling that I had felt. And in that moment I realized:

It's ME.

Everything changed in that instant. It was no longer just about me anymore. I stopped seeing the people closest to me as opponents and started seeing them as what they were, family. My support system. The love of my life.

As the years have gone by since then I have seen more of my past through that light and things have become so much more clear.

Understanding that my grandmother was a very damaged person who turned into a narcissist to deal with it, and then raised my mother similarly, helped me understand two things. The first was that the things I blamed myself for in the past weren't my fault. Secondly, it helped me forgive them for some of the awful things that happened. I'm not saying it's okay to be a narcissist. But recognizing that their narcissism affected my life, and that it was something I could shed in my own personality, was a serious life changer. And the funny part is that after I realized all of this, my debilitating depression essentially went away. And that was a big deal.

I also learned not even 2 years ago that I have ADHD which was like a light bulb moment for me as well because it explained so much of my life.


I find it interesting when people come from a long family line of narcissists (not uncommon) that they often attribute it to a cycle of being raised by a narcissist making you into a narcissist.

it seems more likely that there is a genetic/biological basis for this personality trait


It’s possible but I don’t really buy it. There’s a very real sense in which narcissism is just a lack of acquired skills that most people have. That skill being the ability to see both the good things and the bad things about yourself and about others at the same time. If you’re raised in an environment where your caretaker teaches you they are always right (with severe consequences for pointing out they aren’t) then it is hard to develop a nuanced worldview. The child can either get on board with the parent, become “one of the good ones” and become a narcissist themselves or they can become a black sheep of the family, usually with worse outcomes.

Importantly, the skills can be learned as an adult it’s just hard to do. I come from a long lineage of family members who did not know how to swim, but I don’t think that there’s a genetic basis for this.


You hit the nail on the head with lack of coping skills. I had a grandmother who tried to compensate for the way my mother treated me by treating me like I was royalty. And my coping mechanism for being treated poorly at home was thinking that I was better than everybody else in some way. Combine those two with the fact that I was taught no coping skills, and yes, I became a narcissist.

When I had my awakening, I was finally able to put to work some coping skills I learned the hard way, in a mental hospital. But once I realized the problem was me, the changes were instant. I was ready for change.

Unfortunately, some people have no such awakening. And I avoid those types of people like the plague.


I think genetics are completely misunderstood. I also have serious Narcissism in my family and to me it very much looks like that's the world view you learn if that's how it's demonstrated to you as a small child. Small children have no filter on inputs also. If dad isn't around because mother is unbearable and she lives a narcissistic reality it becomes the water that you swim in. I don't think there is genetic code that determines that you think other people are to blame for everything!

Random theory: Children of narcissists like IT because it's a world that is very predictable with rational explanations.


or it could be a biological disposition to developing it as a response to the trauma of being raised by a narcissist


I've come to believe that a deep childhood wound (parental divorce, neglect, abuse, or being adopted, etc.) predisposes one to become a narcissist when combined with the message that you are special, different from others, etc.

Steve Jobs, for example. He wasn't as malevolent as Donald Trump, but both have both factors: the childhood trauma + the message that they were special. I have several acquaintances who are narcissists and have both factors as well.

I think that the trauma without the message that you are special and different predisposes someone to become a hyper-achiever, where ambition is a channel for the pain. See: Elon Musk, Tim Ferris, etc.


> I've come to believe that a deep childhood wound (parental divorce, neglect, abuse, or being adopted, etc.) predisposes one to become a narcissist when combined with the message that you are special, different from others, etc.

You just described my childhood.


Hey! Glad to hear you're on a better path.

Thank you for sharing.

Please continue to contribute here on HN, I really appreciated your comment and experience.


Thank you for your kind reply :)


It's interesting to read how you slowly shift the narrative from accepting your faults to making excuses that blame externalities, like inherited narcissism and ADHD. Look the word up: it's essentially an umbrella term that describes pretty much anything that can be considered negative, almost to the point that it's meaningless, except as a convenient tool to antagonize people. Selfishness and self-admiration are a spectrum; everyone possesses them one way or another. To claim otherwise is inhuman. Clearly, there are actual extreme narcissists, but they are not as frequent as the term's loose usage nowadays suggests. Disclaimer: not a psychologist, but probably neither are you.


I don't think he was trying to justify or excuse his way out of his bad qualities in the second part; it felt more like he was pinpointing the reason. I'd say it's okay for people to give reasons for bad actions as long as they still admit to wrongdoing and don't use that reason as an excuse for it being okay.


Indeed. It wasn't justification any more than Root Cause Analysis is making excuses for a systems failure.


> Look the word up, it's essentially an umbrella term that pretty much describes anything that can be considered negative

I suggest you look up the difference between the common sense of the word narcissism and the psychiatric diagnosis (which has a more rigorous and specific definition).


Domain Driven Design. The book by Eric Evans lays out a bunch of concepts, and as a developer who had not owned the architecture of a big domain, it was hard for me to see exactly where they fit. But after reading the book a couple times, and then encountering a few tricky domain modeling challenges, I started to see where these patterns add value. Also, as I started trying to describe the cohesive domain architecture of the system to a growing engineering organization, the advantage of having a standardized set of terminology for the problem, rather than inventing your own, also clicked for me. It's nice to be able to link to an existing explanation of what a Repository is for, instead of having to name and document your own ad-hoc architectural patterns (or, more likely, end up with your ad-hoc architecture being under-documented).

Things like Repositories, Aggregates, Bounded Contexts, and so on are going to be a net drag on your system if you only have a few 100 kloc of code in a monolith. But they really start to shine as you grow beyond that. Bounded Contexts in particular are a gem of an idea, so good that Uber re-discovered them in their microservices design: https://www.uber.com/blog/microservice-architecture/.
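To make the Repository point concrete, here's a rough Python sketch (all names are invented): the rest of the domain code asks the repository for whole Aggregates and never touches the database directly.

  # Hypothetical example: an Order aggregate and its repository.
  import sqlite3
  from dataclasses import dataclass

  @dataclass
  class Order:                      # the Aggregate root
      order_id: int
      lines: list                   # (sku, qty) pairs

  class OrderRepository:
      def __init__(self, conn: sqlite3.Connection):
          self._conn = conn

      def get(self, order_id: int) -> Order:
          # load the whole aggregate in one place, with one query
          rows = self._conn.execute(
              "SELECT sku, qty FROM order_lines WHERE order_id = ?",
              (order_id,),
          ).fetchall()
          return Order(order_id, list(rows))

      def add(self, order: Order) -> None:
          ...  # write the aggregate back inside one transaction

The win is mostly organizational: there is exactly one place to look for how Orders are loaded and saved.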

(Edited to clarify the book author)


It's a solid book. I was chugging through it a bit at a time, until I discovered Scott Wlaschin's Domain Modeling Made Functional. I've dropped the Evans book (as it's mired in mid-00's Enterprise Java patterns) and am enjoying this one quite a bit more.

Both books are great. Read whichever one aligns with your practice best.

I recommend Wlaschin's book for anyone curious about FP, without hesitation. He's great at explaining things from first principles, without veering off into "monad tutorial" territory.

https://pragprog.com/titles/swdddf/domain-modeling-made-func...


Evans' book is a bit of an academic slog, but I finally felt like I understood how object-oriented design should work.

An easier introduction I recommend is InfoQ’s “Domain Driven Design Quickly”, available in print or a free PDF ebook: https://www.infoq.com/minibooks/domain-driven-design-quickly...


I have to agree!

I'm only about a third of the way through; I didn't realize that there was so much about properly building OO systems that I'd simply never seen in the wild, or heard of before.


Thanks, I hadn't seen that book -- will give it a read. I'm interested to see how DDD gets applied to the functional programming paradigm, as it seems quite deeply ingrained in OOP to me.


You'll be impressed, I think. It feels very natural - at least, it does to me.


Is Wlaschin's still reasonable (readable?) if you're not an F# developer (yet?) but are familiar with FP generally?


Very. My F# experience is fairly limited, but I am having no trouble following along.


I had the exact opposite reaction to this book. It seemed like great stuff when I first read it, but over time, I realized it's really misguided in a lot of ways:

- Aggregates are too heavy. You need to make the decision about what is or is not an aggregate way too early in the design process. Boundaries are fuzzy.

- Actual concepts don't exist in nicely packaged bounded contexts. Concepts overlap a lot. You need to make the decision about which concept fits into which bounded context too early in the design process. Boundaries are fuzzy. Things are kinda like other things. The definition of "Employee" is not the same in the Scheduling context as the HR context as the Payroll context, yet they do overlap a lot, and you can't just treat them as completely separate things. If you break everything down into tiny contexts to deal with this, you just make Contexts and Aggregates the same.

- Repositories are not original to DDD and I think are very likely to foster absolutely horrific SELECT N+1 or even SELECT N^2 or N^3 performance. You simply can't let one bounded context do all its expensive operations in a vacuum; not when you have lots of contexts and lots of operations. In a complex system, most parts need to be planning, not doing. The results of most operations should be a plan that you can compose with other plans, analyze, and possibly even have an "optimization pass" if you need one.

- Ubiquitous language is the right idea. If you take nothing else from DDD, take this.


Fair enough; YMMV.

I've not found that Aggregates need to be designed at the beginning; I've found it works fine to define an Aggregate after you start seeing performance issues with query patterns (i.e. define an Aggregate and forbid direct access to sub-objects when you see pathological access patterns/deadlocking).

Personally I've found Repositories to be a good way of enforcing that N+1 queries DO NOT happen. For example, in Django you can have the repository run select_related/prefetch_related and `django-seal` on the ORM query to forbid unexpected queries. This somewhat neuters the flexibility of the ORM, which can be a big cost, but lets you build much more restrictive queries that are guaranteed to perform well. It's a trade-off and I don't think it's a clear win for every use-case, but particularly when dealing with Aggregates I think having a limited number of ways to query is beneficial. (This might mean you're running some sub-optimal SQL queries, over-fetching etc., but for most line of business applications, that's actually a viable trade in exchange for simpler domain interfaces and protection against N+1 queries.)
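As a rough sketch of that shape in Django (the app, model, and field names below are all made up), the repository is the only place allowed to build the queryset, and it always prefetches what the aggregate needs:

  # Hypothetical models/app; the repository owns queryset construction.
  from myapp.models import Order          # made-up model with a "customer" FK and a "lines" relation

  class OrderRepository:
      def get_with_lines(self, order_id):
          return (
              Order.objects
              .select_related("customer")      # follow the FK in the same query
              .prefetch_related("lines")       # one extra query for all children
              .get(pk=order_id)
          )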

Regarding "planning" vs. "doing", that seems to fit quite well with doing most of your work in POJO/POPO domain models, then only rendering those into DB writes at the edges. I think Repos can help with that. (IME you get N+1 selects when you use ORM models that abstract away the SQL queries and have application code interacting directly with ORM models; if you remove the ORM from your core domain logic and force it to live at the edges in Repos, this is not possible.)


I suspect these different design "paradigms" all look somewhat similar when done really well and thoughtfully, but tend to have different gotchas and failure modes. Happy vs. unhappy families and all that. The worst design decisions we've seen are very salient in our minds, and these bad decisions are more shaped by the paradigm than the good ones are, so they look like the fault of the paradigm. Therefore we tend to compare paradigms based on how they go wrong, and people see bad design and think "wow we need a new paradigm". In that sense: YMMV indeed, and if DDD works for you, go for it. It's definitely worth learning, even if only so you can disagree with it! There are a lot of good smaller points in the DDD book even if you disagree with the larger paradigm.

As an example, when I think "Repository", I think a class with methods like "GetEmployee(employeeId)" that hit the database immediately, and probably call other GetSingleThing methods in other repositories, possibly in a loop. That's how you get SELECT N^2s and up. That is, of course, their worst possible use, and you can have much better-thought-out services and still call them "repositories".

That all being said, there are some near-universal "good ideas" I've learned, but they probably don't make a good textbook, and they certainly don't make a singular "paradigm" to follow. Things like: make decisions as late as possible (but no later). Do less, plan more ("do" meaning something with a side effect; planning is side-effect free). Work with sets instead of individuals. Separate identity from data. Raise the ceiling (empower smart people), not the floor (don't design for stupid people), but also realize that everyone is both sometimes. I use these general rules to evaluate paradigms, and I think DDD strongly fails the "make decisions as late as possible" rule, and doesn't tend to foster the "do less, plan more" rule unless you really reinterpret a lot of its directives.


I strongly agree with the general principles you're putting forth, and agree it's hard to build an architectural philosophy which bakes in "good sense" as well. To some extent these can be orthogonal. (At the very least, your architectural philosophy had better not be opposing good sense, but I think it's fine for the philosophy to not encode every best-practice of good sense.)

> all look somewhat similar when done really well and thoughtfully, but tend to have different gotchas and failure modes.

I do agree with this to an extent, although I also think that "what does this theory tell me to do that differs from some other theory" is a quite valuable question; I do think there are some structural aspects of DDD which could be "bad" in other approaches.

Perhaps most obviously, the meta-approach of having well-known named concepts in your architectural framework is in some sense opposed to the "pragmatic design" or "framework-less" approach of just using good taste and experience to come up with the right design for each situation. The latter perhaps being "better" for experienced architects, but harder to teach IMO.

> when I think "Repository", I think a class with methods like "GetEmployee(employeeId)"

Not to beat a dead horse, but on this object-level point -- in my understanding DDD explicitly contemplates both singular fetches like `GetEmployee(employeeId)` and plural like `GetEmployees(createdAfter)`. I think the book (Evans, 2003) has examples of each (though I could be mis-remembering and drawing from some of the other practical DDD-by-example books).

But I can see where a team might use a sub-optimal combination of existing Repo queries rather than writing their own new ones.


> Aggregates are too heavy. You need to make the decision about what is or is not an aggregate way too early in the design process. Boundaries are fuzzy.

I've had the exact same complaint. I think there's a lot of great stuff to take away from DDD -- I also hear myself frequently making the same point about its ubiquitous language -- but going all-in on some of its concepts, especially early in a project's life, is probably always a mistake that may well end in disaster.


The terminology really turned me off because I found it difficult to comprehend. But I loved the idea of ubiquitous language and always meant to return to the book and give it another shot.


Indeed, some of the names are just bad; "Anti Corruption Layer" is actually a quite useful concept for explicitly translating between concepts in different Bounded Contexts (do the translation explicitly at the edge so the mappings between BCs are explicit, instead of mixing terminology), but man I feel like something more snappy could have been used.


This is the book by Eric Evans?


That's the one. (I edited in a clarification to my original post.)


It took me an embarrassing amount of time to fully grok what object oriented programming really means. It's one thing to have someone tell you what an object is, but it's another thing entirely to build a fully object oriented system.

I remember what made it click: I was designing an animation system, which had a bunch of different interdependent moving parts. Once I started treating each part like an object and letting it manage its own state it all just clicked. I started with this massively complex functional-like system that managed four or five different motions, but once it was broken into objects most of the code just fell off and it became a nice clean system.

I was super proud of it at the time, but it's pretty bad by my current standards.


I didn't fully grok OOP until I saw how composition and dynamic dispatch can be used in real code to create abstractions and flexibly swap out different implementations for your interfaces.

You could build a chatbot that supports Discord, Slack, and IRC dynamically at runtime, or a web app that can use multiple different database engines, or a social network with multiple types of posts that can all be rendered in a main feed, or a bunch of other things. In all of these cases you can also take advantage of this kind of dynamic dispatch to inject mock objects for testing, as well as theoretically have an easier time swapping out a layer if you want to change a dependency.
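A minimal sketch of that idea in Python (names are invented): the bot depends on a small interface, so the concrete backend can be swapped at runtime or replaced with a fake in tests.

  from typing import Protocol

  class ChatBackend(Protocol):              # the interface the bot codes against
      def send(self, channel: str, text: str) -> None: ...

  class SlackBackend:
      def send(self, channel: str, text: str) -> None:
          print(f"[slack] {channel}: {text}")   # would be an HTTP call in real code

  class FakeBackend:                        # injected in unit tests
      def __init__(self) -> None:
          self.sent = []
      def send(self, channel: str, text: str) -> None:
          self.sent.append((channel, text))

  class Bot:
      def __init__(self, backend: ChatBackend) -> None:
          self.backend = backend            # dependency is passed in, not constructed
      def greet(self, channel: str) -> None:
          self.backend.send(channel, "hello!")

  Bot(SlackBackend()).greet("#general")     # same Bot code, different implementations
  Bot(FakeBackend()).greet("#general")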

What really frustrates me is that almost none of the OOP instruction I've encountered ever showed these kinds of real, practical examples. They always over-emphasize harmful inheritance-based approaches and never really explain composition. In college we learned about OOP by building a little text-based RPG thing with different types of monsters implemented as subclasses of an abstract base class, which left me feeling like there wasn't much practical use for it outside of game development.

It wasn't until my first internship that I saw a real-world use of OOP, in the form of a giant Spring Boot monolith with tons of dependency injection. Eventually, after staring at that for a few months, OOP finally clicked for me, but I still find it annoying that nobody ever tried explaining this using practical, small-scale examples.


> What really frustrates me is that almost none of the OOP instruction I've encountered ever showed these kinds of real, practical examples. They always over-emphasize harmful inheritance-based approaches and never really explain composition.

So true. The canonical example used by OO pundits is often a "Person". A Person can be either a Contractor or an Employee (inheritance).

Having spent a lifetime in HR systems, there really couldn't be a worse example. It turns out that people may become contractors or employees, and then change back again, leaving the company and then returning. Some people may even hold the role of contractor and employee contemporaneously. Any crazy shit is true for humans. Instead of using simple OO classes to model people, IRL you end up with many, many tables that capture their lifecycle.
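A hedged sketch of the shape that actually survives contact with HR reality (invented names): instead of a Person being an Employee or a Contractor, the person just carries a history of roles, which copes with switching back and forth or holding both at once.

  from dataclasses import dataclass, field
  from datetime import date
  from typing import Optional

  @dataclass
  class Role:                       # "employee" or "contractor", over a time span
      kind: str
      start: date
      end: Optional[date] = None

  @dataclass
  class Person:
      name: str
      roles: list = field(default_factory=list)

      def active_roles(self, on: date) -> list:
          return [r for r in self.roles
                  if r.start <= on and (r.end is None or on <= r.end)]

  p = Person("Ada")
  p.roles.append(Role("contractor", date(2020, 1, 1), date(2021, 6, 30)))
  p.roles.append(Role("employee", date(2021, 7, 1)))     # converted, and later...
  p.roles.append(Role("contractor", date(2022, 3, 1)))   # ...contracting on the side too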

Good OO candidates/examples are usually non-real-world things. Window managers and windows. Programming concepts. That kind of thing, where the rules are simple and fixed at a point in time.


> I didn't fully grok OOP until I saw how composition and dynamic dispatch can be used in real code to create abstractions and flexibly swap out different implementations for your interfaces.

> You could build a chatbot that supports Discord, Slack, and IRC dynamically at runtime...

As a counter-point, OOP isn't necessary for dynamic dispatch.

None of these dynamic features you describe are necessarily any harder in non-OOP languages — or easier in OOP languages.

You can do the same kind of dynamic dispatch in Elixir, Go, etc.

OOP really is a preference, not a differentiator.

Source: I cut my teeth on OOP and used it for many years. Been using Elixir for the last half a decade, and have zero loss of ability to do things like this.


I agree. IMO, Java-style OOP conflates 2 different concepts, polymorphism and inheritance.

Polymorphism (including dynamic dispatch and duck typing) is a game changer, in that it encourages simple, stable interfaces, enables testing, encourages encapsulation, etc. It's a key technique for building big projects.

Inheritance is a tool for reducing the amount of code written by a human, among many others (things like code generation and composition). I haven't seen it unlock other important conceptual domains the way polymorphism does.

Unfortunately many undergraduate curriculums get overly excited about inheritance when teaching OOP. I guess animal-cat-dog is an easy example (though totally unrealistic), but the problems polymorphism solves don't often show up in classroom-sized projects.


Yes, the general applicability of these concepts is exactly why they should be emphasized when teaching OOP rather than wasting time on uselessly contrived examples of inheritance.

However, I'm not really sure why your comment is framed as a counterpoint to mine. I explained what it took for me to grok OOP, but I never said OOP alone provides composition and dynamic dispatch. It seems needlessly antagonistic to make this some kind of programming paradigm pissing contest when that has nothing to do with anything I said.


For me, the issue was that OOP in an academic context was always toy examples. OK, Dog and Cat are Animal that can MakeNoise, Eat, and HaveFur (except sometimes not). Yeah, I get it. It's a ridiculous amount of boilerplate compared to just throwing some functions in and getting it done.

The issue was that anything they taught had to be dumbed down to the point to fit in a classroom and they basically stopped there. We also never wrote code that would be looked at by more than one person. I didn't "get it" until I spent years in a real job, with systems complex enough for this to matter, and practical examples like Dependency Inversion etc as you say. There was also a complete lack of Composition and (at the time I did it) multiple-inheritance which is a bit more controversial these days.

Our traditional Comp Sci course also worked on a variety of languages but nearly zero UI or web programming back then, which are both natural fits for OOP that we largely ignored.


Examples totally do suck.

A class Animal is inherited by a Dog class that barks. That's about as far as possible from any practical example, and it leaves people confused about how the technique is even useful.


You don't really understand OOP until you recognize it as one style among many that you may use in different parts of each program. Some parts are naturally OO-y, others DD-y, others declarative, often an unholy mix in each place. The problem dictates elements of the solution. Shoehorning a solution into a particular style makes a bad solution.

Never forget who is in charge. Deferring to a style dodges responsibility for your choices.


Perl's approach to OOP has flaws but it being generally "data structures you already know but with some magic and functions attached" helped me understand it on a more fundamental level. It greatly supports the mix-and-match approach you mention here.


DD?


My guess would be domain driven design.

I remember that being all the rage 10ish years ago? I think it works, but I don’t see it much anymore


Data-driven. What weather prediction, finite-element analysis, and current AI do.


Domain-driven or data-driven (not sure which).


My first job was doing mostly Lego-pieces greenfield development. I learned a lot, but also didn’t learn as much as in my second job where I started having to work with a much more complicated, slightly “legacy” system with a big codebase.

At small code sizes, OO does not really have any apparent advantages to doing everything imperatively or in a hacky way. In fact I’d argue it tends to overly complicate and obfuscate things. But there is a point at which you really start wanting OO instead because you can no longer reason about the binary as a whole, and need to start thinking in terms of interfaces with separation of concerns. Even if you did understand the whole state, there are too many people working on it to keep up with changes to the whole state in a way that lets you reason about the thing as a whole.

At my first job (and in college) code never reached such levels of complexity, so OO always seemed like some dumb fad to make things more complicated than necessary. I still think that is often the case, but now I absolutely see the benefits when they present themselves.


I doubt you're the only one, and I actually feel that a lot of people that find themselves having to program in an OOP language don't really understand OOP fully or what its motivations are. I personally don't think it's possible to really grok OOP without first understanding the motivations for its origins and how it compares to doing everything in a purely imperative fashion. The best way to actually understand its purpose (and flaws) is to implement an OOP system in an imperative language (e.g. implement a basic OOP class system in C, etc.)

It's common for a lot of online resources (I don't know about university) to provide introductions to programming using OOP, which I think is a terrible mistake. It fills beginners' heads with all sorts of incorrect, fanciful ideas about how computers work and what happens when a program is executed. It's also difficult to know how to use OOP concepts and structures correctly without knowing why they're convenient. I strongly believe you need a good understanding of the technical motivations for such constructs before you can actually use them with good effect--otherwise you're simply relying on imitating patterns or taking certain things as foundational when in fact they are not. I think functional languages and strictly imperative languages are far more appropriate for a first programming language.


Very much agreed. My main problem with school was being told "how" and not "why".

If I were to teach programming, I'd start with straight up assembly. Give people a taste of what the cpu is actually doing under all your abstractions. Probably a few weeks, enough for hello world, fibonacci numbers, and an intro to branching. Then introduce C, get used to pointers and basic high-level language concepts. Really hammer in the idea of thinking like the CPU and being mindful of the resources you're using.

From there, guide them towards building a real application in imperative C or C++. After that the overarching theme is rewriting the application in an object oriented language, with attention to why and when you should and shouldn't use these new tools.

IMO, understanding the fundamentals of how a cpu works is absolutely essential for writing good code at any level of abstraction, and it seems a lot of new programmers are missing that.


I read the earliest articles on OO and Smalltalk in Byte Magazine, which IIRC would have been the early 80s. OOP was totally and utterly bewildering to me, a BASIC and Pascal programmer.

It "clicked" much later, in two stages, using languages that were "kind of like" OOP, even if not rigorously so: LabVIEW, HyperCard, Visual Basic. I think VB had a decent strategy for introducing OO to the rest of us. Out of the box, it was "object based," meaning that you could use classes that had been created by someone else. For a bit more money you could buy the version that let you do full OO, but I never did that. But by being a user of objects, it gave you an idea of what you'd want if you could create them for yourself.

Nowadays of course people range from being bullish to bearish on OO, and I've had the experience of doing it badly and making a mess of things, when a procedural or functional model would probably be better.

Kind of a lesser issue is that I finally grasped how to work with a modern OS after laying my hands on the first couple volumes of the Win32 programmer's manuals, which I think were vastly less forbidding than Inside Macintosh.


> the first couple volumes of the Win32 programmer's manuals

What books exactly are you referring to here?


That's a long time ago. ;-) As I recall, there was a set of 2 or more books, published in the early 90s. And I never had my copy, but several of the software devs had them on their shelf, and I'd borrow them as needed. Typically, my use was to write a binding for Visual Basic. For instance I hacked into the audio hardware, and wrote a bespoke interface to the serial ports.

It was strictly API stuff, not how the actual OS works inside.


I've always found OO programming completely natural. In fact, it's the only kind of programming I've ever really done - create a structure with data members, and then create functions to work on that structure. You can (and should) do this in low-level languages such as C and assembler.

Of course, if you want to go the whole polymorphic route, I'd suggest using something like C++, but the key ideas are really structures, and functions that operate on those structures.


As someone who started programming in BASIC, OO is a really good way to think about stuff (it introduces the abstraction of “conjoined data” that is typed and has functions) but it was not natural. It just feels natural now that I’ve spent so long in school and at work dealing with OO languages. I imagine if you had started with BASIC you’d feel the same


I really struggle outside OO paradigms as well. Having classes is such a useful framework to organize how data is stored, operated on, and moved around.

The hardest bit is knowing when to stop; aiming for the RAII sweet spot in C++ is the goal, not AbstractBeanFactoryFactoryBuilder().


I've always had a hard time because I was thinking... why would I carry the performance hit of instantiating objects for a bunch of stuff?


Why accept the performance hit of copying structs when you could use a reference type?

It's a tradeoff. OOP makes sense for certain types of problems, and functional programming solves other problems. It's very much a case of selecting the right tool for the job.

I recently transitioned from C# game programming to C++ embedded system firmware. Objects don't make much sense for what I'm doing now, and I'm coming to see the elegance of functional programming.

There is no one answer, just a box of tools. What makes a good programmer is understanding each of your tools well enough to decide which one will solve your problem in the way you want.


Could be wrong but, at least in C++, there's realistically no overhead to creating a trivial object that you wouldn't have by manually instantiating the variables anyway.

(Ignoring virtual stuff, lengthy constructors -- code run is code run but a raw object I think is by itself relatively harmless.)


You are right - in C++ (and in most other languages that support OO) there is no overhead on instantiating a structure, or variables, that you would not have to pay anyhow.


You have to instantiate variables anyway. Grouping those variables in objects simply makes your code more understandable - there is really no extra cost.


If the functions are not built into the objects, that's not object-oriented programming.


When I was younger I had a lot of trouble with OOP. It probably didn't help that my two strongest languages at the time (PHP and Python) allow you to use them fully procedurally if you want. For some reason I never found an explanation that focused on state. For a lot of the examples I saw, I remember thinking it made a lot more sense to just rewrite the functions so they aren't part of a class.

I still frequently see code examples online that are written using classes that don’t need to be. I imagine this is the reverse problem: people from languages like Java thinking everything has to be in a class.


Yes, looking back I was shocked to realize how little my OOP education and work had talked about state.


I had a similar shift in perspective recently, that went from viewing objects from the outside as things to be manipulated, to viewing them from within as independent entities with boundaries. What made it click for me was watching Alan Kay talks and reading “Object Thinking” by David West. Now, I think the idea is due for some sort of renaissance. Though the culture of programming has turned against it in favor of FP, OOP continues to underlie the most important languages, organizations, and products in a way that speaks to its ultimate potential: to build evolvable, growable, maintainable software systems in service of human flourishing.


This one is the same for me. I'm not even sure I still totally get it.

It feels like something used an awful lot in ways that don't really add any value. But again, it must just be my lack of understanding. Maybe one of these days it'll click in.


In my animation system, I had a laser beam projecting from a fixed point to a movable point in space. There are also sprites which travel up and down the beam.

Moving the beam endpoint required an animation to bring the width to 0, change the endpoint, then ramp the beam width back up. The sprites each have their own animation in different conditions, and moving them along the beam is another animation.

The real key for me was the sprites. I built a class which takes in the start and end points of the beam and an update method. Once you start the sprite up, you just have to poke it every frame with the time delta and it manages all of its own animations.

Likewise, the laser itself was an object that managed its own animations and the sprites related to it.

This resulted in a bunch of objects with just a little code in them. But because the concerns are separated, it results in less code overall than it would take to manage everything all at once.

Is this the best, or even a good, way to do this? Probably not, but it was a very beautiful solution in the moment.


Nah, none of what you described (encapsulation and separation of concerns) is specific to OOP; you could do that with FP or imperatively with plain structs and functions. It's more like you are finally starting to understand how to program in the large.

I'm not completely against OOP, but written code that strictly adheres to this paradigm tends to be more convoluted than necessary.


An object is a function closed over its variable default arguments.


Unit testing and using dependency injection to write test-able code.

I'm not sure if it was years, but it wasn't immediate. I just didn't understand why dependency injection was good at first, and not just someone's weird personal code style choice.

I thought it was just people being "Enterprisey" which I'd encountered many times over the years.

Once I committed to unit testing, I realized how necessary it is.

Unfortunately I still encounter customers who haven't bought into it and as a result have untestable code. It's so hard to go back and retrofit testing.


Seven years into my career I'm increasingly convinced that the emperor has no clothes with respect to unit tests that are just transcripts of the code under test with "mock.EXPECT()" prepended to everything - 95% by volume of the unit test code I've ever read, written, or maintained.


I call those lockdown tests and they're a smell. You don't want to dictate the implementation, just the inputs and outputs. The former leads to very brittle code, the latter describes and validates the contract. It's also important where you put the logic of the test in the code base when you have multiple layers. This latter part is harder and system dependent.

In many cases mocks are now overused, where previously they were important in, say, 2008. Especially now with languages that support functions as objects, better generics, and other features which weren't common a while back. Likewise, frameworks and languages are generally way more testable now, which means you're doing fewer backflips like static-injecting wrappers for DateTime.now into libraries to make tests work. This further allows more contract testing and less implementation-specific testing.

As with most things, there is a lot of nuance/art to doing it well and smoothly.


> You don't want to dictate the implementation, just the inputs and outputs

When programmers really embrace dependency injection like the parent comment, the implementation largely is the inputs, and your code expects them all to be implemented correctly (or mocked correctly by test code). I completely agree that mocks are overused, and their importance in 2008 was, in my view, a direct result of the popularity of dependency injection patterns.

Following that logic, the best way to remove mocks from your tests is to minimize dependency injection, relegating it all to a single place in the code wherever possible, and implementing all other domain logic as pure functions of input to output. How to test that code which touches other systems? Integration tests (or multi-system tests or whatever you like to call them).


I generally agree with you; DI got taken really far in many ways up until around 2015.

In 2008, I actually found code often got more testable when it was dependency injected. I spent a month ripping singletons out of a GUI application so we could make it into a web service; that would have been a lot easier with a DI model. The system was nigh-on untestable until we got rid of those singletons.

I link it in another comment, but https://github.com/gaffo/CSharpHotChocolateZeroQLIntegration... is how I do things these days: API integration tests, plus more focused unit tests where you don't understand the library well yet, where there are lots of tricky edge cases, or for error conditions (such as setup) that are harder to integration test. I'm still feeling it out.


Yep, I think mocks are mostly a smell. The "functional core" of a module should be entirely or almost entirely unit testable with (possibly fake) dependencies passed in. The glue code ("imperative shell") should be tested at a higher level - "integration" or "end to end" or whatever you want to call it - which looks at the externally observable effects of running the code (database changes or API responses or whatever) rather than the details of its execution.
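A tiny sketch of that split (hypothetical names): the core computes a result from plain values and is trivially unit-testable; the shell is the only part that touches the outside world, and it gets covered by higher-level tests.

  # functional core: pure, no mocks needed to test it
  def plan_price_changes(products, discount):
      return [(p["id"], round(p["price"] * (1 - discount), 2)) for p in products]

  # imperative shell: all the side effects live here, tested end to end
  def apply_price_changes(db, discount):
      products = db.fetch_products()               # hypothetical I/O in...
      for product_id, new_price in plan_price_changes(products, discount):
          db.update_price(product_id, new_price)   # ...hypothetical I/O out

  # the unit test is just data in, data out:
  assert plan_price_changes([{"id": 1, "price": 10.0}], 0.1) == [(1, 9.0)]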


How are mocks a smell if "unit testable with (possibly fake) dependencies" is okay? That's what mocks are.

Or do you mean specifically "expecting" interactions with those mocks? Because I agree that's usually not that valuable.


Fakes are not mocks. A fake is an actual implementation of the dependency that runs in-process/in-memory. This means that, unlike with mocks, your test code does not dictate the behavior/outputs of the dependency.

Ideally, service/library owners would write and maintain the fake to ensure that it stays in sync as changes are made to the actual service.
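A rough Python illustration of the difference (made-up interface): the fake is a small but real implementation that can be shared across tests, whereas the mock only says whatever the test scripted it to say.

  # a fake: real behaviour, kept in memory
  class FakeUserStore:
      def __init__(self):
          self._users = {}
      def add(self, user_id, name):
          self._users[user_id] = name
      def get(self, user_id):
          return self._users.get(user_id)

  fake = FakeUserStore()
  fake.add(1, "Ada")
  assert fake.get(1) == "Ada"          # the answer comes from the fake's own logic

  # a mock: the test dictates the behaviour
  from unittest import mock
  store = mock.Mock()
  store.get.return_value = "Ada"
  assert store.get(1) == "Ada"         # true by construction, proves little by itself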


Basically what Cyph0n said, fakes and mocks aren't the same thing.

But it isn't quite as smelly when I see a mock (or "stub" more typically) used as an expedient way to create a fake to pass in as a dependency.

Like you said, it is asserting on the interactions with a mock that is what primarily smells bad to me. I very rarely (honestly I think maybe never) see this done in a way that isn't just rewriting the implementation of the method as expectations in the test.


Amen. I have been frustrated at many jobs where developers were so proud of endlessly writing mocks and spending more time on their tests than on core functionality.

Over time I have favored very basic unit tests and leaning in more on automated integration and regression tests. I know the theory of unit tests catching things earlier, I just don’t think it matters in practice.


I think in practice unit tests are both the most and least useful tests. Unit tests of code that just generates output as a function of input are the most useful, and unit tests of code that just glues together a bunch of side effects are the least useful. I like to find ways to structure things so that it is mostly comprised of the first kind of code, and has so little logic in the second kind of code that it's easy to justify not unit testing it.


I once saw type checking presented as kind of an alternative to unit tests which doesn't rely on the assumption that developers write good tests, and I really like that viewpoint. A powerful type system can prevent many bugs, doesn't need additional test code and doesn't make any assumptions.

Of course it's not necessarily a full replacement, but it's definitely better and more time efficient than bad tests.


Yes, static analysis of types can eliminate a big class of unit tests (basically just checking pre- and post-conditions on types), which is a big part of why it's great. But most unit tests should be for logic, and that's why they aren't very useful for "glue" methods which just thread things through different dependencies - they shouldn't have much interesting logic.


> Of course it's not necessarily a full replacement

I'd actually say: Not in the slightest. A type system just rules out illegal input values[0] but it won't ensure that your business logic is correct.

And even that[0] is not entirely correct because, most of the time, a type system just rules out some illegal values but not all of them, because it cannot represent all possible constraints. Gary Bernhardt discussed this whole type checking vs. tests debate at length here: https://www.destroyallsoftware.com/talks/ideology


This is the way, but it’s tough to sell “we’re not going to unit test this part” in a professional setting where there are incentives to look more responsible than thou, directors are looking at unit test coverage reports, etc.


Yeah I have no qualms about writing mostly-BS unit tests when the existing structure of a project makes it infeasible to write good ones. When in Rome! But when I'm starting from scratch or have a chance to do refactoring, I move toward the "functional core" approach as much as possible.


You can still get code coverage points with broader tests.

I think it’s reasonable to disavow expecting single unit tests for nearly every method on every class with mock/boilerplate/copy-paste hell. However, I would still expect “local integration” unit tests that exercise broader chunks of code and keep code coverage up.

So it should be “I don’t need to write a new unit test for this because an existing unit test calls the method that calls this method without it being mocked and you can see it’s covered in the code coverage.”


Code coverage in Go is on a package level. A test that exercises a handler, controller, and repository can accrue coverage for at most one of them.


Huh. I guess I’ll add that to the list of reasons I don’t like Go.


I think my favorite of these was a function that called 4 other functions and didn't do anything else, its unit test mocked out all 4 other functions and asserted that each was called.


Every damn day. It’s the thing that puts me most at risk of falling out of love with software engineering.


I had a take home test a year or two ago and I got grilled during the interview for not writing crap like that.


How would you unit test that?


This was in Django, where a good such test would set up the database, call the function, then check that the new state in the database is as expected.

(Which IIRC is what I changed it to)
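Something along these lines (the model and function names are invented), asserting on the resulting state rather than on which internals got called:

  from django.test import TestCase
  from myapp.models import Invoice          # hypothetical model
  from myapp.billing import close_invoice   # hypothetical function under test

  class CloseInvoiceTests(TestCase):
      def test_marks_invoice_closed(self):
          invoice = Invoice.objects.create(status="open")
          close_invoice(invoice.pk)
          invoice.refresh_from_db()
          self.assertEqual(invoice.status, "closed")   # assert on state, not on calls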


I've seen tests like that ("has the mock been called")... and in general I've felt that they were kinda useless.

Thinking about the original TDD approach -- red, green, repeat -- maybe it's not that bad? I prefer your approach, where you actually verify the state change. But in the absence of that, I wouldn't mind the "call the mock" approach. At least it does check some of the contract?


I would say it's worse than no test, because it discourages refactoring. Any alterations would make the test fail, even if it still does what it's supposed to do.


I've got mixed feelings about this. I'm leaning towards agreeing with you, because testing for the most part should be about the contract and not specific implementation. However in rare cases, as part of early development phase perhaps, I could see this being a useful signal. Provided it gets refactored later on.

I suppose having lots of mocks is a smell after all...


It’s better than no test because it makes the metric go up, and as we all know, virtuous and rational engineering practice is about recognizing that metrics are the objective truth about The Good and anything else is just your opinion. /s


Yeah, the mere presence of unit tests is not enough. It has to actually assert something useful.

When I code review, I try to make sure I call out "fake tests".


The useful assertion to be made about gateway/repository layer code is that it gets the expected behavior out of the dependency. This is not an assertion you can make when you've mocked out the dependency. You must make it in an integration test, not a unit test. Unit tests in these layers just make assertions about "it calls this method on the client" or "it sends this string," which tells you nothing about whether doing that is actually correct.

It's relatively uncommon for handler/controller code to have logic worth testing, most of the time it's just maintaining the separation of layers and concerns by wrapping gateway/repository calls. All there is to assert about it is that "it calls this function in the next layer."

Every once in a while there's nontrivial functionality to test in the middle, and unit tests can often be a good fit for that, but in my experience it's more the exception than the rule.


That’s a great point. Seeing lots of mocks and assertions that certain functions are called is often much ado about nothing since no actual code functionality is exercised. I do sometimes see the return value functionality of mocks used as a stub, just because the dev hasn’t internalized the distinction and can “make it work” with a mocking library.

One of the only legit use cases for mocks that I have personally come across is validating things like a sequence of API calls to an external service, or queries to a database where there is a check that certain efficiencies are guaranteed, e.g. verifying that an N+1 select problem doesn't creep in, or knowing that an app-level caching layer will prevent redundant API calls.
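For the query-efficiency case specifically, Django can check this without a mock at all; `assertNumQueries` pins how many queries an operation is allowed to make (the function under test below is hypothetical):

  from django.test import TestCase
  from myapp.feed import build_feed          # hypothetical function

  class FeedQueryTests(TestCase):
      def test_feed_avoids_n_plus_one(self):
          with self.assertNumQueries(2):      # fails loudly if an N+1 creeps back in
              build_feed(user_id=1)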


I keep looking for a reason to write software using a proper DI container framework. In talking to Java devs, most of them just mention injecting mocks for your database. You don't really need a full DI container for that; you could use a service locator instead, or some even lighter DI pattern. If you have that particular hammer, though, and know how to use it, then it becomes easy to achieve and to use config to inject your backend.

But that's a long way from glorious full DI containers where you never call 'new' in your code anywhere and all object creation can be dictated by config. I suspect that must only be needed by people who maintain 1,000,000-line codebases at the center of massive bureaucracies.


At my bigco, there's no service-level DI (C++).

Just vanilla C++ classes, and virtual interfaces if we need to mock things for unit tests.

No automatic wiring of the hierarchy.


I'm thinking more like programmers working at the IRS or SSI or something like that, where you have million-line, decades-old codebases. Not anything that Google or Amazon would write. Applying a DI framework right from the start may actually be a best practice--one that allows the codebase to evolve under the next 10 years of contract programmers--instead of just being the YAGNI that it would be to anyone in the tech sector.

Although you might say that microservices are like a distributed DI framework that would let you rewrite or mock components provided that the rewrite/mock adheres to the API framework (curiously, I've actually seen that used in tests with a quick and dirty in-memory sequential unauthenticated server used for testing clients where the server passed the same API test suite as the real server).


The thing that made it really click for me was the quote (I'm not sure to whom it belongs) that is roughly "code to abstractions/interfaces, not implementations".

The idea that I take advantage of good abstractions and I send those objects into my classes that need to perform actions via those abstractions made a lot of sense. Helps enable good polymorphism, as well as unit testing and other things.

I don't think I'm doing it justice, but it took a good while to understand the reasons behind the idea. Some books that helped me grok it were:

- Patterns of Enterprise Application Architecture
- Clean Architecture
- Architecture Patterns with Python

along with running into problems at work that could be easily solved with a decent abstraction, and learning to apply it directly.


Can you explain the concept? I feel I didn't grok it fully


I found this explanation very helpful: https://hakibenita.com/python-dependency-injection (if you program in Python)


Not sure about the poster above, but I found a large amount of value in writing tests when developing API backends. I knew the shapes and potential data. It was easier to write tests to confirm endpoints looked right than to manually hit the API endpoints.


Always amuses me when I see tests spin up a http server, just to call a function.


It used to be hard. I just spent a day in C# and GraphQL figuring out how to do API-level tests with a new framework. But when you get it working it's ever so much faster.

https://github.com/gaffo/CSharpHotChocolateZeroQLIntegration...

Still playing around with the right level for this, but it's currently nice as it gives me compiled type checking and refactoring. This is an example/extraction from another project which uses React as the client. I wasn't big on cross-language API-level tests yet, for speed of development, as that's a tradeoff: redundancy vs. even more framework.


Sorry, I probably wasn't clear. I wasn't spinning up the server itself, just testing the endpoints via their functions. Though this likely falls more under integration vs unit tests.

As for unit tests... I mostly add them to projects when something egregious happens or a very hard-to-spot bug can occur - just to prevent anyone else from foot-gunning themselves (here be dragons, or whatever).


You were clear. I was agreeing with you.


What specifically amuses you?

Today's HTTP servers may have any number of request-altering/enhancing "middleware" calls between the incoming request and the actual business logic/function.

How do you ensure that your API works as designed if you only test (pure) business functions? Or do you re-create the middleware chain of functions manually for your test?


/api/foo -> fooApi()

You don't need to test the call to /api/foo, you only need to test the call to fooApi(). It doesn't/shouldn't require a http server to do that. Just call the function directly.

If you want to test that /api/foo exists, that is essentially a different test and only requires a mock version of fooApi(), because you've already tested fooApi() separately above.

The benefit of this approach is that your tests run a lot faster (and easier) not having to spin up an entire http server, just to test the function.

As for the middleware... that is also tested separately from the business logic. You don't want to tie your business logic to the middleware, now do you? That creates a whole dependency chain that is even harder to test.
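In Python terms, the distinction is roughly this (the handler name is made up); the first test calls the function directly, while reaching the same code through HTTP would need a whole server:

  def foo_api(limit):                         # hypothetical handler behind /api/foo
      return {"items": list(range(1, limit + 1))}

  # unit test: call the function, no server involved
  def test_foo_api_returns_items():
      assert foo_api(limit=2) == {"items": [1, 2]}

  # the HTTP route itself ("/api/foo exists and is wired to foo_api") is a
  # separate, much smaller test, and the middleware chain is tested separately again.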


Obviously, you are doing it correctly and not putting all your logic in the controller.

See also: Frontend development that requires running the entire stack in order to test it.


The controller in this example is fooApi(). I generally reserve this layer for taking the input parameters (ie: query/post data), and then passing that data to some sort of service which executes the business logic on that data.

For example, the business logic is what talks to the database. This way, I can test the controller separately from the business logic. Often, I don't even bother testing the controller, since it is usually just a single line call to the business logic.

Anyone who writes a lot of tests realizes very quickly that in order to write testable code, you have to separate out the layers and compartmentalize code effectively in order to allow it to be easily tested. Tracking dependencies is critical to good testing (which is where this whole DI conversation got started).

If you aren't writing code like this, then you're just making testing harder on yourself... and then we end up with tons of code which can never be modified because the dependencies between layers are all too complicated and intertwined. Don't do that.

At this point in my 27+ year career, I don't see any reason to not do things correctly. The patterns are all instinctual and no need to try inventing something new, I don't even think about it any more, I just do it from the start.


Technically this would be considered an integration test, I think.


I'll add to that: the difference between "offline" unit testing and Spring integration testing with test containers and real application contexts, plus all the related concepts like @SpyBean, @Mock, @MockBean...

I always hated testing and I still do, but every time I commit to doing it right I catch so many errors before QA.


> dependency injection

The term is unfamiliar to me -- is it related to "fault injection"?


Let's say Class A depends on Class B. A lot of people have the instincts to have A construct B so that the callers of A don't have to worry about it.

But this makes testing A in isolation difficult. When testing A, you want to mock out B with an instance the test can manipulate.

So we want A to not create B; instead we want B to be "injected" into A. The general strategy of having B passed into A is called dependency injection.
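In code it's as small as this (Python, made-up classes):

  class Database:                     # "B": the dependency
      def save(self, record):
          print("saved", record)

  class ReportService:                # "A": does not construct its own Database
      def __init__(self, db):
          self.db = db                # B is injected from the outside

      def archive(self, report):
          self.db.save(report)

  ReportService(Database()).archive("q1 numbers")   # production wiring
  # in a test, pass in a fake or mock Database instead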


Dependency injection is one of those technical terms that are made up to describe something that has been done for decades but with a new name because the creators of the new term don't have much experience and/or they want to gain notoriety for creating a new programming technique.

If you've ever written a constructor for a class that has arguments (within the constructor signature) that are used by the instance of the class when instantiated then you have done dependency injection, or put more simply 'passing stuff in' which was eloquently stated in another comment on this thread.


I believe the term Dependency Injection was coined by Martin Fowler, as the meaning of the pattern name Inversion of Control is less obvious. I'm going to assume that Martin Fowler has quite a bit of experience, although you might accuse him of wanting to gain notoriety...

In any case, DI is not at all a new trendy term. Inversion of Control dates back to the gang of 4 patterns book from the '90s. Yes, constructor parameters are one way to implement DI, but not the only one. I'm ambivalent on the whole pattern movement, but they are a great educational tool for novice programmers, and it is good to have some standard terminology for core patterns.

To put it another way, constructor based DI is not simply the use of constructor parameters, it is understanding the OO design principle that side-effecting code should be isolated into objects, rather than just dumping a bunch of DB queries into your functions, like we did back in the stone age.


People would have way fewer opinions on it if it was just called "passing stuff in"


It is a pattern where you inject the dependencies of a class into it rather than creating the instances of the dependencies within the class. For more info: https://en.wikipedia.org/wiki/Dependency_injection

It makes the code cleaner and more testable.


What I find troubling about that page is that it does not qualify up front whether the dependencies are:

A. Also depended upon by other, coequal unrelated classes, possibly maintained by others, OR

B. Depended upon only by a single class (of higher functionality)

Situation A is sometimes called a portability interface; B might be called an internal structuring interface.

The difference has crucial implications for social power relationships between the human developers and users involved.


The idea of DI is that you create resources indirectly, and don't call the named constructor in your code. You then use interface types to interact with the resource.

This way, your code isn't bound to a specific implementation.

Some DI frameworks even go so far and define all resources in a config file. This way you can switch out the implementation without a recompilation.


No, it is a design pattern for structuring code. It is also known as the Hollywood principle (don't call us, we'll call you), meaning that the dependencies of a class are provided from the outside; the class itself doesn't know how to create instances of its dependencies and just relies on the dependency container (also named inversion of control)...


Going to steal a description I wrote for a blogpost several years ago, when I had recently understood DI for the first time so it was very fresh in my mind:

https://hasura.io/blog/build-fullstack-apps-nestjs-hasura-gr...

Dependency Injection solves the problem of when you want to create something, but THAT something also needs OTHER somethings, and so on.

In this example, think about a car.

A car might have many separate parts it needs:

  class Car {
    constructor(wheels: Wheels) {}
  }

  class Wheels {
    constructor(tires: Tires) {}
  }

  class Tires {
    constructor(rims: Rims, treads: Treads) {}
  }

  class Rims {
    constructor() {}
  }

  class Treads {
    constructor() {}
  }

We can manually construct a car, like:

  const car = new Car(new Wheels(new Tires(new Rims(), new Treads())))

But this is tedious and fragile, and it makes it hard to be modular.

Dependency injection lets you register a sort of "automatic" system for constructing an instance of "new Foo()" that continues down the chain and fetches each piece.

  class NeedsACar {
    constructor(@Inject private car: Car) {}
  }
And then "class Car" would have an "@Inject" in it's "constructor", and so on down the chain.

When you write tests, you can swap out which instance of the "@Injected" class is provided (the "Dependency") much easier.


I know the car is a classic, cliche OOP example, but for any semi-experienced programmer, I feel like it's a really bad choice of analogy which only serves to obscure how you would use a technique like DI in actual real-world code, rather than an artificial toy example.


The first example is also dependency injection. The rest of what you described is why a dependency injection framework often becomes useful in a system with a large dependency tree.

The alternative to dependency injection is for functions to instantiate dependencies internally rather than having them be passed in from the calling context.


I think you might have it backwards. The first example _is_ dependency injection, too, since you are passing the required instances into the constructor. The non-DI approach would be for the Car class to import and instantiate the Wheels class inside its constructor function.


I'd probably reach for a Builder pattern first for that rather than go full DI container.


Good series about the approach from a functional perspective https://fsharpforfunandprofit.com/posts/dependencies/


You'd do fault injection by injecting dependencies that respond with / throw faults.


I think the poster is referring to Guice or something similar - that there's a framework which selects which instantiated runtime object gets injected as a dependent into another, thereby automatically "figuring out" what object needs to get instantiated first, and then next etc.


Agreed, this has taken me years. It also is tough because lots of people overuse it IMHO. When the logic keeps bouncing around dozens of classes it's too hard to follow, even if it's easy to test.


+1 to this. It doesn't help that some dependency injection frameworks' (ahem, looking at you Dagger2) error messages can be convoluted and hard to understand.


Another solution is to eliminate classes and only use structs or similar plain objects. Makes mocking and testing functions much easier. At this point I see no reason for OOP whatsoever and consider it a big mistake.


Getting rid of data abstraction and encapsulation is throwing the baby out with the bathwater.

The abstract concept of OOP (messages between complex objects, as defined by Alan Kay) is an attempt at mimicking biological systems. Most modern languages implement data abstraction, but call it OOP, where they encapsulate some functionality with the data it operates on. Really helped with varying data formats in the Air Force in the '60s, apparently. There isn't anything wrong with this abstract concept either - it's a way of structuring a solution, with trade-offs.

Support for unit testing and mocking has little to do with OOP, and everything to do with the underlying platform. Both C++ and Java, for example, do not have a special runtime mode where arbitrary replacement of code or data could occur. This is necessary for mocking functionality that is considered implementation detail and hidden by design. The hidden part is great for production code, not great for testing.

For example, if an object in Java has a field like 'private final HttpClient client = new CurlBasedHttpClient();', this code is essentially untestable because there is no way in Java to tell the JVM "during testing, when this class instantiates an HttpClient, use my MockHttpClient".

The Kotlin ecosystem fixed some of that with MockK, which can mock the constructor of a Kotlin object, and you can return your mock implementation when the constructor is invoked.

Clearly, it's a platform issue. There could be a world where you could replace any object in the stdlib or any method or field with a mock version. JavaScript is much more flexible in that regard, which is why unit testing js code is much easier.

The root of it all stems from the fact that unit tests need to change some implementation details of the world around the object, but production code should not be able to, in order to get all the benefits of encapsulation.

If you get rid of modern OOP, you are swinging the pendulum in the opposite direction, where your tests are easy to write on any platform, because everything is open and easily accessible, but your code will suffer from issues that creep up when structures are open and easily accessible, such as increased coupling and reduced cohesion.


I believe that combining state and functionality - the root of OOP - is a mistake. Tons of programming patterns and concepts exist to solve this fundamental mistake. When you stop using classes all of your code becomes so much cleaner, easier to reason about, test, debug, and so on. You can never create only pure functions in the real world but you get closer to this ideal.

I stand by my statement that OOP is totally unnecessary.


You can believe that the world is flat, announce that boldly and stand by your statement. It doesn't mean it's true in reality, only that it's true in your mind.

Edit: This is more blunt than I intended it to be. For what it's worth, I happen to agree with you in principle, but I also think you're taking the roof off the car here and boldly claiming that the experience is so much better in the summer. What about winter? What about when it rains? What are the trade-offs? Not mentioning the downsides means that you either haven't found them, haven't thought about them or are intentionally omitting them from the discussion.


No need to tone it down; You were right in calling out the GP.

To dismiss the whole of OOD/OOP so cavalierly just goes to show they don't know what they are talking about.

Much of the success of Modern Software is directly due to the wholesale adoption of OOD/OOP in the last few decades.


OOP is successful despite itself. My point is it’s just an inferior way of programming and there is no reason to use it other than legions of OOP programmers who take it as gospel. Obviously this is an opinion but it’s informed by lots of experience in new and legacy code bases.


To be blunt, you are stating "opinions" without any basis in facts; hence it is hard to take you seriously.

Separation of Concerns, Modularization, Reusability, Type Hierarchies, Type Composition, Interface contract-based programming, Frameworks etc. were all made mainstream by OOD/OOP. These are things taken for granted by programmers today. As somebody who has been doing OOD/OOP since the early nineties I can tell you it was the single biggest reason for the explosion of Software in the past few decades.

As a concrete example, early in my career I had programmed in C using the Windows API; both 16-bit and Win32 (Thank you Charles Petzold). It was difficult, tedious and a lot of work. And then Microsoft introduced the MFC (Microsoft Foundation Classes) Framework with the Visual C++ IDE. With a few clicks of the wizard, I had a complete skeleton application with a lot of the hard work already done for me. That was a revelation for me on the power of OOD/OOP. Things I had slaved over in Win32 were now at the fingertips of every noob who could type. The same revelation happened (but not to the same extent) when I moved from Xlib to Motif on Unix platforms.

I had pointed you to Bertrand Meyer's book OOSC2 (in my other comment) as the book to read to understand OOD/OOP. Another great book to study is Barbara Liskov and John Guttag's Program Development in Java: Abstraction, Specification, and Object-Oriented Design.


The major reasons software exploded are the internet and the ubiquity of computing devices that get cheaper, faster, and smaller. Language had nothing to do with it, with or without OOP.

I was formally trained and cut my teeth in OOP code bases, and at my latest company we are very light on classes. 99% is plain objects and modules that operate on them. Everything is very straightforward and logical. In my side projects I’ve stopped using classes as well and it’s so much cleaner.

This is opinion and preference. If I hired a guy who wanted to litter the code base with AbstractUserBeanFactoryFactories I’d have to let them go because it’s a situation I don’t want anymore.


>The major reasons software exploded are the internet and the ubiquity of computing devices that get cheaper, faster, and smaller. Language had nothing to do with it, with or without OOP.

This is putting the Cart before the Horse. Computers, Internet etc. can't do anything without the Software to drive them. That software is written in some Language/Tool using some Paradigms and Software Engineering principles. The ease of use, ease of structuring, ease of understanding etc. of these are what drives Creation, Adoption and Expansion of Computing and Devices.

>I was formally trained and cut my teeth in OOP code bases and my latest company we are very light on classes. 99% is plain objects and modules that operate on them. Everything is very straightforward and logical. In my side projects I’ve stopped using classes as well and it’s so much cleaner.

If you think merely using a syntactical structure like "class" (and Design Patterns) is what makes code OOP, then you don't understand OOP. You can do various degrees of OOP without language syntactical support. This is why I listed the principles and not some syntactic sugar in my previous comment. It is a way of thinking and Software Engineering which has given the most "bang-for-buck" so far in the Industry.

>This is opinion and preference. If I hired a guy who wanted to litter the code base with AbstractUserBeanFactoryFactories I’d have to let them go because it’s a situation I don’t want anymore.

Opinion and Preferences must be based on facts else it is only as good as "Flat Earther" category and nothing more can be discussed.


This is so silly that I will just leave this here for edification: https://en.wikipedia.org/wiki/Object-Oriented_Software_Const...


Yes, but if you represent 2 functions over some state, a(s) and b(s), then you have the same issue as if it were:

  class s:
    fun a(self)...
    fun b(self)...

The problem is mutability, not how state is bound to a function.


The idea is to make the functions pure and return the state, ‘a(s_1) -> s_2, value’. It makes it so much easier to test and to write concurrent programs.

Though it’s not always practical, like when you need to push an element to an array to make the updated state. It is nevertheless possible to shave off pure parts considerably, making mutability easier to maintain.
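
A tiny sketch of that shape in TypeScript (a made-up counter state, just to show the signature):

  type State = { count: number };

  // Pure: takes the old state, returns the new state plus a value. Nothing is mutated.
  function increment(s: State): [State, number] {
    const next = { count: s.count + 1 };
    return [next, next.count];
  }

  const s1: State = { count: 0 };
  const [s2, value] = increment(s1); // s1 is untouched; s2 is the updated state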


I don't understand but am intrigued. Can you point to some resources that elaborate on and demonstrate what you're talking about here?


OOP does not have a monopoly on abstraction and encapsulation. What OOP does, namely object-level encapsulation, is just a very extreme way of structuring code around mutable state. IMO the better alternative is to avoid mutable state as much as possible and keep data and code separate. Code structured as pure functions is easy to test and encapsulation can be done at the module level.


OOP does not require mutable state. OOP has its origins in imperative languages, and so in the past OOP often involved poor management of mutable state.

I am a Scala programmer, and our code mixes OO and FP. Almost all of our classes are completely immutable.

In the end, you will always need to encapsulate data and functions in some manner. The FP approach involves module systems, but as I understand it, objects in Scala actually provide a better module system than found in pure FP languages.


My point is that improving the state of unit testing has little to do with OOP, and everything to do with the environment that the code runs in.


Recursion is an obfuscated way to run a loop with a stack. At first it seems like magic, and makes things so much easier. When trying to replicate it the first time actually using a loop and stack it’s really hard to get right. But by the time you’ve done it two or three times, it’s as easy as breathing.

Similarly, OOP is an obfuscated way to run a function with a context. The first time you separate the data in the context from an object, it’ll be hard to get it right and make it easier than to just use Objects and Methods. But once you’ve done it two or three times, it’s as easy as breathing.

You can optimize loading the context to be better than copying a bunch of unaligned data in lots of ways, but polymorphism is a common way to get there.
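
A toy illustration of that loop-plus-stack equivalence (summing a nested list in TypeScript; a sketch only, not taken from the comment above):

  type Tree = number | Tree[];

  // Recursive version: the call stack holds the pending work.
  function sumRec(t: Tree): number {
    return typeof t === "number" ? t : t.reduce((acc, c) => acc + sumRec(c), 0);
  }

  // The same computation as a loop with an explicit stack.
  function sumLoop(t: Tree): number {
    const stack: Tree[] = [t];
    let total = 0;
    while (stack.length > 0) {
      const cur = stack.pop()!;
      if (typeof cur === "number") total += cur;
      else stack.push(...cur);
    }
    return total;
  }

  sumRec([1, [2, 3], [[4]]]);  // 10
  sumLoop([1, [2, 3], [[4]]]); // 10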


This seems totally orthogonal to me.

Using a totally non-OOP functional style, you can either instantiate state within a function or pass it in from the calling context, which is the same trade-off that dependency injection targets.


If you work on a legacy code base with tens of millions of lines of code, the non-OOP one will be a better code base. My opinion.


I wasn't disagreeing (or agreeing) with that opinion. It's just orthogonal to whether or not dependency injection is a useful approach.


I agree with OOP not being needed at all, but I don't agree on this being an alternative to dependency injection.

However you structure it, there will always be "glue code" that ties the nice immutable code with the outside-interacting bits, and if you want to unit test those, dependency injection (with functions or state, not classes or instances) is still the way to go.


True, but there might not be a big need for unit testing the glue code. By its very name it doesn't contain much logic to test. Integration tests are more valuable in this case.


OOP tends to encourage thinking in more abstract ways than is often necessary imho. I feel like learning a language with only structs had a positive effect on my coding ability (as someone who more or less started with Java).

The few times I had to use Java afterwards I felt the same - all OOP features were unnecessary or at least didn't feel like the most straightforward approach. Nowadays I never use classes in Python, JS etc., it's just not needed - and in the case of Python it makes JSON insanely cumbersome.


People never change because of YOU. This applies to work, relationships and family.

It made the saying that science advances one funeral at a time click for me. It is easier to rally people of similar thought than to change people of the opposite opinion. Not impossible, just more difficult. It explains a lot of things, in my opinion.

1. It is easier to start a startup than to convince your boss to take a certain product direction. E.g. not to pursue a certain direction, as outlined by John Carmack's departure from Meta. The ultimate judgement will be whether YOUR idea survives rather than whether your boss buys your idea. And I prefer bootstrapping, at least for now, for that reason.

2. Never attempt to change your spouse. Find the common ground instead.

3. Empathy is mostly about experience sharing. You can't have people feel something they have never experienced before. If you can empathize, it means you have experiences to draw similarities between. Imagine teaching an 18yo to be a father; that's what preaching to people to be empathetic feels like.


Yes, that resonates. But how do people change? Through their own rude awakenings? When they contrast their own selected peer group with other teams?


"Change" is manifestation of idea. If you have the idea that sleeping 8 hours a day is beneficial, you are very likely to act like it.

Imo, ideas form within the spectrum of indoctrination and "epiphany".

Indoctrination is something that you hear over and over, and sort of taking it as granted. Parent to children, school to students, religion to devotees, government to citizen, social media to general public, hackernews to us developers, all impart their flavors to our mind and we act according to it. Take sleeping for example, sleep well, sleep long enough, sleep early shouldn't elicit much debate, we treat it as truth more of less. On the other hand, dietary cholesterol consumption, which is considered, advertised and indoctrinated as bad for more than 5 decades has now garnered some attention to consider otherwise [1][2][3]

Epiphany is circumstantial, event driven. Either you have a lightbulb moment, or reality decides to slap you in the face. Having a close friend/relative die at a young age will either makes you treasure life a lot more, or send you into deep depression. But it will change you forever for sure. On the other hand, again taking sleeping as an example, I have been a night person since teenage, habitually sleeping at 3am, 4am, feeling drowsy in the morning and study/work in the late evening. "I prefer working in silence" is the excuse I gave to the then self. One day it dawn on me that there are only 24 hours a day, if I can get things done in 1am before sleep, I can equally get things done in 6am after sleep. And if I can rest well, I can get things done quicker. No new information just one day I viscerally feel that shifting the biological clock earlier makes more.

1. https://en.wikipedia.org/wiki/Ancel_Keys

2. https://en.wikipedia.org/wiki/Seven_Countries_Study

3. https://en.wikipedia.org/wiki/Robert_Lustig


There's a noticeable difference in my abilities at the start and end of the day.

At the start I have more processing power and short term memory grunt. At the end I have far more knowledge loaded into cache.

The first part of the ole sleep schedule is just plain getting enough REM.

The next level of coordination is scheduling your time to take advantage of the morning horsepower, and end of day creativity.


People change due to their own concepts that take a while to click.

You can have an impact there. But you can't cause immediate change.


btw, I think my comment is just a different take to fortituded0002's comment "Everyone is the main character in their own story." https://news.ycombinator.com/item?id=34209966


Reminds me of "managing unmotivated people is hard, so don't".


The power of an outline when writing.

Over the past few years, I've been teaching myself how to write better. I'm not talking about elementary syntax or grammar. I'm not talking about writing the traditional, American English five paragraph essay. I'm talking about writing longer pieces of prose, articles or blog posts or short chapters with word counts ranging anywhere between 1500-3000 words. On this journey of improving the craft, I realized that one of my biggest struggles was writing cohesively. Although I've been able to get lots of words on (digital) paper, eventually I'd get lost in my own web of thoughts, the article itself totally incoherent, no structure, no organization.

Constructing outlines and reverse outlines[0] has helped me tremendously. It's not easy ... but the concept itself is finally — years later — starting to click.

[0] - https://explorationsofstyle.com/2011/02/09/reverse-outlines/


I love a good outline and can't really imagine writing anything above 1000 words without one. Getting the outline into shape really feels like breaking the back of any piece - after that it's just filling in the gaps.

Something I realised much too late was how delaying the move to the keyboard was a useful strategy. Now I mostly start with pen and paper, sometimes with a mind map if I really need to organise my thoughts, and only hit the PC once I have a pretty firm idea of what I'm going to say. I've even done first drafts in longhand, which feels like double handling at first, but there's something about that added filter of transposing from paper to computer that helps you reassess objectively what it is that you're writing.


It might also be a good idea to distill transient notes down into evergreen notes to refine and review knowledge: https://notes.andymatuschak.org/Evergreen_notes

This is a great way to refine your ideas before you write and makes it easier to develop the outline as an assembly of different notes that you already have.

An Executable Strategy for Writing: https://notes.andymatuschak.org/z3PBVkZ2SvsAgFXkjHsycBeyS6Cw...

The main insights for me were:

- Using notes as a way to organize one's writing (by assembling ideas together into an outline, then filling out the details) to avoid writer's block.

- Creating "logs" around concepts that extract useful ideas from ephemeral observations and distill concrete insights over time.

- Using an "inbox" of new ideas as a way to focus attention and perform spaced repetition of concepts.

- Organizing notes via tags, backlinks, and associations/outlines rather than as a hierarchy.

- Incrementally iterating on atomic concept notes to form larger chunks in memory, which allows thinking about more complex ideas and recognizing patterns.


While I still do create "atomic notes" in my slip box and approach writing using the bottom-up approach advocated by Zettelkasten professionals and enthusiasts, I no longer exclusively use this technique. I approach my writing not only bottom up, but top down. For me, the problem with atomic notes is—and has always been—that while it's easy to create little, bite-sized notes, I (again: for me) accumulate a pile of independent notes that have no connections to one another: no coherent web of thoughts.


This summer I read "On Writing Well" by William Zinsser, which was an eye opener for me. I'm far from an expert in writing clear texts, but I am definitely noticing more texts which are just... big balls of blurb that don't actually say anything. All because of that book.

It sounds dumb, but this year it clicked for me how big the difference is between a poorly written text and a well written one.

Hope your training pays off, itsmemattchung!


Zinsser's "On Writing Well" is one of my favorite books, I like how clear and concise its prose is.

Joseph M. Williams's "Style: Toward Clarity and Grace" was recommended to me [1], so you might be interested in it. I read the introduction (I'm planning to read the rest this year) and, with the few examples the author presents, it sells the idea that prose doesn't need to be utterly complex to communicate ideas and concepts succinctly and clearly.

[1]: https://news.ycombinator.com/item?id=33601492


This is the first book I recommend to anyone who wants to improve their writing. With Minto's Pyramid Principle as a follow-on.

My top 3:

1/ Edit ruthlessly. Every single word is reduced to its simplest form and pulls its weight--it has a damn good reason for being there.

2/ Aspire to write at a third-grade reading level. Readers prefer simple writing even when reading deeply technical content.

3/ Start your most important conclusions up front, not at the end. You're not writing The Sixth Sense. Do your reader a favor and tell them the big reveal first. You can then follow through and persuade the reader why your conclusions are right.


Bottom Line Up Front and "Anything worth reading is 10% of the first draft," are the two hardest procedural skills in writing.

It just takes so much experience to avoid those mistakes.


Using org-mode's powerful outlining functionality has done wonders for my writing. I can ramble on and on and on, then use headings and subheadings to create conceptual "bins" and move chunks of my prose into the bins depending on topic, then smooth my words over with more transitional language so the thoughts flow more naturally.


I also love and need outlines. It’s carried forward in not just writing, but making great presentations, plans, and decision making.

Also, obligatory RiffTrax on outlines: https://youtu.be/yfcyVtD8-Dk


Thanks for sharing this!!! I know that the video clip itself is supposed to be satirical, but I actually found it quite useful. I loved it and watched all 10 minutes of it.

Out of curiosity, in the skit, do you know which book the actor is reading? In the skit, at the top of the page, the book is titled: "Making outlines and summaries"[0].

[0] Fast forwarded to specific time: https://youtu.be/yfcyVtD8-Dk?t=129


Yeah it’s great!

A couple seconds before yours, he has an index card with the title “Directing Learning 371.3”

A quick search yields the title: https://openlibrary.org/works/OL7389387W/Directing_learning

But I was unable to find a physical or digital copy available. Good luck!


I just purchased a physical copy of the book! In case anyone wants to purchase it too, there's one left that I found over at AbeBooks:

https://www.abebooks.com/servlet/BookDetailsPL?bi=2072352131...


I’ve straight up had to write essays in Excel before to break writer's block. Absolute writing by outline.


I'd be interested in seeing an example of your writing process, using Excel as a way to break writer's block.


This was decades ago. But basically, take whichever paragraph format was intended and break it out into individual cells. So label one cell “thesis”, then “point 1”, etc. Then string them together in the output. It was probably after being up all day and night, which helps explain the need for that. It did help identify points that needed reinforcement though, “I think.”


Taking the point of view of others, and learning to view a situation from a "perspective-less perspective."

It's easy to think you have people skills because you listen to others and repeat their point of view back to them before telling them they're wrong. And unfortunately you can get quite far in the business world simply by being good at demolishing other people's positions.

As a mental exercise, a few years ago in meetings I started deleting the names from the running transcript I keep in my head. "Joe said X and Jane said Y and then I said Z" was replaced with "we said X, then Y, then Z." It was a remarkably effective device to rise above the "who's going to win?" attitude and instead think about the best way for everyone to proceed as a group. I suddenly started to get what people say about meditation and removing the "I" perspective from your life. If instead of being you, you're a quadcopter hovering near the person you call yourself, it's so much easier to get your ego to shut up and start listening for once.


mine is the opposite, you can't get anywhere in life with the perspectiveless perspective


Maybe it works best as a blend of looking out for yourself and looking out for the group.

Or, cynically, maybe it's effective to adopt the group-first mentality once you have the personal momentum from years of being self-centered.


observe selflessly, act selfishly


Matrix multiplication. First encountered it in high school, where the textbooks presented matrices without any real motivation, and matrix multiplication just seemed like a weirdly-defined operation. Once I got to linear algebra in college and matrix multiplication was presented as the way to compose linear transformations, it made a lot more sense.


The easiest way to describe matrix multiplication is nested function composition.

    f(x) = 2x
    g(x) = x + 5
It should hopefully be obvious that "nesting" the two isn't commutative:

    g(f(x)) = (2x) + 5 = 2x + 5
    f(g(x)) = 2(x + 5) = 2x + 10
One miraculous fact here is that no matter how many functions we stack, we only ever have two terms: x and a constant term. Thus, we can represent this 'linear system' in terms of its coefficients, as long as we agree on an order: e.g. 2x+5 might become "2,5" in our system.

You can do "multiplication" on these packs to compose them together, and even though the rules feel obtuse in abstract, they follow the logic of function composition. A matrix represents a similar function transform, only it's in 3 dimensions, and in order to handle rotation, it needs to swap around x/y/z. So an identity matrix is really saying:

   f(x) = 1x + 0y + 0z + 0
   f(y) = 0x + 1y + 0z + 0
   f(z) = 0x + 0y + 1z + 0
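
Written out as code, the "multiplication" of those coefficient packs is just function composition (a small sketch; this is exactly what 2x2 matrix multiplication does in homogeneous form):

  // Represent f(x) = a*x + b as the pair [a, b].
  type Affine = [number, number];

  // "Multiplying" two packs composes the functions: result = g(f(x)).
  function compose(g: Affine, f: Affine): Affine {
    const [ag, bg] = g;
    const [af, bf] = f;
    return [ag * af, ag * bf + bg]; // g(f(x)) = ag*(af*x + bf) + bg
  }

  const f: Affine = [2, 0]; // f(x) = 2x
  const g: Affine = [1, 5]; // g(x) = x + 5

  compose(g, f); // [2, 5]  -> 2x + 5
  compose(f, g); // [2, 10] -> 2x + 10, so the order matters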


Who can afford to live in three dimensions nowadays?


Matrices are funny. You can encounter them as a teenager reading 3D graphics tutorials on the internet, learn that they can compactly represent scalings, rotations, and translations, and that several transformations can be conveniently “stacked” using this thing called matrix multiplication which looks like this… and that’s it, now you can use them for a cool practical purpose, but the tutorials never derive or attempt to justify why mmul looks like that, often because the author doesn’t know either!

Or you can learn about them in school and be given neither a real-world use case nor a rationale or derivation for mmul.

Or you can encounter them in college, and there the experience depends on whether it’s a good or bad kind of a linalg class. But even there – after all the painstaking definitions and lemmas and derivations – it’s easy to end up not grokking how mmul does what it does even if you grasp all the building blocks – vector spaces, bases, how matrices can represent bases and systems of linear equations and linear operations on vectors and how they’re all kinda equivalent.


It reminds me of a funny story back when I was a student.

We had a week-long group project in the first year whose theme was "a 360° pong". Our group decided that it meant the paddles had to travel in a circle around the playfield and I've decided that matrices and stacked 2D transformations were the way to go. The other students gave me blank stares, I basically said "trust me, you don't need to understand them to use them" and off we went coding.

We ended up with the most impressive pong clone out of all the groups, as nearly all of them had axis-aligned rectangular paddles going around a rectangular path, whereas ours had a stretched half-circle paddle going around a circular path, always facing the center of the playfield, alongside extra features like walls and a level editor.

First class the Monday morning after, the math teacher announced that the next topic was matrices. We stared at each other in the group and grinned manically.

If anyone wants to stare at an old C codebase from 10 years ago by a bunch of first year students: https://code.google.com/archive/p/pong-norris/


And a lot of dynamics is e raised to a matrix (the matrix exponential).


It took me taking a class in neural networks in my thirties to really understand matrix multiplication


I did a fairly rigorous linear algebra course in school, and it went over my head despite passing it.

Saw it explained in a single slide of Andrew Ng's ML course and everything clicked perfectly.

Where I lived math was taught like fucking shit, it was all algebra and zero context as to why it could be useful in real-life scenarios, zero abstraction such as visualization or metaphor. Everyone involved in concocting that pedagogic aberration should feel terrible about it.


The Coursera one? That's the exact course I was thinking of.


Which class?


A sibling comment to yours mentioned Andrew Ng's class, and I actually had the same experience as them with the same class. You can find it by searching for Stanford CS229; the lectures are available on YouTube.


It was Andrew Ng's machine learning class on Coursera


Linear algebra in general for me was kind of one of these concepts. Aced my university linear algebra class with no idea what the heck I had learned. It didn't start to click for me until I started using it for tangible problems.


A big one for me was realizing that a matrix times a vector returns a weighted sum of the matrix columns, with the vector's elements as weights. It's not particularly profound but it has definitely clarified my intuition around a number of matrix problems.
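
A quick sketch of that reading (matrix stored as an explicit list of columns, purely for illustration):

  // A*v computed as a weighted sum of A's columns, with v's entries as the weights.
  function matVec(cols: number[][], v: number[]): number[] {
    const result = new Array(cols[0].length).fill(0);
    cols.forEach((col, j) => col.forEach((x, i) => { result[i] += v[j] * x; }));
    return result;
  }

  // Columns [1,3] and [2,4] (i.e. the matrix [[1,2],[3,4]]) with weights [5,6]:
  matVec([[1, 3], [2, 4]], [5, 6]); // [17, 39] = 5*[1,3] + 6*[2,4]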


Matrix multiplication is the first example that came to mind for me too. I learned it as compositions of linear transformations (the professor “taught” it through a question on the take-home final) but it felt abstract to me and took years to become intuitive to the point where I could actually explain it from scratch.


Related, this always amazes me: https://news.ycombinator.com/item?id=32195262


The "essence of linear algebra" series by 3blue1brown on YouTube does a really good job at intuitively explaining and visualizing matrix multiplication and other linear algebra topics.


1/2) Trusting Institutions

Institutions such as Police, Universities, NHS, Scouts, MSF, Religions, Churches, YCombinator et alia have a hierarchy of internal loyalties in strict precedence:

* The Staff Member

* The Staff Member’s Family

* The Friends of the Staff Member

* The Colleagues of the Staff Member

* The Group within the Institution the Staff Member belongs to

* Wider Groups in the Institution

* The actual powerbrokers within the Institution

* The acknowledged Leadership of the Institution (may be different to actual powerbrokers)

* The actual goals of the Institution

* The acknowledged goals of the Institution (may be different to actual goals)

* Helping YOU in accordance with what the acknowledged goals of the Institution are…

Only when all the loyalties in that list are satisfied is there the slightest chance you may get anything positive from the Institution.

Despite the long list of higher precedence loyalties it is still frequently possible to have positive outcomes…

But because it is a long list of loyalties far more important than helping YOU, there are often breaks.

And because people and families and relationships are involved they can change at any moment.

So trusting Institutions to do the best for you or act honourably needs to be carefully weighed against the likelihood that will happen

2/2) Mortgages

How mortgage repayments change over time as you pay off some of it (YMMV)


> How mortgage repayments change over time as you pay off some of it (YMMV)

> How mortgage repayments change over time as you pay off some of it (YMMV)

This is a big one that people who bought houses recently due to FOMO don't understand: you're really not building equity in the first few years, because almost all of your monthly payment is going to interest. The "irrecoverable cost" of owning a home can now often be higher than renting in some HCOL areas, although I understand that people buy for more emotional reasons (e.g. not wanting to yank their family around to a new place each year or be at the mercy of a bad landlord on each lease renewal).
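
A back-of-the-envelope illustration with made-up numbers (a $300k loan at 6% over 30 years):

  const principal = 300_000;
  const monthlyRate = 0.06 / 12;     // 6% annual rate
  const months = 30 * 12;

  // Standard fixed-payment amortization formula.
  const payment = principal * monthlyRate / (1 - (1 + monthlyRate) ** -months);

  const firstMonthInterest = principal * monthlyRate;        // $1,500
  const firstMonthPrincipal = payment - firstMonthInterest;  // ~$299 of a ~$1,799 payment

So in month one, barely a sixth of the payment actually builds equity.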


The autonomic nervous system and the adrenal cortex. Homeostasis is taught as a textbook fact, the body reverting to a baseline over time. What’s not taught is how much the impact of daily life events drives a continuous stress response. Fight or flight is not just a reaction to deadly threats. It’s active every moment of every day to ensure survival. The adrenal cortex is always responding: to traffic, to your boss and colleagues, to relationship struggles, and to overall health and wellness factors like sleep and nutrition. Yes, the system reverts to a baseline over time, but how much that baseline varies is obvious when tracking resting heart rate.


"Understanding Stresses and Strains" from 1968 presents a good depiction of the mechanism with familiar-looking cartoon characters: https://archive.org/details/understandingstressesandstrains

The caveman pressing the alarm button is something I think of a lot.


I had to see it. Thanks for sharing.


I'll add one more book recommendation on this subject: Sapolsky's "Why Zebras Don't Get Ulcers". It's a really eye-opening summary of the physiology-of-the-stress-response literature. Quite scary at times.


Any books you could recommend?

I presume you already know of this book: The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma by Bessel van der Kolk.


That’s a great one. I’ve been looking for something that links the physiology with knowledge work and life demands, but I haven’t found it yet.


Thank you. Bought and reading.


YAGNI and KISS.

When I was a junior developer I used to overthink and overdesign solutions, most of which was never needed. It took many years and a lot of battle scars to realize that less abstraction is more. Today I see a lot of juniors do the same mistake and I ask them to revise their designs to keep it simple.


I use the rule of threes to kill off any premature abstractions. ie. Don’t think you need to make this generic if you have one or two different variants of it. Most often it’s one- so it’s very easy to see that you’re just solving imaginary problems at this point.

Each abstraction has a mental cost for reading and understanding the code so I try to be sparing. Too many code bases in my youth were a nightmare to navigate because I could never find any actual concrete code.


Also don't forget that code (including the complexity you want to add to it) probably has only something like an aggregate ~1/3 chance of still being used in a production scenario 5 years or so from now anyway (for any number of reasons: product change, feature removal, company bought/sold, the list goes on and on...). Of course these numbers are anecdotal, but the point is it can be detrimental to think your code will run as-is for the next million years, especially after the first draft.


That being said, sometimes a good abstraction can really simplify a complex problem by dividing it up to a set of smaller sub-problems. In such cases, the abstraction is worth it even if there is no reuse. As always, rules like this have to be interpreted in a context.


It’s certainly a rule of thumb, I also prefer to break things down but I try to use functions as the basic units as they’re the easiest to keep in your head when it’s clear what they do


My boss constantly repeats that an 80% solution is what we should strive for, for the most time. It has to be good enough, but not perfect. It's easy to get bogged down by trying to make it the best it can be.


My personal trick: as long as I'm confident that I will be able to refactor in the abstraction, I don't bother. I may or may not ever revisit that code.


Pretty much anything related to personal growth (however you define it) takes time to click, but it took a while to understand two things at a deep enough level to begin to make change:

* everyone has a set of habits that served them well during childhood, but may be maladaptive as an adult (e.g. getting angry is almost universally a sign of a maladaptive childhood habit rearing up). Book recommendation [0]

* the fear of hope is a key thing to understand when it comes to why people (myself included) hold themselves back. Taking the first step can feel terrifying because it demonstrates that one is responsible for creating one's own life. Book recommendation [1]

The only way out of this is to deliberately design your days so that you get the most out of them, even if it is fumbling at the start. Add in some times to relax here and there, but if you have a plan, it's much easier than staring at a blank Saturday with the vague goal of "I MUST learn JS/Rust/Go/Scala or my career will be over!!!" and then getting nothing done.

[0] https://www.goodreads.com/book/show/23129659-adult-children-...

[1] https://www.harpercollins.com/products/how-we-change-ross-el...


What an algebra equation is. I could solve linear equations, quadratic equations, and systems of equations by adding/subtracting/multiplying/dividing both sides of the equation and by adding two equations together. But I didn't know why you were allowed to do any of those things, or what else might be allowed. Are you allowed to square both sides? Raise both sides to a power?

It wasn't until maybe my third year of algebra that I realized an equation means both sides of the equation are equal to each other, which means you can perform any operation at all to both sides, and the result will still be equal.


It was long after I graduated and passed symbolic logic that I realized a proof is the “solution” of an equation using logical operators instead of algebraic ones. Proof writing then becomes following branches on the tree of textbook knowledge to new leaves. I envy those who grok this when they’re kids.


Can you elaborate more on this?


Just be a little careful: when solving an equation, you're usually asking when the sides are equal, not asserting that they are. So the implication has to go in the opposite order.


This is a great example! I was fortunate enough that my father explained it to me when I was in school. An equation is like a balanced scale: whatever you do to one side, you must do to the other, to keep things balanced.

I still remember that "click".


Social constructs aren't less real than other tangible concepts. For instance, I used to complain a lot about the nonsense of wearing a tuxedo to weddings. I was the typical kid saying that I would like to go to weddings wearing my best sweatpants, and that my sweatpants were probably more expensive than some of the tuxedos there. Then I matured and realized that what we wear has a meaning, whether we like it or not. It's like words: maybe you don't like the definition of the word "car", but using an alternative word would be useless since nobody else would understand you. So if you care about the people getting married and you want to communicate that, you need to wear the damn tuxedo.


What kind of sweatpants are more expensive than a tuxedo??



One of my biggest realizations recently was that nearly all of software development is basically about turning a slow, manual process into a faster, automated process. Modern CI/CD stems from a bunch of shell commands that somebody wrote and manually executed to test an app and upload it to a server. Modern automated software testing stems from humans writing small test apps and running them to confirm correct behavior. Many modern development practices stem from allowing small test apps to be written more easily and faster. It's all just a giant manual-process-to-automated-process time-saving machine.


This is a great way to understand complex, new-fangled technologies. Ask "what manual process does this speed up"?


Okay, I'll bite:

What manual process does ChatGPT speed up?


The process of aggregating and formatting searchable information from many sources.

When you simply Google something, you're presented with blogs, articles, stackoverflow pages, github repositories, documentation, ... you're still left with the manual process of parsing all of the results, e.g. turning what you've read into runnable code, summarising and taking notes in a format that is easy for you to follow.

Furthermore ChatGPT allows you to have a dialog about the results. Maybe you have two equally interesting results and don't know which one to go with? Usually you'd have to do "sub-googling" in cases like this and once again parse, aggregate and format those results. With ChatGPT you can basically just ask it to expand on the previous results and help you figure out what to do.


Writing peer reviews at work


Interpreting search results


Googling


To add on to this, when you think in terms of software development, that's already pretty meta, because you skipped level 1: formalizing any type of process. Something that is the entry-level requirement to get, well, anything done via programming is actually only done sporadically in a lot of non-software fields.


In fact, a well designed system I work on (a database service) is built to give people sufficient monitoring & control that operators handle anything out of the ordinary until they get annoyed enough to automate it.


C is basically a faster way to write ASM. And so on.


Save humans time with a workflow.

Save humans time.

Save humans time.


Two concepts that I never really understood until I encountered them together: functional programming and recursion.

When I first encountered recursive functions when I began learning programming I had a really hard time understanding what the function would do and how it would play out as it called itself. I couldn't think through the recursion and imagine what would happen. Nor did I really understand how to usefully apply one to a situation. I would either use a loop or a recursive function that utilized lots of external state to work.

When I later encountered functional programming, having learned programming with OOP languages, it was a real mind-bender. I finally started to understand it but when I encountered the need for recursion in FP, it really threw me for a loop. How the hell was I supposed to do this without external state? So it was that restriction that really let me understand how to create a recursive function that could return something useful with nothing more than the initial input. This new understanding also gave me a better appreciation of functional programming and the idea of pure functions.
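
A small sketch of what that restriction forces: the recursion has to carry everything in its arguments and return value, with no outside state (TypeScript, my own toy example):

  // Reverse a list purely: no counters or accumulators outside the function.
  function reverse<T>(xs: T[]): T[] {
    if (xs.length === 0) return [];       // base case
    const [head, ...tail] = xs;
    return [...reverse(tail), head];      // the result is built from return values alone
  }

  reverse([1, 2, 3]); // [3, 2, 1]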


I had a similar experience with tail recursion that I only fully grokked much later when I had to implement some stack-trickery in C for a computer game. I realized then that tail recursion was typically optimized as a jump to code within the same stack frame rather than creating a new stack frame.
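
Roughly the shape of that equivalence (TypeScript for illustration only; most JS engines don't actually perform tail-call optimization, the point is just the correspondence):

  // Tail-recursive: the recursive call is the last thing that happens, so a
  // compiler that supports TCO can turn it into a jump that reuses the frame...
  function sumTail(xs: number[], acc = 0): number {
    if (xs.length === 0) return acc;
    return sumTail(xs.slice(1), acc + xs[0]);
  }

  // ...which is exactly this loop.
  function sumIter(xs: number[]): number {
    let acc = 0;
    for (const x of xs) acc += x;
    return acc;
  }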


It took me a long time to grasp market economics. I knew it worked somehow but I didn't quite understand why. What really made it click was Milton Friedman's "Free to Choose" TV series[0].

[0] https://www.youtube.com/watch?v=dngqR9gcDDw&list=PLXD32Z5YYi...


I had the same experience (and then I had to unlearn it all because it only applies to trivially narrow examples. MF’s trick is to hide the assumptions well the same way a magician diverts your attention).


Superconductivity. I did four PhD projects regarding superconductivity properties at the nanoscale without really getting a hold on the concept. For instance, how zero resistance is related to Cooper pair tunnelling and the BdG Hamiltonian. Only when I was forced to write the introduction a month before my thesis submission did I give it my best last try as a farewell, and it clicked. Note the title of my thesis, “Quantum effects of superconducting phase” ;)

Quantum computers. I was so deceived by all the hype that it took me a long time to see them as an SU(N) matrix product accelerator.

Zero knowledge proofs and the Fiat-Shamir transform to a non-interactive protocol. The tunnel analogy did me quite a bit of harm, and the math was just unbearable (defining sets of languages, etc.). Only a year ago, when I got an old paper on “Observational wallets” describing how to prove ‘log_g A = a’ and ‘log_g A = log_g B’, did the whole ‘proving a statement without revealing the secret’ idea, after long frustration and angst, make sense to me. If I had encountered the finding-Vodo analogy first, it would have clicked much quicker for me.


Old papers are great. I have often deliberately sought out old papers to get a quick and good introduction to a subject. It doesn't always work -- sometimes a newer paper with better abstractions/examples/terminology wins, of course.

I no longer know when I figured out this trick, but I have been using it for at least twenty years. I wish I had known about it earlier.


There are tradeoffs. Most of the time I find terminology has drifted so substantially over time that reading old papers isn't worth the effort. I instead usually look for literature review type of papers that often have good summaries of relevant results in modern terminology.


I have also found old papers to be the best for learning a new concept - they make it as simple as it can be. A colleague of mine wrote a paper that proves that negative numbers exist and would often cite Archimedes, Galileo, Newton, etc. in his bibliographies. I think he loved crediting the first person to come up with whatever idea or concept he was using.


Would love to read that paper but search is not turning up any papers with that title for me. What's the exact cite?


David Chaum, Torben Pryds Pedersen, Wallet Databases with Observers, 1993.


Thanks!


For me the big one was relational modeling & normalization. I took a database class in college and I thought normalization was idiotic. It seemed obviously foolish and wasteful and I couldn't imagine why anyone would do this to themselves by choice. Then ~2 years later I got a job working with data, at a company with quite a big database that was designed largely by experts. Within a few months of starting to work with real data I had internalized the Why of normalization and become a believer. Ironically I've been doing database work ever since.

Lately I've worked for a number of companies where no one -- like seriously, absolutely zero developers on the team -- understands normalization. They may perform a shadow of it, because someone somewhere told them to, or some article said this is how you model data that looks like X, but they don't get it. Predictably, their data models are absolute garbage. Teaching them is an uphill battle because they don't care, they're not curious, and their web frameworks (cough Rails cough) have taught them to distance themselves from the database and treat it as dumb.

The other one was Rich Hickey's "Simple Made Easy", which I first watched in 2013. I enjoyed it even then but I didn't really understand what he was getting at. After working with, and building, some systems done in a functional style, I feel like I figured out most of the ideas. That would have been some time in 2017, so it definitely took a while to soak in.


I love this as a research topic. Here are a few examples close to me.

1. The nature of variance in regression methods. The first time I heard about "soaking up the variance" in modifying stat models in brain imaging I had no idea what was going on. Then I spent a few years doing brain imaging and modified models to "soak up the variance" differently. Over those few years I came to grok the concept. I had an electrical engineering PhD colleague with an impressive resume who would argue with me about the effects of models. I realized that he knew the textbook stuff (which I didn't), but he didn't actually grok the concepts (which I did).

2. Once, in an office, I was mystified by hot spots and cold spots of wifi signal. One of my colleagues, a brilliant engineer, then explained to me what might affect the shape of the hot and cold spots, which is why he sits in certain places. I asked him if he had an intuition of how RF fields are distributed and he said yes, so I asked him how. He said, "I used to do a bit of tensor calculus".


I used to work in an office where I only got cell service on cloudy days, I assume it had something to do with the clouds reflecting the signal.


The quote below is from a 1995 interview with Steve Jobs. Forgive the length, but the whole thing is quite poignant and there wasn’t much to be trimmed. This particular interview confirmed my own experiences with esoteric processes that people do just because that’s how it’s always been done, and the opportunities that exist because of it.

————————

You know, throughout my years in business I discovered something. I would always ask why you do things. The answers that I would invariably get are: “Oh, that’s just the way things are done around here.” Nobody knows why they do what they do. Nobody thinks very deeply about things in business. That’s what I found.

I’ll give you an example. When we were building our Apple computers in a garage, we knew exactly what they cost. When we got into a factory in the Apple II days, the accountants had this notion of a standard cost, where you kind of set a standard cost and at the end of the quarter, you would adjust it with a variance. I kept asking: why do we do this? The answer was, “That’s just the way it’s done.”

After about six months of digging into this, I realized that the reason they did this is that they didn’t have good enough controls to know how much it’s going to cost. So you guess. And then you fix your guess at the end of the quarter. And the reason you don’t know how much it costs is because your information systems aren’t good enough. But nobody said it that way.

So later on, when we designed this automated factory for the Macintosh, we were able to get rid of a lot of these antiquated concepts and know exactly what something cost.

So in business a lot of things are folklore. They are done because they were done yesterday. And the day before. What it means is, if you are willing to ask a lot of questions and think about things and work really hard, you can learn business pretty fast. It’s not the hardest thing in the world. It’s not rocket science.


Agreed with this. One thing I've learned about large established companies is there is a lot of room to deliver impact if you can rework inefficient processes and navigate the politics to have that change adopted.

Steve was fortunate in that he could do whatever he wanted which isn't always the case for everyone else.


Computer networks. For years I was dumbfounded how IP addresses, VPN, ports, etc, all worked and tied together. Then, when I was interning as a software developer a fair while ago, a colleague drew the analogy of "IP address" = house on street, "port" = something you ask for when you knock on the door of a house.

Then it all just clicked. I still remember that day all these years later.

Other notable mentions, in no particular order:

* Perfect is the enemy of good enough. Didn't really appreciate this idea until around 2 years into my career.

* How to be professionally displeased at something. Early on my career I would get way too angry at incompetent colleagues peeing in the pool, e.g. bad code, design, management, etc. I would complain quite a bit! It only clicked ~2-3 years later into my career when I figured that one's displeasure at a situation should be a function of both how bad the situation is and how able you are to improve it. When you offer constructive solutions to incompetence (suggest alternative algo, management style, library, tool, etc) whilst not actually mentioning what is wrong, instead of just fruitlessly reminding people of what they did wrong, people become far more cooperative and receptive, etc.

My tin-foil-hat pet-theory is that the relatively recent tyranny of low expectations and "participation award" society has on average made younger people much more sensitive to negative comments about their work. The extension of that is that people end up robbed of more detailed reasoning about what they did that was wrong.

People are interesting!


> Then, when I was interning as a software developer a fair while ago, a colleague drew the analogy of "IP address" = house on street, "port" = something you ask for when you knock on the door of a house.

Hey thanks for sharing! Can you maybe expand on this in any form? I'd love to get that kind of analogy. Maybe you have a blog post about it? That'd be amazing!


Network: Street

IP address: House

Port: What kind of thing from inside the house you want

Gateway: House #1, start of street.

DNS: Tells you which house number corresponds to a family name. "Oh why yes, 'samhuk' lives at house # xxx.xxx.xxx.xxx"

The analogy goes on and on. It's how I have always thought about networks since.


This[0] blog post explains the basics of IPs, ports, etc. using that analogy

[0] http://www.steves-internet-guide.com/tcpip-ports-sockets/


One of the canonical computer networking texts, Kurose and Ross's Computer Networking: A Top-Down Approach, is chock-full of these sorts of analogies and metaphors.

It's very readable.


I've heard it with a door for every port, but one door where you ask for that specific port works too!


I think “door” works better because any function can be behind any port, and it is a matter of convention, not of asking for a specific thing. I.e. when you connect to port 80 you don’t ask for HTTP, you just hope that’s what’s available at that port.


The word port is another word for door in Latin-based languages.


Ah, now "port key" makes sense: it's the key to the door (or portal).


Basic music theory.

Almost no one I encountered bothered to actually explain anything. They simply regurgitated things and I guess expected me to somehow intuitively understand something or other.


Oh my god. Music and photography are my two pet peeves. They have their own twisted vocabulary for everything that surely exists for historical reasons, but might as well be purpose-built to obstruct polymaths from connecting their concepts to other concepts in other fields.

My single largest goal when I'm teaching, is to find an appropriate analogy to a concept that my student already grasps, and bridge it to the new concept. Some fields make this super easy -- MechE and electronics, for instance, if you understand one, you're well on your way to understanding the other -- and some make it super hard.


While I agree with your goal of finding analogies when teaching new concepts, I think your first paragraph is a pretty unsympathetic take on music theory. (I don't know anything about photography.)

Music theory is descriptive, not prescriptive. Is it any surprise that something we had to invent a whole new symbolic notation for is difficult to connect to concepts we can describe in natural language?

If you want to go really deep into the philosophy of music, to understand it from "first principles" so to speak, I highly recommend Leonard Bernstein's lecture series "The Unanswered Question" - https://en.wikipedia.org/wiki/The_Unanswered_Question_(lectu... (available on YouTube).

To paraphrase Bernstein's comment from his book "The Joy of Music": we need to stop comparing Beethoven to grassy fields and mountain streams. A "major seventh" or a "plagal cadence" are themselves the description of those qualities. By all means use music as metaphor, but when you're trying to describe it you're going to be using terms specific to its domain.


Sadly, mathematics classes are like that as well. Instructors start throwing equations on the board, expecting us to somehow connect it all together. The best math textbook (Theory of Algebra) I ever read had little sections about the person who revealed a particular subject, why they were studying it, and how the subject is used.


Sadly many professors fall victim to the Curse of Knowledge. It doesn't help they need to follow a tight schedule and intuition isn't something you can develop in a single lecture. I suppose self-study and repetition is the most likely solution.

>The best math textbook (Theory of Algebra) I ever read had little sections about the person who revealed a particular subject, why they were studying it, and how the subject is used.

I've found the best type of books provide motivation for concepts, how they have evolved, etc.

Take Computer Science for example, many of its concepts were areas of research for decades, but from a student's perspective it seems these concepts were always here, rather than having been constantly refined into the state they're in now.


Exactly, understanding the intentions and history behind a concept is key to achieving comprehension. I've had to teach myself nearly all of the more advanced mathematical concepts I know, and I'm finally starting to reach a point where I feel I have a nice approach toward achieving an understanding:

- Learn the definition

- Learn the motivations and history

- Peruse a few examples

- Try to map the above to a brief synopsis that explains the concept in intuitive terms that relate to your own life. Rely on pictures.

Finally, I find it helpful sometimes to try and "deduce" identities from "first principles". e.g. assume I didn't know that n^0 = 1. How might I reach that conclusion? If I have an understanding of what exponentiation means I should be able to come up with a few different propositions (these could even be relatively informal) that make such a conclusion make sense.
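
For instance, one informal route to n^0 = 1, leaning only on the product rule for exponents (a sketch in LaTeX notation, n nonzero):

  n^a \cdot n^0 = n^{a+0} = n^a
  \;\Rightarrow\; n^0 = \frac{n^a}{n^a} = 1 \qquad (n \neq 0)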

My childhood was rife with mathematics teachers that focused more on rote memorization of identities instead of careful explanation of definitions and development of "intuition". There's pretty much no better way to ensure you'll produce students that dislike and suck at math for the rest of their lives than proceeding by mind-numbing rote memorization.


I think that's a great approach. If possible, you can add another step: Teaching it to someone else.

>There's pretty much no better way to ensure you'll produce students that dislike and suck at math for the rest of their lives than proceeding by mind-numbing rote memorization.

I couldn't agree more.


Do you remember what the book was called, or who the writers were?


I recently learned what solfege actually is from a simple ChatGPT conversation after hearing about it for many years.


I can't find the link. But I saw a minidocumentary about a person who trained his ears to listen to harsh noises. Once he attuned his ears, he could write music in those new musical notes.


Any good resources that do it the right way?


This site is nice https://www.musictheory.net/lessons

Music theory is not inherently complex, but the notation adds incidental complexity -- it's sort of like the QWERTY keyboard; once you're stuck on a suboptimal way of doing things it's hard to move off.


I've been using Punkademic and I'm very happy with it.


Looks like I have to pay before I can see the "Tools you will need to learn Music Theory quickly and efficiently".

https://www.punkademic.com/course/music-theory-comprehensive...


That's how a paid course works?


Yeah, but I'm not going to start a course if I don't know what tools and equipment I need for it, upfront.


They have a free trial. And a YouTube. And the courses are republished on tons of platforms, each of which has some sort of preview.


Michael New on Youtube.


Self fulfilling prophecy.

It took several near burn-outs and tons of hours of my life lost due to frustrations before I truly understood this. The world is what you want it to be.

Hopeless because everyone in the world is so selfish? You create the world you live in. Live selfless and see the world transform around you.

Frustrated that you have to constantly interfere to make someone good at their role? You yourself have to believe they are the best in that role, and then see them flourish.

Afraid your relationship might not work out? The fear itself will make your worst thought a reality.

Your life becomes extremely easy once you truly understand the world is what you believe it to be.

Note: There are of course strings attached to this concept, but understanding the power of this is life changing.


This is true of almost everything in math. You learn some definitions and techniques in one class, and it doesn't all become clear what's going on until you've used those as the base layer for solving some other problems in the next class. Part of it is just that it's hard to teach: you need the first concept in order to understand the second, but you need the second to understand why you should care about the first, so it's all a bit circular.


That professors were thinking real hard about which problem sets to give us in hopes that we would actually learn something.

I don't think I really understood that completely until I started TA-ing.


Tangentially related, but that's something I've come to realize recently too. Asynchronous programming in JavaScript is new to me, so as an exercise I'm writing a document where I explain the concepts to myself, and once I shifted from "how I understand this" to "how I would explain this to someone else", figuring out a good enough scenario/problem got harder.

Good problem sets that aid students' intuition aren't easy to come by; oftentimes they're either too easy or too hard.


Big bang didn't happen in a single point in our space-time like a firecracker, it happened everywhere and was just a uniformly much hotter and denser universe that didn't really explode into anything, but space-time just expanded to make the universe less dense.


If “everywhere” is an infinitesimal point, is there much difference?


Everywhere was not an infinitesimal point, if the present Universe is infinite (as it seems to be). In this case, the Universe was already infinite at the very first meaningful instant of time.


To be more pedantically precise:

The big bang was not like a firecracker that went off in pre-existing, infinite flat space-time...


There are two replies to my comment, yours and someone else's. They both say the opposite thing!


Don't some people prefer to say the slow expansion over using the term big bang? Although it hasn't taken off as a term. Or something like that?


One of the big aha moments which clicked for me only fairly recently came from staring at some physics equations and reaching my own internalized realization that light doesn't experience time, and then I had to tell everyone haha - but that one eureka moment unlocked a whole lot of understanding and certainly a lot more questions. This of course was after all the schooling and physics where it somehow sailed over my head the whole time.


I was going to put this as mine. It’s astounding how important this is.

I woke up at 2am with the intuition that light was like a lightning strike across spacetime, with no time and no distance between the emitter and absorber.

Why and how, though. What are the fully-baked implications.


I remember that we found that entirely obvious (well, most of us) in my high-school physics class.

What I still don't understand, though, is how photons can "wave" or how "long" a photon is (or how they can have any "length" at all).

What I also didn't understand back then was how light can be slower when there is matter present -- or how a photon could somehow know that it was supposed to cause another, very similar, photon to be released in just the right angle when it hit something that, if zoomed out enough, looked like a mirror. How on Earth could the photon know that the atom it was hitting was part of a flat surface and what its orientation was? I even started using that mirror question as a litmus test of physics students once I started at uni (comp.sci.) -- pretty much all of them failed, not by not knowing the answer but by not understanding that this was even a question that required an answer.

I now know that the photon -- the wave(s) in the electromagnetic field -- cause atoms (and in particular their electrons) to move about a bit, which in turn cause waves in the electromagnetic field. And by adding up all these waves, we get the resulting wave which moves slower than c and which might seem to have been reflected at an angle. Why my physics teacher in high-school didn't tell us that, I don't know.


This one really struck me too. I think of it from the point of view of the emitter, which is surrounded by a sphere of every final destination of a photon in every direction, with no distance in between. The photon itself, from its own perspective, is born and immediately dies, having stepped across the 0-length gap between emitter and absorber.


And therefore the future affects the past. And light only ever gets emitted when there is a receiver to absorb it.


Given that most physicists would tell you the same, I'm presuming you're not merely saying this is only of physical importance -

Many spiritual teachings say that consciousness is made of light.


The mere physical implications are astounding but don’t seem to be fully embraced. A photon that left a billion years ago connects the source and destination. They’re basically touching each other. Shortest distance between entangled photons isn’t the 3D distance, it’s back via the source. Our perspective is that a photon takes a billion years but we need to see past it to what is instant, because it hints at deeper questions and answers.


I'm not sure how this works out in media which slow down light

https://en.wikipedia.org/wiki/Cherenkov_radiation

There, e.g. gravitational waves will be at c (in a vacuum), but wouldn't photons experience time there?

Edit: I suppose the photons are still actually moving at c


I had thought it was because light was absorbed and re-emitted, but that and the scattering explanation are incorrect. See this video for an explanation of why light travels slower through a medium with an index of refraction greater than one: https://youtu.be/CUjt36SD3h8


Tensors. Years ago, I would read the Wikipedia page about them every so often, and completely fail to understand what they were.

Then one day I was modeling something in a spreadsheet, and I thought to myself "you know, what I really need here, is 3rd axis to this spreadsheet". And for a few minutes, I thought I had invented a new form of spreadsheet/structure, and was considering trying to build a 3d spreadsheet program.

But, of course, suddenly everything I'd read about tensors and not understood before snapped into place, and I realized that this is exactly what they are (to be precise, this is a special case of a tensor, the contemplation of which caused me to understand what the general case is, and why e.g. a scalar is a rank 0 tensor, a vector is rank 1, a matrix is rank 2, my 3d spreadsheet idea is rank 3, etc).
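
In array terms (which, as the reply below points out, is the multidimensional-array picture rather than the full tensor-product definition), a quick NumPy sketch of the ranks:

  import numpy as np

  scalar = np.array(3.0)          # rank 0: a single cell
  vector = np.zeros(5)            # rank 1: a column
  matrix = np.zeros((5, 4))       # rank 2: an ordinary spreadsheet
  cube   = np.zeros((5, 4, 3))    # rank 3: the "3d spreadsheet"
  print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)   # 0 1 2 3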


It sounds like you are describing a multidimensional array, which can be used to represent a tensor, but is not conceptually the same thing. A tensor is an element of a tensor product.


Working with others.

Really.

I was super annoyed and insanely annoying to work with for years.

Then I understood that difference is hard to cope with, but more often than not, good. As it trades some short term efficiency to long term one.

Same with your output. If it is dumbed down to a level everyone can understand it, you really learned it.


Reminds me of the saying "If you want to go fast, go alone, if you want to go far, go together." But an important corollary is that you should be very selective about who you travel with. In some cases, like a corporate setting where you have no control over the randos you are thrown together with, it can indeed be the least painful option to go it alone. The alternative is knowing that if everyone does their best you can all aspire to be ... mediocre.


That’s cool, what kinds of things helped you learn this? I bet those things would be really valuable to a lot of people if you felt like sharing.


I'd say parenting is the one that really made a difference.

Not because I had a strike of common sense, but because I actually cared very much and started to listen to others. I didn't want to be horrible to my kids.

So "starting to listening to others". Every troll has a kernel of truth. And sometimes what you consider a troll isn't one in the first place.

Which is very apropos these days, when everyone is confined to their own echo chamber by all those clever social media algorithms that always favor short-term bliss.

And yes. Trying to explain things to my kids enables me to understand those things way deeper.


We are all just children. As kids we always want to be adults but then we become adults and ultimately everyone around us is just a child who’s been on the earth longer.


I will die knowing a little about programming computers and not much else. Over time this has slowly become acceptable to me.


You’ve still got time to learn a little bit about half a dozen things, if you’re so inclined.


This is something that clicked after a few hours rather than a few years, but it is similar in a way. A long time ago when I was a graduate student I was in the library one afternoon studying some idea in field theory (physics). It covered maybe three pages in a small book, but it wasn't making any sense. Finally, hours later it suddenly became clear, and it was so simple. I remember saying to myself, "Why didn't they just say that?" I looked back at the book and they said exactly what I would have said. I guess sometimes words are not enough.


They need to simmer a bit.


Two are very pertinent for me at the moment:

1. The Monty Hall problem

2. The Wason Selection Task.

I read about the first one eons ago and was impressed that Marilyn vos Savant was vindicated. Shoutout to her!

The name Wason was unknown to me but I bought a book called The Oxford Companion to The Mind (new in 1987!) and his 'task' was featured in it (pg. 639 in my edition). I spent a lot of time satisfying myself as to the answer and kind of got there. This Christmas I am reading Steven Pinker's "Rationality" and the descriptions of both have allowed the penny to fully drop. I hadn't been thinking about these puzzles for ages and then when they cropped up "unexpectedly" I grokked them easily. They "clicked". There is a certain type of wisdom that comes with passed time.


New to me, interesting problem and origin story:

https://en.wikipedia.org/wiki/Monty_Hall_problem


That's a great paradox that can be really hard to understand for some, but it's quite obvious when it finally clicks. You just need the proper reference, or rather point of view, that makes you understand that Monty's selection and the player's decisions are not unrelated events: Monty eliminates a whole branch of possibilities by introducing his knowledge into the system (he cannot show the prize and thus must always select one of the goats). And if you can imagine this problem as a decision tree, you suddenly, as a bonus, understand conditional probability.
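
If the decision tree doesn't do it, a quick simulation usually settles the argument. A sketch in Python (three doors, prize placed uniformly at random, Monty always opens a goat door that isn't your pick):

  import random

  def trial(switch):
      prize, pick = random.randrange(3), random.randrange(3)
      opened = next(d for d in range(3) if d != pick and d != prize)   # Monty's goat door
      final = next(d for d in range(3) if d != pick and d != opened) if switch else pick
      return final == prize

  n = 100_000
  print("stay:  ", sum(trial(False) for _ in range(n)) / n)   # ~0.33
  print("switch:", sum(trial(True) for _ in range(n)) / n)    # ~0.67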


Oh thank goodness. Another human not cool enough to "get" the Monty Hall problem at first. I thought I was the only one... Took me years...


I made a pact with a group of friends never to bring up Monty Hall again, especially not when alcohol is involved.

The only things I’ve seen people get more upset about than MHP is infidelity and politics.


For me, it was the difference between “is” and “ought”.

It sounds obvious when you first think about it, but I spent a vast majority of my 20’s with a view of the world filled with “oughts” instead of “is’s”, especially when it comes to things I can’t control. I.e., people “ought” to behave a certain way, versus the way people actually behave. The way companies/governments “ought” to operate, versus the way they actually do.

Thinking in terms of “oughts” can really cloud your judgement in ways that you might not be totally aware of. I guess this is kind of like the “realpolitik” philosophy, but coming to this realization in my early 30’s actually made me a lot happier and less confused about why the world is the way it is.


Are you familiar with the is-ought problem? Somewhat related.

https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem


Oh wow, I wasn’t but this perfectly sums it up! Learned it the hard way. “Fact-value” is another good way of putting it.


Yeah, David Hume has some good works. I recommend you read A Treatise of Human Nature if you can, and follow along with commentary rather than trying to parse it all on your own; I took a class on modern philosophy where we covered it.


It took a couple years in college for me to understand entropy.

Entropy in classical thermodynamics is presented in a mysterious way that leads to confusion.

Entropy in statistical thermodynamics, however, is logical. Once one understands basic statistical thermodynamics, entropy isn't mysterious.

The book in my statistical thermodynamics class was An Introduction to Thermal Physics by Daniel Schroeder, which is an excellent book that I've referred to many times since.


I had to study entropy twice in college for different courses that were 2 years apart from each other, and I still remember this one quote I read somewhere:

  The first time you study entropy, you won't understand it. The second time you study it, you'll think you understood it until you realize you didn't. By the third time you study it, you just don't care anymore and just use it.
10 years after graduating, and I haven't encountered entropy again after the second time, so you can guess where I'm at in this quote. But thank you, for now I know how to attack it if I ever need it again.


Entropy is basically a useful quantity. It's no more mysterious than enthalpy or Gibbs free energy, both of which have also caused me confusion in the past.

To me, the issue with entropy is that it's initially presented without clear justification, so people don't know why it's important. Statistical mechanics made it clear to me that the state with the most entropy is the most likely to occur in equilibrium. (Statements about entropy representing "disorder" and whatnot in my opinion are handwavy, often lead to confusion, and should be avoided.)
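
A toy version of "the state with the most entropy is the most likely": count the microstates for N two-state particles and watch the count (and its log, which is the entropy up to a constant) peak at the 50/50 macrostate. A sketch; nothing physical about the numbers:

  from math import comb, log

  N = 100
  for k in (0, 25, 50, 75, 100):     # macrostate: k particles in the "up" state
      W = comb(N, k)                 # number of microstates realizing that macrostate
      print(k, W, round(log(W), 1))  # entropy ~ log(W), maximal at k = N/2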


For me, it was approaching it from the information-theoretic perspective. E. T. Jaynes' paper was what made it all click for me: https://bayes.wustl.edu/etj/articles/theory.1.pdf

Edit: but that was only after I had grokked Shannon's paper on information theory, which I felt was pretty intuitive.


I also have chosen entropy for my most memorable grok! :)


Singing.

I was part of a choir in high school and early college, but never managed to get good at this, even though I knew well how skilled singing should sound.

More than a decade later, when I was really exhausted after tending to my infant child, I let out a sigh of the same type that we did as an exercise back in the choir ("Aaaaaaaah"). To my surprise it was almost effortless despite making quite the sound, due to it being more or less at the resonance frequency of the bones in my skull.

And then it dawned on me. It's essentially turning yourself into an elaborate fart pillow.

Of course there's much more to that in things like posture and the mentioned resonance, but I had that covered from years of training. The only piece missing was using the diaphragm correctly.

Now I sing daily and while my range is pretty average, I don't strain myself any more and can do it for hours. Also the tone is just so much better now.


Lie Groups... I signed up for a grad level course in my second year, and had no idea what was going on. Eventually I did my PhD on algebraic combinatorics, which works with Lie Groups quite a lot, but it took years to internalize all the ideas needed to have any intuition at all.

https://en.m.wikipedia.org/wiki/Lie_group


Although I still don't understand Lie groups with significant mathematical rigor, I do think I finally understand why they are used in code for computer vision and state estimation. It took me a shockingly long time to understand why.

This paper was the catalyst to me finally grasping some of the details: https://arxiv.org/abs/1812.01537


Graph theory. For the longest time I thought it was an interesting and even beautiful branch of mathematics but somewhat "recreational" and disconnected from my various bread-and-butter workhorses for applied work (from linear algebra and PDEs to differential geometry etc). So I never bothered to dig and connect the dots.

At some point I realised the connections between the two worlds e.g., graph operations as linear algebra [0] or the transition from continuous Laplacian to graph Laplacian via a discrete Laplace operator [1]

[0] https://graphblas.org/

[1] https://en.wikipedia.org/wiki/Discrete_Laplace_operator
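
The bridge in [1] is surprisingly short to write down: with adjacency matrix A and degree matrix D, the graph Laplacian is L = D - A, and on a path graph it acts like a negated discrete second difference. A minimal sketch:

  import numpy as np

  A = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])        # adjacency of the path 0-1-2-3
  D = np.diag(A.sum(axis=1))          # degree matrix
  L = D - A                           # graph Laplacian
  print(L @ np.array([1, 2, 4, 8]))   # [-1 -1 -2  4]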


Topological sorting! So important


Life is actually short. As a kid, you don't really put much stock into that being bandied about by your elders. You feel like things will go on forever, and that you're basically immortal. As an adult, I now have less time ahead of me than I've already been alive in which to make something of what's left. I'm not left with crippling regret, but I have wondered what I could have done to make better use of my time.. and that time is just simply gone.


Fourier Transform.

I'd been writing DSP code in C and MATLAB during undergrad while thinking I knew every possible aspect thanks to the deep study we had across all the transform methods in our ECA & Communications coursework in India. Also, dad was a self-taught hands-on Analog electronics whiz whose day job was in Telecommunications Training, and I had uncles and aunts in the Telecom/Electronics/ATC industry, so I had all these resources from a very young age to revisit the concepts that were being taught to me in school, against practical applications over and over.

In week 2 or 3 of ECE490 at UofR, Prof. Heinzelman[1] broke a barrier that I did not know existed in my understanding of DSP. It was pinned to my realizing that the intuition behind the Fourier and Laplace transforms is the same. It was a moment that I haven't experienced since, in that I felt my brain got re-wired within that hour. It must help that her father[2] basically wrote the textbook on speech processing.

[1] http://www.hajim.rochester.edu/ece/heinzelman/

[2] https://www.ece.rutgers.edu/lawrence-rabiner


Interesting, Fourier transform has never been an issue for me but Laplace never clicked. I know it's kind of similar (convolution, exponential and all that) but I miss the connection with the frequency domain that is there for Fourier.


The frequency is the imaginary part of the Laplace parameter s (and the attenuation is the real part of the Laplace parameter s).

Laplace transforms work on systems with attenuation—that’s the main advantage.

Because the kind of transform was swapped out anyway, people used the chance to define only one-sided Laplace transforms that work for t > 0 (because as an engineer, those are the systems you want anyway).

There’s a direct correspondence between the (usual) one-sided Laplace transform and the (unusual) one-sided Fourier transform for that reason.

Since you usually have systems where f(t) = 0 for all t < 0 anyway, the distinction one-sided or not is not so important in practice for understanding.
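
In symbols (a sketch of the usual one-sided definitions, LaTeX notation):

  F(s) = \int_0^{\infty} f(t)\,e^{-st}\,dt, \qquad s = \sigma + i\omega
  \sigma = 0 \;\Rightarrow\; F(i\omega) = \int_0^{\infty} f(t)\,e^{-i\omega t}\,dt

i.e. the one-sided Fourier transform is the Laplace transform evaluated on the imaginary axis, and moving off that axis (sigma nonzero) is exactly the attenuation part.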


Music theory. I'm not claiming I understand it all.

I've had many eureka moments when I suddenly realise what or why this thing from years ago is important.

I think teaching music theory with both what and why (with examples) is essential.

I still haven't found a book that does a good job of it and keeps it interesting. Any recommendations?


The axiom of choice, Tychonoff's theorem, and the open mapping theorem [0, 1, 2]. Each one took me, in my view, much too long to grasp.

[0] https://en.wikipedia.org/wiki/Axiom_of_choice

[1] https://en.wikipedia.org/wiki/Tychonoff%27s_theorem

[2] https://en.wikipedia.org/wiki/Open_mapping_theorem_(function...


The axiom of choice is the topic of one whole quarter of set theory in my undergraduate years. I still don't think I fully understand it. By "fully understand it" I mean achieving the level of proficiency of my professor who, upon hearing seemingly any theorem encountered in undergraduate study, would immediately tell you whether it requires Axiom of Choice, merely Dependent Choice, Countable Choice, or doesn't require Choice.


i didn't really understand hypermedia, and, in particular, the uniform interface/HATEOAS until a few years after i started building intercooler.js (htmx predecessor)

https://intercoolerjs.org/2016/01/18/rescuing-rest.html

much later:

https://htmx.org/essays/hateoas/


I'm reading the HyperMedia book right now, thanks!


The use of Predicate Calculus in coming up with the Proof along with the Program.

Predicate Calculus is used to show that the path followed by a Process through a Cartesian Product space (created from all the memory variables in a Program) is the one you had in mind w.r.t. its Specifications. Suddenly you start to understand basic Set Theory, Types, Relations (Functions) and Logic.


The biggest one has to be design patterns in OO design!

I was fortunate to have a great teacher which taught OOP by focusing on design patterns, dynamic dispatch, composition over inheritance, etc, etc, the really foundational concepts.

I was too young and naive to fully understand all the concepts, I just thought writing Java was cool! Ah.

Many years later, I really realized how to apply design patterns to solve real problems in a large production codebase, and it all clicked then. In retrospect that teacher was really amazing; I was just too naive to realize it back then...


Dynamic programming made very little sense to me when I first encountered it, then a few years later I read the DP section of Algorithms by Dasgupta, Papadimitriou, and Vazirani and it somehow clicked for me. Now I enjoy dynamic programming problems when I get them in interviews because they are usually pretty easy once you understand the trick.


This one is always interesting to me. People mention "dynamic programming" and my gut says "code that writes code" and every single time I remind myself it is just caching earlier calculations. DP is just caching.


DP is just overlapping recursion with caching. That's it.
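
A minimal sketch of exactly that, in Python:

  from functools import lru_cache

  @lru_cache(maxsize=None)
  def fib(n):
      # naive recursion recomputes the same subproblems exponentially often;
      # the cache makes each subproblem cost O(1) after its first evaluation
      return n if n < 2 else fib(n - 1) + fib(n - 2)

  print(fib(90))   # instant; uncached, this call would effectively never finish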

Richard Bellman just wanted to give the mathematical work he was doing a fancy name, so it wouldn't be suspected of being mathematics.[1]

[1]https://en.wikipedia.org/wiki/Dynamic_programming#History


Parenthood. Now I see how much my mom was right, but couldn't put things in proper words.


Parenthood is the Matrix. Nobody can tell you what it is, you have to experience it for yourself.


Calculus, even the basics like derivatives, 2nd derivatives and integration didn’t really click with me until I did some scientific computing/signal processing with SciPy and Pandas a couple years ago. Graphing the discrete versions on time series data, e.g. graphing the difference between consecutive values and seeing the derivative pop out, or a rolling sum for integration, etc finally made it all click.
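
Something along these lines (a rough NumPy sketch, not the original poster's code):

  import numpy as np

  t = np.linspace(0, 10, 1000)
  y = np.sin(t)                       # stand-in for a time series

  dy = np.diff(y) / np.diff(t)        # consecutive differences ~ derivative ~ cos(t)
  Y  = np.cumsum(y) * (t[1] - t[0])   # running sum ~ integral ~ 1 - cos(t)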


Visual thinking! [1]

I had read a bit about different types of thinking but I didn't really understand it so well until I saw a kid who had amazing visual thinking capability and could visualize years' old memories in a very detailed manner. It has also been related to photographic memory according to some research.

[1] https://en.wikipedia.org/wiki/Visual_thinking

Nikola Tesla's Creative Thinking Secrets: https://www.sers.si/wp-content/uploads/2014/10/angle%C5%A1%C...


You should read Moonwalking with Einstein and look into memory palaces. I have memory palaces I can still walk through and remember after a decade.


Thanks for the suggestion, that does look like an interesting read! I will check it out.


https://en.wikipedia.org/wiki/Aphantasia

This could be some fun reading for you.


I learned classical piano, and a bit of music theory, which was mostly about analyzing what I had in front of me.

I always found classical music to be a bit dead - beautiful, but dead : I guess I see it as a dead language, like Latin or Greek. The jazz I was given was fully written, so it was dead too, except in a few cases where I was supposed to magically improvise.

Then I 'got' Jazz, 25 years later : it's entirely about learning the language and grammar and vocab of music, and speaking it, creating new sentences and expressing yourself. The process probably matters more than the style itself.


I'd have to say... Linear Algebra & SQL.

SQL is something that is a permanent journey for me. It is a domain-specific language, so the nature of its use depends almost entirely on how the domain was modeled in the first place. Much of my interest has been drawn towards the modeling aspects and how we can arrive at schemas that business experts can tolerate.

Linear algebra properly clicked for me when I started getting into writing my own 3d rendering software. Systems of linear equations are absolutely everywhere. I didn't really tie together their use in control theory, DSP and graphics until I was forced to learn them "for real" to achieve my project goals.


Something in linear algebra clicked for me when I realized the infinite dimensional case is different from the finite dimensional case.


What are some applications / examples for the infinite-dimensional case?


The continuous Fourier transform, for example, or more generally linear operators in functional analysis.


Quantum mechanics


Mathematical proofs. I studied math in college and didn't really understand anything I was doing until well after school. When I read Book of Proof a few years later it all suddenly clicked for me and made perfect sense. Half of studying math is just surviving. Same thing for my Fourier analysis and signal processing classes. I didn't grok it until a few years after my classes. Don't even get me started on statistics which I still find difficult to really understand "why do I trust this magic formula?". The hard science behind statistics is actually incredibly complicated when you start to dig into it


The Fourier transform. I encountered it first in my undergrad engineering degree, where it was presented as dry mathematics with no real explanation - they just threw complex exponentials at us, and pages of derivations. Years later I actually use it in my job, and through that and other material I can see its beauty, and how it's actually not that complicated. Some great resources like this helped a lot:

[0] https://betterexplained.com/articles/colorized-math-equation...


This is what made it click for me, can recommend it:

3Blue1Brown -- But what is the Fourier Transform? A visual introduction.

https://www.youtube.com/watch?v=spUNpyF58BY


That video did the same for me. I also like the Reducible video on Fast Fourier Transforms, as well as the Veritasium piece that shares some fun history.

Reducible - FFTs, the most ingenious algorithm ever: https://youtu.be/h7apO7q16V0

Veritasium - The Most Important Algorithm of All Time: https://youtu.be/nmgFG7PUHfo


100%, those are excellent, but they didn't exist when it first clicked for me. Being in university now must be great, having such access to explanations that really get to the root of a concept.


The FFT is something I still can't quite grok, for some reason.


I had the same problem. Then I took a math course where we covered the general Fourier transform and it made way more sense. And the FFT is the result of a simplifying transformation based on discrete, regularly spaced points, and that's really opaque from the other side.


I guarantee that you will be able to "grok" it from this book: https://news.ycombinator.com/item?id=34207380


In case you didn't already know; Who Is Fourier?: A Mathematical Adventure is an excellent illustrated book on this.


Thanks, I'll look it up!


Voltage and amperage for me, years after a college electronics course and a childhood of soldering kits I finally started to get it. I was always tripped up with analogies about how it's like water in a pipe or something, which can be useful, but aren't quite right.


Can you share what helped it click? I’m in the same boat, trying to understand it through analogies.


These are physical properties that can be experienced (although there is some risk): for voltage you can touch a Van de Graaff generator; for current you can touch a battery to your tongue.

That's the way to understand electricity without analogies. :)

Anyway, voltage is always measured between two points (one is typically called "ground" but that's not important here.) It's a difference. (A difference of what? No one knows. That's just the way it is.)

Current is always measured through a single point. It's a count of the charge flowing past a point per time unit. (What is charge? No one knows. "Charge" is just a name for the mysterious something that "voltage" is a difference of...)

When electricity moves it obeys a very simple law, Ohm's Law:

    I = V/R

The current (I) is the Voltage divided by the Resistance.

E.g. if I have a 5 Volt power supply and I put a 100 Ohm resistor across it (remember that voltage is always between two points) then 5/100 = 0.05 Amperes of current will flow through it.

Also, if I take just the resistor and drive 0.05 Amps through it, it will develop a voltage of 5 volts, as measured from one end of the resistor to the other end.

Now if that doesn't help you, remember the advice of John Von Neumann to Felix Smith: "Young man, in mathematics you don't understand things. You just get used to them."


>Current is always measured through a single point. It's a count of the charge flowing past a point per time unit. (What is charge? No one knows. "Charge" is just a name for the mysterious something that "voltage" is a difference of...)

I think you're being overly mysterious here. It's the number of electrons passing through the wire per second. And charge isn't the thing that voltage is a difference of; it's potential difference, as in electric potential energy. The "water in a pipe" analogy really isn't that bad; it's gravitational potential instead of electrical potential, and amount of water instead of amount of electrons.


> I think you're being overly mysterious here.

Maybe, but I feel electricity is mysterious, eh? I mean, we don't know what it is, nor why the Universe has it, and we aren't likely to ever figure it out, eh? Really that's all I was trying to point out with that: we know how electricity behaves but we don't know what it is. Sometimes people get hung up on that.

In any event, I usually recommend William Beaty's 'What Is "Electricity"?' http://amasci.com/miscon/whatis.html


I would love to hear this in a way other than water in various sized pipes.


I studied functional programming and I took terms like "expression", "evaluation", and "value" for granted until it clicked that these terms have intuitive meaning when interpreted in colloquial/human terms.

- Expression as in someone expressing themselves, a tentative gesture.

- Evaluation as in scrutinizing something to learn what it's really about.

- Value as in "valuable".

We don't know if an expression is valuable until we subject it to evaluation, or we get an interpreter to interpret it for us.


Have you ever studied compilers? You'll get a good idea of these concepts from a toy compiler for an imperative language.


Yeah I've built a compiler and several interpreters in an academic context. I'd appreciate any concrete suggestions for further playing/learning.


Discrete mathematics and all sorts of its application in real-world (software development) related problems. Also how any given solution to a problem in one problem domain can be transferred to a problem in another unrelated domain. Think Galois theory but waaaay less fancy :-)


When I did Mathematics A-Levels, we had to select four modules. I chose pure 1, pure 2, stats 1 and, new that year, discrete 1.

So many useful groundings in various super important comp sci concepts.


It felt like my understanding of pointers was a bit vague until I learned assembly. I think programmers would benefit from starting off with a simple virtual machine that runs a tiny set of instructions with everything presented cleanly in a visual format which can be inspected and stepped through.

The biggest a-ha! moment I've experienced was in my Computer Architecture course when it finally clicked how you could build up a bunch of logic gates into a real modern computer.


In OO, the interface. It seemed like the most useless construct. Zero implementation. Why make something with no implementation?

Later I realised that the benefit isn’t the code, it’s the freedom you gain later in choosing the implementation. You can create an interface and add a simple implementation, then later swap it out for something more robust. All you’re agreeing to now is the contract of what needs to be done, without any restrictions on how.
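
A small sketch of that freedom in Python (the names are made up): the calling code agrees only to the contract, so the backing implementation can be swapped later without touching it.

  from abc import ABC, abstractmethod

  class Storage(ABC):                          # the contract: zero implementation
      @abstractmethod
      def save(self, key: str, value: str) -> None: ...

  class InMemoryStorage(Storage):              # simple implementation to start with
      def __init__(self):
          self.data = {}
      def save(self, key, value):
          self.data[key] = value

  def handle_upload(storage: Storage):         # written purely against the interface
      storage.save("report", "contents")
      # later: swap in a DatabaseStorage or S3Storage without changing this function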


f(x) = y etc in linear algebra in school. By the time I was learning this in high school I’d already been programming for several years and perfectly understood the concept of function inputs and outputs. It wasn’t until my early twenties that I realized this was just an alternate notation for functions and was so simple.

Sad that my math education was just focused on “memorize steps” for concepts that weren’t clearly explained.


Compound Interest. The idea and the numbers made sense but it is only with many more decades of life that I understand how much impact time has on compounding. If I had understood this when I was younger, I would have made different choices in regards to saving more. Small sacrifices when you're younger and can easily tolerate them, have massive dividends later in life.
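
A back-of-the-envelope sketch (the 7% annual return is just an assumption for illustration):

  rate, years = 0.07, 40
  start_early = 10_000 * (1 + rate) ** years        # invested at 25, checked at 65
  start_late  = 10_000 * (1 + rate) ** (years - 20) # same amount, invested at 45
  print(round(start_early), round(start_late))      # ~150k vs ~39k

The same 10k roughly quadruples again just by being in the market for the extra twenty years.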


In my case, almost everything has been like that.

1) I learn rote.

1.1) Almost no improvement.

2) I "get it."

2.1) My development in that area suddenly explodes.

This has happened with almost every software concept, from calculus, to Structured Programming ("Whatever will I do without my precious GOTOs?"), to Object-Oriented Programming, to Design Patterns, to Protocols, etc.

It usually only takes weeks or months, but I suspect some have taken years.


Power of static typing, that it allows one to develop complex programs faster and better, not slower.


I think the develop faster thing is a poor argument for static typing. The refactoring piece of the equation is the most important to me. Static types provide confidence (and speed) while making sweeping changes to code.


That's what I meant by developing faster. Refactoring is an indispensable part of the building process.


Yeah, totally agree that refactoring is part of development, but I don't think people frequently think about it in comparison to maintenance.


Simple one: understanding probabilistic vs. deterministic thinking = the serenity prayer*

The former is for dealing with any situation where we can't control the inputs and outputs. The latter is for when we can.

*"Grant to us the serenity of mind to accept that which cannot be changed; courage to change that which can be changed, and wisdom to know the one from the other."


Complex numbers, as a convenient representation for certain operations, in the same way as we use negative numbers to represent debts.


The certain operation almost always being rotation in my experience


For me it was Microsoft’s COM and how it’s all powered by interfaces. It took a long time before it clicked for me. Fortunately, it didn’t really matter because 20+ years later, it’s still relevant. I’ve had plenty of time to work it out.

The other Windows-related idea that took an embarrassingly long time to work out is device / printer / memory contexts.


"The road to hell is paved with good intentions.", "Perfect is the enemy of good", and more generally idioms that sound like either paradoxical assertions or obvious platitudes.

It wasn't until my late teens/early twenties that lots of those pieces of wisdom went from "cliché phrases you've heard all your life" to (often) deeply impactful aspects of the human condition.

The "road to hell" idiom explicitly didn't click until I had enough experience with counter-productive efforts who looks good on the outside, but actually fail to take into account a more nuanced reality.

It also kinda works with the distinction between "values I believe in" and "arguments articulated around shared values". Being able to disagree on something even when (especially when) it's invoking values I believe in is one of the most important things I've gained as I've matured, and yet it needs experience to 'click'.


I never remember the right "direction" for these platitudes. I feel like "good is the enemy of perfect" is just as valid as "perfect is the enemy of good". Meanwhile, the road to heaven is also paved with good intentions. Good paving material, it turns out, but asphalt is cheaper.


object oriented programming.

i must have encountered it during my computer science classes and i certainly did some form of oo programming with LPC in MUDs, but only when i was programming modules in Pike for the Roxen webserver, it really clicked how oo programming worked.

in Roxen each http request causes a request object to be instantiated which lives for the lifetime of the request. further, each Roxen module gets instantiated as an object for the lifetime of the server process. the request object would call module objects to process the request. the modules would make changes to the request object which would produce the response to the http request, whereas storing data in a module object caused it to be persistent.

i have been working on this for a while when one morning i woke up with a literal eureka moment as i realized how objects and encapsulation worked.

this happened a few decades ago and it was the most clear moment of this kind that i can still remember.


I think I've read about git multiple times before I started using it. And even then at first it was with a dash of scepticism. I can't put my finger on that moment, the moment of clarity but it had to be quite glorious. Before, I was "what else do you need we have svn". Sounds horrible now that I think about it.


Pfft. I believe that anyone who read about git, and responded "yeah, duh, it's so obvious" is a dirty, dirty liar.

An explanation of "interactive rebasing" to me might as well been an explanation of quantum entanglement to a learning-disabled ant.

Sometimes it still trips me up. I have an embarrassingly large number of copies of source folders, despite git's safety net of "don't worry, you can always restore it; pinky swear..."


I would never rebase at the command line, and I avoid rebasing generally, to the chagrin of teammates. A merge retains the information to reconstruct the rebase anyway, so might as well use the simpler option. Unless there are a lot of commits such that in aggregate they conflict but individually they apply smoothly.


For me it was pointers. I used C throughout college and I never really grokked the "address to a value" description, but one day it finally just clicked and now I love pointers


A lot of the math I learned in high school didn't make any sense until I revisited it in college. The difference between stupidly copy/pasting rules and theorems without understanding them and having to demonstrate them from the ground up before being able to apply them made a humongous difference for me.


The attachment theory of adult relationships [1].

Sometimes you just have to go on the journey to understand what stopped you getting to your intended destination.

[1] https://en.m.wikipedia.org/wiki/Attachment_in_adults


1. Functional programing

It took me a while, but once it clicked, everything changed. Programming became a process of data consumption, transformation and then presentation. It was if/then/loops/classes/libraries/languages before that.

2. Entropy

The example of the messy room that gets messier with time is probably the worst. Entropy is everywhere (in real life or in IT); and it’s actually the reverse of that example. For me now, entropy is life.

3. Bitcoin

I was quite a crypto/bitcoin skeptic back until 2014-2015. The whole thing seemed like a Ponzi/pump & dump scheme. Until I read the book “Mastering Bitcoin”. It goes into the details of how Bitcoin works. That’s when Bitcoin (as a cool technology) clicked for me. I still see the trading activity as suspicious but I made my peace with it as humans doing what humans always do: speculation.


Staying out of debt. A simple concept that took me until I was 37 to appreciate and understand.


Yes, don't rent money unless you have a very good plan.


for me it was functional programming. and not just fp as is, but it's relation to another popular concept - oop.

i was properly introduced only to oop, and grasped a little of fp here and there, especially learning go and javascript.

what i consider a 'click' for me is when i realised that all of these paradigms are interchangeable. like, an abstract method is just a function, or a function signature is the same as an interface with a single method.

after that i write code however it feels more appropriate for the situation i am in and don't think too much about fancy words and patterns. it really feels like programming languages are becoming 'native' for me.


You should learn Haskell, that's what I'm doing now, in order to learn functional programming from the ground up. Languages like JS which have FP concepts aren't really functional programming fully.


i don't care about idealistic concepts


Not sure what is idealistic, it's a useful programming language.


I was at a Denny's around 2AM once and in an instant became totally convinced that I understood the Ontological Argument perfectly and that it was 100% correct and undeniable proof of the existence of God. But then I lost it.


which of the many Ontological Arguments?

(eg Gödel's https://en.wikipedia.org/wiki/Gödel%27s_ontological_proof#Ou... is an exercise in order theory, in which one proves that a certain axiomatically presented mathematical structure has to have a maximum, which then, for the religiously inclined, could be identified with God)


Related : I have a request for the 1 major concept that has never clicked for me --> Can someone help me understand 'productive value' in an economy ?

I have tried hard, and can't for the life of me understand what lies at the bottom of trade, or what the base value of assets / activity is. It just doesn't click.

Questions like:

- Can a services based economy work if the consumers of the services weren't producing some non-services based value ?

- How can there be any economic value in middle-men ?

- Is speculation on artificially limited assets (housing) just a pyramid scheme ?

- How is it NOT a zero sum game ?

- Is a Fiat currency just 'vibes based economics' ?


Here's something to think about: suppose you buy an assortment of candy from the grocery store and distribute a uniform collection of pieces to a room full of middle school students.

Some students will like the sour candy; others won't. Some will like coconut; others won't. Some will love nuts; others will hate them.

Now you let the students talk to each other and trade candy. The ones who like sour candy but hate coconut will trade with the students with the opposite preference, and so on.

At the end of the trades, everybody is happier with their new bundle - why would they have traded otherwise? So value has increased. But this is magic - no new candy was produced! Through trade we have increased overall value - the exercise was positive sum.

With some effort you can see that a middle man might add value by expediting trades - connecting people who like coconut with people who hate it (suppose there are thousands of students in the class who all speak different languages).

And you can see how fiat currency might make things easier. Rather than having to come up with the barter value of coconut candy and sour candy to nuts, you can convert both to a single unit and then trade units.


I can answer two of those. Middle-men are useful whenever there is fan-out or fan-in. A producer wants to sell to consumers, so there is a lot of fan-out. The producer could get the consumers to come to the factory and to buy good directly, or they could go door to door, or they could use middle-men: retail stores. The stores buy the goods wholesale and take on the work of selling them to individual customers, in exchange for a portion of the total profit. Customers visit stores for the same reason, to avoid having to visit every factory for every producer of the goods they want. With time and effort saved on both sides, the stores benefit both the producers and the consumers.

The economy is not a zero-sum game because we are summing the value, not the prices, of the goods that are traded. If I grew wheat and you grew apples, then we can trade between us to mutual advantage, because after the trade we can both make apple pie (assuming we can find some cinnamon). The total amount of wheat and apples hasn't changed, but their value has gone up. Every trade involves two parties who must both believe that they will gain value by the trade. Of course it is possible for either party to be mistaken (or even deliberately swindled by the other party), but as long as the majority of trades have net positive value then the economy as a whole will be a positive-sum game.


Please see this layman’s opinions in line.

- Can a services based economy work if the consumers of the services weren't producing some non-services based value? Hypothetically, yes. But it would not be very wealthy. Examples are tourist-oriented economies and cities like Las Vegas, New Orleans and Cancun. The reason is that they depend on discretionary income. When times are tough, the service industry is hit hard.

- How can there be any economic value in middle-men? Middlemen can provide value. For example, a real estate agent ideally speeds up your house search when buying and how long your house is on the market when selling. In many cases, saving time equals money.

- Is speculation on artificially limited assets (housing) just a pyramid scheme? No. Housing in the USA is a low-risk investment. Artificially limiting it decreases risk and makes it even more valuable. This is the opposite of speculation. We speculate with crypto and baseball trading cards as they are risky, with arbitrary random monetary value. Ponzi schemes are outright theft, as they are intentional scams.

- How is it NOT a zero sum game? Classical economists argued trade increased wealth as it allowed for specialization.

- Is a Fiat currency just 'vibes based economics'? I don't follow what you mean. But even if there were no currency and only barter, we would still be dependent on government-provided laws and security. Otherwise we would live in a society where people raid and steal what they want. In other words, trade and property rights are only possible because of government, not in spite of government.


I can try some of these

> Is a Fiat currency just 'vibes based economics' ?

I'd think of it more like contract-based/trust-based. If a government has promised you can pay your taxes and debts in a fiat currency, and you believe them, then you'll value that currency. You may need it some day. When the trust erodes, so does the value. I don't think this is simply "vibes", it's people behaving rationally about what they expect the future to hold. And in fact this is just what "value" means for all things generally. Someone thinking rationally about how to value a gold-backed currency is still asking themselves whether they expect gold to be worthwhile in the future.

> How is it NOT a zero sum game ?

A zero-sum game would be one where there's just a fixed amount of value to allocate however, and you're stuck to the Pareto frontier; that is, there is no re-allocation which would leave someone better-off and nobody worse-off. But in reality there's obvious misallocation all of the time! That's why you would ever buy or sell something: because two parties both believe they benefit from that exchange. Whether that exchange is actually positive or negative for society depends on externalities. But it seems pretty clear to me that a lot of activity is a negative-sum game (if it causes pollution, addiction, etc), and a lot of activity is a positive-sum game (most other things), and almost nothing is actually zero-sum. How is it NOT NOT a zero sum game?

> Is speculation on artificially limited assets (housing) just a pyramid scheme ?

Kinda.


Leg drive when bench pressing. The idea that anything you could do with your legs could possibly help drive the bar upward was always ridiculous.

Many, many years later, I finally found the right combination of articles and videos where it finally made total sense.

Instant improvement and I wished I would have understood it sooner.

The problem is really the name, which leads to a particular mental image, which is incorrect, but difficult to break.

I think naming is important and for me, can cause enough cognitive dissonance that I can’t get past what something is called to understanding very easily.


Do you have links to those articles/videos?


Here are a couple videos that provide most of the insight…

https://youtu.be/4T9UQ4FBVXI https://youtu.be/Bmjr4Q6je8I

Basically, you can’t bench with elbows straight out, because you’ll hurt your shoulders.

So you tuck your elbows a bit, but that means you’re fighting some leverage because the bar is no longer above your shoulders.

To fix this, you arch your back, which rolls your shoulders back and puts the bar back above your shoulders, but with your elbows still tucked.

One side of the arch, your shoulders, is pinned to the bench by the weight of the bar.

The other side of the arch, your butt, is not. So your arched back will straighten out under load, which rolls your shoulders forward, which moves the bar out of position.

So, to keep the arch solid, you use your legs to push your butt toward your shoulders, thus reinforcing the other side of the arch, keeping your shoulders rolled back.

So, leg drive is _not_ driving the bar, it’s driving your butt toward your shoulders, to keep the arch in your back solid, to keep your shoulders rolled back (and chest up) to keep the bar above your shoulders with your elbows tucked a bit, to keep from impinging a tendon.

You could bench with your butt off the bench and make a big arch, though this article explains you’re diminishing your returns by reducing range of motion.

https://startingstrength.com/training/keep-your-butt-on-the-...


Makes perfect sense, thanks!


I have one example and an anti-example. Both are related to algorithms and computer science.

A) Packing a binary tree into an array. Anyone that has attended an algorithms course has likely created a binary tree with nodes, leaf nodes, left and right child etc. Seen the pine-tree like sketch with a larger example where each node except the leaf nodes have a left and right child. So how do you pack this tree into an array and traverse it efficiently?

Well, you turn it 90 degrees sideways, slightly shift all nodes on the same level so that none align, and put them into an array going from leftmost to rightmost (or the other way, depending on whether you shifted 90 or -90 degrees). Congrats, you've packed the nodes into an array. How do you traverse it? Our root node is at index 1 and if you packed the array correctly then `idx = (idx * 2) + 1` will move down one side and `idx = (idx * 2) + 0` moves down the other. I don't have a good visual explanation of this but you can think of the integer/index as a bit-sequence describing when in the tree a left vs. right path was taken (with the exception of the root node).
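
In code, the whole trick is a couple of lines of index arithmetic (a sketch; the node labels are made up):

  tree = [None, "F", "B", "G", "A", "D", None, "I"]   # level order, slot 0 unused

  def left(i):  return 2 * i        # append bit 0 to the path
  def right(i): return 2 * i + 1    # append bit 1 to the path

  i = right(left(1))                # from the root: go left, then right
  print(tree[i], bin(i))            # D 0b101 -- the bits after the leading 1 are the path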

B) Anti-example: Ford Fulkerson algorithm for finding shortest paths between all nodes in a graph. The algorithm is basically just three for-loops stacked on top of each other, but I still can't grasp why it works. Something with dynamic programming and incrementally building on prior established intermediate paths. The algorithm is truly the product of a beautiful mind.


A) I've seen this before but I've never thought of the "index as a bit-sequence describing when in the tree a left vs. right path was taken" before! This is a very nice intuitive explanation that'll really help in describing this to others.

B) Did you mean Floyd-Warshall's?


On B) yes, of course. Aaagh, all these algorithms and their indistinguishable names!


One concept that took me a while to fully understand was the concept of decentralization in blockchain technology. When I first learned about blockchain, I understood the basic principles of how it worked, but it wasn't until I started working for a blockchain company that the concept of decentralization really clicked for me.

Decentralization is a key feature of blockchain technology, and it refers to the fact that the blockchain is not controlled by a single entity or organization. Instead, it is maintained by a network of computers working together, and every transaction is recorded on a distributed ledger that is available to everyone in the network.

Realizing the potential implications of this decentralization - such as greater transparency, security, and accessibility - was a big moment for me, and it solidified my belief in the power of blockchain to transform industries and the way we do business.

If you're interested in learning more about blockchain and decentralization, I highly recommend checking out Rather Labs (https://www.ratherlabs.com). We're always looking for passionate people to join our team and help drive the adoption of this exciting technology.


You should look up a concept called "Gestalt Learning". I liken it to the "Eureka" effect that Thomas Edison talked about.

For any professional educators or other folks more familiar with this, I am sure I am about to butcher the concept, but my layman's understanding is the following. Rather than a bottom-up approach to learning, it's a top-down approach where you see the whole picture first rather than each piece, and then dance around it until you just "get it".

Think of imagining a house and then learning about it until the entire house system makes sense. You might imagine the house, then learn a large "chunk" which is the foundation, then another chunk which is the plumbing, etc. etc. A more typical learning method might be: you learn what a brick is, then you learn how bricks are put together, then you understand a wall, etc. etc.

This style of learning is more common in folks on the ASD spectrum but isn't exclusive to ASD folks. I find that I'll encounter a concept, then think on it periodically over some period of time until one day I just sort of "get it". It's not better or worse than other ways of learning, but it does mean you need to approach things differently to learn more efficiently.


This might be a good thread to ask this. I have done all the math in college, but I don't fundamentally understand sine, cosine, tangents, and logs. For sine, cosine, and tangents, I understand how they may correlate on a graph, but that doesn't really give me additional insight into how to apply or use any of this. Similar thoughts on logs, especially when it comes down to O(logn).

Maybe someone can direct me to a video to help me develop a deeper insight?


Consider following John von Neumann's advice "in mathematics you don't understand things. You just get used to them." Write down the first 100 equations involving sin, cos, tan, log. Use those equations in 100 larger derivations. Visualize the functions in 100 different ways (2D, 3D, animated).

It's a lot of memorization and "getting used to" the math, punctuated by a few "aha moments" of insight when you realize the 100 examples can be compressed into say only 50 examples. In retrospect it's tempting to think the insights led to understanding, but really they were just the dopamine hits on top of the real stuff of understanding: tedious acclimation to a new abstract realm.


I don't know exactly from what angle you're looking at this, so let me explain through trigonometry.

We know that triangles may be displaced, rotated, flipped and scaled while still looking the same. We have a word for this: we say two triangles are congruent when they are the same up to these operations.

This means that there is something intrinsically invariant about triangles. Can we find it? Actually yes! If the sides of a triangle have lengths A, B, C, then the ratios A/B, B/C, etc. are invariant. That is, if you make a triangle twice as big, all sides will multiply by 2, so A/B becomes (2A/2B)=A/B -- it's the same!

So, we can come up with names for these invariant ratios. They're most useful for right triangles. To give names, we need to pick one of the two smaller angles as a reference; call it "a". Now, let C be the largest side, A be the side opposite to the angle, and B the adjacent side. Then, the ratio A/C can be called sin(a), B/C can be called cos(a) and A/B can be called tan(a).

Since sin(a), cos(a) and tan(a) are ratios, they only depend on the angle, not how big your triangle is. But if you know some side of the triangle, then you can know all of the others using these values. So sin, cos and tan really capture the invariance I was talking about!

---

Now the applications. I memorized these as the "divide by C" rule.

The Pythagorean theorem says that A^2 + B^2 = C^2.

Divide by C^2, and you get sin(a)^2 + cos(a)^2 = 1.

Divide by cos(a)^2, and you get tan(a)^2 + 1 = sec(a)^2.

These are all the Pythagorean theorem in disguise.
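
If a concrete example helps: take a (made-up) 3-4-5 right triangle. Scaling it doesn't change any of the ratios.

    import math
    A, B, C = 3.0, 4.0, 5.0                   # opposite, adjacent, hypotenuse
    a = math.atan2(A, B)                      # the reference angle
    print(math.isclose(math.sin(a), A / C))   # True
    print(math.isclose(math.cos(a), B / C))   # True
    print(math.isclose(math.tan(a), A / B))   # True
    print(math.isclose(math.sin(a)**2 + math.cos(a)**2, 1.0))  # True: Pythagoras divided by C^2
    print(math.isclose((10 * A) / (10 * C), A / C))            # True: 10x bigger triangle, same ratio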


Your "divide by C" rule is awesome, totally stealing this! I also never really grokked trigonometry, but your invariant explanation made it click instantly, wow.


That's kind of what I mean. It's just a mechanical equation and that's how I understand it. They are all ratios, but is that all it is? What insight can I get from that, and why do I care about its application in the workforce? When would I ever need such functions? Etc...


They are very useful when building things properly—a cornerstone of civilization if you will.

Precomputed values in a book made things an order of magnitude easier at the job site or battlefield before computers.

Depends what you do for a living. Artists won’t. Civil engineers and artillery software programmers will every day.


I taught myself C++ many years ago as my first language (no idea why; I must have googled how to make games and it came up). No internet, just some downloaded Word docs and HTML pages I saved on a flash drive and took to a friend's house. I was so bad, when I think back, that it's actually funny now that I've become so proficient in it. I used to think the runtime was the compiler, and my very first hello world was written in Word, so you can imagine the torture I went through trying to get that to compile.

So the topic that took me about 2 years to get with this inefficient setup (I had finally gotten a Sams Teach Yourself in 24 Hours at some point, iirc) was pointers. I could never grasp them. Then it suddenly came to me one night when reading something about letters: the concept had something to do with delivering letters to houses, where each house has an address. I can't recall the exact details, but man, was it an enlightening moment.

I googled pointer mailbox analogy and this is very close if not what made it click for me - https://www.eskimo.com/~scs/cclass/notes/sx10.html


The expression problem. I had vague notions of “horizontal” and “vertical” abstraction but they weren’t concrete enough to discuss or make informed decisions about.


Blue Noise dithering, based on a HN post from @todsacerdoti. https://news.ycombinator.com/item?id=25633483 At my previous job we had an ASIC hardware block to implement blue noise dithering. No one, not even the people who created it, could explain to me how I needed to use it. Years later, I read their blog post and a light bulb went on.


Dancing. I was both horrible at it and disasters happened when I did it*, so I became 'too cool' to dance.

One New Year's I finally had a girlfriend I could be authentic with, we were at the city's 'First Night' party, so different activities all over town, and I told her 'I really want to go dance to the big band in the grand ballroom. But I am a horrible dancer and every woman I have danced with has made fun of me'. She was down, was patient, and it was one of the most fun nights of my life. After that we started taking salsa lessons. So just dance. And if the people you are with make fun of you, don't stop dancing, stop being with those people.

* Example: Junior prom, slow dancing with my date in her fancy dress with her $200 hairdo. The gum I was chewing (because I took her out to sushi, which she informed me was gross, and therefore would not kiss me until I got the taste out of my mouth) somehow attached to a hair of hers in my mouth. Before I could unstick it she flicked her hair, yanking the gum from my mouth and into the rest of her long hair. Fun times were not had by all.


I wouldn't say years, but it took quite a few weeks, maybe even months:

The idea that the expansion of the universe is akin to the surface of a balloon expanding (albeit in 4D).

I don't know why. It is so absurdly simple in my mind now. But when I was first told this (after naively asking where the center of the expanding universe would be), I just couldn't wrap my head around it; it seemed like absolute gibberish to me at that moment.


Why is it like that?


The fractality of things is all around us. The whole world is inside you, but at the same time the same world contains all of us. Most simple things are fractal, and most complex things are made using this concept. It's very applicable to code as well. I'd kind of encountered this my whole life, but it only started clicking a few years ago, around my fifth year of spitting code out of my head.


1. The Fast Fourier Transform

2. Quantum computing

3. Godel’s Incompleteness Theorem

4. Denotational semantics

In all four cases, I make no claims to real expertise. However, in each of these cases there was a powerful moment of epiphany, when after much groping in a conceptual fog a light seemed to turn on, and an essential clarity, simplicity, and intuition clicked.


Special Relativity. E.g. how could it be that observers A and B (moving relatively to each other) both think that time passes more slowly for the other one? How is that not a contradiction?

Man, that took me a while. The solution: A and B have different notions of simultaneity and won't agree on which events occur at the same time. In particular, they won't agree on time measurements.


The meaning of "adaptive" in the sense of evolutionary theory, and the capacity of species to evolve to extinction. Reading the selfish gene as a youth I missed the point; "an alien god" [0] and some Hanson articles got the point across finally.

[0] https://www.readthesequences.com/An-Alien-God


Accepting the fact that some people are just arseholes and it is no reflection on me. I used to worry that it was something I had or had not done that made them interact in the way they did. Now I will initially give them the benefit of the doubt, (bad day, tired, hangover) but if it continues I no longer interact with them and move on. I no longer even consider or think about them.


Long time ago, but mathematical functions.

In hindsight stupid, but it took me embarrassingly long to understand that a function just takes a value for x, calculates it with the rest of the numbers in the function and that way basically assigns a y-value to every x-value. No idea why it took me so long, I didn't have any similar problems with high school math


The Kalman filter. I felt like I understood it watching some videos but it always turned out I didn't really understand it well enough to put it to use or explain it to others.

Gaining a better understanding ultimately just took a lot of time playing with the equations and understanding how measurement and process noise/uncertainty get incorporated into the Kalman gain, and how that in turn affects the updated state estimate - e.g. if you have zero noise in the measurement, you end up fully trusting your measurement. This tutorial [1] is the one I ended up studying. This is a case where memorizing the equations (with the help of Anki) helped me reflect on them and keep everything in my head long enough to improve my understanding.

http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf
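
Not from the tutorial, just a minimal 1D sketch in Python of the noise/gain point (all the numbers are made up):

    # scalar Kalman filter: estimate a roughly constant value from noisy measurements
    x, P = 0.0, 1.0        # state estimate and its variance (initial guess)
    Q, R = 1e-4, 0.5       # process noise and measurement noise variances
    for z in [1.2, 0.9, 1.1, 1.05, 0.95]:   # made-up measurements of a true value near 1.0
        P = P + Q                  # predict: uncertainty grows by the process noise
        K = P / (P + R)            # Kalman gain: how much to trust this measurement
        x = x + K * (z - x)        # update the estimate toward the measurement
        P = (1 - K) * P            # the updated uncertainty shrinks
        print(round(x, 3), round(K, 3))
    # as R -> 0 the gain K -> 1 and the filter trusts the measurement completely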


> This is a case where memorizing the equations (with the help of Anki) helped me reflect on them and keep everything in my head long enough to improve my understanding.

Dr. Mark Colyvan, once preparing for a test, couldn't understand a proof. So he went ahead and tried to memorize it anyway, but in the process of memorizing it he came to understand it, so he no longer needed to memorize it.

https://www.youtube.com/watch?v=KqYh1h2t8WU


That's a great way of putting it - the memorization becomes sustained through deeper understanding.


My top ones:

- abstract interpretation. It was at least a year after I first heard of the concept that I grokked it enough to try to write my own, and it wasn’t until many goofy failed attempts and something like 6–7 years that I actually understood it enough to write a good one. And then it was another 5-6 years before I understood the theory well enough to understand abstract interpreters designed by others. So, like, that took me more than a decade to understand. Maybe I still don’t fully understand it.

- SSA IR design. It’s so easy to understand the facts of what SSA is, and it’s not that hard to use an SSA IR designed by others. But it took me over ten years from when I first read the papers and did my first attempt until when I actually got it. And I still feel like there are aspects of SSA that I don’t fully understand.


The fact that essentially all concepts in math, programming, physics etc. are just applications of fixed points which is predicated on the idea of nilpotence. Fixed points go by many names like invariance, spectra, diagonalization, embedding, braids etc.

By fixed point I mean something like the "Lawvere's fixed point theorem". https://ncatlab.org/nlab/show/Lawvere%27s+fixed+point+theore...

I have a braindump on this https://github.com/adamnemecek/adjoint

I also have a discord https://discord.gg/mr9TAhpyBW


Also Brouwer's fixed point theorem, and the Y combinator?


Someone once told me he derives his self-confidence from the dialogue with himself. It took me a few years to understand it:

1. A clear dialogue with oneself is establishing certainty about the inner self

2. Certainty with the inner self enables one to see the outside in a clear way

3. That clarity contributes to confidence in one's actions.


The monitor model in second language acquisition. Or, more accurately, the more contemporary synthesis that it's developed into in the decades since it was first proposed.

The model itself is easy enough to grasp. But concretely understanding what it implies about how I should be studying took much, much longer.


Very intriguing! Any links you can share on this theory? A quick Google search gave me an overview, but I don’t see how this is particularly useful for second language acquisition.


The big take-away is that explicit study of grammar rules, and even vocabulary to some extent, is kind of a waste of time. They encode information in a format and region of the brain that generally isn't accessible to the areas that are used in fluent communication.

The way our brains have evolved to build up a functional language model is by observing lots and lots (and lots) of examples of the language being used for communication. Which implies that, even from the very beginning, graded readers, level-appropriate dialogues, etc. should be the foundation of a language study program and not, as most language instruction courses and apps make it, just a little bit of icing on top.

The primacy of input is also kind of a big deal, and, at least for me, it took a long time before I was willing to let go of forced production exercises such as "translate this sentence into your target language". Perhaps in part because they're so endemic to language learning resources. You can't abandon them without also abandoning Duolingo and most formal classroom programs. But the problem with these kind of practice exercises is, we now know that it's normal for there to be a long delay between when someone can comprehend a grammatical structure, and when they can use it in a natural setting. (Anyone with kids over a certain age should be familiar with the phenomenon.) Forced production relies on - and reinforces - that aforementioned misplaced encoding, and there's a mountain of research demonstrating that skill in performing those sorts of exercises simply doesn't correlate with the development of communicative fluency.

Tangentially, the transformer architecture that's taken natural language processing by storm has some interesting similarities to the leading model among SLA researchers for how language is represented in the human brain. Which isn't the monitor model itself, but might hint at a mechanism for a few parts of the model. Acquisition order, for example.


Given your eureka moment, besides what you didn't do (as you outlined), what did you do to increase your real-world exposure to the second language, with feedback? Did you try immersion?


"Immersion" is a tricky word that I don't feel comfortable using without qualification.

What I like to do is spend at least an hour a day reading, listening to podcasts and audiobooks, or watching videos. A good day for me is a day when 100% of my media consumption for entertainment purposes is happening in my target language.

And that really is it.

I don't really bother soliciting explicit feedback. I suspect that it's potentially harmful because it can trigger the affective monitor. I've also encountered some SLA researchers saying the research indicates that it's not actually helpful. I'm becoming increasingly enamored of Bill VanPatten's conceit that, in a language learning context, there's no such thing as errors, there are only differences between the learner's interlanguage and their target language. Which is something that should be embraced as a natural and essential part of the process rather than a problem that needs explicit correction.

So what I do instead is just pay attention to whether successful communication has happened. When getting input, that means I'm focused on whether I understood the content or not. I want it to be a little bit difficult, enough so that I'm not getting bored, but not so difficult that I feel I'm really straining to comprehend. (There's nothing scientific to that, it's just personal taste.) When I'm having a conversation with somebody, I'm just interested in whether or not I am having a successful interaction. In a sense, what I'm trying to do is set up a feedback loop that optimizes for pure enjoyment, which, for me, seems to be a very good proxy for learning effectiveness.

To that end, I don't really intend to shit on Duolingo and practice exercises and whatnot. A lot of people enjoy those approaches to language learning, and the single most important thing is that you enjoy what you're doing.


I read “Deep Survival” a few years ago pre-pandemic and was pleasantly surprised it wasn’t a book of survival stories. The main takeaway for me was that sure, nature doesn’t care if you live or die, yet surviving in modern life is an illusion - a papering over of the brutal, unfeeling and inescapable “nature.”

The author wrote of survivors coming home after a harrowing near-death experience and realizing that survival is one day at a time, even in the comfort of your own home. Once you taste true survival it may haunt you. Survival is an easy concept with subtle and deep physical and mental consequences.

Kind of reminds me of David Foster Wallace’s “This is water” - https://youtu.be/eC7xzavzEKY


1. Object-oriented programming. At that time, no one gave a clear explanation of what was actually meant by it and how to implement it. The various schools of OOP often did not acknowledge the existence of other (heretical) schools, which caused a lot of confusion in my brain. Critical voices were hounded and silenced. Also, the gap between implementation and theory was always quite wide. I did not know Smalltalk at that time and have not seen it in use to this day.

2. Electromagnetic fields. I'm still not sure I understand them. I found this video helpful: https://youtu.be/XoVW7CRR5JY


Programming as “just math”.

I got wrapped up sitting there memorizing the ins and outs of each language and its ecosystem, and others' compositions in the form of Apache and the like.

Now it seems bizarre we’d think of it as anything but add, div, compare of electrical state in a memory address. It’s not a 1:1 machine translation but it’s the abstraction that’s made me most productive.

A whole lot of baggage comes with software that I’m hopeful ML libraries will allow us to retire. Currently wrapping UI around OpenCV to make my own “Photoshop”, for example.

Approaching programming any other way now seems to me like a waste of human agency. There are social problems we could be focused on if we were less focused on butts in chairs cranking out code.


Socialism

When I got out of my middle class bubble, made friends with people working multiple jobs and struggling to make ends meet, and experienced a period of financial instability; I began to realize that something wasn't working in our current system.

Then I started struggling with burnout and other issues and found that corporations were happy to just replace me. I also found that management wanted programmers to be replaceable cogs instead of professionals. At that point I started to suspect that the idea of a dignified professional lifestyle may not be true.

I observed that technology and products were getting worse over time. For example, Google search has become mostly useless and it's hard to find products that are made to be repaired. I concluded that the invisible hand and/or the price theory of value were not true.

Then I saw Republicans gain power and run up the national debt. I also observed that when wages actually started increasing the economy fell apart and the Fed started taking steps to prevent wage increases. I concluded that "free market" rhetoric was a lie.

At that point I looked for alternatives and found socialism. In particular the "social democracy" strains of socialism as opposed to those advocating central planning or anarchist organization.


Observing the debate in the US from the outside, I believe the term "socialism" has to die. It's carrying so much baggage that it's a major reason the US cannot have a sensible debate over what alternatives there are to full-blown free-market capitalism.

While also somewhat contested in Germany, and not free of (some) valid criticism, I would advocate to try and use the term "Social market economy" [1].

[1] https://en.wikipedia.org/wiki/Social_market_economy


> full-blown free-market capitalism

Well, full-blown free-market capitalism would be one alternative to the current US system, albeit probably not the best.


A big one for me not necessarily for “socialism” but for questioning the bog standard capitalist narrative was seeing how the quality of content often goes down when creator monetization is added on a platform.

That’s not supposed to happen. Adding a way for people to make income is supposed to incentivize better content and allow people to invest in that content. Instead what you get is a flood of addictive and sensational / tabloid trash.

… or another way of looking at it is that you do get “better” content but better is not defined in a way that truly benefits anyone.

YouTube is the most dramatic example. The quality of the whole platform tanked hard when monetization was added.

After seeing this a few times I have started noticing it everywhere. The profit motive just doesn’t incentivize quality the way we are taught that it should. It can when the incentives align but they often do not.

Another common example is actual reductions in quality to drive more spending like engineered obsolescence or “nerfing” software to drive lock in. The incentive is to produce an inferior product since that is most profitable.

This doesn’t mean I think the answer is a government bureaucracy running everything like a single monopoly. That leads to a whole other set of perverse incentives and an inability to go elsewhere.


> ... Adding a way for people to make income is supposed to incentivize better content and allow people to invest in that content. ...

What it does is incentivise more content. More is not always better for everyone, and it is even less likely to be better for those who were early adopters of the platform - but it usually is more popular. It's just democracy in action: the largest numbers decide, and sometimes privileged minorities lose out.


It’s not quantity. It’s a change in the nature of the content. People start chasing the algorithm and trying to amp up the addictiveness of content. You end up with a cross between a casino slot machine and a supermarket tabloid.


Interesting. Here in Czechia, “socialism” and “social democracy” are two very different things — one is the oppressive regime we had decades ago, the other is the current system that works reasonably well.


Czechia is still undergoing catch-up growth, much like Germany during the Wirtschaftswunder (that made their "social market economy" widely popular) and emerging countries today. When people say "social democracy" doesn't work all that well compared to leaner approaches, they mean countries where that growth process has concluded. With an overregulated economy and a high burden of all sorts of excess red tape, they tend to get stuck in a middle-income trap that leads to widespread unhappiness.


This is not an attack on you; you are entitled to your opinion and to voice any or all of it to whom you choose, largely because you don't live in a socialist society.

My opinion, however, is that my parents did the right thing to drag me out of such a system and into a capitalistic one. I am watching my peers suffer.


>In particular the "social democracy" strains of socialism as opposed to those advocating central planning or anarchist organization.

Yes; most people get hung up on terminology and/or specific interpretations, but I am convinced this is the natural order of things for the human species. The balance without going to extremes is what is important.


> Most people get hung up on terminology and/or specific interpretations

Because terminology matters, and some terms are quite misleading. What some people tend to miss about the "Socialism with Scandinavian Characteristics" that folks claim to like these days is that Scandinavian countries are actually near the very top rankings by economic freedom and lack of excess regulatory burden. I.e. they're actually some of the most free market and capitalist, while applying effective redistribution after the fact (leading to moderately high tax rates). So I don't get why people decry capitalism and free enterprise while praising Scandinavia as "successful" socialism? It makes no sense.


I’ve thought for years that the ideal would be a very free market with very little regulation combined with a basic income and a strong social safety net.

Let people go wild and try stuff and do whatever but raise the floor up to the point where people can recover from failure and where the less fortunate are not suffering.

This kind of “social capitalism” could be freer and less regulated than what we have now.


Sure, but the whole issue is how to get from here to there given the current regulatory environment. Adding even more red tape and wasteful government spending as in the orthodox "socialist"/"anti-capitalist" approach is unlikely to be helpful.


Here's a concept I still don't get, even after multiple attempts: quaternions!

I mean I get what they are and how they are used and could do operations with them by following the rules. But... I still don't really understand them.


When I was 12 my wrestling coach was getting his PHD in fluid dynamics. He gave a tour of the lab where he worked. On screen was a simulation. He explained that they fly a plane back and forth through a certain column of air in the stratosphere to get starting data. Then, using his model he predicts what the flow will look like over a period of time. There were specks flowing through the space onscreen. I asked what they were. “Bugs” he said. Six years later I was sitting in the library studying and I slapped my forehead. “Bugs! Of course! Ha!”


I don’t get it.


One thing I feel is clicking for me this year is the idea of inelastic systems, be it monetary systems or one's life occupation. The name of the game is staying elastic, and rigidity is the end of it. Unfortunately, as we age (and as systems age) rigidity seems to be the default path, and it takes greater and greater power and aptitude to fight that.

One I’m working on still is the unfairness of life. I understand it is unfair down to the level of the cell but how to come to terms with that I still grapple with.


> no one is coming to save you

Thought that was about working extra hard to make money and CYA, but really it's about finding your own happiness, regardless of what it may be in a money / status / property sense.

> everything related to RSA & trapdoor algos

Like, I got the rough idea and implications, but it didn't "click" until I was trying to explain what the square root of 11 was to my nephew. Was helping him with homework and was doing everything in my head just fine until we hit that, and I had to stop and think, and it clicked.
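
For anyone who wants the asymmetry spelled out, here's a toy sketch with deliberately tiny, made-up numbers (never use values this small for real; needs Python 3.8+ for the modular inverse):

    # toy RSA: going forward is easy for anyone; coming back is only easy with the trapdoor d
    p, q = 61, 53
    n = p * q                  # 3233, public modulus
    phi = (p - 1) * (q - 1)    # 3120, secret (requires knowing the factorization)
    e = 17                     # public exponent
    d = pow(e, -1, phi)        # 2753, private "trapdoor" exponent
    m = 42                     # the message
    c = pow(m, e, n)           # encrypt: anyone can compute this
    print(pow(c, d, n))        # 42 -- decrypt: feasible only if you know d (i.e. can factor n)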


Bayes theorem really clicked when I saw 3Blue1Brown's visual representation. https://youtu.be/HZGCoVF3YvM


>longer wires mean more resistance while thicker wires mean less resistance.

Intuitively (perhaps incorrectly) I would assume that it's like trying to force a fluid through a skinny long pipe vs a wide pipe - the skinny long pipe will have higher pressure inside.

Or: the long thin wire has more distance for the current to travel (= more resistance), while the thicker wire has more "options" for the current to choose a path of least resistance (literally), tending towards a lower overall resistance compared to a thinner wire of the same length.
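
That intuition matches the formula R = ρL/A (resistivity times length, divided by cross-sectional area). A tiny made-up comparison:

    import math
    rho = 1.68e-8                          # resistivity of copper, ohm-metres
    def resistance(length_m, radius_m):
        return rho * length_m / (math.pi * radius_m ** 2)
    print(resistance(10.0, 0.001))         # long, thin wire  -> higher resistance
    print(resistance(1.0, 0.004))          # short, thick wire -> lower resistance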


For me it was design patterns. I read the book in the late 90s. I was working mostly on Mac development using PowerPlant. But despite the fact that it used several of the design patterns described in the Gang of Four book, many of them still didn’t seem very clear to me. Then when MacOS became OS X, and I started using Cocoa, it just clicked. Having delegates finally made sense, for example. Previously, it was so abstract, but once I saw it in action within Cocoa, it made a lot more sense for some reason.


"The Innovator's Dilemma". I read the book and I took the class in b-school.

At the moment I thought it was OK, fast forward a decade and it's one of the pillars of my business thinking.


Differential equations took a couple of years to grok. I first encountered them in high school, while preparing for the physics Olympiad. I could solve basic differential equations by following the "rules" (such as for a damped oscillator), but I didn't understand what was going on under the hood. When I did some more math courses in university, differential equations suddenly clicked and made sense (and I could even derive some of the rules).


I failed calculus twice in college.

Then a summer went by before the third time I took it.

When I stepped into class that third time, everything clicked and it all felt very obvious to the point where I could anticipate where the lecturer’s equations were headed.

I stopped attending class and still got an A. I even ended up helping classmates in a study group.

I still can’t explain what that brain process was that resulted in the pieces subconsciously lining up over the summer.


Any books you could recommend?


Systems in equilibrium. A lot of my college engineering courses had these (what seemed to me to be) hand-wavy assertions of equality and what seemed like just an assumption that the system would converge to that point.

I was probably in my late 30s or early 40s before I really grokked why that tended to be true. (I could blindly accept and grind through the equations to get the answers in college, but it was decades later that I developed a feel for why.)


Monads in a software engineering context ("a particular set of rules for composition of two pieces of code"); defunctionalization; Lisp structural macros; fexprs; the Rust lifetime system; how to structure functional programs. All of these things had a delta in years between when I first encountered them and when I finally understood them, with repeated (4-8) spaced exposures over that time period contributing.


1. Being able to derive bottom-up dynamic programming leetcode solutions (a rough sketch follows after this list). The only way I really "figured out" how to do this was by reading the DP chapter out of the Cormen algorithms book. It's crazy how illuminating (and rare) clear explanations are

2. The chain rule. I knew the actual rule from a calc class, but the intuition behind it didn't make sense to me until I read a couple pdfs on backpropagation

3. Money brings out the worst in people
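
On point 1, a minimal bottom-up sketch in Python, using the classic coin-change problem as a stand-in (most of the usual leetcode DP problems follow the same shape):

    # bottom-up DP: fewest coins needed to make every amount from 0 up to the target
    def min_coins(coins, target):
        INF = float('inf')
        dp = [0] + [INF] * target            # dp[a] = fewest coins summing to amount a
        for a in range(1, target + 1):
            for c in coins:
                if c <= a and dp[a - c] + 1 < dp[a]:
                    dp[a] = dp[a - c] + 1    # build on already-solved smaller subproblems
        return dp[target] if dp[target] != INF else -1

    print(min_coins([1, 2, 5], 11))   # 3  (5 + 5 + 1)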


Homologation implies not only "the granting of approval by an official authority", but compliance with a social agreement about unification of solutions / standardisation of thought.

I've always thought that homologated solutions are parts of a bigger whole, bigger "homo". Stating that the producing party has all the right knowledge to make the part fit and adhere to safety guarantees.


Everything is a monoid to be useful.

Now I understand more about why 1+1=2.


It took me like 4 years of playing/training table tennis to understand what being relaxed when hitting means and what it feels like. It's very easy to tense up and not have all your muscles working harmoniously, but it takes years of practice to simply "be relaxed" when playing.


Shader and OpenGL programming. There was a lot of unfamiliar jargon, plus new concepts and computation models. It took me a while and a couple of tries to get the hang of it.


What were you trying that allowed you to get the hang of it? Also were there any online resources that helped?


Wrote 3D games and simulations. That forced me to go through the full pipeline end to end.

Also, UC Davis professor Ken Joy's Computer Graphics course [1] is the best intro course on 3D graphics programming, covering many key concepts and the math. I found it better than any other courses from MIT/Berkeley/CMU/etc. It doesn't have some of the newer techniques from recent years but is still very much relevant and forms the foundational knowledge to level up.

[1] https://www.youtube.com/watch?v=01YSK5gIEYQ&list=PL_w_qWAQZt...


For me, it's the determinism of nature. The clear hard fact that nothing I can do will impact the pre-determined outcomes. This is cold physics as we know it today. Also, the 'present' feeling different from the past is just a trick our mind plays on us. This realisation has had a profound impact on me - but it dawned on me quite late, despite reading a lot on this topic for years.


The importance of clean code, architecture and testing. That was during university, when we had a serious diploma group project in the last semester.

Also the importance of reading books, which came even later. My mother told me this all the time, but school books were not the best choice. In high school I started reading fanfics, then occult books, and finally I landed on self-help books and ones related to my SWE career.


For me it was lambda calculus. I remember in high-school going through the motions of beta-reduction but having no idea what it meant. Much later I think I finally saw an implementation of natural numbers and addition using bare lambda calculus and it clicked that you really can represent any computation with just variables, abstraction, and application.
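
For anyone curious what that moment looks like, here's a rough sketch of Church numerals in Python-flavoured lambda calculus (just variables, abstraction and application):

    # a Church numeral n is "apply f, n times"
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    two = succ(succ(zero))
    to_int = lambda n: n(lambda k: k + 1)(0)   # decode by counting applications
    print(to_int(add(two)(two)))               # 4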


Quantum mechanics as continuous probability distributions.

It took me about 3 tries at taking QM classes to learn and understand that the whole thing was about continuous probability distributions (at least, the beginner, static stuff). And finally to understand what continuous probability distributions are, and how to use them.


It's not really a discrete concept, but the course western philosophy took over the past 500 years, and its dialogue with science, was something that took me decades to appreciate. In college I sort of understood the outlines of this development, but years of learning, reflection, and life experience have really solidified it for me.


That Pareto is everywhere, and that when applied to knowledge, a comprehensive study plan (like a course or a book) for most subjects tends to obscure where the meat is.

Wake-up call:

https://sive.rs/kimo

Corollary: a place where there's a fat-ass book of "best practices" has probably lost its focus.


Bayesian probability or Bayesian thinking in general

Like many, I learned Bayes' theorem for an exam. I even did well. But it clicked only when I was reading The Scout Mindset (by Julia Galef). I cannot really tell why. I think it helped me connect math formalism with more real-world examples outside of statistics.


Sadly, React. I was using Angular 1, and Vue made much more sense. I'm not a frontend developer, though, but I needed to learn React, and it happened too late. At least by the time I needed to learn it, components were no longer classes but functions, which made the learning curve somewhat gentler for me, although it's not much of a difference.


The Nyquist frequency. Looking back, it's hard for me to understand why I had trouble with it. I remember a friend telling me it was really simple but at the time I just thought it was because he was smarter than me or something. I guess I didn't see a good demonstration of aliasing until years later?
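
In case a quick demonstration helps anyone else, here's a made-up example: a 9 Hz tone sampled at 10 Hz (below the 18 Hz you'd need) produces exactly the same samples as an inverted 1 Hz tone, which is aliasing in a nutshell.

    import math
    fs = 10                                                              # sample rate in Hz
    high = [math.sin(2 * math.pi * 9 * n / fs) for n in range(20)]       # 9 Hz tone
    low  = [-math.sin(2 * math.pi * 1 * n / fs) for n in range(20)]      # 1 Hz tone, inverted
    print(all(math.isclose(h, l, abs_tol=1e-9) for h, l in zip(high, low)))  # True: indistinguishable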


Basically all of calculus, but especially multivariable and differential equations. I was first introduced to them in high school and again in college. I struggled with them in high school but breezed through them in college. Not sure if I just got smarter, worked harder, or some combination of the two :)


A bit more light hearted, but I would’ve had far more dates (and one nighters) if I’d smiled more and worked on my small talk.

It only dawned on me many years later that:

a. I was physically attractive to the opposite sex, especially during my teens and twenties, regardless of my ethnicity.

b. my blank expression was intimidating.


Everyone should practice being approachable and talking to strangers. Not just for expansion of romantic opportunities but basically every kind of opportunity.


MPI.

A prof teaching a grad-level course on parallel computing mentioned in class that even he himself took a few years to get used to programming in a message-passing model like MPI.

Funnily enough, even though I don't program in MPI extensively, after a few years my fear of writing MPI just went away.


Programming languages don't matter.


When we think about history over any time scale, we tend to think about one region, person, group, etc within that period of time. In reality, “everything” was going on during that time.

History is far richer than we tend to realize. So much is going on all the time; it’s impossible to grasp it all.


Programming in general. I tried probably half a dozen different times throughout my early teens to understand how to write programs in Pascal, C, Python, Java, and it wasn't until I was 18, when I found Zed Shaw's "Learn Python the Hard Way", that it clicked.


Steely Dan


(Got a smile out of me, both because of the band name and its original meaning.) Do you mean the whole discography? Because I really like their earlier stuff but lost interest at one point. Asking because if the enlightenment came from the whole discography, then I'm more interested in taking new dives.


Actually, skip the question... I went through their albums non-linearly and got to Aja, so basically I've already heard nearly everything but have skipped a few. Still a smile :)

I thought they did more stuff in the 80s...


UML

When I first saw it, I thought it was a process step that was unnecessary.

Then I thought it was a way to program visually, but generated code only works in narrow domains.

Now I realize that UML is a tool to define your system, and it lays bare your assumptions, which is one of the hardest problems in computing.


UML is hard.

Any abstraction can evolve into wasted time not making the thing.


Anyone who excels is out of their depth.

By definition, if you continue to progress (in career or life), you will be doing something you have never done before. This means everyone we look up to who appears to be on an upward trajectory is making it up as they go.


I didn't really understand HTTP until I started using Fiddler which was probably 10 years after I had written my first web server application. "What do you mean I can't change the headers because they've already been sent?"


Coriolis force. It's pretty basic but I never really got it until I watched the Tom Scott video https://www.youtube.com/watch?v=bJ_seXo-Enc


Data visualization. I had many years of experience with data analytics, but only when I read "Now you see it" by Stephen Few I started understanding how data visualization is linked to the way the human eye and brain work.


Understanding your data representation is more important than understanding how to write code. It makes sense once you've used code to solve problems a few times, but when you're just starting out you don't get it.


Central Banking 101 by Joseph Wang does a great job of explaining the repo and reverse repo markets.

https://books.google.com/books?id=wPs_EAAAQBAJ


call-with-current-continuation. First ran into it as a teen and the concept pretty much completely bounced off my brain. I think it took a couple more tries before I started getting it, and arguably I'm still working on it.


The concept of decorators in Java and Python - I know what they're supposed to do, but each time I find myself grinding my teeth when using them, thinking there's a better way of doing the same things more simply.


Not my click exactly, but hearing the "Unicode sandwich" described ~15 years after studying Unicode and using it in production was the last piece of the puzzle no one had ever mentioned before. Or perhaps just not so clearly.
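
For anyone who hasn't heard the phrase: decode bytes at the edges, work with str in the middle, encode again on the way out. A tiny sketch with made-up data:

    raw = 'naïve café'.encode('utf-8')   # bytes arriving from the outside world
    text = raw.decode('utf-8')           # decode at the boundary (bottom slice of bread)
    shouted = text.upper()               # inside the program: work only with str
    out = shouted.encode('utf-8')        # encode at the boundary (top slice of bread)
    print(shouted)                       # NAÏVE CAFÉ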


Functions in calculus... I never understood them until I learned about functions in programming several years later. They're the same damn thing! I really wish I had learned to program before learning calculus.


Hopefully, you learned about functions long before calculus...

When we learned about functions back in something like 5th grade, I noticed that my classmates were very confused by the concept. I also remembered that I found the explanations in the book and the ones from our teacher quite unclear.

My guess back then was that the notation was partly to blame for that -- and I still think that's true.

So, my questions to you are:

1) Were you taught the "f(x) = 3x + 2"-style notation?

2) Have you seen the "x --> 3x+2"-style before? Or the "f: x --> 3x+2"-style, where we give the function the name f?

3) Did you find it confusing that f(x) was sometimes used to refer to the function and sometimes to its value when applied to x? The notation of question 2 should eliminate that confusion.

I'm guessing that part of the confusion was also that your teachers were unclear on what functions could be used for -- but didn't they ask you to draw graphs for various functions? And didn't they also introduce functions like sin/cos/tan?


That sometimes the extreme needs to manifest in order to pull toward the middle


Specification vs implementation: there is only one specification, but there can be many implementations. There is only one Python specification, but it has many implementations, CPython being the most popular.


The relationship between entropy and states of matter (including pressure and temperature). How “degrees of freedom” are calculated and compared for gas molecules, if they are infinite. All that stuff.


Haskell monads.


There is a common saying that if you can't explain something to someone else then you don't really understand it yourself. Monads made me realize that some things need to be experienced in order to be understood.


A bon mot I saw on a sign outside a liquor store: When I was young I thought a $1000 was a lot of money. Now I’m older I realize that $1000 is a great deal of money indeed.


The importance of play.


Entropy. Both in information and in thermodynamics, and how brilliantly they are connected. The audiobook "The Big Picture" by Sean Carroll has helped a lot.


Objective-C message passing and variable declaration syntax - it's so different from C, C++, Java, JavaScript, Ruby, Python and other popular OOP languages.


Ram Dass - Be Here Now

Amazing book with mind blowing illustrations about life, purpose and spirituality. Changed my life. Every time I re-read it I discover something new.


• Basic concepts of dynamic programming and precomputation.

During lockdown I attended a course that prepared students for the IOI. I couldn't focus/follow at home via web conferencing. A year later I attended some lectures from the same course, but in person. I understood everything when it was explained in person.

• The basic concept of buckets in distributed hash tables. After reading BEP-0005 I was left puzzled and left the topic for some months. Reading the Wikipedia article about Kademlia, first understanding static buckets and then transferring this to dynamic buckets with splitting, was way easier to grasp.


Socrates’s “all I know is that I know nothing” and “know thyself” more and more reveal to me something new about life, philosophy, and the nature of being.


The null concept being referred to as "the billion dollar mistake" - it didn't click until discovering optionals and gradually opening up to FP, category theory, etc.


The “billion dollar mistake” was not the null concept itself, it was having all reference types be nullable, i.e no distinction between nullable and non-nullable in the type system.
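
A small illustration of that distinction, using Python's optional type hints as a stand-in (any statically checked language with non-nullable references works the same way; the function names here are made up):

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # the type says "maybe absent" -- callers are forced to consider None
        return 'alice' if user_id == 1 else None

    def greet(name: str) -> str:
        # this type says "never None"; passing find_user(...) straight in
        # gets flagged by a type checker until the None case is handled
        return 'hello ' + name

    user = find_user(2)
    print(greet(user) if user is not None else 'no such user')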


For me it was basic statistics. Statistical significance and variability.

Had Statistics in college, but only understood it now that I use it to understand data.


I'm currently doing a dive into classic distributed systems papers, mainly from the 70s (actor model, logical clocks, that kind of stuff).

I'd "understood" the concepts before, but now because I am:

- brushing up on my math to understand every equation or proof they drop in there

- reading them in combination with applied stuff that uses the same concepts, i.e. the "Designing Data-Intensive Applications" book

- reading over them slowly, I want my fundamentals to be strong and etched into my head

Things are clicking in a way they never did before.

TL;DR - studying compsci concepts slowly, from multiple angles (from the purely mathematical to practical engineering), gives a different level of understanding from doing just one, or assuming your mind will bridge the gap.

YMMV.


It was very hard for me to understand what the point of classes was when I opened my first C++ book when I was... 11 or 12?

Thankfully it became very clear as soon as I started having actual projects and noticing that all the C APIs I was writing looked like:

    foo* foo_create();              /* constructor: allocate and initialize a foo */
    void foo_destroy(foo*);         /* destructor: clean up and free it */
    void foo_set_stuff(foo*, int);  /* "method": the foo instance is passed explicitly */
    void foo_bloberize(foo*);       /* another method on a foo instance */


Forward-Backward algorithm before there were all sorts of resources and explanations on how it works online.

The wikipedia page for it explains it well.


That the integral I-V relations for an inductor and capacitor are the fundamental forms, not the differential relations.


Recursion. I tried and tried and tried. Only when I untried the exact same number of times did it click.


Orbital mechanics. It was after playing KSP that I had that "woah! this makes sense!" moment.


Eating fiber.


The purpose of C++ template generics only made sense to me after I started using Haskell.


I have to agree. I couldn't understand templates until Haskell introduced me to it. But Haskell is expressive enough that the intuition comes naturally.


That there will always be work to be done.

Whether you define it as working on yourself, on your relationships, fixing up the house, the car, washing the dishes.

I guess I sort of lived as if there would be some kind of 'over the rainbow' someday when I could retire and just chill, but this year when watching 'Stutz' on Netflix, where he states that there are three inescapable things in life: "pain, uncertainty, and constant work," the work part really sunk in for me.

I've been watching my mother try to create a kind of peaceful cocoon with her new house and her manicured lawns and list of friends curated down to only the ones which don't ever challenge her etc, only to find that even nearing her eighties she constantly has surprises and challenges intruding on the peace she is trying to cultivate.

The tldr; of it is that I've been looking at work the wrong way, i.e. I've been trying to avoid it or push it away, but I feel I should embrace it. Some recurring tasks can even be sort of comforting in that it creates a routine.


Shaders.

I tried shader tutorials multiple times for years and they only started to click recently.


Any good resource on that? Bonus if it’s for Unity


Ben Cloward - Shader Graph Basics for Unity - https://www.youtube.com/watch?v=OX_6_bKpP9g&list=PL78XDi0TS4...


The theory of relativity. It took me a while, but now I think I've got the main gist.


I'm not sure if this is even possible, but I'd appreciate a "relativistic simulation for dummy programmers" handbook.


People like to pretend they didn't work hard for what they have.


Internet Protocols. Specifically the beauty and simplicity of IPv4


It took me 4 tries over 10 years to get ANTLR to work / to grok it.


Object-oriented programming, with public, private, classes, inheritance, parameters... ugh. It took much longer to internalize these than I'd care to admit.


git.

Only https://www.cduan.com/technical/git/ worked for me.


Gender as a social construct, separate from biological sex.


The basics of software planning, from reading The Phoenix Project.


Data structures are more interesting than algorithms.


The simple beauty of the calculus integral.


Refactoring.


Formal logic.


Monads are just a design pattern.


Krebs cycle.


blockchains as a single source of truth

always seemed like a shitty expensive database for 7+ years


What made you believe that they are not a "shitty expensive database" after all?


In the ecosystem of a single blockchain, they are the single source of truth: an open, API-like thing that no company can control.

Therefore it's safer to build on than on Twitter, Facebook, Apple, etc.

The entire ecosystem may be a fraud, but if it is not, then it's incredibly enduring. I have more faith that I can get the balance of an account from Ethereum in 10 years than I do in my bank's API staying stable or open.


Fréchet differentiation.


Bitcoin and the need for an alternative to the US dollar and gold as reserve assets


The cascade in CSS.


There are no rules.


Fourier transform


Eigenvectors


OOP on Java.


The idea that atheism is a form of religion.


Or just a personal relationship with reality. :P


Privacy.


The bowline


Docker.


Trig


(1) Pointers

(2) Recursion



