notdonspaulding's comments

If I were to formalize my colloquial understanding of the phrase "ripped off", it would be something like: a transaction took place between two parties, and evaluation of the results after the fact revealed that one party's expectations were not met; had the offended party fully known how the transaction was going to play out, they either would not have entered into it or would have entered into it only for a drastically different amount of money.

This morning, I went to a bakery and ordered a blueberry muffin. It was fresh and moist and delicious. I was satisfied. If it had instead cost the same money but been dry and crumbly and had a taste of old socks, I would have considered myself to have been "ripped off". But I put no thought into the transaction, I had a wide range of acceptable outcomes, and I did almost no due diligence beforehand to ensure I was going to get the expected result.

So now, you ask:

> Why does having a contract eliminate the possibility of getting ripped off?

If transactions exist along a possibility-for-ripoff spectrum from "ordering a blueberry muffin for breakfast" to "hiring a C-level exec for X years for $YYY,YYY,YYY", then the closer you get to the latter transaction, the more specific the contract should be about expectations.

Every step along the way to having a signed contract normally involves an increased awareness of expectations by both parties. That's what the contract is. And the moment of signing a contract is an intentional decision to move forward with the transaction as specified. It's certainly possible to subvert the normal process in such a way that you obfuscate your intentions, but even then, the results of your intentions must show up in the contract language itself, or they won't count. Unless someone is holding a gun to your head, or you're not intellectually capable of entering a contractual relationship, the moment of signing your name to a contract is a moment where you are affirming that you aren't being "ripped off", that you've considered the terms and are willingly agreeing to abide by them.

Perhaps it's still possible to do all that and consider yourself to have been ripped off. And perhaps your examples are good examples of such cases. But if your board or C-level executives sign a contract for employment of a high-level executive which includes a 9-figure deal for compensation and they don't have more awareness than a poor person going to a car lot, they didn't get "ripped off", they're ripping off the investors or shareholders by collecting whatever salary they're collecting.

IANAL, also not a C-level exec, also not particularly smart about how crypto "works", but this all seems like fairly obvious stuff.


I struggle to think of real-world examples where I've just needed to chain and chain and chain values of different types more than a handful of times. The claimed need for the pipe operator is this construction:

    function bakeCake() {
      return separateFromPan(coolOff(bake(pour(mix(gatherIngredients(), bowl), pan), 350, 45), 30));
    }
The piped code looks like:

    function bakeCake() {
      return gatherIngredients()
        |> mix(%, bowl)
        |> pour(%, pan)
        |> bake(%, 350, 45)
        |> coolOff(%, 30)
        |> separateFromPan(%);
    }
Which is... fine? It certainly looks better than the mess we started with, but adding names here only helps clarify each step.

    function bakeCake() {
      const ingredients = gatherIngredients();
      const batter = mix(ingredients, bowl);
      const batterInPan = pour(batter, pan);
      const bakedCake = bake(batterInPan, 350, 45);
      const cooledCake = coolOff(bakedCake, 30);
      return separateFromPan(cooledCake);
    }
Even if you consider the `const` to be visual noise, the names are useful. At any point you can understand the goal of the code on the right-hand side by looking at the name of the variable on the left-hand side. You can also visually scan the right-hand side and see the processing steps. You can also introduce new steps to the control flow at any point and understand what the data should look like both before and after your new step.
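The claim about introducing new steps is easy to illustrate: a hypothetical pokeTestHoles step (invented here purely for illustration, with stub functions standing in for the real ones so the sketch runs) slots in between bake and coolOff, and the names on either side tell you what the data looks like before and after it:

```javascript
// Stub implementations so the sketch runs; the point is the names,
// not the baking logic.
const bake = (batter, temp, minutes) => `baked(${batter}@${temp}F/${minutes}m)`;
const pokeTestHoles = (cake) => `poked(${cake})`; // hypothetical new step
const coolOff = (cake, minutes) => `cooled(${cake}/${minutes}m)`;

const bakedCake = bake("batterInPan", 350, 45);
const pokedCake = pokeTestHoles(bakedCake); // slots in without disturbing its neighbors
const cooledCake = coolOff(pokedCake, 30);
// cooledCake === "cooled(poked(baked(batterInPan@350F/45m))/30m)"
```

Neither the line above the new step nor the line below it had to change its meaning; only one variable reference was rewired.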

I agree that the control flow is more clearly elucidated in the pipe operator example, but it tosses away useful information about the state that the named variables contain. It also introduces two new syntactical concepts for your brain to interpret (the pipe operator and the value placeholder). I contend the cognitive load is no greater in the example with names, and the maintainability is greatly improved.

If you have an example where there are dozens of steps to the control flow with no break, I'd be really curious to see it.


Imagine that you asked someone the question "How do you make a cake?" Which response would be clearer?

1. Gather the ingredients, mix them in a bowl, pour into a pan, bake at 350 degrees for 45 minutes, let it cool off and then separate it from the pan.

2. Get ingredients by gathering the ingredients. Make batter by mixing the ingredients. Make batter in a pan by pouring the batter in a pan. Make a baked cake by baking the batter in the pan at 350 degrees for 45 minutes. Make a cooled cake by cooling the baked cake. Separate it from the pan.

For me personally #1 is more readable because #2 is unnecessarily bloated with redundantly described subjects.


Right, it works for your analogy.

Going back to the concrete scenario GP presented, naming things makes it much clearer to me.


In fact, I'm so fanatical about naming things, I'd probably give the two magic numbers and the return value names as well:

    function bakeCake() {
      const bakeTemperature = 350;
      const bakeTime = 45;  // minutes
      // ... 
      const bakedCake = bake(batterInPan, bakeTemperature, bakeTime);
      // ...
      const finishedCake = separateFromPan(cooledCake);
      return finishedCake;
    }
And I'd not look at a code review which quibbled about the particular names I chose as being a waste of time either. Time spent in naming things well is the opposite of technical debt, it's technical investment. It pays dividends down the road. It increases velocity. It makes refactoring easier. It improves debuggability. It makes unit tests easier to see.


Should make it an async function, and await the bake step. ;-)


Sometimes intermediate values either don't have domain specific meanings or the meaning is obvious from the function name that returns this temporary value.

Then naming it is just noise.

If your bake() function were instead named createBakedCake(), then naming the returned value bakedCake just increases reader fatigue through repetition.

Same way

Random random = new Random();

in C# is worse than

var random = new Random();


> Sometimes intermediate values either don't have domain specific meanings or the meaning is obvious from the function name that returns this temporary value.

I don't necessarily disagree with this. But even granting that this is true: congrats, you've just found the worst part of giving these intermediate steps a name! Like, that's the worst case example of the cost side of the tradeoff we're discussing here. And it's not that big a cost! Like, of all the code you write, how much of it fits this case? Where you're writing a function where there's a lot of sequential processing steps in a row with no other logic between the steps AND the intermediate state doesn't have any particular meaning?

In that worst case, you have a little extra information available (like your `Random random = new Random()` example) that your eyes need to glide past.

I would wager your brain is more used to scanning your eyes past unnecessary information and can do that with less effort and attention than it can either:

    - bounce back and forth between the chained function calls of the original nested example.
    - synthesize the type and expectations of the intermediate value at any arbitrary point in the piped call chain.
That last thing is the big cost of not naming things. In order to figure out what the value should look like at step 4, you have to work backwards through steps 1-3 again. And you have to do that any time you are debugging, refactoring, unit testing, adding new steps, removing existing steps, etc.

And the work to come up with "obvious" names isn't hard. Start with the easy name:

    batterInPan = pour(batter, pan)
And if the name batterInPan never gets any better and never really helps anyone read or debug or refactor or unit test this code, then in that sense, I guess it's a "waste". I just claim that this case is far less common in the real world and far less costly than having to untangle a mess of unnamed nested or chained call values.

Or maybe you want to just start with the unnamed nested or chained calls, and when you need to read or debug or refactor or test your code you pay the "naming things" price tag at that point. That's often the first thing I do when I come across code with a dearth of names, I just give everything a boring, uncreative temporary name, and then I can do whatever work I showed up to this code to do. It's not ideal, but it's better than every JS library sprinkling a new bit of syntax in just so they can avoid giving their variables names and can use an overloaded modulo operator instead.


> But even granting that this is true: congrats, you've just found the worst part of giving these intermediate steps a name!

Yes. But given that people would usually burn you at the stake for naming a function bake(), because it tells you nothing about what the function expects or returns and only the bare minimum about what it does, this scenario actually comes up very often; naming your functions informatively matters a great deal because they are part of the API.

If you really have functions like bake() or pour() in your code, especially in a weakly typed language, then for the love of God, yes, please name the variables that you pass to them and get back from them, always and as verbosely as possible.

Don't get me wrong, I'm very fond of naming intermediate things too. And with helpful IDE it can even tell you the types of intermediate things so you can better understand the transformations that the data undergoes as it flows through the pipeline.

But sometimes the type, which an IDE could also show automatically with |> syntax, is even more important than the name for understanding. VS Code does something like that for Rust when chaining method calls with a dot: once you split a dot-chain into multiple lines, it shows you the type of the value produced on each line.

My personal objection to naming temporary values too much in a pipeline is that it obscures the distinction between what's being processed and what are just options of each processing step. But I suppose you might keep track of it by prefixing the names of temporary values with something.

> Or maybe you want to just start with the unnamed nested or chained calls, and when you need to read or debug or refactor or test your code you pay the "naming things" price tag at that point.

Yeah, that's usually what I do. I start with chains and split them and pay for the names as I go.

> That's often the first thing I do when I come across code with a dearth of names, I just give everything a boring, uncreative temporary name, and then I can do whatever work I showed up to this code to do.

I'm also splitting and naming stuff in that case and checking types along the way. But I prefer that to encountering the code named verbosely and wrongly. Then I need to get rid of the names first to see the flow then split it again sensibly. Of course I don't usually commit those changes in shared environments. Only in owned, inherited ones or if the point of my change is to refactor.

Granted, chaining class member accessors mostly covers up this problem of naming intermediate things if you use classes. That's why we even survived without pipe syntax. But since we would like to move away from classes a bit to explore other paradigms, maybe it's time?


Also the second example is easier to manipulate. You can hack in branches, logging etc. during development. I'm also not sure how the proposal tries to solve the problem that we can't easily pluck out members from an object in the first example. Will people just write something like `get(obj, "member")`? Or maybe they thought about this?


How about

    function bakeCake() {
      return do(
        () => gatherIngredients(),
        ingredients => mix(ingredients),
        batter => pour(batter, pan),
        batterInPan => bake(batterInPan, 350, 45),
        () => coolOff(bakedInPan),
        cooledCake => separateFromPan(cooledCake)
      );
    }


...which is just

   function bakeCake() {
      const ingredients = gatherIngredients();
      const batter = mix(ingredients);
      const batterInPan = pour(batter, pan);
      bake(batterInPan, 350, 45); // this is an in-place modifying function, I guess
      const cooledCake = coolOff(batterInPan);
      return separateFromPan(cooledCake); 
    }
...but with an extra `do(...)` wrapper?

It could at least be

    function bakeCake() {
      return do(
        gatherIngredients,
        mix,
        batter => pour(batter, pan),
        batterInPan => bake(batterInPan, 350, 45),
        coolOff,
        separateFromPan
      );
    }
Although if we had function currying, the convention in ML languages is to put the most-commonly-piped-in param last for these functions:

    function bakeCake() {
      return do(
        gatherIngredients,
        mix,
        pour(pan), // assuming that pour(pan) returns a function that pours something into that pan
        bake(350, 45), // assuming that bake(temp, minutes) returns a function that bakes something at that temperature for that time
        coolOff,
        separateFromPan
      );
    }
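Incidentally, a do()-style helper like the one imagined above is easy to write in today's JavaScript. A minimal sketch (renamed pipeline here, since do is a reserved word; this is just an illustration, not part of any proposal):

```javascript
// Calls the first function with no argument, then threads each result
// into the next function in the list.
function pipeline(...fns) {
  return fns.reduce((value, fn) => fn(value), undefined);
}

// Stand-in steps to show the shape:
const result = pipeline(
  () => 2,
  (n) => n + 3,
  (n) => n * 10
); // 50
```

With currying, steps like pour(pan) already return unary functions, so they drop straight into the list without wrapper arrows.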


What this reminds me of is those Cat-extends-Animal hierarchies... In these simple "real-world-inspired" examples it seems to make sense, but in programming I'd say a lot of the time there's simply no good name for the intermediate steps.


Exactly. As the proposal contemplates this alternative, it claims:

> But there are reasons why we encounter deeply nested expressions in each other’s code all the time in the real world, rather than lines of temporary variables.

And the reason it gives is:

> It is often simply too tedious and wordy to write code with a long sequence of temporary, single-use variables.

Sorry, but...that's the job? If naming things is too hard and tedious, you don't have to do it, I guess, but you've chosen a path of programming where you don't care about readability and maintainability of the codebase into the future. I don't think the pipe operator magically rescues the readability of code of this nature.

The tedium of coming up with a name is a forcing function for the author's brain to think about what this thing really represents. It clarifies for future readers what to expect this data to be. It lets your brain forget about the implementation of the logic that came up with the variable, so as you continue reading through the rest of the code your brain has a placeholder for the idea of "the envVar string" and can reason about how to treat it.

The proposal continues:

> If naming is one of the most difficult tasks in programming, then programmers will inevitably avoid naming variables when they perceive their benefit to be relatively small.

Programmers who perceive the benefit of naming variables to be relatively small need to be taught the value of a good name, and the danger of not having a good name, not given a new bit of syntax to help them avoid the naming process altogether.

The aphorism "There are two hard problems in computer science: cache invalidation, and naming things." is not an argument to never cache and never name things. That's mostly what we software folks spend our time doing, in one way or another.


> The aphorism "There are two hard problems in computer science: cache invalidation, and naming things." is not an argument to never cache and never name things.

Sure, it can’t be completely eliminated, but why not do less of a thing that’s hard, when it can be avoided?

Values have a “name”, whether it’s a variable ‘keysAsString’ or the expression ‘keys.join(' ')’. The problem with keysAsString is that you have to type it twice, once to define it and again to use it. It’s also less exact, because it’s a human-only name, not one that has a precise meaning according to the rules of the language. (E.g. a reader might wonder what the separator between the keys was - if you don’t store it in a variable, then the .join “name” tells you precisely right at the site it’s used.) Making the variable name more precise implies more tedium in the writing and reading.

If the value is used twice or more, I would usually say storing it in a well-named variable is preferable, but if it’s cheap or optimizable by the compiler I might still argue for the expression.

This may be an irreconcilable split between different types of thinkers, perhaps between verbal and abstract.


If names are the source of crisis, wouldn’t it be better to define temporary variables without names?

  var [$1, $2] = foo(bar(envars))
  console.log(chalk.bold($2), $1)
Job done, no plumbing needed. Has the same level of semantics as %.


and yet looking through code from the place you work I see something like this

    let field = ve.instanceContext.replace(/(#\/)|(#)/ig, "").replace(/\//g, ".")
Which you apparently claim should be

    const fieldWithHashMarksUnesacped = ve.instanceContext.replace(/(#\/)|(#)/ig, "");
    const field = fieldWithHashMarksUnesacped.replace(/\//g, ".")

https://github.com/mirusresearch/firehoser/blob/46e4b0cab9a2...

and this

    return moment(input).utc().format('YYYY-MM-DD HH:mm:ss')
Which apparently you believe should be

    const inputAsMoment = moment(input);
    const inputConvertedToUTC = inputAsMoment.utc()
    return inputConvertedToUTC.format('YYYY-MM-DD HH:mm:ss')


You've confused method chaining and nesting. The proposal itself says that method chaining is easier to read, but limited in applicability, while it says deep nesting is hard to read. The argument against the proposal by the GP comments is that temporary variables make deep nesting easier to read and do it better than pipes would.


Thanks for taking the time to look and reply.

In your first find, yes, your modification helps me understand that code much more quickly. Especially since I haven't looked at this code in several years.

In that case, patches welcome!

In your second case, as the sibling comment explained, I'm not opposed to chaining in all cases. But if the pipe operator is being proposed to deal with this situation, I'm saying the juice isn't worth the squeeze. New syntax in a language needs to pull its weight. What is this syntax adding that wasn't possible before? In this case, a big part of the proposal's claim is that this sequential processing/chaining is common (otherwise, why do we care?), confusing (the nested case I agree is hard-to-read, and so would be reluctant to write), or tedious (because coming up with temporary variable names is ostensibly hard).

I'm arguing against that last case. It's not that hard, it frequently improves the code, and if you find cases where that's not true (as you did with the `moment` example above) the pipe operator doesn't offer any additional clarity.

Put another way, if the pipe operator existed in JS, would you write that moment example as this?

    return moment(input)
      |> %.utc()
      |> %.format('YYYY-MM-DD HH:mm:ss');
And would you argue that it's a significant improvement to the expressiveness of the language that you did?


|> is for functions the same thing that . is for methods

If you program in object oriented style then . is mostly all you need.

If you program in functional style you could really use |>


I like that the expected variable name has a typo.

Those typos leak out to calling code and it's hilarious when the typo is there 10 years later once all the original systems have been turned off


fieldWithHashMarksUnesacped

The code removes all “#/“ (or just “#” if a slash isn’t there). After that it replaces slashes with dots. How on earth is that “hash marks unescaped”?


My hypothesis about git apologists is that they fall into one of 2 groups:

1. People who have an organizational development model and workflow which is equivalent in complexity to the Linux kernel development model and workflow (i.e. thousands of developers loosely coordinating the release of systems or mission-critical software on a regular cadence). For them git fits their needs better than almost anything else, and it makes sense because it was made for exactly that use case.

2. People who learned their first VCS after Github had reached critical mass and if you had to pick one VCS to start with, you picked git by default. For them their brain fits git, and they think "the git way" is synonymous with "the right way".

Many folks in group 2 never stop to realize that they are on a development team of 5 (or 50, or 500)...and they always cut their release from `master`...and all of their branches are always pushed/pulled from the same Gitlab remote...and within 5 minutes they can chat with anyone who's made a commit in the last 2 years.

I don't take issue with group 1 people who understand the git model and love it and adopt it in their workflow because they need its complexity and they're willing to pay the UX tax to get it. I take issue with the folks who don't understand the git model or realize they don't need any of the complexity and aren't aware of the existence of solutions like fossil and mercurial.


It's different for the same reason math teachers want you to "show your work" in addition to simply writing down the answer to a problem. The way you as a human have approached the problem and worked it out is valuable information, both in the review of the work itself and in understanding what has gone wrong when something has gone wrong.

Think of your branch as being one long multi-day math problem. If I'm grading your work, I don't want you to show me all the parts you think are neat and tidy and important after you arrive at what you think is the answer. I want to see everything you tried, even the stuff that didn't work.

I'm not opposed to only having merge commits on master, but somewhere, on some branch which is recorded for all of time, I want to be able to see every decision that was made to bring HEAD to what it is right now, on the most granular level possible.


There is a path between these two points of view.

Yes, a maths teacher wants to see the working to a problem, but the working can still be the second draft, written neatly and well explained. For a complicated problem, it should not be expected that someone will read through all the “scratch work”.


> For a complicated problem, it should not be expected that someone will read through all the “scratch work”.

Do people really read through the changelog commit by commit? What's gained by that?

I don't read through it at all. I zoom into a point that I need to know more about. I have information (bug report, runtime behaviour on other data) that allows me to zoom into a specific part. The information I have is from the committer's future. It's highly unlikely that the details needed are in the summary that they wrote.


> I am never going to put my life in the hands of some software doing image analysis using machine learning.

Well...on your car, at least. I'm not sure how comforting that approach is when you're surrounded on the interstate by Tesla "FSD"s.


Ironically, I feel at much more ease driving surrounded by Tesla "FSD"s, as opposed to being surrounded by average Seattle drivers. At least FSD always uses turn signals when switching lanes and is way more cautious (while still remaining reasonable).

Back when I spent a full year commuting to work daily on a motorcycle, Teslas with "FSD" were the least of my worries. Meanwhile, people who sharply switched into my lane in front of me with no turn signals almost killed me a couple of times.


I'm ok with doing a bit of defensive driving against Teslas or bad drivers in general.


> original christians (at the time of Paul) were at odds in practically anything, including the idea that Jesus came in flesh (you will find this being condemned in the new testament itself).

What are you saying is being condemned in the NT, exactly?

> Paul (who never knew a real Jesus, as he confess by the way).

Where does Paul confess that he never knew a real Jesus?


> Where does Paul confess that he never knew a real Jesus?

In Galatians 1:12-16, Paul affirms that he didn't learn about Jesus from flesh and blood, but from God's revelation. At no point does Paul say he ever saw a real Jesus, only his "divine revelation", which is a big confession from someone who lived in Jerusalem during the supposed ministry of Jesus. Moreover, given his age he himself should have been among those who killed Jesus, but he never mentions this little fact.


iirc, he actually didn't see anything, supposedly just heard a voice.


I like this idea. I have no clue whether all the concerns here about ToS/legal/chargeback issues are valid, but I've thought a service like this should exist for a while now (whenever a relative asks me how to get started with a website for a simple product they want to sell).

However, I think you need to consider the overloading of the term "link" in your product. I know it's your name, but the example shows just how confusing the overloading of the term is to your users. Here's the breakdown of what my mind does as it scans the example link page:

- Ah, I'm at a site, "Paid Link dot To"

- "This link costs $15 to access" - OK, the seller page I just left is selling me something called a "link"

- "You are trying to accessed a link" - Skip over typo...OK, what's the link I tried to access? Like, what's even the thing I was trying to do on the last page?

- "Autofill your card with Link" - Huh? The link already has my card?

- "... or create a Link account" - Is link the name of the site I'm on, the site I came from, or the thing I'm buying? Why do I need/want an account from any of them? If any of them, I'm assuming the account I should want to create is with the seller with whom I've just decided to transact business.

- "Link logo Learn More" - Is this the link I need to click on to get the thing I want? Like a "Download Now" button on a link scam website like softpedia?

- "Access Link" - does this button take me to my Link account? A new website called Link? Ah, it's the content I've been after this whole time.

Certainly none of that is insurmountable for the user, but I just wanted to put it out there to give you my impression as someone who's brand new to your site.


I'm neither a statistician nor a pilot, but I think the author here actually undersells the need to be 100% right in your decision-making. Because the decisions are actually linked together by the fact that they occur on the same flight. If any single decision is a bad one, the whole flight is unsafe.

If a pilot makes 50 safety-critical decisions over the course of a single flight, at a 99% success rate, I believe the actual chance of completing the flight without incident is given by 0.99^50, or ~60%. So, 2 out of 5 flights would result in an incident.

Of course, the reverse is true: if you know your actual rate of safety-critical incidents to be 1% of your flights, you can calculate how good your decision-making has been by 0.99^(1/50), or said another way: you've made the "safe" decision ~99.98% of the time.
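The arithmetic here is easy to sanity-check (a quick sketch; the 50-decisions-per-flight figure is the hypothetical from the comment above):

```javascript
// Chance of completing one flight incident-free, given 50 independent
// safety-critical decisions made correctly 99% of the time each:
const flightSuccess = Math.pow(0.99, 50); // ≈ 0.605, i.e. ~60%

// Working backwards: if only 1% of flights have an incident, the
// implied per-decision success rate is the 50th root of 0.99:
const impliedRate = Math.pow(0.99, 1 / 50); // ≈ 0.99980 per decision
```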


Safety critical decisions aren’t binary. It’s generally a case where each bad decision makes choosing the next correct decision harder, but pilots do recover from some extreme situations.

A better model is perhaps how accurate do you need to be to avoid 10 tails in a row when flipping 10,000 coins every week for 30 years.


Doesn't the range of an EV drop as the heating load of the cabin of an EV increases?

And doesn't the heating load of the cabin increase as the air velocity over all of its metal and glass surfaces increases? Isn't the shell of the car essentially a giant heatsink with a 70mph relative wind moving across it?


Yes, yes, and yes.

However, the speed of air over the car while driving is unrelated to the wind speed that meteorologists use to calculate the wind chill in your local weather report.

