try_again's comments | Hacker News

Does anyone know whether this is purely a property of languages and their evolution, or whether there is a biological or neurological foundation for it? I understand that you could make names for colours as fine-grained as you want, since visible light is a continuous spectrum, and at the most basic level there is only a concept of "colour" without further distinctions. But to me it feels like the major divisions as we know them in English are intuitive beyond language. Surely when you look at grass and the sky you feel you need different terms to describe them?


I put together an image to try and explain the difference: http://a.gln.io/blue.png

There are 16 different shades, or colours, there. If I were to point to any one of them individually and ask my young children what colour it was, they'd almost certainly say "blue". And I'd understand them fine and consider it correct. Likewise, if they were explaining something they saw during the day and said it was "blue", I might make an assumption about which of these shades it was, but I intuitively know it could have been any of them. And most of the time the distinction isn't that important for understanding and sharing experience.

When the distinction is important my kids would probably simply say "light blue" or "dark blue". Additional adjectives will get used to clarify the relative difference between the colours.

Soon they'll learn "sky blue", "baby blue", "navy blue". Then teal, turquoise, aqua, cyan, cerulean, etc.

Assuming the language has those words. That only happens when the need to distinguish is common enough to establish a shared understanding across a large enough group of people that they effectively reach a consensus that it's now a thing, like English speakers did a few hundred years ago with the introduction of the colour orange. Nobody invented a new colour; we started using a new word to describe something that had always been there.


Definitely tangential to this discussion, but it's about language and sufficiently geeky that I think the HN crowd will appreciate it:

I read a book a few years ago called Alex's Adventures in Numberland (https://www.amazon.com.au/Alexs-Adventures-Numberland-Alex-B...). In it the author tells a story about a group in South America whose language has no words for numbers greater than two (or maybe it was three? It's been a while since I read it). Anything larger was just referred to as "many". It's not as though seeing more than two of anything was uncommon; most families would have half a dozen to a dozen children. But if you asked how many children they had, it was just "many". Whether it was eleven or twelve just wasn't an important distinction to them.

He goes on to discuss how language can expose what's important to a group and shape thinking. The introduction of a concept and a word for zero was hugely important for our advancement in any number of fields. He also discusses how our constant pursuit of ever-increasing levels of specificity has its trade-offs: we seem to be becoming increasingly bad at estimating (which is partly language, partly social expectations around what we value, and partly a reliance on tools).

Anyways, it was a story about language and numbers that I thoroughly enjoyed.


Incidentally, in Italian I would call the two extreme colours in your image by two different names: 'azzurro' (sky blue, light blue, etc.) and 'blu' (dark blue, ultramarine, etc.).


> But to me it feels like the major divisions as we know them in English are intuitive beyond language.

Did you perchance grow up as an English speaker, or a speaker of a language with a set of colour terms similar to English?

> Surely when you look at grass and the sky you feel you need different terms to describe them

Well, I'd guess that all languages have different words for sky and grass. The difference is how you relate those words to words for other things that have similar colours. There are many languages with fewer colour terms than English, but also some with more - and as far as I remember it's usually green and blue that have more shades, if you'll excuse the pun. That is, the slice of the spectrum covered by green and blue in English will be covered by more words in some languages.


> Surely when you look at grass and the sky you feel you need different terms to describe them?

That probably depends on how frequently you need to describe something as "sky-colored" vs "grass-colored". I can't really think of many things in nature that are blue (some people's eyes, the occasional flower or gemstone), so if you don't need that word to describe anything else you might just leave it at "the sky is a weird shade of green" rather than having a color that only describes one thing in the universe.


This is interesting, because I've latched onto a particular color of masking tape for labeling. It's a light green, a color shared by very few other objects in the indoor environment.

https://www.paintersmategreen.com/

It stands out against cardboard and all the variously-colored plastic boxes I own, and it's light enough to offer great contrast with a black Sharpie on it.

So for me, that color is useful specifically because it's relatively unique.


Do you feel like you need different terms when you look at the sky, at a blueberry and at lapis lazuli? You probably do, but you don't need to distinguish them often enough to justify promoting words for them into primary colour words. Similarly, it's possible to have a situation where there is a single word for green and blue, and people make do with saying "grass grue" and "sky grue" when they need to distinguish between them.


> Do you feel like you need different terms when you look at the sky, at a blueberry and at lapis lazuli?

FWIW, in Italian the sky is commonly referred to as 'azzurro' (azure), except when it really is a deep blue, while a blueberry would definitely be 'blu' (blue). So, yes, in Italian we tend to distinguish the two colors more than, for example, English speakers do. Which I guess is the point you are making.


> Surely when you look at grass and the sky

Sometimes maybe. Most of the time, who cares? Grey, clear or dark skies seem the most important to distinguish. Do we mean the temperate daybreak blue, midday-tropics blue, or depth-of-full-moon-night blue? Grass and other plants can have degrees of blue in there too. What about the sea? Sometimes blue, sometimes green, most of the time somewhere in between. What probably matters most to a mariner is swell and temperature(?).

I suspect it only really started to matter after Perkin's mauve in the mid-19th century, and matters far more now in a world of a trillion Pantone shades.


Surely when you look at the sea and the sky you would think they're different colours? And yet sea blue and sky blue are both called 'blue' in English (but not in other languages).


Printers use cyan, magenta and yellow because printing is a subtractive process. It's the complement of the additive RGB that monitors use. For example, cyan acts as a filter for red, so a combination of magenta and yellow filters out everything but red and thus appears red. Ultimately, the difference is in whether something emits light (a monitor) or has to reflect light (a white piece of paper).
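To make the relationship concrete, here's a minimal sketch (my own illustration, with channel values normalised to 0-1 and the black/K channel that real printing adds left out):

    # Each subtractive primary absorbs its additive complement:
    # cyan absorbs red, magenta absorbs green, yellow absorbs blue.
    def rgb_to_cmy(r, g, b):
        return (1 - r, 1 - g, 1 - b)

    def cmy_to_rgb(c, m, y):
        return (1 - c, 1 - m, 1 - y)

    # Pure red as emitted by a monitor...
    print(rgb_to_cmy(1.0, 0.0, 0.0))  # -> (0.0, 1.0, 1.0): no cyan, full magenta + yellow
    # ...so on paper, magenta and yellow ink together absorb green and blue,
    # leaving only red to be reflected back at you.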

You're right that what we're taught is often incomplete or misguided. Teachers are fallible, but as a child you assume their authority implies they're correct. I reckon seeing through that illusion is an important part of growing up. And to me, part of us growing up as humanity must involve not having to rely on the authority of governing bodies.


The additive/subtractive colors thing has a really neat sort of symmetry to it. That's how I knew it was the correct explanation. There was no such beautiful internal logic to the red-blue-yellow system, and I couldn't figure out how people came up with it.

I never believed my teachers. As early as 4th grade they were treating me like a troublemaker for not following rules like their three-paragraph essay format.

I don't like the idea of having authorities on knowledge. I much prefer Montessori or Socratic teaching methods, or explorations. They're harder to do, but they produce a better understanding of the material and they allow the student to teach the teacher as well.


Yeah, unfortunately color perception is one of those things that is a bit too complex for a self-discovery method. While the "symmetry" explanation is satisfying, it really isn't correct at all. Color perception and color matching, first within art (where it was useful) and more recently as a science, are complicated and took many years of the best scientific minds to figure out. Sometimes you need authorities on knowledge, and to stand on the shoulders of giants, as it were.

https://web.archive.org/web/20080717034228/http://www.handpr...

The teachers aren't completely "wrong"; they were just conveying a simplification of the history of pigments (also touched on in that article). It is, after all, true that you can mix those colors and get a wide range of colors (including a blacker black than you would with CMY). But any pedagogy that says there is a set of "primary" colors that can make all colors is necessarily going to be wrong, even if it's CMY.


It won't protect against a newer version of an established library introducing malicious behaviour.


This is correct; the few dependencies you would use would also need to target very specific versions to achieve the same effect.

Lock files are used to lock dependency versions all the way down your dependency tree, not just your immediate dependencies.
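For illustration, here's roughly what that looks like in an npm package-lock.json; the package names below are made up and the integrity hashes elided:

    {
      "name": "my-app",
      "version": "1.0.0",
      "lockfileVersion": 1,
      "dependencies": {
        "some-lib": {
          "version": "3.2.1",
          "resolved": "https://registry.npmjs.org/some-lib/-/some-lib-3.2.1.tgz",
          "integrity": "sha512-...",
          "requires": {
            "some-transitive-dep": "^1.0.0"
          }
        },
        "some-transitive-dep": {
          "version": "1.0.4",
          "resolved": "https://registry.npmjs.org/some-transitive-dep/-/some-transitive-dep-1.0.4.tgz",
          "integrity": "sha512-..."
        }
      }
    }

Even though some-lib only declares the loose range "^1.0.0", the lock file pins the transitive dependency to exactly 1.0.4. Without it, the next install would pull whatever the newest 1.x release happens to be at that moment, which is exactly where a freshly published malicious version sneaks in.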


Unfortunately this seems to be the case; lockfiles would be unnecessary only if all your dependencies (and their dependencies, recursively, all the way down) referenced explicit versions. Otherwise the risk is that a new malicious version gets published and picked up. I'll research whether there's a workaround.

Thanks everyone for pointing out this issue.


In my opinion a lock file really is the "workaround". I don't see a huge issue in using them, since they come for free with npm and yarn with no additional overhead.


Having the means in place for total surveillance and thinking it will be kept in check with "strict control" is a total delusion. Who determines what constitutes proper grounds for using the captured data? The government, I suppose? And if you want to automatically recognize certain individuals it must by design mean everyone gets scanned.


We have nuclear weapons, but only certain people are allowed to use them. The IANA has root key-signing keys, but their use is strictly controlled and made highly transparent. If we approach facial recognition technology with an equal sense of caution and respect, I’m sure we can (for example) help police catch rapists, murderers, and child-abductors without building or maintaining a database of every innocent citizen’s movements.


A well-known quote is that whoever doesn't understand Lisp is doomed to reinvent it: http://lambda-the-ultimate.org/node/2352


It's exactly like the premise of the Black Mirror episode "Nosedive"!


I also recall hearing exactly this. In fact, in my recollection Burger King had a vegetarian burger option for some time before dropping it due to complaints about it being prepared on the same grill as the meat patties. Meanwhile McDonald's also offers vegetarian options which apparently aren't prepared separately to avoid cross-contamination. But their options seem to vary quite a bit by region, so maybe they've just cleverly avoided the scrutiny that comes with a global marketing offensive like this.


I wish that not every other article I read these days felt the need to focus on detecting misogyny or racism. It is absolutely important to identify the ways in which outdated world views led to injustices and to move towards a brighter future, but there's a fatigue setting in with it taking center stage in every discussion.

Particularly troubling is the tendency towards wanting to separate the art from the artist whenever the artist held problematic views. It's the result of a cognitive dissonance that sets in when you can see value or beauty in someone's work but don't agree with some aspects of that person. But the person behind the work is an integral part of it, and vice versa. If you hold someone's output in high regard but find objectionable fragments in it or in their life story, you can't just go dissecting it into those pieces you wish to keep and discard the rest.

I guess what irks me is the classification of Lawrence's essays as "indefensible". A view can be troublesome, even reprehensible, but to deny it representation by calling it indefensible sets up a dangerous precedent where anyone can be silenced or their opinions simply cherry-picked to find what fits our desires. And that's no different from the biases being railed against.


Complete rewrites seldom work out well. Using a current version's behaviour as a "requirements doc", rather than the code or an actual up-to-date document, probably misses a good deal of functionality that is rarely used but critical in some conditions. The thing about legacy software is that it has encoded in it years or even decades of changes to functional and technical requirements, workarounds for edge cases that weren't foreseen, bug fixes, optimizations. Each of those can look like a mess to an observer, but it is foolish to think a new system will not run into similar issues. The idea of starting over to "do it right" is a fallacy because perfection does not exist. This is why the software industry is increasingly focused on processes that reap the benefits of quick turnaround and make change easier to deal with.

Believe me when I say the day you switch the old system off will never come in the vast majority of scenarios like this. The result is either wasted development time or, probably worse, you now have two systems to maintain and keep running.


That is basically an example of the sunk cost fallacy. Yes, those bug fixes are important, but it's easier to find and fix them again in a tool with a proper architecture. The idea is to run the original and the redesigned code in parallel, find the differences in behaviour, and fix them.
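A toy sketch of that parallel-run idea (the names here are placeholders, not our actual tooling):

    def compare_behaviour(inputs, legacy_impl, new_impl):
        # Feed identical inputs to the legacy and rewritten implementations
        # and collect every case where their outputs disagree.
        mismatches = []
        for x in inputs:
            old, new = legacy_impl(x), new_impl(x)
            if old != new:
                mismatches.append((x, old, new))
        return mismatches

Every mismatch is then either a bug you've reintroduced or a quirk of the old system that you have to decide whether to preserve.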

We recently moved from a heap of Matlab code - which started as a student project in 2001 and grew into a huge tool used in industry today - to a new implementation in C++ and Python. It has been a huge success with our customers.


It's not a sunk cost fallacy at all; the sunk cost fallacy is throwing further money at something that has already failed. If the code is being used, it obviously succeeded. The product works. Otherwise it wouldn't even be worth rewriting.

The reason why people warn against rewriting is that it's a risk, a gamble, and often a conceit by the programmers. Programmers will also often spectacularly underestimate how hard a full rewrite will actually be.

You're taking something that works, and attempting to recreate it. You can find lots of examples where rewrite projects went spectacularly wrong. A commonly cited example was the Netscape rewrite (which killed a hugely successful company).

Your gamble paid off, but it's almost always the worst decision you can make. There are even examples in this thread of rewrites going wrong.


Of course it depends on several factors. In our case we had a mess of spaghetti code which crashed at important moments, and there was no easy way to locate the issues. When your code requires prayers and sacrifices to appease the gods and devils, then maybe it's time for a rewrite. However, there are programmers who just want to rewrite because they think the old architecture is bad, when it is mostly fine.


But the forces which caused the current code to end up in such a sorry state will still be at work, and will cause the rewrite to end up in the same place.

If you have reason to think you can do better this time - e.g. the team has learned how to avoid unexplainable crashes - then you could apply this knowledge to fix the issues in the current code, which would carry much less risk and take less time.


In our scenario, the tool had no requirements. A student project did something which a team found useful and which saved money. More students were given similar projects. Then an engineer comfortable with coding glued together the code from all these students. This went on for 17+ years (converting several person-months of work into a few hours). The team lead got promoted to a very high position in the firm and hired a software team to redo everything with coherent code.

For us, the first project acted as the requirements analysis. In most cases bad software is mainly down to a lack of proper requirements. In hindsight, it's easier to make a complex tool coherent.


It is not "sunk cost fallacy" to not wanting to fix the same bugs twice!


I think the word "bug" is not correct in this context. If a rewrite of the software contains the exact same bug, then it means that the requirements were not well defined.


There are certainly situations where that can work well, but there are also many instances where running two solutions in parallel is anything but trivial and adds another layer of complexity, possibly to the point of multiplying the costs of the rewrite. I've been part of projects where that exact scenario occurred.


Slightly ironic that a system which has scalability as one of its goals went down due to traffic. Although I guess few things are prepared for the onslaught that comes with catching attention on sites like HN or Reddit.

