
and you'll find z-code interpreters for pretty much any device you can think of. In particular I like Gargoyle [1][2], which stands out as the only one with great font rendering/typography.

One of the coolest/most ridiculous things about Infocom games was the feelies [3] (a term they coined). There is a great collection [4] of photographs of all the packages and extras, some of which included clues necessary to solve the games.

[1] http://ccxvii.net/gargoyle/ [2] https://github.com/garglk/garglk [3] https://en.wikipedia.org/wiki/Feelie [4] http://gallery.guetech.org/


Well, that didn't make a whole lot of sense. The most methodical use of the methods of rationality you'll find is in the natural sciences, and they are incredibly open to anyone interested. It's just that most people are not actually interested at all. Nor are they interested in rationality.

This guy is a bit like a confused version of Noam Chomsky.


He's essentially trying to say that there is more to knowledge than reason, which is true and often ignored in today's "scientific" society [for example, the demise of humanities funding in colleges]. Proust is completely unreasonable when he talks about certain smells evoking memories of his mama, but it is unfair to dismiss his thoughts - as many uber-rationalists do - as worthless sentiment.


It's reasonable, and scientifically compatible, to talk about smells evoking memories of parents. There isn't a rational process that creates the link between a sensory trigger and a memory, but there's a rational process and scientific field(s) of study by which that link (neurobiology) and its formation (psychology) can be understood.

That kind of argument seems to me like, "Nobody [unless there's a God] rationally constructed the theory of gravity, or the complexity of fluid dynamics, therefore there's more to gravity or fluid dynamics than can be understood through rationality and science."


Scientifically compatible, yes. Reasonable, no. Emotion is not reason; it is a completely separate enterprise. That's partly what the article is talking about.


Emotion is ingrained reason, produced through the process of evolution. For example, we feel disgust at seeing an open wound because apes who didn't care got infected and died. The ones who learned to stay away, by reasoning about their observations, later abstracted that into the emotion of disgust rather than spending energy to reason it out every time.

Similarly, a good fragrance could easily be a bad fragrance to an alien. Maybe our mind categorised it as a "good" fragrance because good fragrances are associated with things that are edible.

Emotion is a mechanism developed by the brain to avoid spending energy reasoning things out every time. We understand this today and hence treat emotion as a bias in the scientific method, but since we are humans and emotional by evolution/definition, we show that the bias did not creep in by providing data so the experiment can be reproduced.

However, the evolutionary reason for the development of a particular emotion might not exist anymore. We now know that urine is sterile and no longer need to be disgusted. There are many tribes that have learned this and, although the natural emotion of disgust might kick in, they still use it for its antiseptic properties to heal wounds. Many Hindus drink cow urine.


It's interesting that you bring up disgust, because that exact reaction is at the core of the seminal work by the sociologist Norbert Elias, The Civilizing Process[1].

He traces the evolution of manners through etiquette books (a remarkably enduring genre going back many centuries), and shows how things that evoke a strong disgust response in us today were actually slow-evolving social norms that have been internalized and turned into habitus (or a super-ego). He even mentions urine, which for a long while didn't evoke the same reaction as it does today. For example, some centuries ago in Europe, urinating under the staircase indoors was actually quite acceptable, and blowing your nose into the tablecloth was considered good manners.

This, of course, doesn't mean that the capacity for disgust isn't evolutionary, but that its particular triggers are social, even though we perceive them to be natural.

[1]: https://en.wikipedia.org/wiki/The_Civilizing_Process


Actually not. This is your explanation for what emotion is, based on your scientific framework. It is not some kind of absolute truth. Given that you have no explanation for consciousness, there is a limit to your framework when it comes to explaining emotion.


No, this is not my conclusion. The reason I specifically talked about disgust, as opposed to the good-fragrance example in the previous comment, is that it is a scientific conclusion, read in the works of Steven Pinker and Paul Bloom among others.

The debate is whether emotion is independent of reason, and both presuppose consciousness, so you are sidetracking.


I'm not sidetracking. I'm pointing out that you are affirming the consequent by simply declaring statements about what emotion 'is'.


Does that "separate enterprise" amount to knowledge? I don't think so. It's an artifact of neurology. It's an important artifact, and one that can't be ignored (when studying psychology or sociology, or managing humans, or planning events involving humans), but nevertheless that emotional artifact is not useful knowledge. Only the [scientifically and rationally understandable] mechanisms behind the emotional and sentimental connections are useful knowledge. The connections themselves may serve sociological purposes, enabling cultural knowledge generation and accumulation, not to mention improving societal stability, but in themselves emotional artifacts are not knowledge.


Sure, knowledge is true justified belief (we can add that it must be able to be transmitted). This separate enterprise is certainly true insofar as it is a qualitative experience of somebody; it is justified, and it is also a belief. He transmits it through his writing.

Your argument presupposes your conclusion that this kind of thinking is not knowledge. That said, even if it isn't knowledge, so what? That does not mean it is not valuable. In questioning the primacy of reason we can also question the primacy of conventional modes of knowledge.


> knowledge is true justified belief

Or is it? https://en.wikipedia.org/wiki/Gettier_problem


> Only the [scientifically and rationally understandable] mechanisms behind the emotional and sentimental connections are useful knowledge.

You understand that that is a pure value judgement.

At their heart, love of freedom over slavery, compassion over apathy and wisdom over ignorance are value judgements. I know them to be true, but I cannot prove them rationally.


> It's an artifact of neurology

The fact that something is an artifact of something else doesn't mean that it can be meaningfully reduced to it (or even tractably reduced to it at all). Suppose we discovered the most basic of physical laws, and suppose that somehow computational power made a simulation of them tractable. Is our ability to simulate the universe the same as understanding every aspect of it?

On a more basic level, running software is an artifact of hardware (yet the software can simulate a computer with different semantics than the computer it's running on). So is the study of hardware the only relevant knowledge of the running software? And if you say that the software exists independently of the hardware, you'll find yourself with an idealist philosophy that, according to you, is at odds with your materialistic view.


You fly through words like "knowledge" and "useful" without giving them proper thought. What do these words mean? We live in a philosophy-starved culture. Too much social media, not enough deep thought.

And how can you know that what you call rationality is also not an artifact of neurology? In fact, Gödel proved that if you don't doubt your own rationality, you are in fact being irrational.

I think science is cheapened by this sort of blind faith. The defining characteristic of the scientific attitude is doubt, not certainty.


>And how can you know that what you call rationality is also not an artifact of neurology? In fact, Gödel proved that if you don't doubt your own rationality, you are in fact being irrational.

That is not at all what Goedel's theorems actually say.

>We live in a philosophy-starved culture. Too much social media, not enough deep thought.

No, we live in a culture that loves to engage in cheap, shoddy philosophizing by generalizing incorrectly from facts.


> That is not at all what Goedel's theorems actually say.

An informal description of his second incompleteness theorem (from the "Stanford Encyclopedia of Philosophy"):

"For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself."

One example of a sufficient "certain amount of elementary arithmetic" for this to apply is arithmetic over the natural numbers with addition and multiplication (as in Peano arithmetic). Such a system can no longer prove its own consistency.
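
A standard formal rendering (my paraphrase, not a quote from the encyclopedia), writing Con(F) for the arithmetized sentence "F is consistent":

    F \text{ is consistent} \;\Longrightarrow\; F \nvdash \mathrm{Con}(F)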

If you think that this does not apply to human efforts at rationality, I would like you to explain why.

Debate becomes cheap and shoddy not when someone is wrong (I could be), but when you resort to name-calling instead of pointing out where you think the mistakes are.


>If you think that this does not apply to human efforts at rationality, I would like you to explain why.

Human beings aren't proof systems. We don't operate under conditions of certainty via deductive reasoning. We're inductive (or rather, abductive) reasoners from the get-go.


Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.

What Gödel tells us is that, as long as you have a sufficiently powerful formal system, you cannot prove the consistency of the system itself. Modal logics are no exception.

If you are a computationalist (that is to say, you believe that the human mind can be emulated by a Turing machine), then you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.

You might also enjoy "Forever Undecided" by Smullyan. It uses puzzles to guide you to an intuition about what the incompleteness theorems mean for human knowledge and its limitations. In the worst case it's a fun read.

Peace!


> Sure, and abductive reasoning can be formalized in certain modal logics with Kripke semantics.

No, it can't. Abductive reasoning is probabilistic modelling, and notably, there's a line of research by Cristian Calude showing that you can soundly, non-paradoxically place probabilities on Halting questions.

(Computational tractability is still an obstacle with his current approach, but it has been shown not to generate paradoxes, which is already a major step forward.)

>you might want to take a look at Gödel, Escher, Bach, where Hofstadter discusses how the second incompleteness theorem applies to Turing machines.

This is backwards: halting problems and Kolmogorov complexity for Turing machines give us the two Incompleteness Theorems for proof systems, via Chaitin's Incompleteness Theorem.

Which also neatly gives a way around the Second Incompleteness Theorem: a hierarchical-probabilistic reasoner can create an abstract, compressed model of themselves which consists of small-enough amounts of information that they can reason about its behavior without becoming subject to Chaitin Incompleteness.
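
For reference, Chaitin's incompleteness theorem (my informal rendering, with K the Kolmogorov complexity function): for any consistent, recursively axiomatized F that can express statements about K, there is a constant L_F such that

    \forall s :\;\; F \nvdash \; K(s) > L_F

even though all but finitely many strings s actually do have complexity above L_F.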


> there's a rational process and scientific field(s) of study by which that... can be understood.

Yes, but only for a very specific definition of "understanding". See my other comment about universal computation and phenomenology. There are other, no less valid, forms of understanding. I believe that the idea of universal computation reconciles materialism with idealism, putting them both on equal footing. The workings of the software cannot be tractably (and certainly not meaningfully, by any common sense of "meaning") reduced to the material existence of the computer.


He makes some good points that are worthy of more thought and consideration.

It does seem dated, though. The point about experts not being really free to express themselves is well taken, but the idea that knowledge is being hoarded makes no sense in the YouTube era.

In the last month I've watched videos on how to grind your own optics, make a vacuum chamber to aluminize them, etc. etc. We're in the middle of a creative explosion.


That is a very 19th-century view that you're expressing (although I know that some popular blogs are espousing this rather quaint view of rationality today[0]). Today, the view is much more nuanced. Science is based on a few assumptions and interpretations that have been the study of what's known as the philosophy of science[1]. Just to get a taste of the difficulty of going from science to knowledge, read about epistemology and, especially, the Gettier problem[2]. The paradox raised by the Gettier problem is not interesting in and of itself, but it strongly ties what we know or think we know about the world, to what we are and what we think.

This inseparable connection is a source of more modern views on the relationship between science and knowledge, like phenomenology[3]. If you want to translate these views back to scientific, or mathematical terms, you can see the essential problem a brain — i.e. a computer — introduces into the universe. Due to universal computation, a universe may contain a material approximation (that it is just a finite approximation matters little) that is more general than the containing universe itself (as it can contain any universe), and thus more general than the laws of nature, which are particular to the “host” universe. This makes subjective experience, namely the inner workings of the computer, not secondary to objective experience, namely the laws of the host universe. The mechanical construction of the computer bears little relevance to the computation -- or simulation -- the software is carrying out. Truth, therefore, can mean different things depending on which universe you are talking about, and neither can be said to be secondary to the other.

The scientific method has, obviously, been extremely useful in uncovering certain types of truths, and extremely unhelpful in uncovering others, that cannot be said to be of less import. Since we live in a world constructed by our software, it makes little sense to say that it is the laws of the host universe that matter more (except in the sense that they can kill us, or interfere with the software, but that only makes them important -- not supremely important).

[0]: Although the modern reincarnation justifies itself through utility rather than a deeper philosophical justification, namely, science is useful in the physical world, hence science is the "best" form of knowledge (accepting the supremacy of the physical world as either an axiom, or a materialist belief that smooths over definitions of reduction). You can call this "utilitarian epistemology", namely the view that 'what we know is what we can use'.

[1]: https://en.wikipedia.org/wiki/Philosophy_of_science

[2]: https://en.wikipedia.org/wiki/Epistemology

[3]: https://en.wikipedia.org/wiki/Phenomenology_(philosophy)


Your initial assumption was correct: it is widely deployed in the industry, and there are multiple vendors that provide fingerprinting technology orders of magnitude more advanced than what the article suggests.

The visible fingerprint might just have been there to deter the clueless. In any case, even with a fingerprint it's hard to prove beyond doubt which individual actually stole it.
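
For a sense of the invisible kind, here's a toy sketch (a made-up least-significant-bit scheme in Python; real forensic watermarks are nothing this naive, since they have to survive re-encoding, scaling and camcorder capture):

    import numpy as np

    def embed_id(frame, recipient_id, n_bits=32):
        # Toy scheme: hide a 32-bit recipient ID in the least significant
        # bits of the first n_bits pixels of a grayscale frame.
        tagged = frame.copy().ravel()
        bits = [(recipient_id >> i) & 1 for i in range(n_bits)]
        tagged[:n_bits] = (tagged[:n_bits] & 0xFE) | bits
        return tagged.reshape(frame.shape)

    def extract_id(frame, n_bits=32):
        bits = frame.ravel()[:n_bits] & 1
        return sum(int(b) << i for i, b in enumerate(bits))

    frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
    tagged = embed_id(frame, recipient_id=0x00C0FFEE)
    assert extract_id(tagged) == 0x00C0FFEE  # imperceptible, but names the recipient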


The visible fingerprint might also be there to prove intent or add circumvention charges in court.

I don't think they need to prove that a certain individual stole it. The party which received the advance copies likely had to take on liability, so HBO can go after them. Of course, that doesn't mean they won't turn around and lean on the actual culprit, but whether or not they catch someone, HBO will have their due.


It's there to advertise HBO. It's that simple.


Doesn't the process of compression (to make it seem louder) destroy some of the information? I always assumed this was an irreversible process?


Yes it does. That is (partly) why those who love music often hate (too much) compression.


then the premise of the article is inherently flawed :(


If YouTube compensates for this (lowers the volume of tracks that have been mastered loud), then it removes the original motivation for doing so.

This is why the 'Happy' YouTube mix is less loud than the CD version linked to - the author asserts that the studio has done this because the incentive to apply dynamic compression on YT is now gone...


Dynamic range compression does destroy some information, but you can still change the final playback volume.
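
A quick way to see the difference (an illustrative Python sketch only, with a hard limiter standing in for real mastering compression):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 0.4, 1_000_000)   # stand-in for an audio track

    # Dynamic range compression (here: brutal hard limiting). The clipped peaks
    # are gone for good; no later processing can recover the original samples.
    limited = np.clip(signal, -0.25, 0.25)

    # Playback volume is just multiplication by a gain. It's trivially reversible,
    # but it doesn't bring the squashed dynamics back.
    quieter = limited * 0.5

    print(signal.max(), limited.max(), quieter.max())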


They could at least work together with the Internet Archive to preserve it, but no, of course not, that would require a bit of effort and Google can't do that, because Google doesn't care if thousands of open source projects are lost forever to the world. Don't be evil my ass. Archive Team is now our last hope, as usual.



For further reference, here's an NSLondon talk on drawing lines quickly in OpenGL ES on iOS:

http://vimeopro.com/user20904333/nslondon/video/98274186

Nigel's written a GL-based vector lib:

https://twitter.com/Vector_GL


I always thought it's kind of funny that everyone believed this to be true when in reality you could just deactivate the sound in the driver settings, which I did pretty much immediately but nobody I knew ever even thought of doing.

It's a perfect example of the fact that most people will never, ever change the default settings.


I remember punching in ATM0 to turn off the speaker so my parents wouldn't know I was dialing in when I should have been sleeping. =)

http://en.wikipedia.org/wiki/Hayes_command_set#The_basic_Hay...
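
The same trick is trivial to reproduce today with any serial library; a rough sketch (assuming pyserial and a modem on /dev/ttyS0, both of which are placeholders here, not what anyone actually had back then):

    import serial  # pyserial

    # Hayes command set: M0 = speaker always off, M1 = speaker on until
    # carrier detect (the usual default), M2 = always on.
    with serial.Serial("/dev/ttyS0", 57600, timeout=2) as modem:
        modem.write(b"ATM0\r")          # silence the speaker
        print(modem.readline())         # the modem normally answers "OK"
        modem.write(b"ATDT5551234\r")   # dial out, quietly this time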


I liked hearing the sounds. I could tell just from the connection sounds what speed I connected at. If my modem was falling back below 33k frequently, especially below 28k, then I knew something was wrong.


It's also a great example of a terrible default, and why it's important to think about what the default should be. I'm sure the guys designing it were like "Well, people will want to hear this, so they can tell if something is going wrong." In reality, the modem noise is the opposite of what anyone ever wanted to hear, ever.


> the modem noise is the opposite of what anyone ever wanted to hear, ever.

Speak for yourself. Not only is it music to a nerd's ear, but it also provided useful information (line busy, handshake problem, connection speed, sysop's mom picked up the phone) long before the modem itself would report back.


Given that auto redial was also a popular default, having the speaker on is important so you know when you've dialed a person instead of a computer. A terrible default would have been having the modem speaker be on for the duration of the call, and not stop once the handshake is done.


You upload your files to Dropbox's servers; how in the world can you come to the delusion that you would notice when they accessed/searched/data-mined your files?!


(The article is about uploading files outside your Dropbox folder)


It should be mentioned that this does not support and will indeed never support line numbers due to some strange opinions of the lead developer:

http://highlightjs.readthedocs.org/en/latest/line-numbers.ht...


I think this comment/discussion ends up over-emphasizing the 'strangeness' of the line-number situation with this library; the linked page ends with: 'This position is subject to discuss. Also it doesn’t stop anyone from forking the code and maintaining line-numbers implementation separately.', which seems pretty reasonable to me!


Yeah, this is pretty much the behavior that I want to see from open source maintainers: make decisions and build conventions, but be open to having your mind changed by alternatives.


http://prismjs.com/ is an alternative that supports line numbers.


And in particular prismjs lets you select lines of code without selecting the line numbers, which you probably do not want.


Why wouldn't I want that?

If I am copying and pasting code from a website, it usually goes to my editor and I don't want to have to delete the line numbers from it.


You don't want the line numbers.


Oh if that's what he meant then I read it the wrong way.


Prism is pretty damn good. Although the name sounds like an NSA plan to intercept all the javascript libraries in the world.


I already like this one better, to be honest. It also already has Julia and Rust support. If it were to include Elixir support as well, it'd be perfect.


Line numbers are really helpful when presenting code at an event, in a meeting, etc.


Genuine question, why?


"So if you look at line 7, you'll see..."


... that the presenter should buy a laser pointer!


Laser pointers don't work if the video is being recorded or screencast at the same time. They only work when the viewer is in the same room, and even then, they are typically pretty poor.


Out of several dozen different uses of laser pointers, I've only ever been able to (easily) see perhaps 3 or 4 of them, plus another small portion if focusing entirely on the game of spot-the-laser-pointer. It's related to my colourblindness.

As such, verbal cues (or a long stick) are preferable to a laser pointer. Of course, use a laser pointer to assist those who can see it, but it's better to assume half the people in the room can't and to rely on words as well.


This also applies to comments within the presentation, not just the verbal part of the presentation.


Are line numbers considered "highlighting"? I mean, I get why you would want them, but I never would have thought to ask for it.


It does seem more appropriate to have a separate module for line numbers. I would definitely understand the separation of concerns argument, but the maintainer instead makes some sort of emotional appeal about clutter and simplicity to explain why there is no line number support.


> the maintainer instead makes some sort of emotional appeal about clutter and simplicity to explain why there is no line number support

Clutter and simplicity is not an emotional argument. It becomes verifiably more difficult to maintain a code base with each new line. And since he's the one doing the maintaining, his position seems quite reasonable.

Secondly, he seems to have a strong aesthetic aversion to line numbers, and I can totally respect that. Frankly, among popular open source projects, those whose developers have the strongest sense of style and design are usually (not always) the best.


Many popular syntax highlighting libraries do include this functionality. For example, the popular Python syntax highlighter "Pygments" supports line numbers[0].

[0] - http://pygments.org/docs/formatters/#HtmlFormatter
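
A minimal sketch of what that looks like (standard Pygments API; linenos is the option documented in [0]):

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import HtmlFormatter

    code = 'print("hello")\n'

    # linenos="table" puts the numbers in a separate table column, so selecting
    # the code doesn't drag the numbers along; linenos="inline" mixes them in.
    html = highlight(code, PythonLexer(), HtmlFormatter(linenos="table"))
    print(html)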


I personally use Prism.js for my blogging, which is what Smashing Magazine and CSS-Tricks use. Prism includes line numbers.

I like Prism so far and don't have any issues with it yet.



I understand the author's point of view on this, but disabling line numbers sounds like something that should be user-controllable. Right now it feels like a misfeature, whereas "we don't implement line numbers by default, but if you want them, do this and that and you can have them" would feel way more like a good feature to have.


Has anyone actually tried submitting a patch that implements it yet?

https://groups.google.com/d/msg/highlightjs/UVJaQcQNC1c/1C6U...


It's free software so that's no problem if you feel the need.


I think we are talking about CGI faces here, and that's just not doable yet; this research won't change a thing about it. The last movie I remember having a CG face was Clu in Tron: Legacy, which certainly was state-of-the-art CG, but unfortunately 'state of the art'[1] in face modeling/animation is just not perfect yet.

[1] https://www.youtube.com/watch?v=CvaGd4KqlvQ


That link is real-time rendering. State-of-the-art, server-farm-rendered output is orders of magnitude better.

"The last movie I remember having a CG face was Clu" By strict definition if you remember it having a CG face then it wasn't good. It's not possible for you to remember a good CG face because if it's good you wouldn't know it was CG!

The next big test might be Paul Walker in Fast and Furious 7 or Philip Seymour Hoffman in Hunger Games 3, both big-budget movies where the actor died and CG will be used in at least some places. That's the ultimate test. I think we can already get away with CG if the actor isn't known, but for actors whose faces we know and whose mannerisms we subconsciously recognize it's another step up in difficulty.

