IvoDankolov's comments

Lovely effect! Particularly enjoyed how it seems to 'fake' refraction by sampling higher or lower from the background based on the drop shape. The trails squishing back into raindrops to simulate surface tension is also a pretty nice touch, though perhaps a bit exaggerated.

One caveat is that the merging of the drops sometimes looks quite unnatural, but I'm not sure there's any simple way to represent that with just a couple of textures and a transformation. Real drops, once bridged, have molecular-level attractive forces pulling them towards one another, so they deform quite unevenly.


There was an attempt at making the drops merge more naturally, but it interfered with the refraction effect, which I deemed more important.

It should be solvable though, I just couldn't get the values right at the time.

(Also thanks!)


It's due to the day/night buttons being anchor tags as opposed to any Safari-specific issue.

If you want to keep the ability to link to specific states but avoid the history issue, you'll need to use the History API — https://developer.mozilla.org/en-US/docs/Web/API/History/rep...
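A rough sketch of the idea (the element and function names here are made up for illustration; the key call is history.replaceState, which swaps the current history entry instead of pushing a new one the way following an anchor does):

```javascript
// Build the fragment URL for a given UI state.
function stateUrl(state) {
  return "#" + encodeURIComponent(state);
}

// In the page, a hypothetical toggle handler (browser only):
// dayNightToggle.addEventListener("click", function () {
//   applyTheme(nextState);
//   // Replace the current entry: no extra back-button stops.
//   history.replaceState(null, "", stateUrl(nextState));
// });
```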


Well, all of supervised learning is basically approximating an unknown function from a finite list of samples.

But it's still an approximation, with techniques like backpropagation 'simply' (in the abstract mathematical sense) tweaking weights in the direction of the derivatives to get closer to the expected values.

The vast majority of machine learning just builds on that by going deep (more layers), automatically generating inputs (e.g. in game AIs playing against themselves), etc.

One might argue that's even worse than function optimisation as you can only vaguely guess at the target and thus all your validation is suspect and you have to prove it using humans by, for instance, beating them at Starcraft.
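As a toy version of that weight-tweaking (a one-weight linear model under squared error; nothing here is specific to any system mentioned above):

```javascript
// Toy 'supervised learning': fit y = w * x to labelled samples by
// nudging the single weight w against the derivative of squared error.
function fit(samples, steps, lr) {
  let w = 0;
  for (let i = 0; i < steps; i++) {
    for (const [x, y] of samples) {
      const err = w * x - y;   // prediction error on this sample
      w -= lr * 2 * err * x;   // d(err^2)/dw = 2 * err * x
    }
  }
  return w;
}

// Samples drawn from the 'unknown' function y = 2x:
fit([[1, 2], [2, 4], [3, 6]], 200, 0.01); // converges to ~2
```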


Yes, you are missing something, which is a bit of a quirk of C++. Normally, having two functions with the same signature in a parent and child class results in name hiding, not overriding.

  #include <iostream>

  struct A 
  {
      void f() {std::cout << "A";}
  };

  struct B : A
  {
      void f() {std::cout << "B";} 
  };

  // ...
  int main()
  {
      A a = B();
      B b = B();
      a.f();
      b.f();
      b.A::f();
  }
This would print ABA: the child's f() simply hides the parent's f() when working in the context of B, but it does not override it.

Now, what virtual does is tell the compiler: "from here on out, resolve clashing signatures for this function in child classes by overriding." I'm sure you know how our example changes when you mark f() as virtual.
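For the record, with the value copies above, A a = B(); slices the object down to a plain A, so a.f() prints A even once f() is virtual; the change only shows through a pointer or reference to A. A sketch, with f() returning its tag instead of printing it so the difference is easy to check:

```cpp
struct A
{
    virtual char f() {return 'A';}
};

struct B : A
{
    char f() {return 'B';}   // implicitly virtual: overrides, doesn't hide
};

// Through a reference (or pointer) to A, the call now resolves
// against the dynamic type of the object:
char via_base(A& a) {return a.f();}   // 'B' when passed a B
```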

That's all old news, though. The more interesting bit, final, acts like so: from here on out, prevent overriding in child classes. Now, that only makes sense in the context of virtual, so that's the only place where you're allowed to use final, and here the actual difference from the first example becomes apparent. You see, final does not negate virtual; what it actually does is completely lock the function signature away from being used in any child classes. Adding virtual final to our example would not cause B's function to hide A's - it would simply not compile.

If you think that's all pointless semantics, you are absolutely right. Locking up names is not the point of "final". In fact, declaring something as virtual final in the base class is completely pointless from any practical standpoint. The actual problem that final is meant to solve is this:

As I mentioned earlier, virtual changes the way resolving names works in child classes forever. These two pieces of code are absolutely equivalent.

  struct A 
  {
      virtual void f() {std::cout << "A";}
  };

  struct B : A
  {
      void f() {std::cout << "B";} 
  };
And:

  struct A 
  {
      virtual void f() {std::cout << "A";}
  };

  struct B : A
  {
      virtual void f() {std::cout << "B";} 
  };
Therein lies an interesting dilemma: what if I wanted to prevent any child class of B, and only B, from changing the implementation of f? Under C++03, that is not possible. In C++11, final solves it.

But why go through the trouble of introducing a new keyword - and a keyword that is only a keyword in class and function declarations to boot (Holy context dependent grammar, Batman!) - and not just drop the virtual qualifier? Backwards compatibility - ever a dreaded thing when you wish you could undo your old mistakes.

It isn't all that big of a deal (said the C++ developer about every strange rule in the language, ever), though, since when you use "final" for the intended purpose, you won't actually be needing the virtual qualifier.

  struct A 
  {
      virtual void f() {std::cout << "A";}
  };
  
  struct B : A
  {
      void f() final {std::cout << "B";} 
  };
  
  struct C : B
  {
      void f(); //I'm afraid I can't let you do that.
  };


Could you be a bit more specific as to what you find confusing? Is it:

- That you can use the name of a variable, h, as if it was a function? That's because Javascript has first class functions [1] - the language is defined to support passing them around as variables and calling them like that.

- That you can use h at all even though it's neither a local variable nor a parameter of the anonymous function? That's because functions in Javascript aren't simply procedures in the traditional sense - i.e. description (function signature) + code - they are also closures [2]. If you declare one function inside another, it can capture (hold a reference to) variables and parameters of the outer one. You are also guaranteed that local variables and parameters will not get cleaned up while a closure still exists that holds a reference to them.

[1] : http://en.wikipedia.org/wiki/First_class_function

[2] : Can't vouch for any particular article, try googling Javascript closures
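A minimal made-up example of both points together:

```javascript
// makeCounter returns a function (first-class functions), and that
// function captures the local variable `count` (a closure), so
// `count` survives after makeCounter itself has returned.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const next = makeCounter();
next(); // 1
next(); // 2, because `count` lives on between calls
```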


OOOOOOh, ok I see how this is working a bit, I think. I am very inexperienced with Javascript, and I think I only just "got" what was being done here, so I'll try to explain my thought processes as a newbie:

So, in Javascript, variables can be functions. Which means you can pass in a function as a variable to another function. And the part that says h(h(y)) basically says that "h" has to be a function. Which means you pass in a function, and then it gets applied to itself in the way specified within that function, "g".

Another odd part is the function you pass in:

    g(function(x) {
        return x * x;
    })(3);
because you are passing in a function, and I'd assumed that if you can only pass in one variable, and that variable has to be a function, then you wouldn't be able to pass in an initial value for the function you want to apply. But I guess in Javascript you can pass in a value for an anonymous function defined inside a function call by using this syntax:

    g(function(x) {
        whatever it is that happens in this function;
    })(some_value_to_be_passed_into_the_previously_defined_anonymous_function);


`g(f)` returns a function, which is then immediately called with another value. You're passing `3` to the result of `g`, not to the function you passed to `g`.
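For reference, since g itself never appears in this thread: a definition consistent with the h(h(y)) description above would be

```javascript
// g takes a function f and returns a new function that applies f twice.
function g(f) {
  return function (y) {
    return f(f(y));
  };
}

g(function (x) { return x * x; })(3); // 81, i.e. (3 * 3) * (3 * 3)
```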


Ah, I see, that makes sense. Thanks for the explanation.


Just to help in case it isn't obvious - g also returns a function, which could be assigned to a variable. In this case it's called straight away, but you could do it like this:

    var powerOfFour = g(function(x) {return x*x;});
    powerOfFour(3) === 81; //true


In what terms do you think you "understand" it?

What do you make of this problem with distant entangled particles? The double slit experiment and interference in general? The Heisenberg uncertainty principle? (Or as I'd like to call it, Heisenberg's horribly mislabeled-in-order-to-confuse-students principle)

A shot in the dark - many of the problems with coming to terms with quantum mechanics arise from trying to impose on it that it should somehow behave like classical mechanics, or that somehow we humans stand above it and look down upon it (and heaven forbid that we're part of a quantum system).


Exactly, we humans try to learn new things by relating to what we already know. This leads to trouble when we encounter new subjects that have no connection to previous experiences, because we have a hard time relating to it. However, the problem with people finding QM esoteric and weird is a bit more nuanced than that.

In most of the other areas of knowledge we can make progress because they do resemble reality in ways that we are familiar with. For example, rotation in classical mechanics or special relativity can be somewhat confusing to a beginner, but if we think a little bit deeper than we are used to, we can see that the results make sense and match what we experience. And this experience is consistent across different scales. Now you move to QM and the first thing that hits you is that reality "breaks" after some point in the size scale; beyond that everything is different, with uncertainties, probabilities and a bunch of other odd properties. Learning QM is an exercise in mind-stretching, even for the most capable of us. For me the problem is not that it is different, but why it is different. Why do we have such a gap between the large scale and the small scale? That is what baffles me.


It's always amusing to see the kind of excuses that pop up when you try to explain entanglement or interference in the mindset of wavefunction collapse.

I'm still not entirely sure why so many people consider collapse to be simpler. Is it too daunting to think of the world as an amplitude distribution that propagates in a way that we're not intuitively used to? Too hard to think of ourselves as part of quantum mechanics, because that thing I see there on the measuring device must be the reality, damn it, and what the hell do you mean I've just entangled myself and the device along with the system?

Or maybe just tradition and accepting the "scripture" coming from the established authority. How could we best test that?


But when you "collapse the wavefunction" and "create" the internal state of the particle, do you also create the internal state of the other one that is entangled to it? Do you create it instantaneously?

You see, the problem is not whether you can transfer information in human readable form (though if you could that would certainly be a huge problem with relativity!), but whether any effect that propagates faster than the speed of light exists.

You'll have a hard time explaining that in the frame of wavefunction collapse, I think.


The way I've usually seen this one presented involves precisely the observation, in that you gain knowledge of the other particle. Of course, saying "because it was always a ~k particle" does not a good explanation make, because that would imply that the resolution of the measurement was somehow predetermined, which is a fancy way of saying that there's a hidden variable.

Not that interpreting all of this to mean that physics is non-local "spooky-action-at-a-distance" is the only viable route, mind.

Consider this: why, exactly, do you believe that when you measure the particle you somehow force it to enter one particular state and therefore the entangled one that's sitting X miles away suddenly enters the opposite one? Are we, humans, sitting outside of quantum mechanics and looking down upon it - and then what we observe is the one true way the world is?

Why would you not, instead, when you measure the particle, entangle the measuring device, and yourself, with the state of the particle? You are, after all, only another part of quantum mechanics the same as anything else.


> [...] which is a fancy way of saying that there's a hidden variable.

I'd say it's a less fancy (and more specific) way of saying "hidden variable." I read the parent as saying, "I don't understand why there can't be a hidden variable" which is quite a different thing than saying "this can work without a hidden variable: [thing involving hidden variable]".


Yes, I can quite easily think of Graham's number - I call it G.

You might think that I'm saying that in jest, but it is actually the way of things. There are quite literally an infinite number of numbers. Why should Graham's be important enough for us to consider? For that matter, why should we consider 1?

The answer lies in part in the question - why - it must be important in some way. That is exactly the case - we only consider numbers that, for some reason, relate to problems we are facing, whether in mathematics, programming, or grocery shopping, and even then not on their own merits. Past a certain point, you don't hold numbers in your head as little balls, but rather as their decimal encoding. That's a clever hack, but not without its downsides. Ask some people how much budget they would allocate to save 10000 birds, and ask others the same about 100000 birds. The mean of the second answer would not be ten times that of the first (unless your subjects know about and actively try to work around that particular bias).

But back to Graham's number - if you think the only way to imagine it is to hold its decimal notation in your head, the obvious question is why? Why not hexadecimal? Octal? Why any base-p encoding at all? The usual reason to use base-10 is to quickly get an idea of the magnitude of a number relative to things we are familiar with and to do some simple arithmetical manipulations on it. Both of these are pointless for Graham's number. Relative to familiar things like billions and trillions it's quite a ways off the chart, and adding, subtracting, or even raising it to powers of such numbers makes essentially no difference.

Besides which, in this case it isn't really pointless, it's impossible. Jokes about collapsing into black holes aside, there aren't enough bits of information in the universe to encode the decimal representation of Graham's number. There aren't enough bits of information to encode the number of digits of that. Not enough even to encode the number of digits of the number of digits. Interestingly, there also aren't enough bits to encode the number of times you'll have to repeat taking the base-10 logarithm to get back to a universal scale.

Which goes to show that not only your mind, but no entity in the universe, can accomplish the feat of representing an arbitrary number in decimal notation, mathematician or otherwise. And what is the point, really? G is a perfectly good symbol, and so is g_64. Usefulness is what matters, not imagining a string of balls. For that matter, I bet you've been perfectly content to use Pi on at least a few occasions, even though in terms of representability in decimal it is infinitely worse than Graham's number.
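For a concrete feel of the scale involved: Graham's number is built from Knuth's up-arrow operators (g_1 = 3↑↑↑↑3, and each g_{k+1} uses g_k arrows). A sketch of the operator itself, which stays computable only for the tiniest inputs:

```javascript
// Knuth's up-arrow: arrow(a, 1, b) = a^b, and each extra arrow
// iterates the previous operator. BigInt keeps the results exact.
function arrow(a, n, b) {
  if (n === 1) return a ** b;   // a↑b is plain exponentiation
  if (b === 0n) return 1n;      // a↑ⁿ0 = 1 by convention
  return arrow(a, n - 1, arrow(a, n, b - 1n));
}

arrow(3n, 2, 3n); // 3↑↑3 = 3^(3^3) = 7625597484987n
// arrow(3n, 3, 3n), i.e. 3↑↑↑3, is already a power tower of
// 7625597484987 threes: hopelessly beyond any computer.
```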


> The mean of the second answer would not be ten times that of the first

I'm not sure how good an example this is to argue your point, because the second answer should only be ten times the first if the cost structure is a linear function with a zero constant term. Which seems like an awfully big assumption -- you might get economies of scale (nonlinear, decreasing derivative), or you might pick the low-hanging fruit first and then have to go after more difficult birds (nonlinear, increasing derivative), or you might have fixed costs that give you a constant term.

That being said, many people do have a lot of difficulty conceptualizing the difference between millions, billions and trillions [1].

[1] http://xkcd.com/558/


Thanks, you just gave me some awesome thinking tools.

