
If you choose a dictionary word to name your startup or library or whatever, you're just asking for a collision.


I gave up trying to understand it and just clicked through for the eye candy. Cool stuff as usual from Steven Wittens.


If I understand it correctly, a lot of the geometry effects build on the ability to do texture reads in the vertex shader (OpenGL calls it "vertex texture fetch"), a little-noticed but incredibly powerful feature of modern WebGL implementations. The reason it is so powerful is that one texture can be used as a write target for the fragment shader and as a read target for the vertex shader, essentially creating a feedback loop that lives entirely on the GPU.
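The feedback-loop idea can be sketched on the CPU: each pass reads the "texture" the previous pass wrote and produces a new one. On the GPU this ping-pongs between two framebuffer-attached textures; here a texture is just modeled as a flat list of values, and all names are illustrative, not WebGL API calls.

```python
# CPU-side sketch of the GPU feedback loop: each pass reads the state
# written by the previous pass and writes the next state. On the GPU
# the "texture" would alternate between two render targets.

def simulate_feedback(initial, step, passes):
    """Run `passes` iterations, each reading the previous 'texture'."""
    texture = list(initial)
    for _ in range(passes):
        # The fragment shader writes the next state as a pure function
        # of the previous state (which the vertex shader can also read).
        texture = [step(v) for v in texture]
    return texture

# Example: each pass doubles every texel value.
result = simulate_feedback([1.0, 2.0], step=lambda v: v * 2.0, passes=3)
# result == [8.0, 16.0]
```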

Not all browsers support the feature though (check the MAX_VERTEX_TEXTURE_IMAGE_UNITS constant). Mobile devices could be problematic too since most (if not all) OpenGL ES 2.0-era devices don't support it in hardware.

Still this is one of the most impressive WebGL demos I've seen. Fantastic stuff.


The major weakness in using textures as intermediate targets is the loss of precision from the texture formats, and thus in the intermediate values. OpenGL ES 2 (and thus WebGL) does not require a full 32-bit floating point pipeline, so the results may vary if you run on mobile devices (that are not the latest generation GL ES 3.x devices).

In proper OpenGL, you'd be able to use transform feedback to write into buffers with no loss of precision. And using buffers is less limited than texture fetches in the vertex pipeline.

For applications where precision matters (ie. everything scientific), WebGL on GLES2 devices is a no-go. WebGL standardization should pick up the pace to better match the development of OpenGL.


It's a bit of a shame that WebGL settled for the lowest common denominator (i.e. OpenGL ES 2.0 capabilities).

This was probably to enable WebGL on mobile devices that would otherwise have been locked out, but it heavily restricted things on the desktop, which for the most part has OpenGL 4 capable GPUs these days.

However, given that WebGL on mobile still mostly sucks anyway, I'm not sure going for the lowest common denominator was the right decision.


WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then latest and greatest.

And WebGL 1 has taken this long to reach mostly-working implementations; it would probably have died in the crib if it had targeted the nascent GLES 3 feature set.

Running GLES shaders safely and reasonably fast in a sandbox (on top of insecure & crash-prone drivers) is high wizardry.


> WebGL 1.0 is almost 4 years old. OpenGL ES 2.0 was then latest and greatest.

Latest and greatest for mobile yes but the desktop world was already on OpenGL 4 at that point.

My whole point was that they could have just ignored mobile and delivered a much more powerful WebGL based on OpenGL 4 instead.


> WebGL standardization should pick up the pace to better match the development of OpenGL

Your wish is granted: WebGL 2 draft supports transform feedback. http://www.khronos.org/registry/webgl/specs/latest/2.0/#3.5

(See my other reply about problems tracking latest GLES tightly)


Same :)


I hope it sticks around! They're so fascinating.


I think it's only the first two. Javascript happened because of Netscape (1).


Yeah, it made me view the whole article as amateurish, and I stopped reading.


It's a blog, not a peer reviewed ACM article. I read and learn from others no matter how they decide to express themselves as long as the content is relevant and helpful at solving current or future issues.


Thank you, I appreciate the comment


Interesting feedback, I didn't expect the exclamation marks to have such an effect. It's more of my personal writing style; however, since there were two remarks on the exclamation marks, I edited the article and cleaned some of them out. Hope you enjoy the article.


I do find it an interesting article, I don't want you to think I'm entirely negative. I just kept stumbling at the punctuation. It's more about me than you.


I think CoreOS solves a number of these problems. It's no walk in the park to set up but it's a really nice stack.


Speaking of GHC, you guys should work on supporting Haskell.


Is btrfs stable with hibernate now? The last couple of times I used it, it crashed on me.


I had a couple of problems requiring a btrfsck, specifically related to google-chrome cache directories. Haven't had any since. I use btrfs snapshotting a lot (for convenient system rollback).

I definitely am not running btrfs on a production machine, but I love it on this laptop. Wouldn't call it stable yet.

edit to note that I don't hibernate, just sleep.


I don't know why you'd ever use hibernate anymore. I can cold boot to desktop faster than it takes to read 16GB of RAM contents off a mechanical drive.


Speed isn't the only consideration. If you're using disk encryption, the only way to remove keys from RAM is to power off or hibernate. Preserving state between sessions with hibernation is much more convenient. Also, Linux only uses RAM * 2/5 as the size of the hibernation image [1]. You can make this even smaller by changing /sys/power/image_size. So with 16GB of RAM, it only has to write/read about 6.4GB.

[1] https://www.kernel.org/doc/Documentation/power/interface.txt
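The arithmetic from the comment above, spelled out: by default the kernel caps the hibernation image at 2/5 of RAM (tunable via /sys/power/image_size).

```python
# Default hibernation image cap: 2/5 of RAM.
ram_gb = 16
image_gb = ram_gb * 2 / 5
# image_gb == 6.4, so only ~6.4 GB is written out and read back,
# not the full 16 GB.
```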


If you use hibernate won't the keys be stored on the swap partition?


You can use LVM to make the swap partition inside the encrypted container. If you don't want to use LVM, you can just use a swap file on an encrypted partition (but this isn't supported with btrfs).


This is inflammatory and vapid. OpenSSL just needs better code review and should follow the standards they are programming to. It's not the death knell for an entire programming language.


True, though hopefully it will make people realize that, no matter how smart the programmers working on a project of this caliber are, there will still be vulnerabilities like this caused by using C.


The vulnerabilities were not caused by using C; they were caused by human error. C may be harder to read, but it is far faster than most of the alternatives.


That particular human error would have been impossible if using another popular language (other than perhaps C++).
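A loose sketch of why, in a bounds-checked language: the Heartbleed pattern is an attacker claiming a payload length larger than the actual payload. In C, memcpy will happily read past the buffer; in a memory-safe language the runtime checks bounds and the lie fails instead of leaking memory. The function and names here are purely illustrative.

```python
# Heartbleed-style request in a bounds-checked language: the attacker
# claims a payload length larger than the real payload. The overread
# becomes a detectable error, not a memory leak.

def echo_payload(payload: bytes, claimed_len: int) -> bytes:
    buf = payload[:claimed_len]   # slicing clamps: no adjacent memory read
    if len(buf) != claimed_len:   # the inflated length is detectable
        raise ValueError("claimed length exceeds actual payload")
    return buf

echo_payload(b"bird", 4)          # honest request: returns b"bird"
# echo_payload(b"bird", 64000) raises ValueError instead of leaking
# ~64 kB of adjacent process memory.
```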


Perhaps you missed the part of the drama where it was revealed that, years ago, OpenSSL chose to roll its own malloc rather than work out issues in their code that were exposed when porting to other platforms? OpenSSL would just have rolled its own shims to circumvent whatever protections you think other languages provide.

It's the attitude that was wrong, not the language. Stupid always finds a way. Back in the day, porting to different compilers and platforms was one way to find and quash bugs. Nowadays I guess you can just pick a single OSS compiler and rely on its implementation details and bugs. Drag your own chunks of libc around and presto, no porting headaches. That's such a stupid attitude.


Umm... April fools?


It's not an April Fools joke. It was a hackday project that turned out cool.


You should wait until Paypal accepts bitcoin to say that.



Too early and not funny — but good catch, trolling will be starting soon.


They seem to be pretty serious.

