Hacker News | danheberden's comments

NodeLists don't have .forEach in their prototype chain, kiddo ([].forEach.call(nodeList, hollaback);)
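The borrow-the-method trick works because Array.prototype.forEach is generic: it only needs a .length and numeric indices. A quick sketch, using a plain array-like object as a stand-in for a real NodeList (the values here are just illustrative):

```javascript
// Pre-ES2016 NodeLists lack .forEach, but Array.prototype.forEach is
// generic -- it works on anything with .length and numeric indices.
// A plain object standing in for a NodeList:
const nodeListLike = { 0: "div", 1: "span", 2: "p", length: 3 };

const seen = [];
[].forEach.call(nodeListLike, function (el, i) {
  seen.push(i + ":" + el);
});

console.log(seen.join(","));  // → "0:div,1:span,2:p"
```

The same pattern works for map, filter, slice, etc. -- anything on Array.prototype that is spec'd to be generic.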


Probably because GitHub uses jQuery.


No, just the amount of text content downloaded and parsed into the DOM was the issue.

Kinda surprised by the downvotes, considering the poor, intentionally provocative OP.


Intentionally provocative? It's simply a counter argument.


"poor intentionally provocative OP"?

How is a list of irrefutable facts a "poor intentionally provocative [original post]"?

Show me how smart you are!


You seem very upset and are surely alone in feeling that way. Please refrain from posting if you're going to yell at people who oppose the opinion of the topic.


How in the hell am I going to have bandwidth for jQuery AND my 200K of carousel images?!


It's not just about bandwidth, it's also about parsing and executing ~82KB of JavaScript before doing anything on the page.


Do you happen to know what the performance impact of parsing and executing ~82KB of JavaScript is? Genuinely curious to know the numbers.


I get about ~7ms in Chrome on a MBP and ~200ms on my shitty Nexus S. http://danheberden.com/tools/howlongtoloadjquery.htm


Cool. A few comments:

* You're loading the unminified version.

* You're factoring the network request time into the benchmark. I was interested in pure parse time, which was 27ms according to Chrome Developer Tools: http://i.imgur.com/hgFiGTu.png (the score on the page was 110ms).

I'm curious what people who complain about "parse time" consider to be an acceptable score.

Edit: Ah, I didn't realize you were discarding the network time. I was fixated on the number spat out by the page.
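For anyone wanting a self-serve number for "pure parse time": one crude proxy is timing the Function constructor over a large source string. This is not jQuery's actual parse time, and engines may lazily compile function bodies, so treat it as an order-of-magnitude sketch only:

```javascript
// Crude parse-cost proxy: time how long the engine takes to parse/compile
// a synthetic ~60-70KB source string via the Function constructor.
// NOT a real jQuery benchmark -- just a rough probe, and lazy compilation
// means this can understate full execution cost.
function timeParse(source) {
  const start = Date.now();
  new Function(source); // source is parsed (and at least partly compiled) here
  return Date.now() - start;
}

// Build tens of KB of trivial-but-parseable statements.
let src = "";
for (let i = 0; i < 2000; i++) {
  src += "var v" + i + " = { a: " + i + ", b: 'x" + i + "' };\n";
}

const ms = timeParse(src);
console.log("parsed ~" + Math.round(src.length / 1024) + "KB in " + ms + "ms");
```

Running it a few times and taking the median smooths out JIT warm-up noise; profiling a real page load in DevTools (the "Evaluate script" entries mentioned below) is still the more honest measurement.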


74ms for the initial load and about 20ms on subsequent loads on a MBP with Chrome; Ubuntu Chrome is similar. 837ms initial load, about 140ms on subsequent loads on a Moto X on T-Mobile.


I thought we were taking download time out of this.


The "subsequent load" time should be pretty close to this. I'm getting about 20-30ms of "Evaluate script" time when I run a profile in Chrome's Timeline tool.


And I'm sure no one uses methods like on, off, live, delegate, data, splice, slice, eq, add, andSelf, prevAll, nextAll, next, prev, siblings, children, wrap, clone, detach, appendTo, serialize, animate, width, height, ready, type, $.when, $.data, $.Deferred and more.


Odd - Crawl Errors actually displayed "No known errors", which is strange since it was definitely having issues fetching as Googlebot. Thanks for the suggestions and finds, though - we'll be looking at it more often for sure! :)


That's usually the case if you've just verified the site for the first time. It takes a few days for the first data to be collected. Because of this we always recommend webmasters verify early (before things break) and monitor constantly.


For the most part the compatibility modes work, but there are inconsistencies - such as bugs that appear in real IE7 but not in IE8's compatibility mode. It's a headache, and one that's somewhat remedied by just using the actual browsers.


Does Microsoft Expression Web SuperPreview (Microsoft's official tool for this purpose, which comes with the various rendering engines plugged into a meta-browser) have these same deficiencies?


Thanks for the info. That's what I was hoping to find out by having these. They now live happily within my suite of various development/testing tools.


Did you try running it in compatibility mode and as admin? It seems IE7 on win7/64bit is hit or miss :/


I did, thanks for the suggestion.



I'm not getting any web seed activity. I'm guessing it might be due to the seeds linking to /spoon/ie6.exe/ etc, with the last slash included, which doesn't go anywhere if you visit it with a browser. Know of a way to edit the file?


Yeah, I don't think uTorrent created the urls properly. TPB reports "temporarily disabled" for editing the torrent though :/ I'll bump my seeding cap up a bit to help.


Fixed it for you :) Also removed the now-non-existent TPB tracker (unless I've missed the news?)

http://dl.dropbox.com/u/363028/temporary/Spoon.net%20Sandbox...

edit: uh, I think. Transmission is throwing fits and not writing to disk...


Doesn't your torrent client have the ability to add arbitrary webseeds to any torrent? Just use that.


Unfortunately no, though I'm hunting for one that does, and lets me save it.


Do you have Python? And a BitTorrent encoding library: http://pypi.python.org/pypi/bencode/1.0 ?

Then I have something which will help: http://pastie.org/1559062

It conforms to http://bittorrent.org/beps/bep_0019.html and can (of course) be trivially upgraded to conform to http://bittorrent.org/beps/bep_0017.html also.

Happy downloading.
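The pastie above is long gone, but the BEP 19 idea it implements is small: web seeds live in a top-level "url-list" key of the torrent's bencoded dictionary. The script referenced was presumably Python; here is a JavaScript sketch of the same idea, with a toy bencoder written inline instead of a library (string-based for readability - real .torrent files are binary, so production code should operate on Buffers):

```javascript
// Minimal bencode encode/decode, enough to add a BEP 19 web seed.
function bencode(v) {
  if (typeof v === "number") return "i" + v + "e";
  if (typeof v === "string") return v.length + ":" + v;
  if (Array.isArray(v)) return "l" + v.map(bencode).join("") + "e";
  // dictionary: keys must be sorted per the bencoding spec
  return "d" + Object.keys(v).sort()
    .map(k => bencode(k) + bencode(v[k])).join("") + "e";
}

function bdecode(s) {
  let i = 0;
  function parse() {
    if (s[i] === "i") {                        // integer: i<digits>e
      const e = s.indexOf("e", i);
      const n = parseInt(s.slice(i + 1, e), 10);
      i = e + 1;
      return n;
    }
    if (s[i] === "l") {                        // list: l ... e
      i++;
      const out = [];
      while (s[i] !== "e") out.push(parse());
      i++;
      return out;
    }
    if (s[i] === "d") {                        // dict: d ... e
      i++;
      const out = {};
      while (s[i] !== "e") { const k = parse(); out[k] = parse(); }
      i++;
      return out;
    }
    const c = s.indexOf(":", i);               // string: <len>:<bytes>
    const len = parseInt(s.slice(i, c), 10);
    i = c + 1 + len;
    return s.slice(c + 1, i);
  }
  return parse();
}

// BEP 19: append a web seed URL to the top-level "url-list" key.
function addWebseed(torrent, url) {
  const meta = bdecode(torrent);
  meta["url-list"] = (meta["url-list"] || []).concat([url]);
  return bencode(meta);
}
```

This also explains the trailing-slash trouble upthread: per BEP 19, a url-list entry ending in "/" has the filename appended by the client, so a seed URL like .../ie6.exe/ ends up pointing at a path that doesn't exist.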


If you read the content of the site, w3schools was contacted multiple times. They also browsed the GitHub repo and started making changes to their site to save face.

You're absolutely right - if they made it something community-driven and contributed to, it'd rock! That's why MDC is linked to: it's a wiki!

p.s. IE8 working cool for me :)


Yeah, I spend my time on an open-source library used by thousands of people, supporting said library, helping users with problems and contributing to documentation. So dumb to link to already available, good resources instead of building a better alternative. I wish I wasn't so lazy! </sarcasm>


Out of curiosity, can you link to one of those good resources, and to an equivalent w3schools page?

Setting aside questions of Google rank and discoverability, I suspect the "good resource" will have much more information, will be more correct, but will also be more verbose, may assume more prior knowledge from the reader, and will be less likely to provide instant gratification.

There's a big difference between a good resource for an experienced developer and a good resource for an absolute newbie, and I haven't seen that addressed yet.

