Hacker News | apg's comments

That's a nice reference. Perhaps ironically, and definitely IMO, vanilla-js looks to be the simplest. Followed by jquery.


With the right case, I don't see how this would withstand Supreme Court review. The Ninth Circuit doesn't seem to like it (Drew/Duval). My guess is that SCOTUS wouldn't approve of private parties drafting their own criminal laws.


Do you want to run afoul of multiple felonies (1 per day per ToS violation), along with dozens of other "felonies" that you undoubtedly commit without knowing it...

Just so you can petition the Supreme Court to take your case, knowing that they can just sit back instead and do nothing?

Wasn't there this Aaron kid that got hit by that very tactic? Hmm. He didn't take the "decades in prison" so well, did he?


> He didn't take the "decades in prison" so well, did he?

So the example you take from what happened to Aaron Swartz is to apply yet more propaganda from the other side, instead of facts?

It's not even something you have to mislead about; 2-3 years in prison as an upper-end estimate is bad enough to make your point without being evasive, and even the ~6 months upper limit that the prosecution had offered in a plea bargain is serious enough to make your point.

You could even mention that Aaron would have been a "felon" had he been convicted of or pleaded guilty to those charges, and you'd have been accurate.

And yet you choose the only outcome that wasn't actually possible to push your point forward...


Yes, because I try to evade a major defense contractor's attempts to kick me off their network dozens of times a day.


Aaron was sick; don't pretend that anyone under that kind of pressure would simply kill themselves.


You could also say that the agreement is void w/r/t a minor, because they don't have capacity to contract, via the Statute of Frauds. Of course there are plenty of times when they can contract, but TOS in general seem completely obtuse and not actually conducive to anything worthwhile.


From what I can tell, the biggest issue you (and many others) have is the environment/culture issue. The stuff you see, or some of the ways you are expected to code with C# in many .NET shops, will make you want to claw your eyes out. (And though I don't have any first-hand experience, I reckon it's the same way in Javaland.)

F# is a mighty fine language but introducing that into an established shop just isn't going to happen. It's personally my favorite language but there is a very steep learning curve coming from C#. For my projects I use Windows & Linux, C#, Python, JS (client), and F# all over the place. I keep the programs small with a service orientation so it's easy to switch languages.

Anyway, all frameworks have their problems. It's hell to make a program of any appreciable size in Python. Javascript/Node turns into a callback hall of mirrors. Avoid creating a mammoth framework with these and it all works out great. And while it doesn't have everything in the world like the Python ecosystem, the .NET OSS ecosystem is really coming around. And Microsoft itself is encouraging exploration of other frameworks, e.g. NodeJs and WinRT's first-class Javascript support.

My take is that the problems usually aren't framework/tech related. It's about people and process. Diving into a new framework really isn't about the tech itself; it's about learning the tech, and thereby giving your brain some much-needed stimulation.


I have a morbid fascination with articles like this, and it bugs me. Even though a few ideas in the piece may feel intuitively correct, this is mainly just anecdote-driven journalism: no evidence, no statistics, just a handful of observations from what is (probably) an elite mid-Atlantic urban neighborhood. Parents may seem hyper-competitive and hyper-protective in her neck of the woods. In mine, not so much. It's pretty lame to say "American kids are brats" based on the handful of interactions the author has had in her community.

Who knows. I'm sure there is some crotchety French expat in the Netherlands bemoaning the state of French kids today.


Ok, let's say the author's methodology is flawed. Does that make her larger point invalid? Should we not, as parents, be mindful of our children's manners?

My daughter is starting to develop her vocabulary now, and this article was a good reminder to me that we need to start pushing "please" and "thank you".


I'm not so sure that the need for touch is really that great in a desktop environment. It's probably cool for someone looking over your shoulder and wanting to take control of your computer. But for sitting at the computer and making something happen... I don't think the usability needs to be 'equal'. And in the desktop world it should favor the keyboard/mouse.

Here's an anecdote for touch vs. mouse/keyboard:

Pretty sure I'm not - nor is my family's setup - typical of a Windows user. We have two touchscreen desktops in our house.

I have a two-year-old who can work the touchscreen very proficiently. My 5- and 7-year-olds stick with the mouse and never use touch, I almost never use touch, and my wife rarely uses it.

Based on my personal observations, on the desktop, touch is good for very new users or those who lack the physical coordination to control the mouse and keyboard. Otherwise, mouse and keyboard wins.

The Windows 8 mouse experience is really lacking at the moment. I expect it will get a major update in the next version.


I agree with you, that was the point of my post. I think that touch interfaces and desktop interfaces can stay distinct or complementary. I think there is an expectation that Windows 8 must make a touch based app great in desktop mode and vice-versa which I think is untenable.

It's like old efforts to make cars and vans run on the same body. The cars handled like vans and the vans handled like awkward cars.

If you're looking to only use a mouse, I suspect you will be spending your time in Desktop mode. I know that the CTP has enhanced mouse support in RT mode, but it's still touch-first apps. I don't like tablets and don't love the nature of touch interfaces, and I know that I will not be happy with a mouse and WinRT mode, ever. Frankly, if the buttons and layouts are accommodating for touch interfaces, it's NOT going to be optimal for use with a mouse. (I think some additional evidence of this is the intrusion of the ribbon into Windows Explorer. IMO it's pretty obvious that it's motivated by touch-interface needs.)


I've tried the initial Windows 8 developer preview, and I have to say it has been a jarring experience to switch back and forth between Metro and the "Windows Classic" desktop. The visual experience of Metro vs. the Win7 style is so different that it feels slapdash and disjointed. This was just the initial developer preview, though. I'm sure the next CTP will be worlds better - and I'm sure Win8/Metro will be sweet on tablets.

There was a mega post at the Building Windows 8 blog justifying (rationalizing?) their decisions. It's interesting reading to slog through. They've certainly put a lot of resources into this decision, and the future of Windows-based desktop PCs feels like it hangs in the balance. It feels like a really risky bet.

http://blogs.msdn.com/b/b8/archive/2011/10/11/reflecting-on-...

Ideally, MS would add a "Windows 7" mode that I can turn on and use my computer the way I've become accustomed to. I don't want to learn to be more efficient with swipes/fewer clicks/etc. - I've got a good thing going here. The first thing I used to do with XP was switch it to Windows Classic mode, so I'm probably not their target user.

FWIW, I also dropped Ubuntu post-Unity (running Debian now) and haven't looked back. This dog is too old to want to deal with learning any cheese moving tricks.


Everyone hates change and would love to go back, until they actually do go back. The status quo is familiar and therefore seems more efficient. It's "muscle memory". The current desktop system is a metaphor that never really played out.

Will Windows 8/Unity/Gnome 3 be better? Who knows. Will it be worse? Probably not. More than likely it will just be "different". Either way, if computing doesn't keep changing and evolving, we're going to be stuck in the barely-functional status quo forever. I'm willing to live through (and learn to love) the rough parts along the way until we reach the best we can possibly do.


> Everyone hates change and would love to go back, until they actually do go back. The status quo is familiar and therefore seems more efficient. It's "muscle memory". The current desktop system is a metaphor that never really played out.

This is largely quite true. I was probably one of the few people among my peers who actually appreciated the new taskbar introduced in Windows 7. Although that's not to say that criticism of Windows 8 isn't important; I think one of the things Microsoft did learn with the Windows 7 betas and release candidates was that feedback from the user base can very occasionally be helpful. In some exceptional cases, monitoring user feedback can short-circuit certain disaster.

The other thing to keep in mind is that change for the sake of change isn't always good--and likewise, it isn't always bad. Paradigm shifts will happen, things will change, and sometimes legacy designs will continue to persist for want of familiarity. What I'm saying is this: Changing too much too drastically is a bold move. It might pay off; it might also be disaster.

My personal preference is to agree with apg; the developer preview was less than stellar, but my expectations weren't terribly high. I don't really like the paradigm shift Microsoft is trying to force across the Windows brand as it exists for a desktop OS, but it might be great for what it was designed for--tablets and phones.

It should be said that I also didn't care much for Unity, even though I gave it an honest try for about two to three weeks. There were some things that it did well--and were equally quite handy--and there were others that felt too jarring and awkward. Metro feels somewhat similar in that regard. However, I'll reserve my full judgement until we're closer to a finalized product, so we can see what Microsoft's vision ultimately boils down to; thus far, I'm not terribly impressed, but I admit they're making a decent effort. I do look forward to seeing the finished product even if I absolutely hate it. :)


I'm going to make the opposite argument. I think the current batch of Start Menu, dock, and panel launchers represent the product of continuous refinement. I honestly question whether the interface can truly be improved (versus made different) for current desktop interfaces. By current desktop interfaces, I mean a monitor, mouse, and keyboard. Right now, there is no better interface for accomplishing real work on a computer.

Tablets and future, immersive, physically interactive interfaces (e.g. "virtual reality") will certainly be different, but I don't see why we should completely warp our well-established paradigms of interacting with current technology.

Incidentally, the greatest sin I see is that they seem to be keeping the Start Menu concept, but they're hiding it!


> and I'm sure it Win8/Metro will be sweet on tablets

Exactly. The hot trend right now is to throw desktop aesthetics and performance out the window and just build products for small touchscreens.

Dumb.


They will learn that it was a dumb move when corporate customers with hundreds of thousands of installations refuse to upgrade because the eye candy does nothing to improve productivity on their (obviously non-touchscreen) workstations. Coming Soon: Windows 9 Business Edition with the Classic keyboard-and-mouse UI.


I also hope KDE5 will be more like KDE3 than KDE4.


As soon as I "experienced" Unity I RAN to Gnome, screamed in horror, then ran to KDE and just cried a bit to myself. What the hell is happening to Linux? Ever since Steve Jobs' decline and death, it's like every amateur designer in the world has been trying to take his place.

They're all trying to mimic his vision and design, but without understanding it at all. I think this is why we keep ending up with all these heartless, sterile, Kindergarden UI designs.


Try XFCE. Linus Torvalds said he uses it.

Meanwhile, KDE isn't all that bad nowadays. I actually kinda like KDE 4.8, it's come a long way since the early days of KDE 4.


Mint is still using Gnome 2, or I think there is a new fork of it named something else.

I agree though, although I blame the shift mainly on the tablet/smartphone mentality.


Mint is using a Gnome 3 fork called Cinnamon.


EDIT: I am referring to the dumbing down, over-simplification, extreme minimalization, and looks-over-usability of next-generation UIs in general, NOT to Windows' entire "Metro UI".

From this day forward we shall call this abomination "Kindergarden UI".

Example: Tom: "Hey how's Windows 8 compared to Ubuntu?". Stacy: "Oh they're both going to shit, they've got kindergarden UI now."

I wish Microsoft would realize that Windows is NOT OS X. In their desperate attempt to fabricate a Windows cult following and an "it just works" culture, they're going to end up destroying the very thing that made Windows what it is.

I tried unsuccessfully to convert to Mac a few years ago, and numerous times to convert to Linux. In my particular situation, my Linux installs have less stability than my Windows 7 installation. After trialing 10+ operating systems over the last 7 years, I realized: I love Windows; it's Microsoft I hate.


From this day forward I shall call people who tell others what to think "doomsayers". Judging a UI before you can actually try a working, functional version of it and then espousing that ignorance to others is just asinine. Even worse if you've tried it and hated it just because it's different.

"Kindergarden UI" is a ridiculous term, and very subjective. I really hope no one actually takes your advice on anything.


> From this day forward I shall call people who tell others what to think "doomsayers".

But freehunter, I'm just trying to mimic Steve Jobs like the other designers :(

Naa, miscommunication here. I try out a new linux flavor almost twice a year. I tried out the Windows 8 preview. I kept my mac around for months when I was trying that out. I LOVE different, I love simple. What I don't love is "minimalization without understanding what needs to be minimalized".


I apologize if there was a misunderstanding. There's a lot of hate for change, especially in this thread. Metro UI might not be the best, but the desktop is a tired metaphor that wore out its usefulness after Windows 98. I welcome UI changes with an open heart and an open mind. I'm willing to hear them out (which I got burned on by Canonical when I began using Unity and it didn't get better).

I may have jumped to conclusions with your post, but you must admit there was no obvious sign of open-thinking ;)


Well, the preview of Win8 showed that you can switch between Metro and the classic desktop. When I deploy Win8 to my users, it'll be locked to desktop mode. Metro will be a netbook/tablet/embedded-only thing.

Don't take the complete fail that Unity is and apply it to this situation. Shuttleworth has no big corporate customers he needs to cater to. Those that don't like Unity can figure out how to install Gnome or LXDE or whatever. Windows, on the other hand, will have both options ready to go from day one with the ease of a click. MS knows who butters their bread: corporate clients.


This is how I feel about Unity and why I moved to Debian+Openbox.

Unity might be incredible on a tablet, where ctrl+space to bring up a launcher and typing "fir<enter>" to launch Firefox is slower than slapping an icon with my sausage finger. But on the desktop, Unity is a step backwards and adds nothing for me when my fingers are on the home row all day. And trust me, I really tried to use Unity, and I'm rather open-minded about new paradigms (Openbox was new to me).

I haven't tried Win8 yet, but it sounds like a similar situation. But I don't feel like the future of Windows-based desktop PCs is under fire. As long as Windows has powerusers, there will always be a solution to what might be a step backwards in design.


I don't think users will easily switch from the XP/7 experience to the Windows 8 "tile" one. And it seems all they are doing is forcing this new experience on them even more.

Whether it's old Windows users, Mac users, or Linux users - the transition will definitely not come as easily for them as the transition from XP to Win7 did, which was also heavily helped by the fact that people were already very tired of XP.


A lot of people who are complaining are most likely suffering from Baby Duck Syndrome. I haven't had the opportunity to try out Windows 8 yet, but I'm sure this is a calculated risk which will hopefully appeal to newer generations.


> I also dropped Ubuntu post-Unity (running Debian now)

Why? If you're technical enough to run Debian, you're technical enough to run Ubuntu with Window Maker or some other non-Unity UI.


Because Ubuntu has become, for the last 2 years or so, a game of "what will they break next". It started, arguably, with the not-even-half-baked initial PulseAudio implementation, and continued on to Unity.


Canonical seems to be playing the long game and tossing things out as they become available. It's like a waiter bringing you your meal as each piece gets done cooking. First he brings the sauce, then he brings the chicken, then he brings your beer, then he brings the beans. It might be a fantastic meal when put together properly, but piecemeal it's just an unappetizing mess.

I have no doubt that Ubuntu 16.04 (or whenever they finish) will be amazing, but in the meantime I really wish they would just wait until they have everything ready.


*It's like a waiter bringing you your meal as each piece gets done cooking.*

And then when you complain, they say, "don't be a whiner, you received a three-course chicken cordon bleu meal prepared by a trained French chef!" Never mind that it was out of order and disassembled.


Not a bad analogy.

I recently migrated my main desktop back from Windows 7 to Linux, and I actually ended up on Fedora. I've been pretty pleased with it overall. Sure, I ripped out Gnome 3 and dropped in KDE 4.8, but on the whole it really works quite well, including 3D (I recently replaced my old ATI card with an NVidia 560, which certainly helped).

Surprised even me. My Linux journey, going back 12 or so years, was more or less Mandrake -> Debian -> Gentoo -> FreeBSD -> Ubuntu -> Slackware. I tried Debian again recently and didn't really like where any of its branches fell - unstable was TOO bleeding edge, while even testing lagged annoyingly far behind. Arch is nice, and I like it on servers, but it's too fiddly for a desktop. Fedora really seems to be right around the sweet spot: it's usable, pretty up to date, but more importantly it's very much "Linux" as I remember it, without lots of not-very-well-documented magic going on.


See, I have a different story with Fedora (and maybe this is partly due to Gnome 3). I installed it on my desktop/home server. I've got 6 hard drives, and I couldn't manage to get SMART warnings to stop popping up because one of them had 6 reallocated sectors. Like, every 3 minutes SMART was screaming at me.

Then I was trying to install Amahi, and it required a package that was different from one installed by default. I could not figure out how to get yum to replace the current package with the one my software needed. The help from the developers? "--force is never a good option, so we don't offer it". Thank you, Microsoft, for protecting me from myself. I uninstalled the package, and it automatically uninstalled NetworkManager and everything related to DHCP/networking. After that, I couldn't install the package I wanted because I couldn't configure a network connection.

It really seems the only time I have to resort to the command line is when the distro's "intelligence" breaks the GUI. Ubuntu breaks things too, but at least when it does, I know the best recourse (since I partition /home on its own drive) is to just reinstall. That wouldn't have fixed a thing in Fedora.

I grew up on a command line, and I really hate being forced to go back to it just because the developers don't care enough to make the GUI work properly.


Is there any way on HN that I can save this comment for future reference? That was a pretty apt analogy.


I use OneNote for this :)

I have a notebook set up in there for "profound internet quotes".


Ubuntu 11.04 is a few versions behind on node.js, so a lot of modules won't install. If they're behind there, I'm sure they're also behind in other programming environments as well. You can always install these things from source, but for programmers a rolling-release distro like Debian or Arch makes more sense.
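Before building from source, it can help to check whether the packaged node actually falls short of what a module needs. A minimal sketch in Node itself, with no dependencies; the version numbers below are hypothetical examples, not the ones Ubuntu 11.04 actually shipped:

```javascript
// Compare two dotted version strings numerically, segment by segment,
// to decide whether the packaged node is older than a module requires.
function versionLt(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] || 0; // missing segments count as 0 ("0.6" -> "0.6.0")
    const y = pb[i] || 0;
    if (x !== y) return x < y;
  }
  return false; // versions are equal
}

// Hypothetical versions for illustration:
const packaged = '0.4.9'; // what the distro repository ships
const needed = '0.6.6';   // what a module's package.json demands

if (versionLt(packaged, needed)) {
  console.log('packaged node is too old: build from source or use a rolling-release distro');
}
```

Note that this naive numeric comparison ignores pre-release tags and the like; it's only meant to illustrate the version-lag check, not replace a real semver library.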


I would really love an actual example of what he is talking about. Sure I can use my imagination, but it would make the post more interesting. It's a good point he makes, but concrete details always provide an excellent analytic touchstone.

Incidentally, this is a good trait (IMO) to instill in your kids. I love arguing with mine.


I thought of this after we hired @qrush. He instantly started arguing with everyone about everything, from technology choices to feature selection, and I thought to myself, "that's so awesome". And then I realized that we've had a long history of hiring people like that, and how well it's worked out.

And, at the same time, how less than stellar it worked out elsewhere, both for myself and for others who fell into the spark-on-the-first-day column.


My method is like yours, basically. I use Notepad and good-old paper.

I'm leery about the idea of ever having a "living calendar". You've gone from the one-dimensional todo list of "what" to a more complex decision tree with another dimension of "when" in addition to "what".

I do like the concept of a "living todo.txt". Sure, it is unrealistic to be able to strike off every item on your ToDo, but every day brings re-evaluation. Approach your todo like Yoda - there is only "do, or do not".


This all makes no never-mind to me, as I'm going to turn that right off (you can do that, right?). I prefer to evaluate on my own what information is relevant and what isn't. Happy accidents, you know.

Did Google ever announce what the rationale for this is, from a revenue perspective? Is a link with content that a person is likely to recognize also likely to generate extra revenue?


> I'm going to turn that right off (you can do that, right?)

Yes. Settings icon ( ⚙ ), then Search Settings, then Do not use personal results.


What if you don't want to tell them who you are at all?

I.e., you block cookies.


They'll still no doubt be using geolocation to filter results, as well as time of day, browser being used, etc...

At their scale, they can no doubt see patterns in data that make it worth personalising search based on those factors, regardless of whether or not you are signed in.


I'm less concerned with receiving unbiased search results than I am being in control of the choice to divulge my own identity.


The rationale is clearly to drive traffic to their own products at the expense of competitors like Twitter and Facebook. SPYW is just more advertising, except for now it's limited to Google properties. They're obviously banking on people not noticing or caring that most search results are now paid placements.

