oddgodd's comments | Hacker News

I'm not sure it's a settled matter that HIPAA doesn't apply in this case. PHI includes demographic information, which would seem to apply here based on what we already know has been leaked.


In modern usage of the term they don't. The term originated in underground circles where anonymity by all participants was assumed, and where there were probably legal or criminal revenge consequences for tying a pseudonym to a real identity.

Kind of like how troll now means 'person who is an asshole on the internet' instead of 'post designed to rile people up and elicit frivolous responses'. The meaning has changed over time, for better or worse.


I don't know, even Wikipedia seems to agree with me.

http://en.wikipedia.org/wiki/Doxing

And didn't the GGers "dox" Randi, Anita, Brianna, etc?

But I'm even more old school because I'd just call it skiptracing instead of doxing...


You're ignoring the bits of that article you don't like:

Essentially, doxing is revealing and releasing records of an individual, which were previously private, to the public.

Where's the "reveal" in this hack? They'll use the hacked info privately or sell it.


What are you talking about? I never said this was "doxing"; I know there was no reveal. I was referring to the definition of "doxing" which I felt didn't fit.

Did you read the comments completely? If Randi, Anita, and Brianna weren't anonymous but they were "doxed", it seems to me that "doxing" doesn't have to refer to revealing info about an anonymous person.


A modern clone of the PCB has been done: http://www.willegal.net/appleii/apple1.htm

Sourcing all the required components is likely to be difficult.


My understanding is that he objects to the non-free status of the firmware present on other devices.


Look at the "Try Turnkey on Amazon..." banner at the top of the turnkeylinux.org home page and tell me the artifacts don't look like hammered ass. I'll stick to lossless for anything with textual elements, thanks.


I also don't get the use of these icons where the text is really fuzzy. Can you make out the word "Linux" at the bottom?

http://www.turnkeylinux.org/files/images/icons/drupal.jpg?12...


No, but I can't make that out in the PNG version either. http://www.turnkeylinux.org/files/images/blog/jpg-vs-png/dru...

Looks like an issue with the original file - it was probably created at a higher res and then down-sampled.


Thanks, that's a good point. I actually stopped noticing that bit of silliness. I just sent Alon, who did an otherwise commendable job slapping together those icons in the GIMP, an email asking about that...


It really is just a bad idea. Or at least one that is working against ideas central to the way Linux is currently used.

Universal/fat binaries made sense on the Macintosh because there is no concept of program installation on that system. While I think that eschewing installation is generally a better design, one drawback is that if you want to support multiple architectures in one application, you have to do the architecture check when the program is loaded.

Central to Linux and Windows is the idea of program installation, either through packages or installer programs. No one is interested in making it so that you can drag and drop items in Program Files or /usr/bin between systems and expect them to run, which is the only thing that using fat binaries really gets you over other solutions.

Nearly all of the commercial binary-only software I have seen on Linux (and other Unixes) uses an installer program, just like Windows. There is no technical reason why such an installer couldn't determine the correct architecture to install.


Not quite. Current Linux packaging formats encourage the developer to build one package per architecture. This means presenting several download choices to the user, which can be confusing. The user doesn't always know what his architecture is.

The problem can be solved in two ways:

1) Distribute through a repository and have the package manager auto-select the architecture. However, this is highly distribution-specific; if you want to build a single tar.gz that works on the majority of Linux distros, you're out of luck.

2) Compile binaries for multiple architectures, bundle everything into the same package, and have a shell script or the like select the correct binary.

While (2) is entirely doable and does not confuse the end user, it does make the developer's job harder. He has to compile the binaries multiple times and spend a lot of effort on packaging. Having support for universal binaries throughout the entire system, including the developer toolchain, removes not only the confusion for the end user but also the hassle for the developer. On OS X I can type "gcc -arch i386 -arch ppc" and it'll generate a universal binary with two architectures. I don't have to spend time setting up a PPC virtual machine with a development environment in it, or setting up a cross compiler, just to compile binaries for PPC; everything is right there.
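As a rough sketch of approach (2), a launcher can map the machine type onto a bundled per-architecture binary. This is a minimal illustration only; the `bin/<arch>/myapp` layout and the binary name are hypothetical, not part of any real packaging standard:

```python
import os
import platform

# Map the various strings platform.machine() / `uname -m` can return
# onto the directory names the (hypothetical) package uses.
ARCH_ALIASES = {
    "i386": "i386", "i486": "i386", "i586": "i386", "i686": "i386",
    "x86_64": "x86_64", "amd64": "x86_64",
}

def select_binary(bundle_dir, name="myapp"):
    """Return the path of the bundled binary matching this machine."""
    machine = platform.machine()
    arch = ARCH_ALIASES.get(machine, machine)
    return os.path.join(bundle_dir, "bin", arch, name)
```

A real launcher would then exec the selected path, printing an error if no binary for the current architecture was shipped.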

I think the ultimate point is not to make impossible things possible, but to make already possible things easier, for both end users and app developers.


> I don't have to spend time setting up a PPC virtual machine with a development environment in it

Neither the compiler nor the emulator is perfect. Somebody has to test and debug your packaged app on actual PPC hardware.


> Somebody has to test and debug your app on actual PPC hardware.

Our Xcode-supported unit tests transparently run three times -- once for x86_32, once for x86_64, and once for PPC. The PPC run occurs within Rosetta (i.e., emulated).

If the tests pass, we can be reasonably sure everything is A-OK. In addition, we can do local regression testing under Rosetta (but it's rarely necessary -- usually everything just works).

The only native PPC testing we do is integration testing once we reach the end of the development cycle.


I doubt anyone would have a problem with toolchain support being added. But you don't need a kernel patch to fix the user end of the equation. It doesn't add anything that can't be provided just as conveniently (or more so - the shell script approach doesn't require any changes on the user's side) without it.


To your point one, I agree. I also fail to see how universal binaries help. The problem isn't with supporting multiple architectures, it's with supporting multiple distributions.

Regarding point two, the developer is stuck with the packaging hassle regardless. The binary goes one place, config files and man pages in others, maybe you want a launcher in the GNOME and KDE menus... You are stuck with writing an install script anyway.


Yes, universal binaries do not help when it comes to supporting multiple distributions. However, I have a problem with the fact that Linux people downright reject the entire idea as being "useless". This same attitude is the reason why inter-distro binary compatibility issues still aren't solved. Whenever someone comes up with a solution for making inter-distro compatible packages or binaries, the same knee-jerk reaction happens.

And yes, the developer must take care of packaging anyway. But that doesn't mean packaging can't be made easier. If I can skip the step "set up a cross compiler for x86_64" then all the better.


> So please think hard on this, before you dismiss this as stupid or untenable.

I have. This is stupid and untenable.

Problem one: Right now if I encountered a login form that didn't mask the password I would probably attribute this to incompetence, not usability. I don't think I'm the only one.

Problem two: Right now all login forms work the same. The top field is the username and under that is the password field. This would break that consistency by adding the "show (or hide) password" behavior. In his description he even suggests that some sites default to a different behavior based on some notion of degree of security. Now logging in with someone looking on becomes quite a bit more nerve-wracking because you need to figure out if the password field will disclose your password. This is less usable.

Now, where I think this may be useful is if it is added as part of the "invalid password" behavior. Offer to give the user help only if they need it. Provide them a button to show the password they entered, and allow them to try again underneath it to fix any typos or verify that they correctly entered the password they were thinking of. This helps the user without changing the way the login form operates in the default case where a correct password is entered (a password that's probably in the user's muscle memory because they use it for everything). I know I've actually seen this done somewhere, although I can't remember where.
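That reveal-on-failure idea can be sketched as a console login flow. This is a minimal illustration under stated assumptions: `check_password` and the prompt wording are hypothetical stand-ins, not any site's actual behavior:

```python
import getpass

def login(check_password, ask=input, masked_input=getpass.getpass):
    """Mask the password by default; offer to show it only after a failure."""
    pw = masked_input("Password: ")
    while not check_password(pw):
        choice = ask("Login failed. Show password as you type? [y/N] ")
        if choice.strip().lower() == "y":
            pw = ask("Password (visible): ")  # unmasked retry, user opted in
        else:
            pw = masked_input("Password: ")   # masked retry, the default
    return True
```

The default path is unchanged from today's login forms; the reveal option only appears once the user has already demonstrated they need help.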

Mobile is a bit different. I'm completely behind the times in using a mobile device to access the web, but my terribly slow phone from quite a few years ago, running its gimped browser (NetFront, I think?) on its tiny screen, provided the option to display masked fields in the editor window it switched to whenever filling out an input field. This seems like a better solution to the problem to me (and was almost a necessity on that device, since it didn't have a proper keyboard).


I'm 100% in agreement with you here.

Mac OS X's Keychain Access has "show this password", and the iPhone does masking but still shows you the last character you typed for about a second.

I think these are both good compromises.


Wicked. That is a much better-considered solution to the problem.

That, I think, is the problem with the original message - the problem IS sort of there, but the solution is just too radical for it :P


That was Informix, not Oracle.


I suspect the grandparent is referring to a "cross site request forgery" (XSRF) style attack.


I wonder if he's using UUCP?

