NSA Chief Hacker Explains How to Avoid NSA Spying (techlog360.com)
149 points by awqrre on Jan 30, 2016 | 50 comments


He's misdirecting people. Sure, this is good advice. The problem is that these tips only protect the upper layers of the stack. NSA TAO hits the lower ones, too. Some NSA tooling even attacks in ways that require TEMPEST-style shielding. An old enumeration of issues I put together shows more of the places they're hitting, plus assurance activities that stopped their hackers in the past:

http://pastebin.com/y3PufJ0V

Suffice it to say, the TAO is going to breach most of what people use. What people in high-assurance security usually did was a combination of airgaps, embedded hardware, micro/separation kernels, things like serial ports to avoid DMA risk, and so on. You have to get all the attack surface out of the equation. Then, you make the TCB simple and strong for the rest.
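
To make the serial-port piece concrete, here's a minimal sketch of a checksummed one-file transfer over a serial line instead of a DMA-capable bus. It assumes the pyserial package and a null-modem link on /dev/ttyUSB0; both names are illustrative, not a specific product.

    import hashlib
    import serial  # pyserial; assumed installed

    def send_file(path, port="/dev/ttyUSB0", baud=115200):
        data = open(path, "rb").read()
        digest = hashlib.sha256(data).hexdigest().encode()
        with serial.Serial(port, baud, timeout=5) as link:
            # Fixed-size header (length + hash) so the receiver can verify
            # integrity without any back-channel.
            link.write(len(data).to_bytes(8, "big"))
            link.write(digest)
            link.write(data)

    def recv_file(path, port="/dev/ttyUSB0", baud=115200):
        with serial.Serial(port, baud, timeout=60) as link:
            size = int.from_bytes(link.read(8), "big")
            digest = link.read(64)
            data = link.read(size)
        if hashlib.sha256(data).hexdigest().encode() != digest:
            raise ValueError("transfer corrupted; reject the file")
        open(path, "wb").write(data)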

It's more work than most will do. It also runs counter to the mainstream favorites like Linux/BSD, at least as usually used. So, uptake of high-security methods stayed low enough for TAO to have an easy job. The Snowden leaks haven't changed that: the economic and social factors remain for proprietary and FOSS alike. So, follow this advice or not, they'll still probably get in because the root problems are still there. Might stop others, though.


Absolutely. His advice is accurate and something everyone should follow, but it really only scratches the surface. Presenting it as "follow these 6 weird tips to keep TAO out!" is an attempt to give a false sense of security.

That said, the NSA is technically in charge of defensive information security for the country, so he can't be dismissed as wholly disingenuous, either.


When it comes to securing ourselves, anything coming from the NSA should be treated as some form of disinformation. The NSA betrayed its "information security" brief long ago and can never be trusted on that front again.

We are all better off keeping up with and following consensus best practices and ignoring the NSA, rather than trying to out-think their latest bit of disinformation.


While I trust the NSA about as far as I could throw the Puzzle Palace, I think you're overstating. Part of their brief, however obscured by their reprehensible behavior of late, is to help defend US-based economic interests against foreign intelligence, governmental or private.


"When it comes to securing ourselves, anything coming from the NSA should be treated as some form of disinformation."

"The NSA betrayed its "information security" brief long ago and can never be trusted on that front again."

Err, sort of. They were a valuable part of the field in the 80's and 90's, esp for evaluations. Their own stuff that's actually designed for security is done well and has some things to teach us. These are often called Type 1 products, GOTS high-security, and so on. They're not available to the general public because they work. ;)

Here's an accurate picture of DOD/NSA's involvement in creating INFOSEC in business and later destroying it for political/legal/business/intel reasons, from one of the old guard:

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...

The reasons they stopped pushing it aren't even clear to me at this point. Individuals and businesses rejected almost all high-security products in favor of maximal features, backward compatibility w/ insecure stuff, integration with insecure stuff, faster even if unsafe (eg C lang), cheaper even if less safe, and smaller even if forcing insecure integrations. This was true before, during, and after Walker's CSI. It still is post-Snowden, as real security costs money and has tradeoffs that most users refuse.

So, the reason we're getting hacked all the time & NSA's IAD is doing little isn't really the NSA: it's the demand side, where people refuse to do anything secure and use insecure stuff en masse. This is true even with usable alternatives: Facebook Messenger vs Threema or whatever; no disk protection vs Truecrypt full disk; no backups vs automated backups to USB; paid, non-snooping, $5/month email vs free, sell-you-out email. Convenience or price almost always wins out.

So, NSA could disappear entirely and our situation would still be the same. Plus, they deserve credit for their participation in high assurance stuff that stops even them, which is published in enough detail to copy, and which continues to this day. Example on CPU side:

http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf

https://www.nsa.gov/ia/programs/inline_media_encryptor/

https://www.nsa.gov/applications/ia/tempest/index.cfm

So, no, you're best off learning what CompSci, DOD, NSA, and INFOSEC's founders taught us under the banner of high-assurance security. Ignore whatever they teach the general public or business unless trustworthy pros will peer review it. Best practices are insufficient, as they don't address lifecycle assurance of computer hardware or software. They leave in the vulnerabilities NSA and black hats will continue to hit. So, one must apply high-security techniques from bottom to top to make a better stack.

Even post-Snowden, that activity is so close to non-existent that I can count the projects on one or two hands. I rest my case about the problem being on both the demand and supply sides.


That's a common misconception. The law/directives required them to do COMSEC for US govt. Later expanded to offer protection for defense contractors. They still do both. See Type 1 certified gear, NSA inline media encryptor, TEMPEST, etc. Defense use only. ;)


True.


Great in depth post.

One question: what's your opinion on Qubes, or its Windows brethren Bromium [2] and Blue Ridge AppGuard [1]?

Not against TAO (like you said, it won't work), but maybe it's the best a regular person or small company can do?

[1]http://www.blueridge.com/index.php/products/appguard/consume...

[2]http://www.bromium.com/


Those kinds of tricks are basically risk mitigation against untargeted attacks or attacks targeted against components whose damage can be limited. They're fine for that. They fall short when attackers start aiming for the hypervisor, firmware, etc, or can do damage within the compromised VM/partition (app-layer attack). Security against those attackers has to somehow isolate the effects of their data or code from those things. In other words, you need it at every layer, with each component assured and the integration done safely. None of those meet that. The only ones that come close are commercial and might not even serve a small business. :(

My old strategy was several cheap computers hooked up with a KVM switch and a guard to assess data moving between them. Basic airgapping w/ trusted transfer. The simplest scenario splits confidential and untrusted (esp Internet). Wasn't that hard to use: just inconvenient on occasion. Setting up the guard and software properly is tricky, to say the least. A specialist is necessary, although a well-configured OpenBSD or Linux box can work against common attackers.
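
For flavor, here's a minimal sketch of what a guard rule set can look like: only pass whitelisted, size-bounded, plain-ASCII files from the untrusted drop directory to the trusted side. The paths, extensions, and limits are hypothetical; a real guard enforces a written policy and usually regenerates content rather than copying it.

    import shutil
    from pathlib import Path

    UNTRUSTED = Path("/srv/guard/untrusted_in")   # hypothetical drop directory
    TRUSTED = Path("/srv/guard/trusted_out")
    ALLOWED_SUFFIXES = {".txt", ".csv"}           # plain formats only, no macros
    MAX_BYTES = 1024 * 1024                       # 1 MB cap per file

    def assess(path: Path) -> bool:
        """Return True only if the file passes every check."""
        if path.suffix.lower() not in ALLOWED_SUFFIXES:
            return False
        if path.stat().st_size > MAX_BYTES:
            return False
        try:
            path.read_text(encoding="ascii")      # reject anything not plain ASCII
        except (UnicodeDecodeError, OSError):
            return False
        return True

    def run_once():
        for item in sorted(UNTRUSTED.iterdir()):
            if item.is_file() and assess(item):
                shutil.move(str(item), str(TRUSTED / item.name))
            else:
                item.unlink(missing_ok=True)       # drop anything that fails a check

    if __name__ == "__main__":
        run_once()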

On Windows, I used AV for reaction, Defensewall/AppGuard/SandboxIE (varied) for prevention, did backups/restores with a boot CD, write-protected the BIOS if available, patched regularly, and used a Linux Live CD (kept up to date) for stuff like banking. Regular restores from backups, too, with patches applied to clean copies. I used RAMdrives for temporary storage so it autodeleted upon shutdown. I used write-protect HDs to prevent corruption of the OS partition, but those aren't available now that I know of. Physical wire, not wireless, for Internet, disconnected in old-school fashion when not in use to deny bots bandwidth if nothing else. :) Non-traditional router made from a non-x86, non-ARM box running a Linux or BSD firewall distro. Sparcstations with OpenBSD were popular among paranoids. Can't remember mine as it was one time.

Just a few things I remember. Had I stayed on Windows, I was next going to explore using Windows Embedded to strip out more attack surface. A friend named tommy, smart but not an IT guy, systematically backed up, deleted, tested, and restored on failure his Windows XP box in iterations until he found how little code & data it needed to still function. He got a WinXP system with Office, Firefox, AV, etc down to 650MB total, backed up on a CD. Unreal. I figured, as I had stripped 'NIXes, I could apply that strategy to Windows with more ease if I let Embedded's tools safely do the first stage for me. Good head start, cuz I don't have years of obsession to spend on one hardening. ;)

Also, on the lighter side, I put some people with extremely minimal needs on hardened Macs with PPC processors. I knew malware authors would exclusively target Intel upon the transition, with replacement hardware cheap on eBay. The strategy still works for those using it. One of my air-gapped backups is a PPC laptop I got for $80. What do a RaspPi and accessories cost again, to deliver what? ;) Otherwise, hardened Macs on Intel might be a safe strategy for the average business due to rare targeting. Even Krebs endorsed that strategy at one point. Still with backups to read-only media and such. So, there's that.


Thank you.

That protection-for-Windows scheme sounds secure, but far too complicated.

>>They fall short when attackers start aiming for the hypervisor, firmware, etc or can do damage within the compromised VM/partition (app-layer attack)

First, I think Qubes added some isolation for USB and graphics, and in general protections against firmware attacks (if you enable VT-d in the BIOS).

Also, how much of a problem is that if you don't care about protection from state-level attackers, only commercial attacks, and use app layers wisely (banking has its own layer, for ex.)? Because I don't understand how targeted vs untargeted maps onto state vs commercial.


"That protection for windows scheme sounds secure, but far too complicated ."

We used to train laypeople to do it and even use NoScript with pre-defined profiles. Hard parts could be automated, even sold commercially.

"First, i think Qubes added some isolation for USB and graphics , and in general protections against firmware attacks(if you enable VT-d in bios) ."

Probably. That could contain some issues. I remember when I first evaluated it that she didn't know user-mode drivers had benefits, thought Mac OS X was an example of a microkernel, hadn't considered a trusted path, and disagreed with me about Xen being likely insecure. Now it has a UI/graphics subsystem or something, plus she was criticizing Xen on the mailing list for insecurity. Lol... For these reasons, I avoided it for strong attackers, as I'm not sure they're as qualified on defense as on offense. A common problem.

Still probably good for containing ordinary malware, though. I plan to re-assess it for that in the near future as I heard usability is good.

"Also , How much of a problem is that if you don't care about protection from state level attackers only commercial attacks"

The problem is that they're the same: nation-states just have more 0-days and toolkits affecting more places. All of them hit OSes and the app level. So, a kernel-level 0-day is a problem there whether it's a nation-state or not. More are hitting firmware and routers. Hard to say what the difference is in risk level today or in the near future.

For most, their security comes from the fact that nobody is targeting them specifically. If you're not being targeted, just using obscure stuff that has basic protection or QA can stop the wide nets. If you're targeted, you want less code in the trusted computing base (eg privileged code) and good architecture.

Currently, the only OSS project implementing a sound architecture, trusted path, microkernels, etc is GenodeOS. They got started in CompSci research leveraging proven techniques and components in defense. Their architecture itself needs more peer review as novel stuff often has trouble. Regardless, they're at alpha-stage for desktop usage and improving steadily.

People just protecting their businesses should just use a hardened Mac, BSD, or Linux setup with regular backups. Optionally, if they can use it, NoScript or an ad blocker. Add HTTPS Everywhere. That will stop the vast majority of attacks that would take over their system while being pretty easy to use. At least until this gets popular or a black hat group has a beef with one. ;)


Thanks for this comment! I was recently curious about some "layprogrammer" info on the differences between Qubes and GenodeOS; although this is not a full-blown article, I find it interesting and valuable (and only the second bit of such info I've stumbled upon yet; from the first one I gathered that GenodeOS has a microkernel architecture, while Qubes does not [although I'm not quite sure what it has instead]). You wouldn't be planning to write some quick note on that topic, would you? :)

edit: given that from what I managed to understand from the high-level public information till now, both of the projects:

- seem to have kinda similar goals/aims (VM-like separation between applications/application domains/groups);

- seem to employ a similar GUI concept of visual differentiation of apps running in different "virtual domains";

- seem to mention some established hypervisor/VM technologies, like Xen/KVM/...


Eh, I don't know about "layprogrammer" but I think laypeople can follow at least the key principles with the right source material. Especially programmers. Here are the differences I put in my original critique to Joanna about Qubes' design choices. The Xen project stemmed from the Nemesis OS:

https://en.wikipedia.org/wiki/Nemesis_%28operating_system%29

The architecture was designed for efficiency and management goals. The Xen project became a virtualization system that used a chunk of Linux in Dom0 for drivers, put guests in other VMs, and had plumbing to let them work together. IPC & VM launching had horrid performance. The kernel was complex. A nice architecture for virtualization that did improve over time in many ways. Unfortunately, you have to trust significant components like Dom0 and Xen that weren't designed for high security. The complexity of Xen alone gave difficulty to the only team I know of that tried to increase its assurance directly:

http://www-archive.xenproject.org/files/xensummitboston08/Xe...

So, Qubes built on that foundation. That was my primary gripe with it, as this probably wouldn't reach either the assurance or the performance (esp IPC) of microkernels. Otherwise, they did add a lot of good usability features and isolated some pieces. That was good. The category I place it in is a cross between a MILS and a CMW system, whose attack surface is in the middle. To help you understand that, I'll give you an old paper on CMWs that shows what they looked like and one describing the MILS architecture (i.e. separation kernels).

http://web.ornl.gov/~jar/doecmw.pdf

http://www.omg.org/news/meetings/workshops/RT_2004_Manual/00...

Note: Argus PitBull is a commercial example of CMW still around. MILS was implemented by Green Hills, Lynx, & VxWorks by 2005-2006. All still around if you choose to Google them.

So, MILS desktops were built like the Orange Book systems to be certified at EAL6 or EAL7. That required a ridiculous amount of security activity. See pg 8 and 12-14 here...

http://lukemuehlhauser.com/wp-content/uploads/Karger-et-al-A...

...plus Sections 6 and 7 on p 125 here...

http://aesec.com/eval/NCSC-FER-94-008.pdf

...to see what kind of assurance went into security kernels in prior times. They went all out with MAC, integrity levels, segments, object-based protection, covert channel analysis, on-site generation, and config management. Lots of stuff to counter all threats, including malicious insiders. Current certs require even more on the assurance side, although HW assists can be weaker. That's why only a handful make the cut & usually cost $$$. ;) Hope you see why my baseline was MILS-style separation kernels rather than Xen or whatever. So, what is similar in the OSS world?

Nizza Architecture (2004 onward) https://os.inf.tu-dresden.de/papers_ps/nizza.pdf

Perseus Architecture http://www.perseus-os.org/content/pages/Hypervisor.htm

Note: Closer to Xen models but with Nizza- or MILS-style design attributes. This was implemented as the Turaya Security Kernel & Turaya Desktop. It may be used in Sirrix TrustedDesktop, but I'm not sure.

Genode Architecture draws from Nizza etc at Dresden http://www.slideshare.net/sartakov/genode-architecture

Note: Different enough from the traditional designs that I'd love to see formal analysis of their model by external parties. Could be issues there. However, they aim to keep the overall thing easy to model and, where possible, use components designed specifically for secure operation. That's rare in OSS projects.

Muen Separation Kernel http://muen.sk/

Note: Fiasco.OC and seL4 are others. I know this one has been ported into GenodeOS, though.

So, I hope those links paint an overall picture for you. Security of a solution against strong attackers depends on the underlying foundation. Is it trustworthy on its own? Does using it make security easy or difficult? Historically, the security kernels and capability systems did really well at making secure operation, at least containment, easier to pull off. UNIX-like and monolithic architectures made it horribly difficult, with several attempts at a secure, compatible UNIX failing.

Xen itself borrows some principles from more robust systems but wasn't and still isn't designed like the security kernels. It also doesn't integrate much from high-security research, although some researchers still add improvements. Qubes builds on it & will inherit some of its risks. GenodeOS is designed a bit more like the security/separation kernels. It does borrow plenty from high-security research, including the underlying kernels. So, although actual assurance varies, architectures like GenodeOS that build on high-assurance components have much higher potential for eventual security than architectures like QubesOS that build on low-to-medium-assurance platforms.

And this isn't even counting the number of application-layer issues that show up within a partition. The MILS, OKL4 CAmkES, GenodeOS, etc methods of splitting applications between partitions w/ protected communications are very important for that use case. The E language is a good example on the capability-security side. Gotta protect the trusted part or secrets from the untrusted part (esp transport or UI). Mandatory access controls or CMW-style features within the partition can also help, but they're fighting weaker attackers. So, these are overall the kinds of things I think about when evaluating platforms like this.

Pro tip: look at underlying platform, design principles, security architecture, and assurance activities first. If these fail, then what depends on them fails. Most security-oriented tech builds on foundations of quicksand. Things built that way typically sink under weight of strong attacks.

EDIT: I just noticed your edit after doing this research and write-up. Fortunately, I think it still addresses it. Talk about preemption. :)


I agree. None of what he suggests will help avoid TAO spying. I laughed at his insistence re off-site backups.

But I wouldn't call this guy the NSA's chief hacker. He only took over that department in 2013, after more than two decades at NSA. He is a manager, not an agent. His practical knowledge is limited to what he has been told and has bothered to read. You don't get to be head of a department in a spy agency by helping the public avoid spies.


" I laughed at his insistence re off-site backups."

Was that because you knew about interdiction or the fact that a 3rd party might have no security? ;)

"He only took over that department in 2013, after more than two decades at NSA. He is a manager, not an agent. His practical knowledge is limited to what he has been told and has bothered to read. "

That's probably true. In that case, though, he'd still know they attacked stuff lower in the stack, did implants, used radar, etc. It's in the basic catalog. How much he'd understand I don't know. That he neglects their methods of hitting hardware or getting physical access suggests he's misleading people intentionally.

"ou don't get to be head of a department in a spy agency by helping the public avoid spies."

And that's why. I thought at the headline that the guy would have to be a defector or a new whistleblower if we were going to get anything out of this. He still works there? And is helping us stop TAO? Who buys this kind of BS...?


The only thing I think this achieves is helping stop FOREIGN government agencies from getting the same data. US government agencies have access to interdiction, implanting backdoors in nearly all the electronics, and in networks. You'd have to build your own hardware from scratch to avoid getting spied on by USA based state actors and then you'd probably run afoul of the FCC.


Anything the NSA can do, they can do if they know the method. 0-days below the app layer abound, esp for the Chinese and Russians. The Russians also mastered emanation attacks long ago. They and the U.S. had a nice cat-and-mouse game on that. The Chinese actively counterfeit our gear with who knows what added in there.

So, my framework and counter to the NSA guy doesn't just apply to the NSA threat. It applies to foreign nation-states, too. The barrier to entry, esp for non-app attacks, is lower than ever, with even hobbyists publishing attacks on the net due to seemingly no effort at device- or router-level security.


Am I reading this right? Is it illegal in the US to build your own computer device from the most basic components?


You'll get patent suits for sure if you sell it. There's a patent on almost everything about making CPUs and networked devices. Also, anything that uses high-assurance security (EAL6/7) is still under the old classification rules they used to restrict crypto exports. They didn't mention the implications of that when they opened up crypto exports. ;)

So, your best case is to build something that barely functions by modern standards but is still usable. Here's some inspiration:

http://www.homebrewcpu.com/

A good security model to start with if you want to leverage existing code:

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/


As far as I know, building it is fine but you can't network it without FCC licensed components.


>>> Was that because you knew about interdiction or the fact that a 3rd party might have no security? ;)

Because the interception of data flowing between data centres was/is the backbone of the PRISM program. The NSA has access to the fibre over which those backups will travel.


Not too long ago, "off-site backups" did not mean you used an internet connection, and in many instances, it still doesn't. Tape is still in use. Along with that, off-site doesn't necessarily mean "another company". It can be a separate office at the same org.


>>off-site doesn't necessarily mean "another company"

That is part of what PRISM was tapping: data moving between Google-owned data centers.

Tape comes with its own set of problems.

"A contractor working for the US Secret Service walked onto a Washington, DC Metro train carrying two tapes full of extremely sensitive data. He got off at his stop carrying neither. "

https://nakedsecurity.sophos.com/2012/12/10/secret-service-s...


Technically, it was MUSCULAR tapping the fiber of Google-owned datacenters. PRISM was another name for FISA court requests.


Yeah, that too. The good news is that CompSci and OSS have given us many ways to protect the confidentiality and integrity of that data. Off-site backups are fine if one is doing that. I usually recommend a data diode or guard between the storage/crypto and transport parts of that use case, though. That's because just trying to do the off-site, online backups opens one's endpoints up to attack. One-way flows with error correction built in, or limited back-channels guaranteed with custom HW, are the traditional way to do it.
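
A toy sketch of the one-way idea, assuming the back-channel is physically absent (hardware diode) or just not routed: the sender blindly repeats each checksummed chunk over UDP, the receiver deduplicates, and nothing ever flows back. The address, port, chunk size, and repeat count are illustrative; real diodes use proper forward error correction rather than blind repeats.

    import hashlib
    import socket
    import struct

    DEST = ("192.0.2.10", 40000)   # documentation address; replace as needed
    CHUNK = 1024
    REPEAT = 3                     # crude redundancy instead of a back-channel

    def send(data: bytes):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        for seq, chunk in enumerate(chunks):
            digest = hashlib.sha256(chunk).digest()
            packet = struct.pack("!II", seq, len(chunks)) + digest + chunk
            for _ in range(REPEAT):          # resend blindly; there are no ACKs
                sock.sendto(packet, DEST)

    def receive(port=40000) -> bytes:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        parts, total = {}, None
        while total is None or len(parts) < total:
            packet, _ = sock.recvfrom(8 + 32 + CHUNK)
            seq, total = struct.unpack("!II", packet[:8])
            digest, chunk = packet[8:40], packet[40:]
            if hashlib.sha256(chunk).digest() == digest:   # drop corrupt copies
                parts[seq] = chunk
        return b"".join(parts[i] for i in range(total))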


I won't say that he's misdirecting people. It should be obvious enough that the NSA also relies on vectors this does not cover, and can adapt by finding new vectors. But this is presumably bread-and-butter stuff, and I wouldn't be surprised if the vectors covered in the talk are the most common, cheapest, and most reliable ones that his team uses.


He's talking about TAO and nation state attackers. He knows what both use. His recommendations ignore entire categories of risk and assurance activities TAO relies upon per their catalog. That's misleading.


These seem to be live notes from a conference talk and are a little rough. The article refers to [1] which is better journalism. [1] http://www.theregister.co.uk/2016/01/28/nsas_top_hacking_bos...


Yes, The Register's article is much better/higher quality.


Ugh, this article is terrible. Who upvotes this crap? It's difficult to read because of all the grammatical problems, and doesn't really give much information anyway (apart from the link to the Register article, which is indeed much better).

Flagged due to general crapness.


THIS is exactly what NSA should be doing domestically. Help us harden against foreign threats. Talks are great. Even better ... Give small American businesses tools or services so we can harden. My business would smile paying taxes if the USGOV had our back on commercial cybersecurity... Even in partnership with private companies.


You can watch the talk on YouTube now: https://www.youtube.com/watch?v=bDJb8WOJYdA


TL;DR - "If you really want to protect your network you have to know your network, including all the devices and technology in it"

There, saved you a click.


I've always found it better to emulate high-security engineers at defense contractors and IAD instead of listening to their hackers. The hackers are good for spotting problem areas. As in my other comment, the defense side in academia and high-assurance market has been effectively countering them for decades. People just don't apply what was learned.

On my end, being limited back then, I combined whatever proven methods I could with obfuscation across the board. That by itself, per monitoring, stopped very sophisticated attacks that common approaches failed to stop. They'd have to hack it just to figure out how it worked and was configured. Only then could they hack it normally. See how that doesn't work out for them? ;) The attacks would transition to physical infiltration (never happened, I think...) or social engineering (mitigated many).

Examples for your amusement: (a) using obscure processors + OS's + network-layer guards while advertising they're x86 Linux boxes; (b) sandwiching a custom, randomly-generated, messaging protocol or middleware w/ security features between others; (c) my polymorphic ciphers w/ several AES candidates, random iteration, and random counter values; (d) real port-knocking like SILENTKNOCK variant that doesn't advertise itself; (e) hiding authentication or taint (for tracing) data in checksums at various layers; (f) straight ripping privileged code out of a system while using minimal, unusual libraries for stdlib etc; (g) tools like Softbound + CETS to automatically make stuff safe or languages that do by default.
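
For a rough feel of (d), here's a bare-bones knock listener: watch for a secret sequence of UDP knocks, then open the real service port to that source. The sequence and the firewall command are placeholders; real SILENTKNOCK hides the knock inside ordinary TCP header fields rather than separate packets.

    import socket
    import subprocess

    KNOCK_SEQUENCE = [7000, 8000, 9000]   # secret, per-deployment
    SERVICE_PORT = 22

    def allow(src):
        # Placeholder firewall rule; adapt to nftables/pf/etc on your box.
        subprocess.run(["iptables", "-I", "INPUT", "-p", "tcp", "-s", src,
                        "--dport", str(SERVICE_PORT), "-j", "ACCEPT"], check=False)

    def wait_for_knocks():
        socks = []
        for port in KNOCK_SEQUENCE:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", port))
            s.settimeout(0.1)
            socks.append(s)
        progress = {}                      # source IP -> next expected knock index
        while True:
            for idx, s in enumerate(socks):
                try:
                    _, (src, _) = s.recvfrom(1)
                except socket.timeout:
                    continue
                if progress.get(src, 0) == idx:
                    progress[src] = idx + 1
                    if progress[src] == len(KNOCK_SEQUENCE):
                        allow(src)
                        progress[src] = 0
                else:
                    progress[src] = 0      # wrong order resets the attempt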

Lots of tricks that require almost no work, don't negate the security of proven components, require almost no maintenance (outside OS stripping) once you deploy them, and stop or detect all kinds of attacks. Proven against nation-state attackers time and time again. Just wait till I integrate stuff from SecureCore, Air Force's HAVEN, SAFE, or CHERI processors plus my randomization/obfuscation shit at HW level. Need funding & specialists for that but I'm still doing high-level designs & research in case I ever get it. NSA already failed to breach weaker versions of that. TAO gonna scream when next-gen versions get rolled out into production. ;)


I'm convinced that it's wrong to say there is no security in obscurity. It's just one more layer in what should be many, and it has many benefits (simple example: changing the ssh port at least reduces alert fatigue, so when I get a level 10 from OSSEC I know I need to dig into it ASAP!)

Port knocking is great as well.

Sounds like you are doing great work!


"and it has many benefits(simple example: ssh port changing at least reduces alert fatigue so I know when I get a lvl 10 from OSSEC I know I need to dig into it asap!)"

Interesting point. False alarms were small enough that I never thought to measure the impact of alert fatigue in these changes. Might consider it in the future.

"I'm convinced that its wrong to say there is no security in obscurity."

It might help to just call what we're doing obfuscation. That's the proper term. Security by obscurity is either defined as, or is connected to, the concept that hiding the details alone would keep you safe. In our examples, we're using a strong security technique plus hiding a critical aspect of how we use it. Take away the hidden part and it's still at least as strong as a good security technique.

Hence, we're obfuscating otherwise good security rather than doing security by obscurity.

"Sounds like you are doing great work!"

Appreciate it! Email me if you want to see other solutions or essays I did. I let people copy them without fee so long as they give credit. No blog so it's currently a lot of text files and links to master copies on Schneier's blog.


You have a lot of fun :)

Thanks for the read.


Sure thing. I hope you copy one or more of them to have fun yourself at attackers' expense. I like to imagine my work results in hackers doing stuff like this:

https://images.duckduckgo.com/iu/?u=http%3A%2F%2Fcdn.patch.c...


It's much deeper than that. I think he also made a subtle joke about how, even if you think you know what's on your network, they know the entire technology stack running it.

It also had a lot of common best practices like asset management, strong network segmentation and inspection, anomaly detection, and application whitelisting.

The sort of stuff you hear about on the Critical Security Controls Top 20.

https://www.sans.org/critical-security-controls/


Are there any "reputation-based" tools available for home users? I have always wanted a tool that would visually show me every device on my network and give a clear indication when a new device was added.


Nmap. I'm sure you can find a GUI-like program for it. Just scan your router's subnet.
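
Rough sketch of the "tell me when something new shows up" part, assuming nmap is installed; the subnet and state-file path are made up for the example. It runs a ping sweep, diffs the live hosts against the last run, and prints anything new:

    import json
    import re
    import subprocess
    from pathlib import Path

    SUBNET = "192.168.1.0/24"                    # adjust to your router's subnet
    STATE = Path.home() / ".known_hosts.json"    # remembered hosts between runs

    def scan():
        # -sn = ping sweep only; -oG - = greppable output on stdout
        out = subprocess.run(["nmap", "-sn", "-oG", "-", SUBNET],
                             capture_output=True, text=True).stdout
        return set(re.findall(r"Host: ([\d.]+) .*Status: Up", out))

    def main():
        known = set(json.loads(STATE.read_text())) if STATE.exists() else set()
        current = scan()
        for host in sorted(current - known):
            print(f"NEW DEVICE on the network: {host}")
        STATE.write_text(json.dumps(sorted(current | known)))

    if __name__ == "__main__":
        main()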


Question: If I did somehow manage to avoid being spied on, wouldn't I stand out because there is very little data available about me?


Yes.


In case you weren't already aware, there is no news here.

This is all just high-level, best-practice, textbook information straight from the CEH or CISSP exams.

His information was applicable to big organizations with a network, not really individuals. Like anti-virus is going to help you against the NSA?


The NSA knows that it's very difficult to implement security. Nothing new here: look for the weakest link, then pivot once inside.

That being said, I really feel the future is in predictive and reactive machine learning algos on data.


What a waffle. Can anyone extract useful information from this meandering trainwreck of an article?


OT:

Interesting how submissions sometimes work. I submitted a link to this from Wired (https://news.ycombinator.com/item?id=10994795) more than a day ago, which got just about no interest, and here it is now trending on the front page.

Not complaining, just making an observation!


Our friend Zoz of Defcon fame has a talk about NSA spying that enumerates and gives details on some of the ways the NSA can 'see' you.

https://www.youtube.com/watch?v=J1q4Ir2J8P8


As I posted before, usernames like this are distracting and you should make one that doesn't implicitly troll every thread: https://news.ycombinator.com/item?id=10837069

I admit it's amusing to try to get around the restriction by mocking the moderator. But not amusing enough.


The very first sentence is glaringly shitty. It's a tech "log" all right. Life's too short.



