Hacker News | symtos's comments

Another reason is that the official Firefox builds, at least for GNU/Linux, don't employ standard exploit mitigations (stack canaries, position-independent code, read-only GOT).
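Those mitigations are straightforward to check with binutils' readelf. A sketch; the Firefox path is an assumption (adjust for your distro), and it falls back to /bin/sh just so the commands run anywhere:

```shell
BIN=/usr/lib/firefox/firefox
[ -r "$BIN" ] || BIN=/bin/sh

# PIE: position-independent executables have ELF type DYN, not EXEC
readelf -h "$BIN" | grep 'Type:'

# Read-only GOT: a GNU_RELRO segment plus BIND_NOW means full RELRO
readelf -lW "$BIN" | grep GNU_RELRO || echo "no GNU_RELRO segment"
readelf -d "$BIN" | grep BIND_NOW   || echo "no BIND_NOW (partial RELRO at best)"

# Stack canaries: protected builds reference __stack_chk_fail
readelf -sW "$BIN" | grep -q __stack_chk && echo "stack protector: yes" \
                                         || echo "stack protector: no"
```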


One of those groups is the Global Cyber Alliance (GCA):

"GCA, a 501(c)3, was founded in September 2015 by the Manhattan District Attorney’s Office, the City of London Police and the Center for Internet Security."

https://www.cityoflondon.police.uk/advice-and-support/cyberc...


> Quicklisp is de-facto the only widely used library manager in Common Lisp world, and so it’s written in Common Lisp and doesn’t have any tests. It’s a wonder for me how it’s not breaking!

Quicklisp also downloads and executes code over plain HTTP with no integrity checks whatsoever.


Yes, that is the default. But you could connect through an https proxy or check PGP signatures (see http://blog.quicklisp.org/2017/09/something-to-try-out-quick...).


> In case of Maven - and likely most others - packages are not even digitally signed by the publisher

Last time I explored the atrocious state of language-specific package managers, Maven Central was (and I'm guessing still is) the only language repo that requires that packages are signed [1][2].

Now, whether package signatures are verified on retrieval is another question... (they are not, unless you use a plugin such as pgpverify-maven-plugin [3]).

Obviously anybody with the private key can still introduce malicious code even if you verify your package signatures, but at least it's better than allowing any oppressive regime with a root CA trusted by Mozilla/Microsoft to MITM rust/python/npm/ruby/whatever packages downloaded by its residents.

[1] https://maven.apache.org/repository/guide-central-repository...

[2] http://central.sonatype.org/pages/working-with-pgp-signature...

[3] https://github.com/s4u/pgpverify-maven-plugin
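For reference, wiring that plugin in is a small pom.xml addition. A sketch, assuming the coordinates from the plugin's README [3]; pin a current release version yourself:

```
<!-- inside <build><plugins> -->
<plugin>
  <groupId>com.github.s4u.plugins</groupId>
  <artifactId>pgpverify-maven-plugin</artifactId>
  <!-- pin a current release version here -->
  <executions>
    <execution>
      <goals>
        <goal>check</goal> <!-- verifies PGP signatures of resolved artifacts -->
      </goals>
    </execution>
  </executions>
</plugin>
```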


The ISP in question is in Turkey, so it should probably be noted that the Turkish government has a root cert trusted by both Mozilla and Microsoft.

https://ccadb-public.secure.force.com/mozilla/IncludedCACert...

https://social.technet.microsoft.com/wiki/contents/articles/...


Pretty sure that such a root cert would be fairly quickly yanked if they got caught using it for MITM attacks. That would cause a lot of trouble for the primary users of such certs. Such attacks are best done with some relatively obscure and unimportant compromised certificate authority.


Had a quick glance and your code is littered with unchecked function calls and potential overflows.

Also: Cookie:../../../<filename>

Where <filename> is a file whose contents start with a value that atoi() interprets as a valid uid. You're saved by a NULL pointer dereference when the unchecked getpwuid() fails, i.e. when the resulting uid is > 0 but invalid (unless you're running it on a system where NULL is mapped to readable memory).


Hey! I said don't judge me :-)

The reality is that I only wrote as much as I needed to get back to working on the project I needed it for. It works for the 'Everything is fine' case, which is what I needed to continue developing the client side. Even a hint of malicious intent could probably bring it to its knees.

But therein lies the rub. Is it worth hardening it or should I just go to Rust where most of those things just won't pop up?


So... did you use HTTP because the client side required it or because it made the client side much easier, or was that just what came to mind? This seems to be exactly the type of thing I would generally use the default OpenSSH installation on the box for: pre-shared keys, and possibly even setting a specific shell or forced command on the given public key on the server side to prevent arbitrary shell access.

There are some really interesting advanced features of OpenSSH that most people will never need, but you can come up with some really interesting solutions with them. For example, you could use a single remote account that allows SSH access, give each user a separate public key that sets an environment variable naming the desired target user, restrict the command to sudo with that variable selecting the specific user to run as, and make sure sudo is configured for the users allowed.
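A sketch of the authorized_keys side of that idea (account, path, and key material are made up; note that environment= only takes effect when PermitUserEnvironment is enabled in sshd_config):

```
# ~/.ssh/authorized_keys of the shared account, one entry per real user
environment="TARGET_USER=alice",command="sudo -u \"$TARGET_USER\" /usr/local/bin/task",restrict ssh-ed25519 AAAA... alice@laptop
environment="TARGET_USER=bob",command="sudo -u \"$TARGET_USER\" /usr/local/bin/task",restrict ssh-ed25519 AAAA... bob@laptop
```

The restrict option disables port forwarding, X11 forwarding, and pty allocation in one word; sudoers would then be limited to exactly that command for the allowed target users.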

A microservice isn't a bad idea; it's just interesting how many ways there usually are to accomplish what seems like an odd, specific custom workflow in most UNIX environments.


Client side was all browser. https://github.com/Lerc/notanos also incomplete. I go back and add things to it from time to time. It's my long term toy project.



> What is the boundary, in digital devices, between hardware and software? It follows from the definitions. Software is the operational part of a device that can be copied and changed in a computer; hardware is the operational part that can't be. This is the right way to make the distinction because it relates to the practical consequences.

> There is a gray area between hardware and software that contains firmware that can be upgraded or replaced, but is not meant ever to be upgraded or replaced once the product is sold.

See http://ps-2.kev009.com/pccbbs/mobiles/7buj19us.txt ^F EC version


It should be noted that Nightmare isn't safe for untrusted websites: https://github.com/segmentio/nightmare/issues/1060


huh?

"when will we finally throw away binary uploads" https://lists.debian.org/debian-devel/2014/02/msg00622.html

"For instance, when a maintainer uploads a (portable) source packages with binaries for the i386 architecture, it will be built for each of the other architectures, amounting to 11 more builds." https://www.debian.org/doc/manuals/developers-reference/pkgs...


Care to elaborate?

I do not see how the quoted text snippets contradict what I wrote (that Debian builds binary packages fully independently of upstream, and is working toward reproducible builds).

Moreover, a single "huh?" comment is almost never helpful for a civil conversation.


How do Debian developers independently building on their machines help? If anything it adds another point of failure. If you trust upstream enough to run their code, you implicitly trust the state of their hardware anyway (since nobody has the time to completely grasp any reasonably large codebase in its entirety); so it seems sensible to trust their builds more than some random Debian maintainer's.


> if you trust upstream enough to run their code, you implicitly trust the state of their hardware anyway

No, these are fundamentally different levels of trust.

Note that we are talking about a break-in into the web server, not into a developer's private computer.

For example, some time ago there was a break-in at the Linux kernel website. It had almost no effect on the project's security, because so many people had the sources, and because the Git commits are signed by their authors.

So not only were the attackers unable to distribute their binaries, they were also unable to place malicious commits into the source code. And this was mainly because every distro builds its Linux kernel on its own (and also because sources are signed by the developers and reviewed by multiple developers, although that's not the point of discussion here).

The break-in into the Bitcoin web server could have been similarly inconsequential, had they been as well-organized as the Linux kernel project and worked more closely with the distros.

