You can tell when this deal started to come together by looking at the history of the website on Wayback Machine. In fall of 2024, the website had a checklist comparing SDF to dbt and claiming SDF had a better feature set than dbt Core (page rendering is hit and miss right now for whatever reason): https://web.archive.org/web/20240919110243/https://www.sdf.c...

In December 2024 the page had been updated to now compare "dbt Core" against "SDF with dbt": https://web.archive.org/web/20241217172451/https://www.sdf.c...

Little marketing switcharoo there to avoid pissing off their future owners.


My first employer is now a decently well known B2B SaaS and we didn't build user interfaces to manage various settings for a very long time. For example, we supported custom fonts, but we would have to jack into production to upload them and manually configure the database to make them available to a given customer. That ability didn't become customer-facing for a decade, simply because it was easier to file a ticket and make an SRE do it.

This is a little tangential, but another good piece of advice is to avoid over-optimizing the engineering stack. A giant monolith running on the largest RDS instance AWS offers provides a lot more runway than people realize.


In all of these advisories there has never once been a mention of cloud being vulnerable. I think it's safe to assume cloud runs a similar, if not identical, codebase, and that these issues are simply patched there first before vulnerability announcements are published. But that's the type of thing no company is ever going to be willing to say in public.


Someone in here claimed recently that the Cloud products were forked many years ago, which sounds believable - there's tons of little stuff that only works on either Cloud or on-prem.


This sounds way worse than it is.

To be clear, the "remote" part of the code execution is that an attacker controlling your destination server can cause your client to run an attacker-controlled payload, if the client is forwarding their credentials (`ssh -A`). Most people don't tend to make connections to arbitrary SSH hosts, and certainly they don't do it while forwarding their credentials along.
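For illustration, the risky pattern is something like this (hostnames are hypothetical):

    # Explicitly forwarding your agent to a remote host:
    ssh -A you@some-untrusted-host

    # Or the equivalent in ~/.ssh/config:
    Host some-untrusted-host
        ForwardAgent yes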

It's a neat attack, and I applaud the Qualys team on their find, but this is not any sort of emergency situation for 99.99% of systems.


I beg to differ: this does not sound way worse than it is. If anything, it's understating the issue.

Not only can it be exploited across a wide variety of clients on multiple platforms, but all that's required is that you're using agent forwarding.

This is devastating, because it's not just that an attacker controlling the destination server can steal your keys: they can take over your entire workstation.

Once the attacker has the user's entire workstation, they potentially have access to everything else the user has, from their email, to other SSH hosts, to Git repos, and they can install key loggers. This is about as bad as it gets, and all because someone is using Agent Forwarding.

Best of all, the victim has no idea that they've been completely compromised. They can live inside your machine for years, upgrade their sploits, and generally exfiltrate all of your secrets.

Never use agent forwarding. Just don't. "Agent forwarding should be enabled with caution" in the man page is another massive understatement. Even if you think you need it, check the other responses in this thread for examples of how to work around it.


> Never use agent forwarding

Agreed. As this exploit proves, it's not even safe to log into your own servers with agent forwarding if any service on them is exposed remotely: if an attacker compromises that exposed service and gains root, they can extend the attack to your workstation. That's a huge deal, especially considering that your workstation holds the private key to log into that server (so it's a safe bet there are other keys on it too).


Exactly - agent forwarding is the laziest and fastest path to getting severely pwned, but the irony is that the alternatives are actually fairly simple and fast, if someone is willing to take the time to adjust their process a little bit.


> agent forwarding is the laziest and fastest path to getting severely pwned

Only for people who don't know what they are doing. Usually, such people also make poor replacement decisions that are even less secure.

> the alternatives are actually fairly simple and fast, if someone is willing to take the time to adjust their process a little bit.

I often need to work on code in ephemeral containers. Is there an "actually fairly simple and fast" method I can use to be able to git pull and push to and from these ephemeral containers that:

1. doesn't require too much adjustment (a little bit is okay); and

2. is not less secure than agent forwarding with confirmation?


> Only for people who don't know what they are doing.

By that do you just mean that no services are openly exposed on the system? To my understanding, if any vulnerable service is remotely exposed then it's not at all safe to use agent forwarding with the affected version of openssh.


By that I mean: use the now-20-year-old 'ssh-add -c' flag. It seems practically no one is aware that an ssh agent does NOT have to silently and automatically sign any and every auth request, given how frequently I see people decrying "No! If you forward your agent, that means a remote host has full access to everything in your agent! Never forward your agent ever! No buts!"

With the '-c' flag, if a remote host tries to use my agent, I get a graphical dialog on my local machine asking me if I want to let my agent do it. If I'm not expecting the dialog, I can just say no, and now I know the remote is compromised.
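For the curious, the flag is set when the key is added to the agent; a minimal sketch (the key path is hypothetical):

    # Require ssh-askpass confirmation on every use of this key:
    ssh-add -c ~/.ssh/id_ed25519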

Actually, I go a step further. Because it's possible to accidentally accept a signing request by pressing the enter key I meant to press for something else, I make my agent require a passphrase on every use, not just a yes/no dialog.

In checking my machines for this CVE, I discovered my agent has yet another layer of security built-in. I use gpg-agent, relying on its famed security posture. Turns out, when forwarded, gpg-agent supports nothing except signing. It does not support adding keys from a remote, let alone fancy operations like loading a PKCS11 provider chosen by a remote.

----

Securing a forwarded agent has nothing to do with whether there are openly exposed services on a remote system. The remote system could have malicious code that entered it not necessarily through an exposed service. A compromised npm package in the supply chain, for instance, is sufficient. It doesn't matter how it got there. What matters is: when it does, can it abuse a forwarded agent. Hence the '-c' flag to ssh-add.


I was not aware of that functionality in ssh-add. Thank you!

Man page reference:

"-c Indicates that added identities should be subject to confirmation before being used for authentication. Confirmation is performed by ssh-askpass(1). Successful confirmation is signaled by a zero exit status from ssh-askpass(1), rather than text entered into the requester."

Edit: After thinking about it more, I think I may have misunderstood how the -c parameter would perform in regards to this CVE.

Would a confirmation prompt actually suffice to prevent this attack? It also raises the question: does this exploit rely upon someone already forwarding to a malicious server? I've only read the CVE and skimmed the reporter's blog, so I don't know with certainty one way or the other.

If the exploit does require that the forwarding has already taken place, -c couldn't really help, right? The decision was made to allow it. I hope I'm not sounding contrarian, I'm genuinely curious about how this would play out.


The '-c' is not a mitigation to this CVE. It is a mitigation to a malicious remote silently opening further ssh connections to other servers you have access to.

My parent claimed that using agent forwarding is always insecure. Judging by their response to my comments elsewhere in this discussion, they seem to be under the impression that a forwarded agent will always silently and automatically sign auth requests. And yes, if you forward an agent with a key in it that's missing the '-c' flag, it will. Ignorance of the confirmation feature is classifiable under 'you don't know what you're doing'.

The same parent has also been beating their drum of 'create keypairs on remote servers' pretty heavily in many places in this discussion. That _is_ less secure than a forwarded agent that confirms each use.

----

I do not claim anywhere that the '-c' flag prevents this CVE from being exploited. It's just that the agent I happened to be using (gpg-agent instead of ssh-agent) was immune to this CVE because it already did what OpenSSH has now decided to do in response to it. I.e., I blindly relied on gpg-agent to be secure, and it paid off here.

----

> ... the forwarding has already taken place, -c couldn't really help, right? The decision was made to allow it.

It's not a confirmation of "Do you want to forward this agent?". It's a confirmation of "Do you want to sign a request using this key?". That happens in every auth request.

So if you ssh into foo, you'd get a dialog to confirm the use of your private key for this initial ssh. This is not added security, just an extra step in the initial ssh process.

But if you try to ssh into bar from foo, then you'd get another dialog on your local machine to confirm the use of your private key for the auth request by bar. _This_ is the added security vs. malicious code on foo ssh-ing into bar as you without your knowledge.


> Never use agent forwarding.

Just to add to this: with the newer -J/ProxyJump directive, it's become (even) easier to log in through an ssh host without needing to enable agent forwarding (given that you're connecting through a not-ancient host running a reasonable version of openssh; beware of firewalls/appliances stuck on ancient sshd and/or proprietary/"mini" versions).
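For anyone who hasn't used it yet, it looks something like this (hostnames are hypothetical):

    # One-off jump through a bastion, no agent forwarding needed:
    ssh -J bastion.example.com user@target.internal

    # Or persistently in ~/.ssh/config:
    Host target.internal
        ProxyJump bastion.example.com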


Agent forwarding is not something to take lightly, but the knee-jerk reaction of "ban it entirely" is too much.

I forward my agent by default because I've set it up securely. My setup is safe from this exploit too (I use gpg-agent as my SSH Agent). In return I get the seamless convenience I cannot get through any other method. Jump hosts are fine (and I use them too) but there is no way I'd be able to do remote git operations in ephemeral dev containers without the peace of mind (and safety) that agent forwarding gives me.

Creating keys on remote dev envs for git operations is _less_ secure than agent forwarding, even when those keys are encrypted (passphrase protected) at rest, because they have to be loaded into memory on the (potentially compromised) remote host.


> Creating keys on remote dev envs for git operations is _less_ secure than agent forwarding, even when those keys are encrypted (passphrase protected) at rest, because they have to be loaded into memory on the (potentially compromised) remote host.

That's not how agent forwarding works. An attacker on the remote server can piggyback on your SSH session and do anything else desired, so your remote git repo is still compromised, but the blast radius of these remote keys is much smaller. (in infosec, we'd usually call this least privilege but separation of duties also applies)

All of this is still possible even with gpg-agent, even if this particular RCE doesn't apply to you, so "Never Use Agent Forwarding" still applies.


> An attacker on the remote server can piggyback on your SSH session and do anything else desired

This myth is about 20 years out of date. See what the '-c' flag for ssh-add does. It was added in OpenSSH 3.6 back in 2003.

In fact, I can prove it to you. Take my pubkey from GitHub (same username) and put it on a host you control. Tell me to ssh into it with my agent forwarded and see if that gives you access to my GitHub account.

----

> All of this is still possible even with gpg-agent

Even without the 'confirm each use' flag, gpg-agent with a zero TTL visually asks for the decryption key on each use. There _are_ some agents out there that have no support for visual confirmations and yet happily accept the '-c' flag (looking at you, GNOME), but gpg-agent isn't one of them.
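If memory serves, the zero-TTL behavior is just gpg-agent configuration; roughly this in ~/.gnupg/gpg-agent.conf (a sketch, double-check against your gpg-agent docs):

    # Never cache passphrases, so every use of the key prompts again:
    default-cache-ttl 0
    max-cache-ttl 0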

----

> That's not how agent forwarding works

You seem to be misreading. I'm not claiming that's how agent forwarding works. I'm saying that's how your suggestion of creating a keypair on a remote host works.

It _is_ less secure because it requires those keys to be resident on the remote. If the remote is compromised, decrypting the key in the compromised machine's memory is strictly less secure than doing it on my local machine with an agent. Once captured from the compromised remote, those keys can be exfiltrated and used repeatedly. But if an agent is somehow tricked into signing an unauthorised request, that access is still limited to one use only.


>> > An attacker on the remote server can piggyback on your SSH session and do anything else desired

> This myth is about 20 years out of date.

This hole didn't simply disappear when -c was added.

The vulnerability is simply that the socket file containing the connection back to your agent is accessible by anyone who managed to escalate to root on the remote host.

You're making several assumptions:

#1: someone is using -c

#2: that -c even does anything on their platform

#3: the user pays attention to them and is untrickable

#4: there are no bugs in the ssh-agent or gpg-agent on the client machine

Any one of these being false renders all protection from -c moot; worse yet, a bug in the ssh-agent (or gpg-agent, if that's your poison) like the one in the subject of this post can be leveraged into complete client takeover.

> Once captured from the compromised remote, those keys can be exfiltrated and used repeatedly.

that is true, which is why they should be tightly scoped.

> But, if an agent is somehow tricked into signing an unauthorised request, that access is still limited to one use only.

That one use only is all that is needed. An attacker might install another pubkey, start up another socket process, or even rootkit the remote box if you have sudo, doas or if there are any privilege escalation vulns.

These situations are identical in that the remote box is pwned, but only one of these tries to limit the exploits to just that one remote box and not every other host your keys have access to.


> This hole didn't simply disappear when -c was added. The vulnerability is simply that ...

Why are you conflating the two? You're claiming that something has always been utterly, completely broken, and the only evidence you can cite for it is something that was revealed to the world a few days ago?

This CVE is a secvuln. No one is arguing against that. Secvulns happen. No software is bug-free. Does that mean all software everywhere is suddenly utterly, completely broken?

Actually, while we are on the topic, why not argue for banning SSH entirely? After all, each SSH connection is a connection back to the host where the `ssh` client runs. Tomorrow, there could be a secvuln discovered in the `ssh` binary that can be exploited by simply printing the right characters to stdout. In fact, this very vector has been used before to pwn vulnerable terminal emulators, even over ssh.

----

> ... which is why they should be tightly scoped.

As can forwarded agents (see 'IdentityAgent' in `man ssh_config`). In fact, this is how I separate client projects from each other and my personal projects. (I didn't do it for security, rather for the convenience of not tripping any 'max keys allowed' limits, but hey, I'll take the security benefit too!)
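Roughly like this in ~/.ssh/config (hostnames and socket paths are hypothetical; each socket belongs to a separate agent, e.g. one started with `ssh-agent -a <path>`):

    # Client work only ever sees the client agent's keys:
    Host *.clienta.example
        IdentityAgent ~/.ssh/agents/clienta.sock

    # Personal hosts get a different agent and key set:
    Host *.personal.example
        IdentityAgent ~/.ssh/agents/personal.sock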

----

> That one use only is all that is needed. ... tries to limit the exploits to just that one remote box and not every other host your keys have access to.

You're making several assumptions:

#1: someone is disciplined enough to use tightly scoped keys

#2: that they bother to rotate all those ephemeral keys without fail

#3: that the sheer inconvenience of constantly updating keys doesn't bother them enough to say 'screw this!'

Guess what the weakest link is when it comes to computer security? The human factor. You're asking humans to go through way too much hassle they're not going to care about. Which means they'll voluntarily break the security of the system without care (and, of course, without understanding).

You also forget that:

1. Agents can be scoped too.

2. Jumpboxes are often used to jump to a large number (most often, all) of servers. No organization is going around creating point-to-point jump links between servers or dedicating a separate jumpbox for every destination server.

3. Your model, when exploited, gives the attacker repeatable, lasting access. You say, "That one use only is all that is needed", and you're correct, but only for the most determined and prepared attackers. Once-only accidental access is better than repeatable lasting access for the simple fact that most attackers are aiming for only the latter.

4. Your model is also more susceptible to silent persistent malware. My method has the benefit that exploited remotes are discovered before the exploit is given lateral movement access.

----

> ... who managed to escalate to root on the remote host.

No root needed; DAC is enough. I say this because I run untrusted code in VMs/containers I ssh into, with my agent forwarded. And I consider arbitrary npm/python/etc. packages automatically untrusted.


Lots of people end up with ForwardAgent on by default as a sort of "make it work" fix (see the sketch below), and lots of people use `git+ssh` on untrusted servers. Here's an example:

https://abyssdomain.expert/@filippo/109659699817863532

TBF this is a vulnerable config either way; but RCE on the client shouldn't be possible.
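The "make it work" fix mentioned above is usually just a global default like this (hypothetical, but common):

    # ~/.ssh/config: forwards your agent to every host you connect to
    Host *
        ForwardAgent yes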


I've been using a separate SSH config for git for a long time now. Nice to see it wasn't just paranoia.

Among the settings are explicitly disabling agent forwarding, and using a git specific identity (SSH key).
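For anyone who wants to replicate it, the relevant part looks roughly like this (the key filename is hypothetical):

    # ~/.ssh/config
    Host github.com
        ForwardAgent no
        IdentityFile ~/.ssh/id_git
        IdentitiesOnly yes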


I’m not so sure git is secure against a malicious server, even if you’re not simply pulling in a Makefile written by the attacker.


Even assuming you do perfect integrity checks of the git repo you're pulling, git uses SSH under the hood and obeys your per-host ssh config. It's safe to say that if you have ForwardAgent enabled, git is vulnerable.


The attacker controlled destination server could be a compromised host, so this enables lateral movement from a deployed VM or remote dev machine into a developer laptop.


“git pull” over SSH and have your system RCEd? I’d say that’s levels above “neat”.


You'd need to go out of your way to make git pull do agent forwarding and I can't really think of a reason why anyone would.


I don’t know how prevalent it is as a network architecture, but it seems like a bastion host / jump box would be a juicy target for this exploit, since it’d let the attacker jump upstream.


Sure, but first they have to root the bastion box.

If you root the bastion box, you have user credentials for anything inside the network. Controlling the user's laptop seems unlikely to be your most profitable next step.


> If you root the bastion box, you have user credentials for anything inside the network.

But that's not how a (properly-configured!) bastion host works.

You won't have user credentials for anything UNLESS users are using agent forwarding (which they shouldn't! simplest explanation here: https://userify.com/docs/jumpbox ).

That's the point behind using ProxyJump. Your connection actually jumps THROUGH the bastion box and doesn't stop for interception along the way.

(And, of course, an attacker can't do anything very useful with ssh public keys except for maybe traffic analysis or learning more target IPs.)


It really depends on the setup; e.g. I imagine it's easier to steal company code from a laptop than from a server, while data is the other way around.


Increasingly, the role of a bastion host is served either by something like Teleport, which handles authn/z and proxying without needing forwarded agents, or newer options in OpenSSH like ProxyJump where you hop via a bastion host but without ever forwarding your agent.


Yeah, if I’m reading the technical analysis right, the conditions you mention have to hold, and the attacker must also have “poisoned” library files on the target’s machine so they can dlopen them, is that right?

Pretty unlikely


The libraries are on the client's machine, not the server's. And they're not "poisoned"; the default distro-provided libs already provide the remote execution capability (eclipse-titan, libkf5sonnetui5, libns3-3v5 and systemd-boot packages from Ubuntu 22.04).


Ahh, I see. I thought the attacker also had to have custom malicious libs deployed on the client machine; I wasn't sure if standard ones would do. Thanks for clarifying that.


There must be a specific set of libs present on the victim (client), correct. Qualys claims that stock Ubuntu Desktop systems often have these libs, and that they haven't looked into whether other distros tend to.

But yes, your point stands. Huge number of preconditions here to fulfill.


Are you saying that if you SFTP in to a client machine to upload a file to their server, it’s expected behavior that you’re willing to give them root on your machine?


Aren't all SSH hosts potentially attacker-controlled? ;)


Yea, but there’s a security boundary wherein you don’t want the SSH host to be executing code in your environment. Of course, the attackers can backdoor sshd to log credentials, set up init scripts on the host to execute code on every client login, and other shenanigans.


Of course. So it's not really that out of the question if you are using agent forwarding, so, yeah, this is a big deal.


If I were a user or integrator, how do I know that the de-identification step is actually working? Is there a way to test (and keep testing) that your regex patterns, or whatever mechanism is used, continue to accurately strip my sensitive information before it goes to OpenAI?


Good question. Some developers implement a manual approval step, so you can review the redacted prompt before you submit it rather than making it automatic. It depends on their product requirements.

Re mechanism, the redactions themselves are powered by a language model.


I'm not even a user of RHEL, but the difference is: security patches. Enterprises use RHEL because Red Hat fixes or triages nearly every vuln, every time. If you work for a company with extremely stringent security requirements, or sell to government entities, RHEL and its derivatives (CentOS, Amazon Linux 2, etc.) are basically the only way you can clear their requirements.

Debian (and by extension, Ubuntu) chooses not to fix a significant number of security issues. This makes sense given their business models, but it is an unworkable position for a huge number of enterprises that depend on Linux. Example: https://ubuntu.com/security/CVE-2021-45464


That, and if you need to run some COTS (commercial off-the-shelf) software package chances are the vendor will only support it on RHEL (or CentOS). However, many of these vendors also support Ubuntu LTS or Suse Enterprise now so there are (usually) a couple of Linux-based alternatives.


I work for an EU government with extremely high security requirements (in the national identity / IDP / health space).

We actually _CANNOT_ use redhat for compliance reasons.

(We're using ubuntu LTS as it goes)


Why? What compliance reasons make Ubuntu LTS work and RHEL not work?


Stupid auditors/pentesters, really. I explained a bit in another comment, but essentially we had to explain to the auditors the concept of backporting CVE fixes to the same 'version' of random libs, and to get certified we would have had to demonstrate, with actual source, that each of ~200 or so CVEs was fixed in various system parts (individually).

In the end, we just went with ubuntu for those nodes, and they all passed the certification. Shrug.

Since then, we don't even need the OS to be certified, since we are using confidential computing, and we stuck with ubuntu for our k8s nodes etc -- but we are forbidden from using rhel anywhere by our legal / compliance people now.


The issue here is with your auditors. I mean, if RH tells you a CVE has been fixed with a backport, sure, you can challenge that fact, but by the same standard your auditor would also have to check the actual source of your patched Ubuntu packages to make sure the new versions fixed the security bugs.

The bottom line really is that plenty of auditors I've seen don't know how to check for vulnerabilities other than by checking a version. That's it. Their tools or reporting only know that a package must have a version greater than x.y.z.
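Backported fixes don't show up in the version number, but they do show up in the package changelog, which is what an auditor could actually check; a sketch (the package name is just an example):

    # RHEL/CentOS: list CVEs mentioned in the package changelog
    rpm -q --changelog openssl | grep CVE-

    # Debian/Ubuntu equivalent:
    apt changelog openssl | grep CVE-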


What about SUSE/OpenSUSE? Surely that would've gotten the green light?


I would recommend reading both Inspired and Empowered by Marty Cagan to help you think about your product journey. Very relevant to what you'll be building, and personally I found Empowered challenged me in ways that both made me uncomfortable and also better at my job as a Product Manager.


Any other PM books you found had a significant impact on your role?


Thanks for the recommendation. Will definitely read it


I built this with AWS Lambda. Relevant info if someone else wanted to try my approach to build such a service: https://mattslifebytes.com/2023/04/14/from-rebuilds-to-reloa...


My day job is security engineering, and I just keep encountering the same problem over and over again: the code is the easy part, it's all the other shit that sucks. I don't want to write a bunch of terraform, set up a CI pipeline, and have to justify why I haven't updated a docker container's dependencies in like 6 months. I just want to write a little Python and make it available on the internet somewhere. Why is that hard?

I'm working on a side project to basically make the tool I wish existed: https://integralcloud.io

If you have a use case that you are willing to talk to me about, please reach out (contact info on my profile). I won't try to sell you anything, ever. I just want to hear about business use cases and try and make something people want to use.


Can't you just use a serverless function such as AWS Lambda for this?


I'm working on a product right now that targets this exact use case. I have contact information in my profile, I would love if you would be willing to reach out and just talk to me about your use cases. I already have a day job at a tech company, so I promise I won't try to sell you anything, ever.

https://integralcloud.io is the app.

