Hey, sorry for all the name changes of Microsoft Defender. I work at MSec (Microsoft's security org).
We ended up acquiring and absorbing a few companies to provide a better offering, and a lot of re-branding happened along the way. For example, Security Center's old portal for active threat protection, automatic remediation, incident investigation, etc. is now all absorbed into (the better) security.microsoft.com, which is (to my understanding - I'm just an engineer) the current and last (for the foreseeable future) rebrand. The team I work on started as one person working on the frontend for MDE (Microsoft Defender for Endpoint) and now has hundreds of people working on the security portal across India, Israel and the US (as well as a few other smaller sites contributing).
Also, as an engineer I have to say the offering is good. The anti-virus and the telemetry are worked on by some really smart people. Client information is sacred: logging into production takes multiple audits, and PII is scrubbed (heavily) any time logs are needed. We still have a lot of room to improve, but I am confident in Microsoft both delivering a good product and acting in good faith (and in the enterprise security space there is a clear business incentive to do so - it's not just benevolence).
Hey, we (the Nim programming language[1]) get constant false positives from Windows Defender. This started relatively recently, and we think it's due to a recent increase in the number of whitehats using Nim, but it really affects our community negatively. It seems that Windows Defender marks anything that looks like Nim[2] as a virus, which is very unreliable and means many of our users get hit by virus warnings as soon as they attempt to install Nim. We've attempted to submit the files concerned as false positives, to no avail[3].
This false-positive problem is a major headache for a lot of open-source programming languages and programs, whose source code is also sitting right there on GitHub to be inspected and compared with.
I think Microsoft could do better with its false-positive review process, particularly by doing something more for open-source developers and projects.
As long as you guys are not going the route of Google's approach to messaging, I'm sure we will forgive you. Nor the route of an NFC pay/wallet/money app that...
You know what? Just don't do the thing where you launch products to consumers so that someone achieves a promotion internally, and then abandon the product.
Frankly, MS has a long history of backwards compatibility, so signs are already positive.
Is anything being worked on on the IO performance side of Defender? I’m still using a paid third party AV for this sole reason. The impact is so huge with NPM packages as an example…
IO impact is why I disable it on all my dev machines.
Microsoft really needs to make this easier to turn off too. Right now, I have to use an undisclosed privilege escalation hack-around to force things my way.
The challenge is that excluded folders/extensions apply to both real-time scanning and manual/periodic scanning.
What we really need is the ability to disable real-time scanning on one set of folders/extensions, while still including them with scheduled system-wide scans.
Thanks, that's a great point.
It made me think we could probably do this programmatically - perhaps as part of a script that carries out the full scan: temporarily lift the exclusions for the folders of interest, scan, then add them back when completed.
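A rough sketch of what that could look like (assuming Windows with an elevated prompt and the built-in Defender PowerShell cmdlets Add-MpPreference / Remove-MpPreference / Start-MpScan; the folder paths are just placeholders):

```python
import subprocess

# Folders that are normally excluded from real-time scanning (placeholder paths).
DEV_FOLDERS = [r"C:\src", r"C:\dev\build-cache"]

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails (needs admin rights)."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1. Temporarily lift the exclusions so the full scan covers the dev folders too.
for folder in DEV_FOLDERS:
    ps(f'Remove-MpPreference -ExclusionPath "{folder}"')

# 2. Run the full scan (this can take a while).
ps("Start-MpScan -ScanType FullScan")

# 3. Restore the exclusions so real-time scanning leaves those folders alone again.
for folder in DEV_FOLDERS:
    ps(f'Add-MpPreference -ExclusionPath "{folder}"')
```

Untested, and a scheduled task wrapping it would need error handling so the exclusions always get restored, but the cmdlets themselves are the standard way to manage Defender exclusions and scans, so this should get pretty close to "real-time off, periodic scan on" for those folders.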
>I’m still using a paid third party AV for this sole reason.
Would you mind naming it? AFAIK most third-party anti-malware solutions act like rootkits, possibly introducing new attack vectors, or have become basically adware and malware themselves, trying to bait you into various subscriptions.
I’ve been using NOD32 from ESET for almost a decade now.
All AVs somehow have to hook into low-level system calls, so they can’t really avoid a kernel driver. Nonetheless, NOD32 has been an install-and-forget AV with no interruptions or bait/nag screens at all. It’s a no-bullshit AV and it does its job well.
I supposedly get the same protection as Defender (according to various AV test reviews) and, most importantly, I get the IO performance back.
> All AVs somehow have to hook into low-level system calls, so they can’t really avoid a kernel driver.
While I don’t doubt this is true for Windows, on macOS Apple is phasing out kernel extensions in favor of APIs that allow software to hook into those low-level calls without actually running in the kernel. For AV vendors there’s the Endpoint Security system extension: https://developer.apple.com/documentation/endpointsecurity
All the AV/endpoint security solutions I’ve seen have switched to this.
I don't really see a difference, as a consumer. Whether the AV sits at the kernel level or between userspace and the kernel, it's still sitting below userspace and can do whatever it wants to the system. Sure, if I trust the kernel is better written than the AV software, I may have a few extra guarantees, but that is not a given and anyway it doesn't mean I can confidently run an AV I think may be poorly written.
Overall it seems like a more complex solution that has more chances of being wrong. I would bet the core reason Apple did it was to lock down their own control of your OS, not any security reason. Perhaps it also simplifies their development somewhat, if they can get rid of some stability guarantees for in-kernel APIs that AVs would have needed.
> it's still sitting below user space and can do whatever it wants to the system.
I don't know much about the Endpoint Security extension, but for Apple's network filtering extension they actually DO address this!
The code that runs on every network call is heavily sandboxed and can't communicate at all with the outside. Its only action is emitting some basic signal like "block" or "accept". This means that while the system extension can evaluate all your network communication, and block what it chooses, it can't exfiltrate the specific content. I might have the details wrong, but that's the general intention.
But the security benefits aside, I think the real reason for preventing code from running in the kernel is about stability and not security. Buggy code won't crash the system anymore. They can also enforce stricter performance requirements.
(And at the moment, you can still run kernel extensions on your own system if you really want by disabling SIP and other things, it's just infeasible for any AV vendor to have their customers step through that very onerous process.)
That's also what I'm using and for the same reasons. Defender's protection is fine according to testing, but the IO performance is insanely bad. I wouldn't bother with running ESET's AV if Defender didn't slow heavy disk IO operations to a crawl. And I'm not excluding any development directories because malicious code can come in either as part of the project I just pulled down from github or from pulling in one of its dependencies from a package index.
I've been forced to use a number of products over the years at work from Trend Micro to McAfee. They all need curated exclusion lists and we have to ask developers to put all source controlled files under an excluded path common for all devs.
McAfee is by far the worst offender IMO when it comes to file IO. We eventually dropped it, due in part to its insistence on locking files in AppData, which is a common scratch space for almost every Windows app.
To be fair, most malware hides in AppData too... it's a convenient place that's hard to find using Windows Explorer and guaranteed to be user-writable.
Yes and McAfee was locking files for tens of seconds while it scanned. Things like Visual Studio and Notepad++ would become unresponsive after a single keystroke.
Back when I had to use McAfee on a work PC, I was using WSL 1 for building the projects I was running. The symptom was basically: do a compile, lose most of your RAM until the next reboot. Stopping WSL wouldn't reclaim it; nothing showed up in Task Manager, Process Explorer, etc. The RAM was just gone; unusable. I posted a bug to WSL and was immediately asked if I had McAfee and, if so, to disable/uninstall it. Problem solved. But, due to insurance reasons, I had to have an AV running, and the powers that be decided Defender was sufficient. Never McAfee again. It's been a pile of crap for decades; no signs of it getting better, either.
I really don't understand why the IO hit... If you're designing the OS, you can either scan a file when it's written to disk, or when it's read from disk, or sometime in between. And once you have scanned a given file, you need not rescan it if the file hasn't changed.
These facts together mean that it should be really rare that any application needs to be waiting for any scanning - since scanning can happen anytime between a data write and a read of the same data.
And if you control the kernel and all the code that runs in the kernel, you know exactly who has written to disk and when. So if nobody wrote that data, then it hasn't changed.
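Purely as a toy illustration of that caching argument (not how any real engine is implemented - a real AV would key off the write events it already observed in the kernel rather than re-hashing files on read, and would record a verdict rather than a fingerprint):

```python
import hashlib

scan_cache: dict[str, str] = {}  # path -> fingerprint of the last version scanned clean

def fingerprint(path: str) -> str:
    # Hashing stands in for "has anyone written to this file since the last scan?".
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_scan(path: str) -> bool:
    """Return True only if the file changed since it was last scanned."""
    fp = fingerprint(path)
    if scan_cache.get(path) == fp:
        return False       # unchanged since the last scan: no need to block the read
    scan_cache[path] = fp  # in a real engine this would be recorded after a clean verdict
    return True
```

The point being: if the cache is invalidated by writes the kernel already saw, a read of unmodified data never has to wait on a scan.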
In Windows there's a hook on file handle close which AV products use to implement their file scanning.
I know this because I did a bunch of reading on the topic after encountering catastrophic halts inside of CloseHandle, deep inside the kernel. Even with administrator privileges I could not kill the process, or the attached debugger, and the machine was unable to shut down because even it could not kill the stuck process. I had to hard power cycle to get back to a usable system. As near as I can tell this was because the AV product the company was using crashed or deadlocked or something.
I recently had a new issue with Defender: there are 2 apps I use that can delete files from disk, one is an mp3 player (foobar2000), the other is a video player (PotPlayer), and both have a hotkey to "delete the current file being played". I've been doing this for years, but recently when I do it, the app freezes for 5 seconds while the CPU usage of Windows Defender shoots up in Task Manager. I've tried tweaking all kinds of different settings in Defender and couldn't find a fix.
So on the one hand I want to validate and recognize both that you're having this problem and that finding a real live person who might be able to help you with it totally, very reasonably, evokes a "hey, can I tell you my problem?" response.
I don't want to minimize that, but I do want to (gently, good-naturedly) say that I think it's kinda funny.
As someone who once worked at a big tech company I think it's totally hilarious how telling people "I work on X product at Y company" evokes this sort of response. Tell enough people where you work and you'll see some non-zero percentage of people respond like this.
Like, when I was talking with a mover who was unloading my stuff and he asked "So, what brings you to these parts?" and I told him, his first response was "Really? Y'know, I've got that software and it doesn't work for me under these very specific conditions. Why is that?" (At the time I think I mumbled something about "I don't know". In retrospect I was moving to start work there, so I realistically couldn't have known yet).
So - I hope you get your problem sorted out, thank you for giving me the opportunity to talk about this, and I think I'm gonna go chase some kids off my lawn now :)
The problem is that quite often asking some random person that works on X at Y company to take pity on you and look into your problem is more effective than the official support channels because they are so useless that they may as well not exist.
As a mechanic in a previous career, I can verify that.
The only answer I ever gave was:
"Sure I can look at that, bring it into the shop, we're open eight to five Monday through Friday, eight to two on Saturday, and open until eight PM on Wednesday and Thursday, but only for drop off, pick up, and tire changes."
I have no idea and I don't work on that bit but there are lots of troubleshooting guides on "what to do if Defender is slow in situation X" in our docs - in 99% of cases it's interference (another program scanning) or specific programs acting in ways that trigger scanning often.
In all cases there are workarounds - I'm not using the Windows version (I'm on a Mac), but when I had a performance issue it ended up being a script that created and deleted thousands of files quickly, and tweaking it was fairly simple.
This is a pretty drastic fix, but if you don't really care about the video and audio files, and you're willing to lower security a bit, you can exclude the library locations from Defender. I've done this with a couple of folders so Defender doesn't scan executables from local compilation of code. Just remember to scan things when you download or copy from friends.
Saying "Client information is sacred" and stealing executables off all windows machines with the automatic sample submission on by default does not go well together.
That's an awfully low standard to set, don't you think?
I don't think it makes sense for the public. Stealing files from unsuspecting users without so much as a popup saying "hey, we just snatched this file without you knowing this is even a possibility" is just sad.
While it has been normalized, the OP's point is correct: the lip service to client data being sacred does not match the action of uploading clients' data!
That's whataboutism. I absolutely do not want Microsoft grabbing stuff from my PC without asking me; it's so insidious. And then they put the switches to turn these off behind so many hoops and registry flags that it's a nightmare to turn this crap off.
And, if you turn off automatic sample submission, the Windows Defender icon in the task tray displays a scary exclamation mark, warning you that you might not be secure.
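For what it's worth, the setting itself can at least be flipped from the Defender PowerShell module instead of hunting through the UI - a minimal sketch, assuming an elevated prompt and the built-in Set-MpPreference cmdlet:

```python
import subprocess

# Turn off automatic sample submission via the Defender PowerShell module.
# SubmitSamplesConsent: 0 = always prompt, 1 = send safe samples, 2 = never send, 3 = send all.
subprocess.run(
    ["powershell", "-NoProfile", "-Command", "Set-MpPreference -SubmitSamplesConsent 2"],
    check=True,
)
```

(It doesn't make the exclamation mark go away, of course.)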
That incident did not provide proof. A text string is not a program. The fact that over the last 20 years no one has shown any code for the fantasy backdoor is near proof there isn't one. Reversing the binaries and demonstrating such a backdoor would make one famous.
So no, this is not proof. At this point the lack of proof is near proof no backdoor existed.
"Reversing the binaries and demonstrating such a backdoor would make one famous."
At the risk of sounding tin-foil-hatty, not true. Fame (or infamy) doesn't equal success or money; anyone attempting to post state secrets without the help of another state is probably not hard to intercept and buy off...
Anyway, there doesn't need to be proof - a (forced) system update straight from the source (Microsoft), targeted at your machine, is all it takes to make all security redundant, and there are enough publicly known phone-home systems in Windows that we don't really need proof that they could already be running a dragnet.
I'll grant you, subverting corporate security is a bit harder, but it usually boils down to a bit of carefully targeted infiltration: put the right exceptions in the right corporate solutions, and corp security is also nil.
Hasn't this been debunked multiple times, or at least never been proven to be a backdoor? I mean, I don't doubt there are backdoors or exploits used by the NSA in most mainstream OSes, but I don't think this is a good example of that.
The naming and, from what I've gathered, recent changes are a mess.
Recently I looked at M365 Business Premium and thought it would only include Defender for O365 (why not M365?) and require a separate subscription for Defender for Endpoint, but now it looks like Defender for Business is included.
I am not an expert - just a user and an engineer working on this. I'm happy to ask one of our PMs to review it; they know and understand the product a lot better than I do.
From reading the article everything "sounded right" but that's hardly an educated opinion since I only worked on _some_ parts of the product.
Actually - I think I'll ask our red team or security guild - that's also probably a good source.
Why are Windows updates such an absurd experience? All my Macs and Linux machines update without any hassle and without taking much time; Windows always takes very, very long. What is even more annoying is that it not only takes forever, but even after waiting 20 minutes for updates and rebooting, it still needs new updates.
It is absolutely embarrassing and horrible. A bad, unusable system.
Fortunately I replaced all Windows machines with actual operating systems, so I do not have to use that ugly joke system too much.
First of all, thank you for a great job. Since its Windows XP-era Security Essentials incarnation I've considered it the best choice for protecting Windows PCs.
But bloody please add an option to turn off hunting for "hack tools". As an advanced user, SMB admin and private programmer I use NirSoft tools and also keygens for my own apps, but Windows vigorously deletes NirSoft tools (and perhaps some Sysinternals ones too, but I'm not sure) and every keygen it notices. So I have to disable it. It has even deleted qBittorrent once, although it is a perfectly legitimate app and I use it to download legal things like Linux distros and legitimately purchased Humble Bundle stuff.
Why can't I [with reasonable ease] configure it to only watch for real viruses/spyware/ransomware that actually threaten to infect the PC?
In my opinion we even have to consider actual pirates using really illegal keygens, because of this simple fact: there are many of them in less-developed countries; they get confused, disable the protection, get infected and join botnets. Even when there is a criminal we don't want to support and do want to punish, we don't want them to get infected with anything and spread the infection further.
There should be a clear distinction between unquestionable malware, which everyone wants and needs to be protected from for everyone's good, and questionable apps that some people (justifiably or not) actually want to use for the sake of their own pragmatic interest.
Does Microsoft offer favourable treatment or withhold patches when it comes to state-level APTs? Can we trust Microsoft to be neutral and offer security patches in a timely manner and defend the interests of their consumer customers above all? With the whole conflict in Europe, the issue of state-level adversaries is rearing its head again.
A state level attacker can likely acquire 0-day exploits that are not patched and bypass defenses.
Microsoft's offering does some really cool stuff like:
- Automatically detecting anomalous behavior in the network and isolating suspect devices/IPs/machines/programs.
- Real-time security engineers constantly monitoring your network and hunting for attackers and suspicious activity.
- Tools that automatically isolate possible attackers and help measure the impact of attacks.
> Can we trust Microsoft to be neutral and offer security patches in a timely manner
Yes, that for sure. Once an exploit is discovered it is typically addressed very quickly. A lot of the time security patches don't come from Microsoft, though - consider something like Log4Shell (the Log4j vulnerability), for example.
> defend the interests of their consumer customers above all?
I'm... not sure about "above all" since I am not sure what "all" is but if the implication is that Microsoft won't patch a security flaw for a state level APT then "yes". At least - if it ever happened it happened _way_ above my pay grade and if employees would learn of it there would be outrage.
> With the whole conflict in Europe, the issue of state-level adversaries is rearing its head again.
I think state level actors have consistently been a problem.
Note again as already mentioned none of this represents the opinion of my employer, just my thoughts.
> I'm... not sure about "above all" since I am not sure what "all" is but if the implication is that Microsoft won't patch a security flaw for a state level APT then "yes". At least - if it ever happened it happened _way_ above my pay grade and if employees would learn of it there would be outrage.
cough NSAKEY cough
Provided a backdoor for state security forces. Did it in NT, and then even after they were caught, did it again in Win2k.
You underestimate people's moral flexibility, especially that of "patriots."
You have to assume that the state is collecting every scrap of data that MS gets from telemetry. I'm sure MS is collecting a ton of data for themselves as well. As much as the company line might say that PI is scrubbed, we know anonymization is a joke, and PI isn't limited to names and credit card numbers. You can learn a lot about a person from the contents of their Prefetch folder, their Steam folder, their internet history, the number of *.h, *.c, and *.py files on their drive and how often they are updated, etc.
I'm sure on enterprise systems MS is pretty hands-off. They've got a very stable cash cow there they don't want to scare away, but home users are probably screwed and you should expect that. MS has been making their opinion of their users' right to privacy and to control their own computers very clear over the years. They've repeatedly demonstrated a willingness to leverage their OS against users and their wishes for power and profit. Act accordingly.
Even with LTSC you can't disable all telemetry. It could have been a great (and very popular) option for companies (even though it wouldn't help home users), but MS didn't want that and discouraged its use in everything but the most extreme cases (https://www.computerworld.com/article/3326065/microsoft-tras...), even before they cut support to 5 years instead of 10.
Is there something you could say about complementing Defender with paid MalwareBytes? Is there too much overlap to justify this? Or is performance hindered more than the additional benefit accrued (not that I feel it, system is responsive enough)?
> Hey, sorry for all the name changes of Microsoft Defender.
Let's be fair, naming is not a strength of Microsoft. It seems that every product other than Windows and Office is renamed every couple of weeks; and even in those two examples, explosions of SKUs manage to muddy the waters just as well (Apple's "Choose a Vista" was very much on-point, even if you preferred Windows over Mac).
When I was at Microsoft, I campaigned hard that we should name Windows releases after dog breeds. Apple did big cats; who wouldn't love to download Windows 10 Golden Retriever? No one wants Windows 10 Fall 2021 Update for Creators.
As an avid Apple user, I can't stand their naming conventions. I don't have any idea if High Sierra came before or after Mojave, or if Lion was before or after Mountain Lion. I would much prefer version numbers/years.
Yup, I also hate this in the Linux world. Debian Bullseye? Ubuntu Focal? WTF? Ubuntu 20.04 LTS, thank you very much; please cease and desist with the "cute" names that force me to consult a chart every time I encounter them.
iPhone 10, Galaxy S7, RX 470 - such product names are significantly easier to keep track of.
Debian, Ubuntu, and Android names are all alphabetical, so you can tell which versions are newer and older. That's all version numbers are really useful for anyways.
Just checked. Android and Ubuntu, yes. TIL. Debian I don't think so though? It goes Jessie(8), Stretch(9), Buster(10), Bullseye(11). Many other examples of non-alphabetical products exist.
Regardless, numbers are easier to work with, easier to remember, and it's immediately apparent what's going on with them. "debootstrap focal target" is more difficult to remember than a hypothetical "debootstrap ubuntu 20.04 target" would be. In practice I have to consult a chart almost every time.
Honestly, I had no idea. I recently had to have my MBP repaired (new logic board... as always). So, I got it back with the latest OS (Monterey). I needed to download some software that was for specific versions of macOS, and I honestly didn't even know there was an OS 12. I thought I was still on "OS X".
If it wasn't "Monterey" and was just MacOS 12 there would've been no confusion. I feel like it's always an exercise of looking up the code name to find the version whenever someone is like "Yeah, I'm on Big Sur" .... ok one sec, let me google what that even means.
I spent some time as CPE so I guess the naming convention always made sense? You went from lesser cats to greater cats and now to different sites in CA. Each name always comes with a version number to fall back on...
That doesn't help when people like to refer to the version by its name only, omitting the number, which forces you to do a lookup against wikipedia or whatever.
TIL that they're alphabetical. I always hated them, but I guess that makes them slightly more tolerable. Really though, please just stick to numbers if you ever have to name a product lineup. It's immediately obvious to anyone that Firefox 77 came after Firefox 76.
Once you run out of letters, it gets confusing too. Ubuntu's wrapped around, I think twice in its history; it goes through code names from A to Z before starting over again. Is Gutsy Gibbon newer or older than Babbling Baboon? At least the $YEAR.$MONTH release numbers make sense; no questions about the relationship between 18.04 and 21.10.
Didn't they drop it? Or de-emphasize it? I know the current version is Android 12 but I have no clue what the name is. Same for all the previous ones. My old Note 8 has Android 9. My original HTC I think ended on 1.6 or something. I still have an ancient Asus tablet on Android 4. Etc. - I never knew or cared about the cute name, and didn't HAVE to.
Naming and version numbers together. I still can't get my head around the series of organisational perversions that would be required to go through that whole period (lasting years) of .NET/.NET Core/.NET Standard/crazy versioning/divergence and convergence malarkey. This all appears to be on a more sane course now but it's taken far too long.
I still don't fully understand which parts of the various .NET frameworks I can safely use with a fully FOSS stack on an arbitrary Linux distro or a Mac or whatever and which parts are effectively limited to Windows. The entire thing is incredibly confusing.
`dotnet-dump analyze` allows analysis of dumps on Linux using the same SOS debugging commands that work under windbg. They even added some native memory examination commands. You are lacking the rest of a native debugger, but asking the team to produce a full-blown multi-platform native code debugger would be a bit much. You can instead load SOS into lldb for mixed-mode debugging of both live processes and dumps on Linux.
For tracing, well, that means very different things to different people. One can capture most of the CLR-level events that would go through ETW on Windows with the `dotnet-trace` tool, which can also function as a primitive sampling profiler. But despite its name it does not use the profiling APIs to implement a tracing profiler.
The analysis side of the dotnet-trace tool is subpar, but honestly even the best tools I can find often leave me underwhelmed on the analysis side.
Ahem.. Looking at you Visual Studio 2005 Team System, I mean Visual Studio Team System 2008... I mean Team Foundation Server, no wait Visual Studio Team Services.. So sorry, Azure DevOps, obviously.
Github Azure Edition doesn't seem to be a real product that I could find any reference to.
Searching "Github Azure Edition" brings up this page[0]. That's the name of this promo page, and it talks about Github integrations into various Azure products. The very first product it talks about is Azure DevOps. Direct quote:
"Simplify deployment from your repository with seamless access to the Azure portal and Azure DevOps using your GitHub account credentials."
I can confirm that the parent comment you replied to is correct, as I used to work at Microsoft during the time when the renaming from Visual Studio Team Services to Azure DevOps was happening internally. As in, not that I was involved in it, but that we used VSTS, and then we noticed that the name was changing to Azure DevOps everywhere. All while we were still accessing the same service and all as before.
I see, so they offer Github hosted on Azure as well now, thanks for sharing it, I actually didn't know.
Though the parent point still stands valid, as Visual Studio Team Services indeed became Azure DevOps. Github hosted on Azure is just self-hosted Github and has nothing to do with VSTS/Azure DevOps.
> Microsoft both delivering a good product and acting in good faith
I’m going to call you out on that. Microsoft lost my trust to act in good faith with personal data when they started capturing my private OS user input (e.g. the history from Windows R (run)) and forced me to link it to my personal identity.
I observed first hand that disabling sending telemetry also disabled the history for win+r.
It’s also quite easy to observe that when you type anything into the start search interface you are steered or defaulted to searching Microsoft internet services.
> I observed first hand that disabling sending telemetry also disabled the history for win+r.
1. Okay, but how's that relevant to my original question? Is the history being broken supposed to be smoking-gun evidence that Windows is sending your "history from Windows R (run)" to Microsoft?
2. I just tried and failed[1] to reproduce this on a VM with a fresh install of Windows 10 Enterprise LTSC 2019 with "telemetry disabled". There isn't a universal standard for "telemetry disabled", but at the very least I have the "Allow Telemetry" and various search-related group policies configured. I suspect what's happening is that you ran one of those "disable telemetry scripts", and that unintentionally broke it.
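If anyone wants to check what's actually configured rather than trusting a script, the policy value can be read straight from the registry. A quick sketch using Python's standard winreg module - the key is where the "Allow Telemetry" group policy is written, and 0/1/2/3 map to Security/Basic/Enhanced/Full (0 is only honored on Enterprise, Education, and LTSC editions):

```python
import winreg

# The "Allow Telemetry" group policy lands under this key when configured.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "AllowTelemetry")
        print(f"AllowTelemetry policy = {value}")
except FileNotFoundError:
    print("AllowTelemetry policy not configured")
```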
>It’s also quite easy to observe that when you type anything into the start search interface you are steered or defaulted to searching Microsoft internet services.
But we were talking about the Run (Win+R) dialog, not the Start menu?
> I suspect what's happening is that you ran one of those "disable telemetry scripts", and that unintentionally broke it.
I am 100% certain that’s not the case.
—
Yes - the point is that setting all the most private privacy options on Windows 10 stops it from keeping a Win+R run history. I didn't go as far as installing a custom root CA and intercepting binary telemetry data to prove that the data was being sent, but I think the fact that the MRU list is disabled strongly suggests that the product team assumed or knew that it was collected.
If you're on LTSC Windows/Office you probably have a different experience.
While MS isn't collecting your run history (at least not as you type it) the point stands. They've decided to use their OS to collect personal info on users for their own profit. The extent to which this happens can be limited, but not disabled entirely (for most users). That's reason enough to not trust them.
If they are running a keystroke logger (i.e., capturing typed private user data), where are they logging the keystrokes to? Or is the run command history sent up?
Are you talking about folks with "Send my activity history to Microsoft" checked?
I have a script that sets default privacy preferences to my own preference when I start using a machine, you might consider that.
> I have a script that sets default privacy preferences
Then you will probably notice that you no longer have a history for Win+R run history.
It’s not unique to Windows to mine user input but it’s more recent than for example the search in iOS and less obvious than Google search.
I believed that the “personal” in Personal Computer meant that it belonged to me, and that used to be true. We are sliding down the slippery slope of allowing the software vendors to own our devices.
I think the starting point with Windows was product activation in XP, and that was quite legitimately intended to stop software licence abuse. I am still comfortable paying for closed-source software, but Microsoft seems to have given up on that business model.
Hello, they send anything you type in the Start menu right into Bing, and they've cut every way to disable it. By lying, they trick every user inexperienced in their methods into enabling an online/Edge account, so everything you type almost anywhere (except maybe Notepad, but pretty surely including Office) is linked to you, your payment and billing info, your SSN, your location, your purchases, and all the people you interact with. And then they feed you generic tabloid puke from msn.com, a site pushed by the spyware Internet Explorer that you also can't easily remove from appearing at start. The amount of trackers on that site is staggering.
The inability to simply "see client data" even if you go through multiple bastions did kind of surprise me. I worked at several startups before Microsoft where just asking the client for permission was considered OK.
This certainly makes debugging production issues much, much, much harder - there are certain environments whose data you simply can't access (either as a user or as an administrator) and you have to rely on telemetry (much of which you can't gather since it could potentially contain PII - this is all an audited process) to debug issues (attaching a debugger is also prohibited since you can read data that way, and the port is closed).
Instead of trusting me - think of the corporate incentive to do well here. Consider how much it would cost a company like Microsoft if employees were exposed to confidential customer data (our customers can work with medical data, so a fairly expensive legal nightmare) vs. what the company gains (engineers have a slightly easier time debugging). At Microsoft scale I guess it simply makes sense to be super strict about this.
>Instead of trusting me - think of the corporate incentive to do well here.
Unfortunately, when it comes to anything Microsoft-related, due-diligence research and logical thinking are rarely employed by the HN crowd, and are instead replaced with anger and FUD. I've lost count of the number of comments saying Microsoft is forcing TPM to spy on us.
I'm not saying that the alphabet agencies or nation-states couldn't misuse Microsoft's reach to get more private customer data, but that would apply to all US-based corporations, not just Microsoft. And since, AFAIK, Microsoft never seems to have been hacked in a way that leaked its customers' data, like happened to Sony and Facebook, it seems they're doing a good job so far of keeping the amateur bad actors out and their customers safe.
So thanks for commenting and sharing inside info, as some big companies ban their employees from doing the same.
> Instead of trusting me - think of the corporate incentive to do well here. Consider how much it would cost a company like Microsoft if employees were exposed to confidential customer data (our customers can work with medical data, so a fairly expensive legal nightmare)
The last 3 decades of big players misbehaving taught us they usually get a slap on the wrist for pretty much everything at worst, and a fine of half the money they made from the feature at best.
> Instead of trusting me - think of the corporate incentive to do well here. Consider how much it would cost a company like Microsoft if employees were exposed to confidential customer data (our customers can work with medical data, so a fairly expensive legal nightmare) vs. what the company gains (engineers have a slightly easier time debugging). At Microsoft scale I guess it simply makes sense to be super strict about this.
Your employer has been caught twice providing NSA backdoors in its operating system and its "home" edition makes it impossible to disable a stunning and completely unnecessary level of telemetry data.
Look, it really was not a personal attack on Microsoft engineers, but the plain and simple reality that Microsoft is a US corporation and Azure falls under the CLOUD Act says everything; the fact that engineers don't have access to customer data is probably just to prevent leaks. And I bet Microsoft does more than 99.9% of others to protect customer data... but then, no one can prove it.