The company I worked for and our competitor VMWare both had to create small kexts to enable on-the-fly capture of USB devices. We both accessed USB devices from user-space, but we needed to put small kexts in the kernel to prevent class drivers from being automatically attached to general USB devices. There was really no race-free user-space way of doing this and Apple ignored our messages on their dev forums when we asked for assistance.
I wonder if they’ve added an interface for this? Or maybe promiscuous USB device control just isn’t going to be possible now?
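In case it's useful to anyone reading along: a rough, hypothetical sketch (not our actual code, and with error handling elided) of the user-space discovery side using IOKit matching notifications. The class name "IOUSBHostDevice" assumes the post-10.11 USB stack. It also shows why there's a race: by the time the callback fires, the kernel may already have matched a class driver to the device, which is exactly why we needed the kext.

    /*
     * Watch for USB device arrival from user space via IOKit.
     * Build: clang usbwatch.c -framework IOKit -framework CoreFoundation
     */
    #include <stdio.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/IOKitLib.h>

    static void device_arrived(void *refcon, io_iterator_t iter)
    {
        io_service_t dev;
        while ((dev = IOIteratorNext(iter)) != IO_OBJECT_NULL) {
            io_name_t name;
            IORegistryEntryGetName(dev, name);
            printf("USB device arrived: %s\n", name);
            IOObjectRelease(dev);
        }
    }

    int main(void)
    {
        IONotificationPortRef port = IONotificationPortCreate(kIOMasterPortDefault);
        CFRunLoopAddSource(CFRunLoopGetCurrent(),
                           IONotificationPortGetRunLoopSource(port),
                           kCFRunLoopDefaultMode);

        io_iterator_t iter;
        /* "IOUSBDevice" on the pre-10.11 stack, "IOUSBHostDevice" afterwards. */
        IOServiceAddMatchingNotification(port, kIOFirstMatchNotification,
                                         IOServiceMatching("IOUSBHostDevice"),
                                         device_arrived, NULL, &iter);
        device_arrived(NULL, iter);   /* drain existing devices and arm the notification */

        CFRunLoopRun();
        return 0;
    }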
If I'm correctly guessing that you might have worked on Parallels, thank you for one of the best programs I've ever used. So many delightful touches, like the autodetection of USB devices and asking whether I want to use them with the VM or the native system. Without Parallels I could never have used a Mac as my main machine.
I've since moved back to Windows, but I hope Parallels might make a Windows version. The Windows competitors aren't as good as their Mac versions, and painfully slow compared to Parallels running on my ancient MBP 2012.
Parallels' way of handling USB is just perfect. I was trying to install the Rockbox firmware on my iPod Classic and the easiest solution was to use Windows XP in Parallels. That's when I realised how well it's implemented! It first asked me whether I wanted to attach the USB device to the Windows VM or to macOS, and later I could even make that the default. Very good UX.
Edit: Windows XP, because I still remember its Product Key. Ha. I probably reinstalled XP in my PC as a kid about a hundred times.
Why can't someone slap the XNU api onto Linux and create a runtime system that permits Apple's UX frameworks/libraries to be used by Linux developers (who will create more performant apps that do incredible things while maintaining the look-and-feel of the Macs that we once loved)? As long as we're booting Linux on machines bought at their local mall outlet or online store, I don't see the legal problem. Maybe a new generation of managers at Apple would even cooperate with such a Linux distribution (something like A/UX of yesteryear)?
Probably because importing Mach messaging and VM APIs into the Linux kernel, and recreating IOKit and similar from scratch is a pretty major undertaking; the APSL isn't GPL-compatible, so using the existing XNU code and trying to make it fit into the Linux kernel isn't something that could ever be upstreamed. Aside from the fact that Linux's device model doesn't exactly map to IOKit well.
Then there are invariably plenty of assumptions that userland makes: launchd is probably assumed to have PID 1; all sorts of stuff like that.
Plus of course, there's the question of whether this would be inline with the EULA of Apple's userland…
There's always GNUStep if you want to write Cocoa apps in Objective-C on Linux. Somehow I have a feeling the APIs aren't what's been holding back Linux on the desktop.
Darling is a translation layer that does something similar [0]. Development has gone up and down over the years. One of the main pain points is the way Apple adopts, and then discards, a new GUI API every few years.
As a side note, the forums aren't the best place for that; the post could just go unseen by the people that matter. The appropriate channels are Feedback Assistant and/or a technical support request (https://developer.apple.com/support/technical/), which has a bit more visibility.
For things vaguely like this, the places I've worked had contacts within the company helping us to varying degrees. Apple was not one of those companies, and unless there's money involved or contracts being signed, getting that contact is a challenge.
Seconding this: you get two free Technical Support Incident requests per year as a registered developer, and you can purchase additional ones.
We regularly file such requests and have always received timely and helpful responses.
Ah - good to know and glad to hear it for the sake of my former colleagues; Apple had already redesigned the USB kernel interfaces to make this more difficult and I wasn’t sure how they were going to keep doing this at all.
The developers of Little Snitch have explicitly said they favor this move:
>Using the Network Extension framework instead of an NKE very likely allows us to build a version of Little Snitch which requires no Kernel Extension at all. That is a good thing.
Of course, you reposting the same inaccurate rant multiple times throughout these comments (including reposting as the top comment grew) signals you're aiming to perpetuate your anti-Apple fervor, but here's to dissipating said FUD for those of us who use/love Little Snitch.
> The developers of Little Snitch have explicitly said they favor this move ...
Sure. They also said that if Apple doesn't provide them the required API's, they will continue using a kernel extension:
> But what happens if Apple will not or cannot provide the required features? NKEs will be deprecated, but not Kernel Extensions in general. We can still implement a Kernel Extension to augment the functionality of the NE framework. The basic filtering can be done via the NE framework and additional functions would be provided by our own Kernel Extension.
The unsaid part is of course: "as long as Apple allows them to do so."
With the way Apple is tightening down the Mac, through both hardware (the Apple T2 security chip) and software (changes making macOS more like iOS), there is absolutely NO GUARANTEE that Apple will allow unapproved Kernel Extensions to run on future versions of macOS.
This is not a conspiracy I can believe in. I just think that Apple doesn't want third party code running within the kernel, with such high privileges, creating vulnerabilities and/or instability, and they're creating user-land alternatives.
I presume Apple looks at Dropbox creating a kernel extension and isn't too impressed, so they give them a safer and more secure method to achieve the same thing.
I would very much like to believe you, as I turned to Apple because I didn't like Microsoft's and Canonical's user-spying practices and unfortunately have to use some non-Linux commercial software applications.
However, recent moves by Apple over the past years make me very doubtful. Apple is going about this just as Google did in the beginning - loudly and convincingly promising "privacy" while slowly intruding more and more and collecting more data on its users.
I have run both MacOS as well as iOS behind outbound firewalls, and there just isn't any data going to Apple that I cannot easily disable in settings, except maybe checks for updates (haven't tried).
Your conspiracy is also missing any motivation Apple might have for gathering private data. They don't run an ad network
> it does nothing to stop your data from "leaking" to the internet in the first place
That's just blatantly false. Apple has always had restrictions on third-party software. Recently, they have tightened the screws considerably. I'm not entirely sure about outgoing net connections. But just gathering data that is valuable enough to exfiltrate is near impossible, considering apps can only access what's specified in their manifest and explicitly allowed by the user.
> That's just blatantly false ... But just gathering data that is valuable enough to exfiltrate is near impossible, considering apps can only access what's specified in their manifest and explicitly allowed by the user.
I think we are talking about different things.
First, I was specifically talking about iOS / iPadOS. Neither platform has any outbound firewall that can stop apps from accessing the internet (unless you count the very basic one that lets you restrict net access per app when you use "mobile data").
So if you are on Wi-Fi on your iPhone/iPad/iPod, any app you are using has free access to the internet. This has the potential to "leak" user data.
E.g.: I like to use the Adobe Acrobat app to read PDFs. But when you use the app, you have no idea whether Adobe is collecting metadata about the PDFs you have, or even uploading them. Same with Microsoft products. Or even Google's.
If I could disable internet access for these apps, 90% of which don't need it in the first place, I could use them with a bit more peace of mind. That's what I meant when I said that allowing apps unrestricted internet access can "leak" our personal data, on the iOS / iPadOS platforms.
> Your conspiracy is also missing any motivation Apple might have for gathering private data. They don't run an ad network
It's called "surveillance capitalism". Please look it up on Google.
You don't need an "ad network" to monetise user data. (Oh, and Apple does have an ad network, though it is on "hiatus" now - it was "shut down" right when Apple's media campaign about increased user privacy picked up, while at the same time they started rolling out forced iOS updates with features that collect even more of your user data - all "anonymously" of course, and there have been enough recent articles showing how anonymous data can still be de-anonymised.)
Let's not forget that as a public multi-national company, bound to the American philosophy of capitalism, Apple is duty-bound to make more money for its shareholders in any way possible. (I've even heard that shareholders can otherwise sue them in American courts.) So every one of us would be a FOOL to believe that Apple will not monetise our personal data, one way or the other, now or in the future.
> Apple doesn't monetize data, so why would they want this?
Look up "surveillance capitalism". And Apple absolutely monetises user data - just not in the way you think it does (i.e. through an ad network). It uses the data to profile you to learn more about you to better "exploit" you and make more profit from you.
When most people say "monetize data" they mean advertising and marketing.
Monitoring the usage of their own products to improve offerings, and never sharing that data, is not what people mean when they say "monetize." You're deliberately trying to muddy the term because you dislike Apple.
Yeah, and Apple using your personal data to market their own services and products to you comes under the ambit of what we are discussing, if you want to be really pedantic about it.
And stop acting with a cult like mentality that any criticism of Apple is an attack on Apple.
Have you seen the number of MacBooks at MIT, or Google's or any other tech company's campus? The difference between your notion and reality should really motivate you to question your assumptions.
The two most glaring errors are, first, that "advanced users" wouldn't care about ease-of-use. And, second, that "advanced users" would care about price as much as you do.
Apple is great hardware, an OS that works and doesn't constantly bother you with notifications (the most glaring problem with Windows when I last used it a decade ago), and a native Unix underneath.
Also, if you buy a Mac that's a few years old, you still get a great computer at a low, low price. I picked up an 11" air for under $400 off eBay and have used it to build a startup that supported me and my support person. Ran dev tools fine, ran browsers fine, and even though it was years old the battery life was fine.
You don't need the new hotness to get good and useful work done.
I wanted full control over settings and programs when I was 14. Decades later now, I want an environment that works and gets out of my way and gives me access to the tools I want to program with.
> I wanted full control over settings and programs when I was 14. Decades later now, I want an environment that works and gets out of my way and gives me access to the tools I want to program with.
Sorry - I thought he was being a bit juvenile and felt like being similarly dismissive was appropriate.
I regularly switch between MacOS, Windows, and desktop Linux both for work and for home; I love fiddling around with settings and configurations to get things to work, but only sometimes and only when it’s on my terms. The rest of the time I just want it to work, out of the box or off the install disk.
I’m always going to love tinkering with Arch from time to time, but then I get my work done on Fedora or similar.
Ok, I get what you meant. And I won't deny sometimes I too get frustrated when I can't find the driver for some hardware on Linux or have to download some source code and compile it.
But I will still never accept Apple's approach of total control and deciding for its users. Each of us has different needs and expectations of our computer. And there is no way, even with all the spying that Apple does, that it can provide a "perfect environment" for each of its users. It just isn't true and isn't possible. So restricting customisation, in the name of security but in reality to keep the user from making any changes that don't "contribute" to Apple's profit, is just bad for us consumers.
I wonder if this is the end of drivers for 3rd party PCIe/thunderbolt high performance NICs? I imagine that this may make Macs less viable for situations where having a reasonably high performance 3rd party NIC is important (eg, video editing).
Are there any details about DriverKit that are available?
I used to be a Mac driver developer ~10 years ago or so. I did one of the first 10G drivers for macOS, and macOS performance was always terrible compared to Linux or BSD, but I can't imagine that moving it into userspace is going to help anything. macOS boundary crossing (e.g. syscalls, ioctls, and Mach traps) performed even worse than its network stack as compared to Linux/BSD.
> Regarding performance, on the other hand Linux/BSD still don't match macOS real time audio capabilities.
Got a source for that? I use a Debian machine for an audio workstation. I moved there from OSX and am using the same DAW and get less latency now. Especially because I can run a real-time kernel...
I wouldn't say it's easier, but you can achieve more deeply tailored audio environments with Linux than with macos. I have a friend who is able to run a vector of 4 samples on Manjaro with the high perf rt kernel and using alsa backend. This is impossible to achieve on macos without significant hacking or maybe one of the new workstations. Even then...
Your statement was that audio latency is lower on OSX compared to Linux, and you cannot back that up. We all know that the prevalence of Linux DAWs in studios is low. But that would only be related to OSX having the lowest latency if latency were the only thing that ever mattered to professional studios. Which it's not.
This seems to be an odd position to take in a thread about kernel extensions given that they're a fairly esoteric feature that power users make use of.
They're fairly common on the Mac, even for average users. A 3G mobile network dongle I got from Virgin Mobile required a kernel extension to run on my Mac.
Of course, years later I'd be much more hesitant to install a required networking Kernel Extension from an organisation called "Huawei".
Ubuntu Studio doesn't need to be compiled from scratch to achieve it either..
The applications that most professionals involved in music and audio use don't run on Linux. And for most professionals the application and its stable running is the key concern; the OS just facilitates the application.
You're mixing up capabilities-with-extra-work and capabilities-with-no-work.
If OS X also had a tailor-made kernel just for lower real-time latency it could do even better.
But out of box, it's already better.
That said, the point is moot. 99% of pro audio/MIDI apps/VSTs don't run on Linux, so only experimental nerdy musicians and FOSS zealots would use it for real studio work. You do get the occasional profile on music tech websites e.g. "Techno musician X uses Linux", and somebody always jumps in a forum to say they use some Linux DAW (and usually add a list of ugly hacks, workarounds, and things that don't work), but it's clearly a total outlier situation...
Regular people aren't compiling kernels, they're installing something like Ubuntu Studio (https://ubuntustudio.org/) that comes with everything you need preconfigured out of the box.
No, you just have to go to the Nvidia website and fuck around for 5 minutes with nested lists of devices and models and shit the average user probably has no idea about just to download some incredibly huge download full of over complicated software that no one will ever use.
At least this is how it has been every single time that I've installed an Nvidia driver on windows.
I stand by my position: it is easier, in fact trivial, to install a real-time kernel on Debian than it is to install an Nvidia driver on Windows.
And if an audio professional can't be bothered to figure this out to make money while a teenaged gamer can figure out an Nvidia driver just to play games then they should maybe consider changing professions because this trivial roadblock will be the least of their technical worries in the audio world.
You were not speaking of a crowd in your original statement:
> Regarding performance, on the other hand Linux/BSD still don't match macOS real time audio capabilities.
It's better to say that you were wrong at this point than to try to shift your statement to a popularity contest instead of the soft-realtime latency contest you, YOU, started with.
They don't match, because as has been pointed out to me in this thread, I need to either install a special kernel version, go hunt for a customized Ubuntu version, or compile a customized kernel.
The macOS kernel doesn't require any of this, it just works.
No audio professional using Apple hardware needs to learn these workarounds to actually do their work.
That seems pretty patronizing of that "crowd". Is the topic of this thread (the top-level, not the topic of audio latency) something that matters to that crowd you represent?
Professional audio producers and musicians can just as easily gain a high-performance DAW by using Ubuntu Studio, out of the box. My system thus configured, runs REAPER far better than the MacBook or Windows machines that are in the studio for the same purposes.
> I ... get less latency now. Especially because I can run a real-time kernel...
Where does it say they needed the real-time kernel to match macOS? It says "especially because", not "exclusively because". No data is shared on precisely how suitable it was or wasn't before the kernel tweaks.
Like others, I'm finding that audio latency on Linux easily beats that of macOS. I am using Ubuntu Studio with Firewire-based audio interfaces, 50+ channels of digital audio - Linux definitely performs better than my macOS systems, using the exact same external hardware .. even though my Linux box is slower and has fewer cores than my MacBook ..
With the fancy new userspace networking stack, I wouldn't assume that a DriverKit implementation is necessarily slower. At least compared to the old networking stack.
(It says it's meant to be used "to develop drivers for USB Ethernet adapter", but I don't believe there's anything specifically tying it to USB. DriverKit currently doesn't support PCI access; I don't know if that's coming, but the PCI APIs for kexts are not deprecated, so at minimum you could write a kext just to map the device into userspace.)
I was toying with trying to port the Linux Mellanox ConnectX drivers to macOS to avoid having to pay 2-3x the price for ATTO FastFrame cards. You need a special entitlement from Apple to sign network card drivers though.
FWIW I can only get about 60Gbps out of the ATTO FastFrame on macOS versus easily 98Gbps for Mellanox ConnectX on Linux.
It's perhaps worth noting that macOS already has drivers for at least one 10Gb chipset, since 10Gb interfaces are present on the Mac Pro and iMac Pro, and an option on the Mac Mini.
They use it in SmartSync, so that files that are actually stored in the cloud and not physically on your local machine are visible and selectable in the Finder and automatically downloaded on demand.
It would be great if there were some user-space support for this in next year's macOS release. I think it would be useful for a lot of cloud storage providers.
Apple previously added third-party sync integration points into the Finder to stop Dropbox from injecting into that process (when it was easier and Apple was locking it down). It’s likely they do not want to break Dropbox and will try to support whatever it needs.
Just like iOS. Which sucks because being able to be on my home network while not physically at home would have been nice in the past, mostly before Spotify was a thing.
ZeroTier has eliminated the use of the tap kernel extension in releases 1.42+ when used on macOS 10.13+. They aren't using DriverKit but a combination of poorly documented and undocumented features. They expect to eventually have to develop a DriverKit based implementation. Details at https://www.zerotier.com/how-zerotier-eliminated-kernel-exte...
DriverKit has only been around since 10.15 so currently it's not easy to correctly support a range of versions with everything so massively in flux and Apple's aggressive deprecation schedule. Writing and maintaining macOS system level software (including drivers, etc.) has generally become a pretty big headache since around 10.13, stuff is constantly breaking due to OS changes and regressions.
This obviously always happened to some extent, but the pace of breaking changes and bugs picked up massively around that time - I think a large part of the problem is that Apple's own developers don't actually need to use any of these features themselves, so they are just dumped onto 3rd party developers in a half-arsed state. I'm thinking of kernel extension authorisation (which was super buggy in earlier 10.13.x releases and still has weird quirks), various user consent additions (there are no APIs for directly checking or prompting for many of the permissions, let alone notifications when the user grants or revokes consent), DriverKit, EndpointSecurity, etc.
Yep, I hope so as well. I'm not sure if it's a motivation for the developer(s), but I dutifully donate each year.
If it doesn't get updated, I'll have to start looking at custom mechanical keyboards. Their controllers (some small Arduino variant) allow any random key mapping.
Might be worth noting that Karabiner Elements can be sponsored through GitHub - I would very much encourage anyone who finds it as invaluable as I do, to consider giving what they feel it is worth to them!
The good news is that there appear to be multiple options for taking KE into this new kernel world, and the only cost is dropping support for older versions of macOS. Unfortunate, but not unexpected.
That was before the WWDC 2019 announcement that all kexts are now deprecated and will be disabled soon. The Little Snitch team has been quiet about this.
Panic got some special exemptions to be in the Mac App Store previously, but they are not allowed to talk about it in detail. Perhaps LS is going a similar route and it will be revealed when Apple releases the next version of macOS.
As a registered developer, there's a clear path to reach out to Apple and make my case, but that doesn't guarantee I can use the MAS. A lot of devs don't see a need for it though, because the Mac userbase, in general, knows how to get apps on the www.
As long as I can distribute outside, I'm okay with it. But I, and even the biggest Apple fans in the press, have acknowledged the walls closing in.
That's about a different deprecation back in 2018(?). Note this bit:
>But what happens if Apple will not or cannot provide the required features? NKEs will be deprecated, but not Kernel Extensions in general. We can still implement a Kernel Extension to augment the functionality of the NE framework.
The developers of outbound firewalls like Little Snitch, TripMode and RadioSilence have already voiced concerns about how the alternative offered doesn't have the feature capability their software requires and would actually cripple it. Apple has asked them to file "enhancement requests" with no guarantee that they will be implemented - https://forums.developer.apple.com/thread/79590 .
> Kernel programming interfaces (KPIs) will be deprecated as alternatives become available, and future OS releases will no longer load kernel extensions that use deprecated KPIs by default.
This is the key part. As presented during the WWDC 2019 talks, the long term roadmap is to make macOS into a proper microkernel OS.
Nothing about Xnu leads me to believe they are pursuing a bona fide micro-kernel architecture. This doesn’t really indicate anything in that direction either. They’re just doing the typical Apple thing of enforcing a “one proprietary port, one licensed plug” policy.
We had a very serious performance issue resulting in crashes across our fleet of macs that was ultimately traced back to an endpoint security solution that was patching the kernel and doing dumb stuff.
These changes from Apple have forced this vendor to completely rewrite their product the right way.
I saw a rewrite of some functionality from a vendor (rhymes with mcafsee) as well. They decided to start running lsof to find open files. This can be excruciatingly slow and CPU intensive on macOS. It was running it near constantly, effectively turning machines into beach ball render farms.
I suppose avoiding kernel panics is an improvement but let’s be real: these enterprise vendors have always made shit software and rarely keep up with OS releases or updates. They’re not about to change any time soon.
Indeed, but they are what causes OS vendors to actually do something about it.
Android now requires hardware memory tagging on ARM and Fortify by default, and Treble requires new drivers to run out of process, exactly because of the same kinds of issues.
"MacOS 10.15 Catalina will be the last release to fully support Kernel Extensions without compromises." Specifically, for the capabilities supported by System Extensions and the device families supported by DriverKit, using a Kernel Extension to do that same job is now deprecated and a future release of macOS will not load Kernel Extensions of these kinds. In future releases, we will add more kinds of System Extensions and more device families to DriverKit.
In turn, Kernel Extensions of those kinds will also be deprecated."
Banning third-party developers from loading modules into kernel-space does not a microkernel make.
Apple-approved developers are still going to be loading whatever crap they want into kernel space, including AMD or Nvidia graphics drivers and the like. It's not like they can convert XNU into an actual microkernel in less than a decade or two.
This just means that if you're not on the Apple whitelist, macOS becomes a yet even more closed platform for you. Wanted to develop a driver for a PCIe network card? Sorry, no can do!
GPUs are kind of interesting, because the kernel space part of a GPU driver is managing the MMUs, and a lot of microkernels see MMU management (even of devices) as still one of their core roles. It's basically a really legacy IOMMU inside most GPUs. Split off a user space process to kick off command list execution and respond to interrupts (most of the rest of the kernel driver these days), and I can totally see a microkernel with GPU drivers in it still being ideologically pure.
If they were going to make a microkernel they would have done it when they made mach, not after 20 years of writing kernel GPU drivers, network drivers, etc.
Here's a text transcript from the WWDC 2019 session that gave an overview.
>A System Extension is part of your app that extends the functionality of the operating system in ways similar to a Kernel Extension but running in user space outside the kernel.
Drivers are moving to user space too, but it's another case of something that will phase-in over two versions of the OS.
>DriverKit is a new SDK with all new frameworks based on IOKit but updated and modernized, designed for building Driver Extensions in user space outside the kernel.
You can build and test certain types of drivers in Catalina, but they won't be required until the next version of the OS ships.
>In Catalina, you can control USB, Serial, Network Interface, and Human Interface devices.
They will announce support for more kinds of user space drivers over time.
If there is not an available user space option for a particular use case, the existing kernel space option will continue to work.
I'm curious which ones -- CDC drivers are supported automatically and since Mavericks there is built-in support for FTDI and CH340G written by Apple. I think there is still a PL2303 driver but I've not used something with one of those for a couple of years.
I definitely do not miss the kernel panics from unplugging an older FTDI based Arduino while it was sending serial data!
I've had some problems with one device I've got with the Apple-provided driver and an FTDI adapter. Looks like the Mac doesn't respect hardware flow control, or the flow control is configured in some slightly different way, and it continues to send data even when the other end says it isn't ready.
Works fine with the FTDI driver and pre-Mavericks Apple drivers.
(I'd far rather use the Apple driver, which is much better at reliably detecting the device without needing to reinsert it, so this is a bit tiresome...)
FWIW, I'm having serious issues with my FTDI board on Catalina, even after installing the VCP driver. I've had to downgrade to Mojave for the time being.
Apple’s FTDI driver doesn’t seem to support baud rate aliasing, so nonstandard rates are not possible. FTDI chips are very capable but interfacing with them is surprisingly difficult on macOS.
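For anyone fighting this: the usual escape hatch on macOS for rates that termios can't express is the IOSSIOSPEED ioctl, issued after opening the port. Whether it actually works depends entirely on the driver honoring the request, which is presumably where Apple's FTDI driver falls short. A minimal sketch, with a made-up device path:

    /* Request a nonstandard baud rate via IOSSIOSPEED.
     * The device path below is a placeholder; error handling mostly elided. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <termios.h>
    #include <sys/ioctl.h>
    #include <IOKit/serial/ioss.h>

    int main(void)
    {
        int fd = open("/dev/cu.usbserial-EXAMPLE", O_RDWR | O_NOCTTY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        struct termios t;
        tcgetattr(fd, &t);
        cfmakeraw(&t);
        tcsetattr(fd, TCSANOW, &t);

        speed_t speed = 250000;                 /* a rate plain termios can't express */
        if (ioctl(fd, IOSSIOSPEED, &speed) == -1)
            perror("IOSSIOSPEED");              /* the driver rejected the custom rate */

        return 0;
    }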
I've typically gone a different route and used an ATMega32U2 (or U4) with Dean Camera's LUFA code to create a CDC to custom hardware bridge. Then the baud rate is irrelevant (or you can use it to set modes). I did this because OpenOCD was taking many, many minutes to program a tiny XC95144 CPLD using an FTDI JTAG cable. Yeah, sorry, trying to do it the "cheap way". When I got it working, the ATMega32U2 "serial" solution could do it in 2.3 seconds. Admittedly this was a few years ago, so things have likely improved.
One funny thing I did find doing this (I have not checked recent macOS releases - I should) was that if "Camera" was in the USB Device Descriptor, the device would get claimed as a "serially attached camera" and the "serial" port would not show up - doh.
Kernel extensions being deprecated is not news (it was announced at WWDC). The only real new info in this post is that 10.15.4 will pop up warning dialogs when loading certain types of kexts that have replacement APIs available.
It might mean switching more towards virtualization for Hackintoshes. It's now possible to run a completely, 100% unmodified macOS in qemu/kvm on Linux and use vfio passthrough to place GPUs (mostly Radeon cards) and other PCIe devices into macOS.
Speaking of virtualization, what about using macOS as a VM host? This typically requires kernel mode drivers. It would be awfully inconvenient to not be able to host VMs on a Mac development machine.
The Hypervisor framework was released in Yosemite (macOS 10.10). Nice overview of it here <https://www.pagetable.com/?p=764>. There is a basic but free implementation, xhyve.xyz <https://github.com/machyve/xhyve>, which can be used to run various guests. I know several folks that run Linux in it so they can use various FPGA development tools, remotely displaying X windows on the Mac using XQuartz.
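To give a feel for how thin the wrapper is, here's a minimal sketch of the Intel-era C API that the pagetable article walks through (error handling elided; recent macOS versions also want a hypervisor entitlement on the binary):

    /* Minimal sketch of the Intel-era Hypervisor.framework API (10.10+, Intel Macs).
     * A real VMM (xhyve, etc.) does far more work setting up the VMCS and guest
     * state before hv_vcpu_run() is useful.
     * Build: clang vm.c -framework Hypervisor
     */
    #include <Hypervisor/hv.h>
    #include <stdlib.h>

    int main(void)
    {
        hv_vm_create(HV_VM_DEFAULT);                     /* one VM per process */

        /* Back 1 MiB of guest "physical" memory and map it at guest address 0. */
        const size_t mem_size = 1 << 20;
        void *mem = valloc(mem_size);                    /* page-aligned */
        hv_vm_map(mem, 0, mem_size,
                  HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

        hv_vcpuid_t vcpu;
        hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

        /* ...program VMCS fields and guest registers here, then: */
        hv_vcpu_run(vcpu);                               /* returns on each VM exit */

        hv_vcpu_destroy(vcpu);
        hv_vm_unmap(0, mem_size);
        hv_vm_destroy();
        free(mem);
        return 0;
    }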
It's maybe the best for macOS, but it's far from being the best VM software. It's full of ugly bugs, especially when trying to script it and use the 'vmrun' tool. It's expensive and support is nearly nonexistent.
You can say you don't like VMware and that's ok. But I remember when I first used it in the 90s and it was absolutely mind blowing. They needed to do slightly crazy things in kernel mode to get that done.
I worry if there is a hypervisor implementation monoculture where everybody is wrapping the same kernel interface, we lose some possibility for a better implementation.
Or if some problem of the future, not VMs, some day requires kernel hacking, but this is artificially locked out. Apple is choosing to not have that innovation happen on their platform. Seems short sighted.
The framework is a relatively light wrapper over the Intel virtualization extensions. The handling of interrupts is definitely not as efficient as it could be, since it doesn't use the processor support for APIC register virtualization and injection of interrupts without exiting to the hypervisor.
(I mentored the Google Summer of Code project that brought the Android emulator's support for Hypervisor.framework to upstream QEMU).
I work on a hypervisor that needs much more than the Intel virtualization extensions. All sorts of registers need to be saved, adjusted, used by virtualization, sampled, and restored. It isn't just the normal guest state that needs to be dealt with.
They'll probably say something about requiring to port your virtualization software to the macOS Hypervisor framework. It has already been the only option for apps distributed via the App Store which want to provide virtualization. Qemu already supports it as the "hvf" backend. I can't speak to what the performance is like.
(there's a bit of history there - a company developed it for a commercial product, Google ended up forking the open source parts for the android emulator which eventually resulted in them contributing it back to qemu upstream).
It could be this is done by MacOS itself, in a similar manner to Hyper-V on Windows, where you can no longer use VMWare/VirtualBox/etc. Virtualisation is critical to Win 10 security, wouldn't be surprised to see MacOS going in that direction.
My Windows used to BSOD when running with Hyper-V enabled. I disabled it and I'm not going to enable it again anytime soon. VirtualBox works just fine. I don't know why you think that it's critical to Win 10 security.
Wow, that's fantastic to hear. I quit messing with Hackintoshes quite a while back, as I just couldn't afford to risk messing around with a daily driver work machine. This could get me back into it.
I know using them for a development house would probably draw Apple's ire, but I really wish they'd license macOS, or maybe a headless macOS for servers/cloud. Doing CI on iOS and macOS apps is a right pain without some kind of build farm. It's a shame that things like this (https://www.sonnettech.com/product/rackmacmini.html) have to exist (it's not a dig at the product, it's a dig at the situation that created it). I could just imagine spinning up a couple of EPYC systems with macOS VMs to do CI for software for Apple products...
OpenCore loads kexts differently (sorry for the lack of a technical explanation) so you can keep SIP enabled, in addition to other advantages over Clover or other methods. More info below.
Curious if anyone knows what Dropbox will do about SmartSync. KAuth extensions are likely out as of 10.16, and the Endpoint Security extensions don’t let you block a reply for more than 60 seconds, so you can’t dynamically page in large files anymore.
They have access to a kernel module and entitlement that only they have access to, com.apple.fileutil - and Apple has said they aren't giving out any more entitlements for it.
I would bet hard cash that it'll be out as of 10.17. Almost every major vendor is done, and the ones that aren't have a 2020 OKR (or equivalent) around this.
> more than 60 seconds, so you can’t dynamically page in large files anymore
certain vendors with legacy engines they've ported 10 years into the future are in trouble. But, assuming a 10.17 release, which they've worked out with apple? They'll be fine, mostly.
To quote the document, KPIs "will be deprecated as alternatives become available". For VFS, no alternative is available, so it's intentionally not deprecated.
They currently offer no replacement for VFS. But I bet they will offer something fuse-like if they offer anything at all. Also: the file provider api they use for iCloud Drive that was supposed to ship in Catalina but got yanked is still likely to happen.
The latest version of FUSE for macOS, which is still freely available to end users but no longer open source and not licensed to allow redistribution, works on the current version of macOS. The open source code has not been updated in a few years and as such only works on older macOS releases.
2 weeks ago I used OSXFUSE to read an old Linux hard disk. It was running ext4 and I was able to install OSXFUSE and EXT4FUSE under Catalina.
It let me browse and copy files off. I had a few directories that were password-protected and I wasn't able to access them. There might be a way to do so but they just weren't important enough to me to do more investigation.
Just looked through my kexts. I've got two that I use, one that I don't, all signed and approved by Apple:
* Driver for RTL815X that works with my USB Ethernet adapter from RTL.
The standard Apple driver refused to run at 1G, only at 100Mb. It'll be interesting to see if they update the drivers any time soon.
* Tripmode that controls network I/O (I don't use it)
* Karabiner Elements to handle my custom keyboard
I use the (excellent) Karabiner-Elements to handle my keyboard and it has a kext that sets up a virtual keyboard and virtual pointer as HID devices. So I'm guessing that's going to need to be converted from a kext to something using HIDDriverKit.
As usual, Apple does not give developers any time to react. There is no migration plan save for one video from WWDC where they briefly describe some alternatives.
> In macOS 10.15.4, use of deprecated KPIs triggers a notification to the user that the software includes a deprecated API and asks the user to contact the developer for alternatives.
Developers who are affected by this (and who were on top of last summer's announcements) knew this day was coming, and have been working on updates since then. 10.15 serves as a testbed that supports legacy kernel extensions as well as the new user-space frameworks (e.g. EndpointSecurity and NetworkExtension) that largely replace the deprecated APIs. Developers that have a revision ready to test against the first 10.16 beta this summer should be in good shape (assuming that Apple doesn't swerve wildly between 10.15 and 10.16 cough 64-bit Carbon cough).
Yes, the documentation is scanty in places, and there is some missing functionality at the moment. And the support burden on these companies, once 10.15.4 ships, will be a pain ("It's OK, the software still works, don't worry about that alert message"). But it is incorrect to say that Apple has not given developers time to react.
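For anyone who hasn't poked at it yet, the EndpointSecurity side is fairly small to get started with. A hedged sketch of a notify-only client (no auth verdicts), assuming you have the endpoint-security entitlement and run as root:

    /* Minimal EndpointSecurity client (10.15+), subscribing to exec notifications.
     * Requires the com.apple.developer.endpoint-security.client entitlement and
     * root; uses the blocks extension, so build with clang and link against
     * libEndpointSecurity:  clang es.c -lEndpointSecurity
     */
    #include <stdio.h>
    #include <dispatch/dispatch.h>
    #include <EndpointSecurity/EndpointSecurity.h>

    int main(void)
    {
        es_client_t *client = NULL;
        es_new_client_result_t res = es_new_client(&client,
            ^(es_client_t *c, const es_message_t *msg) {
                if (msg->event_type == ES_EVENT_TYPE_NOTIFY_EXEC) {
                    es_string_token_t path = msg->event.exec.target->executable->path;
                    printf("exec: %.*s\n", (int)path.length, path.data);
                }
            });
        if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
            fprintf(stderr, "es_new_client failed: %d\n", res);
            return 1;
        }

        es_event_type_t events[] = { ES_EVENT_TYPE_NOTIFY_EXEC };
        es_subscribe(client, events, sizeof(events) / sizeof(events[0]));

        dispatch_main();   /* handler blocks are delivered on an internal queue */
    }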
> But I find it totally bizarre that they never shipped 64 bit Carbon.
I don't. Carbon was a >20-year-old codebase which had always run on a 32-bit system. I wouldn't be surprised at all if they simply determined that there was too much code which would have to be rewritten to run correctly on 64-bit.
You think 20 years is a sign that a codebase needs to be replaced?
20 years is nothing (/grabs your lips and makes them say "NSObject"). Don't talk to me about code quality either, just because something is old doesn't mean it's bad (or good) it just means it's old.
This is especially true for "the last 20 years" in particular (which featured the elimination of most American tech workers with more than 20 years of experience during the dot-com crash, followed by a six-year timeout during which most American universities did not graduate many CS majors, and those that did had no real "industry mentoring", just Google searches and blog sites). IN MOST CASES, THE QUALITY OF THE OLD CODE FROM BEFORE THAT ERA IS GOING TO BE FAR SUPERIOR TO THE CODE PRODUCED DURING THAT PERIOD.
Things are just now (14 years after the new CS majors began to arrive at their desks) starting..and I do mean starting..to get back to normal (the people who infested the industry in the absence of such formally trained talent are still around, many have whipped investors into FOMO, allowing them to live very well and thus be influential and even exploitative of the younger formally trained CS majors, instead of the other way around).
OpenSSL was deprecated 8 years ago and hasn’t been removed yet. Your bet of 1 year is not well-supported by their historical trends on Carbon, x86 32-bit, PowerPC, 68k, Garbage collection, and so on.
Unless you’re suggesting that Apple will only give 1 year’s notice when they remove deprecated features - in which case, they might only give 0.25 years (at WWDC), which happens quite frequently.
What do you think is the sensible strategy when Apple deprecates a feature?
Directly jumping on the provided alternative is basically doing beta testing for Apple. Very often the alternative is not ready yet. For years after it was announced Swift was not production ready. SwiftUI will probably replace Cocoa but again it is not production ready yet (and doesn't work on the majority of Macs).
What if there is no alternative? There are many crossplatform projects that rely on OpenGL and moving to Metal is problematic since it's an Apple only API. No replacement for Carbon either because at some point Apple decided to abandon the port to 64 bits.
How much time should one wait before fully jumping, say, into Swift? Nobody knows. Apple could announce today that it plans to kill ObjC by 2025 and give devs a good picture of what is going to happen and of its commitment to a given feature. I suspect the problem is that not even Apple knows when it will finally decide to do that.
There is a lot of uncertainty. How can one decide to start a new macOS project today without knowing how long that investment will last? For example, if you had invested heavily in OpenCL a couple of years ago, now you'd have to rewrite all that code for Metal. Or moving from ObjC to Swift. Or moving from OpenGL to Metal. Etc. This is a luxury that not everyone can afford, and Apple doesn't seem to care.
No wonder Apple is the only big developer working on macOS exclusive products and they are trying to bring over iOS devs via Catalyst.
Spend the six months after WWDC-announced deprecation porting your use cases and reporting bugs and evaluating whether it’ll be easy or hard to flip the switch if you need to.
Each WWDC thereafter, either test your deprecated code for surprise breakage on the new betas (this happens), or remove your deprecated code if the betas remove support for it.
If you aren’t committed to reviewing, testing, and updating your codebase annually (or shutting down products that are no longer a good fit, like iDefrag) then you should probably not develop for Apple platforms.
If you’re concerned that there’s not enough revenue to support this every year, then you should reevaluate your position on subscription revenue models or accept that you’ll operate at a loss re: keeping your platform up-to-date.
The shortest on my list is GC, which they removed after a short 4 years of deprecation. I'm short on examples of things that were deprecated one year and then removed the next couple years. If you have some?
They could remove Perl/Python/Ruby this year, having only deprecated them last year. Historically they're not inclined to do so, but I'd expect those to be left deprecated until some event permitted their removal to be more convenient (like "new architecture added", which would force a global recompile, at which point lots of deprecated things can go away more smoothly — OpenSSL, OpenGL, P/P/R, etc.)
How do you get 6 months from June 2019 announcement for the next major OS release which won't be out before September 2020? It still works in Catalina and they won’t change that in a point release.
Because you don't have one year to update and release your software. You have to have it ready and begin testing at least by the time betas of the next version of OS ships.
That's 6 months from the announcement of an entirely new, never-seen-before thing with next to zero documentation in which to understand it, develop an alternative (if that's at all possible) and distribute it to users.
I really hope that this is the final nail in the coffin for third party Anti Virus vendors.
It's been really worrying to see large enterprises move from Windows to macOS and carry over the thinking that they need to install a third party AV product (which in turn often acts as a near-unrestricted way into the kernel).
A lot of existing antivirus software was already doing that -- and yes, it was glacially slow. But at least doing this through a first-party interface means it probably won't block the rest of the system quite as badly.
JAMF uses the MDM APIs that Apple provides. It also can't access personal data via those APIs. Your anti-virus may use kernel extensions, in which case it will stop working.
JAMF additionally permits the provisioning server to run arbitrary scripts on the endpoints, using an installed daemon that implements functionality beyond what MDM offers.
1. My parents lost money on the Internet. They treat scam messages ("you've been hacked, we have incriminating stuff about you, give us bitcoins and we won't tell anyone") seriously. They wanted to buy some wonder drug that didn't even exist. Now they contact me in every case that involves money on the Internet, and over the past several years there were a few other cases where I had to say "don't do it, this is a scam". So I'm effectively performing the role of a human third-party AV.
2. My girlfriend has been hit with a clickjacking campaign. She was crushed; I want to lessen the chance of this ever happening again. I've also educated her about it and how it works, so she will probably not fall for it anymore, but on the scale of humanity as a whole, "let's teach them how to defend themselves" is not a good strategy for trying to eliminate a harmful event; there will always be people who resist learning.
3. I want to help limit the spread of malware. If I'm using AV, I influence the environment I live in so that everyone sees lower malware spread. This means people like you can wonder what the point of AV tools is, since you probably don't know even one person who has been hit with a malware attack. Similar to vaccines, I guess.
You shouldn't be installing third party antivirus on macOS (or BSD/Linux for that matter); it is little more than another path to the kernel and root user space. macOS has its own built-in malware/virus protection.
What should I say to my parents when they enter a website that advertises some kind of diabetes wonder drug and they download some crap and try to run it? That they shouldn't install AV because it potentially opens an attack vector? Do you think they will know what it means?
This is yet another reason to move away from MacOS. The blocking of 32-bit apps, a requirement to have Apple notarize every application, and now the removal of kexts.
If Apple still made great software and even better hardware then I wouldn't care, but that hasn't been the case for years.
I agree. I work in healthcare and HIPAA rules dictate some sort of Anti-Virus solution for all computers (stupid rule!). My employer chose Sophos, which uses several extensions, and my system is definitely more unstable and less performant because of it. Hopefully forcing vendors to use the proper APIs and preventing them from monkeying around with the kernel will improve my experience.
If you've been unhappy with hardware and software for years, why haven't you moved to another platform? At least your OS if you didn't want to buy new hardware.
Other platforms are even worse right now, in different ways. Perhaps that will change, or perhaps Apple will get so bad that they’re no longer the least-worst option.
You effectively can't use alternative OSes on new MacBook Pros easily. Drivers are not available and the Wi-Fi firmware is broken (wontfix, don't care) for use with the Linux stack.
Reasons to stick with them: company purchasing policy.
...which is unlikely to have the same performance and control benefits as running "on the hardware", and I bet they will also gradually lobotomise the APIs in userspace too.
Apple's control-freak attitude is nothing new. This is just another step along their plan to turn desktops and laptops into the same dumbed-down walled-garden platform as mobile devices.
I understand where this sentiment is coming from but I think that it may be a little alarmist. Security has been lagging behind in desktop operating systems for a long time now… it was only a matter of time that they’d catch up because it’s become increasingly clear that unquestionably running code that can steal or destroy user data and especially code with kernel access is a catastrophically bad idea in an always-connected world.
I suspect that even Linux will be adopting a more restrictive (if decentralized and overridable) model for third party binaries sooner than later.
Two decades ago, most people thought Stallman was alarmist... but he was right.
> because it's become increasingly clear that unquestionably running code that can steal or destroy user data and especially code with kernel access is a catastrophically bad idea in an always-connected world
...and over the years I have increasingly become convinced that "security" is simply a convenient and difficult-to-oppose excuse to take away freedoms and push society towards an authoritarian dystopia by spreading such fearmongering paranoia. There's enough sci-fi around to show what that could look like.
> over the years I have increasingly become convinced that "security" is simply a convenient and difficult-to-oppose excuse to take away freedoms and push society towards an authoritarian dystopia
... and presumably you'll continue to feel that way right up to the point that a bit of malware steals your banking credentials or encrypts your hard drive and demands a Bitcoin ransom.
Reminds me of how Steve Jobs opposed releasing iOS SDK to the public and wanted to replace everything with web apps. Why? In the name of security, of course!
You absolutely do not need kexts to steal user data, it's totally orthogonal to user's data security, and I think (correct me if I'm wrong) even in Catalina this XKCD is still relevant: https://xkcd.com/1200/
That is only for a post written in, say, 2019. A new post that cites old information is still a new post.
Additionally, they _announced_ the deprecation at WWDC, but that did not take immediate effect. The point of this post is that they are as of right now no longer supported.
I was the external contractor brought onto the project as the project's macOS kernel extension expert; my contract expired in November after the port was put on ice. As far as I'm aware it's no longer being pursued, with sparse checkouts being the new hotness.
I don't want to and can't speak for Microsoft, and I did not make the final decision, but:
* The user space alternatives (NSFileProvider, EndpointSecurity) are not up to the job for various reasons.
* Porting everything to the much more involved VFS KPI would have been a large amount of work, and with a near-100% risk of having the rug pulled out under it yet again.
VFSForGit is/was not a true file system, it just intercepts file I/O events to dummy files to lazily fill them with content and to log writes so that 'git status' does not have to scan the whole repo for changes.
The macOS port uses the KAUTH listener API, which was indeed deprecated with 10.15.
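For those who never touched it, the now-deprecated KAuth listener pattern (old TN2127 territory) looks roughly like this inside a generic kext. This is kernel code and purely illustrative of the mechanism that's going away; the symbol names come from the standard kext template and the file-op handling is simplified:

    /* Rough sketch of a KAuth file-op listener inside a generic kext (the API
     * deprecated in 10.15). The _start/_stop symbols are wired up by the kext's
     * Info.plist / KMOD declarations. */
    #include <mach/mach_types.h>
    #include <mach/kmod.h>
    #include <sys/kauth.h>

    static kauth_listener_t g_listener;

    static int fileop_callback(kauth_cred_t cred, void *idata, kauth_action_t action,
                               uintptr_t arg0, uintptr_t arg1,
                               uintptr_t arg2, uintptr_t arg3)
    {
        if (action == KAUTH_FILEOP_OPEN) {
            /* For KAUTH_FILEOP_OPEN, arg1 is the path of the file being opened;
             * a hydrate-on-demand scheme would fill in placeholder content here. */
            const char *path = (const char *)arg1;
            (void)path;
        }
        return KAUTH_RESULT_DEFER;   /* observe only, never block the operation */
    }

    kern_return_t MyKext_start(kmod_info_t *ki, void *d)
    {
        g_listener = kauth_listen_scope(KAUTH_SCOPE_FILEOP, fileop_callback, NULL);
        return (g_listener != NULL) ? KERN_SUCCESS : KERN_FAILURE;
    }

    kern_return_t MyKext_stop(kmod_info_t *ki, void *d)
    {
        if (g_listener != NULL)
            kauth_unlisten_scope(g_listener);
        return KERN_SUCCESS;
    }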
Whoops, my mistake. (In my defense, it has VFS in the name! So I just assumed it uses xnu's VFS API, but apparently the name comes from the Windows VFS API.)
It sounds like FileProvider would be perfectly suited to the job if it weren't pulled.
Someone else on the project experimented with FileProvider during the Catalina beta, so this is all second hand, but it wasn't really suitable for the job - it seemed more designed for having a special folder like the iCloud Drive, and didn't work well with CLI tools. I actually didn't realise it was pulled before the Catalina launch, but that makes sense, I got the impression it was nowhere near production ready.
I wonder about drivers for PCIe add-in cards for the new Mac Pro. I have a client that has asked me to develop such a driver for their card so they can run on modern Mac Pro systems. I’d work entirely in userland via DriverKit if I could, but I don’t think the necessary APIs to interact with PCIe hardware are available.
>IOUSBFamily has been deprecated and headers removed from SDK since macOS El Capitan (10.11). All clients should move to IOUSBHostFamily or USBDriverKit, where appropriate and outlined below.
lol which one of you? i fully sympathize but oof, guess you lost that fight
As a total layman to developing kernel extensions, is this another example of apple limiting what you can do with their hardware, or just an effort to get developers to modernize how they make drivers for macOS?
Well, Apple makes the hardware, so they would typically make the drivers as well. From what I read and understood, without being able to load your own extensions you can only do whatever Apple has allowed you to do.
It could be compared to the changes Chrome plans (or has already introduced?) to restrict extensions that filter sites, which affects ad blockers: you're at the mercy of Google as to what you can and cannot block.
Similarly here, you will only be able to hook into the interfaces Apple has provided, but if you plan to do something that wasn't thought of, or that Apple does not approve of, tough luck.
How will hackintoshing work without kexts? I'm no expert but have built a few, and they make extensive use of them to make non-apple hardware work properly and to add functionality.
Do you have any official references for this roadmap? There are several comments on this page alone that try to debunk the 'road to microkernel' plan. If you don't have any reference, please edit your comment and insert 'I think' into proper places.
If you are relying on the ability to develop and load specific types of kext in macOS in future, I recommend getting in touch with Apple DTS. (Note that communications with DTS are unfortunately typically under NDA.)
The official references are a mix of actually watching the WWDC 2019 session videos, reading Apple documentation and forum discussions, and thinking about what having a kernel where drivers are only allowed in user space means for the OS architecture.
I checked my Hackintosh and I had like 32 kexts, although a few of them were immediately noticeable as unrelated to Hackintoshing. However, there does appear to be quite a number, so this is a bit worrisome of a development.
Hopefully people will find new ways to support macOS on PC hardware, since Apple doesn't really seem interested in the prosumer desktop market.
Oh, I do have more 3rd party kexts but they're not related to hackintoshing and most of them are for usb stuff; Apple seems to have provided a migration path for those.
I might need to get a new Creative Labs audio thingy because they're not known for updating drivers for old products...
It's shit like this that drives me never to use Apple products. Oh, your company gives Apple laptops to SWEs and I can't order something different? No, I won't work with you. I need full control over the machine I spend my day driving, and this control includes loading kernel modules on my own damned computer if i want to do that.
Right now I use an empty kext to prevent macOS from seizing control of a hardware programmer (keeping it from being accessed by the program that uses it). It was a hack of a solution, but it worked.
This means for the moment, I don't have an upgrade path that works for me...
I need Kernel Extensions for Karabiner and USB-to-TTY drivers to talk to and program firmware on embedded boards. If I can no longer do this, my next machine is not a Macbook. Unless there is a different way, this is a dealbreaker.
LuLu uses the now deprecated NKE API, and definitely does some things that aren't supported by the replacement NetworkExtension framework. [1]
I've looked at doing some similar stuff, and I think they'll have to reimplement it as a "VPN", where the VPN is really just a user space process that performs the desired filtering. It'll be interesting to see how the Little Snitch guys approach it.
macOS provides an API called Hypervisor.framework for virtual machine managers like Virtualbox. It means they don't need kernel modules anymore. QEMU also uses it for acceleration.
I'm pretty sure Parallels will come up with a solution. If you want to play with virtualizing Windows via Hypervisor.framework right now, I remember that the people from Veertu.com had Windows running via their Veertu Desktop product. But they seem to be focused on virtualizing macOS-on-macOS right now.
If you actually mean now, VirtualBox still works. If you want a Hypervisor.framework based system, I think Veertu used to have an app on the App Store.
Short term, I suspect that it’s going to be more relevant for porting functionality to iPadOS and/or allowing that functionality to be shared between Mac and iPad (in Catalyst apps).
I'd just like to point out that the title is a bit misleading -- They are deprecating extensions that use parts of the kernel API that have alternatives, as these become available.
How well that will work in practice remains to be seen. But they aren't simply deprecating all kernel extensions.
No; none of those are using deprecated APIs. VirtualBox and VMware could avoid the need for a kext by moving to Hypervisor.framework, but there's no immediate need to.
Perhaps worth noting that Parallels can already use Hypervisor.framework - it's a configuration option per VM (although I don't know if you can have a mixture of the options running at the same time).
I tried switching Parallels to the Apple hypervisor once because I was curious. I ran a quick x264 encode to test performance, and Apple's hypervisor was quite clearly worse.
Far from a comprehensive test, but, yeah, there's probably a reason Parallels defaults to their custom kext.
I just found Turbo Boost Switcher, which uses a kernel extension to manage Intel turbo boosting. I'm a little worried that this sort of change will prevent this kind of innovative work (and quite possibly kill the application altogether). http://tbswitcher.rugarciap.com/
There's probably easier ways of going about it. I doubt Apple really cares that much about Hackintosh users. Many of them probably have legit Apple hardware as well. I personally own a Hackintosh, a MacBook, an iPhone and an Apple Watch, and also have a MacBook at work.
I see it as part of a general trend whereby Apple are locking down macOS and making it more akin to iOS.
Think System Integrity Protection (SIP), the read-only system volume introduced in Catalina, etc.
All good for security and your typical macOS user. That these changes are also frustrating for hackers is just a side-effect they are willing to accept.
Tuxera is a kernel extension? I have Tuxera installed right now, and there's no kext for it in /Library/Extensions.
I also could have sworn Tuxera was FUSE based...
Edit: Oh, but it shows up in `sudo kextstat`. Okay, I'm completely wrong then. I was wondering how a FUSE filesystem could have such good performance...
Yeah, I was going by Tuxera's website FAQ that has an answer for "I'm Getting A System Extension Blocked Message During Installation". Didn't actually check for the kexts myself, so thanks for verifying.
A macOS without Tuxera NTFS and Parallels Desktop is going to be really problematic for me, but presumably Apple will find workarounds for them. I already switched back to Windows a year ago anyway.
Merely formatting a drive as NTFS only requires a user space program to write to /dev/rdiskN. Only mounting the formatted drive as a file system requires a kext.