This is really cool. Kudos to Microsoft for really getting open source lately. I wrote an app (which failed miserably) called zenaud.io. When I started writing the app, Apple was hands-down the better developer experience. Now, it's the exact opposite -- macOS is increasingly painful, throwing up more and more roadblocks and constricting their platform ever more. And Visual Studio is better than Xcode IMO.
Also, as a C++/Python dev - it's increasingly hard not to notice the awesome momentum Rust has garnered.
OT: I must agree with your comparison of macOS and Windows. IMO Microsoft is doing a lot to improve the developer experience. WSL2 is so freaking good. It has its quirks and it has issues with some workflows, but I’m thinking about moving off macOS after having tried it.
Apple may have the fastest processor, but Microsoft has the most comfortable tools. Both companies are not perfect, but if we must choose the lesser evil...
"Apple may have the fastest processor, but Microsoft has the most comfortable tools. Both companies are not perfect, but if we must choose the lesser evil..."
It's very fast for a low-power, laptop-focused processor, and even then only truly excels at single-threaded workloads. It's outclassed by AMD's mobile offerings (4900HS and 4800U) in multi-threaded workloads on most tests[0]. If you step up to desktop processors, the top-end parts like the AMD 5950X are in a different class of multi-threaded performance.
Don't get me wrong, it's an exceptional processor and incredibly fast for its sub-25W TDP.
This is simply a matter of adding more cores... I mean, I would hope the 5950X with its 16 full-strength cores would be better than an M1 with its big/little design...
Looking at the die shot [1], they have plenty of space for cores and cache if they remove the GPU. Surely it's not that simple, but I believe they should be able to scale to at least 8+4 cores without large interconnect changes, and at that point they are already knocking on the 5900X's door.
Indeed, WSL2 is pretty cool. Also, Windows Terminal is actually pretty sweet, and I even gave PowerShell a spin the other day. The crazy thing is you can basically use it as a bash shell, and it gets the job done. My developer experience right now consists of PowerShell, where I do all my regular directory jumping, editing (vim), etc., and a Developer Shell with the god-awful classic "terminal", which I only use to call conan/cmake/clang-windows.
I have to add the following: MSVC supports Clang on Windows. And CMake -- all within Visual Studio. And it works perfectly, with perfect support for C++17. Badass.
MS has always been solid with developer tools, documentation, and dev relations.
Even though I prefer administering Linux servers any day over Windows servers, I find myself often missing PowerShell when I use bash. It has some quirks but some of the design decisions are exactly what you'd hope someone would make if they redesigned a command-line shell 40 years later.
I still find it comical that we proudly paste around commands that just wrangle text no differently from what perl programmers did in the 90s, using sed, print, cut, etc, when things like PowerShell moved to piping objects between commands. It just removes a whole class of ambiguity.
> MS has always been solid with developer tools, documentation, and dev relations.
In my 25 years of using their tooling and reading their documentation, they've never been more than what just qualifies as borderline acceptable.
I booted up VS2019 today for the first time in a while (after waiting 90 minutes for it to install) and it still feels like using a JetBrains IDE from 15 years ago, and it's still worse than what Borland produced in the 90s.
... and it's even slower than IntelliJ IDEA, which just seems amazing as IDEA is written in Java
I find Visual Studio intolerably slow, I don't quite understand why but it's been bad since at least Visual Studio 2017. Rider and IntelliJ are both much faster and I prefer them.
The documentation is excellent in my experience though. I love it. Visual Studio is really the only Microsoft developer tool I don't like. Even Visual Studio Code is much better.
Agree. Going through .NET and ASP.NET documentation, none of the classes/methods have usage examples except for the super obvious ones (like the "String" class). They just show method signatures and that's it.
It's as if the tools were built "by IDE users", "for IDE users".
They may have stagnated but MSDN was, for many, many years, some of the best documentation I had seen anywhere. (Java was pretty solid too, and also PHP).
But maybe I didn't stray too far off the beaten path?
> MS has always been solid with developer tools, documentation, and dev relations
The single most heavily used Microsoft dev environment is the Microsoft VBA Editor. It has not had any update in nine years and is virtually unchanged in 22 years since the release of Office 2000, incredibly outdated in terms of usability. It also cannot be replaced by using a text editor like other IDEs can. It is anything but solid.
[0] "1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born."
Windows Terminal is very difficult to work with if one has astigmatism that prevents working with dark backgrounds.
I don't usually use Windows, so perhaps I didn't spend enough time on it, but I was unable to create a colourmap with white background that didn't look horrible with some software. No matter how much I changed the colours, there was always some combination that gave me light grey on white or something like that.
If anyone has a colourmap I can use, that would be really appreciated.
Here's the colormap I use, which I've made sure never has too-bright colors on the near-white background: https://pastebin.com/raw/AdR3sBSs
Microsoft publish a tool in the Windows Terminal GitHub repo, ColorTool.exe[1], which can turn iTerm2 color scheme files into Windows Terminal ones. That might be your best bet because there are huge repositories of good iTerm2 schemes[2] and really slick tools to quickly make your own with live previews.[3]
I guess it depends. I use the Windows Terminal, and I have high astigmatism and myopia. I find it very readable using the Cascadia Code font. Btw I agree about dark backgrounds being less readable but I can’t get away with light backgrounds because of eye floaters.
I don't think it happens to everybody with astigmatism. I cannot read any text with black background. After a very short amount of time I'm unable to read anything, and once I take my eyes off the screen it feels as though I've been staring into the sun.
There used to be a great Firefox plugin that allowed me to change the colours of web pages that use black background, but it doesn't work anymore, and I haven't found a good replacement.
I mean, sure it’s not, but back in the late 2000s I did everything under it and it made for a great experience. Its little package manager was handy, everything I needed was there, it worked surprisingly well.
I agree. I recently bailed on WSL because my poor crappy laptop was buckling under the weight of the extra resource demand.
I’m not using Cygwin, but similarly, I decided to trick out my git bash with extra packages from MSYS2. I have all the Linux tools I need, been having a great experience with it.
The amount of resources it takes to run WSL, or a virtual box on older hardware can be devastating. You don’t hear this mentioned much.
WSL1 doesn't use a virtual machine, and only implements Linux syscalls as NT kernel syscalls. There's no VM or OS overhead; WSL1 is 100% userland software, except for the kernel translation stuff.
You should not have any issues with WSL1 in terms of system load or "weight" (whatever the hell that is supposed to mean in computer terms. The term is used all the time and I've never once seen a definition for it. What is the unit of measurement? Where's the border between "lightweight" and "not lightweight"? This industry as a whole ingests far too many hallucinogens).
Cygwin offers excellent out-of-the-box GNU userland functionality. It is fast and open-source, with a generous set of packages and languages. I can pin it and have a stable, internally consistent GNU userland. Very good for a developer using several interpreted languages.
Cygwin has its quirks and flaws, like any software. But so does a full distro (Ubuntu, no less) running inside Windows 10.
WSL1 isn't a full distro, it's only the userland stuff. You can also easily make your own userland in Linux, tar it up, then deploy it as a WSL environment. It's really nice.
There was even a period of time when Cygwin's X server supported GLX and I managed to get some OpenGL software I wrote for Linux to run in Cygwin, but for some reason it was removed or stopped working.
You can still accomplish this with the cygwin packages xorg-server and xinit. You can then export DISPLAY=:0 in your bash shell and have working OpenGL, e.g. glxgears assuming you have the necessary packages available in your "remote" session/WSL. Here's a GitHub gist I use for this: https://gist.github.com/andrewmackrodt/b53943185bbbd804ef4b0...
It's like the terminals found on Linux. I would give it a recommendation, but I still encounter issues with tmux sessions and the mouse (yes, I have the latest from GitHub installed). I recommend the terminal in VSCode, which works as expected with ssh + tmux.
> Apple may have the fastest processor, but Microsoft has the most comfortable tools.
It doesn't matter how fast the M1 is if Xcode can't keep up with modern development tools. I don't understand how Apple developers produce any software with it; the experience is truly awful compared to nearly every alternative. It's slow, buggy and inscrutable. How long does it take to onboard a fresh grad at Apple, I wonder?
I think IDEs may be a bit more subjective than is often presented. Personally speaking, my experience with Xcode has been just ok (not amazing), while IntelliJ-based IDEs have been clunky and arcane, with some of their "smarts" functioning in ways that aren't expected or intuitive at all. Both are functional, but if you let me pick which to spend a work day in, it'd probably be Xcode.
What one is first exposed to and how they're exposed to it probably makes a big difference.
I have an existing app in Swift and Interface Builder. I hate Xcode. Simple things like deploying the app to a device are hit or miss (usually miss). Is there a development environment that I could use for my app which is better? Happy to use any platform (macOS, Linux, Windows).
AppCode is probably the only thing worth checking out. Swift isn't a useful language outside of the Apple ecosystem, and interface builders come and go, usually go (design in Figma/Adobe products, implement in code rather than a UI WYSIWYG).
I am on WSL1; two things hold me back from upgrading:
1. How's networking? I go on and off VPNs quite a bit.
2. How's cross-system access to files, especially performance-wise? I edit with PhpStorm for Windows, I share the files with Slack (also Windows), and I access the same files with WSL git, a LAMP stack and more.
1. Just works. The only issue I found is that WSL2 will not update its DNS resolver IPs when connecting to the VPN. There's a workaround script. So it's either exit the terminal and re-enter it, or run the script to update them.
2. This is my current pain. The Windows file system is slow already, and accessing it from WSL adds overhead. Ideally I'd keep projects in WSL2 storage and IDEs in Windows. I searched a lot for a solution but haven't found one. On the other hand, WSL2's Linux storage speed runs circles around WSL1.
I never had an issue with WSL1 until some bizarre bug that cropped up seemingly out of nowhere in which I would blue-screen during some Rust compilation. I didn't have the issue after maybe four or six months of using Rust in WSL1 until suddenly I did. Upgrading to WSL2 fixed the problem.
Publishing Rust crates does require a workaround[0] in WSL2 if it's from a Windows directory. That's annoying but pretty infrequent (for me) and not a difficult workaround.
Except for those two issues, I've not had any problems in either WSL version - certainly nothing that would give me pause about using either one.
I use X410 and run tools from inside of WSL2 by exporting the display. Works well for Emacs, JetBrains Rider, and every other GUI application I've tried so far.
Concur. X410 is great and survives where xming and vcxsrv both crash for me. It's not free but the "not crashing" feature makes it well worth the $10 I paid for it.
That sounds excellent: I’ll have to give it a shot. VSCode under WSL2 with its Docker support is neat, but can be slow due to the storage system overhead. This might solve that for my team!
Re 2): try NFS exports on either host Windows or WSL2. I've used an NFS mount inside WSL2 and it works very well. Don't put it in /etc/fstab, though - for me it caused a WSL2 hard lock on start.
Please know that networking in WSL2 is a nightmare if you're working behind a VPN. We are currently staying on WSL1 because the internet won't work in WSL2 under the corporate VPN.
Cisco AnyConnect is notorious for that. You have to bump its interface metric above a certain value so WSL2's network (or whatever does the routing) can do its work.
>This is really cool. Kudos to Microsoft for really getting open source lately
If they get open source so much, it means not open sourcing what really matters is intentional. And quite frankly, I'd want them to get rid of patents, their litigiousness and data collection. But I'm asking too much; I would settle for them to just stop suffocating competitors and innovation, and stop with the vendor lock-in. Same deal for their competitors.
Anything that really matters is just like the same old MS you know: DirectX, Office, Xbox, everything SaaS, IDEs, compilers, debuggers, language servers, file formats, UI frameworks, UI patents, GitHub, Windows, Server -- you'll find examples in every area. Practices like buying or killing competitors, like the Vulkan-related acquisitions. I get it, they are a company and need to maximize profits, so it's cool.
Microsoft has so many quality projects and good people working for them, it's just so frustrating that it's still like this. This will only get worse as the exploitative behavior and business models of their competitors like Google force their hand to do the same.
Microsoft joined the Open Invention Network, a defensive patent pool protecting Linux (kernel and distributions). This directly cut into their patent revenue and removed some of their leverage over Android OEMs. This matters a lot to the Android and wider Linux ecosystem.
Moreover, every single Microsoft patent will now be used to fight against any patent claim concerning Linux, related open source software, and a limited set of codecs. [0]
Given the recent inclusion of an exFAT driver in Linux, this hurt MS's business even more.
> Moreover, every single Microsoft patent will now be used to fight against any patent claim concerning Linux, related open source software, and a limited set of codecs.
Did Microsoft contribute all of their patents to the OIN? I seem to recall IBM only contributed a specific subset back in the day.
According to the ZDNet article I posted in a sibling comment, they did contribute all their patents, which is about 60,000 of them. They gave up significant licensing revenue.
Is there a breakdown somewhere of their patent licensing revenue from Linux licensees, and the legal expenses they have in enforcing it?
Did they give up Linux patent licenses from Android makers? They had billions coming in from Samsung and LG in the past, but that was all under NDAs; we don’t know what patents were under discussion.
I have no idea what their legal expenses were, but they explicitly said it covered all their patents. The link I posted mentions Android as something that would be covered.
My recollection from previous OIN threads was that it came with a lot of caveats. I can't comment any further for lack of knowledge about patents and their subtleties. Would love to see an analysis of what really happened in practice since they did that.
All the technologies listed above are good from a technological standpoint. My critique is about corporate ethics rather than whether or not those technologies are good or convenient.
Many years ago I worked developing on Windows 7, using C# and MS SQL Server, and had a satisfactory experience at that time. I can see how that convenience has captivated many users.
But knowing how those technologies came to be makes a difference for me.
For example, Direct3D can be great, but the resulting vendor lock-in prevents other operating systems like Linux from getting game releases. There was a time when OpenGL was the most popular graphics library, but Microsoft frightened OpenGL users, telling them that in future Windows releases OpenGL would go through a compatibility layer with a significant performance cost and that they should switch to Direct3D. As a result, now everyone uses Direct3D.
Fortunately, projects like dxvk have implemented Direct3D on top of Vulkan and now many projects like Wine and Proton use it to run games using Direct3D on Linux.
> IBM + Microsoft. Expected: OS/2. Actual: MS NT kernel
Not entirely true: NT is heavily "inspired" by VMS, as Dave Cutler, the main architect of the NT kernel, used to work at DEC as a technical fellow. This is also one of the reasons DEC Alpha could run Windows NT out of the box, as NT is quite similar to VMS in nature.
It’s not happening anymore, partly because of reputation and partly because they’re no longer the 800 lb gorilla they once were - but the “Microsoft kiss of death” was a thing - cooperating with Microsoft often resulted in great damage to the other company.
SGI; Nokia; Sendo; Spry; there were many others through the years.
Nokia have themselves to blame, what with the competition between internal teams and the board promising a hefty bonus to Elop if he managed to do what he did: selling the mobile business unit.
Similar examples can be given for other IT giants.
Microsoft unilaterally changed the OS/2 3.0 API to match the Windows API, IBM did not approve of that, and then the project split, with the Microsoft version of OS/2 3.0 becoming Windows NT.
Apple lawfully licensed the technology from Xerox PARC. Xerox knew they were licensing the technology, with the likely objective of copying it. That's a substantial difference with respect to what Microsoft did.
Xerox's decision is considered dumb, but they were told exactly what was going to be done. The executives were stupid enough to agree because they did not want to hear about anything other than photocopiers and toners.
Microsoft on the other hand was initially a close Apple partner, developing the Z-80 SoftCard for Apple II and then helping develop applications for the Macintosh. Once they gained enough trust, they used that trust to clone the Macintosh (Windows 1.0).
> Anything that really matters is just like the same old MS you know.
I have been hearing this kind of thing for years, and I just don't get it.
Microsoft has turned around completely, becoming a huge open-source contributor. They committed all their patents to OIN. They make .NET Core available for macOS and Linux (and open-source, at that). They are noticeably absent from the congressional hearings of the other huge tech companies who have been bad players.
And yet we hear that they're "the same old MS". I get that no company is perfect, but in all honesty, what could Microsoft do that would change your perspective on them? And do you hold other companies (FAANG) to the same standard?
>what could Microsoft do that would change your perspective on them?
I mean, it's really complicated. For starters, I'd like them to stop forcing people to use their bad products just because they were there first to lock down the market and/or abused their position. This is still happening today.
Then I'll be more open to use their good products, and there's plenty of that. I want to be excited when MS announces a new technology, not to be reminded of how bad they behave as a company and the negative impact they have on my life.
>And do you hold other companies (FAANG) to the same standard?
Yes. At least with e.g. Apple and Google I can just not use their products, but with Google it's getting harder and harder as they monopolize the web and close/lock down Android even more. Google removed "don't be evil" from their motto; MS should change theirs to "We love open source (when it's convenient)". Nothing wrong with doing manipulative PR like everyone else, but don't be surprised when some people don't want to drink it.
Agreed. BTW, if you're looking for an apt/brew-like experience on Windows proper, try Chocolatey: https://chocolatey.org/
(Obviously there's also the Windows store, which is not bad for GUI programs, but for more developer-type stuff - e.g. installing python - chocolatey is great.)
I love Chocolatey and use it quite a bit. But I like Arch Linux's AUR more because it contains way more packages. Though Microsoft also seems to be on their way to making their own.
For what it’s worth, the GPU thing can be solved using an eGPU on Intel macs — but the thermal throttling and CPU perf basically killed that dead in the water for me; I was running an RX 590 which worked brilliantly, but gaming itself was lacking sadly.
Gave up and built a mini ITX box next to my laptop. Swap DisplayPort, swap one USB-C, done!
Apple are throwing more roadblocks at least partly because developers are becoming more and more deplorable, trying to claw every penny they can by collecting and selling every bit of metadata (or even data) they can get their claws on. Microsoft aren't throwing similar roadblocks at least partly because they're one of the deplorables people need protection from.
All big software corporations use open source strategically: keep the core money-makers closed, release tooling and other trinkets for developers so that they do some free advertising for the company. They also release expensive-to-develop software for free to destroy competitors and expand their influence.
> throwing up more and more roadblocks and constricting their platform ever more
Do you have any specific examples? From my perspective as an app user, rather than developer, the restrictions they've put in place seem to be beneficial to me. I like sandboxed apps, absolutely love that I can tell an app to bugger off when it tries to access some folder that it has no business reading.
What about GNU? It seems like most Windows-using devs use WSL2’s Linux VM. What advantages does that have over keeping the MS OS’s forced updates, BSODs, etc. in a VM, while keeping a free OS stably settled on bare metal?
I can imagine drivers, but if you stick to only Dell Developer Edition, Lenovo Linux-certified, Purism Librem, System76, or similar (still significantly wider selection than Apple s̶h̶e̶e̶p̶ fans seem satisfied with) hardware, things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).
Or that Rust isn’t iron oxide. I’ve edited that line to be clearer, and I’ll link definitions for all the other words ~~if someone will show me a pastebin that will hold the 583732 byte file I’ve just created for shell (xargs & dict) practice for free~~ Edit: https://ghostbin.com/paste/kgwWc.
> I can imagine drivers, but if you stick to only Dell Developer Edition, Lenovo Linux-certified, Purism Librem, System76, or similar (still significantly wider selection than Apple s̶h̶e̶e̶p̶ fans seem satisfied with) hardware
I don't need a huge selection, but I do need polished hardware that I can walk into a store and try out before buying. Just things like the feel of the keyboard or the trackpad can make a system a nightmare to use if they're bad. I also need to be confident that I'm not going to get given the runaround if a piece of hardware (e.g. docking station or monitor) doesn't work with the machine.
> things should work more smoothly than with Windows (drivers are built into the kernel and update with the OS).
That's actually one of the things that worries me the most about Linux - I can't pin a driver to a version that's working, and if the kernel drops support for my hardware then I have to choose between losing support for my hardware or never upgrading my OS. I was excited for GNU/kFreeBSD up until the point where systemd arrived and destroyed all the advantages of free OSes.
> I do need polished hardware that I can walk into a store and try out
Doesn’t this apply to my first two suggestions? The Dell XPS Developer Edition is the same hardware as the Windows version, just with Ubuntu preinstalled. And Dell has upstreamed the drivers. Similar with Lenovo hardware, except IIUC they haven’t started shipping Linux preinstalled yet; you just buy a normal Thinkpad and install your distro of choice. Purism has a basic return policy, though IIUC you have to pay shipping and, if there’s no hardware defect, a 10% restocking fee: https://puri.sm/policies/. System76 has a 30-Day Limited Money Back Guarantee linked in their website’s footer: https://system76.com/warranty (^f 30 and hit enter a couple times).
> if the kernel drops support for my hardware
Is there precedent for this? It still supports 32-bit CPUs long after most people have upgraded — indeed certain distros like Ubuntu have stopped supporting them, but there are probably hundreds of other distros that haven’t done that¹, like Devuan, which also champions init freedom and maintains a list of ~two dozen Free distros/OSes that don’t force systemd: https://www.devuan.org/os/init-freedom
> Doesn’t this apply to my first two suggestions? The Dell XPS Developer Edition is the same hardware as the Windows version, just with Ubuntu preinstalled. And Dell has upstreamed the drivers. Similar with Lenovo hardware, except IIUC they haven’t started shipping Linux preinstalled yet, you just buy a normal Thinkpad and install your distro of choice.
Drivers make a lot of difference to the feel of a trackpad, I wouldn't want to buy without testing the actual drivers. I wanted to like the XPS but its keyboard felt too rubbery to me. The idea of buying something and then shipping it back really doesn't appeal to me (I'm not from the US and the idea of just returning stuff isn't so much in our culture); I really want to go to an actual showroom-like store, try out a bunch of different laptops, and then walk away with the one I like, and I accept paying a premium for that. (I'm sure others will feel different, and maybe I'm not being reasonable; just trying to describe where I'm coming from).
> Is there precedent for this?
Yes, I've had three different pieces of hardware go unsupported in Linux (Logitech QuickCam USB - eventually support reappeared in a different driver; Asus A730W PDA; an old Hauppauge TV tuner card). Linux is openly really hostile about out-of-tree drivers (no stable API as a matter of policy) which the first two were, but even in-tree drivers are aggressively deprecated (the Hauppauge driver was one of those). I switched to FreeBSD on my home server because I was just fed up with all the churn of Linux and it's been a lot better.
Rust needed a GUI and Microsoft provided one. They seem to be very focused on giving developers what they need, but only to a point. I've been doing some system glue stuff, and while it's nice that PowerShell has ssh and scp, they are missing some options I want. I was going to use curses with Python (batteries included!), only to find out it's not supported on Windows.
It almost feels like a strategy - be standard enough to bring people in, but idiosyncratic enough to lock them in.
I do think it is a strategy, but I think it's a rather simpler one than that: basic work triage and scope management.
ssh and scp make sense to put into powershell, because they're everyday sysops things. curses is pretty posix-specific, and apps that use it are likely to need other posix stuff, so handle that with WSL rather than unnecessarily re-inventing a wheel.
It's hard to forget that Microsoft's official, documented policy for a very long time was Embrace, Extend, and Extinguish.
It all feels very vaguely analogous to the West's relations with China and Russia -- both China and Russia appeared very open for a time, and then closed back down after gaining enough leverage.
It's hard for me to see why we are doggedly ignoring the existence of WSL in an effort to manufacture a controversy.
We've got a square peg, and an operating system that has both a round hole and a square hole. There is nothing nefarious about choosing not to use the round hole.
Much as I like WSL, WSL is the natural "embrace" phase of an EEE plan, and a partial implementation with key compatibility differences the "extend" phase. I will be optimistic but forever wary, and continue to do all of my serious work in native Linux.
Please be aware that if you do this, your application won't be accessible with screen readers or other assistive technologies on Windows and Mac. At least not now. Maybe I'll have time to implement GTK accessibility backends for those platforms someday.
Yet another reason for them to do this. Not just a GUI for Rust, but the only accessible one. It really is a solid strategic offering to bring Rust developers to their Windows platform. But IMHO developers who do are trading tomorrow for today.
Screen readers, despite the name, don't do OCR; they access information provided by the GUI toolkit (which is one of the vastly improved areas in GTK 4, AFAIK).
Some screen readers can do OCR on a specific part of the screen (e.g. an unlabeled image) on request. But while OCR is useful for getting text out of an image, most implementations can't discern the structure of a UI, e.g. which part is a button, which part is an edit box, etc. Also, OCR is typically done just once on-demand, not continuously as the screen changes.
However, VoiceOver for iOS has a new feature called screen recognition, which is exciting because it overcomes these limitations and provides some level of access to applications that are otherwise inaccessible. Hopefully other platforms will catch up.
Even then, true screen reading will be much more CPU-intensive than what screen readers currently do. And anyway, it's not here yet, except on iOS. So I will continue to warn developers away from toolkits that are inaccessible, in hopes that some blind person somewhere will be spared the pain of being blocked from doing a task because of an inaccessible application.
Then why don't actual screen readers exist, when AIs at this point can practically solve captchas by enhancing the reflection in the eyes of a highly compressed JPEG and reading the text in there?
Certainly there would be significant demand for a sightless man to be able to read the dankest memes from pictures?
> Then why don't actual screen readers exist, when AIs at this point can practically solve captchas
Training a computer to solve a captcha is a lot easier than training a computer to understand interface conventions.
There isn't an AI that can look at a jpeg screenshot of an interface and say, "there's two input elements, and it looks like they're grouped together and control the list to the left of them, and one of them is selected, which I can tell because it has some kind of subtle glow effect on it, but not the glow effect you would get if you moused over it."
There's nothing that can realistically do that today, and it wouldn't be fast enough or performant enough for low-powered cell phones and laptops even if it did exist.
If you're just looking at describing pictures themselves... sure, Facebook does auto-generate alt tags for images if you forget to put one in. And YouTube auto-generates captions. Those are valuable services, but they have a lot of glitches and mistakes. If you're a blind reader, you'd prefer not to have that experience when you're using a piece of software; you'd prefer something that just works reliably.
It's the same reason you probably used a keyboard to type this comment instead of speech to text. Speech to text is useful in some cases, but not good enough or accurate enough that you would want to use it as your main input method.
Converting bitmaps to text isn't enough to make an interface usable. You need to be able to quickly convey the structure of the interface and what controls are available, and to do that well you need some kind of semantic insight into the interface.
Screenreaders don't just read text, they control the interface itself using standardized keyboard shortcuts and input components within whatever graphical framework you're using, and they communicate what that interface is using a set of standardized terminology.
Sorry, I guess? It's been an industry standard phrase for a pretty long time. Screenreaders probably have a historical reason why they're named the way they are, but the short version is that's just what everyone started using. A lot of software terminology is like that, it's weird to people who are unfamiliar with it because there's no central committee somewhere that decides what everything should be named.
If you're trying to do a search online for the kind of tool you're looking for, probably the phrase you would want to search for is "OCR software", short for Optical Character Recognition, or if you're trying to tag images just straight-up "image recognition."
What you describe might be more robust in some situations but vastly less for software designed to be accessible. Take "alt" attributes in HTML img tags for instance, or various metadata attached to buttons that use an icon instead of text (like a play button, or a X close button and similar things).
You can see an example next to this very comment, actually: the up- and downvote buttons won't be accessible with OCR, but they have "title" attributes describing what they are. And consider that there's more to understanding a given user interface than raw text: radio buttons are tied to certain labels, there's hierarchy, all sorts of layout cues that would be opaque to a screen reader.
I suppose that ideally you'd want both: use native accessibility data if available, fall back to OCR when there's no alternative.
I still do, because people are stuck with these platforms, for reasons beyond our control, and I care more about people's access to applications that they need than about being a purist.
If the assistive technology they need to use isn't available on Linux or they don't have the ability to run Linux it's not that meaningful of a difference.
And rather sad that free software - which in theory ought to be a perfect place for exploring assistive options - seems to lag far behind the closed shops.
Having discussed this with Drew, I think he was really talking about what I, personally, should choose to work on. And I think he's right that I should work on the accessibility of free software as much as I can, rather than accessibility on proprietary platforms. I don't know yet how soon I can actually start doing this.
Well, they bring the language and its runtime into Windows development via WinRT. They don't bring WinRT to Rust. This is the Windows team writing adapters for their COM API surface. They do the same for C++, C#, JS and now Rust.
Didn't they remove the JavaScript WinRT projection?
The docs[1] describing how to call a WinRT component from JS say: "Universal Windows Platform (UWP) projects are not supported in Visual Studio 2019. See JavaScript and TypeScript in Visual Studio 2019. To follow along with this section, we recommend that you use Visual Studio 2017. See JavaScript in Visual Studio 2017."
Sort of, it was/is built in to Chakra (the JS engine in IE9+, old Edge, and the original UWP WebView based on those browser engines) and hasn't been ported to the new Chromium-based Edge or the WebView2 based on it.
It works. The Windows theme is a little dated, but Windows users are used to a random mishmash of inconsistent styles, so it likely won't cause complaints. On Mac -- well, actually I haven't personally tried it on Mac, just Windows/Linux, but I hear a lot of vocal complaints from Mac users about GTK not fitting in. I'm not sure this means it's worse than on Windows; I think Mac users just expect more.
You'll need to install windows-curses, since cmd.exe didn't support vt100 escape sequences until relatively recently, and still requires a special WinRT call in order to enable them.
But it's a bit telling that the first hurdle you hit in running Python on Windows was the operating systems having chosen different forty-year-old terminal emulator escape sequences. :)
"On Linux, if you use ls --color then different file types use ANSI escape sequences as color indicators. If you pipe this output to less, then you get paging while retaining the color information. If you redirect this output to a file, that file contains the ANSI escape sequences. If you then use the cat command on the file, you see the coloring as the ANSI escape sequences are rendered by the terminal."
Interpreting commands from log messages! Because we haven't learned from history:
inb4 the first discovery of a program which "doesn't log plain text passwords" by logging them with the foreground color set to the background color.
And then the one which exploits a terminal for arbitrary command execution with a buffer overflow in the VT escape code parser. Wait, what am I talking about "inb4", that happened already and it didn't even need a buffer overflow: https://www.proteansec.com/linux/blast-past-executing-code-t...
> "mod_rewrite.c in the mod_rewrite module in the Apache HTTP Server 2.2.x before 2.2.25 writes data to a log file without sanitizing non-printable characters, which might allow remote attackers to execute arbitrary commands via an HTTP request containing an escape sequence for a terminal emulator."
"still requires a special WinRT call in order to enable them"
Which WinRT call would that be?
Hi, owner of the Windows Console here.
Enabling VT control sequences is a matter of setting the ENABLE_VIRTUAL_TERMINAL_PROCESSING mode on the output handle, and it's available through the same interface as every console mode flag that came before it. SetConsoleMode has a long history--dating back to the nineties--that this just builds on.
It's behind a flag so that applications developed before ca. 2015 that like to emit control characters to the screen don't melt away into gibberish.
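For the curious, here's roughly what that opt-in looks like from Rust. GetStdHandle, GetConsoleMode, SetConsoleMode and ENABLE_VIRTUAL_TERMINAL_PROCESSING are the documented Win32 pieces; the glue around them is just a sketch (and Windows-only, of course):

    type HANDLE = *mut std::ffi::c_void;
    const STD_OUTPUT_HANDLE: u32 = -11i32 as u32;
    const ENABLE_VIRTUAL_TERMINAL_PROCESSING: u32 = 0x0004;

    #[link(name = "kernel32")]
    extern "system" {
        fn GetStdHandle(std_handle: u32) -> HANDLE;
        fn GetConsoleMode(console: HANDLE, mode: *mut u32) -> i32;
        fn SetConsoleMode(console: HANDLE, mode: u32) -> i32;
    }

    fn enable_vt() -> bool {
        unsafe {
            let out = GetStdHandle(STD_OUTPUT_HANDLE);
            let mut mode = 0u32;
            // GetConsoleMode fails if stdout isn't a console (e.g. redirected).
            GetConsoleMode(out, &mut mode) != 0
                && SetConsoleMode(out, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0
        }
    }

    fn main() {
        if enable_vt() {
            println!("\x1b[32mVT sequences now render\x1b[0m");
        }
    }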
Yes, that'd be the call in question; it requires cross-platform vt100 apps to specifically know about and call a feature on windows in order to enable it. They can't just emit control characters to terminal, they must call this WinRT function first. This isn't something you can fix, for the reasons you listed, but it's something that's true.
As a side note, Windows Terminal (the app) is absolutely fine letting programs emit and handle VT100 escapes without them issuing any particular opt-in call themselves... And upon looking, you're the person who enabled this feature! Thanks for that, but, why is it good for terminal but bad for conhost?
Sorry, I had my Microsoft-colored glasses on. We use WinRT almost exclusively to refer to the new COM-based API surface that “modern” applications use. I see now that I’ve misunderstood you :)
Back when VT parsing was implemented, it was an entirely new output stream parser built into a console that hadn’t been updated in a rather long time. We were careful commensurate with its age, and opt-in made sense. Language runtimes or compatibility layers like Cygwin could handle the decision for all of their hosted applications(1) and everything else would generally continue working properly. Now that we’re working on conhost’s replacement, we get to revisit some of those decisions!
Cases like this are especially acceptable because a user can always fall back to conhost. That escape hatch isn’t one we intend to get rid of.
1) this doesn’t do anything for manual or direct ports, and Cygwin is far from the only provider here. Representative example, etc.
Yep, WSL2 is standard enough to draw some in. Its DirectX support is a[n early baby] step in the direction of making it idiosyncratic enough to keep them locked in (we should probably expect more in the future. Right now it otherwise seems to be just a VM, without a lot of non-standard stuff to give it an advantage over running the more stable OS on the bare metal and keeping the BSODing one contained with virtualization).
That's exactly what this is, a way to sink their proprietary claws into Rust and try to influence the market the way they have done with most of their software for decades.
The solution for this is rather simple, and until they do it, I will always be skeptical of their renaissance as some benevolent contributor to open source: open-source Windows.
Sure! Presumably you're volunteering to track down the current rights holders for all code derived from third parties and will negotiate the relicensing of their contributions?
Microsoft are a gigantic corporation, they're not the good guys or the bad guys, they're collectively amoral and profit oriented. That will never change.
Being suspicious of their actions and their intentions is a very reasonable stance.
The old hatred flares up every time Windows 10 asks me if I'm really, really sure I don't want to make Edge my default browser, or "accidentally" changes it.
No they have not reinvented themselves. They are ruthlessly taking over OSS software projects by buying the type of developers who play politics and love to command others around.
I take it that Rust is next after Python, whose development has stagnated and where the mailing lists are now censored.
> They seem to be very focused on giving developers what they need, but only to a point.
This may be a consequence of the consent decree they signed in the early 2000s, after it was alleged that they used their control of Windows APIs to further Internet Explorer's market share at the expense of other browsers. Since then they have had to be careful not to act like a monopoly.
curses specifically is majorly antithetical to how PowerShell, and by extension Windows/Server, has decided to evolve. In fact, text-based UI is the reason CMD.exe cannot and will not ever be improved.
Munging anything around with text is very much not the Windows way. For better or worse.
I was curious how this worked: the previous iteration of this only worked for the WinRT API, and this new crate seemed to also work by generating code from WinMD files. But WinMD files only contained definitions for WinRT/COM APIs, so how could this possibly work?
Well, it turns out Microsoft started a project to also generate Win32 API information in WinMD files, to generate APIs from them automatically for all native languages! See win32metadata[0]. This could make interfacing with Win32 APIs a lot more convenient!
Kenny Kerr's blog post on this may also be of interest. In particular, it answers the question I was going to ask about how they're handling Win32 and WinRT in a unified way.
I wonder if Rust is becoming Microsoft's way forward for development rather than C++ (i.e. Rust for Windows rather than C++ for Win32), leaving .NET for higher-level development? The bold introduction in the blog post surprises me, coming from Microsoft themselves, who are right now hard at work on these individual, un-unified technologies.
Note that there are still some teams, like Azure Sphere and Azure RTOS, which are only providing C-based SDKs, so not everyone is on the same wavelength.
I had the exact same thought. I almost didn't bother following the link, because Rust for Windows is already a thing, but this is essentially a Rust equivalent to C++/WinRT.
To be clear, it's a bit broader than that. WinRT is a specific subset of windows APIs, and the Rust bindings for that have existed for a while. This is for all Windows APIs.
As someone who writes Windows software now and then, I’m genuinely excited. I tried using this early, when it was limited to WinRT bindings. It looked promising, but compile times were prohibitive. It seems like they now include a build.rs and have clear recommendations around caching — I hope this solves the problem. Has anyone tried a recent version?
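For context, the documented setup is a build.rs that generates bindings for just the APIs you list; the module paths below are illustrative from memory and have moved around between versions, so treat this as a sketch:

    // build.rs: generate bindings for only the APIs you name, so just
    // that slice of the metadata is compiled (and can be cached).
    fn main() {
        windows::build!(
            Windows::Win32::System::Threading::{CreateEventW, SetEvent, WaitForSingleObject},
        );
    }

    // src/main.rs: pull the generated code into a module.
    mod bindings {
        windows::include_bindings!();
    }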
The lesson from Microsoft I think is that the fish really does rot from the head. Put another way: who the CEO is really does matter. We have night and day here with Ballmer compared to Nadella.
Credit where credit is due: Microsoft has really been doing a lot to try and rebuild their credibility when it comes to the developer community. Off the top of my head I can think of TypeScript, open-sourcing .NET, WSL and now this.
Oh and they haven't done an Oracle or a Cisco (or, let's face it, a Google at this point) with their acquisition of Github by letting it die on the vine or with hostile forced integrations.
For someone not familiar with the Windows API: why does creating a window need unsafe and other low-level things? I guess it's the same for the C++/C# versions?
On a side note: do you know of any good resources where someone wraps a non-trivial C API and goes over common C idioms and ways to provide a (safer) API inside Rust without too much overhead?
I've always found it to be incredibly difficult because of the number of gotchas which can leak into causing a segfault in Rust; it's immensely frustrating.
My co-author Carol Nichols gave a few talks on this a while back, but there's not a ton of resources, it's true. I would look at big libraries and see what they do. I know it's not the best thing, but it's probably the best that exists right now.
It's a call out to Windows libraries that long predate Rust, and they are implemented in (mostly) C++. They don't provide any of the safety features on any data structure you pass to it. I don't see how it could be anything other than unsafe.
Does it actually wrap libxcb, or does it generate bindings from the XCB protocol descriptions (XML)? I would think the latter would be less work and higher quality.
I don't know how it's achieved, but as I look at the bindings for this Windows API, they return all sorts of raw pointers and other things, whereas the XCB bindings for Rust wrap the types in a Rust-friendly way so almost all of them are safe.
This Windows API in Rust seems frankly atrocious in design, and most of the functions seem to be simple `extern "C"` declarations rather than actual attempts at proper wrapping.
I don't think that's a fair characterization. Most safe Rust wrapper libraries are built in two layers:
1. Map the API's raw interface onto raw Rust types, usually with a simplistic code generator; this enables the bindings to closely track upstream API changes.
2. Use that raw interface to build a wrapper library that translates the API into Rust-like idioms and exposes safe constructs.
This seems to be #1 only for now, which is fair because the Win32 API surface is enormous. Also, there may be many ways to expose a safe Rust interface, all with different tradeoffs; by leaving #2 open they don't lock in a single strategy prematurely. That said, I am looking forward to a safe wrapper as well.
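To make the two layers concrete, here's a hand-rolled miniature of the pattern (the Win32 signatures are real; the RAII wrapper is just one possible shape for #2, and it's Windows-only):

    use std::ptr;

    type HANDLE = *mut std::ffi::c_void;

    // Layer 1: raw declarations, as a code generator might emit them.
    #[link(name = "kernel32")]
    extern "system" {
        fn CreateEventW(attrs: *mut std::ffi::c_void, manual_reset: i32,
                        initial_state: i32, name: *const u16) -> HANDLE;
        fn SetEvent(event: HANDLE) -> i32;
        fn CloseHandle(object: HANDLE) -> i32;
    }

    // Layer 2: a safe, idiomatic wrapper that owns the handle.
    pub struct Event(HANDLE);

    impl Event {
        pub fn new(manual_reset: bool) -> Option<Event> {
            let h = unsafe { CreateEventW(ptr::null_mut(), manual_reset as i32, 0, ptr::null()) };
            if h.is_null() { None } else { Some(Event(h)) }
        }
        pub fn set(&self) -> bool {
            unsafe { SetEvent(self.0) != 0 }
        }
    }

    impl Drop for Event {
        fn drop(&mut self) {
            // Ownership means CloseHandle runs exactly once: safe callers
            // can't double-close the handle.
            unsafe { CloseHandle(self.0); }
        }
    }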
That it is atrocious because the project is young, and that it might become less atrocious in the future, is no argument that it is not atrocious.
However, my original reply was to this:
> It's a call out to Windows libraries that long predate Rust, and they are implemented in (mostly) C++. They don't provide any of the safety features on any data structure you pass to it. I don't see how it could be anything other than unsafe.
It very much can be, and this is often done; it simply isn't here (yet).
Hating this is like hating a primer-painted part for being grey: you denounce the purpose and existence of primer and claim that it should have been painted with the finishing coat from the start.
I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.
> Hating this is like hating a primer-painted part for being grey: you denounce the purpose and existence of primer and claim that it should have been painted with the finishing coat from the start.
No, it's simply arguing against the claim initially made that there is no way for it to not be grey.
> I'm sorry you think the dull finish is ugly, but looking pretty is not its purpose. Its purpose is to bind strongly to the substrate it's on and provide a better surface for that shiny coat you so desire to adhere onto. Primer and language bindings alike.
Maybe that isn't its purpose, but the original post claimed that it was impossible for it to be different, which is certainly false.
There is some good stuff here but also some sloppiness.
1) Having everyone generate the bindings means there will be many copies of each type, causing type errors (see the sketch below). This only works if windows is always a private dependency, a concept which isn't even fully implemented, and that is a bad assumption. Public dependencies are the logical default.
2) Putting the proc macro and non-autogenerated parts in the same crate is cute, but a sloppy conflation of concerns and bad for cross-compilation. There is an underlying https://crates.io/crates/windows_macros, thankfully, but that should be the only way to get the macro.
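A minimal illustration of the problem in 1): two structurally identical generated types are still distinct nominal types to the compiler, so values can't cross between dependencies without manual conversion (hypothetical modules standing in for two crates that each ran the generator):

    // Pretend each module is a separate crate that generated its own bindings.
    mod bindings_a {
        pub struct Hwnd(pub isize);
    }
    mod bindings_b {
        pub struct Hwnd(pub isize);
    }

    fn show_window(_w: bindings_b::Hwnd) {}

    fn main() {
        let w = bindings_a::Hwnd(42);
        // show_window(w); // error[E0308]: expected `bindings_b::Hwnd`
        show_window(bindings_b::Hwnd(w.0)); // callers must convert by hand
    }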
> we could support this in future, but it is not an immediate goal. If this is something that folks would like to do, feel free to chime in on this issue and let us know.
Note that most of Microsoft regards Linux mainly as a server OS. You are not supposed to use it on the desktop, instead you should use Windows there.
>> Note that most of Microsoft regards Linux mainly as a server OS. You are not supposed to use it on the desktop, instead you should use Windows there.
Then what's the point of WSL? To allow server software development on a Windows desktop? OK. But then what's the point of trying to bring DX graphics to linux?
The question is valid. The winapi crate binds the functions in the Windows headers explicitly. You can call `winapi::...::CreateEventW` because winapi has a `pub fn CreateEventW(...)` inside an `extern` block in its code.
This new crate is a generator. If this new generator requires access to the .winmd files or .h files from the SDK or whatever, where is the non-Windows builder going to get those from? And will this generator look for them there?
For example, I used to maintain a generator that took COM typelibs and generated Rust bindings for them, but it required calling into the Windows API for working with typelibs and thus obviously required a Windows builder. The functionality of this crate is split between the windows_gen and windows_macros crates, and from a cursory glance I could not tell how it works wrt win32 bindings.
> This new crate is a generator. If this new generator requires access to the .winmd files or .h files from the SDK or whatever, where is the non-Windows builder going to get those from? And will this generator look for them there?
From memory (I haven't done Windows development in forever): there are a few projects that try to provide an open source version of the Windows API headers. The MinGW project, for example, has a full set of drop-in replacement headers (meaning you can take code that compiled with Visual C++ and compile it directly with MinGW's GCC and it should 'just work'). The LCC-Win64 project has a similar set of headers.
The winapi crate does a lot of work to support that use case. Importantly, it ships (MinGW) import libraries to allow linking against Windows libs from Linux trivially[0].
Wow, I thought by the name this would be an awkward Windows distribution of Rust packaged in an MSI. I'm pleasantly surprised. Microsoft has become one of the best big tech companies for open source in the past few years.
Wish there was something like this for Linux too. Rust system programming on Linux consists of dealing with a dumpster fire of badly implemented and incomplete wrapper crates for the kernel interfaces.
I assume you're talking about more than just libc. Many of the Linux-specific facilities are captured in higher-level cross-platform implementations; e.g. mio abstracts over kqueue on BSD and epoll on Linux.
What are the APIs you're interested in that are missing or of poor quality?
Really pleasing to see that MS have done this without feeling the need to start nailing proprietary extensions onto the Rust language. I feel Rust adoption is still at a low enough level where a separate windows fork would have been especially harmful. I guess there are a couple of factors helping here:
- MS were already moving to a model where they prefer to stick with the standards-compliant form of a language: vide C++/WinRT
- Rust has enough features built in to facilitate this, specifically being able to hook into the compilation process
Since Windows ships a stable ABI, why does this project need to generate the bindings at build time? Couldn't all of the bindings be pre-generated, eliminating the build-dependencies?
Are these D bindings relevant to Rust developers? Or asked another way, is there some reason that the D bindings would be better to use than these native ones in Rust?
When I am writing or choosing (or bindings to) an API I always go and look at other languages (especially what the functional people do) to avoid repeating non-obvious mistakes and seeing where the impedance mismatches are.
Most recently, I've been playing with eBPF on Linux: The system call API is terse and pretty impenetrable, whereas the high-level APIs either mean writing gadgets in C with a nearly-blind debugging experience or relying on a library to retain the power of the featureset the kernel exposes.
Very cool to see the start of official support. Unfortunately it looks like it requires unsafe { } for now, though maybe it's intended as a low-level foundation on which a higher-level, safer API can be built.
Well, the Windows API is "unsafe" by design: it's C-based, you pass pointers around, datatype sizes, etc. How can you avoid "unsafe" in this scenario? You're asking for a new framework or an API rewrite.
The domain itself isn't fundamentally unsafe, only the way the C API has been designed. You're right that meaningfully different abstractions (instead of a one-to-one translation) may be required to make a safe API, which is why I suggested that could be a possibility.
This ought to be doable automatically, at least a lot better than the current code:
    let event = CreateEventW(
        std::ptr::null_mut(),
        true.into(),
        false.into(),
        std::ptr::null(),
    );
The .into() is silly but tolerable. But the first parameter is a pointer to a struct, and it’s clear enough from the signature that a reference would work. It could be mut to be on the safe side. (Yes, this involves someone making sure that the API doesn’t retain the pointer.) The last argument, per the prototype, is a string. Admittedly, the Windows API has a truly horrible idea of what a string is, but surely they could do better than using a pointer.
Both of these parameters can be null, and references cannot be null, so it is not possible to use references directly without losing some functionality.
(They could remove the .into()s if they wanted to; I agree it feels kind of weird, but I'm not actually sure if doing so is better or worse. I am conflicted.)
It could very well be Option<&mut T>, though. Personally I think having a 1:1 mapping to the C API is reasonable enough as a first target. It should make porting code easy and referencing the official documentation is probably easiest this way.
No matter how many heuristics they apply to make a somewhat more idiomatic mapping, it'll never feel right until it's manually designed to fit with Rust conventions. So I'm okay with it as it is now and can hope for a much thicker abstraction in the future.
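For what it's worth, Option<&T> and Option<&mut T> are guaranteed to use the null-pointer optimization, so an Option-based signature costs nothing over the raw pointer; a quick check:

    use std::mem::size_of;

    fn main() {
        // None is represented as the null pointer, so these are identical
        // in size and ABI-compatible for FFI.
        assert_eq!(size_of::<Option<&u32>>(), size_of::<*const u32>());
        assert_eq!(size_of::<Option<&mut u32>>(), size_of::<*mut u32>());
    }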
There are a few different possibilities, and some of them involve Option, yes. But doing this would still possibly create an annotation burden that the parent is complaining about.
My comment was trying to be pretty narrowly scoped to "why not use a reference here." You and your sibling are both right in ways!
But that wouldn't eliminate the need for unsafe code at the border. All FFI calls are inherently unsafe to Rust. You can design the cleanest, nicest, safest C API in the world, but it's still unsafe as far as Rust is concerned.
It's very much possible (essential, even) to be able to present a safe API to external callers of your code, despite using unsafe code under the hood. In this case you're making a human-checked assertion of safety, which is not guaranteed to be without bugs, but the important thing is that you're minimizing the surface area of un-safety and declaring a contract with your users. You're "stopping the buck" of unsafety rather than passing it on. The parts that really have to be unsafe can enjoy extra scrutiny, and everything else (including the caller's code) can be checked by the compiler. This is not uncommon in the standard library and other low-level libraries.
Sure. So this library exposes all of Microsoft’s DLL surface area in an API-native way. That’s important.
Now that it’s released, library authors can wrap this with another library, which abstracts over win32, reexposing it in safe rust. Wrapper libraries like that almost certainly won’t cover 100% of the api surface area - there are so many functions in the windows APIs.
Anyway, give it time. People will wrap this with safe rust.
Yes, but those unsafe blocks can be wrapped into the library. "Unsafe" doesn't inherently mean "bad", it just means that the programmer is explicitly taking the burden of making sure that the normal invariants still hold. So long as the library authors ensure that, then the "unsafe" blocks remain within the library and don't need to be worried about by the users of the library. On the other hand, if a library pushes that additional responsibility to me, it makes me more worried.
Yes, but if the library abstracts away the unsafe code, then users of the library don't need to write any themselves. This has the advantage of a single point of unsafeness, which can be fixed by the library author/maintainer for all clients.
This makes no sense to me, what you’re saying here. You pass raw pointers to the Windows API for one, for which it makes no safety guarantees. Short of rewriting Windows from scratch I don’t see how this isn’t a fundamentally unsafe domain. Sure you could put more and more wrappers up on top (it’s not an even an issue of one-to-one translation, but how resources such as memory are wrapped), but whatever binding there is pretty much must be unsafe.
That it’s possible to write a safe wrapper is kind of obvious.
Unix libc is also unsafe, but the Nix crate mostly built a safe, Rust-esque interface around this and abstracted most of the types so that they are safe.
Since the unsafe APIs interface with memory directly, they can bring down the whole Rust application and/or put it in an unknown state by messing up memory.
> Since the unsafe APIs interface with memory directly
I'm not sure I follow this. Are you saying the unsafe APIs somehow treat memory differently than any other C API that has been wrapped with a safe C wrapper?
In any case, I'm saying their current unsafe wrapper can sit under another layer, which provides a safe API. Of course, if there's a bug somewhere in the lower layers, it could still be unsafe, but that's true of all C bindings.
Someone replied and then deleted their comment, but I'll reply to it anyway. They wrote:
> This is the safe abstraction! I mean it's a pretty mechanical translation but it's nonetheless as safe as Rust will ever consider it.
No, it's not safe. Assuming I understand Rust correctly: if your bindings allow Rust programmers to trigger undefined behaviour even when they restrict themselves to the safe subset of Rust, that means that either your bindings have a defect, or the underlying (wrapped) library has a defect.
I've seen some discussions on HN in the past where some rustaceans have argued that it's fine to expose raw C APIs in Rust using safe interfaces even if they're fundamentally unsound. I fail completely to understand the reasoning behind that (and I've really tried) but apparently it's still an open debate to some extent.
Yes, exactly. It'll be pretty unlikely that raw API bindings can be made safe (even for APIs that just take raw integer handles - you could double-close one and cause corruption/etc).
Unsafe seems like Rust's equivalent of "checked exceptions" - annoying and misdirected red tape. Pretty much all graphics code (ex: vulkan) is nearly 100% unsafe too. The official docs for these libraries just say "wrap everything in an unsafe block".
Is marking the apis unsafe going to make people choose an alternative? No. Does it prevent something bad from happening? No. Do users even know what unsafe means, other than that it sounds scary? No. If they did know, would it have imparted useful knowledge to them (that documentation wouldn't have provided more specifically)? I don't think so, for most users.
Does this automatically generate safe APIs as expected for Rust? If so, I wonder how they manage it? Did their metadata format have to be extended to describe the constraints on Rust callers and callbacks?
It does not generate safe bindings. Currently the metadata is scraped from Windows headers, which don't have all the necessary information to go that far. In the future the metadata could be improved. The metadata format is the same as used by .NET and WinRT.
This might be an inconsequential niggle, but I kind of dislike Microsoft's approach to naming. Calling this "Rust for Windows" makes it sound somewhat grandiose, maybe as if Rust is somehow endorsing or expanding support for Windows, when it appears to be just a crate which auto-generates wrappers for Windows APIs.
The same applies to "Windows Subsystem for Linux" - maybe I'm alone, but to me this sounds more like Linux getting a bit of Windows embedded inside it rather than the other way around.
Also I find it a bit arrogant that VSCode wants to name-squat the very general term "code" in my PATH. vscode is already quite short and clear, and would have made a perfectly suitable cli name.