Location: Milwaukee, WI
Remote: Yes
Willing to relocate: No
Technologies: C#, JS, Clojure, Java, SQL, Selenium, AngularJS, Azure
Résumé/CV: https://balloneij.com/resume.pdf
Email: see resume
Hey - my name is Isaac Ballone.
I'm passionate about programming, and I'm quick to learn. Most of my experience is in full-stack development, but my interests range all across the board.
I started in software by maintaining waveform code for military radios. I dug deep into legacy C++ code and closely followed strict specifications. From there, I found my way into B2B ecommerce, serving customers all around the world. Then I planned a break to dedicate myself to my own projects!
I released https://messagehighway.com, an SMS service written in Clojure for organizers to easily communicate with their users. I am becoming more involved in the open-source community by sharing my libraries and packages on GitHub (https://github.com/balloneij). I am most proud of Slouch, which is the first idiomatic Clojure interface to Apache CouchDB :).
I love challenging work, and I thrive working in enthusiastic teams.
If you think we would work well together, please reach out via email! You can find it on my resume.
I think the misunderstandings are because they are targeting a specific audience. They aren't trying to teach you Clojure, Datomic, and Reagent because they assume you are already knowledgeable about them.
Clojure and ClojureScript are (more or less) the same language; they just target the JVM and JavaScript, respectively.
The author is mixing the frontend (ClojureScript and Reagent) code with the backend (Clojure and Datomic) code in the same expression. Then, through their magical system and the beauty of Lisp, they pull the frontend and backend parts out to serve them separately.
I still haven't tried it, but what if you downloaded the iOS apps on the M1? Wouldn't it be more containerized if you ran those apps from the phone "emulator"?
You're absolutely right, but I would assume Zoom has disallowed its iOS client from being installed on macOS. Since Apple closed the loophole allowing sideloading, we're out of luck on that front.
> On Windows, assume a new process will take 10-30ms to spawn. On Linux, new processes (often via fork() + exec()) will take single-digit milliseconds to spawn, if that.
> However, thread creation on Windows is very fast (~dozens of microseconds).
One of many reasons why I prefer to run Emacs under WSL1 when on Windows. WSL1 has faster process start times.
But then with git, there are other challenges. It took me a while to make Magit usable on our codebase (which for various reasons needs to be on the Windows side of the filesystem) - the main culprits were submodules, and someone's bright recommendation to configure git to query submodules when running git status.
Here are the things I did to get Magit status on our large codebase to show in a reasonable time (around 1-2 seconds):
- git config --global core.preloadindex true # This should default to true, but sometimes might not; it lets git parallelize reading the index.
- git config --global gc.auto 256 # Reduce GC threshold; didn't do much in my case, but everyone recommends it in case of performance problems on Windows...
- git config status.submoduleSummary false # This did the trick! It significantly cut down time to show status output.
Unfortunately, it turned out that even with submoduleSummary=false, git status still checks whether the submodules are there, which impacts performance. On the command line, you can use the --ignore-submodules argument to solve this, but for Magit I didn't find an easy way to configure it (and didn't want to defadvice the function that builds the status buffer), so I ended up editing .git/config and adding "ignore = all" to every single submodule entry in that config.
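I believe the same setting can be written per submodule with the submodule.<name>.ignore config key, so something along these lines should produce the same "ignore = all" entries without hand-editing each one (just a sketch - I haven't tested it against every git version):

  git submodule foreach 'git config -f "$toplevel/.git/config" submodule.$name.ignore all'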
With this, finally, I get around ~1s for Magit status (and about 0.5s for raw git status). It only gets longer if I issue a git command against the same repo from the Windows side - git detects the index isn't correct for the platform and rebuilds it, which takes several seconds.
Final note: if you want to check why Git is running slow on your end, set GIT_TRACE_PERFORMANCE to true before running your command[0], and you'll learn a lot. That's how I discovered submoduleSummary = false doesn't prevent git status from poking submodules.
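For example, a plain

  GIT_TRACE_PERFORMANCE=true git status

dumps timing information for each part of the command to stderr, which makes it much easier to see where the time actually goes.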
--
[0] - https://git-scm.com/docs/git, ctrl+f GIT_TRACE_PERFORMANCE. Other values are 1, 2 (equivalent to true), or n, where n > 2, to output to a file descriptor instead of stderr.
To be precise: are you saying WSL1 is faster compared to Windows, or compared to WSL2? With WSL2 (and the native-comp Emacs branch) I've never noticed any unusual slowdowns with Magit or anything else.
WSL1 process creation is faster compared to Windows, because part of the black magic it does to run Linux processes on the NT kernel is using minimal processes - so-called "pico processes"[0]. These are much leaner than standard Windows processes, and more suited to a UNIX-style workflow.
I can't say if it's faster relative to WSL2, but I'd guess so. WSL2 is a full VM, after all.
It shouldn't actually be a noticeable difference. HW virtualization means that unless the guest is doing I/O or needs to be interrupted to yield to the host, the guest is kind of just doing its thing. Spawning a new user space process in a VM should, in theory, be basically the same speed as spawning a new user space process on the bare metal. How that compares to the WSL1 approach of pico processes I don't know, but Linux generally has a very optimized path for spawning a process that I would imagine is competitive.
Yeah, I hope this is one of the issues Microsoft addresses at some point, because although CreateProcess is a slightly nicer API in some regards, the cost is very high. It may not be possible to fix it without breaking backwards compatibility, but maybe we could have a new "lite" API.
The bit about Windows Defender being hooked into every process is also infuriating. We pay a high price for malware existing even if we're never hit by it.
Yes. This makes me wonder if I could speed up our builds by 2x by whitelisting the source repository folder. If it's at all possible (and company policy allows for it)...
One thing that deeply frustrates me is that I simply don't know which things are slowed down by Defender. I can add my source repos to some "exclude folder" list deep in the Defender settings, but I've yet to figure out whether that actually does something, whether I'm doing it right, whether I should whitelist processes instead of folders or both, I have no idea.
If anyone here knows how to actually see which files Defender scans / slows down, then that would be awesome. Right now it's a black box and it feels like I'm doing it wrong, and it's easily the thing I dislike the most about developing on Windows.
Anything that does a lot of forking, like the multiprocessing or subprocess modules in Python, is basically unusable for my coworkers who use Windows.
Startup time for those processes goes from basically instant to 30+ seconds.
I researched this a little bit and it seems that it may be related to DEP.
It's basically just Windows: back when the current Windows architecture was designed (OS/2 and Windows NT going forward -- not Win9x), the primary purpose of any given PC was to run one application at a time. Sure, you could switch applications, and that was well accounted for, but the entire concept was that one application would always be in focus, and pretty much everything from a process/memory/filesystem standpoint is based around this assumption.
Even for servers the concept was and is still just one (Windows) server per function. If you were running MSSQL on a Domain Controller this was considered bad form/you're doing something wrong.
The "big change" with the switch to the NT kernel in Windows 2000 was "proper" multi-user permissions/access controls but again, the assumption was that only one user would be using the PC at a time. Even if it was a server! Windows Terminal Server was special in a number of ways that I won't get into here but know that a lot of problems folks had with that product (and one of many reasons why it was never widely adopted) were due to the fact that it was basically just a hack on top of an architecture that wasn't made for that sort of thing.
Also, back then PC applications didn't have too many files, and those files tended to be much bigger than their Unix counterparts. Based on this assumption, they built hooks into the kernel that allow 3rd-party applications to scan every file on use/close. This in itself was a hack of sorts to work around the problem of viruses, which really only exist because Windows makes all files executable by default. Unfortunately, by the time Microsoft realized their mistake it was too late to change it without breaking (fundamental) backwards compatibility.
All this and more is the primary reason why file system and forking/new-process performance is so bad on Windows. Everything that supposedly mitigates these problems (keeping one process open/using threads instead of forking, using OS copy utilities instead of copying files in your own code, etc.) is really just a hack to work around what is fundamentally a legacy/out-of-date OS architecture.
Don't get me wrong: Microsoft has kept the OS basically the same for nearly 30 years because it's super convenient for end users. It probably was a good business decision, but I think we can all agree at this point that it has long since fallen behind the times when it comes to technical capabilities. Everything we do to make our apps work better on Windows these days is basically just workarounds and hacks, and there doesn't appear to be anything coming down the pipe to change this.
My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.
> Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts.
Okay, let me interrupt you right here. To this very day, Linux has a default maximum of 1024 file descriptors per process. And select(3), in fact, can't be persuaded to use FDs larger than 1023 without recompiling libc.
Now let's look at Windows XP Home Edition -- you can write a loop of "for (int i = 0; i < 1000000; i++) { char tmp[100]; sprintf(tmp, "%d", i); CreateFile(tmp, GENERIC_ALL, FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); }" and it will dutifully open a million file handles in a single process (although it'll take quite some time) with no complaints at all. Also, on Windows, select(3) takes an arbitrary number of socket handles.
I dunno, but it looks to me like Windows was actually designed to handle applications that would work with lots of files simultaneously.
> fundamentally a legacy/out-of-date OS architecture
You probably wanted to write "badly designed OS architecture", because Linux (if you count it as a continuation of UNIX) is actually an older OS architecture than Windows.
> I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.
I think one way they could pull it off is to do a WSL2-style move with Windows itself - run the NT kernel as a VM on the new OS.
As for the price, I think they're already heading there. They already officially consider Windows to be a service - I'm guessing they're just not finished getting everyone properly addicted to the cloud. If they turn Windows into a SaaS execution platform, they might as well start giving it away for free.
>My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.
More and more gets offloaded onto WSL for anything that doesn't need interactive graphics or interoperability through the traditional Windows IPC mechanisms.
In my experience, Magit is slow even on Linux. On my small repos at home, subjectively magit-status seems to take around 0.2-0.3 seconds. And that's just status, the most basic information you ask of git. Committing is several times slower. On a large codebase at work, magit-status usually takes around 10 seconds, sometimes longer. Again, I'm usually running it to just check some basic metadata (what branch I'm on, do I have a dirty tree, if yes, then what files are changed), so it's frustrating to wait. Honestly, I'd expect stuff like that to update effortlessly in real time without me issuing any commands. This is what happens in some other editors. However, currently I'm glued to Emacs because of Tramp for working remotely in a nice GUI and org-mode for time-tracking (TaskWarrior/TimeWarrior isn't for me).
I prefer Fork on Windows and Mac (prefer the Windows version for aesthetic reasons). Unfortunately, it's not available for Linux.
At what point does this get in the way of your programming? If there's always more to learn about the tool, I think that would bother me.
I would like to like Emacs, but the couple of times I tried it, it got in the way because I didn't know it well enough. Vim seems more appealing because there is less to know up front, but it is still extensible.
At the point where you find yourself dinking with Emacs and making excuses to yak shave instead of getting your work done -- which, given that it's Emacs and makes most other tools seem positively stone-knives-and-bearskins by comparison, is in practice a quite tempting trap to fall into.
The best way to approach things is incrementally. Find some pain point in your current workflow and write a little bit of Elisp to automate the painful bits. Let's say you're running manual UI tests and then eyeball-grepping the logs in your backend. "But nobody does that!" you say. "Why would anybody do that?" Heh. You'd be surprised. Anyhoo, the crank has to be turned on a prod deploy by COB Friday and you've got no time to investigate Nightwatch or the other UI-testing libraries out there, let alone integrate them with a log watcher.
Thankfully you're an Emacs user! So you write a little Elisp to start the back end, capture its output in a buffer, and count the number of occurrences of the log cookie that represents a successful (or failed) row insert, etc. You haven't automated your whole process, but you've automated part of it, and that's saved you considerable pain. And it took maybe half a page of Lisp, if that.
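To make that concrete, here's a rough sketch of the kind of thing I mean (the backend command and the log cookie are made up for illustration):

  ;; Start the backend, capture its output in a buffer, and count
  ;; occurrences of a (hypothetical) success cookie in the log.
  (defun my/start-backend ()
    "Start the backend and capture its output in *backend-log*."
    (interactive)
    (start-process-shell-command "my-backend" "*backend-log*" "./run-backend"))

  (defun my/count-successful-inserts ()
    "Count successful row inserts reported in the backend log."
    (interactive)
    (with-current-buffer "*backend-log*"
      (message "Successful inserts: %d"
               (count-matches "ROW-INSERT-OK" (point-min) (point-max)))))

Not pretty, but it beats eyeball-grepping, and it grows with you as the process changes.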
Recently I was confronted with an ancient web service whose only integration test process was "manually hit the endpoints with Postman, then check to make sure the records hit the database". So I wrote Emacs Lisp code to send the requests, run the queries, and even manage the Docker container where the service lived. This let me run the whole process much faster with a few M-x commands. It was nothing fancy, either, I just wrote code to spin up curl, Docker, and SQL*Plus and capture and examine their output, sort of using Emacs as a powerful full-screen shell. Rather than get in my way, it became a versatile tool I could apply to the task to get things done much faster.
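Roughly what that glue looked like, with the endpoint, port, and container name swapped out for made-up placeholders:

  ;; Shell out to curl and docker, dumping their output into buffers
  ;; I can read and search from Emacs. Names here are placeholders.
  (defun my/hit-endpoint (path body)
    "POST BODY to PATH on the local service and show the raw response."
    (interactive "sPath: \nsBody: ")
    (let ((buf (get-buffer-create "*service-test*")))
      (call-process "curl" nil buf t "-s" "-X" "POST" "-d" body
                    (concat "http://localhost:8080" path))
      (display-buffer buf)))

  (defun my/restart-service ()
    "Restart the Docker container the old service runs in."
    (interactive)
    (call-process "docker" nil (get-buffer-create "*docker-log*") t
                  "restart" "legacy-service"))

The SQL*Plus side was the same idea: run the query, capture the output, and check that the rows actually landed.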
Again, this is the difference between an extensible editor and a completely malleable one. VSCode, JetBrains, or Eclipse cannot be extended as incrementally and on-the-fly as Emacs.
I've been using and programming Emacs for ~12 years. After all this time, I am at the level where it really feels like an extension of myself, almost like a cybernetic extra sense that I can put to use as I see fit.
Practically, as I go about my daily programming, if I feel something is off and could be improved, I have a pretty accurate idea of how long it'll take me to implement it. Usually it's not that long, and frequent 10-15 minute diversions of this sort happen multiple times a day. It simply means that the process works as expected; I'm approximating an ideal, temporal mind model.
A few times a month, I'll do more extensive work, focusing on a number of long-term projects or exploring various ideas I've had and put down in my notes. All this work is geared toward helping me manage information more effectively. I'm an information junkie, hopelessly addicted to the Internet, and that's by choice. I wouldn't give it up for anything.
Emacs is the tool that makes the difference: this continuous feedback loop of adapting Emacs to help me deal with more and more information allows me to keep up. Being an information consumer is easy today; you simply sit back and absorb what's being blasted at you. Being plugged into numerous signal sources and managing that information _on your own terms_ is the tricky bit, and it's where Emacs shines while most other tools fall flat, since they're either too constrained or lack the necessary programmability.
> I've been using and programming Emacs for ~12 years. After all this time, I am at the level where it really feels like an extension of myself...
The problem is that these days, you could use vscode or atom for a month or two, and get that same feeling. Emacs from scratch is not a great use of time anymore.
Maybe so, but the ceiling with those tools is very low. I simply can not do what I do every day with Emacs using vscode or atom. I know why you compared them, but I feel that these comparisons are meaningless. A category error.
What are some examples? I'd say the ceiling is marginally lower (maybe) than emacs, but the learning curve to extend is orders of magnitude more approachable.
Almost everything I do on the computer happens inside Emacs, using programs written in Emacs Lisp:
Writing code, reading/sending email across ~30 email accounts spread out over different services, reading newsgroups, mailing lists and RSS feeds, maintaining a presence in 7 different IRC servers with ~30 channels total, using a variety of connection methods, watching and filtering certain twitter feeds, controlling external applications through Apple Events [1], file management local and remote, remote system management, local virtual machine management, note taking, calendar, agenda, notifications, bookmarks for external applications (Chrome and Preview), password manager, version control, music playing.
I can keep going. The only programs besides the OS and Emacs I use on a regular basis are Google Chrome, Preview.app, VMWare, Calibre, mpv, bash/ssh, Unix command line tools (less and less) and my bittorrent client written in Common Lisp.
I've almost finished an Emacs Lisp controller for Chrome, allowing me to bring a lot of information (e.g. tabs) into Emacs in order to rapidly manage it on my terms. I've done the same for Preview.app. I treat it as a dumb vessel that does the rendering. The actual information (filename, metadata, current page, date etc) is extracted from it, stored and manipulated in Emacs. That way I don't rely on Apple to dictate how I'm allowed to use my computer. I can experiment with different paradigms and find the one that fits me best. Emacs is the magic that makes all of that happen.
That speaks to the general breadth of the Emacs extension/package ecosystem, which is impressive (but somehow not as impressive as you'd like), but how many of these things did you engineer yourself? And how many of these things could not be implemented in modern editors? I'm not sure any couldn't be. Most already are.
Extensibility was emacs' killer feature when its main competition was vim and vimscript - that's just not the situation anymore.
Emacs' main competition in the noumenal realm of ideas wasn't vim, but the commercial Lisp Machines, mainly Symbolics. They were superior, but technical superiority doesn't mean much in the domain of "Worse is Better" [1].
I didn't write all of the Emacs Lisp programs I mentioned, but I did go through most of them, changing the way they work to fit how I wanted to use them. This was an iterative process that is still ongoing.
What should impress you is not the breadth, but the paradigm itself as expressed in Lisp. You can probably write an IRC client in JavaScript and have it in VSCode. But that's not what I'm talking about. We're not ticking boxes down a feature list here to say "JavaScript can do that!". We're talking about a paradigm, and specifically about interactive development with a short feedback loop through Lisp which is "a difference that makes a difference" [2]. But in order to truly understand this, you have to go through the process. Philosophical truths can not be transmitted like pieces of eight.
VSCode is quite restrictive in how it can be extended actually.
Atom is much better, and feels closer to Emacs in terms of allowing you to extend and modify almost every part of it.
That said, Emacs is just the king. Not only can you really change absolutely everything, it is really easy to modify it all at run-time, on the fly, which even in Atom isn't quite the case.
I had settled on Atom for a while, but the slow performance and high memory use made me switch to Emacs.
It gets in the way of your programming immediately - the friction involved in bootstrapping a viable Emacs environment that unlocks all the potential everyone is always going on about is very high.
Deep diving into Emacs is not a productivity booster - it's a hobby that one does for its own sake, if that's the kind of thing that floats your boat.