In my experience, Low Code tries to fix a non-problem and makes the real problems worse. It gets you up to speed fast, but with a much lower output plateau than conventional programming tools. Some experience from a low-code tool I used this year:
Non-problem: Writing code. This is the easy part. COBOL took typists, gave them a week of training, and turned them into competent basic coders. Low code helps the most basic junior but slows down the average coder by forcing everything through drag and drop.
Problem: Reading code. Most low-code platforms I've seen show you only a small part of the code at a time, requiring a lot of clicking around in a GUI to make sure you've found it all. They either transform the logic into a mess of arrows and boxes or spread it out so wide that you spend more time scrolling than reading. I've found myself reading the XML dumps of our current tool just to save time.
Problem: One size fits all. You can't polish or fine-tune the standard components. What you see is what you get, which guarantees you both a minimum and a maximum level of quality. Yes, there are escape hatches. No, they won't help you. Parts of your program will end up unstable or less user-friendly because your low-code vendor didn't foresee all of your needs.
Problem: Versioning. Boxes and arrows don't merge well. There is generally only a small team working on one piece of code; you can't scale it past 3-4 people. Also, emergency fixes in prod don't easily propagate back to dev, especially in high-stress situations. You'll have to do it manually, which almost guarantees regression bugs.
Problem: Searching code. If you have enough code, the day comes when you'll need to find all references to something. I've grepped code bases of more than 10,000,000 lines. With low code you can only do this in the most limited way.
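For contrast, here is roughly what that kind of reference search looks like against plain-text code; a minimal Python sketch (the directory name and identifier are hypothetical examples, and `grep -rn` does the same job from the shell):

```python
# A plain-text reference search is trivial when code is just files on disk;
# this is roughly what `grep -rn identifier src/` does. The directory name
# and identifier below are hypothetical examples.
from pathlib import Path

def find_references(root, identifier):
    """Yield (path, line_number, line) for each occurrence of identifier."""
    root_path = Path(root)
    if not root_path.is_dir():
        return
    for path in root_path.rglob("*.py"):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if identifier in line:
                yield path, lineno, line.strip()

for hit in find_references("src", "calculate_totals"):
    print(*hit)
```

The point being: because the representation is text, one generic tool covers every search need, which is exactly what a boxes-and-arrows representation gives up.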
Problem: Knowledge exchange. Something like Stack Exchange works because you can type text. A screenshot is the only option available in most low-code tools.
As the saying goes, the core of ICT is not programming but Information and Communication. If you want to make programmers obsolete, you need tools that help you organize information and ease communication.
Low code is simply the wrong way to look at the problem. It ends up throwing tons of man-hours at the problem. In the long term, it creates more programmer jobs, not fewer.
But that's exactly what people used to think in the 60s and 70s: instead of requiring a bunch of electrical engineers to build some arcane contraption, ordinary folks could now just write something that almost looks like English, automate anything, and do calculations in seconds that used to take months! If that didn't pan out even though it seemed so freaking obvious that it would, why will No Code be any different?
To add an anecdote: No Code was already the hot new thing in the 90s when I studied CS. You could click together custom interfaces in Delphi and, IIRC, even do basic wiring by clicking alone. Devs expected that laypeople would click together the solution they wanted and developers would do the remaining wiring. Yet no non-developer could actually use that thing. Nowadays I think the main hurdle is the transformation of a fluffy real world problem into something of an algorithm. Developers do this almost unconsciously, because they practice it all the time, and thus are usually not aware of it. Yet this process of quantifying the real-world problem is often the actual problem, not writing it down as code.
> I think the main hurdle is the transformation of a fluffy real world problem into something of an algorithm.
I came to a very similar conclusion after I had been teaching programming in high school for a few years: the difficulty of "programming" is in learning to think algorithmically, and no amount of "No Code" tooling gets you around that problem. The article alludes to this with the "PBJ sandwich problem" - people are used to specifying processes based on a collective (and often unconscious) cultural understanding, which computers obviously do not share!
I'm inclined to agree. One of the most successful "No Code" programs is Excel. Yet we still, time and time again, see people struggle with basic calculations in it. It's literally elementary school mathematics we're talking about.
I think most "No Code", and RPA especially, will run into that. The mindset required to think programmatically is not something the majority of people have, unfortunately. But "No Code" will enable those who are somewhat technically inclined and able to think sufficiently programmatically.
Yes! SQL, for example, was invented to allow business people to pull their own reports instead of having to bother programmers to do it for them. We all know what really happened.
We are doing that though. There are tons of flexible systems like that, where developers provide components/plugins and somewhat technical people, or rather domain experts fit them together for a specific task. Wordpress, Unity3d, Shopify to name a few.
What are you doing in a terminal that you need real-time collaboration? Here's what I do in my terminal:
* log in to AWS
* push commits (though that's mostly in Emacs now)
* tail logs from remote services
* use ssh
* random grepping/cat/awk/sed one-off stuff
I don't see any of those benefiting from real-time collaboration. The use-cases presented on the landing page don't make sense to me either - when my entire team is debugging production, we usually fan out and all look at different things rather than needing to fan-in and all do/look at the same thing. And if I want to chat I'll have Slack open next to the terminal.
Thanks! It's on the low end of what systems in other domains like LMAX have done, but it can make a difference for the payment systems we worked on while coming up with the design, helping them go faster and reduce costs.
Ahead of performance, we are especially excited about TigerBeetle's safety fault models, where we think there's an opportunity to break new ground with all the new storage fault research that has come out over the past two or three years, especially from UW-Madison.
Zig has been fantastic, in particular for Direct I/O alignment and static allocation, and it's only getting better with things like the self-hosted compiler. Having done a bit of security work in the past, I really like Zig's unique approach to safety, for example checked arithmetic enabled by default in safe builds, all the explicitness around return values and exhaustive syscall error handling, and of course comptime. The speed and ease of the compiler itself and cross-compilation tooling is excellent and the readability of Zig is also remarkable.
Huge props to the Zig Software Foundation for what they are achieving as a fully open source community funded project.
I wish there were an easier way to figure out whether you'd had COVID.
As someone who's only got one dose so far, I would gladly skip the second one if I knew for certain I had COVID before, so someone else could get my second dose.
Don't some antibody tests do this? I think antibody tests that detect antibodies to the nucleocapsid should be unaffected by the vaccines, which only spur resistance to the spike protein.
See the section called "Binding Antibody Tests" here:
This is true of some - but not all - of the vaccines. The whole inactivated virus vaccines contain more than the spike, but the mRNA and vector vaccines only contain the spike.
I think they are relatively expensive, ~$200. Probably worth it though.
We would need some form of official recognition of antibody tests though. Pretty sure most places that require vaccination don't accept antibody test results.
Antibody tests at the supermarket clinic here are $40. Unsure if that's a crappy test or if some subsidy is provided. $40 vs $200 does seem like the usual EU-vs-US markup, I guess.
Antibody tests were only widely available in July-August, so people who got COVID in Feb or March often didn't have high enough antibody counts by that point to flag positive on the tests.
The half-life of antibody levels has been estimated at 73 days. The antibody tests are quite sensitive, time has not been enough of a factor yet to produce false negatives. In a year or two it might be.
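As a back-of-the-envelope sketch of what a 73-day half-life implies, assuming simple exponential decay (the actual detection threshold would depend on the specific test):

```python
# Simple exponential decay using the 73-day half-life estimate quoted above.
HALF_LIFE_DAYS = 73

def remaining_fraction(days):
    """Fraction of the initial antibody level left after `days` days."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

# After one year, 365/73 = 5 half-lives have elapsed, so only about 3%
# of the initial level remains - which is why false negatives become
# plausible on that timescale but not after a few months.
print(round(remaining_fraction(365), 4))  # 0.0312
```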
The only timing concern is that you need to wait 2-3 weeks after recovering from COVID to ensure you test positive.
The article is not advocating doing anything unsafe or otherwise endangering others. The scientific question is quite valid - do we have to use our limited vaccine supply to ensure all get two doses or can we save a dose on those who already had COVID and give it to someone who needs it?
First paragraph of the article: “Many people who’ve been infected with the coronavirus might be able to safely skip the second jab of any two-dose vaccine regimen, a growing number of studies suggest. These results could help to stretch scarce vaccine supplies and are already influencing vaccination policies in some countries.”
US might have enough for everyone, and wouldn’t it be also great if we didn’t need as many doses and could share with hard hit places like India? I see this as a net positive if true.
The US is in desperate need of people who are willing to take all the vaccines the government has purchased for us. What’s the shelf life on the mRNA products? Looks like Moderna’s lasts for six months in the freezer. Pfizer for 30 days on dry ice, another 30 days in a regular freezer?
Lots of doses are going to get tossed. Most of the people who are going to get the shot have gotten it already.
In Europe, meaning shorter shipping routes, recycling existing batteries using a novel hydrometallurgy process with a 97% lithium recovery rate, lower emissions, and a factory powered 100% by renewable energy. Plus they already have $27B in orders.
They will be able to recycle batteries, but that's not where much of their current order volume or materials come from. The article is a word salad that conflates a series of unrelated points.
I often encounter documentation where the user is always referred to as just "she", even though most users are presumably male. In my experience it's a lot more common than the user being referred to as just "he", but that may be observation bias on my part.
I'm not a fan of "default she" or the even more awkward he/she - it just strikes me as weird pandering or overcorrecting.
So often the gender is completely irrelevant to the discussion, so just don't mention it. Use "they" or even avoid the pronoun completely and use "the user" or whatever.
> or even avoid the pronoun completely and use "the user" or whatever.
The problem is in practice it doesn't work and you would end up with monstrosities like "the user should define the user's own preferences in the user's preferences file located in the user's home directory."
No, because we're pragmatic people who (generally) know to not write shit like that? No matter what pronouns (or lack of) you use in that sentence, it's still bad.
I never said "avoid pronouns at all costs", but rather that there are alternatives you can use if it makes sense.
> I often encounter documentation where the user is always referred to as just "she"
As far as I know, no man has ever complained about this. Whenever I read "she" I don't feel excluded as the documentation is just giving an example. Why are the pronoun-warriors so focused on these issues?
Totally. I much prefer that we bring everyone in line with English, we should start doing PRs with other languages next. They must really be suffering without "they/them" pronouns :3
Too bad that, as developers, we scorn those platforms instead of improving them to the point where we'd be obsolete.