The “No Code” Delusion (alexhudson.com)
385 points by ealexhudson on Jan 13, 2020 | 323 comments



I was once at a company (X) that acquired another company (Y), and Y's main product was a graphical programming tool. Y advertised that their tool could speed application development up 10x.

My (nontechnical) manager asked me "why don't you use Y's tool to build the project you're currently working on?" I answered with the following metaphor:

Imagine you have to pick a bike to go on a trip. You're travelling on a well paved road through the woods. If you take a light & narrow wheeled racing bike, you'll travel much much faster than if you ride a knobbly wheeled mountain bike with suspension, as long as you stay on the road. As soon as you need to go off the road and cut a new path, you are going to wish you had that mountain bike, and the road bike is actually going to make you go much slower or just stop altogether.

So does the "speed bike"/advanced framework make you go faster? Yes, as long as you stay on the road (i.e. constrain your requirements to their feature set). The moment you need to "go off-road" (i.e. do something the framework doesn't do), such tools actually make it difficult or impossible to make progress.

This is why, for complex applications (most commercial software), I prefer the mountain bike. Yes, it's slower on certain paths, but when we need to "go off-road" I don't slow to a stop.


Reminds me of something Bruce McKinney said about Visual Basic when I was much younger and beginning to learn about computers in general. I'm paraphrasing here, but the gist was "Visual Basic makes 95% of your task easy and the other 5% impossible".

Of course Bruce is a hard core programmer so he went on to show you how to do the 5% in VB using some crazy hacks or, better, do it in C/C++ using COM and glue it onto your VB code using the common MS interface of the day (pre .Net that is).

I've forgotten more of it than I remember, but I do remember really enjoying reading anything Bruce wrote!

http://www.vb.mvps.org/hardweb/hardbook.htm


> "Visual Basic makes 95% of your task easy and the other 5% impossible".

Think what you will of the readability of Perl, but it's hard to beat its motto: "making easy things easy and hard things possible."


Indeed. When I was finally exposed to Unix (SCO, yuck!) and Linux (Slackware, yay!) it was because I was trying to learn C and wanted free tools because I couldn't afford any commercial stuff at the time. But before I did much with C I became quite proficient with Perl. This really wasn't that long after absorbing all the VB material I could find at the time so it was interesting going from the one to the other, not to mention the OS differences.

I still love using Perl 5 today even though it's totally out of style at this point. VB? Not so much! :-)


> I still love using Perl 5 today even though it's totally out of style at this point.

Same here!


I suspect there's more of us than we think!


:) How is Perl out of style? ... 20 years ago it had features the more enterprisey languages only got in the last few years


VB had access to the entire Win32 API. Nothing was impossible.


Per smhenderson (https://news.ycombinator.com/item?id=22038454), you'll have to take that up with Bruce McKinney, not me; I merely quoted it to compare.


That's often the trade-off of any high-level tool or library. It will make a big chunk of related tasks easier, but won't be able to handle a small subset. Hopefully there are ways to work around the limits.


The limits of tools only expose themselves when they're being stretched in ways the authors did not intend. Good tools have enough escape hatches to get you by those moments, and the bad ones break down entirely.


Such tools should have a degree of "hooks" built in so that work-arounds are easier. For example, an ORM should have an operation to send direct SQL to the database for the times it can't generate SQL as intended. And maybe even modify specific clauses if needed so it's not all native or all raw, but something in between. Of course, if it's too open-ended it becomes a hacker's playground, so a happy medium needs to be found.
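To make the "hooks" idea concrete, here's a minimal sketch using SQLAlchemy, which exposes a raw-SQL escape hatch alongside its ORM layer (create_engine and text are real SQLAlchemy API; the table and column names are made up for illustration):

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///app.db")

    with engine.connect() as conn:
        # The ORM/expression layer covers the common cases; when it can't
        # generate the SQL you need, drop down to a hand-written statement
        # with bound parameters instead of abandoning the tool entirely.
        rows = conn.execute(
            text("SELECT id, name FROM customers WHERE name LIKE :pat"),
            {"pat": "A%"},
        )
        for row in rows:
            print(row.id, row.name)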


Back in the COM days, we were largely a C++ shop. There were times when our business folks did not quite know what they wanted - so we'd prototype the COM component out in VB. It worked a little too well - in about a quarter of the time, you ended up with something that had only a 10% (or less) performance penalty.


I remember the same argument, but it was applied to ASP.NET WebForms, the stateful monster that MS created to usher desktop devs into the web world. That was a crutch that turned out to be quite bad in the end; lots of new devs that came that route had a problem understanding that the web is built on a stateless protocol.


All you've said is 100% true, but in addition I'd argue that a big part of the reason why no-code tools don't get more traction is also just the simple power of habit and the lack of motivation to invest time into learning them. We already know how to build things the traditional way, and all these no-code solutions require learning a lot of new, proprietary interfaces and stuff, playing with it and figuring it out, learning to work around the limitations. Many other tools and frameworks that we choose to use have similar, probably even worse problems, but they're popular so one feels like it's worth the trouble to learn them - just to stay in the game. No-code tools on the other hand are proprietary, not very popular, and the benefits they give you are just not that significant to a seasoned programmer. In all honesty, I never even gave them a chance.


I don’t have enough experience with no-code tools either, but I don’t agree it’s motivation that prevents it from taking off.

No-code is used mostly by non-technical founders who do seem quite motivated (enough to do it themselves).

Engineers are more likely to build it themselves (realistically, over-engineer).


Those "non-technical founders" are usually the ones that have "ideas" to build another Facebook and don't want to pay a professional to do it.


The trouble with this analogy is that at some point an ATV comes along that’s much faster both on and off the road, but you’re still clinging to x86 because it’s more powerful than the “high level” languages like C.

I generally agree with you, but I also suspect there’s a lot of room to improve the tools and it’s not always obvious when something is a paradigm shift in productivity or a dead end that will only work in tightly constrained use cases.


This is the perfect metaphor and the one I use.

Just because we can build a Ferrari doesn't mean that a Ferrari is efficient in all situations.

We have race tracks (smooth, evenly surfaced roads), highways (generally smooth roads over long distances), city streets (some potholes), gravel roads, dirt roads, and no-roads.

A tank isn't great for a race track. And a Ferrari isn't great for no-road.

The author muddles up the question to make their point.

What they're really saying is "no-code tools aren't a good fit for general purpose coding" (for all the reasons mentioned).

Which seems fair and accurate. But all coding is not general purpose coding.


Reminds me of a similar situation with my technical manager. He’s very dogmatic in that we should never waste time “re-inventing the wheel” (e.g. there’s already a library/tool out there that does X, don’t waste time remaking it). This makes a lot of sense in many business cases, but not always...

I like using this analogy to demonstrate why:

Say we’re designing a race car, and our race car needs wheels.

“Wait!” says my manager, “don’t waste time designing wheels for our car when we can simply buy these wheels from a vendor and slap it on our car!”

I look at the wheels. “Uh, sure those are wheels, but they’re wheels designed for a shopping cart! I mean sure we could slap these on our racecar but it’s then going to drive like crap!”

If you want a great product, sometimes you’re better off re-inventing the wheel… (and not using a “No Code” solution…)


The problem is sometimes you get the converse, where developers have the "not invented here" mentality. I interviewed at a shop one time where they had built their own JS framework. Mind you, this was a financial shop, not a tech shop. They went belly-up about 2 years after I interviewed with them, and I can't help but think "tech gone wild" was one of the reasons. The CEO fashioned his company as the Google of the financial industry, but in the end they were a bunch of developers stroking their egos.

It is a balancing act, a lot of times building on what exists is the best case but when it comes to your core competency and what distinguishes your business, that is where one should focus their development efforts. If you are rewriting Postgres you will just end up with a shittier database.


Totally agree it's a balancing act. Sometimes the existing tool is exactly what you need, in which case it would be a waste to re-invent it. Ideally though, things should be evaluated case by case, rather than sticking to a dogma.

I just happen to work at a company that's experienced the flip side of your experience. We ended up wasting millions on software some managers thought fit our needs. However, turns out it really didn't fit our needs after all was said and done, and they said if they had known everything up front (rather than what the salesman had said) they wouldn't have bought the software in the first place.


Most importantly, by the time you need to go off road, it's too late to switch bikes...


We used to make this sort of analogy a lot at a web dev place where I once worked. Eventually it got boiled down to a pithy joke: "Ruby on Rails is great, but only if you stay on the Rails!"


I think this drives your point with incredible wit: “On The Turing Completeness of PowerPoint (SIGBOVIK)” by Tom Wildenhain[1]. (5 minutes video)

[1]: https://youtu.be/uNjxe8ShM-8 (Nerding out should always be hilarious like that!)

[2]: Recording of a guest lecture he gave for the Esoteric Programming Languages course at CMU. (~1h) https://youtu.be/_3loq22TxSc


What you need is a speed bike that lets you switch to a mountain bike when you decide you need/want to go off the trail. That is, a "no code" or graphical or whatever whizz-bang tool that lets you drop into C++ or Python or Lisp or whatever when you need to.

And to do this, it needs to be better than JNI in Java. It needs to be able to have something better than a gouge-your-eyeballs-out-ugly syntax for interfacing to the "real" programming language.


I’ve found that most no code environments that I’ve tried advocate that they have this capability but in practice it’s still nearly impossible to get what I need because their abstractions aren’t at the right level. Not saying this is impossible just that my personal experiences have made me skeptical of this approach.


I mean, that's pretty much how I program things. I start new web apps with an MVC framework that auto-generates new pages. When I want to do something, I try to use someone else's code as much as possible. If I'm doing a simple, well contained task, someone else has already thought of it and made a library for it. If I need to send an email, I just do something like calling _emailSender.SendEmail(message). Easy peasy.
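For what it's worth, the "just call the library" step really can be that small. Here's a hedged sketch in Python using only the standard library (the SMTP host and addresses are placeholders):

    import smtplib
    from email.message import EmailMessage

    def send_email(subject, body, to_addr):
        # Build the message and hand it to an SMTP server; everything hard
        # (MIME formatting, the SMTP dialogue) lives in someone else's code.
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = "noreply@example.com"
        msg["To"] = to_addr
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)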

But processing data is complicated and nothing can completely abstract it away.


I am a lazy programmer in that way. I'd rather autogenerate and use robust libraries as much as possible.


It's not lazy, it's just smart. The less code you write, the better it is.


I think the abstraction layers are important; they should not be leaky. You shouldn't have to go up/down the stack. If you can't build what you want with one tool, then choose another tool.


This is appealing, but I think it's important to recognize just how hard that is.

Abstractions by definition lose information. If that information becomes important, to any user, at any time in the tool's lifespan, somebody has to go and claw the thing open to find it.

Picking an abstraction that never leaks essentially means correctly predicting the entire scope of your problem. There are ways to improve on this, good abstractions try to offer contained 'leaks' that you can enable or access if needed, but that still means predicting the set of possible use cases.

(For a code-adjacent example, Excel is popular as an abstraction on an RDBMS, but it's low-level and even Turing complete. Higher-level tools which don't permit arbitrary data association constantly get scrapped or exported down to Excel because they don't cover every interesting relationship, and Excel data semi-regularly gets pushed back down to databases.)

In school, I learned a lot about the wonders of encapsulation, black boxing, and so on. Within a few years out of school, I realized that one of my most valuable skills was being able to move down the stack when those things inevitably break down. When I do have to go and read a library's source, and very occasionally find a bug in it, it often feels like something that couldn't realistically have been anticipated.


Breaking abstraction layers will kill productivity. There's one article that gets posted to HN every year but I can't find it atm. It's about how long it would take to do something from scratch, taken to the extreme, including growing your own crops. For computers that would be like mining your own silicon in order to build a CPU, so you can build a PC, to build a boot-loader, an OS, a compiler, a web browser. It will take you years to write that "hello world" program. If you find a bug in a compiler, then that's one day of work gone to debug and patch it. Same with libraries: having to write pull requests to the libraries will take time away from whatever you were working on. Leaky abstractions are worse than no abstractions. If things are hidden, it will take extra time to dig them out; you could just as well have them all in the open.


Couldn't disagree more. Unless your job is unbelievably trivial, you should use a range of tools and use the best one available for the task at hand.


Let's say you call for pizza, but you need to know how to bake in order to use their self-help service; that's a leaky abstraction. At high levels of abstraction it's OK to only know the names of the different pizzas. And the baker doesn't need a master's in chemistry or engineering to operate the oven. We humans are very good at abstracting things and take most abstraction layers for granted, even though the world around us is very complicated.


I fail to see at all how that relates to the question.

If I order a pizza, I’m using a communication channel (phone, internet) to place an order; one tool.

I deeply hope the pizza place is using chefs and an oven to make the pizza; completely different tools.

If I demanded the pizza chef make the dough, slice the ingredients, and bake the pizza with just a stand mixer... that’s just dumb.

Use the appropriate tool for the job and use the right level of abstraction at each step.


Yeah, while I think one of the hardest parts of a software developer's job is working at multiple levels of abstraction, that's literally the job. You're translating between the business and the silicon.

By all means restrict yourself to assembly, but the rest of us will get more done by using things at convenient levels of abstraction for those specific units.


This is what I get from Pure Data (a graphical audio programming language). Once things get too unwieldy, I write a C/C++ extension or Lua script.


At certain large tech companies it’s the difference between an all terrain vehicle and laying frikin tracks for a frikin train.


Most commercial software is basically information systems, with very standard architectures, and no complexity whatsoever.

At least not business-related complexity. Most complexity in those systems is accidental, exactly due to us using the wrong tools for the job (namely, 3GLs + frameworks), which do not provide the proper level of abstraction for the problem at hand.

Even for a single system, no one should be forced to use a single level of abstraction everywhere. The game industry has been mixing 3GL code and assembly whenever needed since it moved on from assembly. You don't have to stick to one for everything.


The only problem is, it’s actually not a road bike, but a shopping cart. The advertisement is that you can sit and not pedal (when down hill). And it’s not faster in any case.


I've spent a long time trying to build "No Code" solutions. Probably 3 different products, 3 different companies. But once, I tried something different. I pushed back, instead I proposed we build a domain specific language using Ruby. I already had some trust with my boss... and he was pretty convinced it was going to fail, he had zero faith these smart (but non-technical) users could successfully use it. But he was willing to let me fail. So we tried it, and really surprisingly, they caught on very quickly, and they were producing complex content within a week.
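For readers wondering what that looks like in practice, here's a rough sketch of the embedded-DSL idea - shown in Python rather than Ruby, with entirely made-up domain verbs - where the non-technical users only ever write the last few lines:

    class Campaign:
        # Hypothetical domain object exposing a handful of domain verbs.
        def __init__(self, name):
            self.name = name
            self.steps = []

        def send_email(self, template, audience="all"):
            self.steps.append(("email", template, audience))
            return self

        def wait(self, days):
            self.steps.append(("wait", days))
            return self

    # What a domain expert actually writes: plain host-language code that
    # reads like domain vocabulary rather than "programming".
    welcome = (Campaign("welcome")
               .send_email("hello", audience="new_signups")
               .wait(3)
               .send_email("tips"))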

"No Code" solutions are like asking your user to communicate using pictographs. It's possible, but language allows users to communicate in better detail faster. In school I learned how to write, I'm probably not anywhere close to writing a novel. I'm a terrible writer, but I can write an email. Frankly, that get's me pretty far.


I second this. People can go very, very far with simple and sandboxed Domain Specific Languages that are targeted towards the core function.

There are so many commercial successes for this approach: CAD, MATLAB, Simulink, LabVIEW, Dymola, Excel (?)

The biggest issue with many of these tools is that they tend to be closed source, use proprietary formats, have onerous licensing terms, aren't easily extended, and aren't easy to deploy into an automated production workflow.

Some are addressing this with an option to export a compiled binary, but many try to upsell you complete ecosystems (PLM) to keep you locked into their proprietary formats. This tends to frustrate devs.


I mostly agree with you. As an independent professional, I hate complete ecosystem platforms also. We need composable systems. Not more locked down.

Mendix (https://www.mendix.com) does a lot of that well but as you pointed out it's proprietary.

If we look at history, successful predecessors like C, Java and Python (sorted by difficulty) are all open standards.

I think the next low/no code platform to open their specifications wins the race.

Disclaimer: I've worked for Mendix.


Did you do anything special presentation/interface-wise?

My impression is that a good part of it for many is not making them realize that they are "programming" until they've already accepted that they can do it, because otherwise they "know" that it is too difficult.


> because otherwise they "know" that it is too difficult.

This is something that frustrates me in general, and I'm sure others, too:

That there seems to be among a certain population of people a mindset that declares failure before they've even tried. Or, they try, but at the first hint of failure or trouble, they declare that they can't do it, and stop.

So what is different about those people who don't do this? Why do they instead take up new challenges, and when they fail, try again. When they run up against difficulty, they step back, think about their options, perhaps consult experts in the domain, and continue on?

And how do we get the former group to join the latter?

I know there isn't an easy answer to this, if there is one at all; I know I'm not the first to observe this issue either - it's likely something that has been observed and wondered upon for thousands of years.

...but nonetheless, it continues to be frustrating.


> That there seems to be among a certain population of people a mindset that declares failure before they've even tried. Or, they try, but at the first hint of failure or trouble, they declare that they can't do it, and stop.

You just described my three year old's eating habits.


Actually I think there is an easy answer: Security. One is willing to experiment and fail if one is confident that it will be safe to fail.


My suspicion is that it's not really that they aren't capable of programming, they just find it boring and would rather be doing something else.


I think it's a matter of motivation. They're not sufficiently motivated to push through the boredom or frustration they feel at the start. That's ok, most people don't need to learn it, but as I say in another comment here, I do believe that most people can learn if they have a problem that they could solve with programming that they want to solve badly enough.

Most people don’t care enough though and life’s too short to spend on something when other things are more important to you.


In the 1970s, secretaries not only used Multics Emacs, they were trained to customize it in Emacs Lisp. Because they were only ever told they were "customizing" the editor, not programming it, they developed useful extensions -- in Lisp! -- without ever realizing that they were writing programs!


I actually wrote a plugin for Sublime which allowed them to write the script and then directly upload it to our service. It also had some code snippets in it so they could right-click, then quickly add and modify what they needed while they were learning (this feature was not used as much though; watching them, they seemed to prefer looking at scripts that did something similar and just copying and pasting from them). But other than that, it was just Sublime.


If you just remember that all the classic Unix editors were used by secretaries/data-input people picked right from the typewriter, that's not surprising at all. What's surprising is that this insight was so fully eradicated by years of IBM/MS/Apple marketing (except for the lone warehouses still running on IBM mainframes with terminal frontends)...


I’m a programmer with 15 years of professional experience and another 10 as a student and hobbyist before that. My brother is a carpenter by training who was never even the slightest bit interested in programming. Last year, he learned Pine script[1] because he wanted to customize stuff on TradingView.com. Sure, he doesn’t really understand all the details and sometimes asks me for help, but he is able to successfully get the results he wants. I think most people just need sufficient motivation and they’ll get it. I’ve always said that the reason most people don’t learn to program isn’t because they can’t, but because they don’t really have enough reason to put the time and effort in.

Learning a skill takes time and tenacity. Years ago, I tried to learn guitar, but gave up after a few months because progress was too slow for me. I was impatient and not motivated enough and therefore ultimately didn't get anywhere. Two years ago, I decided I wanted to learn sleight of hand card "magic". There was one flourish in particular that I just couldn't do, but I kept trying anyway. Day after day, I couldn't do it and then one day I realised I could. The only difference between that and guitar is that I kept at it and put the effort in.

Sure, some people are predisposed to certain things, which is a bit of a shortcut (I definitely found programming easier to learn than card magic), but I believe that most people can learn most things, if they have sufficient motivation and put in the time and effort (I include finding a way that works for you as part of effort; just doing something repeatedly may not be enough on its own, as they say: "practice makes permanent; perfect practice makes perfect" - i.e. be careful of learning bad habits that may get in your way).

I'm personally not against visual programming and have had some good experiences with it, but the name "no code" in my opinion completely misses the point: it's still code (and programming). The text was never the hardest part, so by eliminating that, you're not really winning much. The hard part is the logic, calculations, data manipulation and translating ambiguous requirements given by people who don't really know what they want. Very little of my day to day is actually about the text I type into my editor, but rather the problem solving that goes on in my mind. Visual languages don't magically make that go away, they just represent the code in a different form. Sometimes this can be really useful (visual languages make flow explicit and I personally tend to think in "boxes and lines" anyway), but often that's not the biggest roadblock. Often the roadblock isn't the code at all.

[1] https://www.tradingview.com/pine-script-docs/en/v4/index.htm...


"A picture is worth a thousand words..." When doing presentations or writing, I make a lot of effort to visualize what I'm trying to communicate, as it really helps making sense of all the words.

I work for a low-code, no-code vendor. We usually approach new features by first defining a DSL for a need, and then have one or more visual editors for that DSL. Every part of your application is still customizable (Java, TypeScript, React widgets, etc.), or you can just call a microservice built in another stack.


That is really cool. How did you go about designing the language? Did you go for a full lexer and parser, or was it more functional? Curious to understand how you built it.


Ruby's superpower is its metaprogramming ability. So they were really just writing Ruby. If you google, I think there's a bunch of tutorials out there for creating a DSL in Ruby... but I learned from a book called "Metaprogramming Ruby".


Thanks for the book recommendation, I am eager to read it.

I looked on Amazon but there are only used paperbacks and no Kindle edition. But I saw from the cover that it was a Pragmatic Press book, so looked for it there and they have a no-DRM ebook of the second edition - for a lot less money than the used copies on Amazon!

https://pragprog.com/book/ppmetr2/metaprogramming-ruby-2


Very cool. Reminds me of the Ruby DSL for music performance: https://sonic-pi.net/


I think those with a programming background take terms like "no code" far too literally. For those who've worked in finance IT, you may know of staff who've built incredibly elaborate models in Excel, and then have come to you when they need something that can't be done in Excel.

"No code platforms" will likely work the same way. These platforms provide just enough interactive/dynamic functionality for non-programmers to build a decent prototype of what they're trying to create. They can then take this to their dev team (if they have one) and ask for refinements.

Even if the end result requires a full rebuild, I'd wager the devs would be happier because they wouldn't waste time building something based on vague specifications communicated by a non-technical staff member. They'd have a prototype to work off of, and can ask clarifying questions that will be easier for the stakeholder to answer because they can see where the devs are coming from, instead of simply speaking in abstractions about something that doesn't yet exist.


IMO, Excel was one of the first 'no code' platforms. The first part of my career was taking Excel solutions and turning them into something that could be used corporate-wide. It was pretty fun because by the time it got to my team, the requirements were pretty well hammered out.


I worked with or around a number of people who had to productize an Excel spreadsheet when the team got a little too big for that to work and everyone was always surprised by how long it takes to recreate all of the functionality they had built up in Excel.

When Oracle bought Sun and got Open Office people as part of the deal, I thought for sure that Larry had some plan for an Excel killer to make it quicker to transition people out of Excel into relational databases.

I kept waiting for the other shoe to drop and it never arrived.


I have spent 5 years on and off in the no-code space and IMHO you're almost spot on. Almost all standard enterprise platforms are not complex and can be delivered with no-code solutions, requiring maybe 5% as real code. Failing to acknowledge this is why IT organisations in large enterprises are typically sidelined by shadow IT teams, who often deliver great solutions in Excel and Oracle Apex.


A large part of my job is writing custom logic for an enterprise application with much-touted "no-code" facilities.

You _always_ need code. Some customers don't need much, but they all need _some_.

The first example that comes to mind is duplicate detection. Sure, the basics are simple. No two agreements of the same kind for the same customer. A lot of no-code solutions struggle at this point.

But then you get to the slightly more complex requirements. You _can_ have two agreements of the same kind for the same customer, as long as they are not overlapping in time. But if customer B is a subsidiary of A, and A already has the agreement, then B cannot have its own, if A's agreement is marked "company-wide". This also goes for C, which is a subsidiary of B.

I've yet to see a no-code solution that makes this sort of thing accessible to non-programmers. And programmers will prefer just writing the code.
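As a rough illustration of why this ends up as code, here's a hedged sketch of the rule described above in Python (the data model - an Agreement with a kind, a date range and a company_wide flag, and a Customer with a parent - is assumed purely for the example):

    def violates_duplicate_rule(new, existing):
        # Simplified sketch of the duplicate check described above.
        for old in existing:
            if old.kind != new.kind or not overlaps(old, new):
                continue
            # Same customer: two agreements of the same kind may not overlap in time.
            if old.customer == new.customer:
                return True
            # Some ancestor company already holds a company-wide agreement of this kind.
            if old.company_wide and is_ancestor(old.customer, new.customer):
                return True
        return False

    def overlaps(a, b):
        return a.start <= b.end and b.start <= a.end

    def is_ancestor(candidate, customer):
        parent = customer.parent
        while parent is not None:
            if parent == candidate:
                return True
            parent = parent.parent
        return False

The subsidiary rule in particular is exactly the kind of thing that is awkward to express in a box-and-arrow editor but trivial to say in a few lines of code.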


"Shadow IT" teams are great at delivering solutions quickly because their clients tends to be themselves. When you're writing solutions for your own problems you're cutting out multiple communication layers which will reduce the effort involved by several orders of magnitude. Also by not making it an "official" project you can cut through lots of red tape and requirements not related to the problem at hand.


I've spent 2 years in a fairly large bank, and I heavily disagree. We have tons of procedures and edge cases that are subtly unique.

Our problem is purely one of speed. When there's 3 quarters and 5 org layers between you and the developer, you are much more likely to just grab a tool you have access to and get to work.


I agree with this, but it also means the hype is overdone, because these "no-code" tools that are gonna "revolutionize the industry" have been here for ages.

For example in the web space, Microsoft took a stab at it with FrontPage since the 90s. WordPress was released in 2003 and sits behind a third of websites [1].

New tools are popping up that are more complex, but IMO that's because people's expectations of a good web experience are also more complex. I am unconvinced that new tools in this space are fundamentally changing it - I think they're just keeping it up with the times.

[1]: https://wordpress.org/news/2019/03/one-third-of-the-web/


> I think those with a programming background take terms like "no code" far too literally.

Agreed. Poor term. But what these platforms can accomplish is liberating for those who can't code yet want to get something live on their own.


I've been hearing about "no code" or "no programmer required" business solutions for over 20 years. Cynically, I encourage this thinking because my billable rate to untangle someone else's hot mess when urgent deadlines are looming goes up. Practically speaking, if the business problems being solved are complex you might be able to pull off a low-code solution, but without knowledge and practice of the essential architectural patterns of software development, a novice will paint themselves into a corner before they even know what they are doing, requiring an expert to come in and clean things up.


Nearly 40 years for me; I remember reading about The Last One [1] back in 81.

[1] https://en.wikipedia.org/wiki/The_Last_One_(software)


The pipe dream is far older than that.

Around 1960 some people seriously claimed that within about 5 years there would be no more professional programmers because with this new language, everyone could write the software they need themselves, since it was so easy to use.

The language was COBOL.

Oh, and look, this one still seems to be around: http://www.makeyourownsoftware.com/


> Around 1960 some people seriously claimed that within about 5 years there would be no more professional programmers because with this new language, everyone could write the software they need themselves, since it was so easy to use.

Of course that prediction seems quaint now, but I posit that the prediction failed because they greatly underestimated the increase in demand for software as much as they overestimated the expressiveness/productivity increase of COBOL (and later systems).

Heck, considering what a professional programmer's job might have been like in the 50s, it's not far-fetched at all that such a profession has indeed disappeared, as modern programmers work at such a different level of abstraction.


This reminds me of the Jevons paradox from economics. The story is that when steam engines became way more efficient, the demand for coal exploded. Even though each engine used much less energy, the total amount of energy consumed went through the roof.


Yes, given that back then people were typically working in assembler (with frequently changing platforms, to boot) and writing relatively simple data processing applications, the claim is somewhat understandable.

But I think the main thing it failed to take into account is more how hard it is to translate business requirements into complete and unambiguous instructions.


The even bigger thing is how better tools and abstraction create more demand for code, as more can be done. The entire history of computers has seen software demand increase over time.


To be fair, afaik (second hand account), when COBOL appeared it drove a bunch of non-expert programmers into the field, mostly domain-driven people. It really was a new age of programming. (which then-experts didn't really see in a positive light)

Episode of Command Line Heroes by Red Hat about COBOL: https://www.redhat.com/en/command-line-heroes/season-3/the-i...

I think it's a little unfair to characterize COBOL as a failure in that regard. It tremendously increased the accessibility and use of programming and computing in general by way of consequence; it's been instrumental in allowing the programmer population to grow massively.

Granted, extraordinary superlative claims never materialize, but the intent, the vision, is important in my experience. Especially in business settings.


Oh my god, I was convinced until the end that it was a top notch joke site... then I got to the credit card form.


There used to be an early ARPAnet mailing list called "INFO-COBOL@MC", that was actually for exchanging jokes and copyrighted Dave Barry articles (which was an officially prohibited abuse of the government sponsored ARPAnet). It was a great stealth name because nobody took COBOL seriously, and we would just laugh at people who posted COBOL questions.

Here are some old INFO-COBOL messages archived in the MIT-AI HUMOR directory:

http://its.svensson.org/HUMOR%3bINFO%20COBOL

Then there was The TTY of Geoffrey S. Goodfellow's spin-off, the specialized "DB-LOVERS" mailing list, just for dead baby jokes.

https://news.ycombinator.com/item?id=8418591

Speaking of COBOL jokes:

https://medium.com/@donhopkins/cobol-forever-1a49f7d28a39


That's pretty funny, thanks for sharing!


Thanks for sharing—that site is a real gem!


Yes, in the 80's and 90's we called it CASE (Computer Aided Software Engineering). It was just as fascinating then and equally impractical now. Text turns out to be a great, compact way to convey ideas or instructions which is the heart of software development.


Conveying ideas and instructions is also at the heart of architecture. Digital representations are embedded at every stage of a contemporary building design and construction pipeline. 98% of those representations are something else than text.

I strongly believe software application design is fundamentally closer to architectural design than the kind of work done in a purely textual realm — say, writing a novel or a research paper. But it's a really hard nut to crack.

I hope CASE today is like AI and neural nets were in early 2000s — a bit of a laughing stock, "something people in the 1980s wasted a lot of time on but nowadays everyone knows it doesn't work."


I don't think text is nearly as compact as people claim it to be.

I dabble in graphical programming languages from time-to-time, and one feature they all share is the editing environment for code makes entire categories of syntax error impossible; there is no way in the language's design to pass a string to a function that only accepts a number, for example, because the "blocks just don't fit together." It's a level of integration between the space of all possible arrangements of elements and the set of arrangements that constitute valid programs that I see text-based IDEs approach, but struggle to catch up with. And I suspect part of the challenge is that as a tool for describing programs, sequential text has too much dimensionality; the set of strings of sequential text is much, much larger than the set of valid programs, and it's easy to put text sequences together that aren't valid.


Do you have examples of delivering actual production applications with a graphical programming language in less time than developing the same thing with a traditional text based language?


I don't, and the gap is (IMHO) wide between where the ones I've used are and where they'd need to be to compete with text input. The main hindrance is UI; keyboard is a relatively high-bandwidth input (in terms of bytes-per-second of usable signal), and most of the graphical languages I've seen are heavily mouse-centric with not enough accelerator keys to close the gap. I can generate a lot of incorrect code-per-second with a keyboard, but I can also generate a lot of code-per-second period.

I'm hoping someone can close the gap and bring us a graphical language with a robust keyboard interface to navigate through it and edit it, with the advantage of strong edit-time validation excluding invalid programs. If someone can close the gap, it'd be a hell of a thing to see.


FWIW I would argue that Sketch (and other applications like it, plus Unity, etc) are most accurately described as graphical programming languages -- highly domain-specific ones.


> there is no way in the language's design to pass a string to a function that only accepts a number, for example, because the "blocks just don't fit together."

Which is easily achieved by the first typed language that comes to hand, no?


Not without an IDE. It's still extremely possible (in all text-based languages I'm familiar with) to write the program with a typecheck error; the typechecker will gleefully catch it for you. At typecheck time. Often (depending on my toolchain) minutes after I've written the offending code and mounds of code depending on it.

It's possible, with many languages, to write IDEs that will make this hard, but I've yet to find one that makes it impossible. Certainly not in the same way that, for example, Scratch will simply refuse to let the offending blocks click together. I certainly don't advocate transitioning from text languages to Scratch, but I think there's meat on the bones of asking the question "Why, when I'm editing code in text-based languages, am I even allowed to reference a variable that is demonstrably the wrong type? What benefit is that gaining me?" Because I think the answer in a lot of cases is "No real benefit; editing, typechecking, and compilation have just historically been disjoint concerns with no feedback channel because writing those is hard."
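A tiny illustration of that gap, assuming a Python-plus-mypy setup (the function is made up): nothing about the act of editing stops you from writing the mismatched call, which is only rejected later when the checker runs.

    def add_one(x: int) -> int:
        return x + 1

    # A plain text editor happily accepts this line; the mistake only surfaces
    # when mypy (or the runtime) gets around to it, not while the code is being
    # assembled, the way Scratch refuses to click mismatched blocks together.
    result = add_one("not a number")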


I'm a strong believer, even if that belief doesn't come up often, in structured editing[1], which sort of bridges the gap and makes writing invalid programs impossible, at least syntactically. I don't think adding type-level checking is a big jump, and I haven't kept up with research, so that might already be there. Unfortunately I don't know any successful examples of that beyond research projects that I could point out. I remember hearing rumors that some of the LISP machines might have veered in that direction, but idk.

Even more, I don't believe in plain monospaced ASCII being the ultimate format for code. Luckily there I know one example that goes at least a bit further: Fortress[2], a language from Sun that had a richer rendering format available (although the underlying form was still afaik textual). And of course APL is another example, albeit a bit less easily approachable. Raku also has some cute tricks (of course it does) with unicode characters[3], but they are more just tricks than a radical revolution in design.

There are also lots of other interesting ideas on how to format and layout code in the research, just one random example is "code bubbles"/"debugger canvas"[4]

While I think all this and so much more has great potential, there is huge cultural entrenchment around simple text based programming that seems unlikely to be overcome any time soon. As for graphical programming, I think the common failure there is being often also heavily mouse-driven, while keyboard is really powerful input device.

[1] https://en.wikipedia.org/wiki/Structure_editor also known as projectional editor

[2] https://web.archive.org/web/20060819201513/http://research.s... has some examples, slide 33

[3] https://docs.raku.org/language/unicode_ascii for example "The atomic operators have U+269B ATOM SYMBOL incorporated into them. Their ASCII equivalents are ordinary subroutines, not operators". Obviously.

[4] https://www.microsoft.com/en-us/research/publication/debugge... paper has few screenshots


Re: "The atomic operators have U+269B ATOM SYMBOL incorporated into them. Their ASCII equivalents are ordinary subroutines, not operators". Obviously."

Except for the short-circuiting operators (such as || and &&) and the assignment operator, all operators in Raku are just subs with a special name. Adding your own operator to the language is as simple as adding a subroutine, e.g.:

    sub prefix:<√>($value) { sqrt($value) }
    say √9;   # 3


Are you suggesting that you can do visual development without anything more than a text editor?

Apples to apples would mean comparing your Scratch experience with a nice IDE for a typed language.


"It's possible, with many languages, to write IDEs that will make this hard, but I've yet to find one that makes it impossible."


I'm actually quite fond of some "no/low code" tools, but there is a threshold of complexity beyond which, if you use them, terrible abominations will result that are far more complex than the equivalent code and actually require more technical expertise than 'code' - so you end up with components that only a skilled developer can maintain in a platform that developers will hate.


I’ve dealt with a few of these too, and the point at which you should just give up and switch to python/bash/anything always comes sooner than you think.

By the way, this is an anti-pattern known as a Turing tarpit: https://en.wikipedia.org/wiki/Turing_tarpit


The classic magnet for this: Microsoft Access. It's a fantastic force-multiplier until Dr. Jekyll turns into Mr. Hyde and begins grinding your business to a halt.


Absolutely true.

There are also a large number of projects - maybe majority? - that never reach that level of complexity; and the "no/low code" solution (like Excel, Access, etc.) enabled a non-engineer to solve a business problem without hiring any engineers.

That's where these kinds of systems shine, if done right; allow a "non-engineer" who has a logical mind / the engineering tao to create solutions without having to learn traditional development toolchain.


"Low Code" is currently where it's at.

Intelligent subject-matter experts who are non-programmers can build 80 to 90% of their business info capture and reporting requirements inside the walled garden of their chosen platform.

Programmers are called in temporarily to complete the final 10 to 20% of the LowCode app, and integrate with external services.

It's been happening since Excel, through to Wordpress and nowadays splintered into 100's of directions and platforms from Wix to Podio to Notion.so


I'm compelled to invoke the "Ninety-Ninety" rule when I hear about solutions like that. Although I'm sure it works sometimes, in my experience it usually turns out more like this:

The first 90% of the work takes 90% of the time, and the remaining 10% of the work takes the other 90% of the time!


Isn't the majority of software following this rule? This is not specific to low/no-code environments.


Yes, absolutely.

But to hear it explained that way, it just seems like wishful thinking based on circular reasoning that invites an invocation of the rule... "We spend too much on our developer staff, so in the future we have adopted a strategy where we will avoid most of the things that we need a team of developers for, so that our developers have less work to do, so that we can have fewer expensive devs (of which we know we cannot dispose entirely, [because we are subconsciously aware that without them, there is no innovation to speak of at all.])"

The problem that "Low Code" or "No Code" addresses is a real one, where devs like myself, (surely not myself, but someone more junior...) confuse poorly architected slipshod solutions for innovative ones.

If we could reliably keep our code as simple as it ought to be, the market for tools like this would probably not be as large as it is.


Yes it is, but if you're doing the first 90% properly you have a much better shot at mitigating the difficulty of the last 10%.

I think there's some vague point in any project where it goes from being 'easy' to 'hard' to add new stuff. Basically the only factor that matters for productivity is how long you can delay that point. If you just do the first 90% as quickly and cheaply as possible, you're just resigning yourself to hitting that point as early as possible.


I think this is best explained without exaggeration by the famous Design-Stamina Hypothesis[1], which states that the notion that time spent on design is something you can trade away to improve development speed is reliably false in the long term (even if it seems to be working in the near term).

The graphic also suggests that there is an inflection point, as you suggest, before where time spent on design really is just slowing you down in the beginning of your project, but also that the costs of waiting too long to switch modes (from doing no design, to doing good design) after you have crossed that line, are substantial and compounding; the longer you wait, the more your lack of good design costs.

And of course, not pictured, is "bad design" which can be even worse than no design. Trying to find that inflection point and put it on your development schedule in advance is also a bit like trying to catch a falling knife (not likely to succeed.)

[1]: https://martinfowler.com/bliki/DesignStaminaHypothesis.html


Kinda, it sounds very similar to the 80/20 rule. The 80/20 rule says 80% of the solution takes 20% of the time. So it's not quite the same.

In other words, the 80/20 rule says the last 20% takes 4x as long as the first 80%. In comparison, the above quote says the last 10% takes just as long as the first 90%. So slightly different.


Both this "90-90" and "80-20" indicate that the devil is in the detail. e.g. You can expect surprises as you're almost done, there's an inherent complexity to the solution, etc.

But saying "the first 90% takes 90% of the time" blatantly ignores these anticipatable unknowns; so it's a much more tongue-in-cheek thing to say.


The other way to read it, I guess, is that you can correctly anticipate those unknowns. The canonical way I hear is to add 1/3 to your estimates.


In my experience, "Low Code" is almost always weasel-wording. It's used to describe products that try to be "No Code", but fall short. It's a way of making excuses for everything you can't do, because you can get a "real programmer" to come in and paper over the cracks. Actually writing this code is rarely a pleasant experience, and the learning curve is a cliff that goes straight from "flowchart" to "writing React" (or worse).

As other replies have pointed out, the really successful tools are like Excel: They have a real programming language at the heart of them, and they don't try to hide it away.

Disclaimer: I founded and run something you could call a "Low-Code web environment" (https://anvil.works) - but I prefer "Visual Basic for the Web" (or "Web Apps With Nothing but Python"). We built it around the idea that writing code is inevitable - indeed, it's the best way to tell a computer what to do. So don't try to hide it - make it a first-class experience!


That is because we've built tools that are good enough for most use cases, and the cases don't actually differ all that much. For every new case however, new software has to be made. It isn't getting any easier, there is just more of it now.

The problem is that a "problem" is essentially a fractal; it needs a defined accuracy and scope to be solvable. Differing scopes and accuracies again require overhauls of software that would otherwise be acceptable for the same task.


About 2/3 of software cost is typically maintenance, not original creation. If some RAD-ish tool makes maintenance more difficult, saving time up front doesn't mean a lot, other than maybe as practical prototyping. Maintenance costs include getting new staff familiar with a tool, and the turnover rate is typically around 4 years for developers. Self-taught programmers tend to produce spaghetti systems, in my experience. My first programs were spaghetti, I must say. I was an IT puppy. In short, consider the medium and long-term costs of RAD/code-free tools.


What do you mean Excel? The formulas? VBA is a full blown language.

WordPress is great for low code.

I did my html and in an hour had it working with WordPress.


Reminds me of being warned, over 20 years ago, that the viability of my new career as a web developer was in doubt thanks to tools like FrontPage and Dreamweaver.


My dad found himself with a useless MBA in the 80's recession, so retrained as a programmer after I'd already decided that's what I wanted to be when I grew up (not the last time he would steal an idea from me and run with it).

When I was in highschool the fervor over so-called 4th Generation Languages was kicking up and he worried the career might be going away. I wasn't that worried, and I can't recall entirely why I wasn't.

Today I'd say that they will always need mechanics for the robots, but in fact the situation is not even that dire. AI, which I still believe will fail all over again, replaces less than half the code we write - it deals with the decision whether to do something, but not how to accomplish it. And we already know that we're fucking up data management pretty badly. If 40% of my job went away at the wave of a wand (nothing is ever that simple, and it takes a painfully long time to migrate to new systems), we'd have more time to invest in the provenance of data and protecting it once we have it. I'd still be overbooked and our customers would be a little less salty.


Frontpage and Dreamweaver ... never before could you produce so much bad code in so little time. And never before did the person who had to clean it up, to have any chance of breaking out of the box and changing something, hate you so much. Especially if said person was you six months later.


I always thought the failure of tools like Frontpage and Dreamweaver was a reflection on the inadequacies of HTML/CSS/JavaScript as much as the tools themselves.

If I want to typeset a book or magazine I'm not going to start by writing PostScript or PDF files in Vim. If I want to design a tool then AutoCAD is likely to be my starting point. Why should the web be any different? Why do I need to learn some arcane text-based language when all I want to do is adjust the margins on an image or include some data in a page?

Sometimes I feel this industry is going backwards rather than forwards.


Well, but if all you cared about was the design and not the code, then those tools were very useful. Sometimes I use Adobe PageMill (which is essentially a rich text editor a la WordPad that saves in HTML and has a sidebar with a file browser) to create very simple sites (e.g. [0]). I do not really care about what the code looks like since I'm not going to edit it by hand anyway (though FWIW I find it very readable).

Many years ago I worked at a company where I had to work with PHP files edited in DreamWeaver by someone who had no idea about PHP, programming or even HTML (they'd design and write the text for the pages, I'd supply some code). The only issue I remember having was a couple of times making DreamWeaver unable to open the page, but that was fixed quickly.

Honestly, personally I'm a big fan of WYSIWYG applications and I really see it as a big step backwards how many things nowadays rely on (often several) preprocessing steps for markup like Markdown, RST or ad-hoc syntaxes. I understand that it is much simpler and easier to implement this way and I do it myself too, but I do not pretend that this is somehow easier for the end user or that it has better usability (and it especially grinds my gears when someone tries to convince me otherwise by using edge cases that only happen 0.1% of the time, if at all, when actually writing/editing the content is what you'd do 99.9% of the time).

[0] http://runtimeterror.com/tools/lila/


My first job involved an element of hand-validating third party HTML and really, those two tools germinated a seed of hatred for generated code that grew and flourished unabated until the first time I looked at the output from an SCSS file. If not for Less and Sass, I might have gone to my retirement never having liked a single code generator. Because they all wrote the most atrociously wrongheaded code from perfectly reasonable user inputs.

(By one definition, compilers generate machine code, but those did not fall under my umbrella definition of code generator)


Frontpage was awful, but I felt Dreamweaver did a pretty reasonable job generating markup. At least in comparison to FP, it was a radical improvement.


Good old GruntPage and Screamweaver. I miss the late 90s/early 2000s internet. As a web dev I cited the fact that I wrote HTML by hand as a competitive advantage -- my work loaded faster. In the days of dialup that was still a big deal.

Today, web developers still don't write much of their own work; they delegate that to JavaScript frameworks and components.


I mean, if you spent 20 years implementing nothing but static web pages and learning nothing then that prophecy would have come true by now, no? I don't see any jobs for just HTML and CSS anymore.


Today there is stuff like Webflow which is quite successful.

https://webflow.com/


Do you mean "developer" or "designer"? Dreamweaver wasn't a hobbyist's tool; it was priced like enterprise software, and had a ton of advanced functionality that really only web designers would have used.

It's quite a bit different from paying $10/mo for Squarespace.


There's no such thing as "no code". There's just interfaces which are more-or-less capable, and more-or-less intuitive. Any time you define a set of instructions for a computer to execute, you are creating code.

We use Scratch at my Girls Who Code club. It requires students to consider branching paths, and data types, and asynchronous execution. It does not require the students to type, but thank god for that, because they're 9 years old and type at approximately 1 word per minute.

Scratch is still code just like Java, and Lego Mindstorm's EV3, and Automator, and IFTTT. Not all of these systems are Turing complete, and each one contains different restrictions in an attempt to stop users from shooting themselves in the foot: Java doesn't have manual memory management, Automator doesn't have conditionals, and IFTTT is limited to a single trigger and action. But they're still code. Users of these tools need to carefully consider input and edge cases, and test and iterate to see if the code is doing what they intended.

IMO, the primary reason "professional programmers" type their code is because once you know what to type, keyboards are fundamentally faster than mice. That's also why Excel power users frequently rely almost entirely on keyboard shortcuts, and why the command line for an experienced user is often faster than a GUI.

---

Edit: BTW, for the same reason that HTML isn't a programming language, I don't consider most "website builders" to be code, even though I do consider IFTTT to be code.

Code in this context means creating instructions for a computer to follow. Laying out elements on a page (which then adapt to different screen sizes based on predefined behavior) is just that, laying out elements.

I don't know about you, but I can feel myself entering a "coding mindset" when I load up IFTTT—it's simpler than writing Javascript to be sure, but it's the same type of brain muscle, and there's no avoiding it.


This made me wonder: which side would other activities fall on, if event handling was the defining characteristic of "code"?

Not code - Excel, SQL, HTML, CSS

Code - Email inbox rules, IFTTT. Alarm clock?


Hmm, y'know what, I'm now reconsidering that part of my post.

Setting an alarm clock is definitely not coding. All the instructions are built into the clock:

    if (currentTime == alarmTime) {
        soundAlarm();
    }
All the user does is set the value of alarmTime, a single variable.

But, what if the user can choose from a set of alarm sounds? I feel like that's still not coding, but I also can't describe how it's different from IFTTT.

There's a spectrum here for sure—any time you interact with a computer, you're giving it instructions. I was thinking about this in terms of the mindset required. When I volunteer at GWC, there's a very specific skill that I see the students coming to grips with over the course of the class. I don't have a word to describe it.


Yeah, I was thinking of the alarm clock app on your phone rather than a hardware clock by the bedside, so you're practically hitting "add rule, set trigger, define action" much like IFTTT or email inbox rules.

When I've taught (adults) some basic coding, it's typically been state that's the challenging concept. You have a bunch of logic, but what was the state of your program (or the world) when the logic executed? If it's not what you expected, how did it get into that state? These questions seem fundamentally different to me compared to working on an Excel spreadsheet (which of course can still be complicated and have bugs).


> Not code - Excel, SQL, HTML, CSS

> Code - Email inbox rules, IFTTT. Alarm clock?

All of those are code, just varying in their level of Turing-completeness.


The shown C code has a buffer overflow vulnerability:

    #include <string.h>
    #include <stdlib.h>
    
    char *add_domain_name(char *source) {
        const size_t size = 1024;
        char *dest = malloc(size+1);
        strncpy(dest, source, size);
        strncat(dest, "@example.com", size);
        return dest;
    }
`strncat` takes as its third parameter the maximum number of characters to append, not the size of the destination buffer.

        strncat(dest, "@example.com", size - strlen(dest));
would be correct.


This isn't even quite right, since the first argument of strncat needs to be a null-terminated string, and strncpy may not null-terminate. I would honestly just give up and write

    size_t len = strlen(source);
    char *dest = malloc(len + sizeof("@example.com")-1 + 1);
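    /* sizeof("@example.com") counts the literal's trailing NUL, hence the -1; the +1 is for dest's own terminator */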
    strcpy(dest, source);
    strcpy(dest + len, "@example.com");


I read somewhere a quote that stuck with me: "No tool is ever going to free us from the burden of clarifying our ideas."

And that's how I view my job as a software developer: clarifying ideas.

Any "No Code" tool is still going to either force you to clarify your ideas, or have a large amount of assumptions. "Idea People" and "Business" don't like that, so they'll probably end up delegating the use of "No Code" tools to programmers of some sort.


There are plenty of tools that help us in clarifying our ideas though. Take mathematical notation for example. Or music notation. Or a CAD program.

All are examples of domain specific tools that allow the user to specify their intention in a non-ambiguous way. And they also allow the user to think about and experiment with the problem space.

I think what we need is something akin to this. The "business logic" that we are talking about here is to me a non-infinite domain. It should be possible to boil it down into a domain specific tool that is more helpful than a general purpose programming language.


> Take mathematical notation for example. Or music notation. Or a CAD program.

Or programming languages.


Business domain vs. Math: that's an interesting comparison. The problem is that the business domain is precisely an "infinite domain" compared with Math. Put another way, the competitive nature of business relies heavily on its ability to break its own domain boundary. Again, not so much in the case of Math.


> Again, not so much in the case of Math.

I will resoundingly disagree here. Math is successful exactly because each time it had a rule that was problematic, people just extended it.

And the success of modern (20th+ century) Math is due precisely to the fact that we got an infinitely extensible language, so nobody ever has to break it again.


They all help, yes, but you still have to know what you’re trying to achieve.


some sort of "common business-oriented language"?


I rather disagree.

I work at a large non-profit. I'm not a coder, but I'm IT literate, and know enough about database design to not do absolutely stupid things. I'm happy to document processes and enjoy clarifying ideas.

We use Sharepoint. In many many respects Sharepoint is detestable garbage, but I have managed to create some really quite complex automated workflows which are saving people in the organisation a lot of time, using Sharepoint Designer (no coding). It's documented, fairly robust and I wouldn't have been able to write it in a conventional language.


> I'm not a coder, but I'm IT literate

I'd say you're at least some kind of coder. My point is that there will always be room in an organization for people like us, who aren't afraid to go down and use our tools to the fullest extent, to save "people in the organization" enormous amounts of time. These tools can be code or something else, but the point is clarifying ideas. The exact tool you use doesn't matter.

"No Code" is not an existential threat to developers, it's just another programming language.


Eh, the automation I've done between python, vba, and SharePoint (language/markup) has convinced me anything is possible.


Stateflow enables better communication with domain experts at automotive OEMs than any other formal or non-formal specification language I know. So I'd count this as a point in favor of No Code.


Part of it is "clarifying ideas". But I think there's another important part:

Clarifying processes

I'd wager that the vast majority of businesses have no idea how their internal processes actually operate, what steps they take, what steps are already automated, what steps are not automated but could be, and what steps don't seem like steps but are actually super-important parts of the overall process.

I'm not an expert in the domain, but a few employers ago I worked for a company that was heavily involved in Six Sigma. Yes, it was a management buzzword. In many cases it was probably being used wrong. Or was being applied in a manner orthogonal to the problem. Or...any number of other things.

But one thing we studied in our "off time" (the company was a focused membership organization - we had magazines, conferences, everything) was how to apply 6S to our own business (you'd think that would have been done from the beginning - you'd be wrong). One thing we looked into, and attempted to understand and apply, was business process mapping.

That is - everything (and more) that I noted above - in various forms of flow-charting and other process mapping diagram systems, mostly done by hand, as it was easier for multiple people to see the processes and reason about them. Once we had things relatively "tacked down", we would convert those over to an electronic form.

It was an interesting exercise, and we never completely finished it before I moved on (the company went belly up soon after I left, as I was the only SWE left - it wasn't a large business). But we did notice in the exercise some interesting things:

1. If your business process flowchart looks messy, and can't be "untangled" - there are problems with your process.

2. Similar to #1 - if the flowchart looks unbalanced, even after untangling, there may be issues with the process as a whole.

3. Process flow lines that cross should be avoided; usually this is just a result of how things are arranged, but if you re-arrange things and still find a lot of criss-crossing lines, and can't make them not cross - again, issues may be there.

4. Soft processes are real processes - and they trip things up. These are things where you might do something "out of the loop" or talk to somebody about something - it doesn't seem like a real part of the process, but if it weren't done, the whole thing would break. Usually, these kinds of things aren't uncovered until one or another party leaves, either permanently or while "on vacation". Sometimes the issue doesn't appear until some automation is put in that leaves that soft process out, or unintentionally goes around it - then it sticks out like a sore thumb. These processes can be difficult to identify: sometimes even the person doing them doesn't realize it when you talk to them about their role in the overall process. You have to watch them do it.

This last one - there's a story I once read, I'll condense it as best as possible:

A woman brings her car into the shop complaining that the vehicle isn't running well after driving it a while. She gets in it to go to work, and at first it's ok, but within a mile or so it doesn't run very well. She doesn't know what is wrong. The mechanic looks at it, starts it up, it seems like it runs well. He tries it in the morning, everything is ok. He calls the owner and she comes into the shop and gets her car, but returns the next day complaining that it is still running strange. The mechanic asks if he can take a ride with her, to show him the problem. She says sure, they get inside the car, and as he sits down in the passenger seat, he sees her pull out the choke and hang her purse on it. It turns out that her previous car had a special pull out "hook" for just that purpose, and she didn't know. After the mechanic explained the problem, she had no idea, but the problem was fixed. No charge.

Ok - showing my age a bit there, and it's a somewhat contrived story - but the point is there: a process can be so ingrained for a single individual (or even within an automated process) that it is forgotten it is even needed (or shouldn't be done, depending on the situation), and so it gets overlooked when automation, or even just "process mapping", is done.

This can lead to interesting problems - and sometimes they can be hard to understand unless you are "in the driver's seat" so to speak.

So, that's also part of the job of a software engineer - figuring out these processes. The problem is that many, if not most, companies don't even know what their processes are, because they never actually planned them. More often than not, the processes grew organically and evolved, and quite often if you attempt to process map (physically graph) them, you'll find that many of the issues I noted above stand out. You'll find omissions and inefficiencies all over the place. You'll find redundancy (and sometimes this redundancy is there because if it is taken out, things break in weird ways - figuring out how to restructure things to fix this can be a real challenge).

You'll find you're dealing with an organism more so than a machine.

Automating such a thing can be an exhausting challenge even for a proper team of developers...


could it have been https://www.xkcd.com/568/


My response to "No Code" solutions:

If there was a faster and easier way to develop software, we would be doing it. We are already using the easiest to understand tools we can find to develop the best software possible.

There is no secret programmer cabal where we are holding hostage the REAL tools we use to develop software, and we keep them secret and hidden while we fuck around every day.

For someone to develop a "no-code" solution that fits the huge variety of tasks developers are doing every day, they would be a genius in the same vein as Turing. They would be billionaires.


It's a bit like the conspiracy that pharma companies suppress cures because ongoing treatments make more money, and it has the same main flaws.

First, without a huge amount of collusion, somebody would release the cure to beat everyone else's treatment. (Which totally does happen.) And in software, people constantly build and release tools for free while decrying paid software as evil; if neither Stallman nor Oracle is offering something, you can bet they don't have it.

Second, making these things is hard. If you talk to programmers, we link to Programming Sucks and complain about how bad our tools are and how most software is terrible. If you talk to biochemists, you'll hear all about FDA hurdles, the rising cost and difficulty of finding new drugs worth bringing to market, and how they want to find real breakthroughs but companies can't afford speculative research.

So it can't really be the case that people are just casually choosing not to produce these things. Maybe there's a vast conspiracy that nobody's leaking, even when they're drunk and complaining about work at 2AM. But if it's not that, it has to be that we just don't know how to make this stuff happen.


>> First, without a huge amount of collusion, somebody would release the cure to beat everyone else's treatment.

Unless the same party owned the treatment and the cure. Don't you see this every day with X-as-a-service subscriptions? I mean, they could sell you the software once, but instead it is a service. I've already paid for MS Office three times over because of the service cost.


> If there was a faster and easier way to develop software, we would be doing it. We are already using the easiest to understand tools we can find to develop the best software possible.

Not really.

As we evolve software development, historically, we've done this through adding new layers of abstraction.

Assembly --> C --> Interpreted Code (Java, Ruby, etc...)

Why isn't the next layer of abstraction simply what is being termed as "No Code" today?


> Why isn't the next layer of abstraction simply what is being termed as "No Code" today?

Cynically? Because people were also selling "no code" ten years ago, or even further back.

I know that sort of inductive argument doesn't actually work. We went through a dozen nonsense models of infections, but germ theory was actually correct in the end. AI has seen frauds and baseless hype since the mechanical Turk, but there still came a point when computers won at chess and Jeopardy. But it's a pretty good warning sign.

The gradual progression of abstractions still caps out somewhere around Python and Rails. We have drag-and-drop website makers, and visual tools for "plumbing" ML flows backed by pandas, or engineering calculations backed by Matlab. They mostly work, and people who know the underlying tools sometimes use them for speed while mixing in actual code as needed.

Meanwhile, promises of codeless business apps don't seem rooted in any of those advances, and continue on from their dysfunctional predecessors. And people actually making business apps in a hurry pull from NPM or PyPI without ever touching "no code" tools. It seems like we might get much closer to that layer within a decade or so, but I don't think we're there yet.


> Cynically?

No, not cynically :-)

> Because people were also selling "no code" ten years ago, or even further back.

Not really sure it matters when it starts. Sometimes markets aren't ready and products are too early.

What matters is are these solutions providing value to people today, and is it enough value for them to buy/spend time on/etc...

Seems there is enough value today, with a mature enough market, based on the results these platforms are experiencing.

> I know that sort of inductive argument doesn't actually work.

Then why waste time making it in the first place? :-)


Smalltalk was that next level, but for whatever reasons, the programming community never fully embraced that approach, although some modern environments and IDEs come close.


It doesn't matter. I'm not arguing about the chronological order in which layers of abstraction have been created, just that the layering itself is a fact. And if that's a fact, why not this as the next layer?


Conversely, we have not in 2020 reached the peak of software development. Presumably there are faster and easier ways to develop software that haven’t yet been invented.

Of course, it’s probably more likely that a no-code solution like you’re describing would be the result of decades of iterative development rather than one genius project. Put that way, it seems like we’re heading in the right direction, no?


For what it's worth, tools like Webflow (and their CMS feature) are incredibly well-done.

From a front-end development perspective, no code is legitimate and practical today.

From a back-end or business-logic perspective, no code is more difficult because a lot of it is bespoke (except for things like generating a PDF from HTML).

It's not a binary "killer," but it's certainly a smart thing to play with as a programmer. I've saved a ton of time on things like marketing websites and CMS-level work using Webflow. With time and iteration, I'd imagine some pretty neat and powerful stuff will appear.

Read: don't be naive/stubborn; give it a try.


While I agree with the author, it's only true when speaking of software that requires custom logic.

For instance, "no-code" website creation offering has done wonders in replacing the "install wordpress on a very crappy cheap host and let it rot with vulnerable plugins" paradigm.


Is that very different from the websites of the 90s? I distinctly remember creating a somewhat-dynamic site in FrontPage, then rendering it into static HTML and FTPing it to the webserver.

Perhaps this was an impedance mismatch: "web requires coding" - it mostly doesn't, most people would be okay with a better HTML editor, and providing that via WP was a historical quirk.


Those would be examples of static sites, which are "read only" to most users. Wix, Squarespace etc can handle form submissions and online payments of course. And with third party solutions like Intercom and Optimizely, things like customer support chat and A/B testing can be done with pretty much no code.


Web hosting companies in the old days often provided a collection of CGI scripts that users could invoke from their HTML pages, things like counters, e-mail form handlers, guestbooks and other popular functionality.


The no code websites are so much better now. Shopify powers a ton of big eCommerce now, for instance.


I agree with the author in all points about the no-code movement and goals, but disagree with the larger points about software development and engineering in the business setting.

In particular, the attractiveness of no-code should not be that one does not have to have in-house software development, but that one has less technical debt and thus smaller technical interest payments. Businesses will always have problems with computers, because computers are rigid and unyielding, while businesspeople are soft and compromising.

It is all too easy to read the beginning few paragraphs as the sourest of grapes: The businessperson, having embraced fax, email, paperless, Web, and mobile, is nonetheless no closer to having embraced computer. The "traditional sense" of creating software is derided as "expensive, in short supply, and fundamentally [not quick at] produc[ing] things." But that is all explained neatly by market forces: Developers are expensive because competency is rare because understanding is low because computerizing is a painful commitment because operationalizing a business is equivalent to automating it away. Computers reveal the rent-seeking and capitalization simply by being themselves on their own.


Fred Brooks - No Silver Bullet: Essence and Accidents of Software Engineering, IEEE Computer, Vol. 20, No. 4 (April 1987) pp. 10-19.

https://www.cgl.ucsf.edu/Outreach/pc204/NoSilverBullet.html

Read and inwardly digest


Yes!

I think this line in particular is a good TL;DR of the issues discussed in the linked article:

> The complexity of software is an essential property, not an accidental one. Hence descriptions of a software entity that abstract away its complexity often abstract away its essence.


This reminds me of Gartner's "Citizen Developer" line used to sell expensive software to gullible managers.

I note there is - strangely - no corresponding "Citizen Management Consultant" role.

https://www.gartner.com/en/information-technology/glossary/c...


While everyone is right that a "no code" platform cannot replace all developers, if you take away the extreme framing, there is an interesting trend occurring. These days, business power users can accomplish a lot with tools like Excel (this one was always true), Google Forms, Salesforce, Airtable, Looker, and so on. People can define entities, custom fields, workflows, reports... things that used to be feature requests for developers.

Of course, many developers have had the experience of having to come in and deal with some insane Excel spreadsheets, but many of us have also been called in to deal with crazy legacy systems built by professional developers. That itself is not an indictment.

As these tools grow, presenting new and lower cost ways of getting a certain class of activities done, I think we would be well served to figure out how to play nicely with these tools. (It's not like we want to spend our time building custom CRUD apps anyway.)


This delusion is especially visible in the DevOps space. For some reason we have decided as an industry that instead of writing some code in whatever 'real' language we will base operational work on YAML with ad-hoc templating and stringly-typed programming constructs.

The main culprits are Ansible/Salt and all the string-templating based tools for Kubernetes (Helm/Kustomize/...).

Especially with tools like Helm I believe we reached peak insanity levels. Instead of using a general purpose or configuration-specific (like Jsonnet/Cue/Dhall) programming language to build and then emit in-memory objects to YAML manifests, the industry is using YAML to define templated YAML and then parametrize this templating with even more YAML.

Now we just write complex declarative configuration with no debugging facilities, a non-existent type system, inversion of control that usually ends up working against us and no way to interact with external data apart from stringing everything together with fragile shell scripts. But hey, there's no need to write a single line of Go/Python/Java/Ruby!
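
To make that concrete, here's a rough sketch of the "build in-memory objects, then emit YAML" approach in Python with PyYAML; the Deployment fields are illustrative, not a complete spec:

    import yaml  # PyYAML

    def deployment(name, image, replicas=1, env=None):
        # Build a Kubernetes-style Deployment as a plain dict; functions, loops
        # and default arguments replace {{ .Values... }} string templating.
        container = {"name": name, "image": image}
        if env:
            container["env"] = [{"name": k, "value": v} for k, v in env.items()]
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": replicas,
                "template": {"spec": {"containers": [container]}},
            },
        }

    manifests = [
        deployment("api", "example/api:1.2.3", replicas=3),
        deployment("worker", "example/worker:1.2.3"),
    ]
    # The result is still ordinary YAML that kubectl can consume.
    print(yaml.safe_dump_all(manifests, sort_keys=False))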


Well, I would have to disagree (but agree somewhat, as well :)). These insanity levels you describe, and I agree with you here, are actually pushed through projects by coders for coders.

Herein lies much of the "devops" problem, imo: coders seem to want to allow easy configuration management through "simple" declarations, inventing almost a new language in the process.

Having worked for 20 years with systems/configuration management, mainly as a developer, I can tell many coders have not. Not at scale, and not supporting 100s of different services simultaneously. Hence square wheels get re-invented.

To be rid of all the YAML templating and loosely coupled technical integrations you will have to treat config/service delivery/CI/CD/what-have-you as a business domain of its own, and develop it the same way.

I guess we agree, completely perhaps?! :)

I’ve been fortunate to have been able to work everything from small upstart to big corporate enterprises and to me this is more true now than ever.

As a developer I build my own version of a CMDB; I do not use crazy templating - I write an API that leverages the CMDB and produces sane output. It's still configuration as code, just something that is easier to scale and adapt.

Guess I’m only disagreeing on whom to blame for this mess.


YAML is okay for representing a simple declarative configuration. It starts to break down when you try to encode too much operational logic into it.

If we step back, there are a few different strategies for dealing with operational logic. From least scalable to most (roughly):

1. Human processes (ex: checklist)

2. In the config itself (ex: Kubernetes' YAML)

3. A config generator (ex: Helm, Kustomize)

4. Software (ex: Kubernetes Operator)

I see a lot of people get stuck doing one for too long.

When doing one becomes unwieldy, it's time to consider the next. That or find a way to reduce the need for all that operational logic.


> simple declarative configuration

The point of all this is that software configuration is never a simple declaration. It is always a mess of behaviors and derived values.

I am not sure I totally agree, but it is at least correct often enough that something always breaks. And adding abstraction layers just because you chose an underpowered notation at the bottom isn't good practice.


To play devil's advocate, isn't the idea with Ansible at least that it's idempotent? The YAML should describe the final state and shouldn't be concerned with branching or lower-level features like that.

(It's the same with SQL, you describe what you want back rather than how its achieved. A declarative approach works fairly well there.)


However, that final YAML-defined state is not guaranteed to define the entire functionality of the system. The typical counterexample is this: removing a 'create file' clause does not cause that file to be removed on subsequent runs, as the tool has no concept of ownership or diffing against previous states. The emitted YAML does not represent the intended full state of the machine, just what actions to take to bring it from some unspecified historical state to an underspecified target state. There is no guarantee of consistency.

Thus, it is very easy to get in a situation where an Ansible playbook applied against two machines with slightly different production history will result in very different behaviour.

If you want real declarative configuration management, try Nix/NixOS.


> removing a 'create file' clause does not cause that file to be removed on subsequent runs, as the tool has no concept of ownership

Terraform on the other hand does have a concept of ownership, diff-ing, and applying changes. It takes some work as you now need to track state, but I've been very happy with Terraform.


> isn't the idea with Ansible at least that it's idempotent.

Sure, that's true with many of these, so what?

The problem isn't that the languages are declarative—functional code is declarative.

The problem is that they are extremely limited in their expressive capabilities, so that it is much more complex and error prone to describe the final state in them than it would be—even in a declarative style specifying the final configuration—in a more complete language than YAML (or, in many cases, a language essentially limited to JSON’s expressive capability even if it also supports a YAML serialization.)

> It's the same with SQL, you describe what you want back rather than how its achieved.

YAML would be an inadequate alternative for SQL’s role, too.


While (most) actions in Ansible modules are idempotent, the entire playbook is not. So how your system is going to look at the end of a run is highly dependent on the order of everything in your YAML files and the current state of the system. You can ensure a file exists in one part and remove the directory containing it in another, and you are none the wiser unless you run the playbook multiple times and pay attention to the changes.


There are plenty of declarative programming language families. Lisps, MLs, SQLs. Configuration is code. Use a programming language.


Here's an example of a config system using prolog: https://github.com/larsyencken/marelle

It's hard to tell where marelle ends and the config begins, since it's all just prolog.


So how do you differentiate a declarative language like SQL from a declarative YAML file for Ansible?

What would be the benefit of adding a declarative language into the mix over YAML?


Tooling. Data transformation. Libraries.

Emacs is configured using Lisp. Because of that it is amazingly configurable. XMonad uses Haskell, which gives types to avoid lots of error cases.

The thing is, providing an actual programming language doesn't take anything away, especially if the same config can be written just as cleanly.

And you would avoid "YAML templates for creating YAML" when you actually need to process some data (even if it's just creating prefixes/suffixes for names). Secrets retrieval is another thing you need to do, and currently have to template.

Also, YAML is a terrible language to parse and can give weird error cases. Other languages' compilers/interpreters are more mature.

And of course, if you provide SDKs for your IaC tool, you should be able to use the language your developers are most familiar with, taking advantage of the good practices they're used to. Don't limit yourself to "declarative languages". Use a programming language where that makes more sense, and leave data languages for data.


If you want to build a DSL, build a DSL. DSLs are easily embeddable in the mentioned programming languages.

YAML is not typed and you can basically do whatever. The worst of all worlds.


> What would be the benefit of adding a declaritive language into the mix over YAML?

The capability to readily define, store in libraries, and use reusable abstractions that apply within a configuration or across multiple individual configurations.


What's needed is an internal data-structure that defines the state needed -- YAML declares the structure directly, but an alternative is to use either a DSL or even a general-purpose language to build that data-structure.

Of course, you could write code to generate your YAML, but the tooling is _not_ going to help you with that today.

Maven and Gradle are examples of each of these methods -- both build up an internal model of the build, but while Maven uses XML, Gradle uses a Groozy DSL.


It's fascinating reading such a strong argument for using Lisp...and then considering anything other than Lisp.


Of course, I meant Groovy. The keys (at least on my Dvorak keyboard) are right next to each other :P.


I'd argue the approach only somewhat works for SQL. On a superficial level it allows you to hide the implementation details of your query execution; in practice, not so much. Hence keywords like "explain" were invented, and statements like "CREATE INDEX". And table partitioning. And hints. And a myriad of other practices where the SQL abstraction "leaks" details from the layer below.


You can implement Lua in fewer lines of code than YAML; Lua has been around longer, has an easier syntax, loads of support, and a proper community around it. Yet people still use YAML on hype alone. It's awful.


So, those two things are quite different. One is a programming language, the other a markup specification.

And Lua has its own hype, just not in configuration scenarios.


The ironic thing is that—specifically in the cloud space—we call this no-code abomination “infrastructure as code”.

But at least there is the AWS CDK and Pulumi now to enable “infrastructure as code” as code.


> Especially with tools like Helm I believe we reached peak insanity levels. Instead of using a general purpose or configuration-specific (like Jsonnet/Cue/Dhall) programming language to build and then emit in-memory objects to YAML manifests, the industry is using YAML to define templated YAML and then parametrize this templating with even more YAML.

As a big proponent of Dhall, I gotta say, probably the primary reason for that at this point is due to a lack of language bindings. Most tools in the Kubernetes ecosystem are written in Go. Currently, there is no official Go language binding for Dhall - one implementation[1] is pretty close, but the marshalling features need some work, and it's not the author's day job. The only way to get Dhall configuration into a Go program today is to either shell out and call the Haskell-based reference implementation directly or to script your Go binary such that the Haskell-based reference implementation translates the Dhall configuration to JSON and then feeds that into the Go program. Even that's not ideal - you end up maintaining your schema in Go code and in Dhall simultaneously.

I very much believe that Dhall will solve the stringly-typed insanity which you refer to, but the language is not quite there yet. While I'd love for some of the Kubernetes core developers to step up, I mean, I can understand people's unwillingness to adopt something that isn't 100% perfectly well-supported and handed to them on a silver platter.

[1]: https://github.com/philandstuff/dhall-golang


There is a levels of indirection problem with that explanation.

Using Dhall to create the YML you will use isn't great, but it's quite possible today and is much better than using YAML to script a YAML generator that will read some more YAML to create the YAML you will use.

I guess some people simply like YAML.


I agree, which is why I use Dhall at work. But the fact of the matter is that `kubectl apply -f config.dhall` isn't realistic yet, and there's a big segment of the market that's just not willing to accept anything but that.


This sounds a lot like the "We do everything with XML now"-phase that the Java world has gone through.


Agreed, this is a very strong parallel.


Powershell shines here.

DSC or pure PowerShell has a bunch of useful tools and a real programming language to use in between. YAML-like configuration can be achieved with nested hashtables that have almost equivalent readability without losing any of the development capabilities.


I'm reminded a bit of systemd. Whatever their problems, when things went south, it was usually fairly easy for someone who could code to walk through the init scripts involved and figure it out. The config files of systemd at first glance look quite simple and declarative, but if things don't work, one almost immediately slams into a wall of complexity.

Sometimes one doesn't "need" a coder because the system is such that it wouldn't do you any good anyway.


Exactly, the fix is also obvious but not quite happening just yet. Several languages now support internal domain specific languages. Several of those are strongly typed. And several of those can compile to native code and integrate with other stuff around them and also come typically with nice tool support in the form of syntax highlighting, auto complete, etc. Basically all the stuff that yaml sucks at can be fixed by having a richer language model than free form strings in free form objects and lists with no semantics whatsoever; no auto complete; and nothing but run time errors to tell you you copy pasted it wrong (because that's what happens when there is no tool support).

IMHO a big part of the problem is that the companies involved are actively incentivized to justify their existence by making things more complicated instead of less complicated. If things are too simple, nobody needs their consulting, training, support, etc. Worse most of these companies are competing for attention and typically only provide part of the solution instead of the whole solution. So there's a lot of money in cross integrating multiple complicated solutions from different companies. Indeed many of the companies involved are serial offenders when it comes to this. As soon as you get companies like Red Hat, Amazon, Oracle, etc. get interested in stuff, brace yourself for some convoluted vendor lockin. That's basically what happened to Kubernetes.

When it comes to languages, I'd love something that has a compiler and auto complete. I'd also like something that has a notion of internal domain specific languages. IMHO Typescript, Kotlin or Rust could work for this. The point of this would be leveraging a type system for static checks and code completion. That kind of rules out dynamic languages like javascript, python, or ruby (all of which have been used in devops in the past). They are nice but not nice enough.

However, TypeScript has enough static typing that you get some meaningful autocomplete. Also, it's already popular in the node.js world (i.e. lots of stuff available to integrate). Kotlin would also be able to fit in that world since there's a transpiler to JavaScript, it can utilize TypeScript type definitions and it integrates with node.js and npm. Rust would be a bit of an outlier here and probably a bit too much in terms of language complexity for the average YAML-loving devops type with a mere few years of full stack experience. With Kotlin, the transition of Gradle from Groovy to Kotlin has been interesting. Groovy basically is a dynamically typed language and they've slowly been replacing it with the more strictly typed Kotlin. That sort of stabilized with v5, with v6 it got usable for a lot of projects, and recent updates to IntelliJ have improved how it is able to support developers with e.g. code completion and other features. Something like that aimed at devops could work.


Thank god trends like Pulumi and the new AWS SDK are emerging.

General purpose programming languages are getting more expressive by the day, so why do we use data serialization languages for configs instead? It doesn't make any sense.

Configuration is code not data.


> General purpose programming languages are getting more expressive by the day

You know, once upon a time, we understood that declarative approaches to software engineering were superior to imperative approaches, when declarative approaches are feasible. Declarative approaches are much safer and easier to test, at a cost of only being able to express what the tool accepting the declarative approach can understand. Imperative approaches are strictly worse for any problem set where a declarative approach solves the problem within performance requirements. The additional expressiveness of languages like Pulumi is the last thing I want.

YAML is a horrible language for declarative system configuration because a) any sufficiently complex system will require you to generate your declarative codebase in the name of maintainability, b) generating code for any language where whitespace is significant will lead you to an early death, and c) stringly-typed languages are fundamentally unmaintainable at sufficient scale. But this is not an indictment of a declarative approach! It is an indictment of YAML.

> Configuration is code not data.

Data > code. Data does not need to be debugged. The best code you can have is deleted code - deleted code does not need to be maintained, updated, or patched. Code is a necessary evil we write in order to build operable systems, not a virtue in and of itself.


I use Lua for configuration files. It's easy to restrict what you can do in Lua (I load configuration data into its own global state with nothing it can reference but itself). Plus, I can define local data to help ease the configuration:

    local webdir = "/www/site/htdocs"

    templates = 
    {
      {
        template = "html/regular",
        output   = webdir .. "/index.html",
        items    = "7d",
        reverse  = true
      },
      
      {
        template = "rss",
        output   = webdir .. "/index.rss",
        items    = 15,
        reverse  = true
      },
      
      {
        template = "atom",
        output   = webdir .. "/index.atom",
        items    = 15,
        reverse  = true
      },
    }
When I reference the configuration state, templates[1].output will be "/www/site/htdocs/index.html". And if the base directory changes, I only have to change it in one location, and not three.


I think "declarative" is a bit of a red herring here. Deterministic/reproducible/pure is a more appropriate distinction: configuration languages like JSON/YAML/XML/s-expressions/etc. are trivially deterministic, but not very expressive, leading to boilerplate, repetition, external pre/post-processing scripts, etc.

Allowing computation can alleviate some of those problems, whether it's done "declaratively" (e.g. prolog-like, as in cue) or not (e.g. like idealised algol with memory cells).

The main reason to avoid jumping to something like Python isn't that it's "not declarative"; it's that Python is impure, and hence may give different results on each run (depending on external state, random number generators, etc.). Python can also perform arbitrary external effects, like deleting files, which is another manifestation of impurity that we'd generally like to avoid in config.

tl;dr The problem isn't the style of computation, it's the available primitives. Don't add non-deterministic or externally-visible effects to the language, and it wouldn't really matter to me whether it's "declarative" or not.
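
A two-line illustration of the impurity I mean; both are legal Python "configuration" (the variable names are made up), yet neither gives the same answer twice:

    import os
    import random

    workers = random.randint(2, 8)              # non-deterministic
    region = os.environ.get("REGION", "eu-1")   # depends on external state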


That's a bit of a no-true-scotsman there. If the problem is just the markup of choice, we should see an alternative markup emerging any time now. If we see imperative-focused tools instead, maybe it's not just about the markup.


We do see alternative "markups", if you want to call them that, emerging that solve the generative issues - the two that come to mind are Dhall and CUE. They emit JSON to help them interoperate and stay relevant in a world that predominantly expects JSON/YAML, but they can also be read directly.


There are declarative general purpose programing languages.

That data you are talking about does need to be debugged, like Helm charts and pipeline definitions. Sure data is better, but config is code, not data.


Generators need to be debugged, not data. It's very easy to test a generator - a few unit tests checking whether, for a given input, the generator produced the expected output, and you're set. Data sometimes needs to be cleaned, but there's no such thing as a bug in data.
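
For example, a sketch of such a test in Python; `render` is a hypothetical stand-in for whatever generator you maintain:

    def render(service):
        # Hypothetical generator: turn a small input dict into a config snippet.
        return (f"upstream {service['name']} "
                f"{{ server {service['host']}:{service['port']}; }}")

    def test_render_produces_expected_output():
        given = {"name": "api", "host": "10.0.0.5", "port": 8080}
        expected = "upstream api { server 10.0.0.5:8080; }"
        assert render(given) == expected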

Whether the generated declarative output produces the expected behavior on the part of the tool interpreting the declarative output is part of the tool's contract, not the generator or the declarative output. If you need to check the tool's behavior then either a) you wrote the tool or b) you're writing an acceptance test for the tool, which is an entirely different endeavor.


Things like pipeline definitions and helm charts are generators.


No, Helm uses charts (data) to generate object definitions (in YAML). Helm is the generator.

There's nothing that prevents you from writing a unit test that runs `helm template` directly to check whether a given chart with given values will produce a given set of YAML files.
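
Something along these lines, for instance (the paths and values are made up, and the exact `helm template` invocation varies a bit between Helm 2 and Helm 3, which may want a release name or --generate-name):

    import subprocess

    import yaml  # PyYAML

    def test_chart_renders_three_replicas():
        # Render the chart the same way CI would, then assert on the output.
        rendered = subprocess.run(
            ["helm", "template", "./mychart", "-f", "values/prod.yaml"],
            capture_output=True, text=True, check=True,
        ).stdout
        docs = [d for d in yaml.safe_load_all(rendered) if d]
        deployment = next(d for d in docs if d.get("kind") == "Deployment")
        assert deployment["spec"]["replicas"] == 3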


>but config is code, not data.

Config is both. Config variables are data. The code that accesses and uses those variables is...well...code. They should be kept separate, like any other code and data. Config isn't a separate special entity; it's just another part of the program. The data part should be represented as such and the code part should be code. Trying to combine them and create a special 'config' language is a mistake.


The main problem is that, in my experience, the majority of DevOps teams are Ops teams that have been renamed and refocused towards automation.

These are people that by and large don't want to code, not saying that they can't or won't.

To be fair, this has been in Windows shops, where scripting has only recently (the last 5-10 years) taken off, so you've got a lot of Windows admins for whom the closest they've been to code is batch scripting with a bit of PowerShell. This is a big change for them.

As it happens I read about Pulumi recently and I've put it on my list of to-do things, but I can't see that I'll be able to sell it to our team, and our team is blessed (cursed?) with three former developers.


> These are people that by and large don't want to code

I disagree. That's the stereotype that tool builders have of such people. Good ops people have always loved coding, or we wouldn't be living on the mountains of Bash scripts also known as "Linux distributions".

(Besides, people who don't like to code won't like writing tons of declarative markup either. So there is little point in the current approach either way.)


It might be a stereotype but it's also my experience, which, granted, is limited and Windows-based; and Windows, as I pointed out in the previous comment, hasn't really been on board with the scripting experience until relatively recently.

As to the declarative markup, Azure DevOps for instance still doesn't have feature parity between YAML pipelines and classic pipelines, so while you could well be right about the same resistance to YAML, it's not necessarily an issue yet.


Most devops tools are built on Linux for Linux, then ported to Windows later as an afterthought, so I don’t think platforms are much of a factor. It’s a defect in the production pipeline somewhere, something like feedback from potential users not reaching developers until it’s too late. It doesn’t help that, in some cases, vendors just impose what is going to happen, and the community is simply forced to put up with it.


> The main problem is that, in my experience, the majority of DevOps teams are Ops teams that have been renamed and refocused towards automation.

I'm on a relatively new team kind of like that (it's not a former Ops team, but most of the team was pulled from Ops/DBA/analysis teams and fits the description, and it's not really a DevOps team but more of a Dev+Ops team), and the general reaction of the team to being introduced to the AWS CDK has been "we need to move to that as soon as we can", even from people who very vocally never wanted to be programmers. And that's with, in many cases, a couple months' experience with both programming and YAML IaC in the form of CloudFormation.


> Thank god trends like Pulumi and the new AWS sdk is emerging.

The more things change, the more they stay the same.

The pendulum is swinging back towards scripting languages. But give it a few years, and we'll be railing against scripting languages for not being idempotent enough, and moving back to the latest version of YAML and XML (perhaps we'll go with TOML this time) with custom interpreters.

We have seen this half a dozen times already, from Turing-complete DSLs in Ruby to plain Bash, Perl, and Python scripts. On the other side are the DSLs written using YAML, JSON, and XML (and every other config language written in history).


I still have my reservations against Pulumi in particular, though.

I've only really skimmed their documentation, but the idea that calling 'new' against a VM class instantiates a VM in production seems too magical for my taste, and might be an indication that it's yet another product that targets the happy path more than what we tend to experience in production: failures.


> Configuration is code not data.

As any Lisper knows, code is data is code.


> new AWS SDK

Could you share some details on that? A quick Google search didn't reveal anything about major changes to the SDK.


Probably referring to the CDK (Cloud Development Kit), a newish SDK for developing Cloudformation infrastructure-as-code in general-purpose programming languages.

https://aws.amazon.com/cdk/


I believe the OP is referring to the AWS CDK (Cloud Development Kit) - https://docs.aws.amazon.com/cdk/latest/guide/home.html


Maybe because we've chosen such rigid, inexpressive and DSL-hostile programming languages for devops... I mean, Python, Go... ofc you'd rather write your DSL in YAML instead and forego any checking the language might provide since everything else would be too awkward. Something like a "statically typed Ruby" would probably shine here.


TL;DR: "your language sucks, my language would shine"

Language doesn't matter. What matters is the attitude of tool builders that everything should "simply" be described as data. When that turns out to be insufficient, as it inevitably does, hacks are introduced and sooner or later you end up with crippled and idiosyncratic pseudo-languages.


I like to think of the no-code stuff like this:

- People who are into this stuff know there's something to it, but as a movement, we don't know exactly what it is.

- My personal feeling is that any no-code tool should be useful enough that I would use it. I want some no-code to make me fear for my career a bit.

- The "threat", I think, is very real. For example, whenever I see myself following a set of rules to write software and not thinking, I start to wonder if some abstraction is lurking in there. Maybe the solution is a programming library, but increasingly, I think there's opportunity for this stuff to be more visual.

Why visual?

- UI programming is necessarily visual, and a visual tool for building interfaces makes sense

- Tools around managing software development. GitHub is IMO a no-code tool. VSCode is. Many IDEs are.

Why not visual? Algorithms and business logic. Like the author, I'm unconvinced that flow diagrams will provide enough flexibility to be useful for all but the simplest cases.

I guess my feelings aren't that different from the author's but I think the difference is I'm optimistic that the movement will be generative.


I don't think your career is any danger from better tools. (At least short of full-AI, which would put every career in danger.)

All kinds of amazing visual interface building tools have been created, that are very easy to use, easy to teach, easy to get started, and are very powerful. I'm still not sure that a better UI development tool than Hypercard has been invented yet.

So why do professional programmers still exist? Because most people don't want to do even that level of software development. Either they find it beneath them, find it boring, get frustrated when they want to do something that stretches the tool's capabilities, don't want to be responsible for fixing bugs and maintenance, etc. etc. etc.

It's not that most professionals are not smart enough to be programmers. It's that they dislike it enough to pay someone else to do it for them, or would just rather focus their time doing other things they enjoy more or believe to provide more value.


> So why do professional programmers still exist? Because most people don't want to do even that level of software development. Either they find it beneath them, find it boring, get frustrated when they want to do something that stretches the tool's capabilities, don't want to be responsible for fixing bugs and maintenance, etc. etc. etc.

I'm not so sure about this. Hypercard was extremely popular among non-programmers in its heyday. It is not their fault that Apple, who had no idea what to really do with the software, let it die on the vine.

Computer companies have inculcated a computing culture that places a sharp perceptual divide between users and programmers, leaving little room for anything in between. My belief is that this has happened for wider structural/economic reasons (contemporary emphasis on consumerism, short-term thinking, etc.) rather than any general distaste for "real computing" among regular people.

If we do not provide regular people with "Hypercard-like things" and instead give them shrinkwrapped solutions, we will of course have the perception that they have no interest in what we call -- for lack of a better term -- end user programming.


We had a lead who wanted to do front-end code but hated HTML. He really, really wanted us to use something where he could pretend he wasn't writing HTML. But AJAX was the big thing, our app required a lot of interactive HTML, and CSS3 was just on the radar.

He would mark stories as done that had tons of layout problems. He just couldn't be arsed to dig into it, because you actually had to stick your whole arm into the guts to get things to work. Between me and another guy who were doing 60% of the interactions and north of 80% of the bugfixes we finally browbeat him into going back to real HTML templates. He retreated to writing mostly backend code from then on, which honestly reduced our workload by itself.

Pretending you have a different set of core problems than you actually have has never ended well for me, and I'm convinced it has never ended well for anyone else either. Don't abstract away your problem domain, and don't kid yourself about what your problem domain is.


I think layout is a big hurdle for no-code tools. I also think it's going to get solved. But until then, I think you really gotta dig into HTML and CSS and understand them well to get things to work properly.

"Don't abstract away your problem domain"

I dunno. I tend to think of it as getting good at what you do, then finding the rules, then taking those patterns and making a company out of it where those patterns are built-in to the product.


> I want some no-code to make me fear for my career a bit.

It sounds like your career is software engineering, if so...

> My personal feeling is that any no-code tool should be useful enough that I would use it.

Someone who's a software engineer is not the litmus test.

These tools, from what I've seen so far, are NOT for software engineers.

So you wouldn't need to use them.

Instead, from what I've seen so far, these tools are primarily for those folks who:

- Cannot write software

- Do not want to learn how to write software (gasp!)

But DO want to put their (software) ideas out in the world, have control over them, without:

- Spending the money to hire a software engineer

- Partnering with a software engineer


You're right, most successful no-code tools are for non-programmers, and programmers shouldn't be the ultimate litmus test.

I also believe that if a coding tool will make me more productive, I'll use it, even if it's not a library or a language. Right now the visual tools are limited, but it doesn't have to be that way.

I've used Flash, Windows WPF apps, those visual tools for making apps in Xcode, and others. I think there's something to having visual tools for building apps. I think it's clear that UI doesn't need to be in code. Maybe state machine logic shouldn't be written in code either. Maybe high-level software architecture constraints shouldn't be in code.


This rings very true.

We sell an integration platform-as-a-service[1] and it offers a 'low code' visual environment to stitch together integrations.

But from pretty much the start, we built in support to drop down to JavaScript to build logic, and we think we hit a sweet spot there.

You can visually string together most components but drop in a bit of script to concisely define some hairy transformation or complex logic.

These kinds of low code environment are great for doing customizations or enhancements or integrating many things together. It's very much not an optimal solution for building entire applications or software solutions.

There's also the issue of tooling. There's a huge amount of infrastructure built around maintaining large application code bases (version control, merging, diffing). If you want to build large pieces of software in a no-code environment you still need all of those tools - except they don't exist and are non-standard.

[1] https://lucyinthesky.io


+1,000,000

As the article points out, the problem with these is that they are sold as a silver bullet, and companies who don't know any better will spend millions putting a lot of code in the script nodes of their "low code" platform.

Code that can't be unit tested properly, that has at best some really crappy tools for automation, and limitations that drive everybody crazy (i.e. getting logs into ES/Splunk or whatever - syslog over UDP over the internet, ftw!).

You are clearly a responsible vendor; I wish the others I deal with would be honest with their customers about what low-code tools do well vs. what they don't.


The quote that sticks with me is "New Blocks means new builders".

I'm pretty sure this new 'movement' will gain a lot of steam, probably mostly because of the 'no developer' dreams.

But the most value I find is when working with very structured people - who understand data AND LOGIC - but don't know how to code. They do not have to write a spec, but can instead make a working prototype pretty quickly.

I actually think the biggest change from earlier efforts is that 'No Code' no longer seems to be a dead end, as it has been before.

If you grow out of the No-Code tools, it's possible to replace parts of the No-Code experience using microservices and serverless.


I once read that the "no code" trend isn't really more than a marketing one. It went more or less like this: it would be great to build software without having to write code: you'd just have to specify what your program does. Except that a program's exact specification is the definition of its code, and the job of writing such exact specifications is what developers do. Conclusion: any "no code" tool will just be another way to write code ;)

Obviously the trick is in the difference between exact and fuzzy specifications. As a software engineer I could say my main job is to translate approximate specifications of business needs into a list of specific instructions on how to solve them i.e. code that can run on a computer. We'll probably have better tools and more efficient abstractions to do so but we'll still have to write some code.

I guess the "no code" topic could better be treated as "years of experience vs. hours of training". During my first job I once had to use a graphical programming tool to design and run computer simulations of a power plant: just connect some predefined building blocks, input some simulation parameters in each block and voilà! Except that the simulations quickly started to require hundreds of said building blocks, so that my coworkers spent their days clicking around only to start over when we needed to use different simulation parameters. Got trained a few hours on how to use the graphical tool but couldn't accept I now had to click and input numbers all day long... Somehow found that an old user manual actually existed and discovered that I could instead write plain text files to specify the computer simulations. It used a very verbose ad-hoc language so I wrote a Python script to generate the files I needed and when the requirements changed I could redefine the simulation in minutes instead of days. Later got fired for being some sort of troublemaker but I now work as a fullstack dev in Python and Javascript.


No code has been a thing for a long time. I don't think we need to be so defensive, though. The limitations of this approach haven't changed in the decades since its inception.

I think it's a little ignorant and short sighted to speak down on the current trend. The reality is, there used to be a whole host of no-code products that solved a whole host of problems. We haven't outgrown those problems, but the solutions we had didn't scale well with the technologies we use today.

We need similar tools to those that we had before, updated and adapted for modern technology. Some of the new tools go a step further, which is incredible. Think about every aunt and uncle who have an idea that would solve a problem they and 10 of their friends have. Those 11 people are super stoked about the tool that Aunt Linda built. Why can't we let them have that?


The "No Code" movement is not a delusion. Most of us in tech are very ego driven and don't think this will happen, the majority is not in tech and want this to happen. It will happen.


I feel like everyone misunderstood what the "No Code" movement is supposed to be about.

It's "No Code is the Best Code". The idea is that the code you should be writing should be things that are the core of what your business does. Those things should be sufficiently hard to replicate. Everything else should be a commoditized product or open source infrastructure that you deploy.

This way you avoid spending time reinventing the wheel or, much more importantly, making your product something that's easy to commoditize!

I've worked at companies with hundreds of complex Java microservices that essentially replicate built-in functionality provided by NGINX, HAProxy, etc. Literally for no other reason than that the people making the decisions know how to write code, but they don't know how to productionize and configure NGINX or HAProxy.

These companies are more than happy to waste hundreds and thousands of man-hours building and maintaining this code that isn't their product versus hiring more competent ops folks to stand up infrastructure or train developers on cross-functional skills.

When the size of your engineering org starts climbing into the hundreds and thousands, NIH Syndrome kicks into high gear. Heck, I'd actually say it starts around 30.


I don't mind it happening. It's just that most of these tools are rehashes of stuff that happened in the '90s and still have the same problems.

Programming has been an evolution toward higher and higher abstractions. This is no different.

We will probably move to a higher abstraction, but it will probably be something like describing the problem with tests and the computer automatically writing something that passes those tests.

You still need these skills. You will just accomplish more in less time.
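
As a toy illustration of what that might look like (hypothetical, in Python): the tests act as the specification, and the hand-written implementation below just stands in for whatever a future tool would generate.

  # Hypothetical sketch: the tests are the specification.
  def test_slugify(slugify):
      assert slugify("Hello World") == "hello-world"
      assert slugify("  No  Code ") == "no-code"
      assert slugify("Already-fine") == "already-fine"

  # Today a human writes this part; the idea is that a tool would search
  # for or generate an implementation that satisfies the tests above.
  def slugify(text):
      return "-".join(text.lower().split())

  test_slugify(slugify)  # passes, so this candidate would be accepted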


Is it no code or no programmer that's the goal? The latter seems reasonable if you're a software product customer who would like to cut out the person between you and the product - the one who seems slow, only to then misunderstand what's needed.

E.g. if you could get the end product with less waiting and miscommunication - I can see that being a sales pitch that resonates.


>or no programmer

That seems to be the gist of it. SaaS seems to tackle this specific issue by trying to move everything away from the customers in the first place. Don't need sysadmins or software devs (here) if they're in the cloud (over there).

Their pricing model is often advantageous to smaller companies as well; paying only for what you use, even if that price is ridiculous per unit sold, still often beats hiring dedicated staff. This effect lessens over time as the company grows.


The add_domain_name code has another buffer overflow problem

  const size_t size = 1024;
  char *dest = malloc(size+1);
  strncpy(dest, source, size);
  strncat(dest, "@example.com", size);
  return dest;
The first strncpy will not append a NUL character when strlen(source) is greater than or equal to size; the rules for strncpy and NUL termination are just about useless outside of a few specialty cases.

(heinrich5991 found a separate problem with the strncat).

IMHO, this helps prove the point about levels of abstraction. The only reason that we argue over the idea that higher level == better is that we're programmers, and we argue over everything :-)


I have a hard time taking the author seriously if he confuses low level vs. high level languages for primitive vs. advanced in language design.


Everyone seems to want to make an analogy for why no-code tools are bad, or will end badly.

But I've watched countless numbers of individuals start & build full web apps, websites, mobile apps, and more - all without code.

No-code tools are viable to use to build a product or business, and they're here to stay.

They're not here to "take dev jobs", but they are here to empower the 99.5% of people that don't know how to code.

Imagine what features more advanced no-code tool platforms like Webflow.com & Bubble.io will have in 5 years.

While y'all discuss pros/cons, we'll be busy building.


Well, you could argue that _all_ software applications -- web browsers, word processors, etc, are highly abstracted instances of graphical domain-specific programming languages.

I actually think this is a useful way to think about things. We _all_ are computer programmers - every single user. We just operate at different levels of abstraction, and use different tools or languages to program the machine.

So then, rather than "No Code" vs "Code", it becomes "What is the cost of a higher-level abstraction" vs "What is the cost/benefit of a lower-level abstraction".

And if it makes sense to name one level of abstraction "code", and a level above that "not code", then, so be it.

[EDIT: I think some useful examples are Sketch, Unity, Pro Tools, Photoshop, etc. Graphical interfaces for users to perform complex operations with a high level of abstraction. I guess I am arguing for, strictly speaking, considering all software user interfaces as highly abstracted DSL's.]


I've read plenty of comments about this article the last few days, it has made the rounds in popular sites, and the verdict is the same as the author's.

What's surprising is that no one sees the elephant in the room: the problem with coding is that the abstractions available in coding are the same ones offered by operating systems: files, processes, threads, mutexes, sockets, etc. These are the wrong types of abstractions for the majority of cases!!!

What we should have instead is data, functions, events and channels.

If we had that, then we wouldn't need to have articles about 'no code', we wouldn't need visual programming tools.

The abstractions we have now only get in the way of programming. They make development really slow. If we had the set of abstractions mentioned above, it would have been a lot easier to create solutions!

I am not saying that the abstractions we have now are totally useless; they are not, of course, because they allow us to make things, but they aren't what we need in most cases.


As a marketer, I always look for no code solutions first. API integrations? Use Zapier. Email form integration? Sumo or similar. Triggers? Google tag manager. Etc. The reason being that the dev team at any company never has time for new projects.


Yeah I've always read this to be a failure of engineering to produce stuff in time. I wonder if it's a failing of open source that it hasn't gotten to web infrastructure yet.


So, instead of "learning a new language", they're "configuring a new integration"? Guess how much these differ...spoiler: two marketing labels for the same thing.


I can train one of my juniors in how zapier works in an afternoon. Teaching them (or myself) a programming language would take years and be more error prone.


I can teach IF, FOR, and the basics of python in a week. I can teach ruby in about the same. For what they are going to need for a low code solution, that's enough. You don't need to teach all of programming - only a subset that would be covered in a low/no code solution.
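
To give a feel for the size of that subset, here's a sketch of the kind of "week one" Python I mean (the CSV file and column names are made up for illustration):

  # Roughly the "week one" subset: IF, FOR, and reading a file.
  import csv

  with open("leads.csv") as f:
      rows = list(csv.DictReader(f))

  overdue = []
  for row in rows:                      # FOR
      if row["status"] == "overdue":    # IF
          overdue.append(row["email"])

  print(len(overdue), "overdue leads")
  for email in overdue:
      print("would notify:", email)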

If you don't believe this, look at how many non-CS professionals have learned VBA as their excel-fu reached a limit. Even if it's not clean to start, the natural human curiosity starts to take over.

One of the better programmers I ever knew had a high school education and was working in an airplane maintenance facility when he was frustrated with the tools he had and just started learning VBA on his own.


I think you might be underestimating what these tools can do. I'm not the person you're replying to, but I have also seen a junior marketing person create a Web form to be sent to clients (using Jotform or similar), have it output to a Google sheet, have Zapier pick it up and move it to Airtable, where he defined a bunch of extra fields for internal staff to annotate the submissions, all while getting notifications when certain conditions were met.

Not only could he not learn enough python in a week to do this, but the professional developers on my team could not do this in a week.


This actually feels very unixish. `form | sheet; crontab: zapier | airtable`. Simple tools that can be simply combined - I feel similar about Tasker, where I can combine various building blocks without worrying about compiling an app. (It does get cumbersome without an IDE)


> I can teach IF, FOR, and the basics of python in a week. I can teach ruby in about the same. For what they are going to need for a low code solution, that's enough. You don't need to teach all of programming - only a subset that would be covered in a low/no code solution.

OK, now what? Do you teach them deployment of those "low code solutions"? Is it another week?

"Oh, now you want to send a notification to Slack? That's a little above basics. What about another week of learning the concept of libraries?"

Do you see the pattern?


Doesn't matter if teaching fundamentals only takes "a week" -- that can't be afforded when you need a tool right now.


While true, that's a big failure of business - fix the pipeline before the fire, no?


Okay, now that makes sense. I have originally understood it as "...and the dev team also gets to manage the no-code thing."


    BASIC was an attempt to allow people to write software
    in what looked like English, and indeed it was extremely
    successful (cf. Visual Basic).
The language designed to look like English was COBOL. BASIC looked nothing like English. The reason it became popular was because you could develop programs interactively - edit, run, load, and save programs - without ever leaving the system. It was still programming, though (as was COBOL).

Visual Basic was released many years later, and looked very different from the original BASIC.


Wasn't the main reason for VB's success the fact that it had a really good GUI builder and that a version was integrated into Excel?


Yes, the VB GUI builder was excellent (as compared to the language itself, which was meh, IMO).

Some people even used it to build GUIs for applications where the bulk of the code was written in another language.

VBA was in all the Office applications... Excel, Word, Powerpoint, Access...


The idea isn't new. I first saw it as '4GLs', which were higher-level languages. Later we had rule engines and now AI.

Programmers never go away, though. Besides the need to write the customizations that the higher-level languages miss, there is the need for disciplined thinking and testing. No matter how simple the tools become, the need for these remains.


You forgot about "model-drive development". And before that, COBOL was sold on the same pipe dream.


Cute article, but worthless and shallow.

Sure you can replace 4-5 lines of Python with some box-based UI rule system, but anybody who's not first week out of a 2-week coding boot camp knows that the devil is in the details. What if instead of “display error” it’s “try again if it’s a mail server error, up to a maximum of 4 tries, then email the administrator and save debugging info”?
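
To make that concrete, the "hairy" version is still only a handful of lines of ordinary Python. This is just a sketch with stubbed-out send/notify functions, not anyone's real API:

  import time

  class MailServerError(Exception):
      pass

  def send_mail(message):                # stub standing in for a real mailer
      raise MailServerError("pretend the server is down")

  def email_admin(subject, debug_info):  # stub standing in for real alerting
      print("ADMIN:", subject, debug_info)

  def deliver(message, max_tries=4):
      last_error = None
      for attempt in range(1, max_tries + 1):
          try:
              send_mail(message)
              return True
          except MailServerError as err:
              last_error = err
              time.sleep(attempt)        # crude backoff between retries
      email_admin("delivery failed", debug_info=repr(last_error))
      return False

  deliver("hello")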

Then what about a loop? Have you ever tried to debug a loop in a UI programming environment? What about a complex loop with custom data types and no ability to see a stack trace or variable list? You gotta put print dialogs and system exits all over the place, and oh, the print dialog only prints a small string? Now you gotta do 1 variable at a time, 200 times, to debug. Now I’m showing this to the “non-coder” who’s thinking “this is a giant headache” - and you know what??? HE IS RIGHT, because we have both been forced to use this crappy, blunt tool. Wasn’t this supposed to make things EASIER? Now the business guy AND the programmer are tied up trying to shoehorn what would otherwise be trivial for the programmer into this week’s novelty rinky-dink “no code” tool.

UI drag-and-drop replacements only work for trivial examples, or they become a giant unmaintainable spaghetti diagram that’s so complex you can’t even follow the lines, let alone understand program control flow.

The fundamental problem with existing software is composability. Us geeks have it at the command line, but regular users can’t pipe their browser through grep and then to their printer.

Users are stuck with the fixed options available at software design time. If we could standardise a composable interface on all software (I don’t think “bytes” is the right abstraction for this, it’s too low level), then users could start dynamically composing their own solutions, which will take a decent shift in thinking for most people - but a generational cycle will fix it if young kids grow up taking composability for granted.


That's... exactly what the article says.


I think we’ve already been moving closer and closer to “no code” simply through the evolution of programming languages and tools, much more than we have through the advancement of specific “no code” frameworks.

Programming languages are becoming higher and higher level. Abstracting more and more boilerplate and plumbing. Then you have frameworks specifically designed to fill that role, and package repositories, further reducing the amount of code you have to write.

The outcomes are that:

1) Your source code has a much higher ratio of business logic to everything else

2) Your business logic is expressed in fewer lines of code, and in ways that are increasingly similar to its natural language representation

We haven’t reached “no code”, but we are currently at “much less code”, and a lot of projects can be achieved with “very little code at all”.


I've seen this so many times - rather than just write code to do something, wouldn't it be grand if the users could plug their code in with some scripting language or twiddle some configuration and poof! magic! new features just happen!

It's one of those things that looks grand on the whiteboard and fantastic on the slide deck, but holy fucking christ when it comes down to implementing it and moreover, having to support it forever after, the pain spirals. Users can't actually implement their plugins, so you end up writing them for them, in a gimped DSL or scripting language, and all the initial choices that were made become ossified, because you then can't ever risk breaking some potential plugin code out there that depends on the API.


> “No Code” systems are extremely good for putting together proofs-of-concept which can demonstrate the value of moving forward with development.

Another positive aspect is the ability to teach junior coders. One of the coolest things on that front is MIT App Inventor https://appinventor.mit.edu/ — a webapp for creating Android apps using Blockly-style blocks; see the screenshot here: https://appinventor.mit.edu/explore/designer-blocks#blocks

I was super surprised to see middle-school kids building real .apk files and running them on their phones.


What makes programming difficult isn't the syntax, it's the need to define the problem in a completely unambiguous and consistent manner.


You forgot a couple of things. You need to define the problem in a way that's complete (no missing corner cases) and be able to debug it when it goes wrong.


Good points.

I think it's interesting to compare source code and legal documents.

A similar precision, internal consistency, and lack of loopholes is needed when drafting a legal contract or legislation.

This is all done in natural language, but it doesn't make it magically easier.


I often am thankful that I work with software and not contracts. Could you imagine writing software where the instructions are not guaranteed to work the same each time, and there is no way to test before going into production?


"The last one" avoided the need to code back in 81 https://wikipedia.org/wiki/The_Last_One_(software) and no one has coded since.

But abstractions and declarative languages have their place. Consider relational algebra and SQL, and all the little declarative languages within other languages, like regular expressions, printf/scanf format strings, even char escape sequences.

But these require a lot of thought and talent to come up with, and to design (e.g. Codd didn't design SQL). Whereas TFA is talking about whole languages, designed ad hoc.


"No Code" tools do have an actual advantage in that they only allow to create valid programs. They don't prevent bugs however so that advantage is pretty tenuous, it's akin to a type system that works as you go.

The main disadvantages are that they are harder to edit, copy, reuse, and quickly become a tangled mess when programs grow in size.

Maybe there is a middle ground to find somewhere: keep text for editability (because our whole education system is based on it, and I'm pretty sure humans have dedicated brain structures to process language), write basic blocks in a regular programming language, and create DSLs to write "business logic" in.
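
For the DSL part, even something deliberately tiny can carry a lot of business logic. A minimal sketch in Python, with an invented rule syntax:

  # Invented rule syntax: "<action> if <field> <op> <value>"
  RULES = """
  discount-10 if total > 100
  discount-20 if total > 500
  free-shipping if country == NL
  """

  def evaluate(rules_text, order):
      actions = []
      for raw in rules_text.strip().splitlines():
          line = raw.strip()
          action, _, cond = line.partition(" if ")
          field, op, value = cond.split()
          left = order[field]
          right = type(left)(value)      # coerce to the field's type
          matched = left > right if op == ">" else left == right
          if matched:
              actions.append(action)
      return actions

  print(evaluate(RULES, {"total": 600, "country": "NL"}))
  # ['discount-10', 'discount-20', 'free-shipping']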


The current focus on No Code will create a large number of tools a skilled technologist can use to improve productivity. The new tools will open up problem domains where the time and budget weren't there before. It's clearly not going to replace code, for reasons that everyone has enumerated here.

The reason it's getting VC attention right now, though, is because No Code is a way to discipline the market for programmer salaries. Even if the effect of it is more glue code over more protocols, the goal is to shift jobs from engineers to technicians.


"No code" does not equal "no programming". For me, programming is 90% about how I structure/organize a process, 10% is the expression of said process through code.


We tried building NoCode (used to be hosted at noapp.mobi), I think around January 2017. The project has been abandoned since then, you can find the project at -

1. Docs - https://github.com/veris-pr/v3-docs/wiki

2. Core - https://github.com/veris-pr/v3-core

3. Backend - https://github.com/veris-pr/v3-api/tree/develop

4. Mobile App - https://github.com/veris-pr/v3-app/tree/feat-offline

5. Mobile App - https://github.com/veris-pr/v3-client

Quick demo (forced under 2 minutes time, we tried to take this to TechCrunch), https://www.youtube.com/watch?v=HfZT75rduPg

This was a POC project. It was quite suitable for simple business process management apps. One of our customers has been using a conference room manager built on this platform for 3 years now.


I keep saying it. Whatever non-code representation you may have for your logic -- dataflow diagrams, interpretive dance, Minecraft redstone, whatever -- you need to make it unambiguous enough that a machine can act on it in a well-defined way, and once you do that what you have is isomorphic to a construct in some textual programming language. And lines of text are going to be easier to input, edit, and maintain than your chosen representation, barring some major UI revolution.


The problem with "No code" platforms is that you're relying on magic provided by the platform tools so when it's time to extend or modify the code it's a total nightmare (remember Dreamweaver?). In a sense it's vendor lock-in. "No code" systems work for simple things like visual layouts (Squarespace) or IFTT GUIs for simple actions but anything beyond will be a hassle to maintain and extend when you need to.


I'm not really sure I'm seeing anything that isn't the logical continuation of the move from low-level languages to higher level languages, and from lots of bespoke software to frameworks and libraries.

A big pile of Zapier gunk might be brittle, but I think tools like it are a natural progression in a society where everything is dependent on computers. To me it's part of "Everyone should learn to code".


Visual tools work great for creating models (i.e. entity diagrams). There is no reason not to generate 90% of the typical business CRUD app boilerplate from such a model. Also, business rules tend to be simple (decision tables, process workflows), so there's no need to "code" these. If they are not simple enough for a business user, a representation in code will not help, so business rules should be kept simple.
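
As a rough illustration of "generate the boilerplate from the model" - a sketch only, with invented entities and column types:

  # Hypothetical entity model; a visual modeller would produce something
  # equivalent to this dictionary.
  MODEL = {
      "customer": {"name": "TEXT", "email": "TEXT"},
      "invoice":  {"customer_id": "INTEGER", "total": "REAL"},
  }

  def ddl(model):
      stmts = []
      for table, cols in model.items():
          col_sql = ", ".join(f"{c} {t}" for c, t in cols.items())
          stmts.append(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, {col_sql});")
      return "\n".join(stmts)

  print(ddl(MODEL))   # CRUD endpoints and forms could be generated the same way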

What I lack in visual/no-code environments: easy bidirectional integration.

The no-code environment should be able to call into an external service (REST, SOAP) or external library (any language) and represent this call as a black box. The interface should be simple, to make it easy to create the missing functionality in the best-suited language for the task. But more importantly, the external implementation should be allowed to use all the powerful high-level abstractions that the no-code environment presents to the user. Otherwise, the external service will need to re-implement many of the useful functions of the no-code environment. You then lose all the benefits of the no-code counterpart, and this transition is hard.


The closest I've been to "no code" is using LabVIEW, Function blocks diagrams in PLCs, and more recently Node-RED.

I've seen engineers (with self-professed terrible programming skills) build working things with those tools - and yes, there's some code here and there. But those have been for rather specialized and smaller tasks...I can only imagine what a HUGE LabVIEW project would look like.


This treats it as very either/or, and I think it misses the point of a lot of nocode tools.

One of the best measures of tools is their composability. Can you use this for what it's good at, and integrate that solution with another tool that's much harder to use but can solve the last 5-10% that's just too hacky or not supported directly by this tool?

For me, that's where Zoho Creator falls down a bit. Their APIs are just web form post handlers. But it works!

We're doing a lot of stuff on webflow by dropping some custom javascript in there to hook it up to an API or do some very data-driven stuff beyond their interactions.

The point is, nocode isn't just for proofs of concepts, although it's great for that.

When evaluating nocode tools, look for integration/extension points, and think about your migration path should you need to replace it. Hopefully you're successful enough to need that - but don't build 100% of your app by hand because 10% actually needs hand-written code.



Microsoft are pushing the concept of citizen developer like crazy with their Dynamics 365 platform and things like Flow, Logic Apps and PowerApps.

Having worked in that space, I find the pool of potential users (that are not developers) to be pretty limited. YMMV.


I think no code has its place. I view that as a step/experiment towards a common "language/notation" to describe how a business interacts with its surroundings that is not Java or the likes. Will be really interesting to see where this all leads or if it blows over.

BTW - of course we are also building such a thing at Bitspark - https://bitspark.de/slang/ or https://github.com/bitspark/slang


As I recall, Assembler was the first "no-code" tool. After all you could get rid of all those pesky programmers who knew all the number codes for instructions. Anyone could write "ADD R, R2". See!


As someone who has been managing Low Code/No Code on a large scale for a major enterprise from before this became buzzword soup, I can categorically state that the benefits are real.

HN isn't the intended market here, and that disconnect is shown in the currently leading comment thread which is talking about DevOps deployment. DevOps is way beyond what this space currently targets and is working to achieve.

To be crystal clear, Low Code/No Code is not going to replace a company's custom developed application that drives their core business. It isn't even going to replace the need for custom application development throughout an enterprise. This is an augmentative technology stack intended for a different audience.

What it does do is empower non-IT employees to become what the industry likes to call "Citizen Developers". Think more Excel/Access less <insert favorite language/framework here>.

Where this technology currently shines brightest is at the lowest levels of work. Every enterprise has pockets where work is getting done from Excel spreadsheets, Access databases, email chains, SharePoint/Office documents and worse. Essentially work that could be done easier and better with an application, but where the current enterprise cost, developer availability, political/agility issues, or simple knowledge that the problem exists prevents that application from being created.

The best products in this space allow IT to set up a space where non-IT staff can create applications within the confines of the environment to address these problems. For example, a department can go in on their own and replace a shared spreadsheet that is locked all the time from concurrent use with something that is truly multi-user. On top of that, they get to enhance their process and improve productivity by having it generate emails to someone when work is needed, or automatically create new tasks when certain statuses are encountered, and other similar things associated with "modern" technology. They do all of this on their own without needing formal IT development resources.

There are obvious benefits to an enterprise here - like replacing multiple unsupported/bespoke solutions with a single known and supported platform - but there are multiple side bonuses as well. For an example of one, the best products in this space have great APIs. The staff making/managing these applications may not even know they exist, and certainly can't write against them, but now IT has a standardized way to get this data somewhere else when needed. Data driven organizations can use this to drive what I call "Small Data", which is applying Big Data practices/analytics to low level work product to improve processes.

The next major development in this space will be low code/no code integrations. When you're talking about these low levels of work, they are generally mid-process actors. Work comes in, gets done, and is handed to the next in line. That data largely gets to these teams from some kind of export process in the upstream system - CSV dump, daily email, etc. - and then needs to get fed back up via a similar or worse method. You can use the aforementioned API to fix this, but that again requires developer resources. This may be doable for interacting with a core upstream system, but it isn't always available when you have two bottom-level processes looking to communicate.

The ability for "Citizen Developers" to be able to obtain the data they need from an upstream process, complete the work on it inside their application, and then return the results back to the upstream process or next process in line without having to write code or otherwise manually intervene will further drive productivity and usefulness in this space.


I don't think the intent here was to say that LC/NC isn't "real" but that it has been oversold.

At my last company they attempted to implement a project in a low code framework and they eventually had to pull back significantly because the requirements were simply too complex for the framework to handle in a reasonable fashion.

The lesson is "use the right tool for the job."


Completely agreed! A key portion of managing these platforms is understanding what should NOT go in them.

Because the products are designed for low or no code development, they are inherently limited in what they are capable of doing. This is the obvious trade-off for the simplicity of those platforms.

Initial analysis to make sure the platform fits the requirements is very important. Also important is understanding that many projects may start on this platform, grow in functionality over time, and eventually need to go beyond the limitations of the platform. I call these applications "graduates", and you need a method to manage the transition out of Low/No Code platform to something else.


And using our new graphical "workbench", your business users will be able to integrate applications without placing any burden on your development teams. Sign here. . .


I agree with the author for the most part.

An exception to this rule is that in terms of front-end (or end-to-end) testing, if an application is deterministic, you most certainly don't need to write any code to create tests for it. You can record actions and snapshots and replay the actions and compare snapshots. There can often be more to it than that (like filtering dynamic elements/attributes and pattern matching), but for the most part, this approach works quite well.
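
A toy sketch of the record-and-replay idea (the "application" below is a stand-in; a real tool would drive a browser or UI instead):

  import json

  def app(state, action):
      # Deterministic stand-in application: a counter with two actions.
      if action == "inc":
          return {"count": state["count"] + 1}
      if action == "reset":
          return {"count": 0}
      return state

  def record(actions):
      state, snapshots = {"count": 0}, []
      for a in actions:
          state = app(state, a)
          snapshots.append(json.dumps(state, sort_keys=True))
      return snapshots

  baseline = record(["inc", "inc", "reset", "inc"])          # recorded once
  assert record(["inc", "inc", "reset", "inc"]) == baseline  # replay & compare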


I use this metaphor: if you only learned 50 words of a new language, that may be enough to have a great 2-week vacation, but it won't be enough to actually live somewhere else and do successful, real, legal business. At least it will be quite hard to do so.

Learning a language well enough to do that will take some time and training. But once you've learned it, life will be much easier and success will come more easily.

Computer languages are called "languages" for a reason.


Disclaimer: I work at a low-code startup.

If anyone discounts nocode/lowcode they are missing the forest for the trees.

NoCode always existed, the term is getting popular now. Wordpress is a nocode website builder.

When you hit a 3rd party API, you are reducing a part of the problem to nocode. Would you care if the API was written in C/Java/Python or was a Zapier interface?

Can nocode/lowcode reduce bottlenecks or speed up development of critical but not urgent things/components? Definitely.


I've luckily missed all the "no code" stuff. This has so many shades of that "no ops" nonsense that was being peddled about 5-10 years ago, which seems to be rooted in similar misunderstandings.

If you want to run your company on a "no code" basis, good luck with that! I'm sure your Finance and product managers won't complain at all when they have their VBA macros taken away.


> software developers are expensive

In most businesses the business experts are more expensive. Wasting their time on building systems can be costly.


> Increasingly popular in the last couple of years, I think 2020 is going to be the year of “no code”: the movement that say you can write business logic and even entire applications without having the training of a software developer.

Last couple of years? I'm sorry, how long have you been in the industry because the industry has talked about that forever.


The idea that you can write software without having to learn to code goes all the way back to COBOL in the fifties. It was a big step forward in abstracting some of the details. As languages have progressed they abstract away details, but you still have to be able to think like a programmer, especially if you want your code to be performant.


Whatever the abstraction of the language you use, you still have to code. It's just a different type of coding.


The no-code solution would probably come from machine learning, and not from automating logic-based programming.


"No code" doesn't mean no logical thinking, no worrying about changing requirements, no version control, no uptime and performance considerations, no cost considerations, no analysing requirements, ... very little of what a coder does is write code!


I have a working concept for "no framework Code" in front end web development. https://github.com/imvetri/ui-editor#ui-editor


Re: "'change control' becomes a software problem rather than a people problem" - change control is always a people problem.

Changes are made by people and the changes affect people. Software can help track, control, and sustain them. But the actual changes start with people.


The delusion is thinking "no code" solves all your (classical) problems. It does not. No code is the future riding the waves of simplifying programming. Like going from C -> Java -> Python -> (Low code) -> No code


I miss the days when you could create a web site without coding. Remember Dreamweaver?


I was pretty young back then but I do remember a lot of these things being driving forces behind 4GL and 5GL languages. I played with a lot of these things because I was just getting into programming and I was still at the "syntax is hard, maybe what I need is a simpler language" phase...

A lot of these things failed for reasons that are, I think, fundamentally unavoidable if your goal is to not learn anything and just do it:

- Tools that emphasize (or function exclusively based on) configuration over code are limited by things that you can configure. As soon as you stray beyond that, you not only have to code, but you have to code for a system that has a lot of implicit behaviour. That's a hellish experience if you're an inexperienced programmer. Figuring out why your code produces a result other than what you expected is hard enough. Figuring out why your code produces a result other than what you expected because there's a default configuration flag that alters its behaviour (or, worse, that alters the result after your code correctly computes it) is way more complicated.

- Tracking changes in systems that emphasize (or function exclusively based on) configuration over code is insanely difficult, and reverting systems back to a specific state is pretty hard, too. This may be less of a problem today, in the age of containers, I guess.

- Validating changes in such systems is even more difficult than that, because a lot of the logic is implicit and hidden from sight. If you have to make a change in a system you're not too familiar with, reading the code can give you an idea about what behaviour would be affected. If your change breaks something, reading the code can help you debug things. If there's no code to read, or you can't do it in the first place, and changing the header of a column just broke your reporting tool, you're going to spend a few fun evenings at the office poking it with a stick until it un-breaks.

- Tools that emphasize integration of separate tools over coding result in systems that aren't very fun to maintain. Changes in the way components interface were a problem even in the early 00s, and they weren't on a rolling/continuous release schedule, weren't delivered "as a service", and "move fast and break things" was just called being sloppy. Integrating a dozen third-party modules today requires full-time maintenance, from someone who can definitely code.

I do think that making it possible for end-users to automate their work is a direction worth pursuing, but this isn't the way to do it, and I think there's a wealth of lessons to learn from two decades of failures.

IMHO, some broad directions worth pursuing would be:

- A simpler and more stable development framework. Keeping up with JS and CSS frameworks (many of which exist precisely because pure JS and CSS are pretty painful to use if you want to develop a desktop-like application) is hard even for people who use them professionally. There's no way you can ask business people to keep up with that. A more stable framework, that hides the fact that browsers were never meant for application development well enough (even if that means less flexibility) could go a long way towards making things easier.

- Better DSL integration in business tools. A lot of the business logic is written in the same language as, and embedded in, the overall application logic. That doesn't necessarily have to be the case. Template engines are sufficiently advanced today that you can do a lot with them, and I've seen people without much programming knowledge use them productively (see the sketch after this list).

- Oh and if we're being honest: better code quality where code really can't be eschewed. Virtually every business app I've seen in the last ten years is the same story: a user needs a change, they can articulate it perfectly well, they can describe the logic in pretty good details, sometimes they can even give you an Excel sheet that implements it so that you can see a few examples. The only reason why they can't make the change themselves is that sorting a table by date instead of name doesn't involve replacing SORT_BY_DATE with SORT_BY_NAME in a template, it involves changing half a dozen SQL queries, two hardcoded parameters in some JS function calls, and introduces a subtle bug because passing SORT_BY_DATE in that function call actually modifies the table in place, which is a global variable, and that table is then reused by another function which assumes it's sorted by name.
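
For the template-engine point above, a minimal sketch, assuming the Jinja2 library is available and with an invented letter and fields:

  from jinja2 import Template

  # A business user edits only this template text, never the application code.
  LETTER = Template(
      "Dear {{ name }},\n"
      "{% if balance > 0 %}Your outstanding balance is {{ balance }} EUR.\n"
      "{% else %}Your account is fully paid. Thank you!\n"
      "{% endif %}")

  print(LETTER.render(name="Alex", balance=42))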


Re: "A simpler and more stable development framework. Keeping up with JS and CSS frameworks (many of which exist precisely because pure JS and CSS are pretty painful to use if you want to develop a desktop-like application) is hard even for people who use them professionally. There's no way you can ask business people to keep up with that. A more stable framework, that hides the fact that browsers were never meant for application development well enough (even if that means less flexibility) could go a long way towards making things easier."

Indeed! We need a GUI/desktop-oriented markup standard for in-house and custom CRUD. It's a big market that shouldn't be ignored at the expense of other markets (such as sales-oriented websites).

All the companies who want a chunk of Microsoft's business should get together and help define such a standard. Google, IBM, Oracle, Amazon, Apple, do you hear that? The MS pie is waiting to be sliced up for YOUR benefit. Get on it!


"no code" will work when the halting problem is solved...


Having made a no-code tool myself at yazz.com, I definitely agree with many parts of this article. For prototypes or very simple one-screen CRUD apps, no code can be the right fit.


I'm super skeptical about anything that claims to be 'no code'. That said, I had an in depth look at Mendix and I have to say I'm impressed.


It depends on the use case. For processes that are stable and where most edge cases have been solved, no-code is ideal. Take Salesforce for example. You can build a number of mini-applications, like say a university application tracking system, simply by throwing together built-in components in SF.


By "No Code" I like to think people mean "the code is already done and all the abstractions defined suit my needs as desired".


This really resonates with me, but I don’t agree with it.

It resonates because there’s a problem with software development that remains unarticulated. It seems no one is happy. Developers aren’t happy, and the stakeholders aren’t either.

For stakeholders, the people originating the ideas, signing the cheques and bankrolling the entire endeavour, programming really is a money pit, and programmers really are a pain in the ass. So no wonder people are so enchanted by the idea of software that doesn’t require code or coders. All they want is for their grand ideas to be realised as they imagine them. How hard can that be? If computers can be made to identify faces, or translate French into Chinese, then surely their vision of an app that figures out the most efficient way to deliver parcels should be a walk in the park!?

For me, herein lies the problem. Computers are seen as magical devices capable of anything. Software, or code, is merely the spells and incantations that breathe life into those machines and commit them to do your bidding.

But! They are not magical devices. Code is not spells. Programmers are not magicians. There are constraints. Boundaries. Limitations. Great software is, in some way, defined by those boundaries. Great software is created not by people who believe that computers and code can be made to do anything. Great software is not the product of whim and whimsy. No. Great software is created by people who know the constraints, but see the opportunity within them.

Take, for example, the spreadsheet. No accountant, or any other form of number cruncher, was involved in the conceptualisation, implementation, or evolution of the spreadsheet. The spreadsheet was born out of the minds of people who understood the capabilities of computers and code, and saw how they could be used to approach a problem from a different angle.

I would argue that the most significant event in the evolution of the web — after its invention — was Ajax. Yet no one, not even the people who first implemented XMLHttpRequest, fully understood its true potential until Jesse James Garrett came along and clarified the concept and its significance.

HTTP had existed since 1996, and four years later Roy Fielding gave us REST — the recipe for a dish we already had the ingredients for.

What’s my point? I suppose I may have gotten off track. My point is this: The problem is not that code is hard, or programmers difficult. The problem is that software — which was once borne out of opportunity that only the programmer could see — is now born of the whim and whimsy of people who know nothing of what a computer can do, or how it does it. Software is the folly of people who believe in magic. People who aren’t thinking “what can this machine do”, but rather “make this machine do what I demand”.


This is not new. There were plenty of companies promising the ability to create applications without writing code since the 80s.


There's a variant of this fallacy where people are sold on GUI frameworks in which design is separate from the code that drives the logic (i.e. XAML). The argument is, the designers can't code, and the coders can't design. That too doesn't really work if you want great UI, and it complicates the lives of both the designers (tooling is shitty) and the devs (the databound visual tree code is opaque AF and therefore impossible to debug).


No-code app builders are certainly no panacea, and I say that as a co-founder of an app builder (Calcapp). However, if your requirements fall into a domain that is common enough that no-code tools exist, people who don't consider themselves developers can often get a solution in place remarkably quickly. Database apps (CRUD) appear to be the target of most no-code products -- define your tables, build your user interface, bind fields to columns and you're good to go.

Of course, a real pitfall is that you often bump into the limitations of your chosen platform after having completed 80 percent of the work, and then you're either stuck or need to devise elaborate workarounds.

We see many app builders that truly use no code in the traditional sense. The simpler of these tools essentially only allow users to wire together pre-fabricated building blocks, like content screens, chat rooms and the like. More powerful app builders allow logic to be described using visual flow charts.

We have taken the approach of trying to cater to spreadsheet users, and put formulas at the front and center of everything. (Much like Microsoft PowerApps, in fact, though Calcapp precedes it.) Formulas enable complex ideas to be expressed. Ultimately, we put our faith in users having an easier time learning a reactive, functional programming language than they would have learning a traditional, imperative programming language. Fundamentally, the reactive model enables users to express relationships between entities without having to concern themselves with ordering. Supporting only pure functions, with no side-effects, means that we can cache results aggressively, much like a spreadsheet.
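
To illustrate the general idea (a toy sketch only, not Calcapp's actual implementation): because formulas are pure functions of other cells, evaluation order doesn't matter and results can be cached aggressively.

  class Sheet:
      def __init__(self):
          self.values, self.formulas, self.cache = {}, {}, {}

      def set(self, name, value):
          self.values[name] = value
          self.cache.clear()            # any input change invalidates the cache

      def define(self, name, formula):
          self.formulas[name] = formula

      def get(self, name):
          if name in self.values:
              return self.values[name]
          if name not in self.cache:
              self.cache[name] = self.formulas[name](self.get)
          return self.cache[name]

  s = Sheet()
  s.set("price", 100)
  s.set("qty", 3)
  s.define("subtotal", lambda get: get("price") * get("qty"))
  s.define("total", lambda get: get("subtotal") + 15)   # e.g. flat shipping
  print(s.get("total"))   # 315, recomputed automatically if price or qty changes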

There are limits to this approach, though. If you want users to press a button and have different things happen depending on a condition, how would you approach this? Currently, you can't, at least not with Calcapp. Taking actions in response to an event being fired is, by definition, something that requires an imperative approach.

With some trepidation, we're working on enabling formulas to be "run," but only in response to events being fired. These formulas would have access to non-pure functions (changing global state) and even an assignment operator (as well as semicolons, so that multiple statements can be executed). We will likely call them "action formulas."

The challenge is that all this should feel familiar and logical to an Excel user. For instance, we're introducing anonymous functions (lambdas), which can run asynchronously in response to, say, a user pressing a button in a message dialog, but lambda parameters will have default names, meaning that using "arrow syntax" (like in Java and ES6) to name parameters will be optional. We're trying hard not to introduce syntax that would look strange to an Excel user.

I think it's reasonable to refer to app builders supporting spreadsheet-like formulas as "no-code tools." However, what about app builders supporting imperative programming, using a text syntax? We're well aware that we could introduce functions like WHILE, DO and FOREACH that together with lambdas would turn Calcapp into a full (Turing-complete), imperative programming language. Can such a tool still be considered to be a no-code tool?

Probably not. I suppose that's why there's an additional moniker: low-code.


> if your requirements fall into a domain that is common enough that no-code tools exist,

… then you can quickly and easily, with very little cost, produce something that anybody else can quickly and easily, with very little cost, produce. Try not to be too surprised when it doesn’t turn out to be particularly valuable or helpful.


> At the heart of the issue is the concept that “computer programming” - which is to say, writing code - is a constraint on the development of software. That there is some “higher level” on which people can operate, where development is much simpler but the end results in some way the same.

> Specifically, the idea of writing business logic in text form according to the syntax of a technical programming language is anathema. To make an analogy: it’s a little bit like saying all software developers are like car mechanics, who get into the guts of an engine. But most people just need to be able to drive the car, and a simpler interface can be put on top of the machine (a steering wheel and some pedals).

This is a bad, bad analogy. There is a higher level on which people can and do operate, and it's called 'drawing flowcharts'.

The correct analogy for the case of a car is not abstracting the work of car mechanics to be more like a car dashboard, but making it easier to view car schematics as opposed to textual descriptions that convey the same information but have to be processed in linear rather than holistic fashion.

To be sure, there are many 'build without code' tools available that over-simplify or promise never to expose the user to any complexity, and that is indeed a delusion at best and a lie at worst. But the idea that complexity can only be effectively represented in textual form by elaborate syntax is equally absurd; it's simply a product of what technology was practical at the time. It's not better, as such; it's just what we have a lot of.

Consider assembly language. I love it; it is so basic that it takes me back to the delight of understanding Towers of Hanoi type problems and I get a kick of out of reverse-engineering things in a debugger. But writing any kind of large project in assembler is not a good idea if you want anyone else to get involved, because it's not very accessible - hence the popularity of high level languages.

But high level languages are in many ways a product of keyboard input and monitors, just as assembler was a product of punch cards and blinkenlights. They're great, but they're also hard to read - and so as time has gone by things like syntax highlighting and autocompletion have become standard in editors. You could learn to code without any syntax highlighting; it didn't exist when I started. You could learn to just read everything in assembler.

But once you have a useful tool available, demanding that people refrain from using it because it makes things too easy is just expressing a sort of anxiety about the existence of shortcuts that were not available to you, and the resulting ability of others to catch up to you in less time than it took you to learn.

So it is with the 'no code' trend. People who spent a long time learning to code have developed very valuable engineering skills, but that doesn't make the more high-level component-assembly approach of no-coders invalid. The no-code people want to focus on domain problems (which they often understand very, very well) and not get caught up on the minutiae of language syntax; they're comfortable with the fact that someone else has been able to automate that in such a way that the computer does most of the work. Fighting this trend is like getting hung up on the inadequacies of Lego - all that blockiness, the general inefficiency, and path-dependency problems of building things out of lego rather than making them from scratch - while overlooking the fact that it's so easy that kids can do it and that it's a fantastic way to quickly prototype any sort of toy you want.

> In the first [flowchart] example, I need to know how the visual environment works. In the second, I need to know a language and a development environment. But both of those are skills that are easily acquired.

No! One of these things is not like the other. It's waaay easier to understand flowcharts than it is to acquire all that syntactic and semantic knowledge. This isn't to deny the utility of text for specificity, but to point out that it only seems equivalently easy because the author already learned to write code and has not yet learned visual programming in depth. I encourage people who doubt this to spend some time building actual electronic circuits. After all, programming is just high-level circuit design, so it should be easy, right? And if you're already good at electronics, why not just get into making your own components? After all, making electronic components like capacitors and resistors is just high-level material science, right? Real programming is about pulling stuff straight off the periodic table and just working out the math, right?

Of course it isn't; circuit design is its own discipline in which you spend a great deal of time wrestling with (or exploiting) the fuzziness and quirks of electrical current flow instead of being able to take exactitude and precision for granted, and so on down. Conversely, the path forward for established coders in an increasingly modular world is exploiting their acquired skills of extreme precision and efficiency to make better modules that do things by magic and finally fulfil the promise of libraries: to provide collections of functions that Just Work, and do so reliably and speedily enough that they don't need to be rewritten over and over, and it doesn't matter what language paradigm the downstream user has in mind.

In the article's example, all the downstream user cares about is whether the email validator is RFC5321/2 compatible and what speed throughput it has. It shouldn't matter how it was made, the same way it doesn't matter to an electronics hobbyist precisely how a 555 timer works* and it doesn't matter to a low-level circuit designer how it gets manufactured.

* although when you do start caring about how a 555 timer works, this is the most fun way to explore it: https://www.adafruit.com/product/1526?gclid=CjwKCAiAx_DwBRAf...


Anyone tried Bubble?


TL;DR: it all boils down to reinventing Common LISP.


Robotic Process Automation (RPA) is the new "No Code". And it will fail the same way every other "No Code" fad has failed over the years, no matter how successfully McKinsey sells it for the next few years.

It will never completely disappear, but there is no future where a BA with an overhyped macro suite replaces the whole of software engineering.


RPA just seems crazy... I mean, I see the golden shimmer, but why do so many enterprises refuse to actually, you know, solve the problem?

I have, btw, just finished reading “The Phoenix Project”. Brilliant book! Having lived through pretty much that story a couple of times, the solution is just that simple. Not easy, but simple.

I've been around a while, at several big companies. I know most of them will do RPA (they are of course the ones with a blockchain project plodding along), but only a few will go down the “software company” route. It’s kind of tragic.


Because they can’t go straight from A to B, because there are so many audited processes that are tightly coupled to A. So they replace A with B one step at a time, and keep A in sync with B using RPA. When the whole of A is replaced, you turn it off along with the RPA.

The problem comes when RPA stops being a short-term tactic, and starts being a long-term cottage industry. Unfortunately, RPA vendors have a vested interest in keeping A around as long as possible in order to maintain the necessity of their solution.


Well, they can’t because they usually don’t really try. Most likely they outsourced years ago and lost a lot of competency along the way. They have invested heavily in really crappy software, and solving this with something like RPA seems to me like introducing a ginormous footgun/perpetual roadblock. That RPA is not going anywhere soon... Or will it be replaced by the billion dollar, just a bit late, “phoenix project”? The ONE platform that promises speed and agility? That one? :)


There is a competition between RPA and automation by Python ongoing in my workplace. We are low on devs and high on buzzword-susceptible non-technical managers. The argument that it is a lot smarter to hire a few more devs that would add a lot more value to the company, especially given that we have already made significant inroads in 'proper' automation, than pay for a proprietary tool that is essentially an overhyped Selenium browser falls on deaf ears, since learning is hard and klickety klick is simple and easy to outsource. So it goes.


I’m sorry to hear.

The RPA abstraction just seems so fragile and inflexible. I _had_ to build “RPA”-like solutions to problems in the late 90s using tools like scriptit, and later VBScript. Crap software has no APIs...

I guess, it’s the same type of companies, still with the same type of software.


OK, if it is what the wiki article describes, we had this with AutoHotkey 15 years ago. How does the RPA integrate with the data from these processes? Does it rely on accessibility APIs (good luck with that for more specialized software)?


I think the "strength" of RPA is creating GUI-based automations and then laying programming interfaces (web-based or otherwise) on top of them.

It's still hogwash.


I'm looking forward to the day when someone builds a LabVIEW-type thing for some other common programmer function. LabVIEW was literally an extinction event for instrumentation software engineers. Month-long projects done in minutes.

Same thing will eventually happen with something like devops, regardless of what programmer bros think.


I always thought "no code" was a meme. Best example of "no code" were the traders I used to support working at a bank, they had these massive Excel sheets with weird macros that would break in non-deterministic ways.


Why does the common non programmer need a visual UI? Why so allergic to typing text? The rest of the average office worker's job takes place through the traditional CLI known as English (or other language).


Discoverability is vastly superior in visual interfaces.



