I am on the other side of low code, building the builders for the most part. Here is what I see.
Low code is very easy to sell. All you have to do is make a bogeyman out of the IT department and play on existing frustrations. Then run a demo where a sign-up form is built in ten minutes and say your IT department would have taken three months. You just make sure to walk the happy path during the demo and don't stress the tool.
Many things could be low code. Why do you need a developer for a sign-up form if you are just making an API call in the end? Wiring up a form shouldn't take HTML, JS, and a server-side language.
On the flip side, don’t put your core business logic in low code. Low code builders assume anything complex will be offloaded to a specialized system. The best ones provide escape hatches by calling remote APIs or calling a code module.
Low code gets sold to smaller customers because it's touted as a developer replacement. But really it's most powerful in larger enterprises, where individual teams may have technical people who have business knowledge but not much IT power. It's easier to get them to configure something in a SaaS they already use than to get a custom solution from IT. Also, low code often comes with guardrails and permission sets tied to existing users.
I see low code as a tech equivalent of the last mile problem in logistics. It addresses a large number of concerns with simple tools, but doesn’t scale well and needs to work in tandem with a stronger central system. End to end low code is a trap, but low code at the fringes is a multiplier.
Low code with an escape hatch is quite nice, because yeah - many things can and probably should be low-code. It's a big productivity and standardization enhancer.
That escape hatch is absolutely necessary for longevity though. It lets you keep your low-code environment simple because you can leave it and write real code when necessary, rather than forcing everything into an over-complicated and under-capable custom thing with no editor tooling.
Low-code tools have a market fit problem because of that escape hatch. The players keep trying to sell it as a solution to IT deficiencies but it should be sold as an IT empowerment tool. It really doesn't matter how good your last mile delivery is if the shipping container with the product isn’t where it’s supposed to be to begin with.
All the components and modules that low code tools provide should be nothing more than an onboarding tutorial like the first few levels of Factorio, before letting the engineering team loose hand in hand with the users. It shouldn’t be an escape hatch, it should be the front door.
As such, all these low code tools make the mistake of making it really difficult to bring the engineering team into the fold: modularization, logging, debugging, version control, and development tools are absolute garbage. So instead of engineering providing a few sane, company-specific building blocks that they can tend and nurture, it inevitably turns into a shitshow, because you can't use a tool that ultimately depends on the IT department to fix the IT department!
The best "low-code" tools have already been around for over a decade: the headless CMS and autogenerated admin pages a la Django and Wagtail. They've been focused on solving the content management problem for e-commerce and marketing, but IMO it's the right path for other groups too. The engineers write the pages and blocks and components while defining an input/configuration schema for an automated tool that is usable by laymen. Up the level of abstraction to an IFTTT layer well curated by the engineers at the customer level and bam, you've got 95% of the value of low code without the 5% that inevitably ruins it.
The escape hatch is where most low code products fall down, I think. This is because of market fit as you correctly point out, but not only that.
It's also because knowing where the escape hatch is, and how to use it, requires greater-than-average development skill and finesse, and it isn't at all obvious when it's required.
The people who use these platforms aren't usually able to tell what kind of problem requires a developer and what kind of problem requires them doing just a bit more research. They're usually vaguely aware that the limit is out there somewhere but in the specific instances when they hit it they often don't know it's happened.
I've also seen some low code platforms get a little excitable about the idea of nearly or even fully reinventing a Turing-complete programming language to introduce more flexibility to their platform, and make that their escape hatch. This is when things go really downhill.
This reminds me of the deskilling problem: operators whose tasks are automated away end up relegated to monitors and troubleshooters, and need to be even more knowledgeable and skilled to identify and resolve the ever-more-obscure problems and edge cases that the automation cannot handle. But because the operators no longer perform those tasks, there is no reasonable way for them to maintain that advanced level of knowledge and skill. And the more automated the processes get, the greater the deskilling problem.
I totally agree on the "IT empowerment" perspective, but unfortunately the pricing model of these tools makes it hard for many IT departments to introduce them, at least in established corporate environments, for internal use. Startup and public-facing stuff is different of course, but my experience is medium-enterprise internal IT.
Pricing per end user rather than per developer means they're far too expensive to introduce as a general IT toolbox item; they need to be part of a major strategic project where the $5-30/month per staff user has a hope of being justified.
But that also often takes it outside the "IT dept", which is often just "infrastructure and PC fleet" support, not development (at least, that's my experience). IT might do internal scripting and some service interface tooling, but business tooling and software is rarer; that's usually either dev teams or ERP teams. The ERP teams will already be using ERP platform tooling, so that further narrows the market.
I don't have a good solution for this, but it's always been the hurdle I've tripped on working for medium size enterprises.
There would seem to be an opportunity for "open source platform, commercial training and support" here, but vendors seem to gravitate to per-user pricing and cloud-only for more immediate revenue and easier support. But again, many enterprises still have huge internal-only IT landscapes, because cloud is still expensive and the value often isn't seen in relatively static environments.
It's possible this niche has been filled now too, it's been a while since I looked...
Possibly they can be introduced on a "just those who need it" basis, but honestly that's just so bloody tiring for internal tools, not to mention demotivating, as you can no longer build tools for "everyone or anyone". It's back to specific narrow business cases, not IT empowerment, and a narrow business case also means you're usually competing with COTS tools or consultants.
The success of low code implementations often comes with a curse of investing man-years of development effort to build increasingly complex applications in proprietary low code languages executable in a closed ecosystem (and commercial terms) of a specific vendor.
I believe there is a place for enterprise app platforms which are a) open source, b) not based on proprietary languages, c) with low code capabilities, fueled with AI code generation, d) runnable anywhere without staying dependent on typically user-based commercial model.
Shameless plug: we are working on such a thing, and competing with traditional low code platforms is not easy. I could tell a few stories about what we have tried, what works and what doesn't really, if you are interested. I would also be extremely thankful for any comments and hints you may have, see https://openkoda.com
> effort to build increasingly complex applications in proprietary low code languages executable in a closed ecosystem (and commercial terms) of a specific vendor.
That's the definition of vendor lock-in. Once the vendor has its hooks in your organization, good luck removing it. Sometimes it's just the cost of doing business, but the more the hooks, the greater the chance that the vendor triples or quintuples the cost of a product your organization's secret sauce now depends on.
I feel like the goal of low-code solutions is to get you over a barrel. Much in the same way AWS tries to get into your company's operating costs.
I'd love to hear some of your stories about marketing it!
I create nocode plugins on a specific proprietary (and now stalled) platform, for a living, for a few years now, so I am definitely aware of the pitfalls in the area.
For me the problem is that the API to it was created in a nice format but then got abandoned; if they had just listened over the years and added the changes we third-party devs needed, it would have been great.
Having escape hatches is critical, but they should also be well built, or they can cause just as much headache.
Example from us using Azure Data Factory: You can add a step to call out to an API, which we did for a data flow that had a lot of calls. Performance was atrocious. Dug into it, and the API getting called was replying in 100ms or so, but ADF's "escape hatch" was adding 5-10 seconds of overhead to send the POST and parse the HTTP status code.
Microsoft Support said that's normal, expected behavior for the service.
In the end, we had to write an additional batching layer ourselves.
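The idea behind such a batching layer is simple enough to sketch. Here's a rough, hypothetical Python version (invented names, no retries or error handling; the real thing was more involved): instead of paying the fixed per-call overhead once per row, you pay it once per batch.

```python
from itertools import islice

def batched(rows, size):
    """Yield successive chunks of `rows`, each at most `size` items long."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def post_in_batches(rows, post, size=100):
    """Send `rows` via `post`, one request per batch; returns the request count.

    With 5-10 seconds of fixed overhead per request, 1,000 rows at size=100
    means ~10 slow requests instead of 1,000.
    """
    calls = 0
    for chunk in batched(rows, size):
        post(chunk)  # one round trip amortized over up to `size` rows
        calls += 1
    return calls
```

The point isn't the code, which is trivial; it's that the low-code tool gave us no place to put even this much logic, so it had to live in a separate service.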
This ADF abbreviation reminds me of another framework we used, Oracle ADF (Application Development Framework), and it was an awesome low-code tool! You can literally create a CRUD app with an entry form within minutes; with Spring Boot and ReactJS it could take substantially longer. The good part of the tool is that the code is available and you can make any changes you want. In the enterprise they prefer time to market over beauty or UX, so it did its job perfectly.
Agreed, with the caveat that you also need a policy/culture that the escape hatch should be avoided whenever possible and the low-code work should not be done by developers.
Whenever I've discussed this, the common theme is that business users continue to request features that would be easily achieved in the low-code platform being used. It's hard to blame them; that's been standard procedure for them for their entire career.
But if you're not strict about saying "no", you still end up writing all the same methods but now on top of that you have a GUI that's not providing any value. Or maybe worse, your developers end up maintaining all of the low-code stuff too when they could have just written the code, switching context pointlessly and (probably, depending on the platform) not using source control.
An interesting thing with developers getting involved in those boxes-and-arrows UI things is outages. I made a mistake with one once, and the postmortem quite reasonably asked:
* Where was the design doc?
* Where were the alerts?
* Where was the code review?
* Why didn't you write an integration test?
* What do you mean it just rolled out to production instantly?
When we're considering options in advance of building something, it's a more time-efficient, less wasteful alternative to programming. But having built it, everyone acknowledges that what we have done is programming, and now they wonder why we've programmed so badly.
Maybe the standard IDEs, Git, code review, CI, metrics, and incremental deploy workflows were fine actually?
For me, none of those learnings is a direct result of using a low-code, arrow-boxes environment. I can deploy instantly to production using any programming environment, and I don't automatically get design documents when using a much-code environment either.
Without discipline, any programming environment can lead to failures.
It is true that there aren't any well-defined workflows for using arrow-boxes environments, but that does not mean these environments can't support specific workflows.
Are these environments attractive because boxes and arrows are actually better than characters for expressing programs? Or are they attractive because they encourage you to skip steps that turn out to have been important? Sure, you can replicate a normal, responsible development process with a no-code tool, but at that point do you really have a compelling alternative to a more traditional programming environment?
I develop plugins for a specific proprietary nocode platform, sixth year now doing it for a living. 95% of all apps use plugins, which are the custom-code interface, the escape hatch.
The ones that don't are very CRUDy apps with no remarkable features, like just some pretty design to input and show information.
This is still usually tech solving an organizational problem. Launching normal apps gets gated behind security, product, or technical reviews, but the low code tool becomes a backdoor around all that.
If you just deleted a bunch of those processes, or reserved them for when they actually matter, you wouldn't need to pay a low code vendor to basically allow your team to do their job.
We tried low code (citizen development) as a solution to the "IT dept sucks" problem. It worked pretty well at the start but eventually became a data governance nightmare and as soon as we needed to restructure the business we ended up with ownerless applications and datastores all over the place.
It eventually turned out there was a prioritisation problem rather than a development capacity problem.
My guess: there are enough developers, but things need to be prioritised one by one. If everything is a priority and everything is being worked on at the same time, then there aren't enough devs to go round.
Not the parent poster, but building every kind of crap that anyone can come up with is not the right thing to do.
One might think you need all of it right here, right now, but in reality if you build the 20% that is really, really needed, you get 80% (I'd bet even 95%) of the actual work needed for the company to improve productivity, performance, etc.
> Low code is very easy to sell. All you have to do is make a bogeyman out of the IT department, and play on existing frustrations. Then play a demo where a sign up form is made in ten minutes and say your IT department would have taken 3 months. You just make sure to walk the happy path during the demo and don’t stress the tool.
I feel like you just explained how salespeople can scam decision-makers into thinking low-code solutions will do more than they actually can, and in no way countered any of the arguments in the OP about its dangers.
> But really it’s most powerful in larger enterprises where individual teams may have technical people who have business knowledge but not much IT power.
It's not just business knowledge; it allows the people who are most committed to project success to do the work.
I think that's the real pain point with IT departments in large organizations. They aren't feeling the pain that made you need the software in the first place.
>Many things could be low code. Why do you need a developer for a sign up form if you are just making an API call in the end?
Oh my... Many, many, maaaany reasons.
For example, your entire stack is built in a certain way and you don't want to introduce new dependencies.
What if your CI/CD requires that your config and code are separate and that you build a code artefact and, let's say, 3 config artefacts (dev, cert, prod)? All of these are then uploaded to a central repo and handed over to some proprietary security/code-scanning thing every time you merge new code. Then let's say your deployment is done the same way: you have your "deployment config" artefacts for each environment, but an infrastructure team manages all the infrastructure-as-code artefacts that take your config.
I worked in a bunch of big companies each having their own version of such process.
In such an environment, creating an "example project" that contains all of the scaffolding required and just writing that sign-up form is going to take waaaaay less time than even the initial planning of how to integrate the "no code" tool into our processes.
This is simultaneously a valid reason not to use low code tools, and why they find favour in many organisations.
>In such an environment, creating an "example project" that contains all of the scaffolding required and just writing that sign-up form is going to take waaaaay less time than even the initial planning of how to integrate the "no code" tool into our processes.
I've also seen the opposite. Someone in the org wants a simple site. Maybe a sign up form, or CMS/wiki for internal docs, etc. The dev team says "sure, that'll be 6 months". A large part of which is constrained capacity - the devs need to fit it in alongside a bunch of other stuff. Another part is tech choice: the corporate stack uses e.g. React on the front end, calling web services written in Java, backed off to Postgres for storage (or whatever). The devs estimate for building the CMS/wiki/whatever from scratch - because it has to fit the tech stack.
At that point the (internal) customer screws up their face and utters all the familiar frustrations about "IT". Someone somewhere mentions to them that there's a way to sidestep it all, and do it yourself. In their position it's very hard not to see that alternative as attractive.
It's a hard problem. That same internal customer will simultaneously rail against the recharge in their budget for IT. It's a cost: a drag on their P&L. IT says they're under-resourced, and they could do it quicker with more people - but that would increase the P&L drag. Vicious circle.
Software is a sociotechnical endeavour, yet we too often focus on the technical and ignore the social aspect. "Low code" and similar emanate primarily from the social side. Coming back to your post, though: not exclusively. Development teams can be equally culpable when zeroing in on tech stacks that aren't a good fit for the problems at hand. Or, perhaps, stacks that are a good fit for the problems they were chosen to solve - but not so much when the new requirement comes along.
Of course, low code is no panacea either. Most non-technologists have no perception of the need for ongoing evolution, even if there's no new feature development. Patching/upgrading is a must. And new features always emerge - most after the original "citizen developer" has moved on / lost interest / whatever. So the whole shebang gets foisted on IT, who are expected to operate and maintain it. Usually with no tests, automated builds, documentation, ...
It's pernicious. At heart, though, it's primarily a social problem that needs a good underlying relationship between the customers/users and the developers. It's Conway's Law. Of course tech choice still matters. But no tech stack is going to magic away problems rooted in organisational friction.
> It's a hard problem. That same internal customer will simultaneously rail against the recharge in their budget for IT. It's a cost: a drag on their P&L. IT says they're under-resourced, and they could do it quicker with more people - but that would increase the P&L drag. Vicious circle.
I started my career as a Solutions Consultant. Our primary customers were small business units in large organizations that were frustrated with “IT” and looked externally to solve their problem. Low code is a variation on this strategy.
Our delivery time estimates always beat IT estimates and our costs were often less.
Maybe because we were seen as a competitor to IT, or maybe some manager was being sneaky our first interactions with IT was usually after the engagement contract was signed. (None of these projects had RFP/RFQs)
The discussions with IT were when the hard parts of the engagement really happened. That was when we learned about the compliance requirements: security, data integrity, availability, platform standards, CI/CD, PMI, etc. These unknowns often dragged out our delivery times and skyrocketed our billable hours, putting us equal to or behind internal IT.
In my experience at large organizations, Compliance is more closely aligned with legal than with IT, but is often a function of IT. The rules set forth by the legal teams are enforced through technical/process controls by IT. This makes IT look like the 'problem' when in fact they are just following mandates set forth by legal.
It’s often easy for a business unit to complain about IT preventing revenue growth and get an exception. If a business unit complained to upper management that legal wouldn’t let them do something I doubt exceptions would be granted as easily.
I'd suggest that compliance be its own department and review all external tools and vendors instead of IT. This would put external consultants and low-code solutions on par with internal IT. It would also shorten the feedback loop between those creating the rules and those they cause grief.
I like your analogy to the last mile problem in logistics. I would argue that in that case people know they are using bespoke, last-mile-only systems because there is no other option. Low code vendors promise much more than that: end-to-end capabilities.
I adopted an official personal policy of not working on any data project referred to as "last mile". It's a big sign it's going to be under-resourced and under-appreciated. The last mile is brutal. And that's in the physical world!
You can tell when your last warehouse is close to the customer by looking at a map. You can't tell when your tool is close to what the user needs with nearly the same accuracy. There are a ton of gotchas, and as part of the "last mile", it's now your responsibility.
> as part of the "last mile", it's now your responsibility.
You are also at the whim of the SaaS vendor to give you the help you need. If it can't do something you think it should, good luck getting a hacked-together workaround to function as it must.
I'm curious, do you ever mention anything along the lines of "don't put your core business logic or mission critical stuff in this" ? That could backfire, but it needs to be said.
So from a certain perspective, the positive niche / need it would fill is enabling the local power users who might also do complex Excel queries, or have their own MS Access, etc?
Similar, but low code is a bit broader and is usually associated with a business process like taking in a lead, providing a quote, or approving a transaction. Excel can do the calculations for these things but still requires a lot of human interaction. Low code can do a bit more, like generating a form, evaluating branching logic, or handling an async process over time.
> I see low code as a tech equivalent of the last mile problem in logistics.
Ironically it's the OT and logistics people who've figured this out, with low / no code solutions fit for purpose, which don't necessarily run in the cloud, which have full microservice / SQL integration... and baked in OT drivers, RFID and bar codes.
Seems like the generators and gem ecosystem in Rails already solved these problems. I can easily build a Devise login in less than 10 minutes. Low code in Rails is almost instantaneous, and you get tests automatically too.
As an SRE that occasionally encounters low-code things, I'm also pretty skeptical:
* there is like no source control, or if there is, the source-controlled thing is an unreadable artifact which generally doesn't belong in source control.
* the workflows are poorly if at all documented.
* they still require a lot of custom code, but that code isn't as reusable, tested, or testable.
* they often require a proprietary runtime, worse this may only run on windows
* they are difficult/impossible to instrument and so are difficult or impossible to monitor. Or if they do have monitoring it is going to work differently than the rest of your stack - likely using email based alerting which isn't ideal.
* the work is still 100% done by engineers either way, I've never seen a low code or DSL be picked up by non-engineers. (I am also skeptical towards DSLs)
The only tool that was semi-reasonable here was Looker which isn't exactly marketed as low code but at least in the version of the product I used did integrate with source control. Though generally the people writing lookml were still engineers.
I'm much more a fan of composable platforms that compress complexity but still make it possible to delve, customize and extend when necessary.
> * the workflows are poorly if at all documented.
Ideally it would be easier to understand if there's less code involved. Things should be more declarative, or the low-code solutions would generate good descriptions for what is actually happening.
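To illustrate what "generated descriptions" could look like: a tool that stores its workflow as declarative data could render a plain-text summary mechanically. This is a purely hypothetical sketch with an invented step schema:

```python
def describe(steps):
    """Render a declarative workflow (a list of step dicts) as readable text."""
    lines = []
    for step in steps:
        if step["type"] == "branch":
            lines.append(f"if {step['condition']}: go to {step['then']}, "
                         f"otherwise go to {step['else']}")
        else:
            lines.append(f"{step['name']}: {step['action']}")
    return "\n".join(lines)

workflow = [
    {"type": "action", "name": "fetch", "action": "pull new leads from the CRM"},
    {"type": "branch", "condition": "deal_size > 10000",
     "then": "manager approval", "else": "auto-approve"},
]
print(describe(workflow))
```

Even a summary this crude beats clicking through a dozen configuration screens to reconstruct what a flow does.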
> * the work is still 100% done by engineers either way, I've never seen a low code or DSL be picked up by non-engineers. (I am also skeptical towards DSLs)
Or worse: "Why does this connection to this server fail with SSL Certificate Invalid? Oh, nm, we'll just uncheck the SSL validation box."
But really you are missing a key piece of the puzzle: it matters less what is happening and more why. Sure, a low code tool could churn out a textual description saying "if the value of some variable is below some threshold, branch to X, else branch to Y", but that's generally easy to figure out. Why that threshold is important is a question that requires understanding the intention of the user or engineer who set it up, and that's not something you can just puke out of an auto-doc tool.
I would go so far as to say the inability to capture intention is one of the sharpest edges of low-code tools; it makes the solutions built on them extremely brittle and creates silos of knowledge.
Expanding on that further, this is why most auto-generated documentation is worth exactly the effort put into it.
All programming languages are some sort of abstraction to the underlying machine language. Low code is just "one more abstraction to machine language".
Prior art in this area would be Lotus Notes (1990s), HyperCard (1980s), and Lotus 1-2-3 (1980s).
Low code should be as close to a speaking language as possible. Declarative ideally. e.g. "Create a read-write rest framework for the database table named orders".
An engineer would then go, "Oh, so you know you're missing permissions from the LDAP groups", and then solve that problem, and then figure out how make the LDAP groups map to the low code framework.
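As a toy sketch of how that declarative sentence could expand into working CRUD, here's a Python version over sqlite3. The table and column names are invented, and a real generator would also handle permissions, validation, and identifier sanitization (this one interpolates names unchecked, so it assumes trusted input):

```python
import sqlite3

def make_crud(conn, table):
    """Expand 'read-write resource over `table`' into concrete CRUD helpers.

    Toy code: table and column names go straight into the SQL, so only use
    this with trusted, engineer-curated specs.
    """
    def create(**cols):
        keys = ", ".join(cols)
        marks = ", ".join("?" for _ in cols)
        cur = conn.execute(f"INSERT INTO {table} ({keys}) VALUES ({marks})",
                           tuple(cols.values()))
        conn.commit()
        return cur.lastrowid

    def read(row_id):
        return conn.execute(f"SELECT * FROM {table} WHERE id = ?",
                            (row_id,)).fetchone()

    def update(row_id, **cols):
        sets = ", ".join(f"{k} = ?" for k in cols)
        conn.execute(f"UPDATE {table} SET {sets} WHERE id = ?",
                     (*cols.values(), row_id))
        conn.commit()

    def delete(row_id):
        conn.execute(f"DELETE FROM {table} WHERE id = ?", (row_id,))
        conn.commit()

    return {"create": create, "read": read, "update": update, "delete": delete}

# "Create a read-write framework for the database table named orders":
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
orders = make_crud(conn, "orders")
```

The engineer's job is then the part the generator can't do: wiring the generated resource into the real permission model.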
It should be easier to understand but lowcode designers seem to like making you jump between 30 different screens instead of having it all in one place. Unspooling a lowcode implementation has got to be one of my least favorite activities.
I think it would be too dense to have everything in one page. I don't remember any example of having it all in one page. Excel for example hides it all in modals.
I think Retool is the best I've seen. They have source control, great documentation, reusable custom code, can tie in external apis for monitoring.
I'm someone that has a little less than junior dev experience (I can hack together a website), but nowhere near the ability to work on production code, yet I was able to be proficient with Retool. The only downside is the cost.
Yes. Onshape is web-based. Every change is tracked and can be rolled back. There are versions and branches. Multiple people can work on the same document. The equivalent of a git commit would be a version with a comment.
> there is like no source control, or if there is the source controlled thing is an unreadable artifact which generally doesn't belong in source control
I think this is an artifact of the "source code is text" assumption our current tools make (and which is invalid, IMO).
I think the concept of "source code as AST" or something like that is basically a fine one, but the devil is in the details. Your "true source" must continue to support (just off the top of my head):
- precise "decompilation" to readable, idiomatic text
- comments
- line numbers or some semantic equivalent
The goal should be to store the inputs the user has provided. If your no code solution uses png files to encode the input, then store those, not an intermediate textual representation of them.
You do understand that source code as text, in its current form, is the most efficient way to do it?
There is no more efficient way unless you go down to writing assembly, or down to writing 1s and 0s.
Everything you build up as low code has to create much more data, which obviously takes much more space than the high-level languages we use today, and if you want to do change control, any other data structure will take even more space. Then all of it will obviously be slower, because more data is not just a bit more data but at least an order or two of magnitude more.
LLMs are great at compression, but compression with them is not lossless, so even if we write behavior to be interpreted and executed by an LLM, it might turn out differently each time you run it.
That last one is not a tradeoff we can accept in, I would guess, 95% of applications. People expect 2+2=4 from applications, but they want to say "computer, add two numbers that add up to 4", and they don't really want it to crash because pi+x=4 and the LLM went hanging because it is now calculating pi and then it will move on to finding x.
So this is why it will never work any other way than "source code is text".
SQL is a DSL for data manipulation and I like it more than non-DSL code (ORM frameworks). The Puppet language is a DSL and I prefer it to Ansible (and similar) YAML files (YAML is OK for small projects but hard and tedious to maintain for large ones).
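To make the DSL preference concrete, here is the same aggregation twice: once as declarative SQL, once as imperative code. The table and data are invented; sqlite3 stands in for a real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("west", 5), ("east", 7)])

# Declarative: state *what* you want; the engine decides how.
declarative = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

# Imperative: spell out *how* to compute it, step by step.
imperative = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    imperative[region] = imperative.get(region, 0) + amount

assert declarative == imperative == {"east": 17.0, "west": 5.0}
```

The declarative version says nothing about loops, accumulators, or ordering, which is exactly what makes a good DSL pleasant: less of the "how" to read, maintain, and get wrong.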
"Low code" means a low surface area of code you can directly interface with, but there's plenty of code shoved out of sight and out of reach. Total amount of code is probably actually quite a bit larger, actually, given all the code needed to create the abstraction layers to give the appearance of low code.
Layers of abstraction are powerful when they're tight and well-suited to their purpose (I'm happy to never think about memory allocation -- thank you garbage collected languages!), but when leaky or ill-suited can be very frustrating and make things harder or impossible.
It's hard -- probably impossible actually -- to build a "general purpose" abstraction layer that will work for specific purposes, but that's essentially the pitch of low code -- we don't know your specific use case, but we have a tool that will abstract away all the code previously tailored to that use case.
Now, in fairness, there's a lot of code that has too much accidental complexity in it. A good abstraction layer strips away that accidental complexity and leaves only the essential complexity that's specific to your domain and use case. But that's hard to achieve: it takes skill and experience (and not just general experience, but experience specific to your domain). Most companies don't have that available to them, and don't want to pay the rates to build teams that do have that skill set and experience. Low code holds out the promise of being a shortcut to that, but clearly, if your own dev team can't manage it, the odds of a generic platform doing it for you are slim to none.
All I know is that I was quoted $50K by a Ukrainian team to build an MVP, as long as I could provide detailed specs down to every function. I hired an intern who used Bubble/Airtable to build our product in two months, and we had ten paying customers in 6 months. After almost two years I have yet to find a reason to move to a traditional stack. We have had 6 hours of downtime in those two years thanks to Bubble issues. I can live with that.
Not a fair comparison. In one case you were asking for a fixed-price contract; in the other you are paying a salary. The salaried person is more likely to be able to be agile and not need requirements (if you said to the intern "$20k once I am happy with the job", they'd say "define happy" and you are back to requirements!)
As for code vs. no code. If (big if!) bubble can do what you need it can be a cheaper route to launch an MVP.
Where I see no/low code fall apart is when complex input data validation is required. However, I can also see that this might be our old practices getting in the way of innovative forward thinking.
Maybe you've found a way, since you are starting from scratch :)
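For what it's worth, the kind of validation that tends to break visual builders is cross-field logic, where one field's validity depends on another's value. A minimal sketch in plain Python (the form fields and rules here are made up for illustration, not from any real system):

```python
from datetime import date

def validate_policy(form: dict) -> list[str]:
    """Return validation errors for a hypothetical insurance form.

    Rules like these are trivial in code but awkward to click
    together in most visual form builders.
    """
    errors = []
    # Cross-field rule: one field constrains another.
    if form.get("coverage_start") and form.get("coverage_end"):
        if form["coverage_end"] <= form["coverage_start"]:
            errors.append("coverage_end must be after coverage_start")
    # Conditional requirement: a field is mandatory only in one branch.
    if form.get("payment_method") == "bank_transfer" and not form.get("iban"):
        errors.append("iban is required for bank transfers")
    # Aggregate rule across a repeating section.
    beneficiaries = form.get("beneficiaries", [])
    if sum(b.get("share", 0) for b in beneficiaries) != 100:
        errors.append("beneficiary shares must total 100%")
    return errors
```

Each rule on its own might fit a builder's "required if" widget; the combination, plus the aggregate over a repeating section, is usually where the escape hatch gets pulled.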
FWIW I run the IT dept for a mid-level talent agency. I moved us off an old system and into Airtable specifically because it’s flexible enough on every front. The killer feature for me is the concept of “views”, which means we can have all the data available at all times, but only surface the data that a particular user needs for them. In a company full of less than technical people, this has been an absolute godsend.
And when we continue to grow, there are ways off of Airtable since it’s all just CSV underneath.
I'm 100% against enterprise-level low code stuff. If you're big enough to afford that junk you're big enough to afford proper development teams with proper planning and organization.
In general I see little value in any large system intending to reduce the amount of code or the needed skills of the developer. Code is not costing you money. Good developers are not costing you money. You know what costs money? Bad developers making bad decisions using bad tools, necessitating years if not decades of paying armies of lesser-skilled developers to maintain the mess, slowing overall progress to a glacial pace.
Largely agree... and to top it off, the pricing models mostly work against ever having a successful deployment of one. Either you price per end user, and optimize against building anything that is widely adopted, or you pay per developer, in which case you need to figure out up front who are the people with unmet needs who also happen to have the knowledge, skills, inclination, and time to build their own solution. (and this is enterprise pricing, so you need to know up front for at least a year how many of these you'll have, or you'll end up in a contract renegotiation or paying expensive overage fees.)
I think the problem is that not every company can hire and retain proper development teams.
That is a wild goose chase that also fuels the "agile" smoke-and-mirrors market, and likewise the "low code" smoke-and-mirrors market.
I was quite skeptical about the talent shortage in IT, but the longer I work in the space the more I see and understand. There are real issues with talent, and agile and low code are ways of coping with the problem.
Most enterprise software is terrible, it's the lowest quality software you will come across. Paying more for good developers won't fix that because the problems all emanate from the enterprise itself.
I’m assuming you’re taking a rhetorical liberty here and really mean “net money”, but then what if you can get a similar end product for lower price? That’s higher net money.
What counts for "similar end product" typically ignores all future needs of the business. One project gets accomplished and everyone gets a pat on the back and a bonus. Then as the years go by the accomplishments become more and more mundane. For example, upgrading the enterprise software to a newer version suddenly counts as progress and everyone once again gets a pat on the back. Eventually everyone wonders why the IT expense is so huge and why the company's software seems to be treading water. So someone bites on another sales pitch and years are spent transitioning to some awful alternative enterprise turd, and the cycle continues until bankruptcy or acquisition.
A lot of useful MS Access and Excel apps written by non-programmers beg to differ. Sure they have their problems, what doesn't? But these low-code tools allow smart, but less technical, people to get a lot done without getting in line to beg time from a bunch of grousing devs stack-hopping their way to their next job. Use a commercial vendor and you might even get documentation and support. 80% solutions usually do more than was needed if the requirements had been thought through carefully. Most business apps are tables, filters, queries, detail views, and workflow. So what if a product with 5 users in a department is trash? Works well enough most of the time.
Startup can't afford more than one developer? Product owner can hack the thing until you get customers, funding, or go out of business. Oh, and that bus factor goes up to boot. No more pager duty.
Code things that are core to your business and plan for the TCO. Use low-code for internal business apps and let your smart intern or low-code consultant make and own it.
I've seen the opposite. Business hires new CTO who has no knowledge of existing stack and why it is breaking but sells the top brass on a low code replacement because of how quickly he can build with it. Spends a year building the replacement, forces a hard switch and fireworks ensue.
I've never seen a local hardware store shed kit that wasn't complete garbage for the price.
Building a simple shed isn't hard, if you can handle assembling the kit you can buy and cut the basic materials yourself. The trickier parts like getting it watertight or installing doors and windows have to be done either way, all the kit gets you is a bundle of inferior materials and a bit of time savings cutting studs.
I was involved with a project to decommission an on-premise low-code platform (SharePoint 2013). What we found was that there was a lot of user enthusiasm to create stuff, but when the platform was end of life and had to be decommissioned, the users' enthusiasm melted away.
The "IT" dept had to spend a fortune re-validating the user requirements for the various applications, documenting them, and then converting them onto a new platform. Obviously there was a lot of feature creep, and previously accepted bugs were no longer accepted now that responsibility was no longer the users'.
A lot of the applications were Frankenstein efforts built up by people over the years: lots of dead code no longer used, no docs, etc. As others have mentioned, people create mission-critical stuff for their project or team, then leave or go on extended leave, and it breaks.
Can I check a "low-code" implementation into version control, using standard free tooling? Can I, when something inevitably goes wrong, do a `git log` equivalent to spot the source of the problem? Can I then roll back to a specific commit, using off-the-shelf free tooling?
I find that generally the answer is most often "no," which is kinda a nail in the coffin for serious usage of "low-code."
You do realise that there were ways to debug things before Git came along? Indeed before source control came along?
What I'm saying is that you should be wary of ignoring things that don't fit into your current workflow, because sometimes that "critical" thing in your workflow is less critical than you think.
To a significant extent, yes, but there's also a lot of people just following fashions. There are whole teams of devs who really have no idea how to use git, but who are using it because everyone else does. I'm no git expert myself, but I can't count the number of times I've seen people, e.g., comment out large sections of code 'in case I want to add it back in again later'.
There's always been a form of version control. In the sense that you can copy folders to an archive, and/or make backups whenever you like.
So it's not like you can never go back and compare. And there are lots of ways of comparing text files if you need to know the difference.
Because it is "harder" to roll back, or perhaps because it's harder to merge, different developer habits emerge.
A) signing code. As in identifying who wrote what and why. Done right there in the comments. There's no external comment section (aka checkin) so the comments go with the code.
B) there's more care over the code. Typically means fewer programmers. Typically means the code is "owned" by an actual person, who either makes the changes or is there to consult.
There's a lot less of the "hire 20 contractors to churn this out" approach. (In a low-code situation, IME, programming teams are very small, and often a single person.)
C) domain knowledge of the whole system is more valuable than "git skills". The code is not the glory. The value is in the whole system coherence, the way everything interacts in service to the big problem area.
An imperfect analogy would be painting. You can get a squadron of labor to paint a house. But if you're looking for art then you let one person do all the work.
The goal of low-code systems is to remove the cheap-labor part (coding) and allow people to focus on the unique parts (the art). Which is perhaps why it's not beloved by those with cheap-labor skills. (And, not meant as an insult, but source control is a cheap-labor skill - one which can be used by laborer and artist alike.)
I feel like you're making a bit of a strawman of the state of modern development. I don't know anyone who is valued because they are a "git ninja". Being competent with git is a prerequisite for doing any programming-at-scale in 2023 in my opinion, on par with knowing how to use the shell effectively.
Most of the places I've worked value "global domain knowledge" over specific knowledge of slices. Junior engineers understand a small slice, seniors understand a subsystem component, architects understand the whole system. There are people who specialize, but they need to be a true specialist (i.e. understand their slice VERY deeply) for their value to be appreciated.
I agree that low-code has its place, like in painting a house versus painting a masterpiece. If you need a generic, off-the-shelf solution, low-code is great. If you need a work of art, it's just the wrong tool for the job.
The problem is that a lot of companies selling low-code applications try to advertise themselves as "able to create a masterpiece". A literal example of this is Dall-E and Midjourney trying to suggest that AI-generated art could belong in a museum. Maybe it could, in some one-off outstanding case. But for the vast majority of use cases that's just not true.
Decision-makers seem to be suckers for this kind of advertising - and why not? If you promise me a Rolls-Royce for $20,000 and without any pesky mechanic-work, I'd WANT to believe it's true! But unless they have experience as a mechanic, or as someone who's purchased a lemon, or just someone with good intuition, they won't be able to resist the urge to say yes.
...is baked into git, and is something I've used at my last two employers. You could daisy-chain a bunch of tools together for that, but why? And getting back to the original point: code signing isn't usually possible on "low-code" tools.
> there's more care over the code.
I think what you mean is "the minimum threshold of carefulness required of everyone at all times is higher." I don't think it's true to say "when you toss out industry-standard safeguards, everyone does better at never making any mistakes." I do, however, think it's true to say "when you toss out industry-standard safeguards, the likelihood of making a mistake is higher because of unfamiliar, imprecise, and/or buggy tools, and the cost of each mistake is higher."
This gets back to my original question about low-code tools: when (not if) a human makes a mistake, do low-code tools guarantee fast and simple rollback? Often the answer is "no."
> domain knowledge of the whole system is more valuable than "git skills"
This is just a strawman. No job where I've ever worked has valued "git skills" over systems knowledge, and even if you dug up a job where that was the case, that's a shortcoming of the company culture, not of the tool. In my experience, it's more common that companies value low-code-tool skills over domain knowledge: they've gotten locked into their proprietary low-code tool already and are unable to select from the total pool of developers unless they do a wholesale rewrite, so they must make "knows Pega" or "knows Salesforce" their top hiring priority instead of "understands software architecture".
I build and maintain stuff in Power Apps, when it makes sense. You can export your solution as a ZIP file containing human readable-ish XML/JSON definitions of everything (GUI forms, tables, processes, etc.).
This is the version control/CD process I designed:
- We have a normal Git repository, hosted in Azure DevOps cloud
- Developers work in the "unmanaged" cloud-hosted Development environment
- When ready to move a set of changes forwards, the developer branches the repo, and runs a PowerShell script to export the solution from the Development environment to a local ZIP file, which is then automatically unzipped. The only other thing I do is reset the version number to 0.0.0 within the `solution.xml` file, because I'm pedantic and the code isn't necessarily a deployable build yet - it could be an intermediate piece of work.
- Developer reviews the changes, commits with a message that links to work items in Azure DevOps, and creates a pull request -- all standard stuff
- On the Azure DevOps side, once a PR is merged, we build a managed ZIP -- which is really just the same code, with a flag set and re-zipped up with a version number that matches the DevOps build number (in the Power Apps world, "managed" package deployments can't be customized on UAT/Production without explicit configuration allowing it)
- We then use all the normal Azure DevOps YAML pipeline stuff to move this between UAT, Staging and Production with user approvals etc. The Azure AD and Power Apps integration makes this very pleasant - everything is very easy to secure, and you don't need to pass secrets around.
- You can rollback to a specific commit using normal Git tooling or just redeploy a past build if there is one available.
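The version-reset step above is small enough to sketch. This is shown in Python rather than PowerShell to keep the thread's examples in one language, and it assumes the exported manifest contains a plain `<Version>x.y.z</Version>` element (a hypothetical helper; adapt it to the actual layout of your solution.xml):

```python
import re
from pathlib import Path

def reset_solution_version(solution_xml: Path) -> None:
    """Reset the first <Version> element in an exported solution.xml
    to 0.0.0, marking it as not-yet-deployable intermediate work.

    This is a sketch: it assumes a literal <Version>...</Version>
    element rather than fully parsing the manifest.
    """
    text = solution_xml.read_text(encoding="utf-8")
    text = re.sub(r"<Version>[^<]*</Version>", "<Version>0.0.0</Version>",
                  text, count=1)
    solution_xml.write_text(text, encoding="utf-8")
```

Running this right after unzipping the export keeps the diff noise down, since otherwise every export bumps the version number and pollutes the commit.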
That all being said, most people don't do the above. I should probably sell a training course, because it took quite some time to put all the pieces together. The bigger problem is planning ahead for changes without breaking stuff that is already live, because you generally only get one development environment for an entire team.
Do you normalise / deduplicate / otherwise clean up the xml or json files?
I'm thinking of visual studio solution files which were XML that picked up a lot of spurious churn when written by visual studio. A formative moment was discovering that colleagues edited these by hand to remove some of the noise when checking them into source control.
XML has a canonical form idea, sorts attributes and similar. Mostly wondering if opening the trees of emitted xml/json in a diff tool is a vaguely credible way of diagnosing whatever seems to be going wrong / rolling back one part of a change made in the GUI tooling.
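That diff-tool approach is reasonably credible with stock tooling: Python 3.8+ ships W3C canonical-form (C14N) serialization in the standard library, which sorts attributes and normalizes serialization quirks before you diff. A sketch:

```python
from xml.etree.ElementTree import canonicalize

def stable_form(xml_text: str) -> str:
    """Normalize XML to canonical form so text diffs ignore
    attribute order, self-closing-tag style, and whitespace noise."""
    return canonicalize(xml_text, strip_text=True)
```

Two files that differ only in how the tool happened to emit them canonicalize to identical strings, so `diff` on the canonical forms shows only real changes:

```python
a = stable_form('<root b="2" a="1"><child/></root>')
b = stable_form('<root a="1" b="2">\n  <child></child>\n</root>')
# a == b: the spurious churn disappears
```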
Node-RED has git support (via the project feature) but doesn't support any visual git operations.
I am a big fan of Node-RED, but it lacks visual version control, and without that it makes little sense to create code visually and then have to use a textual version control tool to revert changes.
There's lots of good general criticism about Low Code tools, and certainly lots of it is valid depending on the tool and the context.
Of course, just like say the SQL or NoSQL debate there are contexts where one wins, and another fails.
I did have to smile at this though;
>> "Now, rather than being able to recruit from a large pool of ubiquitous, open-source language developers, the company has to find maintainers who are very specialized in this tool."
From a company point of view this is correct. You can't just discard programmers and get a cheap, outsourced one as a replacement.
On the other hand, as a somewhat older programmer, I like the idea that I get more valuable with age not less. I like that I can't be replaced by some ubiquitous open source freelancer who works for $20 an hour.
For the record, I've been working with a low-code tool for 30 years (since long before "low code" was even a description) and I've seen thousands of one-man companies be successful developing programs where the owner's actual programming skills are, well, in most cases, somewhat poor.
Turns out, making something useful has more to do with fixing pain, and less to do with writing code.
And yes, all the points listed in the article, and these comments, are true. And ironically almost none of it actually matters.
There are a million reasons it -shouldn't- work and yet, somehow, at least in my world it does.
I think this is the key point a lot are missing. It’s about solving the problem, not necessarily doing so in a way that’s preferred or familiar to you. Low code has pros/cons, but it has a place in the industry that many developers seem too arrogant to acknowledge.
Eh the thing is that Worse is Better and "Actually available to people in Operations without having to beg for permission" is the best of all.
VBA (the Visual Basic available in Excel and Outlook) had a lot of the same issues as low code (no version control, no workflow for review or continuous testing and integration, no ability to do lots of custom things), but that did not stop it from running the planet. If you Thanos-snapped half of all VBA code out of existence, most of society would cease to function within a week.
Power Automate is going to replace VBA soon, because VBA can no longer talk to web pages thanks to the end of Internet Explorer and the lack of VBA updates. And like VBA, Power Automate has all of the same problems: no concept of code review, version control, history, or logging. Actually, it's worse.
VBA at least let you have custom Types and Classes but in Power Automate everything, literally everything, is a global variable. Power Automate is lower code than VBA. But I think it will be used even more.
Because it is already installed on almost every office computer in the world. This is the only tool ordinary Ops people will have available to them to automate stuff, so that is what will get used.
So how long until we get the scenario "Billion dollar company suffers major outage of business-critical system because person that wrote Power Automate script went on holiday for too long and their connections timed out"?
I have been since the first people at my previous employer told me it would replace all the programmers.
Then I found out the developers assigned to working on the low/no code parts had a component where they could write Java inside their low code process. There were hundreds of lines of code in each business process in the low code parts. They were writing giant methods in their process wired up by the true low/no code part.
To make matters worse the tool did not support unit testing of the custom code.
It ended up being like Dozer mapping entity to entity, then passing along to custom code, all wrapped in some crummy UI. It produced a jar file which would be deployed anyway.
Maybe the tools are better now. We had some Tibco product at the time.
This to me is true for low code tools that are aimed at non-developers. “You can write software without software developers!” But low code tools made specifically to help developers’ workflows can be great. Maybe would be good for OP to clarify type of low code tools
I think it's fair to hate low-code specifically for the lie that the only thing preventing people from writing code themselves is a slick UI.
Sure there exist applications that do equivalent things, but when people start using the word 'low-code' to describe such an application they suddenly get the weird idea that the hard part of coding is typing the damn syntax and that if only you put a GUI in front of it they can do it themselves.
Conveniently forgetting that people have made the exact same promises about the first IDEs. Pretty sure there's an ad to that effect for Turbo Pascal somewhere.
It is so, so close to being a silver-bullet tool for quick front-end CRUD that can use your AD to authenticate.
Instead though, it’s got the most absurd pricing model that even their own reps don’t understand and is missing critical or basic features, requiring strange workarounds (fucking UTC dates vs what’s actually in the DB).
They USED to have no way to reuse code as well but they fixed that.
I feel like the issue is they’re still too married to basically low/no code environments.
Having an "I'm going to need to do a custom script in whatever" option would smooth out the edges so much.
Power Automate is great, but PowerApps is just a mess where every click in the dev interface takes like 5 seconds. On paper it looks so good, but in practice it is so painful to get that on paper performance and functionality.
For unrelated reasons I've had to hop back in to tweak an app we made a few years ago that's been buzzing along in production ever since. The last time I touched it was more than a year ago.
The actual editor interface is horribly unresponsive, but it seems to fix itself if I close and reopen the browser (Firefox). Still, it gets worse with time, and it used to be nowhere near this bad.
As a bonus, functionality that works in production no longer works in the editor! I found that I had to add yet another clause to all my functions, because for some reason it no longer pulls all the columns from a SQL table unless you use ShowColumns to specify that you do in fact want all of them.
I tried to use PowerApps 3-4 years ago and it was a hot pile of garbage. Slow, impossible to use, underwhelming configuration options, no version control...a developer's nightmare
We started out with Webflow for our landing page. The problem with low code, or any abstraction in general, is that it becomes leaky the moment you want to do complex stuff. The reason Webflow worked for us initially was that we did not have to bother with hosting, CMS, etc. It became very restrictive at one point, and only a couple of people knew how to make certain changes.
Eventually we moved to Next.js and Vercel. We are faster at iterating on our landing page, and any engineer can be pulled into the implementation.
All low-code platforms are great for basic use cases but fall apart when the use cases become complex.
We went the other direction (to be fair, just building a simple static site). I’m a developer therefore I should make the marketing site! Problem is that with limited resources that took away time from building the product.
We’ve moved to various platforms over the years so that our designers with a touch of web knowledge can build our marketing site. They’re happy bc they don’t have to wait to make changes and devs are happy bc they don’t have to work on marketing sites.
The designers actually enjoy the constraints to a degree bc it simplifies their job as well.
But like I said, it’s just a simple static marketing site
Maybe I'm not clicking deep enough, but that marketing site looks like a super trivial Webflow CMS use-case.
Although if your team is all engineers pushing to git every day, I could totally see how they'd find Webflow frustrating (since a UI instead of a codebase would be a huge pattern-interrupt for their daily workflow).
That said, once your startup grows, what typically happens is you'll need to bring in specialized marketing/sales/design/SEO folks (vs. Swiss army knife SV startup hustlers). Their hours are far cheaper than engineer-hours and they're also much better at marketing/sales/design/SEO work, like your landing pages & blog.
They will not be able to change anything on your slick NextJS/Vercel setup, and will be filing tickets daily and overwhelming your engineers.
Then you'll probably have to resort to some nightmarishly bloated "headless CMS," and waste a huge amount of time on implementation, and then it will be impossible to change.
That's when it makes sense to switch back to something like Webflow.
We use one of those headless CMS already. You have a point. It works for us for now, we will see how it turns out. You need a bit of discipline to execute it.
But the big issue seems to be that they're not marketing anything to the developers. They're marketing everything at the managers -- particularly the kind of manager that doesn't really understand software.
Where I use low code is in essentially expert systems, where I need to encode the expertise of some SME. For instance, a lot of places have regulatory compliance burdens. The detailed rules for compliance change by jurisdiction constantly. Most enterprises set up some sort of requirements-to-engineer pipeline that everyone’s bitterly unhappy with. It is never fully compliant, has tons of errors, the engineers resent their jobs, and the SMEs resent the engineers. Instead, by instrumenting the system with a low-code surface area and a policy management system overlaying it, you get an excellent platform engineers like to develop on, SMEs can author and manage their policies directly, and you capture a rigorous workflow which auditors and regulators really dig. This doesn’t make the engineer less important; in fact they’re much more important, to ensure the policies are expressive enough and touch enough business data to be complete. They’re just not playing regulatory-policy telephone games in code.
There's definitely something to this. What comes to mind is things like tax compliance. It's a moving target, software devs almost universally do not want that domain knowledge and accountants/lawyers almost universally do not want to represent that knowledge in C++.
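The pattern can be illustrated with a toy sketch: SME-authored rules live as data, and engineers own the small engine that evaluates them. (The rule format, field names, and rule ids below are invented for illustration; real systems use policy DSLs or managed rule tables.)

```python
# SME-owned: rules are data (a managed table, YAML file, etc.), not code.
# They change per jurisdiction without a code deployment.
RULES = [
    {"id": "KYC-1", "field": "customer_age", "op": ">=", "value": 18},
    {"id": "AML-3", "field": "transfer_amount", "op": "<=", "value": 10_000},
]

# Engineer-owned: the fixed vocabulary of operators the rules may use.
OPS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
}

def evaluate(record: dict, rules=RULES) -> list[str]:
    """Return the ids of the rules this record violates.

    The engine stays stable and well-tested; the engineer's real job
    is making sure the rule vocabulary is expressive enough.
    """
    return [r["id"] for r in rules
            if not OPS[r["op"]](record[r["field"]], r["value"])]
```

The audit story falls out almost for free: the rules table is itself the artifact regulators review, with no translation step through a developer.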
I'd find this article more useful if it named names - I still only have a loose idea in my head as to what qualifies as a "low-code tool". I'd like to know which low-code tools in particular the author has experience with.
Salesforce is the most well-known example, but it’s so popular that it’s actually not hard at all to hire experienced developers who know its custom server-side language and all its workflow automation and reporting tools. They just hate their lives.
Will add! For what it's worth, it's just my own internal list of tools in this space.
I have a few dozen such lists, I'll review one or two of them when building new things in a specific domain that might benefit from such tooling.
No requirement for open source, but I call it out explicitly in my notes since I do mostly use open source/self-hostable tooling, especially when working with nonprofits.
Maybe it’s just me, but I assumed almost all devs with basic coding ability would be against low code. It has so many issues, from poor version control to never (and I do mean never - would love to be proven wrong) doing more than or even meeting the feature set of whatever competing “some code” solution offers.
In the end, you end up with either less skilled people getting in too deep, or highly skilled people unable to fully avail their skills. I get the appeal, but it always seemed like a local optima to me. It turns out code is a pretty good interface.
That's what people don't seem to get. "Low code" is literally just a software library with a gui on top.
They could just provide the software library as a standalone but then there's too much competition and the competition is often free to use. So they don't even bother because their whole product is redundant without the GUI, and the GUI itself is just a pointless obstacle to a developer.
A programming language is a pointless obstacle in the way of writing the compiler IR directly. And the compiler IR is a pointless obstacle in the way of writing the machine code directly.
Low code GUIs are a weird code generator, just the same as all the other programming languages.
Whether it's an obfuscation or an enabler depends on your perspective and what you're trying to do with it.
In particular, programming languages are pretty much a library (called the compiler runtime) with a text user interface on top.
No, a programming language is a useful tool. There is absolutely nothing stopping you from writing machine code directly, go right ahead if you think that's easier for you.
Such as in the implementation of a JIT. There emitting the machine code directly is less bother than emitting assembly and much less bother than emitting C, compiling it and then loading it into the address space.
The low code systems being derided here are programming languages. They're languages you use to program computers with.
I suppose one could define "programming language" to be "a language for programming computers that is useful in some context", but they're pretty much all useful in some context. Maybe one could define it as being mandatory to use text editing to count as a programming language, but the Smalltalk instances are something of a counter argument, as are the UML things.
You say low code systems are programming languages, I said they're GUIs on top of software libraries. So in essence we both just said low code systems are graphical programming languages, in different words.
The problem with these graphical programming languages is that they are constrained. To a developer like me they take away my freedom to do what I want, especially if they attempt to target non-developers. And even if there existed a totally complete general purpose graphical programming language, it would just be another programming language with the caveat that instead of just writing whatever I want I have to find it in some menu and drag it somewhere and whatever else. It's just a less convenient way of doing the same thing.
You're right in a broad sense and wrong in a somewhat subtle one. A major part of the value add of any programming language is what it does not let you express.
If you move from asm to C, suddenly goto is stuck within a single function and calling conventions are absurdly constrained. If you move further up the stack goto might be removed entirely and function calls might throw an exception if the calling convention is violated. People like this restriction.
CPython has id() to get the address of a value. Really convenient. Totally messes up alternative implementations of the language. Other languages won't let you talk about the address of a variable at all.
Removing capability from the programmer helps communication with other programmers, simplifies parts of the implementation, sometimes simplifies writing programs in the subset that remains.
Working in a non-turing complete language can give you hard guarantees on termination. If you go as far as regular languages, function equality is decidable. As in convert to a DFA, minimise it and alpha-rename, and the two functions compute the same result iff said DFAs are definitionally equal. You don't get that with a general purpose language.
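For the curious, the decidability claim is easy to demonstrate. Rather than minimizing and alpha-renaming, the sketch below decides equivalence directly by walking the product of the two DFAs (same answer, simpler code). It assumes complete DFAs given as (start state, accepting set, transition dict):

```python
from collections import deque

def dfa_equivalent(dfa1, dfa2) -> bool:
    """Decide whether two complete DFAs accept the same language.

    BFS over reachable pairs of states: the languages differ iff some
    reachable pair disagrees on acceptance. This always terminates
    because the pair space is finite, exactly the guarantee you give
    up once the language is Turing complete.
    """
    start1, accept1, trans1 = dfa1
    start2, accept2, trans2 = dfa2
    alphabet = {sym for (_, sym) in trans1} | {sym for (_, sym) in trans2}
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in accept1) != (s2 in accept2):
            return False  # a distinguishing string reaches this pair
        for sym in alphabet:
            nxt = (trans1[(s1, sym)], trans2[(s2, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

Two structurally different machines for "even number of 1s" come out equal; flip the accepting set and they don't. That kind of whole-program question is simply unanswerable for a general-purpose language.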
Regardless, I share your irritation with low code systems to the extent I've tried to do things with them so far. An escape hatch to java is really not a thing I want. A wholly general purpose Turing complete graphical programming system sounds awful to me.
That doesn't make the fundamental premise of a heavily restricted language with a weird interface wrong though. All programming languages are more or less heavily restricted already, and a bunch of them have weird interfaces.
I believe low code style systems are the right things to program some applications in for much the same reasons that config languages shouldn't be Turing complete.
Less clear is whether any of the current low code tools (other than Excel anyway) deliver on that optimistic view.
Man, y'all haven't been around software consulting long enough. First, you sell low code on the premise that you don't need coders to make your software. Then, for custom functionality, you explain that you need developers certified on the platform. Finally, you make sure to not document anything, so that consulting firms with legacy experience have a monopoly on developer talent, keeping them happy and recommending your software to others to score large consulting contracts.
Everyone is happy! Especially the people who actually have to use the software made this way! As we all know, enterprise software has a reputation for being extremely user friendly...
Talend is an ETL tool: Extract data from a data source (file, database, API), Transform it, and Load it into another data source (file, database, API), all in a pipeline. Talend's happy path is the same as Unix shell's happy path: A pipeline of data with no side-effects and no globals. Try to deviate from that and you hit walls, some more climbable than others.
For example, Talend absolutely will not allow you to re-combine a pipeline you've split; you can combine two separate data sources, and you can split to two separate sinks, but you absolutely cannot go from one pipeline to two pipelines and back down to one.
The saving grace is that Talend is fundamentally a GUI for writing Java. You can see the Java it's writing, and you can add your own Java in a few ways, from adding jar files to plopping down a pipeline component that lets you insert your own Java into the big God Method that the rest of your pipeline is helping to form. Be sure you know how scoping works if you do that.
In extremis, you can "use Talend" in that you use the Talend GUI as an Eclipse IDE to write the one pipeline component that does everything because it contains all of the useful Java code. You can use it the way I often used it, to take care of some tedious "plumbing" code to route the pieces of data created by the hand-written component to their respective outputs, including handling some minor output conditioning that can be done in a stateless fashion. It can even be used as it was intended to be used, as odd as that sounds.
Code is a way of explaining rules. If you have a lot of complicated rules, whatever system you use to represent them is going to be complicated. The best-case scenario for low code is when the rules are simple or very predictable. In my experience, that is very rarely the case, no matter how simple a problem seems from the outset.
I’ve seen this exact thing: a “low code” engineer creates an app, we ask for some changes, and all of a sudden it “can’t be done”. It is almost comical, tbh.
For compilers or anything else, if you have a bunch of spaghetti code, the first step is always to clean up the code. Refactor it to the point where you can easily swap in your low code module. Then determine whether it's still worth it to do so.
What happens all too often is a hack week project that brings in some low code functionality in some simple use case gets shown off and kicks off a bigger effort. But then the edge cases buried in the spaghetti code come up in some way tangential to the low code approach, so it gets hacked in. This happens over and over, and eventually your low code model is just as spaghettified as your code was, except now that spaghetti spans yaml, some interpretation layer, and the remaining two thirds of the original code that you haven't migrated yet.
For this I'll coin my law of declarative intentions: the resulting low code cannot be simpler than well factored code itself.
In my experience, low-code is “easy software development” but without the guard-rails of version control, debugging, open standards, local development, unit-testing etc etc…
These are the things that really matter as software complexity grows…
It's true that most low-code solutions have no version control, but that's not because it's impossible; it's because no one builds version control, since low-code solutions aren't taken seriously. And without version control, a low-code solution won't be taken seriously.
I have been using Node-RED[1] for some time now and have begun creating a visual version control concept. The difficulty is differentiating between visual changes and logical changes; visual changes aren't usually the cause of bugs.
[1]="Low-code programming for event-driven applications" - https://nodered.org
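For what it's worth, Node-RED stores flows as JSON in which each node carries x/y canvas coordinates, so one rough sketch of a "logical diff" is to mask the purely visual fields before comparing. The flow snippets below are illustrative, not real exports:

```python
import json

# Attributes that only affect how the flow is drawn, not what it does.
VISUAL_KEYS = {'x', 'y'}

def logical_view(flow_json):
    """Project a Node-RED flow export down to its behaviour-relevant fields."""
    return [{k: v for k, v in node.items() if k not in VISUAL_KEYS}
            for node in json.loads(flow_json)]

def logically_equal(a, b):
    """True when two flow exports differ only in node placement."""
    return logical_view(a) == logical_view(b)

moved   = '[{"id":"n1","type":"inject","x":100,"y":40,"wires":[["n2"]]}]'
same    = '[{"id":"n1","type":"inject","x":300,"y":90,"wires":[["n2"]]}]'
rewired = '[{"id":"n1","type":"inject","x":100,"y":40,"wires":[[]]}]'
```

With this view, moving a node (moved vs. same) is no change at all, while rewiring it (moved vs. rewired) is a logical change worth committing.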
These tools can be useful for non-devs to make very basic things, but they won’t be able to handle anything complex, and it’s hard to test and prove the correctness of the result. If you take it far enough you are always going to end up hacking around the tool.
For people who do write code these tools could be useful, but code is much easier to use as you are not stuck within the constraints of the tool and know what you are doing.
I think there is one success story for low code, and it’s basic websites, mostly the part where a design is turned into code. I use Framer because it’s objectively quicker than code, with great results (I even design in Framer), and once it’s set up non-technical colleagues can make changes. Great for a landing page, but it won’t work for apps.
I think the article is right in pinpointing the logic as the problem. Coding is not about syntax and the language, it’s about the logic and ability to map that out into a program with all the edge cases and details.
My experience with low code is it's easy to get an 80 to 90% solution. Getting that last 10 to 20% may be very difficult. Also, source control, versioning, deployment may be open questions depending on the platform and project. Additionally, you will be locked in to a proprietary platform. Perhaps that is okay.
The hard part about programming isn't code. It is wrangling complexity: understanding cause and effect, anticipating edge cases, and, through clever design choices, avoiding catastrophic (or sometimes just annoying) problems from the start.
Low code is a dumb person's idea of how to make programming easier: the kind of idea held by someone who thinks memorizing all the weird letters and signs is what makes programming hard. The result will be non-programmers learning to become programmers by failure, in production, on the job.
I always welcome things that bring complex system interactions closer to learners. But it comes with considerable risk : )
The hard part of programming should be the problem you're solving. Fairly frequently, the hard part of programming is all the incidental crap that gets in the way.
You change the existing system slightly and a bunch of not obviously related parts fall over. Someone updates the OS on the CI box and now you're dealing with dependencies with dubious ideas about semver for the rest of the day. Nothing changed overnight but now the performance bot is reporting a regression.
Yes and the problem you are solving is always about complexity. It is extremely rare that you have a problem where your solution can be a simple black box with a few inputs and one output.
Most of the problems have many inputs, some of them malformed, many outputs, race conditions, multiple writers, time dependencies, weird edge cases, etc.
If you really wanted to punish me you would force me to reimplement the good software I wrote in no-code tool. And I am a fan of node editors in VFX. I know how to wrangle complex node graphs efficiently. But in VFX you have a clear direction of flow and it is the simple black box described above.
Structured text is a great and extremely efficient way of describing, editing and manipulating programs. Your node editor has to beat that for anything non-trivial — because even the simplest scripts I wrote for my employers had to handle any number of non-trivial cases.
You are aware that the dependency problem would be literally the same if your dependencies were made out of nodes and connections? Only now the debugging is harder. If you hate dependencies fucking your day up in code, you could always just not have them and write everything yourself, like our fictional node wrangler would have to. I mean, I can feel you. But nodes don't make these problems go away, and structured text didn't create them.
I meant what I said about complexity. This is not about code, it is about complex systems. A programmer who knows race conditions is also likely to predict one in a purely people-based procedure that doesn't involve computers. A programmer who knows about distributed systems will likely predict the same problems emerging between multiple distinct departments of a company, etc.
If you want to tackle a problem involving complexity, you get people who know how to do that, and you don't care whether they use nodes or structured text as long as it gets the job done. Or you could make the tools look harmless and give that power to people who have no idea what they are doing. In many cases it might go fine; in many it will lead to much more complexity by the time you finally hire someone to clean it up.
IMVHO, "low code" environments have a name: end-user programming. The thing the entire industry has done its best to destroy ever since Xerox and Symbolics.
The rest is the nth failed attempt to keep users jailed by ignorance, selling the idea that people can do things without knowing things. It will fail like every attempt before it.
Microsoft Excel is end-user programming. The amount of businesses that have critical processes done entirely in Excel is both impressive and terrifying.
It is basically a database, reporting tool, IDE and half a dozen other things all rolled into one giant mess.
I know an SMB that has all its operations on Google Sheets. They told me they don't have time to learn any tool (all their efforts go to keep the business running), and they don't have the money to hire a full-time dev. I offered them the opportunity to develop their platform pro bono. They're thrilled since they will be the first business with that type of platform.
This applies up, as well. I was involved in talks with a household name (in the US at least) retail store that still managed purchasing for all of their locations via spreadsheet, and had a team of 8 people who were allowed to touch it at any given point to reconcile inventory.
They had looked at getting a proper purpose-built system in place, but I don't know what happened after that.
It's a mess because it tries to be "made for ignorant users" instead of choosing the classic desktop way.
First: in classic systems the OS is a single application/framework where anything, or nearly anything, is accessible to end users. There is no special integration limit forcing devs to reinvent the wheel to get everything into a single modern app, and to reinvent it badly since they lack the time and resources even at big-tech size, nor any need to invent ways to circumvent system design limits.
Being a single app means that if I have a CAS installed, I can solve an ODE in an email just by typing it and passing the math expression to the relevant CAS function. No need to reinvent a basic (or less basic) calculator, and no need to know how to build one. I just access the relevant functions, written by some expert and tuned for that purpose. The dev does not need to know; it's the user who knows and uses.
Secondly, it means an incredible simplification. Take a Plan 9 mail client: what kind of beast is it? It's just a base64-to-text reader and writer, nothing more. The connection to the mail server is just a universal system connection to a remote file server, a remote filesystem mounted somewhere under the local root. All you need to know is how to access a local fs, read text files, and read/encode base64. Sending an email is just saving a text file with a given name in the right place. Publishing a website is the same, BTW: it's sharing a file. Being a single app-framework means everything is simpler; there is not much duplication of functions, only tuning.
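A toy sketch of that everything-is-a-file mail model, using a local directory to stand in for the mounted remote filesystem (the directory layout is illustrative, not Plan 9's actual upas/fs tree):

```python
import tempfile
from pathlib import Path

def send_mail(mail_root, to, body):
    """'Sending' under the everything-is-a-file model is just writing a
    text file into the mounted mail filesystem."""
    msg = Path(mail_root) / 'outgoing' / to
    msg.parent.mkdir(parents=True, exist_ok=True)
    msg.write_text(body)
    return msg

def read_mail(mail_root, box='outgoing'):
    """Reading mail is just reading text files from the same mount."""
    return [p.read_text() for p in sorted((Path(mail_root) / box).glob('*'))]

# A temp directory stands in for the remote fs mounted under local root.
root = tempfile.mkdtemp()
send_mail(root, 'alice', 'hello from the file system')
```

The client never speaks a wire protocol; the filesystem layer does, which is the whole simplification being described.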
Now look at an example of modern Emacs in action, like https://youtu.be/B6jfrrwR10k. How complicated is creating a slide? You just zoom some org-mode text. A table like a spreadsheet? It's just text, and can be passed to any programming language supported by org-babel as data.
Doing so means no lock-in is possible and the end user stays in control. That's why the whole modern IT industry, starting back then with the not-so-modern IBM, has demolished this model, and the result is a mess. Decade after decade we drift back toward the old model, wasting gazillions of resources to keep an untenable business model going.
The opinions of developers about low code solutions are not very relevant.
Low/no code solutions exist for when there isn't enough time or budget for a developer-created solution, which is to say, pretty much always. In the real world of business problems, almost nobody has access to a professional developer.
Sure, but they are there for the /big/ problems. The myriad of small real-life problems that sometimes are solved by an intern with Excel and VBA are way too small to get handed to the professionals. Those things that would cost just 2 developer days are never handled (because the dev org doesn't keep a bench of developers sitting around for these small one-off tasks and thinks in multi-year projects).
Why would you assume this? The company that I work for (~ 1000 employees) has one part time employee who manages sharepoint. Any new software we wanted had to go through the procurement pipeline before we discovered we could use powerapps without asking anybody's permission.
My low-code horror story contribution. Big Important Company decided to develop some in-house learning courses for their Marketing and Sales folks. They used a trendy-for-the-time low-code learning course creation software package to develop the courses, and a different piece of software to deliver it to staff on their on-site intranet.
A few years go by. Someone in Big Important Company decides the courses need updating. Work ends on my desk as my bosses have the contract to maintain various other bits of BIC's creaking self-hosted intranet. I ask questions like "What software did BIC use to develop the courses?", and "can we charge BIC the cost of getting the software so I can do the work?" Of course, the answers are all along the lines of: no.
So all I have to work with is an extensive set of hefty change requirements to the courses, and a SCORM package downloaded from BIC's intranet. A huge amount of learning (how to manually edit SCORM packages) and frustration (how to test changes to SCORM packages), and a few months of time when I could've been doing something better without deadlines harassment ... I completed the work. Which I wasn't allowed to test on BIC's intranet servers for "security". So I just uploaded the new SCORM package to their production servers and went on 2 weeks leave.
Power Automate Cloud already has an AI builder, you type in "At 4pm every day check my Outlook inbox for an email from Suzie Snowflake, and if it says Critical, then check Sharepoint for a file from Integration Services and if its contents contain the string "Blorb" then send an email to Escalation"
the result is often slightly wrong, but if someone in MS management is paying attention and could get it to ChatGPT-python level of accuracy, it would take over an enormous amount of stuff.
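Spelled out as plain code, the flow in that prompt is only a few conditionals, which is what makes "slightly wrong" so costly. The three callables below are hypothetical stand-ins for the Outlook, SharePoint, and outbound-mail connectors:

```python
def run_daily_check(fetch_inbox, read_sharepoint_file, send_email):
    """The 4pm flow from the prompt above: scan the inbox, and on a
    Critical email from Suzie Snowflake, check the SharePoint file
    for "Blorb" before escalating. Returns True if an escalation
    email was sent."""
    for msg in fetch_inbox():
        if msg['from'] != 'Suzie Snowflake' or 'Critical' not in msg['body']:
            continue
        if 'Blorb' in read_sharepoint_file('Integration Services'):
            send_email(to='Escalation', subject='Critical: Blorb detected')
            return True
    return False
```

A single flipped condition in generated output here silently drops escalations, which is why "slightly wrong" needs a human in the loop today.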
Software Engineering will be fine: it has been eating itself since Lisp invented macros. Maybe software engineering should be called “The operations of automation using data on general purpose machines”. That will not die. It will look different though.
As someone who worked at a low code workflow SaaS, I agree with this. The automation is a commodity, the value was in discovering that people wanted to talk to the LLM to orchestrate their automation vs bespoke workflow design UX. You can also decouple models from API scaffolding like Gorilla LLM (or rather, broad models that dive deep into use case specific models on demand).
Doesn’t matter, as long as users are getting value. How you solve the problem is less important than solving the problem. I expect LLM output to improve over time, along with mechanisms to coax deterministic behavior.
At my workplace they are replacing spreadsheets with web apps. The users are very proficient with Excel. I would rather set the data free and let people play with it in spreadsheets however they want. For complicated functions you can make an add-in if you really need it, which I understand are a pain to deploy and manage. Web apps are twice as much work though and more restrictive for users. Plus we have a big cloud bill now and everyone’s beefy workstations are idle. I shouldn’t complain though, this stuff keeps me employed.
A great example of a low code tool that works is Unreal Blueprints. Full games are made with blueprints. It's not for everyone and it doesn't support text diff but it plays well enough with source control and other tooling.
I think this only works because a lot of games can be shipped and never touched again. A self-contained single-player experience doesn't really need to be maintainable.
I guess I don't have the data to refute you but Unreal is used for many live games; Fortnite, for example. They're probably not all 100% blueprints but BPs are used extensively.
From what I can tell, the market for these low-code tools is as much non-tech managers as it is the actual users. Slick demos go a long way to convincing people who own budgets and are tired of expensive engineering salaries.
Which I can understand, because engineers are expensive and behave in ways that are unintuitive for non-tech people.
But I fear “generative AI coding tools” appeals to the same people for the same reasons, leading to the same results.
Python is low-code compared to assembler. SQL is low-code.
Much discussion around low-code (and no-code) misses that it is just abstraction. The question is not whether we should use abstractions, but what constitutes good and bad abstractions.
Another term for "low code" is end-user programming.
> Despite forty years of commercial products, open source, and deep academic work, we have yet to reach an end-user programming utopia. In fact, the opposite: today our computing devices are less programmable and less customizable than ever before.
COBOL, BASIC, SQL. HTML, CSS, JavaScript. All these started as relatively low-code solutions to what came before. And surely textual programming languages aren't the only way to empower users to take control of personal computing. Excel formulae to express dynamic values. Visual builders that internally compile to syntax trees.
They all have their own problems and room for improvement, but one could say the same for "high code" programming languages. It's a means to an end, and for computer users who don't have time to dedicate to become a high-coder, they will use whatever is available to solve their needs.
The only 4GL I know of that has stood the test of time is SQL. Every other one ended up an embarrassing wart of "technical debt". The same could be said of most "enterprise software". For those who don't know, enterprise software is code for "it's 70% done and we will charge you well to finish it".
I make a SQL-only website builder (SQLPage) that could be qualified as low code. I think all the points mentioned are valid, but some of them are easy to work around:
> They wanted truly custom functionality that the low-code solution could not handle.
It's important that the low-code solution has an escape hatch, or a way to interact with external "high-code" APIs. In SQLPage, we have sqlpage.exec.
> They implemented a bunch of custom functionality in a product-specific or even proprietary language and now their pool of potential developer talent is tiny.
I agree that low-code makes sense only if the low code is in a standard, popular language. In SQLPage, it's all just SQL.
> Upgrades to the low-code platform would break their custom implementation.
This is a true problem. The low-code solution really has to be careful with updates.
> The underlying database structure was an absolute mess, especially after a bunch of incremental modifications
This! Most low-code tools take your data hostage. You shouldn't use them. In SQLPage, we add a layer on top of a database that you still fully control.
I'm a founder of Appsmith, a well-known open-source low-code platform. We offer an alternative to tools like Outsystems, Retool, and PowerApps. I often meet people who share the skepticism about low-code seen in this post.
It's crucial to understand the advantages of low-code for specific situations, especially in developing internal applications. These tools, often neglected due to limited resources or developer disinterest, can be rapidly and effectively created with low-code solutions. Low-code excels in scenarios where internal teams are overwhelmed and business users face long waits for features. Although it's not a cure-all, low-code is highly effective for certain tasks where speed and ease are key, despite sometimes facing limitations in customization.
Some modern low code tools are also evolving to be closer to web frameworks. Plasmic is one my favourite examples of this.
> Some modern low code tools are also evolving to be closer to web frameworks. Plasmic is one my favourite examples of this.
So this hits on one of my questions/observations. How different is low code to the generators of various web frameworks? I'm thinking things like Rails, Django and the like where a lot of the "low code" appears to be templatized. They sure feel VERY similar.
People have the wrong mindset about all this. The most continuously integrated / continually deployed code process in the world is desktop publishing. You draw. You press print. Something prints.
Contrast to printing per se: You have some text and graphics. You do wire frames. Now you have to decide about the next part of the process: are you going to go with litho or a more traditional layout process? Now you do either cut & paste or cast / etch / burnish plates and / or masks. Now you print, which actually involves doing multiple prints in different colors which all have to line up. (And that's the simplified version, it's a lot more complicated than that.)
Desktop publishing is actually a code development process. A Postscript(r) printer knows a programming language called "postscript" which interfaces to something in the printing engine which is called the "raster image processor". Postscript programs have libraries, and the printer gets transpiled code for the RIP to execute. (Does this sound like your web dev shop?) The location of the RIP has meandered back and forth on the workstation <-> server <-> print engine continuum due to various factors; this looks a whole lot like the compute / display continuum and the endless debate about how smart a display device needs to be.
Now, the color is never quite as good with DTP, mach banding, dithering, blah blah blah. Like I tell my wife, the artist, the only person who knows that's supposed to be insane blue, and that over there is outrageous orange is you. (When she prints cards she secretly does hand touchups with the true pigments which are a mote in her mind's eye.)
For most things on the planet, normal people will settle for desktop publishing's shittiness. What's not shitty about DTP is the democratization. Instead of one person publishing to a million, it's (potentially) one to one.
I wish, I truly do, that all of this mattered for the web but it doesn't. The web is largely free, and at the scale of free everybody gets the same unique experience. On the other hand if I have a $1,000,000 warehouse facility even if there are 100 people working there I want them to have the best experience possible. I'm going to tailor that experience based on job function, where in the facility they operate, what they do. If someone is missing a finger, they should have an interface which takes account of that. $50,000 piece of machinery with one user? Optimize that user to optimize that machine. Welding gloves, or gloves covered in fish guts? Special interface for you too.
A good low code solution comes with OT drivers, speaks to SQL and microservices; produces true and correct (legally speaking) audit / transaction / snapshot logs (because if someone gets poisoned by the product or loses a hand working with the machinery...); has MES (workflow) as a low code function.
"Low code" is not just literally some software library with a GUI on top. There is a whole CI / CD pipeline which kicks into gear. What does that GUI do, and what does it look like? In some fashion or other it lets you compose the application GUI; it doesn't have to be WYSIWYG like DTP or composing a slide deck, but many are. Although the GUI is ubiquitous, it isn't necessary. What is necessary is the CI / CD and devops pipeline in a box and the extensible ETL plan which allows it to be tied to business logic.
I think the point are valid but it comes back to a much simpler idea - what’s the right tool for the job. Figure out what you want to do then find the right level of abstraction to get it done. And salespeople are one data point, generally not to be relied upon. Or have I over simplified?
I agree. It isn't useful to think of "low code" or not. You think of what the tool offers and if it can be extended easily to do what you want. If you choose it and it isn't as advertised or doesn't fit, it was the wrong choice. We rely on all sorts of "low code" stuff and it works great because it's battle proven and provides the right abstractions. "Low code" is a misnomer, just say "esoteric specialized tool that isn't customizable".
My personal view is that you're right to be skeptical, but perhaps not for the reasons you mention.
I think that GPT-4 today is good enough to replace about 80% of programmers in the right hands. Put differently: we probably don't need bootcamp grads anymore. The folks who keep their jobs are the ones who intuitively grasp what needs to be done, how to break that ask down into iterative tasks, and how to word those tasks as prompts.
Instead of scrambling to replace application stacks with layers of dumbed down abstractions, we are actually replacing the less experienced people working with application stacks.
Dramatically better outcome for everyone but the people who thought they could take a bootcamp and create generational wealth.
Last night, I spent about five hours just beginning to think about reverse engineering a proprietary file format that packs multiple MIDI files so that I could extract them. There was a whole lot of reading the MIDI spec, searching for strings in a hex viewer and calculating values in a hex to decimal calculator. I didn't write any code in this time, just satisfied myself that it was possible.
Today, I asked GPT to do it for me, and it basically wrote the program for me. I did 1-2 follow-up requests, but I figure that it saved me about 2-3 days of effort. It's relevant to say that I wouldn't have ever actually proceeded with that project because 2-3 days is not a luxury I can afford.
Now, someone who regularly works with MIDI files - heck, someone who regularly works with binary files - could probably do it in a few hours, but this entire process took minutes. It took longer for me to verify the results as perfect than it did to interact with the GPT instance.
I assume that the person who downvoted my comment is a bootcamp grad. Good luck with your future endeavors.
> Now, someone who regularly works with MIDI files - heck, someone who regularly works with binary files - could probably do it in a few hours, but this entire process took minutes.
In my experience, this is the key area where it excels, but this isn't all that common. It's hard to imagine a 10x productivity boost unless you're a jack of all trades, master of none.
In my experience it’s really useful when you need to do something in a domain you aren’t super familiar with (and don’t need to become proficient in). But when it comes to the everyday stuff in my comfort zone, I don’t need it. I’m years past looking up syntax or standard library functions for the languages I work in regularly. Every time I’ve tried to use it for a remotely complex algorithm, I’ve found it much faster to implement by hand than to try to debug whatever it got wrong. But for makefiles and GH Actions it’s a true godsend, and those things can be major time sinks.
I’m also a developer who really likes to know what my code is doing. If I can’t code something myself I don’t feel comfortable deploying it. But I’ve found this is less universal of a sentiment than I would have imagined in the past year.
It looks like you got lucky and this proprietary format is nothing more than standard MIDI file concatenated together with perhaps some additional data that you are able to ignore +/- some header patch. Frankly this barely qualifies as reverse engineering, at least it represents some trivial case, I mean I'm happy it was easy, but reverse engineering just rarely ever works out so straightforward.
And I would expect someone with competence in scripting language of choice to pop out that script which is a loop and file IO in a few minutes, not hours (assuming it is even correct). And if they have a basic experience working with binary files should know how to google the necessary info about MIDI in seconds.
However looking at the transcript I am also confused because it says (correctly): MIDI files typically start with the header "MThd" followed by the header length, format type, number of tracks, and division. It goes on: "Once a MIDI section is found, we'll extract it according to the MIDI file structure". OK.
But the script does NOT do that it reads 4 bytes starting from offset 8 as a 32-bit big endian "length" which is not "according to the MIDI file structure". The standard format is 2 bytes for a format specifier (AKA type) (0, 1, or 2), and then 2 bytes for the number of tracks.
ie, this is wrong in some way:
# Read the MIDI header and length (14 bytes in total: 'MThd' + 4 bytes for header length + 6 bytes header data)
midi_chunk = io.read(14)
# Extracting the length of the MIDI data from the header
midi_data_length = midi_chunk[8..11].unpack('N').first
So either the proprietary format you're dealing with actually does have a variation on the header of the embedded MIDI file.
If that's the case, I would have to deduct points from ChatGPT because I would expect a competent developer to comment/document this fact, no where in the transcript is this stated.
The other possibility I can see is that if your file is a bunch of standard Type 1 MIDI files, the unpack/parse is going to read that as 65536 + some small amount and will extract files that are all around that size. Since the next step is to look for another MThd magic it will just gleefully resync (I assume these are small segments), but you will end up skipping a whole bunch of files and they will be unceremoniously tacked onto others (which will just be ignored in many players).
So what did it end up being?
If it was the second case, I would also be suspicious that a first crack LLM follow-up "fix" isn't subtly wrong and prone to false splits.
On further thought, how could it be the first case? If it were the outputted files are not standard MIDI. So something is fucked here. Either you have something totally broken or you have further follow-up and we have to believe it is not subtly broken.
"There was a whole lot of reading the MIDI spec, searching for strings in a hex viewer and calculating values in a hex to decimal calculator."
One pearl I would lend in relation to this: use your REPL, that is a productivity accelerator.
I am also sincerely interested in examples of LLMs reverse engineering something with compression or encryption or some checksum, or like some actual complicated structure that has to be teased out (this is something humans do all the time), maybe something that is most easily solved by cracking open the compiled parser, I'm not saying they can't do it, but plainly put this example is too trivial to be interesting and frankly barely qualifies as reverse engineering at least insofar as some sort of RE Turing Test analogue.
----
If the format works the way I think it does (and this is based on nothing more than general experience and this thread, so give me a break), the only robust way to deal with this is to either figure out where in the proprietary data some type of length field is, and clearly ChatGPT was not going in that direction, nor do I believe it would be able to divine that information from a file upload. Or to use this slightly wonkier method but actually read every MIDI chunk header, since standard MIDI has no total file size length encoded in it. The loop should be: look for MThd, read the NEXT 4 bytes for the length, skip, read and write out chunks (ie 4 byte magic followed by 4 byte length), split when chunk type not seen (that's what makes this a bit fragile, but its probably good enough). If you just look for MThd, you'll split if the MIDI data has an 'MThd' in it.
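A sketch of that chunk-walking loop in Python, using the standard MIDI chunk layout (4-byte tag plus 4-byte big-endian length); the break on an unrecognised tag is the "a bit fragile, but probably good enough" part:

```python
import struct

KNOWN_TAGS = {b'MThd', b'MTrk'}

def split_midis(data):
    """Split concatenated standard MIDI files by walking real chunk
    headers instead of just searching for 'MThd', so an 'MThd' that
    happens to appear inside track data can't cause a false split."""
    files = []
    i = data.find(b'MThd')
    while i != -1:
        start, j = i, i
        while j + 8 <= len(data):
            tag = data[j:j + 4]
            if tag not in KNOWN_TAGS or (tag == b'MThd' and j != start):
                break  # end of this file: next header or proprietary glue
            (length,) = struct.unpack('>I', data[j + 4:j + 8])
            j += 8 + length
        files.append(data[start:j])
        i = data.find(b'MThd', max(j, start + 1))
    return files

# Minimal format-0 file: MThd header plus one track ending in End of Track.
midi = (b'MThd' + struct.pack('>IHHH', 6, 0, 1, 96)
        + b'MTrk' + struct.pack('>I', 4) + b'\x00\xff\x2f\x00')
# Same header, but the track bytes themselves contain the 'MThd' string.
tricky = (b'MThd' + struct.pack('>IHHH', 6, 0, 1, 96)
          + b'MTrk' + struct.pack('>I', 8) + b'\x00MThd\xff\x2f\x00')
```

Splitting midi + glue + midi yields two clean files, and tricky stays one file even though a naive MThd search would cut it in half. A length field hidden in the proprietary glue would still be the more robust anchor if it can be found.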
Ha! First: I appreciate the detailed and thoughtful reply, even if I feel wildly judged.
It's distinctly possible that you're simply "better" at reverse engineering than I am, which really just means that you might do it frequently and I might do it a few times a decade. This isn't going to keep me up tonight, because my identity isn't tied to being someone who reverse engineers things.
That said, I am pretty thrilled with this solution. I launched a web-enabled version last night and so far about 1100 people have used it to convert 6800 files after I replied to some posts on relevant musician forums around the web.
In my defense, what you're not taking into consideration is that until 48 hours ago, I'd never looked at the MIDI spec or opened a MIDI file. You clearly have a huge amount of domain knowledge that I don't pretend to have.
I also, shocking as it may seem, haven't worked with binary formats in over a decade. I'm a web developer. Binary formats aren't an alien mystery to me, but all of the tools for working with them had to be re-learned as I was working on this.
Anyhow, don't fall into the trap of equating typing speed with the time it takes to learn a domain and consider (design) an approach. If I could think at the speed I can type, John Carmack would have nothing on me.
In the end, I absolutely did get lucky. The proprietary format was, as you proposed, a bunch of 1 track/format 0 MIDI files, bounded by hierarchy metadata that was discarded.
Curiosity did get the better of me and you seem to be spamming every fucking forum so WTH. The longest time it took for anything was waiting for the file to download. The rest of this was about 5 minutes of effort. For the record I do not know the MIDI spec, but it must be one of the most easy to google and well documented things out there, and it's pretty simple likely because it's nearly 40 years old and had to run on potatoes.
The file you could have reverse engineered is not enough of a challenge for an interview question in a low level/systems field and yet you didn't even attempt to do that part. The file hierarchy is absolute offsets and sizes in 4 byte little endian quantities. It took less than 30 seconds to figure that out. How? Find MThd string, what offset is it at, search for that offset as a value. Notice that it is adjacent to a null terminated string with a name ending in .mid and a small quantity. Is that small quantity added to the original offset the start of the next MThd, yes? Done. Anyone with the shittiest hex editor and Ctrl-F can do this in a minute.
Rebuilding the hierarchy is quite simple since the absolute file offset to the entries is adjacent to every directory name.
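For illustration only: the exact entry layout is a guess from the description above (a null-terminated printable-ASCII name ending in .mid, immediately followed by 4-byte little-endian offset and size fields; the field order is an assumption). Under that assumption, the scan is a few lines:

```python
import re
import struct

def scan_entries(blob: bytes) -> list[tuple[str, int, int]]:
    """Find null-terminated printable-ASCII names ending in '.mid' and
    read the two 4-byte little-endian values that follow each one
    (offset and size, under the assumed layout)."""
    entries = []
    for m in re.finditer(rb"[ -~]{1,64}\.mid\x00", blob):
        offset, size = struct.unpack_from("<II", blob, m.end())
        entries.append((m.group()[:-1].decode("ascii"), offset, size))
    return entries
```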
These midinfo things are interesting they also have some sidecar data. Their content might also be something novel to reverse, that might be worth bragging about.
> I spent almost a week (!) reverse engineering their absurd proprietary format using a hex editor and the MIDI spec.
Since this whole affair seems to have just boiled down to naively iterating for the 4 byte MIDI file magic and then examining the MIDI file metadata you didn't even need all the bluster of breaking out the hex editor and a calculator... Bam https://github.com/jimm/midilib, done (the actual get the MIDI data part can be done in 1 line with a string split).
It's too bad ChatGPT didn't suggest that. That should extract the 3 pieces of metadata you attempted to store in the directories.
Sidebar, nothing here stands out as absurd. It just looks like the obvious solution some working stiff would put together to bundle some data up. They don't obfuscate or do anything that stands out as fucky. Since it's read only its not like not using sqlite is some cardinal sin and they probably gave it all of 3 seconds of thought it deserved.
If you were using this as a learning exercise, fine, but then go back and check your work, because your tempos and key signatures are off. E.g. the tempos that you categorize as 104 have an actual encoded tempo of 571429 µs/beat, or 1.7499 beats per second, or 104.9999 BPM, i.e. what humans would call 105 BPM. Not the end of the world, but this is a pretty rookie floating point mistake.
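The fix is a one-liner: convert the stored microseconds-per-quarter-note to BPM and round to nearest instead of truncating. A sketch in Python:

```python
def tempo_to_bpm(us_per_beat: int) -> int:
    """Convert a MIDI set-tempo value (FF 51 03, microseconds per
    quarter note) to beats per minute."""
    # round(), not int(): 571429 us/beat is 104.9999 BPM, which a
    # musician calls 105 -- truncation is what produces the 104 bug
    return round(60_000_000 / us_per_beat)
```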
And I'm pretty sure you bungled the key signature, because at least the ones that say Abm are Fm. Why is that... ah, you have ignored relative keys. For instance, the meta event FF 59 02 FC 01 has an sf value of 0xFC, which is 2's complement -4, i.e. 4 flats. If this were a major key that would be Ab, but it's a minor key, so it's Fm. Oh no, even simple keys are fucked: FF (1 flat, F major) is coming out as Cb (7 flats), which looks like a bungling of 2's complement.
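A sketch of decoding the FF 59 02 key signature meta-event, handling both the two's-complement sf byte and relative minor keys (the lookup tables are indexed by sf from -7 flats to +7 sharps):

```python
# Key names indexed by sf + 7, where sf runs from -7 (7 flats) to +7.
MAJOR = ["Cb", "Gb", "Db", "Ab", "Eb", "Bb", "F",
         "C", "G", "D", "A", "E", "B", "F#", "C#"]
MINOR = ["Ab", "Eb", "Bb", "F", "C", "G", "D",
         "A", "E", "B", "F#", "C#", "G#", "D#", "A#"]

def key_signature(sf_byte: int, mi: int) -> str:
    """Decode the FF 59 02 <sf> <mi> key signature meta-event."""
    # sf is a signed byte: 0xFC is two's complement for -4 (4 flats)
    sf = sf_byte - 256 if sf_byte > 127 else sf_byte
    name = (MINOR if mi else MAJOR)[sf + 7]
    return name + ("m" if mi else "")
```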
My music education culminated in 2nd chair trombone in the 8th grade and everything I know about the MIDI spec I got from the top 3 hits of google, so caveat emptor.
The only reason I knew it was wrong was because I checked the files with mido (I know you claim ruby is superior, but I seem to be doing ok with dull old python like a rube).
Also in the file I count 640 MIDI headers and I only got 639 files out of your thingy so you might want to revisit that part I mentioned about bugs.
It might actually be interesting if you reconstructed the hierarchy in the metadata. That is not hard, it is all absolute offsets, sizes and null terminated ASCII strings.
But what you did seems to have been to write a (possibly buggy) loop that calls basic standard library functions in Ruby, and to poorly parse/convert a few MIDI meta-events. It's not reverse engineering if you have a nearly perfect spec in front of you; you completely skipped the (simple) reversing part and bungled the easy part. And this task shouldn't take a few days, nor hours, for someone experienced with basic scripting. https://adventofcode.com/2023
> the entire process of iterating through the binary blob, pulling out the MIDI header/track chunks, and then creating entries with smart naming in a zip archive is done entirely in-memory. For the non-programmers, this is otherwise known as "hard" or "showing off".
We have very different definitions of hard. Unless you were writing a shell script doing it any other way is bananas. These are like a few hundred bytes a piece.
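For reference, the "hard" in-memory zip part is a handful of lines with a standard library. A Python sketch (the original is Ruby, and `bundle` is a made-up name):

```python
import io
import zipfile

def bundle(midis: dict) -> bytes:
    """Write a name -> bytes mapping of extracted MIDI files into a
    zip archive entirely in memory (no temp files on disk)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in midis.items():
            zf.writestr(name, data)
    return buf.getvalue()
```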
Your technical chops are discordant with the arrogance and incivility you've displayed towards others on this forum. (And name dropping Carmack, that's bold)
eg: "Serious question: are you wickedly combative in all discussions, or just when you get the cold feeling that perhaps you might not know as much as you think you do?" in response to someone who simply had a differing opinion to this: "you keep implying HTMX is a good choice, when there are vastly more powerful solutions in play."
"I assume that the person who downvoted my comment is a bootcamp grad. Good luck with your future endeavors."
Eliminating 80% of programmers does mean that on paper "output per worker per hour" has increased, but it doesn't necessarily follow that you can deliver faster i.e. the output/hour of the business as a whole might stay the same.
In a degenerate case this might take the form of AGI replacing the lower tier workers but doing the same thing at the same speed.
Yes, and I strongly suspect that this is conservative.
Mid-term future iterations will quickly get to 10x and beyond.
Also: your math is based on the random elimination of 80% of programmer roles. I was specifically talking about the elimination of the worst 80% of programmers.
By worst, I mean "the least productive" - not "terrible persons".
Not to be a raging egalitarian, but it’s hard for 20% of workers to have as much business context as 100% of workers and still maintain the same level of productivity. Things like gathering requirements become massive bottlenecks to productivity.
As people in this thread have stated elsewhere, there’s a HUGE last-mile problem in programming. For this reason having 20% of the programmers we have now might not be the ticket. We may even want to take some of these mediocre programmers we have now and have them do other duties like say, systems administration. In other words, it could be that hours spent programming decreases rather than the number of workers, to allow for higher communication bandwidth.
I wish the author named the low code tools. There's so many of them, they can be very different, and offer different levels of control. Having specific examples would help me get a better understanding. Since this is in MS context, I'm guessing Power Apps; description: https://uk.pcmag.com/cloud-services/89779/microsoft-powerapp... (2018).
Not being the author but I agree with your request. For me:
* no-code
Basically all SaaS solutions that provide a web form to integrate and link services. Changes cannot generally be reverted nor is there any sense of version control. Only limited approaches for collaborative workflows.
* low-code
Things such as n8n.io and nodered.org: visual programming environments that focus on data flows rather than algorithmic code, also known as visual flow-based programming. These tools currently lack proper workflows for version control and for collaboration among the folks involved in a project, so they generally have no equivalent of the much-code development workflows.
* much-code
Textual code solutions using any one of a number textual programming languages. Very clear workflows and well defined processes for version control, testing and collaboration. Being extended by using AI for code completion and code generation.
It is interesting to realise that much-code solutions have been around since we stopped using punch cards, and many developers believe there are no alternatives to the classic keyboard/screen/mouse approach to software development.
Perhaps it is time to rethink our approach to development. Perhaps a two-dimensional approach is more appropriate. Text, for me, is one-dimensional: it is read from top to bottom; there is no left and right with textual code (in its basic presentation).
My interpretation of low-code includes headless CMSes, authentication as a service with pre-built components e.g. Clerk, plug-and-play collaboration e.g. Liveblocks. Those services save me a lot of time without having to worry about underlying implementation, and they also have some nice escape hatches.
However, I share the author's skepticism about other low-code tools that involve a lot of UI/visual flow building with limited insight into how the black box works and what its constraints are.
I love that this piece talks about the database. That is a problem I focus on. It’s API versioning all over again, and we never _actually_ solved that before we moved forward.
I see this being our reality anyway. In many cases, not all.
You can’t ask the LLM, nor a hypothetical AGI, nor anything, to build the aqueducts. Especially if you don’t know you need them. And they won’t solve the problem if neither a human nor an AI know what the problems actually are. Let’s make em out of lead!
And they definitely won’t waste tokens on a postgres rollback migration unless we make that essential in the acceptance criteria.
An LLM is all of our ideas, in a search UI that dang nearly passes the turing test. (That says more about us than the state of AI technology.)
We are pattern machines in many ways, and many things we code are even lovingly called Software Patterns. The promise is not that software engineering goes away, it’s that we’ll all be managers for a while.
Low-code, to me, is different from “statistically generated solutions that pass the requirements.” Genetic programming was an idea for decades, but that is also different from stuff that combines yours and my most relevant github commits into a 90% best guess. Not random, hoping for the best, in this regime. Educated guess.
Much earlier in my career I built an entire business converting MS Access database/forms solutions that had been created by non-developers/end-users to solve a particular problem without involving IT folks. Most of them 'sort of' worked, but only up until the point things got complicated, or the requirements changed, or it needed to scale, and nobody (the non-developers) could figure out how, or even realized their entire foundation was built on quicksand. (I would generally convert them to true .NET client/server applications with MS SQL Server as the back end, or to web front ends with a proper database behind them.)
I suspect there is another opportunity coming along to make some good money bailing out folks who built 'no code' solutions that their business now depends on, but that no longer meet the needs and have become too fragmented and complicated to tinker with, and difficult to change, scale or adapt.
That said, I am not opposed to no-code - it is a good and often inexpensive way to solve some problems, or at least get a POC working - many of those solutions may never need to scale or adapt.
...but some will, and thus the coming opportunity.
Low-code is just good libraries and good frameworks. My hot take is that anything “low code” beyond that is just an aggressively user-hostile config interface.
Most (all?) much-code self-hosts, which is extremely valuable. There's no shortage of smart people who could create complex tools (see Excel for examples) if hosting them weren't currently such a ridiculous prospect.
My version of low code is writing HTML pages with a big script element full of vanilla JavaScript, and putting them in this one git repo we have which gets deployed to a web server.
Obviously it's not actually low code, it's 100% code, but it creates a space without a lot of the complexity and ceremony of 'real' software development. I don't write tests, I don't future-proof, I don't set up deployment or monitoring machinery, I don't worry about anything except solving a small, well-defined problem quickly.
It's definitely not a good approach to everything. But it's quite common for one of my users to say something like "I need a dashboard showing how the current reactor temperatures compare to the averages for yesterday, and also the available cooling water for each one", where that's all data we have existing websocket feeds for, and I can get that to them very quickly without it becoming formal feature development.
Lack of testing and monitoring means the products often break. But when they do it's typically benign - the page doesn't load, or shows no data, and if a user needs to use it, they'll complain, and I can fix it.
If a page ever got too complicated, or too critical, I might want to port it onto real software. But that hasn't happened yet.
Reactor temperature monitoring is surely the exact area of software engineering (along with medical software) where you'd want the most robust processes isn't it? If your solution says everything is fine when in fact it's not, someone will have a big problem on their hands.
I'd argue that is actually the hard part of software engineering. Sure, I can learn the basic syntax of a new coding language in a week or so, but learning all the deep frameworks to do non-trivial stuff, conventions, deployment, and how to diagnose bugs can take years to become expert at.
Much of this feels like needless complexity, but its ubiquity in almost every stack points to something fundamental, in my opinion: the classic 80/20 rule. That last, irreducible 20% complexity is where the dragons lie. Expectations for software have grown, and boundless flexibility is table stakes.
I feel like we are kind of trapped between simplistically polar notions. There's coding as we know it, complex & interwoven lines of code, and there's this low code ideology.
Any area of code itself is usually fairly boring & irrelevant. It's systems that accrue mass & complexity. Low code systems sound good by demonizing the weird symbols & incantations that make up the lower levels of coding, but it's this systemic complexity, the aggregation of many things happening, that makes things opaque & difficult.
Finding patterns to suss out broader-scale understanding is what really tempts me. I don't have a super strong vision here, but I'm interested in projects like Node-RED, or even to a degree works like jBPM. Both kind of speak to going beyond coding, but I think the valuable thing here is really that they are assembly toolkits for modules of code & data. They are just examples, and not even good ones, but this quest, to me, to get to the next levels of computing, is to make clearer portraits of what happens, either above the code level, or where the code's running creates higher-level observability surfaces.
Code is so easy to demonize. The layering is often so implicit: handlers and DAOs and routes in different layers, maybe, but with no clear overview layer. Figuring out how to make these systems legible, without diving deep into the code, is what, I think, will make systems less intimidating & will serve as a bridge for businessfolk and novices to intuitively grasp & meddle with the artifacts put before them, with courage. And making people less afraid of the machine, more interested in peeking in and meddling, is where so much hope lies.
Low-code platforms like PowerApps and PowerAutomate are great LEARNING and PROTOTYPING tools.
Give an analyst with some technical acumen PowerApps and ask them to mock up requirements, and after a while they can produce a click-through prototype.
That same analyst will likely improve their UI, API, JSON and SQL skills in the meantime.
Low-code tools are also good for digitizing CRUD forms.
Much beyond that and things get very proprietary, very fast.
A few comments here about lack of version control being a problem generally with low code.
We're a low code platform (app integration) and what's worked well for us is to have the platform store the generated workflow design (yaml/json) in source control.
Additionally, we map environments on to source control branches so merging to a branch is what promotes a design version to qa or prod.
Anyone who lived through the Model Driven Development era (late '90s and early 2000s) knows that this solves only about 20 percent of the problems that those who complain about lack of version control are really talking about.
The real unsolved problem is the lack of visual comparators that can show what really changed between versions. If you try to diff those text serialization formats (yaml, json or, in the old days, xml), you have to do a lot of mental gymnastics to map them onto visual changes that are meaningful. And most potential users of your low code tools are not capable enough to use them this way.
If you take into consideration that a visual comparator for workflows is an extremely hard problem to solve (probably only a handful of people in the whole world are capable of doing it in a way that anybody sane would use), then you can easily see why all the visual dev tools of the last 30 years went bankrupt. And if you did any sensible research you would know there were hundreds of attempts, some spending hundreds of millions of dollars (Rational, and later IBM, being the main offenders here).
I wish I could show you something I’m working on — I’m just finishing up documentation now. I don’t think many people get this too. I haven’t launched yet but I think I solved this problem, my email is in my bio if you have any interest in talking about sane visual diffing and merging
Hmm, this looks more like an attempt at building a better merge tool by getting rid of unstructured text and replacing it with some kind of structured/hierarchical data.
I think that this can be useful, but I don't see how it helps solve the general problem of visualizing diffs between workflow versions (and making them readable and mergeable for mere mortals).
Yeah, sorry (as I said in the below comment the site is super incomplete, showing screenshots of visual diffs is actually not that easy and I am yet to upload a demo video). Probably can’t give a great pitch here but the whole thing is based around building diffable plugins (which are really just html documents that are run in an iframe). Most of the documentation at this point is based on teaching version control to mortals rather than demoing the 4 diffable applications I made to launch with. But it really is built for mortals and is generalized so anyone can build applications that can meaningfully diff.
There’s some snippets of a visual diff in the docs here https://floro.io/docs/product/product-and-terms. If you do want to give feedback or see a demo I’d love to chat. Again, my email is in my bio.
It is deeply stupid that I spend much of my time looking at C++ (and asm, and IR...) diffs in a tool that looks at the raw bytes instead of at the AST they represent. Likewise git merge would do a lot less damage to codebases if it spliced the AST instead of the raw bytes.
I'm at some risk of replacing that with something hacked together out of XML. Hopefully you're doing something useful in this space.
Totally. Sadly I’m focused on GUI use cases and not AST. I’m more focused on making distributed version control for tasks that are handled better by visual editors than plain text. Universal AST diffing and merging is probably an impossible problem to solve (entirely). You could write a driver for llvm that would work for any language that shares the same IR state. Even then there’d be a lot of cases where you’d have to depend on a developer to manually resolve syntactic conflicts and enforce consistency in the IR state, which makes it pretty impractical. But two things, first, I would look at Pijul, my understanding is there is (some) support for AST diffing and merging. Second, git does support custom merge and diff drivers — I’m not sure if this is where you’re going with your xml hack but I don’t think it’s too cumbersome to write a custom differ if your painful conflicts are relatively constrained to certain functionality or part of your code base.
My site is half built (lots of broken links and only half of the documentation is done so please don’t judge me), but I think my technical explanation does an okay job of talking through some of the complexities to think through in regards to structured diffing and merging. It might be helpful to you if you do decide to write a git merge driver. https://floro.io/technical-overview-part-1
The term "low-code" is in itself a compromise based on the very issue the author is identifying. The old term was "no-code", but too many people needed custom functionality.
Aside: The contrast on this website is really borderline. Extremely light gray, low font weight, white background. I thought we were past that particular trend in web design.
>> A lot of low-code solutions seem to hit about 80% of a company’s requirement.
The 80% is not evenly distributed. Marketing might have problems that are 95% satisfied by low-code and they can change their process for the last 5%, or accounting can do 100% of a 50% subset of their job with low-code. Logistics can't really do any of their job so they steer clear.
All the other points are related to trying to satisfy 100% of a custom problem with a generic low code framework. Don't do that.
You run into all of these problems with custom code solutions too; The debate should be about investment vs. control, of which no/low/custom coding is a component but not the differentiator as presented here.
Very recognizable.. I've done a few projects at large-scale government and retail where me and team have been brought in after-the fact to fix "performance" and tech-debt issues with these things. In particular these platforms: "Be Informed" and "MuleSoft AnyPoint".
The story is exactly like the fine article mentions. Heavy reliance on proprietary stuff making the organization heavily reliant on very expensive consultants to get stuff done, no way to sanely version things..
I now go out of my way to avoid working rescue jobs like these, unless the client is actively migrating away from them.
It's a hard topic to discuss sometimes because low-code means different things to different people. There's a space, for example, where it mostly just replaces spreadsheets on a shared drive (or being emailed around). And there's also "mostly web forms with a little bit of workflow and approval hierarchy". Then "simple-ish database with reasonable basic CRUD gui". The various vendors usually start with one of those, then feature creep into more customization support, including code that's invariably hobbled in some way that makes some tasks impossible.
I have been a proponent of no-code/low-code, and in the last year or so your second point has been the biggest problem we encountered on projects.
“…developer talent is tiny.” The talent is now more expensive than code developers. However, the net benefit for most web apps is still there: once you've established the “platform”, i.e. the main aspects of your “thing”, the low maintenance burden starts paying big dividends. With rapid prototyping we do traditional dev, then transition web apps to no-code, because the burden is in the through-life aspects and no-code wins there.
Not having to write messy business logic in whatever the hot programming language of the week is (which typically means many more lines of code than SQL and an order of magnitude slower).
The expensive part of software development is not coding, but figuring out what needs to be done. It appears people are trying to optimize the less important thing.
Low code is absolutely not about optimising coding (although I can see how you might get that idea). It's about moving the executable work away from specialised coders and to the people who actually have the problem. And when you can do that, you should.
Low code is all about optimising the more important thing.
The downside is that all those software engineering concerns that the average business practitioner is poorly placed to cope with often find a way of reasserting themselves and when they do, you typically end up with a mess worse than if you'd started with a code solution in the first place. However, in the domains that they don't, or where the negatives are not business critical or can be adequately mitigated, low code can be incredibly empowering to noncoders (excel is probably the canonical example).
What I mean is that if the people who actually have the problem could describe the problem with sufficient detail and unambiguity, the core coding part would be very straight-forward, fast and cheap to do, regardless of the programming language.
Of course the additional things beside the core, like change management, monitoring, security, APIs etc. would add to the cost, but they are at least possible if you are not on low-coding platform.
The problem is that for anything besides the simplest applications, people cannot describe what they need in sufficient detail and unambiguity. This is where the cost of application development comes from.
There are many means to bridge this gap: user interface design, prototypes, agile development, waterfall requirements gathering etc. Excel is a way to iterate, that many professionals learn to use. If the end users can develop software that solves their problems, and does not cause other problems, there is no problem.
From what I have seen, low-coding is sold to IT departments with promise of cost savings in the coding part, and to other departments as a means to bypass IT that they consider slow, expensive or not delivering what they want.
So we need to get down to what we mean by low-coding and what context we are talking about. In addition to Excel, Django appears to hit another sweet spot in many contexts. It is very fast to develop and iterate with, and comes with a built-in admin UI, but it is still possible to fulfill all the corporate technical requirements as well.
The low-coding platforms the link points to, which seem more problematic, usually have some custom programming language, abstraction layers that often limit extensibility for more complex applications, and a graphical UI designer, and they have difficulty supporting enterprise requirements (security, auditability, monitoring, etc.).
Why is it expensive? If somebody can describe the problem with sufficient detail and unambiguity, the core coding part will be very straight-forward, fast and cheap to do, regardless of the programming language. The problem is that very often it is impossible to do.
I don't think anyone mentioned MATLAB's Simulink or its free clone, Scilab Xcos, which are more targeted at modeling electronic and mechanical systems, solving differential equations, that sort of thing. I have fond memories of them from my classes, but never got to use them in the workplace.
Then there is also LabVIEW; the programs for the equipment were mostly made by former students in that program.
I've used AWS Amplify in the last two years. As the others low-code tools it's fine until it isn't, then it's a nightmare.
But with time they've started to allow for more and more escape hatches.
They've recently announced a V2 marketed not as low code but as code first, built on CDK.
They've probably found that most of their users are devs and code is the right interface for devs.
Having worked in PowerApps the last year and integrating a rest API in Azure that then updates Dataverse, low code is just short for abstraction layers. As a developer, this irritates the hell out of me.
Sure, you can view your Dataverse database in SSMS, but it's read only and you get no autocomplete.
found that "low code platforms, while allowing for fast learning development processes, enabling a more systemic view of software projects and providing easy integration with other application endpoints, can't escape good Software Engineering, as low code's inherent abstraction requires following good development practices"
So no matter what, you have to be a developer, not just a citizen developer.
> Low-code or no-code is okay with small apps. But as complexity increases, you won’t have fine control or scalability needed using these tools.
Wouldn't a better approach be a low-code tool that grows with your needs? It seems wasteful to build an MVP in a low-code* environment and then throw away all the learnings and rebuild it in a code-based solution.
*=the assumption is that low-code is not no-code, i.e., the MVP would have some amount of code that would be thrown away.
I use n8n.io, hosted locally, to abstract away authentication when building complex API-integration MVPs that I use to show what's possible and working; those then get sent to engineering to be converted into something that follows best practices.
Maybe low code is the next step after text-based programming?
There is a comparison to be made here with ASCII graphics: they could be quite charming (Dwarf Fortress, for example, looks great), but there is only so much you can do with text.
Some languages, it seems to me, are running out of symbols, and their meanings are not obvious at a glance. For example, what is the meaning of ', @, $, or !: in different languages? Why should they have those meanings?
In the end, code is much more logical and better understood as a diagram of blocks; after all, that's what code is supposed to represent: a series of decisions that lead to a desired state in the data.
low code should be a result not the goal.
Let me explain:
When you choose the right platform to build applications on, the platform should take care of gluing services together, adding visualization, and handling common concerns such as reliability, ease of debugging, and documentation (via the visualization). This is an often overlooked aspect of an application platform that otherwise takes the bulk of the time to create apps.
Tools focusing on low-code don't solve these concerns, and the tools that do solve them typically don't market themselves as low-code, because they are not. Low code comes as a result.
Low code is what IBM, SAP, Salesforce have sold as "the solution" for the past few decades, but packaged in a different way. This is no farther along than we were 20 years ago IMO.
I was really interested in MS's low-code platform, but then... it's a mess glued on top of another mess (O365), glued on top of another mess (Azure).
What did seem true is that you can easily build a GUI that will break within months, with only terrible default UI elements: a calendar selector that you can't really prevent from going into the past, or 99 other awkward user-interface gimmicks.
Why should a company try to save $2,000 on development, only to lose $10-100 every single day to bad usability?
The security threat models are wildly different though. A “no-code” solution won’t be convinced to export private data with the right conversational English smooshed into the username field, for example.
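The classic hand-written-code version of that risk is string-built SQL. A minimal sketch of the unsafe pattern versus the parameterized one, using Python's stdlib sqlite3 (the table, column names, and data are invented for the example):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(username):
    # Vulnerable: the username is pasted straight into the SQL text.
    return db.execute(
        f"SELECT secret FROM users WHERE name = '{username}'").fetchall()

def lookup_safe(username):
    # Parameterized: the driver treats the input purely as data.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (username,)).fetchall()

payload = "nobody' OR '1'='1"    # classic injection in the "username" field
leaked = lookup_unsafe(payload)  # the OR clause matches every row
safe = lookup_safe(payload)      # matches nothing
```

A platform that only ever issues parameterized queries on your behalf removes that whole bug class, which is part of the threat-model difference being described.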
Actually, software itself, and any sort of interface/automation in general, is really just an interim solution before the transition to real-time access to scalable raw intelligence and hyper-awareness through wetware.
Is it a programming language, a framework, a compiler/linker/IDL, etc.?
I mean, there are some that could argue that C is "low code," because it isn't Machine Code.
I started in Machine Code, and, these days, write mostly in Swift. Swift seems like "low code," compared to Machine Code.
I assume that they are talking about things like Qt, React Native, Xamarin, Electron, or Ionic (You can tell that I develop host apps).
I write native Apple, using established Interface Builder SDKs, mainly because of the first point. I like to have 100%, and Highest Common Denominator approaches don't allow that.
Also, I find that upgrading can be a problem. I have been in the position of not being able to support the latest OS, because a library hadn't been updated, yet.
Generally, low-code means those GUI-driven development tools.
E.g., you can write a data pipeline in SSIS just by dragging boxes around and entering connection details.
Sometimes the abstraction doesn't expose something you'd like, so you add a bit of code (SQL/C# for SSIS), but that's the "escape hatch" rather than the default workflow.
You've also got approaches like Power Query, where a frontend action is automatically recorded as code (M, in that case), but the code is largely hidden from the end user and only used for source control or as an escape hatch.
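The shared shape here — declarative steps by default, code as the exception — can be sketched generically. A toy pipeline runner (all step types and names are invented, not any real tool's API) where one step drops into ordinary code:

```python
# Built-in, config-driven steps: the "default workflow".
BUILTINS = {
    "rename": lambda row, cfg: {(cfg["to"] if k == cfg["from"] else k): v
                                for k, v in row.items()},
    "filter": lambda row, cfg: row if row.get(cfg["field"]) else None,
}

def run_pipeline(steps, rows):
    for step in steps:
        if step["type"] == "custom":           # the escape hatch
            rows = [step["fn"](r) for r in rows]
        else:                                   # the declarative path
            rows = [BUILTINS[step["type"]](r, step) for r in rows]
        rows = [r for r in rows if r is not None]
    return rows

pipeline = [
    {"type": "rename", "from": "e-mail", "to": "email"},
    {"type": "filter", "field": "email"},
    # Something the built-in steps don't expose -> drop into real code.
    {"type": "custom", "fn": lambda r: {**r, "domain": r["email"].split("@")[1]}},
]

out = run_pipeline(pipeline, [{"e-mail": "a@x.com"}, {"e-mail": ""}])
```

The point of the shape is that the custom step is rare and isolated, so the rest of the pipeline stays inspectable config rather than arbitrary code.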
Oh. I have yet to encounter one of these that I'd take seriously.
I have an anecdote, from a friend of mine, from the 1990s.
He was a fairly well-paid team leader for an NYC bank. Ran a C++ shop. He had to wear a tie to work, but was paid enough to buy a house, in his twenties.
In any case, when I knew him, he was an even-higher-paid consultant, working for the same bank. This time, he wore a three-piece bespoke suit to work. He now owned a house in Port Washington, and he was still in his early thirties.
He told me that the bank wanted to release some new server-based product. This was before the Web was really a thing, so it probably was an EDI system (I didn't ask him what, as I knew he wouldn't tell me).
He consulted with his team, and submitted a proposal for a C++ project, taking several months, and costing six figures.
Some VP (banks have lots of those) came in, and had been studying Visual Basic. He also hated the IT teams (they can be like that, you know).
He told his higher-ups that he could write the whole thing, in VB, in half the time, for a quarter of the money (since he'd be doing most of the work, himself).
Stop me, if you've heard this before...
My friend lost the bid. He quit the company (along with most of his team), and traveled around the world for a couple of years.
When he got back, the bank was in a shambles. The VP had screwed the pooch in a big way, and was long fired. Attempts to redo the project were dumpster fires, since they no longer had any trained engineers on staff, and they couldn't hire new staff.
It's interesting to think about this in the context of model based development / UML.
Both are situations where people are trying to make the primary source something other than written text. I guess I'm biased, but I think written text has been wildly successful. The more experience I've gained, the more I use text based tools and generally prefer text to alternatives. Even in a recent case where I wanted a diagram, I used graphviz to automatically generate it.
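Generating a diagram from text, as described above, can be as simple as emitting Graphviz DOT from a data structure; a minimal sketch (the example graph is made up):

```python
def to_dot(edges, name="g"):
    """Render a list of (src, dst) pairs as a Graphviz DOT digraph."""
    lines = [f"digraph {name} {{"]
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

# The source of truth stays as text/data; the picture is derived from it,
# e.g. by piping the result through `dot -Tpng`.
dot = to_dot([("parse", "check"), ("check", "emit")])
```

The text remains diffable, greppable, and versionable, which is exactly the property visual-first primary sources tend to give up.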
At my last company, I inherited the reins as custodian of a low-code application. Except this low-code application was designed in the early 2000s. Back then, "low-code" still involved HTML, CSS, Javascript, and a database scripting language - so I at least FELT like a programmer. That job paid a round-robin of 10 developers in 3 different locations over the course of ten years. In total, it probably took 30 person-years to develop and maintain the software over its lifetime at the company.
This is on top of the $1 million/year licensing fee for the proprietary tool in all its glory. This included a:
- proprietary, closed-source database
- proprietary, closed-source scripting language
- proprietary, closed-source "CI/CD" (more like, barf out some arbitrary, insecure mash of HTML, CSS, and Javascript into the browser)
We spent a significant amount of time trying to peer under the hood. Except, it was more like we were trying to rip the head off the cylinder block with a bunch of vice grips, and then safely put it back together. I cringe thinking back to the tactics we used to "research" - it was like hitting an engine with a hammer and trying to figure out what it does. We came up with all sorts of theories about hammers, about sounds that engines make when hit with hammers. But it was a complete waste of time. That's not how you study an engine. No transferable skills are learned in the process.
I try to make sense of my time there - I'm only a few years removed from that job still. I sometimes feel embarrassed that I didn't get out of there sooner. Why did it take me so long to figure out my software career would go nowhere if I worked on stuff like that?
I'm in a much better place now, working on good software in real languages with smart people. My skills have grown unbelievably in just a short space of time.
===
Low-code applications are worse than what everyone is saying. They are career-destroyers. It's okay to dip your toes in the water. But it's too easy to become reliant. You'll find that database details, CI/CD, even version control are handled by some mythical low-code tools that you don't understand (and can't configure).
Work on software where the company is in full control of every line of business logic they own. You might own only one small slice of the software, but it's so much easier to sleep at night knowing that every bite of the pie can be fine-tuned according to SOMEONE at the company.
I'm skeptical of code. Code is needed to create operating systems, database engines, game engines, network servers, graphics and plotting libraries, and other tools that are typically used by software engineers but not end users, and yet most of us are not creating these tools. Most of us are using these tools to serve end users. This isn't a demanding task and it doesn't demand much code. Maybe in the UI, but for the vaunted "business logic"? Please. You don't need reams of code in a general purpose programming language to provide business logic.
Have you done a lot of low-code work? I’ve found it just ends up with escape hatches everywhere, because the low-code platform can’t handle some edge cases or gets too complex for non-engineers to reason about. Another issue is that observability, logging, testing, debugging, etc. are often impossible or very difficult. It could work for small shops, but the enterprises it gets marketed to usually need normal code.
I do almost all of my work in the database. I don't know if you want to call that "low code" or not. Evidently, it doesn't look like code to many other developers but then again it precedes current low-code platforms by about 40 years, so who knows?
I am for whatever tool I can sell to frustrated executives for a handsome commission and be half way on to my next job by the time they figure out it’s pure, 100% snake oil.