
Can I check a "low-code" implementation into version control, using standard free tooling? Can I, when something inevitably goes wrong, do a `git log` equivalent to spot the source of the problem? Can I then roll back to a specific commit, using off-the-shelf free tooling?
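
For concreteness, this is the workflow I mean, in standard Git (`app.json` here is a hypothetical stand-in for whatever artifact the low-code tool emits):

    # Who changed what, and when?
    git log --oneline -- app.json
    git diff HEAD~3 -- app.json      # inspect a suspect change

    # Undo one bad commit without losing later work
    git revert <commit-sha>

    # Or restore the file from a known-good commit
    git checkout <commit-sha> -- app.json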

I find the answer is most often "no," which is kind of a nail in the coffin for serious usage of "low-code."




You do realise that there were ways to debug things before Git came along? Indeed, before source control came along at all?

What I'm saying is that you should be wary of ignoring things that don't fit into your current workflow, because sometimes that "critical" thing in your workflow is less critical than you think.


Those tools are in the current workflow because of industry-wide lessons learned. What are the alternatives you allude to?


To a significant extent, yes, but there's also a lot of people just following fashions. There are whole teams of devs who really have no idea how to use git, but who are using it because everyone else does. I'm no git expert myself, but I can't count the number of times I've seen people, e.g., comment out large sections of code 'in case I want to add it back in again later'.


What was it like to work on software before version control? What are some pros and cons to that kind of workflow?


There's always been a form of version control, in the sense that you can copy folders to an archive and/or make backups whenever you like.

So it's not like you can never go back and compare. And there are lots of ways of comparing text files if you need to know the difference.

Because it's "harder" to roll back, or perhaps because it's harder to merge, different developer habits emerge.

A) Signing code, as in identifying who wrote what and why, done right there in the comments. There's no external comment section (aka the check-in message), so the comments go with the code.

B) There's more care over the code. That typically means fewer programmers, and that the code is "owned" by an actual person, who either makes the changes or is there to consult.

There's a lot less of the "hire 20 contractors to work on this" approach. (In a low-code situation, IME, programming teams are very small, and often a single person.)

C) Domain knowledge of the whole system is more valuable than "git skills". The code is not the glory. The value is in the coherence of the whole system, the way everything interacts in service of the big problem area.

An imperfect analogy would be painting. You can get a squadron of labor to paint a house. But if you're looking for art then you let one person do all the work.

The goal of low-code systems is to remove the cheap-labor part (coding) and allow people to focus on the unique parts (the art). Which is perhaps why it's not beloved by those with cheap-labor skills. (And, not meant as an insult, but source control is a cheap-labor skill - one which can be used by laborer and artist alike.)


I feel like you're making a bit of a strawman of the state of modern development. I don't know anyone who is valued because they are a "git ninja". Being competent with git is a prerequisite for doing any programming-at-scale in 2023 in my opinion, on par with knowing how to use the shell effectively.

Most of the places I've worked value "global domain knowledge" over specific knowledge of slices. Junior engineers understand a small slice, seniors understand a subsystem component, architects understand the whole system. There are people who specialize, but they need to be a true specialist (i.e. understand their slice VERY deeply) for their value to be appreciated.

I agree that low-code has its place, like in painting a house versus painting a masterpiece. If you need a generic, off-the-shelf solution, low-code is great. If you need a work of art, it's just the wrong tool for the job.

The problem is that a lot of companies selling low-code applications try to advertise themselves as "able to create a masterpiece". A literal example of this is DALL-E and Midjourney suggesting that AI-generated art could belong in a museum. Maybe it could, in some one-off outstanding case. But for the vast majority of use cases that's just not true.

Decision-makers seem to be suckers for this kind of advertising - and why not? If you promise me a Rolls-Royce for $20,000 with no pesky mechanic work, I'd WANT to believe it's true! But unless they have experience as a mechanic, or as someone who's purchased a lemon, or simply good intuition, decision-makers won't be able to resist the urge to say yes.


> signing code

...is baked into git, and is something I've used at my last two employers. You could daisy-chain a bunch of tools together for that, but why? And getting back to the original point: code signing isn't usually possible on "low-code" tools.
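
For reference, the built-in flavour looks roughly like this (assuming a GPG key is already set up; the key ID and commit message are placeholders):

    git config --global user.signingkey ABCD1234   # your GPG key ID (placeholder)
    git config --global commit.gpgsign true        # sign every commit by default
    git commit -S -m "Fix rollback handling"       # or sign one commit explicitly
    git log --show-signature                       # verify who signed what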

> there's more care over the code.

I think what you mean is "the minimum threshold of carefulness required of everyone at all times is higher." I don't think it's true to say "when you toss out industry-standard safeguards, everyone does better at never making any mistakes." I do, however, think it's true to say "when you toss out industry-standard safeguards, the likelihood of making a mistake is higher because of unfamiliar, imprecise, and/or buggy tools, and the cost of each mistake is higher."

This gets back to my original question about low-code tools: when (not if) a human makes a mistake, do low-code tools guarantee fast and simple rollback? Often the answer is "no."

> domain knowledge of the whole system is more valuable than "git skills"

This is just a strawman. No job where I've ever worked has valued "git skills" over systems knowledge - and even if you dug up a job where that was the case, that would be a shortcoming of the company culture, not of the tool. In my experience, it's more common for companies to value low-code-tool skills over domain knowledge: they've already locked themselves into a proprietary low-code tool, they can't hire from the total pool of developers unless they do a wholesale rewrite, and so they must make "knows Pega" or "knows Salesforce" their top hiring priority instead of "understands software architecture".


I build and maintain stuff in Power Apps, when it makes sense. You can export your solution as a ZIP file containing human-readable-ish XML/JSON definitions of everything (GUI forms, tables, processes, etc.).

This is the version control/CD process I designed:

- We have a normal Git repository, hosted in Azure DevOps cloud

- Developers work in the "unmanaged" cloud-hosted Development environment

- When ready to move a set of changes forwards, the developer branches the repo and runs a PowerShell script to export the solution from the Development environment to a local ZIP file, which is then automatically unzipped (see the CLI sketch after this list). The only other thing I do is reset the version number to 0.0.0 within the `solution.xml` file, because I'm pedantic and the code isn't necessarily a deployable build yet - it could be an intermediate piece of work.

- Developer reviews the changes, commits with a message that links to work items in Azure DevOps, and creates a pull request -- all standard stuff

- On the Azure DevOps side, once a PR is merged, we build a managed ZIP -- which is really just the same code with a flag set, re-zipped with a version number that matches the DevOps build number (in the Power Apps world, "managed" package deployments can't be customized on UAT/Production without explicit configuration allowing it)

- We then use all the normal Azure DevOps YAML pipeline stuff to move this between UAT, Staging and Production with user approvals etc. The Azure AD and Power Apps integration makes this very pleasant - everything is very easy to secure, and you don't need to pass secrets around.

- You can roll back to a specific commit using normal Git tooling, or just redeploy a past build if one is available.
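
A minimal sketch of that export step, using the Power Platform CLI (`pac`) - the solution name, environment URL, and paths are placeholders, and the real script does a bit more:

    # Authenticate against the Development environment
    pac auth create --url https://yourorg-dev.crm.dynamics.com

    # Export the unmanaged solution, then unpack it into source-controlled files
    pac solution export --name MySolution --path .\MySolution.zip --managed false
    pac solution unpack --zipfile .\MySolution.zip --folder .\src\MySolution

    # Then reset <Version> to 0.0.0 in the unpacked Solution.xml before committing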

That all being said, most people don't do the above. I should probably sell a training course, because it took quite some time to put all the pieces together. The bigger problem is planning ahead for changes without breaking stuff that is already live, because you generally only use one Development environment for an entire team.


Do you normalise / deduplicate / otherwise clean up the XML or JSON files?

I'm thinking of Visual Studio solution files, which were XML that picked up a lot of spurious churn when written by Visual Studio. A formative moment was discovering that colleagues edited these by hand to remove some of the noise before checking them into source control.

XML has a notion of a canonical form, with sorted attributes and the like. Mostly I'm wondering whether opening the trees of emitted XML/JSON in a diff tool is a vaguely credible way of diagnosing whatever seems to be going wrong, or of rolling back one part of a change made in the GUI tooling.
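
Roughly what I have in mind, using libxml2's `xmllint` (file paths are illustrative):

    # Canonicalise both versions so attribute order and whitespace stop mattering
    xmllint --c14n old/customizations.xml > /tmp/old.xml
    xmllint --c14n new/customizations.xml > /tmp/new.xml

    # Diff the stable forms (works outside a repo too)
    git diff --no-index /tmp/old.xml /tmp/new.xml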


Have you tried the new pipelines built into PowerApps?


Depends on what you mean by "low code".

Does Node Red count?


Node-RED has git support (via the project feature) but doesn't support any visual git operations.

I am a big fan of Node-RED, but it lacks visual version control, and without that it makes little sense to create code visually and then use a textual version-control tool to revert changes.
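
(For reference: if I recall correctly, the Projects feature is off by default and gets switched on in `settings.js`, roughly like this:)

    // settings.js - enable Node-RED's Projects feature (off by default)
    module.exports = {
        // ...other settings...
        editorTheme: {
            projects: {
                enabled: true
            }
        }
    };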



