The first group is concerned with solving problems (the domain experts). They don't really know much about programming, but they do know how to solve problems. Once in a while someone clever creates a visual tool for them and they become magically super productive relative to their peers, be it business process automation (BPMN, workflow automation), signal processing, simulation programs, web scraping, or even Excel. However, as these people get proficient in their top-down learning of programming, they start to hit the limits of the tool. Then you see the typical spaghetti code, because the visual tool lacks basic programming constructs like loops, functions and conditionals which would nicely compose the mess away. Additionally, it can't scale beyond RAM and is hard to put into version control, because the users are not in control of the text representation of the objects they work with, even though the software uses one under the hood.
The second group of people are programmers. They start learning bottom-up, i.e. from conditionals, loops, functions, threads, etc. to actual problem solving. They know all the stuff about proper branching, version control, how to structure code, programming paradigms, etc. They don't get stuck in spaghetti code, because they have highly composable functional languages, where any pattern or duplication can be abstracted away as a function.
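As a rough, hypothetical sketch of what "abstracting the pattern away as a function" buys you (the names here are made up purely for illustration):

    # Hypothetical sketch: a repeated "try it a few times" pattern pulled out
    # into one reusable higher-order function instead of being copy-pasted
    # around every call site.
    def with_retry(action, attempts=3):
        """Run `action`, retrying on failure up to `attempts` times."""
        last_error = None
        for _ in range(attempts):
            try:
                return action()
            except Exception as err:
                last_error = err
        raise last_error

    # Any step can now reuse the pattern by composition, e.g.:
    #   with_retry(lambda: fetch_report(url))
    # (`fetch_report` and `url` are made-up names for illustration.)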
There is a huge gap between the problem-solvers and program-creators.
Anything which can be represented in a visual language can also be represented as text. Unfortunately, we don't have textual programming languages powerful or intuitive enough to cater to the top-down folks.
I would go as far as to suggest that our current formalisms are insufficient for this task. Lambda calculus, for example, is a very bad abstraction for working with time and asynchronous processes. Workflow automation, where 99% of the CPU time is spent just waiting on real-world tasks, doesn't map well to lambda calculus. Other formalisms like the pi calculus or Petri nets are much better suited for this, and unsurprisingly the visual programming tools often resemble a Petri net.
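To make the Petri net point concrete, here is a toy sketch (my own hypothetical example, not any particular tool's model) of a workflow as places, tokens and transitions, where a step simply isn't enabled until the real world delivers its inputs:

    # Toy Petri-net-style workflow: places hold tokens, and a transition
    # may fire only when every one of its input places holds a token --
    # otherwise we are just waiting on the outside world.
    marking = {"order_received": 1, "payment_confirmed": 0, "shipped": 0}

    transitions = {
        # name: (input places consumed, output places produced)
        "confirm_payment": (["order_received"], ["payment_confirmed"]),
        "ship": (["payment_confirmed"], ["shipped"]),
    }

    def fire(name):
        inputs, outputs = transitions[name]
        if not all(marking[p] > 0 for p in inputs):
            return False          # not enabled: keep waiting
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] += 1
        return True

    fire("confirm_payment")   # True
    fire("ship")              # True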
Bottom-up text-based programming leads to much greater complexity because most programmers don't properly model their software (e.g., with state machines, statecharts, Petri nets, activity diagrams, etc.).
But it's not entirely their fault -- code is inherently linear. Mental models are not - they're graph-based (i.e., directed graphs, potentially hierarchical). Text-based code is merely trying to shoehorn graph-based mental models of what the code should do into a linear format, which makes it less intuitive to understand than a visual approach.
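As a minimal, hypothetical sketch of what "properly model" could look like even in plain code -- the graph written down explicitly as a state-transition table instead of being buried in nested conditionals:

    # Hypothetical sketch: the graph-shaped model made explicit as data.
    TRANSITIONS = {
        ("idle", "submit"): "validating",
        ("validating", "ok"): "processing",
        ("validating", "error"): "idle",
        ("processing", "done"): "idle",
    }

    def step(state, event):
        # Unknown (state, event) pairs leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ("submit", "ok", "done"):
        state = step(state, event)
    print(state)   # back to "idle"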
Correct, it was an exaggeration. Bottom-up programmers are supposed to have the tooling not to end up with convoluted code, but they somehow manage to do it anyway.
> Bottom-up text-based programming leads to much greater complexity because most programmers don't properly model their software (e.g., with state machines, statecharts, Petri nets, activity diagrams, etc.).
I'd argue that these text-based programming languages and computation models don't correspond to the intuition people use when they solve a problem, and that is the main problem.
> But it's not entirely their fault -- code is inherently linear. Mental models are not - they're graph-based (i.e., directed graphs, potentially hierarchical). Text-based code is merely trying to shoehorn graph-based mental models of what the code should do into a linear format, which makes it less intuitive to understand than a visual approach.
This is one of the limitations of bottom-up code. It is easy to represent linear program flow, but that is not sufficient: as you point out, real-world problems are graph-based in general.
On the other hand, not all code is linear. E.g. a loop is a typical cycle in a computational graph, or in a Petri net, when you represent the program as a data flow graph:
init --> loop body --> end
  |                     |
  \__________<__________/
I'd describe the parent's diagram as a control flow graph, not a data flow graph. Control flow makes the interpretation of that cycle as an iterative "loop" clear. In data flow, the cycle shown in the parent comment would instead represent an arbitrary fixpoint: the output of 'end' would be some value x = end(loop_body(init(x))). This inherent ambiguity, where the same constructs are given different semantics, is actually one reason why visual representations can sometimes be confusing.
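To spell out the two readings of that cycle, a rough sketch in code (hypothetical, only to show the semantic gap):

    # Control-flow reading: the back-edge means "go around again",
    # i.e. an ordinary iterative loop.
    def as_control_flow(init, done, body):
        state = init
        while not done(state):
            state = body(state)
        return state

    # Data-flow reading: the back-edge feeds the output into the input,
    # i.e. we are asking for a fixpoint x = f(x).
    def as_fixpoint(f, x0, tol=1e-9, max_steps=1000):
        x = x0
        for _ in range(max_steps):
            nxt = f(x)
            if abs(nxt - x) < tol:
                return nxt
            x = nxt
        raise RuntimeError("no fixpoint found")

    # e.g. as_fixpoint(lambda x: (x + 2 / x) / 2, 1.0) converges to sqrt(2)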
The same applies to parallelism -- does it represent a divergent choice, or a fork/join structure where independent computations can be active at the same time? You can't make both choices simultaneously within the same portion of a diagram! Of course you could have well-defined "sub-diagrams" where a different interpretation is chosen, but the only semantics shared between the 'data flow' and 'control flow' cases is simple pipelining, which is so limited that it isn't even meaningfully described as "visual", so it's hard to see the case for that.
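Likewise for the parallel case, a sketch of the two incompatible readings (again hypothetical code, just for illustration):

    from concurrent.futures import ThreadPoolExecutor

    # Fork/join reading: both branches run, and the join waits for both.
    def fork_join(x, branch_a, branch_b):
        with ThreadPoolExecutor(max_workers=2) as pool:
            fa = pool.submit(branch_a, x)
            fb = pool.submit(branch_b, x)
            return fa.result(), fb.result()

    # Divergent-choice reading: exactly one of the branches runs.
    def choice(x, predicate, branch_a, branch_b):
        return branch_a(x) if predicate(x) else branch_b(x)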
Disclaimer: I'm a bottom-upper that became a top-downer.
The top-down folks in math disciplines have Matlab, Julia and R. Visually, they often use Simulink or LabVIEW. These last two are more library than language.
Because of their math background, the Matlab code usually gets written in a functional style, which Matlab supports really well. No spaghetti.
right! statebox tries really hard to strike a balance between these two.
in order to make diagrams _as_ composable as functional code, you need a proper theory of diagrams and a "compiler" that checks your diagrams for "type errors".
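a toy illustration of the idea (not Statebox's actual machinery, just a hypothetical sketch of what "type-checking a diagram" can mean): each box declares an input and output type, and composition is rejected when the ports don't line up.

    # Toy sketch only: composing two boxes fails to "compile" when the
    # output type of the first doesn't match the input type of the second.
    class Box:
        def __init__(self, name, in_type, out_type):
            self.name, self.in_type, self.out_type = name, in_type, out_type

    def compose(a, b):
        if a.out_type != b.in_type:
            raise TypeError(f"cannot wire {a.name} ({a.out_type}) "
                            f"into {b.name} (expects {b.in_type})")
        return Box(f"{a.name};{b.name}", a.in_type, b.out_type)

    parse = Box("parse", "str", "Order")        # made-up box names/types
    charge = Box("charge", "Order", "Receipt")
    pipeline = compose(parse, charge)           # ok: str -> Receipt
    # compose(charge, parse) would raise TypeError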