It also doesn't help that French and Québécois French are different dialects to the point of almost being different languages. We "offended" our entire office in Montreal when we rolled their telephony system into our headquarters' PBX and added localization for IVRs, voicemail and whatnot using prompts recorded by a Parisian. I could understand the French French; the Québécois was... well, honestly, about as different as street Spanish in Mexico and the Colombian Spanish I learned early in life.
French spoken in Quebec is the same French as in France; they use their own idioms and slang, at times with an accent, but we can 100% understand each other.
If you are not convinced, you can find multiple interviews of Québécois speakers done in France on YouTube and see for yourself that they run without filters or subtitles. Here is an example with Céline Dion: https://www.youtube.com/watch?v=4x6ezHzWPDU
I am a French-speaking Québécois and work with many French people (downtown Montreal). You can't deny that French French and Québécois French use a lot of different words. They are still the same language for sure, but we often have so much fun exchanging funny expressions.
A funny anecdote was when I was working with a guy named Jean-Nicolas and all the Québécois people called him Jean-Nic. That was very funny to the French guys hehe
Not a native French speaker, no. Apologies if that one-sentence broad generalization was tiring to you - I meant neither to explain nor be exhaustive. ;)
Obviously they're not different languages - I was exaggerating a bit in that regard. And I do understand both, inasmuch as I understand either. French is my third or fourth most proficient language after English, Spanish and maybe German, but I never get to use it (or German for that matter).
The two are however different enough in some pronunciation and idioms that we ultimately had to do a second localization to support both. And the differences reported to us ran the spectrum from simple things, such as how to correctly pronounce the (cardinal) number 1 (a cleaner "un" compared to what sounds to my ear like "arn"), to the longer phrases one might expect to hear in a voicemail prompt. Ultimately our translation department got things ironed out, but it was a (somewhat amusing/bemusing) learning experience for us.
Definitely. Some of the idioms are different and the language sounds come from the back of the throat vs. Parisian French. I had the same experience in terms of understanding and being understood.
I am Québécois and am in no way hostile to other languages. What we are protective of is the fact that the official language is French, for historical reasons, yes. Meaning, if I encounter an English-speaking person, there is a big probability that I'll switch to English just because I'm not an asshole and want to be nice. But I also don't want my whole culture to disappear and be assimilated; at the same time, I do care that I can be served in French when I go out to a restaurant, etc.
If people made fun of OP's accent then they were dicks and I don't think the majority of people would do that. We do however switch to English quickly when we determine the other person is anglophone just because most people in Montreal are bilingual and hey, let's make it easier for everyone. If the other person asks me to continue in French, I'll gladly do so.
There were two accents in common use in 18th century France: the 'bel usage' and the 'grand usage'.
The short explanation is that after the French revolution, Parisians adopted the 'grand usage'—a pronunciation which till then had been reserved for public speeches and church sermons—for everyday speech.
The 'bel usage' was the usual French 'code' reserved for everyday use. It more closely resembles the French spoken in Quebec.
But the 'bel usage' wasn't just a plebeian accent for the unwashed masses; the King's court spoke in 'bel usage'.
But after the revolution, people wanted to change things up: hence the new pencil head pronunciation they use there, which everyone says is the 'correct' one. lol
In Québec, we retained the original 'bel usage' and never adopted the new accent from Paris. We are the keepers of the proper pronunciation of French, unless you want to give a sermon or make a political speech that is...
I don't see a problem with either accent, as long as you enunciate, you will be understood across the entire Francophonie. (Well to be honest, I do prefer my 'accent' from Québec, because I don't have to lug around a dictionary to make sure I pronounce e-ve-ry sin-gle syl-la-ble cleanly and correctly.)
I feel like a lot of users on HN are producers rather than users of software, and haven't used node-based systems like those in Blender or Nuke. They are extremely productive, and end up being very similar to functional programming while being super easy to pick up. It's a great "in-between" representation for non-programmers who really need to do domain-specific programming.
Experience with tools like Blender or Nuke and particularly with visual programming in games engines is actually where a lot of the better informed dislike of visual programming comes from.
The biggest problem with these tools is scalability and maintainability. You will hear many stories in the games and VFX industries of nightmare literal spaghetti code created with visual programming tools that is impossible to debug, refactor or optimize.
Visual programming seems easy for very small examples but it doesn't scale. It has no effective means of versioning, diffing or merging and usually lacks good mechanisms for abstraction, reuse and hierarchical structure. It doesn't have tooling for refactoring and typically lacks tooling for performance profiling.
Some of these problems seem to be more fundamental and others like they could potentially be addressed with better tooling but that tooling never seems to emerge.
I've got a lot of experience with shader programming and have never found node based shader editors to be better than text over the long term, although there are some nice visual debugging features which are rarely implemented in text based IDEs (though I have seen it done). I've also found visual scripting to all too frequently get out of hand and have to be replaced with code due to being unmaintainable, undebuggable or unoptimizable.
I think there is possibly fertile terrain to explore in trying to get some of the benefits of visual programming approaches while avoiding all these downsides but many of us have been burned enough to be very skeptical of the majority of visual programming systems that don't even try to fix the worst problems.
> code due to being unmaintainable, undebuggable or unoptimizable.
I would argue that the shader editor in UE3 had none of these properties. It showed you cycle count and each step of the graph visually for debugging.
Also, I don't mean to be blunt but you aren't the target for those tools. Where they shine is when you have a level artist that needs to make a small tweak to how a shader looks. With those systems you don't have to loop in a dev to make it happen. You still need a solid tech artist to make sure things don't get out of hand and they're not a tool for every problem but in the domains where it aligns you see 10x gains on a regular basis.
I think rather than the shader editor (which uses a similar-but-different node based interface) they are referring to the Blueprint programming system in UE4 which effectively wraps C++.
It's extremely powerful, but it comes at a cost because it's nigh impossible to create diffs between different versions of Blueprints AFAIK.
That is neat, I'd never noticed that. Although in the contexts I've heard it discussed, it was more in the vein of "can't generate textual diffs/patches with them, as you can with C++".
I could be wrong but I believe the Blueprint assets are stored in a binary representation. So that rules out vanilla tools like patch/diff for the most part.
See the sibling poster, they did actually add a visual diff/merge tool in a much earlier version and I missed that.
ASCII is a binary representation. It's just that we've built up a lot of tooling around being able to visualize/manipulate that representation. There's no reason that similar tooling couldn't arise for other binary representations.
I'm not really sure what your point is. The point is that ASCII/UTF-8 is a binary representation that is easily parsed by humans, which is why we use it for writing code. Sure, if you want, dump both files with xxd and do a diff on that instead.
There have been ways to diff binary files forever. That doesn't mean it's a great idea to store source code in that manner, though. With UE4 you're not really supposed to be able to edit the Blueprint "code" outside the UE4 Editor.
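For what it's worth, here is a rough Python sketch of that "dump and diff" workflow, mostly to show why hex-level diffs are technically possible but not very meaningful to a human reader (the .uasset filenames are placeholders):

    # Rough sketch of the "dump both binary files and diff the dumps" idea.
    # The .uasset paths are placeholders; any pair of binary files works.
    import difflib

    def hex_lines(path, width=16):
        # Render a binary file as one hex line per `width` bytes.
        with open(path, "rb") as f:
            data = f.read()
        return [data[i:i + width].hex(" ") for i in range(0, len(data), width)]

    old = hex_lines("Blueprint_old.uasset")
    new = hex_lines("Blueprint_new.uasset")

    # The result is a valid diff, but the hunks are byte offsets, not logic.
    for line in difflib.unified_diff(old, new, "old", "new", lineterm=""):
        print(line)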
> It has no effective means of versioning, diffing or merging
Color solves this one problem, just like it solves the text equivalent better than annotations do. This is not fundamental.
> lacks good mechanisms for abstraction, reuse and hierarchical structure
VLSI circuit designers have some abstraction mechanisms that are not that bad. This one is fundamental, but it's not as bad as most people say.
> It doesn't have tooling for refactoring and typically lacks tooling for performance profiling.
I can't imagine how those could be fundamental problems either.
In principle the largest problems of visual programming are our limited capacity to understand complex images, compared to text, and the extra information that most visual languages add, which makes the complexity worse (neither of which applies to things like GUI design). I don't see any other showstopper, but those two are really bad.
> our limited capacity to understand complex images, compared to text
I'm not sure I agree with this. Not as a fundamental law, at least. I find complex text pretty difficult to understand. Sure, I can read the words or even expressions, but to really understand how everything is connected and how the data and control flows through what's written, I find that incredibly difficult. A large part of programming is keeping that contextual information in my head so that I don't have to re-evaluate it all again.
Which is the reason why I love pen & paper and boxes & lines when I try to map out some code, a data structure, algorithm or idea. Obviously everyone is different, but for me, when things get too complex, I reach for images and diagrams to help me understand and form a clear mental model of what's going on or what I want.
Having said that, I've yet to find a visual programming language that does this. When I used Max/MSP a number of years back, I found it helped me think in "code" as I could map out ideas visually, reducing the need for pen and paper, but it had a ton of shortcomings. It gave me a glimpse of what it could be though, and I think the problems could be solved to a point where I can skip pen and paper altogether. We're not there yet, but if you think about how much time and effort went into making textual languages, it's no wonder visual languages are so far behind. Also, it's clear that not every task is well suited to visual languages, so the best environment would need to be a mix of both.
Good point about complex images. I do not think this issue will be solved anytime soon. An individual's mental image of a complex idea like inheritance or a data structure is subjective. Tying a specific visual GUI to a complex model (that many users would agree upon) is a very difficult design problem. Text allows the user to interpret the model using their own imagery.
I don't think I agree. We know that there are diagrams that come from category theory that behave very well; this is what statebox uses. It enforces certain constraints, so we pivot everything around this and see if we can recover some form of programming from it.
certain cases are still much better done in text and statebox itself is written in text, so nobody is claiming programming itself should be 100% pictures, but then again, I think it can be done and I think there is merit to it.
after all, text is also some sort of picture; on the screen but also in your head
Very much agree on these points. In my experience the pragmatic path for larger projects has been a hybrid of keeping visual programming graphs self-contained -- roughly within the "asset" boundary -- and then using a hierarchical format with textual representation to assemble those assets and drive their interface parameters. This is roughly how animation studios use USD (openusd.org), for example. Arguably this is just a strategy of combining a couple domain specific languages, with edges where the optimal tradeoffs of each domain flip. It's a very interesting question whether these limitations of visual programming are essential complements of their benefits, or if there is some good way to provide better abstraction, re-use, etc. Certainly Houdini, Nuke, Katana, etc. all provide limited forms of those things (ex: Houdini OTL's; scripts that version-bump nodes and try to auto-upgrade the parameters), and they do see lots of use in industry.
so I started my studies in animation actually; I used a lot of those tools. you can think of statebox as applying category theory to restructure After Effects or https://en.wikipedia.org/wiki/Shake_(software)
when done right, we claim, you can target many different things ("semantics")
what we claim is something like, the compositional aspect of many such node based systems can be described as a certain type of mathematical object (monoidal category) ~ we can build an editor for that and then map that "dsl" to particular targets (image processing, state machines, etc)
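to make that concrete, here is a toy sketch in Python (emphatically not our actual implementation): the diagram is pure "syntax" built from boxes and sequential composition, and a separate interpreter assigns the "semantics" afterwards, so the same diagram can be mapped to different targets

    # Toy sketch of "syntax vs. semantics" for node diagrams (not Statebox code):
    # a diagram is just structure (boxes + how they compose); meaning is assigned
    # later by an interpreter, so one diagram can target different backends.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Box:
        name: str          # primitive node, e.g. "blur" or "invert"

    @dataclass(frozen=True)
    class Seq:
        first: object      # sequential composition: output of `first` feeds `second`
        second: object

    def interpret(diagram, semantics):
        # Map a diagram to a target by looking up each box in `semantics`.
        if isinstance(diagram, Box):
            return semantics[diagram.name]
        f = interpret(diagram.first, semantics)
        g = interpret(diagram.second, semantics)
        return lambda x: g(f(x))

    # One piece of syntax...
    pipeline = Seq(Box("double"), Box("increment"))

    # ...two different semantic targets for the same boxes.
    as_numbers = {"double": lambda x: x * 2, "increment": lambda x: x + 1}
    as_strings = {"double": lambda s: s + s, "increment": lambda s: s + "!"}

    print(interpret(pipeline, as_numbers)(10))     # 21
    print(interpret(pipeline, as_strings)("ha"))   # haha!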
> The biggest problem with these tools is scalability and maintainability.
This has been my biggest complaint with both Unreal and NI Reactor... you can build some brilliant things, but not without creating a mess of connections that becomes very difficult to reason about, let alone work on. A lot of production "visual programming" diagrams are spaghetti code, without comments or a changelog you can study.
While written code tends to go the same way (just try diagramming the class tree in most software), at least we have tools for dealing with and reasoning about it in text form.
Indeed, spaghetti is a big deal. However, in systems like Node-RED, Antimony or Apache NiFi you can make function nodes that abstract the function away from the particular data it operates on. Your function primitive can then be called and instantiated for particular purposes.
I know in Antimony CAD, you can even instantiate a function, edit the function's flow or individual python subroutines, and the delta is saved for that instance.
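A rough Python sketch of that "shared definition plus per-instance delta" idea (purely illustrative, not Antimony's actual data model):

    # Illustrative sketch of a reusable "function node" whose instances store
    # only a delta against the shared definition (not Antimony's real model).
    class FunctionNode:
        def __init__(self, name, params, body):
            self.name = name
            self.params = dict(params)   # default parameters
            self.body = body             # callable implementing the node

        def instantiate(self, **overrides):
            return NodeInstance(self, overrides)

    class NodeInstance:
        def __init__(self, definition, delta):
            self.definition = definition
            self.delta = dict(delta)     # only the edits made on this instance

        def run(self, value):
            params = {**self.definition.params, **self.delta}
            return self.definition.body(value, params)

    scale = FunctionNode("scale", {"factor": 2}, lambda v, p: v * p["factor"])

    a = scale.instantiate()            # uses the shared default
    b = scale.instantiate(factor=10)   # stores only {"factor": 10}

    print(a.run(3), b.run(3))          # 6 30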
The hardest time I've had with graphical programming was with the Lego Mindstorms EV3. There were multiple "simplifications" that removed functions, along with a nigh-unusable GUI (the if block was this huge encapsulating thing, and the other branching blocks were similarly onerous on screen).
the tools for reasoning about text can be limited if the text is in a language unsuitable for reasoning
this is why we work in a typed, purely functional setting.
spaghetti is difficult to deal with. for starters, you need "compositionality", so that you get no undefined or emergent behaviour.
then second you need some form of "graphical macros", or a "meta-language" for diagrams, code that generates diagrams or "higher order functions" for diagrams.
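as a tiny illustration of what a "graphical macro" could mean: ordinary code that generates diagram structure, here a higher-order constructor that chains n copies of one box (toy types, nothing statebox-specific)

    # "Graphical macro" sketch: plain code that *generates* diagrams.
    from collections import namedtuple

    Box = namedtuple("Box", "name")           # primitive node
    Seq = namedtuple("Seq", "first second")   # sequential composition

    def repeat(box_name, n):
        # Higher-order diagram constructor: a chain of n copies of one box.
        diagram = Box(box_name)
        for _ in range(n - 1):
            diagram = Seq(diagram, Box(box_name))
        return diagram

    print(repeat("blur", 3))
    # Seq(first=Seq(first=Box(name='blur'), second=Box(name='blur')), second=Box(name='blur'))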
> ...and usually lacks good mechanisms for abstraction, reuse and hierarchical structure. It doesn't have tooling for refactoring...
Arguably, the whole point of having a diagrammatic representation with good formal properties is to provide new mechanisms for accomplishing these things. Of course these mechanisms, features etc. won't quite resemble the ones that are in use with text-only languages.
Read it as refactoring, or equivalent; the described functions all fall under organizational and editing tools, both of which are necessary in visual and text environments. Lacking decent mechanisms would make a full buy-in difficult for any larger project, regardless of the precise manner in which it's done. Without them, you have a write-only language.
You shouldn't generalize like this. Even in massive projects visual programming can be enormously helpful. Nobody seriously tries to script game levels in an IDE. Nobody tries to match sprite parameters, which eventually become code, to image files in an IDE. Nobody tries to design GUIs in IDEs.
The right tool for the right job. For 50% of a game's code visual programming is absolutely the right tool. For some other parts it probably isn't.
You seem to be using a different definition of visual programming than the normal one. The article and my comment are talking about visual programming as a graphical representation of code / logic rather than a textual one. Examples would be things like Unreal's Blueprint visual scripting, node based shader editors (Unreal has one of those too) or the Petri nets described in the article. Visual / GUI tools like sprite editors, level editors or GUI builders are not what is usually meant by visual programming.
Unity is a very popular game engine that doesn't have an official visual programming solution (they're previewing one in the very latest version). Unity has a powerful level editor that is used to lay out the levels in a GUI tool but no visual scripting / programming tools. The majority of Unity games that currently exist therefore do all level scripting in C# code. Many other games engines have no visual scripting solution and all level scripting is done in either a scripting language like Lua or in some cases in C++ code. Unity has sprite editors, visual GUI builder tools etc. but those are not what is generally meant by "visual programming". The closest Unity has had until recently was its graphical animation state machine editor.
awesome comments, really cool to read all of this.
anyway, I would argue both are valid examples of graphical programming, but they happen at different levels.
the "node based" tools usually define some sort of function or system, ie. a "type"; for this you need category theory to describe how the diagrams look and this is not what any editor I know does, but it means a whole of difference.
And the map editors are for defining "terms of a type": given a definition of a "map datatype" there is a graphical way to edit it.
when we talk about graphical programming we are initially focussing on the first: well defined graphical protocol definitions. you can think of it as type checked event sourcing, where the "behaviour" or "type" is described by a (sort of) graph representing a (sort of) state machine.
but we have relatively clear ideas on how to extend this to the second case as well.
The difference with other (older) approaches is that in the last 20 years a lot of mathematics appeared dealing with formal (categorical) diagrams, proof nets, etc. that we leverage. I claim we (the world) now finally really understand how to build visual languages that do not suck.
Things like a GUI designer or level editor map a 2D or 3D domain to a 2D or 3D-projected-to-2D space. A 3D animation editor maps a 4D domain to a 2D projection of a 3D representation plus a timeline representing the 4th time dimension. These mappings are natural, intuitive and work well generally.
Visual programming tools attempt to map logic to a (usually) 2D domain where there is no natural or intuitive general mapping. The representation has both too many degrees of freedom (arbitrary positions of nodes in 2D space that are not meaningful in the problem domain) and too few (connections between nodes end up crossing in 2D adding visual confusion due to constraints of the representation that don't exist in the problem domain).
I've been exploring colored Petri nets for our product and they do seem to have promise for certain use cases, so I do think it's an interesting area to explore.
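For anyone unfamiliar, here is a minimal Python sketch of the colored Petri net idea as I understand it: tokens carry data (their "color") and a transition fires only when a guard accepts the tokens it would consume. This is just an illustration, not our product code or the article's model.

    # Minimal colored-Petri-net sketch: tokens carry values ("colors"), and a
    # transition fires only if its guard accepts one token from each input place.
    from collections import defaultdict

    class Net:
        def __init__(self):
            self.places = defaultdict(list)   # place name -> list of tokens

        def add_token(self, place, token):
            self.places[place].append(token)

        def fire(self, inputs, outputs, guard, action):
            # Consume one token per input place if the guard passes; emit outputs.
            if not all(self.places[p] for p in inputs):
                return False                      # not enabled: missing tokens
            tokens = [self.places[p][0] for p in inputs]
            if not guard(*tokens):
                return False                      # tokens have the wrong "color"
            for p in inputs:
                self.places[p].pop(0)
            for p, tok in zip(outputs, action(*tokens)):
                self.places[p].append(tok)
            return True

    net = Net()
    net.add_token("orders", {"id": 1, "paid": True})
    net.fire(["orders"], ["shipped"],
             guard=lambda o: o["paid"],           # only paid orders may ship
             action=lambda o: [o])
    print(dict(net.places))   # {'orders': [], 'shipped': [{'id': 1, 'paid': True}]}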
> Visual programming tools attempt to map logic to a (usually) 2D domain where there is no natural or intuitive general mapping. The representation has both too many degrees of freedom (arbitrary positions of nodes in 2D space that are not meaningful in the problem domain) and too few (connections between nodes end up crossing in 2D adding visual confusion due to constraints of the representation that don't exist in the problem domain).
In general this is true, but the diagrams we use at Statebox are different in the sense that there is a completeness theorem between the diagrammatic language and an underlying mathematical structure (a category). In this case the mapping is sound by definition.
Also, it is worth stressing that our diagrammatic calculus is topologically invariant, meaning that the position of diagrams in space is meaningless; everything that matters is connectivity. This is also the approach originally used by Coecke and Abramsky in the field of Categorical Quantum Mechanics, which is having huge success in defining quantum protocols :)
Why does category theory magically transform node diagrams into something usable from something not? Unreal blueprints/Reactor schematics/whatever are quite fine in their current form, even if their usage falls apart in advanced constructions. Is statebox going to magically make huge node-and-graph-designed programs reasonable?
Your writeup didn't convince me that "category theory" adds any significant value, and neither does it help inform as to what category theory actually is. How does statebox improve upon existing node-based programming implementations?
CS formality and big words misses the point of visual programming entirely, in that it is to simplify the process of software creation to make it more approachable to non-programmers. Unless your UX is absolutely top-notch you are going to lose these novice users as they struggle to deal with the constraints without a good reason or UX to do so.
Also- The memetastic design of statebox's main page is a pretty big turnoff :(
> Why does category theory magically transform node diagrams into something usable from something not? Unreal blueprints/Reactor schematics/whatever are quite fine in their current form, even if their usage falls apart in advanced constructions. Is statebox going to magically make huge node-and-graph-designed programs reasonable?
> Your writeup didn't convince me that "category theory" adds any significant value, and neither does it help inform as to what category theory actually is. How does statebox improve upon existing node-based programming implementations?
nothing magical, just good engineering and UX design and solid theoretical underpinnings.
cat. th. does add value: there are many ways to build diagrams and build syntaxes for diagrams, but they are not all equivalently powerful or general. but it turns out that there are diagrams that _are_ suitable, and this is what we use.
It will improve upon existing diagram tools in that it gives a formal theory of how they work, so you can really build huuge diagrams and still be sure everything works.
I didn't write the blog post, but I could try to write one about the value of category theory, because it is often misunderstood. It is however very abstract and takes the mind a while to see the value of, which is not so easy to convey.
> CS formality and big words misses the point of visual programming entirely, in that it is to simplify the process of software creation to make it more approachable to non-programmers. Unless your UX is absolutely top-notch you are going to lose these novice users as they struggle to deal with the constraints without a good reason or UX to do so.
oh, yeah this is something often misunderstood, we are not trying to target novice developers (yet). we need to develop a lot of stuff and CS formality is right now still the simplest way to understand the system. I mean, we are not trying to be arrogant or puffy or something, but for instance the way we realise our compilation is with "functorial semantics". we have a functor between categories that does the trick. We could call it something else, but it doesn't help (at this stage).
anyway, if we do our job well then all the category theory would be under the hood and you just get a nice UX for coding with diagrams.
> Also- The memetastic design of statebox's main page is a pretty big turnoff :(
opinions differ :) I thought it was quite funny 2 years ago and many people thought so as well and then it got turned into this homepage.
at the moment we don't really have time to spend time on the site, but it will def. be changed in the future
Smalltalk would be an exception. The graphical elements are provided for you to make your own inside the image on the fly, or you can modify the IDE as needed.
> Experience with tools like Blender or Nuke and particularly with visual programming in games engines is actually where a lot of the better informed dislike of visual programming comes from.
Maybe the better informed dislike, but the bulk of the dislike in general is usually of the form "this kind of thing never works, nobody uses it", when in fact it is being used, in multiple disciplines.
> scalability and maintainability
> easy for very small examples but it doesn't scale.
this is very true, that is why we do it differently; we clearly define the semantics of our diagrams and take guidance from category theory in this. this is different from other graphical languages; we try to assume the minimum but then guarantee you that some stuff is always preserved.
think of it like deterministic, pure functional programming, but with diagrams.
> usually lacks good mechanisms for abstraction, reuse and hierarchical structure.
> no effective means of versioning, diffing or merging
very important points, we try to address this by having everything based on immutable, persistent data structures with built in content addressing; similar to git for instance.
diffing and merging are very complicated and still a research problem, but there are many hints that this can be done
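as a tiny illustration of what content addressing means here (a toy Python sketch in the git style, not our actual format): the id of a node, or of a whole graph, is a hash of its canonical serialization, so identical content always gets the same id and references are immutable

    # Content addressing in the git style: an object's id is the hash of its
    # canonical serialization, so equal content always hashes to the same id.
    import hashlib, json

    def address(obj):
        canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    node = {"op": "add", "inputs": ["x", "y"]}
    node_id = address(node)

    # A graph refers to its nodes by hash, and is itself content-addressed.
    graph = {"nodes": {node_id: node}, "root": node_id}
    print(node_id[:12], address(graph)[:12])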
I think those examples are a bit misleading in that nobody actually prints 100,000 lines of code and looks at them all at once. So those examples are probably showing the whole "program" at once, which is impressive and looks daunting, the same as a big textual code base.
Rather, in order to understand a certain aspect of the system, I imagine picking just one of the nodes and then asking the IDE to show me, say, the immediate inputs. Some of these may be semi-hidden (code folding!), others may show more detail, etc.
BTW, I've been thinking it would be great if these systems had a textual representation in the vein of Graphviz's dot language. So one could have the best of both worlds. For diffing, a simple textual diff could do, but one could come up with fancier semantic diffs, in the same vein as the semantic diffs that exist for code or XML.
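To sketch what that could look like (the nodes and edges here are made up for illustration): serialize the graph to a dot-like text form with a stable ordering, and any plain textual diff then works.

    # Sketch of a dot-style textual representation you can diff with ordinary
    # tools; the node/edge data is invented for illustration.
    import difflib

    def to_dot(edges):
        lines = ["digraph g {"]
        lines += [f'  "{a}" -> "{b}";' for a, b in sorted(edges)]  # stable order
        lines.append("}")
        return lines

    v1 = to_dot([("Noise", "Blur"), ("Blur", "Output")])
    v2 = to_dot([("Noise", "Blur"), ("Blur", "Sharpen"), ("Sharpen", "Output")])

    print("\n".join(difflib.unified_diff(v1, v2, "v1.dot", "v2.dot", lineterm="")))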
When I used Max/MSP a few years ago, a lot of people's code snippets looked like this. However, an important thing to remember is that the target audience for most visual programming tools is not trained software engineers -- i.e. they don't know about encapsulation and abstraction and all of the other concepts we take for granted.
I found Max/MSP code could look extremely tidy, as I would group my functionality together in a similar way as I would in a textual language (short single-purpose functions and such). You can write horrible spaghetti code messes in textual languages too; it's just that we've used enough textual languages to have learned how to abstract our code and how to organise it for maintainability. It's not an inherent feature.
I did some programming in Max/MSP way back when, which was fun, and you have a point about the "functional" aspect. It was sort of the opposite of say, C, in that it was actually hard to create side effects even when you needed them.
But overall I agree with other people that I wouldn't want to maintain anything particularly large in that format. The Max "patches" were on the order of big scripts at most.
:) I also did a lot of Max/MSP and many similar systems, like the [nord modular](http://nmedit.sourceforge.net/), and also wrote such tools back then, for audio and 3d and video, which is what led to statebox ultimately.
I would say the spaghetti aspect of max is the main complaint with visual programming.
To modularise it (small diagrams), you need to contain the behaviour of the boxes (ie. typed purely functional code).
And to generalise it (audio, video, microservices, ...) you need to separate the syntax from the semantics.
(took me about 15yrs to figure out a way to do this properly :-)
I don't know if any of them are extremely easy to pick up. Any re-use between applications is in the domain-specific parts (knowing about color space, UVs, shaders, and cameras), not the UI. Node based interfaces are pretty limiting. A lot of the power of these systems comes from things like having a Python runtime or text representations of scene files that you can find/replace. The other pro/power features are shortcut keys and templates, just like in other apps such as Lightroom or Avid.
Node based systems are a great pattern for certain things, but large scenes get unwieldy quickly. Even if the project files are text, diffing with version control is useless. Profiling is often difficult -- a pet peeve of mine is when apps load the whole scene file, and often the geometry, when you just want to change a parameter.
You can see node based systems "grow up" by adding variables or attribute references, which means your data isn't just flowing down the graph; you have to track references in this new dimension. You then often see encapsulation (hda/otl, Gizmos), which can really help with re-use but creates more limitations.
The problems with visual programming have never changed: at one point you need functions, macros, identifiers, user-defined types, error paths, diffs...
(EDIT: it's an idea that doesn't work at all for _general purpose_ programming, but for some reason it sounds like a good idea from a distance).
What we really need isn't more syntax, it's core features like threads, operator overloading, a large std library, some real form of serialization (JSON doesn't come close to counting), etc.
Or even more basic things like "a date/time implementation that isn't a horror show" or "\w that behaves like \w in every single other regex implementation out there when the unicode flag is on".
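To illustrate the \w point: in many regex engines, a Unicode-aware \w matches letters outside ASCII, and the complaint is presumably about an implementation where \w stays [A-Za-z0-9_] even with the Unicode flag set. Python's re shows the more common behavior (just an illustrative sketch):

    # In Python's re, \w is Unicode-aware by default and matches letters
    # outside ASCII; re.ASCII restricts it to [a-zA-Z0-9_].
    import re

    print(re.findall(r"\w+", "café niño"))            # ['café', 'niño']
    print(re.findall(r"\w+", "café niño", re.ASCII))  # ['caf', 'ni', 'o']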
How is operator overloading a core feature and not more syntax?
I like the infix proposal better. Operator overloading has a larger degree of obfuscation.
Like all "theories" of art and music this model is no good as it isn't predictive. There are plenty of good art with brightly saturated color. Deciding spontaneously that muted colors are the one true way and then that preference for saturation is some sort of retardation in development is just laughable at best. I have a friend who prefers muted colors precisely because he's color blind.
I'm going to go against the grain here but just putting cameras in classrooms and dumping that on Youtube works just fine. The content on MIT opencourseware feels a lot nicer than things like Coursera or others even if it's more traditional. At the end of the day you just want to communicate efficiently, and imperfect access to information is a lot nicer than no access. Don't sweat the details.
The only problem here is sound. Sound can be hard to get right, and I've stopped watching lots of interesting "meetup recordings" on YouTube because they just used the on-camera microphone and the audio was awful. It's a shame, but it makes it nearly impossible to watch.
That's an interesting study. I feel like maybe engagement is the wrong metric, much like how A/B testing gives a good local result but a poor global result. I finished a dozen or so of the traditional courses but can't be bothered to complete the split-up ones. I'm surely not the only one.
It's because designers are just clueless and don't know how things are implemented. The opposite is also true: the parent is practically an alien here for liking design, yet he doesn't even see the point of a Photoshop mockup over a wireframe.
I think in general people are too specialized. You probably want designers that do a little development and developers that do a little design. The full-on generalist approach is also poor; I've done both jobs in my life and ended up mediocre at both.
What problems do you run into? I'm familiar with Lisp's macro system but didn't get to look into hygienic macros in other languages, and was wondering if they are equivalent.