
Why? That seemed like a trick in search of a problem. I mean, I get having a small DSL for describing charts. That's why we have quite a plethora of them. They usually have more complexity than you'd hope, because it is a complex problem.

But sparklines, you might protest. They are of niche value, and they really need to be super high resolution; the point is to pack a ton of density. If you are just showing a few numbers, consider just showing the numbers.

And if you haven't used the likes of MetaPost, Graphviz, etc., give them a try.



Yeah, I'm quite familiar with these toolkits. (Been at it 30 years, yo.)

The thing about the ligature trick, and fonts in general, and more specifically the packing of known functional GUI elements into a broader ligature description, is that it's human-readable and comprehensible without also requiring a great deal of 'programming'. Well, in fact Chartwell "is" programming, or at least coding (not Turing-complete..) of information in a reproducible way - and the fact that we use the language of glyphs as a GUI metaphor is also appealing. My affinity for this approach is that it is tied to human symbol-making in an intrinsic, self-describing way; whereas a technical GUI system might consist of a multivariate collection of abstractions, putting it all into glyphs and ligatures ties it to our most basic of operational agilities, reading and writing.

Not compiling, hacking, code, transmogrifying, mutating, extending, abstracting, tool-pushing, etc., but rather "describe this in glyphs/ligatures like every language ever, or GTFO.."

I mean, WIMP has its thing, and touch and mobile too, but I do wonder if there isn't something to a purely grapheme- and symbol-based mode of interaction... which I would argue is where we are heading, with our multi-gigabyte OS updates, anyway .. when we could nevertheless be doing it in <128k.


I think you are seriously abusing "describe this in glyphs/ligatures". In particular, there is a reason "a picture is worth a thousand words." :)

Some things are simply more easily done outside of the standard glyphs we use for words. I don't expect this will ever change. And I shudder at all of the complexity added to our glyph systems to support efforts at making "one true language" that can do everything.


Oh, no question there is abuse going on here - this thought experiment is really reaching and extending beyond a certain horizon, which may or may not be idiotic. ;)

Perhaps you're familiar with the wonderful and super-crazy TempleOS? There are some great things about the way the UI is expressed there ..

>Some things are simply more easily done outside of the standard glyphs we use for words. I don't expect this will ever change. And I shudder at all of the complexity added to our glyph systems to support efforts at making "one true language" that can do everything.

Certainly a valid concern, and I acknowledge your conservatism, but I think you might want to look at the cyclomatic complexity of the work required to splash a modern GUI up on the screen, and compare it with the cyclomatic complexity required to render a human-readable string of glyphs. There is a lot of opportunity to optimise these processes - and I would wager that having a font full of glyphs required to construct a UI paradigm, having those primitive elements processed by the OS in a simple way, and giving those elements to the end-user (who admittedly would need to learn something new for it to be productive) may indeed produce a "simpler" interaction method for future users. Yes, there is a certain fallacy to the "one true language" approach - but if you pay careful attention, you'll notice that the OSes in most common use in the last 10 years are on that road anyway.

Break this out of the box a little, let's move from glyph/grapheme/font territory - what if the entire OS were instead expressed with SVG files? I think this is a viable thought experiment, personally.


Most modern GUI toolkits bother me in ways that I can't adequately express. All the more so because most fall back to thinking CSS is the answer, often completely ignoring what you can easily accomplish if you are willing to use absolute positioning, oddly enough!

Seriously, I don't think CSS is the worst answer. However, trying to get everything to work with some default flow behavior is borderline insanity. More, it is completely unnecessary. My favorite example lately has been http://taeric.github.io/cube-permutations-1.html for how you can lay out using absolute positioning perfectly fine. (I similarly did sudoku with minimal effort using similar markup.)

But the worst sin is the sheer instability of what we are building as our foundation. The box-and-glue method of TeX might not be the most intuitive, nor the most powerful. However, it is nice that it has been stable. And not just in the "doesn't crash" sense of the word. In the "I would feel comfortable building on top of it" sense.

So, let me be clear that I'm skeptical, but I would be delighted to be proved wrong.


I've been into GUIs and so on since before the birth of the web, and I've always had this deep discomfort with where we have arrived today. The Web and its UI is such a disastrous, convoluted mess of abstractions and conceptual complexity - yet it works "well enough" that a majority of the world can deal with it.

But this doesn't mean we can't think outside the box. Yes, I concur - TeX's box and glue is another kettle of fish - but then so too are things like Box2D's physics forces and contact mechanisms, which I personally believe, were they integrated into a forward-thinking GUI framework, would open the doors to very interesting and versatile interfaces - as has been demonstrated by their application to the most intuitive interfaces of all time, games. I would love to be able to say "[ the context of this independent element has a gravity of -1. ]", and then watch as my sentence floats away to the top of the screen to function as a daily "ToDo list" which, once I press the '.' period at the end of the ToDo item, sets the gravity to 1, and the whole thing lands at the bottom of the screen, away from my attention.

There are many abstractions like this out there which could be applied to human/computer interaction - we've settled on a set of words, symbols, and concepts granted to us by the designers of modern OSes, but I truly believe that the art of producing interaction symbology is far, far from where it could be. As do many other people, of course (http://worrydream.com) .. there seem to be a plethora of views about this. Almost as many views as there are symbols in the world to be read ...
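The gravity-flipping ToDo element could be sketched in a few lines. This is my own toy model, not Box2D's actual API: a text element integrates a gravity term each frame, and typing the terminating '.' flips the sign so the note "lands".

```python
# Toy sketch (invented for illustration, not Box2D's API): a UI text
# element whose vertical position is driven by a gravity sign that
# flips when the sentence is completed with a period.

class FloatingNote:
    """Drifts to the top of the screen (gravity -1) until terminated
    with '.', after which it falls to the bottom (gravity +1).
    Screen coordinates: y=0 is the top, y=screen_height the bottom."""

    def __init__(self, text, screen_height=100.0):
        self.text = text
        self.y = screen_height / 2        # start mid-screen
        self.velocity = 0.0
        self.gravity = -1.0               # negative: floats upward
        self.screen_height = screen_height

    def type_char(self, ch):
        self.text += ch
        if ch == ".":                     # the period "lands" the note
            self.gravity = 1.0

    def step(self, dt=0.1):
        # Semi-implicit Euler integration, clamped to the screen edges.
        self.velocity += self.gravity * dt
        self.y += self.velocity * dt
        self.y = max(0.0, min(self.screen_height, self.y))

note = FloatingNote("buy milk")
for _ in range(200):
    note.step()
print(note.y)        # 0.0 - the note has floated to the top
note.type_char(".")
for _ in range(400):
    note.step()
print(note.y)        # 100.0 - the finished item has landed at the bottom
```

A real framework would of course delegate the integration to the physics engine; the point is only that "gravity of -1" is a one-line property of a text element.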


I am fascinated by this idea despite the fact that I don't actually understand it. Ever since I read your comment, I've been trying to think what it would mean to build a GUI this way. It has been thought-provoking, and has caused me to read about interesting things like ligature substitution rules in OpenType, but I still haven't made much progress.
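For what it's worth, the core of those OpenType ligature substitution rules can be modeled in a few lines: the shaper walks the glyph stream and greedily replaces the longest matching component sequence with a single ligature glyph. The glyph names below (including the UI-flavored "arrow.left") are invented for illustration.

```python
# Toy model of OpenType GSUB ligature substitution: longest matching
# component sequence wins, and is replaced by one ligature glyph.
# Glyph names are made up for illustration.

LIGATURES = {
    ("f", "f", "i"): "ffi",
    ("f", "i"): "fi",
    ("less", "hyphen"): "arrow.left",   # hypothetical UI ligature: "<-"
}

def shape(glyphs):
    out, i = [], 0
    max_len = max(len(k) for k in LIGATURES)
    while i < len(glyphs):
        for n in range(max_len, 1, -1):          # try longest match first
            seq = tuple(glyphs[i:i + n])
            if seq in LIGATURES:
                out.append(LIGATURES[seq])
                i += n
                break
        else:
            out.append(glyphs[i])                # no ligature: pass through
            i += 1
    return out

print(shape(["f", "f", "i", "n"]))   # ['ffi', 'n']
```

The GUI-in-a-font idea amounts to loading that table with interface sequences instead of (or in addition to) orthographic ones.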

I would be grateful if you would elaborate in any way at all.

Thanks!


Here's a simple example of what I mean: consider the title bar of the window you're reading this page in (assuming your window manager has title bars and you know what I'm referring to..)

From left to right, there is a narrative - quite a bit like words in a sentence.

The resources to construct that sentence - the bitmaps for the window control buttons, the spacing and formatting of the title, the right-hand-side elements of the window title - all of these resources consume some kind of storage on the computer, and require a set of abstractions in order to pass through the pipeline that eventually renders something visible to the screen. There are multiple filters in that process - opening the files, then parsing a file resource (.png) for an icon or two, then opening some font file, finding the elements which construct the glyphs in the title text, etc. This all eventually gets stuck up on the screen using some final method - perhaps the OS is using GL, and everything is just triangles all the way down. Perhaps there's texture blitting and line-drawing primitives too ..

All of these elements can be expressed as glyphs in a typographic way, and so can all of the changes and states that can occur with these elements - as ligatures, individual glyphs, etc.

My idea is that these resources can be replaced with glyphs and ligatures from a font, and that indeed GUI elements themselves can be expressed (I would argue 'better') as graphemes in a common language context.

Think of it like this: instead of having a filesystem full of resources that have to be individually processed and extracted, there is simply "funkyOSfromtheFuture.ttf", and everything that is required to be displayed by the OS, indeed anything, can be expressed as an array of mappings to glyphs within that single .ttf.

It's not just that I hope to achieve a 'resource compaction' down to an ultimate font file and thus do away with a filesystem full of differing technologies that all exist to produce, ultimately, graphemes - though that is a nice effect. It's also that I think GUIs can be expressed in the same way that letters and words can be used to construct entire useful sentences - and I wonder what sort of programming advances could be made with an OS (or more specifically, a GUI environment) designed around the central principle that the elements of the interface are expressible using the typographic metrics commonly used to render human-readable text. To me the "close window" icon is equivalent to the "." character, in terms of its effect on my eyeballs. So why should it be splashed up on the screen using technology vastly different from that used to render other symbols, instead of just being included in a 'new GUI alphabet'?
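A minimal sketch of that 'new GUI alphabet': icon fonts already map symbols into Unicode's Private Use Area, so a title bar really can be composed as a plain string over one font. The glyph names and codepoints below are invented for illustration.

```python
# Sketch of the "one font file" idea: every UI element the OS can draw
# is a named glyph in a single face, and a window's chrome is just a
# string over that alphabet. Names/codepoints invented for illustration.

UI_GLYPHS = {
    "window.close":    "\uE000",   # Private Use Area codepoints, as
    "window.minimize": "\uE001",   # existing icon fonts already do
    "window.maximize": "\uE002",
}

def title_bar(title):
    """Compose a title bar as plain text: control glyphs plus the
    title, all rendered by the same text-shaping pipeline."""
    controls = (UI_GLYPHS["window.close"]
                + UI_GLYPHS["window.minimize"]
                + UI_GLYPHS["window.maximize"])
    return controls + " " + title

bar = title_bar("funkyOSfromtheFuture")
print(len(bar))   # 24: three control glyphs, a space, and 20 title chars
```

Hit-testing a click then reduces to asking which character index the pointer is over, the same question a text caret already answers.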

There have been efforts to put things into this kind of technical context in the past - take for example NeXTSTEP's approach, with "everything is just Display PostScript" as a metaphor.

"What if we built an OS whose entire look and feel was derived from a single font, and those elements were accessible to the keyboard user?", is the thought experiment.. like, I could have an alternative element in the font, and convenient key combination, that would turn any "?" in this sentence into a clickable link .. and you, upon clicking it, are presented with a means of answering the question...

Instead of having abstract concepts of communication that are entirely different (forms, fields, WIMP elements, etc.) from the base set of tools we're already used to using as humans (letters/words/sentences/symbols), we merge the sets. My window title becomes "< hi there .", and if you click any of those symbols, they do things .. of course, this is a very simplistic description, but you can think of others.

Let's try another simple one: "[ This sentence would be a draggable window, because the '[]' chars are considered by the OS to be clickable/draggable and represent a 'collection' of information that the user might want to move around the screen; to do so, they just click either of the '[]' symbols that represent the edges of the collection. ]"

Another one: "Do you want to answer this question? Then simply click the '?' symbol at the side of the sentence, and you will be prompted to input your data - if not, simply click the period at the end of this sentence to dismiss the element."
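Those two examples share one mechanism: the OS scans rendered text for punctuation it has promoted to interaction symbols. A sketch of that scanner, with rules invented purely for this thought experiment ('[..]' spans are draggable collections, '?' and '.' are clickable terminators):

```python
# Sketch: ordinary punctuation as the interaction layer. Rules are
# invented for this thought experiment: '[' .. ']' delimits a draggable
# collection; '?' and '.' are clickable terminators.

import re

def interactive_regions(text):
    """Return (kind, start, end) spans the 'OS' would make live."""
    regions = []
    for m in re.finditer(r"\[[^\]]*\]", text):
        regions.append(("draggable", m.start(), m.end()))
    for m in re.finditer(r"[?.]", text):
        regions.append(("clickable", m.start(), m.end()))
    return regions

text = "[ Buy milk ] Want to answer this question?"
for kind, a, b in interactive_regions(text):
    print(kind, repr(text[a:b]))
# draggable '[ Buy milk ]'
# clickable '?'
```

The window system then only needs to map pointer events onto these character spans; no separate widget tree is required for this class of element.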


I've been thinking about this within the context of data visualisation schemas like Vega (http://vega.github.io/). They sort of represent a middle ground between a full programming API like matplotlib or the XML of SVG on one side, and the language of matrices that bitmap images really are on the other. Easy for a machine to parse, and easy to change, with a more consistent structure for a given graph, at the cost of some limitations on what you can express.
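To make the "middle ground" concrete, here is an abridged Vega-style bar chart spec written as a plain Python dict. The field names follow Vega's general shape but this is not a complete, validated spec; the point is that the chart is data, so a machine can rewrite it as easily as read it.

```python
# Abridged, Vega-flavoured chart description as plain data (not a
# complete or validated Vega spec - names follow Vega's general shape).

spec = {
    "width": 200,
    "height": 100,
    "data": [{"name": "table",
              "values": [{"x": "a", "y": 28}, {"x": "b", "y": 55}]}],
    "marks": [{"type": "rect",
               "from": {"data": "table"},
               "encode": {"enter": {
                   "x": {"scale": "xs", "field": "x"},
                   "y": {"scale": "ys", "field": "y"}}}}],
}

# Because the chart is data, restyling it is just editing the dict:
spec["marks"][0]["type"] = "line"
print(spec["marks"][0]["type"])   # line
```

That editability-by-machine is exactly what a raw bitmap, or a pile of imperative draw calls, gives up.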


Thank you very much. That is a lot of food for thought.


Given this idea, I can't figure out how to deal with the fact that fonts are finite and, even with ligatures, relatively quickly enumerable. User interfaces, on the other hand, often require drawing arbitrary pictures. As I understand it, Chartwell takes advantage of the fact that charts have finite resolution, so that it's possible to enumerate all the possible heights for a bar or angles for a pie wedge as individual glyphs. But would that be possible for a general user interface? I may be missing something obvious.
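The finiteness you describe is easy to quantify. Assuming a Chartwell-like scheme where bars are drawn at integer percentages (the glyph naming below is invented), the whole chart vocabulary for one bar is just 101 glyphs:

```python
# Why Chartwell-style charts stay enumerable: at integer-percentage
# resolution, a bar needs only 101 distinct glyphs (0..100), so the
# ligature table is finite. Glyph naming invented for illustration.

def bar_glyph(value):
    """Map an integer percentage to a single glyph name, the way a
    chart font maps the digit sequence '42' to one bar-shaped glyph."""
    if not 0 <= value <= 100:
        raise ValueError("outside the font's enumerable range")
    return f"bar.{value}"

glyphs = {bar_glyph(v) for v in range(101)}
print(len(glyphs))   # 101 glyphs cover every possible bar height
```

A general UI would indeed blow this budget unless, as the sibling comment suggests, arbitrary pictures are built compositionally from a small finite alphabet rather than enumerated outright.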


Compositionality should help with that. But at the same time, constraints should be domain based. Within the domain model, you can have a language express what you need.

Your comment brings 3 thoughts to me:

1. Compositionality should enable elegant combinations of GUIs. Our programming language primitives are finite, yet there are infinite ways to combine them. Just as there are 26 letters in the alphabet, and yet we have a growing language.

2. Domain modelling matters. Constraints on the system should be on the domain we are working within. Hence the primitives would have to be designed according to the domain.

3. Something about "constraints" and "creativity".



