I've written my books in LaTeX - the PDF output goes to Amazon for the print copy, and then I run it through pandoc to get a perfectly formatted epub that I upload to Amazon and Apple and anywhere else.
It takes a bit to get set up, but now that I have the template going, it makes producing a perfectly formatted book a breeze.
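For the curious, the pandoc step can be a one-liner (the file names and cover image here are placeholders; --toc and --epub-cover-image are standard pandoc options, though a preamble full of custom macros may need a simplified input file or a filter):

    pandoc book.tex -o book.epub --toc --epub-cover-image=cover.png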
Despite the thousands of pages of ISO 32000, the reality is that the format is effectively defined by whatever Acrobat accepts. Acrobat tolerates unfathomably malformed PDF files generated by old software that predates the opening-up of the standard, from the era when people were reverse-engineering the format. There’s always some utterly insane file that Acrobat opens just fine, and now you get to play the game of figuring out how Acrobat repaired it.
Plus all the fun of the fact that you can embed the following formats inside a PDF:
- PNG
- JPEG (including CMYK)
- JPEG 2000 (dead)
- JBIG2 (dead)
- CCITT G4 (dead; fax machines)
- PostScript Type 1 fonts (dead)
- PostScript Type 3 fonts (dead)
- PostScript CIDFonts (pre-Unicode, dead)
- CFF fonts (the inside of an OTF)
- TrueType fonts
- ICC profiles
- PostScript functions defining color spaces
- XML forms (the worst)
- LZW-compressed data
- run-length-compressed data
- Deflate-compressed data
All of which Acrobat will allow to be malformed in various non-standard ways, so you need to write your own parsers.
Note the lack of OpenType fonts, and the lack of proper Unicode!
I share that passion too! Most people grossly underestimate how difficult it is to implement something as seemingly simple as a text field or menu, when there are so many hidden issues and techniques that make them easy to use because you don't notice all the support you're getting.
Well implemented user interfaces have polish that makes their inherent complexity invisible, but polish is actually millions of tiny little scratches, not just a clean simple perfectly flat surface.
Accessibility and internationalization are two crucial dimensions that most people forget about (especially Americans without sensory or motor impairments), each of which adds a huge amount of unavoidable complexity to text fields and the rest of the widget set.
Then there's text selection, character/word/paragraph level selection, drag-and-drop, pending delete, scrolling, scrollbar hiding, auto-scroll, focus management, keyboard navigation and shortcuts, copy and paste, alternative input methods, type-ahead, etc., all of which need to dovetail together perfectly. For example, auto-scrolling has to keep working during selection and drag-and-drop, and it should trigger on a reasonably scaled timer regardless of mouse position (instead of only on mouse movements inside the text field), so scrolling is deterministically controllable, happens at a reasonable speed, and doesn't freeze when you stop moving the mouse or move it too far.
There are so many half-assed custom text fields out there, written by well-intentioned people who just didn't realize the native text fields supported all those features, or weren't intimately familiar with all of the nuances and tweaks that have been hashed out over the decades (like anchoring and extending the selection, controlling and editing the selection with the keyboard, inserting and removing redundant spaces at the seams of the beginning and end of the selection when you drag and drop text, etc.).
Even when somebody achieves the comparatively straightforward task of implementing a text field that looks pixel-for-pixel equivalent to a native text field, they're usually making a promise they can't keep: that it also operates exactly the same as a native text field.
I've seen many text fields in games (and browsers) that break type-ahead by dropping keyboard input when you type too fast, because instead of tracking device events in a queue, they poll the current state of the keys on each frame update; so when you get slow frames and stuttering (which happens often, like during autosave or browser thrashing), they miss key transitions.
Most games poll the mouse buttons and positions this way too, so they break mouse-ahead by dropping mouse clicks if you make them too fast, and they perform actions at the current position of the mouse instead of its position when the click happened.
Even a beautifully designed, well implemented, AAA-quality game like Dyson Sphere Program running on a high-end PC has this problem. After you place a power pole, you have to hold the mouse still for a moment to let the game handle the mouse-down event and draw the screen a few times, before daring to move your mouse away from where you want to place the pole; otherwise the pole goes into the wrong position, away from where you clicked, and that throws a monkey wrench into smooth fluid interaction, predictable reliability, mouse-ahead, etc.
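To make the polling-vs-events distinction concrete, here's a minimal sketch (using SDL2 as a neutral example, not any particular game's code; handle_key and place_object are hypothetical stand-ins for real input handling):

    #include <SDL.h>

    /* Hypothetical game functions, stand-ins for real input handling. */
    static void handle_key(SDL_Keycode k) { (void)k; }
    static void place_object(int x, int y) { (void)x; (void)y; }

    static void process_input(void)
    {
        /* Fragile: polling snapshots the current state once per frame, so
           a key pressed and released during one slow frame is never seen. */
        const Uint8 *keys = SDL_GetKeyboardState(NULL);
        if (keys[SDL_SCANCODE_SPACE]) {
            /* fast taps can be missed entirely during stutters */
        }

        /* Robust: drain the event queue. Every transition is delivered in
           order, and each click carries the position it had when it
           happened, so type-ahead and mouse-ahead keep working. */
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_KEYDOWN:
                handle_key(e.key.keysym.sym);
                break;
            case SDL_MOUSEBUTTONDOWN:
                place_object(e.button.x, e.button.y);
                break;
            }
        }
    }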
The Xerox Star had a wonderfully well thought out and implemented text editor, which pioneered solutions to many of these issues in 1981 (including internationalization), as demonstrated in this video:
See Brad Myers' video "All the Widgets (Fixed v2) - 1990".
This was made in 1990, sponsored by the ACM CHI 1990 conference, to tell the history of widgets up until then. Previously published as: Brad A. Myers. All the Widgets. 2 hour, 15 min videotape. Technical Video Program of the SIGCHI'90 conference, Seattle, WA. April 1-4, 1990. SIGGRAPH Video Review, Issue 57. ISBN 0-89791-930-0.
Brad Myers is finishing a book (tentatively titled “Pick, Click, Flick! The Story of Interaction Techniques”), which is partially a history of interaction techniques - probably more than 450 pages. The initial chapter list can be seen at www.ixtbook.com. It is based on Brad's All The Widgets video and his Brief History of HCI paper, and also on his class on Interaction Techniques, which he taught three times. As part of that class, Brad interviewed 15 inventors of different interaction techniques; video of all but one of those interviews is available online, which might also be a useful resource.
Pick, Click, Flick! The Story of Interaction Techniques: www.ixtbook.com
Here's the video and slides of the talk I gave to Brad's Interaction Techniques class about pie menus - there's a discussion of mouse-ahead, event handling, and polling around 16:30.
Since you asked: Yes, the Pascal-to-C translator was written specifically for TeX.
To be more precise (and add more detail than anyone asked for!):
- TeX is not written in Pascal itself, but in a literate-programming system called WEB [1,2] (a language basically created for writing the TeX program!), which lets you mix Pascal source code with TeX documentation, and lets you write your program as a “web” of “sections”, each independently understandable and referring to other sections (with macros, etc.). The “tangle” program converts the WEB source code to Pascal source code; there's a tiny sketch of what WEB looks like after this list.
- The Pascal used in the TeX program is a limited subset [3] of Pascal (and specifically of what Knuth called “Pascal-H”, the Pascal implementation that was available to him on the DEC PDP-10 system used at Stanford SAIL).
- It is not exactly true that he does not rely on implementation details, but he's conscious about where he does so, and he's included a list of all such places in the index under “system dependencies” [4] — in most places he gives suggestions on what could be done if the compiler didn't work the same way.
- In the early days, when Pascal was the most common language available at the places TeX was usually run (universities, research labs, etc.), the Pascal code (generated by Tangle) was directly compiled into a Pascal program and run.
- With the rise of C (into the wider world outside Bell Labs), there arose both hand-translations of the WEB (or Pascal) source code, and programs to do this automatically.
- Today, major TeX distributions have their own Pascal(WEB)-to-C converters, written specifically for the TeX (and METAFONT) program. For example, TeX Live uses web2c[5], MiKTeX uses its own “C4P”[6], and even the more obscure distributions like KerTeX[7] have their own WEB/Pascal-to-C translators. One interesting project is web2w[8,9], which translates the TeX program from WEB (the Pascal-based literate programming system) to CWEB (the C-based literate programming system).
- The only exception I'm aware of (that does not translate WEB or Pascal to C) is the TeX-GPC distribution [10,11,12], which makes only the changes needed to get the TeX program running with a modern Pascal compiler (GPC, GNU Pascal).
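To give a feel for the format, here's a tiny illustrative sketch in WEB's style (made-up identifiers, not a quote from tex.web; incr is the kind of shorthand WEB's @d macros provide):

    @ A section starts with \TeX\ commentary like this one; identifiers
    such as |line_count| can be quoted in the prose.

    @<Global variables@>=
    @!line_count: integer; {lines read so far}

    @ The code part of a section can refer to other sections by name;
    the tangle program expands all such references into one Pascal program.

    @<Read a line and update the count@>=
    read_ln(input_file); incr(line_count)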
“Minimal code to put pixels on the screen?” is a question I’ve seen often enough that I’ve made a little gist to link whenever it comes up: https://gist.github.com/CoryBloyd/6725bb78323bb1157ff8d4175d... It requires SDL (https://www.libsdl.org/), but making a window and putting some pixels in it, on all the world’s varied platforms, is the premier feature of that lib.
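If you just want the flavor of it without clicking through, a minimal SDL2 version (my own rough sketch, not the gist's exact code; error checking omitted) looks about like this:

    #include <SDL.h>
    #include <stdint.h>

    enum { W = 320, H = 240 };
    static uint32_t pixels[W * H];   /* our CPU-side framebuffer */

    int main(int argc, char **argv)
    {
        (void)argc; (void)argv;
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, W, H, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        /* A streaming texture we overwrite with raw pixels every frame. */
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);
        for (int running = 1; running; ) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = 0;

            for (int y = 0; y < H; y++)      /* draw any pattern you like */
                for (int x = 0; x < W; x++)
                    pixels[y * W + x] = 0xFF000000u
                                      | ((x & 0xFF) << 16) | ((y & 0xFF) << 8);

            SDL_UpdateTexture(tex, NULL, pixels, W * (int)sizeof(uint32_t));
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
        }
        SDL_Quit();
        return 0;
    }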
As a professional graphics programmer at a well-known game studio, who has taught and mentored a lot of people: Direct3D 11 is quite good. D3D12 is a bit more like Vulkan, but not too far in that direction.
In order of recommendation for learners, just rating each API on its own merits without considering other factors like portability: D3D11, Metal, WebGPU, D3D12, OpenGL and Vulkan.
OpenGL and Vulkan score last place for similar but opposite reasons: both their APIs are a mess, and the debugging and tooling story for each is still underdeveloped. But while OpenGL has a backwards API design that often gives users the wrong impression of how GPUs work, Vulkan is much more annoying, requiring the user to dot their i's and cross their t's while still not giving good guidance on how they should structure their rendering code. I am also eternally annoyed at things like VkFramebuffer, which should never have existed (yes, I know it is not required in the latest version).
I would rank WebGPU higher (disclaimer: I contribute to the WebGPU spec) since the API is a great intermediate, except WGSL is too new and too underbaked for me to seriously recommend it right now. The native implementations Dawn and wgpu are committed to supporting SPIR-V input, which is how I would recommend using it right now.
I totally recommend going for it. It's not that scary, just a few concepts to learn. For example, on Windows you roughly need to know about HANDLE, HWND, WNDCLASSEX, RegisterClassEx(), CreateWindow(), how to write a window proc to handle events coming from the OS, and how to write a simple message loop to pump those events. To put something on the screen, look up the device context HDC, the RECT structure, and use FillRect() to draw colored axis-aligned quads.
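If it helps, here's roughly what that skeleton looks like assembled (a sketch, not production code: error checking omitted, and the class/window names are arbitrary):

    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);   /* the drawing context */
            RECT r = { 50, 50, 200, 150 };     /* left, top, right, bottom */
            FillRect(hdc, &r, (HBRUSH)GetStockObject(BLACK_BRUSH));
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE prev, LPSTR cmd, int show)
    {
        /* Register a window class, then create a window of that class. */
        WNDCLASSEX wc = { sizeof(wc) };
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("ScratchWindow");
        RegisterClassEx(&wc);

        HWND hwnd = CreateWindow(TEXT("ScratchWindow"), TEXT("Hello"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT,
                                 CW_USEDEFAULT, 640, 480,
                                 NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);

        /* The message loop pumps events from the OS to the window proc. */
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }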
Optionally, you can later move to a custom memory-backed backbuffer allocated using CreateDIBSection(), so you can just set each pixel from the CPU as a plain uint32_t value (0x00RRGGBB in GDI's 32-bit layout). That allows you to go wild: you can proceed to write your own 3D game engine with nothing to distract you - it's you, the CPU, and the backbuffer memory. (It will be running at software-rasterizer speeds, of course - but it should be easy to get very good performance at, say, 640x480.)
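Sketched under the same caveats (create_backbuffer and paint are my own names; this slots into the window proc of the previous sketch): a top-down 32-bit DIB section hands you a plain uint32_t array to scribble in, which WM_PAINT blits to the window through a memory DC:

    #include <stdint.h>

    enum { BB_W = 640, BB_H = 480 };
    static uint32_t *g_pixels;           /* the CPU-visible backbuffer */
    static HBITMAP   g_dib;

    static void create_backbuffer(void)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = BB_W;
        bmi.bmiHeader.biHeight      = -BB_H;   /* negative = top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;
        g_dib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS,
                                 (void **)&g_pixels, NULL, 0);
    }

    static void paint(HWND hwnd)         /* call from WM_PAINT */
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        for (int i = 0; i < BB_W * BB_H; i++)   /* draw with the CPU */
            g_pixels[i] = 0x003060C0;           /* 0x00RRGGBB */

        HDC mem = CreateCompatibleDC(hdc);
        HGDIOBJ old = SelectObject(mem, g_dib);
        BitBlt(hdc, 0, 0, BB_W, BB_H, mem, 0, 0, SRCCOPY);
        SelectObject(mem, old);
        DeleteDC(mem);
        EndPaint(hwnd, &ps);
    }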
It shouldn't take you more than a few hours to maybe two days to get the ball rolling, depending on your prerequisites. I initially found the Win32 API to be a bit arcane with its heavy use of the preprocessor and of typedef'ed types (even "pointer-to" typedefs like LPCSTR instead of simply "const char *"). But beyond these superficialities, I find that large parts of it are fairly well designed, and in any case the code that interfaces with the OS can all be kept in one central place.
Once you're a bit accustomed to these things, maybe afterwards you'll look back and wonder how you could put up with the piles of abstractions all these fluff libraries put on top. And personally, while this approach is not suited to quickly hacking up a GUI in a day, I find it's a great feeling to be in control of everything, and that will show in the quality of the product as well.