It's an implementation of an old recommendation to never have more than 80 characters per line, ostensibly to limit horizontal eye movement but mostly stemming from legacy 80-character terminals and punch cards.
The value of that recommendation is rather dubious considering today's high-resolution displays that allow for smaller font sizes. 80 readable characters at 768p are not the same as 80 readable characters at 4K.
A line length of 50-70 characters is, in general, just well suited to reading.
This has been researched, and by quickly searching I can find the following (beautifully laid out) paper: https://journals.uc.edu/index.php/vl/article/view/5765
To my surprise the paper actually concludes that fast readers prefer shorter line length.
Edit: Books and newspapers are usually also more or less in compliance with this convention, and those were around since before computers were a thing.
As someone who uses screen zoom tools constantly, I vote in favour of the 80ch column width recommendation. If you want to support extra wide monitors, consider using multiple columns, rather than a single, wider one.
It actually goes back to mechanical typewriters, which were limited to 70 to 90 characters per line. Commonly used punch cards also had 80 columns. Both were the inspiration for the 80 characters in computer terminals.
Work with OpenDocument to get the necessary features into the next version of ODF while keeping national bodies informed about the status of that effort. In the meanwhile, allow Office to save (with reduced functionality) to ODF in order to fulfill the requirements of existing standards-oriented procurement processes. (Fun fact: They did the latter pretty quickly.)
Here's what they shouldn't have done: Undermine ISO's credibility by ramming a hastily-constructed, not-yet-implemented spec through a fast-track process intended for mature specs by stuffing national bodies. I see no reason to place Microsoft's short term profits over the integrity of international standards bodies, nor do I see one to excuse Microsoft for doing so.
Why on earth would they want to do that? Because they hate having money? Because they suddenly decided that opening the market to competition would be more important than the billions they stood to lose?
These standards determine the tools people use to communicate with tax offices and other government institutions. Thanks to their efforts (supported by as much corruption as necessary), Microsoft didn't have to invent a new file format and could let people just keep using the file format everyone was already using for official business.
Office allows saving as ODF already and has supported it for ages. It was never about supporting open standards. This is all about corporate interests.
I can't think of a single "open" format designed by a large corporation that isn't "open" as a way to make more money.
In my experience, fintech companies (including ones that either belong to or own a bank) follow one of two playbooks:
- Issue high-powered laptops that the developers work on directly, then install so many security suites that Visual Studio takes three minutes to launch. The tech stack is too crusty and convoluted to move to anything else, like developer VMs, without major breakage.
- Rely 100% on Entra ID to protect a tech stack that's either 100% Azure or 99% Azure with the remaining 1% being Citrix. You can dial in with anything that can run a Citrix client or a browser modern enough to run the AVD web client. If they could somehow move the client hardware to the Azure cloud, they would.
I don't really associate fintech with a modern, well-implemented tech stack. Well, I suppose moving everything to the cloud is modern but that doesn't mean it's particularly well done.
But that's a tiny model; it's the smallest version of Llama 3.1. The commercially marketed models are way bigger - e.g. GPT-4 has been estimated to use about 1.76 trillion parameters, 220 times more than the Llama build you mentioned. Their resource and performance requirements are vastly different.
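The scale gap can be sketched with quick back-of-the-envelope arithmetic (assuming the public 8B parameter count for the smallest Llama 3.1 and the unconfirmed ~1.76T estimate for GPT-4; the fp16 figures below are just weight storage, ignoring activations and KV cache):

```python
# Rough scale comparison; the GPT-4 figure is an estimate, never
# confirmed by OpenAI, so treat these numbers as illustrative only.
llama_params = 8e9      # Llama 3.1 8B (public figure)
gpt4_params = 1.76e12   # GPT-4 (widely cited estimate)

ratio = gpt4_params / llama_params
print(f"parameter ratio: {ratio:.0f}x")  # → 220x

# Weights alone at fp16 precision (2 bytes per parameter):
print(f"Llama 3.1 8B weights: {llama_params * 2 / 1e9:.0f} GB")   # → 16 GB
print(f"GPT-4 est. weights:   {gpt4_params * 2 / 1e12:.2f} TB")   # → 3.52 TB
```

At that size the estimated GPT-4 weights alone wouldn't fit on any single GPU, which is why the resource profiles of the two are not comparable.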
You're essentially arguing that shipping marine diesel generator sets must be trivial because you can fit a dozen moped motors on the bed of your pickup truck just fine.
Okay but these tiny models are being used by people and businesses instead of GPT-4. My point was that they consume less energy per user than a rig used for gaming.
I have no insight into how many GPT-4 users are served per GPU, but I would assume OpenAI heavily optimizes for that, considering the cost to run that thing. It's probably in the same ballpark: hundreds to thousands of concurrent user requests per GPU. Still better than one GPU per gamer, even if it requires 10x the energy.