What every well-meaning parent will say, when pressed, is that they want their child to "be successful". But that goal brings us right back to the gauntlet of testing and ranking; to be successful within the systems of industrial society means being relatively better at whatever is tested, not better developed overall. Everyone who doesn't fall in line survives only by finding a "cheat code": a personal portfolio with no competition, the drive to start a business, pulling off a successful crime, marrying into wealth.
That is, the only way to really make the parents happy is to never promise a clear path to success, for once you do they will rush to gatekeep it for their child, disregarding the child's motives or perspective. It's the "extrinsic reward negates intrinsic reward" hypothesis at the greatest scale.
Don't look at individual politicians, look for the emerging trends in the political class.
The NYT has a century-plus history of pushing the de facto national narratives used by politicians - while they might not be openly backed by any federal interest, they are certainly stochastically willing to play along. Therefore, if they publish something like this, take notice, because it most likely means some group inside the power structure wants it to become a political issue as part of a future economic framework. The current framework is in a failure mode, and this fact is becoming increasingly evident at all levels - support for a radical change of some kind must be built, and factions are emerging across the various sides of the issue.
As the article notes, the last time four-day normalization was floated was the 70's, also a period with substantial labor unrest. The resolution ultimately arrived at then was to cut safety nets, taxes, and jobs, clearing the way for investment in the productivity booms of the 80's and 90's. This time the issue may actually be campaigned on, since the prior doctrine has already run its course and the signals raining down from on high point toward an attempt at a green-energy "own-nothing" economy. The contra faction, of course, has its own pitch, the one Rick Scott is pushing: the "crypto economy." The actual outcome is likely to blend both approaches, as is typical with political realignments. Four-day normalization could be compatible with either. We shall see.
> The NYT has a century-plus history of pushing the de facto national narratives used by politicians - while they might not be openly backed by any federal interest, they are certainly stochastically willing to play along.
That's very much true of foreign policy. For domestic affairs, it's no longer true. The NYTimes has lost much of its prestige and influence, as has most of mainstream media, with the rise of the internet and the conversion of the NYTimes to commentary and news-as-therapy. Remember the first epidemic of "mansplaining"? That was not some machination of the political class; it was angsty young reporters fresh from Smith College using the Times as a personal blog. So you have to tease apart when something is a therapy article and when it still carries some ruling-class signaling weight.
> As the article notes, the last time four-day normalization was floated was the 70's, also a period with substantial labor unrest.
There is not substantial labor unrest in the country today.
This is again more of a personal blog view of the world, where people are promoting things like "striketober". Look at the BLS, which tracks work stoppages:
Something like 13,500 workers went on strike in October of this year, out of a labor force of 165 million - about 0.008 percent. Go back and compare that to the historical data.
So if only this many workers were on strike, why were there so many news articles predicting massive labor unrest and waves of strikes in October? Because when news turns to advocacy, the articles reveal what the author wishes would happen, not what is actually happening. I.e., news becomes therapy rather than a description of reality.
So this notion that there is "unrest" is pure cope for a working class in which unions have been completely marginalized.
Instead, what you have is a tight labor market, due primarily to 6 trillion dollars of deficit spending in the last 2 years combined with extremely low rates.
In a tight labor market, wages go up as employers outbid each other for workers. But because in this case there was no increase in productivity, inflation follows and removes the real wage gains after a delay.
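To make the arithmetic concrete, here's a toy calculation; the numbers are invented for illustration, not drawn from any data release:

```python
# Toy illustration: nominal wage gains in a tight labor market vs.
# inflation catching up after a delay. All numbers are made up.
nominal_wage_growth = 0.05   # employers outbid each other: +5% nominal
inflation = 0.06             # no productivity gain, so prices follow: +6%

# Real wage growth is exactly (1 + nominal) / (1 + inflation) - 1,
# roughly nominal minus inflation for small rates.
real_wage_growth = (1 + nominal_wage_growth) / (1 + inflation) - 1
print(f"real wage growth: {real_wage_growth:+.2%}")  # about -0.94%
```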
J. Paul Morrison's "Flow-Based Programming", while not really about documenting historical methods, got me thinking about programming in terms of the pre-CPU paradigms of data processing with unit record machines, since that is what this style draws on most strongly.
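A minimal sketch of that flow-based shape, using Python generators as stand-ins for components wired together by streams (the component names are mine, not Morrison's):

```python
# Flow-based style: small components that own no shared state and only
# communicate through streams, like unit record machines chained by card decks.

def reader(lines):
    """Source component: emits one record per input line."""
    for line in lines:
        yield line.strip()

def selector(records, keyword):
    """Filter component: passes along records containing the keyword."""
    for rec in records:
        if keyword in rec:
            yield rec

def tabulator(records):
    """Sink component: counts what arrives, like a tabulating machine."""
    return sum(1 for _ in records)

# The "program" is the wiring, kept separate from the components themselves.
deck = ["alpha card", "beta card", "alpha deck"]
print(tabulator(selector(reader(deck), "alpha")))   # -> 2
```

The point is that the components hold no global state; the program is the wiring between them, much like chaining unit record machines with card decks.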
Along the same lines, while not books, videos and documents about debugging electromechanical pinball machines and modular analog synthesizers provide great reference points for what digital systems could look like or strive to emulate, and physical engineering, woodworking, and construction decenter the computer as the whole of the process while still being very logical and systematic. At the top level, it's usually the search for the right structural metaphor that dogs practical programming and traps it in a loop of endless data collection and reprocessing, so it helps to have other things in mind and to try to discern the ultimate aims.
There's an arcade bar I frequent to play DDR. If I go later in the evening when it's busy, I will always get someone who is a bit awestruck and says "I've never seen anyone so good at that game", and I'm like "yeah, there are better people than me" or "the machine tells me exactly how bad I am". But it doesn't really matter to this audience that my technical skills are less than perfect, because what they saw was astonishing regardless. And for me, it's a personal journey: I played the game for a while when it was new, came back to it many years later after realizing that having it there filled in a "missing piece" of life, and found that actually being good at the game was not really important.
More recently, I've been doing some digital illustrations for fun and have gradually built up a process that is hugely digital in its design: composite reference images together, do some tracing over them to study the proportions and planes, then redraw as needed with a simplified design, using art fundamentals to guide me. Doing this has largely eliminated "guess and check" and gives results that are far more accurate and detailed than any freehand, from-imagination result. But that's just one measure of success in the imagery. I'm still a bit jealous of artists who have great control over their freehand lines, but this way I can finish work instead of sitting and dreaming. So it's a great step past the creative bottleneck.
With stories like Chris Sawyer's, there's a combination of obsession with the craft and coherence of purpose. That is, Chris spent a huge part of his life thinking about assembly code and its application to games, and then eventually put it toward a project with few internal contradictions, which became RCT. There are many people, myself included, who put years into a game and then realize they were kidding themselves: the design approach was incoherent in ways that ensured it would never feel finished or focused. And when that happens, the craft ceases to matter - the project is just a timesink.
Failing in that way, after putting a huge amount of time into a game, really made me despair for a bit. But right as that happened, the cryptocurrency portfolio I had assembled a few years prior achieved moonshot gains, which is like, "oh, well then, I guess I succeeded anyway?" That moment really clarified how arbitrary succeeding can be; the gap between effort and return across the two endeavors is enormous.
You're only 30, and that's actually fine. Between 30 and 40 often marks a shift in attitudes because you're getting out of the feeling of being a "young striver" trying to get ahead of the crowd in a highly visible space. You can fall into a depressed state if where you are isn't where you saw yourself, but it's also easier to give yourself leeway to pursue things nobody else cares about, which means it can be creatively fertile. It just rests a lot more on continuing to build yourself up beyond your personal issues - health, finances, character development, virtues and all of that. Maybe you don't have the fortitude to make a huge game project or research cutting edge techniques, but you can do more modest things and still find admiration as with my DDR sessions.
I think the analogy of cryptocurrency's position to the 90's internet is the wrong one to pick, because it describes an environment with some maturity and acceptance in its base technologies, primed for waves of consumer commerce to take hold. The proper example to pick is personal computing circa 1982. When you look at the envisioned uses of personal computing, the advertisements really reach: it helps your child learn. You can store recipes on it. Balance your checkbook. Write documents. And (whispers) play games.
As well, one can occasionally find reassurances in the literature that the computer is your friend, not the back-office monster that billed you incorrectly last month. Would you feel convinced? Many people saw no use for computers in their lives. And the reality of personal computing at that point was that it was mostly a nerd hobby, with a few niches where it could have an immediate, direct impact (white-collar information work). Professional graphic design, audio, and video were all still years away, far out of reach on the consumer platforms. It all cost too much - the computers, the accessories, the networking options.
But it's also a representative inflection point: microprocessors as a product category had only been available for a little over a decade, and it only took about a decade from there for the embrace of all things digital to kick into high gear with the onset of commodity PC clones, the Wintel monopoly, and then the Internet. Cryptocurrency is on a trajectory like that - we're nearing 12 years in and, like early personal computing, it's understood by few, often advertised deceptively, and seeing massive amounts of growth and capital investment. The applications are gradually appearing, and industry incumbents are hopping onto the bandwagon, but it's sneered at by the experienced code jockeys, just as the micros were by programmers working on much bigger and fancier hardware than those toys. Nobody is sure of the business model to use. Prices are all over the map.
> The proper example to pick is personal computing circa 1982. When you look at the envisioned uses of personal computing, the advertisements really reach: it helps your child learn. You can store recipes on it. Balance your checkbook. Write documents. And (whispers) play games.
Okay so what is the "play games" of crypto today? It seems like the only equivalent is "criminal activity." Unless you count speculation on crypto itself, but that isn't a "use." Computer users in 1982 were not buying IBM PCs, leaving them boxed in their closet, and then re-selling them for 10 or 100x the price in a year or two.
In many of the arts, a typical famous biography involves studying and developing a great amount of technical ability through one's 20's, and then branching off sometime in the 30's into a more streamlined and conceptual approach. The two ages favor different things: a twenty-something will "take to the grind" because they are hungry to do so, and this often sends them down the path to a totally incoherent result - but it is great if the problem is just a really hard puzzle with all the pieces and prerequisites in place (which is often the case in industry). Someone older may have less sheer energy, but places much more calculated bets and uses more abstract thought processes, which is good for problems with long timelines and large scale.
It helps to give some context to 90's game coding by looking at the predecessors. On the earliest, most RAM-starved systems, you couldn't afford memory-intensive algorithms, for the most part. The game state was therefore correspondingly simple, typically some global variables describing a fixed number of slots for player and NPC data, with the bulk of the interesting stuff actually being static (graphical assets and behaviors stored in a LUT) and often compressed (tilemap data would use either large meta-tile chunks or be composited from premade shapes, and was often streamed off ROM on cartridge systems). Using those approaches and coding tightly in assembly gets you to something like Mario 3: you can have lots of different visuals and behaviors, but not all of them at the same time, and not in a generalized-algorithm sense.
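A toy sketch of that fixed-slot, table-driven shape - in Python for readability, where the real thing would be assembly with the tables in ROM, and the slot layout is my own invention:

```python
# A fixed number of actor slots in "RAM": no dynamic allocation, ever.
MAX_ACTORS = 8
actor_type = [0] * MAX_ACTORS   # 0 = empty slot
actor_x    = [0] * MAX_ACTORS
actor_y    = [0] * MAX_ACTORS

# The interesting data lives in static lookup tables ("ROM"): here,
# per-type movement deltas standing in for full behavior scripts.
BEHAVIOR_LUT = {
    1: (1, 0),    # walker: drifts right
    2: (0, -1),   # floater: drifts up
}

def update_actors():
    """One frame: walk every slot and apply the table-driven behavior."""
    for i in range(MAX_ACTORS):
        t = actor_type[i]
        if t == 0:
            continue              # empty slot, skip
        dx, dy = BEHAVIOR_LUT[t]
        actor_x[i] += dx
        actor_y[i] += dy

# Spawning means claiming a free slot, not allocating memory.
actor_type[0], actor_x[0], actor_y[0] = 1, 10, 20
update_actors()
print(actor_x[0], actor_y[0])     # -> 11 20
```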
The thing that changed with the shift to 16- and 32-bit platforms was the opening up of more general approaches, with bigger simulations or more elaborate real-time rendering. Games on the computers available circa 1990, like Carrier Command, Midwinter, Powermonger, and Elite II: Frontier, were examples of where things could be taken by combining simple 3D rasterizers with more in-depth simulation.
But in each case there was an element of knowing that you could fall back on the old tricks: instead of actually simulating the thing, make more elements global, rely on some scripting and lookup tables, let the AI be dumb but cheat, and call a separate piece of rendering code to do your walls/floors/ceilings, baking that limit into the design instead of generalizing it. SimCity pulled off one of the greatest sleights of hand by making the map data describe a cellular automaton, which therefore behaves in complex ways without allocating anything to agent-based AI.
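A minimal sketch of that sleight of hand - a grid rule in the spirit of SimCity's tile simulation, though the rule itself is invented for illustration:

```python
# All the "behavior" lives in the map itself: each tile updates from its
# neighbors, so no agents ever need to be allocated or tracked.
W, H = 8, 8
value = [[0] * W for _ in range(H)]   # e.g. land value per tile
value[4][4] = 100                     # seed: one desirable tile

def step(grid):
    """One tick: every tile drifts toward the average of its neighborhood."""
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            nbrs = [grid[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= x + dx < W and 0 <= y + dy < H]
            out[y][x] = sum(nbrs) // len(nbrs)
    return out

for _ in range(3):
    value = step(value)   # desirability diffuses outward, agent-free
```

All the apparent complexity comes out of neighbor-to-neighbor updates on data the game already had to store anyway.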
So by the time you reach the mid-90's, the threshold had been crossed into an era where you could attempt to generalize more of these things without tanking your framerate or memory budget. This is the era where both real-time strategy and texture-mapped 3D arose. There was still tons of compromise in fidelity - most things were still 256-color, with assets using a subset of that palette - and plenty of gameplay compromise in terms of level size and complexity.
Can you be that efficient now? Yes and no. You can write something literally the same, but you give up lots of features in the process: it will not be "efficient Dwarf Fortress" but "braindead Dwarf Fortress". And you can write it for a modern environment, but the 64-bit memory model alone inflates your runtime sizes (both the executable binary and the allocated memory). You can render 3 million tiles more cheaply, but you have to give up on actually tracking all of them and do some kind of approximation instead. And so on.
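One sketch of the kind of approximation I mean - a standard sparse-grid trick, not any particular engine's: keep the millions of tiles as inert background and only simulate an active set.

```python
# 3 million tiles exist conceptually, but only "active" ones are simulated.
# Everything else is treated as static background until something pokes it.
active = {}   # (x, y) -> remaining work; sparse: most tiles never appear

def poke(x, y, work):
    """Mark a tile as active; only now does it cost memory and update time."""
    active[(x, y)] = work

def tick():
    """Simulate only the active set; the other ~3M tiles are skipped."""
    for pos in list(active):
        active[pos] -= 1          # stand-in for real per-tile simulation
        if active[pos] <= 0:
            del active[pos]       # settled tiles drop back to background

poke(10, 20, 3)
for _ in range(4):
    tick()
print(len(active))  # -> 0: nothing left to track or pay for
```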
The hypothesis of the "blockchain startup" rests on the idea that this tech can succumb to the standard platform monopolization plays and ultimately see a new "big tech" company emerge. A lot of the market action to date is driven by this: capital pouring money down a chute to create a dog and pony show of heavily hyped tokens. There's still smoke in the air from all the action, but the picture is getting clearer.
By building it as a company, you're taking on a fundamental disadvantage against anonymous competitors, because the anons act as a sovereign micronation and can make the rules for themselves, while you are just another striver within the state, forced to compromise and not step on toes. I wouldn't rule out tokenization as a company, but I sense it existing in a separate, collaborative niche apart from the anon projects.
The actual supply coming onto the market stays on its fixed schedule while the mining difficulty shifts with network hashrate (as measured by the time between new blocks); hence, assuming demand for Bitcoin is static, there is an equilibration mechanism creating a break-even price floor. Bitcoin's price appreciation has demonstrated that demand can be sustained through multiple market cycles, and nothing in particular has changed that assessment.
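A rough sketch of the two halves of that mechanism - the retarget rule follows Bitcoin's actual 2016-block scheme in outline, while the cost inputs are placeholders I made up:

```python
# Half 1: difficulty retargets every 2016 blocks so they keep arriving
# ~10 minutes apart, pinning new supply to the schedule regardless of hashrate.
TARGET_SECONDS = 2016 * 600   # expected duration of one retarget window

def retarget(difficulty, actual_seconds):
    # Bitcoin clamps each adjustment to a factor of 4 in either direction.
    ratio = max(0.25, min(4.0, TARGET_SECONDS / actual_seconds))
    return difficulty * ratio

# Half 2: a miner's break-even price, the claimed "floor".
def breakeven_price(cost_per_hash, hashes_per_block, reward_btc):
    return cost_per_hash * hashes_per_block / reward_btc

# Hashrate grew, blocks came 20% fast -> difficulty rises 25%:
print(retarget(1.0, TARGET_SECONDS * 0.8))
# Purely illustrative inputs, not real network figures:
print(breakeven_price(5e-19, 9e22, 6.25))   # -> 7200.0 dollars per BTC
```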
Bigger worries for Bitcoiners stem from centralization of hash power, and whether transaction fees alone will sustain the network as it exits the inflationary period.