The CIA 6526's shift register on the C64 is not used: the trace on the PCB connecting to it was accidentally cut by the board partner, and Tramiel was unwilling to take the time needed for the rework.
So the C64 uses the same very slow bit-banging method as the VIC-20.
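For anyone who hasn't seen it: "bit-banging" just means the CPU toggles the port lines itself, one bit at a time, instead of a hardware shift register doing it. A rough sketch of the idea (in JavaScript for readability, with invented pin helpers; the real IEC routines are 6502 assembly, with handshaking layered on top):

```javascript
// Generic illustration of bit-banging a byte out a serial line.
// writePin() and delayMicroseconds() are hypothetical host functions;
// this is NOT the actual C64 IEC protocol, just the general technique.
function bitBangByte(byte, writePin, delayMicroseconds) {
  for (let bit = 0; bit < 8; bit++) {
    writePin('DATA', (byte >> bit) & 1); // put one bit on the line, LSB first
    writePin('CLK', 1);                  // toggle the clock so the receiver samples it
    delayMicroseconds(20);
    writePin('CLK', 0);
    delayMicroseconds(20);
  }
}
```

The CPU is busy for the entire transfer, which is exactly why it's so slow compared to letting a shift register clock the bits out in hardware.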
It's crazy that people will watch hours of YouTube content every day: streaming HD video with almost zero buffering, auto-generated subtitles that can then be automatically translated into other languages, essentially unlimited storage so the videos they like are around forever (copyright claims and the like are a separate issue), free embedding of videos on your own site, livestreaming... all this, and you'll never see a single advert if you pay for YouTube Premium. It's fantastic, but most people are so entitled that they seem offended that YouTube would ask for money to provide this service. I know Premium does nothing about the insane amount of sponsorship slop every creator stuffs into their videos now, but maybe if people stopped using every possible tool they can find to block YouTube adverts and paid for Premium, YouTube would pay their creators more and they wouldn't need to constantly lie about drinking AG1 every day and the quality of Raycon headphones.
This feels like bringing to the web the annoying transitions people used to put between PowerPoint slides. There's nothing smooth about the transition in the linked demo; it's quite jarring.
Sorry, you lost me on the game analogy. I haven't played games for a while. Or maybe our understandings of the term DRM differ: I've never before heard it used in the context of software, more with books and music.
How does my solution differ from what Postico and Sublime Text are doing?
My plan is to offer a fully featured application for free, but with a slightly annoying pop-up that users can remove by paying a (reasonable) one-time fee. Is there a better way to go about this? I'm really curious.
It’s not modern, but there’s some value in programming close to the metal so you understand what’s really going on. The problem domain is simple and easy to keep in your head. Good learning exercise. I wonder if there’s an assembly-language version of something like this.
I2C is also a bus, just one that's less reliable and involves more custom work to use.
A "virtualized OBD-II" is really just a UDS server if I understand what you're trying to convey. UDS is a dumpster fire of a protocol that should be expunged from existence, but my personal feelings aside can be run anywhere you want. That exists. I'm not aware of many systems that directly connect the infotainment processors directly to critical CAN buses. Usually there's an intermediary component to isolate them.
I absolutely enjoy speed-compensated volume. It's nice to have about the same apparent volume inside the cabin as road noise increases, while not being very loud at slow speeds or when stopped.
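For the curious, a sketch of how such a feature tends to be implemented: a tuned speed-to-gain curve with interpolation between breakpoints. All the numbers below are invented; real head units tune them per vehicle, and some systems use a microphone or a noise estimate rather than speed alone:

```javascript
// Hypothetical tuning table: [speed km/h, gain offset dB].
const curve = [
  [0, 0],
  [40, 2],
  [80, 5],
  [130, 8],
];

// Map the current speed to a volume offset, interpolating linearly
// between breakpoints and clamping at both ends of the table.
function gainOffsetDb(speedKmh) {
  if (speedKmh <= curve[0][0]) return curve[0][1];
  for (let i = 1; i < curve.length; i++) {
    const [s1, g1] = curve[i];
    const [s0, g0] = curve[i - 1];
    if (speedKmh <= s1) {
      return g0 + ((speedKmh - s0) / (s1 - s0)) * (g1 - g0);
    }
  }
  return curve[curve.length - 1][1]; // clamp above the last breakpoint
}
```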
I think it is generally in the best interest of companies to overstate what they are able to do with AI. Investors aren't interested in hearing "we tried vibe coding but got tired of reviewing and fixing the trash code it produced". No, they want to hear "we've reduced our headcount by 40% while increasing customer satisfaction by 60%". Overstating AI adoption and success amplifies P/E ratios.
> Just 1) eliminate advertising, and 2) actually enforce the most basic of antitrust, namely using one product to subsidize another operating at a loss
Those are good ideas to consider, too.
But also very radical.
We are beyond the point where we need to be open to the "unthinkable".
With antitrust regulators (including, to a certain extent, the relatively feckless ones in the USA) breathing down Apple and Google's necks, they might not be able to get away with enforcing that rule for long.
Well, one nice thing about doing things this way is that it'll still be as viable in five years as it is today.
I'm sitting on two UIs at work that nobody can really do anything with. The cutting-edge, mainstream-acceptable frameworks they were built in are now deprecated and very difficult to even reconstruct with all the library churn; as a result they're effectively frozen, because we can't practically tweak them without someone dedicating a week just to put all the pieces back together enough to rebuild the system... and then that week of work largely has to be done again in a year if we have to tweak it again.
Meanwhile the little website I wrote with just vanilla HTML, CSS, & JS is chugging along, and we can and have pushed in the occasional tweak to it without it blowing up the world or requiring someone to spend a week reconstructing some weird specific environment.
I'm actually not against web frameworks in the general sense, but they do need to pull their weight, and I think a lot of people underestimate their long-term expense for a lot of sites. Doing a little bit of JS to run a couple of "fetch" commands is not that difficult. It is true that if you build a large enough site you will eventually reconstruct your own framework out of necessity, and it'll be inferior to React, but there are a lot of sites under that "large enough" threshold.
Perhaps the best way to think about it is that this is the "standard library" framework that ships in the browsers, and it's worth knowing so that you can judge when it is sufficient for your needs. Because if it is sufficient, it has a lot of advantages. If it isn't, then by all means go and get something else... again, I'm definitely not in the camp of "frameworks have no utility". But this should always be part of your analysis, because of the unique benefits no other framework has, like its 0KB initial overhead and generally unbeatable performance (since all the other frameworks are built on top of it).
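To be concrete about what "a little bit of JS to run a couple of fetch commands" means in practice, here's roughly all it takes (the endpoint and element ID are made up):

```javascript
// Fetch some JSON and render it, no framework involved.
async function loadItems() {
  const resp = await fetch('/api/items');
  if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
  const items = await resp.json();
  // Escape untrusted data before doing this in real code.
  document.querySelector('#items').innerHTML =
    items.map(item => `<li>${item.name}</li>`).join('');
}

loadItems().catch(err => console.error('failed to load items', err));
```

No build step, no dependencies, and it will still run unchanged in five years.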
That is a good example, but don't forget your DB will also be doing many other things besides managing jobs.
In the Rails world of using the DB for everything, you have your primary DB plus Solid Queue, Solid Cache, and Solid Cable.
If you lean into modern Rails features (Turbo Streams, which involve jobs; asynchronously destroying dependents; etc.), it doesn't seem that hard to rack up a lot of DB activity.
Combine this with the pretty spotty SSD I/O performance of most VPSes, and I'm not sure about hosting everything on one box on a $20-40/month server; I do feel very comfortable doing that with Redis backing queue + cache + cable while PG focuses on being the primary DB. I haven't seen any success stories yet of anyone using the DB for everything on an entry-level VPS. It's mainly DHH mentioning that it works great, with things split into different databases, running on massive dedicated hardware with the best SSDs you can buy.
I like the idea on paper, but I don't think I'll switch away from Redis until I see a bunch of social proof of people using all of these DB-backed tools on a VPS to serve a reasonable amount of traffic. We have 10+ years of proof that Redis works great in these cases.
This would make concrete, and bring coherence to, the grab bag of skills and experience I have. Though I think it would be worth 10x as much in a small-group setting. It is like trying to recover the source code of a binary where you don't even know the source language.
"Welcome to Delisted Games, a growing archive of 2,163 games you can't play" from game console libraries, Steam, etc. ("the town criers of video game disappearances... We research, catalog, and inform about today’s digital gaming landscape (hellscape) including store delistings, online service shutdowns and, simply put, cultural erasure").
(mentioned on HN previously in connection with the PS4 debacle [0]).
I mean, there's a legitimate discussion to be had, between Steam, Kickstarter, game publishers, console makers, and value resellers like GOG and HumbleBundle, about game lifecycles and about more transparent, ethical ways to publish (and unpublish) and monetize games throughout their lifecycle, for different constituencies of players:
- early adopters/alpha playtesters, who are happy with the tradeoff of a buggy and incomplete game they can play online with friends in return for a deep discount, early access, the ability to positively influence game development, and maybe some swag or convention events
- beta playtesters, who expect a reasonably complete and stable game, an active community, forums, responsive developers, regular (monthly/quarterly) bug rollups and fixes, etc.
- main-phase players, who are happy to pay full price for a complete, bug-free game with tutorials, user guides, multilanguage online help, forums, etc., and who expect to be able to quickly start an online multiplayer/remote game with friends, strangers, and AI
- owners, who still expect a basic post-lifecycle ability to play a game solo or against AI or other existing owners (after the publisher has disappeared or the game site has taken down infrastructure servers, achievements, forums, etc.), as happened with 'Pandemic' and Asmodee
- to what extent should publishers be able to use exclusivity to lock in owners and extract revenue? What happens to digital rights after the monetizing is over? Can a sale be retroactively converted to a license (time-limited, region-locked, with limited rights to play with friends and family...)? This is somewhere consumer law and regulators can limit bad behavior.
Civilization (Sid Meier/Firaxis Games) stands as a positive example of how to do this profitably and ethically.
"More Catholic than the pope," I believe that may also mean, referring not to loyalty but to intolerably unctuous and hypocritical sanctimony.
We do have that expression in this language, and "papist" is one of the old anti-Catholic (anti-Irish, anti-Italian, anti-Latin) slurs that actually survives, however deracinated, to the present.
One example of such a slur that did not survive is 'mackerel-snapper,' deriving from the pre-Vatican II meat fast observed on Fridays, which is also what first put a fish sandwich on McDonald's menu.
The CRT is not part of the operating system, unless you count the UCRT on Windows 10 onwards (yes, there is also an MSVCRT copy in the Windows folder, but Microsoft strongly discourages you from using it, so let's ignore that for now). So unless you link against the system-provided UCRT, you will have to ship either a dynamic or a static copy of the C runtime, and linking it statically will be smaller, because the result contains only the few string and time functions the program actually uses instead of the whole CRT.
> 4 people that started a company and was "acquired" by google less than a year later.

The idea that something like this should be blocked by competition regulators is, frankly, totally insane.
I wish. Unfortunately, most operating systems do not have the advanced file system malarkey that Plan 9 does (per-process namespaces, windows served as files), which is kinda necessary for a window system like rio to work well.
Dissemination takes work. Materials in the right languages are needed. Finding the minimum necessary detail helps, as do visuals. Delivery to new parents has to happen when they need the information, or else they won't be receptive or won't remember it. Then you need to get these materials into the birthing centers, to midwives and nurses, etc. An evaluation component also helps you see whether the approach can be improved. Having this done in a repeatable way is important; every day there are new parents.
I don't see the price tag for this, but a few million dollars isn't all that much given the complexity of the dissemination challenge. It's probably a program, but likely not an entire department. Curating knowledge and getting it to the right people's attention at the right time is hard work. Did you see the materials they produce and disseminate?
These are new features, and many of them are part of the library, not the language. Generally speaking, you enable the new features in your compiler; you don't need to disable them to compile old code. It's not a problem to work on legacy code and use new features for the new code, either.
I'm more curious about how much customer-service cost Klarna can cut by using AI, and how much marginal improvement to their customer service Klarna can achieve. Customer service should be an amazing application for AI: AI solves X% of the problems, and for the remaining (1 - X)%, customers will reliably tell the system it failed, which means the company can continuously improve with that feedback.
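A sketch of the loop being described, with every name hypothetical: answer automatically when the model is confident, escalate otherwise, and keep the escalations as labeled data for the next iteration:

```javascript
// handleTicket answers with AI when confidence is high, otherwise
// escalates to a human and logs the resolution as training data.
// model, humanQueue, and feedbackStore are hypothetical interfaces.
async function handleTicket(ticket, model, humanQueue, feedbackStore) {
  const { answer, confidence } = await model.answer(ticket.text);
  if (confidence >= 0.9) {
    return { reply: answer, handledBy: 'ai' };
  }
  const humanReply = await humanQueue.resolve(ticket);
  // The human resolution is the deterministic signal the comment
  // describes: a label for exactly the cases the AI couldn't handle.
  await feedbackStore.save({ question: ticket.text, label: humanReply });
  return { reply: humanReply, handledBy: 'human' };
}
```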