dansalvato's comments

Let's be real, Commodore has no one to blame but themselves for squandering their 5-year lead in hardware and OS. They were carried hard by the passion of their engineers, but irredeemably greedy and soulless at the top. At Microsoft and Apple, engineers were the lifeblood from the very beginning. At Commodore, they were a spreadsheet column.


It's all relative; Commodore sounds like it was nirvana compared to Atari!


Kind of wild to imagine an alternate universe where Commodore and Atari were still big names in computing. Would have loved to see the Amiga continue to grow, it seemed so ahead of its time.


Commodore probably got more pleasant after Jack Tramiel left to go run Atari. Unfortunately it got left in the hands of Irving Gould, who treated the company as his personal piggy bank and looted it until there was nothing left.


Imagine if Commodore had built ARM, or something like it. A 16-register RISC in the 80s was basically an instant win. They would have been in the CPU lead for a couple of years.

And they had the resources. I mean hell, Acorn was a tiny company and they pulled off building an absolutely incredible machine in the Archimedes.

It's just what you dare to do. If they had been as bold as the original Amiga team, with VLSI Technology 2µm CMOS in the late 80s they could have built an incredible machine. I think they held on too long trying to do their own semiconductors.

Acorn didn't have the bandwidth to really innovate on the graphics side of things, and because of their problems with the OS, they never managed to get enough software on the platform. And they just didn't have the market either.

Commodore, on the other hand, actually had a pretty high-quality OS with lots of software and users already.


I think some of this stuff isn't the responsibility of HTML. If HTML already has a full autocomplete spec, isn't it the fault of browsers/extensions/OS if the implementation is broken? Or are you saying the spec is too ambiguous?

A lot of stuff becomes redundant under the framing that HTML is designed to provide semantics, not a user interface. How is a toggle button different from a checkbox? How are tabs different from <details>, where you can give multiple <details> tags the same name to ensure only one can be expanded at a time?

Image manipulation is totally out of scope for HTML. <input type="file"> has an attribute to limit the available choices by MIME type. Should there be special attributes for the "image" MIME type to enforce a specific resolution/aspect ratio? Can we expect every user agent to help you resize/crop to the restrictions? Surely, some of them will simply forbid the user from selecting the file. So of course, devs would favor the better user experience of accepting any image, and then providing a crop tool after the fact.

Data grid does seem like a weak spot for HTML, because there are no attributes to tell the user agent if a <table> should be possible to sort, filter, paginate, etc. It's definitely feasible for a user agent to support those operations without having to modify the DOM. (And yes, I think those attributes are the job of HTML, because not every table makes sense to sort/filter, such as tables where the context of the data is dependent on it being displayed in order.)

Generalized rant below:

Yes, there are pain points based on the user interfaces people want to build. But if we remember that a) HTML is a semantic language, not a UI language; and b) not every user agent is a visual Web browser with point-and-click controls, then the solution to some of these headaches becomes a lot less obvious. HTML is not built for the common denominator of UI; it's built to make the Web possible to navigate with nothing but a screen reader, a next/previous button, and a select/confirm button. If the baseline spec for the Web deviates from that goal, then we no longer have a Web that's as free and open as we like to think it is.

That may be incredibly obvious to the many Web devs (who are much more qualified than me) reading this, but it's not something any end user understands, unless they're forced to understand it through their use of assistive technology. But how about aspiring Web devs? Do they learn these important principles when looking up React tutorials to build some application? Probably not—they're going to hate "dealing with" HTML because it's not streamlined for their specific purpose. I'm not saying the commenter I'm replying to is part of that group (again, they're probably way more experienced than me), but it reminded me that I want to make these points to those who aren't educated on the subject matter.


The interesting thing about testing values (like testing whether a number is even) is that at the assembly level, the CPU sets flags when the arithmetic happens, rather than needing a separate "compare" instruction.

gcc likes to use `and edi,1` (logical AND between 32-bit edi register and 1). Meanwhile, clang uses `test dil,1`, which is similar except that the result isn't stored back in the register; that doesn't matter in my test case (it could matter if you wanted to return an integer value based on the result of the test).
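For reference, the C being compiled here can be as simple as the sketch below (the function names are hypothetical, not my exact test case):

    /* Sketch of the evenness test under discussion. At -O2/-O3 the
       compilers reduce the check to a single AND/TEST of the low bit,
       then branch on the zero flag. */
    void do_even_thing(void);  /* hypothetical side effect to branch to */

    void check(unsigned int i)
    {
        if (i % 2 == 0)
            do_even_thing();
    }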

After the logical AND happens, the CPU's ZF (zero) flag is set if the result is zero, and cleared if the result is not zero. You'd then use `jne` (jump if not equal) or maybe `cmovne` (conditional move - move register if not equal). Note again that there is no explicit comparison instruction. If you don't use -O3, the compiler does produce an explicit `cmp` instruction, but it's redundant.

Now, the question is: Which is more efficient, gcc's `and edi,1` or clang's `test dil,1`? The `dil` register was added for x64; it's the same register as `edi` but only the lower 8 bits. I figured `dil` would be more efficient for this reason, because the `1` operand is implied to be 8 bits and not 32 bits. However, `and edi,1` encodes to 3 bytes while `test dil,1` encodes to 4 bytes. It turns out `and` has an encoding that takes a sign-extended 8-bit immediate even with a 32-bit register, while addressing `dil` at all costs an extra REX prefix byte.

There is one more option, which neither compiler used: `shr edi,1` will perform a right shift on EDI, which sets the CF (carry) flag if a 1 is shifted out. That instruction only encodes to 2 bytes, so size-wise it's the most efficient.

The right-shift option fascinates me, because I don't think there's really a C representation of "get the bit that was right-shifted out". Both gcc and clang compile `(i >> 1) << 1 == i` the same as `(i & 1) == 0` and `i % 2 == 0`.
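All three spellings side by side, for reference (a small sketch; the function names are mine):

    /* gcc and clang at -O2/-O3 compile each of these down to the same
       single test of the low bit. */
    int even_mod(unsigned int i)   { return i % 2 == 0; }
    int even_and(unsigned int i)   { return (i & 1) == 0; }
    int even_shift(unsigned int i) { return ((i >> 1) << 1) == i; }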

Which of the above is most efficient on CPU cycles? Who knows, there are too many layers of abstraction nowadays to have a definitive answer without benchmarking for a specific use case.

I code a lot of Motorola 68000 assembly. On m68k, shifting right by 1 and performing a logical AND both take 8 CPU cycles. But the right-shift is 2 bytes smaller, because it doesn't need an extra 16 bits for the operand. That makes a difference on Amiga, because (other than size) the DMA might be shared with other chips, so you're saving yourself a memory read that could stall the CPU while it's waiting its turn. Therefore, at least on m68k, shifting right is the fastest way to test if a value is even.


> That instruction only encodes to 2 bytes, so size-wise it's the most efficient.

In isolation it's the smallest, but it's no longer the smallest if you consider that the value, which in this example is the loop counter, needs to be preserved, meaning you'll need at least 2 bytes for another mov to make a copy. With test, the value doesn't get modified.


That is true, I deliberately set up an isolated scenario to do these fun theoretical tests. It actually took some effort to stop the compiler from being too smart, because it would want to transform the result into a return value, or even into a pointer offset, to avoid branching.
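One common way to do that (a sketch of the general trick, not necessarily the harness used here) is an empty inline-asm statement that claims to modify the value, so the optimizer has to treat it as unknown:

    /* The empty asm pretends to read and write x, forcing gcc/clang to
       keep the value in a register and preserve the actual branch.
       (Generic trick, not necessarily the original test setup.) */
    static unsigned int launder(unsigned int x)
    {
        __asm__ __volatile__("" : "+r"(x));
        return x;
    }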


> On m68k, shifting right by 1 and performing a logical AND both take 8 CPU cycles. But the right-shift is 2 bytes smaller

There's also BTST #0,xx but it wastefully needs an extra 16 bits to say which bit to test (even though the bit can only be from 0-31)

> That makes a difference on Amiga, because (other than size) the DMA might be shared with other chips, so you're saving yourself a memory read that could stall the CPU while it's waiting its turn.

That's a load-bearing "could". If the 68000 has to read/write chip RAM, it gets the even cycles while the custom chips get odd cycles, so it doesn't even notice (unless you're doing something that steals even cycles from the CPU, e.g. the blitter is active and you set BLTPRI, or you have 5+ bitplanes in lowres or 3+ bitplanes in highres)


> There's also BTST #0,xx but it wastefully needs an extra 16 bits to say which bit to test (even though the bit can only be from 0-31)

That reminds me, it's theoretically fastest to do `and d1,d0` e.g. in a loop if d1 is pre-loaded with the value (4 cycles and 1 read). `btst d1,d0` is 6 cycles and 1 read.

> the blitter is active and you set BLTPRI

I thought BLTPRI enabled meant the blitter takes every even DMA cycle it needs, and when disabled it gives the CPU 1 in every 4 even DMA cycles. But yes, I'm splitting hairs a bit when it comes to DMA performance because I code game/demo stuff targeting stock A500, meaning one of those cases (blitter running or 5+ bitplanes enabled) is very likely to be true.


> it's theoretically fastest to do `and d1,d0` e.g. in a loop

That's true, although I'd add that ASR/AND are destructive while BTST would be nondestructive, but we're pretty far down a chain of hypotheticals at this point (why would someone even need to test evenness in a loop, when they could unroll the loop to do 2/4/6/8 items at a time with even/odd behaviour baked in)
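To illustrate the unrolling idea in C (a sketch with hypothetical handlers, not code from the discussion):

    /* Instead of testing evenness every iteration, process two items per
       pass with the even/odd behaviour baked into the loop body. Assumes
       count is a multiple of 2 for brevity. */
    void handle_even(int item);  /* hypothetical */
    void handle_odd(int item);   /* hypothetical */

    void process_pairs(const int *items, int count)
    {
        for (int i = 0; i < count; i += 2) {
            handle_even(items[i]);
            handle_odd(items[i + 1]);
        }
    }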

> I thought BLTPRI enabled meant the blitter takes every even DMA cycle it needs, and when disabled it gives the CPU 1 in every 4 even DMA cycles

Yes, that is true: https://amigadev.elowar.com/read/ADCD_2.1/Hardware_Manual_gu... "If given the chance, the blitter would steal every available Chip memory cycle [...] If DMAF_BLITHOG is a 1, the blitter will keep the bus for every available Chip memory cycle [...] If DMAF_BLITHOG is a 0, the DMA manager will monitor the 68000 cycle requests. If the 68000 is unsatisfied for three consecutive memory cycles, the blitter will release the bus for one cycle."

> one of those cases is very likely to be true

It blew my mind when I realised this is probably why Workbench is 4 colours by default. If it were 8, an unexpanded Amiga would seem a lot slower to application/productivity users.


I'm thrilled to see someone bring up Magicore! I don't do much publicity because it's been a quiet 2 years working on the game engine, without much flashy content to show for it. But we're ramping up to begin production of the final game assets, so I anticipate having a lot more to share this coming year.

Here is a small demo I threw together for AmiWest 2024, from last October: https://youtu.be/xIYrhKHEPEA

I also have a personal blog which is largely a development blog for Magicore (https://dansalva.to/blog). My next post will be about a recent feature where I use Amiga's hardware acceleration to draw rays of light that can be obstructed by passing objects. Proof of concept video here: https://youtu.be/rFWFTuWx82M


> For all its flaws, it's the only service that reflects society's pulse in real-time.

Is there such a thing as accurately "reflecting society's pulse" within individual posts? Sure, maybe you can pull big data to understand public sentiment on certain topics or events. But the platform's goal is to present users with content they are most likely to engage with, so that they spend as much time on the platform as possible (and therefore view as many ads as possible). That content does not reflect society—it's often targeted ragebait. When users are goaded en masse by ragebait and engagement farming, some percentage are going to develop a distorted view of public sentiment, and their own engagement will further the issue.

The fact is that content discovery algorithms and moderation largely shape the tone and personality of a platform, both for a viewer and for a contributor (who is psychologically inclined to want to fit in).

Twitter isn't going to collapse and die overnight, but this could be the start of a 5-year trend where the "utility" of Twitter over other platforms gradually shrinks as reputable brands and public figures diversify their presence.


Others in the thread have already made good points about Yuzu, but to add to the discussion: A lot of people think Nintendo is indiscriminate with their takedowns, but they typically only go after things that:

* They think will significantly hurt their brand (or already has)

* They think will significantly hurt their bottom line (or already has)

An interesting case study: in 2006, before the launch of the Wii, Nintendo demanded the removal of certain NES ROMs from popular ROM-sharing websites. Rather than removing all Nintendo ROMs from those sites, Nintendo specifically provided them with a list of the NES games that were slated to launch on Wii Virtual Console. I'm struggling to find a source for this, but I distinctly remember it happening because there were some odd inclusions like Wario's Woods, while Super Mario Bros. 3 remained untouched. If anyone is good at searching old news articles, I would really love to have a tangible source for this memory of mine.

On the other hand, the worst takedown I've ever seen Nintendo make was when they issued a C&D against a brilliant Commodore 64 port of Super Mario Bros.: https://www.eurogamer.net/nintendo-takes-down-mario-bros-c64...

Pretty sure that one happened because the release effectively went viral, with a lot of mainstream tech/gaming websites covering it. Still, as a retrocomputing enthusiast, it's hard for me to be an apologist over that one.


I wasn't really on MySpace either, but I think exactly what you're complaining about is what drew such a huge demographic. When I think MySpace, I think teenagers who are still discovering their identity—not seasoned creators with a catalog of work to show off.

The masses were given a means to make a page that encapsulated their identity and connect it with others, during a time when it was suddenly made possible for everyone to express themselves, but still difficult to produce meaningful online content. I think Tumblr eventually ended up capturing a lot of that, but I also feel that there is a sense of pressure around having a space where the purpose is to publish content (even if just reblogging). It was really meaningful to a lot of people that they could have a simple space to express themselves through custom mouse cursors, cringey quotes, and autoplaying emo music.

Nowadays, this expression of identity for younger audiences seems to be driven by being a part of online communities with common interests, expressing oneself through content (now that it's so easy to make and share). But I think MySpace was there for people at the right time.


I've spent most of my life as a hardcore tech enthusiast, always excited for the latest and greatest. But in this era of homogeneous Electron apps, and now LLMs, I can't find much to be enthusiastic about anymore.

Most of the novelty has evaporated since every device nowadays can do everything imaginable (and if not locally, then via cloud). With old computers, it's just cool to see different kinds of software and games running on them. It makes you want to explore the possibilities. People used to be enthusiastic about the cool stuff their computer could do.

For a while now, I've wanted to set up a retro computer as a "daily driver" of sorts. It feels like a lot of our everyday uses for technology have not changed much over the decades—communication, news, entertainment, writing, organization, etc. If I lean back and ask myself what I actually use a computer for (other than specific stuff for work), I find it kind of hard to answer the question, which likely means I'm wasting a lot of time doing things that aren't deliberate or meaningful.

I love retro computers (especially Amiga), and doing stuff on them will always fill me with enthusiasm.


I feel the same way. I'm pretty entrenched in the Apple ecosystem, and even switching to new devices doesn't tickle my fancy anymore. When you buy a new iPhone, log in with your Apple account, and let it restore an iCloud backup, your device will look and feel exactly the same. Talk about homogeneity. Sure, it's amazing tech, and in a reliability and productivity sense it's the only way to go, but still, it feels... boring.

Every once in a while I'll switch everything over to FOSS (laptop, phone) and dead simple devices (Casio watch, paper calendar). It's much more fun, maybe because of the jank Linux is still plagued by sometimes rather than in spite of it.


> For a while now, I've wanted to set up a retro computer as a "daily driver" of sorts.

You can’t do that anymore. Most everything everybody does on a “computer” nowadays is all through a Google browser using effectively proprietary protocols. Yes, you could edit your own documents and organize your own information, but the second you want to involve someone else, you’re stuck. You can’t accept an invite from somebody else to a Google document, nor can you share your spreadsheet with somebody else. You can’t even participate in online Teams or Zoom meetings. People you talk to and want to collaborate with don’t know what files are.


GP can always shun whatever technology it is other people want them to use. I think it's always an option to shun, and people could do this more often but don't realize they can.


Like you say, computers nowadays can do basically anything. It is then a funny feeling to take an old computer, one that was once abandoned over all the minor frustrations that surrounded it, and revisit it today, only to be filled with wonder and parent-like pride in what the cute little thing is still able to do. Even trivial things, like playing mp3s! Despite being older in time, this antique relic of the past has its place in a younger part of my mind, and so feels more childish and immature. And yet, look at it go!


The Steam releases of the original Myst and Riven are the ScummVM ports, which work great on modern hardware. You'll find it as "Myst: Masterpiece Edition". Masterpiece Edition is effectively the same as the original release, but with better color fidelity on the visuals. I think the one downside of Masterpiece Edition is that some of the audio tracks were cut short to fit on the CD. Still, I'd say the Steam release is definitely the most hassle-free way of playing the game today.


I'm working on a game for Amiga (another 68k-based platform) and settled on ZX0 to decompress assets on the fly: https://github.com/einar-saukas/ZX0

I was originally using LZ4, but I switched to ZX0 after learning that it can do in-place decompression, which means I don't have to allocate separate memory for the compressed data. I'm very happy with the compression ratio, and decompression of large assets (~48kb) only takes a few frames on a 7MHz 68000.

Also of note is LZ4W, included in Sega Genesis Dev Kit (and discussed in the comments section of OP's article), a variant of LZ4 that only uses word-aligned operations. That makes it much faster on the 68000, which can struggle to efficiently handle 8-bit data. More info here: https://github.com/Stephane-D/SGDK/blob/master/bin/lz4w.txt


I wrote a naive lz77 packer/unpacker in C for a c64 game. https://github.com/geon/woorm/blob/master/tools/lz77.c

Not fast, but the compression ratio was decent, and made it easy to fit a bunch of levels into the game.
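For anyone curious, the core of an LZ77-style decoder fits in a handful of lines. This is a generic sketch with a made-up token format, not the code in the linked repo:

    #include <stddef.h>
    #include <stdint.h>

    /* Each token is a tag byte: 0 means "copy the next literal byte",
       anything else is a length, followed by a 16-bit little-endian
       distance back into the already-decoded output. */
    size_t lz77_decode(const uint8_t *src, size_t src_len, uint8_t *dst)
    {
        size_t si = 0, di = 0;
        while (si < src_len) {
            uint8_t tag = src[si++];
            if (tag == 0) {
                dst[di++] = src[si++];               /* literal */
            } else {
                uint16_t dist = src[si] | (uint16_t)(src[si + 1] << 8);
                si += 2;
                for (size_t len = tag; len > 0; len--) {
                    dst[di] = dst[di - dist];        /* overlapping copy is fine */
                    di++;
                }
            }
        }
        return di;  /* decompressed size */
    }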


Nice! When you say “on the fly” are you literally decompressing assets from disk during gameplay? Can you do asynchronous IO on the Amiga?


As of now, I'm loading all the level assets in advance, but many of the assets in RAM stay compressed until they're needed, such as backgrounds and fullscreen graphics. I decompress those asynchronously into video memory when I need them. The second video in this blog post shows the decompression happening onscreen in realtime: https://dansalva.to/coding-the-anime-woosh-screen-on-amiga/


Thank you, great blog post! I do a lot of MS-DOS/Pentium and earlier retro programming so I love this stuff. The Amiga is a platform that I missed out on in my youth but I'm keen to have a crack at it some day. Most recently I made this demo with my group https://www.pouet.net/prod.php?which=95524 which won the 2024 Meteoriks award for best midschool production!


It's actually harder to do synchronous floppy I/O on the Amiga. Data transfer is done over DMA, then you perform MFM decoding. The latter can be done by the CPU, or, asynchronously again, by the blitter, in which case you can't use it at the same time for graphic operations.


Depending on what you're going for in terms of gameplay experience, that can be a reasonable trade-off. Say, an RPG where you page assets in and out as needed without breaking the flow; it's probably going to be a bigger issue for gameplay styles that need more compute to pull off.

