Developers Try Again to Upstream Motorola 68000 Series Support in LLVM (phoronix.com)
128 points by emptybits on Sept 30, 2020 | hide | past | favorite | 122 comments



If you're interested in the current status of the backend, please have a look at this talk on YouTube:

> https://www.youtube.com/watch?v=-6zXXPRl-QI

Slides can be found here: http://m68k.info/assets/LLVM-Backend-for-M68k-Overview-and-S...

And a Bountysource campaign to support the project here:

> https://www.bountysource.com/issues/90829856-llvm-complete-t...

Disclaimer: I'm one of the people behind this project but not the main developer. So funds are going to the developer, not me.


It would be cool if 68k support landed in LLVM. I've been wanting to do a 68k-based retrocomputer project for a while. If I ever get started on such a project, being able to use a modern compiler would be nice.


I recently built a simple 8085 computer on a breadboard, and finding a working, open-source C compiler was a huge challenge. I finally found this one: https://github.com/ncb85/SmallC-85 but it only supports ancient K&R C syntax, without goodies like the subscript operator or character literals. Still, I prefer ancient C to ancient assembly language.

On the other hand, there is the Small Device C Compiler (SDCC) project, which provides a decent compiler for the Z80 CPU. There is also cc65 for 6502 CPUs.

The strangest thing is that Free Pascal seems to support 68k out of the box (https://www.freepascal.org/) and also AVR and a bunch of other archs.


Free Pascal has shockingly broad platform support. :) One of the things I've always found cool about it.


> but it supports ancient K&R C syntax, without such goodies like subscript operator or character literals. Still I prefer ancient C than ancient assembly language

A thought occurs to me: would it be possible to use that compiler to bootstrap your way to a modern C dialect? At one point, there were no C89 compilers. So, one would have to write a C89 compiler in K&R. Or am I failing to appreciate just how much work it would be?


You need to write a bridge compiler in C89 that can compile newer variants of C, and then feed another compiler's source code into it. These are implicitly written whenever new C features are added to compilers (can't depend on features that don't exist yet) but will be destroyed fairly quickly (why not dogfood our own features?). Furthermore, the compilers most people are going to want are either GCC or LLVM/clang, both of which have non-standard extensions which you'll likely also need to support.

However, a much more efficient approach would be to add support for older architectures to a modern compiler. Then you can build binaries via cross-compilation, or cross-compile the compiler itself to get a modern self-compiler on an old system.


> A thought occurs to me: would it be possible to use that compiler to bootstrap your way to a modern C dialect?

You don't need to. SmallC is a cross-compiler, not a self-hosting compiler; it runs on a modern computer, not an 8085. If you wanted to modify SmallC to accept modern C syntax, you can do that without "bootstrapping" anything.


You could probably write some macros or some other kind of text processing job to convert modern C constructs into K&R and feed it into SmallC.


These folks are working on bootstrapping from a small amount of machine code for multiple arches all the way to the Linux kernel.

https://bootstrappable.org/


I vaguely recall trying to write fixed point math functions using sdcc for a gameboy homebrew project a long time ago. I’d compile and run via an emulator. Good times!


So, use gcc, which is what the BSD ports supporting m68k use. See also

https://www.phoronix.com/scan.php?page=news_item&px=GCC-11-m...


I know about GCC. In fact, I'm the one who created the Bountysource campaign to convert the m68k backend in GCC to MODE_CC.


There are languages that LLVM supports that GCC doesn't.


Fair enough.


One annoying thing is that nobody makes new 68k CPUs anymore. The only real source appears to be scavenged chips from China.

If you want to limit your retrocomputer design to only parts that are still manufactured, like I do, the only CPU choices appear to be the Z80, 65C02, and 65C816.


In 2014, Rochester Electronics re-established manufacturing capability for the 68020 microprocessor, and it is still available today.

(https://en.wikipedia.org/wiki/Motorola_68020)


Reminds me that I used the 68332, a 68k-based MCU, 25 years ago. I just checked; you can still buy them from Digi-Key. Some packages are listed as not for new designs, but a couple are listed as active.


Which is a shame. Just from high-level descriptions of the 68k architecture, compared with my own (albeit very limited) experience with x86 assembly, the 68k seems a much more elegant and easier-to-program architecture.


Someone else in this thread has pointed out you can still buy the 68SEC000 from digikey.

But it's not recommended for new designs.



You are correct. I also seem to recall recently reading news that someone had started fabbing 68020s again, but can't find the source at the moment.


Wikipedia says Rochester Electronics, and I did find it here on their site: https://www.rocelec.com/part/REIMC68020EH33E


At $100 for an '020 and $300 for an '040, I don't see many hobbyists rushing for those. You can get a pretty damn nice FPGA for that sort of money, even if I understand it's not quite the same.

https://www.rocelec.com/part/REIMC68040RC25A


You can run a 68k emulator on a 64-bit x86 if you wish to program as a hobby. Or is this about professional 68k programming, or perhaps nostalgia?


There is the Vampire 68080 which features new instructions.

See: http://apollo-core.com/index.htm?page=products


Looks like NXP is still making M68k CPUs, but I don't know about dev boards.

https://www.nxp.com/products/processors-and-microcontrollers...


You know that GCC has support for 68k and is a modern compiler, no?


Yes, but GCC cannot be used as a backend for other languages like Rust. My primary goal is to get Rust support for m68k.


You probably don't want to hear this but you could go the LLVM->Wasm->Wasm2C->GCC route.

The Julia folks resurrected the C backend to LLVM at least once; that is also a possibility.


This is probably an unpopular opinion, but I hate having hobby platform support in modern projects.

I work on an OS kernel (non-Linux), and in my view, we should support x86_64, arm64, and maybe ppc64le. But instead we have all sorts of 32-bit legacy hobbyist platforms with maybe 3 machines in the world running on them. These platforms make it harder to test changes, simply because regression tests take longer to compile and run, you have to build/install more cross tools, etc. They make it harder to develop because all of a sudden you realize that some standard interface is not implemented on them, and you have regression test failures that you need to work around.

I'm fine with hobby platforms if they play in their own sandbox, but not when they impede development of current systems.


I seem to recall the OpenBSD folks deliberately keeping around uncommon and old architectures in part because compiling and running on those old architectures flushes out bugs. So, from that perspective, your regression tests failing on legacy hobbyist platforms is a good thing.

Or maybe I was misunderstanding the point of Theo's post back in 2014 around the discussion of their power bill[1].

1. https://marc.info/?l=openbsd-tech&m=138973312304511&w=2


You can view code that assumes a little-endian system or eight-bit bytes as a bug, or you can see it simply as making reasonable assumptions. In the end, programming is always about making assumptions and reducing scope to prevent an explosion of complexity.


I have a lot of sympathy for this view, especially since 'hobby' platforms tend also to be those whose maintainers are doing it on the side and who are thus less likely to be keeping up with internal API and similar cross-codebase improvements. But it gets complicated when the project is a foundational one like LLVM, where dropping or not accepting support for an architecture is effectively also locking the architecture out from a wide swathe of other projects (for instance, Rust, and via Rust also Firefox). I think the best way to manage this seems to be to have an official 'tier list' with criteria and consequences for being in each tier (so for instance lower-tier ports might be in a "nobody else is expected to build/test this, failures are not a blocker for commits" grouping). That doesn't solve the problems, but it does at least mean everybody knows where they stand. (IIRC LLVM does have a tier list system but I haven't looked up the details.)


> ...I'm fine with hobby platforms if they play in their own sandbox, but not when they impede development of current systems.

LLVM has an "experimental platform" flag that obviates this problem. IIRC, the Linux kernel also has something similar wrt. highly experimental config options.


LLVM’s architecture also makes supporting these platforms much less intrusive on the codebase than in an OS (or even other compilers), especially if they’re only for hobbyist use and aren’t doing many architecture specific optimizations.


> x86_64, arm64, and maybe ppc64le

All of them are commercial products from companies in the US and the UK.

The problem with limited platform support is the lack of competition which helps to lower prices and provides more choice to the customer.

Especially considering the many vulnerabilities Intel CPUs have.

I'm aware that hobbyists platforms are something different, but you excluded many other commercial targets like MIPS, RISC-V, S390 as well.


Practically no one is doing vulnerability research on hobby platforms.


Well, the previous comment also suggested to exclude RISC-V, S390x and so on which are certainly not hobbyist targets.


OP isn't talking about Linux, and no one wants to run FreeBSD on their mainframe. In that sense, it would be a hobbyist platform for FreeBSD.


I would imagine there are 100 times more security researchers and graduate students looking at Intel than those two.


Perhaps that makes them more secure, indirectly.


If Linus had done that we would not have Linux.


DEC Alpha was the first non-x86 platform supported in Linux (and in FreeBSD, where I did a lot of the work). Linux was doing just fine in 1994 when DEC gave Linus an alpha at the Boston USENIX.

The key fact is that, in the mid 90s, DEC Alpha was the fastest thing around, and poised to take over the world. It was expected to be the future (until it was killed by corporate incompetence and itanic). That's why I worked on FreeBSD support, and why Linus worked on Linux support. Alpha was also valuable because it was one of the first 64-bit platforms, and found a lot of pointer/int/long bugs on both systems.

There is a big difference between supporting what you expect to be the future of computing to what you know to be the long dead past.


Some architectures are useful even if one can already tell they are probably not the future. For FreeBSD, sparc64 was important, because you could get multiprocessor sparc64 machines easier (cheaper) than x86 with the same CPU count, even if they were slower; this made it possible to do work on scalability sooner. Now the same is happening with POWER9/10.


Yeah, I was involved in a large Ada project running on 4-socket Alpha servers back in the early 90s. It was a great platform (OSF/1, anyone?) and was very fast and powerful for its day.

As for bugs, I seem to remember that it had a rather different cache coherency model to Intel, and as you said, was strict about memory alignment.


The sandbox in this instance would be Linux creating a Unix-compatible operating system, not Linux supporting multiple platforms.


I'd submit that Linux being more choosy about what architectures it supported was a big factor in why it overtook, and ultimately surpassed, the incumbent open source Unixes.


Huh? Linus targeted the x86 architecture.


Which was not a common target for UNICES.


But it was a common hardware platform of the day.


Correct, as did other Unix operating systems. So if Linux had stayed in his "sandbox", Linux would have been something much different, if it even existed at all.


So what? Then the BSDs would have been used instead. Or something else.


I think you'd need to add at least a few more architectures to the list. I for one actively intend to use AVR and Xtensa (for Arduino and ESP32 support). And RISC-V probably ought to be in there as an architecture of the future.


Isn't AVR a dead-end architecture? In addition to being an 8-bit or 16-bit platform without an IOMMU? Why should a general-purpose OS target AVR?


It shouldn't, this is the kind of stuff where bare metal is the best option.


For what it's worth, AVR got accepted into LLVM and there is now Rust for AVR.


Many people who contribute code to those legacy platforms also end up contributing to the modern ones. NetBSD is a very good example.


OP is talking about FreeBSD, and largely... people don't contribute to the ones he didn't mention. They just languish.


You probably have at least one MIPS device in your house (router or modem). Based on the three architectures you named it sounds like your project isn’t really aimed towards embedded systems. But those legacy architectures are dirt cheap, relatively energy efficient, and good enough for most purposes so they aren’t going away any time soon.


OP mentioned arm64, which can be used in these kinds of embedded contexts (with tens of MB of RAM, an IOMMU, and mostly traditional off-the-shelf operating system).


-DLLVM_TARGETS_TO_BUILD="X86;AArch64;PowerPC"


What do you say to the LLVM maintainers when your branch-to-be-pulled breaks SystemZ or SPARCV8 regression tests?

That's the OP's point. Merging LLVM changes is gated by CI tests for obscure and moribund platforms.


I'm straining to understand how a branch-to-be-pulled for your backend will break SystemZ unless you've been modifying the shared middle end.

If you have then you tell them you're going to reduce your test case and also that you're going to contact the SystemZ and SPARCV8 maintainers. Maybe your branch won't make it into the tree. Maybe it's your fault or maybe it's theirs; if it's theirs they'll WANT to know about it. Maybe it won't make it into the tree for this release but it might for the next after the bug is fixed.

Development is done in parallel and the middle end always gets improved by being stretched and abused by multiple backends. There are out of tree backends which beat the hell out of GlobalISel (Apple GPU). I'm GLAD that they've done the hard work of beating GlobalISel into shape because it makes it a more stable platform for me. There are in tree backends which are models for cloning (MIPS) even though the vendor is basically an IP holding company.

LLVM stands in contrast to GCC. LLVM was architected as a collection of modular and reusable compiler and toolchain technologies. And then when people actually do that it's a good thing.

Right now LLVM 11 is in RC4, about a month late. There is surprisingly little if any sturm und drang on the developers' list about this slip. Work on LLVM 12 has commenced, in parallel.


> I'm straining to understand

O_o

> how a branch-to-be-pulled for your backend will break SystemZ unless you've been modifying the shared middle end.

The OP:

> They make it harder to develop because all of a sudden you realize that some standard interface is not implemented on them, and you have regression test failures that you need to work around.

I don't think the OP said anywhere that he was exclusively developing an arch backend.

> If you have then you tell them you're going to reduce your test case and also that you're going to contact the SystemZ and SPARCV8 maintainers. Maybe your branch won't make it into the tree. Maybe it's your fault or maybe it's theirs; if it's theirs they'll WANT to know about it. Maybe it won't make it into the tree for this release but it might for the next after the bug is fixed.

Yes - and this all goes back to what the OP complained about - that these are moribund platforms and that keeping them running is an undue burden for devs working on living archs and extending the Clang/LLVM spine.

> I'm GLAD that they've done the hard work of beating GlobalISel into shape because it makes it a more stable platform for me

...which is why the OP mentioned that it might be an unpopular opinion.

The OP is doing work for you, extending Clang/LLVM while keeping it in working order for platforms that shouldn't be supported based on deployments, "beating [it] into shape" for you. Snarky replies like "use the right compiler options, duh" seem mean-spirited.


I think you deeply underestimate embedded use. There are a huge number of devices running on relatively obscure microarchitectures.


68k is basically dead in embedded. Most of the parts have been out of production for many years, and the ones that remain are only used for legacy applications -- they're 5V parts, making them incompatible with most newer components, and they're completely outclassed by ARM microcontrollers.


I was actually looking into this last week. All original 68k CPUs appeared to be well beyond end of life, and the only source appeared to be recycled chips from China.

But there are modern NXP ColdFire parts. The CPU core is mostly binary compatible with the 68k, just with some legacy instructions removed.

ColdFire SoCs clock up to about 400 MHz and have support for modern peripherals like Ethernet and USB.


For a real-world example of these CPUs being used in actual products today: I was doing some maintenance on a current-model GCC LaserPro Spirit laser engraver/cutter a while back and was a little surprised (and delighted, because it's just fun to see something that's not x86 or ARM in the wild once in a while) to find a ColdFire CPU on the motherboard. These are very popular machines used in production in all kinds of businesses.


You can still get new 68SEC000s from NXP, but the part is NRND (not recommended for new designs), so you'd better order soon if you're interested.

Ironically, a HN comment I made about this a few years ago is now one of the top Google results for "68SEC000", ranking higher than NXP's own product page.


It's a shame getting 68010s is not as easy.

The 68010 is IMO a bugfixed 68k, and is what should have been used, from the moment it became available, in any design that would otherwise target the 68k.


Most of the bug fixes in the 68010 seem to be to things like virtual memory and virtualisation, which are only really important for things like PCs and workstations, and they moved on to the fully 32-bit 68020 and beyond when available.

For things like embedded systems that continued to use the 68k, they don’t need the bug fixes, and the speed boost seems minor. I guess either the market didn’t see the need to change, or Motorola didn’t price it competitively enough to be a viable choice over the basic 68k.


> and they moved on to the fully 32-bit 68020 and beyond when available.

Sure, but that comes with 32-bit bus baggage.

The 68010 uses the same packages (DIP included) as the 68k, and truly is a drop-in replacement with fewer bugs and slightly higher speed.


I actually managed to find a seller on eBay from China who was selling real ones for like $1.50 a piece. They're used, of course, but it's not a rebadged 4 MHz 68000 that throws an exception when you try to change the VBR register.


I got a few from eBay 1-2 years ago. They're not rebadged (at least the half I actually use).


Ended up with rebadged ones at the beginning of the year. I didn't even know about the fakes situation at the time I purchased them. Was real careful the second time around.

My side project is to make an MMU for it with an FPGA. The board doing the logic level shifting was my first custom PCB design!


Also messing around with FPGAs, but it looks like you are further along.

If you don't mind saying, what level shifter chips are you using? I suspect 74LVC245 is far from ideal... and that I'll need to get some PCBs made.


I'm using the 74LVCH245. It's a 74LVC with bus-hold logic, so the inputs don't float if the 68010 pins go tristate. The largest headache I had in choosing was that the 68010 is an NMOS chip operating with TTL logic levels. At 3.3V, the LVCH245's inputs are TTL-compatible: below 0.8V is a logical 0, above 2V is considered a logical 1 (same as TTL).

For the pins on the 68010 which are wire-OR (RESET#, HALT#, etc.), I have a 4.7K pullup on them, the input side of a 74LVCH245, and the output side of a 74LVC06 (inverted) or 74LVC07 (buffer), which have open drain outputs. The input side of those go to the FPGA. That way it can drive those pins or see when they're being driven by another device.

The typical propagation delay is 3 ns at room temperature, with a worst case of 7 ns. However, the whole system is operating at 12.5 MHz (80 ns clock period).

If it makes you feel any better, I have a few PC/104 25/33 MHz 386EX SBCs from 2001 that use 74LVC245s as bus redrivers. If they're good for a 33 MHz (66 MHz FCLK) 386, I can't imagine any issues occurring with a 12.5 MHz 68k (https://docs.embeddedarm.com/TS-3100).

A preliminary test of this whole thing was attaching an Arduino Mega (16 MHz, 5V) via the external bus interface connected with 3x 74LVC245A (DIP) (one for AD0-7, one for A8-15, one for control signals) to a Mojo v3 (Spartan 6 LX9). I basically made a bunch of ARM style peripherals for it (32 bit counters, etc.) that it could interface to. Didn't have any troubles in the ~12 hours I left it running.

Arduino: https://imgur.com/5uEVJyN 68010 Prototype: https://imgur.com/ydJ5OCU

The breadboard things are using the DIP version of the 74LVC245A, the PCB version has the 74LVCH245, which is only available in surface mount.


Much appreciated! There was too much in there I was unaware of.


Oh right... I think I ran into your comment while researching.

Mouser didn't appear to have any, so I decided it must have gone out of stock in the last few years.

Though now I see digikey still have them.

Are they actually still sold by NXP, or do digikey just have one last shipment sitting in a warehouse?


NXP doesn't sell direct to consumers. Their site doesn't list the part as discontinued, though, so I'd assume that NXP still have some stock of the part. (How old that stock is, on the other hand, is a good question. I wouldn't be surprised if it's many years old.)


Coldfire is not dead, though it's going that way. There's plenty of network hardware etc. based around it. I didn't make it through the whole 68k/LLVM presentation when it came out earlier this week but I would imagine the same compiler backend will be used for the Coldfire ISA.


3.3V 68Ks could be made available.

They do exist in the form of M683XX microcontrollers.


As a curious ignorant - what's used now instead of 5V? Why was there a transition away from 5V? I'm assuming this means 5 volts and not something more arcane.


Power usage is proportional to the square of the voltage. So, you want to keep the voltage low for battery life or to prevent your CPU from overheating.

Lower voltages also make transistors slower, though. That can be offset by making them smaller, but we couldn’t do that decades ago (https://electronics.stackexchange.com/questions/109533/why-a..., https://en.wikipedia.org/wiki/CPU_core_voltage#Power_saving_...)

I guess there also is some effect along the lines “higher voltages decrease sensitivity to noise” that prevented early designs from using lower voltages and lower currents.

As to what’s used now: 3.3V was the new 5V, but state of the art CPUs are even lower at ≈1.2V


3.3V seems to be where everything is going in the hobbyist space.

If your device is powered by batteries, 3.3V will let you get longer battery life than 5V. I think I recall hearing that 3.3V can reduce power consumption by as much as 50% relative to 5V. If your device is powered by USB, which is nominally 5V, 3.3V can still be nice because USB power supplies don't always supply the cleanest power, so regulating it down to 3.3V can result in a more reliable power supply.


Most microcontrollers moved to 3.3V, or even 1.8V. Some of them have some 5V tolerant ports to communicate with 5V devices, but for the most part embedded now runs on and communicates over 3.3V.

The reason is mostly power efficiency and the option for higher frequencies, as far as I'm aware.


Modern stuff uses as little voltage as possible. We are talking under 1V at times for silicon core voltage. They also use VRMs to dynamically adjust voltage depending on load.

These days, IO voltage is usually detached from core voltage, and there may actually be multiple IO voltages. Memory and PCIe often get their own carefully tuned IO voltages.

For lower-speed, low-cost devices, 3.3V is basically standardized, though on battery-powered devices you will commonly see 1.8V.

Anything hobbyist is being dragged to 3.3V. 5V devices keep getting harder and harder to find.


3.3V and 1.2V are probably the most common these days, but there are other voltages in use. Lower power usage, less heat, and smaller packaging are the main drivers for lower voltages.


3.3V


ARM and a little bit of MIPS and maybe some RISC-V in the future!


Even relatively common devices have obscure architectures in them, an example of this is that Allwinner ARM CPUs have an OpenRISC core for power management:

https://linux-sunxi.org/AR100


OP is talking about FreeBSD, and the only "obscure" embedded microarchitectures FreeBSD supports that he did not mention are 32-bit ARM, MIPS, PPC, and RISCV.


What is a hobby platform changes over time though.

https://www.researchgate.net/profile/Paul_Carpenter6/publica...


So you'd rather have bugs than worry about the bad assumptions and bad practices that come from monocultures. It's a good thing many of the people in large projects don't agree with you.


No, that's not the tradeoff.


I'm sympathetic, and I funded a Neovim bounty - the first thing they used it for was pulling legacy support. But I think that's the model: you have a base tool that does everything, and then this other thing that demonstrates there are developments stalling because of the base tool's support of legacy platforms.

I suspect for many tools, the hobby platform guys are the same as the other patch guys.


But Neovim builds perfectly fine on m68k:

> https://buildd.debian.org/status/package.php?p=neovim&suite=...

:-)


Haha! But not on all of the platforms where vim does! Right? Unless someone added it back. I haven't followed development since the early days.


I love hobby platforms like 68k, but I definitely agree with you.

It's become unthinkable in the open-source world to not have "one X to rule them all" (in this case X happens to be "compiler", but you see this effect with other tools).

Hobby platforms, especially long-dead ISAs like 68k, should be easy to integrate with LLVM (perhaps through a plugin system? I'm pretty sure both GCC and LLVM have that... you install `gcc` and then you install `gcc-platform-x` or something...) but requiring LLVM core maintainers to think about your obscure arch is not worth it IMO.


68k is not a long dead ISA. Coldfire (68k ISA based) MCUs are still manufactured and used in products. Yes, NXP is pushing ARM instead now, like everyone else, but there is still hardware out there that needs software.


Direct link to the mailing list post for convenience:

https://lists.llvm.org/pipermail/llvm-dev/2020-September/145...

Normally Phoronix is more or less blogspam, but in this case Larabel has actually added some interesting context (history of the backend in LLVM and GCC). So I do not suggest changing the discussion's URL to the ML post.


This would be so cool, and could potentially target things like the Amiga and the old school Macintosh! Yay!

As a preservationist, I can confirm this would be used. Not much, but it would be used.


SPARC + MIPS are both 30+ years old. The 68000 is 41 years old and x86 is 42 years old.


68000 is more like 80286 or even 386 though. So you should probably mention i386, not x86.


68k is an architecture series and this compiler will target the whole series including the later models which are equivalent to Pentium, and the Coldfire processors which are still in production.

So 68k really is comparable to (32-bit) x86.


It is a full 32 bit ISA though, even if the 68000 and 68010 cores had 16 bit buses. They're about twice as fast as a 286 at the same frequency.


I hope they succeed so I can write Sega Genesis games in Rust


Are there efforts to do this for 8-bit processors such as the Z80? Famously, they are not very well suited as C compilation targets because of their limited number of registers, but LLVM does aggressive register allocation that might make it practical.

Edit: Z80 is 8-bit, not 16-bit.


There actually is a Z80 backend for LLVM:

> https://github.com/jacobly0/llvm-project


Neat. I've been tempted to try writing a Sharp LR35902 LLVM backend... Probably would be very close to Z80.


You might be interested in SDCC, a C compiler that supports Z80, among other small architectures. I'm happy to see it's still actively developed.

http://sdcc.sourceforge.net/


I have heard of SDCC, but even on small programs such as int main() { return 4; } it generates quite a bit of code. Can it be more optimized?


SDCC is not especially great.


There's vbcc[0], which, besides 68000, supports a bunch of architectures including 6502 and 6809.

[0]: http://www.compilers.de/vbcc.html


That's an 8-bit CPU.


This would be fantastic to see. With gcc dropping support, that leaves few options for legacy and hobbyist systems.

You can still download and compile dcc (http://legacy.obviously.com/dice/ and https://github.com/noname22/NeoDICE), but having first-class support in LLVM would be very nice.


> with gcc dropping support, that leaves few options for legacy and hobbyist systems

TFA clearly states gcc support was rescued by hobbyists in the community, it's not being dropped.


ah, I completely mis-read that, thanks!


DICE's author is still active (Matt Dillon, behind Dragonfly BSD[0]). That's cool.

Other than dice, there's vbcc[1].

[0]: https://www.dragonflybsd.org/

[1]: http://sun.hasenbraten.de/vbcc/


I wonder if this is related to the vampire accelerator (new amiga).


The vampire accelerator is not Amiga.

It's a closed, buggy FPGA reimplementation of something like the 68k (nobody outside the project knows exactly what; there are rumors it's based on a ColdFire core that was leaked by mistake), with some custom extensions that are underdocumented and have nothing to do with classic Amiga systems.

The entire platform is meant to lock you into their proprietary FPGA core and is totally counter to the spirit of the Amiga, which was the ultimate hacker's machine.


Not directly but adding support for the 68080 target would definitely be possible.



