Conventional wisdom says that running LLMs locally requires a computer with high-performance specifications, especially a GPU with plenty of VRAM. But is this actually true?
Thanks to the open-source llama2.c project, I was able to port it so that vintage machines running DOS can actually run inference on Llama 2 models. There are severe limitations, of course, but the results may surprise you.
If I told you a PC has a floppy drive, an optical drive, a Sound Blaster card, and serial, parallel and PS/2 ports, all running DOS, you would think I was referring to a machine from the 1990s. Yet my very modern PC, built in 2024, possesses all of these characteristics!
I recently built myself a PC because my previous desktop's components were insufficient for my current needs. I decided to build one that could still reach back into the past to run DOS. In this post, I will walk through the thought process I used to decide on the specifications, as well as the journey of getting MS-DOS 6.22 running on it.
I did an experiment on a relatively modern 2020 ThinkPad and found that it can still run DOS. Modern features like Thunderbolt, 2.5 Gigabit Ethernet, and Sound Blaster compatibility over Intel High Definition Audio all work, which is pretty remarkable, thanks to drivers that can still be found online.
I also ran 8088 Domination and it seems to work well too.
This is, however, the end of the line: ThinkPads, and likely other laptops, made after this model generation will no longer be able to run DOS natively.
With the recent attention on ChatGPT and OpenAI's release of its APIs, many developers have written clients for modern platforms to talk to this super-smart AI chatbot. However, I'm pretty sure almost nobody has written one for a vintage platform like MS-DOS.