Lisp-powered laptop with a battery life measured in years (hackster.io)
823 points by shkhuz on March 8, 2023 | 206 comments


Hello! I was very happy to see this project; it's close to my heart. I had a similar dream years ago: an ultra-low-power e-paper laptop. (I noticed people mention such a desire on HN every time something e-ink related comes up! There are dozens of us!)

I also looked into transflective / reflective LCDs as an alternative to e-paper. I haven't built anything though, for various reasons (mostly excuses, if I'm honest! Life is hard, I am busy, but I guess that's true for everyone...).

Are there any other similar devices (or projects actively being developed) with a battery life measured in years?

My current understanding is that I would need to study electrical engineering to have the skills to build something like this; is that true? I have been making some slow progress towards that (learned C, studying assembly, brushing up on my math, studying calculus, etc.).

But a full EE degree does seem like overkill: I looked at the coursework at a university near me, and it has you learning about optics and quantum physics... so it really seems like most of the work is unrelated to the actual goal of building tiny computers.

Is there a way to find or create a curriculum focused specifically on the things you actually want to build? (If not, I expect there will be in a year or two, at the rate AI is developing ;)

Or perhaps I'm looking at it backwards, has this tech progressed to the point where it is possible to just start building without really knowing what you are doing, and learn as you go along?


As someone with an EE undergrad and masters degree, you really don't need it to hack on electronics, especially if you'll be starting with off-the-shelf parts or a microcontroller kit. Understanding the basics of voltage and current, plus the ability to use a multimeter and soldering iron, is probably all you need to make some interesting things.


Echoing this point, an electronics project book, a breadboard and some simple components are all you need to get really really far in combining electronics like this. You can get 90% of the way there with just Ohm's Law. For laying out circuits you can learn about capacitance and inductance, but again, you can literally learn by doing. As in all technical endeavors, debugging is the real skill.

The rest you learn along the way.
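To make the Ohm's Law point concrete, here's the classic first calculation (the component values are just an example): sizing a current-limiting resistor for an LED on a 3.3 V supply.

    ;; R = (Vsupply - Vforward) / I, assuming a 2.0 V LED drop and a 10 mA target
    (/ (- 3.3 2.0) 0.010)  ; => 130.0 ohms; round up to the next standard value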

As for the eInk laptop, I have that eDream as well. The hack I wanted was to use an LCD panel across the top for live editing, undo, etc. Then, once the line was done, the eink display would wake up, take the new line of text, write it to the display, and go back to sleep.

A couple of nice eink options for the discriminating hacker:

https://www.crowdsupply.com/soldered/inkplate-10

https://www.waveshare.com/


reMarkable just came out with a keyboard for their eink device, so that might do the trick for you for now. Its battery life isn't anywhere near what you're asking for, unfortunately.

https://remarkable.com/


> an electronics project book, a breadboard and some simple components are all you need to get really really far in combining electronics like this.

Well, don't leave us hanging, do recommend a book to start with


I started with the Forrest Mims [1] Engineer's Mini-Notebooks from Radio Shack. They contain very little prose and lots of circuits you can build on a breadboard.

https://hackaday.com/2017/01/18/forrest-mims-radio-shack-and...

My recommendation for a current, affordable book is

"Practical Electronics for Inventors" by Paul Scherz


As someone with an EE degree I would agree with this. I was disappointed to find out that my career was mostly cutting and pasting stuff out of datasheets and using crib spreadsheets written by some revered dead guy who didn't have any EE degree anyway and worked it out by reading some books.


Yep. Just started doing circuit design as a hobby to supplement a software project of mine and yeah, pretty much this. It's a lot of fun and isn't super complex to get something working.

I think learning common design practices like bypass caps, knowing when certain formulas apply and when they don't (Ohm's law only holds for linear components), and knowing where to find help when needed, especially for circuit review, was crucial.

Download KiCad and watch a few videos, and most software devs I know would figure out enough in a short amount of time.

Bonus points if you understand constraint systems because that's pretty much the entirety of CAD.


You don't need an EE degree (and you wouldn't learn many of the skills needed anyway). You can learn most of the things you need from "The Art of Electronics" by Horowitz and Hill. It's a very complete book on the subject, and it has only minimal prerequisites.


The team at Modos https://www.modos.tech/blog/get-involved-with-modos is building an open-source e-ink laptop.


You know, I think the US needs to require that hardware specs for components be published on the internet as a precondition to selling here. I think it would help the following things:

1) China and "the East" have a dominance in hardware. I don't think this is strategically good. I think we need a lot more skills in hardware electronics, especially if we electrify all our transport and have PV everywhere.

2) I think it is good for the environment: more broken crap can be recycled into ad hoc devices and other uses, or might (gasp) be repaired.

3) It will help us be more creative with hardware to maintain the innovator's edge.

4) It will help with onshoring production for supply chain security. This is a country-level existential threat right now. Consider that COVID, which by all historical standards is a very very very weak pandemic, caused such disruption and insecurity in our day to day supplies.

5) I just get the sense that software innovation was declining well before ChatGPT; it seems we had a solid decade where "what's the next social network" was treated as the height of software innovation. Um, no it's not; that is deck-chair reshuffling.

Of course corporations don't like it, so oh well, it'll be dead on arrival.


> I also looked into transflective / reflective LCDs as an alternative to e-paper.

I looked into this a bit a while back as well. I was disappointed in how few options I could find. What seemed like the best option out there for me was the monochrome SHARP Memory Display Breakout, the same kind of panel that is used in the Playdate handheld console.

I can't find the 4.4" version mentioned in the article though; the biggest I've found is 2.7".


Please see ESPHome for a bridge to little programmable devices that anyone can pick up. Here is a link to their examples; there are also plenty of other guides: https://esphome.io/guides/diy.html


Depending where you go most of the EE degree is just an elaborate hazing ritual, and if you could pass it you could probably just teach yourself what you needed to know.

If you haven't already, get "The Art of Electronics" by Horowitz and Hill - that alone will tell you enough to do many things.


Electrical Engineering is much more than just microelectronics. You probably just need a few courses in hobby electronics to be able to build something like that. One of the biggest skills in this field is being able to read datasheets :)


Hey HN! This is my project. Hope you find it interesting. AMA.


This is awesome! This kind of extreme low-power application was the original reason I wanted Mosh -- I always thought it would be awesome to have a wireless (hand-cranked?) laptop that could run SSH and would only consume energy when (a) a key was pressed [to update the locally generated UI], or (b) it wanted to fire up the radio (on its chosen schedule, e.g. <1 Hz) to retrieve an update from the server of the "ground truth" UI and reconcile that with the local prediction.
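In pseudocode, the duty cycle I have in mind is something like this (every function name here is made up):

    ;; sleep by default; wake for keypresses or the sub-1 Hz radio schedule
    (loop
     (cond ((key-pressed-p) (update-predicted-ui (read-key)))
           ((radio-due-p)   (reconcile-ui (fetch-server-state)))
           (t               (sleep-until-interrupt))))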


Thank you! Mosh is great, I have found it very helpful at times in my day job. SSHing into servers in Australia from Norway, the latency was pretty bad.

Have not started working on wireless yet but I hope to get to it. The SoC I use has Bluetooth. Using it for a remote shell would be amazing.


That is such a nerdy wholesome story. Definitely will have to make an eInk, foot treadle powered laptop that runs Scheme and uses Mosh.


CPU -- we all have opinions on uCs. I see that your chosen uC has very low active power consumption, less than 10uA per MHz, suggesting under 1mA full tilt.

Have you run any tests to see if active mode is indeed the mode you're in most of the time? Or is it possible that other uCs that could have better sleep current can beat your uC of choice?

Screen -- I like the Sharp memory-in-pixel LCD screens, though 4 inches is a bit small. A bigger screen would be what I'd be searching for, but I also know that monitors use a surprising amount of power, and 4 inches is the biggest memory-LCD screen I've found as well.

E-ink, for all of its faults, does climb to 7+ inches and beyond. But updating eink screens is surprisingly power hungry.

Bluetooth -- lol, there goes the power budget!! This uses what? Probably 10x more power than everything else combined? I/O is a necessity and it's a good feature, but I really do wish it used less power.


Yes, it's nearly always in active mode now. I haven't done much to use the sleep modes at all, only "sleep until next keyboard interrupt" when the program calls (get-key). Measurement here; the selected portion is while rendering a fractal - so "100% CPU" - at 1.5mA: https://cdn.hackaday.io/images/4389961676068300828.png

I'm sure other uCs could sleep at lower power, but active mode was what I wanted to optimize for.
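The whole pattern, roughly (handle-key is a stand-in, not a real function in the firmware):

    ;; the only sleep use so far: (get-key) parks the SoC in deep sleep
    ;; until the keyboard interrupt fires, then returns the key
    (loop
     (handle-key (get-key)))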


This is wonderful information! Thank you! That's 1.49 mA at 3.3 volts?


Yeah, matching the advertised 5mW pretty well. But uLisp is probably not using the CPU very efficiently, so this is not driving it to the max that could perhaps be seen with something like a well-implemented FFT.
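(The arithmetic, for anyone following along:)

    (* 3.3 1.5e-3)  ; power = volts x amps => 4.95e-3 W, i.e. ~5 mW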


Is this at 48 MHz or 96 MHz? I'm so confused about the 96 MHz stuff in the datasheet. There must be some drawback to 96 MHz, because if not, what's so special about 48 MHz?

Is that using the FPU, or are you doing all the fractal math in integer? I have this vague idea that using the FPU might cause it to use more power than integer code will (and might or might not be faster for fractal rendering).


Using the FPU, I believe, but quite inefficiently; most cycles are probably spent in integer operations and chasing linked lists for the lisp parser. http://forum.ulisp.com/t/barnsley-fern-in-ulisp/1087

48MHz is the most power-efficient mode; you can run it at 96MHz "burst mode", but apparently that uses disproportionately more power. I have not measured how much.


It's really exciting to see this stuff coming to life!

Does this display have the same nominal 2Mbps speed as the 2.7" one I've been using? I've seen people report that it works well at 6Mbps (and thus 60 Hz) but haven't tried it myself. I'm guessing that it would use more power.

Have you been able to measure how much of the power consumption is the display?


Yes, I'm running the display SPI at 2MHz which is ~25 fps. Overclocking it to twice as fast works just fine but uses significantly more power as expected. I did not try to go further.
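The ~25 fps figure falls straight out of the SPI clock, assuming the panel's 320x240 resolution and roughly 16 bits of command/address/trailer overhead per line:

    (/ 2e6 (* 240 (+ 320 16)))  ; SPI bits/s over bits/frame => ~24.8 fps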

Approximately 25%-50% of the total power depending on refresh rate. Worst case seems to be alternating black and white pixels, maximizing the number of transitions?


I haven't measured, but the way the power consumption numbers are written in the datasheet (for the 2.7" one) does imply that alternating black and white pixels like that is the worst case.

So the display is about 25% of the 4.9 mW number? That's exciting! How often are you toggling the polarity of the LCD field? I've noticed that when I unplug the display it continues to display the same data for about 30 seconds on what is presumably the screen capacitance, but the datasheet minimum clock for that is IIRC 10 Hz, and as I understand it, never switching the polarity will eventually destroy the liquid crystals.

One of the things I really hate about normal computers is how high the interaction latency is, and one of the really appealing things about these displays to me is the possibility of getting much lower interaction latencies using partial screen updates, down in the submillisecond range (plus 10–20 ms for the crystals to fully change state, but it should be visible before that).


Oh, responding to your email, I got this bounce:

    This is the mail system at host adjuvant.canonical.org.

    I'm sorry to have to inform you that your message could not
    be delivered to one or more recipients. It's attached below.

    For further assistance, please send mail to postmaster.

    If you do so, please include this problem report. You can
    delete your own text from the attached returned message.

                   The mail system

    <(your email address)>: host mx01.mail.icloud.com[17.42.251.62] said: 554
        5.7.1 [HM08] Message rejected due to local policy. Please visit
        https://support.apple.com/en-us/HT204137 (in reply to end of DATA command)
I removed your email address from this comment in case it gets harvested by spambots.


I wonder if a combination of eInk and pixel memory LCD could be good? Use them as two monitors, keep vim on the LCD, web browser or other documentation on the bigger eInk.


The Barnes & Noble Nook did this, except that they were using a conventional backlit color active-matrix LCD rather than a memory-in-pixel LCD like this.

There is clearly a crossover point at which this combination would use lower power, because the e-ink display uses zero power when not updating, and the memory-in-pixel LCD uses power even to retain the display. So as the update rate goes low enough, at some point the e-ink display will use less power. My vaguely remembered estimate of this crossover point is that the e-ink display starts using less power than the memory-in-pixel LCD when the update rate is less than about one update per hour.


to answer hellbanned canadianfella, if you think one per hour is wrong, i'm interested to see your calculations

probably without showing evidence, neither of us will convince the other


Presumably someone working on an e-ink device at some point ran the same calculation and decided to use the e-ink display anyway. Which would mean that e-ink being more power-hungry above one update per hour wasn't true.


That people built e-paper devices after calculating their power usage definitely doesn't mean that my calculation is wrong. First, they started doing it 16 years ago, and things have changed since then, in ways that have made e-paper less popular; second, power usage is far from the only consideration in product design.

E-paper has some real advantages over memory-in-pixel LCDs: as I said in the other thread, the whites are whiter, it's not glossy, it can do grayscale, some versions can even do color, it's available in larger panel sizes, it existed in 02007 when Amazon launched the Swindle, and it can continue to display Amazon advertising even when the device is entirely powered off.

None of these are true of memory-in-pixel LCDs (though the Playdate does display advertising on its memory-in-pixel screen when it's "off").

Moreover, the computing power budget for the Swindle and similar e-readers is also measured in hundreds of milliwatts, just like the screen. There's not much reason to worry about whether your screen uses 100 microwatts or 100 milliwatts if your CPU is using 200 milliwatts. Reducing your power consumption by 50% or 5% is never a critical factor in the success or failure of your product, because you could just add another 50% in battery weight if it were.

Ambiq didn't even exist until 02010, and didn't sample the Apollo3 until 02018, so to get even a single order of magnitude improvement in CPU power usage, Amazon (or competing vendors) would have had to cut the compute budget for e-readers by an order of magnitude, probably resulting in major increases in space usage, reductions in rendering quality, sluggish interactions, or all three, and certainly increasing development cost. Ambiq's CPUs are fast enough, but you can't just drop them into a new model of Swindle or Kobo and keep the same software; they have tiny onboard RAM, little support for offboard RAM (which would make power consumption balloon anyway), and no MMUs, so they can't run either Linux or any of the other e-reader software that's been developed for existing devices—it needs way too much memory.


For what it's worth, my math said ~1 update per 15 minutes was the crossover point between memory-LCD and eink.

There are a lot of different eink and memory-LCD screens, which could explain the difference between your calculation and mine. But all things considered, 15 minutes vs 60 minutes isn't much of a difference at all.

For most applications with updates per second (or at worst, a few seconds per update), low-power monochrome seems best served by Sharp's memory-LCD.


Agreed. Also, my figures for eink power consumption are very uncertain; possibly you were able to take measurements or get good datasheets, but I couldn't.

A thing I didn't mention is that there are actually eink applications that have lower update rates than this. Store price tags, for example.


I decided to try my math again and document it more carefully.

--------

https://www.pervasivedisplays.com/wp-content/uploads/2021/12...

Pervasive Displays has an oscilloscope measurement of 3.75mA over the 5.9-second update of their 1.54" eink screen (page 14). That's 22.125 mCoulombs of charge.

--------

https://media.digikey.com/pdf/Data%20Sheets/Sharp%20PDFs/LS0...

Sharp 1.2" memory-LCD (a bit smaller) has 12uW to 50uW on static images. Assuming 3.3V, that's 15uA worst case (0.015mA).

-------

22.125 mCoulombs / 0.015mA == 1475 seconds per update break-even point, or ~24 minutes.
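Or in Lisp, since we're here:

    (/ 22.125e-3 0.015e-3)  ; charge per update / static draw => 1475.0 s, ~24.6 min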

So yet another number, but yeah, memory-LCD is just really good. Remember that the 1.2" screen from Sharp is a bit smaller though, so it's not a perfect apples-to-apples comparison.

Either way, it's a much longer period between updates than I was expecting.


Yeah man, thanks for this clarification, looks like I've been spending too much time tinkering with apps and I've gotten completely out of touch, haha


thank you very much! this is great!


lol


You could theoretically simply buy multiple of these screens and place them in a grid. Would be slightly annoying when you want to split one window across multiple screens, but I imagine it would still be much more useful.


I've played a Japanese arcade game called Jubeat, that's made up of... I think a 4x4 grid of small touch screens.

With the right interface and design, it feels fine. In particular, Jubeat has a different note assigned to each screen so the divisions feel very natural.

First hit from YouTube: https://youtu.be/LSGOAkyFW7I

Or maybe just natural to people who put in a bit of practice, lol. This is an expert level song pattern, it's not so bad at earlier levels. But it still shows off how the 4x4 screen effectively communicates a variety of button press timings to the player, combo counters, life, scrolling and other such details for this game.


I too have been investigating eink displays, but the biggest issue I've found is that refresh rates are often terrible, especially with colour displays. Have you found eink displays with refresh rates <1s?


The E-Ink displays also use a lot of power at reasonable reading rates, like one Swindle screen update per minute seems to use about 100 mW. That's about three orders of magnitude more than the 2.7" memory LCD, which I guess is about an order of magnitude less area, so it's about two orders of magnitude more power per area. But the memory LCD updates at nominally 20 Hz and potentially much faster, so it's three orders of magnitude faster, so in some sense it's five orders of magnitude better than E-Ink.

I have seen a Swindle do partial screen updates much faster than 1000 ms. In fact, I originally wrote http://canonical.org/~kragen/sw/inexorable-misc/tetris.html to play in the browser on a Swindle, with update times of more like 100 ms. And I successfully used IRC on it, too.

The E-Ink display does have some good points: it's not glossy, it's available in bigger sizes, its whites are whiter, and it can do grayscale. But, for interactive computing, I think those aren't a good tradeoff for being orders of magnitude worse at power consumption and update times.


No. I did spend some time reading eink datasheets, but in the end it seemed too hard for me to do on my own. There's the Kindle and Freewrite, so it seems possible.


I find it amusing that this computer you built has better specs than the first family PC my parents bought us in the 80s.


This is a RISC CPU running at 48MHz (capable of bursting to 96MHz, but that requires too much power; see elsewhere in this thread by the project builder), which doesn't just put an 80s PC to shame: the SPARCstation 1 in 1989 was running a 20MHz RISC CPU :D Basically this potato runs circles around anything from the 1980s, except maybe the Cray-2 or such.


As do I. We had a 286 which this would easily outperform. It could probably run DOOM (with dithering) quite well if someone put their mind to it.


I doubt it. It only has 384 KB of RAM.


Apparently the original Doom required 4 MiB of RAM https://old.reddit.com/r/gaming/comments/a4yi5t/original_doo... but I'm pretty sure there have been ports to platforms that had less.


Most of that is static data, so if stored unpacked, a larger flash should be sufficient to make the game need very little RAM. Unfortunately, if I read it correctly, this one only has 1MB of flash. (oh: but SD card. That could work!)


thanks! these are good points!

however, if you have the sd card powered on all the time it will cut your battery life by about a factor of 30
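illustrative numbers only (sd idle current varies a lot by card; 3 mA and a ~0.1 mA system are just assumptions here):

    (/ (+ 0.1 3.0) 0.1)  ; (system + sd idle) / system alone => ~31x the draw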


Yeah, most of the 90s home console ports were to systems that had less than 4MB RAM. Jaguar, 3DO, PlayStation, SNES, 32X. The GBA also has less than 512KB RAM and a 16MHz ARM and had Doom and Doom II ports. Some cutbacks might be necessary but probably doable.


Also none of those systems had Flash, and they were all a lot slower, requiring more time/space tradeoffs.


The RP2040 has 264 KiB of RAM. RP2040 Doom: https://news.ycombinator.com/item?id=30672527

PS. although, the RP2040 has 2 cores and programmable IO controllers, which IIRC were pretty needed to deal with the low memory (and sound). I don't know how this chip compares.


Also, as I just realized, the RP2040 Doom author is also the Pico SDK (lead?) developer at Raspberry Pi Ltd.[1], which probably contributed as much to the success as the two cores and their PIO controllers.

[1] https://www.raspberrypi.com/news/doom-comes-to-raspberry-pi-...


You could build a ray-casting engine, with textures and maps, that would fit in that easily. It might not be Doom, but something similar is possible.


you can get one with 2MB of RAM now: https://ambiq.com/apollo4/


Hi there! Neat project! Can you give us a rundown of some of its features? Which processor are you using? Do you intend to sell these or are you only creating a prototype?


Thank you!

Not many features of note - it runs arbitrary lisp code, has a simple graphics interface. You can save and load code and data/text files to SD card.
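For flavor, it's just uLisp plus the standard graphics extensions, so things like this work (draw-rect and fill-screen are from uLisp's documented graphics extensions; treat the exact arglists as approximate):

    ;; concentric rectangles on the 320x240 memory LCD
    (defun boxes (n)
      (fill-screen)
      (dotimes (i n)
        (draw-rect (* i 8) (* i 8)
                   (- 320 (* i 16)) (- 240 (* i 16)))))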

The processor is the Ambiq Apollo3 Blue, which is really what makes running arbitrary code with such low power consumption possible. 48/96MHz, 384k of RAM. https://ambiq.com/apollo3-blue/

There's an Apollo4 which is faster and has way more memory, but it's a tiny BGA chip which doesn't have an Arduino core and I don't have the skills to work with that (yet ...)

I hope to create a PCB for it at least, it would make the device tiny and sleek and make it a lot easier for others to reproduce. But I haven't been able to learn PCB design yet.


I'm not a professional, but I have done some PCB layout for hobby projects. Are you looking for collaborators, or are you wanting to keep this a personal project? If you'd like help, I can be reached at my username@(googles mail platform).

One other thing: I don't see the STL/CAD files for the case anywhere; I assume you're still iterating on that. Very cool project!


Yes, I'd love some help with it, or it would probably take me years. Schematics and stls (made in tinkercad) will be coming once I find the time. For now I'm spending too much time replying to comments but I'm enjoying the dopamine


No worries, enjoy the front page! If you can get me even napkin drawings of your hookups, I can do schematics. I work in kicad, and step 1 of a board layout is to do schematics, so it isn't much additional work to lay out the schematics nicely enough for publication.


The Apollo4 can also attach QSPI SRAM which could allow adding more RAM and perhaps allow running common lisp...


Available QSPI SRAMs seem to consume a lot more power than the Apollo4 or Apollo3 itself, and because they're also slower, they increase the processor's duty cycle, which makes it use more power itself (except when the user is waiting on it, I guess, or you're running a background compute task).


i love the minimalist concept of a laptop with so much battery life: Dell and Apple could learn something from this.

so that gray piece on the right is the solar collector, right? since it can go 2 years, what's to stop it from going forever? what if you added a 2nd collector on the left? how big would the solar panel need to be in order for it to run indefinitely?

Super job!


Thanks! The biggest obstacle to running forever may be the chemistry of the li-poly battery which has a relatively low number of charge cycles and is known to degrade over time. A promising option is li-ion capacitors or supercapacitors, which I'm looking into. This is the board I've purchased for testing. https://www.tindie.com/products/jaspersikken/solar-harvestin...


I'd just personally go with replaceable parts. AA NiMHs are like $1 to $2 these days, and a 6V 5Ah lead acid is like $20 (lol, first hit on Google is $4 from some no-name brand).

Replace the battery every few years and you're set. Rely upon mass production, standard part numbers and highly recyclable parts (lead acid wins at this).

-------

Lead Acid is particularly good at UPS style power usage patterns. It's very easy to perpetually trickle charge lead acid.

------

If you're set on Li-ion, then use a standard 18650 cell, so you know that you'll always be able to buy a replacement. But given the attributes I see here, lead acid probably wins. So you have to replace a part every 3 years that costs $5 to $10, big whoop.


No, no, no! This is the post-apocalypse laptop! You can’t go off buying new batteries in your mad-max car. It’s gotta work from that solar panel!


A post-apocalyptic backyard garage could conceivably recycle a lead-acid battery, they're pretty simple.


As it happens, I've spent a lot of time over the last week talking to a guy in Romania who's trying to fix a sulfated lead-acid battery in his backyard garage. (I don't know if you've been to Romania, but the apocalypse happened 40 years ago there, so his available resources are kind of limited.) I think he's going to succeed, and may eventually progress to being able to recycle the lead into a new battery, but it's not going to be a weekend learning process or even a week-long one.

He reports that it's a "very complex electrochemical device".

The USPTO (?) has assigned the code H01M10/06 to lead-acid battery patents. https://patents.google.com/?q=(H01M10%2f06)&oq=(H01M10%2f06) finds 44'593 patents in this category. You don't need any of them to get a working lead-acid battery, but a significant subset of them are going to be helpful. Some are order-of-magnitude improvements.


What kind of apocalypse happened in Romania 40 years ago?


I presume he's talking about the collapse of the Soviet Union, which caused a huge amount of upheaval in that area.

But I'm a poor student of modern history. Just throwing out a guess for now.


Right, but I got the time interval wrong; it was only 30 years ago.


That sounds like the opposite of an apocalypse


Eventually it was good for Romania but IIRC, the initial collapse was very bad for the locals.

But again, I'd say that Eastern European modern history is my weakest subject by far. I probably should avoid talking about this subject lol.


And in a Mad-Max future (lots of cars around, but not enough fuel), there will be plenty of lead acid batteries lying around to recycle.

I do consider it effectively an apocalyptic kind of battery design. It was invented in the 1800s, its chemistry is incredibly simple (sulfuric acid + lead), and it is very well studied.


i prefer the phrase "anti-apocalyptic": by designing an energy-efficient laptop/phone, fewer lithium/capacitor resources are manufactured and everyone can have one, unlike that single coke bottle in that 80s movie "The Gods Must Be Crazy", which will lead to one. ;)


Would it be possible to use standard 9v rechargeable cells? I’m sure the lifetime would be even less, but if a battery swap cost $10 and could be found at any corner drug store…


You can look into LFP (aka LiFePO4) cells, which have higher durability than lithium-ion. Beware that they have a different chemistry, so charging voltages differ from standard Li-poly or Li-ion (they are also safer, which is nice). I haven't looked at the data, but I suspect supercaps may not be optimized for very low leakage, and generally their energy density is not so good (though if you have access to energy harvesting, I guess it may not matter!).
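Typical figures, from memory; always check the cell datasheet:

    Li-ion / Li-poly: 3.7 V nominal, ~4.2 V charge cutoff
    LiFePO4 (LFP):    3.2 V nominal, ~3.65 V charge cutoff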


Did you consider the possibility of using an FPGA Lisp CPU? [0] I hope the supercapacitor idea works and you don't have a leakage issue.

[0] https://frank-buss.de/lispcpu/


FPGA soft-core CPUs can't beat the same logic implemented in ASIC silicon in terms of static power consumption. The MCU at the core of this project has a very impressive uA/MHz specification that's difficult to match. I'm surprised to see it being much lower than even an STM32L0 running off an external SMPS.


> i love the minimalist concept of a laptop with so much battery life: Dell and Apple could learn something from this.

Really? What? Put on a tiny black and white display?? No backlight?? Tiny processor?? No SSD?? No HDD??

I mean, this is awesome. And I look forward to seeing where it is going. Not sure what Dell and Apple will learn??


Dell and Apple can learn that there are consumers out there who want low-cost minimalist options with long battery lives and long operational lives. Not everyone wants to buy a brand new $600 computer every 5 years. They can learn that some consumers aren't going to put up with the crap of planned obsolescence.


Not really planned obsolescence, just lack of spare parts. A 10-year-old laptop will work just fine; it will just be hard to get a replacement keyboard or batteries.

> Dell and Apple can learn that there are consumers out there who want low cost minimalist options with long battery lives and long operational lives.

They don't want their business and its low-margin sales.

Also, the MacBook Air has what, 18 hours of battery life? That's enough for the vast majority of users.


A MacBook has up to 18 hours when new, but you can get far less than that.

The real advantage of long battery life isn’t so much the duration when new but both reducing the number of charge cycles and preserving battery life as the device ages.


> They can learn that some consuemrs aren't going to put up with the crap of planned obsolescence.

The number of consumers who not only complain about planned obsolescence but also put money where their mouth is, is tiny. It's easy to get people on board with the idea behind projects like Fairphone, but then they do a price comparison and buy a cheap Huawei.

The market for Linux laptops is already a small niche and those aren't all that limiting for users when it comes to processing power and software support. Now take away the modern web browser and very few people would consider it for anything more than a little tinkering.


Unfortunately this is almost the least profitable market segment imaginable and isn't going to be addressed well by capitalism.

I don't mean this in a "capitalist bad" way, it's mostly great but there are certain innovations and technologies that don't fit well with a need to get the most return possible on capital invested. There's a "dead space" of techs that would benefit everyone, need some capital to create - but don't allow a lot of value to be captured.

A bit like how we see more VC excitement about "vat grown meat" (a centralised industrial model that exacerbates supply chain dependency and further alienates people from their food - but is perfect for capturing value) vs working on "super potatoes" or algae-based systems that would be low-dependency and could scale down to individuals.

Orthodoxy is that government is supposed to help with this stuff, but they have their own incentives (some of them a result of industry capture) which rarely align with decentralisation, degrowth or reduced dependency. For example, see the recent battles to get laws passed that even allow you to repair your own stuff.


It did address it fairly well. They were able to buy cheap, off-the-shelf components and build this with relatively little work.


If you had said that there are consumers who want long battery lives and long operational lives and are willing to pay a premium for that, then Dell and Apple might have something to learn. But the existence of consumers who want a low-cost option for that is irrelevant, since serving that market would just lower your profits by cannibalizing your sales to the people who are not only willing to pay $600 every 5 years but $1000+ every 3 years.


Hell yeah that’s some next level lisp hacking! Great work.


Thanks! Have not had the chance to do too much lisp hacking yet though, I still very much consider myself a beginner. Mostly solder and C so far, apart from implementing a crude text editor.


> Mostly solder and C...

Hey! I used to have those two and assembly on my resume as the only entries under "Programming Languages"!

Very nice project.


Have you ever been tempted to build a giant cyberpunk style magnifying glass into the computer to make the screen bigger?


I've been working on a similar project, and if you're up for it, I'd love to talk to you in more depth about yours. I've been eyeing up the newer Apollo4 (having recently ordered the devkit to experiment with). My email is in my profile if you'd like to chat and trade information, ideas, or collaborate.


That's exciting! Would you mind if I emailed you too?


Sorry, I just saw this reply. Go ahead!


Very cool project. 2 questions:

1. How did you find working with the Sharp Memory display as a tinkerer? Did you like building with it and would you recommend it? It's pretty new tech to me and I've been mulling whether to splurge for it for a potential pet project of mine.

2. How's the refresh rate on it? Judging from your video it looks pretty dang swell. It looks to be a lot faster than the typical 15 Hz one sees on higher-end e-ink displays available right now.


Using the Adafruit breakout board, I had no problems other than an SPI bug in the Arduino core. Be warned that they are fragile when not protected; I sheared off the flex cable on one, which was not fun.

Official refresh rate is ~25 Hz; unofficially they seem to be able to go a lot faster (2-3X) if you don't mind increasing power consumption. And it's faster if you update only some lines. But by then you have run into a limitation in how fast the liquid crystal can respond.


Do you think this could work well in the form factor of a programmable calculator?


Yes. A smaller but higher-resolution display is available from Adafruit, and they made a calculator tutorial for it which helped me a lot in the beginning: https://learn.adafruit.com/diy-rpn-desktop-calculator-with-c...


I love this. Thank you.


Wondering if you specced out or thought of an implementation with an e-ink display and internet.

How long could the battery last for that base spec with, say, 10" or 13" models?


Ever hear of ChrysaLisp?


can you get a display with the same technology but larger?


I was wondering if a device with such low power usage could be charged from button presses, but dismissed the idea as whimsical... and then immediately found another project on the front page that does exactly that!

https://news.ycombinator.com/item?id=35077859

>We extract energy from (...) button presses of a regular Game Boy, using mechanical off-the-shelf button press harvesters. (...) The energy from the buttons is rectified, boosted, and stored in small capacitors.


A few small solar panels would probably soak up enough ambient light. I had a calculator in high school that ran on solar. It still works. 35 years later.


This is great! The combination of an Ambiq Artemis module and Sharp memory-in-pixel displays is exactly my plan for the Zorzpad, which won't have a battery at all (sounds like that's andreer/reerdna/Eriksen's plan eventually). But Eriksen has actually built something, and so far all I have is a pile of electronic parts and design notes and sketches and code prototypes. Hopefully I can learn from whatever mistakes he's made!

Eriksen's PotatoP project page: https://hackaday.io/project/184340-potatop

His GitHub page: https://github.com/andreer/PotatoP/

My Zorzpad Git repo, in Spanish: git clone http://canonical.org/~kragen/sw/zorzpad.git

(There might be some information in there of use to Eriksen, too, for example about Flash. And maybe in Dercuano, Derctuo, and Dernocua.)

My previous comment on here about the hardware: https://news.ycombinator.com/item?id=30691361 three days after Eriksen's first update on the Hackaday project log. I didn't get the idea from him, but I think I hadn't written about it publicly before that, so he probably didn't get it from me either. How did we end up deciding to do almost exactly the same project a few days apart, when the parts had been available for years? Maybe the Playdate is responsible?

As for the battery, my theory is that by not having a battery I can get much higher reliability, because the battery and the charging port are the things that usually limit the life of portable computers. Submilliwatt personal computing is the means to that end.

It's really inspiring to see someone achieve what I've been dreaming of!


Thank you! Yes, I've had this in mind for a while; I have some pictures of the first prototyping back in 2020, dismembering the keyboard, although what I was thinking at the time is not recorded.

I would not be surprised if it's simply the availability of the good-enough CPU that caused the idea to take root in multiple heads independently.

initrd has gathered a lot of resources in https://hackaday.io/project/177716-the-libre-autarkic-laptop and https://github.com/EI2030. He sent me a link to your writing a while back, and it made me think of using an SD card for storage, which I had previously rejected for taking too much power. But I have since managed to only power it when reading/writing files, and it works quite well.
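The gating itself is simple; a sketch of the shape of it, with a made-up pin number (the real wiring differs):

    ;; power the SD card only around file access
    (defun with-sd-power (fn)
      (digitalwrite 9 t)      ; enable the card's power switch
      (delay 10)              ; let the card come up before talking to it
      (let ((result (funcall fn)))
        (digitalwrite 9 nil)  ; cut power again
        result))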


I didn't learn about the CPU or the display until 02021, but apparently they've been around for quite a number of years. My notes in energy-autonomous-computing in Dernocua http://canonical.org/~kragen/dernocua/ on both the Ambiq chips and the memory LCDs date from July 02021, also three days apart.

For the Zorzpad's main nonvolatile storage my plan is to use bare SLC NAND Flash, but I'll have to write to it fairly slowly to get to my desired design lifetime. So my initial plan of a KeyKOS-like or EUMEL-like transparently persistent system with huge virtual memory is out the window.

If your limit is only the power budget, though, that approach seems like it might be within reach; the energy-autonomous-computing note I mentioned above has a table of memory types and their energy consumption, and if I didn't fuck up the calculations, NAND Flash only costs about 10 nJ per byte to write. So a full memory snapshot (384 KiB) every 30 seconds (104 kbps) would only cost about 130 μW, and if your Lisp (or other virtual machine) can keep track of what data is dirty, it should be a fraction of that.
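(Checking my own figures:)

    ;; 384 KiB snapshot every 30 s, at my estimated 10 nJ/byte
    (let ((rate (/ (* 384 1024) 30.0)))  ; bytes per second
      (list (/ (* rate 8) 1000)          ; => ~104.9 kbits/s
            (* rate 10e-9 1e6)))         ; => ~131 microwatts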

I'm glad to hear my notes have already been useful to you! I'm sure yours will be for me, too. And thanks for the links to initrd's work! I somehow didn't know about it.


has anyone ever tried to harvest energy from key presses to prolong battery life?

on that one, a little capacitor in the parenthesis keys would probably make that solar panel unnecessary.


Yeah, it is used in https://www.freethegameboy.info for one - not sure if they published numbers on how much energy it produced.


Those switches look like they are the mini generators from ZF Electronics that produce 0.33 mWs (0.33 mJ) of energy per activation.
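For scale, taking the PotatoP's ~5 mW active draw from elsewhere in the thread:

    (/ 5e-3 0.33e-3)  ; watts / joules-per-press => ~15 presses/s just to break even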

ZF Electronics used to be called Cherry Industrial, which was until a few years ago part of the same company as Cherry AG who make the famous keyboard switches.


Yes, specifically ZF AFIG-0007 energy harvesting switch

https://dl.acm.org/doi/pdf/10.1145/3411839 p. 12

https://eu.mouser.com/ProductDetail/ZF/AFIG-0007


As a heads up, the second link leads to 404 Not Found error.

Edit: Oddly, the product page is viewable if you enter the product ID from the tail end of the URL into the product search field.



Whoa, I would have to buy at least 5000 switches. Together with the price tag (~50K EUR), it does not look good for a weekend tinkerer.


I dream of a DMG-1000, an original Game Boy with a thousand hours of battery life, because it would have a CPU made with modern technology. This is actually DMG-Infinite! Amazing. Thanks for sharing.


People reading this might be interested in my notes from 9 years ago on this topic at https://dercuano.github.io/notes/keyboard-powered-computers....


aw what a cool project!


I think I'd prefer a good solid crank to having to grind a keyboard. I remember manual typewriters - they were a hideous, miserable experience.


Presumably the resistance of a typical laptop keyboard is set to get the correct feel, not the minimum mechanical resistance needed for it to work at all, so there must be energy currently dissipated as heat that could be harvested for free, in principle without changing the feel.


A hand crank would be really cool on such a device. Rather than scramble for a charger and outlet you could just take a break and crank away.


If only the Playdate could charge using the crank.


> I remember manual typewriters - they were a hideous, miserable experience.

They were just the thing to write rants on, you could truly put some feeling into your writing. Rant loud enough and you'd type holes instead of o's. Overdo that underline and it'd end up splitting the page for real.


Having grown up with a non-Selectric typewriter, I'd be okay with a charging crank that mimicked the action of a manual carriage return that I had to shove to the left at the end of each line of code when the editor goes "ding!". But I'm not sure where it could be mounted on a laptop.


I've considered adding a few small geared stepper motors as a combination input device (scroll wheel, or two for etch-a-sketch cursor control) and energy harvester.


Pedal power, like an old-timey sewing machine.


At $10 apiece and a ~90-key keyboard, you could buy rechargeable batteries for years instead.

https://eu.mouser.com/ProductDetail/ZF/AFIG-0007

> on that one, a little capacitor in the parenthesis keys would probably make that solar panel unnecessary.

... yeah, you could probably do just those two keys for only $20.


It's a slight exaggeration how frequently parens are used in Lisp. The keys tab, space, shift, backspace, hyphen, and enter are probably used more often, as are many alphanumeric keys.

Perhaps you could have the whole keyboard floating on a pivot like a see-saw, with one of these switches beneath it at the right and left side of the pivot (say beneath s and ; keys). That way any key press off-center would have a chance of activating one of the two switches, and keys closest to the far left and right would have a greater probability of activating them. The keyboard would have a slight wobble.


That link is dead for me


https://news.ycombinator.com/item?id=35075592

You can find the spec at the manufacturer site.


energy from key presses: really cool idea. i hope that gets documented somewhere so the patent trolls don't try to patent it or something.


How long could the battery be expected to last without any solar cell energy input? That would be super helpful for getting a feel for how long it will run indoors in a darker space (e.g. on a nightstand in a bedroom which stays 'blacked out' dark all day).

Also, would this hardware be sufficient to run a terminal and SSH client? I'm thinking about the overhead of encryption and a constantly open TCP socket streaming packets back and forth over Wi-Fi.

I don't think this first iteration even has a network stack, is that right? Is having Wi-Fi even on the table as a realistic design target?

Edit: Thank you @reerdna for all the helpful information. 40 days of activity on a single charge would still be pretty compelling :). Heck, even a full week or two of constant general-purpose use would be amazingly better than current laptops, which last only 2-12 hours of active use (at best).


Perhaps 200 days? This was already discussed in another thread. But you wouldn't be able to read the screen in darkness, it is reflective and not backlit.

I have SSHed into a MicroVAX II at times; this is 50 times faster. So possible, definitely.

No network stack yet, although the chip has bluetooth so it'd be possible to make that work, perhaps with a "gateway" device. But it would likely increase power use by 5X.


40 days would still be amazing.


Does it come with one of those magnifying glasses in front of the screen?

You know, like in the movie Brazil?


I wrote a dystopian post on this technology. I also wrote about a similar project idea: https://iotmote.substack.com/p/remaking-the-nokia-6110-and-p... https://hackaday.io/project/177716-the-libre-autarkic-laptop


Yes. And a drab uniform is provided with each machine sold at Best Buy


what's wrong with Chinese clothing, circa 1960? ;)


Wonderful work! What kind of modifications did you make to ulisp to create potatOS? And how is it working with ulisp? My only real lisp experience is with CL.


Thank you! Not too much really. I "ported" it to the SparkFun arduino core - which mostly involved adding some ifdefs and fixing a few bugs, like this one https://github.com/sparkfun/Arduino_Apollo3/issues/478 , and added the adafruit sharp memory display library which integrated quite well with the existing graphics support. Then I wrote an interrupt routine to scan the keyboard.

I also had to add some error handling patches which someone had shared in the ulisp forum (because you don't want the REPL to crash if you have a typo)

Working with uLisp is fun - it's not super fast and it does have a few limitations and gotchas, but it was by far the easiest option for this project, as so much was already supported. And the documentation and community are very nice.


Why is this not a direct link to the project's page? https://hackaday.io/project/184340-potatop


Methinks the inventor had the right idea with the potato name but they should go all-in and charge the potato with another potato

https://www.sciencebuddies.org/science-fair-projects/project...


i really believe one of the most fundamental tasks for the human species now is to create some kind of backup for all the knowledge of human civilisation. A book you can always open and read. But how are you going to ensure wikipedia remains accessible in case of a true collapse?


This is the exact topic of Isaac Asimov's "Foundation". (I've watched the series, now digging into the first book.)

Wikipedia provides downloadable data dumps, I guess that's one way to start.

You may also want to check out http://collapseos.org/.


> A book you can always open and read, but how are you going to ensure wikipedia remains accessible in case of a true collapse ?

Print it out


I understand that most newer ultra-low-power microcontrollers are fabbed using a 40nm process node. Just imagine how little power they would consume if they were fabbed using 5nm nodes.


I totally love this, it's a dream: ultra low power computing everywhere and anywhere. Why can't my smart watch run for months instead of days? I have nothing against Lisp, but it would be more practical if it could run Fuzix and if it had network connectivity.


It would be interesting to have a Forth running on it, as I feel the philosophies are aligned.


I've never looked into Forth, but I'm sure it would be doable.

I have experimented with adding Uxn support, which may be similar to Forth in many ways? https://100r.co/site/uxn.html



My first thought was: how does the CPU compare to the GreenArrays GA144 Forth computer-on-a-chip (in terms of power/performance)?

https://www.greenarraychips.com/home/documents/greg/GA144.ht...

https://www.greenarraychips.com/home/products/index.php


The funny thing is that you could run a whole company with just this device. It has enough processing power to manage everyone's payroll, keep track of cashflow, inventory, and orders, and calculate a myriad of spreadsheet formulas.


I've imagined having an OLED screen which only flashes on momentarily. Could be used for writing without editing.


I've imagined like an OLED/ePaper hybrid for that application.


Or e-ink display. Unfortunately those are quite pricey compared to LCD


rLCD is probably more efficient, and much more readable under good light; the memory LCD in the picture also has epaper-like retention capabilities.


A great way to resurrect the argument about ideal line length.


I'm pretty sure 53 characters is slightly less than ideal though :-)


This is an 8 year old demo of a SHARP Memory Display, for those like me who didn't know exactly how it looks: https://www.youtube.com/watch?v=VmronCahqd8

The fast response time is the key, I was expecting it to be like an e-ink display!


I've seen this:

http://www.ulisp.com/show?383X

Could a Z-Machine interpreter (Emacs has one in Elisp, Malyon.el) run fast enough on this? v3 version, nothing fancy. v5 and v8 games would require a bit more power.


Yes. The Apollo3 cpu is probably at least 50-100 times faster than you'd need for that. Maybe even the emacs lisp version could be ported easily.



A Computer for Leibowitz


Tip: when the slide-over email nag screen comes up, you can get rid of it with the uBlock Origin element picker. Then "pick" the grey overlay as well, and BAM, you won't ever be nagged on that site again.


Is there any way to make glasses with a big screen floating in front of your eyes with similar tech? I am not talking AR; I am talking about just having a massive TV hanging in front of you where I can program uLisp, with a watch-sized battery in the glasses that goes for days (not asking for years; of course weeks or months would be really excellent). I think the resolution of the panels + the lighting to project it are the problem? Is that doable, or are we just bound to a massive battery even if we need minuscule processing power?


Maybe you'll find this interesting: https://www.youtube.com/watch?v=50614QMNQPo


Thanks


The weight of the glasses is also a problem, I think. Having a screen in front of your eyes is very heavy on the nose. People with high-strength corrective lenses will know what I'm talking about. Perhaps a rudimentary vector-based HUD, with tiny projectors in the legs of the glasses? But that sounds expensive and hard to manufacture.


I have the Nreal Air, which has no batteries. I have fairly high correction and yet it's not heavy at all (I have prescription lenses for the Nreal). It is great tech and I easily work for hours on it. But the battery just drains too fast (it uses the phone battery). The weight is no issue at all so far.

Edit: Lenovo has them too, I see [0]. But because they need high processing for 'most' people, they don't have batteries and have cables. If you don't want to watch TV, play games, etc., but just program Lisp and sync via BLE, I wonder if it's possible to make it very low consumption just on the glasses. I know that chipsets, including ones with BLE, can be very low consumption. But I'm not sure if projecting the image at high enough res is possible at that low consumption, so that is my question, if someone knows.

[0] https://www.theverge.com/2022/9/1/23332907/lenovo-glasses-t1...


Sorry if I am being ignorant, but can someone explain to me Hacker News' obsession with Lisp? I don't hear anyone talk about Lisp or use Lisp anywhere except on Hacker News.


It's a nice and powerful family of languages: lisp, scheme, clojure, etc. It's not used a lot, partly because people who don't have experience with it have the 'ewwww parens' reaction when they see it. Similar to the APL family (modern versions should be relevant to ML: numpy etc. are basically APLs with different syntax).

I personally believe everyone should try different paradigms and languages even if you never use them. First of all it will help you see patterns later on, which results on learning whatever programming language in a very short time when you need it. And it opens your eyes to how powerful some of them are, and even to get most out of your ‘day job’ language.

There was someone on Twitter who had been programming C# full-time for a decade or so, and didn't know you could 'int i=0; for(;i<x;i++)'. Then the comments saying 'oh I thought that was only C/C++'. That is a fundamental lack of understanding; how do they think this works? Magic? When you have seen many languages, you discover interesting properties that are normal in other languages but still possible and actually beneficial in your most used one. Lisp/Scheme deep dives are known to give this type of insight that will stay with the professional developer forever, making them better, even if they never use Lisp again.


Paul Graham (Y Combinator co-founder) has a lot of essays on Lisp or mentions of Lisp:

* Lisp: http://www.paulgraham.com/lisp.html

* Beating the averages: http://www.paulgraham.com/avg.html

Other notable programmers have mentioned this. I can think of Yegge's article (http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptab...) where he mentions this:

> ...It's dirty laundry that needs airing. The problem: Paul Graham. I mean, the guy's a genius, and I love reading his essays, and his startups are doing great things, etc. etc. You can't fault him. But he's created something of a problem.

> Before Paul Graham, Lisp was dying. It really was, and let's not get all sentimental or anything; it's just common sense. A language is always either gaining or losing ground, and Lisp was losing ground all through the 1990s. Then PG came along with his "I'm not talking to you if you're over 26 years old" essays, each a giant slap in our collective face, and everyone sat up and paid attention to him in a hurry. And a TON of people started looking very seriously at Lisp.

In addition, HN itself is written in a dialect of Lisp called Arc: https://en.wikipedia.org/wiki/Arc_(programming_language)

This is why you can't trust HN as the general opinion among people or even programmers. They are a very specific subset of programmers who are more likely to be interested in things like Lisp.


> Paul Graham (Y Combinator co-founder) has a lot of essays on Lisp or mentions of Lisp

Nobody mentions them much here for some reason, but also the books ANSI Common Lisp and On Lisp, at one time considered almost essential reading. I still think they're fantastic but they're not for instant gratification as they don't give you much in terms of immediate practical applications or real world system examples. Nevertheless they are among my favorite books on programming and I wouldn't be surprised if many people who come here similarly "cut their teeth" with those books.


Because HN users tend to be interested in niche but powerful languages, they are not to be trusted to know programming in general?

Weird argument to close an otherwise excellent comment. I'd rather listen to the opinion of someone that also knows niche things, than one that only knows the mainstream and nothing else.

Also which group is more likely to create beautiful products such as a portable Lisp machine?


> Because HN users tend to be interested in niche but powerful languages, they are not to be trusted to know programming in general?

No, the parent comment meant that HN users are not a representative cross-section of programmers in general. This manifests itself, for example, in relatively niche languages being mentioned and discussed disproportionately often.

In other words: just because HN loves it doesn't mean everybody else knows it.


This is similar to the love HN has for autonomous driving. Most people outside HN despise the idea. So HN is not representative of society in general, but that doesn't mean the HN crowd is wrong.


MIT taught computer science with Lisp (Scheme, via SICP), and did it in a way that made everyone fall in love.


Several companies do use it:

It was the scripting language for AutoCAD (AutoLISP) before being displaced by .NET.

It is a scripting language for GIMP (Script-Fu), alongside Python (which came later).

It powers Google's flight-search backend (ITA Software).

It powers SISCOG's products (a company making planning and management software for trains and trucks).

It powers Nubank via Clojure (alongside Erlang).

Just a few examples.


Don't knock it if you haven't tried it!


I have tried it: I wrote multiple Lisp interpreters, including a Scheme in C with garbage collection and basic call/cc. I wrote a simple regex engine in Scheme. I wrote a basic just-in-time transpiler from Elisp to JavaScript. I dabbled with Common Lisp.

I still don't like it:

- a quality implementation of any Lisp is _not_ easy, e.g. just look at SBCL or Chez Scheme

- minimalistic implementations are a selling point, but I'd take Lua over any minimalistic Lisp any time. Chances are the Lua interpreter library will be smaller. MicroPython runs on anything slightly more powerful than an Arduino nowadays and gives you objects and async on an effing microcontroller.

- developer experience is miserable/ascetic, even compared to Lua (except maybe Clojure). Things we take for granted (dictionaries/hashmaps/object properties) are still missing or clunky in many Lisps.

- structural editing is now available for any language with a proper language server (which almost any modern language has). S-expressions failed to become a common data language for anything. It's possible to have a syntax that is simple and humane at the same time (Zig, Go, Lua).

- I've learned to avoid macros and dynamic code generation in any language unless I have a very good reason to use them; otherwise they are a code smell.

It was painful for me to watch the video in the article where the author counts parentheses and gets the job done in spite of the language, not because of it.


Sorry to have caused you pain!

I thought this video would get maybe 100 views at most, or I would have put more effort into it (like implementing paren matching in the editor first).
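
(For the curious: paren counting really is only a few lines of Lisp. A minimal sketch below, not the editor's actual code; `paren-balance` is a made-up name, and a real editor would also need to skip parens inside strings and comments.)

    ;; Hypothetical sketch, not the editor's real code: returns 0 when
    ;; parens balance, >0 if some are left unclosed.
    (defun paren-balance (str)
      (let ((depth 0))
        (dotimes (i (length str) depth)
          (case (char str i)
            (#\( (incf depth))
            (#\) (decf depth))))))

    (paren-balance "(print (+ 1 2)")   ; => 1, one paren still open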

I like Lisp (and apparently it's good HN bait), but the language is not really the core of this project.

It is more about making a truly personal computer. One that doesn't require you to think about charging or software updates. That is always fast and responsive and ready to use as it is never doing something strange in the background. Simple enough for me to feel like I understand how everything works.

And writing your own OS for it is easy, if you wanted to - you could implement Forth, Lua, a small C compiler - whatever you wish, and these properties would still be preserved.
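
(To illustrate how little that takes: on a machine like this, the interactive "OS" is essentially the classic read-eval-print loop. A rough sketch in Lisp, not the project's actual code:)

    ;; The whole interactive environment, in spirit:
    ;; read a form, evaluate it, print the result, repeat forever.
    (loop
      (print (eval (read))))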

Useful? Probably not. Fun? At least for me it is.


Your project is really admirable (I'd like to make a tiny computing machine and a full computing stack myself at some point), and having fun is a sound reason to use Lisp :)

That was just me and my pet peeve: Lisps stopped being a superpower decades ago.


This is very cool, especially the choice of screen!

I've often thought about how the PDAs of yore (and Nokia dumbphones) had monochrome LCD screens that were perfectly usable for text and had week-long battery life.

It's something I'm hoping will make a comeback as more people begin to realize that the "there's an app for that" mentality is leading to apps that aren't necessarily the right fit for a smartphone anymore.


I sort of miss that era of PIM, and would love to have a modern PDA with a monochrome LCD, great battery life, fold-out keyboard, and—critically—the ability to easily sync with Gmail and Google Calendar. As far as I can tell the syncing story with 20-year-old Palm Pilots is not great these days.


The first machine I ran Lisp on was a Pentium II MMX 133MHz. Depending on the IPC of the Cortex-M4, this could be a similarly fast device. Of course the Pentium had at least 16MB of RAM (maybe 32 or even 64MB? That was about when RAM prices dropped like crazy, so I'm not sure), compared to this, which, at 384K, has only slightly more RAM than a base-model AT.


Sharp Memory-in-Pixel display LS044Q7DH01. That reflective LCD looks nice.

Mods, I think the Hackaday link is far better; it doesn't show a login popup that comes up multiple times. https://hackaday.io/project/184340-potatop


The Control key is in the right spot for Emacs! Bravo on the project. I wish I knew hardware better.


To my inexpert eyes, and with my lack of knowledge about Lisp, I have a hard time understanding the difference between this and a TI-86 graphing calculator.


More RAM, a higher-resolution display, a proper keyboard, solar power, and it's much easier to program (for me at least). But yes, it is essentially the same sort of thing, and much more poorly made at that. :-)

I loved my TI-83; I wrote games and crude 3D graphics on it in high school math class.


Mhhhm, this makes me hungry for uLisp <> ChrysaLisp interop.


Optional accessories include a magnifying glass and a commemorative LISP postage stamp.


Having a solar panel defeats the metric of "battery life measured in years".

More useful would be the runtime of the system on battery (which I'm guessing doesn't make a great headline), as well as the output wattage of the built-in solar panel.


12000 mAh battery / ~2 mA power consumption when running code and updating the display every second = 6000 hours ≈ 250 days. Still pretty good? I haven't had a chance to do much testing of the solar panel yet, but it does charge while running in good light, so it's > 1 mA.
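
(Back-of-envelope, in the machine's own language, assuming a constant ~2 mA draw:)

    (/ 12000 2)    ; 12000 mAh / 2 mA  => 6000 hours
    (/ 6000 24)    ;                   => 250 days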


2 milliamps? Preposterous

Here's the datasheet for your (or a similar) display (PDF) https://media.digikey.com/pdf/Data%20Sheets/Sharp%20PDFs/LS0...

...250 µW average power consumption for the display panel... well, maybe that's not so preposterous after all. Great work.


> Having a solar panel defeats the metric of "battery life measured in years".

Eh, I don't see why. Solar pocket calculators were still sold with the understanding that their function and lifespan hinged on the chemistry of the internal battery.

It's not like these things can function without some form of energy storage (a battery or capacitor), and getting that to last a long time is tricky.


It's impressive to dig a long-forgotten solar calculator from the late '70s out of a drawer and have it still function perfectly.

It seems like a monumental failure of the computer industry that we still don't have the equivalent in laptop form. None of my portable computers will come out of a forgotten drawer 50+ years from now and readily turn on.


A while back I wanted to run OS/2 on real hardware, so I bought an old HP OmniBook 800; the batteries were all shot, but otherwise it worked fine (the laptop, that is; OS/2 was kind of a pain to set up). That's not the '70s, but it's old. Lots of computers of that vintage would work fine if it weren't for those pesky capacitors -- the same is true of lots of other vintage electronics.

I recently bought an old keyboard (an IBM M4-1, a PS/2 keyboard/TrackPoint combination thing) -- it's from 1994 and needed a couple of adapters, but it connected to my MacBook Pro just fine and worked pretty well.

OS/2 doesn't know modern TLS, and the keyboard doesn't have the command/meta/windows/open-clover keys, but otherwise they do fine.

My old HP calculator from high school didn't work when I tried it out...


It was a small miracle when the Dell Latitude C640s my dad collected from his work in the early 2000s turned on and worked fine around 2019. Just in time to turn them in for $50 Best Buy gift cards in their laptop recycling drive.


Having a solar array on your roof and some battery storage in the house makes even a normal off-the-shelf laptop last for years, while handling far more practical workloads for the user.

That doesn't mean the laptop's battery life is measured in years.


I guess the battery acts more like a buffer for the solar panel, and battery life here literally means “the lifetime of the battery”.


Is Lisp used nowadays in production for newly developed business platforms?


Can you harvest some energy from the user typing on the keyboard?


>Lisp-powered laptop with a battery life measured in years

Take that, Lenovo!


Yes.



