
At home I have a 48xLTO5 changer with 4 drives (picked up for a song a while back! I don't actually need it, but heck, it has a ROBOT ARM), and at work I'm currently provisioning a dual-rack 96 x LTO-9 tape library, with 640 tapes available :-)

I'm a STRONG believer in tapes!

Even LTO 5 gives you a very cheap 1.5TB of clean, pretty much bulletproof storage. You can pick up a drive (with a SAS HBA card) for less than $200, there are zero driver issues (SCSI, baby), and the Linux tape changer code has been stable since 1997 (with a port to VMS!).

Tape FTW :-)


I don't have one but I'd definitely take a tape changer if it weren't too expensive. It would be amazing to have 72TB of storage just waiting to be filled, without needing to go out into my garage to load a tape up.

LTO tapes have really changed my life, or at least my mental health. Easy and robust backup has been elusive. DVD-R was just not doing it for me. Hard drives are too expensive and lacked robustness. My wife is a pro photographer so the never-ending data dumps had filled up all our hard drives, and spending hundreds of dollars more on another 2-disk mirror RAID, and then another, and another was just stupid. Most of the data will only need to be accessed rarely, but we still want to keep it. I lost sleep over the mountains of data we were hoarding on hard drives. I've had too many hard drives just die, including RAIDs being corrupted. LTO tape changed all of that. It's relatively cheap, and pretty easy and fast compared to all the other solutions. It's no wonder it's still being used in data centers. I love all the data center hand-me-downs that flood eBay.

And I do love hearing the tapes whir, it makes me smile.


Funny that, I was recently looking for a small, local SMTP server to receive notifications from my printer and other stuff and... there isn't one. All you get are the ginormous ones with decades of crud attached.

So I ended up writing my own, of course; no need for all the fancy features, just PLEASE let me receive email over SMTP and deliver it locally with 'dma'. Phew.


I used to write quite a few SIMD versions of critical functions, but now I rarely do -- one thing to try is to isolate that code and run it in the Most Excellent Compiler Explorer [0].

And stare at the generated code!

More often than not, auto-vectorisation now generates pretty excellent SIMD versions of your function, and all you have to do is 'hint' the compiler -- for example, explicitly list alignment, or provide your own vector source/destination types. You can do a lot by 'styling' your C code while thinking about what the compiler might be able to do with it -- for example, use extra intermediary variables and really break down all the operations you want.
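A minimal sketch of that kind of 'styling' (the function and names here are hypothetical; `__builtin_assume_aligned` is a GCC/Clang builtin):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical example: adding two pixel buffers. 'restrict' promises
 * no aliasing, and __builtin_assume_aligned promises 16-byte alignment;
 * together they are often enough for GCC/Clang at -O2/-O3 to emit SIMD
 * code for the loop on their own. */
void add_pixels(uint8_t *restrict dst, const uint8_t *restrict a,
                const uint8_t *restrict b, size_t n)
{
    uint8_t *d = __builtin_assume_aligned(dst, 16);
    const uint8_t *pa = __builtin_assume_aligned(a, 16);
    const uint8_t *pb = __builtin_assume_aligned(b, 16);

    for (size_t i = 0; i < n; i++)
        d[i] = (uint8_t)(pa[i] + pb[i]);
}
```

Paste something like this into Compiler Explorer and you can watch the generated loop switch between scalar and vector code as you add or remove the hints.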

Worst case, if the compiler REALLY isn't clever enough, this gives you a good base: take the generated assembly and tweak it, without having to actually write the boilerplate bits.

In most cases, the resulting C function will be vectorized as well as, or better than, the hand-coded one I'd write -- and in many other cases, it's "close enough" not to matter that much. The other good news is that the same code will probably also vectorize fine for WASM, NEON, etc. without needing explicit versions.

[0] https://godbolt.org/


We did something slightly similar for the very few isolated things where it makes sense (e.g. image upload/download and conversions in the GPU driver that weren't supported, or not large enough to be worth firing off a GPU job). They were initially written in C, using compiler annotations to specify things like alignment or allowed pointer aliasing, in order to make the compiler generate the code we wanted. GCC and Clang both support vector extensions that allow somewhat portable implementations of things like scatter-gather, shuffling, or masking elements in a single register -- operations that are hard to specify in "plain" C clearly enough to be both readable for humans and guaranteed to generate the expected code across compiler versions.

But due to needing to support other compilers and platforms we actually ended up importing the generated asm from those source files in the actual build.


As a counterpoint, I regularly run into trivial cases that compilers are not able to autovectorize well:

https://gcc.godbolt.org/z/rjEqzf1hh

This is an unsigned byte saturating add. It is directly supported as a single instruction in both x86-64 and ARM64 as PADDUSB and UQADD.16B. But all compilers make a mess of it from a straightforward description, either failing to vectorize it or generating vectorized code that is much larger and slower than necessary.

This is with a basic, simple vectorization primitive. It's difficult to impossible to get compilers to use some of the more complex ones, like a rounded narrowing saturated right shift (UQRSHRN).
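For reference, the "straightforward description" in question looks something like this scalar sketch (to guarantee the instruction, one would instead reach for the `_mm_adds_epu8` intrinsic on x86 or `vqaddq_u8` on NEON):

```c
#include <stddef.h>
#include <stdint.h>

/* Unsigned byte saturating add: each lane clamps at 255 instead of
 * wrapping. In principle this whole loop is one PADDUSB / UQADD per
 * 16 bytes, but compilers often fail to vectorize it cleanly. */
void sat_add_u8(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned s = (unsigned)a[i] + b[i];
        dst[i] = s > 255 ? 255 : (uint8_t)s;
    }
}
```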


Oh, I agree it is not foolproof; in fact I never understood why saturated math isn't 'standard' somewhere, even as an operator. Given we have 'normalisation' operators, there's always a way to find a natural-looking syntax of sorts.

But again, if you don't like the generated code, you can take it, tweak it, and use that; I've done it quite a few times.


Problem is, you have to take care to look at the compiler output and compare it to your expectations. Maybe fiddle with it a bit until it matches what you would have written yourself. Usually, it is quicker to just write it yourself...


> Problem is, you have to take care to look at the compiler output and compare it to your expectations. Maybe fiddle with it a bit until it matches what you would have written yourself.

And keep redoing that for every new compiler or version of a compiler, or if you change compile options. Any of those things can prevent the auto-vectorization.


IME, auto-vectorization is a fragile optimization that will silently fail under all sorts of conditions. I don't like to rely on it.


You can just store the generated binary / assembly and rely on that if you want stable code.


I have no idea how to get the compiler to generate wider-than-16 pshufb in the general case, for example, and for the 16-wide case, writing the actual definition of pshufb prevents you from getting pshufb while writing a version with UB gets you pshufb.


I played quite a bit with MessagePack, used it for various things, and I don't like it. My primary gripes are:

+ Objects and Arrays need to be entirely and deeply parsed. You cannot skip over them.

+ Objects and Arrays cannot be streamed when writing. They require a 'count' at the beginning, and since the size of the 'count' itself can vary in number of bytes, you can't even "walk back" and update it. It would have been MUCH, MUCH better to have "begin" and "end" tags --- err, pretty much like JSON has, really.
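To illustrate the count problem, here is a sketch of a MessagePack array header writer (format bytes per the spec; the helper name is mine). The count is baked into the first bytes, and the width of the header itself depends on the count, so you can't reserve space and patch it in afterwards without guessing:

```c
#include <stddef.h>
#include <stdint.h>

/* Write a MessagePack array header into 'out'; returns header size.
 * fixarray (1 byte) for < 16 elements, array 16 (3 bytes) for < 65536,
 * array 32 (5 bytes, big-endian count) otherwise. */
size_t write_array_header(uint8_t *out, uint32_t count)
{
    if (count < 16) {
        out[0] = 0x90 | (uint8_t)count;        /* fixarray: 1001xxxx */
        return 1;
    } else if (count < 65536) {
        out[0] = 0xdc;                          /* array 16 */
        out[1] = (uint8_t)(count >> 8);
        out[2] = (uint8_t)count;
        return 3;
    } else {
        out[0] = 0xdd;                          /* array 32 */
        out[1] = (uint8_t)(count >> 24);
        out[2] = (uint8_t)(count >> 16);
        out[3] = (uint8_t)(count >> 8);
        out[4] = (uint8_t)count;
        return 5;
    }
}
```

Going from 15 to 16 elements grows the header from 1 byte to 3, which is exactly why "walking back" to fix up the count doesn't work in a stream.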

You can alleviate the problems by using extensions, storing a byte count to skip, etc., but really, if you have to start there, you might as well use another format altogether.

Also, from my tests, it is not particularly more compact -- unless, again, you spend some time adding a hash table for keys and embed that. But at the point where that becomes valuable, you might as well gzip the JSON!

So in the end, in my experience it is a lot better to use some sort of 'extended' JSON format with the idiocies removed (forbidding trailing commas, forcing double-quotes for keys, etc.).


I must further proselytize Amazon Ion, which solves most of the listed complaints and is criminally underused:

https://amazon-ion.github.io/ion-docs/

The "object and array need to be entirely and deep parsed" and "object and array cannot be streamed when writing" are somewhat incompatible from a (theoretical) parsing perspective, though; you need to know how far to skip ahead in order to do so.

I agree that it is silly to design an efficiency-oriented format that does neither, though. Ion chooses to be shallow parsed efficiently, although it also makes affordances for streams of top-level values explicitly in the spec.


> trailing commas, forcing double-quote for keys etc

How do these things matter in any use case where a binary protocol might be a viable alternative? These specific issues are problems for human-readability and -writability, right? But if msgpack was a viable technology for a particular use case, those concerns must already not exist.


I think this is the point: when people wanted "easy" parsing and readability for humans, they abandoned binary protocols for JSON; now, people running into performance issues they don't like are starting over, re-learning all the past lessons of why and how binary protocols were used in the first place.

The cost will be high, just like the cost for having to relearn CS basics for non-trivial JS use was/is.


> Object and Array cannot be streamed when writing. They require a 'count' at the beginning

Most languages know exactly how many elements a collection has (to say nothing of the number of members in a struct).


Not if you're streaming input data where you cannot know the size ahead of time, and you want to pipeline the processing so that output is written in lockstep with the input. It might not be the entire dataset that's streamed.

For example, consider serializing something like [fetch(url1), join(fetch(url2), fetch(url3))]. The outer count is knowable, but the inner isn't. Even if the size of fetch(url2) and fetch(url3) are known, evaluating a join function may produce an unknown number of matches in its (streaming) output.

JSON, Protobuf, etc. can be very efficiently streamed, but it sounds like MessagePack is not designed for this. So processing the above would require pre-rendering the data in memory and then serializing it, which may require too much memory.
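A sketch of why delimiter-based formats stream so easily (the pull-style producer here is hypothetical): because JSON delimits arrays with '[' and ']' instead of an up-front count, each element can be written and flushed the moment it is produced.

```c
#include <stdio.h>

/* Streams a JSON array of pre-rendered items pulled from 'next'
 * (which returns NULL when exhausted). No element count is needed
 * up front, so nothing has to be buffered or counted in advance. */
void stream_json_array(FILE *out, const char *(*next)(void))
{
    const char *item;
    int first = 1;

    fputc('[', out);
    while ((item = next()) != NULL) {
        if (!first)
            fputc(',', out);
        fputs(item, out);
        first = 0;
    }
    fputc(']', out);
}
```

A count-prefixed format like MessagePack can't be written this way without either knowing the count beforehand or rendering everything into memory first.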


> JSON, Protobuf, etc. can be very efficiently streamed

Protobuf yes, JSON no: you can't properly deserialize a JSON collection until it is fully consumed. The same issue you're highlighting for serializing MessagePack occurs when deserializing JSON. I think MessagePack is very much written with streaming in mind; it makes sense to trade write-efficiency for read-efficiency, especially as the entity primarily affected by the tradeoff is the one making the cut, in msgpack's case. It all depends on your workloads, but I've done benchmarks in past work where msgpack came out on top. It can often be a good fit when you need to do stuff in Redis.

(If anyone thinks to counter with JSONL, well, there's no reason you can't do the same with msgpack).


The advantage of JSON for streaming is on serialization. A server can begin streaming the response to the client before the length of the data is known.

JSON Lines is particularly helpful for JavaScript clients where streaming JSON parsers tend to be much slower than JSON.parse.


Sorry, I was mentally thinking of writing mostly. With JSON the main problem is, as you say, read efficiency.


I think the pattern in question might be (for example) the way some people (like me) sometimes write JSON as a trace of execution, sometimes directly to stdout (so, no going back in the stream). You're not serializing a structure but directly writing it as you go. So you don't know in advance how many objects you'll have in an array.


If your code is compartmentalized properly, a lower layer (sub-object) shouldn't have to do all kinds of preparation just because a higher layer has "special needs".

For example, pseudo code in a sub-function:

    if (that) write_field('that');
    if (these) write_field('these');

With MessagePack you have to go apply that logic once to count, then again to write. And keep state for each level, etc.


That sounds less like compartmentalisation and more like you having special needs and being unhappy they are not catered to.


The topic is data serialisation formats, not programming languages.


The day is still young. Wait for it, there'll be one (or 3) soon.

With 865 dependencies pulled in by cargo :-)


is there a leftpad crate?


I use it for pairing etc. and it's great, but the rule editor is just completely bonkers. I tried several times to add a gesture move for the 'gesture' button on my 3S, and eventually gave up!

Seems you need to know the exact keycode or name of whatever key you want to use, like XF86MonBrightnessUp. Want to add a combo? Not sure how to do that either.


More importantly, what do you use for toolpath generation? I haven't found anything open source that really works...


In general, FreeCAD has 3-axis toolpath support, and I've met the kind local guy who started the original Path workbench.

For 2.5D there are several engraver options with the inkscape mightyscape plugins (scorchworks etc.)

YMMV with the island-routing and surface probing routines:

https://github.com/pcb2gcode/pcb2gcode.git

https://github.com/pcb2gcode/pcb2gcodeGUI.git

Also, we are currently evaluating: viaconstructor, inkcut, GCAM ( https://github.com/blinkenlight/GCAM.git )

If you want 3D contouring operations, then you could also try:

git://pycam.git.sourceforge.net/gitroot/pycam/pycam

And note, if you patch CAMotics to compile on Ubuntu 24.04.x, it still has a number of Qt5 GUI problems (not entirely unexpected, given Qt's impact on LTS programs.)

There are also direct gcode generator macros that support the LinuxCNC/emc language extensions. This is the most accurate methodology for turning and milling ops.

Best of luck, =3


FreeCAD. Or, if it's simple enough, I write it by hand and use Python to do loops. I've got a handful of Python scripts that can mill holes, make square pockets, write text in a vector font, etc.


Opensource options include:

- Solvespace --- limited to 2D last I checked

- FreeCAD has a workbench for CAM/toolpath generation

- BlenderCAM is a plug-in for Blender which is well-regarded, and together w/ CADsketcher/BlenderCAD works well for some folks

- Kiri:Moto

- pyCAM --- a venerable option, it worked well ages ago when I used my son's gaming computer to make toolpaths.

Rather rough (possibly outdated) list at: https://old.reddit.com/r/shapeoko/wiki/cam


Another might be gcad3d

https://www.gcad3d.org/


I've been a long-term contributor to bCNC https://github.com/vlachoudis/bCNC/ But FreeCAD seems to be an interesting choice lately as well...


Having had a play with both Supermicro and ASRock Rack boards for workstations (admittedly only H12s, so not the most recent, but from my cursory glance at newer boards, nothing has changed that much), the Supermicro boards feel like they were made in 2005, not 2024. It is ridiculous, in fact:

* No support for ACPI sleep. In 2024. Seriously.

* No support for 4-pin and 3-pin fans: 3-pin fans run at 100% speed all the time.

* IPMI web interface straight out of 2010.

* NVMe placement prevents you from using heatsinks.

* Tons of opaque jumpers on the board, with no board labelling.

The ASRock Rack equivalent board is amazing in comparison.


If you think the latest Supermicro IPMI web interface is from 2010, you haven't seen their 2010 interfaces. Would you like to download the Java Web Start file to launch your Remote Console? Use a Java applet to see just a screenshot of the VGA output? How about RAID controllers with a (PS/2) mouse-based GUI inside the OptionROM (looking at you, LSI), only to find out the RAID controller manual lied about having an HBA passthrough mode? It just creates a single RAID volume per disk, but still stores controller-specific metadata on the disks.

If ASRock Rack got the SSH serial console redirection latency down from ~1 second to the low milliseconds like the other vendors (Supermicro, Dell, HP, etc.), it would actually be usable without taking Valium before logging in.

</rant>


ASRock IPMI, at least on the X570D4U with Ryzen 5000, is unusable. You can't turn on the firewall to allow only specific IPs; it blocks everything. They said they have a beta firmware (> 01.39.00) to solve this, but it doesn't. Had to purchase a Spider to add to the machine.

Supermicro's BMC UI on the X11 does feel like 2004, but it is decently reliable. On the H13 series it is even more stable and doesn't feel outdated.


> No support for ACPI sleep.

To be fair, these are meant for datacenter applications where it would be absolutely normal for these to be either fully on or off all of the time. You could make an argument for warm spare servers, but there's other ways to accomplish it than ACPI sleep, and I'd probably consider warm spares a relatively niche need, considering I'd rather leave them hot-but-outside-production for system monitoring purposes. Would be more annoying to wake a sleeping system and find it has a failing drive/RAM/NIC, bad switchport config, etc.

> No support for 4 and 3 pins fans. 3 pins are 100% speed all the time.

To be fair, these are meant for datacenter applications where it would be absolutely normal to run your fans at full speed all of the time.

> IPMI web interface straight out of 2010

Does it have HTML5 remote console, and basic component diagnostics? If so what else would you like? I've been a datacenter tech, a linux sysadmin, and a devops engineer, depending on the company for a decade now, and really only ever used IPMI for those two things so I'm curious where the other use cases are. I've also used Dell PowerEdge and HPE servers, which have slightly nicer looking UIs but perform the exact same functions more or less.


> To be fair, these are meant for datacenter applications where it would be absolutely normal to run your fans at full speed all of the time.

Case fans for rack mount chassis are very powerful, and also can take up a significant amount of energy when running full-bore (not to mention the mechanical wear).

I haven't used a server chassis in over a decade where the fans were running at full beyond a few brief seconds at startup. I'm not sure if they used a four-pin header or some other mechanism, but fan speed control is a normal and expected feature in server hardware.

> Does it have HTML5 remote console, and basic component diagnostics? If so what else would you like?

IPMI specifically is meant to be a standardized remote management interface. It (mostly) works for basic things, but more advanced functionality is hit-or-miss, or absent entirely, leaving you at the mercy of proprietary tools. Redfish is supposed to be better, although I personally haven't used it.

Web interfaces can be hit-or-miss in terms of functionality and UX. Additionally, I've often found these web interfaces to be unstable -- either being very slow or not loading at all, to certain features of the UI not loading data or hanging the interface.


> and also can take up a significant amount of energy when running full-bore (not to mention the mechanical wear).

In datacenters, colocation datacenters like Digital Realty, you often pay a set amount per month for power out of a rack. Doesn't matter if you use 1kWh or 3000kWh, you paid $X for electricity for each of Y number of server racks for the billing period. So this is another case where their ideal customer just frankly doesn't care how much electricity the fans consume. Supermicro just doesn't care about homelabbers because that's not the people they tend to do business with. Datacenters buy Supermicro servers and sell their old ones on Craigslist to homelabbers.

> It (mostly) works for basic things, but more advanced functionality is hit-or-miss, or absent entirely, leaving you at the mercy of proprietary tools.

You didn't really say anything that you're missing from IPMI specifically here. I was really looking for like a specific feature that you found missing, because customers often think they want more when something is "simple" but don't really even have a use case for more. And even more often, there are times where the desired solution a customer is looking for isn't the best solution.


So this completely removes it as an option for all the SMB out there that host on prem.


I don't at all see how you're getting the take-away that it completely removes it as an option. I work at a company of ~60 that hosts Supermicro servers on prem. We just don't run them underneath our desks, which is where it sounds like you're expecting them to go.


Word. I borrowed a circa 2005 – 2010 server and the fan roared on start-up. I would not want that on full-time for reasons of noise, power and wear.


I've had a 40%+ (8 out of 20) RMA rate with ASRock boards (from PCIe drives dropping to weird gremlins), while I have replaced two out of over 300 Supermicro boards, all of them running 10y+.

The IPMI interface sure is nicer though


Every vendor has their quirks. I have a pair of ASRock Rack ROMED8-2T/BCM boards, but my M.2 NVME boot drive is no longer detected if I update the BIOS past v3.50. Unfortunately, that means no support for resizable BAR in my configuration.

I have a pair of Supermicro H12SSL-NT boards to use for my next couple builds. I might be trading one set of issues for another, but I'm optimistic they'll work well for my purposes.


Agreed. If nothing else, the IPMI web on supermicro randomly deciding you need to re-login constantly definitely feels very 2004.

It's been this way for at least a decade, too. It's like it just forgets all its session variables sometimes.


My iDRAC 8s do that too.


I agree! I love my ASRock Rack board (X570si). I had written off ASRock as "budget" but it's been rock solid and had loads of features like you said.


I don't want ACPI sleep on a server.


Why not? Assuming your servers are not all at or near full load at all times, you can put some to sleep as warm spares.


Asking myself the same question, but IME Linux support for sleep states is (was?) generally horrible due to an absolute lack of standards enforcement, and that's on laptops, where it's a priority. I've only had one laptop (my most recent one) work 100% with sleep states, and only after replacing the NVMe drive that froze roughly 33% of the wake-ups due to some obscure bug (but worked fine otherwise).


Maybe lucky, but haven’t had trouble with Linux and sleep for about 15 years. Dell and Framework.


I have Dell Latitude and Dell Precision laptops. Both have awful support for sleep. On the Latitude I had to use an LTS kernel to make it work 90% of the time; the other 10%, it just wakes up randomly after closing the lid. The Precision wakes itself up instantly after being put to sleep, due to a bug with the dedicated NVIDIA GPU.


I bought the “developer” edition, that may have helped.


They support wake on lan (most likely) and you could power up from IPMI REST API


IPMI is such an improvement on the old ISA-style I/O chips, and I'm not even sure they are more expensive these days! It's not just the remote/web access that's great; plain 'introspection' with tools like ipmiutil and ipmitool is super useful, even if you don't have a huge 'enterprise' rack installation.


Haven't played with these yet, but they do look interesting. I might also have to add support for the new IO blocks in my AVR simulator, simavr [0]. I still use the AVR a lot; since they have no pipeline or other fancy CPU optimizations, they are 'cycle accurate' and a lot closer to a PIO than most other, more complex 32-bit CPUs. Now that they ALSO have an equivalent to PIO, it might even help reach faster IO speeds when toggling pins.

[0]: https://github.com/buserror/simavr


PICs have had these CPU-independent logic blocks for a while too, so it makes sense they ported the idea to AVR.

I don’t remember exactly how many “things” you can do with them in comparison to PIO though. Probably a lot less, as it’s pure gates and not a set of extra instructions like PIO.

