radiorental's comments

Grafana can get pretty info dense very quickly. Try some of the dashboards or the Explore feature here https://play.grafana.org/

I worked there as a product designer for a couple of years; I now work on even more data-dense UI in the cyber-security domain, e.g. https://elastio.com/blog/cyber-recovery/three-clicks-to-rans...

As with almost all UI design, the answer is "It Depends". If you could provide a little more context around the domain you're working in, I'm sure I could point you at some specific examples.


The premise of Tufte's work makes sense until you try to apply it to functional and usable user interfaces.

He has strong opinions, strongly held, but as someone who's designed industrial-strength UIs for over 20 years (networking CLIs & UIs, CAD modeling/simulation, DevOps dashboards, cybersecurity tooling), I've read all his books and attended his lectures... he's a king with no clothes.


I've loved some of those for years and had never heard of others. Thanks for the recommendations; it sort of speaks to how special and unique they are (o;


They do the first two (clean and eat bugs) and will happily take sugar in the form of bread and cake crumbs you leave out on surfaces. We're already quite symbiotic.


Genuine question, are distributed systems naturally more resilient?

I can see arguments for both sides. Your point and then the hidden failure modes without central observability and ownership. Nothing exists in isolation.


Not distributed per se, but diversity makes a huge difference in resilience.

When everybody is using the exact same tech, the fallout of an incident can be huge because it will affect everybody everywhere at the same time. Superficially it might seem efficient and smart, but the end result is fragility.

Diversity of species is what nature ended up with as the ultimate solution: the individual species do not matter, but life as a whole will be able to flourish. With technology, we're now moving the other way: every single thing gets concentrated into one of the few cloud providers. Resilience decreases, fragility increases.


I prefer "heterogeneity" to "diversity". Different implementations of similar processes generally make different tradeoffs, incurring different bottlenecks and resulting in an ecosystem with a higher statistical probability that one relative Black Swan won't wipe out a key structural function in its totality.

It's actually a hallmark of building fault-tolerant systems and ecosystems. Pity the economists and MBAs can't be convinced of it. Otherwise there'd be less push to create TBTF institutions.


Distribution alone doesn't make a system resilient. A distributed system can help with resilience for anything related to network or hardware failure, but even then you need to make sure the different resources don't have a hard dependency on each other.

If you want a resilient system, redundancy and automatic failover are really important, along with solid error handling.

Think about a distributed data store, for example. You may spread all your data across multiple regions, but if each region manages a shard of the data and the shards aren't replicated, then you still lose functionality when any one region goes down. If you instead have a complete copy of the data in each region, and a system to automatically switch regions if the primary goes down, your system is much more resilient to outages (though also more complex and expensive).
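
A minimal sketch of that trade-off (the region names and the toy key-to-region mapping are made up for illustration, not taken from any real system):

    REGIONS = ["us-east", "us-west", "eu-west"]  # hypothetical region names

    class ShardedStore:
        """Each region owns a disjoint shard of the keys; no copies elsewhere."""
        def __init__(self, up_regions):
            self.up = set(up_regions)

        def get(self, key):
            owner = REGIONS[ord(key[0]) % len(REGIONS)]  # toy shard-by-key rule
            if owner not in self.up:
                raise RuntimeError(f"shard owner {owner} is down; '{key}' unavailable")
            return f"value-of-{key}@{owner}"

    class ReplicatedStore:
        """Every region holds a full copy; reads fail over to any live region."""
        def __init__(self, up_regions):
            self.up = set(up_regions)

        def get(self, key):
            for region in REGIONS:  # try the primary first, then fall back
                if region in self.up:
                    return f"value-of-{key}@{region}"
            raise RuntimeError("all regions are down")

    # Simulate an outage of us-east: the sharded store loses some keys entirely,
    # while the replicated store keeps serving everything from surviving regions.
    up = ["us-west", "eu-west"]
    sharded, replicated = ShardedStore(up), ReplicatedStore(up)
    for key in ("a", "b", "c"):
        try:
            print("sharded:   ", sharded.get(key))
        except RuntimeError as err:
            print("sharded:   ", err)
        print("replicated:", replicated.get(key))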


It does not guarantee resiliency, but it does increase it.

If tomorrow mastodon.social disappears, the network might lose 80% of its content, but recovery could be possible even if the server never comes back.


My point was just that resilience still depends on how a system is distributed and what else is done.

Distribution alone doesn't really make a difference, though pairing it with redundancy and failover is going to get you pretty far.

The case of mastodon.social is really a question of whether the value there is the network and protocol itself or the user-created content posted there. If it's the user content, the value is lost when the one host goes away. If the value is the network and protocol, then yes, the value of the network is still there even though the data is gone. It does raise an interesting question of whether Mastodon is really considered distributed or not: the network is, and hosts use a shared protocol, but the data isn't really distributed.


Yes, there is the question of network vs data :). And as you mention, while some data ends up being distributed with ActivityPub, the protocol is not designed to allow restoration.

One point I find interesting too is that distributed networks often allow more agency to external actors. For example, if you believe the resiliency of the mastodon.social instance is not enough for you, you can decide to host your own server with your preferred criteria.


That's really where ActivityPub starts to rub me the wrong way. Server admins really need moderation power since everything is hosted on their hardware, but it also is a poison pill for decentralization.

I can host my own server and make my own rules, but every other admin can just ban my instance.


I feel like that's actually a counterexample. At least most people with mastodon.social as their home server will probably not have a backup of their followed/following graph and never be able to recover.


With a large number of small providers, more often than not some of them will fail on any given day, but stars need to align really well to get a half-of-the-internet-is-down kind of failure caused by AWS or Cloudflare.
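
A back-of-the-envelope illustration (the numbers are made up: 200 independent providers, each with a 1% chance of failing on a given day):

    from math import comb

    def prob_at_least(k, n, p):
        """P(at least k of n independent providers fail on the same day)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n, p = 200, 0.01  # assumed: 200 small providers, 1% daily failure rate each

    print(prob_at_least(1, n, p))       # ~0.87   -> somebody fails most days
    print(prob_at_least(n // 2, n, p))  # ~3e-142 -> "half the internet is down": effectively never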


Not exactly “more resilient”, but rather, “the only way to gain more resiliency over a single system”.

A distributed system can be more resilient, but it also adds complexity, making it (sometimes) less reliable.

A single system with a lot of internal redundancy can be more reliable than a poorly implemented distributed system, which is why at a smaller scale it’s often better to scale vertically until a single node can’t handle your needs.

Distributed systems are more of a necessity than “the best way”. If we could just build a single node that scaled infinitely, that would be more reliable than a distributed system.


“A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.” — Leslie Lamport, 1987


Distributed systems with tight coupling and no redundancy are less resilient. It's not so much a question about distribution but more about redundancy and coupling.


> Genuine question, are distributed systems naturally more resilient?

Only if they've prioritized the "availability" component from the CAP theorem.


>are distributed systems naturally more resilient?

All else being equal: Yes.

It's like asking if a RAID1 is more resilient than a single drive.


RAID1 is mirrored. That is not what I would call a typical distributed system. It is a very redundant system. Like a cluster.

A distributed system without redundancy would rather be something like data striped across disks without parity.

And that actually makes it less resilient, because failure of one component can bring down the whole system and the likelihood of failure is statistically higher because of the higher number of components.
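
A quick back-of-the-envelope comparison makes the point (the per-disk failure probability is an arbitrary assumption, and this ignores rebuild windows and correlated failures):

    p = 0.05  # assumed probability that a given disk fails within some period
    n = 4     # number of disks in the striped (no-parity) array

    striped_no_parity = 1 - (1 - p) ** n  # any single failure loses the whole array
    mirrored_pair = p ** 2                # data lost only if both copies fail

    print(f"single disk:   {p:.4f}")                   # 0.0500
    print(f"{n}-disk stripe: {striped_no_parity:.4f}")   # 0.1855 -> worse than one disk
    print(f"2-disk mirror: {mirrored_pair:.4f}")        # 0.0025 -> better than one disk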


When I think of distributed systems, the RAID1 analogy seems much more applicable than RAID0.

The term "distributed" has been traditionally applied to the original design of the TCP/IP protocol, various application-layer protocols like NNTP, IRC, etc., with the common factor being that each node operates as a standalone unit, but with nodes maintaining connectivity to each other so the whole system approaches a synchronized state -- if one node fails. the others continue to operate, but the overall system might become partitioned, with each segment diverging in its state.

The "RAID0" approach might apply to something like a Kubernetes cluster, where each node of the system is an autonomous unit, but each node performs a slightly different function, so that if any one node fails, the functionality of the overall system is blocked.

The CAP theorem comes to mind: the first approach maintains availability but risks consistency, the second approach maintains consistency but risks availability. But the second approach seems like a variant implementation strategy for what is still effectively a centralized system -- the overall solution still exists only as a single instance -- so I usually think of the first approach when something is described as "distributed".
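
A toy sketch of the two behaviours during a partition (not how any particular system implements it; the quorum size and the last-write-wins merge are illustrative choices):

    import time

    class APNode:
        """Availability-first: keeps accepting writes during a partition,
        so replicas can diverge and must be reconciled later (last-write-wins here)."""
        def __init__(self):
            self.data = {}  # key -> (timestamp, value)

        def write(self, key, value):
            self.data[key] = (time.time(), value)

        def merge(self, other):
            for key, tv in other.data.items():
                if key not in self.data or tv > self.data[key]:
                    self.data[key] = tv

    class CPNode:
        """Consistency-first: refuses writes unless it can still reach a quorum."""
        def __init__(self, peers_reachable, quorum):
            self.peers_reachable, self.quorum, self.data = peers_reachable, quorum, {}

        def write(self, key, value):
            if self.peers_reachable + 1 < self.quorum:  # +1 counts this node itself
                raise RuntimeError("no quorum: rejecting write to stay consistent")
            self.data[key] = value

    # Partitioned AP nodes both accept conflicting writes, then reconcile afterwards.
    a, b = APNode(), APNode()
    a.write("status", "written on node A")
    b.write("status", "written on node B")
    a.merge(b)
    print(a.data["status"][1])  # newest write wins after the partition heals

    # A CP node cut off from its peers refuses the write instead of diverging.
    c = CPNode(peers_reachable=0, quorum=2)
    try:
        c.write("status", "won't be accepted")
    except RuntimeError as err:
        print(err)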


You're assuming a stateful system where the state is distributed throughout the components of the system. For a stateless component of a distributed system, you don't need redundancy to recover from an outage.

>likelihood of failure is statistically higher because of the higher number of components

Yes, absolutely true, but resiliency for a distributed system is not necessarily like your example of data striped without parity, unless we're specifically talking about distributed storage.


To the GP's point - if you lose the RAID controller, then you've lost a whole lot more than just a single drive failure.


The controller isn't stateful; it's just an interface to the disks. If the controller fails, but the disks haven't, then all you've lost is the time it takes to plug the disks into a new controller.

With RAID1, there's also nothing specific to the RAID configuration inherent in the way the data is encoded on the disk. You might have to carefully replicate your configuration to access the filesystem from a failed RAID0 array, but you can just pull an individual disk out of a RAID1 array and use it normally as a standalone disk.


Yes, RAID isn't a backup, but it is resilient.

You will have a better chance at uptime with RAID than with a single drive, so you hopefully don't have to climb up ventilation ducts, walk across broken glass, and kill anyone sent to stop you on your quest to reconnect those cables that were cut.


Because it's Teenage Engineering, and hardware design wankery, not product design, is what they do.

Need a BT speaker? That'll be $550, thank you; btw the speaker cones are completely exposed because it looks cool.


It just blows my mind Apple can get people to pay 3.5K to join their beta test program.


Apple in a way is like Supreme, where the brand transcends the actual products. Folks stand in line to buy the latest Supreme merch even though it's just regular stuff with the Supreme brand written on it. Apple has done the same with its products, where it's cool to buy them just because you like Apple as a brand and not necessarily because you need the underlying product. As a consumer I resent this sort of behavior, but whenever I look at it through the lens of business, I always come away with the thought: "this is f** genius"


This is the same drive which gets people to buy Louis Vuitton/Gucci/etc.: the added value does not lie in the product itself but in the brand recognition. Those high product prices are part of the brand appeal for those who want to advertise affluence.


The title of the article is specific to -cordless- tools. Of course there are downsides around battery management, and corded tools can have more torque, but that's not what the article is about.

I think the point of the article is that cordless is good and, after some major advances away from NiCad and brushed motors, things have pretty much plateaued.

GO TEAM RED!!!


The title specifically claims that we are in a golden age of cordless tools. Having to commit to a single manufacturer or submit to managing a herd of batteries of varying species is not my idea of a golden age just yet.

They are not as bad as they used to be.


Team Yellow all the way, rhymes with default so you know it works.

But seriously, find a brand that is generally good; then you only have to worry about one type of charger and you can collect a small variety of batteries. I have five 5 Ah batteries that I use for everything, and they take up minimal space. The only time I come close to using them all up is when I mow the lawn and deplete 2+.


Problem with the one-brand wonder is that if you join the red army for M18, eventually you'll get an M12 for some reason, and suddenly you're on two battery systems without even noticing.

Big Yellow has the same thing, though those packs look even more similar. It can be useful to have the different packs visually distinct.


Yeah, but still... I mean, a cordless screwdriver is nice, but e.g. a cordless drill only makes sense if you are regularly going to drill 10 holes in different places all around the house - otherwise the time spent plugging in your tool is negligible compared to the time you will spend drilling the holes. Not to mention you have to make sure your batteries are charged. So they make sense for professionals, but not for regular "DIY people" who only use a drill once every few months...


My cordless drill is my cordless screwdriver; it's also my cordless IKEA assembler, etc. You just need one that has a clutch and a featherable trigger.


Action sports cams, dashcams, handheld gaming devices such as the Switch and Steam Deck; I'm sure there are plenty more applications. All fairly niche, but it's a market.


The Nintendo Switch has sold 132 million units, so I'm not sure how niche that is. Presumably the successor is going to keep using microSD cards, and filesizes will increase in line with graphical capabilities.


Cartridges are pretty popular, if only because you can sell / borrow them.

I have an SD card for the online-only titles, but it's pretty small IIRC.

But yeah, this still seems like a major use case for larger MicroSD cards.


I feel gaming devices are the primary target... as someone who shot wedding videography a few years ago, I can say there is a limit to how much footage you want to store on one card, if only because you can easily lose them. The smaller storage limit is a feature, not a bug :) It forces "normal" users to actually figure out a backup solution. Most action cams are going to take a long while to fill up 2TB.


I have a Steam Deck and have no interest in a 2TB SD card. I'm subscribed to some Steam Deck communities, and the preference, given the reliability of SD cards, is to have multiple smaller cards rather than risk a single big SD card failing and having to reinstall everything.


I have a Steam Deck, and my primary interest would be to use it on mine. I'd apply the WORM approach: load it up with ROMs and only ROMs. I have a backup of all that data, so a failure would be a minor inconvenience.


> Making already highly illegal actions more illegal.

Regulation isn't stiffer penalties alone; it's oversight.

If I have to pass 2 checks vs 1, the incentive to fuck around decreases.

