
This was happening en masse, and perhaps still does - the cloud backup was unencrypted. Originally it was encrypted; then, one day, Google stopped counting it towards your storage quota, but it became unencrypted. But even before that, Meta had the encryption keys (and probably still does).

When you get a new phone, all you need is your phone number to retrieve the past chats from backup; nothing else. That proves, regardless of the specifics, that Meta can read your chats - they can send them to any new phone.

So it doesn’t really matter that it is E2EE in transit - they just have to wait for the daily backup, and they can read it then.


scp works as long as the app is not making changes at the same time.

If there's a chance someone is writing to the database during the copy, you should run "sqlite3 database.sqlite .backup" (or ".dump") first; or, on a new enough SQLite, use the bundled sqlite3_rsync utility, which is like rsync except that it coordinates with SQLite's updates to guarantee a consistent copy at the other end.
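If you'd rather do it from application code, Python's sqlite3 module exposes the same online-backup mechanism that the .backup command uses. A minimal sketch (file names are placeholders):

    import sqlite3

    src = sqlite3.connect("database.sqlite")
    dst = sqlite3.connect("backup.sqlite")
    try:
        with dst:
            src.backup(dst)  # online backup; consistent even if the app is writing
    finally:
        dst.close()
        src.close()

The resulting backup.sqlite can then be scp'd around like any other file.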


Great tips and you’re right.

We just flip into an app-side maintenance mode before we run the backup so we know there are no writes, scp the file, and then flip it back. The shell script is super simple, and we’ve only needed nightly backups so far, so we run it from cron at midnight when no one is working. Ezpz. Literally took us an hour to implement and it’s been chugging along without fail for nearly 2 years now.

If we ever need more than that I’d probably just set up Litestream replication.


The real analog copper lines were kind of limited to approx 28k - more or less the Nyquist limit. However, the lines at the time were increasingly being replaced with digital 64 kbit/s lines that sampled the analog signal. So the 56k standard aligned itself to the actual sample times, and that is what allowed it to reach 56 kbps (some timing/error tolerance still eats away at your bandwidth).

If you never got more than 24-28k, you likely still had an analog line.


56k was also unidirectional: you had to have special hardware on the other side to send at 56k downstream. The upstream was 33.6 kbps I think, and that was in ideal conditions.


The special hardware was actually just a DSP at the ISP end. The big difference was before 56k modems, we had multiple analog lines coming into the ISP. We had to upgrade to digital service (DS1 or ISDN PRI) and break out the 64k digital channels to separate DSPs.

The economical way to do that was integrated RAS systems like the Livingston Portmaster, Cisco 5x00 series, or Ascend Max. Those would take the aggregated digital line, break out the channels, hold multiple DSPs on multiple boards, and have an Ethernet port (or sometimes another DS1 or DS3 for a more direct uplink) with all those parts communicating inside the same chassis. In theory, though, you could break out the line in one piece of hardware and then have a bunch of firmware modems.


The asymmetry of 56k standards was 2:1, so if you got a 56k6 link (the best you could get in theory IIRC) your upload rate would be ~28k3. In my experience the best you would get in real world use was ~48k (so 48 kbps down, 24 kbps up), and 42k (so 21k up) was the most I could guarantee would be stable (bearing in mind “unstable” meant the link might completely drop randomly, not that there would be a blip here or there and all would be well again PDQ afterwards) for a significant length of time.

To get 33k6 up (or even just 28k8 - some ISPs had banks of modems that supported one of the 56k6 standards but would not support more than 28k8 symmetric) you needed to force your modem to connect using the older symmetric standards.


Yeah, 28k sounds closer to what I got when things were going well. I also forget whether they were counting in lower case 'k' (x1000) or upper case 'K' (x1024) units/s, which obviously has an effect as well.


The lower case "k" vs upper case "K" thing is an abomination. The official notation is lower case "k" for 1000 and "Ki" (kibi) for 1024. It's an abomination too, but it's the correct abomination.


That's a newer representation, mostly because storage companies always (mis)represented their storage... I don't think any ISPs really misrepresent k/K in kilo bits/bytes


Line speed is always base 10. I think everything except RAM (memory, caches etc.) is base 10 really.


It’s not necessarily picky - it’s sometimes about physically different perception.

When DLP projectors first came out, I couldn’t watch them. I would see colors breaking in fast motion scenes and whenever I would move my head even slightly (and … we all move our head slightly often when watching a movie).

When I told other people, some of them nodded in understanding, but the vast majority thought I was making things up - for them, it was a rock solid picture.

One of my friends replied: “I can see about 300 Hz. Not all the time - only when I have saccadic movements; but that means many fluorescents, DLPs and other light sources drive me crazy. I guess you’re also a member of the crazy club”

Some people can hear 26 kHz. Some people can see DLPs. Some people can see the alternating pattern….


Somewhat related - I want control over devices in my home. Too many things these days need an internet connection to be useful. I run my own OpenWRT router and set up firewall policies for them so they only get the access they need to provide their function. But I'm getting tired of it.

I'm looking for a nice tool that would give me that "control" over my home network -- at the very least, proper observability. Like "little snitch / open snitch" but running on my home router... and I haven't found anything like that yet.


But also, there is more traffic on the bridge.

The word processors of 30 years ago often had limits like “50k chapters” and required “master documents” for anything larger. Lotus 1-2-3 had far fewer columns and rows than modern Excel.

Not an excuse, of course, but the older tools are not usable anymore if you have modern expectations.


Python’s dicts for many years did not return keys in insertion order (since Tim Peters improved the hash in iirc 1.5 until Raymond Hettinger improved it further in iirc 3.6).

After the 3.6 change, they were returned in insertion order. And people started relying on that - so at a later stage (3.7), this became part of the language spec.
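A quick illustration (on any CPython 3.7+):

    d = {}
    d["banana"] = 1
    d["apple"] = 2
    d["cherry"] = 3

    print(list(d))  # ['banana', 'apple', 'cherry'] - insertion order,
                    # guaranteed by the language spec since 3.7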


OOP is different things to different people, see e.g. [0]. Many types of OOP that were popular in the past are, indeed, dead. Many are still alive.

[0] https://paulgraham.com/reesoo.html


I'd personally declare dead everything except 3 and 4 because, unlike the rest, polymorphism is genuinely useful (e.g. Rust traits, Kotlin interfaces)

trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015) they were renamed to interfaces, because Kotlin traits are basically Java interfaces. [0]

[0]: https://blog.jetbrains.com/kotlin/2015/05/kotlin-m12-is-out/...


1 is about encapsulation, which makes it really easy to unit test stuff. Say you need to access a file or a database in your test: you can write an abstraction on top of the file or DB access and mock that (see the sketch at the end of this comment).

2 indeed never made sense to me, since once everything is ASM, "protected" means nothing, and if you can get a pointer to the right offset you can read "passwords". The claim that enforcing what can and cannot be reached from a subclass helps security never made sense to me.

3 I never liked function overloading; I prefer optional arguments with default values. If you need a function to work with multiple types for one parameter, make it a template and constrain what types can be passed.

7 interfaces are a must have for when you want to add tests to a bunch of code that has no tests.

8 rust macros do this, and it's a great way to add functionality to your types without much hassle

9 idk what this is
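A minimal sketch of point 1 (all names are hypothetical):

    from unittest import mock

    class UserStore:
        # thin abstraction over the database
        def get_name(self, user_id):
            raise NotImplementedError  # the real one would query the DB

    def greet(store, user_id):
        return f"Hello, {store.get_name(user_id)}!"

    def test_greet():
        store = mock.Mock(spec=UserStore)
        store.get_name.return_value = "Ada"
        assert greet(store, 1) == "Hello, Ada!"

The test never touches a real database; it only exercises the logic around it.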


Indeed. But ... do not confuse your model with reality.

There's a folk story - I don't remember where I read it - about a genealogy database that made it impossible to e.g. have someone be both the father and the grandfather of the same person. Which worked well until they had to put in details about a person who had fathered a child with his own daughter - and was thus both the father and the grandfather of that child. (Sad as it might be, it is something that can, in fact, happen in reality, and unfortunately does).

While that was probably just database constraints of some sort which could easily be relaxed, and not strictly "unrepresentable" like in the example in the article - it is easy to paint yourself into a corner by making a possible state of the world, which your mental model deems impossible, unrepresentable.


Your example doesn’t validate your point. That’s a valid state made unrepresentable, not an invalid state made unrepresentable. Your example simply demonstrates a poorly architected set of constraints.

The critical thing with state and constraints is knowing at what level the constraint should be. This is what trips up most people, especially when designing relational database schemas.


Any assumption made in order to ship a product on time will eventually be found to have been incorrect, and will cost 10x what it would have taken to properly design the thing in the first place. The problem is that if you do that proper design you never survive to the stage where you have that problem.

I think the solution to that is to continuously refactor, and to spell out very clearly what your assumptions are when you are writing the code (which is an excellent use for comments).


Continuous refactoring is much easier with well constrained data/type schemas. There are fewer edge cases to consider, which means any refactoring or data migration processes are simpler.

The trick is to make the schema represent what you need - right now - and no more. Which is the point of the “Make your invalid states unrepresentable” comment.


I do see how it does, in a way: something the designer thought was an "invalid state" turns out to be a valid and possible state in the real world. In terms of UI/UX, it's the uncomfortable long pause before something happens and the screen renders (lack of feedback, the feeling that the system hangs). Or content flicker when the window is resized or dragged. Just because somebody thought "oh, this clearly is an invalid state and can be ignored".

The real world and user experience requirements have a way of intruding on these underspecified models of how the world "should" be.


That’s still a poorly designed system. For UI there should be a ‘view model’ that augments your model, that view model should be able to represent every state your UI can be in, which includes any ‘intermediate’ states. If you don’t do this with a concrete and well constrained model then you’re still doing it, but with arbitrary UI logic, and other ad-hoc state that is much harder to understand and manage.

Ultimately you need to make your own pragmatic decisions about where you think that state should be and how it should be managed. But the ad-hoc approach is more open to inconsistencies and therefore bugs.
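A minimal sketch of such a view model (names are made up; the point is that every UI state, including the intermediate ones, is an explicit case):

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Loading:          # the "uncomfortable pause" is a real, representable state
        pass

    @dataclass(frozen=True)
    class Loaded:
        items: tuple

    @dataclass(frozen=True)
    class Failed:
        message: str

    ViewState = Union[Loading, Loaded, Failed]

    def render(state: ViewState) -> str:
        # every state the screen can be in has an explicit, handled case
        if isinstance(state, Loading):
            return "spinner"
        if isinstance(state, Loaded):
            return f"{len(state.items)} items"
        return f"error: {state.message}"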


> at what level the constraint should be

Hi, can you give an example? Not sure I understand what you're getting at there.

(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc.

All models are wrong, and that's inevitable/fine, as long as the model can be altered without pain. Focus on ease of improving the model (eg can we do rollbacks?) is more valuable than getting the model "right").


> Hi, can you give an example? Not sure I understand what you're getting at there.

An utterly trivial example is constraining the day-field in a date structure. If your constraint is at the level of the field then it can’t make a decision as to whether 31 is a good day-value or not, but if the constraint is at the record-structure level then it can use the month-value in its predicate and that allows us to constrain the data correctly.
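A sketch of that in Python (a hypothetical Date record; the point is only where the check lives):

    import calendar
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Date:
        year: int
        month: int
        day: int

        def __post_init__(self):
            # the day check needs the month (and the year, for leap Februaries),
            # so the constraint lives at the record level, not on the day field alone
            if not 1 <= self.month <= 12:
                raise ValueError(f"bad month: {self.month}")
            if not 1 <= self.day <= calendar.monthrange(self.year, self.month)[1]:
                raise ValueError(f"bad day for {self.year}-{self.month:02d}: {self.day}")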

When it comes to schema design it always helps to think about how to ‘step up’ to see if there’s a way of representing a constraint that seems impossible at ‘smaller’ schema units.


I get it - thanks.


That sounds like this (in)famous Stack Overflow question: https://stackoverflow.com/a/6198257


The Amiga 500 had high-res graphics (or high-color graphics … but not on the same scanline), multitasking, and 14-bit sound (with a lot of work - the hardware had 4 channels of 8-bit DACs but a 6-bit volume, so …)

In 1987, and with 512K of RAM. It was very usable for work.


A 320x200, 6-bit color depth wasn't exactly a pleasure to use. I think games could double the res in a certain mode (was it called 13h?)


For OCS/ECS hardware, 2-bit HiRes - 640x256 or 640x200 depending on region - was the default resolution for the OS, and you could add interlacing or up the color depth to 3 and 4 bit at the cost of response lag; starting with OS 2.0 the resolution setting was basically limited by chip memory and what your output device could actually display. I got my 1200 to display crisp 1440x550 on my LCD by just sliding the screen parameters to max on the default display driver.

Games used either 320h or 640h resolutions, 4-bit or fake 5-bit known as HalfBrite, which was basically 4-bit with the other 16 colors being the same but at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.


You might be thinking of DOS mode 13h, which was VGA 320x200, 8 bits per pixel.


I remember playing with mode 13h, writing little graphics programs with my Turbo C compiler. Computers were so magical back then.


And 6-bits per colour component.


The VGA color palette was 18-bit (256K colors), but input into the palette was 8 bits per channel. (63,63,63) is visibly different from (255,255,255).

http://qzx.com/pc-gpe/tut2.txt

http://qzx.com/pc-gpe/


Sorry, I'm not exactly sure what you're saying. I know very well how it works, as I write a lot of demos and games (still today) for mode 13h (see https://www.pouet.net/groups.php?which=1217&order=release) and I can program the VGA DAC palette in my sleep. Were you referring to the fact that you write 8 bits to the palette registers? That's true, you do, but only 6 bits are actually used, so it effectively wraps around at 64. There are 6 bits per colour component, which as you pointed out is 18-bit colour depth.
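To make the wrap-around concrete, a tiny sketch (a hypothetical helper mirroring what the 6-bit DAC does with an 8-bit write):

    def dac_write(value_8bit):
        # only the low 6 bits of a palette write reach the VGA DAC,
        # so values effectively wrap at 64
        return value_8bit & 0x3F

    print(dac_write(63), dac_write(255))  # 63 63 - the same colour on screen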

Btw I was a teenager when those Denthor trainers came out and I read them all, I loved them! They taught me a lot!

