Hacker News: zorlem's comments

I highly doubt that. The risk is too high.

In my country there was a big scandal a few years ago when a couple of radio stations started playing Billboard Hot 100 songs. I guess they couldn't source the new songs in time from reputable sources for some reason (licensing restrictions, logistical failure, no idea...).

So they simply downloaded them from local torrent trackers. The thing blew up because a guy who was uploading and seeding the albums had put his own catchy promotional jingle in the middle of most songs.


Use Shamir's Secret Sharing Scheme [0] to store your master password and distribute pieces of the key to several relatives and close friends.

[0] http://point-at-infinity.org/ssss/
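For illustration, here is a minimal sketch of Shamir's scheme over a prime field (the ssss tool linked above works over GF(2^n) and handles encoding details; use an audited implementation for anything real):

```python
# Toy Shamir's Secret Sharing: split a secret integer into n shares so that
# any k of them recover it, and fewer than k reveal nothing.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a short secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which can recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789
assert combine(shares[1:4]) == 123456789
```

With a 3-of-5 split, any two relatives colluding (or losing their shares) still can't recover or leak your master password.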


Take a look at Keepass2Android, I like it a lot.

> An "acceptable" alternative would be to implement an indirect clipboard in which a trusted keyboard application can replay the string for a short duration but that's not going to happen either.

Actually Keepass2Android does just that - it provides an (optional) dedicated keyboard layout that you can install and activate.

> Whats worse is that probably most people who use a PWM on a mobile device choose to have a PIN lock on it after supplying the initial PW which reduces the PW complexity even further.

Some password managers have the option of requiring only part of your password after the database has been unlocked once - you get a limited (configurable) number of tries, and the database locks again if you don't guess the short code.
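A toy model of that quick-unlock flow might look like this (all names and parameters here are made up for illustration; real managers keep the decrypted key in memory rather than hashing a short code, which would be brute-forceable offline):

```python
import hashlib, hmac, os

class QuickUnlock:
    """After a full unlock, only the last `tail_len` characters of the master
    password are required, with a small attempt budget before a full re-lock."""

    def __init__(self, master_password: str, tail_len: int = 3, max_tries: int = 3):
        self.salt = os.urandom(16)
        self.digest = self._hash(master_password[-tail_len:])
        self.max_tries = max_tries
        self.tries_left = max_tries
        self.locked = False

    def _hash(self, code: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", code.encode(), self.salt, 100_000)

    def try_unlock(self, code: str) -> bool:
        if self.locked:
            return False            # full master password required again
        if hmac.compare_digest(self._hash(code), self.digest):
            self.tries_left = self.max_tries
            return True
        self.tries_left -= 1
        if self.tries_left == 0:
            self.locked = True      # database locks after too many failures
        return False

q = QuickUnlock("correct horse battery staple")
assert q.try_unlock("ple")       # last 3 characters unlock
assert not q.try_unlock("xyz")   # wrong code burns one attempt
```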


AFAIK Apple doesn't allow replacement keyboards, which is what the post was about.

I use Keepass with an eToken on my PC; I found too many things about the Android version of it that I don't like :)

As for the PIN part, most people will set up a 4- or 5-digit PIN, and you would be surprised how many PIN locks can be broken using the 10 most common PINs while avoiding most lockouts. If you have the device itself, getting around the PIN lockout is trivial to begin with. If you're talking about Keepass2Android specifically, it has many issues, including not encrypting the DEK in memory while active, caching far too much on disk, and other quite questionable implementation choices, so there's a good chance you won't have to brute-force anything.


Apple allows replacement keyboards as of iOS 8.


I, too, don't like the way MongoDB works and is marketed, but I would not bet too strongly on them failing. Only 10 years or so ago MySQL did pretty much the same, but they managed to succeed, despite there being better alternatives at the time.

Even today, after all the development effort put into MySQL and its derivatives, there are still better and more capable databases, but that doesn't prevent MySQL from being one of the top three RDBMSes (and arguably the most popular FOSS RDBMS).


Depending on your needs, you might check out git-annex, unison and the others mentioned in other comments.


To answer that you will have to define "trustworthy". I'm personally leaning toward trusting the curves proposed by Dan Bernstein [1], since he clearly explains the reasons for choosing the specific parameters and they're demonstrably valid.

[1]: http://cr.yp.to/ecdh.html


> And what if it's not your PhD thesis in Libre Office but a busy database server which can easily get corrupted if you don't flush.

No properly configured and working ACID-compliant RDBMS should lose any data when the server is reset or stopped. If it does, the problem lies in the hardware, OS, configuration or the RDBMS itself. The application must also be able to handle the DB disappearing, though; sadly, this is often not the case.


You mean lose data after it's been committed. I can send something to the database just as it's dying and it will be lost.


A SQL commit is not allowed to finish before the data is safely on disk (that's the D in ACID).


That's assuming the RDBMS can tell. The disks may lie :|


You should strive not to use such disks in your server. A machine reset won't power off the disks, though.


Or the CPU or the memory or the OS...


Thus the 's' in reisub?


The disk may lie. The 's' will get the OS to send everything out to the disk. Actually writing it to the platter (or flash part, or whatever) is at the disk's discretion.


It's at the disk's discretion as far as the laws of physics are concerned, but that would be a severely broken disk, prone to losing data, and if it came from a major server disk vendor, the vendor would take a pretty serious hit to its reputation.


Yes, but my point was that I could send data to an ACID-compliant server and kill it before the commit happened, and the data would be lost. I was just trying to point out to the parent poster that sending is not enough; you need to wait for the commit.
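A small sqlite3 sketch of the point: data sent before COMMIT can be lost, data after a successful COMMIT must survive (sqlite3 is used here only as a convenient stand-in for any ACID-compliant RDBMS):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path, isolation_level="DEFERRED")
conn.execute("PRAGMA synchronous = FULL")    # fsync on commit: the D in ACID
conn.execute("CREATE TABLE t (v TEXT)")
conn.execute("INSERT INTO t VALUES ('committed')")
conn.commit()                                # durable from this point on
conn.execute("INSERT INTO t VALUES ('in-flight')")
conn.close()                                 # uncommitted work is discarded

rows = [r[0] for r in sqlite3.connect(path).execute("SELECT v FROM t")]
print(rows)  # only the committed row survives
```

Closing (or killing) the connection stands in for the crash: the in-flight row was "sent" but never committed, so it is gone.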


You do it by breaking up the potentially destructive changes so you can roll back your deployment to the previous working version. Typically, all destructive modification of the schema is done in several passes, e.g. one per release/deploy:

1. copy the data into a new table and add code to place it in both tables

2. change the code to reference only the new data but still store it in both tables

3. rename the old table

4. remove the old table

On top of that, some changes are supposed to be applied before the new code is deployed (e.g. creating a table, renaming a table) and others after the code is deployed (e.g. dropping unused columns).
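A minimal sketch of those passes, using sqlite3 and a hypothetical `users` table (in practice each pass ships in its own release alongside the corresponding code change, and the schema would actually differ between versions):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# Pass 1: create the new table and backfill; app code writes to both.
db.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users_v2 SELECT id, name FROM users")

# Pass 2 (next release): app code now reads only from users_v2.
# Pass 3: rename the old table out of the way, keeping it as a fallback.
db.execute("ALTER TABLE users RENAME TO users_old")

# Pass 4 (final release): drop the old table once rollback is no longer needed.
db.execute("DROP TABLE users_old")

print([r[0] for r in db.execute("SELECT name FROM users_v2 ORDER BY id")])
```

At every intermediate step the previous release of the code still works, which is what makes the deployment reversible.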

As somebody else mentions in the comments, this is more about change control than versioning, but the two go together to ensure that your changes are as idempotent and reversible as possible. Of course, if you have the resources you can always adopt a "never delete data" policy, so you're always copying instead of modifying the data, making sure you have a way to retract the changes. Clean-up can be done at regular, distant intervals.

Using an ORM doesn't prevent you from reviewing the changes that are about to be applied. ActiveRecord, for example, can provide you with a SQL file representing your schema after the ORM is done. Usually the ORM schema changes are executed automatically by your CI upon deployment to the Dev, QA and possibly Staging environments (you do have at least two of those, don't you? :) and need manual intervention on the live environment (maybe to control performance degradation - locks, increased disk I/O, etc.). This way DB changes get plenty of testing, and you should be able to spot any problems as early as possible.


McDonald's changed their process back in 2011: http://news.yahoo.com/blogs/sideshow/mcdonald-confirms-no-lo...


It would be nice if you quoted some reputable source or research when making such bold claims.

Who are these people who can easily recover data from a modern hard disk drive after it has been overwritten even once? Do you have any URLs at hand?

You could check Peter Gutmann's 1996 paper "Secure Deletion of Data from Magnetic and Solid-State Memory" [1], which is one of the original sources for the "35-pass erase" [2]. In it he states that it is possible to recover overwritten data using specialized microscopy equipment (Magnetic Force Microscopy or Scanning Tunneling Microscopy). These techniques are only applicable to media with a much lower magnetic density and much simpler encodings (e.g. MFM and RLL) than those used these days (PRML and its successors).

How did you arrive at the number 50? In the epilogue of his paper Gutmann states:

> In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes.

tl;dr: overwriting 35 times is a pointless waste of time and one of those popular myths that refuse to die. Using shred under GNU/Linux or DBAN to overwrite once with random data is more than sufficient for our purposes.
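For illustration, the single random-overwrite pass that tools like shred perform can be sketched as follows (simplified: real tools also handle block alignment, optional multiple passes, and name scrambling; and note that on SSDs and copy-on-write/journaling filesystems an overwrite may never touch the original blocks, so none of this is a substitute for full-disk encryption or ATA secure erase there):

```python
import os

def overwrite_and_delete(path: str) -> None:
    """Overwrite a file in place with random bytes, fsync, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # one pass of random data
        f.flush()
        os.fsync(f.fileno())        # push it past the OS cache to the device
    os.remove(path)
```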

[1]: https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html

[2]: http://en.wikipedia.org/wiki/Gutmann_method

