I'm curious about your reasoning for providing an API to other companies that could potentially offer payroll services themselves. Do you see this as financially better than providing the service directly yourselves?
As for how one model turns out financially vs. the other, I think a lot of that depends on market size. I do wonder how big the market is for a payroll API, though we've come across some interesting applications (employer-of-record companies, accounting firms, HR products, etc.).
The cool thing I've always liked about API businesses is that they're like really fertile ground for new things to arise that don't even exist yet.
Interesting to think about what new models could bloom on top of a fully-featured payroll API.
I first created the Android version about 3 years ago, then the iOS version about a year ago. It currently makes just enough to cover some bills, although I believe it has greater potential. I'm currently looking for ways to turn it into a recurring revenue stream instead of a one-time-payment gig.
The above article seems like a good point of view. Basically, achieving the top 1% is an ideal that's sold to us but is very, very unlikely to become reality. A much better approach is to find contentment in what you're doing.
This is why it's a bad idea to be a technical cofounder for something you're not passionate about. It will most likely fail, and you'll be out a ton of programming hours with nothing to show for them. At least if you're being paid, you go home with a paycheck each month.
I know the fundamental idea behind PGP and related technologies. My question is: if bumping his key from 2048 to 4096 bits will keep him safe until around the year 2020 (as stated by a previous reader, and by keylength.com), why not just use an 8192-bit or 16384-bit key and be safe for virtually your whole lifetime?
Does the computing cost to encrypt/decrypt make this impractical?
4096 is the largest key size gpg offers today. It was also the largest key size gpg offered in 2009, which is why that's the key size I'm using now. In 1996 the largest key size pgp supported was probably 768, which is why my first pgp key is that size. I know for sure that in 1999, the largest key I could manage to make was 2048 bits. Looking back at those older keys, I wish I could have chosen larger sizes for them. So I suspect that in 10 years I will wish I could have used an 8192-bit or larger key today.
I suspect gpg doesn't offer insanely large key sizes partly because people like me would naively use them even when we don't need them, and perhaps partly because the math for such large numbers is harder to implement. I'd rather it offered much larger keys, even if they came with warnings that operations might be slow.
It seems that if you're really paranoid, gpg --gen-key --batch with an appropriate batch file can make 8192-bit or larger keys. I'm currently trying to generate an 81920-bit key, for general giggles and to increase my NSA rating.
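For anyone who wants to try it, here's a rough sketch of what such a batch parameter file might look like (the name, email, and filename are placeholders, and whether key lengths above 4096 are actually accepted depends on how your gpg was built):

    %echo Generating an oversized RSA key
    Key-Type: RSA
    Key-Length: 8192
    Name-Real: Example User
    Name-Email: user@example.com
    Expire-Date: 0
    %commit

Then feed it to gpg with something like:

    gpg --gen-key --batch bigkey.params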
In 1998, the NSA required MIT to suborn their PGP. MIT publicly stated that at the time. The NSA simultaneously banned the use of MIT's European confederate's version of PGP by U.S. citizens, and blocked access to that university's FTP server from the U.S. Naturally, being overseas at the time, I downloaded the European version and, since it allowed creation of up to 4096-bit keys with the option of manually specifying non-standard lengths, I created a very large key, which I saved to a floppy disk.
The time complexity of RSA operations is somewhere between O(n^2) and O(n^3), with n being the number of bits in the modulus, so using longer keys than necessary gets impractical really fast.
It should be O(n^2): doubling the length of the key incurs a four-fold time requirement for any single RSA operation. (I verified this a few years ago when writing an article about practical cryptosystems.)
The reason is actually quite simple. As far as I understand, bignum libraries store large numbers as arrays of "limbs" (machine words). Doing a bignum multiplication requires the library to iterate through the arrays one limb at a time, and the operations required for a single RSA calculation are effectively "run every limb in array A against every limb in array B". So you have a nested for-loop over N elements with no possibility of early termination.
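A minimal Python sketch of that nested loop, using 32-bit limbs (real libraries like GMP do the same thing in heavily optimized C and assembly; the function name is made up for illustration):

    # Schoolbook multiplication over little-endian 32-bit "limbs".
    # Every limb of a is run against every limb of b: O(len(a) * len(b)).
    BASE = 2 ** 32

    def limb_mul(a, b):
        result = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                t = result[i + j] + ai * bj + carry
                result[i + j] = t % BASE
                carry = t // BASE
            result[i + len(b)] += carry
        return result  # little-endian limbs of a * b

Doubling the key size doubles the limb count of both operands, so the inner work roughly quadruples, matching the quadratic behavior described above.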
As for the numbers from the article: my old 400 MHz box spent 20 ms signing or encrypting a block of data with a 1024-bit RSA key. The same operations took 80 ms with a 2048-bit key.
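If you want to reproduce that scaling on your own machine, here's a rough sketch in pure Python that times the modular exponentiation at the heart of a private-key RSA operation. The random modulus is just a stand-in, not a valid RSA key, and pure-Python pow() won't match an optimized implementation (no CRT, different multiplication algorithms), but it shows how quickly the cost grows with key size:

    import random
    import time

    def time_modexp(bits, trials=5):
        # Stand-in for the core private-key operation m^d mod n;
        # n and d are just random full-size numbers, not a real key.
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        d = random.getrandbits(bits) | (1 << (bits - 1))
        m = random.getrandbits(bits - 1)
        start = time.perf_counter()
        for _ in range(trials):
            pow(m, d, n)
        return (time.perf_counter() - start) / trials

    for bits in (1024, 2048, 4096, 8192):
        print(f"{bits:5d} bits: {time_modexp(bits) * 1000:8.2f} ms")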