Amazon has been building technology based on ML and DL for over 20 years and has developed several frameworks. You must have missed the announcement of this open-source framework earlier in the year: https://github.com/amznlabs/amazon-dsstne.
I saw that when it was announced. DSSTNE has failed to capture the hearts and minds of developers. In my experience, it doesn't come up in any conversations about which frameworks to bet on for new product development.
And I'm rooting for Amazon (and Facebook, and Microsoft...). TensorFlow needs competition for the hearts and minds of developers.
This doesn't address the root comment at all. Does Amazon actually think MXNet is the best? Or did they simply choose the next best thing that isn't already backed by another "big four" company (Google -> TensorFlow, Facebook -> Torch). It's hard to believe this is actually about scalability without any data.
You can choose eventually consistent or fully consistent reads in DynamoDB. Given that full consistency comes at a higher cost (reads go to a quorum of replicas), we expose that cost to you.
BTW, nobody wants eventual consistency; it is a fact of life among many trade-offs. I would rather not expose it, but it comes with other advantages...
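In API terms it is a single flag per read. A minimal sketch using boto3, with hypothetical table and key names:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Default: eventually consistent read (cheaper, may briefly return stale data).
    dynamodb.get_item(
        TableName="my-table",
        Key={"id": {"S": "item-123"}},
    )

    # Strongly consistent read: served from a quorum of replicas,
    # consuming more read capacity than an eventually consistent read.
    dynamodb.get_item(
        TableName="my-table",
        Key={"id": {"S": "item-123"}},
        ConsistentRead=True,
    )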
Nope, we have built a new scheduler for you that allows placement across multiple AZs, replaces failed containers, lets them connect to ELBs, etc.
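Roughly what that looks like through the API: a minimal boto3 sketch with hypothetical names, assuming the cluster, task definition, and ELB already exist:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # The service scheduler spreads tasks across AZs, replaces any
    # task whose container fails, and registers them with the ELB.
    ecs.create_service(
        cluster="my-cluster",            # hypothetical cluster name
        serviceName="web",
        taskDefinition="web-task:1",     # hypothetical task definition
        desiredCount=4,
        loadBalancers=[{
            "loadBalancerName": "my-elb",
            "containerName": "web",
            "containerPort": 80,
        }],
        role="ecsServiceRole",           # IAM role for ELB registration
    )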
When I'm balancing a single deployment across multiple AZs (e.g. US-East -> US-West), the latencies between the containers seem far higher than just the 200-300ms predicted by speed of light. Am I doing something wrong?
Thanks again - this is really helpful. I had talked to someone who had left Amazon but knew the internal workings, who said ECS was just Mesos privately branded, like Chef -> OpsWorks, but I guess I must have misunderstood.
Check out hyper.sh; this is the future of public CaaS. After all, you don't need EC2 to host your containers if you can run them directly on a hypervisor.
Yep, I thought it would be of interest to the HN community, but most importantly I was interested in the comments/feedback/criticism of those who would potentially be using it.
I don't know what fraction of AWS users are outside of the US, but I'm sure it's not a completely trivial number. Please, find a way for us to use services like this. I know there's legal and accounting headaches involved, but they can't be any worse than the headaches involved in buying land and building datacenters, and you manage to do that just fine.
Seriously, my startup (tarsnap) is almost entirely AWS-based (the other exception is email -- I use sendgrid because SES is incompatible with public mailing lists) and I'd love to offer Amazon Payments as an option to my customers, but because I'm based in Canada it's not an option.
Yeah, there is a real opportunity here not just to read saurik's well-thought-out reply, but to respond to it. I have no doubt that he has sent very detailed emails to them, speaking as one engineer to another, and there is no better way to turn someone off than to ignore them after they go through such pains to make your product better.
I would love to have a minute to talk to you about this.
I recently took over a new but quickly growing web store. Just two weeks ago we decided that we wanted to add "Payments by Amazon" as an option for our customers. We run a Magento Enterprise platform, so I was certain there would be an extension for this.
Welcome to the rabbit hole... Do I want Checkout by Amazon or Payments by Amazon? It's not clear what the advantages of either are, or why I'd go with one over the other. After spending a few hours reading up on the differences, I gave up and called friends until I got a contact at Amazon Payments. I spoke with a super nice rep who explained why I shouldn't bother with CBA and should look at the Payments API instead. "But I see there is a CBA Magento extension," I protested; can't I drop that in and be on my way? It turns out the extension is maintained by a third party, and as far as I could tell (and the rep confirmed), it isn't really maintained at all, so even if we spent the money on it, no one could guarantee that it would work.
Fine, let's talk about payments. If Amazon is pushing a payments API for merchants, there must be something for Magento users. I was told sorry, maybe there is something in the works, but if I want anything up and running for the holiday season I have to roll my own; and, by the way, please sign up for another Amazon service (I accidentally signed up for CBA because it wasn't clear which service I needed).
So here we are… building our own implementation for Magento. I know it's not Amazon's problem to support how payments are used, but if you want e-commerce merchants to take you up on this offering, some love for the ecosystem will go a long way, as will some clarity that CBA isn't being promoted anymore.
But I have to give Amazon credit: I've spoken to reps over at Selling on Amazon, FBA, and Payments, and they are all sharp, knowledgeable, and eager to help. I can't say a single bad word about the people who interact with your business customers.
You're the CTO of Amazon; the revenue you're going to get from my business won't even justify a rounding error on the balance sheet. But if you want to hear from the merchants, I'm happy to give you a view from the trenches.
I am a happy user of Amazon Simple Payments, which is, as promised, simple, works well, and has good support. As another user posted, though, I'm a bit confused about the role the various payment solutions play. In other words, we now have Login and Pay, but also Amazon FPS. What's what?
And if you are looking for really cool jobs in which you can apply all your distributed systems and cloud skills on amazingly interesting products, beyond just video streaming, check out Amazon.com (75+ pages of 20 jobs per page in software development alone)...
Well, given we're doing this here now, for anyone that takes advantage of AWS... ;)
I'm looking for engineers and UX people to come help me make the Heroku Add-ons platform even more amazing. If putting these cloud services into the hands of developers and changing the way people think about provisioning these services sounds like something you'd like to be part of: glenn at heroku dot com
What's it worth to do great work if you can't live in a great city? Amazon's Development Center in Cape Town, South Africa is situated in the heart of the Mother City, and is surrounded by the Atlantic Ocean and Table Mountain - you can't get better views. Combine this with never-ending beaches and sunny weather and you have the perfect work and play environment.
We build software for AWS and are looking for all sorts of engineers: from kernel development to building great web front-ends for our customers. Check out http://www.amazon.co.za/ for more info.
5,000 reads per second of 64 KB items would have you streaming 2.5 Gbit/s of consistent reads and 1 Gbit/s of writes, moving close to 1.5 TB each hour. At the end of the month you would have read well over 800 TB and written 160 TB... That is a substantial application you have in mind... :-)
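For the read side, the arithmetic works out as follows (a quick sanity check in Python; the write totals depend on the write rate assumed upthread):

    ITEM_BYTES = 64 * 1024                  # 64 KB items
    READS_PER_SEC = 5_000
    SECONDS_PER_MONTH = 30 * 86_400

    read_bytes_per_sec = ITEM_BYTES * READS_PER_SEC       # ~328 MB/s
    print(read_bytes_per_sec * 8 / 1e9)                   # ~2.6 Gbit/s on the wire
    print(read_bytes_per_sec * SECONDS_PER_MONTH / 1e12)  # ~850 TB read per month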
That may be true for an application with a constant load, but applications with a less balanced load have to provision for their peaks. My company (Malwarebytes) has very irregular traffic (at the hour mark we get very big spikes, but only for a couple of minutes), and it seems like we would have to provision (for this specific app) for that peak for the entire hour. I might be misunderstanding the billing for this service, though: if we ask for more units for 15 minutes, would the billing be prorated?
This actually hits on my only real issue with AWS in general, which is the hourly billing. We've used the MapReduce service a bit, and having a cluster spin up and fail on the first job is heartbreaking when it has a hundred machine-hours associated with it. Obviously that is far, far cheaper than building out our own cluster (especially with spot instances, which I can't even describe how much I love), but for some of the services smaller billing increments would be useful.
The number of read units consumed by a query is not necessarily proportional to the number of items. It is equal to the cumulative size of the processed items, rounded up to the next kilobyte increment. For example, if you have a query returning 1,500 items of 64 bytes each, then you'll consume 94 read units, not 1,500.
If that's the case then it's a completely different ball-game. I was about to abandon the whole idea of using DynamoDB due to the pricing of throughput. This makes it a whole lot more interesting!
The official documentation seems to clearly contradict you. The pricing calculator doesn't let you specify a value of less than 1KB. Who's right? Or maybe I'm just not understanding what either you or the official pricing doc is saying :)
If your items are less than 1KB in size, then each unit of Read Capacity will give you 1 read/second of capacity and each unit of Write Capacity will give you 1 write/second of capacity. For example, if your items are 512 bytes and you need to read 100 items per second from your table, then you need to provision 100 units of Read Capacity.
Werner is right. The Query operation can be more efficient than GetItem and BatchGetItem. To calculate how many units of read capacity a query will consume, take the total size of all returned items combined and round up to the next whole KB. For example, if your query returns 10 items that are each 1KB, then you will consume 10 units of read capacity. If your query returns 10 items that are each 0.1KB, then you will consume only 1 unit of read capacity.
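A tiny sketch of that accounting rule (a hypothetical helper that just restates the rounding described above, assuming consistent reads):

    import math

    def query_read_units(item_sizes_bytes):
        # Cumulative size of all items returned by the Query,
        # rounded up to the next whole kilobyte.
        return max(1, math.ceil(sum(item_sizes_bytes) / 1024))

    print(query_read_units([64] * 1500))   # 94 units, not 1,500
    print(query_read_units([1024] * 10))   # 10 units for ten 1 KB items
    print(query_read_units([100] * 10))    # 1 unit for ten ~0.1 KB items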
This is currently an undocumented benefit of the query operation, but we will be adding that to our documentation shortly.
Thanks for your critical reading!