
Cleanly separate your logic from AWS dependencies using interfaces / protocols, then mock the required services by implementing those protocols (sparing you the pain of mocking the AWS services themselves), and voilà, you're running everything locally.

It's easy. But if that's too much for you, the serverless framework has some nice plugins for all of it.
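
A rough sketch of that shape, assuming Python with typing.Protocol and boto3 (the ObjectStore / S3ObjectStore / InMemoryObjectStore names are made up for illustration, not anyone's actual code):

    # Illustrative only: the business logic depends on ObjectStore, never on boto3.
    from typing import Protocol

    import boto3


    class ObjectStore(Protocol):
        def get(self, key: str) -> bytes: ...
        def put(self, key: str, data: bytes) -> None: ...


    class S3ObjectStore:
        """Real implementation, used in deployed environments."""
        def __init__(self, bucket: str) -> None:
            self._bucket = bucket
            self._s3 = boto3.client("s3")

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


    class InMemoryObjectStore:
        """Fake implementation of the same protocol, for local runs and tests."""
        def __init__(self) -> None:
            self._data: dict[str, bytes] = {}

        def get(self, key: str) -> bytes:
            return self._data[key]

        def put(self, key: str, data: bytes) -> None:
            self._data[key] = data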



One of the smartest, easiest things you can do early on during the development of a function is to treat the "core" of the function body as interface-agnostic. Write a function that just takes an object and returns an object, then write an Adapter function which accepts, say, a lambda event, calls your core handler, and returns whatever your lambda function needs (API Gateway response, etc.).

This enables you to, with a little more effort, swap out the Interface Adapter part for, say, a CLI. Or, if you ever want to get off lambda, it's a bit easier as well.
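
For example (hypothetical names; handle_order is just a stand-in for whatever your core actually does):

    import json
    import sys
    from typing import Any


    def handle_order(order: dict[str, Any]) -> dict[str, Any]:
        """Core: plain dict in, plain dict out. Knows nothing about Lambda."""
        total = sum(item["price"] * item["qty"] for item in order["items"])
        return {"order_id": order["order_id"], "total": total}


    def lambda_handler(event: dict[str, Any], context: Any) -> dict[str, Any]:
        """Adapter: unwrap the API Gateway event, call the core, wrap the response."""
        result = handle_order(json.loads(event["body"]))
        return {"statusCode": 200, "body": json.dumps(result)}


    def cli_main() -> None:
        """A second adapter over the same core, e.g. for local runs."""
        print(json.dumps(handle_order(json.loads(sys.stdin.read()))))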

Mocking out dependencies, like S3 buckets, isn't worth it during a prototype/MVP. As time goes on, sure, go for it. But early on, just use AWS. Don't use LocalStack or any of the other various tools that try to replicate AWS. They're all going to get it wrong, hopefully in obvious ways, but usually in subtle ways that'll crash production, and you're just creating more work for yourself. Just use AWS. Just use AWS. Just use AWS.


The agnostic core part is pretty much what I do. Mocking the interfaces is just done to wire the different cores together in integration tests and check that the system itself works, independently of what it relies upon.

Then, everything is once more tested using AWS. I stay away from replicating AWS services locally.

It greatly simplifies refactoring and overhauling the core itself, as well as trying out new approaches.
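
Concretely, such an integration test can look something like this (hypothetical core functions, reusing the InMemoryObjectStore fake from the sketch further up the thread):

    # Hypothetical: two tiny "cores" plus a test; only the protocol is touched.
    def store_report(store, report_id: str, text: str) -> None:
        store.put(f"reports/{report_id}", text.upper().encode())


    def read_report(store, report_id: str) -> str:
        return store.get(f"reports/{report_id}").decode()


    def test_store_then_read_report() -> None:
        store = InMemoryObjectStore()  # swap in S3ObjectStore in deployed envs
        store_report(store, "q3", "all systems nominal")
        assert read_report(store, "q3") == "ALL SYSTEMS NOMINAL"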


I understand what one can do to create a leaky abstraction for local development. And it is leaky, and it can't be relied upon for correctness, and so while it makes "getting things done" possible, it makes getting things done correctly much more difficult than it needs to be.

And local development when you've opted to use systems like Cognito is well beyond its scope, unless someone has something very clever that I haven't seen.

From a lot of experience doing this on AWS and on other clouds, I have learned that it is better to use systems that you can operate, and then use hosted versions where applicable. RDS is great, but it's great mostly because you can run PostgreSQL locally and inspect its brain. DynamoDB, SQS, etc. tend to be untrustworthy and should be avoided unless you have a bulletproof story for local testing (and none of the fake implementations are bulletproof).


I avoid systems like Cognito for that reason! SQS I've found to be ok when used as a backend for an abstraction. E.g. Laravel has a generic queue library that can be backed by file or Redis or SQL, and using Redis locally with SQS in production worked quite well with this.
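
In Laravel that's mostly a config switch (QUEUE_CONNECTION=redis locally, sqs in production). A rough Python analogue of the same idea, with made-up class names, would be:

    # Illustrative sketch, not Laravel: both backends expose the same push/pop.
    from typing import Protocol

    import boto3
    import redis


    class Queue(Protocol):
        def push(self, message: str) -> None: ...
        def pop(self) -> str | None: ...


    class RedisQueue:
        """Local development backend."""
        def __init__(self, name: str) -> None:
            self._name = name
            self._redis = redis.Redis()

        def push(self, message: str) -> None:
            self._redis.rpush(self._name, message)

        def pop(self) -> str | None:
            value = self._redis.lpop(self._name)
            return value.decode() if value else None


    class SqsQueue:
        """Production backend."""
        def __init__(self, queue_url: str) -> None:
            self._url = queue_url
            self._sqs = boto3.client("sqs")

        def push(self, message: str) -> None:
            self._sqs.send_message(QueueUrl=self._url, MessageBody=message)

        def pop(self) -> str | None:
            resp = self._sqs.receive_message(QueueUrl=self._url, MaxNumberOfMessages=1)
            messages = resp.get("Messages", [])
            if not messages:
                return None
            self._sqs.delete_message(QueueUrl=self._url,
                                     ReceiptHandle=messages[0]["ReceiptHandle"])
            return messages[0]["Body"]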


SQS has different characteristics from a Redis queue or a RabbitMQ queue. That's the source of a lot of my nervousness around it: when those abstractions break and somebody-who-isn't-me has to debug it.

(I actually have an answer for local dev with Cognito because my current employer already had Cognito when I showed up, but it amounts to "have configurable signing keys and sign your own JWT in dev".)
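
Something like this, assuming PyJWT and an app that reads its verification key from config rather than hard-coding Cognito's JWKS (all names here are illustrative):

    import os
    import time

    import jwt  # PyJWT

    # Illustrative only: in dev, the app and this helper share a configurable key.
    SIGNING_KEY = os.environ.get("DEV_JWT_SIGNING_KEY", "local-dev-only-secret")


    def mint_dev_token(user_id: str) -> str:
        """Issue a token locally that the app will accept; no Cognito involved."""
        claims = {"sub": user_id, "iss": "local-dev", "exp": int(time.time()) + 3600}
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


    def verify_token(token: str) -> dict:
        """App side: verify against the same configurable key in dev."""
        return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], issuer="local-dev")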


I don't use Cognito, and all authentication / authorization takes place upstream, in API Gateway.

Our local tests' leaky abstractions don't care about anything happening upstream. They only care about testing our core "agnostic" logic.



