> The pattern we use is to monitor the remaining execution time and queue up a follow-up once there's <30s left. Not every job we do can be composed this way, but it helps a lot.
Hmm. Are you saying you can queue additional jobs on the same worker to use up remaining execution time?
Amazon doesn't really give you any direct access to the underlying "worker", so it doesn't work like that. It's really as simple as:
- the active lambda decides to stop performing work once there's only 30 seconds left
- it switches to perform whatever cleanup is necessary
- it queues up a new lambda to be executed by pushing the lambda name and event payload to redis
- I have a special lambda that's scheduled for once a minute that scans certain keys in redis, pops the data out, and invokes the lambdas
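The steps above can be sketched roughly like this. It's a minimal, self-contained simulation: an in-memory list stands in for the Redis queue, a dict of handlers stands in for Lambda's Invoke API, and the time budgets are shrunk from minutes to fractions of a second so the resume actually triggers. None of these names come from my real setup.

```python
import time

PENDING_JOBS = []   # in-memory stand-in for the Redis queue
HANDLERS = {}       # stand-in for invoking a lambda by name

TIME_BUDGET_S = 0.05    # stand-in for the real ~15-minute Lambda limit
SAFETY_MARGIN_S = 0.02  # stand-in for the 30-second cutoff

def process_items(event):
    """Work through event['items'] until the budget nearly runs out,
    then queue a follow-up invocation carrying the remaining items."""
    deadline = time.monotonic() + TIME_BUDGET_S
    items = list(event["items"])
    done = []
    while items:
        if deadline - time.monotonic() < SAFETY_MARGIN_S:
            # Out of time: queue a follow-up instead of dying mid-work.
            PENDING_JOBS.append({"lambda": "process_items",
                                 "payload": {"items": items}})
            return {"done": done, "resumed": True}
        time.sleep(0.01)               # simulate per-item work
        done.append(items.pop(0) * 2)  # the "work": double each item
    return {"done": done, "resumed": False}

HANDLERS["process_items"] = process_items

def dispatcher():
    """The once-a-minute scheduled lambda: pop queued jobs and invoke
    the named handler with the stored payload, until the queue is empty."""
    results = []
    while PENDING_JOBS:
        job = PENDING_JOBS.pop(0)
        results.append(HANDLERS[job["lambda"]](job["payload"]))
    return results
```

Running `process_items` on ten items completes only a few before queueing a follow-up; `dispatcher()` then drains the queue (including follow-ups queued by follow-ups) until everything is done.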
To give an example, I have a job where I need to read ~50 files in S3 line by line and do some work over them. I spin up 50 lambdas, one for each file. Each lambda chugs along until it is about to run out of time. It then queues up a follow-up lambda with the byte # to start at in the event payload. The follow-up lambda then picks up where the last one left off.
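The byte-offset handoff looks something like this sketch. It's hypothetical code, not my actual job: an in-memory `BytesIO` stands in for the S3 object body, the "work" is just counting words per line, and the budget is passed in so the example runs fast. The key part is that when time runs out, the return value carries `f.tell()` so the follow-up invocation can `seek` straight to it.

```python
import io
import time

def count_words_chunk(event, time_budget_s=0.05):
    """Process a 'file' line by line starting at event['offset'].
    Returns (partial_count, follow_up_event) where follow_up_event
    is None once the whole file has been read."""
    data = event["data"]              # stands in for the S3 object body
    f = io.BytesIO(data)
    f.seek(event.get("offset", 0))    # pick up where the last run left off
    deadline = time.monotonic() + time_budget_s
    words = 0
    while True:
        line = f.readline()
        if not line:
            return words, None        # reached EOF: job complete
        words += len(line.split())
        if time.monotonic() >= deadline:
            # Out of time: hand the current byte offset to the follow-up.
            return words, {"data": data, "offset": f.tell()}
```

Processing one line before checking the clock guarantees forward progress, so even a pathologically small budget can't produce a follow-up loop that never advances.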
I have some cases where instead of queueing up the follow-up lambda I just invoke it directly. The value in having a queue step in between is that you can handle throttling more cleanly, and you can avoid event payload limits by just putting the payload into Redis.
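The payload-indirection trick is simple enough to sketch. A dict stands in for Redis here, and the key scheme is made up, but the idea is real: Lambda caps event payloads (256 KB for async invocations), so you store the full payload under a key and make the event carry only that key.

```python
import json
import uuid

STORE = {}  # in-memory stand-in for Redis

def enqueue(lambda_name, payload):
    """Stash the full payload under a key; the event only carries the
    key, so it stays tiny no matter how large the payload grows."""
    key = f"job:{lambda_name}:{uuid.uuid4()}"  # hypothetical key scheme
    STORE[key] = json.dumps(payload)
    return {"payload_key": key}

def load_payload(event):
    """Invoked-lambda side: pop the real payload back out of the store."""
    return json.loads(STORE.pop(event["payload_key"]))
```

With Redis you'd use SET/GETDEL (and a TTL as a safety net) instead of the dict, but the shape is the same.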