Private AWS credentials being shared with Serverless.com?

I've been having trouble with a deployment with a serverless-component, so I've been trying to debug it. Stepping through the code, I actually thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't debug it, because the component doesn't actually exist on my computer. Apparently the serverless cli is sending a request to a server, and the request seems to include everything serverless needs to build and deploy the actual service, which includes my AWS credentials...
Is this a well-known thing? Is there a way to force serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.

I haven't used their platform (I thought the CLI only executed locally, so this seems very risky to me), but you can make it more secure in the following ways:
First, set up an IAM role that can only perform the deploy actions your app needs. Then create a profile that assumes this role whenever you work on your serverless app and use the CLI.
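As a rough sketch, assuming the deploy role already exists (the profile name, role name, and account ID below are hypothetical), the profile lives in ~/.aws/config and the CLI/SDK assumes the role on demand:

# ~/.aws/config
[profile serverless-deploy]
role_arn = arn:aws:iam::123456789012:role/ServerlessDeployRole
source_profile = default

# run the serverless CLI under that profile
AWS_PROFILE=serverless-deploy serverless deploy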
Second, you can avoid long-term CLI credentials (IAM users) altogether by using AWS SSO, which generates CLI credentials that are valid for an hour; with the AWS CLI you can log in from the command line, I believe. This means your CLI credentials will live for at most one hour.
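With AWS CLI v2 that flow looks roughly like this (the profile name is a placeholder):

# one-time interactive setup of an SSO-backed profile
aws configure sso

# afterwards, start a short-lived session whenever you need one
aws sso login --profile my-sso-profile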
If the requests always come from the same IP, you can also restrict that in an IAM policy (via the aws:SourceIp condition key), but I wouldn't imagine there is any guarantee that their IP will always be the same.

Related

AWS programmatic credential use in automation scripts

Right now we create scripts that run through the CLI to automate tasks or fetch things from AWS.
But we use an AWS access key / secret access key / session token for this.
These keys and tokens are valid for 1 hour, so if we use them an hour later, the script will fail.
It is also not practical to fetch the temporary credentials, update the script, and run it again every time.
So what is the best possible solution in this situation? What should I do so that I can get updated credentials and run the script using them automatically? Or is there any other alternative so that we can still run scripts from our local machines using Boto with AWS credentials?
Any help is appreciated.
Bhavesh
I'm assuming that your script runs outside of AWS, otherwise you would simply configure your compute (EC2, Lambda, etc.) to automatically assume an IAM role.
If you have persistent IAM User credentials that allow you to assume the relevant role then use those.
If you don't then take a look at the new IAM Roles Anywhere feature.
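If you do go the persistent-credentials route, one sketch of the setup (the profile and role names are hypothetical): configure a profile that assumes the role, and the CLI and SDKs fetch and transparently refresh the temporary credentials for you, so a long-running script never has to handle expiry itself.

# ~/.aws/config
[profile automation]
role_arn = arn:aws:iam::123456789012:role/AutomationRole
source_profile = default

# the CLI (and boto3 via Session(profile_name="automation")) assumes the
# role on first use and refreshes the credentials when they expire
aws s3 ls --profile automation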

Detect EC2 role with AWS PHP SDK?

I have a PHP library I wrote to help with working alongside Amazon Web Services. It was built to either look for the default $HOME/.aws/credentials (or be pointed to a similar-format file) or to look for the key and secret in the environment before proceeding.
We are now going to run it on an EC2 instance, and I was shown how you can use roles in conjunction with EC2 to keep much better control over what the server code can and can't do. But I need to modify my code so it knows it has the proper permissions before proceeding, and I don't see anywhere in the docs on assigning an EC2 instance a role how the SDK can tell that it has the permissions of that role.
Is there some way once I instantiate the SDK to ask something akin to 'hasRole' or 'getRoleArn' or something like that?
SDKs map directly to API calls, so if you know which CLI command to call, it becomes much easier to google. You most likely want aws sts get-caller-identity.
Googling "PHP sts sdk aws" is then the search you would do, and you would wind up on this page.
So that is the SDK route. There are a couple of other options as well: since you are running on EC2, you can also use the instance metadata service.
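A sketch of the metadata route, using IMDSv2 (a 404 from the second request means no role is attached to the instance):

# get a session token for IMDSv2, then ask which role (if any) is attached
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"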
On another note, I do think you should be careful with leaking the AWS role into your application code. It probably makes more sense to use a user identity context, such as with Cognito, and then use different groups with different permission sets. The role on the actual EC2 instance shouldn't change (unless you do a re-deploy), so there is no need for your code to check something that won't change during the normal running of the application. You could simply use an environment variable to convey whatever configuration you want to your application.
aws sts get-caller-identity --query 'Arn'
arn:aws:iam::1232412321:role/YourRole

What is aws-vault actually used for?

So it says on the github documentation here that
AWS Vault is a tool to securely store and access AWS credentials in a development environment.
AWS Vault stores IAM credentials in your operating system's secure keystore and then generates temporary credentials from those to expose to your shell and applications. It's designed to be complementary to the AWS CLI tools, and is aware of your …
But what does this actually mean? As a developer, does this mean it creates a kind of lock to prevent anyone from using my code without the aws-vault profile? When should I use this technology? I want to know a bit more about it before I use it.
It actually doesn't have anything to do with locking down your code.
While working with Amazon managed services we can take advantage of IAM roles, but that doesn't work when you're operating from your local environment or from some other cloud VM, for example when accessing an S3 bucket. aws-vault comes in handy when you're doing a lot of work with the AWS CLI or even writing Terraform for your environment. It is simply a precaution so we don't expose our IAM credentials to the external world (you will receive an abuse notification from Amazon whenever your keys are compromised). There are many other ways to make sure your keys don't get compromised; for example, before pushing your code to version control, use git-secrets to make sure you don't push any sensitive information.
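Typical usage looks something like this (the profile name is hypothetical); aws-vault keeps the long-term keys in the OS keystore and hands short-lived credentials to the child process through environment variables:

# store the long-term keys once; they go into the OS keystore, not a plaintext file
aws-vault add home

# run a command with temporary credentials injected as environment variables
aws-vault exec home -- aws s3 ls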

Targeting AWS services locally

I was wondering if it's possible to target AWS services, for example DynamoDB, from outside of AWS, for example from code that runs on my personal computer.
All I could find was creating a local mock of DynamoDB and configuring the code against it, but not a way to configure the code to target the real thing.
Thanks.
By "target" I mean using only the language's SDK to access the service, not some kind of REST API.
OK, so after more searching, and as @JohnRotenstein recommended, I looked for a way to configure the credentials.
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
The link above shows how to configure all the needed credentials.
Of course, this requires an IAM user with an access key and secret key.
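For example (placeholder values shown), once the credentials are in the standard environment variables or in ~/.aws/credentials, the SDK resolves them automatically and talks to the real service endpoints by default, with no extra endpoint configuration:

# hypothetical placeholder values; substitute your IAM user's keys
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-east-1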
Cheers.

Continuous deploys on Elastic Beanstalk

I have everything set up and working with rolling deploys, and I'm able to do git aws.push, but how do I add an authorized key to the EB server so my CI server can deploy as well?
Since you are using Shippable, I found this guide on Continuous Delivery using Shippable and Amazon Elastic Beanstalk that shows how to set it up on their end. Specifically, step 3 is what you are looking for.
It doesn't look like you need an authorized key; instead, you just need to supply an AWS access key ID and secret access key that allow Shippable to make API calls on your behalf. To do this, I recommend creating an IAM user specifically for Shippable. That way you can revoke its access if you ever need to, and grant it only the permissions it needs.
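A sketch of that setup with the CLI (the user name is hypothetical, and the attached policy is just an example; scope it down further if you can):

# create a dedicated IAM user for the CI service and generate its access keys
aws iam create-user --user-name shippable-deploy
aws iam create-access-key --user-name shippable-deploy

# attach only the permissions it needs, e.g. an Elastic Beanstalk policy
aws iam attach-user-policy --user-name shippable-deploy \
  --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess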