This is my question (taken almost verbatim from another question here, but my ultimate question is different):
"I have written some code to retrieve my secrets from the AWS Secrets Manager to be used for further processing of other components. In my development environment I configured my credentials using AWS CLI. Once the code was compiled I am able to run it from VS and also from the exe that is generated."
My question is: once the code is on my IIS production server, I repeat these steps but it doesn't work. I configured the credentials as the user account I'm logged in as, but the IIS process doesn't run as that logged-in user, so the code can't find the credentials it needs.
I want the IIS process to be able to access these credentials under its own user profile. How do I place the credentials under that profile? This question seems to have the answer (C:\Users\<IIS_app_name>\.aws\credentials), but how do I actually find out what <IIS_app_name> is? When I try to access that path with my IIS app name substituted in, I get an error that it doesn't exist.
This is an on-premises server that accesses AWS Secrets Manager.
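For context, the retrieval itself is just an SDK call; the original project is presumably .NET (it runs from VS), but a minimal sketch of the same call in Python with boto3, with the secret name and region as placeholders, looks like this:
import boto3

# The SDK resolves credentials from the usual chain: environment variables,
# the ~/.aws files under the *running* user's profile, or an attached role.
session = boto3.session.Session(region_name="us-east-1")  # region is an assumption
client = session.client("secretsmanager")

# "my-app/secrets" is a placeholder secret name
response = client.get_secret_value(SecretId="my-app/secrets")
secret_string = response["SecretString"]
print(secret_string)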
Related
I have the following scenario: a WordPress site is currently hosted on an EC2 server. There is no deployment strategy in place; the AWS CLI was being used to deploy the data from local to the server.
Problem: For various reasons I don't want to keep that way of deploying, and I'm currently considering moving the deployment process into a GitHub Action. Which in itself wouldn't be a problem if it weren't for the 2FA check.
Using the AWS CLI, I am forced to enter the code displayed in my MS Authenticator app into the CLI to proceed.
Question: Is it possible to deploy via a GitHub Action with 2FA? I'm guessing the answer is almost certainly no, but what do I know?
No, you can't have GitHub Actions prompt for and enter the 2FA tokens on your behalf. But what you can do is set up OpenID Connect (OIDC) between GitHub Actions and AWS.
That way you can authorize GitHub Actions to make changes to specific resources in AWS, and OIDC handles the magic key-exchange parts to make sure it can safely do its thing.
More info here:
https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
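In practice the workflow uses the aws-actions/configure-aws-credentials action, but roughly speaking the exchange it performs looks like the Python sketch below (the role ARN is a placeholder, and the two ACTIONS_ID_TOKEN_REQUEST_* variables are provided by the runner when the job has the id-token: write permission):
import os
import boto3
import requests

# Ask the GitHub Actions runner for an OIDC token scoped to the STS audience.
token_url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"] + "&audience=sts.amazonaws.com"
resp = requests.get(
    token_url,
    headers={"Authorization": "Bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]},
)
github_oidc_token = resp.json()["value"]

# Exchange the OIDC token for short-lived AWS credentials; no long-term keys involved.
sts = boto3.client("sts", region_name="us-east-1")  # region is an assumption
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/github-deploy",  # placeholder role
    RoleSessionName="github-actions-deploy",
    WebIdentityToken=github_oidc_token,
)["Credentials"]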
Alternatively, you can use the wait-for-secrets GitHub Action to use 2FA from a GitHub Actions workflow. It waits for the developer to enter secrets during a workflow run; developers enter the secrets in a web browser and the workflow then uses them.
I've been having trouble with a deployment that uses a Serverless Component, so I've been trying to debug it. Stepping through the code, I thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't actually debug it, because the component doesn't exist on my computer. Apparently the serverless CLI is sending a request to a server, and the request seems to include everything serverless needs to build and deploy the actual service, which includes my AWS credentials...
Is this a well-known thing? Is there a way to force serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.
I haven't used their platform (I thought the CLI only executed locally; what you describe seems very risky), but you can make this more secure as follows:
First, set up an IAM role that can only perform the deploy actions for your app. Then make a profile that assumes this role when you work on your serverless app and use the CLI.
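If the profile is configured with role_arn and source_profile, the CLI performs the assume-role step for you; a rough Python/boto3 sketch of the same idea (the role ARN and profile name are placeholders):
import boto3

# Long-lived credentials (the source profile) are only allowed to assume the deploy role.
source = boto3.session.Session(profile_name="default")

# The deploy role's policy permits only the actions the serverless deploy needs.
sts = source.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/serverless-deploy",  # placeholder ARN
    RoleSessionName="serverless-deploy",
    DurationSeconds=3600,  # temporary credentials expire after an hour
)["Credentials"]

# These temporary keys are what the deploy tooling actually uses.
deploy = boto3.session.Session(
    aws_access_key_id=assumed["AccessKeyId"],
    aws_secret_access_key=assumed["SecretAccessKey"],
    aws_session_token=assumed["SessionToken"],
)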
Second, you can also avoid long-term CLI credentials (IAM users) by using AWS SSO, which generates CLI credentials valid for an hour; I believe you can log in from the AWS CLI itself. This means your CLI credentials will live for at most one hour.
If the requests are always coming from the same IP you can also put that in an IAM policy but I wouldn't imagine there is any guarantee that their IP will always be the same.
I have a CLI tool that interacts with Google KMS. In order for it to work, I fetch the user credentials as a JSON file which is stored on disk. Now a new requirement came along. I need to make a web app out of this CLI tool. The web app will be protected via Google Cloud IAP. Question is, how do I run the CLI tool on behalf of the authenticated user?
You don't. Better to use a service account and assign it the required role. That service account could still have domain-wide delegation of rights (being able to impersonate essentially any user, which is a known capability).
Running CLI tools from a web application should probably also be avoided. It might be better to convert the CLI tool into a Cloud Function and then call it via an HTTP trigger from within the web application (so that access to the service account is limited as far as possible).
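A minimal sketch of such a Cloud Function in Python, assuming the google-cloud-kms client library and a placeholder key path; the function authenticates as its runtime service account, so no JSON key file is involved:
import base64
import functions_framework
from google.cloud import kms

# Placeholder key path; in reality this points at the project's real key.
KEY_NAME = "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key"

@functions_framework.http
def encrypt(request):
    # The KMS client authenticates as the function's runtime service account,
    # so no credentials JSON needs to live on disk.
    client = kms.KeyManagementServiceClient()
    plaintext = request.get_data()  # raw request body
    response = client.encrypt(request={"name": KEY_NAME, "plaintext": plaintext})
    return base64.b64encode(response.ciphertext).decode("utf-8")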
This might also be something to reconsider, security-wise:
I fetch the user credentials as a JSON file which is stored on disk.
Even if it might have been required before, with a service account it wouldn't be.
So it says in the GitHub documentation here that:
AWS Vault is a tool to securely store and access AWS credentials in a development environment.
AWS Vault stores IAM credentials in your operating system's secure keystore and then generates temporary credentials from those to expose to your shell and applications. It's designed to be complementary to the AWS CLI tools, and is aware of your…
But what does this actually mean? As a developer, does this mean creating a kind of lock to prevent anyone from using my code without the aws-vault profile? When should I use this technology? I want to know a bit more about it before I use it.
It actually doesn't have anything related to development.
While working with Amazon managed services we can take advantage of IAM roles, but that doesn't work when you're doing it from your local environment or from some other cloud VM, for example accessing an S3 bucket. aws-vault comes in handy when you're doing a lot of work with the AWS CLI or even writing Terraform for your environment. It is just a precaution so we don't expose our IAM credentials to the external world (you will receive an abuse notification from Amazon whenever your keys are compromised). There are many other ways to make sure your keys don't get compromised, for example before pushing your code to version control, use git-secrets to make sure you don't push any sensitive information.
I have deployed the Django application on AWS. I want the application to be deployable by my team as well. What is the procedure to do this? I have searched a lot and spent almost a couple of hours on it. Does anyone have an answer or a tutorial?
Can we share these keys?
aws_access_key_id
aws_secret_access_key
No, the AWS access keys should be kept secret and not even stored under version control.
For deployment (i.e. the credentials needed to actually release the code, used by EB), you should use an AWS profile. Add a ~/.aws/credentials file with
[myprofile]
aws_access_key_id=...
aws_secret_access_key=...
and then, on all eb commands use --profile. e.g.
eb create --profile myprofile
If your application requires other AWS services (e.g. RDS, S3, SQS), then you can use the same local profile for development (although I would recommend not requiring any other AWS services for testing) by setting the environment variable: export AWS_PROFILE=myprofile. Then rely on AWS roles and policies for the production environment.
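For the application code this is transparent; for example, a boto3 session picks the profile up from AWS_PROFILE automatically (the S3 call here is just an illustration):
import boto3

# boto3 honours AWS_PROFILE; an explicit profile_name="myprofile" also works.
session = boto3.session.Session()
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])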
If you feel you need the secret keys as Django settings, then consider using https://django-environ.readthedocs.org, which lets you keep all those secrets in a .env file that gets loaded by Django. But again, this file should not be under version control.
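A minimal settings.py sketch of that approach, with hypothetical variable names that the .env file would define:
# settings.py (sketch)
import environ

env = environ.Env()
# Reads the .env file sitting next to the settings module; keep it out of version control.
environ.Env.read_env()

# Hypothetical variable names; whatever the app actually needs goes in .env.
AWS_ACCESS_KEY_ID = env("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = env("AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = env("AWS_STORAGE_BUCKET_NAME", default="")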
You should also create IAM users for every person on your team, so each person has their own credentials and you can more easily monitor or, if needed, revoke them.
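Most teams do this in the IAM console, but if you want to script it, a rough boto3 sketch (the user name is a placeholder):
import boto3

iam = boto3.client("iam")

# Placeholder user name; repeat per team member.
user_name = "teammate-jane"
iam.create_user(UserName=user_name)

# Each person gets their own access key, which can be rotated or revoked individually.
key = iam.create_access_key(UserName=user_name)["AccessKey"]
print(key["AccessKeyId"])  # the secret key is only returned once, hand it over securely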