Is there a way to configure the Session Manager via CDK?
I want to change settings like enabling KMS encryption and the maximum session duration, as well as writing session data to an S3 bucket. The online documentation from AWS (https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-configure-preferences.html) only describes manual steps via the console. However, everything is set up via CDK in my case, and I also want those things configured via CDK, so that if the S3 bucket created via CDK is deleted/renewed I don't have to do any manual steps to configure SSM again.
You can't do that. Those settings are set globally, per account. CDK/CloudFormation is a resource provisioning tool.
Session Manager preferences are regional, and since they can be changed via the command line, they can also be changed via a CDK custom resource.
Just create a Lambda that runs
aws ssm update-document --name "SSM-SessionManagerRunShell"
with a JSON config as explained here:
https://docs.aws.amazon.com/systems-manager/latest/userguide/getting-started-configure-preferences-cli.html
If you pass the name of your S3 bucket as a parameter of your custom resource it will trigger an on_event update every time your bucket changes.
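A minimal sketch of what that Lambda handler might look like, assuming the bucket name is passed in as a custom resource property (the property key "BucketName" is hypothetical, and the document content mirrors the preferences schema from the linked AWS docs, so verify the field names against the current schema):

import json
import boto3

ssm = boto3.client("ssm")

def on_event(event, context):
    # Bucket name passed from CDK as a custom resource property (hypothetical key).
    bucket = event["ResourceProperties"]["BucketName"]
    content = {
        "schemaVersion": "1.0",
        "description": "Document to hold regional settings for Session Manager",
        "sessionType": "Standard_Stream",
        "inputs": {
            "s3BucketName": bucket,
            "s3KeyPrefix": "",
            "s3EncryptionEnabled": True,
            "kmsKeyId": "",              # set a KMS key id here to enable KMS encryption
            "idleSessionTimeout": "20",
            "maxSessionDuration": "60",  # minutes
            "runAsEnabled": False,
            "runAsDefaultUser": ""
        }
    }
    # Equivalent of: aws ssm update-document --name "SSM-SessionManagerRunShell" ...
    ssm.update_document(
        Name="SSM-SessionManagerRunShell",
        Content=json.dumps(content),
        DocumentVersion="$LATEST",
        DocumentFormat="JSON",
    )
    return {"PhysicalResourceId": "SSM-SessionManagerRunShell"}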
When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; my workstation is then given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OpenID roles instead of storing actual credentials.
Add the identity provider for GitLab in AWS
Configure the role and its trust policy
Retrieve temporary credentials (a sketch follows below)
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
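For the last step, a rough sketch of the token exchange using boto3, assuming the role ARN and the CI job's OIDC token are available as environment variables (the variable names ROLE_ARN and CI_JOB_JWT_V2 here are examples; adjust to however your pipeline exposes the token):

import os
import boto3

# Exchange the GitLab OIDC token for temporary AWS credentials; no stored secrets needed.
sts = boto3.client("sts", region_name=os.environ.get("AWS_DEFAULT_REGION", "us-east-1"))
resp = sts.assume_role_with_web_identity(
    RoleArn=os.environ["ROLE_ARN"],                # the role that trusts the GitLab identity provider
    RoleSessionName="gitlab-ci",
    WebIdentityToken=os.environ["CI_JOB_JWT_V2"],  # OIDC token issued to the CI job
    DurationSeconds=3600,
)
creds = resp["Credentials"]
# Export these for the rest of the job (AccessKeyId, SecretAccessKey, SessionToken).
print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["SessionToken"])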
I got into a situation working for a client who, as a security measure, does not provide an AWS_ACCESS_KEY_ID in any way. All we have available for development is the AWS web console, so I started searching for another way to script (and speed up) my dev tasks programmatically.
Note: we cannot use the AWS CLI without an AWS_ACCESS_KEY_ID and secret.
My assumption: if the AWS web console can do the same things as the AWS CLI (e.g. create a bucket, load data into a bucket, etc.), why not take the web console's auth mechanism (visible in the HTTP request headers) and bind it to the AWS CLI (or some other API-calling code) to make it work even without AWS keys?
Question: is this possible? In the HTTP headers I can see the following artifacts:
aws-session-token
aws-session-id
awsccc
and dozens of others...
My idea is to automate this by:
Go to the web console and log in, then have a script that automatically writes the required parameters from the browser session to a text file
Use the extracted information from some dev script
If this is not supported or is impossible to achieve with the AWS CLI, can I use an SDK or raw AWS API calls with the extracted information?
I can extract the SAML content which has the above-mentioned AWS credential headers; I also see an OAuth client call with the following parameters:
https://signin.aws.amazon.com/oauth?
client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fcanvas&
code_challenge=bJNNw87gBewdsKnMCZU1OIKHB733RmD3p8cuhFoz2aw&
code_challenge_method=SHA-256&
response_type=code&
redirect_uri=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome%3Ffromtb%3Dtrue%26isauthcode%3Dtrue%26state%3DhashArgsFromTB_us-east-1_c63b804c7d804573&
X-Amz-Security-Token=hidden content&
X-Amz-Date=20211223T054052Z&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=ASIAVHC3TML26B76NPS4%2F20211223%2Fus-east-1%2Fsignin%2Faws4_request&
X-Amz-SignedHeaders=host&
X-Amz-Signature=3142997fe7212f041ef90c1a87288f53cecca9236098653904bab36b17fa53ef
Can I use it with an AWS SDK somehow?
To reset an S3 bucket to a known state, I would suggest looking at the AWS CLI s3 sync command and the --delete switch. Create a "template" bucket with your default contents, then sync that bucket into your dev bucket to reset it.
As for your key problems, I would look at IAM roles rather than trying to hack the console auth.
As to how to run the AWS CLI, you have several options. It can be done from Lambda, ECS (containers running on your own EC2) or an EC2 instance. All three allow you to attach an IAM role. That role can have policies attached (for your S3 bucket) - but there is no key to manage.
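To illustrate that last point, when code runs on an instance (or Lambda) with a role attached, the SDK picks up credentials from that role automatically; a minimal sketch, assuming a hypothetical bucket name:

import boto3

# No access key or secret anywhere: boto3 falls back to the attached IAM role
# (instance profile on EC2/ECS, execution role on Lambda).
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="my-dev-bucket").get("Contents", []):
    print(obj["Key"])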
Thanks for the feedback, @MisterSmith! It kind of helped with the follow-up.
While analyzing the Chrome traffic from the login page to the AWS console I also found a SAML call, and I found this project: https://github.com/Versent/saml2aws#linux
It extracted all the ~/.aws/credentials variables needed for the AWS CLI to work.
I am using AWS Amplify in a React Native app. I set up my user pool with a domain via the console (https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-assign-domain-prefix.html), but I have to manually remove and re-install it every time I make a change to backend\auth\poolname-cloudformation-template.yml.
Is there a CloudFormation setting that would allow me to set it up through there?
Thanks!
This has been added to CloudFormation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpooldomain.html
Unfortunately, there is no CloudFormation setting that allows you to create an Amazon Cognito domain. One workaround is to create a custom CloudFormation resource backed by a Lambda and then create the domain in the Lambda through Boto 3.
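A minimal sketch of what that Lambda could call with Boto 3, assuming the domain prefix and user pool ID are passed in as custom resource properties (the property names below are hypothetical):

import boto3

cognito = boto3.client("cognito-idp")

def on_event(event, context):
    props = event["ResourceProperties"]  # hypothetical property names below
    request = event["RequestType"]
    if request == "Create":
        cognito.create_user_pool_domain(Domain=props["Domain"], UserPoolId=props["UserPoolId"])
    elif request == "Delete":
        cognito.delete_user_pool_domain(Domain=props["Domain"], UserPoolId=props["UserPoolId"])
    # Update handling (delete the old domain, then create the new one) omitted for brevity.
    return {"PhysicalResourceId": props["Domain"]}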
I've been using AWS CodeDeploy with GitHub as the revision source. I have a couple of configuration files that contain credentials (e.g. New Relic and other third-party license keys) which I do not want to add to my GitHub repository, but I need them on the EC2 instances.
What is a standard way of managing these configurations? Or what tools do you use for this purpose?
First, use IAM roles. That removes 90% of your credentials. Once you've done that, you can store (encrypted!) credentials in an S3 bucket and carefully control access. Here's a good primer from AWS:
https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2
The previous answers are useful for managing AWS roles/credentials specifically. However, your question is more about general non-AWS credentials and how to manage them securely using AWS.
What works well for us is to secure the credentials in a properties file in an S3 bucket. Using the same technique as suggested by tedder42 in A safer way to distribute AWS credentials to EC2, you can upload your credentials in a properties file into a highly secured S3 bucket, only available to your instance, which has been configured with the appropriate IAM role.
Then using CodeDeploy, you can add a BeforeInstall lifecycle hook to download the credential files to a local directory via the AWS CLI. For example:
aws s3 cp s3://credentials-example-com/credentials.properties c:\credentials
Then when the application starts, it can read those credentials from the local file.
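For example, a simple way to load a key=value properties file in Python (the path and key name below are just placeholders):

# Load simple key=value pairs from the downloaded properties file.
credentials = {}
with open(r"c:\credentials\credentials.properties") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            credentials[key.strip()] = value.strip()

newrelic_key = credentials.get("newrelic.license_key")  # placeholder key name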
Launch your EC2 instances with an instance profile and then give the associated role access to all the things your service needs access to. That's what the CodeDeploy agent is using to make calls, but it's really there for any service you are running to use.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
I'm setting up a Kubernetes cluster on AWS, and as part of the configuration for, say, the API server, I provide the --cloud-provider=aws setting.
Once it starts up, however, I see in the logs that it complains about not having AWS credentials:
NoCredentialProviders: no valid providers in chain
After some searching, it seems that this issue was resolved for most people by using the kube-up script. However, for those of us who are not using the script to set up a cluster, how do we provide Kubernetes with AWS credentials?
It sounds like you don't have the appropriate IAM instance profile set on your master VM. The kube-up script for AWS creates a role and associated policy that is attached to the master VM when it is created. Having the IAM policy attached should give you the credentials necessary to make API calls into AWS.
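If you want to double-check from the master VM that an instance profile is actually attached and serving credentials, one quick way is to query the EC2 instance metadata service (the sketch below uses IMDSv1; IMDSv2 additionally requires a session token):

import urllib.request

# Lists the IAM role name(s) served by the instance metadata service.
# An HTTPError (404) means no instance profile is attached to this VM.
url = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
with urllib.request.urlopen(url, timeout=2) as resp:
    print(resp.read().decode())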