We are using Jenkins and have AWS credentials stored in the Jenkins credentials store.
I am now configuring a build job to get the list of APIs from AWS so users can select an API from a parameter dropdown. To do this, I am writing a Groovy script to get the list of AWS APIs, e.g.
aws apigateway get-rest-apis
But in order to run the above command, I need to first get the AWS credentials from the Jenkins credentials store. How can I do this?
(Correct me if I am wrong: the script that is part of the Extended parameter runs on the Jenkins master node, not on a slave node, and I am not sure how to get the AWS credentials there.)
Add the following to the Groovy script:
env.AWS_CREDENTIAL_ID="user_id"
where user_id is the ID set in the Jenkins credentials entry for the AWS account.
For everyone's benefit, I used a command like
env AWS_ACCESS_KEY_ID=${YourIDVariable} AWS_SECRET_ACCESS_KEY=${YourKeyVariable} aws apigateway get-rest-apis
and it worked.
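If you only want the API names for the dropdown, a variation like the following could work (the variable names follow the command above; the --query expression is an assumption that the names are what should populate the list):
# Print only the REST API names for the parameter dropdown
env AWS_ACCESS_KEY_ID="${YourIDVariable}" AWS_SECRET_ACCESS_KEY="${YourKeyVariable}" \
  aws apigateway get-rest-apis --query 'items[].name' --output text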
I am trying to setup some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine, and can deploy CloudFormation templates, upload files to S3 etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code that parses a config file and uses the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploying CloudFormation and uploading files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to set up a VPC endpoint for STS? However, I can't work out why and how the AWS CLI works fine but Boto3 doesn't, especially since the AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
To reproduce the error, this command fails for me:
python -c "import boto3; print(boto3.Session(profile_name='x').client('s3').list_objects(Bucket='bucket'))"
However this AWS cli command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
The AWS CLI and boto3 both use botocore, but that is only a minor detail. Nevertheless, when the CLI and boto3 run in the same environment with the same access to credentials, they should indeed be able to reach the same endpoint.
This:
aws sts get-caller-identity --profile x
and:
python -c "import boto3;print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"
are equivalent and should make the same api calls to the same endpoint.
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that, so just export AWS_PROFILE and run the code. This prevents other users of the script from having to have a profile with the same name.
Yeah, so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'.
After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it to 'legacy' by default, whereas v2 of the AWS CLI sets it to 'regional'.
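For anyone hitting the same thing, a minimal way to apply the fix is to export the variable before running the code (the profile name follows the question above; the same setting can also be placed in ~/.aws/config as sts_regional_endpoints = regional under the profile):
# Force botocore to use the regional STS endpoint instead of sts.amazonaws.com
export AWS_STS_REGIONAL_ENDPOINTS=regional
python -c "import boto3; print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"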
Background:
I have Jenkins installed in AWS Account #1 (account1234), and its instance has the IAM role Role-Jenkins attached to it. GitHub is configured with Jenkins.
When I click Build in Jenkins, Jenkins pulls all the files from GitHub; they can be found in /var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) on an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the IAM role app-role attached to it.
What I'm trying to achieve:
When I click Build, I want Jenkins to push files from account1234 to account5678 by opening an SSM session from Jenkins to the EC2 instance on which the app is running.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment "IP config" --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp /var/lib/jenkins/workspace/ /home/app/all_files/ as the command.)
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance i-xyz123 is in account5678, but the error above shows SSM trying to connect to an instance in account1234 (which shouldn't be happening).
Q1: How do I update my command so that it opens an SSM session with instance i-xyz123 in account5678 to accomplish what I'm trying to do?
I believe I would also need to add each role as a trusted relationship in the other.
(Note: I want to do it via Session Manager since I won't have to deal with credentials of any sort.)
If I've understood correctly then you're right; to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross-account roles), you should be able to achieve what you need by assuming the role first in your shell script and then running the SSM command. That way Jenkins will use the temporary credentials and execute the command in the correct account (5678).
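A rough sketch of that shell step, assuming a role named ssm-send-command-role exists in account5678 and trusts Role-Jenkins (the role and session names are placeholders):
# Assume the cross-account role in account5678 and export the temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::account5678:role/ssm-send-command-role \
  --role-session-name jenkins-build \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# send-command now runs against the instance in account5678
aws ssm send-command --region us-east-1 \
  --instance-ids i-xyz123 \
  --document-name AWS-RunShellScript \
  --comment "IP config" \
  --parameters commands=ifconfig \
  --output text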
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you just Cmd/Ctrl+F on that page and search for 'shell', you should get to the section you need. Hope this somewhat helps.
I want to install a Python3 package from a private Github repository onto an AWS EMR Spark cluster.
I know how to do this the dirty way by hardcoding credentials, but what is the recommended best practice to do this safely? I don't want to store credentials in a bootstrap script...
Thanks in advance.
Thanks to Maurice I've successfully implemented a safe process, following his option #2.
Create an access token with read permissions on GitHub.
Store this in AWS Secrets Manager. In my case I named this secret "github-read-access".
Give access to this secret to the user that is going to query it, or in the case of a bootstrap EMR script, to the EMR roles.
Using the AWS CLI, I store the token in an environment variable and install the package with the following commands:
export GITHUB_TOKEN=`aws secretsmanager get-secret-value --secret-id github-read-access |grep SecretString|cut -d ":" -f 3|cut -d '"' -f 2 |cut -d '\' -f1`
sudo pip3 install git+https://${GITHUB_TOKEN}@github.com/<USER_NAME>/<REPO_NAME>.git
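Alternatively, assuming the secret was stored as a plain string rather than JSON key/value pairs, the token can be extracted with --query instead of grep/cut, which is less brittle:
# Fetch the raw SecretString and use it directly in the pip install URL
export GITHUB_TOKEN=$(aws secretsmanager get-secret-value \
  --secret-id github-read-access \
  --query SecretString \
  --output text)
sudo pip3 install "git+https://${GITHUB_TOKEN}@github.com/<USER_NAME>/<REPO_NAME>.git"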
Caveat: I haven't worked with custom EMR bootstrapping scripts, but I assume they're not too different from regular user data scripts.
There are some options:
Systems Manager Parameter Store: This is essentially something like the Windows registry in AWS, a regional key-value store. You can store your credentials here under a name such as /my/git/credentials and even encrypt them using the Key Management Service. In your bootstrapping script you can then request the credentials using the AWS CLI and use them to connect to the private Git repository. This requires the instance role of the cluster to have permission to access that parameter (and the KMS key if you've encrypted the value).
Secrets Manager: The general idea is similar to the SSM Parameter Store. Secrets Manager also allows you to store your credentials in a secure way; in this case encryption is mandatory. It even offers lifecycle hooks to periodically rotate the credentials should you require that. You can use the same technique I described in option 1 in the bootstrapping script. The requirements in terms of permissions are similar, although you definitely have to add KMS and Secrets Manager permissions here. In this case you'd have to parse the JSON response from Secrets Manager, though.
I'd personally start with option 1 since it will be cheaper; a rough sketch of it follows below. If you have specific audit/regulatory requirements, I'd look at option 2, which is slightly more complex.
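As an illustration of option 1 (the parameter name /my/git/credentials, the placeholder token, and the repository placeholders are assumptions, not part of the question):
# One-time setup (run from an admin machine): store the token encrypted with KMS
aws ssm put-parameter --name /my/git/credentials --type SecureString --value "<token>"

# In the bootstrapping script: fetch and decrypt the token, then install the package
GIT_TOKEN=$(aws ssm get-parameter \
  --name /my/git/credentials \
  --with-decryption \
  --query Parameter.Value \
  --output text)
sudo pip3 install "git+https://${GIT_TOKEN}@github.com/<USER_NAME>/<REPO_NAME>.git"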
Say I use the AWS CLI locally on my machine; I'd need to authenticate with credentials prior to any operation.
How do AWS services give permission to other services on my behalf? And more specifically, how does a container run aws-cli on my behalf without prior authentication?
I am asking this after running my first pipeline successfully in CodePipeline. My buildspec.yml runs the aws s3 sync command flawlessly, which made me wonder how AWS permissions work internally.
AWS CodeBuild uses an IAM Service Role to provide AWS permissions to the CodeBuild environment. You should have had to create a service role for your CodeBuild configuration.
When the AWS CLI runs and it hasn't been previously configured with API access keys, it checks whether it is running in an AWS environment like EC2 or Lambda, and if so, it uses the AWS IAM role assigned to that runtime environment.
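One way to see this in action from within a build, with no access keys configured anywhere (the bucket name below is hypothetical):
# Inside the CodeBuild container, this prints the assumed-role ARN of the
# project's service role, confirming where the permissions come from.
aws sts get-caller-identity

# That same role is what authorizes calls like this one from the buildspec:
aws s3 sync ./build-output s3://my-example-bucket/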
I am trying to get some files from S3 on startup in an EC2 instance by using a User Data script and the command
/usr/bin/aws s3 cp ...
The log tells me that permission was denied, and I believe it is because the AWS CLI does not find any credentials when executing the user data script.
Running the command with sudo after the instance has started works fine.
I have run aws configure both with sudo and without.
I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is more convenient for me to change the user data instead of creating a new AMI every time the script changes.
If possible, I would also like to avoid writing the credentials into the script.
How can I configure awscli in such a way that the credentials are used when running a user data script?
I suggest you remove the AWS credentials from the instance/AMI. Your user data script will be supplied with temporary credentials by the instance metadata service when needed.
See: IAM Roles for Amazon EC2
Clear/delete AWS credentials configurations from your instance and create an AMI
Create a policy that has the minimum privileges to run your script
Create an IAM role and attach the policy you just created
Attach the IAM role when you launch the instance (very important)
Have your user data script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using a credentials file (a minimal sketch follows below)
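Under those assumptions, a user data sketch could look like this (the bucket name and destination path are hypothetical; no credentials appear anywhere because the attached instance role supplies temporary ones via the instance metadata service):
#!/bin/bash
# Runs as root at first boot; the attached instance role provides the credentials.
/usr/bin/aws s3 cp s3://my-example-bucket/startup/app-config.tar.gz /opt/app/ --region us-east-1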
You can configure your EC2 instance to use a pre-defined IAM role; the instance fetches temporary credentials for that role from the instance metadata service, which in turn allows it to run aws-cli commands in your User Data script without the need to configure credentials at all.
Here's more info on IAM Roles for EC2:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
It's worth noting here that you'll need to attach the appropriate policies to the IAM Role that you assign to your instance in order for the aws-cli commands to succeed. More information on that can be found here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles
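For illustration, setting such a role up with the CLI could look like the following sketch (the role, instance profile, and instance ID are placeholders; the managed AmazonS3ReadOnlyAccess policy is used only as an example of "the appropriate policies"):
# Create a role that EC2 can assume, grant it read access to S3,
# and attach it to the instance via an instance profile.
aws iam create-role --role-name ec2-s3-read-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name ec2-s3-read-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-instance-profile --instance-profile-name ec2-s3-read-profile
aws iam add-role-to-instance-profile --instance-profile-name ec2-s3-read-profile \
  --role-name ec2-s3-read-role
# Instance ID below is a placeholder for the target instance
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ec2-s3-read-profile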