I am using the JavaScript SDK to run Windows Server 2019 EC2 instances; the AMI I'm using is a custom one. Through the SDK, I'm passing this user data:
<powershell>
Copy-S3Object -BucketName mybucket -KeyPrefix myprefix -LocalFolder C:\Users\myuser\Desktop -Region ap-southeast-2
</powershell>
<persist>true</persist>
When I select the running instance and view user data, the above user data correctly shows.
I have added the appropriate IAM role to the instance; the same user data and IAM role work when I manually launch the base Windows Server 2019 instance from the console.
But when launching from the SDK, the EC2 logs show:
<powershell> tag was provided.. running powershell content
Failed to get metadata: The result from http://169.254.169.254/latest/user-data was empty
Unable to execute userdata: Userdata was not provided
The trick was to create the AMI using Sysprep, as shown here.
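For reference, with EC2Launch v1 on Windows Server 2019 the AMI can be prepared roughly like this on the source instance before imaging (a sketch; the paths assume a default EC2Launch installation):

```powershell
# Schedule user-data/metadata initialization to run again at the next boot,
# then shut down with Sysprep so the AMI captures a generalized image.
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\SysprepInstance.ps1
```

Create the AMI once the instance has stopped; instances launched from it should then execute user data on first boot.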
Background:
I have Jenkins installed in AWS Account #1 (account1234) with the IAM role Role-Jenkins attached to its instance. GitHub is configured with Jenkins.
When I click Build in Jenkins, it pulls all the files from GitHub into
/var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) on an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the role app-role attached to it.
What I'm trying to achieve:
When I click Build, I want Jenkins to push files from account 1234 to account 5678 by opening an SSM session from Jenkins to the EC2 instance on which the app is running.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment IP config --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp /var/lib/jenkins/workspace/ /home/app/all_files/ as the command.)
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance i-xyz123 is in account5678, but the error above shows SSM trying to connect to the instance in account1234 (which shouldn't be happening).
Q1: How do I update my command so that it opens an SSM session with instance i-xyz123 in account5678 to accomplish what I'm trying to do?
I believe I would also need to add each role as a trusted relationship to the other.
(Note: I want to do it via Session Manager so I won't have to deal with credentials of any sort.)
If I've understood correctly, then you're right: to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross-account roles), you should be able to achieve what you need by assuming the role first in your shell script and then running the SSM command. That way Jenkins will use the temporary credentials and execute the command in the correct account (5678).
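As a sketch, the role in account5678 (here called CrossAccountSSMRole; the name is made up for illustration, and it would need ssm:SendCommand permissions of its own) would carry a trust policy allowing Role-Jenkins to assume it, along these lines:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account1234:role/Role-Jenkins"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```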
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you just Cmd/Ctrl+F on that page and search for 'shell', you should get to the section you need. Hope this helps.
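Putting it together, the Jenkins build step might look something like this (a sketch only: the role name CrossAccountSSMRole is illustrative, and the instance in account5678 must already be managed by SSM):

```shell
#!/bin/sh
# Assume the cross-account role and capture the temporary credentials
# as three whitespace-separated fields.
CREDS=$(aws sts assume-role \
  --role-arn "arn:aws:iam::account5678:role/CrossAccountSSMRole" \
  --role-session-name "jenkins-build" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<EOF
$CREDS
EOF
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# SSM calls now run as the assumed role, i.e. inside account5678.
aws ssm send-command \
  --region us-east-1 \
  --instance-ids i-xyz123 \
  --document-name AWS-RunShellScript \
  --comment "IP config" \
  --parameters commands=ifconfig \
  --output text
```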
I am trying to download the CodeDeploy agent following the AWS docs:
Install the CodeDeploy agent for Windows Server
To do this, I need to run the following command, which downloads the installer to a temporary folder, for my region (us-east-1):
powershell.exe -Command Read-S3Object -BucketName aws-codedeploy-us-east-1 -Key latest/codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
On that instance I attached an IAM role which has the AmazonS3FullAccess policy. I'm getting this error when I execute the command:
Read-S3Object : No credentials specified or obtained from persisted/shell defaults.
I'm aware I could fix this by adding personal credentials, but since that's not considered good practice, I would like to download the agent without resorting to it.
Just in case someone has a similar problem.
This was caused by my having previously migrated a custom Windows AMI to a different AWS region.
The instance profile metadata service was pointing to the old region's server. You need to change that to point to the new region:
Import-Module (Join-Path $env:ProgramData 'Amazon\EC2-Windows\Launch\Module\Ec2Launch.psd1')
Add-Routes
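A quick way to confirm the fix (a sketch): query the instance metadata service for the attached role. An empty response means the instance profile credentials are still unreachable.

```powershell
# Lists the role name under which the instance profile credentials are served.
Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/iam/security-credentials/
```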
While debugging this question, I went ahead and:
In IAM console at https://console.aws.amazon.com/iam/
1.1. Deleted one role (CodeDeployServiceRole).
1.2. Created a service role.
In S3 console at https://console.aws.amazon.com/s3/
2.1. Emptied and deleted one bucket (tiagocodedeploylightsailbucket).
2.2. Created a new bucket in EU London (eu-west-2).
Back into the IAM console at https://console.aws.amazon.com/iam/
3.1. Deleted one policy (CodeDeployS3BucketPolicy).
3.2. Created a new policy.
Stay in the IAM console at https://console.aws.amazon.com/iam/
4.1. Deleted one user (LightSailCodeDeployUser).
4.2. Created a new user (with that same name).
Navigate to the Lightsail home page at https://lightsail.aws.amazon.com/
5.1. Deleted previous instance (codedeploy).
5.2. Created one new instance with Amazon Linux (Amazon_Linux_1) (note that if I used Amazon Linux 2 I would run into this problem),
using the script
mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS_KEY
aws_secret_access_key: SECRET_KEY
iam_user_arn: arn:aws:iam::525221857828:user/LightSailCodeDeployUser
region: eu-west-2
EOT
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
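After the script above runs, the agent's status can be confirmed on the instance (a sketch; Amazon Linux 1 uses the classic service wrapper):

```shell
sudo service codedeploy-agent status
```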
Checked that CodeDeploy agent is running and then when running the following command in AWS CLI
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/LightSailCodeDeployUser --region eu-west-2
I get
An error occurred (IamUserArnAlreadyRegisteredException) when calling the RegisterOnPremisesInstance operation: The on-premises instance could not be registered because the request included an IAM user ARN that has already been used to register an instance. Include either a different IAM user ARN or IAM session ARN in the request, and then try again.
Even though I deleted the user, created one with the same name and then deleted the other existing instance, the IAM User ARN is still the same
arn:aws:iam::525221857828:user/LightSailCodeDeployUser
To fix it, I went back to step 4 and created a user with a different name; then I updated the script for the instance creation, checked that the CodeDeploy agent was running, and now, when running the following in the AWS CLI,
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/GeneralUser --region eu-west-2
I get the expected result.
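An alternative that, as far as I can tell, avoids renaming the user: deregister the stale on-premises registration first, which should release the previously used IAM user ARN (instance name and region as in the question):

```shell
aws deploy deregister-on-premises-instance \
  --instance-name Amazon_Linux_1 \
  --region eu-west-2
```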
This is an odd one for sure. I have an AWS command-line user that I've set up with admin privileges in the AWS account. The credentials I generated for the user work when I issue an aws ec2 command, but not when I run an aws iam command.
When I run the aws iam command, this is what I get:
[user@web1:~] # aws iam create-account-alias --account-alias=mcollective
An error occurred (InvalidClientTokenId) when calling the CreateAccountAlias operation: The security token included in the request is invalid.
However when I run an aws ec2 subcommand using the same credentials, I get a success:
[root@web1:~] # aws ec2 describe-instances --profile=mcollective
RESERVATIONS 281933881942 r-0080cb499a0299557
INSTANCES 0 x86_64 146923690580740912 False xen ami-6d1c2007 i-00dcdb6cbff0d7980 t2.micro mcollective 2016-07-27T23:56:50.000Z ip-xxx-xx-xx-xx.ec2.internal xx.xx.xx.xx ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xxx.xx.xx /dev/sda1 ebs True subnet-0e734056 hvm vpc-909103f7
BLOCKDEVICEMAPPINGS /dev/sda1
EBS 2016-07-23T01:26:42.000Z False attached vol-0eb52f6a94c5833aa
MONITORING disabled
NETWORKINTERFACES 0e:68:20:c5:fa:23 eni-f78223ec 281933881942 ip-xxx-xx-xx-xx.ec2.internal xxx.xx.xx.xx True in-use subnet-0e734056 vpc-909103f7
ASSOCIATION 281933881942 ec2-xxx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
ATTACHMENT 2016-07-23T01:26:41.000Z eni-attach-cbf11a1f True 0 attached
GROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
PRIVATEIPADDRESSES True ip-xxx-xx-xx-xxx.ec2.internal xxx.xx.xx.xx
ASSOCIATION 281933881942 ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
PLACEMENT us-east-1a default
PRODUCTCODES aw0evgkw8e5c1q413zgy5pjce marketplace
SECURITYGROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
STATE 16 running
TAGS Name mcollective
So why the heck are the same credentials working for one set of aws subcommands, but not another? I'm really curious about this one!
This question was answered by @garnaat, but the answer was buried in the comments of the question, so I'm quoting it here for those who might miss it:
Different profiles are being used in each command through use of the --profile flag.
Explanation:
In the first command
[user@web1:~] # aws iam create-account-alias --account-alias=mcollective
--profile is not specified, so the default AWS profile's credentials are automatically used
In the second command
aws ec2 describe-instances --profile=mcollective
--profile is specified, which overrides the default profile
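In other words, the two commands can resolve to different entries in ~/.aws/credentials, roughly like this (a sketch; the keys are placeholders):

```ini
[default]
aws_access_key_id = DEFAULT_PROFILE_KEY_ID
aws_secret_access_key = DEFAULT_PROFILE_SECRET

[mcollective]
aws_access_key_id = MCOLLECTIVE_PROFILE_KEY_ID
aws_secret_access_key = MCOLLECTIVE_PROFILE_SECRET
```

Commands run without --profile use the [default] entry; passing --profile=mcollective switches to the second set of credentials, which is why the two subcommands behaved differently.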
I have an EC2 instance and I need to download a file from its D drive through my program. Currently it's a very annoying process because I can't access the instance directly from my local machine. What I'm doing now: I run a script on the instance, the instance uploads the file I need to S3, and my program reads the file from S3.
Just wondering whether there is any simpler way to access the drive on the instance instead of going through S3?
I have used AWS Data Pipeline and its Task Runner to execute scripts on a remote instance. The Task Runner waits for a pipeline event published to its worker group.
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
I use it to execute shell scripts and commands on a schedule. The script to run should be uploaded to S3, and the Data Pipeline template specifies the script's path. It works great for periodic tasks; you can do anything you want on the remote box via the script.
You cannot download the file from the EC2 instance directly, but you can via S3 (or perhaps using the scp command) from your remote EC2 instance.
But to simplify this annoying process you can use AWS Systems Manager.
AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach Instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy. This role enables the instance to communicate with the Systems Manager API.
Install SSM Agent:
The EC2 instance must have the SSM Agent installed. The SSM Agent processes the run-command requests and configures the instance as per the command.
Execute command:
Example usage via the AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with the EC2 instance ID.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
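Once the command has been sent, its output can be fetched afterwards (a sketch; replace the placeholders with the command ID returned by send-command and the instance ID):

```shell
aws ssm get-command-invocation \
  --command-id "Command-ID" \
  --instance-id "Instance-ID" \
  --region us-west-2
```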
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/