I am new to AWS. The task I am trying to achieve is automating a cross-account, cross-region AMI copy with Jenkins.
Currently I am running the steps below as part of a shell script.
First I must set up the source and destination profiles, something like this:
saml2aws login -p prof1 -a prof1
Then I can run the shell script to copy the AMI:
./Copy_ami.sh -s mysrcprofile -d mydstprofile -a ami-613cxxxx
Can anyone please suggest a better solution for this task?
I am not sure how to achieve this in Jenkins; any suggestion is welcome, as shown in the sketch below of what I have in mind.
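For illustration, in a Jenkins freestyle job the whole flow could become a single shell build step along these lines (this is only a sketch: the saml2aws credentials would still have to come from somewhere, e.g. Jenkins credentials bindings, and $AMI_ID is a hypothetical build parameter, not something from my current setup):
#!/bin/bash
set -e
# Log in to both accounts, mirroring the manual saml2aws step
saml2aws login -p mysrcprofile -a mysrcprofile
saml2aws login -p mydstprofile -a mydstprofile
# Run the existing copy script against the AMI chosen as a build parameter
./Copy_ami.sh -s mysrcprofile -d mydstprofile -a "$AMI_ID"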
I have been trying to deploy my machine learning model with SAM for a couple of days and I am getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine;
the aws s3 ls command works fine for me. Any help will be useful, thanks in advance.
I've read through this issue, the fix for which seems to have been deployed in v1.53: SAM Accelerate issue
Reading that seemed to imply that it might be worth trying:
sam deploy --guided --profile mark
--profile mark is the new part and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as thoroughly as intended; but at least --profile seems to solve it for me.
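For reference, mark is just a named profile in the standard AWS credentials file, e.g.:
# ~/.aws/credentials
[mark]
aws_access_key_id = ...
aws_secret_access_key = ...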
If you are using Linux, this error can be caused by a misalignment between a root-level Docker installation and user-level AWS credentials.
Amazon's documentation recommends adding credentials using the aws configure command without sudo. However, when you install Docker on Linux, it requires a root-level installation. This ultimately forces the user to use sudo for the SAM CLI build and deploy commands, which leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage Docker. If you use this method, you will not need to use sudo for your SAM CLI commands. This fix can be accomplished with the following commands (see the note after these options):
sudo groupadd docker
sudo usermod -aG docker $USER
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.
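Note that with the first option, the group membership change only takes effect after you log out and back in, or you can apply it to the current shell with:
newgrp docker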
My aim is to launch an instance such that a start-up script is triggered on boot to download some configuration files stored in AWS S3. Therefore, in the start-up script, I set the S3 bucket details and then trigger a config.sh, where aws s3 sync does the actual download. However, the aws command does not work: it is not found for execution.
User data
I have the following user data when launching an EC2 instance:
#!/bin/bash
# Set command from https://stackoverflow.com/a/34206311/919480
set -e -x
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh
The AWS CLI was installed with pip as described in the documentation.
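For reference, that would be a user-level install along the lines of:
pip3 install awscli --upgrade --user
which, on Ubuntu, places the aws executable under ~/.local/bin.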
Observation
I think the user data script is run under the root user ID. That is why I have /home/ubuntu/ in the user data: $HOME did not resolve to /home/ubuntu/. In fact, the first command in config.sh is mkdir /home/ubuntu/csv, which creates a directory owned by root!
So, would it be right to conclude that the user data runs under the root user ID?
Resolution
Should I use the REST API to download instead?
Scripts entered as user data are executed as the root user, so do not use the sudo command in the script.
See: Running Commands on Your Linux Instance at Launch
One solution is to set the PATH environment variable to include the AWS CLI location (and any other required paths) before invoking the AWS CLI.
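For example, the user data above could be adjusted along these lines (the /home/ubuntu/.local/bin location comes from the pip --user install and is an assumption about this particular setup):
#!/bin/bash
set -e -x
# Make the user-level AWS CLI install visible to root's PATH (assumed location)
export PATH=$PATH:/home/ubuntu/.local/bin
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh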
Solution
Given that the AWS CLI was installed with pip without sudo, the CLI is not available to root. Therefore, to run it as the ubuntu user, I used the following user data script:
#!/bin/bash
su ubuntu -c '$HOME/app/sh/config.sh default'
In config.sh, the argument default is used to build the full S3 URI before invoking the CLI. However, the invocation succeeded only with the full path $HOME/.local/bin/aws, even though aws can be run directly after a normal login; presumably su without a login shell does not source .profile, so ~/.local/bin is not on PATH.
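For completeness, a stripped-down sketch of what such a config.sh might look like (illustrative only; the real script's argument handling and paths differ):
#!/bin/bash
# First argument (e.g. "default") is used when building the full S3 URI
TARGET="$1"
mkdir -p "$HOME/csv"
# Full path is needed because ~/.local/bin is not on PATH in this non-login shell
"$HOME/.local/bin/aws" s3 sync "s3://$S3_PREFIX/$TARGET" "$HOME/csv"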
I have managed to set up a VM instance on Google Cloud Platform using the following instructions:
https://towardsdatascience.com/running-jupyter-notebook-in-google-cloud-platform-in-15-min-61e16da34d52
I am then able to run a Jupyter notebook as per the instructions.
Now I want to be able to use my own data in the notebook... this is where I am really struggling. I downloaded the Cloud SDK onto my Mac and ran this from the terminal (as per https://cloud.google.com/compute/docs/instances/transfer-files):
My-MacBook-Air:~ me$ gcloud compute scp /Users/me/Desktop/my_data.csv aml-test:~/amlfolder
where aml-test is the name of my instance and amlfolder is a folder I created on the VM instance. I don't get any error messages and it seems to work (after I run it, the terminal displays: 100% 66MB 1.0MB/s 01:03).
However, when I connect to my VM instance via the SSH button in the Google console and type
cd amlfolder
ls
I cannot see any files! (Nor can I see them from the Jupyter notebook home page.)
I cannot figure out how to use my own data in a Python Jupyter notebook on a GCP VM instance. I have been trying/googling for an entire day. As you might have guessed, I'm a complete newbie to GCP (and cd, ls and mkdir are the extent of my Linux command knowledge!).
I also tried using Google Cloud Storage: I uploaded the data into a storage bucket (as per https://cloud.google.com/compute/docs/instances/transfer-files) but don't know how to complete the last step, '4. On your instance, download files from the bucket.'
If anyone can figure out what I am doing wrong, or knows an easier way to get my own data into a Python Jupyter notebook on GCP than the gcloud scp command, please help!
Definitely try running
pwd
to verify you're in the path you think you are; there's a chance that your scp command and the console SSH command log in as different users.
To copy data from a bucket to the instance, run:
gsutil cp gs://bucket-name/your-file .
As you can see in the gcloud compute docs, gcloud compute scp /Users/me/Desktop/my_data.csv aml-test:~/amlfolder uses your local environment username, so the tilde in your command refers to the home directory of a remote user with the same name as your local one.
But when you SSH from the browser, as you can see from the docs, your Gmail username is used instead.
So, you should check the home directory of the user used by the gcloud compute scp ... command.
The easiest way to check is to SSH to your VM and run:
ls /home/ --recursive
Currently we build our Docker containers and publish them to Amazon ECR. We have created TaskDefinitions and are able to deploy them manually on an ECS cluster, so a new deployment involves manually updating the TaskDefinition.
Now we would like to automate the deployment, so that when a Docker image is successfully built by Jenkins and published to the ECR repo, the currently running version is replaced with the newly built one.
In addition, we would like to give people the option to launch a specific version of one or more combinations of Docker containers. Any suggestions on how we could implement a continuous cycle without manually updating the TaskDefinitions?
A simpler solution for this might be to use the ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and deployed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
This article describes how to do continuous deployments to ECS with Jenkins. It uses a shell script, run after the image has been built and pushed, to update an ECS service with a new task definition revision. Hope it helps.
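For reference, the core of such a script typically boils down to registering a new task definition revision with the new image and then pointing the service at it; a rough sketch with placeholder names (my-task, my-cluster, my-service, $ECR_REPO; $BUILD_NUMBER is the standard Jenkins build variable):
#!/bin/bash
set -e
# Fetch the current task definition, swap in the freshly pushed image,
# and strip the read-only fields that register-task-definition rejects
aws ecs describe-task-definition --task-definition my-task \
    --query 'taskDefinition' > taskdef.json
jq --arg img "$ECR_REPO:$BUILD_NUMBER" \
    '.containerDefinitions[0].image = $img
     | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
           .compatibilities, .registeredAt, .registeredBy)' \
    taskdef.json > new-taskdef.json
aws ecs register-task-definition --cli-input-json file://new-taskdef.json
# Updating by family name makes the service pick up the newest revision
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task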
I've just tried the Debian 7.1 AMI on AWS (from the Marketplace). I have a problem with my user-data script.
It is not executed at boot time. My script works fine with the Amazon AMI but not with Debian (I also tried a trivial script, echo "toto" > /tmp/test.log, but nothing).
Any idea?
Thanks
Matt
P.S.: I start my script with #!/bin/bash
In fact, the user-data script is executed only once. If you create an AMI based on the Debian Marketplace AMI, then by the time you launch your "custom" AMI, the user-data has already been executed, back when you first started the Debian base AMI.
If you want the user-data to be executed on a custom AMI, you must re-enable the init.d script ec2-run-user-data with insserv:
sudo -i
insserv -d ec2-run-user-data
And now you can create an AMI.
Matt