Elastic Beanstalk application can't find private Docker image

I am attempting to set up a simple Elastic Beanstalk application with the following settings:
Web server environment
Predefined configuration: Docker
Environment type: single instance
My Dockerrun.aws.json has the authentication block, which was created by running the docker login command on my local machine. I have added those credentials in the form of a .dockercfg file to an S3 bucket and given the necessary IAM roles to the EC2 instance so that it can access the config file with the authentication information.
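For reference, a sanitized sketch of my Dockerrun.aws.json; the bucket name, key, and port are placeholders for my real values:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-credentials-bucket",
    "Key": ".dockercfg"
  },
  "Image": {
    "Name": "mydockeruser/my-docker-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}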
However, when I attempt to start up the instance, the creation process fails and the log tells me:
Error: image mydockeruser/my-docker-app:latest not found
It says the image can't be found, but the image IS there (in a private repo), with the "latest" tag. To prove it to myself, I can go to https://hub.docker.com/r/mydockeruser/my-docker-app/tags/ and I can see the image with tag name of "latest" including the size of the image, etc.
Any idea why Elastic Beanstalk wouldn't be able to find the image during the application setup process?

I had the wrong format in my .dockercfg file. The format changed as of (I believe) Docker 1.6. The working file looks like this:
{
  "https://index.docker.io/v1/": {
    "auth": "mywackyauthcode",
    "email": "myemail@email.org"
  }
}
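Note that newer Docker versions write ~/.docker/config.json with the credentials nested under an auths key, which (as far as I can tell) Elastic Beanstalk will not accept; if your file looks like the following, strip the outer auths wrapper so it matches the block above:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "mywackyauthcode",
      "email": "myemail@email.org"
    }
  }
}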

Related

How to create an administration account for Keycloak on AWS ECS

I am working on AWS ECS. I have uploaded a Keycloak image to ECS, but when I run the task and open it using the public IP, I have a problem with the administration account: there is no admin account at first, and I am not able to create one.
What I have done: I created a task definition using the jboss/keycloak:latest image URL, then created a cluster and ran a task using that task definition.
Issue: creating an admin account on the running task.
Thanks for your help.
If you are using the official Keycloak image, you can pass the following environment variables to generate an admin user:
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=password
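If it helps, in an ECS task definition these go in the container's environment array; a minimal sketch, with the family, name, and memory values as placeholders:
{
  "family": "keycloak",
  "containerDefinitions": [
    {
      "name": "keycloak",
      "image": "jboss/keycloak:latest",
      "essential": true,
      "memory": 1024,
      "environment": [
        { "name": "KEYCLOAK_USER", "value": "admin" },
        { "name": "KEYCLOAK_PASSWORD", "value": "password" }
      ]
    }
  ]
}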
Or, if you are not using the Keycloak image (maybe you built your own), you can run the following command directly, which does the same thing:
/opt/jboss/keycloak/bin/add-user-keycloak.sh --user "admin" --password "password"

How do I use AWS credentials with Jenkins to deploy to Elastic Beanstalk?

I have entered AWS credentials in Jenkins at /credentials; however, they do not show up in the drop-down list for the Post Build steps in the AWS Elastic Beanstalk plugin.
If I click Validate Credentials, I get this strange error.
Failure
com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider@5c932b96: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@32abba7: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
I don't know where it got that IP address. When I search for that IP in the Jenkins directory, I turn up with
-bash-4.2$ grep -r 169.254.169.254 *
plugins/ec2/AMI-Scripts/ubuntu-init.py:conn = httplib.HTTPConnection("169.254.169.254")
The contents of that file is here: https://pastebin.com/3ShanSSw
There are actually 2 different Amazon Elastic Beanstalk plugins.
AWSEB Deployment Plugin, v 0.3.19, Aldrin Leal
AWS Beanstalk Publisher Plugin, v 1.7.4, David Tanner
Neither of them works. Neither will display the credentials in the drop-down list. Since updating Jenkins, I am unable even to show "Deploy to Elastic Beanstalk" as a post-build step for the first one (v0.3.19), even though it is the only one installed.
For the 2nd plugin (v1.7.4), I see this screenshot: [screenshot not included]
When I fill in what I can and run it, it gives the error:
No credentials provided for build!!!
Environment found (environment id='e-yfwqnurxh6', name='appenvironment'). Attempting to update environment to version label 'sprint5-13'
'appenvironment': Attempt 0/5
'appenvironment': Problem:
com.amazonaws.services.elasticbeanstalk.model.AWSElasticBeanstalkException: No Application Version named 'sprint5-13' found. (Service: AWSElasticBeanstalk; Status Code: 400; Error Code: InvalidParameterValue; Request ID: af9eae4f-ad56-426e-8fe4-4ae75548f3b1)
I tried to add an S3 sub-task to the Elastic Beanstalk deployment, but it failed with an exception.
No credentials provided for build!!!
Root File Object is a file. We assume its a zip file, which is okay.
Uploading file awseb-4831053374102655095.zip as s3://appname-sprint5-15.zip
ERROR: Build step failed with exception
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 7C4734153DB2BC36; S3 Extended Request ID: x7B5HflSeiIw++NGosos08zO5DxP3WIzrUPkZOjjbBv856os69QRBVgic62nW3GpMtBj1IxW7tc=), S3 Extended Request ID: x7B5HflSeiIw++NGosos08zO5DxP3WIzrUPkZOjjbBv856os69QRBVgic62nW3GpMtBj1IxW7tc=
The Jenkins plugins are hopelessly out of date and unmaintained. I added the Post Build Task plugin, installed the eb tool as the jenkins user, ran eb init in the job directory, and edited .elasticbeanstalk/config.yml to add the lines:
deploy:
  artifact: target/AppName-Sprint5-SNAPSHOT-bin.zip
Then I entered the shell command to deploy the build:
/var/lib/jenkins/.local/bin/eb deploy -l sprint5-${BUILD_NUMBER}
For the Elastic Beanstalk plugin, the right place to configure the AWS key is the Jenkins master configuration page:
http://{jenkinsURL}/configure

Django (django-ses-gateway) uses EU-WEST-1 as the default region instead of US-EAST-1

I have an application on EC2 that needs to send email.
I am using Django on AWS, with the 'django-ses-gateway' module to send email.
EC2 is configured, so the ~/.aws folder has the appropriate credentials file, with the region under the 'default' profile.
However, whenever the application tries to send an email, by default it tries to use the EU-WEST-1 region, which is not the expected one; it should use US-EAST-1.
Because of the wrong region, the application fails.
PS:
I also verified that the 'settings.py' file is not overriding the region.
Finally, I got the solution.
The 'django_ses_gateway' (version 0.1.1) module has a bug.
By default, it selects the EU-WEST-1 region,
so the 'sending_mail.py' file needs a correction so that it does not hard-code the EU region.
The location of the installed package can be found using the 'pip3 show django-ses-gateway' command.

404 not found on AWS

I have a Spring Boot project that runs normally on localhost, but when I upload the WAR to AWS using Elastic Beanstalk, I get a 404 Not Found.
The access to DynamoDB works fine from the CLI.
The variables in the properties file are the same as the ones I use to access DynamoDB from the CLI.
properties file:
amazon.dynamodb.endpoint=http://dynamodb.us-west-2.amazonaws.com
amazon.aws.accesskey=***
amazon.aws.secretkey=***
CLI:
aws configure
AWS Access Key ID [********************]:
AWS Secret Access Key [********************]:
Default region name [us-west-2]:
Default output format [json]:
I don't know why I am getting the 404 Not Found on AWS.
Elastic Beanstalk is set to listen on port 5000 by default, so you just need to set server.port=5000 in your application.properties file before creating your WAR. Or you can set a SERVER_PORT environment property to change it from 5000 to the 8080 that Spring uses.
Another issue I ran into was creating the environment using the Java platform when, for a WAR, you should actually use the Tomcat platform.

Elastic Beanstalk multicontainer Docker fails

I want to deploy a multi-container application to Elastic Beanstalk, but I get the following error.
Error 1: The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.
I have set up the VPC with just the public subnet and a security group that allows all traffic, both inbound and outbound. I know this is not encouraged for production-level deployments, but I have reduced the complexity to find the cause of the error.
So, the load balancer and the EC2 instance are inside the same public subnet, which is attached to the internet gateway. They both share the same security group allowing all traffic.
Before the above error, I also get another error stating:
Error 2: No ecs task definition (or empty definition file) found in environment
That said, I have bundled my Dockerrun.aws.json file along with the .ebextensions folder inside the source bundle that Beanstalk uses for deployment.
All these errors drill down to two questions:
1. I cannot understand why the "No ecs task definition" error appears when I have packaged my Dockerrun.aws.json file containing containerDefinitions.
2. Since there is no ECS task running, there is nothing running on the instance. Is this why Beanstalk and the ELB cannot communicate with the instance? (Assuming my public subnet and allow-all security group are not the problem.)
The problem was the VPC. Even though I had a simple VPC with just a public subnet, Beanstalk could not talk to the instance, and so could not deploy the ECS task definition and Docker containers to the instance.
Creating two subnets, public and private, with a NAT instance in the public subnet acting as the router for the instances in the private subnet, made the setup work for me: I could deploy the ECS task definition successfully to the EC2 instance in the private subnet.
I found this question because I got the same error. Here are the steps that worked for me to actually deploy a multi-container app on Beanstalk:
To get past this particular error, I used the eb CLI tools. For some reason, using eb deploy instead of zipping and uploading myself fixed this. It didn't actually work, but it gave me a new error.
So, I changed my Dockerrun.aws.json, a file format that needs WAY more documentation, until I stopped getting errors about that.
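For reference, a minimal multicontainer Dockerrun.aws.json sketch, with my names, memory, and ports swapped for placeholders. Two things that bit me: the version must be 2 (the single-container format is 1), and the file has to sit at the root of the source bundle:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "[DockerHub organization]/[repo name]:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    }
  ]
}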
Then, I got an even better error!
ERROR: [Instance: i-0*********0bb37cf] Command failed on instance.
Return code: 1 Output: (TRUNCATED)..._api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Failed to download authentication credentials [config file name] from [bucket name].
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/02update-credentials.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Per this part of the docs, the way to solve this is to:
1. Open the Roles page in the IAM console.
2. Choose aws-elasticbeanstalk-ec2-role.
3. On the Permissions tab, under Managed Policies, choose Attach Policy.
4. Select the managed policy for the additional services that your application uses, for example AmazonS3FullAccess or AmazonDynamoDBFullAccess. (For our problem, the S3 one.)
5. Choose Attach Policies.
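Alternatively, if you'd rather not grant full S3 access, a narrower policy scoped to just the credentials file should work; a sketch, with the bucket and key as placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[bucket name]/[config file name]"
    }
  ]
}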
This part got really exciting, because I got yet another error: Authentication credentials are not in JSON format as expected. Please generate the credentials using 'docker login'. (Keep in mind, I tried to follow the instructions on how to do this to the letter, but, oh well.) Turns out this one was on me: I had malformed JSON in my DockerHub auth file stored on S3. I renamed the file to dockercfg.json to get syntax checking, and it seems Beanstalk/ECS is okay with having .json as part of the name, because this time... there was a different error: CannotPullContainerError: Error: image [DockerHub organization]/[repo name]:latest not found. Hmm, maybe there was a typo? Let's check:
$ docker run -it [DockerHub organization]/[repo name]:latest
Unable to find image '[DockerHub organization]/[repo name]:latest' locally
latest: Pulling from [DockerHub organization]/[repo name]
Ok, the repo is there. So... my auth is bad? Yup, turns out I had followed an example in the DockerHub auth docs of what you shouldn't do. Your dockercfg.json should look like:
{
  "https://index.docker.io/v1/": {
    "auth": "ZWpMQ=Vyd5zOmFsluMTkycN0ZGYmbn=WV2FtaGF2",
    "email": "your@email.com"
  }
}
There were a few more errors (the volume sourcePath has to be an absolute path! That's what the invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed message means; see the sketch below), but it eventually deployed. Sorry for the novel; hoping it helps someone.
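For the volume errors, a sketch of a v2 file with a valid volume mount; the volume name, sourcePath, and containerPath are placeholders. Note the sourcePath is absolute and the volume name uses only the allowed characters:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "app-data",
      "host": {
        "sourcePath": "/var/app/data"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "[DockerHub organization]/[repo name]:latest",
      "essential": true,
      "memory": 128,
      "mountPoints": [
        {
          "sourceVolume": "app-data",
          "containerPath": "/data"
        }
      ]
    }
  ]
}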