Previously I used a single-container Docker Elastic Beanstalk environment. It was able to use my login credentials, stored on S3, to pull an image from a private Docker Hub repository.
However, I created a new Multicontainer Docker environment, and since then I always get this error:
change="{TaskArn:arn:aws:ecs:eu-west-1:188125317072:task/dbf02781-8140-422a-9b81-93d83441747d
ContainerName:aws-first-test Status:4
Reason:CannotPullContainerError:
Error: image test/awstest:latest not found ExitCode:<nil> PortBindings:[] SentStatus:NONE}"
(I'm using exactly the same container that worked before.)
The image does exist, and the environment is in the same region (Ireland, eu-west-1) as the login credentials.
My Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "Bucket": "docker-ireland",
    "Key": ".dockercfg"
  },
  "containerDefinitions": [
    {
      "name": "aws-first-test",
      "image": "test/awstest",
      "memory": 250
    },
    {
      "name": "aws-second-test",
      "image": "test/awstest",
      "memory": 250
    }
  ]
}
Dockerrun.aws.json is case sensitive, and in version 2.0 the keys Authentication, Bucket and Key changed to the lower-case authentication, bucket and key.
This answer is from the Amazon AWS forums: https://forums.aws.amazon.com/message.jspa?messageID=667098
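A minimal sketch of the corrected block, assuming the bucket and key from the question (the containerDefinitions section stays unchanged):

"authentication": {
  "bucket": "docker-ireland",
  "key": ".dockercfg"
}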
In my case this error was caused by having something like the following in my S3 config file:
{
  "server": {
    "auth": "*****",
    "email": "*****"
  }
}
Not kidding, I had the literal keyword "server" instead of the registry URL (https://index.docker.io/v1/ for Docker Hub).
I must've copied it from some blog or documentation. Feeling dumb already.
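For reference, a corrected version of that config file, keyed by the registry URL (the auth and email values stay redacted as above):

{
  "https://index.docker.io/v1/": {
    "auth": "*****",
    "email": "*****"
  }
}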
I ran aws configure and ask configure after installing ask-cli.
While setting up a new skill using ask new, I selected NodeJS and "AWS with CloudFormation".
Trying to deploy the skill using ask deploy, I get [Error]: CliError: The CloudFormation deploy failed for Alexa region "default": Access Denied.
I tried setting the region in ~/.aws/config and in ~/.aws/credentials, but I still run into the same error.
What should be done to fix the issue?
(Screenshots: skill creation and the error deploying the skill.)
I've been able to deploy.
After running aws configure, I called ask new, and I think the solution was to select AWS Lambda rather than AWS with CloudFormation.
I wanted to use an existing skill that I had previously created in the web UI, so I created two folders: lambda and skill-package. Then I ran ask init, saying I don't want to use AWS CloudFormation to deploy.
Next, I added my region in ask-resources.json, under skillInfrastructure:
{
  "askcliResourcesVersion": "2020-03-31",
  "profiles": {
    "default": {
      "skillMetadata": {
        "src": "./skill-package"
      },
      "code": {
        "default": {
          "src": "./lambda"
        }
      },
      "skillInfrastructure": {
        "type": "@ask-cli/lambda-deployer",
        "userConfig": {
          "runtime": "nodejs12.x",
          "handler": "index.js",
          "awsRegion": "eu-west-1"
        }
      }
    }
  }
}
And I finished with ask deploy, which worked!
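For reference, a minimal sketch of the full command sequence described above (assuming the folder layout and ask-resources.json shown):

aws configure    # set AWS credentials and default region
ask init         # answer "no" to deploying with AWS CloudFormation
# edit ask-resources.json: set skillInfrastructure.userConfig.awsRegion
ask deploy       # deploys the skill package and the Lambda code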
I searched for how to configure stickiness in Elastic Beanstalk, but I couldn't find a way to set it up using AWS CloudFormation. Can anyone help me do that?
Thanks in advance.
If we look at the definition of sticky sessions, it says "sticky sessions are a mechanism to route requests to the same target in a target group".
In Elastic Beanstalk, a target group is represented by a process, so we need to set up stickiness at the process level using option settings.
You can take two approaches here (the samples below are for the "default" process; if you have configured additional processes, modify accordingly, but the implementation remains the same):
Option setting namespace: aws:elasticbeanstalk:environment:process:default
Valid options to set: StickinessEnabled, StickinessLBCookieDuration
1. Specify option settings in your CloudFormation template under the AWS::ElasticBeanstalk::Environment type, as described in the documentation.
Sample:
"Environment": {
  "Properties": {
    "ApplicationName": {
      "Ref": "Application"
    },
    "Description": "AWS Elastic Beanstalk Environment running Python Sample Application",
    "SolutionStackName": {
      "Ref": "SolutionStackName"
    },
    "VersionLabel": "Initial Version",
    "OptionSettings": [
      {
        "Namespace": "aws:elasticbeanstalk:environment:process:default",
        "OptionName": "StickinessEnabled",
        "Value": "true"
      },
      {
        "Namespace": "aws:elasticbeanstalk:environment:process:default",
        "OptionName": "StickinessLBCookieDuration",
        "Value": "43200"
      }
    ]
  },
  "Type": "AWS::ElasticBeanstalk::Environment"
}
2. Configure this at the source bundle level, i.e., create a .config file (say, albstickiness.config) and place it in the .ebextensions folder. In the .config file, set stickiness for the ALB process; a minimal sketch follows below.
A sample can be found in the documentation under the subheading ".ebextensions/alb-default-process.config".
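A minimal sketch of such a file, setting the same two options as the CloudFormation sample above (the filename is just an example):

# .ebextensions/albstickiness.config
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: 'true'
    StickinessLBCookieDuration: '43200'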
You can try the below:
LBCookieStickinessPolicy:
  - PolicyName: myLBPolicy
    CookieExpirationPeriod: '180'
You can read more about sticky sessions here and here.
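For context, a minimal sketch of where that property lives: it belongs to the classic AWS::ElasticLoadBalancing::LoadBalancer resource in a CloudFormation template (the resource name and listener values here are illustrative assumptions):

Resources:
  MyLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones: !GetAZs ''
      LBCookieStickinessPolicy:
        - PolicyName: myLBPolicy
          CookieExpirationPeriod: '180'
      Listeners:
        - LoadBalancerPort: '80'
          InstancePort: '80'
          Protocol: HTTP
          PolicyNames:
            - myLBPolicy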
I uploaded a project on the Multicontainer Docker platform with two containers, say xyz and abc, to AWS Elastic Beanstalk. xyz contains a Tomcat server. I have the following configuration in my project's Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "xyz",
      "image": "<PLACEHOLDER_REPLACED_BY_CICD_TOOLS>",
      "essential": true,
      "memory": 2048,
      "links": [
        "abc"
      ],
      "environment": [
        {
          "name": "ENVIRONMENT",
          "value": "QA"
        },
        {
          "name": "LOG_HOME",
          "value": "/usr/local/tomcat/logs"
        },
        ...
      ],
      "mountPoints": [
        {
          "sourceVolume": "awseb-logs-xyz",
          "containerPath": "/usr/local/tomcat/logs"
        }
      ],
      ...
    },
    {
      "name": "abc",
      "image": "image123",
      "essential": true,
      ...
    }
  ]
}
But I am not able to view data in the health section of Elastic Beanstalk.
What I have done so far to resolve this issue:
I read https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-serverlogs.html and learned that Elastic Beanstalk requires a special logging format for the multi-container health page to work.
For testing purposes, I manually created an error log file in that format by accessing the EC2 instance. The file I created on the EC2 host at /var/log/containers/xyz (where the health agent reads logs) was also mapped properly to Tomcat's log location (/usr/local/tomcat/logs) in the xyz Docker container.
But I still could not see changes in the enhanced health overview section.
From this AWS support site:
From the Elastic Beanstalk console, verify that enhanced health reporting is enabled:
Choose Configuration, and then on the Health panel under Web Tier, choose the edit gear.
Under Health Reporting, ensure the System type is set to Enhanced.
64-bit Amazon Linux 2016.xx vx.x.x running Node.js platform:
Ensure that the correct proxy server is configured:
Choose Configuration, and then on the Software Configuration panel under Web Tier, choose the edit gear.
In Container Options, ensure you have a Proxy server selected.
If Proxy server is set to none, the application log file is not generated under /var/log/nginx/healthd/ and health reporting does not generate data to display.
You can also modify the Node.js logs and location to be compatible with enhanced health log format, then review the healthd configuration file /etc/healthd/config.yaml.
64-bit Amazon Linux 2016.xx vx.x.x running Multicontainer Docker 2.xx.x:
This platform doesn't come with a proxy server, so you need to ensure that logs are produced in the correct format from your containers and configure healthd to read them. To use enhanced health monitoring in Multicontainer Docker environments, you need to configure healthd to consume these logs.
To provide logs to the health agent, ensure the following:
Logs are in the correct format
Logs are written to /var/log/nginx/healthd/
Log names use the format: application.log.$year-$month-$day-$hour
Logs are rotated once per hour
Logs are not truncated
Note: With the Node.js platform, if you disable the proxy, the logs are not created under /var/log/nginx/healthd/. You must either re-enable the proxy or configure your Node.js application to produce logs under /var/log/nginx/healthd/.
This sample Docker-multicontainer-v2.zip code shows how to manage ebextensions where the healthd configuration is set to read another directory. [...]
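As a rough sketch of what that could look like, an .ebextensions file overwriting the healthd configuration; the keys below are assumptions, not taken from that sample zip, so verify against /etc/healthd/config.yaml on a running instance before relying on them:

# .ebextensions/healthd.config -- hypothetical sketch
files:
  "/etc/healthd/config.yaml":
    mode: "000644"
    owner: root
    group: root
    content: |
      appstat_log_path: /var/log/containers/xyz/application.log  # assumed path
      appstat_unit: sec
      appstat_timestamp_on: completion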
I think this part might be able to help you:
If you are unable to see information for a server in the Enhanced Health Overview, check the healthd service status on the instance and ensure that it’s running. If it is not running, restart the service.
This sample code shows how to check the healthd service status:
$ ps aux | grep healthd
This sample code shows how to restart the healthd service:
[ec2-user@ip-172-31-39-182 ~]$ sudo initctl restart healthd
In addition to @char's answer, specifically for Multicontainer Docker environments, you need to mount the log location and EB will take care of the rest automatically:
Elastic Beanstalk creates log volumes on the container instance, one for each Docker container, at /var/log/containers/containername. These volumes are named awseb-logs-containername and should be mounted to the location within the container file structure where logs are written.
This is from the Multicontainer Docker example:
{
  "name": "nginx-proxy",
  "image": "nginx:alpine",
  "essential": true,
  "memoryReservation": 8,
  "mountPoints": [
    {
      "sourceVolume": "awseb-logs-nginx-proxy",
      "containerPath": "/var/log/nginx"
    }
  ],
  "portMappings": [
    {
      "hostPort": 80,
      "containerPort": 80
    }
  ]
}
The key is the sourceVolume awseb-logs-nginx-proxy, which follows the pattern awseb-logs-<container_name>.
This is correctly documented in the OP's config, but it was the piece I was missing, and it is part of the complete answer.
I am fairly new to AWS. I wrote a script that creates an Elastic Beanstalk server and deploys code to it, which works fine.
I am able to get the IP address and instance ID using
aws ec2 describe-instances
I know a typical HTTP URL looks like this
http://(cname-prefix).(region).elasticbeanstalk.com
and I used it to "generate" the URL in the script
But I want to check if we can get the URL using the CLI.
The AWS CLI has a command for this: it describes the Elastic Beanstalk environment, and one of the resulting values is the EndpointURL.
For load-balanced, autoscaling environments, it returns the URL of the load balancer. For single-instance environments, the IP address of the instance is returned.
See the docs:
aws elasticbeanstalk describe-environments --environment-names my-env
The output looks like this:
{
  "Environments": [
    {
      "ApplicationName": "my-app",
      "EnvironmentName": "my-env",
      "VersionLabel": "7f58-stage-150812_025409",
      "Status": "Ready",
      "EnvironmentId": "e-rpqsewtp2j",
      "EndpointURL": "awseb-e-w-AWSEBLoa-1483140XB0Q4L-109QXY8121.us-west-2.elb.amazonaws.com",
      "SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
      "CNAME": "my-env.elasticbeanstalk.com",
      "Health": "Green",
      "AbortableOperationInProgress": false,
      "Tier": {
        "Version": " ",
        "Type": "Standard",
        "Name": "WebServer"
      },
      "DateUpdated": "2015-08-12T18:16:55.019Z",
      "DateCreated": "2015-08-07T20:48:49.599Z"
    }
  ]
}
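If you only need the address itself, a minimal sketch using the CLI's built-in JMESPath filtering (swap CNAME for EndpointURL as needed):

aws elasticbeanstalk describe-environments \
  --environment-names my-env \
  --query "Environments[0].CNAME" \
  --output text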
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#docker-singlecontainer-dockerrun-privaterepo
I am following the instructions here to connect to a private Docker Hub container from Elastic Beanstalk, but it stubbornly refuses to work. It seems that when calling docker login in Docker 1.12 the resulting file has no email property, but it sounds like AWS expects one, so I created a file called dockercfg.json that looks like this:
{
  "https://index.docker.io/v1/": {
    "auth": "Y2...Fz",
    "email": "c...n@gmail.com"
  }
}
The relevant piece of my Dockerrun.aws.json file looks like this:
"Authentication": {
  "Bucket": "elasticbeanstalk-us-west-2-9...4",
  "Key": "dockercfg.json"
},
And I have the file uploaded at the root of the S3 bucket. Why do I still get errors saying Error: image c...6/w...t:23 not found. Check snapshot logs for details? I am sure the names are right and that this would work if it were a public repository. The full error is below. I am deploying from GitHub with CircleCI, if that makes a difference; happy to provide any other information needed.
INFO: Deploying new version to instance(s).
WARN: Failed to pull Docker image c...6/w...t:23, retrying...
ERROR: Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
ERROR: [Instance: i-06b66f5121d8d23c3] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b-project
Error: image c...6/w...t:23 not found
Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-06b66f5121d8d23c3'. Aborting the operation.
ERROR: Failed to deploy application.
ERROR: Failed to deploy application.
EDIT: Here's the full Dockerrun file. Note that %BUILD_NUM% is just an int; I can verify that works.
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "elasticbeanstalk-us-west-2-9...4",
    "Key": "dockercfg.json"
  },
  "Image": {
    "Name": "c...6/w...t:%BUILD_NUM%",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
EDIT: Also, I have verified that this works if I make this Docker Hub container public.
OK, let's do this:
Looking at the same doc page,
With Docker version 1.6.2 and earlier, the docker login command creates the authentication file in ~/.dockercfg in the following format:
{
  "server": {
    "auth": "auth_token",
    "email": "email"
  }
}
You already got this part correct, I see. Please double-check the cases below one by one:
1) Are you hosting the S3 bucket in the same region?
The Amazon S3 bucket must be hosted in the same region as the environment that is using it. Elastic Beanstalk cannot download files from an Amazon S3 bucket hosted in other regions.
2) Have you checked the required permissions?
Grant permissions for the s3:GetObject operation to the IAM role in the instance profile (see the policy sketch after this list). For details, see Managing Elastic Beanstalk Instance Profiles.
3) Have you got your S3 bucket info in your config file? (I think you got this too)
Include the Amazon S3 bucket information in the Authentication (v1) or authentication (v2) parameter in your Dockerrun.aws.json file.
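For check 2), a minimal sketch of the policy statement to grant on the instance profile role (the bucket name and key here are placeholders, not your actual values):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-eb-bucket/dockercfg.json"
    }
  ]
}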
I can't see your permissions or your environment's region, so please double-check those.
If that does not work, I'd upgrade to Docker 1.7+ if possible and use the corresponding ~/.docker/config.json style.
Depending on your Docker version, this file is saved as either ~/.dockercfg or ~/.docker/config.json:
cat ~/.docker/config.json
Output:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "zq212MzEXAMPLE7o6T25Dk0i"
    }
  }
}
Important:
Newer versions of Docker create a configuration file as shown above, with an outer auths object. The Amazon ECS agent only supports dockercfg authentication data in the format below, without the auths object. If you have the jq utility installed, you can extract this data with the following command:
cat ~/.docker/config.json | jq .auths
Output:
{
  "https://index.docker.io/v1/": {
    "auth": "zq212MzEXAMPLE7o6T25Dk0i",
    "email": "email@example.com"
  }
}
Create a file called my-dockercfg with the above content.
Upload the file into the S3 bucket under the key specified in the Dockerrun.aws.json file (my-dockercfg); a sketch of the upload command follows the example below.
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "elasticbeanstalk-us-west-2-618148269374",
    "key": "my-dockercfg"
  }
}
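A minimal sketch of the upload step, assuming the bucket and key from the example above:

aws s3 cp my-dockercfg s3://elasticbeanstalk-us-west-2-618148269374/my-dockercfg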