Add Docker Run command options in AWS Dockerrun.aws.json file

In my docker-compose.yml file I can do the following:
splash:
  image: scrapinghub/splash
  command: --max-timeout 300
  ports:
    - "8050:8050"
As you can see, I just pass in the additional options that I want to add to the docker run command that is executed in the image.
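For context, the docker run equivalent of that Compose service is roughly:

docker run -p 8050:8050 scrapinghub/splash --max-timeout 300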
When I try to apply this to my Dockerrun.aws.json file that is deployed to Amazon Elastic Beanstalk, I get an error when I write the following:
{
  "name": "splash",
  "image": "scrapinghub/splash",
  ...
  "command": [
    "--max-timeout 300"
  ]
},
So the question is: how do I add the --max-timeout parameter to the default command that my Docker image runs with in my AWS deployment?

You can't customize how AWS starts a container by playing with docker run options. You have to use .ebextensions.
If you want to increase the timeout, create a file in the .ebextensions subdirectory of your ZIP package (which should already contain the Dockerrun.aws.json file):
option_settings:
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 300
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 300
  - namespace: aws:elbv2:loadbalancer
    option_name: IdleTimeout
    value: 300
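For reference, the deployment package layout this describes would look roughly like the following (the name timeout.config is arbitrary; the file contains the option_settings shown above):

my-app.zip
├── Dockerrun.aws.json
└── .ebextensions/
    └── timeout.config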

Related

GCP Helm Cloud Builder

Just curious, why isn't there a helm cloud builder officially supported? It seems like a very common requirement, yet I'm not seeing one in the list here:
https://github.com/GoogleCloudPlatform/cloud-builders
I was previously using alpine/helm in my cloudbuild.yaml for my helm deployment as follows:
steps:
  # Build app image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
      - ./cloudbuild/$_CONTAINER_NAME/
  # Push my-app image to Google Cloud Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
  # Configure a kubectl workspace for this project
  - name: gcr.io/cloud-builders/kubectl
    args:
      - cluster-info
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER
      - KUBECONFIG=/workspace/.kube/config
  # Deploy with Helm
  - name: alpine/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - KUBECONFIG=/workspace/.kube/config
      - TILLERLESS=false
      - TILLER_NAMESPACE=kube-system
      - USE_GKE_GCLOUD_AUTH_PLUGIN=True
timeout: 1200s
substitutions:
  # substitutionOption: ALLOW_LOOSE
  # dynamicSubstitutions: true
  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: demo-gke
  _IMAGE_REPO: us-east1-docker.pkg.dev/fakeproject/my-docker-repo
  _CONTAINER_NAME: app2
options:
  logging: CLOUD_LOGGING_ONLY
  # In this option we are providing the worker pool name that we have created in the previous step
  workerPool: 'projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool'
And this was working with no issues. Then recently it just started failing with the following error, so I'm guessing a change was made recently:
Error: Kubernetes cluster unreachable: Get "https://10.10.2.2/version": getting credentials: exec: executable gke-gcloud-auth-plugin not found
I get this error regularly on VMs and can work around it by setting USE_GKE_GCLOUD_AUTH_PLUGIN=True, but that does not seem to fix the issue here if I add it to the env section. So I'm looking for recommendations on how to use Helm with Cloud Build. alpine/helm was just something I randomly tried and it was working for me up until now, but there are probably better solutions out there.
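For reference, the workaround on a VM is roughly the following (assuming the standard gcloud SDK install; with apt-based installs the plugin comes from the OS package instead):

gcloud components install gke-gcloud-auth-plugin
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials demo-gke --region us-east1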
Thanks!

Using --net=host in Tekton sidecars

I am creating a Tekton project which will spawn Docker containers which in turn will run a few kubectl commands. I have accomplished this by using the docker:dind image as a sidecar in Tekton and setting:
securityContext:
  privileged: true
env:
However, one of the tasks is failing, since it needs the equivalent of --net=host from the docker run example.
I have tried to set a podTemplate with hostNetwork: true, but then the task with the sidecar fails to start Docker.
Any idea how I could implement --net=host in the task YAML file? It would be really helpful.
Snippet of my task with the sidecar:
sidecars:
  - image: mypvtreg:exv1
    name: mgmtserver
    args:
      - --storage-driver=vfs
      - --userland-proxy=false
      # - --net=host
    securityContext:
      privileged: true
    env:
      # Write generated certs to the path shared with the client.
      - name: DOCKER_TLS_CERTDIR
        value: /certs
    volumeMounts:
      - mountPath: /certs
As commented by @SYN: using docker:dind as a sidecar, your builder container, executing in your Task steps, should connect to 127.0.0.1. That's how you would talk to your dind sidecar.
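For example, a step in the same Task can reach the dind sidecar over the loopback interface by pointing DOCKER_HOST at it instead of relying on host networking. A rough sketch, assuming the usual docker:dind TLS setup with certs shared under /certs (the volume name dind-certs is hypothetical):

steps:
  - name: docker-client
    image: docker:latest
    env:
      # Talk to the dind sidecar over loopback instead of --net=host.
      - name: DOCKER_HOST
        value: tcp://127.0.0.1:2376
      - name: DOCKER_TLS_VERIFY
        value: "1"
      - name: DOCKER_CERT_PATH
        value: /certs/client
    volumeMounts:
      - name: dind-certs # hypothetical volume shared with the sidecar
        mountPath: /certs
    script: |
      docker version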

How to do logging in Amazon Web Services (AWS)?

I have a project built in Golang and deployed on a Docker instance in AWS.
Internally I create a log file where the program writes several logs.
How can I access that log file?
Is there another, more correct way to do logging?
Thanks
You could mount the log file from your container to your EC2 host. You can do this by using the -v flag when running your container:
docker run -v /var/log/my_host_log_file.log:/var/log/your_container_log_file.log your-image
Alternatively, you can configure your app to log to stdout and use syslog as your log driver (using the --log-driver=syslog switch). Your container logs will then be written to /var/log/messages on your host.
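For example (a minimal sketch):

docker run --log-driver=syslog your-image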
If you use AWS, I would suggest sending logs directly to AWS CloudWatch.
First create a new log group in AWS CloudWatch, for example "Production". In your docker-compose.yml (or via docker run) add the awslogs log driver:
logging:
  driver: "awslogs"
  options:
    awslogs-region: "eu-central-1"
    awslogs-group: "Production"
    awslogs-stream: "MyApp"
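The docker run equivalent would be roughly the following (same region, group, and stream as above):

docker run --log-driver=awslogs \
  --log-opt awslogs-region=eu-central-1 \
  --log-opt awslogs-group=Production \
  --log-opt awslogs-stream=MyApp \
  your-image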
Next, create an IAM user with access to AWS CloudWatch and add the credentials to the Docker host.
Example IAM Policy:
"Version" "2012-10-17"
"Statement"
"Action" "logs:CreateLogStream" "logs:PutLogEvents" "Effect" "Allow" "Resource"
On Ubuntu with systemd:
"Version" "2012-10-17"
"Statement"
"Action"
"logs:CreateLogStream"
"logs:PutLogEvents"
"Effect"
"Allow" "Resource"
And add the credentials to the Docker service's systemd unit configuration (e.g. a drop-in file):
[Service]
Environment="AWS_ACCESS_KEY_ID=<aws_access_key_id>"
Environment="AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>"
Run:
systemctl daemon-reload
service docker restart
Now your logs should appear in AWS Cloudwatch.
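To check from the command line, something like this should list the events (using the log group and stream names from above):

aws logs get-log-events --log-group-name Production --log-stream-name MyApp --region eu-central-1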
Thanks for the reply.
After a while looking for the solution to the problem, I found it!
Firstly, I needed to mount the log file from inside the container onto the Docker host.
To do this I added a JSON file called Dockerrun.aws.json to the root folder of my project
( http://docs.aws.amazon.com/es_es/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#create_deploy_docker_image_dockerrun )
That file declares the shared folder (volumes) between the Docker host and the container where I save my log file. This is equivalent to adding the -v flag to the docker run command (https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-data-volume). I do it this way because I cannot add a mount to a running instance and I can't stop it via SSH.
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/var/log/",
      "ContainerDirectory": "/go/src/app/log"
    }
  ]
}
Then, to tell AWS that I want to include my log file when I request logs (tail (last 100 lines), bundle, or rotate), I add these files to the .ebextensions folder in my project directory. ( http://docs.aws.amazon.com/en_us/elasticbeanstalk/latest/dg/using-features.logging.html#health-logs-extend )
log_bundle.config
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/log_bundle.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/application.log
log_rotate.config
files:
  "/opt/elasticbeanstalk/tasks/publishlogs.d/log_rotate.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/application.log
log_tail.config
files:
  "/opt/elasticbeanstalk/tasks/taillogs.d/log_tail.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/application.log
Finally, I haven't tried Amazon CloudWatch yet, but that is the next step.
Regards
If you use ELK (Elasticsearch, Logstash, Kibana), I would suggest using logrus.
Get the library
go get github.com/sirupsen/logrus
Then in your project
package main

import (
	"net"

	// Logstash hook for logrus (assumed: github.com/bshuster-repo/logrus-logstash-hook).
	logrustash "github.com/bshuster-repo/logrus-logstash-hook"
	"github.com/sirupsen/logrus"
)

var log = logrus.New()

func main() {
	// Connect to Logstash; replace "logstash-address" with your host:port.
	conn, err := net.Dial("tcp", "logstash-address")
	if err != nil {
		log.Fatal(err)
	}
	hook := logrustash.New(conn, logrustash.DefaultFormatter(logrus.Fields{"type": "my-app"}))
	log.Hooks.Add(hook)
	log.Info("Hello World!")
}

AWS Elastic Beanstalk: How to use environment variables in ebextensions?

We are trying to store environment-specific application configuration files in S3.
The files are stored in different subdirectories which are named after the environment and also have the environment as part of the file name.
Examples are
dev/application-dev.properties
stg/application-stg.properties
prd/application-prd.properties
The Elastic Beanstalk environments are named dev, stg, prd and alternatively I also have an environment variable defined in Elastic Beanstalk named ENVIRONMENT which can be dev, stg or prd.
My question now is, how do I reference the environment name or ENVIRONMENT variable when downloading the configuration file from a config file in .ebextensions?
I tried using a {"Ref": "AWSEBEnvironmentName" } reference in .ebextensions/myapp.config but get a syntax error when deploying.
The content of .ebextensions/myapp.config is:
files:
  /config/application-`{"Ref": "AWSEBEnvironmentName" }`.properties:
    mode: "000666"
    owner: webapp
    group: webapp
    source: https://s3.amazonaws.com/com.mycompany.mybucket/`{"Ref": "AWSEBEnvironmentName" }`/application-`{"Ref": "AWSEBEnvironmentName" }`.properties
    authentication: S3Access
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: com.mycompany.api.config
The error I get is:
The configuration file .ebextensions/myapp.config in application version
manualtest-18 contains invalid YAML or JSON. YAML exception: Invalid Yaml:
mapping values are not allowed here in "<reader>", line 6, column 85:
... .config/stg/application-`{"Ref": "AWSEBEnvironmentName" }`.prop ... ^ ,
JSON exception: Invalid JSON: Unexpected character (f) at position 0..
Update the configuration file.
What is the correct way of referencing an environment variable in a .ebextensions config file in AWS Elastic Beanstalk?
Your .ebextensions config file was almost correct. Substituting the environment variable or AWS resource name into the file name won't work; for that, do as in Mark's answer and rename the file in a container_commands section.
The source option value that accesses the AWS resource name using Ref was correct; it just has to be surrounded by single quotes ('), like below:
files:
  /config/application.properties:
    mode: "000666"
    owner: webapp
    group: webapp
    source: 'https://s3.amazonaws.com/com.mycompany.mybucket/`{"Ref": "AWSEBEnvironmentName" }`/application-`{"Ref": "AWSEBEnvironmentName" }`.properties'
    authentication: S3Access
And to access environment variables, use Fn::GetOptionSetting. Environment variables are in the aws:elasticbeanstalk:application:environment namespace.
The example below accesses an environment variable named ENVIRONMENT in the source option of files:
files:
  "/tmp/application.properties":
    mode: "000666"
    owner: webapp
    group: webapp
    source: 'https://s3.amazonaws.com/com.mycompany.mybucket/`{"Ref": "AWSEBEnvironmentName" }`/application-`{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "ENVIRONMENT", "DefaultValue": "dev"}}`.properties'
    authentication: S3Auth
I struggled to get this working, until I discovered that the Sub function doesn't appear to be available in ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions-functions.html
This means that you need to fall back to Fn::Join and Ref, at least until support for Sub is introduced to ebextensions. It also seems that the files attribute requires a fixed path (and I couldn't use Fn::Join in this context).
My overall solution to this was as follows:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: S3
          buckets: arn:aws:s3:::elasticbeanstalk-xxx
          roleName: aws-elasticbeanstalk-ec2-role
files:
  "/tmp/application.properties":
    mode: "000644"
    owner: root
    group: root
    source: { "Fn::Join" : ["", ["https://s3-xxx.amazonaws.com/elasticbeanstalk-xxx/path/to/application-", { "Ref" : "AWSEBEnvironmentName" }, ".properties" ]]}
    authentication: S3Auth
container_commands:
  01-apply-configuration:
    command: mkdir -p config && mv /tmp/application.properties config
This will result in an application.properties file (without the environment name qualifier) in a config directory next to the deployed application instance.
If you want to keep the name of the environment as part of the file name using this approach, you will need to adjust the command that moves the file to use another Fn::Join expression to control the filename.
You are almost there. .ebextensions files use YAML format, while you're trying to use JSON. Use Ref: AWSEBEnvironmentName.
In addition, you can take advantage of the Sub function to avoid a pesky Join:
!Sub "/config/application-${AWSEBEnvironmentName}.properties"

Ansible docker_container 'no Host in request URL', docker pull works correctly

I'm trying to provision my infrastructure on AWS using Ansible playbooks. I have the instance, and am able to provision docker-engine, docker-py, etc. and, I swear, yesterday this worked correctly and I haven't changed the code since.
The relevant portion of my playbook is:
- name: Ensure AWS CLI is available
  pip:
    name: awscli
    state: present
  when: aws_deploy

- block:
    - name: Add .boto file with AWS credentials.
      copy:
        content: "{{ boto_file }}"
        dest: ~/.boto
      when: aws_deploy
    - name: Log in to docker registry.
      shell: "$(aws ecr get-login --region us-east-1)"
      when: aws_deploy
    - name: Remove .boto file with AWS credentials.
      file:
        path: ~/.boto
        state: absent
      when: aws_deploy

- name: Create docker network
  docker_network:
    name: my-net

- name: Start Container
  docker_container:
    name: example
    image: "{{ docker_registry }}/example"
    pull: true
    restart: true
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone
My {{ docker_registry }} is set to my-acct-id.dkr.ecr.us-east-1.amazonaws.com and the result I'm getting is:
"msg": "Error pulling my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example - code: None message: Get http://: http: no Host in request URL"
However, as mentioned, this worked correctly last night. Since then I've made some VPC/subnet changes, but I'm able to ssh to the instance, and run docker pull my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example with no issues.
Googling hasn't gotten me very far, as I can't seem to find other folks with the same error. I'm wondering what changed, and how I can fix it! Thanks!
EDIT: Versions:
ansible - 2.2.0.0
docker - 1.12.3 6b644ec
docker-py - 1.10.6
I had the same problem. Downgrading the docker-compose pip package on the host machine from 1.9.0 to 1.8.1 solved it.
- name: Install docker-compose
  pip: name=docker-compose version=1.8.1
Per this thread: https://github.com/ansible/ansible-modules-core/issues/5775, the real culprit is requests. This fixes it:
- name: fix requests
  pip: name=requests version=2.12.1 state=forcereinstall