While trying to restore the EBS volumes from the snapshot, the PVC status is reported as Lost. We are using AWS KMS CMKs, with a policy that includes the kms* permissions shown below. The backup operation went fine; the restore operation is able to restore all k8s resources except the PVC.
k get pvc -n nginx-example
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-logs Lost pvc-bda55207-a1e5-11ea-b7e6-02b82f6b7f4e 0 gp2-encrypt 4m22s
k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-bda55207-a1e5-11ea-b7e6-02b82f6b7f4e 1Gi RWO Retain Released nginx-example/nginx-logs gp2-encrypt 33m
We noticed that the UIDs of the PV and the PVC do not match after the PVC is restored.
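One way to confirm the mismatch (using the names from the output above) is to compare the restored PVC's UID with the UID recorded in the PV's claimRef:
# UID of the restored PVC
kubectl get pvc nginx-logs -n nginx-example -o jsonpath='{.metadata.uid}{"\n"}'
# UID the PV still expects in its claimRef
kubectl get pv pvc-bda55207-a1e5-11ea-b7e6-02b82f6b7f4e -o jsonpath='{.spec.claimRef.uid}{"\n"}'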
The service account used by the Velero pod has the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteSnapshot",
        "kms:Decrypt",
        "ec2:CreateTags",
        "kms:GenerateDataKeyWithoutPlaintext",
        "s3:ListBucket",
        "kms:GenerateDataKeyPairWithoutPlaintext",
        "ec2:DescribeSnapshots",
        "kms:GenerateDataKeyPair",
        "kms:ReEncryptFrom",
        "ec2:CreateVolume",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "ec2:DescribeVolumes",
        "ec2:CreateSnapshot",
        "kms:GenerateDataKey",
        "kms:ReEncryptTo",
        "s3:DeleteObject"
      ],
      "Resource": "*"
    }
  ]
}
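As a sanity check (a sketch; the snapshot ID and key ARN below are placeholders, not values from this environment), the AWS CLI can confirm which CMK encrypted the snapshot and that the key is usable:
# which CMK encrypted the snapshot Velero created
aws ec2 describe-snapshots --snapshot-ids snap-0123456789abcdef0 \
  --query 'Snapshots[].{State:State,Encrypted:Encrypted,KmsKeyId:KmsKeyId}'
# confirm the CMK is enabled
aws kms describe-key --key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id \
  --query 'KeyMetadata.{KeyState:KeyState,Enabled:Enabled}'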
We are using the YAML below to define the StorageClass and PVC:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-encrypt
parameters:
  type: gp2
  encrypted: "true"
  fsType: ext4
  kmsKeyId: arn:aws:kms:us-east-XXXXXX
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-logs
  namespace: nginx-example
  labels:
    app: nginx
spec:
  storageClassName: gp2-encrypt
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
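To double-check which key newly provisioned volumes will use (a quick sketch), the kmsKeyId can be read back from the StorageClass and compared with the CMK that encrypted the snapshots:
kubectl get storageclass gp2-encrypt -o jsonpath='{.parameters.kmsKeyId}{"\n"}'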
Below are logs from the Velero pods:
> time="2020-05-29T19:59:04Z" level=info msg="Starting restore of backup
> cluster-addons/nginx-backup-5" logSource="pkg/restore/restore.go:394"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T19:59:04Z" level=info msg="Restoring cluster level
> resource 'persistentvolumes'" logSource="pkg/restore/restore.go:779"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T19:59:04Z" level=info msg="Getting client for /v1,
> Kind=PersistentVolume" logSource="pkg/restore/restore.go:821"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Restoring resource
> 'persistentvolumeclaims' into namespace 'nginx-example'"
> logSource="pkg/restore/restore.go:777"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Getting client for /v1,
> Kind=PersistentVolumeClaim" logSource="pkg/restore/restore.go:821"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Executing item action for
> persistentvolumeclaims" logSource="pkg/restore/restore.go:1030"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Executing
> AddPVFromPVCAction" cmd=/velero
> logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Adding PV
> pvc-bda55207-a1e5-11ea-b7e6-02b82f6b7f4e as an additional item to
> restore" cmd=/velero
> logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Skipping
> persistentvolumes/pvc-bda55207-a1e5-11ea-b7e6-02b82f6b7f4e because
> it's already been restored." logSource="pkg/restore/restore.go:910"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Executing item action for
> persistentvolumeclaims" logSource="pkg/restore/restore.go:1030"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Executing
> ChangeStorageClassAction" cmd=/velero
> logSource="pkg/restore/change_storageclass_action.go:63"
> pluginName=velero restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Attempting to restore
> PersistentVolumeClaim: nginx-logs"
> logSource="pkg/restore/restore.go:1136"
> restore=cluster-addons/nginx-backup-5-20200529155858
> time="2020-05-29T20:09:04Z" level=info msg="Done executing
> ChangeStorageClassAction" cmd=/velero
> logSource="pkg/restore/change_storageclass_action.go:74"
> pluginName=velero restore=cluster-addons/nginx-backup-5-20200529155858
>
> CloudTrail does not have much information. Would you please let us
> know whether any additional settings are needed here?
I am trying to deploy an AWS Step Function where each state machine runs an AWS Batch job. Everything worked successfully, but now I need to store all the logs for these state machines in a specific CloudWatch log group.
Based on the AWS documentation for Batch, I tried this snippet in my Step Function definition in the CloudFormation template:
*"ContainerOverrides": {
"LogConfiguration": { #also tried logConfiguration
"LogDriver": "awslogs", #also tried logDriver
"Options": { #also tried options
"awslogs-group": "${PipelineLogGroup}",
"awslogs-stream-prefix": "canonical-"
}
}
}*
Under the same "ContainerOverrides" tag, "Environment" is defined and is working correctly. For "Log Configuration", I receiving the build error - 'SCHEMA_VALIDATION_FAILED: The field "LogConfiguration" is not supported by Step Functions (same for logConfiguration).
Isn't it possible to define the "LogConfiguration" of an AWS Batch job through the Step Function definition?
The "LogConfiguration" is not a part of "ContainerOverrides" in "StateMachine" tag. Rather, it is required to be configured with the Batch job definition.
> PipelineJobDefinition:
>   Type: AWS::Batch::JobDefinition
>   Properties:
>     Type: Container
>     ContainerProperties:
>       Memory: 32768
>       LogConfiguration:
>         LogDriver: 'awslogs'
>         Options: {
>           "awslogs-group": !Ref 'LogGroupCreatedinCF',
>           "awslogs-stream-prefix": "batch"
>         }
>       JobRoleArn: !ImportValue pipeline-DataBatchJobRoleArn
>       Vcpus: 16
>       Image: "...."
>     JobDefinitionName: PipelineJobDefinition
>     RetryStrategy:
>       Attempts: 1
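For completeness, a minimal sketch of the matching state in the state machine definition (the state name, job name, and job queue ARN are placeholders): the state keeps only Environment inside ContainerOverrides, while the log configuration is picked up from the job definition above.
"RunPipelineJob": {
  "Type": "Task",
  "Resource": "arn:aws:states:::batch:submitJob.sync",
  "Parameters": {
    "JobName": "canonical-run",
    "JobQueue": "<job-queue-arn>",
    "JobDefinition": "PipelineJobDefinition",
    "ContainerOverrides": {
      "Environment": [
        { "Name": "STAGE", "Value": "test" }
      ]
    }
  },
  "End": true
}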
I tested a Kubernetes deployment with EBS volume mounting on an AWS cluster provisioned by kops. This is the deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment-volume
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - name: nodejs-port
          containerPort: 3000
        volumeMounts:
        - mountPath: /myvol
          name: myvolume
      volumes:
      - name: myvolume
        awsElasticBlockStore:
          volumeID: <volume_id>
After kubectl create -f <path_to_this_yml>, I got the following message in the pod description:
Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation. status code: 403
Looks like this is just a permission issue. OK, I checked the policy for the node role (IAM -> Roles -> nodes.<my_domain>) and found that there were no actions allowing volume manipulation; only the ec2:DescribeInstances action was there by default. So I added the AttachVolume and DetachVolume actions:
{
  "Sid": "kopsK8sEC2NodePerms",
  "Effect": "Allow",
  "Action": [
    "ec2:DescribeInstances",
    "ec2:AttachVolume",
    "ec2:DetachVolume"
  ],
  "Resource": [
    "*"
  ]
},
And this didn't help. I'm still getting that error:
Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation.
Am I missing something?
I found a solution. It's described here.
In kops 1.8.0-beta.1, the master node requires you to tag the AWS volume with:
KubernetesCluster: <clustername-here>
So it's necessary to create the EBS volume with that tag using the AWS CLI:
aws ec2 create-volume --size 10 --region eu-central-1 --availability-zone eu-central-1a --volume-type gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=KubernetesCluster,Value=<clustername-here>}]'
or you can tag it manually in EC2 -> Volumes -> Your volume -> Tags.
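An existing volume can also be tagged from the CLI (the volume ID is a placeholder):
aws ec2 create-tags --resources vol-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=<clustername-here>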
That's it.
EDIT:
The right cluster name can be found in the tags of the EC2 instances that are part of the cluster. The key is the same: KubernetesCluster.
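It can also be looked up with the CLI (the instance ID is a placeholder):
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=i-0123456789abcdef0" "Name=key,Values=KubernetesCluster" \
  --query 'Tags[].Value' --output text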
Question
Please suggest the cause of the error below: an AWS EBS volume cannot be mounted in a pod.
journalctl -b -f -u kubelet
1480 kubelet.go:1625] Unable to mount volumes for pod "nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx". list of unattached/unmounted volumes=[ebs]; skipping pod
1480 pod_workers.go:186] Error syncing pod ddc938ee-edda-11e7-ae06-06bb783bb15c ("nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)"), skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"nginx". list of unattached/unmounted volumes=[ebs]
1480 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pv-ebs" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") pod "nginx" (UID: "ddc938ee-edda-11e7-ae06-06bb783bb15c")
1480 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/vol-0d275986ce24f4304\"" failed. No retries permitted until 2017-12-31 03:34:03.644604131 +0000 UTC m=+6842.543441523 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pv-ebs\" (UniqueName: \"kubernetes.io/aws-ebs/vol-0d275986ce24f4304\") pod \"nginx\" (UID: \"ddc938ee-edda-11e7-ae06-06bb783bb15c\") "
Steps
Deployed K8S 1.9 using kubeadm (without the EBS volume mount, pods work) in AWS (region us-west-1, AZ us-west-1b).
Configured an IAM role as per "Kubernetes - Cloud Providers" and "kubelets failing to start when using 'aws' as cloud provider".
Assigned the IAM role to the EC2 instances as per "Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console".
Deployed the PV/PVC/Pod as in the manifest below.
The status from the kubectl:
kubectl get
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 29m <none> ip-172-31-1-43.us-west-1.compute.internal
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pv-ebs 5Gi RWO Recycle Bound default/pvc-ebs 33m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/pvc-ebs Bound pv-ebs 5Gi RWO 33m
kubectl describe pod nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned nginx to ip-172-31-1-43.us-west-1.compute.internal
Normal SuccessfulMountVolume 27m kubelet, ip-172-31-1-43.us-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-dt698"
Warning FailedMount 6s (x12 over 25m) kubelet, ip-172-31-1-43.us-west-1.compute.internal Unable to mount volumes for pod "nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx".
Manifest
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-ebs
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0d275986ce24f4304
    fsType: ext4
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-ebs
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: ebs
  volumes:
    - name: ebs
      persistentVolumeClaim:
        claimName: pvc-ebs
IAM Policy
Environment
$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "9",
    "gitVersion": "v1.9.0",
    "gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
    "gitTreeState": "clean",
    "buildDate": "2017-12-15T21:07:38Z",
    "goVersion": "go1.9.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "9",
    "gitVersion": "v1.9.0",
    "gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
    "gitTreeState": "clean",
    "buildDate": "2017-12-15T20:55:30Z",
    "goVersion": "go1.9.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
EC2
EBS
Solution
Found the documentation that shows how to configure the AWS cloud provider:
K8S AWS Cloud Provider Notes
Steps
Tag the EC2 instances and security groups with KubernetesCluster=${kubernetes cluster name}. If the cluster was created with kubeadm, the name is kubernetes, as described in "Ability to configure user and cluster name in AdminKubeConfigFile".
Run kubeadm init --config kubeadm.yaml.
kubeadm.yaml (Ansible template)
kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
api:
  advertiseAddress: {{ K8S_ADVERTISE_ADDRESS }}
networking:
  podSubnet: {{ K8S_SERVICE_ADDRESSES }}
cloudProvider: {{ K8S_CLOUD_PROVIDER }}
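Rendered with concrete values, the file would look roughly like this (the address and pod subnet are placeholder examples; cloudProvider: aws is the part that matters for EBS attachment):
kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
api:
  advertiseAddress: 172.31.4.117
networking:
  podSubnet: 10.244.0.0/16
cloudProvider: aws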
Result
$ journalctl -b -f CONTAINER_ID=$(docker ps | grep k8s_kube-controller-manager | awk '{ print $1 }')
Jan 02 04:48:28 ip-172-31-4-117.us-west-1.compute.internal dockerd-current[8063]: I0102 04:48:28.752141
1 reconciler.go:287] attacherDetacher.AttachVolume started for volume "kuard-pv" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") from node "ip-172-3
Jan 02 04:48:39 ip-172-31-4-117.us-west-1.compute.internal dockerd-current[8063]: I0102 04:48:39.309178
1 operation_generator.go:308] AttachVolume.Attach succeeded for volume "kuard-pv" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") from node "ip-172-
$ kubectl describe pod kuard
...
Volumes:
kuard-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: kuard-pvc
ReadOnly: false
$ kubectl describe pv kuard-pv
Name: kuard-pv
Labels: failure-domain.beta.kubernetes.io/region=us-west-1
failure-domain.beta.kubernetes.io/zone=us-west-1b
type=amazonEBS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"amazonEBS"},"name":"kuard-pv","namespace":""},"spec":{"acce...
pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/kuard-pvc
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-0d275986ce24f4304
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
I have an RBAC-enabled Kubernetes cluster created using kops version 1.8.0-beta.1. I am trying to run an nginx pod that should attach a pre-created EBS volume and then start, but I am getting a "not authorized" error even though I am an admin user. Any help would be highly appreciated.
kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-09T07:27:47Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
namespace:default
cat test-ebs.yml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: <vol-IDhere>
      fsType: ext4
I am getting the below error:
Warning FailedMount 8m attachdetach AttachVolume.Attach failed for volume "test-volume" : Error attaching EBS volume "<vol-ID>" to instance "<i-instanceID>": "UnauthorizedOperation: You are not authorized to perform this operation
In kops 1.8.0-beta.1, the master node requires you to tag the AWS volume with:
KubernetesCluster: <clustername-here>
If you have created the k8s cluster using kops like so:
kops create cluster --name=k8s.yourdomain.com [other-args-here]
your tag on the EBS volume needs to be
KubernetesCluster: k8s.yourdomain.com
And the policy on the master would contain a block like this:
{
  "Sid": "kopsK8sEC2MasterPermsTaggedResources",
  "Effect": "Allow",
  "Action": [
    "ec2:AttachVolume",
    "ec2:AuthorizeSecurityGroupIngress",
    "ec2:DeleteRoute",
    "ec2:DeleteSecurityGroup",
    "ec2:DeleteVolume",
    "ec2:DetachVolume",
    "ec2:RevokeSecurityGroupIngress"
  ],
  "Resource": [
    "*"
  ],
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/KubernetesCluster": "k8s.yourdomain.com"
    }
  }
}
The condition indicates that the master policy only has the privilege to attach volumes that carry the right tag.
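A quick way to verify that a volume actually carries the expected tag (the volume ID is a placeholder):
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 --query 'Volumes[].Tags'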
The issue is because of the kops 1.8 version. I rolled back to kops v1.7.1 and it is working now.
I am trying to launch an AWS Elastic Beanstalk environment running Multicontainer Docker, but it fails with the following list of events:
2016-01-18 16:58:57 UTC+0100 WARN Removed instance [i-a7162d2c] from your environment due to a EC2 health check failure.
2016-01-18 16:57:57 UTC+0100 WARN Environment health has transitioned from Degraded to Severe. None of the instances are sending data.
2016-01-18 16:47:58 UTC+0100 WARN Environment health has transitioned from Pending to Degraded. Command is executing on all instances. Command failed on all instances.
2016-01-18 16:43:58 UTC+0100 INFO Added instance [i-a7162d2c] to your environment.
2016-01-18 16:43:27 UTC+0100 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2016-01-18 16:41:58 UTC+0100 INFO Environment health has transitioned to Pending. There are no instances.
2016-01-18 16:41:54 UTC+0100 INFO Created security group named: awseb-e-ih2exekpvz-stack-AWSEBSecurityGroup-M2O11DNNCJXW
2016-01-18 16:41:54 UTC+0100 INFO Created EIP: 52.48.132.172
2016-01-18 16:41:09 UTC+0100 INFO Using elasticbeanstalk-eu-west-1-936425941972 as Amazon S3 storage bucket for environment data.
2016-01-18 16:41:08 UTC+0100 INFO createEnvironment is starting.
Health status is marked "Severe" and I have the following logs:
98 % of CPU is in use.
Initialization failed at 2016-01-18T15:54:33Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/preinit/02ecs.sh failed.
. /opt/elasticbeanstalk/hooks/common.sh
/opt/elasticbeanstalk/bin/get-config container -k ecs_cluster
EB_CONFIG_ECS_CLUSTER=awseb-figure-test-ih2exekpvz
/opt/elasticbeanstalk/bin/get-config container -k ecs_region
EB_CONFIG_ECS_REGION=eu-west-1
/opt/elasticbeanstalk/bin/get-config container -k support_files_dir
EB_CONFIG_SUPPORT_FILES_DIR=/opt/elasticbeanstalk/containerfiles/support
is_baked ecs_agent
[[ -f /etc/elasticbeanstalk/baking_manifest/ecs_agent ]]
true
aws configure set default.output json
aws configure set default.region eu-west-1
echo ECS_CLUSTER=awseb-figure-test-ih2exekpvz
grep -q 'ecs start/'
initctl status ecs
initctl start ecs
ecs start/running, process 8418
TIMEOUT=120
jq -r .ContainerInstanceArn
curl http://localhost:51678/v1/metadata
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 51678: Connection refused
My configuration:
Environment type: Single Instance
Instance type: Medium
Root volume type: SSD
Root Volume Size: 8GB
Zone: EU-West
My Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "essential": true,
      "memory": 252,
      "links": [
        "redis"
      ],
      "mountPoints": [
        {
          "containerPath": "/srv/express",
          "sourceVolume": "_WebExpress"
        },
        {
          "containerPath": "/srv/express/node_modules",
          "sourceVolume": "SrvExpressNode_Modules"
        }
      ],
      "name": "web",
      "image": "figure/web:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ]
    },
    {
      "essential": true,
      "memory": 252,
      "image": "redis",
      "name": "redis",
      "portMappings": [
        {
          "containerPort": 6379,
          "hostPort": 6379
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "./web/express"
      },
      "name": "_WebExpress"
    },
    {
      "host": {
        "sourcePath": "/srv/express/node_modules"
      },
      "name": "SrvExpressNode_Modules"
    }
  ]
}
@Kilianc: If you are creating an Elastic Beanstalk application using the Multicontainer Docker platform, then you have to add the following list of policies to the default role "aws-elasticbeanstalk-ec2-role", or you can create your own role (see the CLI example after the list):
AWSElasticBeanstalkWebTier
AWSElasticBeanstalkMulticontainerDocker
AWSElasticBeanstalkWorkerTier
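These are AWS managed policies, so each one can be attached with a single CLI call, for example:
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker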
Thanks
It appears I forgot to set up policies and permissions as described in the AWS Elastic Beanstalk documentation:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecstutorial.html