I'm using CodeDeploy to deploy new code to my EC2 instances.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: v1
    destination: /somewhere/v1
hooks:
  BeforeInstall:
    - location: script/deregister_from_elb.sh
      timeout: 400
But I'm getting the following error:
LifecycleEvent - BeforeInstall
Script - v1/script/deregister_from_elb.sh
[stderr]Running AWS CLI with region: eu-west-1
[stderr]Started deregister_from_elb.sh at 2017-03-17 11:44:30
[stderr]Checking if instance i-youshouldnotknow is part of an AutoScaling group
[stderr]/iguessishouldhidethis/common_functions.sh: line 190: aws: command not found
[stderr]Instance is not part of an ASG, trying with ELB...
[stderr]Automatically finding all the ELBs that this instance is registered to...
[stderr]/iguessishouldhidethis/common_functions.sh: line 552: aws: command not found
[stderr][FATAL] Couldn't find any. Must have at least one load balancer to deregister from.
Any ideas why this is happening? I suspect that the message "aws: command not found" could be the issue, but I have the AWS CLI installed:
~$ aws --version
aws-cli/1.11.63 Python/2.7.6 Linux/3.13.0-95-generic botocore/1.5.26
Thanks very much for your help
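One thing that may explain this (a hedged guess, not a confirmed diagnosis): the CodeDeploy agent runs hook scripts as root with a minimal PATH, so a CLI installed for your own user (for example with pip install --user) can be invisible to the hooks even though aws --version works from your login shell. A quick check and a possible workaround could look like this (the install path below is only an example):
# Where does your user's aws live?
which aws                        # e.g. /usr/local/bin/aws or ~/.local/bin/aws
# Can root (the user running the hooks) find it?
sudo which aws                   # "command not found" here would match the deployment error
# Possible workaround: put the binary on root's PATH, e.g. via a symlink
# (adjust the source path to wherever your aws actually is)
sudo ln -s /usr/local/bin/aws /usr/bin/aws
# Alternatively, export the PATH at the top of the hook script itself:
# export PATH=$PATH:/usr/local/bin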
I am deploying a Spring Boot jar to an EC2 instance from CodePipeline, using CodeBuild and CodeDeploy. CodeBuild is working fine and the artifact is getting uploaded to S3.
When CodeDeploy starts, it stays in progress for 2-3 minutes and then fails at the first step, with the error described below.
The CodeDeploy service is running fine and all the other setup looks good. I am also not seeing anything in the CodeDeploy logs on the EC2 instance. Please help me identify the issue.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user
hooks:
  AfterInstall:
    - location: fix_previleges.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: stop_server.sh
      timeout: 300
      runas: root
I am getting the error below when I start the deployment:
Script name: scripts/server_stop.sh
Message: Script does not exist at specified location: /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/server_stop.sh
Source and build are fine. This is the appspec that I am using:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/KnCare-Rest
file_exists_behavior: OVERWRITE
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  #BeforeInstall:
  #  - location: scripts/server_clear.sh
  #    timeout: 300
  #    runas: ec2-use
  ApplicationStart:
    - location: scripsts/server_start.sh
      timeout: 20
      runas: ec2-user
  ApplicationStop:
    - location: scripsts/server_stop.sh
      timeout: 20
      runas: ec2-user
Two things cause this issue:
If you deployed the scripts to the ASG or EC2 instance and then added additional scripts for the hooks. For this you need to restart the ASG or EC2 instance before adding the additional scripts.
If you used the same pipeline name again for a different deployment. For this you need to delete the input artifact that is in S3 before running the deployment (a sketch of this follows below), since AWS will not delete the artifacts even if you delete the pipeline.
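For the second case, clearing the stale input artifact with the AWS CLI could look roughly like this (the bucket and pipeline names below are placeholders for your account's CodePipeline artifact bucket and your pipeline):
# See what the old pipeline left behind
aws s3 ls s3://codepipeline-us-east-1-000000000000/my-pipeline/ --recursive
# Remove the stale input artifacts before re-running the deployment
aws s3 rm s3://codepipeline-us-east-1-000000000000/my-pipeline/ --recursive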
Did you SSH into the EC2 instance to verify that scripts/server_stop.sh exists and has execute permissions?
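For reference, that check could be done on the instance roughly like this, reusing the archive path from the error message above (the chmod target is only an example):
# Look inside the bundle the CodeDeploy agent actually unpacked
ls -l /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/
# Or search every unpacked revision for the script
sudo find /opt/codedeploy-agent/deployment-root -name "server_stop.sh" -ls
# If the file is there but not executable, mark it executable in the source revision
# (example path; fix it in the repository, not just on the instance)
chmod +x scripts/server_stop.sh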
This is my first time trying to deploy a Django app to Elastic Beanstalk. The application uses Django Channels.
These are my config files:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: "dashboard/dashboard/wsgi.py"
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "dashboard/dashboard/settings.py"
    PYTHONPATH: /opt/python/current/app/dashboard:$PYTHONPATH
  aws:elbv2:listener:80:
    DefaultProcess: http
    ListenerEnabled: 'true'
    Protocol: HTTP
    Rules: ws
  aws:elbv2:listenerrule:ws:
    PathPatterns: /websockets/*
    Process: websocket
    Priority: 1
  aws:elasticbeanstalk:environment:process:http:
    Port: '80'
    Protocol: HTTP
  aws:elasticbeanstalk:environment:process:websocket:
    Port: '5000'
    Protocol: HTTP

container_commands:
  00_pip_upgrade:
    command: "source /opt/python/run/venv/bin/activate && pip install --upgrade pip"
    ignoreErrors: false
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  03_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
When I run eb create django-env I get the following logs:
Creating application version archive "app-200617_112710".
Uploading: [##################################################] 100% Done...
Environment details for: django-env
Application name: dashboard
Region: us-west-2
Deployed Version: app-200617_112710
Environment ID: e-rdgipdg4z3
Platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 3.7 running on 64bit Amazon Linux 2/3.0.2
Tier: WebServer-Standard-1.0
CNAME: UNKNOWN
Updated: 2020-06-17 10:27:48.898000+00:00
Printing Status:
2020-06-17 10:27:47 INFO createEnvironment is starting.
2020-06-17 10:27:49 INFO Using elasticbeanstalk-us-west-2-041741961231 as Amazon S3 storage bucket for environment data.
2020-06-17 10:28:10 INFO Created security group named: sg-0942435ec637ad173
2020-06-17 10:28:25 INFO Created load balancer named: awseb-e-r-AWSEBLoa-19UYXEUG5IA4F
2020-06-17 10:28:25 INFO Created security group named: awseb-e-rdgipdg4z3-stack-AWSEBSecurityGroup-17RVV1ZT14855
2020-06-17 10:28:25 INFO Created Auto Scaling launch configuration named: awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingLaunchConfiguration-H5E4G2YJ3LEC
2020-06-17 10:29:30 INFO Created Auto Scaling group named: awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S
2020-06-17 10:29:30 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-06-17 10:29:30 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-west-2:041741961231:scalingPolicy:8d4c8dcf-d77d-4d18-92d8-67f8a2c1cd9e:autoScalingGroupName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S:policyName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingScaleDownPolicy-1JAUAII3SCELN
2020-06-17 10:29:30 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-west-2:041741961231:scalingPolicy:0c3d9c2c-bc65-44ed-8a22-2f9bef538ba7:autoScalingGroupName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S:policyName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingScaleUpPolicy-XI8Z22SYWQKR
2020-06-17 10:29:30 INFO Created CloudWatch alarm named: awseb-e-rdgipdg4z3-stack-AWSEBCloudwatchAlarmHigh-572C6W1QYGIC
2020-06-17 10:29:30 INFO Created CloudWatch alarm named: awseb-e-rdgipdg4z3-stack-AWSEBCloudwatchAlarmLow-1RTNBIHPHISRO
2020-06-17 10:33:05 ERROR [Instance: i-01576cfe5918af1c3] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
2020-06-17 10:33:05 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-06-17 10:34:07 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
ERROR: ServiceError - Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
The error is extremely vague, and I have no clue as to what I'm doing wrong.
I had a similar issue. I used psycopg2-binary instead of psycopg2 and created a new environment. The health status is now OK.
Since this is getting some attention: check your Elastic Beanstalk logs in the AWS console, since the error is completely generic and can be anything. Check mainly the command execution and activity logs.
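If you prefer the command line, these are some places to look (a sketch; the on-instance log paths assume the Amazon Linux 2 Python platform shown in the output above):
# Pull the full log bundle for the environment with the EB CLI
eb logs django-env --all
# Or, on the instance itself:
# /var/log/eb-engine.log        deployment engine log (Amazon Linux 2 platforms)
# /var/log/cfn-init.log         output of .ebextensions commands / container_commands
# /var/log/cfn-init-cmd.log
# /var/log/web.stdout.log       application stdout on the AL2 Python platform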
In my case, it was because I had the following listed in requirements.txt, and they failed to install on EC2:
mkl-fft==1.1.0
mkl-random==1.1.0
mkl-service==2.3.0
pypiwin32==223
pywin32==228
Removing those from requirements.txt fixed the issue.
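If the file was generated with pip freeze, a quick way to strip those Windows/MKL-only packages could be something like this (a sketch, assuming no other package names match the pattern):
# Regenerate requirements.txt without the packages that cannot build on EC2
pip freeze | grep -vE '^(mkl-|pypiwin32|pywin32)' > requirements.txt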
It is most likely a connection error. Make sure the instance can access the internet and that you have VPC endpoints for SQS/CloudFormation/CloudWatch/S3/elasticbeanstalk/elasticbeanstalk-health. Also make sure the security groups for these endpoints allow access from your instance.
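Creating one of those endpoints from the CLI could look roughly like this (region, VPC, subnet, route table and security group IDs are placeholders; repeat per service, and note that S3 is usually a Gateway endpoint attached to a route table):
# Interface endpoint for the Elastic Beanstalk service API (IDs are placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-west-2.elasticbeanstalk \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
# Gateway endpoint for S3
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0123456789abcdef0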
Currently I have an AWS CodeCommit repository and an AWS Elastic Beanstalk environment to which I upload updates with the EB CLI, using eb deploy.
I have some config files that are ignored via .gitignore. I want to set up an AWS CodePipeline so that when I push changes to the repository, it automatically runs the test functions and uploads the changes directly to Elastic Beanstalk.
I tried implementing a simple pipeline where I push code to CodeCommit and it deploys to Elastic Beanstalk, but I get the following error:
2019-09-09 11:51:45 UTC-0500 ERROR "option_settings" in one of the configuration files failed validation. More details to follow.
2019-09-09 11:51:45 UTC-0500 ERROR You cannot remove an environment from a VPC. Launch a new environment outside the VPC.
2019-09-09 11:51:45 UTC-0500 ERROR Failed to deploy application.
This is the *.config file that isn't in CodeCommit:
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxx
    Subnets: 'subnet-xxx'
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
    ServiceRole: aws-xxxx
  aws:elasticbeanstalk:container:python:
    WSGIPath: xxx/wsgi.py
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-xxx
I noticed some syntax that is a little different from the above: the Subnets: value has quotes ('') around it. Could this be causing the issue, and if so, are quotes supposed to be around the other values as well?
From the config file it looks like you are using a single instance. For a SingleInstance environment you don't need to specify an autoscaling launch configuration. Just remove the last two lines and it will work fine.
From what I have been reading, I think I should not commit my config files, but instead add them during CodeBuild so that it generates the .zip file that gets deployed to Elastic Beanstalk.
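A hedged sketch of that idea: keep the ignored config files somewhere private (an S3 bucket in this example; the bucket and file names are made up) and have the CodeBuild build phase copy them into place before the artifact is packaged, with commands along these lines in the buildspec:
# Run in the CodeBuild build phase, before the artifact is zipped
# (bucket and key names are placeholders for wherever the untracked configs live)
aws s3 cp s3://my-private-config-bucket/eb/01-options.config .ebextensions/01-options.config
aws s3 cp s3://my-private-config-bucket/eb/02-vpc.config .ebextensions/02-vpc.config
# CodeBuild then packages the working directory (including .ebextensions/)
# into the artifact that the pipeline hands to Elastic Beanstalk.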
I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see it doesn't have an option to use awslogs as the container's logging driver (or to add any other flags to the docker run command, for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I've found answers to similar questions relating to ECS (not EB) that suggest appending awslogs to ECS_AVAILABLE_LOGGING_DRIVERS, but I can't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
As the Elastic Beanstalk Single Container environment saves stdout and stderr to /var/log/eb-docker/containers/eb-current-app/ by default, and as the new solution stack gives you the option to stream logs to CloudWatch, automating the configuration of the awslogs agent on the instances, what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and uses the already configured agent to stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS as hooks may change from one solution stack version to another.
Regarding the error you are seeing, '"logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)': this comes from how Docker works. When it is configured to send logs to a driver other than json-file or journald, it cannot display the logs locally, as it does not have a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permission in the EC2 instance role to be able to create the log group. You can see if it is working by looking in /var/log/awslogs.log.
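For reference, granting that permission from the CLI could look something like this (the role name below is the default Elastic Beanstalk instance role; the policy name and resource scope are placeholders to adjust):
# Inline policy letting the instances create the log group/stream and push events
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name allow-cloudwatch-logs \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogStreams"],
      "Resource": "arn:aws:logs:*:*:*"
    }]
  }'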
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart