Elastic Beanstalk terminating and recreating instances frantically - amazon-web-services

Elastic Beanstalk is adding and removing instances one after the other. Googling around points to checking the "State transition message", which comes up as "Client.UserInitiatedShutdown: User initiated shutdown". https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-internal lists some possible reasons, but none of them apply here. No one has touched any settings. Any ideas?
UPDATE: Did a bit more digging and found that the app deployment is failing. The relevant log errors are below:
eb-engine.log
2021/08/05 15:46:29.272215 [INFO] Executing instruction: PreBuildEbExtension
2021/08/05 15:46:29.272220 [INFO] Starting executing the config set Infra-EmbeddedPreBuild.
2021/08/05 15:46:29.272235 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPreBuild
2021/08/05 15:50:44.538818 [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:50:44.540438 [INFO] Executing cleanup logic
2021/08/05 15:50:44.581445 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1628178644,"severity":"ERROR"}]}]}
2021/08/05 15:50:44.620394 [INFO] Platform Engine finished execution on command: app-deploy
2021/08/05 15:51:22.196186 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:51:22.196215 [INFO] Executing cleanup logic
eb-cfn-init.log
[2021-08-05T15:42:44.199Z] Completed executing cfn_init.
[2021-08-05T15:42:44.226Z] finished _OnInstanceReboot
+ RESULT=1
+ [[ 1 -ne 0 ]]
+ sleep_delay
+ (( 2 < 3600 ))
+ echo Sleeping 2
Sleeping 2
+ sleep 2
+ SLEEP_TIME=4
+ true
+ curl https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922/lib/UserDataScript.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4627 100 4627 0 0 24098 0 --:--:-- --:--:-- --:--:-- 24098
+ RESULT=0
+ [[ 0 -ne 0 ]]
+ SLEEP_TIME=2
+ /bin/bash /tmp/ebbootstrap.sh 'https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513' arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f 65c52bb7-0376-4d43-b304-b64890a34c1c https://elasticbeanstalk-health.us-east-1.amazonaws.com '' https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922 us-east-1
[2021-08-05T15:46:07.683Z] Started EB Bootstrapping Script.
[2021-08-05T15:46:07.739Z] Received parameters:
TARBALLS =
EB_GEMS =
SIGNAL_URL = https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513
STACK_ID = arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f
REGION = us-east-1
GUID =
HEALTHD_GROUP_ID = 65c52bb7-0376-4d43-b304-b64890a34c1c
HEALTHD_ENDPOINT = https://elasticbeanstalk-health.us-east-1.amazonaws.com
PROXY_SERVER =
HEALTHD_PROXY_LOG_LOCATION =
PLATFORM_ASSETS_URL = https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922
Is this some corrupted AMI?

It turned out that there was a config script in the .ebextensions directory that was misbehaving.
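When this happens, the failing .ebextensions step can usually be identified from /var/log/cfn-init.log on the instance (reachable with eb ssh before the Auto Scaling group replaces it). A minimal sketch of pulling the error lines out — the log excerpt and the command names in it are fabricated for illustration:

```shell
# Run against a fabricated excerpt so the snippet is self-contained; on a real
# instance, point the grep at /var/log/cfn-init.log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
2021-08-05 15:50:40,102 [INFO] Command 01_setup succeeded
2021-08-05 15:50:44,538 [ERROR] Command 02_migrate failed
EOF
grep '\[ERROR\]' "$log" | tail -n 20   # shows which config command failed
rm -f "$log"
```

The [ERROR] lines name the specific container_command or config set step that broke, which narrows the search to one file under .ebextensions.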

Related

How to read json file in gitlab ci yaml and use if else command?

I have a YAML pipeline file, and a job in it gets a response that contains a message field. If the message matches the expected text, the job should succeed; if the message is wrong, the job should fail.
variables:
  NUGET_PATH: 'C:\Tools\Nuget\nuget.exe'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\amd64\msbuild.exe'
  SOLUTION_PATH: 'Textbox_ComboBox.sln'
stages:
  - build
  - job1
  - job2
before_script:
  - "cd Source"
build_job:
  stage: build
  except:
    - schedules
  script:
    - '& "$env:NUGET_PATH" restore'
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
job1:
  stage: job1
  script:
    - 'curl adress1'
    - if [ "$message" == "SAP transfer started. Please check in db" ]; then exit 0; else exit 1; fi
job2:
  stage: trigger_SAP_service
  when: delayed
  start_in: 5 minutes
  only:
    - schedules
  script:
    - 'curl adress2'
This is the job's output. The job should succeed, because the response message and the message in the if condition are the same.
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ cd Source
$ curl adress1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 146 0 146 0 0 877 0 --:--:-- --:--:-- --:--:-- 879
{"status":200,"message":"SAP transfer started. Please check in db","errorCode":0,"timestamp":"2019-10-04T07:59:58.436+0300","responseObject":null}$ if ( [ '$message' == 'SAP transfer started. Please check in db' ] ); then exit 0; else exit 1; fi
ERROR: Job failed: exit code 1
The message variable you use in your condition is empty.
You need to assign the curl response to your message variable:
message=$(curl -Ss adress1)
and then test the content of $message
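Putting the answer together, the job1 script step could look like the sketch below. The JSON literal stands in for the real curl response, and the sed extraction assumes the "message" field appears exactly once:

```shell
# In the real job this would be: response=$(curl -Ss adress1)
response='{"status":200,"message":"SAP transfer started. Please check in db","errorCode":0}'
# Pull the value of the "message" field out of the JSON.
message=$(printf '%s' "$response" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p')
if [ "$message" = "SAP transfer started. Please check in db" ]; then
  echo "Job succeeded"
else
  echo "Unexpected message: $message" >&2
  exit 1
fi
```

With the response captured into $message first, the comparison in the if condition now tests real content instead of an empty variable.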

k8s: Error pulling images from ECR

We constantly get Waiting: ImagePullBackOff during CI upgrades. Does anybody know what's happening? The k8s cluster is 1.6.2, installed via kops. During upgrades we run kubectl set image, and over the last 2 days we have been seeing the following error:
Failed to pull image "********.dkr.ecr.eu-west-1.amazonaws.com/backend:da76bb49ec9a": rpc error: code = 2 desc = net/http: request canceled
Error syncing pod, skipping: failed to "StartContainer" for "backend" with ErrImagePull: "rpc error: code = 2 desc = net/http: request canceled"
journalctl -r -u kubelet
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: W0726 09:32:40.731903 840 docker_sandbox.go:263] NetworkPlugin kubenet failed on the status hook for pod "backend-1277054742-bb8zm_default": Unexpected command output nsenter: cannot open : No such file or directory
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724387 840 generic.go:239] PLEG: Ignoring events for pod frontend-1493767179-84rkl/default: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724371 840 kuberuntime_manager.go:858] getPodContainerStatuses for pod "frontend-1493767179-84rkl_default(0fff3b22-71c8-11e7-9679-02c1112ca4ec)" failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724358 840 kuberuntime_container.go:385] ContainerStatus for 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d error: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724329 840 remote_runtime.go:269] ContainerStatus "2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d" from runtime service failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: with error: exit status 1
Try running kubectl create configmap -n kube-system kube-dns
For more context, check the known issues with Kubernetes 1.6: https://github.com/kubernetes/kops/releases/tag/1.6.0
This may be caused by a known Docker bug where shutdown occurs before the content is synced to disk on layer creation. The fix is included in Docker v1.13. A workaround is to remove the empty files and re-pull the image.
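That removal step can be sketched as below. The storage path and image name are placeholders (the exact location of the empty layer files depends on the storage driver), and the demonstration runs on a scratch directory so it is safe to execute as-is:

```shell
# Scratch directory stands in for the affected docker storage path.
layers=$(mktemp -d)
touch "$layers/truncated-layer.json"            # simulates a zero-byte file left by the bug
find "$layers" -type f -size 0 -print -delete   # locate and remove the empty files
# On the real node, follow up by re-pulling the image, e.g.:
#   docker rmi <account>.dkr.ecr.eu-west-1.amazonaws.com/backend:<tag>
#   docker pull <account>.dkr.ecr.eu-west-1.amazonaws.com/backend:<tag>
rmdir "$layers"
```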

Saltstack fails when creating minions from a map file on DigitalOcean

I can't create a minion from the map file, and I have no idea what happened. A month ago my script was working correctly; now it fails. I tried to research the problem but couldn't find anything about it. Could someone have a look at my DEBUG log? The minion is created on DigitalOcean, but the master server can't connect to it at all.
So I run:
salt-cloud -P -m /etc/salt/cloud.maps.d/production.map -l debug
The master is running on Ubuntu 16.04.1 x64, and so is the minion.
I use the latest SaltStack repository:
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" >> /etc/apt/sources.list.d/saltstack.list
I tested both 2016.3.2 and 2016.3.3. Interestingly, the same script was working correctly 4 weeks ago, so I assume something must have changed.
ERROR:
Writing /usr/lib/python2.7/dist-packages/salt-2016.3.3.egg-info
* INFO: Running install_ubuntu_git_post()
disabled
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /lib/systemd/system/salt-minion.service.
* INFO: Running install_ubuntu_check_services()
* INFO: Running install_ubuntu_restart_daemons()
Job for salt-minion.service failed because a configured resource limit was exceeded. See "systemctl status salt-minion.service" and "journalctl -xe" for details.
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
* ERROR: No init.d support for salt-minion was found
* ERROR: Failed to run install_ubuntu_restart_daemons()!!!
[ERROR ] Failed to deploy 'minion-zk-0'. Error: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 2293, in create_multiprocessing
local_master=parallel_data['local_master']
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 1281, in create
output = self.clouds[func](vm_)
File "/usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py", line 481, in create
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 527, in bootstrap
deployed = deploy_script(**deploy_kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1516, in deploy_script
if root_cmd(deploy_command, tty, sudo, **ssh_kwargs) != 0:
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 2167, in root_cmd
retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1784, in _exec_ssh_cmd
cmd, proc.exitstatus
SaltCloudSystemExit: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_ID '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
[DEBUG ] LazyLoaded nested.output
minion-zk-0:
----------
Error:
Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
root@master-zk:/etc/salt/cloud.maps.d# salt '*' test.ping
minion-zk-0:
    Minion did not return. [No response]
root@master-zk:/etc/salt/cloud.maps.d#
It is located in your cloud configuration somewhere in /etc/salt/cloud.profiles.d/, /etc/salt/cloud.providers.d/ or /etc/salt/cloud.d/. Just figure out where, and change the value salt to your master's IP.
I currently do this in my provider settings like this:
hit-vcenter:
  driver: vmware
  user: 'foo'
  password: 'secret'
  url: 'some url'
  protocol: 'https'
  port: 443
  minion:
    master: 10.1.10.1

Abort Capistrano deploy bundler:install

Following this tutorial from GoRails, I'm getting this error when I try to deploy on Ubuntu 16.04 on DigitalOcean.
$ cap production deploy --trace
Trace Here
** DEPLOY FAILED
** Refer to log/capistrano.log for details. Here are the last 20 lines:
DEBUG [9a2c15d9] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/system ]
DEBUG [9a2c15d9] Finished in 0.181 seconds with exit status 1 (failed).
INFO [86a233a2] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/releases/20160829222734/public/system as deployer@138.68.8.2…
DEBUG [86a233a2] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/…
INFO [86a233a2] Finished in 0.166 seconds with exit status 0 (successful).
DEBUG [07f5e5a2] Running [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [07f5e5a2] Command: [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [07f5e5a2] Finished in 0.166 seconds with exit status 1 (failed).
DEBUG [5e61eaf3] Running [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [5e61eaf3] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [5e61eaf3] Finished in 0.168 seconds with exit status 1 (failed).
INFO [52076052] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets as deployer@138.68.8.2…
DEBUG [52076052] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/…
INFO [52076052] Finished in 0.167 seconds with exit status 0 (successful).
DEBUG [2a6bf02b] Running if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'"…
DEBUG [2a6bf02b] Command: if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'…
DEBUG [2a6bf02b] Finished in 0.164 seconds with exit status 0 (successful).
INFO [f4b636e3] Running $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test --deployment --quiet as deployer@138.6…
DEBUG [f4b636e3] Command: cd /home/deployer/RMG_rodeobest/releases/20160829222734 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; $HOME/.rbenv/bin/rbenv exec bundle inst…
DEBUG [f4b636e3] bash: line 1: 3509 Killed $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test -…
My Capfile:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# If you are using rbenv add these lines:
require 'capistrano/rbenv'
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/bundler'
require 'capistrano/rails'
# require 'capistrano/passenger'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
I'm stuck; I don't know why cap is aborting.
Any idea?

puppet file function doesn't load contents

I am trying to use the Puppet file function (not the type) in the following way:
class iop_users {
  include 's3file::curl'
  include 'stdlib'
  $secretpath = file('/etc/secret','dev/null')
  notify { 'show secretpath':
    message => "secretpath is $secretpath",
  }
  s3file { '/opt/utab.yaml':
    source => "mybucket/$secretpath/utab.yaml",
    ensure => 'latest',
  }
  exec { 'fix perms':
    command => '/bin/chmod 600 /opt/utab.yaml',
    require => S3file['/opt/utab.yaml'],
  }
  if ( $::virtual == 'xenhvm' and defined(S3file['/opt/utab.yaml']) ) {
    $uhash = loadyaml('/opt/utab.yaml')
    create_resources(iop_users::usercreate, $uhash)
  }
}
If I run this, here is some typical output. The manifest fails because the initial "secret" used to build the path is not loaded:
https_proxy=https://puppet:3128 puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for ip-10-40-1-68.eu-west-1.compute.internal
Info: Applying configuration version '1431531382'
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: % Total % Received % Xferd Average Speed Time Time Time Current
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: Dload Upload Total Spent Left Speed
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: curl: (56) Received HTTP code 404 from proxy after CONNECT
Error: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Error: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: change from notrun to 0 failed: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Notice: /Stage[main]/Iop_users/Exec[fix perms]: Dependency Exec[fetch /opt/utab.yaml] has failures: true
Warning: /Stage[main]/Iop_users/Exec[fix perms]: Skipping because of failed dependencies
Notice: secretpath is
Notice: /Stage[main]/Iop_users/Notify[show secretpath]/message: defined 'message' as 'secretpath is '
Notice: Finished catalog run in 1.28 seconds
However, on the same host where the above puppet agent run fails, if I use "apply" to try it outside the context of a manifest, it works fine:
puppet apply -e '$z=file("/etc/secret") notify { "z": message => $z}'
Notice: Compiled catalog for ip-x.x.x.x.eu-west-1.compute.internal in environment production in 0.02 seconds
Notice: wombat
Notice: /Stage[main]/Main/Notify[z]/message: defined 'message' as 'wombat
'
Notice: Finished catalog run in 0.03 seconds
What am I doing wrong? Are there any better alternative approaches?
As usual, I was confused about the way Puppet works. Apparently, functions are always executed on the master, so any files loaded this way must exist on the master. As soon as I added an /etc/secret file to the puppetmaster, it all worked.
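The compile-time behaviour can be mimicked in plain shell to make the fix concrete: file() is effectively a read of the master's filesystem during catalog compilation, so the secret has to exist there. A scratch path is used below instead of /etc/secret so the sketch is safe to run anywhere:

```shell
# Stand-in for /etc/secret on the puppet master.
secret_file=$(mktemp)
echo 'wombat' > "$secret_file"
# What file('/etc/secret') returns at compile time on the master:
secretpath=$(cat "$secret_file")
echo "secretpath is $secretpath"      # prints: secretpath is wombat
rm -f "$secret_file"
```

This is why `puppet apply` worked: apply compiles the catalog locally, so file() read the agent's own /etc/secret, while `puppet agent -t` compiles on the master, where the file did not yet exist.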