I have an Elastic Beanstalk project that has been working fine for months. Today, I decided to enable and then disable a port listener, as seen in the image below:
I enabled port 80 and then the website stopped working. So I was like "oh crap, I will change it back". But guess what? It is still broken. The code has not changed whatsoever, but the application is now broken.
I have restarted the app servers and rebuilt the environment, and nothing. I can't even access the environment site by clicking Go to environment; I just see a Bad Gateway message on screen. The health status of the environment is OK right after deployment and then quickly goes to Severe.
If my code has not changed, what is happening, and how can I find out? All I changed was that port, by enabling and then disabling it again.
I have already come across this question: Question and I am already doing what it suggests. The property is in my application.properties file like this:
server.port=5000
It has been like this for months and was working fine, so it can't be the reason things broke today. I even tried adding it directly to the environment properties in the Elastic Beanstalk console, with the same result: still 502 Bad Gateway.
I also have a health-check path configured, and that has not changed in months either.
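For context, the port side of the configuration is just a properties sketch like this (the actuator line is illustrative of how a health endpoint gets exposed; my actual health-check path is set in the EB console):

```properties
# application.properties (sketch): Beanstalk's nginx proxy forwards traffic to port 5000
server.port=5000
# illustrative: expose a health endpoint for the load balancer to probe
management.endpoints.web.exposure.include=health
```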
Here are the last 100 lines from my log file after health status goes to Severe:
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2022/01/27 15:53:53.370165 [INFO] Running command /bin/sh -c docker tag af10382f81a4 aws_beanstalk/current-app
2022/01/27 15:53:53.489035 [INFO] Running command /bin/sh -c docker rmi aws_beanstalk/staging-app
2022/01/27 15:53:53.568222 [INFO] Untagged: aws_beanstalk/staging-app:latest
2022/01/27 15:53:53.568307 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2022/01/27 15:53:53.576541 [INFO] Running command /bin/sh -c systemctl daemon-reload
2022/01/27 15:53:53.712836 [INFO] Running command /bin/sh -c systemctl reset-failed
2022/01/27 15:53:53.720035 [INFO] Running command /bin/sh -c systemctl enable eb-docker.service
2022/01/27 15:53:53.866046 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2022/01/27 15:53:53.875112 [INFO] Running command /bin/sh -c systemctl is-active eb-docker.service
2022/01/27 15:53:53.886916 [INFO] Running command /bin/sh -c systemctl start eb-docker.service
2022/01/27 15:53:53.991608 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-log.service
2022/01/27 15:53:54.002839 [INFO] Running command /bin/sh -c systemctl daemon-reload
2022/01/27 15:53:54.092602 [INFO] Running command /bin/sh -c systemctl reset-failed
2022/01/27 15:53:54.102854 [INFO] Running command /bin/sh -c systemctl enable eb-docker-log.service
2022/01/27 15:53:54.226561 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-log.service
2022/01/27 15:53:54.246914 [INFO] Running command /bin/sh -c systemctl is-active eb-docker-log.service
2022/01/27 15:53:54.263293 [INFO] Running command /bin/sh -c systemctl start eb-docker-log.service
2022/01/27 15:53:54.433800 [INFO] docker container 3771e61e64ae is running aws_beanstalk/current-app
2022/01/27 15:53:54.433823 [INFO] Executing instruction: Clean up Docker
2022/01/27 15:53:54.433842 [INFO] Running command /bin/sh -c docker ps -aq
2022/01/27 15:53:54.638602 [INFO] 3771e61e64ae
2022/01/27 15:53:54.638644 [INFO] Running command /bin/sh -c docker images | sed 1d
2022/01/27 15:53:54.810723 [INFO] aws_beanstalk/current-app latest af10382f81a4 13 seconds ago 597MB
<none> <none> adafe645300e 24 seconds ago 732MB
openjdk 8 3bc5f7759e81 30 hours ago 526MB
maven 3.8.1-jdk-8 498ac51e5e6e 6 months ago 525MB
2022/01/27 15:53:54.810767 [INFO] save docker tag command: docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:54.810772 [INFO] save docker tag command: docker tag adafe645300e <none>:<none>
2022/01/27 15:53:54.810776 [INFO] save docker tag command: docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:54.810781 [INFO] save docker tag command: docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:54.810793 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
2022/01/27 15:53:54.964217 [INFO] Running command /bin/sh -c docker rmi `docker images -aq`
2022/01/27 15:53:56.249352 [INFO] Deleted: sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45
Deleted: sha256:b78c0f45b590e7c8c496466450e2fecf2e31044dd53bcf8d9c64a9e7a8c84139
Deleted: sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9
Deleted: sha256:a568ba4507a603b7ace044d64726daaf3022c817cc9550779d64dbb95d0e1e5d
Deleted: sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9
Deleted: sha256:7c72fe5e2da958b5d44267aa9de538c274e70125c902bc3e663af4c5c87280dc
Untagged: maven:3.8.1-jdk-8
Untagged: maven@sha256:cba6d738a97e81e8845d60ee2662f020385d01d6135a2cf75bc1f5a84980ef88
Deleted: sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e
Deleted: sha256:de026bec49cbc1fd7bd1bd7aa03d544713985e39bc0a913f4c0a59dbcc556715
Deleted: sha256:f5c45a5e495b035f37dc2e19d8ead0458cf0ad8b83d5573cc9b4016ea54814b6
Deleted: sha256:9f871694bb9a37f62b6baf12760480448d46e008c8c85f06dab5340b16d11a2b
Deleted: sha256:19a57d2c318dfeac5de4cac0a5263af560eff01c620100570c83658e12df0a87
Deleted: sha256:bc20a3f84b95792033865bff3c1cc53b060108ef2018b1913da3c8eddda77b99
Deleted: sha256:f33d6ed931ff64c63168af00c7544d148d01fda66831246572ff2bfcacbcf2d6
Deleted: sha256:017b9704876de2443b332b1dfec580d365184b514eb0af43f1d59637e77af9bb
Deleted: sha256:98fc59c935e697d6375f05f4fa29d0e1ef7e8ece61aed109056926983ada0ef4
Deleted: sha256:c21ff68b02e7caf277f5d356e8b323a95e8d3969dd1ab0d9f60e7c8b4a01c874
Deleted: sha256:afa3e488a0ee76983343f8aa759e4b7b898db65b715eb90abc81c181388374e3
2022/01/27 15:53:56.249384 [INFO] restore docker image name with command: docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:56.249393 [INFO] Running command /bin/sh -c docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:56.352957 [INFO] restore docker image name with command: docker tag adafe645300e <none>:<none>
2022/01/27 15:53:56.352988 [INFO] Running command /bin/sh -c docker tag adafe645300e <none>:<none>
2022/01/27 15:53:56.360403 [INFO] restore docker image name with command: docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:56.360437 [INFO] Running command /bin/sh -c docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:56.461652 [INFO] restore docker image name with command: docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:56.461677 [INFO] Running command /bin/sh -c docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:56.561836 [INFO] Executing instruction: start X-Ray
2022/01/27 15:53:56.561859 [INFO] X-Ray is not enabled.
2022/01/27 15:53:56.561863 [INFO] Executing instruction: configureSqsd
2022/01/27 15:53:56.561868 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2022/01/27 15:53:56.561871 [INFO] Executing instruction: startSqsd
2022/01/27 15:53:56.561874 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2022/01/27 15:53:56.561877 [INFO] Executing instruction: Track pids in healthd
2022/01/27 15:53:56.561881 [INFO] This is an enhanced health env...
2022/01/27 15:53:56.561891 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2022/01/27 15:53:56.572170 [INFO] cfn-hup.service docker.service nginx.service healthd.service eb-docker-log.service eb-docker-events.service eb-docker.service
2022/01/27 15:53:56.572206 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2022/01/27 15:53:56.583143 [INFO]
2022/01/27 15:53:56.583747 [INFO] Executing instruction: Configure Docker Container Logging
2022/01/27 15:53:56.587182 [INFO] Executing instruction: RunAppDeployPostDeployHooks
2022/01/27 15:53:56.587200 [INFO] The dir .platform/hooks/postdeploy/ does not exist in the application. Skipping this step...
2022/01/27 15:53:56.587204 [INFO] Executing cleanup logic
2022/01/27 15:53:56.587325 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[{"msg":"Instance deployment completed successfully.","timestamp":1643298836,"severity":"INFO"}]}]}
2022/01/27 15:53:56.587458 [INFO] Platform Engine finished execution on command: app-deploy
2022/01/27 15:56:08.141406 [INFO] Starting...
2022/01/27 15:56:08.141500 [INFO] Starting EBPlatform-PlatformEngine
2022/01/27 15:56:08.141523 [INFO] reading event message file
2022/01/27 15:56:08.141619 [INFO] no eb envtier info file found, skip loading env tier info.
2022/01/27 15:56:08.141697 [INFO] Engine received EB command cfn-hup-exec
2022/01/27 15:56:08.291283 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBAutoScalingGroup --region us-east-1
2022/01/27 15:56:08.851246 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBBeanstalkMetadata --region us-east-1
2022/01/27 15:56:09.238835 [INFO] checking whether command tail-log is applicable to this instance...
2022/01/27 15:56:09.238847 [INFO] this command is applicable to the instance, thus instance should execute command
2022/01/27 15:56:09.238849 [INFO] Engine command: (tail-log)
2022/01/27 15:56:09.238906 [INFO] Executing instruction: GetTailLogs
2022/01/27 15:56:09.238910 [INFO] Tail Logs...
2022/01/27 15:56:09.239208 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-engine.log
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
172.31.35.54 - - [27/Jan/2022:15:53:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x82\x02\x92T\xC0\x06O\x7F\xAA\xB5=\xC8\x8Ca\x83v\xFF\xF7\x8E\xF2\xB9\xBDW\x1B\xB9\x9A\x91x\xB0\x81\xBF\xA6\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:14 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBAy5)=k\x1D\x19|\xF6\xBC\xB0B\x10\x0B$\xE8#\x06\x8B\xA1iY\xB4##+-\x1F\xAC\x92&\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:29 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x03\xBC\xF2\x93\x90uW\xC0\xA5f\xFFWz~K_\xF61\xAEsuY\xE2R\xE0\xBC&\xE7\xFB|\xDB\xC2\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:44 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x84\xFD\xD5\xA5{\xF7\xDEr\x96\xEB" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBCU\xC9\x92=\xCBT\xC2\xB8RL\xA3\xF7\xE6\xD4s\xB8!A\xF2\x14\xC3" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:09 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03f\x1B\xB8\x17\x19k|H\x1DW\xEF&\x83\x03#\xE9GB\xE8f\xB4\xDAGJ]\x8E\x92\xD6\xC8L\xD3%\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:14 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xCC\x9D\x1A5&\x99\xB76\x16\xC1\xE2\xB5\xC3:G]\x1A\xA5H\xEE\xF6s\xD0\xF9s\xA3\xBE\xD2\x9Aq\xF0\xC2\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:24 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03j4x\xF0\x86uwh\x1C\xEEg8\xA9\xA3\x1E(\x18C\x96\xFA\xE8\xA6\x87{\xC3N\xD4\x08\x10\xBA\xAC\x03\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:29 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x5C\x8Btq\xBEG\xD2\xF8l\xC8\xBA\x94F\x14\x8F\x1C\xCC\xA1#JSw9\xE4\xCD\xA7\x05\x82\xE4][\xB8\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:39 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03{\x05\x86\x89\x09.:A\x0C\xCF\x14\xA4=\xDF\xFA\xC6\xD4\xF5+\x9D\xA4\xF8\x93\xE9k\xD5\xD3\xC5\xCA\x9C\xFB\x15\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:44 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBC\xF3\xE3\xDEy\xB3(\xF2\x18\xEB\xC5f\x1F\xA2\xF5\xE6\xF5\x8C\xF6lO\x98D\xFAT\xCB\xB3`\x9C\xC2\xCE.\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:54 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x16P\x10\x07}\x90\xBD!\x9E\xA1\xAB\xD9\xDD\x1F\xAA\xBF\x85u\xCF\xE7\xAD\xA9\x93$q\xC4" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03x\x94z\x84\x1Buz3\x9A\x8FbX\x07\x13\x00\x8DH\xDFf\x10\xC9\xE7\xDB\xF7\xE7\xBFr\xE8w>\xFC\x9E\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:56:09 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xEF\x1F'\x84#\xF4\xF4\xB6C\xEE\xE4}\xD6E\x94\x05\xA1\x1B*\x1EZ\x94N\xB9K\x96A>\x8A\x8Ep\xBF\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
----------------------------------------
/var/log/nginx/error.log
----------------------------------------
----------------------------------------
/var/log/docker-events.log
----------------------------------------
2022-01-27T15:52:46.764393026Z image pull maven:3.8.1-jdk-8 (name=maven)
2022-01-27T15:52:47.730944524Z container create b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:52:47.731203832Z container attach b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:52:47.784204703Z network connect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010, name=bridge, type=bridge)
2022-01-27T15:52:48.320837501Z container start b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:28.504262431Z container die b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (exitCode=0, image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:28.615767036Z network disconnect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010, name=bridge, type=bridge)
2022-01-27T15:53:30.828196270Z container destroy b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:40.412059108Z image pull openjdk:8 (name=openjdk)
2022-01-27T15:53:41.682562011Z container create ebb956fca825c2053c41bce28fb0a802ab2f3ef344bdeb14f821a7577c284138 (image=sha256:2ab20532670b7570e512ec955536dfa5e246c374bdca4f0494df107b88a51c75, name=stoic_fermi)
2022-01-27T15:53:41.807749332Z container destroy ebb956fca825c2053c41bce28fb0a802ab2f3ef344bdeb14f821a7577c284138 (image=sha256:2ab20532670b7570e512ec955536dfa5e246c374bdca4f0494df107b88a51c75, name=stoic_fermi)
2022-01-27T15:53:41.854905318Z container create 28814d73d5d71c7f3cd97d31e3745db7c8d74c7f41a1369d86a6ac94540ff54c (image=sha256:8020ea63973791b37416e569141e448a047578432cc73771afc09069d4a0f99c, name=awesome_ritchie)
2022-01-27T15:53:41.972362390Z container destroy 28814d73d5d71c7f3cd97d31e3745db7c8d74c7f41a1369d86a6ac94540ff54c (image=sha256:8020ea63973791b37416e569141e448a047578432cc73771afc09069d4a0f99c, name=awesome_ritchie)
2022-01-27T15:53:41.978868467Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/staging-app:latest)
2022-01-27T15:53:46.962572822Z container create 3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399 (image=af10382f81a4, name=dreamy_napier)
2022-01-27T15:53:47.000564620Z network connect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399, name=bridge, type=bridge)
2022-01-27T15:53:47.520980591Z container start 3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399 (image=af10382f81a4, name=dreamy_napier)
2022-01-27T15:53:53.482805850Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/current-app:latest)
2022-01-27T15:53:53.562121224Z image untag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217)
2022-01-27T15:53:55.349273944Z image delete sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45 (name=sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45)
2022-01-27T15:53:55.351988220Z image delete sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9 (name=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9)
2022-01-27T15:53:55.356884258Z image delete sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9 (name=sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9)
2022-01-27T15:53:55.374500965Z image untag sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:55.376309688Z image untag sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:56.244254893Z image delete sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:56.345382037Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/current-app:latest)
2022-01-27T15:53:56.458746013Z image tag sha256:3bc5f7759e81182b118ab4d74087103d3733483ea37080ed5b6581251d326713 (name=openjdk:8)
----------------------------------------
/var/log/eb-docker-process.log
----------------------------------------
2022/01/27 15:53:53.917760 [INFO] Loading Manifest...
2022/01/27 15:53:53.917884 [INFO] no eb envtier info file found, skip loading env tier info.
2022/01/27 15:53:53.943756 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBAutoScalingGroup --region us-east-1
2022/01/27 15:53:57.965132 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBBeanstalkMetadata --region us-east-1
2022/01/27 15:53:58.364393 [INFO] Checking if docker is running...
2022/01/27 15:53:58.364409 [INFO] Fetch current app container id...
2022/01/27 15:53:58.364434 [INFO] Running command /bin/sh -c docker ps | grep 3771e61e64ae
2022/01/27 15:53:58.402972 [INFO] 3771e61e64ae af10382f81a4 "java -jar /usr/loca…" 12 seconds ago Up 10 seconds 5000/tcp dreamy_napier
2022/01/27 15:53:58.402996 [INFO] Running command /bin/sh -c docker wait 3771e61e64ae
----------------------------------------
/var/log/docker
----------------------------------------
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.206815429Z" level=info msg="Starting up"
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251734173Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251769208Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251794146Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251813620Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273290447Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273327673Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273364441Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273386710Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.465282859Z" level=info msg="Loading containers: start."
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.956009883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.186887273Z" level=info msg="Loading containers: done."
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.641490298Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.643174227Z" level=info msg="Daemon has completed initialization"
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.702629222Z" level=info msg="API listen on /run/docker.sock"
Jan 27 15:53:28 ip-172-31-85-60 docker: time="2022-01-27T15:53:28.503145956Z" level=info msg="ignoring event" container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 27 15:53:41 ip-172-31-85-60 docker: time="2022-01-27T15:53:41.783532791Z" level=info msg="Layer sha256:e963a094d3f25a21ce0bfcae0216d04385c4c06ad580c73675a7992627c28416 cleaned up"
Jan 27 15:53:41 ip-172-31-85-60 docker: time="2022-01-27T15:53:41.948756315Z" level=info msg="Layer sha256:e963a094d3f25a21ce0bfcae0216d04385c4c06ad580c73675a7992627c28416 cleaned up"
----------------------------------------
/var/log/eb-docker/containers/eb-current-app/eb-3771e61e64ae-stdouterr.log
----------------------------------------
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.6)
2022-01-27 15:53:57.807 INFO 3771e61e64ae --- [ main] o.s.b.a.e.w.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'
2022-01-27 15:53:57.853 INFO 3771e61e64ae --- [ main] o.a.c.h.Http11NioProtocol : Starting ProtocolHandler ["http-nio-5000"]
2022-01-27 15:53:57.875 INFO 3771e61e64ae --- [ main] o.s.b.w.e.t.TomcatWebServer : Tomcat started on port(s): 5000 (http) with context path ''
2022-01-27 15:53:57.903 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Started ParalleniumHostApplication in 8.805 seconds (JVM running for 10.386)
2022-01-27 15:53:57.939 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : **The server is hosted at: 127.0.0.1:5000 with a PUBLIC ip of 34.226.166.24
2022-01-27 15:53:57.941 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Spring version is 5.3.12
2022-01-27 15:53:57.946 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Socket Server is listening on port 6868...
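One thing I noticed in the nginx access log above: the \x16\x03\x01 prefixes are what a TLS handshake looks like when it lands on a plain-HTTP listener (nginx answers with 400). Given the internal source IPs and the regular interval, these are likely health checks or clients still speaking HTTPS to port 80. A small sketch decoding such a record header (my own illustration, not project code):

```python
# Decode the leading bytes of a TLS record, as seen in the nginx access log.
# \x16 = content type 22 (handshake), \x03\x01 = record-layer version TLS 1.0.
import struct

def parse_tls_record_header(data: bytes) -> dict:
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return {
        "content_type": content_type,   # 22 means "handshake" (a ClientHello)
        "version": f"{major}.{minor}",  # 3.1 is the TLS 1.0 record version
        "length": length,               # payload length of the record
    }

# First five bytes of one of the logged requests: \x16\x03\x01\x00\xA3
header = parse_tls_record_header(b"\x16\x03\x01\x00\xa3")
print(header)  # content_type 22: a TLS handshake sent to an HTTP port
```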
Okay, so I decided to just launch a new environment using the exact same configuration and code, and it worked. It looks like Elastic Beanstalk environments can simply break, and once that happens there is apparently no fixing them.
Related
I have implemented websocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database and gunicorn WSGI server all work, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with docker, it worked well with:
python manage.py runserver
with the following settings.py section for the redis layer:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("0.0.0.0", 6379)],
},
},
}
and by calling a docker container for the redis layer:
docker run -p 6379:6379 -d redis:5
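As a sanity check that the mapped port actually answers, here is a Python equivalent of nc -z (a throwaway helper I sketched for debugging, not part of the project):

```python
# Minimal nc -z equivalent: report whether a TCP port accepts connections.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("127.0.0.1", 6379) should be True while the redis container runs
```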
But after trying to containerize the entire application, it was unable to find the websockets.
The new setup for the docker-compose is as follows:
version: '3.10'
services:
web:
container_name: web
build:
context: ./app
dockerfile: Dockerfile
command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app/:/usr/src/app/
- static_volume:/usr/src/app/staticfiles/
- media_volume:/usr/src/app/media/
ports:
- 8000:8000
env_file:
- ./.env.dev
depends_on:
- db
networks:
- app_network
redis:
container_name: redis
image: redis:5
ports:
- 6379:6379
networks:
- app_network
restart: on-failure
db:
container_name: db
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- ./.env.psql
ports:
- 5432:5432
networks:
- app_network
volumes:
postgres_data:
static_volume:
media_volume:
networks:
app_network:
with this settings.py:
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("redis", 6379)],
},
},
}
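The host tuple is the part that changes between runserver and Compose. One way to keep both setups working is to read it from environment variables; this is a sketch using REDIS_HOST/REDIS_PORT, the same names the entrypoint script already checks:

```python
# settings.py sketch: default to the Compose service name "redis", but allow
# an override (e.g. REDIS_HOST=127.0.0.1 when running outside Compose).
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(REDIS_HOST, REDIS_PORT)],
        },
    },
}
```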
After building the containers successfully and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile:
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

if [ "$CHANNEL" = "redis" ]
then
    echo "Waiting for redis..."
    while ! nc -z $REDIS_HOST $REDIS_PORT; do
      sleep 0.1
    done
    echo "redis started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
I have also tried running the redis container separately, as before, while keeping the working containers, but that doesn't work either. I have also tried running daphne on a different port and passing it the asgi application (daphne -p 8001 myproject.asgi:application), and that didn't work either.
Thank you
I eventually managed to find a solution.
To make it work, I needed to run the WSGI and ASGI servers separately from each other, each in its own container. The previous "web" service that exposed the ports to the application also had to be duplicated, once per server, with an nginx proxy upstreaming to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses Celery workers to manage an asynchronous task/job queue based on distributed message passing, which was a bit of overkill for my project, but beautiful.
New docker-compose:
version: '2'

services:
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    depends_on:
      - wsgiserver
      - asgiserver
  postgres:
    container_name: postgres
    restart: always
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5433:5432
    expose:
      - 5432
    env_file:
      - ./.env.db
  redis:
    container_name: redis
    image: redis:5
    restart: unless-stopped
    ports:
      - 6378:6379
  wsgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: wsgiserver
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    links:
      - postgres
      - redis
    expose:
      - 8000
  asgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: asgiserver
    command: daphne core.asgi:application -b 0.0.0.0 -p 9000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
    links:
      - postgres
      - redis
    expose:
      - 9000

volumes:
  static_volume:
  media_volume:
  postgres_data:
New entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
New nginx
nginx.conf:
server {
    listen 80;

    # gunicorn wsgi server
    location / {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    # ASGI
    # map websocket connections to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://asgiserver:9000;
    }

    # static and media files
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Note
If anyone is using this as a reference: this is not a production-ready setup; further steps are needed.
This article explains the remaining steps, as well as how to secure the application on AWS with Docker and Let's Encrypt (linked in its conclusion):
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion
I want to insert environment variables from an .env file into my containerized Django application, so I can use them to securely populate Django's settings.py.
However, on docker-compose up I receive a UserWarning which apparently originates in the django-environ package (and breaks my code):
/usr/local/lib/python3.9/site-packages/environ/environ.py:628: UserWarning: /app/djangoDocker/.env doesn't exist - if you're not configuring your environment separately, create one. web | warnings.warn(
The output breaks at that point and (although all the containers claim to be running) I can neither stop them from that console (zsh, Ctrl+C) nor access the website locally. What am I missing? I'd really appreciate any useful input.
Dockerfile: (located in root)
# pull official base image
FROM python:3.9.5
# set environment variables, grab via os.environ
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
# set work directory
WORKDIR /app
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# add entrypoint script
COPY ./entrypoint.sh ./app
# run entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
# copy project
COPY . /app
docker-compose.yml (located in root; I've tried either using env_file or environment as in the comments)
version: '3'

services:
  web:
    build: .
    container_name: web
    command: gunicorn djangoDocker.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:80"
    env_file:
      - .env
    # environment:
    #   BASE_URL: ${BASE_URL}
    #   SECRET_KEY: ${SECRET_KEY}
    #   ALLOWED_HOSTS: ${ALLOWED_HOSTS}
    #   DEBUG: ${DEBUG}
    #   SQL_ENGINE: ${SQL_ENGINE}
    #   SQL_DATABASE: ${SQL_DATABASE}
    #   SQL_USER: ${SQL_USER}
    #   SQL_PASSWORD: ${SQL_PASSWORD}
    #   SQL_HOST: ${SQL_HOST}
    #   SQL_PORT: ${SQL_PORT}
    #   EMAIL_HOST_USER: ${EMAIL_HOST_USER}
    #   EMAIL_HOST_PASSWORD: ${EMAIL_HOST_PASSWORD}
    #   TEMPLATE_DIR: ${TEMPLATE_DIR}
    depends_on:
      - pgdb
  pgdb:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - .:/app
    links:
      - web:web
    ports:
      - "80:80"
    depends_on:
      - web

volumes:
  pgdata:
.env (also located in root)
BASE_URL=localhost
SECRET_KEY=mySecretKey
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0
DEBUG=True
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=pgdb
SQL_PORT=5432
EMAIL_HOST_USER=my@mail.com
EMAIL_HOST_PASSWORD=myMailPassword
TEMPLATE_DIR=frontend/templates/frontend/
Terminal output after running docker-compose up in the root:
pgdb is up-to-date
Recreating web ... done
Recreating djangodocker_nginx_1 ... done
Attaching to pgdb, web, djangodocker_nginx_1
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Starting gunicorn 20.0.4
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Using worker: sync
web | [2021-06-04 14:58:09 +0000] [8] [INFO] Booting worker with pid: 8
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: using the "epoll" event method
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: nginx/1.20.1
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: OS: Linux 4.19.121-linuxkit
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker processes
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 23
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 24
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 25
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 26
pgdb |
pgdb | PostgreSQL Database directory appears to contain a database; Skipping initialization
pgdb |
pgdb | 2021-06-04 14:34:00.119 UTC [1] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
pgdb | 2021-06-04 14:34:00.120 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
pgdb | 2021-06-04 14:34:00.120 UTC [1] LOG: listening on IPv6 address "::", port 5432
pgdb | 2021-06-04 14:34:00.125 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
pgdb | 2021-06-04 14:34:00.134 UTC [27] LOG: database system was shut down at 2021-06-04 14:21:19 UTC
pgdb | 2021-06-04 14:34:00.151 UTC [1] LOG: database system is ready to accept connections
web | /usr/local/lib/python3.9/site-packages/environ/environ.py:628: UserWarning: /app/djangoDocker/.env doesn't exist - if you're not configuring your environment separately, create one.
web | warnings.warn(
requirements.txt
Django==3.2
gunicorn==20.0.4
djoser==2.1.0
django-environ
psycopg2-binary~=2.8.0
django-cors-headers==3.5.0
django-templated-mail==1.1.1
djangorestframework==3.12.2
djangorestframework-simplejwt==4.7.0
Let me know in case any further information is required.
I still do not know what caused the error, but in case anyone else has the same problem: switching to python-decouple instead of django-environ fixed it. Of course, you have to adapt everything in settings.py accordingly, e.g. add from decouple import config and DEBUG = config('DEBUG', default=False, cast=bool).
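To make the python-decouple switch above concrete, here is a minimal stdlib sketch of what its config call does (the helper and its truthy-string list are my own approximation; the real package also falls back to a .env or settings.ini file):

```python
import os

def config(key, default=None, cast=str):
    """Rough stand-in for decouple.config: read an environment
    variable and cast it (illustration only -- the real
    python-decouple also reads a .env / settings.ini file)."""
    value = os.environ.get(key)
    if value is None:
        return default
    if cast is bool:
        # decouple accepts the usual truthy strings for booleans
        return value.strip().lower() in ('true', '1', 'yes', 'on')
    return cast(value)

# In settings.py, with python-decouple itself, the equivalent is:
#   from decouple import config
#   DEBUG = config('DEBUG', default=False, cast=bool)
#   SECRET_KEY = config('SECRET_KEY')
```

The point of the cast argument is that environment variables are always strings; DEBUG='True' in .env only becomes a real boolean once cast.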
I have to deploy a Flask app on Amazon Elastic Beanstalk.
I was following these steps to deploy on Elastic Beanstalk:
http://www.alcortech.com/steps-to-deploy-python-flask-mysql-application-on-aws-elastic-beanstalk/
The error log I'm getting:
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2020/08/04 17:54:08.190038 [INFO] Copying file /opt/elasticbeanstalk/config/private/healthd/healthd.conf to /var/proxy/staging/nginx/conf.d/elasticbeanstalk/healthd.conf
2020/08/04 17:54:08.191770 [INFO] Executing instruction: configure log streaming
2020/08/04 17:54:08.191779 [INFO] log streaming is not enabled
2020/08/04 17:54:08.191783 [INFO] disable log stream
2020/08/04 17:54:08.192853 [INFO] Running command /bin/sh -c systemctl show -p PartOf amazon-cloudwatch-agent.service
2020/08/04 17:54:08.298022 [INFO] Running command /bin/sh -c systemctl stop amazon-cloudwatch-agent.service
2020/08/04 17:54:08.303818 [INFO] Executing instruction: GetToggleForceRotate
2020/08/04 17:54:08.303831 [INFO] Checking if logs need forced rotation
2020/08/04 17:54:08.303852 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1
2020/08/04 17:54:09.170590 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBBeanstalkMetadata --region us-east-1
2020/08/04 17:54:09.501785 [INFO] Copying file /opt/elasticbeanstalk/config/private/rsyslog.conf to /etc/rsyslog.d/web.conf
2020/08/04 17:54:09.503412 [INFO] Running command /bin/sh -c systemctl restart rsyslog.service
2020/08/04 17:54:10.455082 [INFO] Executing instruction: PostBuildEbExtension
2020/08/04 17:54:10.455106 [INFO] No plugin in cfn metadata.
2020/08/04 17:54:10.455116 [INFO] Starting executing the config set Infra-EmbeddedPostBuild.
2020/08/04 17:54:10.455138 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPostBuild
2020/08/04 17:54:10.827402 [INFO] Finished executing the config set Infra-EmbeddedPostBuild.
2020/08/04 17:54:10.827431 [INFO] Executing instruction: CleanEbExtensions
2020/08/04 17:54:10.827453 [INFO] Cleaned ebextensions subdirectories from app staging directory.
2020/08/04 17:54:10.827457 [INFO] Executing instruction: RunPreDeployHooks
2020/08/04 17:54:10.827478 [INFO] The dir .platform/hooks/predeploy/ does not exist in the application. Skipping this step...
2020/08/04 17:54:10.827482 [INFO] Executing instruction: stop X-Ray
2020/08/04 17:54:10.827486 [INFO] stop X-Ray ...
2020/08/04 17:54:10.827504 [INFO] Running command /bin/sh -c systemctl show -p PartOf xray.service
2020/08/04 17:54:10.834251 [WARN] stopProcess Warning: process xray is not registered
2020/08/04 17:54:10.834271 [INFO] Running command /bin/sh -c systemctl stop xray.service
2020/08/04 17:54:10.844029 [INFO] Executing instruction: stop proxy
2020/08/04 17:54:10.844061 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:10.929856 [WARN] stopProcess Warning: process nginx is not registered
2020/08/04 17:54:10.929893 [INFO] Running command /bin/sh -c systemctl stop nginx.service
2020/08/04 17:54:10.935107 [INFO] Executing instruction: FlipApplication
2020/08/04 17:54:10.935119 [INFO] Fetching environment variables...
2020/08/04 17:54:10.935125 [INFO] No plugin in cfn metadata.
2020/08/04 17:54:10.936360 [INFO] Purge old process...
2020/08/04 17:54:10.936404 [INFO] Register application processes...
2020/08/04 17:54:10.936409 [INFO] Registering the proc: web
2020/08/04 17:54:10.936423 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2020/08/04 17:54:10.942911 [INFO] Running command /bin/sh -c systemctl daemon-reload
2020/08/04 17:54:11.190918 [INFO] Running command /bin/sh -c systemctl reset-failed
2020/08/04 17:54:11.195011 [INFO] Running command /bin/sh -c systemctl is-enabled eb-app.target
2020/08/04 17:54:11.198465 [INFO] Copying file /opt/elasticbeanstalk/config/private/aws-eb.target to /etc/systemd/system/eb-app.target
2020/08/04 17:54:11.200382 [INFO] Running command /bin/sh -c systemctl enable eb-app.target
2020/08/04 17:54:11.275179 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/eb-app.target to /etc/systemd/system/eb-app.target.
2020/08/04 17:54:11.275218 [INFO] Running command /bin/sh -c systemctl start eb-app.target
2020/08/04 17:54:11.280436 [INFO] Running command /bin/sh -c systemctl enable web.service
2020/08/04 17:54:11.355233 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/web.service to /etc/systemd/system/web.service.
2020/08/04 17:54:11.355273 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2020/08/04 17:54:11.360364 [INFO] Running command /bin/sh -c systemctl is-active web.service
2020/08/04 17:54:11.363811 [INFO] Running command /bin/sh -c systemctl start web.service
2020/08/04 17:54:11.389333 [INFO] Executing instruction: start X-Ray
2020/08/04 17:54:11.389349 [INFO] X-Ray is not enabled.
2020/08/04 17:54:11.389354 [INFO] Executing instruction: start proxy with new configuration
2020/08/04 17:54:11.389382 [INFO] Running command /bin/sh -c /usr/sbin/nginx -t -c /var/proxy/staging/nginx/nginx.conf
2020/08/04 17:54:11.594594 [ERROR] nginx: the configuration file /var/proxy/staging/nginx/nginx.conf syntax is ok
nginx: configuration file /var/proxy/staging/nginx/nginx.conf test is successful
2020/08/04 17:54:11.595275 [INFO] Running command /bin/sh -c cp -rp /var/proxy/staging/nginx/. /etc/nginx
2020/08/04 17:54:11.603198 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:11.618752 [INFO] Running command /bin/sh -c systemctl daemon-reload
2020/08/04 17:54:11.716763 [INFO] Running command /bin/sh -c systemctl reset-failed
2020/08/04 17:54:11.724234 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:11.735835 [INFO] Running command /bin/sh -c systemctl is-active nginx.service
2020/08/04 17:54:11.743306 [INFO] Running command /bin/sh -c systemctl start nginx.service
2020/08/04 17:54:11.810080 [INFO] Executing instruction: configureSqsd
2020/08/04 17:54:11.810096 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2020/08/04 17:54:11.810102 [INFO] Executing instruction: startSqsd
2020/08/04 17:54:11.810105 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2020/08/04 17:54:11.810110 [INFO] Executing instruction: Track pids in healthd
2020/08/04 17:54:11.810114 [INFO] This is an enhanced health env...
2020/08/04 17:54:11.810138 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2020/08/04 17:54:11.819320 [INFO] healthd.service nginx.service cfn-hup.service
2020/08/04 17:54:11.819352 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2020/08/04 17:54:11.826094 [INFO] web.service
2020/08/04 17:54:11.826211 [INFO] Executing instruction: RunPostDeployHooks
2020/08/04 17:54:11.826223 [INFO] The dir .platform/hooks/postdeploy/ does not exist in the application. Skipping this step...
2020/08/04 17:54:11.826228 [INFO] Executing cleanup logic
2020/08/04 17:54:11.826308 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[]}]}
2020/08/04 17:54:11.826448 [INFO] Platform Engine finished execution on command: app-deploy
2020/08/04 17:55:26.814753 [INFO] Starting...
2020/08/04 17:55:26.814816 [INFO] Starting EBPlatform-PlatformEngine
2020/08/04 17:55:26.817259 [INFO] no eb envtier info file found, skip loading env tier info.
2020/08/04 17:55:26.817348 [INFO] Engine received EB command cfn-hup-exec
2020/08/04 17:55:26.939483 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1
2020/08/04 17:55:27.277717 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBBeanstalkMetadata --region us-east-1
2020/08/04 17:55:27.829610 [INFO] checking whether command tail-log is applicable to this instance...
2020/08/04 17:55:27.829630 [INFO] this command is applicable to the instance, thus instance should execute command
2020/08/04 17:55:27.829635 [INFO] Engine command: (tail-log)
2020/08/04 17:55:27.830551 [INFO] Executing instruction: GetTailLogs
2020/08/04 17:55:27.830557 [INFO] Tail Logs...
2020/08/04 17:55:27.834471 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-engine.log
----------------------------------------
/var/log/web.stdout.log
----------------------------------------
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Starting gunicorn 20.0.4
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Listening at: http://127.0.0.1:8000 (3881)
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Using worker: threads
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3918] [INFO] Booting worker with pid: 3918
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
----------------------------------------
/var/log/nginx/error.log
----------------------------------------
My application.py file is in the root; here is its source code:
from pprint import pprint
import re
import smtplib
import ssl
import docxpy
import glob
import time
import spacy
import requests
import json
import pickle
import numpy as np
import pandas as pd
import tensorflow as tf
from flask import Flask
from flask_restful import Api, Resource, reqparse
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.models import model_from_json
import en_core_web_sm
NLP = en_core_web_sm.load()
df = pd.read_csv('skill_train.csv')
df=df.dropna()
df['skill']=pd.to_numeric(df['skill'])
negitive=df[df['skill']==0]
positive=df[df['skill']==1]
application = Flask(__name__)
api = Api(application)
class Candidate:
    def __init__(self,file_link):
        __text = docxpy.process(file_link).strip()
        self.__resume={
            'Name':self.__extract_name(__text),
            'Phone Number':self.__extract_phone(__text),
            'Email':self.__extract_email(__text),
            'Experience':self.__extract_experience(__text),
            'Skills':list(),
            'Title':'',
            'match':0,
            'file_path':file_link,
        }

    def get_resume(self):
        return self.__resume

    def __extract_name(self,text):
        try:
            return text[:text.index('\n')]
        except:
            return None

    def __extract_email(self,text):
        email_pattern = re.compile(r'\S+@\S+\.\S+')
        try:
            return email_pattern.findall(text)[0].upper()
        except:
            try:
                __hyperlinks = text.data['links'][0][0].decode('UTF-8')
                return email_pattern.findall(__hyperlinks)[0].upper()
            except:
                return None

    def __extract_phone(self,text):
        phone_pattern = re.compile(r'(\d{3}[-\.\s]??\d{3}[-\.\s]??\d{4}|\(\d{3}\)[-\.\s]*\d{3}[-\.\s]??\d{4}|\d{3}[-\.\s]??\d{4})')
        try:
            return ''.join(phone_pattern.findall(text)[0]) if len(''.join(phone_pattern.findall(text)[0]))>=10 else None
        except:
            return None

    def __extract_experience(self,text):
        try:
            __exp_pattern = re.compile(r'\d\+ years|\d years|\d\d\+ years|\d\d years|\d\d \+ Years|\d \+ Years')
            __exp = __exp_pattern.findall(text)
            return str(max([int(re.findall(re.compile(r'\d+'),i)[0]) for i in __exp])) + '+ years'
        except:
            try:
                __date_patt = re.compile(r"\d{2}[/-]\d+")
                __dates_list = __date_patt.findall(text)
                try:
                    __year_list=[int(date[-4:]) for date in __dates_list]
                except:
                    __year_list=[int(date[-2:]) for date in __dates_list]
                return str(max(__year_list)-min(__year_list))+'+ years'
            except:
                return None
class JobDescription:
    def __init__(self,args):
        self.description=args['job_description'].upper()
        self.__title=self.__get_title(self.description) if 'job_title' not in args else args['job_title'].upper()
        __doc=NLP(self.description)
        __noun_chunks=set([chunk.text.upper() for chunk in __doc.noun_chunks])
        self.__skills=list(self.__get_skills(list(__noun_chunks)))

    def title(self):
        return self.__title

    def skills(self):
        return self.__skills

    def __clean_data(self,noun_chunks):
        subs=[r'^[\d|\W]*','EXPERIENCE','EXPERT','DEVELOPER','SERVICES','STACK','TECHNOLOGIES',
              'JOBS','JOB',r'\n',' ',r'\t','AND','DEV','SCRIPTS','DBS','DATABASE','DATABASES','SERVER',
              'SERVERS',r'^\d+']
        __clean_chunks=[]
        for chunk in noun_chunks:
            for sub in subs:
                chunk=(re.sub(sub,' ',chunk).strip())
            filtered_chunk=[]
            chunk=chunk.split(' ')
            for word in chunk:
                for sub in subs:
                    word=(re.sub(sub,' ',word).strip())
                if word != '':
                    if not NLP.vocab[word.strip()].is_stop:
                        filtered_chunk.append(word.strip())
            filtered_chunk=' '.join(filtered_chunk)
            if filtered_chunk != '' and filtered_chunk != ' ':
                if ',' in filtered_chunk:
                    __clean_chunks+=filtered_chunk.split(',')
                elif '/' in filtered_chunk:
                    __clean_chunks+=filtered_chunk.split('/')
                else:
                    __clean_chunks.append(filtered_chunk)
        return set([chunk.strip() for chunk in __clean_chunks])

    def __get_skills(self,nounChunks):
        with open('skill_model.json','r') as f:
            model=f.read()
        sq_model = model_from_json(model)
        sq_model.load_weights('skillweights.h5')
        __clean_chunks=list(self.__clean_data(nounChunks))
        __onehot_repr=[one_hot(words,25000) for words in __clean_chunks]
        __test_data=pad_sequences(__onehot_repr,padding='pre',maxlen=6)
        __results = [(x,y[0]) for x,y in zip(__clean_chunks,sq_model.predict_classes(np.array(__test_data)))]
        ones=set(positive['chunk'])
        zeros=set(negitive['chunk'])
        for i,result in enumerate(__results):
            if result[0] in ones and result[1] !=1:
                __results[i]=(result[0],1)
            if result[0] in zeros and result[1] !=0:
                __results[i]=(result[0],0)
        return set([x[0] for x in __results if x[1]==1])

    def __get_title(self,text):
        try:
            __role=re.findall(re.compile(r'POSITION[ ]*:[\w .\(\)]+|ROLE[ ]*:[\w .\(\)]+|TITLE[ ]*:[\w .\(\)]+'),text)[0].split(':')[1].strip()
            if '(' in __role:
                __role=re.findall(re.compile(r'\([\w ]+\)'),__role)[0][1:-1].strip()
            return __role.upper()
        except:
            return None

    def __matcher(self,resume):
        __text = docxpy.process(resume['file_path']).upper()
        if self.__title in __text:
            resume['Title']=self.__title
        for skill in self.__skills:
            if skill in __text:
                resume['Skills'].append(skill)
        resume['Skills'] = list(set(resume['Skills']))
        resume['match'] = 0.0 if len(self.__skills)==0 else (len(resume['Skills'])/len(self.__skills))*100
        return resume

    def filter_matches(self,candidates):
        if self.__title != None:
            __matches = []
            for user in candidates:
                resume = user.get_resume()
                result = self.__matcher(resume)
                if (result['Title']!='' and result['match']>60) or result['match']>60:
                    __matches.append(result)
            return sorted(__matches, key=lambda match:match['match'], reverse=True)
        else:
            print('Unable to extract Role try writing Role:...... or Position:....')

    def send_mail(self,matches):
        __port = 465
        __smtp_server = "smtp.gmail.com"
        __sender_email = 'sonai20202015@gmail.com'
        __password = 'Sonai#123'
        context = ssl.create_default_context()
        with smtplib.SMTP_SSL(__smtp_server, __port, context=context) as server:
            server.login(__sender_email, __password)
            for Candidate in matches:
                __reciver_email = Candidate['Email']
                __message=f'''Subject: Job offer
Hi {Candidate['Name']},
This is an autogenrated email from an ATS SONAI we found your resume to be a
good match for {self.__title} job
'''
                server.sendmail(__sender_email,__reciver_email, __message)

    def get_acess(self):
        auth_url = 'https://secure.dice.com/oauth/token'
        auth_header = {'Authorization': 'Basic dHM0LWhheWRlbnRlY2hub2xvZ3k6Yzk0NWI4YmItMmRmNi00Yjk4LThmNDUtMTg4ZWU5Mjk3ZGEz', 'Content-Type': 'application/x-www-form-urlencoded'}
        auth_data = {'grant_type': 'password', 'username': 'haydentechnology@dice.com', 'password': '635n3E7s'}
        try:
            auth_response = requests.request('POST',auth_url,headers=auth_header,data=auth_data)
            auth_code = auth_response.status_code
            auth_response = json.loads(auth_response.content.decode())
            return (auth_code,auth_response)
        except:
            return(0,'')

    def boolean_skills(self):
        with open('output.pkl','rb') as f:
            data = pickle.load(f)
        if self.__title in data:
            output = []
            for skill in self.__skills:
                if skill in data[self.__title][0] and data[self.__title][0][skill]>(3/4)*data[self.__title][1]:
                    continue
                output.append(skill)
            return output
        return self.__skills

    def search_with_api(self):
        auth_response = self.get_acess()
        if auth_response[0] == 200:
            token = auth_response[1]['access_token']
            headers = {'Authorization':f'bearer {token}'}
            url = 'https://talent-api.dice.com/v2/profiles/search?q='
            boolean_skills = self.boolean_skills()
            for skill in boolean_skills:
                url += f'{skill}&'
            url = url + self.__title
            print('\n',url,'\n')
            try:
                output = requests.request('GET',url,headers=headers)
                output = json.loads(output.content.decode())
                return output
            except:
                return ('error while finding users')
        else:
            return ('Authentication error with dice')
class Search_Candidates(Resource):
    def post(self):
        parser = reqparse.RequestParser()
        parser.add_argument("application_type",required=False)#String
        parser.add_argument("application_name",required=False)#String
        parser.add_argument("application_internal_only",required=False)#Boolean
        parser.add_argument("application_applicant_history",required=False)#Boolean
        parser.add_argument("application_years_of_employement_needed",required=False)#Float
        parser.add_argument("application_number_of_refrences",required=False)#Float
        parser.add_argument("application_flag_voluntarily_resign",required=False)#Boolean
        parser.add_argument("application_flag_past_employer_contracted",required=False)#Boolean
        parser.add_argument("email_template_default_address",required=False)#String
        parser.add_argument("task",required=False)#List
        parser.add_argument("job_title",required=True)#String
        parser.add_argument("employement_status",required=False)#String
        parser.add_argument('job_description', required=True)#String
        parser.add_argument("joinig_date",required=False)#String as ISO STANDARDS
        parser.add_argument("salary",required=False)#Float
        parser.add_argument("average_hours_weekly",required=False)#Float
        parser.add_argument("post_title",required=False)#String
        parser.add_argument("post_details_category",required=False)#String
        parser.add_argument("number_of_open_position",required=False)#Float
        parser.add_argument("general_application",required=False)#Boolean
        args = parser.parse_args()
        response = self.find_matches(args)
        response = json.dumps(response)
        return response

    def find_matches(self,args):
        file_paths=glob.glob(r'demo_word_file\*.docx')
        candidates=[Candidate(file_path) for file_path in file_paths]
        job = JobDescription(args)
        start_time=time.time()
        results = job.filter_matches(candidates)
        pprint(f'Found and Sorted {len(results)} results in {time.time()-start_time} secs from {len(candidates)} files')
        matches = [matches for matches in job.filter_matches(candidates)]
        if not len(matches) == 0:
            matches_with_email=[match for match in matches if match['Email'] != None]
            job.send_mail(matches_with_email)
        else:
            results = job.search_with_api()
        return results

def run():
    file_paths=glob.glob(r'demo_word_file\*.docx')
    candidates=[Candidate(file_path) for file_path in file_paths]
    text = docxpy.process('jobtest.docx')
    args= {'job_description': text}
    job = JobDescription(args)
    results = job.filter_matches(candidates)
    return results

if __name__ == "__main__":
    api.add_resource(Search_Candidates,'/findmatches/')
    application.run('localhost',8080,debug=True)
My requirements.txt file:
# Automatically generated by https://github.com/damnever/pigar.
# application.py: 15
Flask == 1.0.4
# application.py: 16
Flask_RESTful == 0.3.8
# application.py: 5
docxpy == 0.8.5
# application.py: 12
numpy == 1.19.1
# application.py: 13
pandas == 1.1.0
# application.py: 9
requests == 2.18.4
spacy>=2.2.0,<3.0.0
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm
# application.py: 14,17,18,19
tensorflow == 1.14.0
Flask-SQLAlchemy==2.4.3
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
pytz==2020.1
six==1.15.0
SQLAlchemy==1.3.18
Werkzeug==1.0.1
The environment health status is OK, but at the environment URL I constantly get a 404 Not Found.
My code works on the development server, but it's not working here on the production server.
One likely reason is an incorrect port.
You are using port 8080:
application.run('localhost',8080,debug=True)
but the default port on EB for your application is 8000. If you want to use a non-default port, you can define the EB environment variable PORT with the value 8080. You can do this using .ebextensions or in the EB console.
Also, there could be many other issues which are not apparent yet. For example, the linked tutorial uses an old version of the EB environment, based on Amazon Linux 1, while you are using Amazon Linux 2. There are many differences between AL1 and AL2 that make them incompatible.
TensorFlow is also a resource-hungry package. Although the instance type is not specified in your question, a t2.micro can be too small for it, if that is what you are using for testing or development.
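As a sketch of the port suggestion above (the resolve_port helper is my own illustrative name, not an EB API): bind to the PORT environment variable when it is set, and otherwise fall back to Elastic Beanstalk's default of 8000 instead of hard-coding 8080.

```python
import os

def resolve_port(environ=None):
    """Return the port the app should bind to: the PORT environment
    variable when set (e.g. via .ebextensions or the EB console),
    otherwise the Elastic Beanstalk default of 8000."""
    environ = os.environ if environ is None else environ
    return int(environ.get('PORT', 8000))

# Instead of application.run('localhost', 8080, debug=True):
#   application.run('0.0.0.0', resolve_port())
```

Binding to 0.0.0.0 rather than localhost also matters once a proxy or load balancer sits in front of the process.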
I am provisioning a CloudFormation stack. I am just trying to run the simplest possible cfn-init ever on an instance started from a custom AMI based on Amazon Linux 2:
EC2ESMasterNode1:
  Type: AWS::EC2::Instance
  Metadata:
    Comment: ES Cluster Master 1 instance
    AWS::CloudFormation::Init:
      config:
        commands:
          01_template_elastic:
            command:
              !Sub |
                echo "'Hello World'"
  Properties:
    ImageId: ami-09693313102a30b2c
    InstanceType: !Ref MasterInstanceType
    SubnetId: !Ref Subn1ID
    SecurityGroupIds: [!Ref SGES]
    KeyName: mykey
    UserData:
      "Fn::Base64":
        !Sub |
          #!/bin/bash -xe
          # Start cfn-init
          /opt/aws/bin/cfn-init -s ${AWS::StackName} --resource EC2ESMasterNode1 --region ${AWS::Region}
          # Send the respective signal to CloudFormation
          /opt/aws/bin/cfn-signal -e 0 --stack ${AWS::StackName} --resource EC2ESMasterNode1 --region ${AWS::Region}
    Tags:
      - Key: "Name"
        Value: !Ref Master1NodeName
The /var/log/cloud-init-output.log has the following print
No packages needed for security; 15 packages available
Resolving Dependencies
Cloud-init v. 18.2-72.amzn2.0.6 running 'modules:final' at Wed, 02 Jan 2019 12:41:26 +0000. Up 14.42 seconds.
+ /opt/aws/bin/cfn-init -s test-elastic --resource EC2ESMasterNode1 --region eu-west-1
+ /opt/aws/bin/cfn-signal -e 0 --stack test-elastic --resource EC2ESMasterNode1 --region eu-west-1
ValidationError: Stack arn:aws:cloudformation:eu-west-1:248059334340:stack/test-elastic/9fc79150-0e8b-11e9-b135-503ac9e74cfd is in CREATE_COMPLETE state and cannot be signaled
Jan 02 12:41:27 cloud-init[2575]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Jan 02 12:41:27 cloud-init[2575]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Jan 02 12:41:27 cloud-init[2575]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 18.2-72.amzn2.0.6 finished at Wed, 02 Jan 2019 12:41:27 +0000. Datasource DataSourceEc2. Up 15.30 seconds
The /var/log/cloud-init.log has the following errors:
Jan 02 12:41:26 cloud-init[2575]: handlers.py[DEBUG]: start: modules-final/config-scripts-user: running config-scripts-user with frequency once-per-instance
Jan 02 12:41:26 cloud-init[2575]: util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-0c10a5ff1be475b99/sem/config_scripts_user - wb: [644] 20 bytes
Jan 02 12:41:26 cloud-init[2575]: helpers.py[DEBUG]: Running config-scripts-user using lock (<FileLock using file '/var/lib/cloud/instances/i-0c10a5ff1be475b99/sem/config_scripts_user'>)
Jan 02 12:41:26 cloud-init[2575]: util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/part-001'] with allowed return codes [0] (shell=True, capture=False)
Jan 02 12:41:27 cloud-init[2575]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Jan 02 12:41:27 cloud-init[2575]: util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 860, in runparts
subp(prefix + [exe_path], capture=False, shell=True)
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2053, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 1
Reason: -
Stdout: -
Stderr: -
Jan 02 12:41:27 cloud-init[2575]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Jan 02 12:41:27 cloud-init[2575]: handlers.py[DEBUG]: finish: modules-final/config-scripts-user: FAIL: running config-scripts-user with frequency once-per-instance
Jan 02 12:41:27 cloud-init[2575]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Jan 02 12:41:27 cloud-init[2575]: util.py[DEBUG]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/stages.py", line 798, in _run_modules
freq=freq)
File "/usr/lib/python2.7/site-packages/cloudinit/cloud.py", line 54, in run
return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python2.7/site-packages/cloudinit/helpers.py", line 187, in run
results = functor(*args)
File "/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.py", line 45, in handle
util.runparts(runparts_path)
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 867, in runparts
% (len(failed), len(attempted)))
RuntimeError: Runparts: 1 failures in 1 attempted commands
Jan 02 12:41:27 cloud-init[2575]: stages.py[DEBUG]: Running module ssh-authkey-fingerprints (<module 'cloudinit.config.cc_ssh_authkey_fingerprints' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_ssh_authkey_fingerprints.pyc'>) with frequency once-per-instance
cat /var/log/cfn-init-cmd.log
2019-01-02 12:50:54,777 P2582 [INFO] ************************************************************
2019-01-02 12:50:54,777 P2582 [INFO] ConfigSet default
2019-01-02 12:50:54,778 P2582 [INFO] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2019-01-02 12:50:54,778 P2582 [INFO] Config config
2019-01-02 12:50:54,778 P2582 [INFO] ============================================================
2019-01-02 12:50:54,778 P2582 [INFO] Command 01_template_elastic
2019-01-02 12:50:54,782 P2582 [INFO] -----------------------Command Output-----------------------
2019-01-02 12:50:54,782 P2582 [INFO] 'Hello World'
2019-01-02 12:50:54,783 P2582 [INFO] ------------------------------------------------------------
2019-01-02 12:50:54,783 P2582 [INFO] Completed successfully.
Does anyone have a clue what this error is about?
Furthermore, why is the stack created successfully (as is the specific resource)?
The error message in /var/log/cloud-init.log means that your UserData script exited with error status 1 rather than the expected 0.
Meanwhile, your /var/log/cloud-init-output.log contains this line:
ValidationError: Stack arn:aws:cloudformation:eu-west-1:248059334340:stack/test-elastic/9fc79150-0e8b-11e9-b135-503ac9e74cfd
is in CREATE_COMPLETE state and cannot be signaled
To your other question:
Furthermore, why the stack is created with success? (as also the specific resource?)
It is the normal behaviour of the stack to go into the CREATE_COMPLETE state once its resources are created. By default, running the UserData script does not delay this state.
Because you are using the cfn-signal, I assume that you have a requirement for the CREATE_COMPLETE state to be deferred until such time as you send the signal in UserData.
There is a good blog post on how to set this all up here.
But tl;dr -
You probably just need to add a CreationPolicy to your EC2 instance resource like this:
Resources:
  EC2ESMasterNode1:
    ...
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT10M
That says wait for 1 signal and time out after 10 minutes. Adjust those to your requirements.
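Relatedly, it is worth forwarding cfn-init's actual exit status instead of hardcoding -e 0, so a failed init fails the CreationPolicy and rolls the stack back. A sketch against the UserData from the question:

```yaml
    UserData:
      "Fn::Base64":
        !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -s ${AWS::StackName} --resource EC2ESMasterNode1 --region ${AWS::Region}
          # Forward cfn-init's real exit status, so a failed init
          # fails the CreationPolicy instead of always signalling success
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2ESMasterNode1 --region ${AWS::Region}
```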
I am trying to deploy my Django backend REST APIs on GCP by following the Google tutorial at https://cloud.google.com/python/django/flexible-environment
I was able to deploy the sample app successfully, but when I try to deploy my Django app I get the errors below:
latest: digest:
sha256:d43a6f7d84335f8d724e44cee16de03fd50685d6713107a83b70f44d3c6b5e8f
size: 2835
DONE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2018-04-03 13:01:35 +0000] [1] [INFO] Starting gunicorn 19.7.1
[2018-04-03 13:01:35 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2018-04-03 13:01:35 +0000] [1] [INFO] Using worker: sync
[2018-04-03 13:01:35 +0000] [7] [INFO] Booting worker with pid: 7
[2018-04-03 13:01:35 +0000] [1] [INFO] Shutting down: Master
[2018-04-03 13:01:35 +0000] [1] [INFO] Reason: Worker failed to boot.
In build history, it shows success:
Build information
Status
Build successful
Build id
b2f2ab39-18df-420e-8fac-eeda74dc7a75
Image
eu.gcr.io/bcbackend-200008/appengine/default.20180403t182207:latest
Trigger
—
Source
gs://staging.bcbackend-200008.appspot.com/eu.gcr.io/bcbackend-
200008/appengine/default.20180403t182207:latest
Started
April 3, 2018 at 6:23:32 PM UTC+5:30
Build time
6 min 13 sec
The GCP logs also show no error other than "Worker failed to boot":
A 2018/04/03 13:01:32 Ready for new connections
A 2018/04/03 13:01:33 Listening on /cloudsql/bcbackend-200008:europe-
west3:bc-mysql-instance for bcbackend-200008:europe-west3:bc-mysql-instance
A [2018-04-03 13:01:35 +0000] [1] [INFO] Starting gunicorn 19.7.1
A [2018-04-03 13:01:35 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
A [2018-04-03 13:01:35 +0000] [1] [INFO] Using worker: sync
A [2018-04-03 13:01:35 +0000] [7] [INFO] Booting worker with pid: 7
A [2018-04-03 13:01:35 +0000] [1] [INFO] Shutting down: Master
A [2018-04-03 13:01:35 +0000] [1] [INFO] Reason: Worker failed to boot.
A 2018/04/03 13:01:40 Ready for new connections
A 2018/04/03 13:01:41 Listening on /cloudsql/bcbackend-200008:europe-west3:bc-mysql-instance for bcbackend-200008:europe-west3:bc-mysql-instance
When I try to open "https://bcbackend-200008.appspot.com/" I get following:
Error: Not Found
The requested URL / was not found on this server.
I tried running it with the "--verbosity=debug" option; below is the log:
DEBUG: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2018-04-04 12:34:42 +0000] [1] [INFO] Starting gunicorn 19.7.1
[2018-04-04 12:34:42 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2018-04-04 12:34:42 +0000] [1] [INFO] Using worker: sync
[2018-04-04 12:34:42 +0000] [7] [INFO] Booting worker with pid: 7
[2018-04-04 12:34:43 +0000] [1] [INFO] Shutting down: Master
[2018-04-04 12:34:43 +0000] [1] [INFO] Reason: Worker failed to boot.
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line
788, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py",
line 760, in Run
resources = command_instance.Run(args)
File "/usr/lib/google-cloud-sdk/lib/surface/app/deploy.py", line 81, in
Run
parallel_build=False)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 583, in
RunDeploy
flex_image_build_option=flex_image_build_option)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 392, in Deploy
extra_config_settings)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/app/appengine_api_client.py", line 200, in
DeployService
poller=done_poller)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 310, in
WaitForOperation
sleep_ms=retry_interval)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 251, in WaitFor
sleep_ms, _StatusUpdate)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 309, in PollUntilDone
sleep_ms=sleep_ms)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py",
line 226, in RetryOnResult
if not should_retry(result, state):
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 303, in _IsNotDone
return not poller.IsDone(operation)
File "/usr/lib/google-cloud-
sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 179, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [9]
Application startup error:
[2018-04-04 12:34:42 +0000] [1] [INFO] Starting gunicorn 19.7.1
[2018-04-04 12:34:42 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
Try adding --preload as an argument to the gunicorn command in your app.yaml. This will surface the errors raised while the workers start, which should give you a clue why the deployment is failing.
Your app.yaml should look something like this:
runtime: python
env: flex
entrypoint: gunicorn --preload -b :$PORT mysite.wsgi

beta_settings:
  cloud_sql_instances: <your-cloudsql-connection-string>

runtime_config:
  python_version: 3
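You can also reproduce the failure locally before deploying: gunicorn's "Worker failed to boot" is usually an import-time error in disguise, so importing the WSGI module directly surfaces the same exception. A sketch, where "mysite.wsgi" is the hypothetical module name taken from the entrypoint above:

```python
import importlib

def try_import(module_name):
    """Attempt the same import gunicorn performs when booting a worker.

    Returns (True, None) on success, or (False, exception) holding the
    import-time failure that would otherwise be hidden behind
    "Worker failed to boot".
    """
    try:
        importlib.import_module(module_name)
        return True, None
    except Exception as exc:  # surface any boot-time error
        return False, exc

ok, err = try_import("mysite.wsgi")  # hypothetical module from the entrypoint
if not ok:
    print(f"worker would fail to boot: {err!r}")
```

Running this in the project directory (with the same settings/env vars as production) prints the real exception rather than gunicorn's generic shutdown message.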