I have to deploy a Flask app on Amazon Elastic Beanstalk. I was following these steps:
http://www.alcortech.com/steps-to-deploy-python-flask-mysql-application-on-aws-elastic-beanstalk/
Here are the logs I'm getting:
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2020/08/04 17:54:08.190038 [INFO] Copying file /opt/elasticbeanstalk/config/private/healthd/healthd.conf to /var/proxy/staging/nginx/conf.d/elasticbeanstalk/healthd.conf
2020/08/04 17:54:08.191770 [INFO] Executing instruction: configure log streaming
2020/08/04 17:54:08.191779 [INFO] log streaming is not enabled
2020/08/04 17:54:08.191783 [INFO] disable log stream
2020/08/04 17:54:08.192853 [INFO] Running command /bin/sh -c systemctl show -p PartOf amazon-cloudwatch-agent.service
2020/08/04 17:54:08.298022 [INFO] Running command /bin/sh -c systemctl stop amazon-cloudwatch-agent.service
2020/08/04 17:54:08.303818 [INFO] Executing instruction: GetToggleForceRotate
2020/08/04 17:54:08.303831 [INFO] Checking if logs need forced rotation
2020/08/04 17:54:08.303852 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1
2020/08/04 17:54:09.170590 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBBeanstalkMetadata --region us-east-1
2020/08/04 17:54:09.501785 [INFO] Copying file /opt/elasticbeanstalk/config/private/rsyslog.conf to /etc/rsyslog.d/web.conf
2020/08/04 17:54:09.503412 [INFO] Running command /bin/sh -c systemctl restart rsyslog.service
2020/08/04 17:54:10.455082 [INFO] Executing instruction: PostBuildEbExtension
2020/08/04 17:54:10.455106 [INFO] No plugin in cfn metadata.
2020/08/04 17:54:10.455116 [INFO] Starting executing the config set Infra-EmbeddedPostBuild.
2020/08/04 17:54:10.455138 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPostBuild
2020/08/04 17:54:10.827402 [INFO] Finished executing the config set Infra-EmbeddedPostBuild.
2020/08/04 17:54:10.827431 [INFO] Executing instruction: CleanEbExtensions
2020/08/04 17:54:10.827453 [INFO] Cleaned ebextensions subdirectories from app staging directory.
2020/08/04 17:54:10.827457 [INFO] Executing instruction: RunPreDeployHooks
2020/08/04 17:54:10.827478 [INFO] The dir .platform/hooks/predeploy/ does not exist in the application. Skipping this step...
2020/08/04 17:54:10.827482 [INFO] Executing instruction: stop X-Ray
2020/08/04 17:54:10.827486 [INFO] stop X-Ray ...
2020/08/04 17:54:10.827504 [INFO] Running command /bin/sh -c systemctl show -p PartOf xray.service
2020/08/04 17:54:10.834251 [WARN] stopProcess Warning: process xray is not registered
2020/08/04 17:54:10.834271 [INFO] Running command /bin/sh -c systemctl stop xray.service
2020/08/04 17:54:10.844029 [INFO] Executing instruction: stop proxy
2020/08/04 17:54:10.844061 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:10.929856 [WARN] stopProcess Warning: process nginx is not registered
2020/08/04 17:54:10.929893 [INFO] Running command /bin/sh -c systemctl stop nginx.service
2020/08/04 17:54:10.935107 [INFO] Executing instruction: FlipApplication
2020/08/04 17:54:10.935119 [INFO] Fetching environment variables...
2020/08/04 17:54:10.935125 [INFO] No plugin in cfn metadata.
2020/08/04 17:54:10.936360 [INFO] Purge old process...
2020/08/04 17:54:10.936404 [INFO] Register application processes...
2020/08/04 17:54:10.936409 [INFO] Registering the proc: web
2020/08/04 17:54:10.936423 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2020/08/04 17:54:10.942911 [INFO] Running command /bin/sh -c systemctl daemon-reload
2020/08/04 17:54:11.190918 [INFO] Running command /bin/sh -c systemctl reset-failed
2020/08/04 17:54:11.195011 [INFO] Running command /bin/sh -c systemctl is-enabled eb-app.target
2020/08/04 17:54:11.198465 [INFO] Copying file /opt/elasticbeanstalk/config/private/aws-eb.target to /etc/systemd/system/eb-app.target
2020/08/04 17:54:11.200382 [INFO] Running command /bin/sh -c systemctl enable eb-app.target
2020/08/04 17:54:11.275179 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/eb-app.target to /etc/systemd/system/eb-app.target.
2020/08/04 17:54:11.275218 [INFO] Running command /bin/sh -c systemctl start eb-app.target
2020/08/04 17:54:11.280436 [INFO] Running command /bin/sh -c systemctl enable web.service
2020/08/04 17:54:11.355233 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/web.service to /etc/systemd/system/web.service.
2020/08/04 17:54:11.355273 [INFO] Running command /bin/sh -c systemctl show -p PartOf web.service
2020/08/04 17:54:11.360364 [INFO] Running command /bin/sh -c systemctl is-active web.service
2020/08/04 17:54:11.363811 [INFO] Running command /bin/sh -c systemctl start web.service
2020/08/04 17:54:11.389333 [INFO] Executing instruction: start X-Ray
2020/08/04 17:54:11.389349 [INFO] X-Ray is not enabled.
2020/08/04 17:54:11.389354 [INFO] Executing instruction: start proxy with new configuration
2020/08/04 17:54:11.389382 [INFO] Running command /bin/sh -c /usr/sbin/nginx -t -c /var/proxy/staging/nginx/nginx.conf
2020/08/04 17:54:11.594594 [ERROR] nginx: the configuration file /var/proxy/staging/nginx/nginx.conf syntax is ok
nginx: configuration file /var/proxy/staging/nginx/nginx.conf test is successful
2020/08/04 17:54:11.595275 [INFO] Running command /bin/sh -c cp -rp /var/proxy/staging/nginx/. /etc/nginx
2020/08/04 17:54:11.603198 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:11.618752 [INFO] Running command /bin/sh -c systemctl daemon-reload
2020/08/04 17:54:11.716763 [INFO] Running command /bin/sh -c systemctl reset-failed
2020/08/04 17:54:11.724234 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2020/08/04 17:54:11.735835 [INFO] Running command /bin/sh -c systemctl is-active nginx.service
2020/08/04 17:54:11.743306 [INFO] Running command /bin/sh -c systemctl start nginx.service
2020/08/04 17:54:11.810080 [INFO] Executing instruction: configureSqsd
2020/08/04 17:54:11.810096 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2020/08/04 17:54:11.810102 [INFO] Executing instruction: startSqsd
2020/08/04 17:54:11.810105 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2020/08/04 17:54:11.810110 [INFO] Executing instruction: Track pids in healthd
2020/08/04 17:54:11.810114 [INFO] This is an enhanced health env...
2020/08/04 17:54:11.810138 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2020/08/04 17:54:11.819320 [INFO] healthd.service nginx.service cfn-hup.service
2020/08/04 17:54:11.819352 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2020/08/04 17:54:11.826094 [INFO] web.service
2020/08/04 17:54:11.826211 [INFO] Executing instruction: RunPostDeployHooks
2020/08/04 17:54:11.826223 [INFO] The dir .platform/hooks/postdeploy/ does not exist in the application. Skipping this step...
2020/08/04 17:54:11.826228 [INFO] Executing cleanup logic
2020/08/04 17:54:11.826308 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[]}]}
2020/08/04 17:54:11.826448 [INFO] Platform Engine finished execution on command: app-deploy
2020/08/04 17:55:26.814753 [INFO] Starting...
2020/08/04 17:55:26.814816 [INFO] Starting EBPlatform-PlatformEngine
2020/08/04 17:55:26.817259 [INFO] no eb envtier info file found, skip loading env tier info.
2020/08/04 17:55:26.817348 [INFO] Engine received EB command cfn-hup-exec
2020/08/04 17:55:26.939483 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBAutoScalingGroup --region us-east-1
2020/08/04 17:55:27.277717 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:859160877773:stack/awseb-e-dxcnd8btg7-stack/fd1c0e90-d67a-11ea-895d-0ee443750bc7 -r AWSEBBeanstalkMetadata --region us-east-1
2020/08/04 17:55:27.829610 [INFO] checking whether command tail-log is applicable to this instance...
2020/08/04 17:55:27.829630 [INFO] this command is applicable to the instance, thus instance should execute command
2020/08/04 17:55:27.829635 [INFO] Engine command: (tail-log)
2020/08/04 17:55:27.830551 [INFO] Executing instruction: GetTailLogs
2020/08/04 17:55:27.830557 [INFO] Tail Logs...
2020/08/04 17:55:27.834471 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-engine.log
----------------------------------------
/var/log/web.stdout.log
----------------------------------------
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Starting gunicorn 20.0.4
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Listening at: http://127.0.0.1:8000 (3881)
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3881] [INFO] Using worker: threads
Aug 4 17:54:11 ip-172-31-20-145 web: [2020-08-04 17:54:11 +0000] [3918] [INFO] Booting worker with pid: 3918
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
----------------------------------------
/var/log/nginx/error.log
----------------------------------------
My application.py file is at the project root. Here is its source code:
from pprint import pprint
import re
import smtplib
import ssl
import docxpy
import glob
import time
import spacy
import requests
import json
import pickle
import numpy as np
import pandas as pd
import tensorflow as tf
from flask import Flask
from flask_restful import Api, Resource, reqparse
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.models import model_from_json
import en_core_web_sm
NLP = en_core_web_sm.load()
df = pd.read_csv('skill_train.csv')
df=df.dropna()
df['skill']=pd.to_numeric(df['skill'])
negitive=df[df['skill']==0]
positive=df[df['skill']==1]
application = Flask(__name__)
api = Api(application)
class Candidate:
    # Parses a .docx resume and extracts basic candidate fields.
    def __init__(self, file_link):
        __text = docxpy.process(file_link).strip()
        self.__resume = {
            'Name': self.__extract_name(__text),
            'Phone Number': self.__extract_phone(__text),
            'Email': self.__extract_email(__text),
            'Experience': self.__extract_experience(__text),
            'Skills': list(),
            'Title': '',
            'match': 0,
            'file_path': file_link,
        }

    def get_resume(self):
        return self.__resume

    def __extract_name(self, text):
        try:
            return text[:text.index('\n')]
        except:
            return None

    def __extract_email(self, text):
        email_pattern = re.compile(r'\S+@\S+\.\S+')
        try:
            return email_pattern.findall(text)[0].upper()
        except:
            try:
                __hyperlinks = text.data['links'][0][0].decode('UTF-8')
                return email_pattern.findall(__hyperlinks)[0].upper()
            except:
                return None

    def __extract_phone(self, text):
        phone_pattern = re.compile(r'(\d{3}[-\.\s]??\d{3}[-\.\s]??\d{4}|\(\d{3}\)[-\.\s]*\d{3}[-\.\s]??\d{4}|\d{3}[-\.\s]??\d{4})')
        try:
            return ''.join(phone_pattern.findall(text)[0]) if len(''.join(phone_pattern.findall(text)[0])) >= 10 else None
        except:
            return None

    def __extract_experience(self, text):
        try:
            __exp_pattern = re.compile(r'\d\+ years|\d years|\d\d\+ years|\d\d years|\d\d \+ Years|\d \+ Years')
            __exp = __exp_pattern.findall(text)
            return str(max([int(re.findall(re.compile(r'\d+'), i)[0]) for i in __exp])) + '+ years'
        except:
            try:
                __date_patt = re.compile(r"\d{2}[/-]\d+")
                __dates_list = __date_patt.findall(text)
                try:
                    __year_list = [int(date[-4:]) for date in __dates_list]
                except:
                    __year_list = [int(date[-2:]) for date in __dates_list]
                return str(max(__year_list) - min(__year_list)) + '+ years'
            except:
                return None

class JobDescription:
    # Extracts a title and skills from a job description and matches resumes against them.
    def __init__(self, args):
        self.description = args['job_description'].upper()
        self.__title = self.__get_title(self.description) if 'job_title' not in args else args['job_title'].upper()
        __doc = NLP(self.description)
        __noun_chunks = set([chunk.text.upper() for chunk in __doc.noun_chunks])
        self.__skills = list(self.__get_skills(list(__noun_chunks)))

    def title(self):
        return self.__title

    def skills(self):
        return self.__skills

    def __clean_data(self, noun_chunks):
        subs = [r'^[\d|\W]*', 'EXPERIENCE', 'EXPERT', 'DEVELOPER', 'SERVICES', 'STACK', 'TECHNOLOGIES',
                'JOBS', 'JOB', r'\n', ' ', r'\t', 'AND', 'DEV', 'SCRIPTS', 'DBS', 'DATABASE', 'DATABASES', 'SERVER',
                'SERVERS', r'^\d+']
        __clean_chunks = []
        for chunk in noun_chunks:
            for sub in subs:
                chunk = (re.sub(sub, ' ', chunk).strip())
            filtered_chunk = []
            chunk = chunk.split(' ')
            for word in chunk:
                for sub in subs:
                    word = (re.sub(sub, ' ', word).strip())
                if word != '':
                    if not NLP.vocab[word.strip()].is_stop:
                        filtered_chunk.append(word.strip())
            filtered_chunk = ' '.join(filtered_chunk)
            if filtered_chunk != '' and filtered_chunk != ' ':
                if ',' in filtered_chunk:
                    __clean_chunks += filtered_chunk.split(',')
                elif '/' in filtered_chunk:
                    __clean_chunks += filtered_chunk.split('/')
                else:
                    __clean_chunks.append(filtered_chunk)
        return set([chunk.strip() for chunk in __clean_chunks])

    def __get_skills(self, nounChunks):
        # Classifies noun chunks as skills with a pre-trained Keras model.
        with open('skill_model.json', 'r') as f:
            model = f.read()
        sq_model = model_from_json(model)
        sq_model.load_weights('skillweights.h5')
        __clean_chunks = list(self.__clean_data(nounChunks))
        __onehot_repr = [one_hot(words, 25000) for words in __clean_chunks]
        __test_data = pad_sequences(__onehot_repr, padding='pre', maxlen=6)
        __results = [(x, y[0]) for x, y in zip(__clean_chunks, sq_model.predict_classes(np.array(__test_data)))]
        ones = set(positive['chunk'])
        zeros = set(negitive['chunk'])
        for i, result in enumerate(__results):
            if result[0] in ones and result[1] != 1:
                __results[i] = (result[0], 1)
            if result[0] in zeros and result[1] != 0:
                __results[i] = (result[0], 0)
        return set([x[0] for x in __results if x[1] == 1])

    def __get_title(self, text):
        try:
            __role = re.findall(re.compile(r'POSITION[ ]*:[\w .\(\)]+|ROLE[ ]*:[\w .\(\)]+|TITLE[ ]*:[\w .\(\)]+'), text)[0].split(':')[1].strip()
            if '(' in __role:
                __role = re.findall(re.compile(r'\([\w ]+\)'), __role)[0][1:-1].strip()
            return __role.upper()
        except:
            return None

    def __matcher(self, resume):
        __text = docxpy.process(resume['file_path']).upper()
        if self.__title in __text:
            resume['Title'] = self.__title
        for skill in self.__skills:
            if skill in __text:
                resume['Skills'].append(skill)
        resume['Skills'] = list(set(resume['Skills']))
        resume['match'] = 0.0 if len(self.__skills) == 0 else (len(resume['Skills']) / len(self.__skills)) * 100
        return resume

    def filter_matches(self, candidates):
        if self.__title != None:
            __matches = []
            for user in candidates:
                resume = user.get_resume()
                result = self.__matcher(resume)
                if (result['Title'] != '' and result['match'] > 60) or result['match'] > 60:
                    __matches.append(result)
            return sorted(__matches, key=lambda match: match['match'], reverse=True)
        else:
            print('Unable to extract Role, try writing Role:...... or Position:....')

    def send_mail(self, matches):
        __port = 465
        __smtp_server = "smtp.gmail.com"
        __sender_email = 'sonai20202015@gmail.com'
        __password = 'Sonai#123'
        context = ssl.create_default_context()
        with smtplib.SMTP_SSL(__smtp_server, __port, context=context) as server:
            server.login(__sender_email, __password)
            for candidate in matches:
                __reciver_email = candidate['Email']
                __message = f'''Subject: Job offer

Hi {candidate['Name']},
This is an autogenerated email from the ATS SONAI. We found your resume to be a
good match for the {self.__title} job.
'''
                server.sendmail(__sender_email, __reciver_email, __message)

    def get_acess(self):
        auth_url = 'https://secure.dice.com/oauth/token'
        auth_header = {'Authorization': 'Basic dHM0LWhheWRlbnRlY2hub2xvZ3k6Yzk0NWI4YmItMmRmNi00Yjk4LThmNDUtMTg4ZWU5Mjk3ZGEz', 'Content-Type': 'application/x-www-form-urlencoded'}
        auth_data = {'grant_type': 'password', 'username': 'haydentechnology@dice.com', 'password': '635n3E7s'}
        try:
            auth_response = requests.request('POST', auth_url, headers=auth_header, data=auth_data)
            auth_code = auth_response.status_code
            auth_response = json.loads(auth_response.content.decode())
            return (auth_code, auth_response)
        except:
            return (0, '')

    def boolean_skills(self):
        with open('output.pkl', 'rb') as f:
            data = pickle.load(f)
        if self.__title in data:
            output = []
            for skill in self.__skills:
                if skill in data[self.__title][0] and data[self.__title][0][skill] > (3/4)*data[self.__title][1]:
                    continue
                output.append(skill)
            return output
        return self.__skills

    def search_with_api(self):
        auth_response = self.get_acess()
        if auth_response[0] == 200:
            token = auth_response[1]['access_token']
            headers = {'Authorization': f'bearer {token}'}
            url = 'https://talent-api.dice.com/v2/profiles/search?q='
            boolean_skills = self.boolean_skills()
            for skill in boolean_skills:
                url += f'{skill}&'
            url = url + self.__title
            print('\n', url, '\n')
            try:
                output = requests.request('GET', url, headers=headers)
                output = json.loads(output.content.decode())
                return output
            except:
                return ('error while finding users')
        else:
            return ('Authentication error with dice')

class Search_Candidates(Resource):
    def post(self):
        parser = reqparse.RequestParser()
        parser.add_argument("application_type", required=False)  # String
        parser.add_argument("application_name", required=False)  # String
        parser.add_argument("application_internal_only", required=False)  # Boolean
        parser.add_argument("application_applicant_history", required=False)  # Boolean
        parser.add_argument("application_years_of_employement_needed", required=False)  # Float
        parser.add_argument("application_number_of_refrences", required=False)  # Float
        parser.add_argument("application_flag_voluntarily_resign", required=False)  # Boolean
        parser.add_argument("application_flag_past_employer_contracted", required=False)  # Boolean
        parser.add_argument("email_template_default_address", required=False)  # String
        parser.add_argument("task", required=False)  # List
        parser.add_argument("job_title", required=True)  # String
        parser.add_argument("employement_status", required=False)  # String
        parser.add_argument('job_description', required=True)  # String
        parser.add_argument("joinig_date", required=False)  # String as ISO STANDARDS
        parser.add_argument("salary", required=False)  # Float
        parser.add_argument("average_hours_weekly", required=False)  # Float
        parser.add_argument("post_title", required=False)  # String
        parser.add_argument("post_details_category", required=False)  # String
        parser.add_argument("number_of_open_position", required=False)  # Float
        parser.add_argument("general_application", required=False)  # Boolean
        args = parser.parse_args()
        response = self.find_matches(args)
        response = json.dumps(response)
        return response

    def find_matches(self, args):
        file_paths = glob.glob(r'demo_word_file\*.docx')
        candidates = [Candidate(file_path) for file_path in file_paths]
        job = JobDescription(args)
        start_time = time.time()
        results = job.filter_matches(candidates)
        pprint(f'Found and Sorted {len(results)} results in {time.time()-start_time} secs from {len(candidates)} files')
        matches = [matches for matches in job.filter_matches(candidates)]
        if not len(matches) == 0:
            matches_with_email = [match for match in matches if match['Email'] != None]
            job.send_mail(matches_with_email)
        else:
            results = job.search_with_api()
        return results

def run():
    file_paths = glob.glob(r'demo_word_file\*.docx')
    candidates = [Candidate(file_path) for file_path in file_paths]
    text = docxpy.process('jobtest.docx')
    args = {'job_description': text}
    job = JobDescription(args)
    results = job.filter_matches(candidates)
    return results

if __name__ == "__main__":
    api.add_resource(Search_Candidates, '/findmatches/')
    application.run('localhost', 8080, debug=True)
My requirements.txt file is here:
# Automatically generated by https://github.com/damnever/pigar.
# application.py: 15
Flask == 1.0.4
# application.py: 16
Flask_RESTful == 0.3.8
# application.py: 5
docxpy == 0.8.5
# application.py: 12
numpy == 1.19.1
# application.py: 13
pandas == 1.1.0
# application.py: 9
requests == 2.18.4
spacy>=2.2.0,<3.0.0
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm
# application.py: 14,17,18,19
tensorflow == 1.14.0
Flask-SQLAlchemy==2.4.3
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
pytz==2020.1
six==1.15.0
SQLAlchemy==1.3.18
Werkzeug==1.0.1
The environment health status is OK, but at the environment URL I constantly get 404 Not Found.
My code works on the development server, but it does not work here on the production server.
One reason is probably an incorrect port.
You are using port 8080:
application.run('localhost',8080,debug=True)
but the default port on EB for your application is 8000. If you want to use a non-default port, you can define the EB environment variable PORT with the value 8080. You can do this using .ebextensions or in the EB console.
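For illustration, here is a minimal sketch of such an .ebextensions config; the file name port.config is just a placeholder (any *.config file under .ebextensions is applied), and it assumes the standard environment-properties namespace:

# .ebextensions/port.config -- hypothetical file name
# Sets the PORT environment property mentioned above so the platform serves
# the application on 8080 instead of the default 8000.
option_settings:
  aws:elasticbeanstalk:application:environment:
    PORT: "8080"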
Also, there could be many other issues which are not apparent yet. For example, the linked tutorial uses an older version of the EB environment based on Amazon Linux 1, while you are using Amazon Linux 2. There are many differences between AL1 and AL2 which make them incompatible.
TensorFlow is a resource-hungry package. Although the instance type is not specified in your question, a t2.micro can be too small for it, in case you are using one for testing or development.
Related
I have an Elastic Beanstalk project that has been working fine for months. Today, I decided to enable and then disable a port listener, as seen in the image below:
I enabled port 80 and then the website stopped working. So I was like "oh crap, I will change it back". But guess what? It is still broken. The code has not changed whatsoever, but the application is now broken.
I have restarted the app servers, rebuilt the environment and nothing. I can't even access the environment site by clicking Go to environment. I just see a Bad Gateway message on screen. The health status of the environment when first deployed is OK and then quickly goes to Severe.
If my code has not changed, what is happening here? How can I find out what is going on here? All I changed was that port, by enabling and then disabling again.
I have already come across this question: Question, and I am already doing this. The setting is in my application.properties file like this:
server.port=5000 and it's been like this for months and has already been working. So this can't be the reason it broke today. I even tried adding it directly to the environment variables in the Elastic Beanstalk console, with the same result: still getting 502 Bad Gateway.
I also have a path for the health-check configured and this has not changed in months.
Here are the last 100 lines from my log file after health status goes to Severe:
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2022/01/27 15:53:53.370165 [INFO] Running command /bin/sh -c docker tag af10382f81a4 aws_beanstalk/current-app
2022/01/27 15:53:53.489035 [INFO] Running command /bin/sh -c docker rmi aws_beanstalk/staging-app
2022/01/27 15:53:53.568222 [INFO] Untagged: aws_beanstalk/staging-app:latest
2022/01/27 15:53:53.568307 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2022/01/27 15:53:53.576541 [INFO] Running command /bin/sh -c systemctl daemon-reload
2022/01/27 15:53:53.712836 [INFO] Running command /bin/sh -c systemctl reset-failed
2022/01/27 15:53:53.720035 [INFO] Running command /bin/sh -c systemctl enable eb-docker.service
2022/01/27 15:53:53.866046 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2022/01/27 15:53:53.875112 [INFO] Running command /bin/sh -c systemctl is-active eb-docker.service
2022/01/27 15:53:53.886916 [INFO] Running command /bin/sh -c systemctl start eb-docker.service
2022/01/27 15:53:53.991608 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-log.service
2022/01/27 15:53:54.002839 [INFO] Running command /bin/sh -c systemctl daemon-reload
2022/01/27 15:53:54.092602 [INFO] Running command /bin/sh -c systemctl reset-failed
2022/01/27 15:53:54.102854 [INFO] Running command /bin/sh -c systemctl enable eb-docker-log.service
2022/01/27 15:53:54.226561 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-log.service
2022/01/27 15:53:54.246914 [INFO] Running command /bin/sh -c systemctl is-active eb-docker-log.service
2022/01/27 15:53:54.263293 [INFO] Running command /bin/sh -c systemctl start eb-docker-log.service
2022/01/27 15:53:54.433800 [INFO] docker container 3771e61e64ae is running aws_beanstalk/current-app
2022/01/27 15:53:54.433823 [INFO] Executing instruction: Clean up Docker
2022/01/27 15:53:54.433842 [INFO] Running command /bin/sh -c docker ps -aq
2022/01/27 15:53:54.638602 [INFO] 3771e61e64ae
2022/01/27 15:53:54.638644 [INFO] Running command /bin/sh -c docker images | sed 1d
2022/01/27 15:53:54.810723 [INFO] aws_beanstalk/current-app latest af10382f81a4 13 seconds ago 597MB
<none> <none> adafe645300e 24 seconds ago 732MB
openjdk 8 3bc5f7759e81 30 hours ago 526MB
maven 3.8.1-jdk-8 498ac51e5e6e 6 months ago 525MB
2022/01/27 15:53:54.810767 [INFO] save docker tag command: docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:54.810772 [INFO] save docker tag command: docker tag adafe645300e <none>:<none>
2022/01/27 15:53:54.810776 [INFO] save docker tag command: docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:54.810781 [INFO] save docker tag command: docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:54.810793 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
2022/01/27 15:53:54.964217 [INFO] Running command /bin/sh -c docker rmi `docker images -aq`
2022/01/27 15:53:56.249352 [INFO] Deleted: sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45
Deleted: sha256:b78c0f45b590e7c8c496466450e2fecf2e31044dd53bcf8d9c64a9e7a8c84139
Deleted: sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9
Deleted: sha256:a568ba4507a603b7ace044d64726daaf3022c817cc9550779d64dbb95d0e1e5d
Deleted: sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9
Deleted: sha256:7c72fe5e2da958b5d44267aa9de538c274e70125c902bc3e663af4c5c87280dc
Untagged: maven:3.8.1-jdk-8
Untagged: maven#sha256:cba6d738a97e81e8845d60ee2662f020385d01d6135a2cf75bc1f5a84980ef88
Deleted: sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e
Deleted: sha256:de026bec49cbc1fd7bd1bd7aa03d544713985e39bc0a913f4c0a59dbcc556715
Deleted: sha256:f5c45a5e495b035f37dc2e19d8ead0458cf0ad8b83d5573cc9b4016ea54814b6
Deleted: sha256:9f871694bb9a37f62b6baf12760480448d46e008c8c85f06dab5340b16d11a2b
Deleted: sha256:19a57d2c318dfeac5de4cac0a5263af560eff01c620100570c83658e12df0a87
Deleted: sha256:bc20a3f84b95792033865bff3c1cc53b060108ef2018b1913da3c8eddda77b99
Deleted: sha256:f33d6ed931ff64c63168af00c7544d148d01fda66831246572ff2bfcacbcf2d6
Deleted: sha256:017b9704876de2443b332b1dfec580d365184b514eb0af43f1d59637e77af9bb
Deleted: sha256:98fc59c935e697d6375f05f4fa29d0e1ef7e8ece61aed109056926983ada0ef4
Deleted: sha256:c21ff68b02e7caf277f5d356e8b323a95e8d3969dd1ab0d9f60e7c8b4a01c874
Deleted: sha256:afa3e488a0ee76983343f8aa759e4b7b898db65b715eb90abc81c181388374e3
2022/01/27 15:53:56.249384 [INFO] restore docker image name with command: docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:56.249393 [INFO] Running command /bin/sh -c docker tag af10382f81a4 aws_beanstalk/current-app:latest
2022/01/27 15:53:56.352957 [INFO] restore docker image name with command: docker tag adafe645300e <none>:<none>
2022/01/27 15:53:56.352988 [INFO] Running command /bin/sh -c docker tag adafe645300e <none>:<none>
2022/01/27 15:53:56.360403 [INFO] restore docker image name with command: docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:56.360437 [INFO] Running command /bin/sh -c docker tag 3bc5f7759e81 openjdk:8
2022/01/27 15:53:56.461652 [INFO] restore docker image name with command: docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:56.461677 [INFO] Running command /bin/sh -c docker tag 498ac51e5e6e maven:3.8.1-jdk-8
2022/01/27 15:53:56.561836 [INFO] Executing instruction: start X-Ray
2022/01/27 15:53:56.561859 [INFO] X-Ray is not enabled.
2022/01/27 15:53:56.561863 [INFO] Executing instruction: configureSqsd
2022/01/27 15:53:56.561868 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2022/01/27 15:53:56.561871 [INFO] Executing instruction: startSqsd
2022/01/27 15:53:56.561874 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2022/01/27 15:53:56.561877 [INFO] Executing instruction: Track pids in healthd
2022/01/27 15:53:56.561881 [INFO] This is an enhanced health env...
2022/01/27 15:53:56.561891 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2022/01/27 15:53:56.572170 [INFO] cfn-hup.service docker.service nginx.service healthd.service eb-docker-log.service eb-docker-events.service eb-docker.service
2022/01/27 15:53:56.572206 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2022/01/27 15:53:56.583143 [INFO]
2022/01/27 15:53:56.583747 [INFO] Executing instruction: Configure Docker Container Logging
2022/01/27 15:53:56.587182 [INFO] Executing instruction: RunAppDeployPostDeployHooks
2022/01/27 15:53:56.587200 [INFO] The dir .platform/hooks/postdeploy/ does not exist in the application. Skipping this step...
2022/01/27 15:53:56.587204 [INFO] Executing cleanup logic
2022/01/27 15:53:56.587325 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[{"msg":"Instance deployment completed successfully.","timestamp":1643298836,"severity":"INFO"}]}]}
2022/01/27 15:53:56.587458 [INFO] Platform Engine finished execution on command: app-deploy
2022/01/27 15:56:08.141406 [INFO] Starting...
2022/01/27 15:56:08.141500 [INFO] Starting EBPlatform-PlatformEngine
2022/01/27 15:56:08.141523 [INFO] reading event message file
2022/01/27 15:56:08.141619 [INFO] no eb envtier info file found, skip loading env tier info.
2022/01/27 15:56:08.141697 [INFO] Engine received EB command cfn-hup-exec
2022/01/27 15:56:08.291283 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBAutoScalingGroup --region us-east-1
2022/01/27 15:56:08.851246 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBBeanstalkMetadata --region us-east-1
2022/01/27 15:56:09.238835 [INFO] checking whether command tail-log is applicable to this instance...
2022/01/27 15:56:09.238847 [INFO] this command is applicable to the instance, thus instance should execute command
2022/01/27 15:56:09.238849 [INFO] Engine command: (tail-log)
2022/01/27 15:56:09.238906 [INFO] Executing instruction: GetTailLogs
2022/01/27 15:56:09.238910 [INFO] Tail Logs...
2022/01/27 15:56:09.239208 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-engine.log
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
172.31.35.54 - - [27/Jan/2022:15:53:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x82\x02\x92T\xC0\x06O\x7F\xAA\xB5=\xC8\x8Ca\x83v\xFF\xF7\x8E\xF2\xB9\xBDW\x1B\xB9\x9A\x91x\xB0\x81\xBF\xA6\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:14 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBAy5)=k\x1D\x19|\xF6\xBC\xB0B\x10\x0B$\xE8#\x06\x8B\xA1iY\xB4##+-\x1F\xAC\x92&\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:29 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x03\xBC\xF2\x93\x90uW\xC0\xA5f\xFFWz~K_\xF61\xAEsuY\xE2R\xE0\xBC&\xE7\xFB|\xDB\xC2\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:44 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x84\xFD\xD5\xA5{\xF7\xDEr\x96\xEB" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:54:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBCU\xC9\x92=\xCBT\xC2\xB8RL\xA3\xF7\xE6\xD4s\xB8!A\xF2\x14\xC3" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:09 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03f\x1B\xB8\x17\x19k|H\x1DW\xEF&\x83\x03#\xE9GB\xE8f\xB4\xDAGJ]\x8E\x92\xD6\xC8L\xD3%\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:14 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xCC\x9D\x1A5&\x99\xB76\x16\xC1\xE2\xB5\xC3:G]\x1A\xA5H\xEE\xF6s\xD0\xF9s\xA3\xBE\xD2\x9Aq\xF0\xC2\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:24 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03j4x\xF0\x86uwh\x1C\xEEg8\xA9\xA3\x1E(\x18C\x96\xFA\xE8\xA6\x87{\xC3N\xD4\x08\x10\xBA\xAC\x03\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:29 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x5C\x8Btq\xBEG\xD2\xF8l\xC8\xBA\x94F\x14\x8F\x1C\xCC\xA1#JSw9\xE4\xCD\xA7\x05\x82\xE4][\xB8\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:39 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03{\x05\x86\x89\x09.:A\x0C\xCF\x14\xA4=\xDF\xFA\xC6\xD4\xF5+\x9D\xA4\xF8\x93\xE9k\xD5\xD3\xC5\xCA\x9C\xFB\x15\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:44 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xBC\xF3\xE3\xDEy\xB3(\xF2\x18\xEB\xC5f\x1F\xA2\xF5\xE6\xF5\x8C\xF6lO\x98D\xFAT\xCB\xB3`\x9C\xC2\xCE.\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:55:54 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\x16P\x10\x07}\x90\xBD!\x9E\xA1\xAB\xD9\xDD\x1F\xAA\xBF\x85u\xCF\xE7\xAD\xA9\x93$q\xC4" 400 157 "-" "-" "-"
172.31.35.54 - - [27/Jan/2022:15:55:59 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03x\x94z\x84\x1Buz3\x9A\x8FbX\x07\x13\x00\x8DH\xDFf\x10\xC9\xE7\xDB\xF7\xE7\xBFr\xE8w>\xFC\x9E\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
172.31.85.167 - - [27/Jan/2022:15:56:09 +0000] "\x16\x03\x01\x00\xA3\x01\x00\x00\x9F\x03\x03\xEF\x1F'\x84#\xF4\xF4\xB6C\xEE\xE4}\xD6E\x94\x05\xA1\x1B*\x1EZ\x94N\xB9K\x96A>\x8A\x8Ep\xBF\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
----------------------------------------
/var/log/nginx/error.log
----------------------------------------
----------------------------------------
/var/log/docker-events.log
----------------------------------------
2022-01-27T15:52:46.764393026Z image pull maven:3.8.1-jdk-8 (name=maven)
2022-01-27T15:52:47.730944524Z container create b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:52:47.731203832Z container attach b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:52:47.784204703Z network connect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010, name=bridge, type=bridge)
2022-01-27T15:52:48.320837501Z container start b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:28.504262431Z container die b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (exitCode=0, image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:28.615767036Z network disconnect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010, name=bridge, type=bridge)
2022-01-27T15:53:30.828196270Z container destroy b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 (image=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9, name=inspiring_tesla)
2022-01-27T15:53:40.412059108Z image pull openjdk:8 (name=openjdk)
2022-01-27T15:53:41.682562011Z container create ebb956fca825c2053c41bce28fb0a802ab2f3ef344bdeb14f821a7577c284138 (image=sha256:2ab20532670b7570e512ec955536dfa5e246c374bdca4f0494df107b88a51c75, name=stoic_fermi)
2022-01-27T15:53:41.807749332Z container destroy ebb956fca825c2053c41bce28fb0a802ab2f3ef344bdeb14f821a7577c284138 (image=sha256:2ab20532670b7570e512ec955536dfa5e246c374bdca4f0494df107b88a51c75, name=stoic_fermi)
2022-01-27T15:53:41.854905318Z container create 28814d73d5d71c7f3cd97d31e3745db7c8d74c7f41a1369d86a6ac94540ff54c (image=sha256:8020ea63973791b37416e569141e448a047578432cc73771afc09069d4a0f99c, name=awesome_ritchie)
2022-01-27T15:53:41.972362390Z container destroy 28814d73d5d71c7f3cd97d31e3745db7c8d74c7f41a1369d86a6ac94540ff54c (image=sha256:8020ea63973791b37416e569141e448a047578432cc73771afc09069d4a0f99c, name=awesome_ritchie)
2022-01-27T15:53:41.978868467Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/staging-app:latest)
2022-01-27T15:53:46.962572822Z container create 3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399 (image=af10382f81a4, name=dreamy_napier)
2022-01-27T15:53:47.000564620Z network connect 38cc920306e67474a0e4c1558a074911f27746d82bcaf75a013b36aa57d583d3 (container=3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399, name=bridge, type=bridge)
2022-01-27T15:53:47.520980591Z container start 3771e61e64aec3296f70d863c3deeae6e33d57184feecc1297665eee4630c399 (image=af10382f81a4, name=dreamy_napier)
2022-01-27T15:53:53.482805850Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/current-app:latest)
2022-01-27T15:53:53.562121224Z image untag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217)
2022-01-27T15:53:55.349273944Z image delete sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45 (name=sha256:adafe645300e41dd29b04abccf86a562ad5e635bd6afff9343b6a45721fb3a45)
2022-01-27T15:53:55.351988220Z image delete sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9 (name=sha256:16aedb83589da925c19d2f692234a2a36c017b35846c07fd8ad6817cceda6ae9)
2022-01-27T15:53:55.356884258Z image delete sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9 (name=sha256:fe90a30920d18ecad75ec02e8c04894fbcaadc209529c3e5c14fdaa66d3a7bc9)
2022-01-27T15:53:55.374500965Z image untag sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:55.376309688Z image untag sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:56.244254893Z image delete sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e (name=sha256:498ac51e5e6e99ae8646d007ed554587a4ceeab78a664dc7eedde7137c658e9e)
2022-01-27T15:53:56.345382037Z image tag sha256:af10382f81a47247f3194b007fe0b95c08b2a68c7d9f8f4118741b00121ee217 (name=aws_beanstalk/current-app:latest)
2022-01-27T15:53:56.458746013Z image tag sha256:3bc5f7759e81182b118ab4d74087103d3733483ea37080ed5b6581251d326713 (name=openjdk:8)
----------------------------------------
/var/log/eb-docker-process.log
----------------------------------------
2022/01/27 15:53:53.917760 [INFO] Loading Manifest...
2022/01/27 15:53:53.917884 [INFO] no eb envtier info file found, skip loading env tier info.
2022/01/27 15:53:53.943756 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBAutoScalingGroup --region us-east-1
2022/01/27 15:53:57.965132 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:796071762232:stack/awseb-e-zzq77xp3px-stack/a072a330-7f88-11ec-8245-125e3f27604f -r AWSEBBeanstalkMetadata --region us-east-1
2022/01/27 15:53:58.364393 [INFO] Checking if docker is running...
2022/01/27 15:53:58.364409 [INFO] Fetch current app container id...
2022/01/27 15:53:58.364434 [INFO] Running command /bin/sh -c docker ps | grep 3771e61e64ae
2022/01/27 15:53:58.402972 [INFO] 3771e61e64ae af10382f81a4 "java -jar /usr/loca…" 12 seconds ago Up 10 seconds 5000/tcp dreamy_napier
2022/01/27 15:53:58.402996 [INFO] Running command /bin/sh -c docker wait 3771e61e64ae
----------------------------------------
/var/log/docker
----------------------------------------
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.206815429Z" level=info msg="Starting up"
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251734173Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251769208Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251794146Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.251813620Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273290447Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273327673Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273364441Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.273386710Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.465282859Z" level=info msg="Loading containers: start."
Jan 27 15:50:41 ip-172-31-85-60 docker: time="2022-01-27T15:50:41.956009883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.186887273Z" level=info msg="Loading containers: done."
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.641490298Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.643174227Z" level=info msg="Daemon has completed initialization"
Jan 27 15:50:42 ip-172-31-85-60 docker: time="2022-01-27T15:50:42.702629222Z" level=info msg="API listen on /run/docker.sock"
Jan 27 15:53:28 ip-172-31-85-60 docker: time="2022-01-27T15:53:28.503145956Z" level=info msg="ignoring event" container=b83331900dd580a01b9c5e2744412bd6f6e4465313177fb45a2f288d70765010 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 27 15:53:41 ip-172-31-85-60 docker: time="2022-01-27T15:53:41.783532791Z" level=info msg="Layer sha256:e963a094d3f25a21ce0bfcae0216d04385c4c06ad580c73675a7992627c28416 cleaned up"
Jan 27 15:53:41 ip-172-31-85-60 docker: time="2022-01-27T15:53:41.948756315Z" level=info msg="Layer sha256:e963a094d3f25a21ce0bfcae0216d04385c4c06ad580c73675a7992627c28416 cleaned up"
----------------------------------------
/var/log/eb-docker/containers/eb-current-app/eb-3771e61e64ae-stdouterr.log
----------------------------------------
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.6)
2022-01-27 15:53:57.807 INFO 3771e61e64ae --- [ main] o.s.b.a.e.w.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'
2022-01-27 15:53:57.853 INFO 3771e61e64ae --- [ main] o.a.c.h.Http11NioProtocol : Starting ProtocolHandler ["http-nio-5000"]
2022-01-27 15:53:57.875 INFO 3771e61e64ae --- [ main] o.s.b.w.e.t.TomcatWebServer : Tomcat started on port(s): 5000 (http) with context path ''
2022-01-27 15:53:57.903 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Started ParalleniumHostApplication in 8.805 seconds (JVM running for 10.386)
2022-01-27 15:53:57.939 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : **The server is hosted at: 127.0.0.1:5000 with a PUBLIC ip of 34.226.166.24
2022-01-27 15:53:57.941 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Spring version is 5.3.12
2022-01-27 15:53:57.946 INFO 3771e61e64ae --- [ main] c.n.p.ParalleniumHostApplication : Socket Server is listening on port 6868...
Okay, so I decided to just launch a new environment using the same exact configuration and code and it worked. Looks like Elastic Beanstalk environments can break and once that happens, there is no fixing it apparently.
I am trying to deploy a website/web app using Django. I constructed app.yaml and requirements.txt, everything is done, and when I run gcloud app deploy I get the following error log at the end:
DONE
-----------------------------------------------------------------------------------------------------------------------------------------
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2019-03-18 03:14:29 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-03-18 03:14:29 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2019-03-18 03:14:29 +0000] [1] [INFO] Using worker: sync
[2019-03-18 03:14:29 +0000] [9] [INFO] Booting worker with pid: 9
[2019-03-18 03:14:29 +0000] [9] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/env/local/lib/python2.7/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
ImportError: Import by filename is not supported.
[2019-03-18 03:14:29 +0000] [9] [INFO] Worker exiting (pid: 9)
[2019-03-18 03:14:29 +0000] [1] [INFO] Shutting down: Master
[2019-03-18 03:14:29 +0000] [1] [INFO] Reason: Worker failed to boot.
Here is my app.yaml:
runtime: python
api_version: 1
threadsafe: true

# the PROJECT-DIRECTORY is the one with settings.py and wsgi.py
entrypoint: gunicorn -b :$PORT ~/NovUs/rec/rec.wsgi
# specific to a GUnicorn HTTP server deployment
env: flex
# for Google Cloud Flexible App Engine

# any environment variables you want to pass to your application.
# accessible through os.environ['VARIABLE_NAME']
env_variables:
  # the secret key used for the Django app (from PROJECT-DIRECTORY/settings.py)
  SECRET_KEY: '***i removed this***'
  DEBUG: 'False'  # always False for deployment
  # everything after /cloudsql/ can be found by entering >> gcloud sql instances describe DATABASE-NAME << in your Terminal
  # the DATABASE-NAME is the name you gave your project's PostgreSQL database
  # the second line from the describe output called connectionName can be copied and pasted after /cloudsql/
  DB_HOST: '/cloudsql/ final-234816:us-central1:novusdb'
  DB_PORT: '5432'  # PostgreSQL port
  DB_NAME: 'novusdb'
  DB_USER: 'postgres'  # either 'postgres' (default) or one you created on the PostgreSQL instance page
  DB_PASSWORD: ''
  STATIC_URL: 'https://storage.googleapis.com/BUCKET-NAME/static/'  # this is the url that you sync static files to

handlers:
- url: /static
  static_dir: static
- url: /
  script: home.app
- url: /index\.html
  script: home.app
- url: /stylesheets
  static_dir: stylesheets
- url: /(.*\.(gif|png|jpg))$
  static_files: static/\1
  upload: static/.*\.(gif|png|jpg)$
- url: /admin/.*
  script: admin.app
  login: admin
- url: /.*
  script: not_found.app

beta_settings:
  # from command >> gcloud sql instances describe DATABASE-NAME <<
  cloud_sql_instances: final-234816:us-central1:novusdb

#runtime_config:
#  python_version: 2  # enter your Python version BASE ONLY here. Enter 2 for 2.7.9 or 3 for 3.6.4
#manual_scaling:
#  instances: 1
#resources:
#  cpu: 1
#  memory_gb: 0.5
#  disk_size_gb: 10
Here is the relevant line in my settings.py:
WSGI_APPLICATION = 'rec.wsgi.application'
Even if I change it to WSGI_APPLICATION = 'wsgi.application',
it doesn't help; the error remains the same.
I have also tried editing the entrypoint with main:app and the problem is the same.
Can someone please help? Thank you.
Generally there could be two problems; I ran into this a while ago when deploying a Dash application on Google App Engine.
There could be a version conflict with GAE's gunicorn version. Use gunicorn 19.7.1 or higher instead. I had the same problem when using an older version of gunicorn.
The other possibility is that requirements.txt is not in the same directory as your main.py entrypoint. In that case the app is deployed without all the packages installed, which returns no error when deploying to GAE.
In your app.yaml, add the default gunicorn entrypoint line, but also add a longer timeout to suit your needs: entrypoint: gunicorn -b :$PORT YOURSITE.wsgi --timeout 120
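For reference, a stripped-down flexible-environment app.yaml built around that entrypoint might look roughly like the sketch below; YOURSITE is a placeholder for the package that contains wsgi.py, and the python_version value is an assumption you should match to your project:

# Hypothetical minimal app.yaml sketch for the App Engine flexible environment
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT YOURSITE.wsgi --timeout 120

runtime_config:
  python_version: 2  # placeholder: set 2 or 3 to match your project's Python version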
Versions:
Django 1.9.8
celery 3.1.23
django-celery 3.1.17
Python 2.7
I'm trying to run my celery worker on AWS Elastic Beanstalk. I use Amazon SQS as a celery broker.
Here is my settings.py
INSTALLED_APPS += ('djcelery',)
import djcelery
djcelery.setup_loader()
BROKER_URL = "sqs://%s:%s#" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
When I type the line below in the terminal, it starts the worker on my local machine. I've also created a few tasks and they're executed correctly. How can I do this on AWS EB?
python manage.py celery worker --loglevel=INFO
I've found this question on StackOverflow.
It says I should add a celery config to the .ebextensions folder, which executes the script after deployment, but it doesn't work. I'd appreciate any help. After installing supervisor, I didn't do anything with it; maybe that's what I'm missing.
Here is the script:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      command=/opt/python/run/venv/bin/celery worker --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      ; priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
Logs from EB: it looks like it works, but it still doesn't execute my tasks.
-------------------------------------
/opt/python/log/supervisord.log
-------------------------------------
2016-08-02 10:45:27,713 CRIT Supervisor running as root (no user in config file)
2016-08-02 10:45:27,733 INFO RPC interface 'supervisor' initialized
2016-08-02 10:45:27,733 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-08-02 10:45:27,733 INFO supervisord started with pid 2726
2016-08-02 10:45:28,735 INFO spawned: 'httpd' with pid 2812
2016-08-02 10:45:29,737 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:14,684 INFO stopped: httpd (exit status 0)
2016-08-02 10:47:15,689 INFO spawned: 'httpd' with pid 4092
2016-08-02 10:47:16,727 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:23,701 INFO spawned: 'celeryd' with pid 4208
2016-08-02 10:47:23,854 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:47:24,858 INFO spawned: 'celeryd' with pid 4214
2016-08-02 10:47:35,067 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:52:36,240 INFO stopped: httpd (exit status 0)
2016-08-02 10:52:37,245 INFO spawned: 'httpd' with pid 4460
2016-08-02 10:52:38,278 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:52:45,677 INFO stopped: celeryd (exit status 0)
2016-08-02 10:52:46,682 INFO spawned: 'celeryd' with pid 4514
2016-08-02 10:52:46,860 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:52:47,865 INFO spawned: 'celeryd' with pid 4521
2016-08-02 10:52:58,054 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:55:03,135 INFO stopped: httpd (exit status 0)
2016-08-02 10:55:04,139 INFO spawned: 'httpd' with pid 4745
2016-08-02 10:55:05,173 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:55:13,143 INFO stopped: celeryd (exit status 0)
2016-08-02 10:55:14,147 INFO spawned: 'celeryd' with pid 4857
2016-08-02 10:55:14,316 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:55:15,321 INFO spawned: 'celeryd' with pid 4863
2016-08-02 10:55:25,518 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
I forgot to add an answer after solving this.
This is how I fixed it.
I created a new file "99-celery.config" in my .ebextensions folder.
In this file, I added this code and it works perfectly.
(Don't forget to change your project name in the command= line below; mine is molocate_eb.)
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/current/app/molocate_eb/manage.py celery worker --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
Edit: In case of a supervisor error on AWS, just make sure that:
You're using Python 2, not Python 3, since supervisor did not support Python 3 at the time.
Don't forget to add supervisor to your requirements.txt.
If it still gives an error (it happened to me once), just 'Rebuild Environment' and it will probably work.
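For what it's worth, a quick way to check that the supervised worker is actually consuming tasks (and not just staying in the RUNNING state) is to enqueue a trivial task from the Django shell on the instance. This is only a sketch; the app and task names (myapp, ping) are placeholders and not part of the setup above:

# myapp/tasks.py -- "myapp" and "ping" are placeholder names
from celery import shared_task

@shared_task
def ping():
    # Trivial task used only to verify the worker is consuming from the broker.
    return 'pong'

# From the Django shell on the instance (python manage.py shell):
#   from myapp.tasks import ping
#   ping.delay()
# A "succeeded" line for the task should then show up in
# /var/log/celery-worker.log, the log file configured above.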
You can use supervisor to run celery; that will run celery as a daemon process.
[program:celery]
; directory where your Django project lives
directory=/path/to/your/django/project
; command that runs the celery worker, e.g. python manage.py celery worker
command=python manage.py celery worker --loglevel=INFO
stderr_logfile=/var/log/supervisord/celery-stderr.log
stdout_logfile=/var/log/supervisord/celery-stdout.log
In my local environment I used Celery for scheduled tasks and it works there; I used Redis as the broker.
Now I want to configure django-celery on a Heroku server.
I tried the heroku-redis add-on in my Heroku app
and added this to my settings.py:
import os
import urlparse

import redis

r = redis.from_url(os.environ.get("REDIS_URL"))
BROKER_URL = redis.from_url(os.environ.get("REDIS_URL"))
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL')
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Canada/Eastern'
redis_url = urlparse.urlparse(os.environ.get('REDIS_URL'))
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "{0}:{1}".format(redis_url.hostname, redis_url.port),
        "OPTIONS": {
            "PASSWORD": redis_url.password,
            "DB": 0,
        }
    }
}
After that, in my Procfile I added:
web: gunicorn bizbii.wsgi --log-file -
worker : celery workder -A tasks.app -l INFO
python manage.py celeryd -v 2 -B -s celery -E -l INFO
but the tasks still do not run.
After that I ran the command for the logs, and it returned:
2016-07-30T08:53:19+00:00 app[heroku-redis]: source=REDIS sample#active-connections=1 sample#load-avg-1m=0.07 sample#load-avg-5m=0.075 sample#load-avg-15m=0.07 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664876.0kB sample#memory-free=13426732.0kB sample#memory-cached=460140kB sample#memory-redis=299616bytes sample#hit-rate=1 sample#evicted-keys=0
After that I created a one-off dyno with this command:
heroku run bash -a bizbii2
and ran the following command:
python manage.py celeryd -v 2 -B -s celery -E -l INFO
It returned an error like:
[2016-08-03 08:23:26,506: ERROR/Beat] beat: Connection error: [Errno 111] Connection refused. Trying again in 8.0 seconds...
[2016-08-03 08:23:26,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 8.00 seconds...
Please give me a suggestion on how to deploy Celery on a Heroku server.
I had this exact problem. I updated my Procfile with the following line and the error was gone:
worker: celery -A TASKFILE worker -B --loglevel=info
Replace TASKFILE with, for example, proj.celery or proj.tasks; this depends on where you put your tasks.
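For reference, here is a minimal sketch of what proj/celery.py could look like so that -A proj.celery resolves to an app; "proj" and "proj.settings" are placeholders for your actual project package and settings module, and this follows the Celery 3.x Django layout from the django-celery era of the question:

# proj/celery.py -- "proj" is a placeholder for your project package
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# Adjust "proj.settings" to your actual settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Celery reads BROKER_URL, CELERY_RESULT_BACKEND, etc. from Django settings.
# Note that BROKER_URL should be the URL string itself, e.g.
# os.environ.get('REDIS_URL'), not a redis client object.
app.config_from_object('django.conf:settings')

# Pick up tasks.py modules from every installed Django app.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

With a layout like this, the worker line in the Procfile would read worker: celery -A proj.celery worker -B --loglevel=info.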
I have a problem compiling the Hello-Service sample with Maven.
The response: Unable to add module to the current project as it is not of packaging type 'pom' -> [Help 1]
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building WSO2 MSF4J Microservice 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:2.4:generate (default-cli) @ Hello-Service >>>
[INFO]
[INFO] <<< maven-archetype-plugin:2.4:generate (default-cli) @ Hello-Service <<<
[INFO]
[INFO] --- maven-archetype-plugin:2.4:generate (default-cli) @ Hello-Service ---
[INFO] Generating project in Interactive mode
[INFO] Archetype repository not defined. Using the one from [org.wso2.msf4j:msf4j-microservice:1.0.0] found in catalog remote
[INFO] Using property: groupId = org.example
[INFO] Using property: artifactId = Hello-Service
[INFO] Using property: version = 1.0.0-SNAPSHOT
[INFO] Using property: package = br.teste.service
[INFO] Using property: serviceClass = HelloService
Confirm properties configuration:
groupId: org.example
artifactId: Hello-Service
version: 1.0.0-SNAPSHOT
package: br.teste.service
serviceClass: HelloService
Y: : y
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: msf4j-microservice:1.0.0
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: org.example
[INFO] Parameter: artifactId, Value: Hello-Service
[INFO] Parameter: version, Value: 1.0.0-SNAPSHOT
[INFO] Parameter: package, Value: br.teste.service
[INFO] Parameter: packageInPathFormat, Value: br/teste/service
[INFO] Parameter: package, Value: br.teste.service
[INFO] Parameter: version, Value: 1.0.0-SNAPSHOT
[INFO] Parameter: groupId, Value: org.example
[INFO] Parameter: serviceClass, Value: HelloService
[INFO] Parameter: artifactId, Value: Hello-Service
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping WSO2 MSF4J Microservice
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.359s
[INFO] Finished at: Wed Mar 16 15:05:44 BRT 2016
[INFO] Final Memory: 22M/265M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-archetype-plugin:2.4:generate (default-cli) on project Hello-Service: org.apache.maven.archetype.exception.InvalidPackaging: Unable to add module to the current project as it is not of packaging type 'pom' -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <groupId>org.wso2.msf4j</groupId>
        <artifactId>msf4j-service</artifactId>
        <version>1.0.0</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>Hello-Service</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <name>WSO2 MSF4J Microservice</name>

    <properties>
        <microservice.mainClass>org.example.service.Application</microservice.mainClass>
    </properties>
</project>
You are using an old version of the source code; I can tell from your module's artifact name. The relevant pom file can be found here.
I can compile the latest source code without any issues.