Not able to start the Docker container for Postfix - Dockerfile

I am getting the error below when I try to run the Docker container for Postfix:
2020-05-29 08:49:05,837 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2020-05-29 08:49:05,837 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2020-05-29 08:49:05,844 INFO RPC interface 'supervisor' initialized
2020-05-29 08:49:05,844 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-05-29 08:49:05,844 INFO supervisord started with pid 17
2020-05-29 08:49:06,852 INFO spawned: 'postfix' with pid 19
2020-05-29 08:49:06,856 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:07,167 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:08,172 INFO spawned: 'postfix' with pid 136
2020-05-29 08:49:08,174 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:08,219 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:10,230 INFO spawned: 'postfix' with pid 151
2020-05-29 08:49:10,233 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:10,274 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:13,283 INFO spawned: 'postfix' with pid 166
2020-05-29 08:49:13,286 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:13,286 INFO gave up: rsyslog entered FATAL state, too many start retries too quickly
2020-05-29 08:49:13,325 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:14,330 INFO gave up: postfix entered FATAL state, too many start retries too quickly
The supervisord block corresponding to the rsyslogd error above is:
command=/usr/sbin/rsyslogd -n -c3
Kindly help
Thanks,
Suv

Please provide the container config you are using to run it.
From what I understand, rsyslog is not installed in the container, so install it before using it. If you are on a Debian or Ubuntu container, use the commands below to install it:
add-apt-repository ppa:adiscon/v8-stable
apt-get update
apt-get install rsyslog
In dockerfile:
# add-apt-repository is provided by the software-properties-common package
RUN add-apt-repository ppa:adiscon/v8-stable && \
    apt-get update && \
    apt-get -y install rsyslog

Related

Spark Job Crashes with error in prelaunch.err

We are running a Spark job which runs close to 30 scripts one by one. It usually takes 14-15 hours to run, but this time it failed after 13 hours. Below are the details:
Command: spark-submit --executor-memory=80g --executor-cores=5 --conf spark.sql.shuffle.partitions=800 run.py
Setup: Running Spark jobs via Jenkins on AWS EMR with 16 spot nodes
Error: Since the YARN log is huge (270 MB+), below are some extracts from it:
[2022-07-25 04:50:08.646]Container exited with a non-zero exit code 1. Error file: prelaunch.err. Last 4096 bytes of prelaunch.err : Last 4096 bytes of stderr :
ermediates/master/email/_temporary/0/_temporary/attempt_202207250435265404741257029168752_0641_m_000599_168147 s3://memberanalytics-data-out-prod/pipelined_intermediates/master/email/_temporary/0/task_202207250435265404741257029168752_0641_m_000599 using algorithm version 1
22/07/25 04:37:05 INFO FileOutputCommitter: Saved output of task 'attempt_202207250435265404741257029168752_0641_m_000599_168147' to s3://memberanalytics-data-out-prod/pipelined_intermediates/master/email/_temporary/0/task_202207250435265404741257029168752_0641_m_000599
22/07/25 04:37:05 INFO SparkHadoopMapRedUtil: attempt_202207250435265404741257029168752_0641_m_000599_168147: Committed
22/07/25 04:37:05 INFO Executor: Finished task 599.0 in stage 641.0 (TID 168147). 9341 bytes result sent to driver
22/07/25 04:49:36 ERROR YarnCoarseGrainedExecutorBackend: Executor self-exiting due to : Driver ip-10-13-52-109.bjw2k.asg:45383 disassociated! Shutting down.
22/07/25 04:49:36 INFO MemoryStore: MemoryStore cleared
22/07/25 04:49:36 INFO BlockManager: BlockManager stopped
22/07/25 04:50:06 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
22/07/25 04:50:06 ERROR Utils: Uncaught exception in thread shutdown-hook-0
java.lang.InterruptedException

Celery workers failing in AWS Elastic Beanstalk [exited: celeryd-worker (exit status 1; not expected)]

I've been trying to follow this thorough explanation on how to deploy a Django app with a Celery worker to AWS Elastic Beanstalk:
How to run a celery worker with Django app scalable by AWS Elastic Beanstalk?
I had some problems installing pycurl but solved it with the comment in:
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized"
Then I got:
[2019-01-26T06:43:04.865Z] INFO [12249] - [Application update app-190126_134200#28/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_raiseflags/Command 05_celery_tasks_run] : Activity execution failed, because: /usr/bin/env: bash
: No such file or directory
(ElasticBeanstalk::ExternalInvocationError)
But I also solved that one: it turns out I had to convert the "celery_configuration.txt" file to Unix EOL (I'm using Windows, and Notepad++ had automatically saved it with Windows EOL).
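If you don't have Notepad++ handy, the same conversion can be done with a couple of lines of Python (a sketch, assuming the file sits in the current directory):
# Sketch: convert a file from Windows (CRLF) to Unix (LF) line endings
with open('celery_configuration.txt', 'rb') as f:
    data = f.read()
with open('celery_configuration.txt', 'wb') as f:
    f.write(data.replace(b'\r\n', b'\n'))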
With all these modifications I can successfully deploy the project. But the problem is that the periodic tasks are not running.
I get:
2019-01-26 09:12:57,337 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:12:58,583 INFO spawned: 'celeryd-worker' with pid 25691
2019-01-26 09:12:59,453 INFO spawned: 'celeryd-beat' with pid 25695
2019-01-26 09:12:59,666 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:00,790 INFO spawned: 'celeryd-worker' with pid 25705
2019-01-26 09:13:00,791 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:01,915 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:03,919 INFO spawned: 'celeryd-worker' with pid 25728
2019-01-26 09:13:03,920 INFO spawned: 'celeryd-beat' with pid 25729
2019-01-26 09:13:05,985 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:06,091 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:07,092 INFO gave up: celeryd-beat entered FATAL state, too many start retries too quickly
2019-01-26 09:13:09,096 INFO spawned: 'celeryd-worker' with pid 25737
2019-01-26 09:13:10,084 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:11,085 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
I also have this part of the logs:
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook/run_supervised_celeryd.sh] : Completed activity. Result:
[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A raiseflags --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A raiseflags --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
No config updates to processes
celeryd-beat: ERROR (not running)
celeryd-beat: ERROR (abnormal termination)
celeryd-worker: ERROR (not running)
celeryd-worker: ERROR (abnormal termination)
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1] : Completed activity. Result:
Application version switch - Command CMD-AppDeploy stage 1 completed
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
I don't know if it has something to do with the error, but notice the part PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s" in the line above. Shouldn't ENV_PATH be something else?
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
It's my first time deploying an app with Celery, and to be honest I'm really lost. I fought a lot to solve the first two errors (I'm a real amateur), and now that I get this one I don't even know where to start.
Also, I'm not sure if I'm using "celery_configuration.txt" the right way. The only thing I edited was the two places where it says "django_app", which I changed to "raiseflags" (the name of my Django project). Is this correct?
Does anyone know how to solve it? I can paste my files if needed, but they are just like the ones provided in the first link. I'm using Windows.
Thank you very much!
OK, the problem had nothing to do with the PATH line I was referring to. I just had to add 'django_celery_beat' and 'django_celery_results' to INSTALLED_APPS in my settings.py.
The connection error I later mentioned when talking to Fran was because I needed to set BROKER_URL instead of CELERY_BROKER_URL, also in the settings.py file. I guess this had to do with me not specifying 'CELERY' as the namespace in app.autodiscover_tasks() in the celery.py file (although they do it in the linked question, I didn't because I was using a different version of Celery).
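Roughly, the two fixes looked like this (a sketch; the broker URL is a hypothetical placeholder for whatever broker you actually use):
# settings.py (sketch, for a project named "raiseflags")
INSTALLED_APPS = [
    # ... existing Django apps ...
    'django_celery_beat',
    'django_celery_results',
]

# With no 'CELERY' namespace (see celery.py below), Celery reads the
# bare setting name BROKER_URL, not CELERY_BROKER_URL.
BROKER_URL = 'redis://localhost:6379/0'  # hypothetical placeholder

# celery.py (sketch)
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'raiseflags.settings')

app = Celery('raiseflags')
app.config_from_object('django.conf:settings')  # no namespace='CELERY', hence BROKER_URL above
app.autodiscover_tasks()  # older Celery versions may need the app list passed explicitly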
Thanks to Fran for everything, especially for pointing out that I should review the Celery error logs. I didn't know how to do it. If any other amateur is also struggling, know that you have to "eb ssh" into your instance and then run "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" is the number of lines you want to read). I know this sounds obvious to a lot of people but, stupid me, I had no clue.
(By the way, I'm still struggling with a problem where the Celery worker can't find the pycurl module, but that has nothing to do with this question.)
Referring to the line you pointed out, where
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
appears: did you copy this line from somewhere? I don't see it in the link you posted.
In the linked answer it was environment=$celeryenv, where $celeryenv was defined as
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

Supervisor not working on centos7 for laravel-echo-server

Log details from the supervisor.log file are below. The following error appears when I restart Supervisor on CentOS 7:
2018-02-01 17:48:02,392 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:03,393 INFO success: laravel-queue-listener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-01 17:48:03,394 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:05,396 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:08,401 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:08,401 INFO gave up: laravel-worker entered FATAL state, too many start retries too quickly
More about the issue
I accessed the server using PuTTY and ran the command laravel-echo-server start manually, and everything works. But why does it not work when Supervisor runs the same command via the config below and is restarted? Running laravel-echo-server manually over PuTTY works, but that is of no use once the PuTTY session is closed: laravel-echo-server stops as well.
The command details for laravel-echo-server in the supervisor file are below:
[program:laravel-worker]
command=/var/www/laravel/laravel-echo-server start
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
laravel-echo-server is already installed on the server.
Update - 1
Using the command which laravel-echo-server, I found that the path of laravel-echo-server is /usr/bin/laravel-echo-server.
When I went into that directory and tried to run laravel-echo-server start manually, it gave an error message that the laravel-echo-server.json file was missing. I added it manually and updated the URLs (authHost and allowOrigin). Finally, I stopped the command I had run manually and put the correct path in the supervisor file. Now it looks like this:
[program:echo-server]
command=/usr/bin/laravel-echo-server start
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
Then I restarted Supervisor and got the supervisor logs below.
2018-02-09 07:19:31,674 INFO success: echo-server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-09 07:19:31,715 INFO exited: echo-server (exit status 0; expected)
2018-02-09 07:19:32,718 INFO spawned: 'echo-server' with pid 2286
2018-02-09 07:19:33,648 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:34,652 INFO spawned: 'echo-server' with pid 2296
2018-02-09 07:19:35,545 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:37,550 INFO spawned: 'echo-server' with pid 2306
2018-02-09 07:19:38,446 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:41,451 INFO spawned: 'echo-server' with pid 2317
2018-02-09 07:19:42,299 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:43,301 INFO gave up: echo-server entered FATAL state, too many start retries too quickly
I am still facing the same 404 error for socket.io/socket.io.js.
You have two issues: one is the path of the echo server executable, and the other is the working directory. You need to use the config below:
[program:echo-server]
command=/usr/bin/laravel-echo-server start
directory=/var/www/laravel
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
This should fix both issues.

Supervisord, Gunicorn (exit status 127; not expected)

I am hoping to use supervisor to monitor and run a gunicorn server.
When I run:
/usr/bin/gunicorn app.wsgi:application -c config.conf
it works.
But the exact same command in my supervisor conf file does not work. Any explanation?
supervisor.conf
[supervisord]
[group:app]
programs=gunicorn_app
[program:gunicorn_app]
environment=PYTHONPATH=usr/bin
command=/usr/bin/gunicorn app.wsgi:application -c gunicorn.conf.py
directory=~/path/to/app
autostart=true
autorestart=true
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
I'm receiving an error like this:
2016-05-31 22:53:34,786 INFO spawned: 'gunicorn_app' with pid 18763
2016-05-31 22:53:34,789 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:35,791 INFO spawned: 'gunicorn_app' with pid 18764
2016-05-31 22:53:35,795 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:37,798 INFO spawned: 'gunicorn_app' with pid 18765
2016-05-31 22:53:37,802 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:40,807 INFO spawned: 'gunicorn_app' with pid 18766
2016-05-31 22:53:40,810 INFO exited: gunicorn_app (exit status 127; not expected)
I understand that exit code 127 means "command not found" but I can execute the exact same command on the command line.
Try using an absolute path:
/home/path/to/app
not ~/path/to/app
As you rightly said, this code means "command not found", which could be the result of either:
Supervisor not being able to locate the configuration file
Wrong configuration
Whatever the case, I recommend:
Case 1:
Make sure you provide the absolute (full) path to your gunicorn.conf.py file (e.g. /home/user/path/to/gunicorn.conf.py).
Case 2:
Revisit your supervisor configuration file and try to determine where the error may arise. The best way to do so is to locate the log file and open it to verify the cause. To facilitate this, I recommend adding the following to your supervisord.conf file:
[program:gunicorn]
# the configuration file is located at /home/<user>/path/to/configuration_file
command=/usr/local/bin/gunicorn app.wsgi:application -c /home/<user>/path/to/gunicorn.conf.py
directory=/home/<user>/path/to/app
autostart=true
autorestart=true
# add these settings to log errors
stderr_logfile=/var/log/gunicorn.err.log
stdout_logfile=/var/log/gunicorn.out.log
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
NB: The assumption I am making here is that you want to deploy or run a Django application using Gunicorn. Any error encountered when you launch your server can be verified in the gunicorn.err.log file.
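For reference, gunicorn.conf.py is itself just a Python file; a minimal sketch (all values here are placeholders, not recommendations) could look like:
# gunicorn.conf.py -- minimal sketch, values are placeholders
bind = '127.0.0.1:8000'  # address and port Gunicorn listens on
workers = 3              # number of worker processes
timeout = 60             # restart workers that stay silent for this many seconds
accesslog = '/var/log/gunicorn.access.log'
errorlog = '/var/log/gunicorn.error.log'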

AWS Beanstalk django 1.5 application not working

I'm new to AWS, so please help me. I'll write only the things which might matter for my problem. If you need more info, just write in the comments section.
When I ping ELB address or app address, I get "Request timeout".
Server:
Instance type: micro
Custom AMI: ami-c37474b7
Load balancer:
Only HTTP, port 80
And RDS, S3, ElastiCache, SQS.
I also use S3 to store Django's static files, which works, because I can see those files in my bucket.
RDS and SQS also work. The problem with ElastiCache is a timeout raised by libmemcached, but that isn't the main problem.
sg-cced0da3 | SecurityGroup for ElasticBeanstalk environment.
22 (SSH) 0.0.0.0/0
80 (HTTP) sg-ceed0da1
sg-ceed0da1 | ELB created security group used when no security group is specified during ELB creation - modifications could impact traffic to future ELBs
80 (HTTP) 0.0.0.0/0
Config file
packages:
  yum:
    libevent: []
    libmemcached: []
    libmemcached-devel: []
container_commands:
  01_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  02_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  03_createadmin:
    command: "utilities/scripts/createadmin.py"
    leader_only: true
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: findtofun/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: findtofun.settings
  - option_name: AWS_ACCESS_KEY_ID
    value: ...
  - option_name: AWS_SECRET_KEY
    value: ...
  - namespace: aws:elasticbeanstalk:container:python:staticfiles
    option_name: /static/
    value: static/
LOGS
/var/log/eb-tools.log
2013-06-03 14:52:47,908 [INFO] (27814 MainThread) [directoryHooksExecutor.py-29] [root directoryHooksExecutor info] Script succeeded.
2013-06-03 14:52:47,908 [INFO] (27814 MainThread) [directoryHooksExecutor.py-29] [root directoryHooksExecutor info] Executing script: /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py
2013-06-03 14:52:50,019 [INFO] (27814 MainThread) [directoryHooksExecutor.py-29] [root directoryHooksExecutor info] Output from script: New python executable in /opt/python/run/venv/bin/python2.6
Installing
/var/log/httpd/error_log
Python/2.6.8 configured -- resuming normal operations
[Mon Jun 03 16:53:06 2013] [error] Exception KeyError: KeyError(140672020449248,) in ignored
[Mon Jun 03 14:53:06 2013] [notice] caught SIGTERM, shutting down
[Mon Jun 03 14:53:08 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Mon Jun 03 14:53:08 2013] [notice] Digest: generating secret for digest authentication
...
[Mon Jun 03 14:53:08 2013] [notice] Digest: done
[Mon Jun 03 14:53:08 2013] [notice] Apache/2.2.22 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.8 configured -- resuming normal operations
/opt/python/log/supervisord.log
2013-06-03 04:39:35,544 CRIT Supervisor running as root (no user in config file)
2013-06-03 04:39:35,650 INFO RPC interface 'supervisor' initialized
2013-06-03 04:39:35,651 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2013-06-03 04:39:35,651 INFO supervisord started with pid 3488
2013-06-03 04:39:36,658 INFO spawned: 'httpd' with pid 3498
2013-06-03 04:39:37,660 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2013-06-03 04:44:51,265 INFO stopped: httpd (exit status 0)
2013-06-03 04:44:52,280 INFO spawned: 'httpd' with pid 3804
2013-06-03 04:44:53,283 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2013-06-03 14:53:06,986 INFO stopped: httpd (exit status 0)
2013-06-03 14:53:08,000 INFO spawned: 'httpd' with pid 27871
2013-06-03 14:53:09,003 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
/var/log/cfn-init.log
2013-06-03 14:53:05,520 [DEBUG] Running test for command 03_createadmin
2013-06-03 14:53:05,535 [DEBUG] Test command output:
2013-06-03 14:53:05,536 [DEBUG] Test for command 03_createadmin passed
2013-06-03 14:53:05,986 [INFO] Command 03_createadmin succeeded
2013-06-03 14:53:05,987 [DEBUG] Command 03_createadmin output:
2013-06-03 14:53:05,987 [DEBUG] No services specified
2013-06-03 14:53:05,994 [INFO] ConfigSets completed
2013-06-03 14:53:06,000 [DEBUG] Not clearing reboot trigger as scheduling support is not available
2013-06-03 14:53:06,292 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.eu-west-1.amazonaws.com
2013-06-03 14:53:06,292 [DEBUG] Describing resource AWSEBAutoScalingGroup in stack arn:aws:cloudformation:eu-west-1:352769977590:stack/awseb-e-bwrsuih23z-stack/52c9b3c0-cbf6-11e2-ace7-5017c2ccb886
2013-06-03 14:53:06,489 [DEBUG] Not setting a reboot trigger as scheduling support is not available
2013-06-03 14:53:06,510 [INFO] Running configSets: Hook-EnactAppDeploy
2013-06-03 14:53:06,511 [INFO] Running configSet Hook-EnactAppDeploy
2013-06-03 14:53:06,512 [INFO] Running config Hook-EnactAppDeploy
2013-06-03 14:53:06,512 [DEBUG] No packages specified
2013-06-03 14:53:06,512 [DEBUG] No groups specified
2013-06-03 14:53:06,512 [DEBUG] No users specified
2013-06-03 14:53:06,513 [DEBUG] No sources specified
2013-06-03 14:53:06,513 [DEBUG] /etc/httpd/conf.d/01ebsys.conf already exists
2013-06-03 14:53:06,513 [DEBUG] Moving /etc/httpd/conf.d/01ebsys.conf to /etc/httpd/conf.d/01ebsys.conf.bak
2013-06-03 14:53:06,513 [DEBUG] Writing content to /etc/httpd/conf.d/01ebsys.conf
2013-06-03 14:53:06,514 [DEBUG] No mode specified for /etc/httpd/conf.d/01ebsys.conf
2013-06-03 14:53:06,514 [DEBUG] Running command aclean
2013-06-03 14:53:06,514 [DEBUG] No test for command aclean
2013-06-03 14:53:06,532 [INFO] Command aclean succeeded
2013-06-03 14:53:06,533 [DEBUG] Command aclean output:
2013-06-03 14:53:06,533 [DEBUG] Running command clean
2013-06-03 14:53:06,534 [DEBUG] No test for command clean
2013-06-03 14:53:06,547 [INFO] Command clean succeeded
2013-06-03 14:53:06,548 [DEBUG] Command clean output:
2013-06-03 14:53:06,548 [DEBUG] Running command hooks
2013-06-03 14:53:06,548 [DEBUG] No test for command hooks
2013-06-03 14:53:19,278 [INFO] Command hooks succeeded
2013-06-03 14:53:19,279 [DEBUG] Command hooks output: Executing directory: /opt/elasticbeanstalk/hooks/appdeploy/enact/
Executing script: /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.py
Output from script: httpd: stopped
httpd: started
httpd RUNNING pid 27871, uptime 0:00:03
Script succeeded.
Executing script: /opt/elasticbeanstalk/hooks/appdeploy/enact/09clean.sh
Output from script:
Script succeeded.
I don't know if this is the right answer, but it works when I put DEBUG = False in the settings.py file. Can someone clarify this?
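For anyone trying this: in Django 1.5, switching DEBUG off also activates the ALLOWED_HOSTS host-header check, so that setting usually needs attention at the same time (a sketch; the domain is a hypothetical placeholder):
# settings.py sketch -- Django 1.5 validates the Host header against
# ALLOWED_HOSTS whenever DEBUG is False
DEBUG = False
ALLOWED_HOSTS = ['.elasticbeanstalk.com']  # hypothetical; use your app's domain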