Vora 1.4 Catalog fails to start

I upgraded Vora from 1.3 to 1.4 on a recently upgraded HDP 2.5.6 cluster.
All services seem to start fine except Catalog. Its log contains many messages like these:
2017-08-16 11:43:34.591183|+1000|ERROR|Was not able to create new dlog via XXXXX:37999, Status was ERROR_OP_TIMED_OUT, Details: |v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(211)^^
2017-08-16 11:43:34.611044|+1000|ERROR|Operation (CREATE_LOG) timed out, last status was: ERROR_INTERNAL|v2catalog_server|Distributed Log|140607279314688|Retry|callback_base.cpp(222)^^
2017-08-16 11:43:34.611204|+1000|ERROR|Was not able to create new dlog via XXXXX:20439, Status was ERROR_OP_TIMED_OUT, Details: |v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(211)^^
2017-08-16 11:43:34.611235|+1000|ERROR|Create DLog ended with status ERROR_OP_TIMED_OUT, retrying in 1000ms|v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(163)^^
2017-08-16 11:43:35.611757|+1000|ERROR|can't create dlog client[ ERROR_OP_TIMED_OUT ]|v2catalog_server|Catalog|140607339825056|Init|dlog_accessor.cpp(174)^^
terminate called after throwing an instance of 'std::system_error'
what(): Invalid argument
Any ideas what I left misconfigured?
[UPDATE] DLog's log below:
[Wed Aug 16 10:31:23 2017] DLOG Server Version: 1.2.330.20859
[Wed Aug 16 10:31:23 2017] Listening on XXXXXX:46026
[Wed Aug 16 10:31:23 2017] Loading data store
2017-08-16 10:31:23.475454|+1000|WARN |Server file descriptor limit too large vs system limit; reducing to 896|v2dlog|Distributed Log|140349419014080|Load|store.cpp(2187)^^
[Wed Aug 16 10:31:23 2017] Server file descriptor limit too large vs system limit; reducing to 896
[Wed Aug 16 10:31:23 2017] Recovering log in store
[Wed Aug 16 10:31:23 2017] Starting server in managed mode
[Wed Aug 16 10:31:23 2017] Initializing management interface
2017-08-16 10:31:39.365780|+1000|WARN |f(1)h(1):Host 1 has timed out, disabling|v2dlog|Distributed Log|140349343360768|newcluster.(*FragmentRef).ProcessRule|dlog.go(607)^^
2017-08-16 10:32:10.333444|+1000|ERROR|Log with ID 1 is not registered on unit.|v2dlog|Distributed Log|140349238322944|Seal|tenant_registry.cpp(63)^^
2017-08-16 10:32:10.333754|+1000|ERROR|f(1)h(1):Sealing local unit failed for log 1: disabling|v2dlog|Distributed Log|140349238322944|newcluster.(*replicaStateRef).disable|dlog.go(991)^^
[Wed Aug 16 11:22:24 2017] Received signal: 15. Shutting down
[Wed Aug 16 11:22:24 2017] Flushing store...
[Wed Aug 16 11:22:24 2017] Store flush complete
[Wed Aug 16 11:30:17 2017] DLOG Server Version: 1.2.330.20859
[Wed Aug 16 11:30:17 2017] Listening on XXXXXX:37999
[Wed Aug 16 11:30:17 2017] Loading data store
2017-08-16 11:30:17.371415|+1000|WARN |Server file descriptor limit too large vs system limit; reducing to 896|v2dlog|Distributed Log|140388824664000|Load|store.cpp(2187)^^
[Wed Aug 16 11:30:17 2017] Server file descriptor limit too large vs system limit; reducing to 896
[Wed Aug 16 11:30:17 2017] Recovering log in store
[Wed Aug 16 11:30:17 2017] Starting server in managed mode
[Wed Aug 16 11:30:17 2017] Initializing management interface
2017-08-16 11:30:19.421458|+1000|WARN |missed heartbeat for log 1, host 2; poking with state 2|v2dlog|Distributed Log|140388740617984|newcluster.(*FragmentRef).ProcessRule|dlog.go(619)^^
Further to this: I've configured Vora DLog to run on all three nodes of the cluster, but I see it's not running on one of them. The (likely) related part of the Vora Manager log is:
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stdout from check: [Thu Aug 17 09:32:36 2017] Checking for store #012[Thu Aug 17 09:32:36 2017] No valid store found
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stderr from check: 2017-08-17 09:32:36.590974|+1000|INFO |Command Line: /opt/vora/lib/vora-dlog/bin/v2dlog check --trace-level DEBUG --trace-to-stderr /var/local/vora/vora-dlog|v2dlog|Distributed Log|139919669938112|server_main|main.cpp(1323) #0122017-08-17 09:32:36.592784|+1000|INFO |Checking for store|v2dlog|Distributed Log|139919669938112|Run|main.cpp(1146) #0122017-08-17 09:32:36.593074|+1000|ERROR|Exception during recovery: Encountered a generic I/O error|v2dlog|Distributed Log|139919669938112|Load|store.cpp(2201) #0122017-08-17 09:32:36.593157|+1000|FATAL|Error during recovery|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] Error during recovery #0122017-08-17 09:32:36.593214|+1000|FATAL| Encountered a generic I/O error|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] Encountered a generic I/O error #0122017-08-17 09:32:36.593277|+1000|FATAL| boost::filesystem::status: Permission den
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : ... ied: "/var/local/vora/vora-dlog"|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] boost::filesystem::status: Permission denied: "/var/local/vora/vora-dlog" #0122017-08-17 09:32:36.593330|+1000|INFO |No valid store found|v2dlog|Distributed Log|139919669938112|Run|main.cpp(1151)
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : Creating SAP Hana Vora Distributed Log store ...
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stdout from format: [Thu Aug 17 09:32:36 2017] Formatting store
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stderr from format: 2017-08-17 09:32:36.615558|+1000|INFO |Command Line: /opt/vora/lib/vora-dlog/bin/v2dlog format --trace-level DEBUG --trace-to-stderr /var/local/vora/vora-dlog|v2dlog|Distributed Log|140176991168448|server_main|main.cpp(1323) #0122017-08-17 09:32:36.617444|+1000|INFO |Formatting store|v2dlog|Distributed Log|140176991168448|Run|main.cpp(1093) #0122017-08-17 09:32:36.617655|+1000|ERROR|boost::filesystem::status: Permission denied: "/var/local/vora/vora-dlog"|v2dlog|Distributed Log|140176991168448|Format|store.cpp(2107) #0122017-08-17 09:32:36.617693|+1000|FATAL|Could not format store.|v2dlog|Distributed Log|140176991168448|Run|main.cpp(1095) #012[Thu Aug 17 09:32:36 2017] Could not format store.
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : Error while creating dlog store.
Aug 17 09:32:36 XXXXXX nomad[628]: client: task "vora-dlog-server" for alloc "058fd477-4e80-59ca-7703-e97f2ca1c8c2" failed: Wait returned exit code 1, signal 0, and error <nil>
[UPDATE2] I see quite a few lines like this in the Vora Manager log:
Aug 17 14:38:27 XXXXXX vora.vora-dlog: [c.2235f785] : Running['sudo', '-i', '-u', 'root', 'chown', 'vora:vora', '/var/log/vora/vora-dlog/']
And I would guess it should succeed, as on that node the vora-dlog directory belongs to the vora user:
-rw-r--r-- 1 vora vora 0 Jun 29 19:04 .keep
drwxrwx--- 2 vora vora 4096 Aug 16 10:31 dbdir
drwxrwx--- 6 root vora 4096 Aug 15 16:24 vora-discovery
drwxrwx--- 2 vora vora 4096 Aug 16 10:31 vora-dlog
drwxr-xr-x 4 root root 4096 Aug 15 16:23 vora-scheduler
The vora-dlog directory itself is empty.

Related

VPS is inaccessible through SSH and the website can't be reached

The problem is that the connection to my VPS is lost multiple times a day, for about 20 minutes at a time. While the server is down I can't reach the website and get the error:
Err connection timed out.
Trying to connect through SSH outputs:
Connection refused.
Nothing more, nothing less. The only workaround I've found is restarting the server from the provider's site, but this happens about 10 times a day, not once a month or a year, so I need a real fix. How should I debug this problem?
Any help is appreciated. Thanks.
Edit
The output of ssh -vvv root@ip:
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname *ip is address
debug2: ssh_connect_direct
debug1: Connecting to *ip [*ip] port 22.
debug1: connect to address *ip port 22: Connection refused
ssh: connect to host *ip port 22: Connection refused
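One useful distinction here: "Connection refused" means the TCP SYN reached a host that actively rejected it (no sshd listening, or a firewall REJECT rule), while the browser's "connection timed out" means packets are being silently dropped or the host is down. A small stdlib sketch (hypothetical helper) to tell the cases apart from another machine:

```python
import socket

def probe(host, port, timeout=5.0):
    # Try a plain TCP connect and classify the result:
    #   "open"    -> something is listening
    #   "refused" -> host reachable, but nothing listening / REJECT rule
    #   "timeout" -> packets dropped or host down
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"
    finally:
        s.close()
```

Probing ports 22 and 80 during an outage (e.g. probe("your.vps.ip", 22)) tells you whether the whole host is gone (timeouts on both) or only individual services died (refused), which narrows the search considerably.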
Apache error.log
[Sun Oct 17 13:59:41.104024 2021] [wsgi:error] [pid 2883:tid 139903188997888] [remote some_ip:53604] Bad Request: /iframe2/news/kurallar/
[Sun Oct 17 13:59:41.109194 2021] [wsgi:error] [pid 2883:tid 139903071426304] [remote some_ip:48318] Bad Request: /iframe2/news/kurallar/
[Sun Oct 17 14:10:08.136701 2021] [wsgi:error] [pid 2883:tid 139903071426304] [remote my_ip:24816] Not Found: /favicon.ico
[Sun Oct 17 14:19:34.339115 2021] [mpm_event:notice] [pid 2882:tid 139903302818752] AH00491: caught SIGTERM, shutting down
Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>>
Traceback (most recent call last):
File "/usr/lib/python3.6/asyncio/base_events.py", line 526, in __del__
NameError: name 'ResourceWarning' is not defined
[Sun Oct 17 14:19:34.517419 2021] [mpm_event:notice] [pid 3319:tid 140614344002496] AH00489: Apache/2.4.29 (Ubuntu) mod_wsgi/4.5.17 Python/3.6 configured -- resuming normal operations
[Sun Oct 17 14:19:34.517583 2021] [core:notice] [pid 3319:tid 140614344002496] AH00094: Command line: '/usr/sbin/apache2'
/var/log/auth.log
Oct 17 21:15:01 my_name sshd[1365]: Invalid user pi from 94.3.213.149 port 42290
Oct 17 21:15:01 my_name sshd[1364]: Invalid user pi from 94.3.213.149 port 42286
Oct 17 21:15:01 my_name sshd[1365]: Connection closed by invalid user pi 94.3.213.149 port 42290 [preauth]
Oct 17 21:15:01 my_name sshd[1364]: Connection closed by invalid user pi 94.3.213.149 port 42286 [preauth]
Oct 17 22:00:38 my_name sshd[1628]: Invalid user user from 212.193.30.32 port 39410
Oct 17 22:00:38 my_name sshd[1628]: Received disconnect from 212.193.30.32 port 39410:11: Normal Shutdown, Thank you for playing [preauth]
Oct 17 22:00:38 my_name sshd[1628]: Disconnected from invalid user user 212.193.30.32 port 39410 [preauth]
I get lots of these entries and disconnects in the log; the name 'pi' and the IPs are not mine. Do these connections affect the website or leak any user information?
There are gaps in the logs during the periods when I could not connect to the server.
During those periods /var/log/syslog prints these:
Oct 18 08:27:08 my_name kernel: [53593.658210] [UFW BLOCK] IN=eth0 OUT= MAC={mac_addr} SRC={some_ip} DST={my_ip} LEN=40 TOS=0x08 PREC=0x20 TTL=241 ID=31782 PROTO=TCP SPT=47415 DPT=24634 WINDOW=1024 RES=0x00 SYN URGP=0
Oct 18 08:35:01 my_name CRON[2854]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 18 08:45:01 my_name CRON[2860]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 18 08:49:48 my_name kernel: [ 0.000000] Linux version 4.15.0-158-generic (buildd@lgw01-amd64-051) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #166-Ubuntu SMP Fri Sep 17 19:37:52 UTC 2021 (Ubuntu 4.15.0-158.166-generic 4.15.18)

Apache not serving website

My website was running perfectly. Today I found that someone had rebooted the server (which is a virtual machine), and the website has not worked since.
Apache is set to run automatically, and it is running.
I tried restarting Apache and the server again and turning the firewall off, but the website is still not working and the browser shows "ERR_CONNECTION_REFUSED".
The Apache error log gives these messages when I restart Apache and try to open the website:
[Tue Apr 12 16:14:32.275157 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00424: Parent: Received restart signal -- Restarting the server.
[Tue Apr 12 16:14:32.562277 2016] [ssl:warn] [pid 1356:tid 468] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
[Tue Apr 12 16:14:32.563254 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00455: Apache/2.4.18 (Win32) OpenSSL/1.0.2f mod_wsgi/4.4.22 Python/2.7.11 configured -- resuming normal operations
[Tue Apr 12 16:14:32.563254 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00456: Server built: Dec 9 2015 12:21:09
[Tue Apr 12 16:14:32.563254 2016] [core:notice] [pid 1356:tid 468] AH00094: Command line: 'C:\\Program Files (x86)\\Apache Software Foundation\\Apache24\\bin\\httpd.exe -d C:/Program Files (x86)/Apache Software Foundation/Apache24'
[Tue Apr 12 16:14:32.563254 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00418: Parent: Created child process 4928
[Tue Apr 12 16:14:33.040811 2016] [ssl:warn] [pid 4928:tid 364] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
[Tue Apr 12 16:14:33.275195 2016] [mpm_winnt:notice] [pid 4928:tid 364] AH00354: Child: Starting 64 worker threads.
[Tue Apr 12 16:15:04.407249 2016] [mpm_winnt:notice] [pid 1688:tid 360] AH00362: Child: Waiting 30 more seconds for 7 worker threads to finish.
[Tue Apr 12 16:15:34.699426 2016] [mpm_winnt:notice] [pid 1688:tid 360] AH00362: Child: Waiting 0 more seconds for 7 worker threads to finish.
[Tue Apr 12 16:15:34.800016 2016] [mpm_winnt:notice] [pid 1688:tid 360] AH00363: Child: Terminating 7 threads that failed to exit.
[Tue Apr 12 16:15:34.800016 2016] [mpm_winnt:notice] [pid 1688:tid 360] AH00364: Child: All worker threads have exited.
[Tue Apr 12 16:15:47.162795 2016] [wsgi:error] [pid 4928:tid 1000] c:\\Python27\\lib\\site-packages\\skimage\\filter\\__init__.py:6: skimage_deprecation: The `skimage.filter` module has been renamed to `skimage.filters`. This placeholder module will be removed in v0.13.
[Tue Apr 12 16:15:47.162795 2016] [wsgi:error] [pid 4928:tid 1000] warn(skimage_deprecation('The `skimage.filter` module has been renamed '
[Tue Apr 12 16:15:47.162795 2016] [wsgi:error] [pid 4928:tid 1000]
[Tue Apr 12 16:17:19.326490 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00424: Parent: Received restart signal -- Restarting the server.
[Tue Apr 12 16:17:19.452471 2016] [ssl:warn] [pid 1356:tid 468] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
[Tue Apr 12 16:17:19.452471 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00455: Apache/2.4.18 (Win32) OpenSSL/1.0.2f mod_wsgi/4.4.22 Python/2.7.11 configured -- resuming normal operations
[Tue Apr 12 16:17:19.452471 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00456: Server built: Dec 9 2015 12:21:09
[Tue Apr 12 16:17:19.452471 2016] [core:notice] [pid 1356:tid 468] AH00094: Command line: 'C:\\Program Files (x86)\\Apache Software Foundation\\Apache24\\bin\\httpd.exe -d C:/Program Files (x86)/Apache Software Foundation/Apache24'
[Tue Apr 12 16:17:19.453448 2016] [mpm_winnt:notice] [pid 1356:tid 468] AH00418: Parent: Created child process 3708
[Tue Apr 12 16:17:19.922216 2016] [ssl:warn] [pid 3708:tid 360] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
[Tue Apr 12 16:17:21.297269 2016] [mpm_winnt:notice] [pid 3708:tid 360] AH00354: Child: Starting 64 worker threads.
[Tue Apr 12 16:17:22.298284 2016] [mpm_winnt:notice] [pid 4928:tid 364] AH00363: Child: Terminating 57 threads that failed to exit.
[Tue Apr 12 16:17:22.298284 2016] [mpm_winnt:notice] [pid 4928:tid 364] AH00364: Child: All worker threads have exited.
[Tue Apr 12 16:17:22.773888 2016] [wsgi:error] [pid 3708:tid 984] c:\\Python27\\lib\\site-packages\\skimage\\filter\\__init__.py:6: skimage_deprecation: The `skimage.filter` module has been renamed to `skimage.filters`. This placeholder module will be removed in v0.13.
[Tue Apr 12 16:17:22.773888 2016] [wsgi:error] [pid 3708:tid 984] warn(skimage_deprecation('The `skimage.filter` module has been renamed '
[Tue Apr 12 16:17:22.773888 2016] [wsgi:error] [pid 3708:tid 984]
What may be causing this problem, or what else can I check?
Thank you very much.
Sounds like your network connection to the VM has problems. Try to telnet into it on port 80. You can find instructions to do this with Google.

Failing on create_engine while using SQLAlchemy on ElasticBeanstalk

I am attempting to deploy a Flask application which uses SQLAlchemy to an AWS ElasticBeanstalk environment. I can deploy and see a running application, but when I attempt to use create_engine to connect to my database the whole thing crashes. I can run the whole app locally on a Linux VM (including using the database within the ElasticBeanstalk environment). I can SSH into the app server and print to the terminal.
I think something about how the application runs when accessed from a browser (via the application URL) is causing the problem. I would be immensely grateful if someone could point me in the right direction. I've been working on this problem for 3 days and I'm starting to go nuts.
This is the code I am using to create my session:
from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.engine.url import URL
import settings

Base = declarative_base()

# establish connection to database
def db_connect():
    """
    Performs database connection using database settings from settings.py.
    Returns sqlalchemy engine instance
    """
    print ".............................................."
    print "...Connecting to Database at URL : "
    print "...attempting URL(**settings.DATABASE)"
    print "... ", URL(**settings.DATABASE)
    print "...above should read: "
    print "... postgresql://ebroot:42Snails@aa12jlddfw2awrj.cc6p0ojvcx3g.us-east-1.rds.amazonaws.com:5432/ebdb"
    print ".............................................."
    print "...attempting engine = create_engine(URL(**settings.DATABASE))..."
    try:
        engine = create_engine(URL(**settings.DATABASE))
        print "...Succeeded in creating engine.................."
        return engine
    except:
        print "..................................................."
        print "...Failed to create engine in database_setup.py...."
        print "..................................................."
        raise

def makeSession():
    # Establishes a session called session which allows you to work with the DB
    try:
        print "....trying to create session.................."
        engine = db_connect()
        print "....succeeded in db_connect().................."
        Base.metadata.bind = engine
        print "....succeeded in Base.metadata.bind = engine..."
        # create a configured "Session" class
        Session = sessionmaker(bind=engine)
        print "....succeeded in Session = sessionmaker(bind=engine)..."
        # create a Session
        session = Session()
        print ".............................................."
        print "...Succeeded in creating session.............."
        print ".............................................."
        return session
    except:
        print "................................................."
        print "...Failed to create session in application.py...."
        print "................................................."
        raise
        # note: the two lines below are unreachable after the raise
        print "finally.... session.close()"
        session.close()
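The print markers show where it fails but not why. A sketch (hypothetical wrapper, Python 3 syntax, not part of the original app) that logs the full traceback before re-raising, so the Apache error log records the actual create_engine exception instead of only the marker lines:

```python
import traceback

def create_engine_logged(factory):
    # `factory` is a zero-argument callable, e.g.
    #     lambda: create_engine(URL(**settings.DATABASE))
    # On failure, print the full traceback (mod_wsgi routes print
    # output to the Apache error log) before re-raising.
    try:
        return factory()
    except Exception:
        print("create_engine failed:")
        print(traceback.format_exc())
        raise
```

With this in place, the error_log below would show the underlying cause (missing driver, DNS failure, security-group timeout, bad credentials) rather than just "Failed to create engine".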
This is a snippet from /var/log/httpd/error_log. Lines beginning with "..." are print statements I am using to isolate the issue. I have altered the URL I use to reach the database to frustrate the robots. The address I use works perfectly from my local machine and from the terminal when I SSH into the app server.
[Sat Mar 19 20:16:00.159375 2016] [mpm_prefork:notice] [pid 670] AH00169: caught SIGTERM, shutting down
[Sat Mar 19 20:16:01.259575 2016] [suexec:notice] [pid 1077] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Mar 19 20:16:01.270801 2016] [so:warn] [pid 1077] AH01574: module wsgi_module is already loaded, skipping
[Sat Mar 19 20:16:01.273963 2016] [auth_digest:notice] [pid 1077] AH01757: generating secret for digest authentication ...
[Sat Mar 19 20:16:01.274662 2016] [lbmethod_heartbeat:notice] [pid 1077] AH02282: No slotmem from mod_heartmonitor
[Sat Mar 19 20:16:01.274712 2016] [:warn] [pid 1077] mod_wsgi: Compiled for Python/2.7.9.
[Sat Mar 19 20:16:01.274719 2016] [:warn] [pid 1077] mod_wsgi: Runtime using Python/2.7.10.
[Sat Mar 19 20:16:01.276674 2016] [mpm_prefork:notice] [pid 1077] AH00163: Apache/2.4.16 (Amazon) mod_wsgi/3.5 Python/2.7.10 configured -- resuming normal operations
[Sat Mar 19 20:16:01.276690 2016] [core:notice] [pid 1077] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Sat Mar 19 20:16:04.069713 2016] [:error] [pid 1080] ...in settings.py.......
[Sat Mar 19 20:19:53.419659 2016] [:error] [pid 1080] ....trying to create session..................
[Sat Mar 19 20:19:53.419702 2016] [:error] [pid 1080] ..............................................
[Sat Mar 19 20:19:53.419707 2016] [:error] [pid 1080] ...Connecting to Database at URL :
[Sat Mar 19 20:19:53.419711 2016] [:error] [pid 1080] ...attempting URL(**settings.DATABASE)
[Sat Mar 19 20:19:53.419876 2016] [:error] [pid 1080] ... postgresql://user:password@aa12jlddfw2awrj.cc6p9990ojvcx3g.us-east-1.rds.amazonaws.com:5432/ebdb
[Sat Mar 19 20:19:53.419884 2016] [:error] [pid 1080] ...above should read:
[Sat Mar 19 20:19:53.419889 2016] [:error] [pid 1080] ... postgresql://user:password@aa12jlddfw2awrj.cc6p9990ojvcx3g.us-east-1.rds.amazonaws.com:5432/ebdb
[Sat Mar 19 20:19:53.419893 2016] [:error] [pid 1080] ..............................................
[Sat Mar 19 20:19:53.419897 2016] [:error] [pid 1080] ...attempting engine = create_engine(URL(**settings.DATABASE))...
[Sat Mar 19 20:19:53.427168 2016] [:error] [pid 1080] ...................................................
[Sat Mar 19 20:19:53.427189 2016] [:error] [pid 1080] ...Failed to create engine in database_setup.py....
[Sat Mar 19 20:19:53.427193 2016] [:error] [pid 1080] ...................................................
[Sat Mar 19 20:19:53.427200 2016] [:error] [pid 1080] .................................................
[Sat Mar 19 20:19:53.427203 2016] [:error] [pid 1080] ...Failed to create session in application.py....
[Sat Mar 19 20:19:53.427206 2016] [:error] [pid 1080] .................................................
Here is the URL for the running app:
http://keirseysorterapp.9yk6hymk2z.us-east-1.elasticbeanstalk.com/testApp
I was never able to get this to work. I ended up following this tutorial:
https://medium.com/@rodkey/deploying-a-flask-application-on-aws-a72daba6bb80#.t2lbbv5q3
The tutorial uses the Flask-SQLAlchemy module instead of importing Flask and SQLAlchemy separately. Once I used this method I had no problems. Good luck!

What is the issue with my WAMP server?

Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at admin@example.com to inform them of the time this error occurred, and the actions you performed just before this error.
More information about this error may be available in the server error log.
After I went through the log file I found this, but I don't know how to debug it:
[Wed Sep 24 12:29:50.808777 2014] [mpm_winnt:notice] [pid 7716:tid 304] AH00364: Child: All worker threads have exited.
[Wed Sep 24 12:29:53.569982 2014] [mpm_winnt:notice] [pid 6836:tid 388] AH00430: Parent: Child process 7716 exited successfully.
[Wed Sep 24 12:38:59.563516 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00455: Apache/2.4.9 (Win64) PHP/5.5.12 configured -- resuming normal operations
[Wed Sep 24 12:38:59.598518 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00456: Apache Lounge VC11 Server built: Mar 16 2014 12:42:59
[Wed Sep 24 12:38:59.598518 2014] [core:notice] [pid 4612:tid 388] AH00094: Command line: 'c:\\wamp\\bin\\apache\\apache2.4.9\\bin\\httpd.exe -d C:/wamp/bin/apache/apache2.4.9'
[Wed Sep 24 12:38:59.600518 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00418: Parent: Created child process 5232
[Wed Sep 24 12:39:00.554573 2014] [mpm_winnt:notice] [pid 5232:tid 304] AH00354: Child: Starting 64 worker threads.
[Wed Sep 24 20:59:56.118653 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00422: Parent: Received shutdown signal -- Shutting down the server.
[Wed Sep 24 20:59:58.571793 2014] [mpm_winnt:notice] [pid 5232:tid 304] AH00364: Child: All worker threads have exited.
[Wed Sep 24 21:00:23.437495 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00430: Parent: Child process 5232 exited successfully.
[Thu Sep 25 08:41:53.157396 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00455: Apache/2.4.9 (Win64) PHP/5.5.12 configured -- resuming normal operations
[Thu Sep 25 08:41:53.166397 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00456: Apache Lounge VC11 Server built: Mar 16 2014 12:42:59
[Thu Sep 25 08:41:53.166397 2014] [core:notice] [pid 1032:tid 392] AH00094: Command line: 'c:\\wamp\\bin\\apache\\apache2.4.9\\bin\\httpd.exe -d C:/wamp/bin/apache/apache2.4.9'
[Thu Sep 25 08:41:53.168397 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00418: Parent: Created child process 6796
[Thu Sep 25 08:41:55.282518 2014] [mpm_winnt:notice] [pid 6796:tid 316] AH00354: Child: Starting 64 worker threads.
[Thu Sep 25 10:46:27.453901 2014] [core:error] [pid 6796:tid 836] [client 127.0.0.1:4242] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:47:56.015967 2014] [core:error] [pid 6796:tid 844] [client 127.0.0.1:4282] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:53:11.816030 2014] [core:error] [pid 6796:tid 832] [client 127.0.0.1:4443] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:55:26.231718 2014] [core:error] [pid 6796:tid 852] [client 127.0.0.1:4476] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
Request exceeded the limit of 10 internal redirects due to probable configuration error.
In combination with the 500 (internal server) error message, it sounds like you have a problem with the RewriteRule lines in your .htaccess file that keep redirecting the request in a loop.
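A loop like this typically happens when a rewrite's target matches the rule again on the next internal pass. A minimal sketch of a guarded .htaccess rewrite, assuming the common front-controller pattern (the index.php target is a placeholder; adapt to the actual rules):

```apache
RewriteEngine On
# Only rewrite requests that don't already point at a real file or
# directory, so the rewritten target can't re-trigger the rule.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]
```

If the existing rules lack such RewriteCond guards, each internal redirect matches again until Apache gives up after 10 passes, producing exactly the AH00124 errors in the log.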

Django WSGI daemon mode synchronization of requests

Running Apache2 with the following /etc/httpd.conf:
<VirtualHost *:80>
WSGIDaemonProcess myapp user=pq group=pq processes=2 threads=1
WSGIProcessGroup myapp
LogLevel debug
<Directory /django/myapp/apache/>
Order allow,deny
Allow from all
</Directory>
WSGIScriptAlias / /django/myapp/apache/django.wsgi
</VirtualHost>
where this is my /django/myapp/apache/django.wsgi:
import os
import sys
sys.path.append('/django')
os.environ['PYTHONPATH'] = '/django'
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I have the following view:
def sleep(request):
    print >> sys.stderr, '{', os.getpid()
    time.sleep(5)
    print >> sys.stderr, '}', os.getpid()
    return index(request)
I make 4 concurrent requests and my error log shows:
[Wed Jan 12 12:59:56 2011] [error] {17160
[Wed Jan 12 13:00:01 2011] [error] }17160
[Wed Jan 12 13:00:01 2011] [error] {17157
[Wed Jan 12 13:00:06 2011] [error] }17157
[Wed Jan 12 13:00:06 2011] [error] {17160
[Wed Jan 12 13:00:11 2011] [error] }17160
[Wed Jan 12 13:00:11 2011] [error] {17157
[Wed Jan 12 13:00:16 2011] [error] }17157
Basically my requests were serialized across the whole web server (not even per process).
Why is this?
Edit: This is a single-CPU machine and Apache2 is compiled with the prefork MPM. My client was 4 tabs in Chrome. Interestingly, when I try this with curl I get the expected interleaving:
[Wed Jan 12 18:10:18 2011] [error] {17160
[Wed Jan 12 18:10:18 2011] [error] {17157
[Wed Jan 12 18:10:23 2011] [error] }17160
[Wed Jan 12 18:10:23 2011] [error] {17160
[Wed Jan 12 18:10:23 2011] [error] }17157
[Wed Jan 12 18:10:23 2011] [error] {17157
[Wed Jan 12 18:10:28 2011] [error] }17160
[Wed Jan 12 18:10:28 2011] [error] }17157
Edit2: Looks like this is an issue with Chrome synchronizing requests. My (limited) tests showed that this only happens with Chrome and only when tabs are used. Multiple requests within a single tab are asynchronous.
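For what it's worth, with processes=2 threads=1 the daemon group can run at most two requests at once server-wide, one per process; anything beyond that queues, which compounds any client-side serialization. If more concurrency is wanted, raising the thread count is the usual knob. A sketch (values illustrative, not a recommendation for this specific app):

```apache
# 2 daemon processes x 15 threads = up to 30 concurrent requests
# handled by this daemon group; threads=1 allows only one per process.
WSGIDaemonProcess myapp user=pq group=pq processes=2 threads=15
WSGIProcessGroup myapp
```

This doesn't change the Chrome behavior noted above, but it removes the server-side bottleneck when genuinely concurrent clients (like the curl test) arrive.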