Where to start tracing down an exception? - crystal-lang

I'm getting an exception in production which isn't providing any stack trace information. How do I start debugging where this might be coming from?
Oct 25 16:26:17 socket-proxy app/web.1: Exception: RedisError: Disconnected (Redis::DisconnectedError)
Oct 25 16:26:17 socket-proxy app/web.1: 0x4af6ac: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x4ce900: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x4b553e: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x529d1c: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x518cb2: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x518064: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x521d82: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x51ed3b: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x5240e9: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x50b995: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x416209: ??? at ??
Oct 25 16:26:17 socket-proxy app/web.1: 0x0: ??? at ??

Ruby dev here, so I'm not sure why the stack trace is printed so cryptically, but if you're looking for clues as to where to look, I would start with this class:
redis/error.cr
# Exception for errors that Redis returns.
class Redis::Error < Exception
  def initialize(s)
    super("RedisError: #{s}")
  end
end

class Redis::DisconnectedError < Redis::Error
  def initialize
    super("Disconnected")
  end
end
Now, the only place that exception appears to be raised in the crystal-redis repository is in this file:
redis/connection.cr
def receive_line
  line = @socket.gets(chomp: false)
  unless line
    raise Redis::DisconnectedError.new
  end
  line.byte_slice(0, line.bytesize - 2)
end
Looking at where receive_line is used, the error is clearly being raised from Redis::Connection, either while establishing the connection or while receiving a reply.
So it's either an error during connection or a dropped connection.
Given the uninformative stack trace, that would be a good place to start, unless you can share some more code to look at.
Hope that helps.

This ended up being caused by the production server timing out the Redis connection after a period of time. I've switched to redis-reconnect to reconnect automatically.
https://github.com/danielwestendorf/redis-reconnect

Related

VPS is inaccessible through SSH and can't connect to website

The problem is that the connection to my VPS is lost multiple times a day, for around 20 minutes at a time. When the server is down I can't reach the website, so I get the error:
Err connection timed out.
When I try connecting through SSH, it outputs:
Connection refused.
Nothing more, nothing less. I need to solve this because it causes a lot of trouble; the only solution I've come up with is restarting the server from the provider's site. But this happens frequently, not once a month or a year: it happens about 10 times a day. How should I debug the problem, or how can I find a real solution?
Any help is appreciated. Thanks.
Edit
The output of ssh -vvv root@ip:
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname *ip is address
debug2: ssh_connect_direct
debug1: Connecting to *ip [*ip] port 22.
debug1: connect to address *ip port 22: Connection refused
ssh: connect to host *ip port 22: Connection refused
Apache error.log
[Sun Oct 17 13:59:41.104024 2021] [wsgi:error] [pid 2883:tid 139903188997888] [remote some_ip:53604] Bad Request: /iframe2/news/kurallar/
[Sun Oct 17 13:59:41.109194 2021] [wsgi:error] [pid 2883:tid 139903071426304] [remote some_ip:48318] Bad Request: /iframe2/news/kurallar/
[Sun Oct 17 14:10:08.136701 2021] [wsgi:error] [pid 2883:tid 139903071426304] [remote my_ip:24816] Not Found: /favicon.ico
[Sun Oct 17 14:19:34.339115 2021] [mpm_event:notice] [pid 2882:tid 139903302818752] AH00491: caught SIGTERM, shutting down
Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>>
Traceback (most recent call last):
File "/usr/lib/python3.6/asyncio/base_events.py", line 526, in __del__
NameError: name 'ResourceWarning' is not defined
[Sun Oct 17 14:19:34.517419 2021] [mpm_event:notice] [pid 3319:tid 140614344002496] AH00489: Apache/2.4.29 (Ubuntu) mod_wsgi/4.5.17 Python/3.6 configured -- resuming normal operations
[Sun Oct 17 14:19:34.517583 2021] [core:notice] [pid 3319:tid 140614344002496] AH00094: Command line: '/usr/sbin/apache2'
/var/log/auth.log
Oct 17 21:15:01 my_name sshd[1365]: Invalid user pi from 94.3.213.149 port 42290
Oct 17 21:15:01 my_name sshd[1364]: Invalid user pi from 94.3.213.149 port 42286
Oct 17 21:15:01 my_name sshd[1365]: Connection closed by invalid user pi 94.3.213.149 port 42290 [preauth]
Oct 17 21:15:01 my_name sshd[1364]: Connection closed by invalid user pi 94.3.213.149 port 42286 [preauth]
Oct 17 22:00:38 my_name sshd[1628]: Invalid user user from 212.193.30.32 port 39410
Oct 17 22:00:38 my_name sshd[1628]: Received disconnect from 212.193.30.32 port 39410:11: Normal Shutdown, Thank you for playing [preauth]
Oct 17 22:00:38 my_name sshd[1628]: Disconnected from invalid user user 212.193.30.32 port 39410 [preauth]
I get lots of these entries and shutdowns in the log; the name 'pi' and the IP are not mine. Do these connections affect the website or leak any user information?
There are gaps in the logs during the periods when I could not connect to the server.
During those periods, /var/log/syslog prints this:
Oct 18 08:27:08 my_name kernel: [53593.658210] [UFW BLOCK] IN=eth0 OUT= MAC={mac_addr} SRC={some_ip} DST={my_ip} LEN=40 TOS=0x08 PREC=0x20 TTL=241 ID=31782 PROTO=TCP SPT=47415 DPT=24634 WINDOW=1024 RES=0x00 SYN URGP=0
Oct 18 08:35:01 my_name CRON[2854]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 18 08:45:01 my_name CRON[2860]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 18 08:49:48 my_name kernel: [ 0.000000] Linux version 4.15.0-158-generic (buildd#lgw01-amd64-051) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #166-Ubuntu SMP Fri Sep 17 19:37:52 UTC 2021 (Ubuntu 4.15.0-158.166-generic 4.15.18)

Vora 1.4 Catalog fails to start

So I upgraded Vora from 1.3 to 1.4 on a recently upgraded HDP 2.5.6.
All services seem to be starting fine, except Catalog. In the log I see a lot of messages like this:
2017-08-16 11:43:34.591183|+1000|ERROR|Was not able to create new dlog via XXXXX:37999, Status was ERROR_OP_TIMED_OUT, Details: |v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(211)^^
2017-08-16 11:43:34.611044|+1000|ERROR|Operation (CREATE_LOG) timed out, last status was: ERROR_INTERNAL|v2catalog_server|Distributed Log|140607279314688|Retry|callback_base.cpp(222)^^
2017-08-16 11:43:34.611204|+1000|ERROR|Was not able to create new dlog via XXXXX:20439, Status was ERROR_OP_TIMED_OUT, Details: |v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(211)^^
2017-08-16 11:43:34.611235|+1000|ERROR|Create DLog ended with status ERROR_OP_TIMED_OUT, retrying in 1000ms|v2catalog_server|Distributed Log|140607339825056|CreateDLog|log_administration.cpp(163)^^
2017-08-16 11:43:35.611757|+1000|ERROR|can't create dlog client[ ERROR_OP_TIMED_OUT ]|v2catalog_server|Catalog|140607339825056|Init|dlog_accessor.cpp(174)^^
terminate called after throwing an instance of 'std::system_error'
what(): Invalid argument
Any ideas what I left misconfigured?
[UPDATE] DLog's log below:
[Wed Aug 16 10:31:23 2017] DLOG Server Version: 1.2.330.20859
[Wed Aug 16 10:31:23 2017] Listening on XXXXXX:46026
[Wed Aug 16 10:31:23 2017] Loading data store
2017-08-16 10:31:23.475454|+1000|WARN |Server file descriptor limit too large vs system limit; reducing to 896|v2dlog|Distributed Log|140349419014080|Load|store.cpp(2187)^^
[Wed Aug 16 10:31:23 2017] Server file descriptor limit too large vs system limit; reducing to 896
[Wed Aug 16 10:31:23 2017] Recovering log in store
[Wed Aug 16 10:31:23 2017] Starting server in managed mode
[Wed Aug 16 10:31:23 2017] Initializing management interface
2017-08-16 10:31:39.365780|+1000|WARN |f(1)h(1):Host 1 has timed out, disabling|v2dlog|Distributed Log|140349343360768|newcluster.(*FragmentRef).ProcessRule|dlog.go(607)^^
2017-08-16 10:32:10.333444|+1000|ERROR|Log with ID 1 is not registered on unit.|v2dlog|Distributed Log|140349238322944|Seal|tenant_registry.cpp(63)^^
2017-08-16 10:32:10.333754|+1000|ERROR|f(1)h(1):Sealing local unit failed for log 1: disabling|v2dlog|Distributed Log|140349238322944|newcluster.(*replicaStateRef).disable|dlog.go(991)^^
[Wed Aug 16 11:22:24 2017] Received signal: 15. Shutting down
[Wed Aug 16 11:22:24 2017] Flushing store...
[Wed Aug 16 11:22:24 2017] Store flush complete
[Wed Aug 16 11:30:17 2017] DLOG Server Version: 1.2.330.20859
[Wed Aug 16 11:30:17 2017] Listening on XXXXXX:37999
[Wed Aug 16 11:30:17 2017] Loading data store
2017-08-16 11:30:17.371415|+1000|WARN |Server file descriptor limit too large vs system limit; reducing to 896|v2dlog|Distributed Log|140388824664000|Load|store.cpp(2187)^^
[Wed Aug 16 11:30:17 2017] Server file descriptor limit too large vs system limit; reducing to 896
[Wed Aug 16 11:30:17 2017] Recovering log in store
[Wed Aug 16 11:30:17 2017] Starting server in managed mode
[Wed Aug 16 11:30:17 2017] Initializing management interface
2017-08-16 11:30:19.421458|+1000|WARN |missed heartbeat for log 1, host 2; poking with state 2|v2dlog|Distributed Log|140388740617984|newcluster.(*FragmentRef).ProcessRule|dlog.go(619)^^
Further to this, I've configured Vora DLog to run on all three nodes of the cluster, but I see it's not running on one of them. The (likely) related part of the Vora Manager log is:
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stdout from check: [Thu Aug 17 09:32:36 2017] Checking for store #012[Thu Aug 17 09:32:36 2017] No valid store found
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stderr from check: 2017-08-17 09:32:36.590974|+1000|INFO |Command Line: /opt/vora/lib/vora-dlog/bin/v2dlog check --trace-level DEBUG --trace-to-stderr /var/local/vora/vora-dlog|v2dlog|Distributed Log|139919669938112|server_main|main.cpp(1323) #0122017-08-17 09:32:36.592784|+1000|INFO |Checking for store|v2dlog|Distributed Log|139919669938112|Run|main.cpp(1146) #0122017-08-17 09:32:36.593074|+1000|ERROR|Exception during recovery: Encountered a generic I/O error|v2dlog|Distributed Log|139919669938112|Load|store.cpp(2201) #0122017-08-17 09:32:36.593157|+1000|FATAL|Error during recovery|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] Error during recovery #0122017-08-17 09:32:36.593214|+1000|FATAL| Encountered a generic I/O error|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] Encountered a generic I/O error #0122017-08-17 09:32:36.593277|+1000|FATAL| boost::filesystem::status: Permission den
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : ... ied: "/var/local/vora/vora-dlog"|v2dlog|Distributed Log|139919669938112|handle_recovery_error|main.cpp(767) #012[Thu Aug 17 09:32:36 2017] boost::filesystem::status: Permission denied: "/var/local/vora/vora-dlog" #0122017-08-17 09:32:36.593330|+1000|INFO |No valid store found|v2dlog|Distributed Log|139919669938112|Run|main.cpp(1151)
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : Creating SAP Hana Vora Distributed Log store ...
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stdout from format: [Thu Aug 17 09:32:36 2017] Formatting store
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : stderr from format: 2017-08-17 09:32:36.615558|+1000|INFO |Command Line: /opt/vora/lib/vora-dlog/bin/v2dlog format --trace-level DEBUG --trace-to-stderr /var/local/vora/vora-dlog|v2dlog|Distributed Log|140176991168448|server_main|main.cpp(1323) #0122017-08-17 09:32:36.617444|+1000|INFO |Formatting store|v2dlog|Distributed Log|140176991168448|Run|main.cpp(1093) #0122017-08-17 09:32:36.617655|+1000|ERROR|boost::filesystem::status: Permission denied: "/var/local/vora/vora-dlog"|v2dlog|Distributed Log|140176991168448|Format|store.cpp(2107) #0122017-08-17 09:32:36.617693|+1000|FATAL|Could not format store.|v2dlog|Distributed Log|140176991168448|Run|main.cpp(1095) #012[Thu Aug 17 09:32:36 2017] Could not format store.
Aug 17 09:32:36 XXXXXX vora.vora-dlog: [c.63f700da] : Error while creating dlog store.
Aug 17 09:32:36 XXXXXX nomad[628]: client: task "vora-dlog-server" for alloc "058fd477-4e80-59ca-7703-e97f2ca1c8c2" failed: Wait returned exit code 1, signal 0, and error <nil>
[UPDATE2] I see quite a few lines like this in the Vora Manager log:
Aug 17 14:38:27 XXXXXX vora.vora-dlog: [c.2235f785] : Running['sudo', '-i', '-u', 'root', 'chown', 'vora:vora', '/var/log/vora/vora-dlog/']
And I would guess it should be successful, as on that node I can see that the vora-dlog directory belongs to the vora user:
-rw-r--r-- 1 vora vora 0 Jun 29 19:04 .keep
drwxrwx--- 2 vora vora 4096 Aug 16 10:31 dbdir
drwxrwx--- 6 root vora 4096 Aug 15 16:24 vora-discovery
drwxrwx--- 2 vora vora 4096 Aug 16 10:31 vora-dlog
drwxr-xr-x 4 root root 4096 Aug 15 16:23 vora-scheduler
The vora-dlog directory itself is empty.

AWS ECS container can't specify a region

First, my server environment:
server: django + nginx + uwsgi
cloud: docker + AWS ECS
logging: AWS CloudWatch log service + watchtower third party app
This is the project code:
https://github.com/byunghyunpark/django-log-test
Question
I am using the watchtower third-party app in Django to send logs to the AWS CloudWatch log service. If I set the logging handler to watchtower, upload the Docker image to the ECS service, and run the task, it still returns a 500 error.
If you check the log for the 500 error:
/tmp/uwsgi.log
*** Operational MODE: single process ***
DEBUG = False
DEV = False
TEST = False
LMS_MESSAGE = False
STATIC_S3 = True
DJANGO_LOG_LEVEL = INFO
Traceback (most recent call last):
File "/usr/lib/python3.5/logging/config.py", line 558, in configure
handler = self.configure_handler(handlers[name])
File "/usr/lib/python3.5/logging/config.py", line 731, in configure_handler
result = factory(**kwargs)
File "/usr/local/lib/python3.5/dist-packages/watchtower/__init__.py", line 78, in __init__
self.cwl_client = (boto3_session or boto3).client("logs")
File "/usr/local/lib/python3.5/dist-packages/boto3/__init__.py", line 83, in client
return _get_default_session().client(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/usr/local/lib/python3.5/dist-packages/botocore/session.py", line 836, in create_client
client_config=config, api_version=api_version)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 70, in create_client
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 224, in _get_client_args
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 45, in get_client_args
endpoint_url, is_secure, scoped_config)
File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 103, in compute_client_args
service_name, region_name, endpoint_url, is_secure)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 297, in resolve
service_name, region_name)
File "/usr/local/lib/python3.5/dist-packages/botocore/regions.py", line 122, in construct_endpoint
partition, service_name, region_name)
File "/usr/local/lib/python3.5/dist-packages/botocore/regions.py", line 135, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./mysite/wsgi.py", line 16, in <module>
application = get_wsgi_application()
File "/usr/local/lib/python3.5/dist-packages/django/core/wsgi.py", line 13, in get_wsgi_application
django.setup(set_prefix=False)
File "/usr/local/lib/python3.5/dist-packages/django/__init__.py", line 22, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/usr/local/lib/python3.5/dist-packages/django/utils/log.py", line 75, in configure_logging
logging_config_func(logging_settings)
File "/usr/lib/python3.5/logging/config.py", line 795, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python3.5/logging/config.py", line 566, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'watchtower': You must specify a region.
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 22)
spawned uWSGI worker 1 (pid: 33, cores: 1)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/1] 123.212.195.148 () {40 vars in 738 bytes} [Mon Jun 5 10:43:13 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/2] 123.212.195.148 () {40 vars in 756 bytes} [Mon Jun 5 10:43:13 2017] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/3] 54.167.97.82 () {36 vars in 515 bytes} [Mon Jun 5 11:22:42 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/4] 91.196.50.33 () {38 vars in 613 bytes} [Mon Jun 5 12:03:20 2017] GET /testproxy.php => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/5] 123.212.195.148 () {40 vars in 738 bytes} [Mon Jun 5 14:01:04 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/6] 123.212.195.148 () {40 vars in 756 bytes} [Mon Jun 5 14:01:04 2017] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/7] 123.212.195.148 () {42 vars in 769 bytes} [Mon Jun 5 14:06:48 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/8] 123.212.195.148 () {44 vars in 809 bytes} [Mon Jun 5 14:06:48 2017] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/9] 123.212.195.148 () {42 vars in 769 bytes} [Mon Jun 5 14:06:49 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/10] 123.212.195.148 () {44 vars in 809 bytes} [Mon Jun 5 14:06:49 2017] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 33|app: -1|req: -1/11] 123.212.195.148 () {42 vars in 769 bytes} [Mon Jun 5 14:06:49 2017] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
The task IAM role was assigned administrator permissions when the task definition was created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
However, the container still does not pick up the credentials.
If I change the logging handler to the default (console), the nginx server works normally. If I run a Docker container locally with docker run -v $HOME/.aws:/root/.aws --rm -it -p 9090:80 image_name, nginx works normally and logging to the CloudWatch log service works as well.
Authentication fails only in the ECS environment.
Do I need other settings besides the IAM role?
I do not like this, but as a temporary resolution, when I build the Docker image I pass the credentials with Dockerfile ARG variables, and I removed the task IAM role.
The Dockerfile code looks like this:
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_DEFAULT_REGION
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
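For what it's worth, the traceback shows the real problem: boto3 cannot work out which region to use inside the container, so the watchtower handler never gets built. Since this version of watchtower accepts a boto3_session argument (visible in the traceback), one alternative to baking credentials into the image is to hand the handler an explicit session with a region. This is only a sketch; the region, log group name, and logger layout below are placeholders, not taken from the original project:
# settings.py (sketch): give watchtower an explicit boto3 session so the
# region never has to be guessed from the environment.
# The region and log group names here are placeholders.
import boto3

AWS_REGION = 'ap-northeast-1'  # hypothetical; use the region your log group lives in

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'watchtower': {
            'level': 'INFO',
            'class': 'watchtower.CloudWatchLogHandler',
            # An explicit session means boto3 never raises NoRegionError.
            'boto3_session': boto3.session.Session(region_name=AWS_REGION),
            'log_group': 'django-log-test',  # placeholder log group name
        },
    },
    'loggers': {
        'django': {'handlers': ['watchtower'], 'level': 'INFO'},
    },
}
The cleaner fix on ECS is usually to stop the guessing at the source: set AWS_DEFAULT_REGION as an environment variable in the task definition and let the task IAM role supply the credentials, so no keys end up baked into the image.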

What is the issue with my WAMP server?

Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at admin@example.com to inform them of the time this error occurred, and the actions you performed just before this error.
More information:
After I went through the log file I found this, but I don't know how to debug it:
[Wed Sep 24 12:29:50.808777 2014] [mpm_winnt:notice] [pid 7716:tid 304] AH00364: Child: All worker threads have exited.
[Wed Sep 24 12:29:53.569982 2014] [mpm_winnt:notice] [pid 6836:tid 388] AH00430: Parent: Child process 7716 exited successfully.
[Wed Sep 24 12:38:59.563516 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00455: Apache/2.4.9 (Win64) PHP/5.5.12 configured -- resuming normal operations
[Wed Sep 24 12:38:59.598518 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00456: Apache Lounge VC11 Server built: Mar 16 2014 12:42:59
[Wed Sep 24 12:38:59.598518 2014] [core:notice] [pid 4612:tid 388] AH00094: Command line: 'c:\\wamp\\bin\\apache\\apache2.4.9\\bin\\httpd.exe -d C:/wamp/bin/apache/apache2.4.9'
[Wed Sep 24 12:38:59.600518 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00418: Parent: Created child process 5232
[Wed Sep 24 12:39:00.554573 2014] [mpm_winnt:notice] [pid 5232:tid 304] AH00354: Child: Starting 64 worker threads.
[Wed Sep 24 20:59:56.118653 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00422: Parent: Received shutdown signal -- Shutting down the server.
[Wed Sep 24 20:59:58.571793 2014] [mpm_winnt:notice] [pid 5232:tid 304] AH00364: Child: All worker threads have exited.
[Wed Sep 24 21:00:23.437495 2014] [mpm_winnt:notice] [pid 4612:tid 388] AH00430: Parent: Child process 5232 exited successfully.
[Thu Sep 25 08:41:53.157396 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00455: Apache/2.4.9 (Win64) PHP/5.5.12 configured -- resuming normal operations
[Thu Sep 25 08:41:53.166397 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00456: Apache Lounge VC11 Server built: Mar 16 2014 12:42:59
[Thu Sep 25 08:41:53.166397 2014] [core:notice] [pid 1032:tid 392] AH00094: Command line: 'c:\\wamp\\bin\\apache\\apache2.4.9\\bin\\httpd.exe -d C:/wamp/bin/apache/apache2.4.9'
[Thu Sep 25 08:41:53.168397 2014] [mpm_winnt:notice] [pid 1032:tid 392] AH00418: Parent: Created child process 6796
[Thu Sep 25 08:41:55.282518 2014] [mpm_winnt:notice] [pid 6796:tid 316] AH00354: Child: Starting 64 worker threads.
[Thu Sep 25 10:46:27.453901 2014] [core:error] [pid 6796:tid 836] [client 127.0.0.1:4242] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:47:56.015967 2014] [core:error] [pid 6796:tid 844] [client 127.0.0.1:4282] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:53:11.816030 2014] [core:error] [pid 6796:tid 832] [client 127.0.0.1:4443] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Thu Sep 25 10:55:26.231718 2014] [core:error] [pid 6796:tid 852] [client 127.0.0.1:4476] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
Request exceeded the limit of 10 internal redirects due to probable configuration error.
In combination with the 500 (internal server) error message, it sounds like you have a problem with the rewrite/redirect rules in your .htaccess file, which keep redirecting the request in a loop.

Django Admin redirects to 500 error

I am getting a 500 error when I log in to the Django admin interface.
I have an Ubuntu Server 13.10 machine running nginx and uWSGI, with MySQL for my database.
I've set it up following this tutorial (first time I've set up a Django production server).
My settings.py file is as follows:
"""
Django settings for app_name project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['website.com', 'www.website.com', 'ip_address']
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'registration',
)
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'app_name.urls'
WSGI_APPLICATION = 'app_name.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': '127.0.0.1',
    }
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-gb'
TIME_ZONE = 'Greenwich'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = '/media/'
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/'
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = os.path.join(BASE_DIR, 'static', 'static-only')
# Additional locations of static files
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    os.path.join(BASE_DIR, 'static', 'static'),
)
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    os.path.join(BASE_DIR, 'static', 'templates'),
)
AUTHENTICATION_BACKENDS = (
    'django.contrib.auth.backends.ModelBackend',
)
ACCOUNT_ACTIVATION_DAYS = 7
I've managed to run sudo python manage.py syncdb and set up my admin user, but when I go to log in it redirects me to my 500.html template page.
My uWSGI log file is here:
*** Starting uWSGI 1.9.13-debian (64bit) on [Mon Feb 3 13:11:22 2014] ***
compiled with version: 4.8.1 on 16 July 2013 02:12:59
os: Linux-3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013
nodename: appname
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 1
current working directory: /var/www/appname.com/src
writing pidfile to /tmp/project-master.pid
detected binary path: /usr/bin/uwsgi-core
setuid() to 33
your processes number limit is 7781
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uwsgi socket 0 bound to TCP address 127.0.0.1:8889 fd 3
Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x1a9f500
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145536 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
added /var/www/appname.com/src/appname/ to pythonpath.
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x1a9f500 pid: 13398 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 13398)
spawned uWSGI worker 1 (pid: 13399, cores: 1)
[pid: 13399|app: 0|req: 1/1] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:11:26 2014] GET / => generated 1761 bytes in 161 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 2/2] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:13:27 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 3/3] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:13:32 2014] GET /admin/ => generated 1865 bytes in 35 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 4/4] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:13:33 2014] POST /admin/ => generated 1761 bytes in 84 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 5/5] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:19:05 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 6/6] 176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:19:05 2014] GET /favicon.ico => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 7/7] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:19:07 2014] POST /admin/ => generated 1761 bytes in 78 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 8/8] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:30:01 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 9/9] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:30:05 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 10/10] 176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:30:05 2014] GET /favicon.ico => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 11/11] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:30:06 2014] POST /admin/ => generated 1761 bytes in 92 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 12/12] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:00 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 13/13] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:12 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 14/14] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 15/15] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 16/16] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 17/17] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:15 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 18/18] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:31 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 19/19] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 20/20] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 21/21] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 22/22] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:33 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 23/23] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:34 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 24/24] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:36 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 25/25] 176.62.211.192 () {42 vars in 730 bytes} [Mon Feb 3 13:32:00 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 26/26] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:32:27 2014] GET / => generated 1761 bytes in 5 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 27/27] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:32 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 28/28] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:38 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 29/29] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:38 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 30/30] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:32:40 2014] GET /admin/ => generated 1865 bytes in 16 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 31/31] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:32:40 2014] GET /accounts/register/ => generated 2839 bytes in 7 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 32/32] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:32:42 2014] GET /accounts/register/ => generated 2839 bytes in 7 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 33/33] 176.62.211.192 () {40 vars in 735 bytes} [Mon Feb 3 13:33:03 2014] GET /accounts/login/ => generated 2336 bytes in 7 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 34/34] 176.62.211.192 () {48 vars in 951 bytes} [Mon Feb 3 13:33:05 2014] POST /accounts/login/ => generated 1761 bytes in 75 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 35/35] 176.62.211.192 () {40 vars in 735 bytes} [Mon Feb 3 13:33:08 2014] GET /accounts/login/ => generated 2336 bytes in 9 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 36/36] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:33:09 2014] GET /accounts/register/ => generated 2839 bytes in 6 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 37/37] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:33:10 2014] GET /admin/ => generated 1865 bytes in 13 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
pid: 13399|app: 0|req: 38/38] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:33:10 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 39/39] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:33:10 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 40/40] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:55:37 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 41/41] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:55:41 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 42/42] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:55:41 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 43/43] 176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:55:41 2014] GET /favicon.ico => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0)
[pid: 13399|app: 0|req: 44/44] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:55:45 2014] POST /admin/ => generated 1761 bytes in 71 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0)
I've been searching online for a solution but haven't been able to find anything, so I have resorted to posting on here.
Any help on this would be much appreciated.
I have now fixed this: I realised that I had SESSION_COOKIE_SECURE = True, which was messing up the login. I've restarted the uWSGI process, re-run the uwsgi.ini, and it all works.
Thanks to everyone that helped me resolve this!
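For anyone landing here with the same symptom: SESSION_COOKIE_SECURE = True tells Django to send the session cookie only over HTTPS, so a login over plain HTTP cannot keep its session, which is what was breaking the admin here. A minimal sketch of the relevant settings; the environment-variable switch is my own illustration, not something from the original setup:
# settings.py (sketch): only mark cookies as "secure" when the site is
# actually served over HTTPS, otherwise the browser never sends them back.
import os

USE_HTTPS = os.environ.get('USE_HTTPS', 'false').lower() == 'true'  # hypothetical switch

SESSION_COOKIE_SECURE = USE_HTTPS  # secure session cookie only makes sense behind TLS
CSRF_COOKIE_SECURE = USE_HTTPS
Once the site is actually served over HTTPS, turning both back on is the right call.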