Wt: fatal error: call to empty boost::function - c++

I am trying to build the Wt library (version 3.3.5) myself, but I am failing when I try to run the example.
My environment is Debian with Boost 1.53.0, against which I want to build the library.
Compiling and linking (gcc 4.7.2) work fine, but when I try to run the Wt example (http://www.webtoolkit.eu/wt/doc/tutorial/wt.html), the server fails with:
Wt: fatal error: call to empty boost::function
The complete log is:
foo@rtm:/tmp/$ wt_test --docroot . --http-address 0.0.0.0 --http-port 9090
Option no-compression is implied because wthttp was built without zlib support.
[2016-Feb-11 10:35:29.326974] 6436 - [info] "config: reading Wt config file: /etc/wt/wt_config.xml (location = 'wt_test')"
Option no-compression is implied because wthttp was built without zlib support.
[2016-Feb-11 10:35:29.327938] 6436 - [info] "WServer/wthttp: initializing built-in wthttpd"
[2016-Feb-11 10:35:29.328281] 6436 - [info] "wthttp: started server: http://0.0.0.0:9090"
[2016-Feb-11 10:35:33.971973] 6436 - [info] "Wt: session created (#sessions = 1)"
[2016-Feb-11 10:35:33.972423] 6436 [/ mZ4BIN0ZPoVnqQTG] [info] "WEnvironment: UserAgent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0"
10.12.5.50 - - [2016-Feb-11 10:35:33.974561] "GET / HTTP/1.1" 200 4691
[2016-Feb-11 10:35:33.974722] 6436 - [info] "WebRequest: took 3.044ms"
[2016-Feb-11 10:35:34.026134] 6436 [/ mZ4BIN0ZPoVnqQTG] [error] "Wt: fatal error: call to empty boost::function"
[2016-Feb-11 10:35:34.026242] 6436 - [info] "WebController: Removing session mZ4BIN0ZPoVnqQTG"
[2016-Feb-11 10:35:34.026293] 6436 [/ mZ4BIN0ZPoVnqQTG] [info] "Wt: session destroyed (#sessions = 0)"
10.12.5.50 - - [2016-Feb-11 10:35:34.026374] "GET /?wtd=mZ4BIN0ZPoVnqQTG&sid=2063522618&webGL=true&scrW=1680&scrH=1050&tz=60&htmlHistory=true&deployPath=%2F&request=script&rand=4063601615 HTTP/1.1" 500 84
10.12.5.50 - - [2016-Feb-11 10:35:34.026375] "GET /?wtd=mZ4BIN0ZPoVnqQTG&request=style&page=1 HTTP/1.1" 200 0
[2016-Feb-11 10:35:34.026414] 6436 - [info] "WebRequest: took 0.5ms"
[2016-Feb-11 10:35:34.026430] 6436 - [info] "WebRequest: took 30.593ms"
Does anybody have an idea how to find out what is going wrong here?
I know the Boost version isn't up to date, but I assume that shouldn't be a problem?
Regards,
VanDahlen

I get this warning:
boost_1_60_0/boost/signal.hpp|17 col 4| warning: #warning "Boost.Signals is no longer being maintained and is now deprecated. Please switch to Boost.Signals2. To disable this warning message, define BOOST_SIGNALS_NO_DEPRECATION_WARNING." [-Wcpp]
This looks like it could have a lot to do with it.
That said, it just works and runs cleanly under valgrind on my Ubuntu box using g++-5. (It does fail to link when using libc++.)

Related

Django Login suddenly stopped working - timing out

My Django project worked perfectly fine for the last 90 days.
There has been no new code deployment during this time.
I am running supervisor -> gunicorn to serve the application, with nginx in front.
Unfortunately, it just stopped serving the login page (the standard framework login).
I wrote a small view that checks whether the DB connection is working, and it responds within seconds.
def updown(request):
    from django.shortcuts import HttpResponse
    from django.db import connections
    from django.db.utils import OperationalError
    status = True
    # Check database connection
    if status is True:
        db_conn = connections['default']
        try:
            c = db_conn.cursor()
        except OperationalError:
            status = False
            error = 'No connection to database'
        else:
            status = True
    if status is True:
        message = 'OK'
    elif status is False:
        message = 'NOK' + ' \n' + error
    return HttpResponse(message)
This returns an OK.
But the second I try to reach /admin or anything else that requires a login, it times out.
wget http://127.0.0.1:8000
--2022-07-20 22:54:58-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /account/login/?next=/business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... No data received.
Retrying.
--2022-07-20 22:55:30-- (try: 2) http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response...
The Supervisor/Gunicorn log is not helpful at all:
[2022-07-20 23:06:34 +0200] [980] [INFO] Starting gunicorn 20.1.0
[2022-07-20 23:06:34 +0200] [980] [INFO] Listening at: http://127.0.0.1:8000 (980)
[2022-07-20 23:06:34 +0200] [980] [INFO] Using worker: sync
[2022-07-20 23:06:34 +0200] [986] [INFO] Booting worker with pid: 986
[2022-07-20 23:08:01 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:986)
[2022-07-20 23:08:02 +0200] [980] [WARNING] Worker with pid 986 was terminated due to signal 9
[2022-07-20 23:08:02 +0200] [1249] [INFO] Booting worker with pid: 1249
[2022-07-20 23:12:26 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:1249)
[2022-07-20 23:12:27 +0200] [980] [WARNING] Worker with pid 1249 was terminated due to signal 9
[2022-07-20 23:12:27 +0200] [1515] [INFO] Booting worker with pid: 1515
Nginx is just giving:
502 Bad Gateway
I don't see anything in the logs, I don't see any error when running the Django dev server, and Sentry is not showing anything either. I'm totally lost.
I am running Django 4.0.x and all libraries are up to date.
The check-up script only tests the database connection. Due to a misconfiguration of the database replication, the DB was accepting connections and reads, but hung on writes.
The login page tries to write a session row to the database, which is what failed in this case.
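For anyone hitting something similar, here is a minimal sketch of a health check that also exercises the write path. The view name is an assumption, and it uses Django's database-backed session store to mirror what the login page does:
from django.contrib.sessions.backends.db import SessionStore
from django.db.utils import OperationalError
from django.http import HttpResponse

def updown_readwrite(request):
    # Besides opening a connection, write and delete a throwaway session row.
    # A replication setup that accepts connections and reads but hangs on
    # writes will block or fail here instead of on the login page.
    try:
        store = SessionStore()
        store['healthcheck'] = True
        store.save()    # INSERT into django_session - requires a writable DB
        store.delete()  # remove the probe row again
    except OperationalError as exc:
        return HttpResponse('NOK \n' + str(exc), status=503)
    return HttpResponse('OK')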

gcloud Property validation was skipped

I am using gcloud CLI to configure my region and zone:
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
But each command takes about 15 seconds, and I get a warning:
WARNING: Property validation for compute/region was skipped
Everything works fine, but why do I have a 15-second delay, and a warning?
With debug verbosity, the output is:
DEBUG: Running [gcloud.config.set] with arguments: [--verbosity: "debug", SECTION/PROPERTY: "compute/region", VALUE: "us-central1"]
Updated property [compute/region].
DEBUG: Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/?recursive=true
DEBUG: Starting new HTTP connection (1): metadata.google.internal:80
DEBUG: http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/?recursive=true HTTP/1.1" 200 185
DEBUG: Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/token
DEBUG: http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/token HTTP/1.1" 200 1050
DEBUG: Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/?recursive=true
DEBUG: Starting new HTTP connection (1): metadata.google.internal:80
DEBUG: http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/?recursive=true HTTP/1.1" 200 185
DEBUG: Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/token
DEBUG: http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/GOOGLE_AACOUNT_REPLACED@cloudbuild.gserviceaccount.com/token HTTP/1.1" 200 1050
DEBUG: Starting new HTTPS connection (1): compute.googleapis.com:443
DEBUG: https://compute.googleapis.com:443 "POST /batch/compute/v1 HTTP/1.1" 200 None
DEBUG: https://compute.googleapis.com:443 "POST /batch/compute/v1 HTTP/1.1" 200 None
DEBUG: https://compute.googleapis.com:443 "POST /batch/compute/v1 HTTP/1.1" 200 None
DEBUG: https://compute.googleapis.com:443 "POST /batch/compute/v1 HTTP/1.1" 200 None
DEBUG: https://compute.googleapis.com:443 "POST /batch/compute/v1 HTTP/1.1" 200 None
WARNING: Property validation for compute/region was skipped.
To make the gcloud tool easier to use, gcloud will try to validate the values provided, including "compute/region". In this case, it has to fetch the full list of available regions from the API. If this fails, for whatever reason, it will show this warning message.
One of the many possible reasons is that the Compute Engine API is not enabled. It could also be a lack of authentication, although Cloud Build has authentication enabled by default, and you don't need any special permissions to run this command.
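If the API does turn out to be disabled, it can be enabled with, for example (assuming you have permission to manage services in the project):
gcloud services enable compute.googleapis.com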
To find out what exactly is going wrong, you can try adding the --log-http parameter to your gcloud command line. This will display the full details of any interactions with the API, including any error message in the response.
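For example, re-running the command from the question with that flag:
gcloud config set compute/region us-central1 --log-http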
In any case, this is simply a warning, and the config entry is still updated. This happens even if the validation fails, e.g. because the region does not exist. As mentioned above, this is just a feature to let the user know about certain kinds of simple mistakes.

Loading admin page after upgrading to Django 3.0 crashes dev server

I upgraded Django from 2.2 to 3.0, and now I can't access the admin page.
Every time I access http://127.0.0.1:8000/admin, the dev server quits without any message or error.
If I revert back to Django 2.2, everything works fine.
I created a fresh virtualenv and a new project; unfortunately, I hit the same wall again: the dev server quits without any error.
Is this a common issue, or is there an error on my side?
I am using Windows 10 64-bit and PyCharm Community 2019.2.5.
The packages in my virtualenv are:
Package Version
------------------- ----------
asgiref==3.2.3
certifi==2019.11.28
chardet==3.0.4
Django==3.0
djangorestframework==3.10.3
idna==2.8
Pillow==6.2.1
pip==19.3.1
PyJWT==1.7.1
pytz==2019.3
requests==2.22.0
setuptools==42.0.2
six==1.13.0
sqlparse==0.3.0
twilio==6.34.0
urllib3==1.25.7
wheel==0.33.6
Console output (no stack trace is printed):
System check identified no issues (0 silenced).
December 06, 2019 - 21:59:45
Django version 3.0, using settings 'suhul_prj.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
[06/Dec/2019 21:59:56] "GET / HTTP/1.1" 200 16351
[06/Dec/2019 21:59:56] "GET /static/admin/css/fonts.css HTTP/1.1" 200 423
[06/Dec/2019 21:59:56] "GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 200 85876
[06/Dec/2019 21:59:56] "GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 200 86184
[06/Dec/2019 21:59:56] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 200 85692
[06/Dec/2019 22:00:02] "GET /admin HTTP/1.1" 301 0
[06/Dec/2019 22:00:02] "GET /admin/ HTTP/1.1" 302 0
[06/Dec/2019 22:00:02] "GET /admin/login/?next=/admin/ HTTP/1.1" 200 1913
[06/Dec/2019 22:00:03] "GET /static/admin/css/base.css HTTP/1.1" 200 16378
[06/Dec/2019 22:00:03] "GET /static/admin/css/login.css HTTP/1.1" 200 1233
[06/Dec/2019 22:00:03] "GET /static/admin/css/responsive.css HTTP/1.1" 200 18052
[06/Dec/2019 22:00:08] "POST /admin/login/?next=/admin/ HTTP/1.1" 302 0
(home_venv) C:\Users\Admin\Dropbox\django_projects\suhul_prj>
It looks like this problem goes back to a Python/Django version incompatibility, which caused a WSGI segmentation fault. I had the same issue while using Python 3.7, and downgrading Python to 3.6 solved it!
I just want to share my experience solving the same issue.
I used Python 3.7 and got this issue. I solved it by upgrading to Python 3.8 and creating a new venv.
My Python version now, FYI:
Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Another post also mentioned this issue.
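For reference, recreating the virtual environment against a newer interpreter on Windows might look roughly like this (the py launcher selector, the folder name, and the presence of a requirements.txt are assumptions):
py -3.8 -m venv venv38
venv38\Scripts\activate
pip install -r requirements.txt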

fail2ban: need help creating regex rules

I am trying to protect my server from an xmlrpc.php DDoS.
I use fail2ban, but the regex I found doesn't seem to work. Can you have a look?
This is the log:
Aug 2 17:33:11 myserver pound: my.web.site 188.209.49.38 - - [02/Aug/2015:17:33:11 +0200] "POST /xmlrpc.php HTTP/1.0" 404 410 "" "Mozilla/5.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)"
Aug 2 16:27:49 myserver pound: (7fec610c5700) e503 no back-end "POST /xmlrpc.php HTTP/1.0" from 185.62.188.25
filter.d/xmlrpc.conf
[Definition]
failregex = ^<HOST> .*POST .*xmlrpc\.php.*
ignoreregex =
jail.local
[xmlrpc]
enabled = true
filter = xmlrpc
action = iptables[name=xmlrpc, port=http, protocol=tcp]
logpath = /var/log/pound.log
bantime = 43600
maxretry = 2
And the test
fail2ban-regex /var/log/pound.log /etc/fail2ban/filter.d/xmlrpc.conf
/usr/share/fail2ban/server/filter.py:442: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
Running tests
=============
Use regex file : /etc/fail2ban/filter.d/xmlrpc.conf
Use log file : /var/log/pound.log
Results
=======
Failregex
|- Regular expressions:
| [1] ^<HOST> .*POST .*xmlrpc\.php.*
|
`- Number of matches:
[1] 0 match(es)
Ignoreregex
|- Regular expressions:
|
`- Number of matches:
Summary
=======
Sorry, no match
Look at the above section 'Running tests' which could contain important
information.
root@myserver:/etc/fail2ban#
Any idea?
Thanks
I edited the log format, so now I have this kind of log entry:
Aug 3 06:25:51 ns111111 pound: 141.101.96.94 POST /xmlrpc.php HTTP/1.1 - HTTP/1.1 200 OK
So I tried this, and it works:
fail2ban-regex 'Aug 3 06:25:51 ns111111 pound: 141.101.96.94 POST /xmlrpc.php HTTP/1.1 - HTTP/1.1 200 OK' 'ns111111 pound: <HOST> .*POST .*xmlrpc\.php.*'
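The anchoring problem can also be prototyped outside of fail2ban with plain Python re. The HOST pattern below is only a simplified stand-in for what fail2ban substitutes for <HOST>, and fail2ban's own date handling is more involved, so treat this as an approximation:
import re

# Simplified stand-in for fail2ban's <HOST> substitution
HOST = r'(?P<host>[\w\-.^_]+)'

old = re.compile(r'^' + HOST + r' .*POST .*xmlrpc\.php.*')
new = re.compile(r'pound: ' + HOST + r' .*POST .*xmlrpc\.php.*')

line = 'Aug 3 06:25:51 ns111111 pound: 141.101.96.94 POST /xmlrpc.php HTTP/1.1 - HTTP/1.1 200 OK'

print(old.search(line).group('host'))  # 'Aug' - anchored at the line start, it captures the month, not the client
print(new.search(line).group('host'))  # '141.101.96.94' - anchored on 'pound: ', it captures the real client IP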

Nginx connection reset, response from uWsgi lost

I have a Django app hosted via nginx and uWSGI. For a certain very simple request, I get different behaviour for GET and POST, which should not be the case.
The uWSGI daemon log:
[pid: 32454|app: 0|req: 5/17] 127.0.0.1 () {36 vars in 636 bytes} [Tue Oct 19 11:18:36 2010] POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
[pid: 32455|app: 0|req: 5/18] 127.0.0.1 () {32 vars in 521 bytes} [Tue Oct 19 11:18:50 2010] GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
The nginx access log:
127.0.0.1 - - [19/Oct/2010:18:18:36 +0200] "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 0 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
127.0.0.1 - - [19/Oct/2010:18:18:50 +0200] "GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 80 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
The nginx error log:
2010/10/19 18:18:36 [error] 4615#0: *5 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0", upstream: "uwsgi://unix:sock/uwsgi.sock:", host: "localhost:9201"
In essence, nginx loses the response somewhere if I use POST, but not if I use GET.
Does anybody know anything about this?
Pass --post-buffering 1 to uWSGI.
This will automatically buffer any HTTP request body larger than 1 byte.
The problem is caused by the way nginx handles upstream disconnections.
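For example, if uWSGI is started from the command line against the socket shown in the error log (other options elided; adapt to your ini/init setup):
uwsgi --socket sock/uwsgi.sock --post-buffering 1 ...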
I hit the same issue, but in my case I can't disable "uwsgi_pass_request_body", as most of the time (but not always) my app does need the POST data.
This is the workaround I found while this issue is not fixed in uWSGI:
http://permalink.gmane.org/gmane.comp.python.wsgi.uwsgi.general/813
import django.core.handlers.wsgi

class ForcePostHandler(django.core.handlers.wsgi.WSGIHandler):
    """Workaround for: http://lists.unbit.it/pipermail/uwsgi/2011-February/001395.html
    """
    def get_response(self, request):
        request.POST  # force reading of POST data
        return super(ForcePostHandler, self).get_response(request)

application = ForcePostHandler()
I am facing the same issue. I tried all the solutions above, but they were not working. Ignoring the request body is simply not an option in my case.
Apparently it is a bug in nginx and uWSGI when dealing with POST requests whose response is smaller than 4052 bytes.
What solved it for me was adding "--pep3333-input" to the parameter list of uWSGI. After that, all POSTs are returned correctly.
Versions of nginx/uwsgi I'm using:
$ nginx -V
nginx: nginx version: nginx/0.9.6
$ uwsgi --version
uWSGI 0.9.7
After a lucky find during further research (http://answerpot.com/showthread.php?577619-Several%20Bugs/Page2), I found something that helped...
Supplying the uwsgi_pass_request_body off; directive in the nginx conf resolves this problem.
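For context, a sketch of where that directive might sit, assuming a typical uwsgi_pass location block pointing at the socket from the error log (and bearing in mind the earlier caveat that this prevents the request body from reaching the application):
location / {
    include uwsgi_params;
    uwsgi_pass unix:sock/uwsgi.sock;
    uwsgi_pass_request_body off;
}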