I have a toy Flask app:
from flask import Flask

app = Flask(__name__)

@app.route("/home")
def home():
    return "<h1>Home...</h1>"

@app.route("/health")
def health():
    return "<h1>Healthy</h1>"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
.. which is being provisioned using Ansible on a Vagrant guest machine with the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "geerlingguy/centos7"

  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbooks/run.yml"
    ansible.tags = "install"
  end

  config.vm.network "forwarded_port", guest: 5000, host: 5000

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end
end
If I vagrant ssh to the guest and launch the Flask app with the following:
export FLASK_APP='/flaskapp/app.py'
export FLASK_ENV='development'
cd /flaskapp
python3 -m flask run
I can curl 127.0.0.1:5000/home with a successful response.
However, from the Vagrant host (i.e. not the guest where the Flask app is running), I cannot access http://localhost:5000/home. Accessing localhost:5000/home (or 127.0.0.1:5000/home) gives:
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
I'm using Windows 10, and the guest VM's network config looks like this:
I have tried assigning a static IP and assigning one dynamically, but I still cannot access the app. I am not sure if it's something that I need to add to the /etc/hosts file...
Can anyone help to identify the issue here and how can I debug this kind of problem?
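One thing worth checking (an assumption on my part, not something visible in the post): `python3 -m flask run` binds to 127.0.0.1 by default, and the `app.run(host="0.0.0.0", ...)` line only takes effect when the script is executed directly, so the server may be listening on loopback only and VirtualBox's forwarded port never reaches it. Running `flask run --host=0.0.0.0` would rule that out. The binding behaviour can be sketched with a plain TCP test (the listener here is a stand-in, not your Flask app):

```python
import socket

def can_connect(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Simulate a server that, like `flask run` by default, listens only on
# the loopback interface; connections arriving via other interfaces
# (e.g. VirtualBox's forwarded port) would be refused.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

print(can_connect("127.0.0.1", port))   # True: reachable via loopback
srv.close()
```

Inside the guest, the same `can_connect` check against the port, plus `ss -tlnp` to see which address the server is bound to, usually narrows this kind of problem down quickly.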
I have created a sample web server using Python in a GCP VM with the code below.
from http.server import BaseHTTPRequestHandler, HTTPServer

hostName = "localhost"
serverPort = 5500

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
        self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
        self.wfile.write(bytes("<body>", "utf-8"))
        self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
        self.wfile.write(bytes("</body></html>", "utf-8"))

if __name__ == "__main__":
    webServer = HTTPServer((hostName, serverPort), MyServer)
    print("Server started http://%s:%s" % (hostName, serverPort))
    try:
        webServer.serve_forever()
    except KeyboardInterrupt:
        pass
    webServer.server_close()
    print("Server stopped.")
I can access this server on the VM with the following command:
user@instance1:~$ curl http://localhost:5500
<html><head><title>https://pythonbasics.org</title></head><p>Request: /</p><body><p>This is an example web server.</p></body></html>
I have created a firewall rule to allow access from all source IPs, but I am not able to access the server using the external IP. I have also tried from an external browser.
Firewall rule screen: Allow HTTP(S). Here the external IP is 34.122.198.62
user@instance1:~$ curl http://34.122.198.62:5500/
curl: (7) Failed to connect to 34.122.198.62 port 5500: Connection refused
Could you please help resolve the issue? Thank you in advance.
Running the server bound to 0.0.0.0 instead of localhost (for example, python3 manage.py runserver 0.0.0.0:8000 for Django) resolved the issue.
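For the http.server example above, the same principle applies: pass "0.0.0.0" instead of "localhost" as the bind address so the server accepts connections on every interface, not just loopback. A minimal sketch (port 0 is used here so the OS picks a free port; the question's server would keep 5500):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# "0.0.0.0" binds every interface; "localhost" would bind loopback only.
server = HTTPServer(("0.0.0.0", 0), Hello)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
    print(resp.read().decode())   # hello

server.shutdown()
server.server_close()
```

With the firewall rule in place, a server bound this way should then also answer on the VM's external IP.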
I have two Django instances running on two servers, and I am using Memcached to cache some data in my applications.
Each server has its own memcached installed. I want both of my applications to have access to both caches, but I can't: when I set a value in the cache from one application, the other application can't access it.
My memcached instances are running as root; I have also tried the memcache user and other users, but it didn't fix the problem.
For testing I used the Django shell. Import the cache class:
from django.core.cache import cache
set a value in the cache:
cache.set('foo', 'bar', 3000)
and tried to get the value from my other Django instance:
cache.get('foo')
but it returns nothing!
Here is my settings.py file:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': [
            'first app server ip:port',
            'second app server ip:port',
        ],
    }
}
and my memcached.conf (comments deleted):
-d
logfile /var/log/memcache/memcached.log
# -v
-vv
-m 512
-p 11211
-u root
-l 192.168.174.160
# -c 1024
# -k
# -M
# -r
-P /var/run/memcached/memcached.pid
The order of the LOCATION entries in settings must be the same on all servers. Could you please check that they are the same?
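To illustrate (a sketch with placeholder addresses; the first IP is taken from the question's -l 192.168.174.160, the second is hypothetical): both servers' settings.py should list the two memcached nodes identically and in the same order, since the client library maps keys to nodes based on their position in the LOCATION list.

```python
# settings.py -- identical on BOTH application servers.
# The second address is a placeholder; use your real node IPs.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': [
            '192.168.174.160:11211',   # first memcached node
            '192.168.174.161:11211',   # second memcached node (placeholder)
        ],
    }
}
```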
Summary
I have a Flask application deployed to Kubernetes with Python 2.7.12, Flask 0.12.2, and the requests library. I'm getting an SSLError while using requests.Session to send a POST request from inside the container: whenever a session connects to an https URL, requests throws an SSLError.
Some background
I have not added any certificates
The project works when I run the Docker image locally, but after deployment to Kubernetes, the POST request is not sent to the URL from inside the container
verify=False does not work either
System Info - What I am using:
Python 2.7.12, Flask==0.12.2, Kubernetes, python-requests-2.18.4
Expected Result
Get HTTP Response code 200 after sending a POST request
Error Logs
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 511, in send
raise SSLError(e, request=request)
SSLError: HTTPSConnectionPool(host='dev.domain.nl', port=443): Max retries exceeded with url: /ingestion?LrnDevEui=0059AC0000152A03&LrnFPort=1&LrnInfos=TWA_100006356.873.AS-1-135680630&AS_ID=testserver&Time=2018-06-22T11%3A41%3A08.163%2B02%3A00&Token=1765b08354dfdec (Caused by SSLError(SSLEOFError(8, u'EOF occurred in violation of protocol (_ssl.c:661)'),))
/usr/local/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
Reproduction Steps
import json
import threading

import requests
from flask import Flask, request, jsonify
from requests import Request, Session

app = Flask(__name__)

sess = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries=200)
sess.mount('http://', adapter)
sess.mount('https://', adapter)
sess.cert = '/usr/local/lib/python2.7/site-packages/certifi/cacert.pem'

def test_post():
    url = 'https://dev.domain.nl/ingestion/?'
    header = {'Content-Type': 'application/json', 'Accept': 'application/json'}
    # somepara and data are defined elsewhere in the application
    response = sess.post(url, headers=header, params=somepara, data=json.dumps(data), verify=True)
    print response.status_code
    return response.status_code

def main():
    threading.Timer(10.0, main).start()
    test_post()

if __name__ == '__main__':
    main()
    app.run(host="0.0.0.0", debug=True, port=5001, threaded=True)
Docker File
FROM python:2.7-alpine
COPY ./web /web
WORKDIR /web
RUN pip install -r requirements.txt
ENV FLASK_APP app.py
EXPOSE 5001
EXPOSE 443
CMD ["python", "app.py"]
The problem may be the Alpine Docker image, which lacks CA certificates. On your laptop the code works because it uses the CA certs from your local workstation. I would expect that running the Docker image locally fails too, so the problem is not k8s.
Try to add the following line to the Dockerfile:
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
It will install CA certs inside the container.
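To confirm the fix from inside the container, you can check where Python's ssl module expects the CA bundle and whether that file exists; a small sketch:

```python
import os
import ssl

# Where this Python build looks for CA certificates by default.
paths = ssl.get_default_verify_paths()
cafile = paths.cafile or paths.openssl_cafile
print("CA bundle:", cafile)
# On a bare Alpine image this file is typically absent until the
# ca-certificates package is installed.
print("exists:", bool(cafile) and os.path.exists(cafile))
```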
I am testing my Flask app on AWS EC2 instance (Ubuntu).
Main app:
from sonip.api.factory import create_app

app = create_app()

def main():
    app.run(debug=True, threaded=True)

if __name__ == "__main__":
    main()
The actual setup of the Flask app is done in a factory, including registering the blueprint, etc.
def create_app():
    app = Flask(__name__)
    app.config['SERVER_NAME'] = settings.FLASK_SERVER_NAME
    app.config['SWAGGER_UI_DOC_EXPANSION'] = settings.RESTPLUS_SWAGGER_UI_DOC_EXPANSION
    app.config['RESTPLUS_VALIDATE'] = settings.RESTPLUS_VALIDATE
    app.config['RESTPLUS_MASK_SWAGGER'] = settings.RESTPLUS_MASK_SWAGGER
    app.config['ERROR_404_HELP'] = settings.RESTPLUS_ERROR_404_HELP
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://{}:{}@{}:{}/{}".format(
        settings.DB_USER,
        settings.DB_PASS,
        settings.DB_HOST,
        settings.DB_PORT,
        settings.DB_NAME)
    db.init_app(app)

    blueprint = Blueprint('api', __name__, url_prefix='/api')
    app.register_blueprint(blueprint)

    return app
When I run python application.py and use curl -X GET http://localhost:5000/api, it returns the correct Swagger page. However, if I tried to run the app by specifying host=0.0.0.0 for external traffic, I got 404 for the same request.
(env) ubuntu@ip-172-31-18-136:~/aae$ python application.py
* Serving Flask app "sonip.api.factory" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://localhost:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 948-062-124
127.0.0.1 - - [16/May/2018 18:07:24] "GET /api/ HTTP/1.1" 200 -
(env) ubuntu@ip-172-31-18-136:~/aae$ vi application.py
(env) ubuntu@ip-172-31-18-136:~/aae$ python application.py
* Serving Flask app "sonip.api.factory" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 948-062-124
165.225.34.185 - - [16/May/2018 18:08:28] "GET /api/ HTTP/1.1" 404 -
Port 5000 is open to allow all inbound traffic in the security group. I tried a vanilla Flask app with just a few lines of code, and it worked just fine.
app = Flask(__name__)

if __name__ == '__main__':
    app.run(debug=True, port=8080, host='0.0.0.0')
This at least means 5000 is fine. Could it be the Blueprint or Swagger?
Setting SERVER_NAME and changing the host/port to something different in app.run() used to be a recipe for problems. At best, it's under-documented.
Try changing settings.FLASK_SERVER_NAME to 0.0.0.0:5000. Or if your app wants to be using cookies, try the trick of using something.dev:5000 and adding an entry for something.dev to your local /etc/hosts.
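A likely mechanism behind the 404 (an assumption, but consistent with Flask's documented behavior): when SERVER_NAME is set, Flask only matches routes for requests whose Host header equals that value, so requests arriving under any other hostname get a 404 even though the route exists. A sketch of the two options, reusing the question's factory names:

```python
def create_app():
    app = Flask(__name__)

    # Option 1: don't set SERVER_NAME at all, so any Host header matches.
    # (Simply delete the line below.)
    # app.config['SERVER_NAME'] = settings.FLASK_SERVER_NAME

    # Option 2: make it match exactly how clients reach the app, e.g.
    # app.config['SERVER_NAME'] = 'something.dev:5000'  # hypothetical hostname

    ...  # rest of the factory unchanged
```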
I am using RabbitMQ as the broker for Celery. The issue occurs while running the command:
celery -A proj worker --loglevel=info
The celery console shows this:
[2017-06-23 07:57:09,261: ERROR/MainProcess] consumer: Cannot connect to amqp://bruce:**@127.0.0.1:5672//: timed out.
Trying again in 2.00 seconds...
[2017-06-23 07:57:15,285: ERROR/MainProcess] consumer: Cannot connect to amqp://bruce:**@127.0.0.1:5672//: timed out.
Trying again in 4.00 seconds...
Following are the logs from RabbitMQ:
=ERROR REPORT==== 23-Jun-2017::13:28:58 ===
closing AMQP connection <0.18756.0> (127.0.0.1:58424 -> 127.0.0.1:5672):
{handshake_timeout,frame_header}
=INFO REPORT==== 23-Jun-2017::13:29:04 ===
accepting AMQP connection <0.18897.0> (127.0.0.1:58425 -> 127.0.0.1:5672)
=ERROR REPORT==== 23-Jun-2017::13:29:14 ===
closing AMQP connection <0.18897.0> (127.0.0.1:58425 -> 127.0.0.1:5672):
{handshake_timeout,frame_header}
=INFO REPORT==== 23-Jun-2017::13:29:22 ===
accepting AMQP connection <0.19054.0> (127.0.0.1:58426 -> 127.0.0.1:5672)
Any input would be appreciated.
I know it's late, but I came across the same issue today and spent almost an hour finding the exact fix. Thought it might help someone else.
I was using Celery version 4.1.0.
Make sure you have configured RabbitMQ properly; if not, please configure it as described on the page http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html#setting-up-rabbitmq
Also cross-check that the broker URL is correct. Here is the broker URL syntax:
amqp://user_name:password@localhost/vhost_name
You might not need to specify the port number, since the default one will be selected automatically.
If you use the same values as the setup tutorial linked above, your broker URL will be:
amqp://myuser:mypassword@localhost/myvhost
Follow this project structure
Project
../app
../Project
../settings.py
../celery.py
../tasks.py
../celery_config.py
celery_config.py
# - - - - - - - - - -
# BROKER SETTINGS
# - - - - - - - - - -
# BROKER_URL = os.environ['APP_BROKER_URL']
BROKER_HEARTBEAT = 10
BROKER_HEARTBEAT_CHECKRATE = 2.0
# Setting BROKER_POOL_LIMIT to None disables pooling
# Disabling pooling causes open/close connections for every task.
# However, the rabbitMQ cluster being behind an Elastic Load Balancer,
# the pooling is not working correctly,
# and the connection is lost at some point.
# There seems no other way around it for the time being.
BROKER_POOL_LIMIT = None
BROKER_TRANSPORT_OPTIONS = {'confirm_publish': True}
BROKER_CONNECTION_TIMEOUT = 20
BROKER_CONNECTION_RETRY = True
BROKER_CONNECTION_MAX_RETRIES = 100
celery.py
from __future__ import absolute_import, unicode_literals
from celery import Celery
from Project import celery_config

app = Celery('Project',
             broker='amqp://myuser:mypassword@localhost/myvhost',
             backend='amqp://',
             include=['Project'])

# Optional configuration, see the application user guide.
# app.conf.update(
#     result_expires=3600,
#     CELERY_BROKER_POOL_LIMIT=None,
# )

app.config_from_object(celery_config)

if __name__ == '__main__':
    app.start()
tasks.py
from __future__ import absolute_import, unicode_literals
from .celery import app

@app.task
def add(x, y):
    return x + y
Then start Celery with "celery -A Project worker -l info" from the project directory.
Everything will be fine.
set CELERY_BROKER_POOL_LIMIT = None in settings.py
This solution is for GCP users.
I've been working on GCP and faced the same issue.
The error message was:
[2022-03-15 16:56:00,318: ERROR/MainProcess] consumer: Cannot connect to amqp://root:**@34.125.161.132:5672/vhost: timed out.
I spent almost an hour on this issue and finally found the solution: the port number 5672 has to be added to the firewall rules.
Steps:
Go to Firewall
Select the default-allow-http rule
Press Edit
Find "Specified protocols and ports"
Add 5672 in the tcp box (for example, to allow more ports: 80,5672,8000)
Save the changes and there you go!