Communication between Flask and ESP8266 via SocketIO (Updated 2x) - flask

I have a small web app whose back-end is a Flask+SocketIO server. I would like to get some data from an ESP8266 into the app. The simplest way I could think of to achieve this was to have the microcontroller connect directly to the back-end.
I am using the timum-viw library with this example code to implement the client on the microcontroller.
The problem is that when I try to run the example I get
(12765) accepted ('192.168.0.11', 59848)
192.168.0.11 - - [06/Jul/2020 18:15:25] "GET /socket.io/?transport=websocket HTTP/1.1" 400 122 0.000265
192.168.0.11 - - [06/Jul/2020 18:15:31] code 400, message Bad request syntax ('This is a webSocket client!')
192.168.0.11 - - [06/Jul/2020 18:15:31] "This is a webSocket client!" 400 -
in the terminal window of the dev server. (The IP belongs to the ESP8266.)
I have the same experience with the arduinoWebSockets library and the WebSocketClientSocketIO example.
Can you help me figure out what the problem is?
Update
Everything is hosted locally at this point. I am running the Flask dev server with python3 flask_main.py; eventlet is installed.
The minimal code that manifests the problem:
Arduino:
#include <SocketIoClient.h>
#include <Arduino.h>
#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>
#include <Hash.h>

#define USE_SERIAL Serial
#define SSID_primary "**********"
#define WIFI_PWD_primary "**********"
#define SERVER_IP "192.168.0.7"
#define SERVER_PORT 5005

ESP8266WiFiMulti wifiMulti;
SocketIoClient socketIOClient;

void setup() {
    //// set up serial communication
    USE_SERIAL.begin(115200);
    USE_SERIAL.setDebugOutput(true);
    for (uint8_t t = 4; t > 0; t--) {
        USE_SERIAL.printf("[SETUP] BOOT WAIT %d...\n", t);
        USE_SERIAL.flush();
        delay(1000);
    }

    //// connect to some access point
    wifiMulti.addAP(SSID_primary, WIFI_PWD_primary);
    while (wifiMulti.run() != WL_CONNECTED) {
        delay(500);
        USE_SERIAL.print("Looking for WiFi ");
    }
    USE_SERIAL.printf("Connected to %s\n", WiFi.SSID().c_str());
    USE_SERIAL.printf("My local IP address is %s\n", WiFi.localIP().toString().c_str());

    //// set up socket communication
    socketIOClient.begin(SERVER_IP, SERVER_PORT);
}

void loop() {
    socketIOClient.emit("message", "\"hi there :)\"");
    socketIOClient.loop();
    delay(1000);
}
Flask minimal code:
from flask import Flask, render_template, request
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)

@socketio.on('message')
def handle_message_event(msg):
    print('received msg from {} : {}'.format(request.remote_addr, str(msg)))

if __name__ == '__main__':
    socketio.run(app, host="0.0.0.0", port=5005, debug=True)
The code below is for debugging only. I do not wish to use it in any form later.
Weirdly enough, the Arduino code works fine with a node.js server:
var app = require('express')();
var http = require('http').createServer(app);
var io = require('socket.io')(5005);

io.attach(http, {
    pingInterval: 10000,
    pingTimeout: 5000,
    cookie: false
});

io.on('connection', function (socket) {
    console.log('user connected');
    socket.on('disconnect', function () {
        console.log('user disconnected');
    });
    socket.on('message', function (msg) {
        console.log("message: " + msg);
    });
    timeout();
});

http.listen();
Could there be something wrong with my Flask? It responds to connections from this:
from socketIO_client import SocketIO, LoggingNamespace

socketIO = SocketIO('localhost', 5005, LoggingNamespace)
while True:
    _ = raw_input("> ")
    socketIO.emit('message', "hello 2")
But the node server does not!
Update 2
So I went ahead and looked at the communication with Wireshark:
Python client & Flask server (works)
The payload of frame 27:
Hypertext Transfer Protocol
GET /socket.io/?EIO=3&transport=websocket&sid=8f47e9a521404b66b23cd985cdee049d HTTP/1.1\r\n
Upgrade: websocket\r\n
Host: localhost:5005\r\n
Origin: http://localhost:5005\r\n
Sec-WebSocket-Key: TQ589ew7EgwDILWb50Eu9Q==\r\n
Sec-WebSocket-Version: 13\r\n
Connection: upgrade\r\n
Connection: keep-alive\r\n
Accept-Encoding: gzip, deflate\r\n
Accept: */*\r\n
User-Agent: python-requests/2.18.4\r\n
\r\n
[Full request URI: http://localhost:5005/socket.io/?EIO=3&transport=websocket&sid=8f47e9a521404b66b23cd985cdee049d]
[HTTP request 1/1]
[Response in frame: 29]
Doing the same with the Arduino & Flask (does not work)
The payload of frame 34:
Hypertext Transfer Protocol
GET /socket.io/?transport=websocket HTTP/1.1\r\n
Host: 192.168.0.7:5005\r\n
Connection: Upgrade\r\n
Upgrade: websocket\r\n
Sec-WebSocket-Version: 13\r\n
Sec-WebSocket-Key: D9+/7YOHoA8lW7a/0V8vsA==\r\n
Sec-WebSocket-Protocol: arduino\r\n
Origin: file://\r\n
User-Agent: arduino-WebSocket-Client\r\n
\r\n
[Full request URI: http://192.168.0.7:5005/socket.io/?transport=websocket]
[HTTP request 1/1]
[Response in frame: 36]

So it turns out that Flask freaks out about the
Origin: file://\r\n
header because it treats it as a cross-origin (CORS) request. This is why this answer actually works; however, I think it is the wrong fix. Removing the extra header entry is the right way to go about it, and it is most simply done by editing this line to match this:
_client.extraHeaders = WEBSOCKETS_STRING("");
in your local copy of the library.
There goes hours of research :D
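An alternative, if you would rather not patch the library, would be to relax the origin check on the server side instead. A minimal sketch, assuming a Flask-SocketIO version that exposes the cors_allowed_origins parameter:

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'

# Accept any Origin header (or list the specific origins you expect).
# This is meant to avoid the 400 triggered by the client's "Origin: file://" header.
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on('message')
def handle_message_event(msg):
    print('received msg: {}'.format(msg))

if __name__ == '__main__':
    socketio.run(app, host="0.0.0.0", port=5005, debug=True)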

Related

Python Orion Context Broker Token problems

I've been developing the following code:
import json
import requests

datos = {
    "id": "1",
    "type": "Car",
    "bra": "0",
}
jsonData = json.dumps(datos)

url = 'http://130.456.456.555:1026/v2/entities'
head = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Auth-Token": token   # token is assumed to have been obtained earlier
}
response = requests.post(url, data=jsonData, headers=head)
My problem is that I can't establish a connection between my computer and my FIWARE Lab instance.
The error is:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='130.206.113.177', port=1026): Max retries exceeded with url: /v1/entities (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02c97c1f90>: Failed to establish a new connection: [Errno 110] Connection timed out',))
Seems to be a network connectivity problem.
Assuming that there actually is an Orion process listening on port 1026 at IP 130.206.113.177 (this should be checked, e.g. with a curl localhost:1026/version command executed in the same VM where Orion runs), the most probable causes of Orion connection problems are:
- Something in the Orion host (e.g. a firewall or security group) is blocking the incoming connection
- Something in the client host (e.g. a firewall) is blocking the outgoing connection
- Some other network issue is causing the connection problem.
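From the client machine, a similar probe to the curl check above can be run from Python with a short timeout; a timeout or connection error here points at the network or a firewall rather than at the request body or headers. A minimal sketch (host and port taken from the error message above, using the same /version path the answer mentions):

import requests

try:
    # probe the same /version path the answer suggests checking with curl
    r = requests.get('http://130.206.113.177:1026/version', timeout=5)
    print(r.status_code)
    print(r.text)
except requests.exceptions.Timeout:
    print('Timed out - most likely a firewall/security-group or routing issue')
except requests.exceptions.ConnectionError as e:
    print('Could not connect:', e)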

How to enable CORS in python

Let me start this with: I do not know Python; I've had maybe one day going through the Python tutorials. The situation is this: I have an Angular app that embeds, in an iframe, a Python app hosted with Apache on a VM. I didn't write the Python app, but another developer wrote me an endpoint that I am supposed to be able to post to from my Angular app.
The developer who made the Python endpoint is saying that there is something wrong with my request, but I am fairly certain there isn't anything wrong. I am almost 100% certain that the problem is that there are no CORS headers in the response and/or the endpoint is not set up to respond to the OPTIONS method. Below is the entirety of the Python endpoint:
import os, site, inspect
site.addsitedir(os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) + "/../")
import json
from datetime import datetime
import pymongo
from Config import Config

def application(environ, start_response):
    response = environ['wsgi.input'].read(int(environ['CONTENT_LENGTH']))
    if response:
        json_response = json.loads(response)
        document = {
            'payment_id': json_response['payment_id'],
            'log': json_response['log'],
            'login_id': json_response['login_id'],
            'browser': environ.get('HTTP_USER_AGENT', None),
            'ip_address': environ.get('REMOTE_ADDR', None),
            'created_at': datetime.utcnow(),
        }
        client = pymongo.MongoClient(Config.getValue('MongoServer'))
        db = client.updatepromise
        db.PaymentLogs.insert(document)
        start_response('200 OK', [('Content-Type', 'application/json')])
        return '{"success": true}'
    start_response('400 Bad Request', [('Content-Type', 'application/json')])
    return '{"success": false}'
I have attempted the following to make this work: I added more headers to both start_response calls, so the code now looks like this:
start_response('201 OK', [('Content-Type', 'application/json'),
    ('Access-Control-Allow-Headers', 'authorization'),
    ('Access-Control-Allow-Methods', 'HEAD, GET, POST, PUT, PATCH, DELETE'),
    ('Access-Control-Allow-Origin', '*'),
    ('Access-Control-Max-Age', '600')])
Note: I did this with both the 200 and the 400 response at first and saw no change at all in the response. Then, just for the heck of it, I decided to change the 200 to a 201; this also did not come through in the response, so I suspect this code isn't even getting run for some reason.
Please help, python newb here.
Addendum: I figured this would help; here is what the headers look like in the response:
General:
Request URL: http://rpc.local/api/payment_log_api.py
Request Method: OPTIONS
Status Code: 200 OK
Remote Address: 10.1.20.233:80
Referrer Policy: no-referrer-when-downgrade
Response Headers:
Allow: GET,HEAD,POST,OPTIONS
Connection: Keep-Alive
Content-Length: 0
Content-Type: text/x-python
Date: Fri, 27 Apr 2018 15:18:55 GMT
Keep-Alive: timeout=5, max=100
Server: Apache/2.4.18 (Ubuntu)
Request Headers:
Accept: */*
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Access-Control-Request-Headers: authorization,content-type
Access-Control-Request-Method: POST
Connection: keep-alive
Host: rpc.local
Origin: http://10.1.20.61:4200
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36
Here it is. Just add this to the application right at the beginning:
def application(environ, start_response):
    if environ['REQUEST_METHOD'] == 'OPTIONS':
        start_response(
            '200 OK',
            [
                ('Content-Type', 'application/json'),
                ('Access-Control-Allow-Origin', '*'),
                ('Access-Control-Allow-Headers', 'Authorization, Content-Type'),
                ('Access-Control-Allow-Methods', 'POST'),
            ]
        )
        return ''
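One caveat worth adding: the snippet above only answers the OPTIONS preflight. Browsers also expect an Access-Control-Allow-Origin header on the actual POST response, so the endpoint's success branch needs that header as well. A stripped-down sketch of the overall shape (illustrative only; the real endpoint would keep its Mongo and JSON handling):

def application(environ, start_response):
    if environ['REQUEST_METHOD'] == 'OPTIONS':
        # answer the preflight with the CORS headers and an empty body
        start_response('200 OK', [
            ('Content-Type', 'application/json'),
            ('Access-Control-Allow-Origin', '*'),
            ('Access-Control-Allow-Headers', 'Authorization, Content-Type'),
            ('Access-Control-Allow-Methods', 'POST'),
        ])
        return ['']

    # the actual POST response needs Access-Control-Allow-Origin as well,
    # otherwise the browser discards the body even though the request ran
    start_response('200 OK', [
        ('Content-Type', 'application/json'),
        ('Access-Control-Allow-Origin', '*'),
    ])
    return ['{"success": true}']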
For Python with CGI, I found this to work:
print '''Access-Control-Allow-Origin: *\r\n''',
print '''Content-Type: text/html\r\n'''
Don't forget to enable CORS on the other side as well, e.g., JavaScript jQuery:
$.ajax({ url: URL,
type: "GET",
crossDomain: true,
dataType: "text", etc, etc

ESP8266 NodeMCU Lua "Socket client" to "Python Server" connection not possible

I was trying to connect a NodeMCU Socket client program to a Python server program, but I was not able to establish a connection.
I tested a simple Python client/server example and it worked well.
Python Server Code
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345                # Reserve a port for your service.
s.bind((host, port))        # Bind to the port
s.listen(5)                 # Now wait for client connection.
while True:
    c, addr = s.accept()    # Establish connection with client.
    print 'Got connection from', addr
    print c.recv(1024)
    c.send('Thank you for connecting')
    c.close()               # Close the connection
Python client code (with this I tested the above code)
import socket # Import socket module
s = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345 # Reserve a port for your service.
s.connect((host, port))
s.send('Hi i am aslam')
print s.recv(1024)
s.close()                   # Close the socket when done
The output on the server side was
Got connection from ('192.168.99.1', 65385)
Hi i am aslam
NodeMCU code
--set wifi as station
print("Setting up WIFI...")
wifi.setmode(wifi.STATION)
--modify according your wireless router settings
wifi.sta.config("xxx", "xxx")
wifi.sta.connect()
function postThingSpeak()
print("hi")
srv = net.createConnection(net.TCP, 0)
srv:on("receive", function(sck, c) print(c) end)
srv:connect(12345, "192.168.0.104")
srv:on("connection", function(sck, c)
print("Wait for connection before sending.")
sck:send("hi how r u")
end)
end
tmr.alarm(1, 1000, 1, function()
if wifi.sta.getip() == nil then
print("Waiting for IP address...")
else
tmr.stop(1)
print("WiFi connection established, IP address: " .. wifi.sta.getip())
print("You have 3 seconds to abort")
print("Waiting...")
tmr.alarm(0, 3000, 0, postThingSpeak)
end
end)
But when I run the NodeMCU code there is no response on the Python server.
The output in the ESPlorer console looks like
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
WiFi connection established, IP address: 192.168.0.103
You have 3 seconds to abort
Waiting...
hi
Am I doing something wrong or missing some steps here?
Your guidance is appreciated.
After I revisited this for the second time it finally clicked. I must have scanned your Lua code too quickly the first time.
You need to set up all event handlers (srv:on) before you establish the connection. They may not fire otherwise - depending on how quickly the connection is established.
srv = net.createConnection(net.TCP, 0)
srv:on("receive", function(sck, c) print(c) end)
srv:on("connection", function(sck)
    print("Wait for connection before sending.")
    sck:send("hi how r u")
end)
srv:connect(12345, "192.168.0.104")
The example in our API documentation is wrong but it's already fixed in the dev branch.

sf::Http::sendRequest never returns

I've written a simple web service using Pistache. I'm sending requests to it using the sf::Http and sf::Http::Request classes. However, the call to sf::Http::sendRequest never returns, even though I specified a 250 ms timeout. This happens only with requests to my own service. If I send a GET request to www.google.com, the method returns a correct response quite quickly.
Here's the client-side code sample:
sf::Http http("http://192.168.1.10", 8080);
sf::Http::Request request("/highscores", sf::Http::Request::Method::Get);
request.setHttpVersion(1, 1);
//the call below never returns
auto response = http.sendRequest(request, sf::seconds(0.25f));
std::cout << response.getBody();
The service's response looks correct in a browser and in curl:
$ curl -v 192.168.1.10:8080/highscores
* Trying 192.168.1.10...
* Connected to 192.168.1.10 (192.168.1.10) port 8080 (#0)
> GET /highscores HTTP/1.1
> Host: 192.168.1.10:8080
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Connection: Keep-Alive
< Content-Length: 2
<
* Connection #0 to host 192.168.1.10 left intact
[]%
Using strace on my application shows that it sends correct request and even at some point it receives the correct response:
$ strace -s 192 ./sfmlApplication
...
sendto(20, "GET /highscores HTTP/1.1\r\nconnection: close\r\ncontent-length: 0\r\ncontent-type: application/json\r\nfrom: user#sfml-dev.org\r\nhost: 192.168.1.10\r\nuser-agent: libsfml-network/2.x\r\n\r\n", 176, MSG_NOSIGNAL, NULL, 0) = 176
recvfrom(20, "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nConnection: Keep-Alive\r\nContent-Length: 2\r\n\r\n[]", 1024, MSG_NOSIGNAL, NULL, NULL) = 96
recvfrom(20,
These are the last lines from strace output, after recvfrom(20, the program stops responding and has to be killed.
And the top of stack trace of blocked operation is:
recv() at 0x7ffff7bcd10f
sf::TcpSocket::receive() at 0x7ffff77b12c0
sf::Http::sendRequest() at 0x7ffff77ad5ed
SFML Version: 2.3.2
System: Fedora 4.8.4-200.fc24.x86_64
Any ideas why the sf::Http::sendRequest method call never returns?

http 403 error + "readv() failed (104: Connection reset by peer) while reading upstream"

Preface: I'm running nginx + gunicorn + django on an Amazon EC2 instance using s3boto as the default storage backend. I am on the free tier. The EC2 security group allows HTTP, SSH, and HTTPS.
I'm attempting to send a multipart/form-data request containing a single element: a photo. When attempting to upload the photo, the iPhone (where the request is coming from) hangs. The photo is around 9.5 MB in size.
When I check the nginx-access.logs:
"POST /myUrl/ HTTP/1.1" 400 5 "-""....
When I check the nginx-error.logs:
[error] 5562#0: *1 readv() failed (104: Connection reset by peer) while reading upstream, client: my.ip.addr.iphone, server: default, request: "POST /myUrl/ HTTP/1.1", upstream: "http://127.0.0.1:8000/myUrl/", host: "ec2-my-server-ip-addr.the-location-2.compute.amazonaws.com"
[info] 5562#0: *1 client my.ip.addr.iphone closed keepalive connection
I really cannot figure out why this is happening... I have tried changing the /etc/nginx/sites-available/default timeout settings...
server {
    ...
    client_max_body_size 20M;
    client_body_buffer_size 20M;

    location / {
        keepalive_timeout 300;
        proxy_read_timeout 300;
    }
}
Any thoughts?
EDIT: After talking on IRC a little more, his problem is the 403 itself, not the nginx error. Leaving my comments on the nginx error below, in case anyone else stumbles into it someday.
I ran into this very problem last week and spent quite a while trying to figure out what was going on. See here: https://github.com/benoitc/gunicorn/issues/872
Basically, as soon as django sees the headers, it knows that the request isn't authenticated. It doesn't wait for the large request body to finish uploading; it responds immediately, and gunicorn closes the connection right after. nginx keeps sending data, and the end result is that gunicorn sends a RST packet to nginx. Once this happens, nginx cannot recover and instead of sending the actual response from gunicorn/django, it sends a 502 Bad Gateway.
I ended up putting in a piece of middleware that accesses a couple of fields in the Django request, which ensures that the entire request body is downloaded before Django sends a response:
import re

checker = re.compile(feed_url_regexp)

class AccessPostBodyMiddleware:
    def process_request(self, request):
        if checker.match(request.path.lstrip('/')) is not None:
            # just need to access the request info here
            # not sure which one of these actually does the trick.
            # This will download the entire request,
            # fixing this random issue between gunicorn and nginx
            _ = request.POST
            _ = request.REQUEST
            _ = request.body
        return None
However, I do not have control of the client. Since you do (in the form of your iPhone app), maybe you can find a way to handle the 502 Bad Gateway. That will keep your app from having to send the entire request twice.