Need Help in Receiving Data using TCP in Python - python-2.7

The LED is not switching on when I compare the received data with a character. Here is my code:
```python
import RPi.GPIO as GPIO
import time
from time import sleep
import socket
from socket import *
from operator import eq

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(17, GPIO.OUT)

def led():
    IP = "192.168.0.105"
    port = 2525
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((IP, port))
    msg = s.recv(1024)
    print msg
    s.close()
    if eq(msg, 'a'):
        GPIO.output(17, GPIO.HIGH)
    if eq(msg, 'b'):
        GPIO.output(17, GPIO.LOW)
    else:
        print 'Pls Enter Valid Key'

while True:
    led()
```
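One likely cause (an assumption, since the sender isn't shown): `recv()` returns the raw bytes from the socket, which often carry a trailing newline when the sender terminates lines, so `msg` is `'a\n'` rather than `'a'` and the comparison fails. A minimal sketch of stripping the received data before comparing, using a local `socketpair` in place of the real connection (Python 3 shown; in Python 2 `recv()` returns a `str`, but the same `.strip()` applies):

```python
import socket

def decode_command(raw):
    # recv() hands back raw bytes; strip surrounding whitespace/newlines
    # before comparing against a single character
    return raw.strip()

# socketpair stands in for the real client/server connection
a, b = socket.socketpair()
a.sendall(b"a\n")                 # a sender that terminates lines with '\n'
msg = decode_command(b.recv(1024))
print(msg == b"a")                # the strip makes the comparison succeed
a.close()
b.close()
```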

Related

Django - Whatsapp Sessions for scheduled messages

I'm developing a system where users, besides all the administrative features, can register their clients' WhatsApp numbers so that automatic billing messages, congratulations, and so on are sent. The user would scan the QR code, and the system would take charge of sending messages over time through the user's WhatsApp, opening a user <-> client conversation. I'm dividing this problem into parts; for now I'm trying to read the WhatsApp Web QR code and display it in a template, and that part already works. The problem is that the webdriver is terminated as soon as the image is returned to the template, so the session cannot be validated. Either the webdriver remains open forever, or it closes before the image is sent to the template. The image needs to reach the template via the return (or some other way) while the webdriver stays active for a while. How do I solve this concurrency problem?
```python
# views.py
from django.shortcuts import render
from django.http import HttpResponse
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import base64
import time

def read_qr_code(request):
    driver = webdriver.Firefox()
    # driver.implicitly_wait(30)  # keeps the webdriver active for 2 minutes
    driver.get('https://web.whatsapp.com/')
    wait = WebDriverWait(driver, 10)
    qr_element = wait.until(EC.presence_of_element_located(
        (By.XPATH, '//*[@id="app"]/div/div/div[3]/div[1]/div/div/div[2]/div/canvas')))
    qr_image_binary = qr_element.screenshot_as_png
    qr_image_base64 = base64.b64encode(qr_image_binary).decode('utf-8')
    context = {
        'image_data': qr_image_base64
    }
    # send_qr(request, context)
    # time.sleep(20)  # waits for 2 minutes
    # driver.quit()   # closes the webdriver
    return render(request, 'read_qr_code.html', context)
```
I solved this problem using threads; the code is:
```python
# views.py
from django.shortcuts import render
from django.http import HttpResponse
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import base64
import time
import threading

def quit_driver_thread():
    time.sleep(40)
    driver.quit()

def read_qr_code(request):
    global driver
    driver = webdriver.Firefox()
    driver.implicitly_wait(120)
    driver.get('https://web.whatsapp.com/')
    wait = WebDriverWait(driver, 10)
    qr_element = wait.until(EC.presence_of_element_located(
        (By.XPATH, '//*[@id="app"]/div/div/div[3]/div[1]/div/div/div[2]/div/canvas')))
    qr_image_binary = qr_element.screenshot_as_png
    qr_image_base64 = base64.b64encode(qr_image_binary).decode('utf-8')
    context = {
        'image_data': qr_image_base64
    }
    thread = threading.Thread(target=quit_driver_thread)
    thread.start()
    return render(request, 'read_qr_code.html', context)
```
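For the delayed `driver.quit()`, `threading.Timer` from the standard library expresses the same idea without a hand-rolled sleep thread. A sketch with a stand-in object (`FakeDriver` is hypothetical; the real view would pass the Selenium driver's `quit`):

```python
import threading

class FakeDriver:
    """Stands in for the Selenium webdriver in this sketch."""
    def __init__(self):
        self.closed = False
    def quit(self):
        self.closed = True

driver = FakeDriver()
# In the real view this would be threading.Timer(40, driver.quit)
timer = threading.Timer(0.05, driver.quit)
timer.start()
timer.join()            # the view would return immediately instead of joining
print(driver.closed)    # True once the timer has fired
```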

How to run two threads, so that the RunBot function is async?

After startup, it works fine, but `@client.event` does not fire because the web server thread has taken over the event loop.
```python
import discord
from ast import literal_eval
import aiohttp
import aiosqlite
from quart import Quart, render_template, request, session, redirect, url_for, make_response, websocket
from quart_discord import DiscordOAuth2Session, requires_authorization, Unauthorized
import asyncio
from threading import Thread
import multiprocessing as mp

TOKEN = "token"
client = discord.Client(command_prefix='-=-=-=', intents=discord.Intents.all())
app = Quart(__name__)

@client.event
async def on_ready():
    print(f'{client.user} Bot Content')

@app.before_serving
async def before_serving():
    async def RunBot():
        # doesn't work
        # create new Thread
        await client.run(True)
    # loop = asyncio.get_event_loop()
    # await client.login(TOKEN)
    # loop.create_task(client.connect())

@client.event
async def on_message(message):
    print(f'New msg {message.content}, Server: {message.guild}')

@app.route("/")
async def index():
    return render_template('index.html')

if __name__ == "__main__":
    from hypercorn.config import Config
    from hypercorn.asyncio import serve
    asyncio.run(serve(app, Config()))
```
I tried a simple launch via a task, but it didn't work.
The web server must not block the client's stream.
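A common fix is to schedule the bot as a background task on the same event loop the web server uses, rather than awaiting it: `client.run()` is blocking, while `client.start(TOKEN)` is the awaitable form, which `asyncio.create_task` can schedule from `before_serving`. A minimal sketch of the pattern with plain asyncio stand-ins (`background_bot` and `serve_requests` are hypothetical placeholders for `client.start` and the hypercorn server):

```python
import asyncio

results = []

async def background_bot():
    # stands in for `await client.start(TOKEN)`: a long-lived coroutine
    for i in range(3):
        results.append(i)
        await asyncio.sleep(0.01)

async def serve_requests():
    # stands in for the hypercorn/Quart server loop
    await asyncio.sleep(0.05)
    return "served"

async def main():
    # schedule the bot WITHOUT awaiting it, so it doesn't block the server
    bot_task = asyncio.create_task(background_bot())
    status = await serve_requests()   # both coroutines now run concurrently
    await bot_task
    return status

print(asyncio.run(main()), results)  # served [0, 1, 2]
```

Because the bot is a task on the same loop, the server keeps handling requests while the bot's event handlers stay responsive.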

Code run too slow while Using Threading and Queue

I can say I've been improving at Python. I was able to write a working port scanner using Python 2.7. My question is: how can I make this code run faster? It is extremely slow when tested. I'm also not sure I set up the queue and thread definitions the way they should be. Thanks in advance.
Below is a copy of the code.
```python
import socket
import sys
import time
import Queue
import colorama
import threading
from Queue import Queue
from threading import Thread
from colorama import Fore, Back, Style

colorama.init(autoreset=True)
queue = Queue()
num_threads = 10

try:
    ipLists = open(raw_input('\033[91m[\033[92m+\033[91m]\033[92m IP Lists : \033[97m'), 'r').read().splitlines()
except:
    sys.exit('\n\033[91m{!} Please Specify a FILE \033[91m[\033[92m+\033[91m]\033[92m Example => IPS.txt\033[00m')

def wait(i, q):
    host = q.get()
    q.task_done()

def thrd(i, q):
    while True:
        wait(i, q)

def portscan():
    while True:
        for host in ipLists:
            try:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                rez = s.connect_ex((host, 22))
                if rez == 0:
                    print(Fore.GREEN + Style.DIM + 'SSH PORT 22: Open on ' + host)
                    s.close()
                    break
                else:
                    print(Fore.BLUE + Style.DIM + 'SSH PORT 22: Closed on ' + host)
                    queue.put(host)
            except socket.error:
                print(Fore.RED + Style.DIM + 'Couldn\'t connect to server')
                sys.exit(0)
            except KeyboardInterrupt:
                print('Stopping...')
                sys.exit(0)

if __name__ == "__main__":
    for i in range(int(num_threads)):
        worker = Thread(target=thrd, args=(i, queue))
        worker.setDaemon(True)
        worker.start()
        time.sleep(0.1)
    portscan()
```
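On the speed question: in the code above, the worker threads only drain the queue while the main thread does every blocking `connect_ex` sequentially, so nothing is actually scanned in parallel. The usual shape is to let the pool of workers do the connects, pulling hosts from a shared work source. A sketch using `concurrent.futures` (Python 3 syntax; the hosts list and port are placeholders):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def scan(host, port=22, timeout=1.0):
    # connect_ex returns 0 when the TCP port accepts a connection
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

def scan_all(hosts, port=22, workers=10):
    # the pool performs the blocking connects concurrently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(hosts, pool.map(lambda h: scan(h, port), hosts)))

if __name__ == "__main__":
    for host, is_open in scan_all(["127.0.0.1"], port=9).items():
        print(host, "open" if is_open else "closed")
```

Setting a per-connection timeout also matters: without it, each unreachable host can hang for the OS default, which is often the real cause of slowness.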

flask_socketio client does not receive data that is processed in another request using multiprocessing

I am using Flask with React on the client. I am trying to run a loop that updates data constantly, but when I call emit, the data is not sent, even though the loop keeps running; if I send a plain string instead, it does show up in the console.log.
```python
from flask import Flask, send_from_directory
from flask_socketio import SocketIO, emit, send
from flask_cors import CORS
from multiprocessing import Process, Queue
from servidor.api.api import Api_iv
import sys

ruta_template = '../../cliente/build'
app = Flask(__name__)  # , static_folder=ruta_template+"/static/")
app.config['SECRET_KEY'] = '085i5RIlQM'
socket = SocketIO(app, cors_allowed_origins="*", async_mode='eventlet')
CORS(app)
# sys.setrecursionlimit(5000)

class Server_api:
    def __init__(self, socket):
        self.socket = socket

    def Get_api(self, q):
        api = Api_iv()
        data = api.Get_data()
        data_api = api.Parser_data(data)
        q.put(data_api)
        return

    def Multiprocess_api(self):
        data_process = Queue()
        # pass the function itself as target, not the result of calling it
        process = Process(target=self.Get_api, args=(data_process,))
        process.start()
        process.join()
        return data_process.get()

@socket.on('stream')
def Stream_data():
    api = Server_api(socket)
    while True:
        print("Obteniendo Partidos...")
        emit("partidos", api.Multiprocess_api())
        socket.sleep(5)

if __name__ == '__main__':
    socket.run(app, debug=True)
```
Client:

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import io from 'socket.io-client';

const socket = io("http://127.0.0.1:5000");

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);

socket.emit("stream");
socket.on("partidos", (data) => {
  console.log(data);
});

// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
reportWebVitals();
```
Nothing shows up in the console.log, but on the server side the loop keeps running; it does not appear stuck, it just never delivers the value.
I found the problem: I was sending data with many characters. Apparently Socket.IO could not deliver such a large payload, so I had to split the payload into several parts so it could be sent successfully.
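The splitting described above can be as simple as slicing the serialized payload and emitting each piece with an index so the client can reassemble it (the chunk size and event name below are made up for illustration):

```python
def chunk_payload(data, size):
    # split a long string into pieces small enough to emit one by one
    return [data[i:i + size] for i in range(0, len(data), size)]

payload = "x" * 10
parts = chunk_payload(payload, 4)
print(parts)            # ['xxxx', 'xxxx', 'xx']

# the server would then emit each part, e.g.:
# for i, part in enumerate(parts):
#     emit("partidos_chunk", {"i": i, "n": len(parts), "data": part})
```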

Python urllib2 does not respect timeout

The following two lines of code hang forever:

```python
import urllib2
urllib2.urlopen('https://www.5giay.vn/', timeout=5)
```
This is with python2.7, and I have no http_proxy or any other env variables set. Any other website works fine. I can also wget the site without any issue. What could be the issue?
If you run

```python
import urllib2
url = 'https://www.5giay.vn/'
urllib2.urlopen(url, timeout=1.0)
```

wait for a few seconds, and then use C-c to interrupt the program, you'll see

```
File "/usr/lib/python2.7/ssl.py", line 260, in read
    return self._sslobj.read(len)
KeyboardInterrupt
```

This shows that the program is hanging on `self._sslobj.read(len)`.
SSL timeouts raise socket.timeout.
You can control the delay before socket.timeout is raised by calling
socket.setdefaulttimeout(1.0).
For example,
```python
import urllib2
import socket

socket.setdefaulttimeout(1.0)
url = 'https://www.5giay.vn/'
try:
    urllib2.urlopen(url, timeout=1.0)
except IOError as err:
    print('timeout')
```

```
% time script.py
timeout

real    0m3.629s
user    0m0.020s
sys     0m0.024s
```
Note that the requests module succeeds here although urllib2 did not:
```python
import requests
r = requests.get('https://www.5giay.vn/')
```
How to enforce a timeout on the entire function call:
socket.setdefaulttimeout only affects how long Python waits before an exception is raised if the server has not issued a response.
Neither it nor urlopen(..., timeout=...) enforce a time limit on the entire function call.
To do that, you could use eventlets, as shown here.
If you don't want to install eventlets, you could use multiprocessing from the standard library; though this solution will not scale as well as an asynchronous solution such as the one eventlets provides.
```python
import urllib2
import socket
import multiprocessing as mp

def timeout(t, cmd, *args, **kwds):
    pool = mp.Pool(processes=1)
    result = pool.apply_async(cmd, args=args, kwds=kwds)
    try:
        retval = result.get(timeout=t)
    except mp.TimeoutError as err:
        pool.terminate()
        pool.join()
        raise
    else:
        return retval

def open(url):
    response = urllib2.urlopen(url)
    print(response)

url = 'https://www.5giay.vn/'
try:
    timeout(5, open, url)
except mp.TimeoutError as err:
    print('timeout')
```
Running this will either succeed or timeout in about 5 seconds of wall clock time.