I've been improving at Python, and I'm now able to write a working port scanner in Python 2.7. My question is: how can I make this code run faster? It is extremely slow when tested. I'm also not sure whether I've set up the queue and thread definitions the way they should be. Thanks in advance.
Below is a copy of the code.
import socket
import sys
import time
import Queue
import colorama
import threading
from Queue import Queue
from threading import Thread
from colorama import Fore, Back, Style
colorama.init(autoreset=True)
queue = Queue()
num_threads = 10
try:
    ipLists = open(raw_input('\033[91m[\033[92m+\033[91m]\033[92m IP Lists : \033[97m'), 'r').read().splitlines()
except:
    sys.exit('\n\033[91m{!} Please specify a FILE \033[91m[\033[92m+\033[91m]\033[92m Example => IPS.txt\033[00m')

def wait(i, q):
    host = q.get()
    q.task_done()

def thrd(i, q):
    while True:
        wait(i, q)

def portscan():
    while True:
        for host in ipLists:
            try:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                rez = s.connect_ex((host, 22))
                if rez == 0:
                    print (Fore.GREEN + Style.DIM + 'SSH PORT {}: Open on ' + host)
                    s.close()
                    break
                else:
                    print (Fore.BLUE + Style.DIM + 'SSH PORT {}: Closed on ' + host)
                    queue.put(host)
            except socket.error:
                print (Fore.RED + Style.DIM + 'Couldn\'t connect to server')
                sys.exit(0)
                queue.put(host)
            except KeyboardInterrupt:
                print ('Stopping...')
                sys.exit(0)
        pass

if __name__ == "__main__":
    for i in range(int(num_threads)):
        worker = Thread(target=thrd, args=(i, queue))
        worker.setDaemon(True)
        worker.start()
    time.sleep(0.1)
    portscan()
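For comparison, here is one way such a scan could be sped up. This is only a sketch (the helper names `scan_port` and `scan_hosts` are illustrative, not from the code above, and it uses Python 3 module names; on 2.7 the module is `Queue`): give every connection attempt a short timeout, and let the worker threads pull hosts from the queue instead of iterating the whole list, so one slow or unreachable host never blocks the rest.

```python
import socket
from queue import Queue          # named "Queue" on Python 2.7
from threading import Thread

def scan_port(host, port, timeout=1.0):
    """Return True if a TCP connect to (host, port) succeeds within timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)        # the key speed-up: bound every attempt
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

def scan_hosts(hosts, port=22, num_threads=10, timeout=1.0):
    """Scan all hosts concurrently; return a {host: is_open} dict."""
    q = Queue()
    results = {}

    def worker():
        while True:
            host = q.get()
            try:
                results[host] = scan_port(host, port, timeout)
            finally:
                q.task_done()    # always mark done so q.join() can return

    for _ in range(num_threads):
        Thread(target=worker, daemon=True).start()
    for host in hosts:
        q.put(host)
    q.join()                     # block until every host has been scanned
    return results
```

The workers consume hosts from the queue, so the scan finishes in roughly `len(hosts) / num_threads` connection timeouts in the worst case rather than one timeout per host in sequence.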
How do I end a socket gracefully? I tried using signal.SIGINT but with no success. Is there another way? I'm running in development and can't seem to stop the socket after Ctrl+C. The browser console log keeps printing and locks the browser from reloading the page when app.py starts up again.
Here is my app.py:
from logging.handlers import SocketHandler
import os
import pandas as pd
import json
import threading
import signal
from flask import Flask, render_template, session, request, jsonify
from flask_socketio import SocketIO
from flask_cors import CORS, cross_origin

app = Flask(__name__)
app.debug = True
socketio = SocketIO(
    app, cors_allowed_origins="*", always_connect=True, async_mode="threading"
)
app.config["SECRET_KEY"] = "secret!"

def signal_handler(signum, frame):
    exit_event.set()
    SocketHandler.close(socketio)

exit_event = threading.Event()

@socketio.on("response_demo")
def background_task_func():
    """Example of how to send server generated events to clients."""
    i = 0
    while True:
        if exit_event.is_set():
            print(f"completed {threading.current_thread().name} : {os.getpid()} ")
            socketio.disconnect()
            socketio.settimeout(2)
            socketio.close()
            # SocketHandler.close(socketio)
            break
        socketio.sleep(5.05)
        data = {
            "Name": "data packet",
            "p": [{"x": i, "a": 12, "b": 12, "c": 10, "d": 10, "e": 10}],
        }
        data_2 = pd.DataFrame(data)
        df_json = data_2.to_json(orient="records")
        result = {"objects": json.loads(df_json)}
        socketio.emit("my_response", result, broadcast=True)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "GET":
        return render_template("index-so.html")
    exit_event.clear()
    val = request.json.get("c_check")
    bg = threading.Thread(target=background_task_func, daemon=True)
    if val == 1:
        # bg.start()
        print(f"c_check = 1")
    elif val == 0:
        try:
            print("trying to kill thread")
            exit_event.set()
        except Exception as e:
            print(e)
    print("val0 is ", val)
    response = jsonify({"data": {"val": val}})
    return response

if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal_handler)
    socketio.run(
        app, logger=True, engineio_logger=True, use_reloader=True, debug=True, port=5000
    )
The Ctrl-C stops the server. The errors that you see are from the client, which runs in the browser and is completely independent.
These errors occur because the Socket.IO protocol implements connection retries. This is actually a good thing, not a bad thing. On a production site, when the server goes offline for a moment, perhaps while restarting after an upgrade, the retries from the client ensure that the connection is reestablished as soon as the server is back up and able to receive traffic.
If you want your client to not attempt to reconnect, you can configure it as follows:
var socket = io.connect('http://server.com', {
    reconnection: false
});
The LED is not switching ON when I compare received data with a char.
Here is my code:
import RPi.GPIO as GPIO
import time
from time import sleep
import socket
from socket import *
from operator import eq

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(17, GPIO.OUT)

def led():
    IP = "192.168.0.105"
    port = 2525
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((IP, port))
    msg = s.recv(1024)
    print msg
    s.close()
    if eq(msg, 'a'):
        GPIO.output(17, GPIO.HIGH)
    if eq(msg, 'b'):
        GPIO.output(17, GPIO.LOW)
    else:
        print 'Pls Enter Valid Key'

while True:
    led()
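A likely cause (a guess, since the sender isn't shown): recv() returns the raw bytes from the wire, often with a trailing newline, so the equality check against 'a' never matches. A small normalizing helper (the name `parse_command` is illustrative) makes the comparison reliable. Note also that in the posted code the second `if` should be an `elif`: as written, receiving 'a' lights the LED and then still falls into the `else` branch and prints 'Pls Enter Valid Key'.

```python
def parse_command(raw):
    """Normalize received data before comparing (illustrative helper)."""
    if isinstance(raw, bytes):           # on Python 3, recv() returns bytes
        raw = raw.decode('ascii', errors='replace')
    return raw.strip()                   # drop any trailing '\n' / '\r'

cmd = parse_command(b'a\n')              # simulate what recv() might return
if cmd == 'a':
    state = 'HIGH'
elif cmd == 'b':                         # elif, so 'a' never reaches the else
    state = 'LOW'
else:
    state = 'invalid'
```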
The following two lines of code hang forever:
import urllib2
urllib2.urlopen('https://www.5giay.vn/', timeout=5)
This is with python2.7, and I have no http_proxy or any other env variables set. Any other website works fine. I can also wget the site without any issue. What could be the issue?
If you run

import urllib2
url = 'https://www.5giay.vn/'
urllib2.urlopen(url, timeout=1.0)

wait for a few seconds, and then use C-c to interrupt the program, you'll see

  File "/usr/lib/python2.7/ssl.py", line 260, in read
    return self._sslobj.read(len)
KeyboardInterrupt

This shows that the program is hanging on self._sslobj.read(len).
SSL timeouts raise socket.timeout.
You can control the delay before socket.timeout is raised by calling
socket.setdefaulttimeout(1.0).
For example,
import urllib2
import socket
socket.setdefaulttimeout(1.0)
url = 'https://www.5giay.vn/'
try:
    urllib2.urlopen(url, timeout=1.0)
except IOError as err:
    print('timeout')
% time script.py
timeout

real    0m3.629s
user    0m0.020s
sys     0m0.024s
Note that the requests module succeeds here although urllib2 did not:
import requests
r = requests.get('https://www.5giay.vn/')
How to enforce a timeout on the entire function call:
socket.setdefaulttimeout only affects how long Python waits before an exception is raised if the server has not issued a response.
Neither it nor urlopen(..., timeout=...) enforce a time limit on the entire function call.
To do that, you could use eventlet, as shown here.
If you don't want to install eventlet, you could use multiprocessing from the standard library, though this solution will not scale as well as an asynchronous solution such as the one eventlet provides.
import urllib2
import socket
import multiprocessing as mp

def timeout(t, cmd, *args, **kwds):
    pool = mp.Pool(processes=1)
    result = pool.apply_async(cmd, args=args, kwds=kwds)
    try:
        retval = result.get(timeout=t)
    except mp.TimeoutError as err:
        pool.terminate()
        pool.join()
        raise
    else:
        return retval

def open(url):
    response = urllib2.urlopen(url)
    print(response)

url = 'https://www.5giay.vn/'
try:
    timeout(5, open, url)
except mp.TimeoutError as err:
    print('timeout')
Running this will either succeed or timeout in about 5 seconds of wall clock time.
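If spawning a process feels heavy, a similar whole-call timeout can be sketched with concurrent.futures from the standard library (Python 3 shown; `call_with_timeout` is an illustrative name, not an API from the answer above). The trade-off: on timeout the worker thread is abandoned rather than killed, but the caller regains control immediately, which is often all that's needed.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def call_with_timeout(t, fn, *args, **kwds):
    """Run fn(*args, **kwds); raise FuturesTimeout if it takes longer than t seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        # result(timeout=t) gives up after t seconds even if fn is still running
        return pool.submit(fn, *args, **kwds).result(timeout=t)
    finally:
        pool.shutdown(wait=False)    # don't block waiting for an abandoned worker
```

Unlike the multiprocessing version, the stuck call keeps running in its thread until it finishes on its own, so this suits calls that eventually return rather than ones that must be forcibly terminated.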
I use Python 2.7.3 and daemon runner in my script. In the run() loop I want to sleep for some time, but not with code like this:

while True:
    time.sleep(10)

I want to wait on some synchronization primitive instead, for example multiprocessing.Event. Here is my code:
# -*- coding: utf-8 -*-

import logging
from daemon import runner
import signal
import multiprocessing
import spyder_cfg

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s', datefmt='%m-%d %H:%M', filename=spyder_cfg.log_file)

class Daemon(object):
    def __init__(self, pidfile_path):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = None
        self.pidfile_timeout = 5
        self.pidfile_path = pidfile_path

    def setup_daemon_context(self, daemon_context):
        self.daemon_context = daemon_context

    def run(self):
        logging.info('Spyder service has started')
        logging.debug('event from the run() = {}'.format(self.daemon_context.stop_event))
        while not self.daemon_context.stop_event.wait(10):
            try:
                logging.info('Spyder is working...')
            except BaseException as exc:
                logging.exception(exc)
        logging.info('Spyder service has been stopped')

    def handle_exit(self, signum, frame):
        try:
            logging.info('Spyder stopping...')
            self.daemon_context.stop_event.set()
        except BaseException as exc:
            logging.exception(exc)

if __name__ == '__main__':
    app = Daemon(spyder_cfg.pid_file)
    d = runner.DaemonRunner(app)
    d.daemon_context.working_directory = spyder_cfg.work_dir
    d.daemon_context.files_preserve = [h.stream for h in logging.root.handlers]
    d.daemon_context.signal_map = {signal.SIGUSR1: app.handle_exit}
    d.daemon_context.stop_event = multiprocessing.Event()
    app.setup_daemon_context(d.daemon_context)
    logging.debug('event from the main = {}'.format(d.daemon_context.stop_event))
    d.do_action()
Here are my log file records:
06-04 11:32 root DEBUG event from the main = <multiprocessing.synchronize.Event object at 0x7f0ef0930d50>
06-04 11:32 root INFO Spyder service has started
06-04 11:32 root DEBUG event from the run() = <multiprocessing.synchronize.Event object at 0x7f0ef0930d50>
06-04 11:32 root INFO Spyder is working...
06-04 11:32 root INFO Spyder stopping...
There is no 'Spyder service has been stopped' entry in the log; my program hangs on the set() call. While debugging I can see that it hangs inside Event.set(): the set method blocks on a semaphore while all the waiting entities wake up. It makes no difference whether the Event is a global object or a threading.Event. I have seen this one answer, but it does not work for me. Is there an alternative to wait() with a timeout that behaves the same as multiprocessing.Event?
I printed the stack from the signal handler, and I think there is a deadlock: the signal handler runs on the same stack as my main process, and when I call Event.set(), the wait() method is higher up on that same stack...
def handle_exit(self, signum, frame):
    try:
        logging.debug('Signal handler:{}'.format(traceback.print_stack()))
    except BaseException as exc:
        logging.exception(exc)
d.do_action()
  File ".../venv/work/local/lib/python2.7/site-packages/daemon/runner.py", line 189, in do_action
    func(self)
  File ".../venv/work/local/lib/python2.7/site-packages/daemon/runner.py", line 134, in _start
    self.app.run()
  File ".../venv/work/skelet/src/spyder.py", line 32, in run
    while not self.daemon_context.stop_event.wait(10):
  File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 337, in wait
    self._cond.wait(timeout)
  File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 246, in wait
    self._wait_semaphore.acquire(True, timeout)
  File ".../venv/work/skelet/src/spyder.py", line 41, in handle_exit
    logging.debug('Signal handler:{}'.format(traceback.print_stack()))
That is why this fix solves the problem:
def handle_exit(self, signum, frame):
    t = Timer(1, self.handle_exit2)
    t.start()

def handle_exit2(self):
    self.daemon_context.stop_event.set()
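The deferral pattern in this fix can be reduced to a minimal, runnable sketch (names here are illustrative, using a plain threading.Event for demonstration): the handler schedules set() on a fresh thread via threading.Timer, so the blocking call never runs on the interrupted stack.

```python
import threading

stop_event = threading.Event()

def handle_exit(signum=None, frame=None):
    # A signal handler runs on the interrupted thread's own stack, so do no
    # blocking work here; hand set() off to a short-lived timer thread.
    threading.Timer(0.1, stop_event.set).start()

handle_exit()                  # simulate the signal arriving
stopped = stop_event.wait(5)   # the run() loop's wait() now returns promptly
```

Because set() executes on the timer thread, it cannot deadlock against a wait() sitting further up the handler's own call stack.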
I am new to the Tornado web server. When I start the server with python main_tornado.py, it works. Please see the code below:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
When I stop the server using CTRL+C, it gives the following error:
^CTraceback (most recent call last):
  File "main_tornado.py", line 19, in <module>
    tornado.ioloop.IOLoop.instance().start()
  File "/home/nyros/Desktop/NewWeb/venv/lib/python3.2/site-packages/tornado/ioloop.py", line 301, in start
    event_pairs = self._impl.poll(poll_timeout)
KeyboardInterrupt
Please solve my problem. Thanks.
You can stop the Tornado main loop with tornado.ioloop.IOLoop.instance().stop(). To have this method called after Ctrl+C delivers the signal, you can periodically check a flag that tells the main loop whether it should end, and register a handler for the SIGINT signal that sets this flag:
#!/usr/bin/python
# -*- coding: utf-8 -*-

import signal
import logging
import tornado.ioloop
import tornado.web
import tornado.options

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

class MyApplication(tornado.web.Application):
    is_closing = False

    def signal_handler(self, signum, frame):
        logging.info('exiting...')
        self.is_closing = True

    def try_exit(self):
        if self.is_closing:
            # clean up here
            tornado.ioloop.IOLoop.instance().stop()
            logging.info('exit success')

application = MyApplication([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    tornado.options.parse_command_line()
    signal.signal(signal.SIGINT, application.signal_handler)
    application.listen(8888)
    tornado.ioloop.PeriodicCallback(application.try_exit, 100).start()
    tornado.ioloop.IOLoop.instance().start()
Output:
$ python test.py
[I 181209 22:13:43 web:2162] 200 GET / (127.0.0.1) 0.92ms
^C[I 181209 22:13:45 test:21] exiting...
[I 181209 22:13:45 test:28] exit success
UPDATE
I've just seen this simple solution in the question Tornado long polling requests:
try:
    tornado.ioloop.IOLoop.instance().start()
except KeyboardInterrupt:
    tornado.ioloop.IOLoop.instance().stop()
Obviously, this is a less safe way.
UPDATE
Edited the code to remove use of global.
You can simply stop the Tornado ioloop from a signal handler. It is safe thanks to the add_callback_from_signal() method: the event loop will exit cleanly, finishing any concurrently running tasks.
import tornado.ioloop
import tornado.web
import signal

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

def sig_exit(signum, frame):
    tornado.ioloop.IOLoop.instance().add_callback_from_signal(do_stop)

def do_stop():
    # add_callback_from_signal() invokes the callback with no arguments
    tornado.ioloop.IOLoop.instance().stop()

if __name__ == "__main__":
    application.listen(8888)
    signal.signal(signal.SIGINT, sig_exit)
    tornado.ioloop.IOLoop.instance().start()
The code is OK. CTRL+C generates a KeyboardInterrupt. To stop the server you can use CTRL+Pause/Break (on Windows) instead of CTRL+C; on Linux, CTRL+C also generates a KeyboardInterrupt. If you use CTRL+Z the program will stop, but the port stays busy.
I'd say the cleanest, safest, and most portable solution would be to put all closing and clean-up calls in a finally block instead of relying on the KeyboardInterrupt exception:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

# .instance() is deprecated in Tornado 5
loop = tornado.ioloop.IOLoop.current()

if __name__ == "__main__":
    try:
        print("Starting server")
        application.listen(8888)
        loop.start()
    except KeyboardInterrupt:
        pass
    finally:
        loop.stop()       # might be redundant, the loop has already stopped
        loop.close(True)  # needed to close all open sockets
        print("Server shut down, exiting...")