Ray Serve Can't Create Backend

I'm getting a strange error when trying to run a basic ray program.
import ray
from ray import serve
import time

ray.init()

# This will start Ray locally and start Serve on top of it.
serve.start()

def my_backend_func(request):
    return "hello"

serve.create_backend("my_backend", my_backend_func)
Running this gives me the following error: AttributeError: module 'ray.serve' has no attribute 'create_backend'.
If I store the object returned by serve.start() in a variable and call .create_backend on that instead of on serve, it works. None of the test cases or examples I've seen do this, so I was wondering what I might be doing wrong. I was able to reproduce the issue on every Linux machine I tried, in both Python 3.6 and Python 3.8. Thank you!
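A minimal sketch of that workaround, based only on what the question describes (on this release, serve.start() evidently returns a client object that carries the backend methods):

import ray
from ray import serve

ray.init()

# Workaround: call create_backend on the object returned by
# serve.start() instead of on the serve module itself.
client = serve.start()

def my_backend_func(request):
    return "hello"

client.create_backend("my_backend", my_backend_func)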

In case anyone in the future has this issue, here is the answer:
You are running the older stable release, 1.2.0, which is what you get by default when you install Ray with pip. All of the examples listed on the GitHub repo, and the source code/tests I was looking at, target the newer 2.0.0 version, which you must install in a different way.
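A quick way to confirm which release you actually have:

import ray

# Prints the installed Ray version, e.g. '1.2.0' for the default pip release.
print(ray.__version__)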

Related

Create Version failed. Bad model detected with error: Model requires more memory than allowed

Two days ago I was able to upload multiple ML models (w/ custom prediction routine) without any issues.
Last night, I started receiving prediction response errors: "Prediction server is out of memory, possibly because model size is too big."
I tried to upload the exact same model and custom prediction routine today and am now receiving the error "Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to have error, please contact Cloud ML."
Did something change that I am not aware of?
This took me more than a day of work to solve, unfortunately.
Check your project's setup.py. I discovered that if it lists tensorflow, the deploy process fails before any line of my prediction script is run. I removed it and it worked.
You don't need tensorflow to be installed, as it is already in the runtime. My guess is that the memory allocated by the setup.py script when it installs tensorflow is too great, and the deploy process fails as a result.
The error message is completely misleading and should be patched.
From this:
from setuptools import setup

setup(name="my-project",
      version='0.0.1',
      scripts=['project/predict/predictor.py'],
      install_requires=[
          'pika==0.12.0',
          'unidecode==1.1.0',
          'tensorflow'
      ])
To this:
from setuptools import setup

setup(name="my-project",
      version='0.0.1',
      scripts=['project/predict/predictor.py'],
      install_requires=[
          'pika==0.12.0',
          'unidecode==1.1.0',
      ])
I received a blazingly fast response from cloudml-feedback@google.com about this issue. I was told to include pytorch from http://storage.googleapis.com/cloud-ai-pytorch/readme.txt. This seems to have worked, but I am still having this problem with the transformers library. I will update if I figure out how to include the transformers library in my model without an error.
Edit: I used pip wheel transformers to get the .whl, but I see that it creates one for each of the dependencies too. Do I have to bundle all of these together, or can I just provide transformers.whl? Anyway, I am still getting the same error...
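For reference, prebuilt wheels like these are typically passed alongside the routine's own package when the model version is created. A hedged sketch of what that call can look like (the bucket paths, file names, and version numbers here are made up for illustration):

gcloud beta ai-platform versions create v1 \
  --model my_model \
  --origin gs://my-bucket/model/ \
  --package-uris gs://my-bucket/packages/my-project-0.0.1.tar.gz,gs://my-bucket/packages/torch-1.3.0-cp37-cp37m-linux_x86_64.whl \
  --prediction-class predictor.MyPredictor \
  --runtime-version 1.15 \
  --python-version 3.7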

Multiple Tkinter windows pop up at runtime after failed attempts

First of all, I'm quite new to Python, Stack Overflow and programming in general, so please forgive any formal errors I might have made; I'm still trying to grasp many of the required programming concepts.
Here's the issue:
I'm trying to work around a specific, seemingly simple problem I've been having when using Tkinter: Whenever I'm fiddling with some code that confuses me, it generally takes many attempts until I finally find a working solution. So I write some code, run it, get an error, make some changes, run it again, get another error, change it again... and so on until a working result is achieved.
When the code finally works, I unfortunately also get additional Tkinter main windows popping up for every failed run I've executed. So if I've made, say, 20 changes before I eventually achieve working code, 20 additional Tkinter windows pop up. Annoying...
Now, I was thinking that maybe handling the exceptions with try/except might avoid this, but I'm unsure how to accomplish this properly.
I've been looking for a solution but can't seem to find a post that addresses this issue. I'm actually not really sure how to formulate the problem correctly... Does anybody have some advice concerning this?
The following shows a simple but failed attempt of how I'm trying to circumvent this. The code works as is, but if you make a little typo in the code, run it a couple of times, then undo the typo and run the code again, you'll get multiple Tkinter windows which is what I'm trying to avoid.
Any help is, of course, appreciated...
(btw, I'm using Python 2.7.13.)
import Tkinter as tk

class App(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self)
        self.root = parent
        self.canvas = tk.Canvas(self)
        self.canvas.pack(expand=1, fill='both')
        self.bindings()

    def click(self, e):
        print 'clicked'

    def bindings(self):
        self.root.bind('<1>', self.click)

def main():
    root = tk.Tk()
    app = App(root)
    app.pack()
    root.mainloop()

if __name__ == '__main__':
    try:
        main()
    except:
        print 'Run failed...'
Alright, great. The issue has indeed nothing to do with Tkinter or Python, but with the IDE itself. Thank you, Ethan, for pointing that out.
PyScripter has several modes, or engines. I've been running scripts with its internal engine, which is faster but does not reinitialize with every run; I believe this causes the problem. The remote engine, on the other hand, does reinitialize with every run, which avoids the failed-run popups.
A more in-depth explanation from the PyScripter manual below:
Python Engines:
Internal
It is faster than the other options; however, if there are problems with
the scripts you are running or debugging they could affect the
reliability of PyScripter and could cause crashes. Another limitation
of this engine is that it cannot run or debug GUI scripts, nor can it be reinitialized.
Remote
This is the default engine of PyScripter and is the recommended engine
for most Python development tasks. It runs in a child process and
communicates with PyScripter using rpyc. It can be used to run and
debug any kind of script. However if you run or debug GUI scripts you
may have to reinitialize the engine after each run.
Remote Tk
This remote Python engine is specifically created to run and debug
Tkinter applications including pylab using the Tkagg backend. It also
supports running pylab in interactive mode. The engine activates a
Tkinter mainloop and replaces the mainloop with a dummy function so
that the Tkinter scripts you are running or debugging do not block the
engine. You may even develop and test Tkinter widgets using the
interactive console.
Remote Wx
This remote Python engine is specifically created to run and debug
wxPython applications including pylab using the WX and WXAgg backends.
It also supports running pylab in interactive mode. The engine
activates a wx MainLoop and replaces the MainLoop with a dummy
function so that the wxPython scripts you are running or debugging do
not block the engine. You may even develop and test wxPython Frames
and Apps using the interactive console. Please note that this engine
prevents the redirection of wxPython output since that would prevent
the communication with PyScripter.
When using the Tk and Wx remote engines you can of course run or debug
any other non-GUI Python script. However bear in mind that these
engines may be slightly slower than the standard remote engine since
they also contain a GUI main loop. Also note that these two engines
override the sys.exit function with a dummy procedure.
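Independent of the IDE engine, a defensive pattern that can help when the interpreter persists between runs is to tear the root window down in a finally block, so that a failed run doesn't leave a live Tk instance behind. A minimal sketch, reusing the App class from the question:

import Tkinter as tk

def main():
    root = tk.Tk()
    try:
        app = App(root)  # App as defined in the question above
        app.pack()
        root.mainloop()
    finally:
        try:
            root.destroy()  # tear the window down even if setup raised
        except tk.TclError:
            pass  # already destroyed by a normal mainloop exit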

Python Scapy sniff without root

I'm wondering if there is any possibility to run Scapy's sniff(...) without root privileges.
It is used in an application where certain packets are captured. But I don't want to run the whole application with root permissions or change anything in Scapy itself.
Thanks in advance!
EDIT:
For testing I use the following code:
from scapy.all import *

def arp_monitor_callback(pkt):
    if ARP in pkt and pkt[ARP].op in (1, 2):  # who-has or is-at
        return pkt.sprintf("%ARP.hwsrc% %ARP.psrc%")

sniff(prn=arp_monitor_callback, filter="arp", store=0)
I'm only able to run it using sudo.
I tried to set capabilities with sudo setcap 'cap_net_admin=+eip' test.py, but it doesn't show any effect. Even the 'all' capability doesn't help.
You need to set capabilities on the binaries that run your script, i.e. python and tcpdump, if you want to be able to just execute your script as ./test.py:
setcap cap_net_raw=eip /usr/bin/pythonX.X
setcap cap_net_raw=eip /usr/bin/tcpdump
where X.X is the Python version you use to run the script
(note that the path could be different on your system).
Please note that this allows anyone to open raw sockets on your system.
Although the solution provided by @Jeff is technically correct, setting file capabilities directly on binaries in /usr/bin has the drawback of allowing anyone on the system to open raw sockets.
Another way of achieving the desired outcome - a script running with just CAP_NET_RAW - is to use ambient capabilities. This can be done by leveraging a small helper binary that sets up ambient capabilities and exec()'s into the Python interpreter. For a reference, please see this gist.
Using the reference implementation, and assuming that the proper file capabilities are assigned to ./ambient:
$ sudo setcap 'cap_net_raw=p' ambient
your script would be launched as:
$ ./ambient -c '13' /usr/bin/python ./test.py
Please note that:
13 is the integer value of CAP_NET_RAW as per capability.h
ambient capabilities are available since kernel 4.3
you can use pscap to verify if the process was launched with desired capabilities in its effective set
Why does this method work?
Ambient capabilities are preserved across exec() calls (and hence passed to all subsequently created subprocesses) and raised in their effective set, e.g. in a Python interpreter invoked by the binary or in tcpdump invoked by the Python script. This is of course a simplification; for a full description of the transitions between capability sets, see capabilities(7).
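A quick way to check from inside a script whether it actually ended up with CAP_NET_RAW in its effective set (CAP_NET_RAW is bit 13, so its mask is 0x2000):

# Print the effective capability mask of the current process.
# CAP_NET_RAW corresponds to bit 13, i.e. mask 0x2000.
with open('/proc/self/status') as f:
    for line in f:
        if line.startswith('CapEff'):
            print(line.strip())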

Ejabberd module with child process

I created a logging module which logs messages to a mysql db, the current code is located here:
https://github.com/amiadogroup/mod_log_chat_mysql5/blob/master/src/mod_log_chat_mysql5.erl
The problem with the current code is that sometimes the connection gets closed and, as a result, the module stops working.
As you can see in the code, I store the DBRef in an ETS table, which is not really a good way to go.
I asked the Erlang mailing list about this and they suggested running the DB connection as its own child process of the module. This would enable the module to gracefully restart the connection when it is closed.
Now my question is: how can I implement this child process with gen_server and/or gen_mod?
Do I need to create two files or can I do it within the same file?
Is there any example somewhere on how I could achieve that?
Edit: As you can see in the linked GitHub repo, I updated the code and it works now, weeha!
Looking at the mod_archive code helped me a lot, although I decided not to upgrade my ejabberd version.
I ran into another, related problem now. In the code you can see that I run an initial query with "SET NAMES UTF8" to prevent garbling of messages. It seems this isn't done again when the gen_server reconnects. Is there any hook I can call upon reconnect so that the UTF8 query is run every time?
Edit#2:
Now I switched to Emysql (https://github.com/Eonblast/Emysql) and it works out of the box by specifying the encoding directly on connect.
Code is on github.
Thanks for your help,
Michael
I suggest you look into general Erlang/OTP principles (gen_server, supervisor, etc).
ejabberd is relying on this standard Erlang architecture pattern.
Regarding your comment on database, ejabberd has its own way on managing database and passing queries to MySQL for example. You should as well look into it.
In your source code you are only applying the gen_mod behaviour; if you wish to have a gen_server, you can do it in the same module, as long as you define the gen_server behaviour as well.
A good example would be the ejabberd module mod_archive, which implements both behaviours.
Edit: I never really worked "directly" with MySQL in Erlang, but through the ejabberd methods I find it pretty "easy" (you will have to do a bit of setup, but it is rather easy). You have the method
ejabberd_odbc:sql_query_t(Query)
and as an example you can find it used in the module mod_archive_odbc.
To use that method (and the latter module) I downloaded the MySQL native driver and put the beams created from the driver in the ejabberd ebin dir (you can put them anywhere, as long as it is on the Erlang path).
A soft link to the ejabberd ebin is my favorite:
ln -s <diryouhavethedriver>/ebin/*.beam /usr/lib/ejabberd/ebin/
Then do a few configurations in your ejabberd.cfg. This process is described on this page on ProcessOne. Note that the full steps are for making MySQL the main database of ejabberd. You may not want that, so you can skip a few steps.
Hope this helps.

Selenium wait for download?

I'm trying to test the happy-path for a piece of code which takes a long time to respond, and then begins writing a file to the response output stream, which prompts a download dialog in browsers.
The problem is that this process has failed in the past, throwing an exception after this long amount of work. Is there a way in selenium to wait-for-download or equivalent?
I could throw in a Thread.sleep, but that would be inaccurate and unnecessarily slow down the test run.
What should I do, here?
I had the same problem and came up with a workaround. While a download is in progress, the browser keeps a temporary file with a '.part' extension. So, as long as the temp file is still there, the script can wait 10 seconds and then check again whether the file has finished downloading.
import os
from time import sleep

while True:
    if os.path.isfile('ts.csv.part'):
        sleep(10)
    elif os.path.isfile('ts.csv'):
        break
    else:
        sleep(10)

driver.close()
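A slightly more robust variant of the same idea, as a sketch: poll with a timeout so the test fails instead of hanging forever if the download never completes (the file name and helper name here are illustrative):

import os
import time

def wait_for_download(path, timeout=120, poll=1):
    # Wait until `path` exists and its '.part' temp file is gone.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.isfile(path) and not os.path.isfile(path + '.part'):
            return
        time.sleep(poll)
    raise RuntimeError("download of %s timed out" % path)

wait_for_download('ts.csv')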
So you have two problems here:
You need to cause the browser to download the file.
You need to detect when the downloaded file is complete.
Neither problem can be directly solved by Selenium (yet - 2.0 may help), but both are solvable. The first can be handled by GUI automation toolkits such as AutoIt, or by simply sending an automated keypress at the OS level that simulates the enter key (this works for Firefox, and is a little harder on some versions of Chrome and Safari). If you're using Java, you can use Robot to do that; other languages have similar toolkits, as sketched below.
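For instance, in Python a comparable OS-level keypress could be sent with a third-party library such as pyautogui (an assumption on tooling; any OS-level input library would do):

import pyautogui  # third-party: pip install pyautogui

# Simulate pressing Enter at the OS level to accept the browser's
# download dialog (the dialog must already have keyboard focus).
pyautogui.press('enter')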
The second issue is probably best solved with some sort of proxy. For example, if your browser was configured to go through a proxy and that proxy had an API, you could query the proxy to ask when network activity had ended.
That's what we do at http://browsermob.com, a startup I founded that uses Selenium to do load testing. We've released some of the proxy code as open source, available at http://browsermob.com/tools.
But two problems still persist:
You need to configure the browser to use the proxy. In Selenium 2 this is easier, but it's possible to do it with Selenium 1 as well. The key is just making sure that your browser launcher brings up the browser with the right profile/settings.
There currently is no API for BrowserMob proxy to tell you when network traffic has stopped! This is a big hole in the concept of the project that I want to fix as soon as I get the time. However, if you're keen to help out, join the Google Group and I can definitely point you in the right direction.
Hope that helps you identify your various options. Best of luck!
This is a Chrome-testing-only solution for controlling downloads with JavaScript.
Using WebDriver (Selenium 2), it can be done within Chrome's chrome:// pages, which are HTML/CSS/JavaScript:
driver.get("chrome://downloads/");
waitElement(By.cssSelector("#downloads-summary-text"));
// The next JavaScript snippet cancels the last/current download.
// If your test ends with a file attachment downloading, you'll very
// likely need this when more re-instantiated tests are left to run.
((JavascriptExecutor) driver).executeScript("downloads.downloads_[0].cancel_();");
There are other Download.prototype functions in "chrome://downloads/downloads.js".
This suits you if you just need to test some info note, e.g. one triggered by a file attachment starting to download, and not the file itself.
Naturally, you need to control step 1 - mentioned by Patrick above - and by this you control step 2 for the test, not for the functionality of the actual file download completing or being cancelled.
See also: Javascript: Cancel/Stop Image Requests, which relates to stopping the browser.
This falls under the "things that can't be automated" category. Selenium is built with JavaScript and, due to JavaScript sandbox restrictions, it can't access downloads.
Selenium 2 might be able to do this once alerts/prompts have been implemented, but that won't happen for a little while yet.
If you want to check for the download dialog, try AutoIt. I use it for uploading and downloading files. Using AutoIt with Selenium RC is easier.
# Ruby syntax
def file_downloaded?(file)
  while File.file?(file) == false
    p "File downloading in progress..."
    sleep 1
  end
end