I've spent several hours working through different sources trying to figure out how to debug an Azure worker role written in Python. I even tried the steps here, and I can't get breakpoints, VS Quick Watch, or Add Watch to work.
I'm running VS Ultimate 2013 Update 4, Python 2.7.1, Python tools for VS 2.1.21008.00.
I followed the steps here to create a worker role in Python.
My code works as a stand-alone Python .PY file from Python IDLE. It successfully accesses my containers in Azure.
It works when run locally (although I can't debug it locally). My local storage emulator "(Development)" and the containers specified below work.
It deploys successfully to Azure. The associated worker role storage account is "online". The worker role itself is "Running" although it's not doing what I expect so I need to debug.
I set breakpoints and hit F5 to debug, but the breakpoints aren't hit. Also, when I "break all" and try to watch a few variables, I get "Unable to evaluate the expression".
The print statements below are left over from when I ran it from Python IDLE. The code is simple because I'm just trying to prove that I can get a worker role working.
Thanks in advance for any help you can provide.
import os
from time import sleep

from azure.storage import BlobService
# NOTE: CloudStorageAccount is used below but was never imported in the original;
# depending on SDK version it lives in azure.storage or azure.storage.cloudstorageaccount.
from azure.storage.cloudstorageaccount import CloudStorageAccount

STORAGE_ACCOUNT_NAME = 'my container is here'
STORAGE_ACCOUNT_KEY = 'my account key is here'
INPUT_CONTAINER = "inputcontainer"
OUTPUT_CONTAINER = "outputcontainer"

if os.environ.get('EMULATED', '').lower() == 'true':
    # Running in the emulator, so use the development storage account
    storage_account = CloudStorageAccount(None, None)
else:
    storage_account = CloudStorageAccount(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY)

# The keyword is account_name (the original had accountname, which raises a TypeError).
blob_service = BlobService(account_name=STORAGE_ACCOUNT_NAME, account_key=STORAGE_ACCOUNT_KEY)

if __name__ == '__main__':
    while True:
        # Write your worker process here.
        # Get a blob in the inputcontainer and copy it, renamed, into the outputcontainer.
        input_blobs = blob_service.list_blobs(INPUT_CONTAINER)
        for blob in input_blobs:
            new_blobname = "processed_" + blob.name
            print 'blob name is: ', blob.name
            print 'blob url is: ', blob.url
            try:
                blob_service.copy_blob(
                    OUTPUT_CONTAINER,
                    new_blobname,
                    x_ms_copy_source=blob.url)
            except IOError:
                print 'ERROR!'
            else:
                print 'Blob copy was successful.'
This question is pretty old, but has no answer yet. So in case anyone runs into this answer before getting to the right page on Microsoft's site: you can debug Python Azure worker roles, but you have to run them differently. Quoting from the linked site:
Although PTVS supports launching in the emulator, debugging (for example, breakpoints) will not work.
To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select Set StartUp Projects.
Following those instructions solved the problem for me.
I have a Flask server running in the background, i.e. I created a .service file and started the Flask application. Now, when I fire Flask APIs from the launcher file, I am unable to see logger messages on the console. I wrote the code below to get the output to the console, which is not an efficient way to do it:
# log_file is populated by Flask logging, i.e. a FileHandler
import os, signal, subprocess
pro = subprocess.Popen(["tail", "-n0", "-f", log_file])
# this is to terminate the tail process once the launcher file exits
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
I am doing this to add a verbosity option to my launcher: for example, if the user runs "launcher.py -v", it should print INFO, ERROR, WARN and DEBUG messages.
I am new to Python and went through the documentation as well, but couldn't figure out a way.
Can someone suggest the right way to do this?
Note: If I launch the application normally, I can see output on the console, since I have added a StreamHandler(sys.stdout) to the logger.
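For what it's worth, here is a minimal sketch of one common approach, assuming the launcher configures the root logger itself (the flag name and handler setup are illustrative, not from the original post):
import argparse
import logging
import sys

parser = argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', action='store_true',
                    help='also print DEBUG and INFO messages to the console')
args = parser.parse_args()

# Attach a console handler whose threshold depends on the -v flag.
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.DEBUG if args.verbose else logging.WARNING)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(console)
With this, "launcher.py -v" shows DEBUG/INFO/WARN/ERROR on the console, while any FileHandler keeps receiving everything regardless.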
This is a follow-up to this question: How to stop flask application without using ctrl-c. The problem is that I didn't understand some of the terminology in the accepted answer, since I'm totally new to this.
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()

app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

if __name__ == '__main__':
    app.run_server(debug=True)
How do I shut this down? My end goal is to run a plotly dashboard on a remote machine, but I'm testing it out on my local machine first.
I guess I'm supposed to "expose an endpoint" (I have no idea what that means) via:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Where do I include the above code? Is it supposed to be included in the first block of code that I showed (i.e. the code that contains app.run_server command)? Is it supposed to be separate? And then what are the exact steps I need to take to shut down the server when I want?
Finally, are the steps to shut down the server the same whether I run the server on a local or remote machine?
Would really appreciate help!
The method in the linked answer, werkzeug.server.shutdown, only works with the development server. Creating a view function with an assigned URL ("exposing an endpoint") to call this shutdown function is a convenience, but it won't work when the app is deployed with a WSGI server like gunicorn.
Maybe that creates more questions than it answers:
I suggest familiarising yourself with Flask's wsgi-standalone deployment docs.
And then probably the gunicorn deployment guide. The monitoring section has a number of different examples of service monitors, which you can use with gunicorn allowing you to run the app in the background, start on reboot, etc.
Ultimately, starting and stopping the WSGI server is the responsibility of the service monitor and logic to do this probably shouldn't be coded into your app.
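As a hedged sketch of what that looks like for a Dash app (the module name app.py and the layout are assumptions): Dash wraps a Flask instance, which you expose for the WSGI server:
# app.py
import dash
import dash_html_components as html

app = dash.Dash()
server = app.server  # the underlying Flask instance that gunicorn will serve
app.layout = html.Div('Hello')
You would then start it with something like "gunicorn app:server" and stop or restart it through the service monitor, not from inside the app.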
What works in both cases, app.run_server(debug=True) and app.run_server(debug=False), anywhere in the code, is:
os.kill(os.getpid(), signal.SIGTERM)
(don't forget to import os and signal)
SIGTERM should cause a clean exit of the application.
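A minimal sketch of wiring that into the Dash app from the question (the /shutdown URL and POST method are illustrative assumptions, mirroring the earlier snippet):
import os
import signal

@app.server.route('/shutdown', methods=['POST'])
def shutdown():
    # Terminate the current process; works with debug=True and debug=False.
    os.kill(os.getpid(), signal.SIGTERM)
    return 'Server shutting down...'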
I have two python projects running locally:
A cloud endpoints python project using the latest App Engine version.
A client project which consumes the endpoint functions using the latest google-api-python-client (v 1.5.1).
Everything was fine until I renamed one endpoint's function from:
@endpoints.method(MyRequest, MyResponse, path = "save_ocupation", http_method='POST', name = "save_ocupation")
def save_ocupation(self, request):
    [code here]
To:
@endpoints.method(MyRequest, MyResponse, path = "save_occupation", http_method='POST', name = "save_occupation")
def save_occupation(self, request):
    [code here]
Looking at the local console (http://localhost:8080/_ah/api/explorer) I see the correct function name.
However, by executing the client project that invokes the endpoint, it keeps saying that the new endpoint function does not exist. I verified this using the ipython shell: The dynamically-generated python code for invoking the Resource has the old function name despite restarting both the server and client dozens of times.
How can I force the API client to always fetch the latest endpoint API document?
Help is appreciated.
Just after posting the question, I resumed my Ubuntu PC and started Eclipse and the Python projects from scratch, and now everything works as expected. This sounds like some kind of HTTP client cache, or a stale Python process, which prevented the client from getting the latest discovery document and generating the corresponding resource code.
This is odd, as I had tested running these projects both outside and inside Eclipse without success. But I prefer documenting this just in case someone else has this issue.
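If a cached discovery document is the culprit, one hedged workaround (assuming your google-api-python-client version supports it; the service name, version, and URL below are placeholders) is to disable discovery caching when building the client:
from googleapiclient.discovery import build

service = build(
    'myapi', 'v1',
    discoveryServiceUrl='http://localhost:8080/_ah/api/discovery/v1/apis/myapi/v1/rest',
    cache_discovery=False)  # always fetch a fresh discovery document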
I would like to list all Modern UI apps installed on my Windows 8 machine.
Is there a way to list all installed Modern UI apps from a standard desktop application (with administrator permissions)?
You can do this with Powershell and the Get-AppxPackage command.
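For reference, a minimal sketch of calling that from a desktop application, here Python driving PowerShell via subprocess (the one-liner simply prints each package name; the output handling is an assumption):
import subprocess

# Run Get-AppxPackage through PowerShell and capture one package name per line.
output = subprocess.check_output([
    'powershell', '-NoProfile', '-Command',
    'Get-AppxPackage | ForEach-Object { $_.Name }'
])
for name in output.decode('utf-8', 'replace').splitlines():
    print(name)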
Nigel has the correct answer ;)
As a complement, here is how to relaunch the script automatically with administrator permissions:
(I used this trick a few months ago, so I no longer have the source. I will edit this post if I find it.)
# Get the ID and security principal of the current user account
$myWindowsID=[System.Security.Principal.WindowsIdentity]::GetCurrent()
$myWindowsPrincipal=new-object System.Security.Principal.WindowsPrincipal($myWindowsID)

# Get the security principal for the Administrator role
$adminRole=[System.Security.Principal.WindowsBuiltInRole]::Administrator

# Check to see if we are currently running "as Administrator"
if ($myWindowsPrincipal.IsInRole($adminRole))
{
    # We are running "as Administrator" - so change the title and background color to indicate this
    $Host.UI.RawUI.WindowTitle = $myInvocation.MyCommand.Definition + "(Elevated)"
    $Host.UI.RawUI.BackgroundColor = "DarkBlue"
    clear-host
}
else
{
    # We are not running "as Administrator" - so relaunch as administrator
    # Create a new process object that starts PowerShell
    $newProcess = new-object System.Diagnostics.ProcessStartInfo "PowerShell";
    # Specify the current script path and name as a parameter
    $newProcess.Arguments = $myInvocation.MyCommand.Definition;
    # Indicate that the process should be elevated
    $newProcess.Verb = "runas";
    # Start the new process
    [System.Diagnostics.Process]::Start($newProcess);
    # Exit from the current, unelevated, process
    exit
}

# List all apps
Get-AppxPackage -AllUsers
I'm running some basic test code with web.py and GAE (Windows 7, Python 2.7). The form enables messages to be posted to the datastore. When I stop the app and run it again, any data posted previously has disappeared. Adding entities manually using the admin (http://localhost:8080/_ah/admin/datastore) has the same problem.
I tried setting the path in the Application Settings using Extra flags:
--datastore_path=D:/path/to/app/
(I wasn't sure about the syntax there.) It had no effect. I searched my computer for *.datastore and couldn't find any files either, which seems suspect, although the data is obviously being stored somewhere for the duration of the app running.
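For what it's worth, a hedged example of the flag's usual form with the old dev_appserver (it expects a file path rather than a directory; the exact path below is illustrative):
dev_appserver.py --datastore_path=D:/path/to/app/app.datastore D:/path/to/app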
from google.appengine.ext import db
import web

urls = (
    '/', 'index',
    '/note', 'note',
    '/crash', 'crash'
)

render = web.template.render('templates/')

class Note(db.Model):
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

class index:
    def GET(self):
        notes = db.GqlQuery("SELECT * FROM Note ORDER BY date DESC LIMIT 10")
        return render.index(notes)

class note:
    def POST(self):
        i = web.input('content')
        note = Note()
        note.content = i.content
        note.put()
        return web.seeother('/')

class crash:
    def GET(self):
        import logging
        logging.error('test')
        crash

app = web.application(urls, globals())

def main():
    app.cgirun()

if __name__ == '__main__':
    main()
UPDATE:
When I run it via command line, I get the following:
WARNING 2012-04-06 19:07:31,266 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
INFO 2012-04-06 19:07:31,778 appengine_rpc.py:160] Server: appengine.google.com
WARNING 2012-04-06 19:07:31,783 datastore_file_stub.py:513] Could not read datastore data from c:\users\amy\appdata\local\temp\dev_appserver.datastore
WARNING 2012-04-06 19:07:31,851 dev_appserver.py:3394] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging
INFO 2012-04-06 19:07:32,052 dev_appserver_multiprocess.py:647] Running application dev~palimpsest01 on port 8080: http://localhost:8080
INFO 2012-04-06 19:07:32,052 dev_appserver_multiprocess.py:649] Admin console is available at: http://localhost:8080/_ah/admin
Suggesting that the datastore... didn't install properly?
As of 1.6.4, we stopped saving the datastore after every write. This method did not work when simulating the transactional model found in the High Replication Datastore (you would lose the last couple of writes), and it was also horribly inefficient. We changed it so that the datastore dev stub flushes all writes and saves its state on shutdown. It sounds like the dev_appserver is not shutting down correctly. You should see:
Applying all pending transactions and saving the datastore
in the logs when shutting down the server (see the dev_appserver source code). If you don't, it means that the dev_appserver is not being shut down cleanly (with a TERM signal or KeyboardInterrupt).