Printing flask server logs on console - flask

I have a Flask server running in the background, i.e. I created a .service file and started the Flask application through it. Now, when I fire Flask APIs from my launcher file, I cannot see the logger messages on the console. I wrote the code below to get the output onto the console, but it is not an efficient way to do it:
import os, signal, subprocess

# log_file is populated by Flask logging via a FileHandler
# start_new_session=True gives tail its own process group, so killpg targets only it
pro = subprocess.Popen(["tail", "-n0", "-f", log_file], start_new_session=True)
# this is to terminate the tail process once the launcher file exits
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
I am doing this to add a verbosity option to my launcher; for example, if the user runs "launcher.py -v", it should print INFO, ERROR, WARN and DEBUG messages.
I am new to Python and went through the documentation as well, but couldn't figure out a way.
Can someone suggest the right way to do this?
Note: If I launch the application normally, I can see the output on the console, as I have added StreamHandler(sys.stdout) to the logger.
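For reference, here is a pure-Python variant of the same workaround that I have in mind, with the -v filtering (an untested sketch; the log path is a placeholder for whatever the FileHandler writes to, and the filtering assumes the level name appears in each formatted line):
import argparse
import time

log_file = "flask_app.log"  # placeholder: whatever the FileHandler writes to

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true",
                    help="also print INFO and DEBUG lines")
args = parser.parse_args()

def follow(path):
    """Yield lines appended to path, like `tail -n0 -f`."""
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.2)  # wait for new output
                continue
            yield line

for line in follow(log_file):
    # without -v, only show warnings and errors
    if args.verbose or "WARN" in line or "ERROR" in line:
        print(line, end="")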

How to use aiogram + flask (or only aiogram) for payment processing in telegram bot?

I have a Telegram bot, written in Python (it uses the aiogram library), and it works on a webhook. I need to process payments for a paid subscription to the bot (I use yoomoney as the payment provider).
It's clear how you can do this with Flask: through its request object, catch the HTTP notifications that yoomoney sends (you can specify a notification URL in yoomoney, where payment statuses like "payment.succeeded" will arrive).
In short, Flask is able to check the status of a payment. The catch is that the bot is written in aiogram and is launched like this:
if __name__ == '__main__':
    try:
        start_webhook(
            dispatcher=dp,
            webhook_path=WEBHOOK_PATH,
            on_startup=on_startup,
            on_shutdown=on_shutdown,
            skip_updates=True,
            host=WEBAPP_HOST,
            port=WEBAPP_PORT,
        )
    except (KeyboardInterrupt, SystemExit):
        logger.error("Bot stopped!")
And if you simply add the launch of the Flask application to this code so that it listens for the answers from yoomoney, then EITHER the aiogram bot's commands are executed OR Flask is launched, depending on which comes first in the code.
In fact, it is impossible to use Flask and aiogram at the same time without multithreading. Is it possible, without Flask, to track in aiogram what arrives at my server from another server (yoomoney)? Or how can the aiogram + Flask bundle be used more competently?
I tried to run Flask in multi-threaded mode together with the aiogram bot itself, but then an error occurs saying that the same port cannot be bound by different processes (which is logical).
So does it turn out that I need to change ports, or run the processes on different servers?
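One option, since aiogram's webhook mode is served by aiohttp under the hood: run a single aiohttp application that handles both the Telegram webhook and the yoomoney notification URL on one port, with no Flask at all. A minimal sketch, assuming aiogram 2.x; the route paths, token, and the notification field names are placeholders:
from aiohttp import web
from aiogram import Bot, Dispatcher, types

bot = Bot(token="YOUR_BOT_TOKEN")  # placeholder
dp = Dispatcher(bot)

async def telegram_webhook(request: web.Request) -> web.Response:
    # Feed the raw update from Telegram into the aiogram dispatcher.
    Bot.set_current(bot)
    Dispatcher.set_current(dp)
    update = types.Update(**await request.json())
    await dp.process_update(update)
    return web.Response()

async def yoomoney_notification(request: web.Request) -> web.Response:
    # yoomoney POSTs payment notifications here.
    data = await request.json()
    if data.get("event") == "payment.succeeded":  # field name assumed
        pass  # mark the subscription as paid
    return web.Response()

app = web.Application()
app.router.add_post("/telegram-webhook", telegram_webhook)  # placeholder paths
app.router.add_post("/yoomoney-notifications", yoomoney_notification)

if __name__ == "__main__":
    web.run_app(app, host="0.0.0.0", port=8080)
On startup you would still register the Telegram webhook URL with set_webhook; both routes then share one port, so the "same port in different processes" conflict disappears.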

Shutting down a plotly-dash server

This is a follow-up to this question: How to stop flask application without using ctrl-c. The problem is that I didn't understand some of the terminology in the accepted answer, since I'm totally new to this.
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()

app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

if __name__ == '__main__':
    app.run_server(debug=True)
How do I shut this down? My end goal is to run a plotly dashboard on a remote machine, but I'm testing it out on my local machine first.
I guess I'm supposed to "expose an endpoint" (I have no idea what that means) via:
from flask import request
def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Where do I include the above code? Is it supposed to be included in the first block of code that I showed (i.e. the code that contains the app.run_server command)? Is it supposed to be separate? And then what are the exact steps I need to take to shut down the server when I want?
Finally, are the steps to shut down the server the same whether I run the server on a local or remote machine?
Would really appreciate help!
The method in the linked answer, werkzeug.server.shutdown, only works with the development server. Creating a view function with an assigned URL ("exposing an endpoint") to call this shutdown function is a convenience thing, and it won't work when the app is deployed behind a WSGI server like gunicorn.
Maybe that creates more questions than it answers:
I suggest familiarising yourself with Flask's wsgi-standalone deployment docs.
And then probably the gunicorn deployment guide. The monitoring section has a number of different examples of service monitors, which you can use with gunicorn allowing you to run the app in the background, start on reboot, etc.
Ultimately, starting and stopping the WSGI server is the responsibility of the service monitor and logic to do this probably shouldn't be coded into your app.
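That said, to answer the "where do I include the above code" question for the development-server case: the two snippets from the question can live in one file. A sketch (development server only; Dash exposes its underlying Flask instance as app.server):
import dash
import dash_core_components as dcc
import dash_html_components as html
from flask import request

app = dash.Dash()
app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

# "Exposing an endpoint": this URL, when POSTed to, runs the shutdown code.
@app.server.route('/shutdown', methods=['POST'])
def shutdown():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
    return 'Server shutting down...'

if __name__ == '__main__':
    app.run_server(debug=True)
Then, while the server is running, send curl -X POST http://127.0.0.1:8050/shutdown (8050 is Dash's default port). The same request works against a remote machine if the port is reachable, but as noted above, none of this applies once you deploy behind gunicorn.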
What works in both cases of
app.run_server(debug=True)
and
app.run_server(debug=False)
anywhere in the code is:
os.kill(os.getpid(), signal.SIGTERM)
(don't forget to import os and signal)
SIGTERM should cause a clean exit of the application.
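Combined with the endpoint idea from the question, that could look like this (a sketch, reusing app.server and the route from above):
import os
import signal

@app.server.route('/shutdown', methods=['POST'])
def shutdown():
    # Terminate this process; works with debug=True and debug=False alike.
    os.kill(os.getpid(), signal.SIGTERM)
    return 'Server shutting down...'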

View real-time logs from NAOqi application with SSH

Is it possible to view logs from my application without using Choregraphe?
At the moment I am limited to log files from '/var/log/naoqi/servicemanager/'.
I am implementing qi.logger() and would like to connect to the robot IP with SSH and get logs from a specific service.
qicli log-view
only shows system logs. I would like to attach the logger to my application, maybe using the service PID?
Did you try logging to a specific place, for instance if you start it from an independent Python script?
import logging

logging.basicConfig(filename='some_files.log',
                    level=logging.DEBUG,
                    format='%(levelname)s %(relativeCreated)6d %(threadName)s %(message)s (%(module)s.%(lineno)d)',
                    filemode='w')
then something like tail -f /var/log/naoqi/servicemanager/some_files.log
Warning: this is just a hint, I haven't tested this solution...

Debugging Python Azure worker role in VS 2013

I've spent several hours using different sources to figure out how to debug an Azure worker role written in Python. I even tried the steps here and I can't get breakpoints or VS Quick watch or Add watch to work.
I'm running VS Ultimate 2013 Update 4, Python 2.7.1, Python tools for VS 2.1.21008.00.
I followed the steps here to create a worker role in Python.
My code works as a stand-alone Python .PY file from Python IDLE. It successfully accesses my containers in Azure.
It works when run locally (although I can't debug it locally). My local storage emulator "(Development)" and the containers specified below work.
It deploys successfully to Azure. The associated worker role storage account is "online". The worker role itself is "Running" although it's not doing what I expect so I need to debug.
I set breakpoints, hit F5 to debug and the breakpoints aren't hit. Also, when I "break all" and try to watch a few variables I get "Unable to evaluate the expression".
The print statements below are left over from when I ran it from Python IDLE. The code is simple because I'm just trying to prove that I can get a worker role working.
Thanks in advance for any help you can provide.
import os
from time import sleep
from azure.storage import BlobService, CloudStorageAccount

STORAGE_ACCOUNT_NAME = 'my container is here'
STORAGE_ACCOUNT_KEY = 'my account key is here'
INPUT_CONTAINER = "inputcontainer"
OUTPUT_CONTAINER = "outputcontainer"

if os.environ.get('EMULATED', '').lower() == 'true':
    # Running in the emulator, so use the development storage account
    storage_account = CloudStorageAccount(None, None)
else:
    storage_account = CloudStorageAccount(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY)

blob_service = BlobService(account_name=STORAGE_ACCOUNT_NAME, account_key=STORAGE_ACCOUNT_KEY)

if __name__ == '__main__':
    while True:
        # Write your worker process here.
        # Get a blob in the inputcontainer, copy it to the outputcontainer and rename it.
        input_blobs = blob_service.list_blobs(INPUT_CONTAINER)
        for blob in input_blobs:
            new_blobname = "processed_" + blob.name
            print 'blob name is: ', blob.name
            print 'blob url is: ', blob.url
            try:
                blob_service.copy_blob(
                    OUTPUT_CONTAINER,
                    new_blobname,
                    x_ms_copy_source=blob.url)
            except IOError:
                print 'ERROR!'
            else:
                print 'Blob copy was successful.'
This question is pretty old, but has no answer yet. So in case anyone runs into this answer before getting to the right page on Microsoft's site: you can debug Python Azure worker roles, but you have to run them differently. Quoting from the linked site:
Although PTVS supports launching in the emulator, debugging (for example, breakpoints) will not work.
To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select Set StartUp Projects.
Following those instructions solved the problem for me.

How to configure wso2 servers logging the same level of detail as console output in wso2carbon.log file

When we run the bin/wso2server.sh file in a terminal, we get nice verbose logging output in the same terminal, which is very useful for debugging. But the output in the repository/logs/wso2carbon.log file is minimal. I have checked all the other files in the repository/logs/ directory and none has the same level of verbosity as the console output.
I tried the settings under Home > Configure > Logging after logging in to the management console of the WSO2 Application Server. Specifically, I set the settings under "Configure Log4J Appenders" for CARBON_LOGFILE to be the same as for CARBON_CONSOLE, but this did not have the desired effect. The web-application-level INFO and DEBUG messages are shown on the terminal from which we started the WSO2 Application Server, but they do not appear in the wso2carbon.log file.
How do we get the same level of detail, i.e. verbose output like we get in the terminal, into the repository/logs/wso2carbon.log file?
I tried a lot of changes via "Home > Configure > Logging" in the WSO2 web-based management console to get the same level of detail in the log file as on the console, but none had the desired effect. In fact, I observed that even though I changed the Log Pattern of CARBON_LOGFILE to [%d] %5p - %x %m {%c}%n, I still kept getting logs in the TID: [0] [AS] [2013-08-23 15:11:10,025] format in the repository/logs/wso2carbon.log file. There is definitely some problem with setting the log file detail level and pattern via the web-based management console, at least on version wso2as 5.0.1.
So I ended up hacking the bin/wso2server.sh file.
I changed the line
nohup bash $CARBON_HOME/bin/wso2server.sh > /dev/null 2>&1 &
under both start and restart sections to
nohup bash $CARBON_HOME/bin/wso2server.sh > $CARBON_HOME/repository/logs/wso2carbon.log 2>&1 &
Now I am getting the same logs as the console in the file.
I know it's a hack, but at least I am able to get the detailed debug logs in a file for offline analysis.
Hope someone from WSO2 looks into the issue of setting the log level & pattern via the web-based management console and solves it.
By default the console output and the wso2carbon.log file should be the same. I checked and both have the same output. In "Configure Log4J Appenders", see whether you have DEBUG as the Threshold for both CARBON_LOGFILE and CARBON_CONSOLE.
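If the management console keeps ignoring the change, editing the Log4J configuration file directly is an alternative. A hedged sketch of the relevant lines, assuming a Carbon 4.x-era repository/conf/log4j.properties (exact property names may differ between versions):
# repository/conf/log4j.properties (property names assumed; check your version)
log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE

# give the file appender the same threshold as the console appender
log4j.appender.CARBON_CONSOLE.threshold=DEBUG
log4j.appender.CARBON_LOGFILE.threshold=DEBUG
log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=[%d] %5p - %x %m {%c}%n
Changes made in the file typically require a server restart to take effect, unlike changes made through the management console.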