Learning the Twisted framework and having trouble with the finger server - python-2.7

I am learning the Twisted framework for a project I am working on by using the Twisted Documentation Finger tutorial (http://twistedmatrix.com/documents/current/core/howto/tutorial/intro.html) and I'm having trouble getting my program to work.
Here's the code for the server. It should return "No such user" when I run telnet localhost 12345, but the connection just sits there and nothing happens.
from twisted.internet import protocol, reactor
from twisted.protocols import basic

class FingerProtocol(basic.LineReceiver):
    def lineReceived(self, user):
        self.transport.write("No such user\r\n")
        self.transport.loseConnection()

class FingerFactory(protocol.ServerFactory):
    protocol = FingerProtocol

reactor.listenTCP(12345, FingerFactory())
reactor.run()
I have run the server via python twisted-finger.py and sudo python twisted-finger.py, but neither worked.
Does anyone see why this doesn't return the message it is supposed to?

You have to send a finger request to the server before it responds.
According to the finger RFC (RFC 1288):
Send a single "command line", ending with <CRLF>.
The command line:
Systems may differ in their interpretations of this line. However,
the basic scheme is straightforward: if the line is null (i.e. just
a <CRLF> is sent) then the server should return a "default" report
which lists all people using the system at that moment. If on the
other hand a user name is specified (e.g. FOO<CRLF>) then the
response should concern only that particular user, whether logged in
or not.
Try typing a word into telnet and hitting enter.
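The same exchange can also be driven from a short Python script instead of telnet. This is just a sketch to make the protocol concrete; the function name finger_query is mine, and the host/port match the server above:

```python
import socket

def finger_query(host, port, user):
    """Send one finger command line (user name + CRLF) and return the reply.

    The server is expected to write its response and then close the
    connection, so we read until recv() returns an empty byte string.
    """
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(1024)
            if not data:
                break
            chunks.append(data)
    finally:
        sock.close()
    return b"".join(chunks).decode("ascii")

# With the server above running:
# print(finger_query("localhost", 12345, "alice"))
```

The key point is the explicit "\r\n": lineReceived only fires once a full line terminator arrives, which is why a bare telnet connection with no input appears to hang.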

Related

How to use aiogram + flask (or only aiogram) for payment processing in telegram bot?

I have a Telegram bot written in Python with the aiogram library; it works via a webhook. I need to process payments for a paid subscription to the bot (I use yoomoney for payment).
It's clear how to do this with Flask: use its request handling to catch the HTTP notifications that yoomoney sends (you can specify a notification URL in yoomoney, where payment statuses like "payment.succeeded" are delivered).
In short, Flask is able to check the status of a payment. The catch is that the bot is written in aiogram and is launched like this:
if __name__ == '__main__':
    try:
        start_webhook(
            dispatcher=dp,
            webhook_path=WEBHOOK_PATH,
            on_startup=on_startup,
            on_shutdown=on_shutdown,
            skip_updates=True,
            host=WEBAPP_HOST,
            port=WEBAPP_PORT,
        )
    except (KeyboardInterrupt, SystemExit):
        logger.error("Bot stopped!")
And if you simply add the Flask app's startup to this code so it can listen for answers from yoomoney, then only one of the two will actually run: either the aiogram bot's handlers or the Flask app, whichever is started first in the code.
In practice it is impossible to run Flask and aiogram at the same time without multithreading. Is it possible, without Flask, to track in aiogram what arrives at my server from another server (yoomoney)? Or how can the aiogram + Flask combination be used more competently?
I tried to run Flask in multi-threaded mode alongside the aiogram bot itself, but then an error occurs saying the same port cannot be bound by different processes (which is logical).
Does that mean it is necessary to change ports, or to run the processes on different servers?
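One possible direction, assuming aiogram 2.x (where start_webhook runs an aiohttp server under the hood): build the aiohttp application yourself with get_new_configured_app and attach an extra route for the payment notifications, so the Telegram webhook and the yoomoney listener share one process and one port, with no Flask at all. This is only a sketch: the /yoomoney-notify path and the handler body are placeholders, and you would still need to set the webhook and re-attach your on_startup/on_shutdown callbacks via aiohttp's app.on_startup/app.on_cleanup hooks.

```python
from aiohttp import web
from aiogram.dispatcher.webhook import get_new_configured_app

async def yoomoney_notify(request: web.Request):
    data = await request.post()
    # Verify the notification signature and payment status here,
    # then mark the subscription as paid in your database.
    return web.Response(text="ok")

# dp, WEBHOOK_PATH, WEBAPP_HOST, WEBAPP_PORT are the names from the question.
app = get_new_configured_app(dispatcher=dp, path=WEBHOOK_PATH)
app.router.add_post("/yoomoney-notify", yoomoney_notify)

if __name__ == "__main__":
    web.run_app(app, host=WEBAPP_HOST, port=WEBAPP_PORT)
```

Since both handlers are coroutines on the same event loop, no multithreading is needed and the single-port conflict disappears.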

Zend framework ACL fails for the first time to switch the server

Hi guys!
I'm not a native English speaker, so I'd appreciate it if you correct my sentences!
To explain the issue, here is our development environment:
language: PHP 7.3.11
framework: Zend Framework v3.3.11
server: AWS EC2 ×4
server OS: Amazon Linux 2
Redis is enabled. There are two projects: a-project on two EC2 instances (a-ec2), and b-project on two EC2 instances (b-ec2).
The only difference between a-ec2 and b-ec2 is the source code; the other settings such as nginx, php-fpm, Redis, and the DB settings are the same.
If I'm missing some info, please let me know.
The problem appeared when we joined these projects.
After logging in to our service, Zend works oddly.
The loginAction is on a-ec2, and we can successfully log in with it.
We save the session information in Redis, and that works normally.
But only the first time we switch the server from a-ec2 to b-ec2, a Zend ACL error occurs.
We use the isAllowed function to check whether the user has enough privilege to access a certain service.
The isAllowed function, located at line 827 of /library/ZendAcl.php, returns false the first time.
Then, when we reload the page, isAllowed returns true, so we can access the service.
In detail, something goes wrong around the _getRules function at line 1161, which uses the _getRuleType function.
Somewhere in that process, one of the arrays contains "TYPE_DENY".
But when we reload (Ctrl+F5), that value turns into "TYPE_ALLOW".
How can this happen?
And how can we fix it?
We have been trying to figure this out for two weeks or more...
Thanks in advance!!
[update]
we found that the method below doesn't work correctly, so we can't get $auth properly.
$auth returns "".
self::$_auth = $auth = Zend_Auth::getInstance()->getStorage()->read();
[update 2]
we may have solved this issue, so I'll leave our solution.
We used
Zend_Auth::getInstance()->getStorage()->read()
to get $auth from the session.
But the first time, the session information saved on a-ec2 can't be read on b-ec2.
So we decided to use the session information in Redis, and changed the way we get $auth to:
require_once 'Zend/Session.php';
Zend_Session::start();
self::$_auth = $auth = Common_Model_Redis::get();
Start the session before connecting to the Redis server and just get the session information from it!
I hope this works for you too!

Shutting down a plotly-dash server

This is a follow-up to this question: How to stop flask application without using ctrl-c. The problem is that I didn't understand some of the terminology in the accepted answer, since I'm totally new to this.
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()
app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

if __name__ == '__main__':
    app.run_server(debug=True)
How do I shut this down? My end goal is to run a plotly dashboard on a remote machine, but I'm testing it out on my local machine first.
I guess I'm supposed to "expose an endpoint" (have no idea what that means) via:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Where do I include the above code? Is it supposed to be included in the first block of code that I showed (i.e. the code that contains app.run_server command)? Is it supposed to be separate? And then what are the exact steps I need to take to shut down the server when I want?
Finally, are the steps to shut down the server the same whether I run the server on a local or remote machine?
Would really appreciate help!
The method in the linked answer, werkzeug.server.shutdown, only works with the development server. Creating a view function, with an assigned URL ("exposing an endpoint") to implement this shutdown function is a convenience thing, which won't work when deployed with a WSGI server like gunicorn.
Maybe that creates more questions than it answers:
I suggest familiarising yourself with Flask's wsgi-standalone deployment docs.
And then probably the gunicorn deployment guide. The monitoring section has a number of different examples of service monitors, which you can use with gunicorn allowing you to run the app in the background, start on reboot, etc.
Ultimately, starting and stopping the WSGI server is the responsibility of the service monitor and logic to do this probably shouldn't be coded into your app.
What works in both cases, app.run_server(debug=True) and app.run_server(debug=False), anywhere in the code, is:

import os
import signal

os.kill(os.getpid(), signal.SIGTERM)
SIGTERM should cause a clean exit of the application.
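To make that last point concrete, here is a stdlib-only sketch: it installs a SIGTERM handler where cleanup can run, then delivers the signal to the current process with the same os.kill call, just as a shutdown view in the Dash app would. The handler name and the shutting_down flag are my own illustration, not part of Dash or Flask:

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Place any cleanup here (close connections, flush logs) before exiting.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# In the Dash app this line would live inside a shutdown view function:
os.kill(os.getpid(), signal.SIGTERM)
```

Without a handler installed, SIGTERM simply terminates the process, which is the behavior you want when a service monitor like systemd or a gunicorn master sends it.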

django-paypal signal update

I've been picking my way through the django-paypal documentation and have got a signal connecting when I send an IPN from the sandbox simulator.
I can do:
UserProfile.objects.update(has_paid=True)
I can also do:
UserProfile.objects.update(middle_name=sender.custom) # sender.custom set to "Lyndon" on IPN
and everyone gets a year free. Not what I want...
What I'd like to do is
up = UserProfile.objects.get(user=ipn_obj.custom)
up.has_paid = False
up.save()
but on such occasions I get a server error (500) message in the Instant Payment Notification (IPN) simulator.
IPN delivery failed. HTTP error code 500: Internal Server Error
I still get a Paypal IPN in my database and it will show up not flagged and with a payment status of "Completed". However, the signal is not connecting.
I'm just not getting something (or multiple things!) here. Any pointers much appreciated.
T
Try using this:
UserProfile.objects.filter(user=ipn_obj.custom).update(has_paid=False)
For bugs like this, where you can't tell what the problem is, use ipdb.
First install it:
$ pip install ipdb
then go to the code that doesn't work and add:
import ipdb; ipdb.set_trace()
When you run locally (with runserver) and make a request that executes that code, you will drop into the debugger at that line.
Note: in ipdb, press "n" to step to the next line and "c" to continue.
It would help if I paid attention...
up = UserProfile.objects.get(user__username=ipn_obj.custom)
The lookup needed to be on user.username, which is user__username in Django's ORM syntax...

creating a web url that listens to redis pubsub published message

Edit
OK, I have long polling from JavaScript that talks to a Django view. The view looks as follows. It loses some of the messages that I publish to the channel from the Redis client. Also, I should not be connecting to Redis on every request (perhaps the Redis variables can be saved in the session?).
If someone can point out the changes I need to make this view work with long polling, it would be awesome! Thank you!
def listen(request):
    if request.session:
        logger.info('request session: %s' % (request.session))
    channel = request.GET.get('channel', None)
    if channel:
        logger.info('not in cache - first time - constructing redis object')
        r = redis.Redis(host='localhost', port=6379, db=0)
        p = r.pubsub()
        logger.info('subscribing to channel: %s' % (channel))
        p.psubscribe(channel)
        logger.info('subscribed to channel: %s' % (channel))
        message = p.listen().next()
        logger.info('got msg %s' % (message))
        return HttpResponse(json.dumps(message))
    return HttpResponse('')
----Original question---
I am trying to create a chat application (using django, python) and am trying to avoid the polling mechanism. I have been struggling with this now - so any pointers would be really appreciated!
Since web sockets are not supported in most browsers, I think long polling is the right choice. Right now I am looking for something that scales better than regular polling and is easy to integrate with the Python/Django stack. Once I am done with this development, I plan to evaluate other Python frameworks (Tornado, Twisted, gevent, etc.).
I did some research and liked the redis pubsub mechanism. The chat message gets published to a channel to which both users have already subscribed to. Following are my questions:
From what I understand, Apache would not scale well, since long polling would soon run into process/thread limits. Hence I have decided to switch to nginx. Is this rationale correct? Also, are there any nginx issues I should be worried about? In particular, I am concerned about the latest version not supporting HTTP 1.1 for proxy passing, as mentioned in the blog post at http://www.letseehere.com/reverse-proxy-web-sockets
How do I create the client portion of the message subscription on the browser side? In my mind, it would be a URL that the JavaScript code "long polls". So at the JavaScript level, the client polls a URL that gets "blocked" in a "non-blocking way" on the server side. When a result (in this case a new chat message) appears, the server returns it. JavaScript does what it needs to and then polls the same URL again. Is this thinking correct? What happens in the intervals while the JavaScript loop is pausing: do we lose any messages from the server side?
In essence, I want to create the following:
From redis, I publish a message to a channel "foo" (can use redis-cli also - easy to incorporate it later in python/django)
I want the same message to appear in two browser windows that use the same js code to poll. Assume that the browser code knows the channel name for test purpose
I publish a second message that again appears in two browser windows.
I am new to real time apps, so apologies for any question that may not make sense.
Thank you!
Answering your question only partly, and mentioning one option out of many: Gunicorn used with an async worker class is a solution for long-polling/non-blocking requests that is really easy to set up!
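For example, a minimal sketch of that setup (the module path myproject.wsgi:application and the worker/timeout numbers are placeholders you'd adjust for your project):

```shell
pip install gunicorn gevent
# gevent-based workers keep many long-poll connections open cheaply;
# raise the timeout so long polls are not killed after the default 30 s
gunicorn --worker-class gevent --workers 1 --timeout 300 myproject.wsgi:application
```

The async worker means each blocked listen() call ties up a greenlet rather than a whole OS process, which is exactly the property long polling needs.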