pushbullet API takes 30 minutes to deliver note - python-2.7

Working with Python 2.7 on a Raspberry Pi, I created a Pushbullet account and installed it on my iPhone 7 (iOS 12.4). In this instance, I'm using a GitHub library from https://github.com/rbrcsk/pushbullet.py, but I've noticed this lag using other methods as well.
Here's the code:
#!/usr/bin/env python
from pushbullet import Pushbullet
PB_API_KEY = 'o.00000000000000000000000000000000'
print("creating pb object with key:")
try:
    pb = Pushbullet(PB_API_KEY)
except Exception as e:
    print(str(e))
    exit()
print("pushing note:")
try:
    push = pb.push_note('important subject', 'this is a test')
except Exception as e:
    print(str(e))
    exit()
print("done")
What happens is that when I run this script, it prints "creating pb object with key:" and then appears to hang. About 30 minutes later the notification appears on my phone, the next two print lines show up, and the script completes.
I'm anxious to begin using Pushbullet to push alarm notifications from my PI-GPIO home alarms. It appears to work, but why the big lag?

This issue turned out to be an incorrect IPv6 setting in my router (as in, it was enabled for some reason). That gave me a stack of gateway and DNS addresses that had to time out before a request could get out. So this problem is NOT related to Pushbullet.
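For anyone hitting a similar delay, a quick way to check whether name resolution is the slow step is to time the DNS lookup on its own. A minimal sketch (the host is Pushbullet's public API endpoint; the 5-second threshold is an arbitrary choice of mine):
import socket
import time
host = "api.pushbullet.com"  # Pushbullet's API endpoint
start = time.time()
# getaddrinfo goes through the same resolver path the HTTP client uses,
# so a long delay here points at DNS/gateway configuration, not Pushbullet
addrs = socket.getaddrinfo(host, 443)
elapsed = time.time() - start
print("resolved %d addresses in %.1f seconds" % (len(addrs), elapsed))
if elapsed > 5:
    print("name resolution is slow - check the router's IPv6/DNS settings")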
Sorry about this false alarm.

Poloniex & websockets

===SIMPLE & SHORT===
Does anybody have a working application that talks to Poloniex through WAMP these days (January 2018)?
===MORE SPECIFIC===
I used several sources of information to try to make it work with the combination of Autobahn-Cpp and C++ on Windows 10.
I was able to connect to wss://api.poloniex.com, realm1. I was also able to subscribe and get a subscription ID. But I never got any events, even though everything was established.
===RESEARCH===
During research on the web I found a lot of contradictory information:
1. Claims that wss://api2.poloniex.com should be used, and that channel names are actually numbers - How to connect to poloniex.com websocket api using a python library
2. This answer gave me the base code, but I am not getting anything more than just a connection; also, according to this answer, wss://api.poloniex.com is the correct address - Connecting to Poloniex Push-API
3. I saw a post (sorry, lost the link) with comments saying that the websocket implementation is basically broken on Poloniex. They were posted 6 months ago.
===SPECS===
1. Windows 10
2. Autobahn-Cpp
3. wss://api.poloniex.com:443 ; realm1
4. Different subscriptions: ticker, BTC_ETH, 148, 1002, etc.
5. Source code I got from here
===WILL HELP AS WELL===
Is there any way to get all valid subscriptions, or perhaps those that have more than 0 subscribers? I mean, does WAMP have a way to do that?
Are there any known issues with the Autobahn-Cpp and Poloniex combination?
Is there any simpler way to test WAMP elsewhere, to make sure Autobahn isn't the problem? For example, any other well-documented and supported online project that accepts WAMP websocket communication?
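Regarding that last question: one hedged way to test the WAMP stack away from Poloniex is to run a local Crossbar.io router and subscribe to a test topic with autobahn-python; if events arrive there, the WAMP client side is fine. A minimal sketch, where the router URL, realm, and topic are assumptions for a local test setup, not Poloniex values:
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner
class TestSession(ApplicationSession):
    async def onJoin(self, details):
        # print every event published to the test topic
        def on_event(*args, **kwargs):
            print("event received:", args, kwargs)
        await self.subscribe(on_event, u"com.example.test")
        print("subscribed, waiting for events...")
if __name__ == "__main__":
    # assumes a local Crossbar.io router listening on port 8080, realm "realm1"
    runner = ApplicationRunner(u"ws://localhost:8080/ws", u"realm1")
    runner.run(TestSession)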
I can receive the correct tick/order book data from wss://api2.poloniex.com using python3, but sometimes channel 1002 may stop sending new tick info.
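If channel 1002 does go quiet, one hedged workaround is a watchdog that closes and reopens the connection when nothing has arrived for a while. A sketch using the websocket-client package; run_with_watchdog and the 60-second silence_limit are names and values of mine, not anything Poloniex documents:
import json
import threading
import time
import websocket
last_message = [time.time()]
def on_message(ws, message):
    last_message[0] = time.time()
    print(json.loads(message))
def on_open(ws):
    ws.send(json.dumps({'command': 'subscribe', 'channel': 1002}))
def run_with_watchdog(silence_limit=60):
    ws = websocket.WebSocketApp("wss://api2.poloniex.com/",
                                on_message=on_message)
    ws.on_open = on_open
    done = threading.Event()
    def watchdog():
        # close the socket if nothing arrives for silence_limit seconds,
        # which makes run_forever() return so the caller can reconnect
        while not done.wait(5):
            if time.time() - last_message[0] > silence_limit:
                ws.close()
                return
    threading.Thread(target=watchdog).start()
    ws.run_forever()
    done.set()
while True:
    last_message[0] = time.time()
    run_with_watchdog()
    time.sleep(5)  # brief back-off before reconnecting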
wss://api.poloniex.com:443 ; realm1
This may be the issue, as I've been using api2. Here is the code that works, and has been working for the past two quarters non-stop. It's in Python, but should be easy enough to port to C++.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import websocket
import json
def on_error(ws, error):
    print(error)
def on_close(ws):
    print("### closed ###")
def on_open(ws):
    print("ONOPEN")
    ws.send(json.dumps({'command': 'subscribe', 'channel': 'BTC_ETH'}))
def on_message(ws, message):
    message = json.loads(message)
    print(message)
websocket.enableTrace(True)
ws = websocket.WebSocketApp("wss://api2.poloniex.com/",
                            on_message=on_message,
                            on_error=on_error,
                            on_close=on_close)
ws.on_open = on_open
ws.run_forever()
The code is pretty much self-explanatory (you can check all channels/pairs on the Poloniex API website); just save it and run it in a terminal:
python3 fileName.py
It should provide you with a raw BTC_ETH stream of orders and trades on console output.
Playing with the messages/subscriptions, you can then do as you please with it.
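As a small variation tied to the numeric channels discussed above, subscribing to the ticker only needs a different on_open handler; a hedged sketch:
def on_open(ws):
    # 1002 is the numeric ticker channel discussed above; numeric
    # currency-pair channels (e.g. 148) can be subscribed to the same way
    ws.send(json.dumps({'command': 'subscribe', 'channel': 1002}))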
It seems that websockets on Poloniex are unstable. Therefore I can stop my attempts to make Autobahn-Cpp work with it, at least for now, and move on.

Python 2.7: How to track declining RAM?

Data is updated every 5 minutes, and every 5 minutes a Python script I wrote is run. The data is related to signals, and when the data says a signal is True, the signal name is shown in a PyQt GUI that I have.
In other words, the GUI is always on my screen; every 5 minutes its "main" function is triggered, and its job is to check the database of signals against the newly downloaded data. I leave this GUI open for hours and days at a time, and the computer always crashes. Random Python modules get corrupted (pandas can't import this, or numpy can't import that) and I have to reinstall Python and all the packages.
I have a hypothesis that this is related to the program being open for a long time and using up more and more memory, which eventually crashes the computer when the memory runs out.
How would I test this hypothesis? If I can just show that with every 5-min run the available memory decreases, then it would suggest that my hypothesis might be correct.
Here is the code that reruns the "main" function every 5 min:
import sys
from PyQt4 import QtGui, QtCore
class Editor(QtGui.QMainWindow):
    # my app
    pass
def main():
    app = QtGui.QApplication(sys.argv)
    ex = Editor()
    milliseconds_autocheck_frequency = 300000  # number of milliseconds in 5 min
    timer = QtCore.QTimer()
    timer.timeout.connect(ex.run)
    timer.start(milliseconds_autocheck_frequency)
    sys.exit(app.exec_())
if __name__ == '__main__':
    main()
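One way to test the hypothesis: log the process's own memory use on every timer tick and see whether it climbs run after run. A minimal sketch, assuming the psutil package (not part of the question's code); log_memory is a name I'm introducing:
import os
import time
import psutil
_process = psutil.Process(os.getpid())
def log_memory():
    # resident set size in MB, sampled on every 5-minute tick
    rss_mb = _process.memory_info().rss / 1024.0 / 1024.0
    with open('memory_log.txt', 'a') as f:
        f.write('%s  %.1f MB\n' % (time.strftime('%Y-%m-%d %H:%M:%S'), rss_mb))
# in main(), connect the same timer to the logger as well:
#   timer.timeout.connect(log_memory)
#   timer.timeout.connect(ex.run)
If the logged figure climbs steadily from run to run, that supports the leak hypothesis; if it stays flat, the crashes are probably coming from somewhere else.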

Python 2.7 socket.gethostbyaddr timeout before throwing socket.herror

I have the following code sample
import socket
try:
    sock = socket
    sock.setdefaulttimeout(1)
    for result in sock.gethostbyaddr("165.139.149.169"):
        if result and "[" not in str(result):
            print str(result)
except socket.herror:
    print("Host Not Found")
which works as part of a network-discovery-type POC that I'm building (mostly to learn Python). As I said, the code works, but when an address has no DNS record it takes forever. Is there a way to change the timeout of the sock.gethostbyaddr() method so that it raises the host-not-found error sooner?
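socket.setdefaulttimeout() applies to socket objects, so it generally does not shorten the reverse-DNS lookup itself; one common workaround is to run the lookup in a worker thread and give up after a fixed wait. A minimal Python 2.7 sketch; reverse_lookup and its 2-second default are my own choices:
import socket
import threading
def reverse_lookup(ip, timeout=2.0):
    """Return the gethostbyaddr() tuple, or None if the lookup is too slow."""
    result = []
    def worker():
        try:
            result.append(socket.gethostbyaddr(ip))
        except socket.herror:
            pass  # no PTR record for this address
    t = threading.Thread(target=worker)
    t.daemon = True  # don't keep the process alive waiting for a slow lookup
    t.start()
    t.join(timeout)  # wait at most `timeout` seconds
    return result[0] if result else None
info = reverse_lookup("165.139.149.169")
if info:
    print info[0]
else:
    print "Host Not Found"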

Simple libtorrent Python client

I tried creating a simple libtorrent Python client (for a magnet URI), and I failed; the program never continues past "downloading metadata".
If you could help me write a simple client, it would be amazing.
P.S. When I choose a save path, is the save path the folder in which I want my data to be saved, or the path for the data itself?
(I used code someone posted here.)
import libtorrent as lt
import time
ses = lt.session()
ses.listen_on(6881, 6891)
params = {
    'save_path': '/home/downloads/',
    'storage_mode': lt.storage_mode_t(2),
    'paused': False,
    'auto_managed': True,
    'duplicate_is_error': True}
link = "magnet:?xt=urn:btih:4MR6HU7SIHXAXQQFXFJTNLTYSREDR5EI&tr=http://tracker.vodo.net:6970/announce"
handle = lt.add_magnet_uri(ses, link, params)
ses.start_dht()
print 'downloading metadata...'
while (not handle.has_metadata()):
    time.sleep(1)
print 'got metadata, starting torrent download...'
while (handle.status().state != lt.torrent_status.seeding):
    s = handle.status()
    state_str = ['queued', 'checking', 'downloading metadata',
                 'downloading', 'finished', 'seeding', 'allocating']
    print '%.2f%% complete (down: %.1f kB/s up: %.1f kB/s peers: %d) %s %.3f MB' % \
        (s.progress * 100, s.download_rate / 1000, s.upload_rate / 1000,
         s.num_peers, state_str[s.state], s.total_download / 1000000)
    time.sleep(5)
What happens is that the first while loop becomes infinite because the state does not change.
You have to add s = handle.status() inside the first loop; by polling the status, the metadata state is picked up and the loop stops. Alternatively, put the first while loop inside the other one so that the same thing happens.
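As a sketch of what that looks like, with the status polled on each pass of the metadata wait loop:
print 'downloading metadata...'
while not handle.has_metadata():
    s = handle.status()  # poll the status on each pass, as suggested above
    time.sleep(1)
print 'got metadata, starting torrent download...'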
Yes, the save path you specify is the one that the torrents will be downloaded to.
As for the metadata downloading part, I would add the following extensions first:
ses.add_extension(lt.create_metadata_plugin)
ses.add_extension(lt.create_ut_metadata_plugin)
Second, I would add a DHT bootstrap node:
ses.add_dht_router("router.bittorrent.com", 6881)
Finally, I would begin debugging the application by seeing if my network interface is binding or if any other errors come up (my experience with BitTorrent download problems, in general, is that they are network related). To get an idea of what's happening I would use libtorrent-rasterbar's alert system:
ses.set_alert_mask(lt.alert.category_t.all_categories)
And make a thread (with the following code) to collect the alerts and display them:
while True:
    ses.wait_for_alert(500)
    alert = ses.pop_alert()
    if not alert:
        continue
    print "[%s] %s" % (type(alert), alert.__str__())
Even with all this working correctly, make sure that the torrent you are trying to download actually has peers. Even if there are a few peers, none may be configured correctly or support metadata exchange (exchanging metadata is not a standard BitTorrent feature). Try to load a torrent file (which doesn't require downloading metadata) and see if you can download successfully (to rule out some network issues).
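A hedged sketch of that last check, loading a local .torrent file instead of a magnet link (the file path is a placeholder, and this reuses the same older libtorrent Python API as the question's code):
import libtorrent as lt
import time
ses = lt.session()
ses.listen_on(6881, 6891)
# no metadata exchange is needed: the .torrent file already contains the metadata
info = lt.torrent_info('/path/to/some.torrent')
handle = ses.add_torrent({'ti': info, 'save_path': '/home/downloads/'})
while handle.status().state != lt.torrent_status.seeding:
    s = handle.status()
    print '%.2f%% complete (peers: %d)' % (s.progress * 100, s.num_peers)
    time.sleep(5)
print 'done'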

Django: Gracefully restart nginx + fastcgi sites to reflect code changes?

Common situation: I have a client on my server who may update some of the code in his Python project. He can ssh into his shell and pull from his repository, and all is fine -- but the code is loaded in memory (as far as I know), so I need to actually kill the FastCGI process and restart it for the code change to take effect.
I know I can gracefully restart FastCGI, but I don't want to have to do this manually. I want my client to update the code and, within 5 minutes or so, have the new code running under the FastCGI process.
Thanks
First off, if uptime is important to you, I'd suggest making the client do it. It can be as simple as giving him a command called deploy-code. With your polling method, if there is an error in the code, fixing it requires up to another 10-minute turnaround (read: downtime), assuming he gets it right the second time.
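A hedged sketch of what such a deploy-code command might look like, written in Python like the rest of this section; the project path and restart command are placeholders for whatever the real FastCGI setup uses:
#!/usr/bin/env python
# hypothetical "deploy-code" script the client runs after pushing changes;
# PROJECT_DIR and RESTART_CMD are placeholders for the real setup
import subprocess
PROJECT_DIR = '/home/client/project'
RESTART_CMD = ['/etc/init.d/myproject-fcgi', 'restart']  # placeholder
subprocess.check_call(['git', 'pull'], cwd=PROJECT_DIR)
subprocess.check_call(RESTART_CMD)
print "code updated and FastCGI process restarted"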
That said, if you actually want to do this, you should create a daemon that looks for files modified within the last 5 minutes. If it detects one, it executes the restart command.
Code might look something like:
import os, time
CODE_DIR = '/tmp/foo'
while True:
    time.sleep(5 * 60)  # check every 5 minutes
    restarted = False
    for root, dirs, files in os.walk(CODE_DIR):
        if restarted:
            break
        for filename in files:
            if restarted:
                break
            updated_on = os.path.getmtime(os.path.join(root, filename))
            current_time = time.time()
            if current_time - updated_on <= 6 * 60:  # 6 min
                # 6 min could offer false negatives, but that's better
                # than false positives
                restarted = True
                print "We should execute the restart command here."