Verify Bitcoin transfer from one address to another address in Django

I have a website which provides a selling platform for individuals. Each individual registers with his Bitcoin address and has to input his transaction ID after each transaction.
My code -
import urllib
import re

urlr = "https://blockchain.info/q/txresult/" + hash + "/" + receiver.bitcoin_account
urls = "https://blockchain.info/q/txresult/" + hash + "/" + sender.bitcoin_account
try:
    res = urllib.urlopen(urls)
    resread = res.read()
    sen = urllib.urlopen(urlr)
    senread = sen.read()
except IOError:
    resread = ""
    senread = ""
try:
    resread = int(resread)
    senread = int(senread)
    if resread >= 5000000 and senread != 0:
        ...
Please, I need a better solution if I can get one.

You may get a better result if you run bitcoind yourself, and do not rely on blockchain.info's API. Simply start bitcoind with the following options:
bitcoind -txindex -server
If you have already synced with the network before, you might need to include -reindex the first time.
You will then be able to use the JSON-RPC interface to query for transactions:
bitcoin-cli getrawtransaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
Better yet, you can use the python-bitcoinlib library to query and parse the transaction without shelling out to bitcoin-cli.
from binascii import unhexlify
from bitcoin.rpc import Proxy

p = Proxy("http://rpcuser:rpcpass@127.0.0.1:8332")
h = unhexlify("4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b")
print(p.gettransaction(h))
That gives you direct access to a local copy of the Bitcoin blockchain without having to trust blockchain.info, and it should be faster and more scalable.
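For the original use case of checking how much a transaction paid to a given address, a minimal sketch along these lines could replace the blockchain.info txresult query (this assumes python-bitcoinlib and a local bitcoind started with -txindex; the RPC credentials and the helper name are placeholders):
from bitcoin.rpc import Proxy
from bitcoin.wallet import CBitcoinAddress
from bitcoin.core import lx

def amount_paid_to(txid_hex, address_str):
    # Placeholder RPC credentials; point this at your own bitcoind
    p = Proxy("http://rpcuser:rpcpass@127.0.0.1:8332")
    # lx() handles the byte-order reversal expected for txids
    tx = p.getrawtransaction(lx(txid_hex))
    target_script = CBitcoinAddress(address_str).to_scriptPubKey()
    # Sum the outputs that pay the given address; values are in satoshis
    return sum(out.nValue for out in tx.vout if out.scriptPubKey == target_script)
The sum is in satoshis, so it maps directly onto the resread >= 5000000 check in the question.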

Related

How do I move/copy files in s3 using boto3 asynchronously?

I understand that boto3's Object.copy_from(...) uses threads but is not asynchronous. Is it possible to make this call asynchronous? If not, is there another way to accomplish this using boto3? I'm finding that moving hundreds or thousands of files is fine, but when I'm processing hundreds of thousands of files it gets extremely slow.
You can have a look at aioboto3. It is a third party library, not created by AWS, but it provides asyncio support for selected (not all) AWS API calls.
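As a rough illustration, an async copy loop with aioboto3 might look like the sketch below; the bucket and key names are placeholders, and copy_object is the plain S3 API call rather than the managed transfer helper:
import asyncio
import aioboto3

async def copy_keys(pairs, from_bucket, to_bucket):
    session = aioboto3.Session()
    async with session.client("s3") as s3:
        async def copy_one(from_key, to_key):
            await s3.copy_object(
                CopySource={"Bucket": from_bucket, "Key": from_key},
                Bucket=to_bucket,
                Key=to_key,
            )
        # Fire the copies concurrently instead of one request at a time
        await asyncio.gather(*(copy_one(f, t) for f, t in pairs))

# asyncio.run(copy_keys([("a.txt", "copy/a.txt")], "from-bucket", "to-bucket"))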
I use the following. You can copy it into a Python file and run it from the command line. I have a PC with 8 cores, so it's faster than my little EC2 instance with 1 vCPU.
It uses the multiprocessing library, so you'd want to read up on that if you aren't familiar with it. It's relatively straightforward. There's a batch delete that I've commented out, because you really don't want to accidentally delete the wrong directory. You can use whatever methods you want to list the keys or iterate through the objects, but this works for me.
from multiprocessing import Pool
from itertools import repeat
import boto3
import os
import math

s3sc = boto3.client('s3')
s3sr = boto3.resource('s3')
num_proc = os.cpu_count()

def get_list_of_keys_from_prefix(bucket, prefix):
    """gets list of keys for given bucket and prefix"""
    keys_list = []
    paginator = s3sr.meta.client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter='/'):
        keys = [content['Key'] for content in page.get('Contents')]
        keys_list.extend(keys)
    if prefix in keys_list:
        keys_list.remove(prefix)
    return keys_list

def batch_delete_s3(keys_list, bucket):
    total_keys = len(keys_list)
    chunk_size = 1000
    num_batches = math.ceil(total_keys / chunk_size)
    for b in range(0, num_batches):
        batch_to_delete = []
        for k in keys_list[chunk_size*b:chunk_size*b+chunk_size]:
            batch_to_delete.append({'Key': k})
        s3sc.delete_objects(Bucket=bucket, Delete={'Objects': batch_to_delete, 'Quiet': True})

def copy_s3_to_s3(from_bucket, from_key, to_bucket, to_key):
    copy_source = {'Bucket': from_bucket, 'Key': from_key}
    s3sr.meta.client.copy(copy_source, to_bucket, to_key)

def upload_multiprocess(from_bucket, keys_list_from, to_bucket, keys_list_to, num_proc=4):
    with Pool(num_proc) as pool:
        r = pool.starmap(copy_s3_to_s3, zip(repeat(from_bucket), keys_list_from, repeat(to_bucket), keys_list_to), 15)
        pool.close()
        pool.join()
    return r

if __name__ == '__main__':
    __spec__ = None
    from_bucket = 'from-bucket'
    from_prefix = 'from/prefix/'
    to_bucket = 'to-bucket'
    to_prefix = 'to/prefix/'
    keys_list_from = get_list_of_keys_from_prefix(from_bucket, from_prefix)
    keys_list_to = [to_prefix + k.rsplit('/')[-1] for k in keys_list_from]
    rs = upload_multiprocess(from_bucket, keys_list_from, to_bucket, keys_list_to, num_proc=num_proc)
    # batch_delete_s3(keys_list_from, from_bucket)
I think you can use boto3 along with Python threads to handle such cases. The AWS S3 docs mention:
Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket.
So you can make up to 3,500 copy requests per second per prefix; nothing can override this limit set by AWS.
By using threads you need only around 300 batches of calls.
In the worst case that takes about 5 hours, i.e. assuming your files are large and each one takes about a minute to upload on average.
Note: Running more threads consumes more resources on your machine. You must be sure that your machine has enough resources to support the maximum number of concurrent requests that you want.
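A minimal sketch of that threaded approach with boto3 (the bucket names, keys, and worker count are placeholders):
from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.resource('s3')

def copy_key(key):
    # copy() is the managed transfer helper, so large objects are handled for you
    s3.meta.client.copy({'Bucket': 'from-bucket', 'Key': key}, 'to-bucket', key)

keys = ['prefix/a.txt', 'prefix/b.txt']  # replace with your real key listing
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(copy_key, keys))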

Dynamic DNS updates using Python's dns.update returning rcode REFUSED

I am trying to make a simple DNS update using Python's dns.update. However, every time I run the script I get "rcode REFUSED". I have tried a series of different permutations but can't seem to figure out where I am going wrong. I am able to use this key directly with nsupdate and make changes.
I am running this on Python 2.7
My key looks like this
key test.testdomain.com. {
    algorithm HMAC-MD5;
    secret "5MbEv7VrELN7ztkNMGSUvfimpoLAEzdmDzAHE9X4ax0ZDxiYnz1rkIx29SQru2AHQ3XbRBHmY7EQ/xD/2FocCA==";
};
Here is my code, I have hard-coded it all for the purpose of troubleshooting.
import sys
import dns.update
import dns.query
import dns.tsigkeyring
import dns.resolver

def main():
    UpdateDNS()

####################################################################################################################
def UpdateDNS():
    # set zone and dnsserver
    zone = 'testdomain.com'
    dnshostname = 'dns-test.testdomain.com'
    keyring = dns.tsigkeyring.from_text({'test.testdomain.com.': '5MbEv7VrELN7ztkNMGSUvfimpoLAEzdmDzAHE9X4ax0ZDxiYnz1rkIx29SQru2AHQ3XbRBHmY7EQ/xD/2FocCA=='})
    update = dns.update.Update(zone, keyring=keyring, keyalgorithm='hmac-md5.sig-alg.reg.int')
    update.add('foo.testdomain.com', 8600, 'A', '179.33.72.36')
    response = dns.query.tcp(update, 'dns-test.testdomain.com')
    print response

#########################################################
# Main
#########################################################
if __name__ == '__main__':
    main()
Here is my response
id 45721
opcode UPDATE
rcode REFUSED
flags QR RA
;ZONE
testdomain.com. IN SOA
;PREREQ
;UPDATE
;ADDITIONAL
Generally your code looks OK to me. I just tested essentially the same code on my name server and it works like a charm.
Did you allow updates with the TSIG key for the zone you're trying to update? There should be something like this in your BIND config (it's probably there, since you wrote that you can use the key manually, but just to make sure):
zone "testdomain.com" IN {
type master;
[...]
allow-update {
key "test.testdomain.com.";
};
};
What do the name server logs say when you run your update script? Normally there should be a reason for rejecting the update:
view internal: signer "test-key" denied
view internal: request has invalid signature: TSIG test-key: tsig verify failure (BADKEY)
The former would indicate that the key is not allowed to update the zone, the latter that the key itself wasn't accepted (though that would also have resulted in an exception when running the code).
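For comparison, here is a minimal dnspython sketch of the same update, using the dns.tsig.HMAC_MD5 constant for the key algorithm and printing the response code (the server IP is a placeholder):
import dns.update
import dns.query
import dns.tsigkeyring
import dns.tsig

keyring = dns.tsigkeyring.from_text({
    'test.testdomain.com.': '5MbEv7VrELN7ztkNMGSUvfimpoLAEzdmDzAHE9X4ax0ZDxiYnz1rkIx29SQru2AHQ3XbRBHmY7EQ/xD/2FocCA=='
})
update = dns.update.Update('testdomain.com', keyring=keyring,
                           keyalgorithm=dns.tsig.HMAC_MD5)
update.add('foo', 8600, 'A', '179.33.72.36')
# Point this at the primary (master) name server for the zone
response = dns.query.tcp(update, '192.0.2.53')
print response.rcode()  # 0 means NOERROR, i.e. the update was accepted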

Google api limitation overcome

Hi all, we are using a Google API, e.g. 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query, from a Python script, but it gets blocked very quickly. Is there any workaround for this? Thank you.
Below is my current code.
#!/usr/bin/env python
import math, sys
import json
import urllib

def gsearch(searchfor):
    query = urllib.urlencode({'q': searchfor})
    url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
    search_response = urllib.urlopen(url)
    search_results = search_response.read()
    results = json.loads(search_results)
    data = results['responseData']
    return data

args = sys.argv[1:]
m = 45000000000
if len(args) != 2:
    print "need two words as arguments"
    sys.exit(1)
n0 = int(gsearch(args[0])['cursor']['estimatedResultCount'])
n1 = int(gsearch(args[1])['cursor']['estimatedResultCount'])
n2 = int(gsearch(args[0] + " " + args[1])['cursor']['estimatedResultCount'])
The link doesn't work, and there is no code here, so all I can suggest is finding out from the API what the limits are, and delaying your requests appropriately. Alternatively, you can probably pay for less restricted API usage.
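If you do throttle the requests, a rough sketch of spacing them out could look like this (the interval value is arbitrary):
import time
import urllib

MIN_INTERVAL = 2.0  # arbitrary pause between requests, in seconds
_last_request = [0.0]

def throttled_urlopen(url):
    # Make sure at least MIN_INTERVAL seconds pass between consecutive requests
    wait = MIN_INTERVAL - (time.time() - _last_request[0])
    if wait > 0:
        time.sleep(wait)
    _last_request[0] = time.time()
    return urllib.urlopen(url)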
Link is bad.
Usually you can overcome this by paying for use.

Python library to access a CalDAV server

I run ownCloud on my webspace for a shared calendar. Now I'm looking for a suitable Python library to get read-only access to the calendar. I want to put some information from the calendar on an intranet website.
I have tried http://trac.calendarserver.org/wiki/CalDAVClientLibrary but it always returns a NotImplementedError with the query command, so my guess is that the query command doesn't work well with the given library.
What library could I use instead?
I recommend the caldav library.
Read-only access works really well with this library and looks straightforward to me. It does the whole job of getting calendars and reading events, returning them in the iCalendar format. More information about the caldav library can be found in its documentation.
import caldav

client = caldav.DAVClient(<caldav-url>, username=<username>,
                          password=<password>)
principal = client.principal()
for calendar in principal.calendars():
    for event in calendar.events():
        ical_text = event.data
From there you can use the icalendar library to read specific fields such as the type (e.g. event, todo, alarm), name, times, etc.; a good starting point may be this question.
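As a rough sketch of that parsing step, assuming the ical_text fetched above and the icalendar package:
from icalendar import Calendar

cal = Calendar.from_ical(ical_text)
for component in cal.walk():
    if component.name == "VEVENT":
        # summary is the event title; dtstart.dt is a date or datetime object
        print(component.get("summary"), component.get("dtstart").dt)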
I wrote this code a few months ago to fetch data from CalDAV and present it on my website.
I have converted the data into JSON format, but you can do whatever you want with the data.
I have added some print statements so you can see the output; you can remove them in production.
from datetime import datetime
import json
from pytz import UTC  # timezone
import caldav
from icalendar import Calendar, Event

# CalDAV info
url = "YOUR CALDAV URL"
userN = "YOUR CALDAV USERNAME"
passW = "YOUR CALDAV PASSWORD"

client = caldav.DAVClient(url=url, username=userN, password=passW)
principal = client.principal()
calendars = principal.calendars()

if len(calendars) > 0:
    calendar = calendars[0]
    print("Using calendar", calendar)

    results = calendar.events()

    eventSummary = []
    eventDescription = []
    eventDateStart = []
    eventdateEnd = []
    eventTimeStart = []
    eventTimeEnd = []

    for eventraw in results:
        event = Calendar.from_ical(eventraw._data)
        for component in event.walk():
            if component.name == "VEVENT":
                print(component.get('summary'))
                eventSummary.append(component.get('summary'))
                print(component.get('description'))
                eventDescription.append(component.get('description'))
                startDate = component.get('dtstart')
                print(startDate.dt.strftime('%m/%d/%Y %H:%M'))
                eventDateStart.append(startDate.dt.strftime('%m/%d/%Y'))
                eventTimeStart.append(startDate.dt.strftime('%H:%M'))
                endDate = component.get('dtend')
                print(endDate.dt.strftime('%m/%d/%Y %H:%M'))
                eventdateEnd.append(endDate.dt.strftime('%m/%d/%Y'))
                eventTimeEnd.append(endDate.dt.strftime('%H:%M'))
                dateStamp = component.get('dtstamp')
                print(dateStamp.dt.strftime('%m/%d/%Y %H:%M'))
                print('')

    # Modify or change these values based on your CalDAV
    # Converting to JSON
    data = [{'Events Summary': eventSummary[0], 'Event Description': eventDescription[0], 'Event Start date': eventDateStart[0], 'Event End date': eventdateEnd[0], 'At:': eventTimeStart[0], 'Until': eventTimeEnd[0]}]
    data_string = json.dumps(data)
    print('JSON:', data_string)
pyOwnCloud could be the right thing for you. I haven't tried it, but it should provide a command-line interface/API for reading the calendars.
You probably want to provide more details about how you are actually making use of the API, but in case the query command is indeed not implemented, there is a list of other Python libraries at the CalConnect website (archived version; the original link is dead now).

GQL Queries - Retrieving specific data from query object

I'm building a database using Google Datastore. Here is my model...
class UserInfo(db.Model):
    name = db.StringProperty(required=True)
    password = db.StringProperty(required=True)
    email = db.StringProperty(required=False)
...and below is my GQL query. How would I go about retrieving the user's password and ID from the user_data object? I've gone through all the Google documentation, found it hard to follow, and have spent ages trying various things I've read online, but nothing has helped! I'm on Python 2.7.
user_data = db.GqlQuery('SELECT * FROM UserInfo WHERE name=:1', name_in)
user_info = user_data.get()
This is basic Python.
From the query, you get a UserInfo instance, which you have stored in the user_info variable. You can access the data of an instance via dot notation: user_info.password and user_info.email.
If this isn't clear, you really should do a basic Python tutorial before going any further.
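For completeness, a short sketch of pulling out the two fields the question asks for (the password and the datastore ID), using the question's user_data query and the standard db.Model key() API:
user_info = user_data.get()
if user_info is not None:
    password = user_info.password   # property access via dot notation
    user_id = user_info.key().id()  # the numeric ID assigned by the datastore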
You are almost there. Treat the query result like an object and access its attributes:
name = user_info.name
The documentation on queries here gives some examples.
Here are some Python tips that might help you:
dir(user_info)
help(user_info)
You can also print almost anything, for example:
print user_data[0]
print user_data[0].name
Set up logging for your app (logging in Python, etc.).