How to convert a list into a DataFrame in Python (Binance Futures API)

Using the Binance Futures API, I am trying to get my cryptocurrency positions in a usable form.
Using the code
from binance_f import RequestClient
request_client = RequestClient(api_key=my_key, secret_key=my_secret_key)
result = request_client.get_position()
I get the following result
[{"symbol":"BTCUSDT","positionAmt":"0.000","entryPrice":"0.00000","markPrice":"5455.13008723","unRealizedProfit":"0.00000000","liquidationPrice":"0","leverage":"20","maxNotionalValue":"5000000","marginType":"cross","isolatedMargin":"0.00000000","isAutoAddMargin":"false"}]
type() indicates it is a list; however, adding print(result) at the end of the code yields:
[<binance_f.model.position.Position object at 0x1135cb670>]
This is baffling because it does not seem to be a plain list (in fact, the debugger shows an object of type Position). Using PrintMix.print_data(result) yields:
data number 0 :
entryPrice:0.0
isAutoAddMargin:True
isolatedMargin:0.0
json_parse:<function Position.json_parse at 0x1165af820>
leverage:20.0
liquidationPrice:0.0
marginType:cross
markPrice:5442.28502271
maxNotionalValue:5000000.0
positionAmt:0.0
symbol:BTCUSDT
unrealizedProfit:0.0
Now it looks like JSON... but it is a list. I am confused - any ideas how I can convert result to a proper DataFrame, so that the columns are symbol, positionAmt, entryPrice, etc.?
Thanks!

Your main question is already answered in your title, so there is no need to be confused: in your case you have a list of Position objects, and you can see the structure of Position in the library's GitHub repository.
Anyway, to answer the question, please use the following:
df = pd.DataFrame([t.__dict__ for t in result])
For more options and information, please read the great answers to this question.
Good Luck!
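For completeness, a slightly fuller sketch of the same approach (assuming result is the list of Position objects shown above; the json_parse entry seen in the PrintMix output is a bound method rather than data, so it is dropped if present):
import pandas as pd

# One row per Position object, built from its attribute dictionary
df = pd.DataFrame([p.__dict__ for p in result])

# Drop the non-data attribute seen in the PrintMix output, if it exists
df = df.drop(columns=['json_parse'], errors='ignore')

print(df[['symbol', 'positionAmt', 'entryPrice', 'markPrice']])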

You can use this:
df = pd.DataFrame([t.__dict__ for t in result])
klines = df.values.tolist()
# Indexing by position assumes a fixed column order in the DataFrame
open = [float(entry[1]) for entry in klines]
high = [float(entry[2]) for entry in klines]
low = [float(entry[3]) for entry in klines]
close = [float(entry[4]) for entry in klines]
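If the goal is just to pull out individual fields, selecting DataFrame columns by name avoids depending on column order (a small sketch, assuming the df built above):
# Select fields by column name rather than by list position
entry_prices = df['entryPrice'].astype(float).tolist()
mark_prices = df['markPrice'].astype(float).tolist()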

Related

Why does the output of model.wv.similarity() in Word2Vec differ from model.wv.most_similar()?

I have trained a Word2Vec model and am trying to use it.
When I ask for the most similar words to '动力', I get output like this:
动力系统 0.6429724097251892
驱动力 0.5936785936355591
动能 0.5788494348526001
动力车 0.5579575300216675
引擎 0.5339343547821045
推动力 0.5152761936187744
扭力 0.501279354095459
新动力 0.5010953545570374
支撑力 0.48610919713974
精神力量 0.47970670461654663
But the problem is that when I call model.wv.similarity('动力','动力系统') I get the result 0.0, which is not equal to
0.6429724097251892
What confuses me even more is that when I then check the similarity of '动力' and '驱动力', it shows
3.689349e+19
So why? Have I misunderstood what similarity means? I need someone to tell me!
And the code is:
res = model.wv.most_similar('动力')
for r in res:
    print(r[0], r[1])
print(model.wv.similarity('动力','动力系统'))
print(model.wv.similarity('动力','驱动力'))
print(model.wv.similarity('动力','动能'))
output:
动力系统 0.6429724097251892
驱动力 0.5936785936355591
动能 0.5788494348526001
动力车 0.5579575300216675
引擎 0.5339343547821045
推动力 0.5152761936187744
扭力 0.501279354095459
新动力 0.5010953545570374
支撑力 0.48610919713974
精神力量 0.47970670461654663
0.0
3.689349e+19
2.0
I have written a function to replace the model.wv.similarity method.
def Similarity(w1, w2, model):
    # Cosine similarity between the two word vectors
    A = model[w1]
    B = model[w2]
    return sum(A * B) / (pow(sum(pow(A, 2)), 0.5) * pow(sum(pow(B, 2)), 0.5))
Here w1 and w2 are the words you input and model is the Word2Vec model you have trained.
Calling the similarity method directly on the model is deprecated. It contains a bit of extra logic that normalizes the vectors before computing the result.
You should be using wv directly because, as stated in the documentation, it does not matter for the word vectors how they were trained, so they should be treated as an independent structure; the model is just the means of obtaining them.
Here is a short discussion that should give you starting points if you want to investigate further.
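As a minimal sketch of working with the vectors independently of the model (the filename here is just an example):
from gensim.models import KeyedVectors

# Save only the trained vectors; the full model is not needed for similarity queries
model.wv.save('vectors.kv')

wv = KeyedVectors.load('vectors.kv')
print(wv.similarity('动力', '动力系统'))
print(wv.most_similar('动力', topn=3))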
It may be an encoding issue, where you are not actually comparing the same tokens.
Try the following, to see if it gives results closer to what you expect.
res = model.wv.most_similar('动力')
for r in res:
    print(r[0], r[1])
print(model.wv.similarity('动力', res[0][0]))
print(model.wv.similarity('动力', res[1][0]))
print(model.wv.similarity('动力', res[2][0]))
If it does, you could look further into why the model might be reporting strings which print as 动力系统 (etc), but don't match your typed-in-code string literals like '动力系统' (etc). For example:
print(res[0][0]=='动力系统')
print(type(res[0][0]))
print(type('动力系统'))
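One way to probe the encoding hypothesis further (purely a diagnostic sketch; s1 and s2 are illustrative names) is to compare the Unicode normalization forms and code points of the two strings:
import unicodedata

s1 = res[0][0]     # the string reported by most_similar()
s2 = '动力系统'     # the string typed in code
print(unicodedata.normalize('NFC', s1) == unicodedata.normalize('NFC', s2))
print([hex(ord(c)) for c in s1])
print([hex(ord(c)) for c in s2])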

Trying to import a dictionary to work with from a URL; 'unicode' object not callable

I'm new to coding and have searched as best I can to find out how to solve this before asking.
I'm trying to pull information from the poloniex.com REST API, which I believe is in JSON format. I can import the data and work with it a little bit, but when I try to call and use the elements in the contained dictionaries, I get "'unicode' object not callable". How can I use this information? The end goal with this data is to pull the "BTC" (volume) for each coin pair, test whether it is < 100, and if not, append it to a new list.
The data is presented like this, or you can see for yourself at https://poloniex.com/public?command=return24hVolume:
{"BTC_LTC":{"BTC":"2.23248854","LTC":"87.10381314"},"BTC_NXT":{"BTC":"0.981616","NXT":"14145"}, ... "totalBTC":"81.89657704","totalLTC":"78.52083806"}
The code I've been trying to get to work currently looks like this (I've tried to iterate over the information I want a million different ways, so I don't know what example to give for that part, but this is how I am importing the data):
import json
import urllib2

returnvolume = urllib2.urlopen(urllib2.Request('https://poloniex.com/public?command=return24hVolume'))
coinvolume = json.loads(returnvolume.read())
coinvolume = dict(coinvolume)
No matter how I try to use the data I've pulled, I get an error stating:
'unicode' object not callable
I'd really appreciate a little help, I'm concerned I may be approaching this the wrong way as I haven't been able to get anything to work, or maybe I'm just missing something rudimentary, I'm not sure.
Thank you very much for your time!
Thanks to another user, downshift, I have discovered the answer!
d = {}
for k, v in coinvolume.items():
    try:
        if float(v['BTC']) > 100:
            d[k] = v
    except KeyError:
        d[k] = v
    except TypeError:
        if v > 100:
            d[k] = k
This creates a new dict, d, and adds every coin pair with a 'BTC' volume > 100 to it.
Thanks again downshift, and I hope this helps others as well!
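Putting the pieces together, here is a minimal end-to-end sketch (Python 2 with urllib2, matching the question; the variable names are illustrative):
import json
import urllib2

url = 'https://poloniex.com/public?command=return24hVolume'
coinvolume = json.loads(urllib2.urlopen(url).read())

high_volume = {}
for pair, volumes in coinvolume.items():
    # The "totalBTC"/"totalLTC" entries are plain strings, not dicts, so skip them
    if not isinstance(volumes, dict):
        continue
    if float(volumes.get('BTC', 0)) > 100:
        high_volume[pair] = volumes

print high_volume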

Compare two dictionaries, one with a list of float values per key, the other with one value per key (Python)

I have a query sequence that I BLASTed online using NCBIWWW.qblast. In my XML BLAST result I obtained, for a query sequence, a list of hits (i.e. gi|). Each hit, or gi|, has multiple HSPs. I made a dictionary my_dict1 with gi| as the key and the bit scores appended as values, so there are multiple values for each key.
my_dict1 = {
    'gi|1002819492|': [437.702, 384.47, 380.86, 380.86, 362.83],
    'gi|675820360|': [2617.97, 2614.37, 122.112],
    'gi|953764029|': [414.258, 318.66, 122.112, 86.158],
    'gi|675820410|': [450.653, 388.08, 386.27] }
Then I looked for max value in each key using:
for key, value in my_dict1.items():
    max_value = max(value)
And made a second dictionary my_dict2:
my_dict2 = {
    'gi|1002819492|': 437.702,
    'gi|675820360|': 2617.97,
    'gi|953764029|': 414.258,
    'gi|675820410|': 450.653 }
I want to compare both dictionaries so I can extract the HSP with the highest bit score. I am also including other parameters like query coverage and identity percentage (not shown here). The end goal is to get the best gi| with the highest bit score, coverage and identity percentage.
I tried many things to compare both dictionaries, like this:
First code :
matches[]
if my_dict1.keys() not in my_dict2.keys():
    matches[hit_id] = bit_score
else:
    matches = matches[hit_id], bit_score
Second code:
if hit_id not in matches.keys():
    matches[hit_id] = bit_score
else:
    matches = matches[hit_id], bit_score
Third code:
intersection = set(set(my_dict1.items()) & set(my_dict2.items()))
However, I always end up with two types of errors:
1 ) TypeError: list indices must be integers, not unicode
2 ) ... float not iterable...
Please I need some help and guidance. Thank you very much in advance for your time. Best regards.
It's not clear what you're trying to do. What is hit_id? What is bit_score? It looks like your second dict is always going to have the same keys as your first if you're creating it by pulling the max value for each key of the first dict.
You say you're trying to compare them, but don't really state what you're actually trying to do. Find those with values under a certain max? Find those with the highest max?
Your first code doesn't work because I'm assuming you're trying to use a dict key value as an index to matches, which you define as a list. That's probably where your first error is coming from, though you haven't given the lines where the error is actually occurring.
See in-code comments below:
# First off, this needs to be a dict.
matches{}
# This will never happen if you've created these dicts as you stated.
if my_dict1.keys() not in my_dict2.keys():
    matches[hit_id] = bit_score  # Not clear what bit_score is?
else:
    # Also not sure what you're trying to do here. This will assign a tuple
    # to matches with whatever the value of matches[hit_id] is and bit_score.
    matches = matches[hit_id], bit_score
Regardless, we really need more information and the full code to figure out your actual goal and what's going wrong.
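That said, if the goal really is just to find the gi| with the single highest bit score, a minimal sketch using the dictionaries from the question (Python 2 syntax, matching the error messages) would be:
my_dict1 = {
    'gi|1002819492|': [437.702, 384.47, 380.86, 380.86, 362.83],
    'gi|675820360|': [2617.97, 2614.37, 122.112],
    'gi|953764029|': [414.258, 318.66, 122.112, 86.158],
    'gi|675820410|': [450.653, 388.08, 386.27],
}

# Per-hit maximum bit score (this is what my_dict2 holds)
my_dict2 = {hit: max(scores) for hit, scores in my_dict1.items()}

# Hit with the overall highest bit score
best_hit = max(my_dict2, key=my_dict2.get)
print best_hit, my_dict2[best_hit]   # gi|675820360| 2617.97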

TypeError during executemany() INSERT statement using a list of strings

I am trying to just do a basic INSERT operation to a PostgreSQL database through Python via the Psycopg2 module. I have read a great many of the questions already posted regarding this subject as well as the documentation but I seem to have done something uniquely wrong and none of the fixes seem to work for my code.
# API CALL + JSON decoding here
x = 0
for item in ulist:
    idValue = list['members'][x]['name']
    activeUsers.append(str(idValue))
    x += 1

dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)
The loop creates a list of strings that looks like this when printed:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
I am just trying to have the code INSERT these strings as 1 row each into the table.
The error specified when running is:
TypeError: not all arguments converted during string formatting
I tried changing the INSERT to:
dbShell.executemany("INSERT INTO slackusers (username) VALUES (%s)", (activeUsers,) )
But that seems like it's merely treating the entire list as a single string as it yields:
psycopg2.DataError: value too long for type character varying(30)
What am I missing?
First, this code you pasted:
x = 0
for item in ulist:
    idValue = list['members'][x]['name']
    activeUsers.append(str(idValue))
    x += 1
is not the right way to accomplish what you are trying to do.
First, list is the name of a built-in type in Python and you shouldn't shadow it with a variable name. I am assuming you meant ulist.
If you really need access to the index of an item in Python, you can use enumerate:
for x, item in enumerate(ulist):
But the best way to do what you are trying to do is something like:
for item in ulist:  # or list['members'] Your example is kinda broken here
    activeUsers.append(str(item['name']))
Your first try was:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
Your second attempt was:
(['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo'], )
What I think you want is:
[['b2ong'], ['dune'], ['drble'], ['drars'], ['feman'], ['got'], ['urbo']]
You could get this many ways:
dbShell.executemany("INSERT INTO slackusers (username) VALUES (%s)", [ [a] for a in activeUsers] )
or even better:
for item in ulist:  # or list['members'] Your example is kinda broken here
    activeUsers.append([str(item['name'])])

dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)

How to alter the format of data returned from a CSV using Python

I am looking for guidance regarding the FORMAT of a result returned from a CSV file. The code I have to date partially achieves my objective, but despite significant effort researching through this and many other sites/forums I cannot resolve the final step. I also posed this question on gis.stackexchange but was redirected to this forum with the comment "Questions relating to general Information Technology, with no clear GIS component, are off-topic here, but can be researched/asked at Stack Overflow".
My successful piece of Python code that reads selected data from a CSV and returns it in dict format is below. (Yes, I know the reason it returns as type dict is the way my code calls it, and that is the crux of the problem!)
import arcpy, csv

Att_Dict = {}
with open("C:/Data/Code/Python/Library/Peter/123.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        if row['Status'] == 'Keep':
            Att_Dict.update({row['book_id']: row['book_ref']})

print Att_Dict
Att_Dict = {'7643': '7625', '9644': '2289', '4406': '4443', '7588': '9681', '2252': '7947'}
For the next part of my code to run, I need the result above in the following format (this is part of a very lengthy script, but the only showstopper is the returned format, so there is little value in posting the other 200 or so lines):
Att_Dict = [[7643, 7625], [9644, 2289], [4406, 4443], [7588, 9681], [2252, 7947]]
Although I have experimented endlessly and can achieve this by reverting to csv.reader rather than csv.DictReader, I then lose the ability to 'weed out' rows where the 'Status' column has the value 'Keep', and that is a requirement for the task at hand.
My sledgehammer approach to date has been to use 'search and replace' within IDLE to amend the returned set to meet the other requirement, but I'm sure it can be done programmatically rather than manually. This is similar, but not identical, to https://docs.python.org/2/library/index.html, plus my starting-out question at Returning values from multiple CSV columns to Python dictionary? and Using Python's csv.dictreader to search for specific key to then print its value, plus a multitude of CSV-based questions at geonet.esri.
(Using Win 7, ArcGIS 10.2, Python 2.7.5)
Try this:
Att_Dict = {'7643': '7625', '9644': '2289', '4406': '4443', '7588': '9681', '2252': '7947'}
Att_List = []
for key, value in Att_Dict.items():
    Att_List.append([int(key), int(value)])

print Att_List
Out: [[7643, 7625], [9644, 2289], [4406, 4443], [7588, 9681], [2252, 7947]]
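A possible variation (a sketch under the same assumptions about the CSV columns) is to keep the DictReader filtering but build the list of lists directly, skipping the intermediate dictionary:
import csv

Att_List = []
with open("C:/Data/Code/Python/Library/Peter/123.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        if row['Status'] == 'Keep':
            Att_List.append([int(row['book_id']), int(row['book_ref'])])

print Att_List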