I have an API defined as follows. If the API is called with only one value in params (which is a repeated field), everything works as intended. But if params holds multiple values, I get the error: No endpoint found for path.
INPUT = endpoints.ResourceContainer(
    params=messages.IntegerField(1, repeated=True, variant=messages.Variant.INT32))

@endpoints.method(INPUT,
                  response_type.CustomResponse,
                  path='foo/{params}',
                  http_method='POST',
                  name='foo')
def foo(self, request):
    # foo body is irrelevant
    return response
How can I fix this? Something like path = 'foo/{params[]}'?
Thank you for your help.
If 'params' is expected as part of the query string rather than the path, you can simply omit it from the path, e.g.:
path = 'foo'
or
path = 'myApi/foo'
The example given in the docs uses a ResourceContainer for a single non-repeated path argument. Given the nature of repeated properties, it doesn't look like you can use them as path arguments, only as query string arguments. A repeated field in a query string would look like this (easy to deal with):
POST http://app.appspot.com/_ah/api/myApi/v1/foo?param=bar&param=baz ...
But a repeated field in a path argument would look like this (not so much):
POST http://app.appspot.com/_ah/api/myApi/v1/foo/bar/baz....
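Putting that together, a minimal sketch of the method with params moved out of the path (the ResourceContainer, response_type.CustomResponse, and the handler body are the question's own placeholders):

INPUT = endpoints.ResourceContainer(
    params=messages.IntegerField(1, repeated=True, variant=messages.Variant.INT32))

@endpoints.method(INPUT,
                  response_type.CustomResponse,
                  path='foo',  # no {params} here; values arrive as ?params=1&params=2
                  http_method='POST',
                  name='foo')
def foo(self, request):
    # request.params is the full repeated list, e.g. [1, 2]
    return response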
I am trying to fetch live cricket scores from the Sportmonks cricket API and CricAPI using Django.
Everything goes fine until I request an endpoint with a unique ID that is stored in a variable;
it always gives a KeyError when I do so.
My request:
resL = json.loads(requests.get(
'https://cricket.sportmonks.com/api/v2.0/fixtures/id?api_token').text)
Here 'id' is a variable holding a particular fixture's ID.
It works fine when I put a numerical value in place of the variable.
The same happens with this URL:
resS = json.loads(requests.get(
'http://cricapi.com/api/fantasySummary/?apikey=123&unique_id=id').text)
I am not seeing what I am doing wrong here.
You need to indicate that the id param in your url is a variable. The way you have it written, it's just a string with the characters id.
One way to accomplish this is with f-strings:
id = 543  # or whatever your id is
api_token = 123

resL = json.loads(requests.get(
    f'https://cricket.sportmonks.com/api/v2.0/fixtures/{id}?api_token={api_token}').text)

resS = json.loads(requests.get(
    f'http://cricapi.com/api/fantasySummary/?apikey={api_token}&unique_id={id}').text)
When Python parses these strings, it will replace {id} with the id you've defined as a variable elsewhere. Please note that you have to include the f at the beginning of the string.
Another way is with the older 'str'.format() method, which works as follows:
id = 543  # or whatever your id is
api_token = 123

resL = json.loads(requests.get(
    'https://cricket.sportmonks.com/api/v2.0/fixtures/{}?api_token={}'.format(id, api_token)).text)

resS = json.loads(requests.get(
    'http://cricapi.com/api/fantasySummary/?apikey={}&unique_id={}'.format(api_token, id)).text)
Here, the variables in the format() function will be substituted into the empty brackets in order. (you do not need the f here)
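As a side note, requests can build and encode the query string for you via its params argument, which sidesteps the string formatting entirely. A sketch using the same hypothetical id and api_token values:

import requests

id = 543         # hypothetical fixture id
api_token = 123  # hypothetical token

# requests URL-encodes the dict into ?apikey=123&unique_id=543
resS = requests.get(
    'http://cricapi.com/api/fantasySummary/',
    params={'apikey': api_token, 'unique_id': id},
).json()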
I'm using Python 2.7 with the Elasticsearch-DSL package to query my Elastic cluster.
I am trying to add "from" and "limit" capabilities to the query in order to have pagination in my FE, which presents the documents Elastic returns, but 'from' doesn't work right (i.e. I'm not using it correctly, I suppose).
The relevant code is:
s = Search(using=elastic_conn, index='my_index'). \
filter("terms", organization_id=org_list)
hits = s[my_from:my_size].execute()  # if from = 10, size = 10 then I get 0 documents, although 100 documents match the filters.
My index contains 100 documents.
Even when my filter matches all results (i.e. nothing is filtered out), if I use
my_from = 10 and my_size = 10, for instance, then I get nothing in hits (no matching documents).
Why is that? Am I misusing from?
Documentation states:
from and size parameters. The from parameter defines the offset from the first result you want to fetch. The size parameter allows you to configure the maximum amount of hits to be returned.
So it seems really straightforward, what am I missing?
The answer to this question can be found in their documentation under the Pagination Section of the Search DSL:
Pagination
To specify the from/size parameters, use the Python slicing API:
s = s[10:20]
# {"from": 10, "size": 10}
The correct usage of these parameters with the Search DSL is just as you would slice a Python list: from the starting index to the end index. The size parameter is implicitly the end index minus the start index.
Hope this clears things up!
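Applied to the variables in the question, that means adding the page size to the offset when slicing. A sketch assuming the same elastic_conn, org_list, my_from, and my_size names from the question:

from elasticsearch_dsl import Search

s = Search(using=elastic_conn, index='my_index') \
    .filter("terms", organization_id=org_list)

# from=my_from, size=my_size -> slice from the offset to offset + page size
hits = s[my_from:my_from + my_size].execute()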
Try to pass from and size params as below:
search = Search(using=elastic_conn, index='my_index') \
    .filter("terms", organization_id=org_list) \
    .extra(from_=10, size=20)
result = search.execute()
I am trying to do a basic INSERT operation to a PostgreSQL database through Python via the Psycopg2 module. I have read a great many of the questions already posted on this subject, as well as the documentation, but I seem to have done something uniquely wrong and none of the fixes seem to work for my code.
#API CALL + JSON decoding here
x = 0
for item in ulist:
idValue = list['members'][x]['name']
activeUsers.append(str(idValue))
x += 1
dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)
The loop creates a list of strings that looks like this when printed:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
I am just trying to have the code INSERT these strings as 1 row each into the table.
The error specified when running is:
TypeError: not all arguments converted during string formatting
I tried changing the INSERT to:
dbShell.executemany("INSERT INTO slickusers (username) VALUES (%s)", (activeUsers,))
But that seems like it's merely treating the entire list as a single string as it yields:
psycopg2.DataError: value too long for type character varying(30)
What am I missing?
First, in the code you pasted:
x = 0
for item in ulist:
idValue = list['members'][x]['name']
activeUsers.append(str(idValue))
x += 1
is not the right way to accomplish what you are trying to do.
list is a built-in name in Python, and you shouldn't use it as a variable name. I am assuming you meant ulist.
If you really need access to the index of an item in Python, you can use enumerate:
for x, item in enumerate(ulist):
But the best way to do what you are trying to do is something like:
for item in ulist: # or list['members'] Your example is kinda broken here
activeUsers.append(str(item['name']))
Your first try was:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
Your second attempt was:
(['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo'], )
What I think you want is:
[['b2ong'], ['dune'], ['drble'], ['drars'], ['feman'], ['got'], ['urbo']]
You could get this many ways:
dbShell.executemany("INSERT INTO slackusers (username) VALUES (%s)", [ [a] for a in activeUsers] )
or even better:
for item in ulist: # or list['members'] Your example is kinda broken here
activeUsers.append([str(item['name'])])
dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)
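Putting it together, a compact sketch; dbShell is assumed to be a psycopg2 cursor, conn its connection, and each item in ulist to have a 'name' key (all hypothetical names from the question). executemany wants one sequence per row, so each name is wrapped in a tuple:

rows = [(str(item['name']),) for item in ulist]  # one single-element tuple per row
dbShell.executemany("INSERT INTO slickusers (username) VALUES (%s)", rows)
conn.commit()  # persist the inserts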
I'm working with the program Autodesk Maya.
I've made a naming convention script that names each item according to a given convention. However, I have it list everything in the scene, check whether the chosen name matches any current name in the scene, rename the item, and then recheck the whole scene for a duplicate.
However, when I run the code, it can take as long as 30 seconds to a minute or more to run through it all. At first I had no idea what was making my code slow, as it worked fine in a relatively small scene. But when I put print statements in the scene-checking code, I saw that it was taking a long time to check through all the items in the scene for duplicates.
The ls() command provides a unicode list of all the items in the scene. This list can be relatively large, a thousand entries or more if the scene has even a moderate amount of items; a normal scene would be several times larger than the testing scene I have at the moment (which has about 794 items in this list).
Is this supposed to take this long? Is the method I'm using to compare things inefficient? I'm not sure what to do here; the code is taking an excessive amount of time. I'm also wondering if it could be anything else in the code, but this seems like it might be it.
Here is some code below.
class Name(object):
    """A naming convention class that runs passed arguments through a user
    dictionary and returns a formatted string of the user's naming convention.
    """

    def __init__(self, user_conv):
        self.user_conv = user_conv
        # an example of a user convention is '${prefix}_${name}_${side}_${objtype}'

    @staticmethod
    def abbrev_lib(word):
        # a dictionary of abbreviated words lives here; takes in a string and
        # returns its abbreviation, or the given string if none is found
        pass

    @staticmethod
    def check_scene(name):
        """Checks the entire scene for the same name.

        Returns '' if a duplicate exists, otherwise returns the name.

        Keyword Arguments:
        name -- (string) name of object to be checked
        """
        scene = ls()
        match = [x for x in scene if isinstance(x, collections.Iterable)
                 and (name in x)]
        if not match:
            return name
        else:
            return ''

    def convert(self, prefix, name, side, objtype):
        """Converts given information about an object into the user-specified convention.

        Keyword Arguments:
        prefix -- what is prefixed before the name
        name -- name of the object or node
        side -- what side the object is on, e.g. 'left' or 'right'
        objtype -- the type of the object, e.g. 'joint' or 'multiplyDivide'
        """
        prefix = self.abbrev_lib(prefix)
        name = self.abbrev_lib(name)
        side = ''.join([self.abbrev_lib(x) for x in side])
        objtype = self.abbrev_lib(objtype)
        i = 2
        checked = ''
        subs = {'prefix': prefix, 'name': name, 'side': side, 'objtype': objtype}
        while checked == '':
            newname = Template(self.user_conv.lower())
            newname = newname.safe_substitute(**subs)
            newname = newname.strip('_')
            newname = newname.replace('__', '_')
            checked = self.check_scene(newname)
            if checked == '' and i < 100:
                subs['objtype'] = '%s%s' % (objtype, i)
                i += 1
            else:
                break
        return checked
Are you running this many times? You are potentially trawling a list of several hundred or a few thousand items for each iteration inside while checked == '', which would be a likely culprit. FWIW, prints are also very slow in Maya, especially if you're printing a long list - so doing that many times will definitely be slow no matter what.
I'd try a couple of things to speed this up:
Limit your searches to one type at a time - why trawl through hundreds of random nodes if you only care about multiplyDivide nodes right now?
Use a set or a dictionary for lookups rather than a list - sets and dictionaries are hash-based and much faster for membership tests.
If you're worried about maintaining a naming convention, definitely design it to be resistant to Maya's default behavior, which is to append numeric suffixes to keep names unique. Any naming convention that doesn't support this will be a pain in the butt for all time, because you can't prevent Maya from doing it in the ordinary course of business. On the other hand, if you use that for differentiating instances, you don't need to do any uniquification at all - just use rename() on the object and capture the result. The weakness there is that Maya won't rename for global uniqueness, only local - so if you want to make unique node names for things that are not siblings, you have to do it yourself.
Here's some cheapie code for finding unique node names:
def get_unique_scene_names(*nodeTypes):
    if not nodeTypes:
        nodeTypes = ('transform',)
    results = {}
    for longname in cmds.ls(type=nodeTypes, l=True):
        shortname = longname.rpartition("|")[-1]
        if not shortname in results:
            results[shortname] = set()
        results[shortname].add(longname)
    return results

def add_unique_name(item, node_dict):
    shortname = item.rpartition("|")[-1]
    if shortname in node_dict:
        node_dict[shortname].add(item)
    else:
        node_dict[shortname] = set([item])

def remove_unique_name(item, node_dict):
    shortname = item.rpartition("|")[-1]
    existing = node_dict.get(shortname, [])
    if item in existing:
        existing.remove(item)

def apply_convention(node, new_name, node_dict):
    if not new_name in node_dict:
        renamed_item = cmds.ls(cmds.rename(node, new_name), l=True)[0]
        remove_unique_name(node, node_dict)
        add_unique_name(renamed_item, node_dict)
        return renamed_item
    else:
        for n in range(99999):
            possible_name = new_name + str(n + 1)
            if not possible_name in node_dict:
                renamed_item = cmds.ls(cmds.rename(node, possible_name), l=True)[0]
                remove_unique_name(node, node_dict)  # drop the old name before recording the new one
                add_unique_name(renamed_item, node_dict)
                return renamed_item
        raise RuntimeError("Too many duplicate names")
To use it on a particular node type, you just supply the right would-be name when calling apply_convention(). This would rename all the joints in the scene (naively!) to 'jnt_X' while keeping the suffixes unique. You'd do something smarter than that, like your original code did - this just makes sure that leaf names are unique:
joint_names = get_unique_scene_names('joint')
existing = cmds.ls(type='joint', l=True)
existing.sort()
existing.reverse()
# do this to make sure it works from leaves backwards!
for item in existing:
    apply_convention(item, 'jnt_', joint_names)

# check the uniqueness constraint by looking at how many items share a short name in the dict:
for d in joint_names:
    print d, len(joint_names[d])
But, like I said, plan for those damn numeric suffixes; Maya makes them all the time without asking for permission, so you can't fight 'em :(
Instead of running ls for each and every name, you should run it once and store the result in a set (an unordered collection with very fast membership tests). Then check against that when you run check_scene (and add any new names you create afterwards to the set so it stays current):
def check_scene(self, name):
    """Checks the entire scene for the same name.

    Returns '' if a duplicate exists, otherwise returns the name.

    Keyword Arguments:
    name -- (string) name of object to be checked
    """
    if not hasattr(self, 'scene'):
        self.scene = set(ls())
    if name not in self.scene:
        return name
    else:
        return ''
One field in my model is a CharField with the format substring1-substring2-substring3-substring4, and it can have this range of values:
"1-1-2-1"
"1-1-2-2"
"1-1-2-3"
"1-1-2-4"
"2-2-2-6"
"2-2-2-7"
"2-2-2-9"
"3-1-1-10"
"10-1-1-11"
"11-1-1-12"
"11-1-1-13"
For example, I need to count the number of unique values of substring1.
In this case there are 5 unique values (1, 2, 3, 10, 11):
"1-X-X-X"
"2-X-X-X"
"3-X-X-X"
"10-X-X-X"
"11-X-X-XX"
Honestly, I don't know where to start. I read the docs at https://docs.djangoproject.com/en/1.5/ref/models/querysets/ but I didn't find a specific clue.
Thanks in advance.
results = MyModel.objects.all()
pos_id = 0
values_for_pos_id = [res.field_to_check.split('-')[pos_id] for res in results]
values_for_pos_id = set(values_for_pos_id)
How does this work:
first you fetch all your objects (results)
pos_id is your substring index (you have 4 substrings, so it's in the range 0 to 3)
you split each field_to_check (aka where you store the substring combinations) on '-' (your separator) and fetch the correct substring for each object
you convert the list to a set (to have all the unique values)
Then a simple len(values_for_pos_id) will do the trick for you
NB: if you don't have a fixed pos_id, you can just loop like this:
for pos_id in range(4):
    values_for_pos_id = set([res.field_to_check.split('-')[pos_id] for res in results])
    # process your set results now
    print len(values_for_pos_id)
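A lighter-weight variant of the same idea: values_list fetches just the one column instead of instantiating full model objects (field_to_check stands in for your actual field name, as above):

values = MyModel.objects.values_list('field_to_check', flat=True)
unique_first_parts = set(v.split('-')[0] for v in values)
print len(unique_first_parts)  # 5 for the sample data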
Try something like this...
# Assumes your model name is NumberStrings and attribute numbers stores the string.
search_string = "1-1-2-1"
matched_number_strings = NumberStrings.objects.filter(numbers__contains=search_string)
num_of_occurrences = len(matched_number_strings)
matched_ids = [match.id for match in matched_number_strings]
You could loop through these items (I guess they're strings), and add the value of each substring_n to a Set_n.
Since set values are unique, you would have a set, called Set_1, for example, that contains 1,2,3,10,11.
Make sense?
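For instance, a minimal sketch of that idea, assuming the strings are already in a list called rows:

rows = ["1-1-2-1", "2-2-2-6", "3-1-1-10", "10-1-1-11", "11-1-1-12"]

sets = [set(), set(), set(), set()]  # one set per substring position
for row in rows:
    for n, part in enumerate(row.split('-')):
        sets[n].add(part)

print len(sets[0])  # unique values for substring1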