=IF(and('Form Responses 5'!BA:BA="Service/install",if('Form Responses 5'!Y:Y="V723")),sum('Form Responses 5'!Z:Z),"")
I want it so that:
if BA:BA="Service/install" then add Z:Z
if BA:BA="sending" then add Z:Z
Bonus if I could also get it to include:
if BA:BA="receive" then subtract Z:Z
if none of the above, do nothing
I keep getting "Wrong number of arguments".
I've been trying for about 3 hours and I can't get it to do much. Sometimes it'll just say FALSE in the cell.
Edit:
I just noticed I left out half of what I wanted. I want it so that:
if BA:BA="Service/install" and Y:Y="V723" then add Z:Z
if BA:BA="sending" and Y:Y="V723" then add Z:Z
Bonus if I could also get it to include: if BA:BA="receive" then subtract Z:Z
if none of the above, do nothing
https://docs.google.com/spreadsheets/d/1aeckdhKBFTlaRS42wzDM1lG0CTx_ultMqjjHkRkdzYk/edit?usp=sharing
try:
=IFERROR(SUM(FILTER('Form Responses 5'!Z:Z; REGEXMATCH(
'Form Responses 5'!BA:BA; "(?i)Service/install|sending");
'Form Responses 5'!Y:Y="V723")))
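If you also want the bonus "receive" rule, a sketch along the same lines (untested, and assuming "receive" rows are subtracted regardless of Y:Y, as in the question) is to subtract a second FILTER, with each SUM wrapped in IFERROR defaulting to 0 so an empty match doesn't break the subtraction:
=IFERROR(SUM(FILTER('Form Responses 5'!Z:Z; REGEXMATCH(
'Form Responses 5'!BA:BA; "(?i)Service/install|sending");
'Form Responses 5'!Y:Y="V723")); 0)
-IFERROR(SUM(FILTER('Form Responses 5'!Z:Z; REGEXMATCH(
'Form Responses 5'!BA:BA; "(?i)receive"))); 0)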
Ok I have been stuck on this for a few weeks now. I'm using the Front Email API for a business use case and have created an iterative function to (attempt to) get multiple pages of query results.
A quick overview of the API (endpoint) I'm using for context:
The "events" endpoint returns a list of records based on the parameters given in the query (like before/after/between certain times, types of events, etc.)
If the query results in more than 100 records, a "next" pagination link is generated for the next page(s) of results. There is no "page=n" parameter in the query URL, you only get the "next page" link from the response of the previous query (which is fairly unique)
A side note: the initial base_url for the first query and the base_url of the "next page" link are two different URLs (i.e. the initial call is https://api2.frontapp.com and the second is https://companynamehere-inc.api.frontapp.com), so this is taken into consideration in my querying function.
My environment looks like this (Power Query screenshot of the M code omitted):
In that code, I query the initial URL using the external Func_Query_Front_API function, then begin the iteration: while the next-page link is not null, keep feeding the next links returned from the previous calls back into the function to get the next page of results. I deconstruct the links given to the function into a base, relative path and body/params so that I can use this logic in both Desktop and the Online Service (don't ask).
It's difficult to describe the results I get. Sometimes the preview window just clocks and clocks and never returns any results, with the API not being queried at all (confirmed from Postman and the rate-limit-remaining number in the response headers). When I do get results back, it's a massive list of records (way more than what I'm querying for/expecting to receive) that contains lots of duplicates. It's almost as if the initial (or the second) query URL is being looped over and over again, but not the subsequent "next" page links; like it's stuck in a semi-infinite loop while the "next" link from the initial response is not null, so it just repeats the same query over and over, reusing the same "next" page link.
Now, unfortunately I cannot provide my bearer token for this API, as it's private company info returned by the API, but I'm wondering if the issue is with my syntax in the code; perhaps someone can spot it and steer me in the right direction. The querying function itself works when invoked on its own, and everything looks like it SHOULD work, but it's just not doing what I think the code says it should do.
I understand it's not much to go on, but I'm out of options in trying to debug this thing. Thanks!
UPDATE
I think what MIGHT help here is working code in Python that can be translated into Power BI, so I've provided working Python code below (again, the bearer token is not provided, but the syntax should make things a bit clearer). The code closely resembles what I've already built in Power BI, so hopefully this helps.
import requests
from time import sleep
########################################
# GLOBAL VARIABLES
########################################
_event_types_filter = "assign&q[types]=archive&q[types]=comment&q[types]=inbound&q[types]=outbound&q[types]=forward&q[types]=tag&q[types]=out_reply"
_after = 1667506000
_page_limit = 100
_bearer_token = "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXX"
_init_base_url = "https://api2.frontapp.com"
_relative_path = "events"
_init_body = "limit=" + str(_page_limit) + "&q[after]=" + str(_after) + "&q[types]=" + _event_types_filter
_headers = {'authorization': _bearer_token}
_init_query_url = _init_base_url + "/" + _relative_path + "?" + _init_body
########################################
# FUNCTION - Query the API
########################################
def Func_Query_Front_API(input_url):
    # print(input_url)
    # Deconstruct the url into its separate parts
    splitted_input_url = input_url.split("?")
    input_url_base = splitted_input_url[0].replace("/events", "")
    input_url_body_params = splitted_input_url[1]
    # Query the API
    response = requests.request("GET",
                                input_url_base + "/" + _relative_path + "?" + input_url_body_params,
                                headers=_headers)
    # Get the "next" link from the response
    next = response.json()["_pagination"]["next"]
    # Return the response and the "next" link
    return response, next
########################################
# MAIN PROGRAM START
########################################
# List to add the response data's to
Source = []
# Make the initial request and add the results to the list
init_response, next = Func_Query_Front_API(_init_query_url)
Source.append(init_response)
# While the "next" link(s) are not None, query the current
# "next" link for the next page(s) of the query
while next is not None:
    response, next = Func_Query_Front_API(next)
    Source.append(response)
    sleep(1)
print(Source)
print("Done!")
Currently I have a method that retrieves all ~119,000 Gmail accounts and writes them to a CSV file, using the Python code below with the Admin SDK enabled and OAuth 2.0:
def get_accounts(self):
    students = []
    page_token = None
    params = {'customer': 'my_customer'}
    while True:
        try:
            if page_token:
                params['pageToken'] = page_token
            current_page = self.dir_api.users().list(**params).execute()
            students.extend(current_page['users'])
            # write each page of data to a file
            csv_file = CSVWriter(students, self.output_file)
            csv_file.write_file()
            # clear the list for the next page of data
            del students[:]
            page_token = current_page.get('nextPageToken')
            if not page_token:
                break
        except errors.HttpError as error:
            break
I would like to retrieve all 119,000 in one lump sum, that is, without having to loop, or as a batch call. Is this possible, and if so, can you provide example Python code? I have run into communication issues and have to rerun the process multiple times to obtain all ~119,000 accounts successfully (the download takes about 10 minutes), so I would like to minimize communication errors. Please advise if a better method exists, or if a non-looping method is possible.
There's no way to do this as a batch because you need to know each pageToken and those are only given as the page is retrieved. However, you can increase your performance somewhat by getting larger pages:
params = {'customer': 'my_customer', 'maxResults': 500}
since the default page size when maxResults is not set is 100, adding maxResults: 500 will reduce the number of API calls by a factor of 5. While each call may take slightly longer, you should notice a performance increase because you're making far fewer API calls and HTTP round trips.
You should also look at using the fields parameter to specify only the user attributes you need from the list, so you're not wasting time and bandwidth retrieving details about your users that your app never uses. Try something like:
my_fields = 'nextPageToken,users(primaryEmail,name,suspended)'
params = {
    'customer': 'my_customer',
    'maxResults': 500,
    'fields': my_fields
}
Last of all, if your app retrieves the list of users fairly frequently, turning on caching may help.
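Putting those pieces together with the question's loop, a minimal sketch (it reuses dir_api, CSVWriter, output_file and errors from the question; the retry-with-backoff is an added assumption, meant to soften the communication errors mentioned):
import time

def get_accounts(self):
    students = []
    params = {
        'customer': 'my_customer',
        'maxResults': 500,  # 5x fewer HTTP round trips than the default 100
        'fields': 'nextPageToken,users(primaryEmail,name,suspended)',
    }
    page_token = None
    while True:
        if page_token:
            params['pageToken'] = page_token
        for attempt in range(3):  # retry transient failures with backoff
            try:
                current_page = self.dir_api.users().list(**params).execute()
                break
            except errors.HttpError:
                time.sleep(2 ** attempt)
        else:
            raise RuntimeError('page failed after 3 attempts')
        students.extend(current_page.get('users', []))
        page_token = current_page.get('nextPageToken')
        if not page_token:
            break
    # a single write at the end instead of one per page
    CSVWriter(students, self.output_file).write_file()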
I'm trying to get a list of all recent Youtube uploads using Python and API v3.0. I'm using youtube.search().list, but the results were all uploaded at seemingly random times in the last few years, and are not ordered by date. I've attached a sample of my code.
# assumes an authorized client, e.g. youtube = build("youtube", "v3", developerKey=DEVELOPER_KEY)
def getSearchResults():
    """ Perform an empty search """
    search_results = youtube.search().list(
        part="snippet",
        order="date",
        maxResults=50,
        type="video",
        q=""
    ).execute()
    return search_results

def getVideoDetails(video):
    """ Get details of search results """
    video_id = video["id"]["videoId"]
    video_title = video["snippet"]["title"]
    video_date = video["snippet"]["publishedAt"]
    return video_id, video_title, video_date

search_results = getSearchResults()
for result in search_results["items"]:
    print(getVideoDetails(result))
Any help is greatly appreciated!
EDIT: After retrying a few times, sometimes I will get the correct output (the most recently uploaded videos sorted by upload date) and sometimes I won't (resulting in videos from seemingly random times). I have no idea what the result depends on, but at certain times of the day it works, at others it doesn't, so the issue seems to be with the API and not my code.
EDIT 2: Further evidence that the API is at fault: ordering by date or title on the API Reference's Try It section for youtube.search().list is wack, too.
UPDATE: Google have acknowledged the issue and seem to have fixed it.
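For anyone hitting something similar, a quick client-side sanity check that results really are newest-first (publishedAt is ISO 8601, so the timestamps compare correctly as plain strings):
def is_newest_first(items):
    """ True if search results are in reverse chronological order """
    dates = [video["snippet"]["publishedAt"] for video in items]
    return all(a >= b for a, b in zip(dates, dates[1:]))

print(is_newest_first(search_results["items"]))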
Does anyone know how I can test image upload using WebTest? My current code is:
form['avatar'] = ('avatar', os.path.join(settings.PROJECT_PATH, 'static', 'img', 'avatar.png'))
res = form.submit()
In the response I get the following error "Upload a valid image. The file you uploaded was either not an image or a corrupted image.".
Any help will be appreciated.
Power was right. Unfortunately (or not) I found his answer after I spent half an hour debugging webtest. Here is a bit more information.
Trying to pass only the path to the file gets you the following exception:
webtest/app.py", line 1028, in _get_file_info
ValueError: upload_files need to be a list of tuples of (fieldname,
filename, filecontent) or (fieldname, filename); you gave: ...
The problem is that it doesn't tell you that it automatically adds the field name to the tuple you send, turning a 3-item tuple into a 4-item one. The final solution was:
avatar = ('avatar',
file(os.path.join(settings.PROJECT_PATH, '....', 'avatar.png')).read())
Too bad there isn't a decent example, but I hope this will help someone else too.
Nowadays the selected answer no longer worked for me.
But I found a way to provide the expected files in the .submit() method args:
form.submit(upload_files=[('avatar', '/../file_path.png')])
With Python 3:
form["avatar"] = ("avatar.png", open(os.path.join(settings.PROJECT_PATH, '....', 'avatar.png', "rb").read())
I've seen some nifty code in the django-ratings documentation and would like to create something similar. After googling around for two weeks now, I have no idea how to do this.
Maybe you could help me with what to search for, or where to find some docs?
Code from django-ratings docs:
...
response = AddRatingView()(request, **params)
if response.status_code == 200:
    if response.content == 'Vote recorded.':
        request.user.add_xp(settings.XP_BONUSES['submit-rating'])
    return {'message': response.content, 'score': params['score']}
return {'error': 9, 'message': response.content}
...
My Problem:
request.user.add_xp(settings.XP_BONUSES['submit-rating'])
So I'd like to do something like this:
request.user.my_shiny_function(foobar)
Thanks in advance,
Thomas
Check out proxy models: http://docs.djangoproject.com/en/dev/topics/db/models/#id8
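A minimal sketch of that idea (the XP bookkeeping is made up here; django-ratings does not ship add_xp, as the next answer notes, so store the points wherever your app keeps them, e.g. a profile model):
from django.contrib.auth.models import User

class GameUser(User):
    """ Proxy of User: same table and queries, just extra Python-level behaviour. """
    class Meta:
        proxy = True

    def add_xp(self, amount):
        # hypothetical storage: a related profile with an integer xp field
        profile = self.get_profile()
        profile.xp += amount
        profile.save()

Note that request.user will still be a plain User, so you'd fetch the proxy yourself, e.g. GameUser.objects.get(pk=request.user.pk), or use a custom authentication backend as suggested below.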
I think the code sample you're citing was picked from somewhere else (it's not part of the django-ratings code; a simple grep -ir "add_xp" on the source directory shows that text appears only in Readme.rst).
If you could explain why you need this functionality, maybe we could help some more. In the meantime, you can look at rolling your own custom backend, extending the default User model and then adding other "nifty" features to it :).