Does the 'Per 100 seconds per user 20,000' limit apply to the user who uploads files to my account? That is, is the limit applied per uploader (for example, by their IP), or to me as the account holder?
I have an app where users upload files to my account, so I'd like to know whether the limit applies to me as the account holder or to each submitter, to see if the app is viable.
There are two types of quotas: user-based quotas and application-based quotas.
The last one, 'Per 100 seconds per user', means a single user of your application can make a maximum of 20,000 requests per 100 seconds.
The first one, 'Per 100 seconds', means all users of your application at the same time may make a maximum of 20,000 requests per 100 seconds.
The middle one, 'Per day', means all users of your application together can make a maximum of 1 billion requests per day.
Per-user limits apply to anyone authorized to access your application via the consent screen being shown to them. Usage is tracked by the access token they are using, which contains the user info within it.
If the same user makes requests over different IP addresses, you can sometimes get around the quota limit, but it doesn't always work. A user is a user, no matter which machine they are coming from.
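To make that concrete for the upload question: what matters is whose token the request is sent with. Here is a minimal Node.js sketch (using the googleapis client; authClient setup and the file name are placeholders). If every visitor's upload runs under the account holder's token, Google counts them all as one user, so the 20k per 100 seconds per-user limit effectively applies to the account holder.

    const fs = require('fs');
    const { google } = require('googleapis');

    async function uploadAs(authClient) {
      // Quota is attributed to the user behind authClient's access token.
      // If this is always the account holder's token, every upload counts
      // against that single user's 'Per 100 seconds per user' quota.
      const drive = google.drive({ version: 'v3', auth: authClient });
      return drive.files.create({
        requestBody: { name: 'upload.txt' },
        media: { body: fs.createReadStream('upload.txt') },
        // For server-side apps, the standard quotaUser parameter can
        // attribute quota to an arbitrary per-end-user string instead:
        quotaUser: 'end-user-123',
      });
    }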
Let's say I'm creating a PWA (Progressive Web App) where products can be added by users.
Prices of these products range from 0.01 EUR to 1.00 EUR.
I'm using Stripe for payments.
The Stripe Order object does not support a dynamic price passed on the fly, without any reference (a kind of foreign key).
To accept the Order, Stripe needs a reference to a SKU.
This SKU will be, in my case, a price variation of the product.
It means that, to cover all variations, I need 100 SKUs, from 1 (0.01 EUR) to 100 (1.00 EUR).
So, for each product created in Stripe, I need to create 100 SKUs in Stripe.
I tried to insert a test dataset of 200 products, which means (200 products + (200 x 100 SKUs)) = 20200 requests.
I got a surprising "Request rate limit exceeded" error from Stripe.
Less than half of the records were created... :(
That "Request rate limit exceeded" is the core of the problem.
Right now, the insertion process is the following (x 200):
Create product in Firestore.
Firebase Cloud Function listener:
OMG, new product inserted in Firestore. OK, let's:
Import the official Node.js Stripe & Algolia libraries
Create the product in Stripe to make it billable
Create the 100 SKUs related to the product in Stripe, with Promise.all (this is where, at some point, I end up with a rate limit error, because my concurrent Cloud Function instances are using the same Stripe key, which means the same Stripe account; see the sketch below)
Create the product in Algolia to make it searchable
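For clarity, a condensed sketch of that listener (collection and variable names are simplified, and the key comes from an environment variable as a placeholder). The unthrottled Promise.all over 100 stripe.skus.create() calls, multiplied by concurrent function instances, is what trips the limit:

    const functions = require('firebase-functions');
    const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY); // same key in every instance

    exports.onProductCreate = functions.firestore
      .document('products/{productId}')
      .onCreate(async (snap, context) => {
        const product = snap.data();

        // 1 request: make the product billable (legacy Orders/SKU API).
        await stripe.products.create({
          id: context.params.productId,
          name: product.name,
          type: 'good',
        });

        // 100 requests fired at once: one SKU per price variation (1..100 cents).
        await Promise.all(
          Array.from({ length: 100 }, (_, i) =>
            stripe.skus.create({
              product: context.params.productId,
              price: i + 1, // amount in cents
              currency: 'eur',
              inventory: { type: 'infinite' },
            })
          )
        );
        // ...then index the product in Algolia (omitted here).
      });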
I need solutions to counter this Stripe API rate limit error.
I have several solutions in mind:
Solution 1:
Be able to increase the Stripe API rate limit for a given amount of time.
Not sure this is possible.
Solution 2:
Be able to use different Stripe keys and rotate over them to perform admin tasks, such as inserting multiple products/SKUs in Stripe.
Ultimately, in production, be able to programmatically create one Stripe key per user, so each user would have their own limit.
Not sure this is possible.
Solution 3:
Slow down the insertion process in JavaScript.
I don't know how to do that.
Besides, Cloud Functions have a budget/limit of 60 seconds of JavaScript execution, so I can't delay too much.
Solution 4:
Delay the work using Pub/Sub (?) or Firestore triggers.
For example, keep an integer in Firestore that each function call increments; the same function listens to that write and re-increments the number, and so on, until the number reaches 100 for the 100th SKU (sketched below). That solution would serialize the 100 SKU writes to Stripe.
I'm not sure this would really slow the work down enough to stay under the API rate limit. In addition, such a solution would cost a lot of money: 100+ Firestore writes, plus 100+ function calls to perform those writes, for a single product, which means 20,000+ writes and 20,000+ calls for the 200 products. That would be expensive.
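A minimal sketch of that trigger chain, assuming a skuCounters collection (my naming) whose document holds the counter; each write creates one SKU and re-triggers the function:

    const functions = require('firebase-functions');
    const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY); // placeholder key

    // Each write to the counter doc creates one SKU, then bumps the counter,
    // which re-triggers this function until all 100 SKUs exist.
    exports.nextSku = functions.firestore
      .document('skuCounters/{productId}')
      .onWrite(async (change, context) => {
        if (!change.after.exists) return null; // counter doc deleted
        const count = change.after.data().count; // 1..100
        if (count > 100) return null; // done

        await stripe.skus.create({
          product: context.params.productId,
          price: count, // amount in cents
          currency: 'eur',
          inventory: { type: 'infinite' },
        });

        return change.after.ref.update({ count: count + 1 }); // re-trigger
      });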
Solution 5:
Perform just-in-time insertions, when the user pays.
The server-side algorithm, after a Payment Request API call, might look like this (orderParams, skusInCart, db, and stripe are assumed to be defined upstream):

    try {
      await stripe.orders.create(orderParams);
    } catch (err) {
      if (err.message.startsWith('No such sku')) {
        // Ideally, filter here to only the SKUs that are in error.
        for (const sku of skusInCart) {
          if (sku.price < 1 || sku.price > 100) {
            continue; // bad price, not legit
          }
          try {
            await stripe.skus.create(sku);
          } catch (skuErr) {
            if (skuErr.message.includes('already exists')) {
              continue; // no creation needed for that SKU
            }
            if (skuErr.message.startsWith('No such product')) {
              const doc = await db.collection('products').doc(sku.product).get();
              if (!doc.exists) {
                continue; // bad productId, not legit
              }
              await stripe.products.create({ id: sku.product, name: doc.data().name, type: 'good' });
              await stripe.skus.create(sku); // retry now that the product exists
            }
          }
        }
        await stripe.orders.create(orderParams); // retry the order
      }
    }
This last solution could do the job.
But it might come with some delay for the user at payment time, which could increase stress. It might also increase Stripe calls during business hours: many purchases at the same time could lead to a Stripe API rate limit error, especially with well-stocked carts (say an average of 30 products per cart, so in the worst case 30+ HTTPS calls at payment, times 1,000 users = 30,000 calls => Stripe error). That problem should decrease over time for a given product, because once a SKU is created it is created for good. Still, since there would be new products (products with zero SKUs at creation) every day, the problem remains.
What do you think?
Do you have any other ideas?
Solution 3 and Solution 5 with some tweaks will work best.
Solution 3: You can limit the number of concurrent requests to Stripe using the async module's forEachLimit or queue.
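A minimal sketch of that throttling, using the async library (in async v3, forEachLimit is named eachLimit; the concurrency of 5 is an arbitrary value to tune against the rate limit):

    const async = require('async');

    // Create the 100 SKUs with at most 5 requests in flight at a time,
    // instead of firing all 100 at once with Promise.all.
    async function createSkusThrottled(stripe, skus) {
      await async.eachLimit(skus, 5, async (sku) => {
        await stripe.skus.create(sku);
      });
    }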
Solution 5: Just-in-time insertion is also a good option, as it won't put much load on the Stripe server at any one time. Regarding your concern about getting the same error during business hours: it will be a very rare case, as the Stripe APIs are built to perform very well. But if you still have doubts, you can run a background process that adds SKUs during non-business hours, which will keep creating SKUs for you without hitting the Stripe API rate limit.
Solution 6 (modified Solution 5): Keep the just-in-time insertion, but also make an extra API request to your server whenever a product is added to the cart. That request checks whether the SKU exists in Stripe and, if not, creates it in the background before the cart payment happens.
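A rough sketch of that extra endpoint (an Express-style handler; the deterministic SKU id scheme `${productId}-${price}` is my own convention, not something Stripe requires):

    // Called by the front end whenever a product is added to the cart.
    app.post('/ensure-sku', async (req, res) => {
      const { productId, price } = req.body; // price in cents, 1..100
      const skuId = `${productId}-${price}`;
      try {
        await stripe.skus.retrieve(skuId); // already exists: nothing to do
      } catch (err) {
        await stripe.skus.create({
          id: skuId,
          product: productId,
          price,
          currency: 'eur',
          inventory: { type: 'infinite' },
        });
      }
      res.sendStatus(204);
    });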
Solution 6:
Same idea (JIT), but moving SKU creation from payment time to product-selection time. Each time a product is selected, try to create the product and its current SKU (price variation) in Stripe. This way, Stripe calls should be more spread out over time. Or maybe it will end up in more API calls, since we select products more often than we pay: users can select and unselect products, so they might end up with more products selected during their journey than the number of products finally paid for in the cart?
Solution 7:
Same idea (JIT), but with SKUs cached in Algolia or Firebase, so I can answer "does this SKU exist?" without querying Stripe. This should reduce Stripe calls, as long as the existence test is performed before the create call (we do not call Stripe.skus.create() blindly). The drawback is that Firebase and Algolia are exposed on the front end, so the SKUs and prices would be too, and this is a potential source of threat; so another index, dedicated and known only by the server, has to be used.
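A sketch of that cached existence test, assuming a server-only Firestore collection named stripeSkus (my naming) that mirrors which SKUs were already created:

    // Returns without touching Stripe when the SKU is already cached.
    async function ensureSku(db, stripe, productId, price) {
      const skuId = `${productId}-${price}`;
      const ref = db.collection('stripeSkus').doc(skuId); // server-only index
      if ((await ref.get()).exists) return; // cache hit: zero Stripe calls

      await stripe.skus.create({
        id: skuId,
        product: productId,
        price, // cents
        currency: 'eur',
        inventory: { type: 'infinite' },
      });
      await ref.set({ createdAt: Date.now() }); // record it for next time
    }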
What is the maximum limit for fetching Facebook pages from an account?
Suppose a Facebook account has more than 200 pages to administer, and I try to retrieve that account's pages using the '/me/accounts' edge.
I get data as well as paging (containing cursors and next/previous page links). What I want to know is: can I set a limit while fetching the pages, like '/me/accounts?limit=200', and get all 200 pages the account has?
I have searched the documentation, but there is no clear explanation of this limit.
If you mean the API limit, it's dynamic and not a definitive value. A general rule is "600 calls per 600 seconds, per token & per IP". You can also read this in other Stack Overflow threads, for example: What's the Facebook's Graph API call limit?
If you mean the limit parameter: AFAIK they change it from time to time. I would not rely on it; just use the default value (usually 25) with paging.
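Rather than relying on a large limit value, you can walk the paging cursors until they run out. A minimal Node.js sketch (Node 18+ for global fetch; accessToken is a placeholder and error handling is omitted):

    // Collect every page the account administers by following paging.next.
    async function fetchAllPages(accessToken) {
      const pages = [];
      let url = `https://graph.facebook.com/me/accounts?access_token=${accessToken}`;
      while (url) {
        const res = await fetch(url);
        const body = await res.json();
        pages.push(...body.data);
        url = body.paging && body.paging.next; // undefined on the last page
      }
      return pages;
    }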
I am trying to use the Google Admin Reports API (Users Usage Report) to pull emails received/sent per user per day in our org's Google Apps domain.
When I use the Google APIs Explorer to pull my own stats for a particular day and compare them with the real situation, the numbers seem far off.
For example, on Sunday, 7th Dec 2014, I only sent out one email, but the stats show 4 emails sent by me on that day.
Any assistance would be appreciated.
Cheers,
You should get the same results as searching in Gmail:
in:sent from:me after:2014/12/07 before:2014/12/08
The missing bit is the time zone the server uses; in my research it has always been Pacific Standard Time.
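For reference, a minimal Node.js sketch of the Users Usage Report call being compared here (using the googleapis client; the authorized auth client and its admin.reports.usage.readonly scope setup are omitted):

    const { google } = require('googleapis');

    async function gmailCounts(auth) {
      const reports = google.admin({ version: 'reports_v1', auth });
      const res = await reports.userUsageReport.get({
        userKey: 'me@example.com', // placeholder address
        date: '2014-12-07', // the day compared with the Gmail search above
        parameters: 'gmail:num_emails_sent,gmail:num_emails_received',
      });
      return res.data.usageReports;
    }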
Did you:
Send out any calendar invitations that day? (1 email per attendee)
Share any Google Drive files/folders that day? (1 email per file shared)
Send mail from a Google Group?
There are likely other actions you may have performed in other Google Apps that caused emails to go out in your name and count against your quota, but not necessarily show in your Sent folder.
If you'd like these messages to appear in your Sent folder, turn on Comprehensive Storage.
Does the Alexa API give a way to query Bounce Rates? There seems to be no ResponseGroup that returns this information. I've tried all the ResponseGroups and Actions mentioned in the documentation here.
There is no way to get highly accurate data via their API; an alternative is to use the SimilarWeb API for engagement data such as:
Average Page Views (average number of page views per visitor)
Average Time On Site
Bounce Rate
You can read more about the Engagement API in this link: https://developer.similarweb.com/engagement_api
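For what it's worth, a hedged sketch of such a call: the endpoint path and query parameters below are assumptions in the style of SimilarWeb's v1 REST API, so verify them against the Engagement API docs linked above:

    // Hypothetical endpoint shape; check the Engagement API docs for the real one.
    async function bounceRate(domain, apiKey) {
      const url = `https://api.similarweb.com/v1/website/${domain}` +
        `/engagement/bounce-rate?api_key=${apiKey}` +
        '&start_date=2015-01&end_date=2015-03&granularity=monthly';
      const res = await fetch(url);
      return res.json(); // expected: a series of monthly bounce-rate values
    }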