I have a .NET application that takes a list of names/email addresses and finds their matches on Facebook using the Graph API. During testing, my list had 900 names. I was checking Facebook matches for each name in a loop. The process completed, but when I next opened my Facebook page, I got a message that my account had been suspended due to suspicious activity.
What am I doing wrong here? Doesn't Facebook allow a large number of search requests to their servers? And 900 doesn't seem like a big number either.
Per the platform policies (https://developers.facebook.com/policy/), this may be a suspected breach of their "Principles" section.
See Policies I.5
If you exceed, or plan to exceed, any of the following thresholds please contact us by creating a confidential bug report with the "threshold policy" tag as you may be subject to additional terms: (>5M MAU) or (>100M API calls per day) or (>50M impressions per day).
Also IV.5
Facebook messaging (i.e., email sent to an @facebook.com address) is designed for communication between users, and not a channel for applications to communicate directly with users.
Then the biggie, V. Enforcement. No surprise, it's both automated and also monitored by humans. So the automated systems may well have flagged 900+ rapid requests coming from your app.
What I'd recommend doing:
Store what you can client side (in a cache or data store) so you make fewer calls to the API.
Put logging on your API calls so you, the developer, can see exactly what is happening. You might be surprised at what you find there.
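For illustration, here is a minimal sketch of both recommendations combined with a pause between calls. It assumes Java 11+; the search endpoint, parameters, and token handling are placeholders rather than Facebook's confirmed current API, so treat it as a pattern, not a drop-in client.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

public class ThrottledGraphSearch {
    private static final Logger LOG = Logger.getLogger(ThrottledGraphSearch.class.getName());

    private final HttpClient http = HttpClient.newHttpClient();
    private final Map<String, String> cache = new HashMap<>(); // name -> raw JSON response
    private final String accessToken;

    public ThrottledGraphSearch(String accessToken) {
        this.accessToken = accessToken;
    }

    // Looks up one name, reusing cached results and pausing between live calls.
    public String search(String name) throws Exception {
        String cached = cache.get(name);
        if (cached != null) {
            LOG.info("cache hit, no API call spent: " + name);
            return cached;
        }
        // Hypothetical endpoint and parameters; adjust to the Graph API version you target.
        String url = "https://graph.facebook.com/search?type=user&q="
                + URLEncoder.encode(name, StandardCharsets.UTF_8)
                + "&access_token=" + accessToken;
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        LOG.info("live API call for '" + name + "' -> HTTP " + resp.statusCode());
        cache.put(name, resp.body());
        Thread.sleep(2_000); // crude throttle: ~30 calls per minute instead of a tight loop
        return resp.body();
    }
}

The two-second sleep is deliberately crude; the point is that the loop no longer hammers the API at full speed, and the log tells you exactly how many live calls you actually made.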
I have a Books API project, and the GCP console shows "No data is available for the selected time frame" for the last 30 days. This message appears on both the "Metrics" and "Quotas" pages. See screenshots below.
Clearly there is data, which I can see via my app analytics reports.
Any suggestions on how to fix it?
UPDATE 1:
Following are some points that were missing from the original post:
The Google Books API is used by an iOS app, which is available on the App Store and widely used across many iOS devices (iPhones and iPads) in many countries.
There are thousands of iOS devices running my app so the Google Books API calls are invoked from thousands of endpoints with different locations and different IPs. All endpoints are using the same API_KEY.
The Google Books API calls are performed successfully from the iOS devices, and there is no API issue (I can clearly see that using an analytics tool).
The only issue I have is with the GCP console not showing the number of API calls (and other metrics) associated with my API_KEY. As you can see in the previous screenshots, I get "No data is available for the selected time frame" everywhere.
This is a regression issue since until recently I could successfully view the actual data of the API usage. I didn't change anything in this period.
When going to GCP > IAM & Admin > Quotas, you can clearly see that the app indeed consumes API calls (see screenshot below).
Any suggestion as to why the GCP console would say that no data is available, while data is indeed available?
As per the documentation [1], Google Books respects copyright, contract, and other legal restrictions associated with the end user's location. As a result, some users might not be able to access book content from certain countries. For example, certain books are "previewable" only in the United States; we omit such preview links for users in other countries. Therefore, the API results are restricted based on your server or client application's IP address.
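Since [1] is the page that documents this behavior, one quick way to test it from a fixed vantage point is the country query parameter, which overrides IP-based location detection. A minimal sketch (Java 11+; the API key is read from an environment variable and the search query is just an example):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BooksCountryCheck {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("BOOKS_API_KEY"); // your existing key
        // The optional "country" parameter overrides IP-based location detection.
        String url = "https://www.googleapis.com/books/v1/volumes?q=flowers&country=US&key=" + apiKey;

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode());
        System.out.println(resp.body().substring(0, Math.min(200, resp.body().length())));
    }
}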
On the other hand, I hope link [2] is helpful, as it seems similar to the issue you are facing. Also, documentation [3] [4] could be helpful for getting more information about the Books API for use in Google Cloud Platform.
[1] https://developers.google.com/books/docs/v1/using#UserLocation
[2] Google books api always returns nothing
[3] https://developers.google.com/books/docs/v1/using
[4] https://developers.google.com/books/docs/v1/getting_started
I've been trying for a while to figure out if Google Cloud has a mechanism for a "crowbar" limit on API usage, as a security measure.
The scenario I'm concerned about is, say I have an API keyfile on a server, and a hacker steals it and plugs it into their system that (for the purposes of some nebulous scam) is running an unbounded number of translate requests as quickly as possible. I then receive a $25,000 bill. I don't see any mechanisms to really prevent this.
I can limit the roles for that key, but if the hacker's interest is in the same API I use for my project, that doesn't help. I can set a billing limit, but that just sends me an email, and I am not a person who notices incoming email, so in practice it would take me days to realize this had happened.
I can set a quota, but all the quotas in the dashboard seem to be per-minute or per-day limits. I need a per-month quota, and I don't see any way to do it. Am I just missing something, or does Google simply provide no option for this?
I understand that "well just don't let your API key get compromised" is probably the common wisdom, but that's unacceptable to my use cases for a number of reasons, so I'm not interested in discussing it. Thanks!
Edit: I should note, Google's documentation says "you can set requests per day caps" - but there are no instructions on that page for how to do this, and I can see no mechanism for it. I don't want a per-day cap, I want a per-month cap, but I'd take a per-day cap if I could find one. These are the only quotas I see for Cloud Vision, for instance:
[Screenshots: Cloud Vision quota pages, parts 1 and 2]
As per link 1, there is no hard quota limit for the Vision API on a monthly basis. If you need this feature, you can file a feature request using link 2.
In the meantime, as a workaround, you can control your Vision API budget by using the Cloud Billing Budget API, following link 3.
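For illustration, a minimal sketch of creating such a budget through the Budget API's v1 REST endpoint (Java 15+ for the text block; the billing account ID is a placeholder, and the access token is assumed to come from something like gcloud auth print-access-token):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateBudget {
    public static void main(String[] args) throws Exception {
        String billingAccount = "012345-ABCDEF-678901";         // placeholder billing account ID
        String accessToken = System.getenv("GCP_ACCESS_TOKEN"); // e.g. gcloud auth print-access-token

        // Budget of 100 USD with an alert at 90% of actual spend.
        String body = """
            {
              "displayName": "vision-api-cap",
              "amount": { "specifiedAmount": { "currencyCode": "USD", "units": "100" } },
              "thresholdRules": [ { "thresholdPercent": 0.9, "spendBasis": "CURRENT_SPEND" } ]
            }
            """;

        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://billingbudgets.googleapis.com/v1/billingAccounts/"
                        + billingAccount + "/budgets"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}

Keep in mind that a budget by itself only alerts: to actually stop spending, the pattern Google documents is to route the budget's Pub/Sub notifications to a Cloud Function that disables billing on the project.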
I am trying to write a Kafka connector to fetch data from Facebook. The problems are:
How do I fetch data from Facebook through their API without exceeding the API hit limit that Facebook imposes? The connector should call the Facebook API at a specific time interval so that the number of hits won't exceed the limit.
Each user hits the Facebook API with their own access token, so users can't share the same topic partition. How do I handle this scenario? Do we have to create one partition for each user?
I read a few guides and blogs to understand Kafka Connect and how to write a connector.
Confluent- https://docs.confluent.io/current/connect/index.html
Kafka Documentation- https://kafka.apache.org/documentation/#connect
Conceptually, they gave me an idea of what Kafka Connect is, how it works, and what the important classes for writing a connector are. But I am still confused about how to actually write and run a connector in practice. I tried to find a step-by-step development guide but couldn't.
Can you suggest any tutorial or PDF with a detailed step-by-step guide to writing and running a Kafka connector?
The only "official guide" is in those links you have
https://docs.confluent.io/current/connect/devguide.html#developing-a-simple-connector
I personally have no experience with the Facebook API, but I assume it uses REST, so you could start by forking the kafka-connect-rest project. The simplest answer to not exceeding the limit is to not send more requests than you are allowed within a given time period: add a timer to the code that waits between requests.
Also, one connector would only have one set of access keys. How you create the ConnectRecord objects to ultimately partition the records is up to you, but I don't think having an access key per user will scale very well. It might make more sense to have one key tied to one application; each user then accepts that that application has access to read certain details from their account.
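To make the timer idea concrete, here is a minimal sketch of a throttled Kafka Connect SourceTask. The Facebook fetch itself is stubbed out, and the topic and poll.interval.ms config keys are my own naming, not part of any published connector:

import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Source task that calls the external API no more than once per configured interval.
public class ThrottledFacebookSourceTask extends SourceTask {
    private long pollIntervalMs;
    private long lastPoll = 0L;
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        topic = props.get("topic");
        pollIntervalMs = Long.parseLong(props.getOrDefault("poll.interval.ms", "60000"));
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        long wait = lastPoll + pollIntervalMs - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait); // the "timer" between requests that keeps us under the rate limit
        }
        lastPoll = System.currentTimeMillis();

        String payload = fetchFromApi(); // stub: real code would call the REST endpoint here
        return Collections.singletonList(new SourceRecord(
                null, null, // source partition/offset maps, omitted in this sketch
                topic, Schema.STRING_SCHEMA, payload));
    }

    private String fetchFromApi() {
        return "{}";
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "0.1";
    }
}

Because poll() is called in a loop by the Connect framework, blocking inside it is the standard way to pace a source task; a real implementation would also populate the source partition/offset maps so the task can resume where it left off.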
Say I want to develop a web application that will have registered users and will be registered as a Twitter app (allowing users to give it permissions to view their timeline and post on their behalf). The sole function of the application will be to retweet tweets from users' timelines according to users' settings and desires.
I understand that the website for this app will use common client-side technologies like HTML, CSS, and JS. The server side (where the user defines what kind of tweets the application should retweet) will have to be coded in PHP/Python/Perl/... backed by a MySQL/Postgres/... database.
What I don't understand, and would really appreciate your help with, is where the real "business logic" will be coded. For example, what technology should I use to code the function that will sit on my server: contacting Twitter's servers every 5 minutes, reading the timeline of every user I have, checking whether there are tweets worth retweeting (according to what the user has defined), and sending Twitter the necessary commands to retweet the chosen tweets on behalf of my users?
All of that will happen offline from the user's perspective, as an ongoing, cyclic process. But what technology should I use to code it?
Thanks!
I have heard about this API for PHP. It is actually the only one that I have heard of for PHP, though. I know that there are some good Python libraries out there, but I don't know about Perl.
I am actually working on a new API for C# (won't be a good fit for you, as you're clearly not using Windows Servers), and started building it while working on an enterprise web application that prompted several questions similar to your own.
Here is what you are going to have to do:
Before you start, you are going to have to get in touch with one of Twitter's data partners (I believe that you may contact Twitter for the reference)
The reason for this is that you are going to require many more requests than you think
Twitter's time interval for its recorded rate cap is 900 seconds (15 minutes)
With the general rate limit, if you query a user's timeline only once per rate-limit window, you are limiting the number of visitors on your site to 300 at a time
Here's where it gets tricky: if every user makes one Tweet (meaning you send the Tweet, which is not rate limited, and then refresh the timeline, which is rate limited, so that they can see the updated tweet), you have now dropped your maximum number of active users at any given time to 150
Factor in the company's own timeline (-1 visitor), plus the number of visitors who leave their browsers open (now you need more logic, and you have to either kick them off or keep track of whose timelines you won't be refreshing), the number of users who make more than one Tweet (-1 visitor for each Tweet), etc.
Moral of the story: contact one of their data partners and get yourself either unlimited requests, or at least a significant enough amount to accommodate your number of visitors/users (plus a bit of padding)
If you adhere to this advice, skip steps 2 and 3; otherwise, skip step 4
(Note: Steps 2 and 3 are only for rate-capped implementations) Using your desired language, make a service that runs on the server and makes the queries to Twitter
Based on the information that you gave, I suggest that you use Python to make this service
The service will run at all times and keep its own clock on which to base the 5-minute intervals between requests (see the sketch after this list)
You will have to use a caching or a database system for storing the data
(Note: Steps 2 and 3 are only for rate-capped implementations) Add the necessary code to make a request to the service that you created for the data and perform these requests every 5 minutes
I suggest that the clock used for making these requests to the service be running a little bit behind the clock used for the service to account for instances of slow data transfer, etc.
You will also have to call some methods on the service for adding/removing users from the queue
(Note: Step 4 is only for unlimited request implementations) Forget about the service and simply include the request code directly in the page that the user is on.
The user's timeline will be updated based on when they visited the site or when their timeline was last refreshed (if a Tweet was made)
The only caveat to this implementation is that you will have to pay for the unlimited/larger data rate limit
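For the rate-capped path (steps 2 and 3), here is a minimal sketch of such a service in Java. The Twitter calls and the per-user retweet rules are stubbed placeholders; only the scheduling and the add/remove queue methods from step 3 are fleshed out:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Background service: every 5 minutes, refresh each registered user's timeline
// and retweet whatever matches that user's saved rules.
public class RetweetService {
    private final Map<String, String> userTokens = new ConcurrentHashMap<>(); // userId -> OAuth token
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void addUser(String userId, String oauthToken) { userTokens.put(userId, oauthToken); }
    public void removeUser(String userId) { userTokens.remove(userId); }

    public void startService() {
        scheduler.scheduleAtFixedRate(this::refreshAll, 0, 5, TimeUnit.MINUTES);
    }

    private void refreshAll() {
        userTokens.forEach((userId, token) -> {
            List<String> timeline = fetchTimeline(userId, token); // the rate-limited call
            for (String tweet : timeline) {
                if (matchesUserRules(userId, tweet)) {
                    retweet(userId, token, tweet); // per the notes above, posting is not rate-capped the same way
                }
            }
        });
    }

    // Placeholders: real code would call the Twitter REST API and the app's rules store.
    private List<String> fetchTimeline(String userId, String token) { return List.of(); }
    private boolean matchesUserRules(String userId, String tweet) { return false; }
    private void retweet(String userId, String token, String tweet) { }
}

A single scheduler thread is enough here because the work is serialized against the shared rate limit anyway; the page-side code from step 3 would call addUser/removeUser and read whatever the service has cached.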
I am trying to prototype a system that will display a list of choices to a user and allow them to place an order for the one they select (an oversimplification of the prototype, but sufficient to get to the point). I have the user's credit card number, billing and shipping addresses, and other contact information, but I can't find any web services that will let me actually purchase something with this information to complete the prototype. I have checked directories such as Programmable Web and Xmethods, but they just seem to point to APIs that let you check prices and availability, not actually place an order. Does such a thing exist, or is there some reason (such as security) that I am missing that prevents such a service from being offered?
The most important thing about online shopping is the security of the transmitted information (e.g., credit card data). So the ideal case is to transmit this information directly to the payment services of the relevant bank (the issuer of the credit card), rather than passing it through other service providers. This is what 3-D Secure does.
Using a common API therefore means putting an extra broker in between and passing the secure information to this party, which increases vulnerability. Such a broker cannot use 3-D Secure (since it is not the merchant, it cannot make an agreement with the banks), and it would have to pass the information on to the online shopping site.
Moreover, an online shopping site can block traffic coming from such an intermediary web service at any time unless you make a binding agreement, and making agreements with each online merchant is practically infeasible.
There is no such free API available. The simple reason is that information like credit card data is highly confidential, and free APIs would pose a security threat.
Here is a list of the 10 best online payment systems:
http://sixrevisions.com/tools/online-payment-systems/
And this one provides a live demo:
http://www.fastcharge.com/
I think it is possible, though I don't have in-depth information. I think this is what you are seeing: in the next steps you will be redirected to the bank's payment gateway, and then you can complete the transaction just by answering some security questions. I think this is a service you should obtain from the bank. And I haven't seen any universal API that can perform the task you have mentioned.
Dialog GSM - Sri Lanka
Anything.lk - Sri Lanka