I'm trying to get the friends of a user and append them to a list given a condition:
for friend in tweepy.Cursor(api.friends).items():
    if friend.screen_name not in visited:
        screen_names.append(friend.screen_name)
        visited.append(friend.screen_name)
However, I get this error:
raise RateLimitError(error_msg, resp)
tweepy.error.RateLimitError: [{u'message': u'Rate limit exceeded', u'code': 88}]
Could you give me any hint on solving this problem? Thanks a lot
By default, the friends method of the API class returns only 20 users per call, and the Twitter API limits you to 15 calls per 15-minute window. Thus you can fetch at most 20 x 15 = 300 friends within 15 minutes.
Cursor in tweepy is a convenience for paging through results without managing the cursor value yourself on each call to the Twitter API.
You can increase the number of results fetched per call by passing an extra count parameter:
tweepy.Cursor(api.friends, count=200)
The maximum value of count is 200. If you have more than 200 x 15 = 3000 friends, you need to call api.friends directly, maintaining the cursor value yourself and sleeping between calls to spread them across rate-limit windows, as in the sketch below. See the GET friends/list page for detailed info.
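A minimal sketch of that manual approach, assuming tweepy 3.x (where calling api.friends with a cursor argument returns the page of users plus a (previous_cursor, next_cursor) pair) and an already-authenticated api handle:
import time

import tweepy

cursor = -1          # -1 asks Twitter for the first page
screen_names = []
while cursor != 0:   # Twitter returns next_cursor == 0 after the last page
    try:
        friends, (prev_cursor, cursor) = api.friends(count=200, cursor=cursor)
        screen_names.extend(friend.screen_name for friend in friends)
    except tweepy.RateLimitError:
        time.sleep(15 * 60)  # wait out the 15-minute window, then retry the same page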
Since tweepy 3.2 you can instruct the library to wait out rate limits itself, so you don't have to do that in your code.
To use this feature you would initialize your api handle as follows:
self.api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
The documentation for the new variables is below.
wait_on_rate_limit – Whether or not to automatically wait for rate limits to replenish
wait_on_rate_limit_notify – Whether or not to print a notification when Tweepy is waiting for rate limits to replenish
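Putting the two together, a sketch of the original loop that lets tweepy do the waiting (assuming auth is an already-configured tweepy.OAuthHandler):
import tweepy

api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

screen_names, visited = [], []
for friend in tweepy.Cursor(api.friends, count=200).items():
    if friend.screen_name not in visited:
        screen_names.append(friend.screen_name)
        visited.append(friend.screen_name)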
Per Twitter's API documentation, you have reached your query limit. Rate limits are applied per 15-minute window, so wait for the window to reset before querying again. If you scroll down Twitter's error-code documentation, you will see code 88, "Rate limit exceeded".
We have a Vertex AI model that takes a relatively long time to return a prediction.
When hitting the model endpoint with one instance, things work fine. But batch jobs of, say, 1000 instances end up with around 150 504 errors (upstream request timeout). (We actually need to send batches of 65K, but I'm troubleshooting with 1000.)
I tried increasing the number of replicas, assuming that the number of instances handed to the model would be (1000 / number of replicas), but that doesn't seem to be the case.
I then read that the default batch size is 64, so I tried decreasing the batch size to 4 in the Python code that creates the batch job:
from google.cloud import aiplatform

def run_batch_prediction_job(vertex_config):
    aiplatform.init(
        project=vertex_config.vertex_project, location=vertex_config.location
    )
    model = aiplatform.Model(vertex_config.model_resource_name)
    model_params = dict(batch_size=4)
    batch_params = dict(
        job_display_name=vertex_config.job_display_name,
        gcs_source=vertex_config.gcs_source,
        gcs_destination_prefix=vertex_config.gcs_destination,
        machine_type=vertex_config.machine_type,
        accelerator_count=vertex_config.accelerator_count,
        accelerator_type=vertex_config.accelerator_type,
        # assuming the replica count also lives on vertex_config;
        # the original snippet referenced an undefined `replica_count`
        starting_replica_count=vertex_config.replica_count,
        max_replica_count=vertex_config.replica_count,
        sync=vertex_config.sync,
        model_parameters=model_params,
    )
    batch_prediction_job = model.batch_predict(**batch_params)
    batch_prediction_job.wait()
    return batch_prediction_job
I've also tried increasing the machine type to n1-highcpu-16, which helped somewhat, but I don't quite understand how batches are distributed to replicas.
Is there another way to decrease the number of instances sent to the model?
Or is there a way to increase the timeout?
Is there log output I can use to help figure this out?
Thanks
Answering your follow-up question above:
Is that timeout for a single instance request or a batch request? Also, is it in seconds?
This is a timeout for the batch job creation request.
The timeout is in seconds. According to create_batch_prediction_job(), timeout refers to the RPC timeout. If we trace the code we end up here, and eventually in gapic, where timeout is properly described:
timeout (float): The amount of time in seconds to wait for the RPC
to complete. Note that if ``retry`` is used, this timeout
applies to each individual attempt and the overall time it
takes for this method to complete may be longer. If
unspecified, the default timeout in the client
configuration is used. If ``None``, then the RPC method will
not time out.
What I could suggest is to stick with whatever is working for your prediction model. If adding the timeout improves things, build on it along with your initial fix of using a machine with a higher spec. You can also try a machine with more memory, such as the n1-highmem-* family. A sketch of passing the RPC timeout through the lower-level client follows.
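For illustration only, a minimal sketch using the GAPIC JobServiceClient directly; the project, model, and bucket names are placeholders, and under these assumptions timeout bounds the creation RPC itself, not how long the batch job runs:
from google.cloud import aiplatform_v1

# Placeholder resource names; substitute your own project, region, model, and bucket.
parent = "projects/my-project/locations/us-central1"

job = aiplatform_v1.BatchPredictionJob(
    display_name="timeout-demo",
    model=f"{parent}/models/1234567890",
    input_config=aiplatform_v1.BatchPredictionJob.InputConfig(
        instances_format="jsonl",
        gcs_source=aiplatform_v1.GcsSource(uris=["gs://my-bucket/input.jsonl"]),
    ),
    output_config=aiplatform_v1.BatchPredictionJob.OutputConfig(
        predictions_format="jsonl",
        gcs_destination=aiplatform_v1.GcsDestination(
            output_uri_prefix="gs://my-bucket/output/"
        ),
    ),
)

client = aiplatform_v1.JobServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
# `timeout` is the RPC timeout in seconds described in the docstring above.
response = client.create_batch_prediction_job(
    parent=parent, batch_prediction_job=job, timeout=120.0
)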
"failureReason": "Job validation failed: Request field config is
invalid, expected an estimated total output size of at most 400 GB
(current value is 1194622697155 bytes).",
The actual input file was only 8 seconds long. It was created using the Safari MediaRecorder API on macOS.
"failureReason": "Job validation failed: Request field
config.editList[0].startTimeOffset is 0s, expected start time less
than the minimum duration of all inputs for this atom (0s).",
The actual input file was 8 seconds long. It was created using the desktop Chrome MediaRecorder API, with mimeType "webm; codecs=vp9", on macOS.
Note that Stack Overflow wouldn't allow me to include the tag google-cloud-transcoder suggested by "Getting Support" https://cloud.google.com/transcoder/docs/getting-support?hl=sr
Like Faniel mentioned, your first issue is that your video is less than 10 seconds long, which is below the API's 10-second minimum.
Your second issue is that the "Duration" information is likely missing from the EBML headers of your .webm file. When you record with MediaRecorder, the duration of your video is set to N/A in the file headers because it is not known in advance. This means the Transcoder API will treat the length of your video as Infinity / 0. Some consider this a bug in Chromium.
To confirm this is your issue, you can use ts-ebml or ffprobe to inspect the headers of your video; a quick check is sketched below. You can also use these tools to repair the headers. Read more about this here and here.
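For instance, a check of the container metadata with ffprobe driven from Python (ffprobe must be installed and on the PATH; input.webm is a placeholder path):
import json
import subprocess

# Ask ffprobe for the container-level metadata as JSON.
result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", "input.webm"],
    capture_output=True, text=True, check=True,
)
fmt = json.loads(result.stdout).get("format", {})
# MediaRecorder output often lacks this field until the headers are repaired.
print(fmt.get("duration", "no duration in headers"))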
Also, try running the Transcoder API against this demo .webm, which has its duration information set correctly.
This Google documentation states that the input file must be at least 5 seconds long and should be stored in Cloud Storage (for example, gs://bucket/inputs/file.mp4). A job validation error can occur when the inputs are not properly packaged and don't contain duration metadata, or contain incorrect duration metadata. When the inputs are not properly packaged, you can explicitly specify startTimeOffset and endTimeOffset in the job config to set the correct duration. A job validation error also occurs if the estimated total output size exceeds 400 GB. You can use the following formula to estimate the output size:
estimatedTotalOutputSizeInBytes = bitrateBps * outputDurationInSec / 8;
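As a quick sanity check of the formula (the numbers here are made up for illustration):
# A 5 Mbps output lasting 10 minutes:
bitrate_bps = 5_000_000
output_duration_in_sec = 600
estimated_total_output_size_in_bytes = bitrate_bps * output_duration_in_sec / 8
print(estimated_total_output_size_in_bytes)  # 375000000.0 bytes, i.e. 375 MB, far below 400 GB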
Thanks for the question and feedback. The Transcoder API currently has a minimum duration of 10 seconds, which may be why the job wasn't successful.
I have created a new Cloud Function using Java 11 (Beta) Runtime to handle HTML form submission for my static site. It's a simple 3-field form (name, email, message). No file upload is involved. The function does 2 things primarily:
Creates a pull request with BitBucket
Sends email to me using SendGrid
NOTE: It also verifies recaptcha but I've disabled it for testing.
The function, when run on my local machine (base model 2019 MacBook Pro 13"), takes about 3 secs. I'm based in SE Asia. The same function deployed to Google Cloud us-central1 takes about 25 secs (8 times slower). I have almost the same code running in production as part of a servlet on the GAE Java 8 runtime, also in the US Central region, and it has taken about 2-3 secs for a few years, including recaptcha verification and sending the email. I'm trying to port it over to Cloud Functions, but the performance is about 10 times slower with Cloud Functions even without recaptcha verification.
For comparison, the Cloud Function is running on a 256MB / 400MHz instance, whereas my GAE Java 8 runtime runs on an F1 (128MB / 600MHz) instance. The function uses only about 75MB of memory and is configured to accept unauthenticated requests.
I noticed that even basic String concatenation like String c = a + b; takes a good 100ms on the Cloud Function. I timed the calls, and a simple concatenation of about 15 strings into one takes about 1.5-2.0 seconds.
Moreover, writing a small message (~1KB) to the HttpURLConnection output stream and reading the response back takes about 10 seconds (yes, seconds)!
/* Writing < 1KB to the output stream takes about 4-5 secs */
wr = new OutputStreamWriter(con.getOutputStream());
wr.write(encodedParams);
wr.flush();
wr.close();

/* Reading the response also takes about 4-5 secs */
String responseMessage = con.getResponseMessage();
Similarly, the SendGrid code below takes another 10 secs to send the email. It takes about 1 sec on my local machine.
Email from = new Email(fromEmail, fromName);
Email to = new Email(toEmail, toName);
Email replyTo = new Email(replyToEmail, replyToName);
Content content = new Content("text/html", body);
Mail mail = new Mail(from, subject, to, content);
mail.setReplyTo(replyTo);

SendGrid sg = new SendGrid(SENDGRID_API_KEY);
Request sgRequest = new Request();
Response sgResponse = null;
try {
    sgRequest.setMethod(Method.POST);
    sgRequest.setEndpoint("mail/send");
    sgRequest.setBody(mail.build());
    sgResponse = sg.api(sgRequest);
} catch (IOException ex) {
    throw ex;
}
Something is obviously wrong with the Cloud Function. Since my original code runs on the GAE Java 8 runtime, it was very easy to port over to the Cloud Function with minor changes; otherwise I would have gone with the NodeJS runtime. I'm also not seeing any of these performance issues when running the function on my local machine.
Can someone help me make sense of the slow performance issue?
What you're seeing is almost certainly due to the "cold start" cost associated with the creation of a new server instance to handle the request. This is an issue with all types of Cloud Functions, as described in the documentation:
Several of the recommendations in this document center around what is known as a cold start. Functions are stateless, and the execution environment is often initialized from scratch, which is called a cold start. Cold starts can take significant amounts of time to complete. It is best practice to avoid unnecessary cold starts, and to streamline the cold start process to whatever extent possible (for example, by avoiding unnecessary dependencies).
I would expect JVM languages to have an even longer cold start time due to the amount of time it takes to initialize a JVM, in addition to the server instance itself.
Other than the advice above, there is very little one can do to effectively mitigate cold starts. Efforts to keep a function warm are not as effective as you might imagine. There is a lot of discussion about this on the internet if you wish to search.
Keep in mind that the Java runtime is also in beta, so you can expect improvements in the future. The same thing happened with the other runtimes.
Could you please point me at any documents detailing the expected responsiveness of the HERE geocoding APIs? I'm after something more concrete than 99.9% availability.
Also, if I'm waiting 40 minutes, or 14 hours, for a batch job containing a single address to be processed, does that count as failing the 99.9%?
You can view the SLA report by logging into developer.here.com and going to https://developer.here.com/sla-report. Batch jobs are POST requests, and the time to complete your request depends on the queue size (there may be other requests waiting) and the batch size, so this doesn't count as failing the 99.9%.
Geocoding a single address like the one you listed takes just a few milliseconds. Anything above that, especially 40 minutes, indicates that you are probably not connected. This includes invalid address input, as the result will come back telling you that the address was not found. You can check a job's status using the RequestID from the request, like this:
https://batch.geocoder.ls.hereapi.com/6.2/jobs/E2bc948zBsMCG4QclFKCq3tddWYCsE9g?action=status&apiKey={YOUR_API_KEY}
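A small sketch of that status check in Python (the requests library must be installed; the apiKey value is a placeholder, and the RequestID is the example from above):
import requests

request_id = "E2bc948zBsMCG4QclFKCq3tddWYCsE9g"
url = f"https://batch.geocoder.ls.hereapi.com/6.2/jobs/{request_id}"
# action=status asks the batch geocoder for the job's current state.
resp = requests.get(url, params={"action": "status", "apiKey": "YOUR_API_KEY"})
print(resp.status_code)
print(resp.text)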
In general, an address with more tokens takes longer to return than one with fewer tokens. For example:
USA vs 1600 Pennsylvania Ave NW, Washington, DC 20500, USA
(ACF9)
Unless there's an option I'm missing, the "Log Slow Pages Taking Longer Than [n] Seconds" setting isn't useful for front-controller based sites (e.g., Model-Glue, FW/1, Fusebox, Mach-II).
For instance, in a Mura/Framework-One site, I just end up with:
"Warning","jrpp-186","04/25/13","15:26:36",,"Thread: jrpp-186, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-196","04/25/13","15:27:11",,"Thread: jrpp-196, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 59 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-214","04/25/13","15:28:56",,"Thread: jrpp-214, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 32 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-134","04/25/13","15:31:53",,"Thread: jrpp-134, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
Is there some way to get query string or post details in there, or is there another way to get what I'm after?
You can easily add some logging to your application for any requests that take longer than 10 seconds.
In onRequestStart():
request.startTime = getTickCount();
In onRequestEnd():
request.endTime = getTickCount();
if (request.endTime - request.startTime > 10000) {
    // 10000 ms = 10 seconds; log the query string for slow requests
    writeLog(cgi.QUERY_STRING);
}
If you're writing a Mach-II, FW/1 or ColdBox application, it's trivial to write a "plugin" that runs on every request, captures the URL or FORM variables passed in the request, and stores them in a simple database table or log file. (You can even capture session.userID, IP address, or whatever else you may need.) If you're capturing to a database table, you'll probably want no indexes, to keep inserts fast, and you'll need to rotate that table so you're not doing high-speed inserts on a table with tens of millions of rows.
In Mach-II, you'd write a plugin.
In FW/1, you'd put a call to a controller which handles this into setupRequest() in your application.cfc.
In ColdBox, you'd write an interceptor.
The idea is that the log just tells you which pages are consistently slow so you can do your own performance tuning.
For a start, turn on debugging for further details.