AWS Error "Calling the invoke API action failed with this message: Rate Exceeded" when I use s3.get_paginator('list_objects_v2') - amazon-web-services

A third-party application uploads around 10,000 objects a day to my bucket+prefix, so there are a very large number of files under it. My requirement is to fetch all objects that were uploaded to my bucket+prefix in the last 24 hours.
So I assume that when I call
response = s3_paginator.paginate(Bucket=bucket, Prefix='inside-bucket-level-1/', PaginationConfig={"PageSize": 1000})
it makes multiple calls to the S3 API, and maybe that is why it shows the Rate Exceeded error.
Below is my Python Lambda Function.
import json
import boto3
import time
from datetime import datetime, timedelta

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    from_date = datetime.today() - timedelta(days=1)
    string_from_date = from_date.strftime("%Y-%m-%d, %H:%M:%S")
    print("Date :", string_from_date)
    s3_paginator = s3.get_paginator('list_objects_v2')
    list_of_buckets = ['kush-dragon-data']
    bucket_wise_list = {}
    for bucket in list_of_buckets:
        response = s3_paginator.paginate(Bucket=bucket, Prefix='inside-bucket-level-1/', PaginationConfig={"PageSize": 1000})
        filtered_iterator = response.search(
            "Contents[?to_string(LastModified)>='\"" + string_from_date + "\"'].Key")
        keylist = []
        for key_data in filtered_iterator:
            if "/" in key_data:
                splitted_array = key_data.split("/")
                if len(splitted_array) > 1:
                    if splitted_array[-1]:
                        keylist.append(splitted_array[-1])
            else:
                keylist.append(key_data)
        bucket_wise_list.update({bucket: keylist})
    print("Total Number Of Object = ", bucket_wise_list)
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps(bucket_wise_list)
    }
When we execute the above Lambda function, it shows the error below.
"Calling the invoke API action failed with this message: Rate Exceeded."
Can anyone help me resolve this error and achieve my requirement?

This is probably due to your account's rate limits; you could add a retry with some seconds between attempts, or increase the page size (note that ListObjectsV2 returns at most 1,000 keys per call, so PageSize cannot go above 1000).
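As a minimal sketch of spacing out the underlying ListObjectsV2 calls, you can iterate the pages yourself and pause between them (this reuses the s3_paginator and bucket from the question; the half-second delay is an arbitrary guess to tune to your limits):
import time

# Iterate the paginator's pages manually and pause between them,
# so the underlying ListObjectsV2 calls are spread out over time.
for page in s3_paginator.paginate(Bucket=bucket,
                                  Prefix='inside-bucket-level-1/',
                                  PaginationConfig={"PageSize": 1000}):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['LastModified'])
    time.sleep(0.5)  # arbitrary delay between page requests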

This is most likely due to you reaching your quota limit for AWS S3 API calls. The "bigger hammer" solution is to request a quota increase, but if you don't want to do that, there is another way using botocore.Config's built-in retries, for example:
import json
import time
from datetime import datetime, timedelta
from boto3 import client
from botocore.config import Config

config = Config(
    retries={
        'max_attempts': 10,
        'mode': 'standard'
    }
)

def lambda_handler(event, context):
    s3 = client('s3', config=config)
    ### ALL OF YOUR CURRENT PYTHON CODE EXACTLY THE WAY IT IS ###
This config will use an exponentially increasing sleep timer for up to the maximum number of retries. From the docs:
Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
There is also an adaptive mode, which is still experimental. For more info, see the botocore.Config docs on retries.
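To make the quoted behaviour concrete, here is a rough sketch of that schedule. It mirrors the documented "base factor of 2, capped at 20 seconds" rule with full jitter; it is an illustration, not botocore's exact internals:
import random

def standard_mode_delay(attempt: int) -> float:
    # Exponential backoff: base factor of 2, capped at 20 seconds,
    # scaled by a random jitter fraction.
    return min(random.random() * (2 ** attempt), 20.0)

for attempt in range(1, 6):
    # Ceilings: 2s, 4s, 8s, 16s, 20s (capped)
    print(f"retry {attempt}: up to {min(2 ** attempt, 20)}s")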
Another (much less robust, IMO) option would be to write your own paginator with a sleep programmed in, though you'd probably just want to use the built-in backoff in 99.99% of cases (even if you do have to write your own paginator). Note that this code is untested and isn't asynchronous, so the sleep will be in addition to the wait time for a page response; to make the effective delay exactly sleep_secs, you'd need concurrent.futures or asyncio (the AWS built-in paginators mostly use concurrent.futures):
from boto3 import client
from typing import Generator
from time import sleep

def get_pages(bucket: str, prefix: str, page_size: int, sleep_secs: float) -> Generator:
    s3 = client('s3')
    # The first page is requested without a continuation token.
    page: dict = s3.list_objects_v2(
        Bucket=bucket,
        MaxKeys=page_size,
        Prefix=prefix
    )
    next_token: str = page.get('NextContinuationToken')
    yield page
    # Keep fetching pages until S3 stops returning a continuation token,
    # sleeping between calls to stay under the rate limit.
    while next_token:
        sleep(sleep_secs)
        page = s3.list_objects_v2(
            Bucket=bucket,
            MaxKeys=page_size,
            Prefix=prefix,
            ContinuationToken=next_token
        )
        next_token = page.get('NextContinuationToken')
        yield page
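Usage would look something like this (bucket and prefix taken from the question; the one-second delay is arbitrary):
# Walk all pages, pausing one second between ListObjectsV2 calls.
for page in get_pages('kush-dragon-data', 'inside-bucket-level-1/', 1000, 1.0):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['LastModified'])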

Related

"Rate of traffic exceeds capacity" error on Google Cloud VertexAI but only sending a single prediction request

As in the title. Exact response:
{
  "error": {
    "code": 429,
    "message": "Rate of traffic exceeds capacity. Ramp your traffic up more slowly. endpoint_id: <My Endpoint>, deployed_model_id: <My model>.",
    "status": "RESOURCE_EXHAUSTED"
  }
}
I send a single prediction request which consists of an instance of 1 string. The model is a pipeline of a custom tfidf vectorizer and logistic regression. I timed the loading time: ~0.5s, prediction time < 0.01s.
I can confirm through logs that the prediction is executed successfully but for some reason this is the response I get. Any ideas?
A few things to consider:
Allow your prediction service to serve using multiple workers
Increase your number of replicas in Vertex, or set your machine types to stronger types, as long as you gain improvement
However, there is something worth doing first on the client side, assuming most of your prediction calls go through successfully and it is not that frequent that the service is unavailable:
Configure your prediction client to use Retry (exponential backoff):
from google.api_core.retry import Retry, if_exception_type
import requests.exceptions
from google.auth import exceptions as auth_exceptions
from google.api_core import exceptions

if_error_retriable = if_exception_type(
    exceptions.GatewayTimeout,
    exceptions.TooManyRequests,
    exceptions.ResourceExhausted,
    exceptions.ServiceUnavailable,
    exceptions.DeadlineExceeded,
    requests.exceptions.ConnectionError,  # The last three might be overkill
    requests.exceptions.ChunkedEncodingError,
    auth_exceptions.TransportError,
)

def _get_retry_arg():
    return Retry(
        predicate=if_error_retriable,
        initial=1.0,     # Initial delay
        maximum=4.0,     # Maximum delay
        multiplier=2.0,  # Delay's multiplier
        deadline=9.0,    # After 9 secs it won't try again and it will throw an exception
    )
from typing import Dict

async def predict_custom_trained_model_sample(
    project: str,
    endpoint_id: str,
    instance_dict: Dict,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    ...
    response = await client.predict(
        endpoint=endpoint,
        instances=instances,
        parameters=parameters,
        timeout=SOME_VALUE_IN_SEC,
        retry=_get_retry_arg(),
    )
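For a sense of scale: with initial=1.0, multiplier=2.0 and maximum=4.0, the delays grow roughly as 1s, 2s, 4s, 4s, ... (with jitter) until the 9-second deadline is exhausted, after which the retry wrapper gives up and raises.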

How to run BigQuery after Dataflow job completed successfully

I am trying to run a query in BigQuery right after a dataflow job completes successfully. I have defined 3 different functions in main.py.
The first one is for running the dataflow job. The second one checks the dataflow jobs status. And the last one runs the query in BigQuery.
The trouble is that the second function checks the Dataflow job status multiple times over a period of time and, even after the Dataflow job completes successfully, it does not stop checking the status.
Function deployment then fails with a 'function load attempt timed out' error.
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials
import os
import re
import config
from google.cloud import bigquery
import time

global flag

def trigger_job(gcs_path, body):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials, cache_discovery=False)
    request = service.projects().templates().launch(projectId=config.project_id, gcsPath=gcs_path, body=body)
    response = request.execute()

def get_job_status(location, flag):
    credentials = GoogleCredentials.get_application_default()
    dataflow = build('dataflow', 'v1b3', credentials=credentials, cache_discovery=False)
    result = dataflow.projects().jobs().list(projectId=config.project_id, location=location).execute()
    for job in result['jobs']:
        if re.findall(r'' + re.escape(config.job_name) + '', job['name']):
            while flag == 0:
                if job['currentState'] != "JOB_STATE_DONE":
                    print('NOT DONE')
                else:
                    flag = 1
                    print('DONE')
                    break

def bq(sql):
    client = bigquery.Client()
    query_job = client.query(sql, location='US')

gcs_path = config.gcs_path
body = config.body
trigger_job(gcs_path, body)
flag = 0
location = 'us-central1'
get_job_status(location, flag)
sql = """CREATE OR REPLACE TABLE 'table' AS SELECT * FROM 'table'"""
bq(sql)
Cloud Function timeout is set to 540 seconds but deployment fails in 3-4 minutes.
Any help is very appreciated.
It appears from the code snippet provided that your HTTP-triggered cloud function is not returning an HTTP response.
All HTTP-based cloud functions must return an HTTP response for proper termination. From the Google documentation, Ensure HTTP functions send an HTTP response (emphasis mine):
If your function is HTTP-triggered, remember to send an HTTP response,
as shown below. Failing to do so can result in your function executing
until timeout. If this occurs, you will be charged for the entire
timeout time. Timeouts may also cause unpredictable behavior or cold
starts on subsequent invocations, resulting in unpredictable behavior
or additional latency.
Thus, you must have a function in your main.py that returns some sort of value, ideally a value that can be coerced into a Flask HTTP response.
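As a minimal sketch (the entry-point name main is a hypothetical choice; it reuses the question's helper functions), the module-level calls would move into a function that ends with a return value Flask can coerce into a response:
def main(request):
    # HTTP-triggered Cloud Functions receive a Flask request object.
    trigger_job(config.gcs_path, config.body)
    get_job_status('us-central1', 0)
    bq("""CREATE OR REPLACE TABLE 'table' AS SELECT * FROM 'table'""")
    # Returning a (body, status) tuple terminates the function cleanly
    # instead of letting it run until timeout.
    return ('Dataflow job checked and BigQuery query submitted', 200)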

AWS Lambda Function Handler Error Says There Is Not Enough Values To Unpack

I have a Lambda function (created in Boto3) which is triggered by an SQS message. The Lambda function is meant to take the objects uploaded to S3 and process them with AWS Transcribe. The Lambda function is being triggered but I'm receiving the following error:
{
  "errorMessage": "Bad handler 'lambda_handler': not enough values to unpack (expected 2, got 1)",
  "errorType": "Runtime.MalformedHandlerName"
}
Function Logs
START RequestId: e6080a7f-b5b7-4995-a469-351c144bb93e Version: $LATEST
[ERROR] Runtime.MalformedHandlerName: Bad handler 'lambda_handler': not enough values to unpack (expected 2, got 1)
END RequestId: e6080a7f-b5b7-4995-a469-351c144bb93e
REPORT RequestId: e6080a7f-b5b7-4995-a469-351c144bb93e Duration: 1.64 ms Billed Duration: 2 ms Memory Size: 500 MB Max Memory Used: 50 MB
This is where I create my Lambda function in Boto3:
response = l.create_function(
    FunctionName=lambda_name,
    Runtime='python3.7',
    Role=lambda_role,
    Handler='lambda_handler',
    Code={
        'ZipFile': open('./transcribe.zip', 'rb').read()
    },
    Description='Function to parse content from SQS message and pass content to Transcribe.',
    Timeout=123,
    MemorySize=500,
    Publish=True,
    PackageType='Zip',
)
And this is what the Lambda function looks like in AWS console:
from __future__ import print_function
import time
import boto3

def lambda_handler(event, context):
    transcribe = boto3.client('transcribe')
    job_name = "testJob"
    job_uri = "https://my-bucket1729788.s3.eu-west-2.amazonaws.com/Audio3.wav"
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={'MediaFileUri': job_uri},
        MediaFormat='wav',
        LanguageCode='en-US'
    )
    while True:
        status = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        if status['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
            break
        print("Not ready yet...")
        time.sleep(5)
    print(status)
I'm really not sure where I'm going wrong as I don't find the documentation particularly helpful, so any help is appreciated. Thanks.
My function wasn't receiving either the context or the event when I tried running:
def lambda_handler(event, context):
    print("Event: {}".format(event))
    print("Context: {}".format(context))
I needed to change the 'Handler' setting on runtime settings from 'lambda_handler' to 'transcribe.lambda_handler', as transcribe.py was the name of the file.
There is no function named lambda_handler in your Lambda function's code. Please look at the documentation for Python Lambda functions.
I believe the issue is that your handler path is not set up correctly in the create function call. It should be something like <file_name>.<handler_name>; note the . in the path. What you have in the post is just lambda_handler, with no . in it.
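Concretely, since the file inside transcribe.zip is transcribe.py (as confirmed above), only the Handler argument of the create_function call needs to change:
response = l.create_function(
    FunctionName=lambda_name,
    Runtime='python3.7',
    Role=lambda_role,
    Handler='transcribe.lambda_handler',  # <file name>.<function name>
    Code={'ZipFile': open('./transcribe.zip', 'rb').read()},
    Description='Function to parse content from SQS message and pass content to Transcribe.',
    Timeout=123,
    MemorySize=500,
    Publish=True,
    PackageType='Zip',
)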

Create AWS sagemaker endpoint and delete the same using AWS lambda

Is there a way to create a SageMaker endpoint using AWS Lambda?
The maximum timeout limit for Lambda is 300 seconds, while my existing model takes 5-6 minutes to host.
One way is to combine Lambda and Step Functions with a Wait state to create the SageMaker endpoint.
In the Step Function, have tasks to:
1. Launch an AWS Lambda function to CreateEndpoint
import time
import boto3

client = boto3.client('sagemaker')
endpoint_name = 'DEMO-imageclassification-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_config_name = 'DEMO-imageclassification-epc--2018-06-18-17-02-44'
print(endpoint_name)

def lambda_handler(event, context):
    create_endpoint_response = client.create_endpoint(
        EndpointName=endpoint_name,
        EndpointConfigName=endpoint_config_name)
    print(create_endpoint_response['EndpointArn'])
    print('EndpointArn = {}'.format(create_endpoint_response['EndpointArn']))
    # get the status of the endpoint
    response = client.describe_endpoint(EndpointName=endpoint_name)
    status = response['EndpointStatus']
    print('EndpointStatus = {}'.format(status))
    return status
2. A Wait task to wait for X minutes
3. Another Lambda task to check the EndpointStatus and, depending on its value (OutOfService | Creating | Updating | RollingBack | InService | Deleting | Failed), either stop the job or continue polling
import time
import boto3

client = boto3.client('sagemaker')
endpoint_name = 'DEMO-imageclassification-2018-07-20-18-52-30'
endpoint_config_name = 'DEMO-imageclassification-epc--2018-06-18-17-02-44'
print(endpoint_name)

def lambda_handler(event, context):
    # print the current status of the endpoint
    endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
    status = endpoint_response['EndpointStatus']
    print('EndpointStatus = {}'.format(status))
    # wait until the status has changed
    client.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
    # re-check the status of the endpoint
    endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
    status = endpoint_response['EndpointStatus']
    print('Endpoint creation ended with EndpointStatus = {}'.format(status))
    if status != 'InService':
        raise Exception('Endpoint creation failed.')
    return status
Another approach is a combination of AWS Lambda functions and CloudWatch rules, which I think would be clumsy.
While rajesh's answer is closer to what the question asks for, I'd like to add that SageMaker now has batch transform jobs.
Instead of continuously hosting a machine, a batch transform job can predict on large batches at once without caring about latency. So if the intention behind the question is to deploy the model for a short time to predict on a fixed amount of batches, this might be the better approach.
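A minimal sketch of starting one with boto3 (the model name and S3 paths here are hypothetical placeholders):
import time
import boto3

sm = boto3.client('sagemaker')

# Run batch predictions against an already-created SageMaker model,
# without hosting a persistent endpoint.
sm.create_transform_job(
    TransformJobName='demo-batch-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()),
    ModelName='DEMO-imageclassification-model',          # hypothetical model name
    TransformInput={
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://my-bucket/batch-input/'   # hypothetical input prefix
            }
        }
    },
    TransformOutput={'S3OutputPath': 's3://my-bucket/batch-output/'},
    TransformResources={'InstanceType': 'ml.m5.xlarge', 'InstanceCount': 1}
)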

AWS StepFunctions Task state gets cancelled when tearing down a Google Cloud cluster

I am using AWS Step Functions to carry out several tasks on the Google Cloud side: creating a Dataproc cluster, submitting a task to it, and then tearing it down (each of which has its own Task state, as well as "poller" tasks that check when the jobs have finished in order to move on to the next Task).
The issue is that, for tearing down the cluster, the Task goes into the "cancelled" (gray) status instead of "in progress", followed by the poller Task. Once the cluster-deletion Lambda function executes the cluster deletion method, it should move on to the poller Task.
Here is a look at the cluster deletion lambda function:
from pprint import pprint
from google.cloud import storage
import googleapiclient.discovery
from rkstr8.cloud.google import GoogleCloudLambdaAuth
import time

def handler(event, context):
    creds = event['GCP_creds']
    GoogleCloudLambdaAuth(creds).configure_google_creds()
    dataproc = googleapiclient.discovery.build('dataproc', 'v1')
    project_id = event['gcp-administrative']['project']
    zone = event['gcp-administrative']['zone']
    try:
        region_as_list = zone.split('-')[:-1]
        region = '-'.join(region_as_list)
    except (AttributeError, IndexError, ValueError):
        raise ValueError('Invalid zone provided, please check your input.')
    cluster = event['dataproc-administrative']['cluster_name']
    print('Tearing down cluster...')
    request = dataproc.projects().regions().clusters().delete(
        projectId=project_id,
        region=region,
        clusterName=cluster)
    time.sleep(30)
    result = request.execute()
    return result
Here is what the relevant part of the state machine building code looks like:
dproc_submit_state = AsyncPoller(
    stats_path=DPROC_SUBMIT_POLLER_STATUS_PATH,
    async_task=Task(
        name=DPROC_SUBMIT,
        resource=DPROC_SUBMIT_ARN_VAR,
        input_path=DPROC_SUBMIT_INPUT_PATH,
        result_path=DPROC_SUBMIT_RESULT_PATH,
        next=DPROC_SUBMIT_POLLER
    ),
    pollr_task=Task(
        name=DPROC_SUBMIT_POLLER,
        resource=DPROC_SUBMIT_POLLER_ARN_VAR,
        input_path=DPROC_SUBMIT_RESULT_PATH,
        result_path=DPROC_SUBMIT_POLLER_STATUS_PATH
    ),
    faild_task=Fail(
        name='HailScriptFailed'
    ),
    succd_task=DPROC_DELETE,
    pollr_wait_time=self.conf["POLLER_WAIT_TIME"]
).states()

dproc_delete_state = AsyncPoller(
    stats_path=DPROC_DELETE_POLLER_STATUS_PATH,
    async_task=Task(
        name=DPROC_DELETE,
        resource=DPROC_DELETE_ARN_VAR,
        input_path=DPROC_DELETE_INPUT_PATH,
        result_path=DPROC_DELETE_RESULT_PATH,
        next=DPROC_DELETE_POLLER
    ),
    pollr_task=Task(
        name=DPROC_DELETE_POLLER,
        resource=DPROC_DELETE_POLLER_ARN_VAR,
        input_path=DPROC_DELETE_RESULT_PATH,
        result_path=DPROC_DELETE_POLLER_STATUS_PATH
    ),
    faild_task=Fail(
        name='ClusterDeleteFailed'
    ),
    succd_task='PipelineSucceeded',
    pollr_wait_time=self.conf["POLLER_WAIT_TIME"]
).states()
Here is what the state machine looks like: [state machine diagram not included]
Why are you sleeping for 30 seconds between creating a request and executing it?
The default timeout for lambda is 3 seconds. My guess is that your lambda is just timing out.
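If that is the case, raising the function's timeout (or dropping the sleep and letting the poller task do the waiting) should fix it. For example, assuming the function is managed with boto3 (the function name here is a hypothetical placeholder):
import boto3

lam = boto3.client('lambda')
# 60 seconds comfortably covers the 30-second sleep plus the delete call.
lam.update_function_configuration(
    FunctionName='cluster-delete-handler',  # hypothetical function name
    Timeout=60
)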