How to delete multiple keys from DigitalOcean Spaces object storage? - digital-ocean

I am trying to delete multiple objects at once using delete_objects, but I am getting an error. I haven't found any solution to this issue.
client = boto3.client("s3", **config)
response = client.delete_objects(
Bucket=BUCKET,
Delete={
'Objects': [
{
'Key': 'asdasd1.png',
},
{
'Key': 'asdasd1.png',
}
]
},
RequestPayer='requester'
)
I get an error like this:
An error occurred (NotImplemented) when calling the DeleteObjects operation: Unknown
INFO: 127.0.0.1:46958 - "DELETE /image/ HTTP/1.1" 500 Internal Server Error

Maybe this can help you; it's another way to do it:
def cleanup_from_s3(bucket, remote_path):
    # list_s3 and s3_client are assumed to be defined elsewhere (list_s3 returns the
    # object listing under remote_path, s3_client is a boto3 S3 client).
    s3_contents = list_s3(bucket, remote_path)
    if s3_contents == []:
        return
    for s3_content in s3_contents:
        filename = s3_content["Key"]
        s3_client.delete_object(Bucket=bucket, Key="{0}/{1}".format(remote_path, filename))
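Another thing worth checking, since Spaces does not implement every S3 feature: the NotImplemented error in the question often comes from passing a parameter the service does not support, such as RequestPayer. A minimal sketch of the original batch delete without it (reusing the question's config and BUCKET, which are assumed to point at your Space):

import boto3

# `config` is assumed to hold your Spaces endpoint_url, region_name and keys, as in the question.
client = boto3.client("s3", **config)

# Spaces may reject the requester-pays parameter, so RequestPayer is omitted here.
response = client.delete_objects(
    Bucket=BUCKET,
    Delete={
        "Objects": [
            {"Key": "asdasd1.png"},
            {"Key": "asdasd2.png"},
        ],
        "Quiet": True,  # only report keys that failed to delete
    },
)
print(response.get("Errors", []))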

Related

S3 AWS Invalid according to Policy: Policy Condition failed: ["eq", "$content-type", "audio/wav"]

I'm trying to upload objects to S3 using presigned URLs.
Here is my Python code (adapted from this post: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html):
import json
import requests

URL = "https://exmaple.server-give-me-presigned-URLs"
r = requests.get(url = URL)
response = r.json()
print("debug type payload: ", type(response))
print("debug key: ", response.keys())
print("debug value: ", response["jsonresult"])
print("debug ", response["signedupload"]["fields"])
print("debug ", response["signedupload"]["url"])
print("url type ", type(response["signedupload"]["url"]))
with open("test-wav-file.wav", 'rb') as f:
    files = {'file': ("/test-wav-file.wav", f)}
    http_response = requests.post(response["signedupload"]["url"], data=response["signedupload"]["fields"], files=files)
print("File upload HTTP: ", http_response.text)
I got this error when running it:
('File upload HTTP: ', u'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["eq", "$content-type", "audio/wav"]</Message><RequestId>1TRNC1XPX4MCJPN6</RequestId><HostId>h/YdSUDuPeZhUU1TqAu1BZrCfyXKiNTYvisbvp3iaLcoLoriQPREnJI1LZp69hDE4kOWYSVog7A=</HostId></Error>')
But when I change the content type via the header headers = {'Content-type': 'audio/wav'}, I get this error:
('File upload HTTP: ', u'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>PreconditionFailed</Code><Message>At least one of the pre-conditions you specified did not hold</Message><Condition>Bucket POST must be of the enclosure-type multipart/form-data</Condition><RequestId>S86QYFG0AYTY26WQ</RequestId><HostId>lvxNkydNcuiwE/UVZY2xRMBoqk/BSUn7qathgXWSu3Fii8ZlVlKDkEjOotw4fmU3bfFgjYbsspE=</HostId></Error>')
So is there any content type that satisfies all the conditions? Please help me.
Many thanks.
I solved my problem:
Just add the two fields "content-type": "audio/wav" and "x-amz-meta-tag": "" to the payload and it works :D
x-amz-meta-tag can be any value.
Hope this helps someone like me.
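For reference, a sketch of what that fix looks like applied to the upload code above: the two extra fields are merged into the form fields returned by the presigning server (the signedupload structure follows the question's response shape), not sent as HTTP headers.

import requests

# Fields returned by the presigning server, as in the question's response["signedupload"].
fields = dict(response["signedupload"]["fields"])

# The policy in this case requires these two fields, so add them to the form data;
# "x-amz-meta-tag" can hold any value.
fields["content-type"] = "audio/wav"
fields["x-amz-meta-tag"] = ""

with open("test-wav-file.wav", "rb") as f:
    files = {"file": ("test-wav-file.wav", f)}
    http_response = requests.post(response["signedupload"]["url"], data=fields, files=files)

print("File upload HTTP: ", http_response.status_code, http_response.text)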

python, google cloud platform: unable to overwrite a file from google bucket: CRC32 does not match

I am using the Python 3 client to connect to Google buckets and am trying to do the following:
download 'my_rules_file.yaml'
modify the yaml file
overwrite the file
Here is the code that I used:
from google.cloud import storage
import yaml
client = storage.Client()
bucket = client.get_bucket('bucket_name')
blob = bucket.blob('my_rules_file.yaml')
yaml_file = blob.download_as_string()
doc = yaml.load(yaml_file, Loader=yaml.FullLoader)
doc['email'].clear()
doc['email'].extend(["test@gmail.com"])
yaml_file = yaml.dump(doc)
blob.upload_from_string(yaml_file, content_type="application/octet-stream")
This is the error I get from the last line (the upload):
BadRequest: 400 POST https://storage.googleapis.com/upload/storage/v1/b/fc-sandbox-datastore/o?uploadType=multipart: {
  "error": {
    "code": 400,
    "message": "Provided CRC32C \"YXQoSg==\" doesn't match calculated CRC32C \"EyDHsA==\".",
    "errors": [
      {
        "message": "Provided CRC32C \"YXQoSg==\" doesn't match calculated CRC32C \"EyDHsA==\".",
        "domain": "global",
        "reason": "invalid"
      },
      {
        "message": "Provided MD5 hash \"G/rQwQii9moEvc3ZDqW2qQ==\" doesn't match calculated MD5 hash \"GqyZzuvv6yE57q1bLg8HAg==\".",
        "domain": "global",
        "reason": "invalid"
      }
    ]
  }
}
: ('Request failed with status code', 400, 'Expected one of', <HTTPStatus.OK: 200>)
Why is this happening? It seems to happen only for .yaml files.
The reason for your error is that you are trying to use the same blob object for both downloading and uploading. This will not work; you need two separate instances. You can find some good examples here: Python google.cloud.storage.Blob() Examples.
You should use a separate blob instance to handle the upload; you are trying with only one.
.....
blob = bucket.blob('my_rules_file.yaml')
yaml_file = blob.download_as_string()
.....
The second instance is needed here:
....
blob.upload_from_string(yaml_file, content_type="application/octet-stream")
...
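Putting it together, a minimal sketch of the round trip with a separate blob instance for the upload (bucket and object names taken from the question), so the upload does not reuse any checksum state from the download:

from google.cloud import storage
import yaml

client = storage.Client()
bucket = client.get_bucket('bucket_name')

# First instance: download only.
download_blob = bucket.blob('my_rules_file.yaml')
doc = yaml.load(download_blob.download_as_string(), Loader=yaml.FullLoader)
doc['email'].clear()
doc['email'].extend(["test@gmail.com"])

# Second instance: upload only, so no CRC32C/MD5 from the download is carried over.
upload_blob = bucket.blob('my_rules_file.yaml')
upload_blob.upload_from_string(yaml.dump(doc), content_type="application/octet-stream")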

When using Lambda to generate elbv2 attributes (the name specifically), receiving an error from Lambda that the name is longer than 32 characters

I am building a CloudFormation template that uses a Lambda function to generate the name of the load balancer built by the template.
When the function runs, it fails with the following error:
Failed to validate attributes of ELB arn:aws-us-gov:elasticloadbalancing:us-gov-west-1:273838691273:loadbalancer/app/dev-fu-WALB-18VHO2DJ4MHK/c69c48fd3464de01. An error occurred (ValidationError) when calling the DescribeLoadBalancers operation: The load balancer name 'arn:aws-us-gov:elasticloadbalancing:us-gov-west-1:273838691273:loadbalancer/app/dev-fu-WALB-18VHO2DJ4MHK/c69c48fd3464de01' cannot be longer than '32' characters.
It is obviously pulling the ARN rather than the name of the elbv2.
I opened a ticket with AWS to no avail, and also with the company that wrote the script... same results.
I have attached the script and any help is greatly appreciated.
import cfn_resource
import boto3
import boto3.session
import logging

logger = logging.getLogger()
handler = cfn_resource.Resource()

# Retrieves DNSName and source security group name for the specified ELB
@handler.create
def get_elb_attribtes(event, context):
    properties = event['ResourceProperties']
    elb_name = properties['PORALBName']
    elb_template = properties['PORALBTemplate']
    elb_subnets = properties['PORALBSubnets']
    try:
        client = boto3.client('elbv2')
        elb = client.describe_load_balancers(
            Names=[
                elb_name
            ]
        )['LoadBalancers'][0]
        for az in elb['AvailabilityZones']:
            if not az['SubnetId'] in elb_subnets:
                raise Exception("ELB does not include VPC subnet '" + az['SubnetId'] + "'.")
        target_groups = client.describe_target_groups(
            LoadBalancerArn=elb['LoadBalancerArn']
        )['TargetGroups']
        target_group_arns = []
        for target_group in target_groups:
            target_group_arns.append(target_group['TargetGroupArn'])
        if elb_template == 'geoevent':
            if elb['Type'] != 'network':
                raise Exception("GeoEvent Server requires network ElasticLoadBalancer V2.")
        response_data = {}
        response_data['DNSName'] = elb['DNSName']
        response_data['TargetGroupARNs'] = target_group_arns
        msg = 'ELB {} found.'.format(elb_name)
        logger.info(msg)
        return {
            'Status': 'SUCCESS',
            'Reason': msg,
            'PhysicalResourceId': context.log_stream_name,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId'],
            'Data': response_data
        }
    except Exception, e:
        error_msg = 'Failed to validate attributes of ELB {}. {}'.format(elb_name, e)
        logger.error(error_msg)
        return {
            'Status': 'FAILED',
            'Reason': error_msg,
            'PhysicalResourceId': context.log_stream_name,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId']
        }
The error says:
An error occurred (ValidationError) when calling the DescribeLoadBalancers operation
So, looking at where it calls DescribeLoadBalancers:
elb = client.describe_load_balancers(
    Names=[
        elb_name
    ]
)['LoadBalancers'][0]
The error also said:
The load balancer name ... cannot be longer than '32' characters.
The name comes from:
properties = event['ResourceProperties']
elb_name = properties['PORALBName']
So, the information is being passed into the Lambda function via event, which comes from whatever is triggering the function. You'll need to find out what is triggering the function and discover what information it is actually sending. Your problem is outside of the code listed.
Other options
In your code, you can send event to the debug logs (e.g. print(event)) and see whether the ELB name is being passed in a different field.
Alternatively, you could call describe_load_balancers without a Name filter to retrieve a list of all load balancers, then use the ARN (that you have) to find the load balancer of interest. Simply loop through all the results until you find the one that matches the ARN you have. Then, continue as normal.
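A rough sketch of that second approach, assuming the value arriving in PORALBName is actually the load balancer ARN (as the error message suggests):

import boto3

def find_load_balancer_by_arn(elb_arn):
    """Return the load balancer whose ARN matches, scanning all pages of results."""
    client = boto3.client('elbv2')
    paginator = client.get_paginator('describe_load_balancers')
    for page in paginator.paginate():
        for lb in page['LoadBalancers']:
            if lb['LoadBalancerArn'] == elb_arn:
                return lb
    raise Exception("No load balancer found with ARN '{}'.".format(elb_arn))

# In the handler above, this would replace the describe_load_balancers(Names=[...]) call:
# elb = find_load_balancer_by_arn(properties['PORALBName'])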

How to capture lambda code level error in cloudwatch?

I am trying to set up a CloudWatch alarm for Lambda execution. I am able to set up ALARM and OK states for Errors, but whenever there is a syntax error in my code I get an INSUFFICIENT_DATA alarm.
I added my code below:
import json
import sys

print "Buckle your seat belt even if you are in back seat"

def lambda_handler(event, context):
    try:
        print( "value 1 = " + event['key'])
        print( "value 2 = " + event['key2'])
        print( "value 3 = " + event['key3'])
        return event['key1']
    except Exception as e:
        print sys.exc_info()[0]
        raise
Test Data:
{ "key3": "value3", "key2": "value2", "key1": "value1" }
Here is the error I am generating:
{
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      6,
      "lambda_handler",
      "print( \"value 1 = \" + event['key'])"
    ]
  ],
  "errorType": "KeyError",
  "errorMessage": "'key'"
}
I can create a metric filter for KeyError and set my alarm on it, but I want one single alarm for all errors, whether system level (like Lambda execution errors) or code level (like KeyError).
Can anyone please help me capture syntax errors or data errors in one single CloudWatch alarm?
Thanks
If you want to catch the error in the logs, you can use the logger and call logger.debug(e) or logger.error(e), where e is the exception that was raised and caught. The exception will then be available in the logs, and you will be able to set up an alarm on top of that.
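For example, a minimal sketch of a handler that logs every caught exception with a fixed ERROR level, so a single metric filter (e.g. on the pattern "ERROR") can drive one alarm; the metric filter and alarm themselves are configured outside this code:

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    try:
        return event['key1']
    except Exception as e:
        # Every caught error lands in CloudWatch Logs at ERROR level, so one
        # metric filter matching that level can feed a single alarm.
        logger.error("Unhandled error in handler: %s", e, exc_info=True)
        raise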

Getting a 403 while creating a user, followed every step right so far

It looks like I have followed every step (given that the documentation is extremely lacking, the steps are sourced from multiple places). This is my code:
def create_user(cred_file_location, user_first_name, user_last_name, user_email):
    cred_data = json.loads(open(cred_file_location).read())
    access_email = cred_data['client_email']
    private_key = cred_data['private_key']
    # I have tried with the scope as a single string, and also
    # as an array of a single string. Neither worked
    credentials = SignedJwtAssertionCredentials(access_email, private_key, ["https://www.googleapis.com/auth/admin.directory.user"])
    http = Http()
    http = credentials.authorize(http)
    service = build('admin', 'directory_v1', http=http)
    users = service.users()
    userinfo = {
        'primaryEmail': user_email,
        'name': {
            'givenName': user_first_name,
            'familyName': user_last_name
        },
        'password': ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(80))
    }
    users.insert(body=userinfo).execute()
I downloaded the JSON key right, and it is loading it correctly. This is my JSON key (I am redacting certain parts of identifying information, I have kept some of it there to show that I am loading the correct info):
{
  "type": "service_account",
  "private_key_id": "c6ae56a9cb267fe<<redacted>>",
  "private_key": "<<redacted>>",
  "client_email": "account-1@<<redacted>>.iam.gserviceaccount.com",
  "client_id": "10931536<<redacted>>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/account-1%40<<redacted>>.iam.gserviceaccount.com"
}
This is how these credentials look in the developer console, and I have also enabled sitewide access for the service account (screenshots omitted).
I have no clue as to why I am still getting these 403s:
File "/usr/lib/python2.7/site-packages/googleapiclient/http.py", line 729, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/admin/directory/v1/users?alt=json returned "Not Authorized to access this resource/api">
Any help is greatly appreciated.
Finally, in some random Stack Overflow answer, I found the solution: I have to impersonate (sub as) a user to execute any request. Essentially:
credentials = SignedJwtAssertionCredentials(
    access_email,
    private_key,
    ["https://www.googleapis.com/auth/admin.directory.user"])
changes to:
credentials = SignedJwtAssertionCredentials(
    access_email,
    private_key,
    ["https://www.googleapis.com/auth/admin.directory.user"],
    sub="user@example.org")
All requests will now be made as if they were made on behalf of user@example.org.
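Note that SignedJwtAssertionCredentials comes from the old oauth2client library. If you are on the newer google-auth library instead, the same delegation looks roughly like this (a sketch; the key file path and delegated user are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# "credentials.json" is a placeholder for your downloaded service account key file.
credentials = service_account.Credentials.from_service_account_file(
    "credentials.json", scopes=SCOPES
)
# Impersonate an admin user in the domain; requests run on that user's behalf.
delegated = credentials.with_subject("user@example.org")

service = build('admin', 'directory_v1', credentials=delegated)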