How to configure lifecycle for S3 incomplete multipart upload - C++

I've observed that for a failed multipart upload (e.g. a crash or a stop in the middle), the partially uploaded object still exists in storage.
I want to configure lifecycle rules for these incomplete uploads via either MinIO or the S3 C++ SDK.
I want to configure something like:
{
    "ID": "bucket-lifecycle-incomplete-chunk-upload",
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 1
    },
    "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
    }
}
My C++ code looks like the following:
Aws::S3::Model::AbortIncompleteMultipartUpload incomplete_upload_config;
incomplete_upload_config.SetDaysAfterInitiation(days);
Aws::S3::Model::NoncurrentVersionExpiration version_expire;
version_expire.SetNoncurrentDays(1);
auto status = Aws::S3::Model::ExpirationStatus::Enabled;
Aws::S3::Model::LifecycleRule rule;
rule.SetID("bucket-lifecycle-incomplete-chunk-upload");
rule.SetStatus(std::move(status));
rule.SetNoncurrentVersionExpiration(std::move(version_expire));
rule.SetAbortIncompleteMultipartUpload(std::move(incomplete_upload_config));
Aws::S3::Model::BucketLifecycleConfiguration bkt_config;
bkt_config.AddRules(std::move(rule));
Aws::S3::Model::PutBucketLifecycleConfigurationRequest config_req{};
config_req.SetBucket(bucket);
config_req.SetLifecycleConfiguration(std::move(bkt_config));
auto outcome = client->PutBucketLifecycleConfiguration(config_req);
And I get the following result:
Received HTTP return code: 400; Failed to update config for bucket <bucket-name> because MalformedXML: Unable to parse ExceptionName: MalformedXML Message:
The pain point is that the error does not say which additional or missing fields cause it.
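A likely culprit (an assumption, not a confirmed diagnosis): S3, and as far as I can tell MinIO as well, rejects a lifecycle rule that carries neither a Filter nor a legacy Prefix element, and the code above never sets one; in the C++ SDK the counterparts would be Aws::S3::Model::LifecycleRuleFilter and rule.SetFilter(...). As a cross-check that the rest of the rule is well-formed, here is a minimal boto3 (Python) sketch of the same rule with an empty-prefix Filter added; the bucket name and endpoint are placeholders:

import boto3

# Placeholder endpoint/bucket; point endpoint_url at MinIO, or omit it for AWS S3.
# Credentials are taken from the usual environment/config sources.
s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "bucket-lifecycle-incomplete-chunk-upload",
                "Status": "Enabled",
                # Without a Filter (or legacy Prefix) the service rejects the rule as MalformedXML.
                "Filter": {"Prefix": ""},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)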

Related

Getting 400 Bad Request error while trying to move Azure resources via Postman

I am trying to move resources from one resource group to another via Postman.
I got the access token successfully using the parameters below:
https://login.microsoftonline.com/mytenantid/oauth2/v2.0/token
client_id='myclientid'
&scope=https://management.azure.com/.default
&grant_type=client_credentials
&client_secret='appclientsecret'
I am using a request like the one below:
POST
https://management.azure.com/subscriptions/mysubscriptionid/resourceGroups/resourcegroupname/moveResources?api-version=2021-04-01
Request Body
{
    "resources" : "/subscriptions/mysubscriptionid/resourceGroups/resourcegroupname/providers/Microsoft.KeyVault/vaults/keyvaultname",
    "targetResourceGroup" : "/subscriptions/mysubscriptionid/resourceGroups/targetresourcegroupname"
}
But I got an error like the one below:
{
    "error": {
        "code": "UnsupportedMediaType",
        "message": "The content media type 'text/plain' is not supported. Only 'application/json' is supported."
    }
}
After changing the content type to JSON, I am getting another error:
{
    "error": {
        "code": "InvalidRequestContent",
        "message": "The request content was invalid and could not be deserialized: 'Error converting value \"/subscriptions/mysubscriptionid/resourceGroups/resourcegroupname/providers/Microsoft.KeyVault/vaults/keyvaultname\" to type 'System.String[]'. Path 'resources', line 2, position 143.'"
    }
}
Can anyone help me to resolve this error?
I tried to reproduce the same in my environment and was able to move the resources successfully.
Make sure your request body looks something like this:
{
    "resources" : [
        "/subscriptions/XXXXXXX/resourceGroups/Test/providers/Microsoft.KeyVault/vaults/testkeyvault549",
        "/subscriptions/XXXXXXXXX/resourceGroups/Test/providers/Microsoft.Storage/storageAccounts/sristackdem01"
    ],
    "targetResourceGroup" : "/subscriptions/XXXXXXX/resourceGroups/Demo"
}
When I executed the same query with this body, the move succeeded. Please note that the resources parameter expects a list of resource IDs in [ ], so make sure to add the brackets. When I missed the [ ], I got the same InvalidRequestContent error as you.
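For completeness, the same request can be sent from Python with the requests library; this is only a sketch using the placeholder IDs from the question, and passing the body via json= also takes care of the application/json content type from the first error:

import requests

token = "..."  # bearer token obtained from the client_credentials flow above
subscription = "mysubscriptionid"
source_rg = "resourcegroupname"

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{source_rg}/moveResources?api-version=2021-04-01"
)
body = {
    # resources must be a JSON array, even for a single resource ID
    "resources": [
        f"/subscriptions/{subscription}/resourceGroups/{source_rg}"
        "/providers/Microsoft.KeyVault/vaults/keyvaultname"
    ],
    "targetResourceGroup": f"/subscriptions/{subscription}/resourceGroups/targetresourcegroupname",
}

# json= serializes the body and sets Content-Type: application/json automatically
response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)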
Reference:
Validate Azure Resource Move with Postman - Apostolidis Cloud Corner

Can't Send S3 Bucket Notification to SQS

I'm trying to publish a bucket notification to an SQS queue when an object is created in S3. I have read that I might be mixing boto3 usage (i.e. client vs. resource), but neither approach works for me.
s3 = boto3.client('s3')
s3.create_bucket(
    Bucket='my-bucket',
    CreateBucketConfiguration={
        'LocationConstraint': 'eu-west-2',
    },
)
bucket_notification = s3.BucketNotification('my-bucket')
s3_notification_config = {
    'QueueConfigurations': [
        {
            'QueueArn': 'arn:aws:sqs:location:number:number',
            'Events': [
                's3:ObjectCreated:*',
            ],
        },
    ],
}
response = bucket_notification.put(NotificationConfiguration=s3_notification_config)
This gives me the following error:
AttributeError: 'S3' object has no attribute 'BucketNotification'
When I change the first line of code to be:
s3 = boto3.resource('s3')
I get the following error:
An error occurred (InvalidArgument) when calling the PutBucketNotificationConfiguration operation: Unable to validate the following destination configurations
I understand the first error, but I'm not sure how to work around it: switching from client to resource just leads to the second error.
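For reference, a minimal sketch of the client-only path, using put_bucket_notification_configuration instead of the resource-level BucketNotification; the queue ARN is a placeholder, and the assumption here (to be verified) is that the "Unable to validate the following destination configurations" error comes from the SQS queue's access policy not allowing S3 to send messages:

import boto3

s3 = boto3.client('s3')

# The client API exposes the notification call directly; no resource object is needed.
# Placeholder ARN: the real queue must be in the same region as the bucket, and its
# access policy must allow the S3 service principal to call sqs:SendMessage, otherwise
# S3 refuses the configuration with "Unable to validate the following destination
# configurations".
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',
    NotificationConfiguration={
        'QueueConfigurations': [
            {
                'QueueArn': 'arn:aws:sqs:eu-west-2:123456789012:my-queue',
                'Events': ['s3:ObjectCreated:*'],
            },
        ],
    },
)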

SourceImage as Google storage bucket link while inserting new image

What is the proper format for providing a Google Storage image link when calling images.insert?
The image file is located in a GCS bucket as *.tar.gz.
I am creating the new image with the Google client library in Python via images.insert(body=body, project=project).
My body config looks like:
body = { "name":"test", "sourceImage":"https://console.cloud.google.com/storage/browser/[BUCKET]/[IMAGEFILE]",}
The procedure fails with the following error:
googleapiclient.errors.HttpError: <HttpError 400 when requesting returned "Invalid value for field 'resource.sourceImage': 'https://storage.cloud.google.com/[BUCKET]/[IMAGEFILE]'. The URL is malformed.">
In order to access an image from Google storage, please use “rawDisk” instead. Here is an example:
"name": "image-1",
"rawDisk": {
"source": "https://storage.googleapis.com/[bucket]/[imagefile]"
}
Where [bucket] is the name of your bucket and [imagefile] = *.tar.gz
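Putting it together, a minimal sketch of the insert call with the Python client library; the project name is a placeholder, the bucket/file placeholders are the ones from the question, and application-default credentials are assumed:

import googleapiclient.discovery

# Assumes application-default credentials are available in the environment.
compute = googleapiclient.discovery.build('compute', 'v1')

body = {
    "name": "test",
    # Point rawDisk.source at the tarball in Cloud Storage instead of sourceImage.
    "rawDisk": {
        "source": "https://storage.googleapis.com/[BUCKET]/[IMAGEFILE]"
    },
}

operation = compute.images().insert(project="my-project", body=body).execute()
print(operation["name"])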

Digital Ocean Spaces | Add expire date for files with s3cmd

I am trying to add an expiry in days to a file and a bucket, but I have this problem:
sudo s3cmd expire s3://<my-bucket>/ --expiry-days=3 expiry-prefix=backup
ERROR: Error parsing xml: syntax error: line 1, column 0
ERROR: not found
ERROR: S3 error: 404 (Not Found)
and this:
sudo s3cmd expire s3://<my-bucket>/<folder>/<file> --expiry-day=3
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3:////'
How do I add expiry days in DO Spaces for a folder or file using s3cmd?
Consider configuring Bucket's Lifecycle Rules
Lifecycle rules can be used to perform different actions on objects in a Space over the course of their "life." For example, a Space may be configured so that objects in it expire and are automatically deleted after a certain length of time.
In order to configure new lifecycle rules, send a PUT request to ${BUCKET}.${REGION}.digitaloceanspaces.com/?lifecycle
The body of the request should include an XML element named LifecycleConfiguration containing a list of Rule objects.
https://developers.digitalocean.com/documentation/spaces/#get-bucket-lifecycle
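If hand-crafting that XML is inconvenient, the same lifecycle configuration can be pushed through any S3-compatible SDK; here is a small boto3 (Python) sketch, assuming the fra1 region, placeholder credentials, and a placeholder bucket name (the Node.js script in the next answer does the equivalent with the AWS JS SDK):

import boto3

# Placeholder credentials and bucket; the endpoint must match your Space's region.
session = boto3.session.Session()
s3 = session.client(
    's3',
    region_name='fra1',
    endpoint_url='https://fra1.digitaloceanspaces.com',
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
)

s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'autodelete_rule',
                'Status': 'Enabled',
                'Prefix': 'backup',          # apply only to keys under this prefix
                'Expiration': {'Days': 3},   # delete matching objects after 3 days
            },
        ],
    },
)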
The expire option is not implemented on Digital Ocean Spaces.
Thanks to Vitalii's answer for pointing to the API.
However, the raw API isn't really easy to use, so I've done it via a Node.js script.
First of all, generate your API keys here: https://cloud.digitalocean.com/account/api/tokens
and put them in the ~/.aws/credentials file (according to the docs):
[default]
aws_access_key_id=your_access_key
aws_secret_access_key=your_secret_key
Now create an empty Node.js project, run npm install aws-sdk, and use the following script:
const aws = require('aws-sdk');
// Replace with your region endpoint, nyc1.digitaloceanspaces.com for example
const spacesEndpoint = new aws.Endpoint('fra1.digitaloceanspaces.com');
// Replace with your bucket name
const bucketName = 'myHeckingBucket';
const s3 = new aws.S3({endpoint: spacesEndpoint});
s3.putBucketLifecycleConfiguration({
    Bucket: bucketName,
    LifecycleConfiguration: {
        Rules: [{
            ID: "autodelete_rule",
            Expiration: {Days: 30},
            Status: "Enabled",
            Prefix: '/', // Unlike AWS in DO this parameter is required
        }]
    }
}, function (error, data) {
    if (error)
        console.error(error);
    else
        console.log("Successfully modified bucket lifecycle!");
});

"Request contains invalid agrument" when updating a log sink

I'm attempting to update an existing sink using the (Python) Google Stackdriver Logging API, and am getting an error that I can't track down.
The API explorer for the method in question is here, to which I'm supplying projects/my_project/sinks/my_sink_name as the sinkName, and the following for the Request Body:
{
    "name": "audit_logs",
    "destination": "bigquery.googleapis.com/projects/my_project/datasets/destination_dataset",
    "filter": "resource.type=\"bigquery_resource\""
}
When submitting, I get the following error:
400 Bad Request
{
    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT"
    }
}
...which doesn't specify which argument is invalid, and I have tried several variations without any success.
Additional info: this request is based on one generated by the Python API. I have also tried specifying the full path of the sink in name (which is what the Python API generates, and which seems contrary to the documentation), to no avail.
Can you try the UpdateSink with uniqueWriterIdentity set to true?
From https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.sinks/update:
"It is an error if the old value is true and the new value is set to false or defaulted to false."