Retrieve deleted AWS S3 file by version

I am trying to retrieve a deleted file from s3, without restoring it. I can do it through the console (clicking Download), but I need to do it programmatically.
The following call lists the version I need:
s3.list_object_versions(
    Bucket="...",
    KeyMarker="/.../part-00000-5ceb032b-c918-47df-a2ad-f02f3790077a-c000.csv",
    VersionIdMarker="A1GxocexjsirkzKfo47lvQ0r7ythwCWM",
    MaxKeys=1
)
However, s3.get_object() with the same parameters returns "ClientError: An error occurred (NoSuchVersion) when calling the GetObject operation: The specified version does not exist."
What is the proper way of retrieving a specific version of a deleted file?

Based on the comments: the issue was caused by a leading / in the key. Keys and prefixes do not start with a /. Thus it should be:
KeyMarker=".../part-00000-5ceb032b-c918-47df-a2ad-f02f3790077a-c000.csv"


AWS SecretManager Read and Write concurrency

If I am writing a new secret value to a secret and a read is performed at the same time, does the read call wait for the write call to complete? Or will the read retrieve some invalid or intermediate value of the keys stored inside the secret?
By default, the GetSecretValue API returns the version of the secret that has the AWSCURRENT stage. It also allows you to fetch older versions of a secret by specifying the VersionId. Versions are immutable: if you call PutSecretValue, you create a new version.
You won't get partial versions here - the label AWSCURRENT will only be switched to the new version once the update is complete. Everything else would result in a terrible user experience.
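A minimal sketch of both reads with boto3 (the secret name is a placeholder):

import boto3

sm = boto3.client("secretsmanager")

# Default read: returns the version currently labeled AWSCURRENT.
current = sm.get_secret_value(SecretId="my-secret")

# Pin an immutable version explicitly by id
# (or use VersionStage="AWSPREVIOUS" for the prior one).
pinned = sm.get_secret_value(SecretId="my-secret", VersionId=current["VersionId"])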

"checksum must be specified in PUT API, when the resource already exists"

I am getting the following error while building a bot in AWS Lex:
"checksum must be specified in PUT API, when the resource already exists"
Can someone tell me what it means and how to fix it?
I was getting the same error when building my bot in the console. I found the answer here.
Refresh the page and then set the version of the bot to Latest.
The documentation states that you have to provide the checksum of a bot that already exists if you are trying to update it: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/LexModelBuildingService.html#putBot-property
"checksum — (String)
Identifies a specific revision of the $LATEST version.
When you create a new bot, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.
When you want to update a bot, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don't specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception."
That's the aws-sdk for JavaScript docs, but the same concept applies to any SDK as well as the AWS CLI.
This requires calling get-bot first, which will return the checksum of the bot among other data. Save that checksum and pass it in the params when you call put-bot.
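Sketched with boto3 (the bot name is a placeholder; note that put_bot replaces the whole definition, so in practice you re-submit all the fields you want to keep, not just the checksum):

import boto3

lex = boto3.client("lex-models")

# Fetch the current $LATEST revision to obtain its checksum.
bot = lex.get_bot(name="MyBot", versionOrAlias="$LATEST")

# Re-submit the definition, passing the checksum to update the existing bot.
lex.put_bot(
    name="MyBot",
    locale=bot["locale"],
    childDirected=bot["childDirected"],
    intents=bot.get("intents", []),
    checksum=bot["checksum"],
)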
I would recommend using the tutorials here: https://docs.aws.amazon.com/lex/latest/dg/gs-console.html
That tutorial demonstrates using the AWS CLI, but the same concepts can be abstracted to use any SDK you desire.
Had the same problem.
I guess once you have published a bot, you can no longer modify or build it.
Create another bot.

Unable to deactivate Object Lock on an Amazon S3 bucket using boto3

I'm trying to use put_object_lock_configuration() API call to disable object locking on an Amazon S3 bucket using python boto3.
This is how I use it:
response = s3.put_object_lock_configuration(
    Bucket=bucket_name,
    ObjectLockConfiguration={'ObjectLockEnabled': 'Disabled'}
)
I always get an exception with the following error:
botocore.exceptions.ClientError: An error occurred (MalformedXML) when calling the PutObjectLockConfiguration operation: The XML you provided was not well-formed or did not validate against our published schema
I suspect I am missing the two parameters 'Token' and 'ContentMD5'. Does anyone know how to get these values?
The only allowed value of 'ObjectLockEnabled' is 'Enabled'. My intention was to disable object lock, but this is not possible, because object lock is defined at bucket creation time and cannot be changed afterward. However, I can provide an empty retention rule, and the retention mode becomes 'None', which is essentially no object lock.
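For reference, a sketch of the only shape the bucket-level call accepts ('Enabled' plus an optional default retention rule), which is why 'Disabled' fails schema validation; the retention values here are illustrative:

response = s3.put_object_lock_configuration(
    Bucket=bucket_name,
    ObjectLockConfiguration={
        'ObjectLockEnabled': 'Enabled',  # the only valid value
        'Rule': {
            'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 1}
        }
    }
)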
Here is the boto3 code for a blank retention rule; the precondition is that the object was locked with mode=GOVERNANCE in the first place.
client.put_object_retention(
    Bucket=bucket_name,
    Key=object_key,
    Retention={},
    BypassGovernanceRetention=True
)

How to get a shared access signature for an Azure container in C++

I want to use the C++ Azure API to generate a Shared Access Signature for a container on Azure and get the access string, but I cannot find any good example; almost all examples are in C#. I only found this: https://learn.microsoft.com/en-us/azure/storage/files/storage-c-plus-plus-how-to-use-files
Here is what I did,
// Retrieve a reference to a previously created container.
azure::storage::cloud_blob_container container = blob_client.get_container_reference(s2ws(eventID));
// Create the container if it doesn't already exist.
container.create_if_not_exists();
// Get the current permissions for the event.
auto blobPermissions = container.download_permissions();
// Create and assign a policy
utility::string_t policy_name = s2ws("Signature" + eventID);
azure::storage::blob_shared_access_policy policy = azure::storage::blob_shared_access_policy();
// set expire date
policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_days(10));
// Give read permission (only read is granted here)
policy.set_permissions(azure::storage::blob_shared_access_policy::permissions::read);
azure::storage::shared_access_policies<azure::storage::blob_shared_access_policy> policies;
//add the new shared policy
policies.insert(std::make_pair(policy_name, policy));
blobPermissions.set_policies(policies);
blobPermissions.set_public_access(azure::storage::blob_container_public_access_type::off);
container.upload_permissions(blobPermissions);
auto token = container.get_shared_access_signature(policy, policy_name);
After running this, I can see the policy is successfully set on the container, but the token returned by the last line is not right. And there is always an exception when exiting this function; the breakpoint is in _Deallocate().
Could someone tell me what's wrong with my code? Or some examples about this? Thank you very much.
Edited
The token I got looks like,
"sv=2016-05-31&si=Signature11111122222222&sig=JDW33j1Gzv00REFfr8Xjz5kavH18wme8E7vZ%2FFqUj3Y%3D&spr=https%2Chttp&se=2027-09-09T05%3A54%3A29Z&sp=r&sr=c"
With this token I couldn't access my blobs. The correct token created by "Microsoft Azure Storage Explorer" using this policy looks like:
?sv=2016-05-31&si=Signature11111122222222&sr=c&sig=9tS91DUK7nkIlIFZDmdAdlNEfN2HYYbvhc10iimP1sk%3D
About the exception: I put all this code in a function. Without the last line, everything is okay, but with the last line added, an exception is thrown while exiting the function, saying a breakpoint was triggered. It stops at the last line of _Deallocate() in "C:\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\include\xmemory0",
::operator delete(_Ptr);
I have no idea why this exception is being thrown or how to debug it, because it seems it cannot be caught by my code.
Edited
After changing the last line to:
auto token = container.get_shared_access_signature(azure::storage::blob_shared_access_policy(), policy_name);
The returned token is correct and I can access my blobs by using it. But the annoying exception is still there :-(
Edited
I just found that the exception only happens when building in Debug; in Release everything is OK. So maybe it's related to the build environment.
When creating a Shared Access Signature (SAS), there are a few parameters you set: start/expiry time, permissions, IP ACLs, protocol restrictions, etc. You can define these in an access policy on the blob container, define them ad hoc in the SAS itself (i.e. without an access policy), or combine the two to create a SAS token.
One key thing to keep in mind is that if something is defined in an access policy, you can't redefine it when creating a SAS. For example, if you create an access policy with just Read permission and nothing else, then you can't provide any permissions when creating a SAS token that uses this access policy. You can certainly define the things which are not in the access policy (for example, a SAS expiry, if it is not defined in the access policy).
If you look at your code (before the edit), you were creating an access policy with some permissions and then creating a SAS token that specified the same permissions along with the access policy. That's why it did not work. When you created a SAS token from Microsoft's Storage Explorer, you will notice that it only included the access policy (si=Signature11111122222222) and none of the other parameters, and that's why it worked.
In your code after the edit you did not include any permissions but only used the access policy (in a way, you did what Storage Explorer does), and that's why things worked.
I hope this explains the mystery behind not working/working SAS tokens.

AWS | Boto3 | RDS | DownloadDBLogFilePortion | cannot download a log file because it contains binary data

When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle pagination and throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
(rds is a boto3 client)
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns the error above.
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI. There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI. The AWS team is aware of this issue and is working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
InvalidParameterValue in boto means the data passed does not comply with the function's argument requirements. Probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance. Please make sure the log file EXISTS inside the instance; use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well for mismatched types or out-of-range values. Since they are not required, you can skip them at first and test again later.
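The same existence check can be done from boto3 before downloading; a sketch, with the instance identifier as a placeholder:

import boto3

rds = boto3.client("rds")

# List the log files that actually exist on the instance.
files = rds.describe_db_log_files(DBInstanceIdentifier="my-rds-name")
for f in files["DescribeDBLogFiles"]:
    print(f["LogFileName"], f["Size"])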