DB ParameterGroup not found, not allowed to do cross region copy - amazon-web-services

I am using the AWS command line tool and am having an issue copying a parameter group to a different region.
Error:
An error occurred (DBParameterGroupNotFound) when calling the CopyDBParameterGroup operation: DB ParameterGroup not found, not allowed to do cross region copy.
The command is:
>aws rds copy-db-parameter-group --source-db-parameter-group-identifier arn:aws:rds:ap-southeast-1:MyActID:pg:source-para-group --target-db-parameter-group-identifier dest-para-group --target-db-parameter-group-description dest-para-group-description
I also tried with:
>aws rds copy-db-parameter-group --source-db-parameter-group-identifier arn:aws:rds:ap-southeast-1:MyActID:pg:source-para-group --target-db-parameter-group-identifier arn:aws:rds:ap-south-1:MyActID:pg:dest-para-group --target-db-parameter-group-description arn:aws:rds:ap-south-1:MyActID:pg:dest-para-group-description
Has anyone else come across a similar issue? Please help.

According to the copy-db-parameter-group documentation, the tricky part is:
If the source DB parameter group is in a different region than the copy, specify a valid DB parameter group ARN, for example arn:aws:rds:us-west-2:123456789012:pg:special-parameters.
Apparently it is a "from-to" copy: run the command with credentials/configuration pointing at the TARGET region, and identify the source parameter group by its ARN.
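For example, a minimal boto3 sketch of the same idea, using the names from the question (MyActID stands in for the real account ID): create the client in the TARGET region (ap-south-1) and reference the source group by its ARN.
import boto3

# RDS client in the TARGET region (ap-south-1 in the question).
rds = boto3.client('rds', region_name='ap-south-1')

# The source parameter group lives in ap-southeast-1, so it is given as a full ARN.
rds.copy_db_parameter_group(
    SourceDBParameterGroupIdentifier='arn:aws:rds:ap-southeast-1:MyActID:pg:source-para-group',
    TargetDBParameterGroupIdentifier='dest-para-group',
    TargetDBParameterGroupDescription='dest-para-group-description')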

Related

Failing to update a resource name/arn due to missing permissions

I'm implementing a custom resource and I'm failing when I try to update its name.
The reason I'm failing is that the custom resource Lambda is missing the permissions to delete the old resource: since I'm granting it the exact permission by ARN, when I change the name of the resource (which changes the ARN), it only has permissions for the new resource.
Is there any way to solve this without giving it permissions for arn:...:/* ?
Here is some code to make this clearer; it implements an SSM SecureString placeholder (since we don't want to deploy secrets that would leak into the stack outputs, it is just a placeholder).
See the code below; note that the policy statement passed to add_to_policy is for arn:aws:ssm:{region}:{account_id}:parameter/* - this is the only way it works.
If I try to apply minimal privileges and scope it to the specific parameter name ARN, then updating the name fails due to missing permissions to delete the old resource on update.
If I manually added the delete permission for the old resource (hard-coded), it worked (see the sketch after the code below).
param_name = 'some-value'  # the SSM parameter name that is part of the ARN
# The custom resource's SDK-call policy is scoped to that one parameter's ARN.
policy = AwsCustomResourcePolicy.from_sdk_calls(
    resources=[self.arn_join(f'arn:aws:ssm:{region}:{account_id}:parameter', param_name)])
resource = AwsCustomResource(scope=self, id=f'{id_}AwsCustomResource', policy=policy,
                             log_retention=log_retention,
                             on_create=on_create, on_update=on_update, on_delete=on_delete,
                             resource_type='Custom::AWSSecureString',
                             timeout=timeout)
# Extra delete permission; the '/*' scope is the only one that survives a rename.
resource.grant_principal.add_to_policy(iam.PolicyStatement(
    actions=['ssm:DeleteParameter'],
    resources=[f'arn:aws:ssm:{region}:{account_id}:parameter/*']))
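For reference, the hard-coded workaround mentioned above would look roughly like this (old_param_name is a hypothetical variable holding the previous parameter name):
# Sketch of the hard-coded workaround: additionally allow ssm:DeleteParameter
# on the OLD parameter's ARN so the update can clean it up.
old_param_name = 'previous-value'  # assumed to be known at synth time
resource.grant_principal.add_to_policy(iam.PolicyStatement(
    actions=['ssm:DeleteParameter'],
    resources=[f'arn:aws:ssm:{region}:{account_id}:parameter/{old_param_name}']))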
Is it possible to do this somehow without giving it '*' permissions for all parameters?
Thanks for your help!

Issue with AWS DMS continuous replication

I am trying to create a DMS task to migrate data from an RDS Postgres instance to an S3 bucket. The full load works fine, but the continuous replication is failing. It's giving this error:
"logical decoding requires wal_level >= logical"
When I checked the system settings from pg_settings, it shows that the setting "wal_level" has the value "replica". So I tried to change wal_level, but I am not able to find this setting in the Parameter Group in RDS. My RDS instance uses the 9.6 parameter group family.
When I tried "ALTER SYSTEM SET wal_level TO 'logical'", it failed with "must be superuser to execute ALTER SYSTEM command", even though the user is in the rds_superuser role.
Please suggest.
The parameter in the Parameter Group is named "rds.logical_replication", and it needs to be changed to 1 (the default value is 0).
Changing this parameter sets "wal_level" to "logical".
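If you prefer to make the change from code instead of the console, a minimal boto3 sketch might look like this (the parameter group name is a placeholder; it must be a custom group attached to the instance):
import boto3

rds = boto3.client('rds')

# rds.logical_replication is a static parameter, so the change applies after a reboot.
rds.modify_db_parameter_group(
    DBParameterGroupName='my-postgres96-parameter-group',  # placeholder name
    Parameters=[{'ParameterName': 'rds.logical_replication',
                 'ParameterValue': '1',
                 'ApplyMethod': 'pending-reboot'}])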

AWS | Boto3 | RDS | function DownloadDBLogFilePortion | cannot download a log file because it contains binary data

When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle pagination and throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds -> boto3 RDS client
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and working on a way to resolve it, but they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
InvalidParameterValue in boto means the data passed does not comply with what the API expects. Probably an invalid name that you specified; possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a log file that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well, for mismatched types or out-of-range values. Since they are not required, you can skip them first and test with them later.
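A boto3 equivalent of that quick check might look like the sketch below (the instance identifier is a placeholder):
import boto3

rds = boto3.client('rds')

# List the log files that actually exist on the instance before trying to download them.
resp = rds.describe_db_log_files(DBInstanceIdentifier='my-rds-name')  # placeholder identifier
print([f['LogFileName'] for f in resp['DescribeDBLogFiles']])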

InvalidSignatureException when using boto3 for DynamoDB on AWS

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION, and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but passing the credentials directly to the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
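For completeness, a quick end-to-end check after dropping endpoint_url might look like this sketch (the table name is a placeholder; credentials are still taken from the environment):
import boto3

dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamoConnection.Table('my-table')  # placeholder table name
print(table.table_status)  # any signed call will surface a bad key or clock-skew problem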
It's a sign that your system time may be off (request signatures include a timestamp, so clock skew causes a signature mismatch). Maybe you can check your:
1. Time zone
2. Time settings
If there are automatic time settings available, use them to fix your clock.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.

Why do I get the S3ServiceException error when loading AWS Redshift from S3?

I'm getting an error when trying to load a table in Redshift from a CSV file in S3. The error is:
error: S3ServiceException:All access to this object has been disabled,Status 403,Error AllAccessDisabled,Rid FBC64D9377CF9763,ExtRid o1vSFuV8SMtYDjkgKCYZ6VhoHlpzLoBVyXaio6hdSPZ5JRlug+c9XNTchMPzNziD,CanRetry 1
code: 8001
context: Listing bucket=amazonaws.com prefix=els-usage/simple.txt
query: 1122
location: s3_utility.cpp:540
process: padbmaster [pid=6649]
The copy statement used is:
copy public.simple from 's3://amazonaws.com/mypath/simple.txt' CREDENTIALS 'aws_access_key_id=xxxxxxx;aws_secret_access_key=xxxxxx' delimiter ',';
As this is my first attempt at using Redshift and S3, I've kept the simple.txt file (and its destination table) to a single-field record. I've run the copy in both Aginity Workbench and SQL Workbench with the same results.
I've clicked the link in the S3 file's properties tab and it downloads the simple.txt file, so the input file appears to be accessible. Just to be sure, I've given it public access.
Unfortunately, I don't see any additional information in the Redshift Loads tab that would be helpful in debugging this.
Can anyone see anything I'm doing incorrectly?
Removing the amazonaws.com from the URL fixed the problem. The resulting COPY statement is now:
copy public.simple from 's3://mypath/simple.txt' CREDENTIALS 'aws_access_key_id=xxxxxxx;aws_secret_access_key=xxxxxx' delimiter ',';
You can receive the same error code if you are on an IAM role and use the IAM metadata for your aws_access_key and aws_secret_access_key. Per the documentation, the pattern to follow in this case includes a token from the instance. Both the IAM role's access keys and tokens can be found in the metadata here: http://169.254.169.254/latest/meta-data/iam/security-credentials/{{roleName}}.
copy table_name
from 's3://objectpath'
credentials 'aws_access_key_id=<temporary-access-key-id>;aws_secret_access_key=<temporary-secret-access-key>;token=<temporary-token>';
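To illustrate, a small Python sketch that reads those temporary credentials from the metadata endpoint quoted above and assembles the CREDENTIALS clause (the role name is a placeholder; instances that enforce IMDSv2 would additionally need a session-token header):
import json
import urllib.request

ROLE_NAME = 'my-redshift-copy-role'  # placeholder: the instance's IAM role name
META_URL = ('http://169.254.169.254/latest/meta-data/iam/'
            'security-credentials/' + ROLE_NAME)

# Fetch the role's temporary credentials from the instance metadata service.
creds = json.load(urllib.request.urlopen(META_URL))

credentials_clause = (f"aws_access_key_id={creds['AccessKeyId']};"
                      f"aws_secret_access_key={creds['SecretAccessKey']};"
                      f"token={creds['Token']}")
print(credentials_clause)  # paste into COPY ... CREDENTIALS '<this value>'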