I am struggling to figure out how to update a secret in AWS Secrets Manager via PowerShell when the secret has multiple key/values. My code is below with the standard change, but I have two key/values that I need to update. I tried a JSON syntax, which did not take well and overwrote the secret with plain text. I have attached my failed code as well, but I am stumped; I cannot find references to the syntax for multiple key/values.
SINGLE SECRET (WORKS GREAT):
$sm_value = "my_secret_name"
Update-SECSecret -SecretId $sm_value -SecretString $Password
MULTI VALUE KEYS (FAILED):
Update-SECSecret -SecretId $sm_value -SecretString '{"Current":$newpassword,"Previous":$mycurrent}'
EXAMPLE AWS SECRET
Team, after looking at the documentation for CloudFormation and the AWS CLI, I concluded that my JSON string was incorrect. After playing with it for an hour or so, I came up with this format that worked.
$mycurrent = "abc1234"
$newpassword = "zxy1234"
$json = @"
{"Current":"$newpassword","Previous":"$mycurrent"}
"@
Update-SECSecret -SecretId $sm_value -SecretString $json
It seems that updating secrets with multiple key/values requires a JSON string in order to work.
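For comparison, the same multi key/value update can be done from Python with boto3; this is just a sketch using the sample values above (update_secret is the boto3 counterpart of Update-SECSecret):

import json
import boto3

sm = boto3.client('secretsmanager')

mycurrent = "abc1234"
newpassword = "zxy1234"

# Serialize both key/value pairs into one JSON string and store it as the secret value
secret_string = json.dumps({"Current": newpassword, "Previous": mycurrent})
sm.update_secret(SecretId="my_secret_name", SecretString=secret_string)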
I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch dashboard through the CDK, and I need a way to add all LogGroups ending with a suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or a way to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields #message',
    'filter #message like /(?i)error/'
  ],
})
Is there any way I can add it so logGroupNames contains all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make this work such that if you add a new LogGroup, the query automatically adjusts) without using something like an AWS Lambda function that periodically updates your log query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when CDK synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many log groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
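As a rough illustration of that approach, here is a minimal sketch in Python with boto3 (rather than the JavaScript SDK the link above describes; the suffix is a placeholder) that collects the matching log group names before they are passed to the widget:

import boto3

def log_groups_with_suffix(suffix, region_name=None):
    """Collect the names of all CloudWatch log groups whose name ends with `suffix`."""
    logs = boto3.client('logs', region_name=region_name)
    names = []
    for page in logs.get_paginator('describe_log_groups').paginate():
        for group in page['logGroups']:
            if group['logGroupName'].endswith(suffix):
                names.append(group['logGroupName'])
    return names

# e.g. every log group ending in "-prod" (placeholder suffix)
log_group_names = log_groups_with_suffix('-prod')

The resulting list is what you would feed into logGroupNames when constructing the LogQueryWidget.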
If you want to achieve this, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above) as part of the deployment. You can read data from the API call response as well and act on it as you want.
I'm trying to query my AWS CloudSearch (2013 API) domain using the AWS CLI on Ubuntu. I haven't been able to get it to work successfully when the search is restricted to a specific field. The following query:
aws --profile myprofile cloudsearchdomain search \
--endpoint-url "https://search-mydomain-abc123xyz.eu-west-1.cloudsearch.amazonaws.com" \
--query-options {"fields":["my_field"]} \
--query-parser "simple" \
--return "my_field" \
--search-query "foo bar"
...returns the following error:
An error occurred (SearchException) when calling the Search operation: q.options contains invalid javascript object
If I remove the --query-options parameter from the above query, then it works. From the AWS CLI docs regarding the fields option of the --query-options parameter:
An array of the fields to search when no fields are specified in a search... Valid for: simple, structured, lucene, and dismax
aws cli version:
aws-cli/1.11.150 Python/2.7.12 Linux/4.10.0-28-generic botocore/1.7.8
I think the documentation is a bit misleading; the shell strips the unquoted double quotes before the value reaches the service, so it receives an invalid JSON object. You can wrap the whole value in double quotes and use single quotes inside:
--query-options "{'fields':['my_field']}"
or you can escape the double quotes:
--query-options "{\"fields\":[\"my_field\"]}"
When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and a sleep between calls).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds -> the boto3 RDS client
And this is the definition of my function:
def request_paginated(rds,**kwargs):
return rds.download_db_log_file_portion(**kwargs)
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
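For reference, a minimal sketch of what the Marker-based pagination and throttling sleep described above can look like (placeholder names; an illustration, not the exact code in use):

import time

def download_full_log(rds, db_instance, log_file):
    """Page through DownloadDBLogFilePortion using Marker, sleeping between calls to avoid throttling."""
    marker = '0'        # '0' means start from the beginning of the file
    chunks = []
    while True:
        page = rds.download_db_log_file_portion(
            DBInstanceIdentifier=db_instance,
            LogFileName=log_file,
            Marker=marker,
            NumberOfLines=1000,
        )
        chunks.append(page.get('LogFileData', ''))
        if not page.get('AdditionalDataPending'):
            break
        marker = page['Marker']
        time.sleep(1)   # crude throttling guard
    return ''.join(chunks)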
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the proposed solution provided by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI. There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI. The AWS team is aware of this issue and is working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java-based CLI.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue: An invalid or out-of-range value was supplied for the input parameter.
InvalidParameterValue in boto means the data passed does not comply with what the API expects. It is probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well for mismatched types or out-of-range values. Since they are not required, skip them first and test them later.
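The same check can be done from Python with boto3; a minimal sketch (the instance identifier is a placeholder):

import boto3

rds = boto3.client('rds')

# List the log files that actually exist on the instance before trying to download them
for log_file in rds.describe_db_log_files(DBInstanceIdentifier='my-rds-name')['DescribeDBLogFiles']:
    print(log_file['LogFileName'], log_file['Size'])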
I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live credentials in the env variables and setting the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but rather passing them directly to the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
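For completeness, a minimal sketch of the Query call that was failing, using the region-based connection above (table name and key are placeholders):

from boto3.dynamodb.conditions import Key

table = dynamoConnection.Table('my-table')  # placeholder table name
response = table.query(KeyConditionExpression=Key('id').eq('some-id'))  # placeholder key
print(response['Items'])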
It's a sign that your system time is off. Maybe you can check your:
1. Time zone
2. Time settings.
If there are automatic time settings, use them to fix your clock.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.
I've enabled object versioning on a bucket. I want to get all versions of a key inside that bucket, but I cannot find a method to do this; how would one accomplish this using the S3 APIs?
So, I ran into this brick wall this morning. This seemingly trivial thing is incredibly difficult to do, it turns out.
The API you want is the GET Bucket Object versions API, but it is sadly non-trivial to use.
First, you have to steer clear of some non-solutions: KeyMarker, which is documented by boto3 as,
KeyMarker (string) -- Specifies the key to start with when listing objects in a bucket.
…does not start with the specified key when listing objects in a bucket; rather, it starts immediately after that key, which makes it somewhat useless here.
The best restriction this API provides is Prefix; this isn't going to be perfect, since there could be keys that are not our key of interest but nonetheless start with it.
Also beware of MaxKeys; it is tempting to think that, lexicographically, our key should be first, and all keys which have our key as a prefix of their key name would follow, so we could trim them using MaxKeys; sadly, MaxKeys controls not how many keys are returned in the response, but rather the number of versions. (And I'm going to presume that isn't known in advance.)
So, it seems Prefix is the best that can be done. Also note that, at least in some languages, the client library will not handle pagination for you, so you'll additionally need to deal with that.
As an example in boto3:
import boto3

client = boto3.client('s3')
bucket_name = 'my-bucket'      # placeholder
key_name = 'path/to/my-key'    # placeholder

response = client.list_object_versions(
    Bucket=bucket_name, Prefix=key_name,
)
while True:
    # Process `response`
    ...
    # Check if the results got paginated:
    if response['IsTruncated']:
        response = client.list_object_versions(
            Bucket=bucket_name, Prefix=key_name,
            KeyMarker=response['NextKeyMarker'],
            VersionIdMarker=response['NextVersionIdMarker'],
        )
    else:
        break
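Note that boto3 does provide a paginator for this call, so the same loop can also be written as follows (same placeholder names as above, with the prefix caveat from earlier handled by an exact-key check):

paginator = client.get_paginator('list_object_versions')
for page in paginator.paginate(Bucket=bucket_name, Prefix=key_name):
    for version in page.get('Versions', []):
        # Skip other keys that merely share the prefix
        if version['Key'] == key_name:
            print(version['VersionId'], version['LastModified'])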
AWS supports getting all object versions by prefix, so you can just use your key as the prefix; it works fine, please try it.
You can use the AWS CLI to get a list of all versions in the bucket:
aws s3api list-object-versions --bucket bucketname
Using Python:
import boto3

# aws_access_key_id / aws_secret_access_key are assumed to be defined elsewhere
session = boto3.Session(aws_access_key_id, aws_secret_access_key)
s3 = session.client('s3')
bucket_name = 'bucketname'
versions = s3.list_object_versions(Bucket=bucket_name)
print(versions.get('Versions'))