Long story short: I need to update my ECS task definition via PowerShell in order to increase the "EphemeralStorage_SizeInGiB", which is only available via the AWS CLI.
I am able to successfully grab the task via the Get-ECSTaskDefinitionDetail cmdlet but I'm stuck on what to do next.
I was able to convert that output to JSON and update the ephemeral storage field in the JSON file, but I cannot figure out how to send that back to AWS. All my attempts with the Register-ECSTaskDefinition cmdlet seem to fail because it wants individual arguments for each parameter instead of a JSON upload.
Any advice would be appreciated.
Thanks,
I don't have one to test with, but most AWS cmdlets return objects which can be piped to each other. Get-ECSTaskDefinitionDetail does too, returning a DescribeTaskDefinitionResponse object with what looks like all the right properties to auto-fill the registration. Try out
Get-ECSTaskDefinitionDetail -TaskDefinition $ARN |
Register-ECSTaskDefinition -EphemeralStorage_SizeInGiB $newSize
Or it might require using this .TaskDefinition property:
$Response = Get-ECSTaskDefinitionDetail -TaskDefinition $ARN
$Response.TaskDefinition | Register-ECSTaskDefinition -EphemeralStorage_SizeInGiB $newSize
and maybe it's that easy?
Note that you must not use -Select in the Get command, or it will return a different object type.
That said, it's pretty awkward that it won't take JSON when two of its parameters do. Might be worth reopening this feature request:
https://github.com/aws/aws-tools-for-powershell/issues/184
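For what it's worth, outside PowerShell the describe-modify-register round trip can work with the JSON shape directly. A rough boto3 sketch of the same idea (the task definition name and new size are placeholders; the read-only fields stripped below are the ones register_task_definition rejects):

import boto3

ecs = boto3.client("ecs")

# Fetch the current revision (name/ARN is a placeholder).
td = ecs.describe_task_definition(taskDefinition="my-task")["taskDefinition"]

# Bump the ephemeral storage, then strip the read-only fields that
# register_task_definition does not accept.
td["ephemeralStorage"] = {"sizeInGiB": 100}
for key in ("taskDefinitionArn", "revision", "status", "requiresAttributes",
            "compatibilities", "registeredAt", "registeredBy", "deregisteredAt"):
    td.pop(key, None)

ecs.register_task_definition(**td)  # registers a new revision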
I have an AWS lambda built using SAM. I want to propagate the id (or, if it's easier, the tag) of a lambda's supporting docker image through to the lambda runtime function.
How do I do this?
Note: I do mean image id and NOT container id - what you'd see if you called docker image ls locally. Getting the container id / hostname is the easy bit :D
I have tried to declare a parameter in the template.yaml and have it picked up as an environment variable that way. I would prefer to define the value at most once within the template.yaml, and preferably have it auto-populated, though I am not aware of best practice there. The aim is to avoid human error. I don't want to pass the value on the command line unless I have to.
If it's too hard to get the image id then as a fallback the DockerTag would be fine. Again, I don't want this in multiple places in the template.yaml. Thanks!
Unanswered similar question: Finding the image ID of a container from within the container
The launched image URI is available in the packaged template file after running sam package, so it's possible to extract the tag from there.
For example, if using YAML:
grep -w ImageUri packaged.yaml | cut -d: -f3
This finds the URI in the packaged template (which looks like ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest) and grabs the tag, which comes after the 2nd :.
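If you'd rather not depend on field positions, the same extraction as a small Python sketch (the file name and URI layout are assumed to match the example above):

import re

with open("packaged.yaml") as f:
    match = re.search(r"^\s*ImageUri:\s*(\S+)", f.read(), re.MULTILINE)

if match:
    uri = match.group(1)
    print(uri.rsplit(":", 1)[-1])  # everything after the last ':' is the tag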
That said, I don't think it's a great solution. I wish there was a way using the SAM CLI.
I have some current instances that get some data by passing a json blob through the user data string. I would like to also pass a script to be run at boot time through the user data. Is there a way to do both of these things? I've looked at cloud-config, but setting an arbitrary value doesn't seem to be one of the options.
You're correct that on EC2, there is only one 'user-data' blob that can be specified. Cloud-init addresses this limitation by allowing the blob to be an "archive" format of sorts.
MIME Multipart
Cloud-config Archive
The cloud-config archive format is unfortunately not documented at the moment, but there is an example in doc/examples/cloud-config-archive.txt. It is expected to be YAML and start with '#cloud-config-archive'. Note that YAML is a strict superset of JSON, so anything that can dump JSON can be used to produce this YAML.
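To illustrate that last point, a sketch that produces a cloud-config-archive by dumping JSON (the part types and contents here are made up; the entry layout mirrors doc/examples/cloud-config-archive.txt):

import json

# Each archive entry is a mapping with at least "content" and usually "type".
parts = [
    {"type": "text/x-shellscript", "content": "#!/bin/sh\necho hello > /tmp/hello"},
    {"type": "text/x-myapp-config", "content": '{"app_setting": "value"}'},
]
print("#cloud-config-archive")
print(json.dumps(parts))  # JSON is valid YAML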
Both of these formats require changes to all consumers to "share" the shared resource of user-data. cloud-init will ignore MIME types that it does not understand and handle those that it does. You'd have to modify the other application producing and consuming user-data to do the same.
Well, cloud-init supports multi-part MIME. With that in mind, you could have your boot script as one part and a custom MIME part for your JSON blob. Note that you would need to write a Python handler that tells cloud-init what to do with that part (most likely moving it to wherever your app expects it). This handler code ends up in the handlers directory as described here.
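A minimal sketch of building such a multi-part blob with Python's standard email module (the part contents and the custom subtype are placeholders):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

combined = MIMEMultipart()

# The JSON blob your application already reads; cloud-init skips parts
# it doesn't recognize unless a custom handler claims them.
combined.attach(MIMEText('{"app_setting": "value"}', "x-myapp-config"))

# A boot script; cloud-init runs text/x-shellscript parts at boot.
combined.attach(MIMEText("#!/bin/sh\necho 'hello' > /tmp/hello", "x-shellscript"))

user_data = combined.as_string()  # pass this as the instance's user data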
Since many of the OpsWorks APIs take an OpsWorks id (different from an EC2 instance id), it seems like there should be an easy way to get that id. There is an opsworks-agent-cli stack_state command that returns a JSON blob that includes the id, but that still requires parsing, and I can't be sure what tools will be available on the instance. It is reasonably easy to parse the id out of the JSON using shell commands, but that feels like an ugly hack. Are there any commands I'm missing, or other ways to get an instance to report its id?
I think you have to parse it.
You can use jq to parse the JSON, as is typically done when reading EC2 instance metadata. The jq package is included in Amazon Linux AMIs (see available packages).
In your case, try opsworks-agent-cli stack_state | jq '.stack.stack_id'.
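If jq turns out not to be available, Python usually is; the same parse as a one-liner sketch (assuming the same key path as above):

opsworks-agent-cli stack_state | python -c "import json,sys; print(json.load(sys.stdin)['stack']['stack_id'])"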
When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds -> a boto3 RDS client
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
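For context, the pagination around it is essentially this (trimmed sketch; the instance and log names are placeholders):

import time
import boto3

rds = boto3.client("rds")

def download_full_log(rds, instance_id, log_name):
    marker, chunks = "0", []  # Marker "0" starts at the beginning of the file
    while True:
        page = rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_name,
            Marker=marker,
            NumberOfLines=1000,
        )
        chunks.append(page["LogFileData"])
        if not page["AdditionalDataPending"]:
            break
        marker = page["Marker"]
        time.sleep(0.2)  # crude throttling guard
    return "".join(chunks)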
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with aws support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team are aware of this issue and are working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data you passed does not comply with the API. Probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
DBInstanceIdentifier='string',
LogFileName='string',
Marker='string',
NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a file that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well for a mismatched type or an out-of-range value. Since they are not required, skip them at first, then test with them later.
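The same existence check from boto3, as a sketch (the instance name is a placeholder):

import boto3

rds = boto3.client("rds")

# List the log files that actually exist on the instance before
# calling download_db_log_file_portion.
resp = rds.describe_db_log_files(DBInstanceIdentifier="my-rds-name")
for f in resp["DescribeDBLogFiles"]:
    print(f["LogFileName"], f["Size"])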
I used a CloudFormation template to create an EC2 instance. Is there any way besides tagging that I can get the name of the CloudFormation stack via the command line?
Method 1: Tagging
Tagging is going to be the cleanest and easiest way to get that data. You do need to do some advance work and this won't work for existing instances, but it's going to be fast and reliable.
Method 2: Cross-referencing
If you have the instance id, you can ask CloudFormation to search for its sibling stack resources, from which you can infer the stack name, id, etc.
import boto.cloudformation
c = boto.cloudformation.connect_to_region('us-east-1')
c.describe_stack_resources(physical_resource_id='i-830e2869')[0].stack_name
If the instance is not part of a stack, you'll get a Stack for i-830e2869 does not exist 400 error.
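If you're on boto3 rather than the legacy boto shown above, the equivalent lookup is roughly this (the region and instance id are illustrative):

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
resources = cfn.describe_stack_resources(PhysicalResourceId="i-830e2869")
print(resources["StackResources"][0]["StackName"])  # the owning stack's name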
Method 3: User data
I'll admit - this was pretty creative so kudos for thinking it up.
curl http://169.254.169.254/latest/user-data | grep 'cfn-init -s' | awk '{print $3}'
The reason this works is that instances created by CloudFormation need to run /opt/aws/bin/cfn-init to install packages (and /opt/aws/bin/cfn-signal to report their successful creation), and one of its parameters (-s) is the stack name.
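The same extraction in Python rather than shell, as a sketch (IMDSv1 endpoint shown, matching the curl above):

import re
import urllib.request

user_data = urllib.request.urlopen(
    "http://169.254.169.254/latest/user-data", timeout=2
).read().decode()

# The stack name is the argument following cfn-init's -s flag.
match = re.search(r"cfn-init\s+-s\s+(\S+)", user_data)
if match:
    print(match.group(1))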
It'll fail if someone edits the user-data, but despite feeling a bit hacky, it seems pretty reliable. I still wouldn't recommend using it in prod given its brittle reliance on a script parameter.