What protocols are supported by the AWS CLI for files? - amazon-web-services

I am running the following command and was wondering how else the value for --template-body can be specified.
aws cloudformation validate-template --template-body file://my-template.yml
I know of http:// and file:// but are there others? Is there a list of these somewhere?
Thanks,

The AWS CLI supports the file://, http://, and https:// protocols. This is discussed in this section of the AWS CLI documentation.
Updated
When in doubt, check the code. Here are the supported prefixes:
PREFIX_MAP = {
    'file://': (get_file, {'mode': 'r'}),
    'fileb://': (get_file, {'mode': 'rb'}),
    'http://': (get_uri, {}),
    'https://': (get_uri, {}),
}
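For example (a sketch; my-function and the template URL are placeholders), the prefixes behave like this:
# file:// reads the file as text and passes its contents as the parameter value
aws cloudformation validate-template --template-body file://my-template.yml
# fileb:// reads the file in binary mode, which is what Lambda zip bundles need
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip
# http:// and https:// fetch the value from a URL (CLI v1 behaviour; v2 no longer follows URLs by default)
aws cloudformation validate-template --template-body https://example.com/my-template.yml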

Related

Missing commands in AWS CLI?

Sorry if this is a dumb question - I'm just getting started with AWS - I've installed the AWS CLI utility on Windows, and configured it with my access key and secret. The command "aws --version" says "aws-cli/2.8.10 Python/3.9.11 Windows/10 exe/AMD64 prompt/off". I'm trying to run "aws assume-impersonation-role" as documented here:
https://docs.aws.amazon.com/cli/latest/reference/workmail/assume-impersonation-role.html
but I get the error "invalid choice". The list of valid choices has nothing like what I'm looking for. Is there some extension or add-on I need to install?
Just to check the obvious, are you doing:
aws assume-impersonation-role --organization-id … --impersonation-role-id …
rather than the intended
aws workmail assume-impersonation-role --organization-id … --impersonation-role-id …
?
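If you're not sure which service a subcommand lives under, the CLI's built-in help is a quick sanity check (the output depends on your installed CLI version):
# List all top-level services the CLI knows about
aws help
# List the subcommands under workmail; recent versions include assume-impersonation-role
aws workmail help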

Using credential process for IAM Roles Anywhere in springboot application

I have a use case where I need to access an SNS topic from outside AWS. We planned to use https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/ as it seems to be the right fit.
But I'm unable to get this working correctly. I followed the link mentioned above exactly; the contents of my .aws/config file are:
credential_process = ./aws_signing_helper credential-process
--certificate /path/to/certificate.pem
--private-key /path/to/private-key.pem
--trust-anchor-arn <TA_ARN>
--profile-arn <PROFILE_ARN>
--role-arn <ExampleS3WriteRole_ARN>
But my Spring Boot application throws an error stating that it could not fetch the credentials to connect to AWS. Kindly assist.
I found the easiest thing to do was to create a separate script for credential_process to target. This isn't necessary, I just found it easier.
So create a script along the lines of:
#!/bin/bash
# raw_helper.sh
/path/to/aws_signing_helper credential-process \
  --certificate /path/to/cert.crt \
  --private-key /path/to/key.key \
  --trust-anchor-arn <TA_ARN> \
  --profile-arn <Roles_Anywhere_Profile_ARN> \
  --role-arn <IAM_Role_ARN>
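One small thing to check, assuming the script above is saved as raw_helper.sh: the file has to be executable so that credential_process can actually run it.
chmod +x /path/to/raw_helper.sh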
The key thing I found is that most places (including AWS documentation) tell you to use the ~/.aws/config file and declare the profile there. This didn't seem to work, but when I added the profile to my ~/.aws/credentials file it did work. Assuming you've created a helper script, this would look like this:
# ~/.aws/credentials
[raw_profile]
credential_process = /path/to/raw_helper.sh
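To confirm the signing helper and profile actually return credentials before pointing the Spring Boot app at them, a quick STS call works well (raw_profile is just the profile name from the snippet above):
# Should print an account id and an assumed-role ARN if the chain works end to end
aws sts get-caller-identity --profile raw_profile
# Or export the profile so the SDK default credential chain picks it up
export AWS_PROFILE=raw_profile
aws sts get-caller-identity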

Provided region_name 'US East (Ohio) us-east-2' doesn't match a supported format

I want to upload a file with the AWS CLI, but it's not working.
When I upload manually it works.
I'm using the command below:
aws s3 cp /localfolderlocation awss3foldername --recursive --include "filename"
When I try to get a listing I get the same error:
aws s3 ls
The issue was that when running the aws configure command the OP entered the name of the region as seen in the console.
In the AWS CLI the region identifier should be the region code, not the full display name (us-east-2, not "US East (Ohio)").
The full list of region codes is available here.
This is required for any programmatic interaction with AWS, including the SDKs.
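A quick way to fix this without rerunning the whole interactive prompt is aws configure set (us-east-2 being the code for US East (Ohio)):
# Overwrite just the region for the default profile
aws configure set region us-east-2
# Verify the stored value and retry
aws configure get region
aws s3 ls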
Check whether you have added the region, e.g.
AWS_S3_REGION_NAME = 'ca-central-1'
in your settings.py file, and make sure it is in lowercase.
For my Spring application, after the release of v1.3.1, I had to replace a call to the AWS SDK's Regions.US_EAST_1.getString() with getName(). It didn't like getting US_EAST_1 as part of the request anymore, though it worked before.
@Bean("amazonS3")
@ConditionalOnProperty(name = "localstack", havingValue = "true")
@Profile("!test")
public static AmazonS3 amazonS3LocalStackClient(
        @Value("${s3.endpoint}") String localEndpoint) {
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new DefaultAWSCredentialsProviderChain())
            .withEndpointConfiguration(
                    new EndpointConfiguration(localEndpoint, Regions.US_EAST_1.getName()))
            .withPathStyleAccessEnabled(Boolean.TRUE)
            .build();
}
For me, I tried all options:
Deleted env variables
Deleted the .aws folder
Nothing worked. Then I just updated the AWS CLI and rebooted the machine, and it did work. Try the below.
For Windows: msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
For Linux & Mac, see this link.
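For reference, the Linux update that link describes is roughly the following (the --update flag is needed when a previous version is already installed):
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
aws --version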

AWS update-function-code Shows No Output and Does Not Update the Function

I am creating a Node.js based lambda function to query an AmazonRDS instance in the same VPC as the Lambda instance. The codebase uses npm libraries, so it needs to be zipped then updated via the console.
I wrote the code, zipped the files, and ran the following command:
aws lambda update-function-code --function-name (the arn of the function) --zip-file fileb://~/path/to/function/queryDatabase.zip
However, the console displays no output, and the function is not updated when it is viewed from the web interface.
This is my package.json
{
"dependencies": {
"aws-sdk": "^2.784.0",
"aws-xray-sdk-core": "2.4.0",
"aws-xray-sdk-mysql": "2.4.0",
"md5": "2.2.1",
"mysql2": "2.1.0"
}
}
How should I troubleshoot this issue?
Thank you in advance!
The Lambda function's runtime was Node 12, but my codebase was built with Node version 15. Telling nvm to use version 12, deleting package-lock.json, and then reinstalling the modules fixed the problem.
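A rough sketch of that fix, assuming nvm is installed, the function runtime is nodejs12.x, and index.js plus node_modules are what needs packaging (the file names and ARN are placeholders):
# Rebuild dependencies against the runtime's Node major version
nvm use 12
rm -rf node_modules package-lock.json
npm install
# Re-zip and push; fileb:// is required because the zip is binary
zip -r queryDatabase.zip index.js node_modules package.json
aws lambda update-function-code --function-name <function-arn> --zip-file fileb://queryDatabase.zip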

How to get cfnoutputs of AWS stack to a file using AWS-CDK

I want to store the CfnOutput values from AWS CDK to a file (Python).
Below is the code to show Public IP on console.
my_ip = core.CfnOutput(
    scope=self,
    id="PublicIp",
    value=my_ec2.instance_public_ip,
    description="public ip of my instance",
    export_name="my-ec2-public-ip")
I have tried redirecting the output with the command:
cdk deploy * > file.txt
But no success.
Please help
For every value you want saved after the stack is run, add a core.CfnOutput call in your code.
Then when you deploy your stack, use:
% cdk deploy {stack-name} --profile $(AWS_PROFILE) --require-approval never \
--outputs-file {output-json-file}
This deploys the stack, doesn't stop to ask for yes/no approvals (so you can put it in a Makefile or a CI/CD script) and once done, saves the value of every CfnOutput in your stack to a JSON file.
Details here: https://github.com/aws/aws-cdk/commit/75d5ee9e41935a9525fa6cfe5a059398d0a799cd
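The resulting file maps each stack name to its outputs, so a single value can be pulled out with jq, for example (the stack name and output id here are just the ones from the question):
cdk deploy my-stack --outputs-file outputs.json
# outputs.json looks roughly like: { "my-stack": { "PublicIp": "x.x.x.x" } }
jq -r '."my-stack".PublicIp' outputs.json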
This answer is only relevant if you're using CDK <1.32.0. Since then #7020 was merged and --outputs-file is supported. See the top voted answer for a full example.
Based on this closed issue, your best bet is using AWS CLI to describe the stack and extract the output. For example:
aws cloudformation describe-stacks \
--stack-name <my stack name> \
--query "Stacks[0].Outputs[?OutputKey=='PublicIp'].OutputValue" \
--output text
If you're using Python, this can also be done with boto3.
import boto3

outputs = boto3.Session().client("cloudformation").describe_stacks(
    StackName="<my stack here>")["Stacks"][0]["Outputs"]
for o in outputs:
    if o["OutputKey"] == "PublicIp":
        print(o["OutputValue"])
        break
else:
    print("Can't find output")
Not sure if this is a recent update to cdk CLI, but cdk deploy -O will work.
-O, --outputs-file Path to file where stack outputs will be written as JSON
This is an option now. It will take each cdk.CfnOutput and put it in a JSON file.
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_core.CfnOutput.html