I'm currently working on AWS serverless Lambda function deployment, trying to distribute and test with AWS SAM. However, when I followed the AWS SAM hello world template tutorial on the official website, I couldn't deploy my code to AWS.
I've already:
Assigned a working IAM account
Installed every package needed for AWS SAM (brew, aws-sam-cli, etc.)
Set up the AWS configuration
Used a function template provided by AWS
Yet I got this error message:
Error: Stack aws-sam-cli-managed-default is missing Tags and/or
Outputs information and therefore not in a healthy state (Current
state:aws-sam-cli-managed-default). Failing as the stack was likely
not created by the AWS SAM CLI
Took me a minute to figure out too.
Open up CloudFormation in the AWS console and delete the aws-sam-cli-managed-default stack, then try to redeploy.
Every time your deploy fails, you'll likely have to do this again.
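If you prefer the terminal, the same cleanup should work with the CLI (the stack name is the one from the error; substitute your own region):
$ aws cloudformation delete-stack --stack-name aws-sam-cli-managed-default --region us-east-1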
It's an AWS credentials error - either you didn't configure your credentials correctly, or you didn't configure them at all.
If you don't have the AWS CLI installed on your computer, find the AWS CLI installer for your operating system; for Mac it's https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html.
Go to https://console.aws.amazon.com/iam and create a new user with the AdministratorAccess permission, then get the aws_access_key_id and aws_secret_access_key.
Go to your terminal and type aws configure.
Enter your credentials (the prompts are shown below).
Try to run sam build && sam deploy --guided
Now it should work.
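For reference, aws configure asks for four values; the key values here are AWS's documentation placeholders:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json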
As @Eli Meiler says, it may well be a credentials issue. If you need to see more details here, try
$ aws cloudformation describe-change-set --change-set-name InitialCreation --stack-name aws-sam-cli-managed-default
...FAILED User: arn:aws:iam::123:user/<human user> is not authorized to perform:
cloudformation:CreateChangeSet
on resource: arn:aws:cloudformation:eu-central-1:aws:transform/Serverless-2016-10-31
with an explicit deny in an identity-based policy
EDIT
Even though I had full permissions in that AWS account, what I was not aware of was that MFA / 2-factor auth is somewhat troublesome here.
The advice that worked for me was this github comment to
generate an sts token
set the env vars and
then try sam deploy --guided again
$ aws sts get-session-token --serial-number arn:aws:iam::<account_id>:mfa/<human.user> --duration-seconds 15000 --token-code 123456
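The response contains temporary credentials; set them as the standard env vars, filling in the values from the Credentials block of the output:
$ export AWS_ACCESS_KEY_ID=<AccessKeyId from the output>
$ export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the output>
$ export AWS_SESSION_TOKEN=<SessionToken from the output>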
Related
I am trying to set up an EC2 instance (a role is associated with this instance).
This instance is responsible for:
Creating a build and uploading it to an S3 bucket
Creating a new application version from this build for Elastic Beanstalk
Deploying the newly created version on Beanstalk
I am running the following 3 commands; the first 2 execute successfully.
aws s3 cp api-service-build.zip s3://build-bucket/api-service/2022-11-2022.zip
aws elasticbeanstalk create-application-version \
    --application-name api-service-stage \
    --version-label v5 \
    --description "Version 5" \
    --source-bundle S3Bucket="build-bucket",S3Key="api-service/2022-11-2022.zip"
but when I try to run the third command it's unable to deploy (please note: on the CLI it's not failing)
aws elasticbeanstalk update-environment \
    --environment-name api-service-stage-env \
    --version-label v5
On the Beanstalk web console I can see the following error:
User: arn:aws:sts::xxxxxxxxx:assumed-role/MyAssumedRole/i-xxxxxx is not authorized to perform: autoscaling:DescribeAutoScalingGroups because no identity-based policy allows the autoscaling:DescribeAutoScalingGroups action (Service: AmazonAutoScaling; Status Code: 403; Error Code: AccessDenied;
I have updated my policy more than 30 times to reach the above point, and yet another permission error.
Is there a way, or a tool, where I paste my command and it tells me what permissions are required to run it?
aws s3 cp
aws elasticbeanstalk create-application-version
aws elasticbeanstalk update-environment
The permissions I have added so far to MyAssumedRole are as follows. I added these with lots of trial and error, and yet it's asking for another one, autoscaling.
S3 Full access
Elastic Beanstalk full access
CloudFormation full access
Based on the error, you are missing the AutoScaling permissions. They are different from the ones you have already added. The best way to test is to use the AWS Policy Simulator. Follow the steps below:
Login to the AWS Console.
Go to the following URL : https://policysim.aws.amazon.com
Under Users, Groups & Roles: select Roles and then the role MyAssumedRole
You can test the access on the right by selecting the service and action, e.g. under Policy Simulator, select Auto Scaling and then the action DescribeAutoScalingGroups (the one from your error). The Policy Simulator will show you the exact permission you need to add for your role.
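If the simulator confirms the missing action, one way to grant it is an inline policy on the role. A minimal sketch (the policy name here is illustrative, and you may want to scope Action/Resource more tightly):
$ aws iam put-role-policy \
    --role-name MyAssumedRole \
    --policy-name allow-autoscaling-describe \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"autoscaling:DescribeAutoScalingGroups","Resource":"*"}]}'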
I am trying to set up some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine, and can deploy CloudFormation templates, upload files to S3 etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code to parse a config file and use the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploy CloudFormation and upload files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to set up a VPC endpoint for STS? However, I can't work out why or how the AWS CLI works fine but Boto3 doesn't, especially since the AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
To reproduce the error, this command fails for me:
python -c "import boto3;print(boto3.Session(profile_name='x').client('s3').list_objects('bucket')"
However this AWS cli command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
The aws cli and boto3 both use botocore, which is only a minor detail. Nevertheless, both the cli and boto3, when run in the same environment with the same access to the credentials, should indeed be able to reach the same endpoint.
This:
aws sts get-caller-identity --profile x
and:
python -c "import boto3;print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"
are equivalent and should make the same api calls to the same endpoint.
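If they don't behave the same, comparing debug output from both sides can show which endpoint each one actually resolves (the logs are verbose, but the request URL is in there):
$ aws --debug sts get-caller-identity --profile x
$ python -c "import boto3; boto3.set_stream_logger('botocore'); print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"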
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that. So just export AWS_PROFILE and run the code. This prevents other users of the script from having to have the same profile and name it the same.
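For example:
$ export AWS_PROFILE=x
$ python -c "import boto3; print(boto3.client('sts').get_caller_identity())"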
Yeah so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'.
After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it by default to 'legacy'.
Whereas in v2 of the AWS CLI, they set it to 'regional'.
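So either export the variable before running your code, or set the equivalent option in the profile (if I recall the config key correctly, sts_regional_endpoints = regional in ~/.aws/config):
$ export AWS_STS_REGIONAL_ENDPOINTS=regional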
I am running a cdk deploy build on CircleCI, and when the CDK deploy step comes it gives me "Need to perform AWS calls for account ************, but no credentials have been configured".
But for troubleshooting I tried other commands as well, like
aws s3 ls
aws cloudformation list-stacks
These commands were working fine, and I was also able to run a command to create a CloudFormation stack with the same config, but I'm not able to run cdk deploy. The access key and secret I am using have Admin access.
Set the creds with a profile name using the aws-cli orb in CircleCI, and try using the below command to deploy with CDK:
cdk deploy --all --profile cdkprofile
For reference, in the CircleCI config:
orbs:
  aws-cli: circleci/aws-cli@2.0.3
commands:
  env-setup:
    description: AWS Env Setup
    steps:
      - aws-cli/setup:
          profile-name: cdkprofile
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
And the assumption is that AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set as CircleCI env variables.
As a starting note: the best way to troubleshoot is with cdk [command] --verbose (see the CLI ref).
CDK has its own internal mechanism for finding credentials; it does not directly use the AWS CLI (the AWS CLI is not a requirement for CDK to run).
In a similar situation with a CI tool, the issue was simply that the ~/.aws/credentials file did not exist (you don't need it for the AWS CLI, but in this situation CDK required it).
Credit to this issue reporting: https://github.com/aws/aws-cdk/issues/6947#issue-586402006
Solution tested for above:
For an EC2 running CI tool, with EC2 IAM role
Where ~/.aws/config exists and defined profile(s) with:
credential_source = Ec2InstanceMetadata
role_arn = arn:aws:iam:::role/role-to-assume-in-acctId
Create empty ~/.aws/credentials file
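On a Linux CI host, creating the empty file can be as simple as:
$ touch ~/.aws/credentials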
Example error for the problem solved above (from the verbose output):
Resolving default credentials
Notices refreshed
Unable to determine the default AWS account: ProcessCredentialsProviderFailure: Profile myprofile did not include credential process
Other causes found in other issues/comments could relate to:
Duplicate profiles
Having credential_process in the profile, set to empty
Needing --profile parameter to be added
Background:
I have Jenkins installed in AWS Account #1 (account1234) and it has the IAM role Role-Jenkins attached to it. GitHub is configured with Jenkins.
When I click build job in Jenkins, Jenkins pulls all the files from GitHub, and they can be found in
/var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) on an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the role app-role attached to it.
What I'm trying to achieve:
When I click build, I want Jenkins to push files from account1234 to account5678 by opening an SSM session from Jenkins to the EC2 instance on which the app is running.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment "IP config" --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp /var/lib/jenkins/workspace/ /home/app/all_files/ as the command)
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance/i-xyz123 is in account5678, but the error above shows SSM trying to connect to an instance in account1234 (which shouldn't be happening)
Q1: How do I update my command so that it tries to open an ssm session
with instance/i-xyz123 present in account5678 to accomplish what I'm
trying to do.
I believe I would also need to make each role added as a trusted relationship to the other.
(Note I want to do it via sessions manager as I won't have to deal with credentials of any sort)
If I've understood correctly then you're right; to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross account roles), you should be able to achieve what you need by assuming the role first in your shell script and then running the ssm command. That way Jenkins will use the temp creds and execute the command in the correct account (5678).
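A minimal sketch of that build-step script (assuming a role named app-role in account5678 that trusts Role-Jenkins; the names are illustrative):
$ CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::account5678:role/app-role \
    --role-session-name jenkins-deploy \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
$ export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
$ export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
$ export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
$ aws ssm send-command --region us-east-1 --instance-ids i-xyz123 \
    --document-name AWS-RunShellScript --comment "IP config" \
    --parameters commands=ifconfig --output text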
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you just cmd/ctrl+F on that page and search for 'shell', you should get to the section you need. Hope this somewhat helps.
After installing awscli (the AWS command line tool), when I try to run it, I get this message in the terminal:
$ aws dynamodb describe-table --table-name MyTable
An error occurred (AccessDeniedException) when calling the DescribeTable operation:
User: arn:aws:iam::213352837455:user/someuser is not authorized to
perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:ap-northeast-1:213352837455:table/MyTable
$
But I don't know why I am considered to be logged in as someuser at this moment (in the terminal in particular, but even in AWS).
someuser is only one of the few users I set up on AWS a while ago.
What is the way to get logged in as the right user to use awscli?
If you are running the AWS Command-Line Interface (CLI) on an Amazon EC2 instance that has been assigned a role, then the CLI can use the permissions associated with that role.
If you are not running on an EC2 instance, then you can provide credentials via a credentials file (~/.aws/credentials) or an environment variable.
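For example, via environment variables (the values here are AWS's documentation placeholders):
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY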
The easiest way to configure the credentials is:
$ aws configure
See: Configuring the AWS CLI
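To check which identity the CLI is currently using (i.e. why you are "logged in" as someuser), run:
$ aws sts get-caller-identity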
Maybe your old credentials are still stored in ~/.aws.
Log in with the correct credentials:
aws configure
For more info see Configuring the AWS CLI in the official documentation.