I added a new profile named workflow using
aws configure
I have created a serverless application using
serverless create --template aws-nodejs --path ssm5
~/.aws/credentials
[workflow]
aws_access_key_id=<<My Access Key>>
aws_secret_access_key=<<My Secret Key>>
~/.aws/config
[profile workflow]
region = us-east-1
serverless.yml
service: ssm5
frameworkVersion: "2"
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
I tried to deploy the application using
serverless deploy --aws-profile workflow
Unfortunately, I am getting the error below.
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless Error ----------------------------------------
AWS profile "workflow" doesn't seem to be configured
I then set the environment variables below from the command prompt.
set AWS_PROFILE="workflow"
set AWS_ACCESS_KEY=<<My Access Key>>
set AWS_SECRET_ACCESS_KEY=<<My Secret Key>>
set AWS_SDK_LOAD_CONFIG=1
Unfortunately, that didn't help either and the error persists.
Note: I used terraform to provision infrastructure. Terraform picks the workflow profile successfully from the aforementioned config & credential files. Problem is only with serverless.
It would be really great if someone could help me with this.
I ran into this issue and after debugging the code, I found this:
https://github.com/serverless/serverless/blob/29f0e9c840e4b1ae9949925bc5a2a9d2de742271/lib/plugins/aws/provider.js#L129
Since AWS.SharedIniFileCredentials does not return the roleArn by default, sls assumes the profile is invalid. The fix is to set AWS_SDK_LOAD_CONFIG=1, as suggested in the comments. That variable tells the AWS SDK to load the profile from the shared config file.
Based on that I can assume that setting AWS_SHARED_CREDENTIALS_FILE might work as well since the other file should only contain the one profile.
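Based on that fix, here is a minimal shell sketch of the environment setup (the profile name workflow comes from the question; note that on Windows cmd, `set X="y"` stores the quotes as part of the value, which may itself be why the asker's `set AWS_PROFILE="workflow"` did not match the profile):

```shell
# Unix shell syntax; on Windows cmd, write `set AWS_PROFILE=workflow`
# without quotes, since cmd keeps the quotes in the value.
export AWS_PROFILE=workflow
export AWS_SDK_LOAD_CONFIG=1
echo "profile=$AWS_PROFILE load_config=$AWS_SDK_LOAD_CONFIG"
# then run: sls deploy
```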
Related
I'm trying to deploy my stack to AWS using cdk deploy my-stack. When doing it in my terminal window it works perfectly, but when I'm doing it in my pipeline I get this error: Need to perform AWS calls for account xxx, but no credentials have been configured. I have run aws configure and inserted the correct keys for the IAM user I'm using.
So again, it only works when I'm doing it manually in the terminal, but not when the pipeline is doing it. Anyone got a clue as to why I get this error?
I encountered the same error message on my Mac.
I had ~/.aws/config and credentials files set up. My credentials file had a user that didn't exist in IAM.
For me, the solution was to go back into IAM in the AWS Console, create a new dev-admin user, and attach the AdministratorAccess policy.
Then I updated my ~/.aws/credentials file with the new [dev-admin] user and added the keys, which are available under the "Security Credentials" tab on the user's Summary page. The credentials entry looks like this:
[dev-admin]
aws_access_key_id=<your access key here>
aws_secret_access_key=<your secret access key here>
I then went back into my project root folder and ran
cdk deploy --profile dev-admin -v
Not sure if this is the 'correct' approach but it worked for me.
If you are using a named profile other than 'default', you might want to pass the name of the profile with the --profile flag.
For example:
cdk deploy --all --profile mynamedprofile
If you are deploying a stack or a stage you can explicitly specify the environment you are deploying resources in. This is important for cdk-pipelines because the AWS account where the Pipeline construct is created can be different from the one where the resources get deployed. For example (C#):
Env = new Amazon.CDK.Environment()
{
    Account = "123456789",
    Region = "us-east-1"
}
See the docs
If you get this error you might need to bootstrap the account in question. And if you have a tools/ops account you need to trust this from the "deployment" accounts.
Here is an example with dev, prod and tools:
cdk bootstrap <tools-account-no>/<region> --profile=tools;
cdk bootstrap <dev-account-no>/<region> --profile=dev;
cdk bootstrap <prod-account-no>/<region> --profile=prod;
cdk bootstrap --trust <tools-account-no> --profile=dev --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=prod --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=tools --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
Note that you need to commit the changes to cdk.context.json
The only way that worked for me was to make sure that ~/.aws/config and ~/.aws/credentials don't both have a default profile section.
So if you remove the default profile from both files, it should work fine for you :)
Here is a sample of my ~/.aws/config (note: I don't use a default profile at all):
[profile myProfile]
sso_start_url = https://hostname/start#/
sso_region = REPLACE_ME_WITH_YOURS
sso_account_id = REPLACE_ME_WITH_YOURS
sso_role_name = REPLACE_ME_WITH_YOURS
region = REPLACE_ME_WITH_YOURS
output = yaml
And this is ~/.aws/credentials (note: I don't use a default profile at all):
[myProfile]
aws_access_key_id=REPLACE_ME_WITH_YOURS
aws_secret_access_key=REPLACE_ME_WITH_YOURS
aws_session_token=REPLACE_ME_WITH_YOURS
source_profile=myProfile
Note: if it still doesn't work, try keeping only one profile in config and credentials, holding your AWS configuration and credentials.
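One quick way to check for a duplicate default section is to print the INI section headers of both files. A hedged sketch, using throwaway files as stand-ins for the real ~/.aws/config and ~/.aws/credentials paths:

```shell
# Illustrative stand-ins for ~/.aws/config and ~/.aws/credentials
dir=$(mktemp -d)
printf '[profile myProfile]\nregion = us-east-1\n' > "$dir/config"
printf '[myProfile]\naws_access_key_id=REPLACE_ME\n' > "$dir/credentials"
# Print every section header; a [default] appearing in both files
# is the situation this answer says to avoid.
grep -h '^\[' "$dir/config" "$dir/credentials"
```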
I'm also new to this. I was adding sudo before the cdk bootstrap command; removing sudo made it work.
You can also run aws configure list to check which credentials are configured and where they are being loaded from.
If using a CI tool, check the output of cdk <command> --verbose for hints at the root cause for credentials not found.
In one case, the issue was simply that the ~/.aws/credentials file was missing (although it's not technically required if running on EC2) - more details in this answer.
I too had this issue.
When I checked ~/.aws/credentials, it contained some older account details, so I deleted that file and ran:
aws configure
cdk bootstrap aws://XXXXXX/ap-south-1
It worked.
We have a team of 3 to 4 members, so we wanted to do serverless deploy or update functions or resources using our own personal AWS credentials, without creating a new stack but just updating the existing resources. Is there a way to do that? I am aware that we can set up --aws-profile and different profiles for different stages. I am also aware that we could just divide the resources into microservices and each deploy or update our own parts. Any help is appreciated.
This can be done as below:
Add the profile configuration as below; I have named it devProfile.
service: new-service
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  profile: devProfile
Each individual would set their credentials on their own machine as below:
aws configure --profile devProfile
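If you prefer a non-interactive setup (e.g. in a bootstrap script), the same profile block can be appended to the shared credentials file directly. A sketch with placeholder keys and a throwaway file standing in for ~/.aws/credentials:

```shell
# Throwaway file in place of ~/.aws/credentials; keys are placeholders.
creds=$(mktemp)
cat >> "$creds" <<'EOF'
[devProfile]
aws_access_key_id=AKIAEXAMPLE
aws_secret_access_key=EXAMPLESECRET
EOF
# The section header is what `profile: devProfile` in serverless.yml matches.
grep '^\[' "$creds"
```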
If you have different credentials for different stages, the serverless snippet above can be parameterized as below:
serverless.yml
custom:
  stages:
    - local
    - dev
    - prod
  # default stage/environment
  defaultStage: local
  # default AWS region
  defaultRegion: us-east-1
  # config file / region / stage
  configFile: ${file(./config/${opt:region,self:provider.region}/${self:provider.stage}.yml)}

provider:
  ...
  stage: ${opt:stage, self:custom.defaultStage}
  ...
  profile: ${self:custom.configFile.aws.profile}
  ...
Create config/us-east-1/dev.yml:
aws:
  profile: devProfile
and config/us-east-1/prod.yml:
aws:
  profile: prodProfile
It sounds like you already know what to do but need a sanity check. So I'll tell you how I, and everyone else I know, handles this.
We prefix commands with an AWS_PROFILE env var declaration and we use --stage names.
E.g. AWS_PROFILE=mycompany sls deploy --stage shailendra.
Google "aws configure" for examples of how to set up the AWS CLI so it uses the AWS_PROFILE variable.
We also name the --stage with a unique ID, e.g. your name. This way, you and your colleagues each have individual CloudFormation stacks that work independently of each other, and there are no conflicts.
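The `VAR=value command` form in that example scopes the variable to the single invocation, so it never leaks into the rest of the shell session. A quick demonstration using `printenv` in place of `sls`:

```shell
# The variable is visible inside the one command it prefixes...
AWS_PROFILE=mycompany printenv AWS_PROFILE
# ...but is not set in the shell afterwards.
```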
When I type serverless deploy, this error appears:
ServerlessError: The security token included in the request is invalid.
I had to specify --aws-profile in my serverless deploy commands, like this:
sls deploy --aws-profile common
Can you provide more information?
Make sure that you've got the correct credentials in ~/.aws/config and ~/.aws/credentials. You can set these up by running aws configure. More info here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration
Also make sure that the IAM user in question has as an attached security policy that allows access to everything you need, such as CloudFormation.
Create a new user in AWS (don't use the root key).
On the user's "Security credentials" tab, generate a new access key.
Copy the values and run this:
serverless config credentials --overwrite --provider aws --key bar --secret foo
sls deploy
In my case, the localstack entry was missing from the serverless file.
I had everything that should be inside it, but it was all under custom instead of custom.localstack.
In my case, I added the region to the provider. I suppose it's not read from the credentials file.
provider:
  name: aws
  runtime: nodejs12.x
  region: cn-northwest-1
In my case, multiple credentials were stored in the ~/.aws/credentials file, and serverless was picking up the default credentials.
So I put the new credentials under [default] and removed the previous ones, and that worked for me.
To run the function on AWS you need to configure AWS with access_key_id and secret_access_key, but you might get this error if you want to run the function locally. For that, use this command:
sls invoke local -f functionName
It will run the function locally, not on AWS.
If none of these answers work, it may be because you need to add a provider in your Serverless Framework account and add your AWS keys there.
Does anyone using the Serverless framework know if it's possible to use the same serverless deploy file to deploy to all three cloud providers if the underlying code is capable?
Or are the serverless files specific to each cloud provider?
Thanks
Assuming all your function code is provider-agnostic...
Each provider has its own specific way of defining and configuring things, so you would expect the low-level details of the serverless.yml file for each to be different.
That being said, the high-level properties of serverless.yml are pretty much common to most, if not all, providers.
service:
provider:
plugins:
functions:
This would allow you to have one serverless.yml for all providers that simply references other YAML files depending on an environment variable. Assuming you have serverless-aws.yml, serverless-azure.yml, and serverless-google.yml for your provider-specific configuration, you should be able to use this in your serverless.yml,
service: ${file(serverless-${env:PROVIDER}.yml):service}
plugins: ${file(serverless-${env:PROVIDER}.yml):plugins}
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
functions:
  getSomething: ${file(serverless-${env:PROVIDER}.yml):functions.getSomething}
  createSomething: ${file(serverless-${env:PROVIDER}.yml):functions.createSomething}
  updateSomething: ${file(serverless-${env:PROVIDER}.yml):functions.updateSomething}
  deleteSomething: ${file(serverless-${env:PROVIDER}.yml):functions.deleteSomething}
Whenever you deploy, you can choose which provider to use by specifying the PROVIDER environment variable.
$ PROVIDER=aws sls deploy # Deploys to AWS
$ PROVIDER=azure sls deploy # Deploys to Azure
$ PROVIDER=google sls deploy # Deploys to GCP
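The ${env:PROVIDER} lookup simply resolves to a file name at deploy time. A tiny sketch of the name resolution those commands rely on:

```shell
# The env var picks which provider-specific YAML file gets referenced.
PROVIDER=aws
echo "serverless-${PROVIDER}.yml"   # -> serverless-aws.yml
```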
#dashmug's answer should work but doesn't. If you try to include the entire provider section, it doesn't get evaluated; i.e., sls print just spits out the unevaluated expression:
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
Trying to parameterize each key doesn't work either, because it changes the order, which seems to cause the deploy to fail:
//serverless.yml
...
provider:
  name: ${file(serverless-${env:PROVIDER}.yml):provider.name}
  runtime: ${file(serverless-${env:PROVIDER}.yml):provider.runtime}
  stage: ${file(serverless-${env:PROVIDER}.yml):provider.stage}
...
Results in this:
> sls print
service: my-crossplatform-service
provider:
  stage: prod
  name: aws
  runtime: nodejs8.10
I ended up just maintaining separate serverless.yml files and deploying with a little bash script that copies the appropriate file first:
#!/bin/bash
if [ "$1" != "" ]; then
  echo "copying serverless-$1.yml to serverless.yml and running serverless deploy"
  cp "serverless-$1.yml" serverless.yml && sls deploy
else
  echo "Please append provider, like 'deploy.sh aws' or 'deploy.sh azure'"
fi
Really wish you could just specify the config file as a deploy option, as requested here: https://github.com/serverless/serverless/issues/4485
I believe I might be missing a piece here.
I've added the aws account.
hal config provider aws account add spinnakermaster \
--account-id XXXXXXXXXXXX --assume-role role/spinnakerManaged
I've added the credentials for the AWS User.
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
And was prompted for the corresponding secret access key.
I've edited the config file in the .hal directory:
aws:
  enabled: false
  accounts:
    - name: spinnakermaster
      requiredGroupMembership: []
      accountId: 'ZZZZZZZZZZZZZZZZZZ'
      regions: []
      assumeRole: role/spinnakerManaged
  primaryAccount: spinnakermaster
  accessKeyId: XXXXXXXXXXXXXXXXXXXX
  secretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
  defaultKeyPairTemplate: '{{name}}-keypair'
  defaultRegions:
    - name: Canada
  defaults:
    iamRole: BaseIAMRole
And I am deploying Spinnaker with AWS support, which executes with one hiccup:
Problems in default.provider.aws.spinnakermaster:
- WARNING No validation for the AWS provider has been
implemented.
Which step/info/config am I missing?
Updated: this warning is OK and will not affect your executions.
My suggestions after installing Spinnaker on local Debian on EC2, Azure AKS, and Minnaker on EC2:
Please don't install a microservice architecture in a monolithic environment such as local Debian. It doesn't work.
Above all, focus on the correct AWS Managed and Managing IAM structure. Please follow Armory Spinnaker's instructions on how to achieve this Armory IAM structure.
Previous misleading answer: as of now (Spinnaker version 1.16.4), and based on the official documentation, there are two ways to manage the AWS infrastructure:
with an AWS key and secret
with an IAM role attached to the AWS EC2 instance running Spinnaker.
This error usually comes up when Halyard cannot recognize the key and secret for the corresponding account. Check the Halyard code documentation.
One way to resolve it, depending on your deployment type, is to add an AWS account with the corresponding key and secret values. Check the Halyard add-account documentation.
Documentation AWS Cloud Provider