How to change aws credentials in Serverless 1.0? - amazon-web-services

I'm trying to use Serverless 1.0 with several AWS credentials.
(On my PC, 1.3.0 is installed.)
I found some mentions on Stack Overflow and in GitHub issues that an "admin.env" file can change credentials, but I can't find how to write it or where to put it.
Is there any good documentation for admin.env?

First, create different profiles using the CLI (this works from 1.3.0 onwards; it won't work in 1.0.0, and it's not clear which version you're using since you mention both):
serverless config credentials --provider aws --key 1234 --secret 5678 --profile your-profile-name
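That command stores the keys as a named profile in your AWS credentials file; the result should look roughly like the following (the key values here are just the placeholders from the command above):
[your-profile-name]
aws_access_key_id = 1234
aws_secret_access_key = 5678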
Then, in your serverless.yml file, you can set the profile you want to use:
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  profile: your-profile-name
If you want to deploy with a different profile automatically depending on the stage, define variables and reference them in your serverless.yml file:
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.defaultStage}
  profile: ${self:custom.profiles.${self:provider.stage}}
custom:
  defaultStage: dev
  profiles:
    dev: your-profile-name
    prod: another-profile-name
Or you can reference your profile name in any other way; read about variables in the Serverless Framework docs. You can get the profile name to use from another file, from the CLI, or from the same file (as in the example above).
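For example, with the stage-to-profile mapping above (a usage sketch; the profile names are the placeholders from that example), the stage you pass at deploy time selects the matching profile:
serverless deploy --stage dev   # resolves to your-profile-name
serverless deploy --stage prod  # resolves to another-profile-name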
More about the variables:
https://serverless.com/framework/docs/providers/aws/guide/variables/

Related

How to manage different Serverless (with AWS Lambda) environments (ie "dev" and "prod")

I want to create a separate 'dev' AWS Lambda with my Serverless service.
I have deployed my production, 'prod', environment and tried to then deploy a development, 'dev', environment so that I can trial features without affecting customer experience.
In order to deploy the 'dev' environment I have:
Created a new serverless-dev.yml file
Updated the stage and profile fields in my .yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-2
  profile: dev
  memorySize: 128
  timeout: 30
Also updated the resources.Resources.<Logical ID>.Properties.RoleName value, because if I try to use the same role as my 'prod' Lambda, I get this message: clearbit-lambda-role-prod already exists in stack
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-dev # Change this name
Ran the command: sls deploy -c serverless-dev.yml
Is this the conventional method to achieve this? I can't find anything in the documentation.
Serverless Framework supports stages out of the box. You don't need a separate configuration; you can just specify --stage <name-of-stage> when running e.g. sls deploy and it will automatically use that stage. All resources created by the Framework under the hood include the stage in their names or identifiers. If you define extra resources in the resources section, you need to change them or make sure their names include the stage. You can get the current stage in configuration with ${sls:stage} and use it to construct names that are, for example, prefixed with the stage.
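For example (a minimal sketch reusing the role name from the question), an extra resource can embed the stage in its name so one configuration serves every stage:
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-${sls:stage}
Then sls deploy --stage dev and sls deploy --stage prod create separate roles without needing a second configuration file.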

Serverless :: AWS profile "workflow" doesn't seem to be configured

I added a new profile, workflow, using
aws configure
I have created a serverless application using
serverless create --template aws-nodejs --path ssm5
/.aws/credentials
[workflow]
aws_access_key_id=<<My Access Key>>
aws_secret_access_key=<<My Secret Key>>
/.aws/config
[profile workflow]
region = us-east-1
serverless.yml
service: ssm5
frameworkVersion: "2"
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
I tried to deploy the application using
serverless deploy --aws-profile workflow
Unfortunately, I am getting the error below.
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless Error ----------------------------------------
AWS profile "workflow" doesn't seem to be configured
I then set the environment variables below from the command prompt.
set AWS_PROFILE="workflow"
set AWS_ACCESS_KEY=<<My Access Key>>
set AWS_SECRET_ACCESS_KEY=<<My Secret Key>>
set AWS_SDK_LOAD_CONFIG=1
Unfortunately, that didn't help either and the error persists.
Note: I used Terraform to provision the infrastructure. Terraform picks up the workflow profile successfully from the aforementioned config and credentials files; the problem is only with Serverless.
It would be really great if someone could help me with this.
I ran into this issue and after debugging the code, I found this:
https://github.com/serverless/serverless/blob/29f0e9c840e4b1ae9949925bc5a2a9d2de742271/lib/plugins/aws/provider.js#L129
Since AWS.SharedIniFileCredentials does not return the roleArn by default, sls assumes the profile is invalid. The fix is to set AWS_SDK_LOAD_CONFIG=1 as suggested in the comments; that variable tells the AWS SDK to load the profile when you are using a shared config file.
Based on that, I assume that setting AWS_SHARED_CREDENTIALS_FILE might work as well, since the other file should only contain the one profile.
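Put together (a sketch; assuming the Windows command prompt used in the question), the suggested fix is to have the variable set in the shell that runs the deploy:
set AWS_SDK_LOAD_CONFIG=1
serverless deploy --aws-profile workflow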

Is there a way to use multiple aws profiles to deploy(update) serverless stack?

We have a team of 3 to 4 members, so we wanted to run serverless deploy, or update functions or resources, using our own personal AWS credentials without creating a new stack, just updating the existing resources. Is there a way to do that? I am aware that we can set up --aws-profile and different profiles for different stages. I am also aware that we could just divide the resources into microservices and each deploy or update our own parts. Any help is appreciated.
This can be done as below:
Add the profile configuration as below; I have named it devProfile.
service: new-service
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  profile: devProfile
Each individual would then set their credentials on their own machine as below:
aws configure --profile devProfile
If you have different credentials for different stages, the above serverless snippet can be parameterized as below:
serverless.yml
custom:
  stages:
    - local
    - dev
    - prod
  # default stage/environment
  defaultStage: local
  # default AWS region
  defaultRegion: us-east-1
  # config file / region / stage
  configFile: ${file(./config/${opt:region,self:provider.region}/${self:provider.stage}.yml)}
provider:
  ...
  stage: ${opt:stage, self:custom.defaultStage}
  ...
  profile: ${self:custom.configFile.aws.profile}
  ...
Create config/us-east-1/dev.yml
aws:
  profile: devProfile
and config/us-east-1/prod.yml
aws:
  profile: prodProfile
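With that in place (a usage sketch; assuming the stage and region values shown above), the profile is resolved from the matching config file at deploy time:
sls deploy --stage dev --region us-east-1    # config/us-east-1/dev.yml -> devProfile
sls deploy --stage prod --region us-east-1   # config/us-east-1/prod.yml -> prodProfile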
It sounds like you already know what to do but need a sanity check, so I'll tell you how I, and everyone else I know, handle this.
We prefix commands with the AWS_PROFILE env var and we use --stage names.
E.g. AWS_PROFILE=mycompany sls deploy --stage shailendra.
Google aws configure for examples of how to set up the AWS CLI so that it uses the AWS_PROFILE var.
We also give the --stage a unique ID, e.g. your name. This way, you and your colleagues all have individual CloudFormation stacks that work independently of each other and there will be no conflicts.
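For instance (a sketch; the stage names here are hypothetical), two colleagues sharing the same AWS profile still deploy fully separate stacks:
AWS_PROFILE=mycompany sls deploy --stage alice
AWS_PROFILE=mycompany sls deploy --stage bob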

Issues Creating Environments For AWS Lambda Service In CodeStar And CodePipeline

I used AWS CodeStar to create a new application with the "Express.js Aws Lambda Webservice" CodeStar template. This was great because it set me up with a simple CI/CD pipeline using AWS CodePipeline. By default the pipeline has 3 steps: grabbing the source code from a git repo, running the build step, and then deploying to the "dev" environment.
My issue is that I can't set it up so that my pipeline has multiple environments: dev, staging, and prod.
My current deploy step has 2 actions: GenerateChangeSet and ExecuteChangeSet. Here are the configurations for the actions in the original dev environment build step, which work great:
I've created a new deploy stage at the end of my pipeline to deploy to staging, but honestly I'm not sure how to change the configurations. I'm thinking ultimately I want to be able to go into the AWS Lambda section of the AWS console and see three independent Lambda functions: binance-bot-dev, binance-bot-staging, binance-bot-prod. Each of these I could then trigger with CloudWatch scheduled events or expose with its own API Gateway URL.
This is the configuration that I tried to use for a new deployment stage:
I'm really not sure if this configuration is correct and what exactly I should change in order to deploy in the way I want.
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
Also, I'm pointing to a different template.yml file in the project. The original template.yml looks like this:
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: dev
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
For template.staging.yml I use the exact same config, except I changed "Dev:" to "Staging:" under "Resources" and I also changed the value of the NODE_ENV environment variable. So I'm basically wondering: is this the correct configuration for what I'm trying to achieve?
Assuming that everything in the configuration is correct, I then need to troubleshoot this error. With everything set as described above I can run my pipeline, but when it gets to my staging build step the GenerateChange_Staging action fails with this error message:
Action execution failed User:
arn:aws:sts::954459734159:assumed-role/CodeStarWorker-binance-bot-CodePipeline/1524253307698
is not authorized to perform: cloudformation:DescribeStacks on
resource:
arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda-staging/*
(Service: AmazonCloudFormation; Status Code: 403; Error Code:
AccessDenied; Request ID: dd801664-44d2-11e8-a2de-8fa6c42cbf86)
It seems to me from this error message that I need to add the "cloudformation:DescribeStacks" permission to my "CodeStarWorker-binance-bot-CodePipeline" role, so I go to IAM -> Roles and click on the CodeStarWorker-binance-bot-CodePipeline role. However, when I drill into the policy information for CloudFormation, it looks like this role already has permission for "DescribeStacks"!
If anyone could point out what I'm doing wrong, or offer any guidance on understanding and thinking about how to do multiple environments with AWS CodePipeline, that would be great. Thanks!
UPDATE:
I changed the "Stack name" in my Deploy_To_Staging pipeline stage back to "awscodestar-binance-bot-lambda". However, I then get this error from the GenerateChange_Staging action:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
UPDATE 2:
In the root of my project I have the buildspec.yml file that was generated by CodeStar. It looks like this:
version: 0.2
phases:
  install:
    commands:
      # Install dependencies needed for running tests
      - npm install
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory
      - npm test
  build:
    commands:
      # Use AWS SAM to package the application using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
      - aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
      - aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I then added this to the CloudFormation section:
Then I added this to the "build: -> commands:" section:
- aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
- aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
And I added this to the "files:"
template-export.staging.yml
template-export.prod.yml
HOWEVER, I am still getting an error that "binance-bot-BuildArtifact does not exist".
Here is the full error after making the buildspec.yml change:
Action execution failed Invalid TemplatePath:
binance-bot-BuildArtifact::template-export.staging.yml. Artifact
binance-bot-BuildArtifact doesn't exist
It seems very strange to me that I can access "binance-bot-BuildArtifact" in one stage of the pipeline but not another. Could it be that the build artifact is only available to the one pipeline stage directly after the build stage? Can someone please help me to be able to access this "binance-bot-BuildArtifact"? Thanks!
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
You should use a unique stack name for each environment. If you didn't, you would be replacing your 'dev' environment with your 'staging' environment, and so forth.
So, I'm basically wondering is this the correct configuration for what I'm trying to achieve?
I don't think so. You should use the exact same template for each environment. In order to change the environment name for each of your deploys, you can use the 'Parameter Overrides' field to choose the correct value for your 'Environment' parameter.
it looks like this role already has permissions for "DescribeStacks"!
Could the issue here be that your IAM role only has DescribeStacks permission for the dev stack? It looks like it does not have permission to describe the staging stack. Maybe you can add a 'wildcard'/asterisk to the policy so that it matches all of your stack names?
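As an illustration only (a sketch, not the actual CodeStar-managed policy, which you would need to inspect in IAM), a statement that covers both stacks could wildcard the stack name, shown here in CloudFormation-style YAML:
- Effect: Allow
  Action:
    - cloudformation:DescribeStacks
  Resource:
    - arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda*/*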
Could it be that the build artifact is only available to the one pipeline stage directly after the build stage?
No, that has not been my experience with CodePipeline. Unfortunately I don't know why it's telling you that your artifact can't be found.
robrtsql has already provided some good advice in terms of using the same template in both stages.
You might find this walkthrough useful.
Basically, it describes adding a CloudFormation "template configuration", which allows you to pass parameters to the CloudFormation stack.
This will allow you to deploy the same template in both your dev and prod environments, but also allow you to tell the difference between a dev deployment and a prod deployment, by choosing a different template configuration in each stage.
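A minimal sketch of that idea, applied to the template.yml shown earlier (the parameter name and default here are illustrative): declare an Environment parameter and reference it instead of hard-coding dev, then supply a different value in each pipeline stage via a template configuration file or the Parameter Overrides field.
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
  Environment:
    Type: String
    Default: dev
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: !Ref Environment
      # ...Role and Events unchanged from the original template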

Serverless deploying to AWS, Azure or GCP

Does anyone using the Serverless framework know if it's possible to use the same serverless deploy file to deploy to all three cloud providers if the underlying code is capable?
Or are the serverless files specific to each cloud provider?
Thanks
Assuming all your function code is provider-agnostic...
Each provider has its own specific way of defining and configuring things, so you would expect the low-level details of the serverless.yml file for each to be different.
That being said, the high-level properties of serverless.yml are pretty much common for most, if not all, providers:
service:
provider:
plugins:
functions:
This would allow you to have one serverless.yml for all providers that simply references other YAML files depending on an environment variable. Assuming you have serverless-aws.yml, serverless-azure.yml, and serverless-google.yml for your provider-specific configuration, you should be able to use this in your serverless.yml,
service: ${file(serverless-${env:PROVIDER}.yml):service}
plugins: ${file(serverless-${env:PROVIDER}.yml):plugins}
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
functions:
  getSomething: ${file(serverless-${env:PROVIDER}.yml):functions.getSomething}
  createSomething: ${file(serverless-${env:PROVIDER}.yml):functions.createSomething}
  updateSomething: ${file(serverless-${env:PROVIDER}.yml):functions.updateSomething}
  deleteSomething: ${file(serverless-${env:PROVIDER}.yml):functions.deleteSomething}
Whenever you deploy, you can choose which provider to use by specifying the PROVIDER environment variable.
$ PROVIDER=aws sls deploy # Deploys to AWS
$ PROVIDER=azure sls deploy # Deploys to Azure
$ PROVIDER=google sls deploy # Deploys to GCP
@dashmug's answer should work but doesn't. If you try to include the entire provider section, it doesn't get evaluated, i.e. sls print just spits out the un-evaluated expression:
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
Trying to parameterize each key doesn't work either, because it changes the order of the keys, which seems to cause the deploy to fail:
# serverless.yml
...
provider:
  name: ${file(serverless-${env:PROVIDER}.yml):provider.name}
  runtime: ${file(serverless-${env:PROVIDER}.yml):provider.runtime}
  stage: ${file(serverless-${env:PROVIDER}.yml):provider.stage}
...
Results in this:
> sls print
service: my-crossplatform-service
provider:
  stage: prod
  name: aws
  runtime: nodejs8.10
I ended up just maintaining separate serverless.yml files and deploying with a little bash script that copies the appropriate file first:
#!/bin/bash
if [ "$1" != "" ]; then
  echo "copying serverless-$1.yml to serverless.yml and running serverless deploy"
  cp serverless-$1.yml serverless.yml && sls deploy
else
  echo "Please append provider, like 'deploy.sh aws' or 'deploy.sh azure'"
fi
Really wish you could just specify the config file as a deploy option, as requested here: https://github.com/serverless/serverless/issues/4485