I am creating an Electron app that connects to AWS services. Before services can be accessed, the users need to authenticate using AWS Cognito. In order for users to authenticate, I need to hardcode the app region, user pool id, identity pool id, and app client id in the client app. Hardcoding these is a terrible idea because the values will change from client to client.
In my app the users NEVER interact directly with the database, otherwise I would have them query the database for this data. Users connect to an Elastic Beanstalk endpoint and my EC2 instances are the only ones allowed to communicate with the database. This improves security.
What is the best way to avoid hard coding this kind of data?
Generally, config should be stored in the environment (see https://12factor.net/).
What this means differs between environments, and I know nothing about Electron, but your configuration values will be known at build time, so when you are building your clients you could generate an environment.js file whose values can be referenced from your app.
Example using CloudFormation and CodePipeline
So, perhaps you are using CloudFormation to provision your Cognito infrastructure. In this case, you can export variables that can be referenced by other CloudFormation templates.
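For instance, the stack that provisions the user pool might export its id like this (a sketch; the logical resource name UserPool and the export name 'user-pool-id' are assumptions chosen to line up with the !ImportValue in the fragment further down):

Resources:
  UserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: my-app-users          # hypothetical name

Outputs:
  UserPoolId:
    Value: !Ref UserPool
    Export:
      Name: 'user-pool-id'                # imported elsewhere via !ImportValue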
These exported values (app client id, user pool id, identity pool id, etc.) can then be injected into a CloudFormation template that defines a CodePipeline instance you might use to build your Electron app, of which the following could be a fragment:
...
  BuildElectronProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: electron-build
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        EnvironmentVariables:
          -
            Name: AWS_REGION
            Value: !Ref 'AWS::Region'
          -
            Name: USER_POOL_ID
            Value: !ImportValue 'user-pool-id'
          -
            Name: SERVER_URL
            Value: !Join
              - ''
              -
                - !If [ IsProd, 'https://', 'http://' ]
                - !FindInMap [ Environments, !Ref Environment, ServerUrl ]
...
Then, when you build your app, you can use the environment variables in CodeBuild to create the environment.js file that is included as part of your distributable electron build.
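For example, the build could run a small Node script along these lines (a sketch; the output path and variable names are assumptions, not part of any particular toolchain):

// scripts/write-env.js - run during the CodeBuild step; turns build-time
// environment variables into a module the Electron app can import
const fs = require('fs');

const config = {
  region: process.env.AWS_REGION,
  userPoolId: process.env.USER_POOL_ID,
  serverUrl: process.env.SERVER_URL,
};

fs.writeFileSync(
  'src/environment.js',
  `module.exports = ${JSON.stringify(config, null, 2)};\n`
);

In buildspec.yml this could be invoked with something like node scripts/write-env.js before the packaging step.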
Related
I want to create a separate 'dev' AWS Lambda with my Serverless service.
I have deployed my production, 'prod', environment and tried to then deploy a development, 'dev', environment so that I can trial features without affecting customer experience.
In order to deploy the 'dev' environment I have:
Created a new serverless-dev.yml file
Updated the stage and profile fields in my .yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-2
  profile: dev
  memorySize: 128
  timeout: 30
Also updated the resources.Resources.<Logical Id>.Properties.RoleName value, because if I try to use the same role as my 'prod' Lambda, I get this message: clearbit-lambda-role-prod already exists in stack
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-dev # Change this name
Run command: sls deploy -c serverless-dev.yml
Is this the conventional method to achieve this? I can't find anything in the documentation.
Serverless Framework has support for stages out of the box. You don't need a separate configuration; you can just specify --stage <name-of-stage> when running e.g. sls deploy and it will automatically use that stage. All resources created by the Framework under the hood include the stage in their names or identifiers. If you are defining extra resources in the resources section, you need to change them, or make sure they include the stage in their names. You can get the current stage in configuration with ${sls:stage} and use that to construct names that are e.g. prefixed with the stage, as in the sketch below.
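For instance, an extra DynamoDB table defined under resources could be named per stage like this (a sketch; the table and attribute names are made up):

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${sls:stage}-orders     # e.g. dev-orders, prod-orders
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH

Deploying with sls deploy --stage dev and sls deploy --stage prod then produces two independent stacks.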
I am having a problem deploying my application in AWS with CloudFormation using the Serverless Framework.
I am using the "Single API Gateway Project" strategy for the deployment. I have my Backend divided into services, each with its directory inside the repo and its serverless.yml file.
To have a single API Gateway shared by all of them, I first deploy a root service that creates said API Gateway and outputs the ApiGatewayRestApiId and ApiGatewayRestApiRootResourceId, as described in the following Serverless Framework document:
https://www.serverless.com/framework/docs/providers/aws/events/apigateway#easiest-and-cicd-friendly-example-of-using-shared-api-gateway-and-api-resources
My root service that creates the API Gateway is something like:
...
resources:
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ${self:provider.stage}-ApiGatewayRestApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ${self:provider.stage}-ApiGatewayRestApiRootResourceId
...
Then, from the rest of the microservices, I use those values by importing them as follows:
service: name
custom:
  APP_ENV: ${env:SERVERLESS_APP}
provider:
  apiGateway:
    restApiId: !ImportValue ${env:${self:custom.APP_ENV}_API_STAGE}-ApiGatewayRestApiId
    restApiRootResourceId: !ImportValue ${env:${self:custom.APP_ENV}_API_STAGE}-ApiGatewayRestApiRootResourceId
...
I never had any problems deploying until today, when I tried to deploy only the root service. The error I am having is the following:
An error occurred: root-beta - Template error: RootResourceId attribute of API Gateway RestAPI d8zc1j912b doesn't exist.
I checked everywhere but I can't find the reason why I get this error.
Operating System: linux
Node Version: 13.10.0
Framework Version: 1.60.0
PluginVersion: 3.8.4
SDK Version: 2.3.2
Components Core Version: 1.1.2
Components CLI Version: 1.6.0
Based on a comment I just read on Github, this is caused by an API that consists of more than 200 resources in total. CloudFormation (which is used to deploy serverless resources) only retrieves up to 8 pages of 25 resources to locate the root resource id, so if it happens to be on page 9+, it'll throw that error.
I'm not quite sure what the order of these resources is. In my own tests, sometimes it works, and other times it fails.
Original text from AWS (copied here in case the Github issue disappears):
... CloudFormation is making an API call with action “apigateway:GetResources” in order to get the root resource ID. This corresponds to the following API call [1]. This is a paginated response, meaning that it only returns resources 25 at a time (by default). Furthermore, to prevent throttling and time out issues, this is only checked a maximum of 8 pages deep. This means, that this root resource ID must be in the first 200 responses in order to get the root resource ID. Otherwise, it will fail to get this value.
...
The service team is aware of this issue and are working on a way to fix this. However, there is no ETA for this to be available.
Source: https://github.com/serverless/serverless/issues/9036#issuecomment-1047240189
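If you want to confirm that you are over the limit, you can count the API's resources and locate the root resource id yourself with the AWS CLI (the REST API id below is just the one from the error message above):

aws apigateway get-resources --rest-api-id d8zc1j912b --limit 500 --query 'length(items)'
aws apigateway get-resources --rest-api-id d8zc1j912b --limit 500 --query "items[?path=='/'].id" --output text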
I'm using AWS Copilot, but I think I can generalize this question to App Runner. I'm trying to get environment variables set from Parameter Store but having no luck. My AWS Copilot manifest YAML sets them the way I saw in examples, but in the resulting App Runner configuration, in production, they seem to be interpreted as literals and not as Parameter Store values.
Any idea how to properly connect App Runner to Parameter Store?
Unfortunately, App Runner currently doesn't provide an intrinsic way to integrate with SSM Parameter Store the way ECS does. As a result, Copilot doesn't support the secret section for Request-Driven services either (refer to the Copilot doc here). As for environment variables, they are what you define in the manifest and will be injected as literals.
However, there is a workaround in Copilot that allows your app to use secrets stored in SSM Parameter Store. You can specify an addon template (e.g., policy.yaml) and put it in the copilot/${svc name}/addons/ local directory; the following template allows the App Runner service to retrieve values from SSM Parameter Store:
Parameters:
  App:
    Type: String
    Description: Your application's name.
  Env:
    Type: String
    Description: The environment name your service, job, or workflow is being deployed to.
  Name:
    Type: String
    Description: The name of the service, job, or workflow being deployed.

Resources:
  MySSMPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: SSMActions
            Effect: Allow
            Action:
              - "ssm:GetParameters"
            Resource: !Sub 'arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/*'

Outputs:
  MySSMPolicyArn:
    Value: !Ref MySSMPolicy
After that, in your code you can use the AWS SDK to call the SSM API and retrieve any secrets you defined before, along the lines of the sketch below. Let me know if you have any more questions!
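A minimal sketch of such a lookup with the Node.js AWS SDK (v2); the parameter name /my-app/db-password is a made-up example:

// Retrieve a decrypted value from SSM Parameter Store at runtime
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

async function getSecret(name) {
  const result = await ssm
    .getParameter({ Name: name, WithDecryption: true })
    .promise();
  return result.Parameter.Value;
}

// e.g. const dbPassword = await getSecret('/my-app/db-password');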
I have written a service using AWS Step Functions. I would like to integrate this into our application's existing Elastic Beanstalk development process, wherein we have distinct dev, staging, and production applications. Each of these stages has app-specific environment variables, which I would like to pull into my Lambda functions as well.
I am not presently using SAM but I can port over if necessary to accomplish this.
The following is a simplified configuration mirroring my serverless.yml file.
service:
  name: small-service

plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-2
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::S3-bucket-name", "/*" ] ] }

functions:
  connect:
    handler: handler.connect

stepFunctions:
  stateMachines:
    smallService:
      name: small-service-${self:provider.stage}
      definition:
        Comment: Service that connects to things
        StartAt: Connect
        States:
          Connect:
            Type: Task
            Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${self:provider.stage}-connect
            End: true
How can I dynamically deploy the step functions into different beanstalk applications? How can I access ElasticBeanstalk environment properties from within Step Functions?
Is there a better way to import .env environment variables dynamically into a Serverless application outside of EB? We are integrating the service into a larger AWS application's development workflow; is there a more "serverless" way of doing this?
Move your environment variables into SSM Parameter Store. Then you can either
reference SSM parameters in your serverless.yaml, or
fetch SSM parameters in the beginning of each Lambda invocation (see e.g. here)
Note that the former method requires re-deploying your Lambda to receive the latest SSM parameters whereas the latter always fetches the latest parameter values.
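A minimal sketch of the first option, referencing an SSM parameter directly from serverless.yml (the parameter path is a made-up example):

provider:
  name: aws
  runtime: nodejs8.10
  environment:
    # resolved at deploy time, so redeploy to pick up new values
    DB_PASSWORD: ${ssm:/small-service/${self:provider.stage}/db-password}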
Is it possible to get a service account key that is deployed via Google Deployment Manager (iam.v1.serviceAccounts.key resource) as a result of a request to DM?
I have seen an option to expose it in outputs (https://cloud.google.com/deployment-manager/docs/configuration/expose-information-outputs), but can't see any way to get the key in the response of the Deployment Manager insert/update API methods.
To fetch the key, you can set up an output or a reference to the privateKeyData field in the same configuration that creates the key. If there is no reference or output to that field, DM will ignore it.
Example config looks like:
outputs:
- name: key
  value: $(ref.iam-key.privateKeyData)
resources:
- name: iam-account
  type: iam.v1.serviceAccount
  properties:
    accountId: iam-account
    displayName: iam-account-display
- name: iam-key
  type: iam.v1.serviceAccounts.key
  properties:
    parent: $(ref.iam-account.name)
Run the above YAML file with:
gcloud deployment-manager deployments create [DEPLOYMENT_NAME] --config key.yaml
This creates a service account with an associated key. You can look the key up in the manifest associated with the configuration. You can also access Deployment -> Deployment properties -> Layout in the Cloud Console.
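Note that the privateKeyData in the output is base64-encoded; once you have copied the value out of the layout, you can decode it back into the usual JSON key file, for example:

echo '<privateKeyData value>' | base64 --decode > key.json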