I am very confused regarding how to set and access API secrets in a Next.js app within an AWS Amplify project.
The scenario: I have a private API key used to fetch data from an API. Obviously this is a secret and I don't want to share it in my GitHub repo or expose it in the browser. I create a .env.local file and place my secret there:
API_KEY="qwerty123"
I am able to access this key in my code using process.env.API_KEY
Here is an example fetch request with that API Key: https://developer.nps.gov/api/v1/parks?${parkCode}&api_key=${process.env.API_KEY}
This works perfectly when I run yarn dev and yarn build -> yarn start
This is the message I get when I run yarn start
next start
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/tmo/Desktop/Code/projects/visit-national-parks/.env.local
The env is loaded and able to be called on my local machine.
However,
When I push this code to GitHub and start the Build process in AWS Amplify, the app builds, but the API fetch calls do not work. I get a 500 Server Error.
This is what I have done to try and solve this issue:
1. Added my API_KEY in the Environment variables tab in Amplify
2. Updated my Build settings:
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - API_KEY=${API_KEY} # Added my API_KEY from the environment variables tab in Amplify
        - yarn run build
I am not sure what else to do. After building the app again, I still get a 500 Server Error.
Here is the live amplify app with the server error.
We're working on something similar right now. Our dev designed it so it reads a .env file.
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo API_KEY=$API_KEY > .env
        - echo OTHERKEY=$OTHER_KEY >> .env
        - yarn run build
We were able to pick it up and pass it to AWS' DynamoDB Client SDK.
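For a Next.js app the file name matters: next build reads .env.production, so a variation of the same idea is to pipe the variable through in the build phase. A sketch, assuming API_KEY is set in Amplify's Environment variables tab:

frontend:
  phases:
    build:
      commands:
        # write the build-time variable where next build / next start will find it
        - env | grep -e API_KEY >> .env.production
        - yarn run build

Also keep in mind that process.env.API_KEY is only defined in server-side code; anything the browser evaluates needs the NEXT_PUBLIC_ prefix (or a server-side API route that keeps the key private).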
Not sure if it's your call or not, but yarn can be fickle in our Amplify projects sometimes, so we usually resort to using npm if it starts acting up.
Related
I am trying to access my Secrets Manager values as environment variables in the build of my Amplify application.
I have followed AWS documentation and community threads/videos.
I have included it in my build spec file, amplify.yml, as below, per the guide: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
version: 1
env:
  secrets-manager:
    TOKEN: mySecret:myKey
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo "$TOKEN"
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*
I have attached Secrets Manager access policies to my Amplify service role, per community threads and this YouTube video:
https://youtu.be/jSY7Xerc8-s
However, echo $TOKEN returns blank
Is there no way to access Secrets Manager key-values in the Amplify build settings (https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html) the same way you can in CodeBuild (see the guide above)?
So far I have only been able to store my sensitive environment variables with Parameter Store (following this guide: https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html), but from my understanding this does not seem secure: the values are printed in plain text when echoed, so they are exposed in build logs, whereas values from Secrets Manager would be censored as '***'.
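The only workaround I can see for now is fetching the value myself in preBuild with the AWS CLI. A sketch, assuming the Amplify service role allows secretsmanager:GetSecretValue and the build image ships the AWS CLI:

frontend:
  phases:
    preBuild:
      commands:
        - yarn install
        # SecretString is the whole JSON document, so myKey still has to be extracted from it
        - export TOKEN=$(aws secretsmanager get-secret-value --secret-id mySecret --query SecretString --output text)

But unlike the env.secrets-manager mechanism, nothing masks this value if a later command echoes it.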
I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance, and a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
  - name: 'python:3.7'
    entrypoint: python3
    args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
  - name: 'python:3.7'
    entrypoint: python3
    args: ['./manage.py', 'migrate', '--noinput']
  - name: 'python:3.7'
    entrypoint: python3
    args: ['./manage.py', 'collectstatic', '--noinput']
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
timeout: "3000s"
The deployment goes fine and the app can connect to the database. But when I try to load a page I get the following error:
"...import psycopg2 as Database File "/srv/psycopg2/__init__.py", line 50, in from psycopg2._psycopg import ( # noqa ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory"
Another interesting thing: if I deploy my app with 'gcloud app deploy' (not through Cloud Build), everything is fine; I do not get the error above and my app can communicate with the database.
I am pretty new to gcloud, so maybe I missed something basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Am I pip-installing my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build deployment and the manual one?
From what I see, you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically installs the dependencies specified in requirements.txt and executes the entrypoint you defined in app.yaml. This has the benefit of not having to manually trigger the installation of dependencies. The first steps of your cloudbuild are not affecting App Engine's runtime, since that runtime is configured by the aforementioned files once they're deployed.
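For reference, a minimal app.yaml for the python37 standard runtime looks roughly like this (a sketch; mysite is a placeholder for your Django project's package name, and gunicorn must be listed in requirements.txt):

runtime: python37
entrypoint: gunicorn -b :$PORT mysite.wsgi

App Engine installs everything in requirements.txt into the runtime for you, which is why vendoring packages with pip install -t . is unnecessary there.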
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It could be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is meant to run gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required steps for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
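That said, if you do want the deployment to stay behind a Cloud Build trigger, the pipeline can be reduced to the deploy step alone. A sketch:

steps:
  # App Engine resolves requirements.txt itself, so no pip install step is needed here
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
timeout: "3000s"

migrate and collectstatic would then run as separate steps only if the build environment can actually reach your Cloud SQL instance.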
Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an AppEngine "standard" instance.
Currently our company is creating custom software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we have some customers who require an external installation.
Since some of them still run Windows Server (2008, though), I cannot install a proper Docker environment there; we need to install an Apache Tomcat and run the application inside it.
Question: how do I deal with that? I need a pipeline that creates both a Docker image and a WAR file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war:

image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines lets you choose the right one for said client.
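If both artifacts have to come from the same repository, a middle ground is one .gitlab-ci.yml with two independent packaging jobs that are triggered explicitly. A sketch (the Docker job assumes a runner with Docker-in-Docker available and uses GitLab's built-in registry variables):

package_war:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    paths:
      - target/*.war
  when: manual

package_docker:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  when: manual

when: manual makes each delivery an explicit choice, which keeps the two installation processes from being mixed up.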
What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running over there. I installed all the components needed to run my project: Swift, Vapor, nginx.
I have managed to install the GitLab runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it hangs on "running" for the build stage, as visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same Permission denied.
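For reference, this is how I checked which user the runner actually executes as and who owns the build directory (sketch):

# user the gitlab-runner process runs as
ps -o user= -C gitlab-runner
# ownership of the builds directory
ls -ld /home/gitlab-runner/builds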
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But that never passes to the next stage, e.g. deploy. I waited more than 14 hours for it, without result.
And... I have a few more questions:
The GitLab runner creates builds under /home/gitlab-runner/builds/, and in this location every new job gets its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, in which my project is checked out and the commands are executed. So what happens when the first instance is running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different every time I deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How to start a service using GitLab runner
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly: the GitLab CI documentation recommends using dpl for deployment. The GitLab runner runs the tests, and then the process should end; the runner is designed to kill all processes it created after finishing each build, and it cannot keep anything running outside its build directory.
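In practice this means the deploy job should hand the app over to a process manager that lives outside the runner's process tree, instead of running vapor run in the job itself. A sketch using systemd (assuming an app.service unit exists on the host and the gitlab-runner user is allowed to restart it via sudoers):

job_run_app:
  stage: deploy
  script:
    # systemd owns the process, so it survives the runner's post-build cleanup
    - sudo systemctl restart app.service
  environment:
    name: production
  only:
    - master

This also answers the port question: restarting the unit replaces the old instance instead of fighting it for the port.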
I am trying to connect our CircleCI with BrowserStack and run our integration and unit tests not only with PhantomJS but also on real Firefox and Internet Explorer, using the BrowserStack service.
I tried to configure browserstack-cli. I can run the tests from CircleCI via a tunnel on BrowserStack, but the results never report back to the CircleCI server.
Could you please share your experience if you have already played with this stack? Thank you very much!
The solution is to use the BrowserStackLocal and browserstack-cli tools together. The 64-bit Linux version of BrowserStackLocal builds up the tunnel from the CircleCI server to the BrowserStack server. After that we can use browserstack-cli to launch browsers and run tests from testem.
Download BrowserStackLocal
and place it in a .browserstack folder in your project.
64-bit Linux version of BrowserStackLocal: http://www.browserstack.com/local-testing (Binaries)
Create a script,
which will run and create the settings for browserstack-cli. You have to set up environment variables in CircleCI, where you can keep your access details secret. Let's call this file runthis.sh and save it in the .browserstack folder. This script will run your BrowserStackLocal binary as well, so the tunnel will exist.
#!/bin/bash
echo "{\"username\":\"`echo $BS_USER`\", \"password\":\"`echo $BS_PASSWORD`\", \"privateKey\": \"`echo $BS_KEY`\", \"apiKey\":\"`echo $BS_KEY`\"}" >> ~/.browserstack/browserstack.json
./.browserstack/BrowserStackLocal $BS_KEY &
CircleCI config
(circle.yml) file mainly depends on your project. We have to move the .browserstack folder to the home folder, and install bower, browserstack-cli and testem.
An example:
machine:
  timezone:
    Pacific/Auckland
  node:
    version: v0.10.28

dependencies:
  pre:
    - mv ./.browserstack ~/
    - sh ~/.browserstack/runthis.sh
  post:
    - bower install
    - npm install browserstack-cli -g
    - npm install testem -g

test:
  override:
    - PATH=$PATH:bin grunt integration_tests_cli; testem ci
    - PATH=$PATH:bin grunt tests_cli; testem ci
Testem config:
testem.yml - most of it depends on your project. Important in our case is the launchers section.
framework: "qunit"
test_page: "tmp/index.html"
src_files:
  - "tmp/assets/application.js"
  - "tmp/tests.js"
  - "tmp/integration_tests.js"
launchers:
  bs_chrome:
    command: browserstack launch chrome --attach
    protocol: browser
    timeout: 300
launch_in_ci:
  - "PhantomJS"
  - "bs_chrome"
launch_in_dev:
  - "Chrome"
  - "Firefox"
  - "PhantomJS"
parallel: 2
So, if you update your project on GitHub, CircleCI will launch your tests, connect to BrowserStack, and use the browsers there...