I'm trying to deploy my Nuxt 3 app, using environment variables.
However, the environment variables are not being recognized when I deploy to Cloud Run on GCP. Below is my configuration.
This is my Dockerfile:
# Stage 1 - build
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2 - production
FROM node:14 AS final
WORKDIR /usr/src/app
COPY package.json .
COPY nuxt.config.ts .
COPY --from=builder /app/.nuxt ./.nuxt
COPY --from=builder /app/.output ./.output
COPY --from=builder /app/node_modules ./.output/server/node_modules
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
ENTRYPOINT ["node", ".output/server/index.mjs"]
This is my nuxt.config.ts:
let environment = process.env.NODE_ENV !== 'production'
export default defineNuxtConfig({
css: [
...
],
postcss: {
plugins: {
...
},
},
runtimeConfig: {
public: {
baseUrl: environment ? process.env.BASE_URL_TST : process.env.BASE_URL_PRD
}
}
})
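For reference, in Nuxt 3 keys under runtimeConfig.public can also be overridden at run time by environment variables prefixed with NUXT_PUBLIC_, so a variant like the sketch below (the NUXT_PUBLIC_BASE_URL name is illustrative, not something from this project) would pick up a value set on the container without rebuilding the image:

// nuxt.config.ts -- minimal sketch of the runtime-override variant
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // default value; at run time Nuxt replaces it with the value of
      // NUXT_PUBLIC_BASE_URL if that variable is set on the container
      baseUrl: 'http://localhost:3000/api/'
    }
  }
})

The service would then be deployed with --set-env-vars NUXT_PUBLIC_BASE_URL=... instead of reading BASE_URL_TST/BASE_URL_PRD at build time.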
When I deploy my app in GCP, I'm using the following command:
gcloud run deploy ${APP_NAME_CLOUD_RUN} --image gcr.io/${APP_NAMESPACE}/${APP_NAME_CLOUD_RUN} --region us-central1 --tag blue --set-env-vars NODE_ENV=test,BASE_URL_TST=http://test.domain.org/api/,BASE_URL_PRD=http://prd.domain.org/api/
In my login.ts file (a store in the stores directory), I added this to check whether my public URL has a value:
const config = useRuntimeConfig()
console.log('Configuration Runtime: ', config)
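For context, a fuller (hypothetical) sketch of that store, assuming Pinia since the file lives in the stores directory, would look roughly like this; the store name, action, and endpoint are made up:

// stores/login.ts -- illustrative sketch only
import { defineStore } from 'pinia'

export const useLoginStore = defineStore('login', {
  actions: {
    async login(username: string, password: string) {
      const config = useRuntimeConfig()
      console.log('Configuration Runtime: ', config)
      // the API base URL is expected to come from runtimeConfig.public
      return await $fetch('/login', {
        baseURL: config.public.baseUrl,
        method: 'POST',
        body: { username, password }
      })
    }
  }
})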
But in my browser, when I check the console.log of my useRuntimeConfig() function, I get the following:
Configuration Runtime:
Proxy {
  app: {
    baseURL: "/",
    buildAssetsDir: "/_nuxt/",
    cdnURL: ""
  },
  public: { /* EMPTY */ }
}
I checked the environment variables in Cloud Run and they are created correctly.
When I run the app using npm run dev on localhost, the variables do work (from the .env file). Am I doing something wrong?
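For reference, a local .env matching the deploy command above would look like this (values are purely illustrative):

NODE_ENV=test
BASE_URL_TST=http://test.domain.org/api/
BASE_URL_PRD=http://prd.domain.org/api/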
I'm trying to deploy to GCP (Cloud Run) using the environment variables in the files in the stores folder.
I tried to make my project recognize these variables through the runtimeConfig section of nuxt.config.ts, but it doesn't work for me.
I hope to get the environment variables recognized so that the APIs are called according to the deployed environment.
The command to set the env vars looks OK.
May I suggest using the dotenv module? https://www.npmjs.com/package/dotenv
Example:
require('dotenv').config()
console.log(process.env) // prints all variables, including those loaded from .env
I use it in my Cloud Run services and they work as expected.
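As a sketch of how that suggestion could look at the top of nuxt.config.ts (this assumes dotenv is installed as a dependency; the log lines are just for checking):

// top of nuxt.config.ts -- minimal sketch of the dotenv suggestion
import * as dotenv from 'dotenv'
dotenv.config()

console.log('BASE_URL_TST =', process.env.BASE_URL_TST)
console.log('BASE_URL_PRD =', process.env.BASE_URL_PRD)

Note that this only affects the Node process that evaluates nuxt.config.ts (build and server start), not the code shipped to the browser.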
Related
I want to integrate my C++ project with Jenkins using Docker.
First, I created a setup for Jenkins using:
Dockerfile:
FROM jenkins/jenkins:latest
USER root
RUN apt-get update && apt-get install -y git make g++
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
ENV CASC_JENKINS_CONFIG /var/jenkins_home/casc.yaml
ENV JENKINS_ADMIN_ID=admin
ENV JENKINS_ADMIN_PASSWORD=password
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt
COPY casc.yaml /var/jenkins_home/casc.yaml
plugins.txt
ant:latest
antisamy-markup-formatter:latest
authorize-project:latest
build-timeout:latest
cloudbees-folder:latest
configuration-as-code:latest
credentials-binding:latest
email-ext:latest
git:latest
github-branch-source:latest
gradle:latest
ldap:latest
mailer:latest
matrix-auth:latest
pam-auth:latest
pipeline-github-lib:latest
pipeline-stage-view:latest
ssh-slaves:latest
timestamper:latest
workflow-aggregator:latest
ws-cleanup:latest
casc.yaml
jenkins:
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${JENKINS_ADMIN_ID}
          password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "USER:Overall/Administer:admin"
        - "GROUP:Overall/Read:authenticated"
  remotingSecurity:
    enabled: true
security:
  queueItemAuthenticator:
    authenticators:
      - global:
          strategy: triggeringUsersAuthorizationStrategy
unclassified:
  location:
    url: http://127.0.0.1:8080/
So far, so good. I used docker build -t jenkins:jcasc . and docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc and Jenkins starts normally. I entered the username (admin) and the password and it worked.
There are, however, two issues that Jenkins displays:
-> It appears that your reverse proxy set up is broken.
-> Building on the built-in node can be a security issue. You should set up distributed builds. See the documentation.
I ignored them so far, and continued with my project and installed Blue Ocean.
After installing Blue Ocean, I connected it to my GitHub repository (I created a token for this) and the pipeline was created, but it is not producing any result.
The log says:
Branch indexing
Connecting to https://api.github.com with no credentials, anonymous access
Jenkins-Imposed API Limiter: Current quota for Github API usage has 37 remaining (1 over budget). Next quota of 60 in 36 min. Sleeping for 4 min 43 sec.
Jenkins is attempting to evenly distribute GitHub API requests. To configure a different rate limiting strategy, such as having Jenkins restrict GitHub API requests only when near or above the GitHub rate limit, go to "GitHub API usage" under "Configure System" in the Jenkins settings.
My C++ project looks like this:
HelloWorld/
├── HelloWorld.cpp
├── scripts/
│   ├── Linux-Build.sh
│   └── Linux-Run.sh
└── Jenkinsfile
HelloWorld.cpp:
#include <iostream>
int main(){
std::cout << "Hello World!" << std::endl;
return 0;
}
Linux-Build.sh
#!/bin/bash
vendor/bin/premake/premake5 gmake2
make
Linux-Run.sh
#!/bin/bash
bin/Debug/HelloWorld
Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Building..."'
                sh 'chmod +x scripts/Linux-Build.sh'
                sh 'scripts/Linux-Build.sh'
                archiveArtifacts artifacts: 'bin/Debug/*', fingerprint: true
            }
        }
        stage('Test') {
            steps {
                sh 'echo "Running..."'
                sh 'chmod +x scripts/Linux-Run.sh'
                sh 'scripts/Linux-Run.sh'
            }
        }
    }
}
So, where might the problem be? I don't know if the problem is in my C++ code or in the Docker configuration.
I am also a big fan of CMake rather than make, and I would prefer to use CMake for this, but I am not sure how to integrate it and make it work.
EDIT:
In the tutorial that I followed there was a "vendor" directory, but the instructor didn't show what was in that directory.
EDIT:
The build took 4 minutes, but it eventually did something, and the error is:
+ scripts/Linux-Build.sh
scripts/Linux-Build.sh: line 3: vendor/bin/premake/premake5: No such file or directory
make: *** No targets specified and no makefile found. Stop.
script returned exit code 2
So, the problem is in the C++ project, not Docker.
I'm trying to set up a basic Node.js app on Cloud Run, built with TypeScript.
On checking the cloud logs I see a warning: Container called exit(1).
Hitting the URL of the project just shows Cannot GET /.
I added a binding to check that 0.0.0.0 as well as localhost will work.
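The actual entry point isn't shown here, but the kind of binding I mean is roughly this (a hypothetical sketch with Express; the real app may differ):

// src/index.ts -- illustrative sketch only
import express from 'express'

const app = express()
const port = Number(process.env.PORT) || 8080
const host = process.env.HOST || '0.0.0.0'

// Cloud Run sends traffic to the $PORT it injects, so a GET / handler plus
// listening on 0.0.0.0:$PORT is the minimum needed for the URL to respond
app.get('/', (_req, res) => {
  res.send('ok')
})

app.listen(port, host, () => {
  console.log(`listening on ${host}:${port}`)
})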
I've tried using Cloud Build to avoid the problems with building ARM images on an M1 Mac.
I also tried building with Docker, where my Dockerfile looks like this:
#!/bin/sh
# image with bin/sh but not bin/bash
FROM node:16-alpine
# needed for node-gyp
RUN apk --no-cache add --virtual .builds-deps build-base python3
# curl
RUN apk add --update curl
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy local code to the container image.
COPY . ./
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --omit=dev
ENV PORT 8080
ENV HOST 0.0.0.0
# for testing local run, not needed on GCP?
EXPOSE 8080
# Run the web service on container startup.
# CMD [ "node", "dist/index.js" ]
# CMD [ "npm", "start" ]
CMD npm run start --bind 0.0.0.0:$PORT
I've tried various ways to start the service. I have a justfile with these commands:
# building with a build pack, not docker image
build-pack: build-source
#echo "making build for ${DOCKER_TAG}"
gcloud builds submit --pack image=${DOCKER_TAG}
gcp-submit:
#echo "submitting image: ${DOCKER_TAG}"
gcloud builds submit --tag=${DOCKER_TAG}
# build on an M1 mac for linux/amd64
build-amd: build-source
docker buildx build --platform linux/amd64 -t ${DOCKER_TAG} .
# deploy from existing docker image
deploy-image:
#echo "deploy DOCKER_TAG=${DOCKER_TAG} on PORT=${PORT}"
gcloud run deploy ${GCP_SERVICE_NAME} \
--image ${DOCKER_IMAGE} \
--port=${PORT} \
--allow-unauthenticated \
--platform managed
The image runs fine locally, e.g.:
docker run -p 8080:8080 ${DOCKER_IMAGE}
When deploying, I don't get any errors:
just deploy-image
deploy DOCKER_TAG=XXXXX:latest on PORT=8080
gcloud run deploy ${GCP_SERVICE_NAME} --image ${DOCKER_IMAGE} --port=${PORT} --allow-unauthenticated --platform managed
Deploying container to Cloud Run service [XXXX] in project [XXX] region [us-west4]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
✓ Setting IAM Policy...
Done.
Service [XXXX] revision [XXXXX-00004-huc] has been deployed and is serving 100 percent of traffic.
Service URL: https://XXXX.a.run.app
(I've XXX'd out the actual URLs)
But then I just get Cannot GET / on accessing.
So if this were my own infra, maybe the app is running but the PORT mapping is broken?
Or the proxy doesn't know how to find the backend server?
Not really sure where else to look on GCP. =Pulls Hair=
actual GCP error:
{
"textPayload": "Container called exit(1).",
"insertId": "633625d6000da4e8178f8c46",
"resource": {
"type": "cloud_run_revision",
"labels": {
"project_id": "xxxx",
"location": "us-west4",
"configuration_name": "xxx-app",
"service_name": "xxx-app",
"revision_name": "xxx-app-00003-nef"
}
},
"timestamp": "2022-09-29T23:10:14.894111786Z",
"severity": "WARNING",
"labels": {
"instanceId": "xxx9da465dbd803c39781062c2e1f392e54b524a563aaafa01e7bdb0b769fd62c94898738c506e09f79b1b6"
},
"logName": "projects/xxx/logs/run.googleapis.com%2Fvarlog%2Fsystem",
"receiveTimestamp": "2022-09-29T23:10:14.897307028Z"
}
After deployment, is there a way to inspect the process.env variables on a running Cloud Run service?
I thought they would be available in the following page:
https://console.cloud.google.com/run/detail
Is there a way to make them available here? Or to inspect it in some other way?
PS: This is a Docker container.
I have the following ENV directives in my Dockerfile, and I know they are present because everything is working as it should. But I cannot see them in the service details:
Dockerfile
ENV NODE_ENV=production
ENV PROJECT_ID=$PROJECT_ID
ENV SERVER_ENV=$SERVER_ENV
I'm using a cloudbuild.yaml file. The ENV directives are present in my Dockerfile and they are being passed to my container. Maybe I should add env to my cloudbuild.yaml file? I'm using --substitutions on my gcloud builds submit call, and they are passed as --build-arg to my Docker build step, but I'm not declaring them as env in my cloudbuild.yaml.
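For a quick check from inside the running container, one option would be a temporary debug endpoint, removed after verifying; this is only a sketch (the route name, the allowlist, and the use of Express are assumptions, not part of my actual service):

// debug-env.ts -- temporary, illustrative only; never expose secrets this way
import express from 'express'

const app = express()

// expose an explicit allowlist of variable names, never the whole process.env
const ALLOWED = ['NODE_ENV', 'PROJECT_ID', 'SERVER_ENV']

app.get('/_debug/env', (_req, res) => {
  const subset: Record<string, string | null> = {}
  for (const key of ALLOWED) {
    subset[key] = process.env[key] ?? null
  }
  res.json(subset)
})

app.listen(Number(process.env.PORT) || 8080)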
I followed the official documentation and set the environment variables on a Cloud Run service using the console. Then I was able to list them in the Google Cloud Console.
You can set environment variables using the Cloud Console, the gcloud
command line, or a YAML file when you create a new service or deploy a
new revision:
With the help of @marian.vladoi's answer, this is what I ended up doing:
In my deploy step in the cloudbuild.yaml file, I added the --set-env-vars parameter:
steps:
  # DEPLOY CONTAINER WITH GCLOUD
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "beta"
      - "run"
      - "deploy"
      - "SERVICE_NAME"
      - "--image=gcr.io/$PROJECT_ID/SERVICE_NAME:$_TAG_NAME"
      - "--platform=managed"
      - "--region=us-central1"
      - "--min-instances=$_MIN_INSTANCES"
      - "--max-instances=3"
      - "--set-env-vars=PROJECT_ID=$PROJECT_ID,SERVER_ENV=$_SERVER_ENV,NODE_ENV=production"
      - "--port=8080"
      - "--allow-unauthenticated"
    timeout: 180s
So my goal is to deploy a serverless, Dockerized Next.js application on ECS/Fargate.
When I docker build my project using the command docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the Docker build doesn't consider the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR) and my Elastic Container Service (ECS) references that ECR image.
So naturally, my built image doesn't have an env file (which contains the API keys and DB credentials), and as a result my app is deployed but all of the services relying on those credentials fail, because there isn't an env file in my container and all of the variables become undefined or null.
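To illustrate that failure mode, a small fail-fast check like the one below (a hypothetical helper, not something that exists in my project) would surface the missing variables at startup instead of deep inside the failing services:

// env-check.ts -- hypothetical startup guard, illustrative only
const REQUIRED = ['xyzSecretKey', 'app_url', 'SQL_USER'] // names used in this post

const missing = REQUIRED.filter((name) => !process.env[name])
if (missing.length > 0) {
  // without the .env file in the image, these all come back undefined
  throw new Error(`Missing environment variables: ${missing.join(', ')}`)
}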
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3, with that S3 ARN referenced in the container service where the .env file is stored. However, that didn't work out, and I think it's because of the way I'm setting my
next.config.js to reference my environment files in my local codebase. I also tried to set my environment variables manually (very insecure) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);
module.exports = {
serverRuntimeConfig: {
// Will only be available on the server side
xyzKey: process.env.xyzSecretKey || "",
},
publicRuntimeConfig: {
// Will be available on both server and client
appUrl: process.env.app_url || "",
},
};
So in my local codebase, in the root directory, I have two files, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
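For completeness, that runtime config is read on the app side through next/config; a minimal sketch (the component is made up):

// pages/index.tsx -- illustrative sketch of reading the runtime config
import getConfig from 'next/config'

const { publicRuntimeConfig } = getConfig()

export default function Home() {
  // appUrl is exposed to both server and client;
  // serverRuntimeConfig.xyzKey is only readable in server-side code
  return <p>App URL: {publicRuntimeConfig.appUrl}</p>
}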
So apparently it was just plain Next.js's way of handling env variables.
In next.config.js
module.exports = {
env: {
user: process.env.SQL_USER || "",
// add all the env var here
},
};
To use the environment variable user in the app, all you have to do is call process.env.user, and user will reference SQL_USER from my local .env file, where it is stored as SQL_USER="abc_user".
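As a tiny illustrative snippet (the component name is made up):

// any component or page -- Next.js inlines env values from next.config.js at build time
export default function UserBadge() {
  // process.env.user is replaced at build time with the value of SQL_USER
  return <span>Connected as {process.env.user}</span>
}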
You should be setting the environment variables in the ECS task definition. To avoid storing sensitive values in the task definition, you should use AWS Parameter Store or AWS Secrets Manager, as documented here.
I am trying to set up Google Cloud Build with a really simple project hosted on Firebase, but every time it reaches the deploy stage it tells me:
Error: No project active, but project aliases are available.
Step #2: Run firebase use <alias> with one of these options:
ERROR: build step 2 "gcr.io/host-test-xxxxx/firebase" failed: step exited with non-zero status: 1
I have set the alias to production and my .firebaserc is:
{
  "projects": {
    "default": "host-test-xxxxx",
    "production": "host-test-xxxxx"
  }
}
I have Firebase Admin and API Keys Admin permissions on my Cloud Build service account, and since I also want to encrypt, I have Cloud KMS CryptoKey Decrypter as well.
I do
firebase login:ci
to generate a token in my terminal and paste it into my .env variable, then generate an alias called production and run
firebase use production
My cloudbuild.yaml is:
steps:
  # Install
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # Build
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # Deploy
  - name: 'gcr.io/host-test-xxxxx/firebase'
    args: ['deploy']
and install and build work fine. What is happening here?
Rerunning firebase init does not seem to help.
Update:
building locally then doing firebase deploy does not help either.
OK, the thing that worked was changing the .firebaserc file to:
{
  "projects": {
    "default": "host-test-xxxxx"
  }
}
and
firebase use --add
and adding an alias called default.