Google Cloud Build error no project active - google-cloud-platform

I am trying to set up Google Cloud Build with a really simple project hosted on Firebase, but every time it reaches the deploy stage it tells me:
Error: No project active, but project aliases are available.
Step #2: Run firebase use <alias> with one of these options:
ERROR: build step 2 "gcr.io/host-test-xxxxx/firebase" failed: step exited with non-zero status: 1
I have set the alias to production and my .firebaserc is:
{
  "projects": {
    "default": "host-test-xxxxx",
    "production": "host-test-xxxxx"
  }
}
I have the Firebase Admin and API Keys Admin roles on my Cloud Build service account, and since I also want to use encryption I have Cloud KMS CryptoKey Decrypter as well.
I run
firebase login:ci
to generate a token in my terminal and paste it into my .env file, then I create an alias called production and run
firebase use production
My yaml is:
steps:
# Install
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# Build
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'build']
# Deploy
- name: 'gcr.io/host-test-xxxxx/firebase'
  args: ['deploy']
and install and build work fine. What is happening here?
Rerunning firebase init does not seem to help.
Update:
Building locally and then running firebase deploy does not help either.

OK, the thing that worked was changing the .firebaserc file to:
{
  "projects": {
    "default": "host-test-xxxxx"
  }
}
and running
firebase use --add
to add an alias called default.
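An alternative that may also work (a sketch I have not verified, reusing the project ID from above) is to skip the alias lookup entirely and pass the project to the deploy step explicitly; --project is a standard firebase CLI flag, though whether the community firebase builder image forwards it depends on how that image was built:
# Deploy, naming the project explicitly instead of relying on an alias
- name: 'gcr.io/host-test-xxxxx/firebase'
  args: ['deploy', '--project', 'host-test-xxxxx']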


Nextauth deployed on AWS Amplify [next-auth][error][CLIENT_FETCH_ERROR]

I am using NextAuth (Cognito provider) and it works perfectly on my local machine, but when deployed on AWS Amplify it keeps giving me [next-auth][error][CLIENT_FETCH_ERROR] https://next-auth.js.org/errors#client_fetch_error Unexpected token '<', "<!DOCTYPE "... is not valid JSON, and the request to https://mydomain.sample-dev.net/api/auth/session returns a 404.
I have followed the documentation and added the required environment variables below:
Executing command: NEXTAUTH_URL=https://mydomain.sample-dev.net
Executing command: NEXTAUTH_SECRET=TestingSecretKey
Executing command: NEXT_PUBLIC_AUTH_SECRET=TestingSecretKey
My pages/api/auth/[...nextauth].ts is:
import NextAuth from 'next-auth/next';
import Cognito from 'next-auth/providers/cognito';

export default NextAuth({
  providers: [
    Cognito({
      clientId: process.env.NEXT_PUBLIC_AUTH_CLIENT_ID!,
      clientSecret: process.env.NEXT_PUBLIC_AUTH_CLIENT_SECRET!,
      issuer: process.env.NEXT_PUBLIC_AUTH_ISSUER!,
      idToken: true,
      checks: 'nonce',
    }),
  ],
  debug: false,
  secret: process.env.NEXTAUTH_SECRET || process.env.NEXT_PUBLIC_AUTH_SECRET,
});
any idea on what am I doing wrong or missing?
The [next-auth][error][CLIENT_FETCH_ERROR] error suggests that the NEXTAUTH_URL environment variable is not accessible at runtime because of a misconfiguration.
To resolve the error, you first need to define the required environment variables in your AWS Amplify dashboard.
Select the affected application, then under App settings click on Environment variables and add the variables there.
Then click on Build settings (also under App settings) to edit your app build specification and make the environment variables accessible, as illustrated below.
build:
  commands:
    - echo "NEXTAUTH_URL=$NEXTAUTH_URL" >> .env
    - npm run build
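For context, a fuller build specification might look roughly like the sketch below. This is only an illustration: the preBuild command, the baseDirectory value and the exact list of variables written to .env are assumptions for a typical Next.js app and should be adjusted to your project.
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        # write the variables defined in the Amplify console into .env so the build can read them
        - echo "NEXTAUTH_URL=$NEXTAUTH_URL" >> .env
        - echo "NEXTAUTH_SECRET=$NEXTAUTH_SECRET" >> .env
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*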

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline: it deploys successfully to GCP Cloud Run, but on the GitLab dashboard the job status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
# push the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
# deploy to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by adding:
options:
  logging: CLOUD_LOGGING_ONLY
in cloudbuild.yaml.
Alternatively, there is this workaround: give the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
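For reference, granting that (admittedly broad) Viewer role to the service account used by the GitLab job would look roughly like this; the service account email is a placeholder you would replace with your own:
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
    --member="serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" \
    --role="roles/viewer"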
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the Storage Admin role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for accessing it, especially from outside Google Cloud (GitLab, Azure DevOps, etc.) via service accounts.
(Moreover, in this case you no longer need to turn off logging via --suppress-logs.)
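Putting the pieces together, a rough sketch (the bucket name is the placeholder from above, and the project/config names come from the question):
# create a private log bucket in the project, then point the build at it
gsutil mb -p $GCP_PROJECT_ID gs://YOUR_NEW_BUCKET_NAME_HERE
gcloud builds submit . --config=cloudbuild_int.yaml --gcs-log-dir=gs://YOUR_NEW_BUCKET_NAME_HERE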
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, in spite of the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by Kevin worked for me as well. Just add the parameter as shown below in the cloudbuild.yaml file.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Azure DevOps YAML self hosted agent pipeline build is stuck at locating self-agent

Action: I tried to configure and run a simple C++ Azure pipeline on a self-hosted Windows computer. I'm pretty new to all this. I ran the script below.
Expected: to see the build task, display task and clean task, and to see "hello world".
Result: error, the script can't find my build agent.
##[warning]An image label with the label Weltgeist does not exist.
##[error]The remote provider was unable to process the request.
Pool: Azure Pipelines
Image: Weltgeist
Started: Today at 10:16 p.m.
Duration: 14m 23s
Info & Test:
My self-hosted agent name is Weltgeist and it's part of the default agent pool. It's a Windows computer, with g++, MinGW and other related tools on it.
I tried my build task locally with no problem.
I tried my build task using the Azure 'ubuntu-latest' agent with no problem.
I created the self-hosted agent following these specifications.
I'm the owner of the Azure repo.
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
How do I correctly configure the pool YAML parameter for a self-hosted agent?
Do I have additional steps to do on the server side, or in the Azure repo configs?
Any other idea of what went wrong?
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'Weltgeist' # Testing with self-hosted agent

steps:
- script: |
    mkdir ./build
    g++ -g ./src/hello-world.cpp -o ./build/hello-world.exe
  displayName: 'Run a build script'
- script: |
    ./build/hello-world.exe
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
(UPDATE)
Solution:
Thanks, after updating it as stated in an answer below and reading a bit more about the pool YAML definition, it works. Note that I modified a couple of other lines to make it work in my environment.
trigger:
- master

pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist

steps:
- script: |
    mkdir build
    g++ -o ./build/hello-world.exe ./src/hello-world.cpp
  displayName: 'Run a build script'
- script: |
    cd build
    hello-world.exe
    cd ..
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
I was confused by the Default because there was already a pipeline named Default in the organization.
Expanding on the answers provided here.
pool:
  name: NameOfYourPool
  demands:
  - agent.name -equals NameOfYourAgent
Here is the screen where you'll find that information in DevOps.
Since you are using the self-hosted agent, you could use the following format:
pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist
Then it should work as expected.
You could refer to the doc about the pool definition in YAML.
I had faced the same issue, and replacing vmImage under pool with "name" worked for me. Please find below:
trigger:
- master

pool:
  name: 'Weltgeist' # Testing with self-hosted agent
Also be aware that if your agent only appears in the "Azure Pipelines" pool and not in any of the other pools, then the agent may have been configured as an "Environment" resource and can't be used as part of the build step.
I spent ages trying to use a self-hosted VM for a build step, thinking that the correct way to reference the VM was by creating a VM resource from the Pipelines > Environments area:
The agent would be properly created and visible in the "Azure Pipelines" pool, but wouldn't be available in any of the other pools, which meant it couldn't be referenced in the YAML used for setting the server used for builds.
I was able to resolve the issue by de-registering the agent on my self-hosted VM with .\config.cmd remove and re-running .\config.cmd without the --environment --environmentname "<name>" arguments that were provided within the registration script mentioned above (shown in the "Add resource" screenshot); a quick sketch follows below.
Oddly, the registration script is a much quicker way to register an agent than the "New agent" form shown in Agent Pools:
The necessary files are pulled to the server (without having to download one first) and a PAT with a 3-hour lifetime is auto-generated.
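For anyone repeating this, a rough sketch of the re-registration from the agent folder on the VM (the organization URL and PAT are placeholders; the flags are the agent's standard config.cmd options):
.\config.cmd remove
.\config.cmd --unattended --url https://dev.azure.com/<your-org> --auth pat --token <PAT> --pool Default --agent Weltgeist
.\run.cmd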

Gcloud with cloudbuild and Django Postgres cause psycopg2 ImportError

I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance. I created a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'migrate', '--noinput']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"
The deployment is going alright and the app can connect to the database. But when I try to load a page I get the following error:
"...import psycopg2 as Database File "/srv/psycopg2/__init__.py", line 50, in from psycopg2._psycopg import ( # noqa ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory"
Another interesting thing: if I deploy my app with 'gcloud app deploy' (not through Cloud Build), everything is alright; I am not getting the error above and my app can communicate with the database.
I am pretty new with gcloud, so maybe I missed something basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Do I pip install my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build based deployment and the manual one?
From what I see, you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically handles the installation of the dependencies specified in the requirements.txt file and executes the entrypoint you defined in your app.yaml. This has the benefit of not having to manually trigger the installation of dependencies. The first two steps of your cloudbuild are not affecting App Engine's runtime, since its configuration is managed by the aforementioned files once they're deployed.
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It could be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is aimed at running gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required steps for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
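If you still want the deployment to be trigger-driven, a minimal sketch of what the build config could reduce to (assuming App Engine installs requirements.txt itself, as described above) would be:
# Minimal sketch: Cloud Build only runs the deploy; App Engine installs the dependencies
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"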
Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an AppEngine "standard" instance.

How to use AWS Amplify with Sapper?

My goal is to use AWS Amplify in a Sapper project.
Creating a Sapper project from scratch (using webpack) and then adding AWS Amplify and running it in dev is a success, but running it in production throws a GraphQL error in the console (Uncaught Error: Cannot use e "__Schema" from another module or realm).
Fixing this error throws another one (Uncaught ReferenceError: process is not defined).
A solution is to upgrade GraphQL from 0.13.0 to 14.0.0; unfortunately GraphQL 0.13.0 is an AWS Amplify API dependency.
Does anyone know what can be done to get AWS Amplify to work with Sapper in production?
The link to the repo containing the source files is located here: https://github.com/ehemmerlin/sapper-aws-amplify
(Apologies for the long post, but I want to be explicit.)
Detailed steps
1/ Create a Sapper project using webpack (https://sapper.svelte.dev).
npx degit "sveltejs/sapper-template#webpack" my-app
cd my-app
yarn install
2/ Add AWS Amplify (https://serverless-stack.com/chapters/configure-aws-amplify.html) and lodash
yarn add aws-amplify
yarn add lodash
3/ Configure AWS Amplify (https://serverless-stack.com/chapters/configure-aws-amplify.html)
Create a src/config/aws.js config file containing (change the values to yours, but it works as is for the purpose of this post):
export default {
  s3: {
    REGION: "YOUR_S3_UPLOADS_BUCKET_REGION",
    BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_API_GATEWAY_REGION",
    URL: "YOUR_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_COGNITO_REGION",
    USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
  }
};
Add the following code to the existing code in src/client.js:
import Amplify from 'aws-amplify';
import config from './config/aws';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  },
  Storage: {
    region: config.s3.REGION,
    bucket: config.s3.BUCKET,
    identityPoolId: config.cognito.IDENTITY_POOL_ID
  },
  API: {
    endpoints: [
      {
        name: "notes",
        endpoint: config.apiGateway.URL,
        region: config.apiGateway.REGION
      },
    ]
  }
});
4/ Test it
In dev (yarn run dev): it works
In production (yarn run build; node __sapper__/build): it throws an error.
Uncaught Error: Cannot use e "__Schema" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
5/ Fix it
Following the given link (https://yarnpkg.com/en/docs/selective-version-resolutions), I added this code to the package.json file:
"resolutions": {
"aws-amplify/**/graphql": "^0.13.0"
}
6/ Test it
rm -rf node_modules; yarn install
Throws another error in the console (even in dev mode).
Uncaught ReferenceError: process is not defined
    at Module../node_modules/graphql/jsutils/instanceOf.mjs (instanceOf.mjs:3)
    at __webpack_require__ (bootstrap:63)
    at Module../node_modules/graphql/type/definition.mjs (definition.mjs:1)
    at __webpack_require__ (bootstrap:63)
    at Module../node_modules/graphql/type/validate.mjs (validate.mjs:1)
    at __webpack_require__ (bootstrap:63)
    at Module../node_modules/graphql/graphql.mjs (graphql.mjs:1)
    at __webpack_require__ (bootstrap:63)
    at Module../node_modules/graphql/index.mjs (main.js:52896)
    at __webpack_require__ (bootstrap:63)
A fix given by this thread (https://github.com/graphql/graphql-js/issues/1536) is to upgrade GraphQL from 0.13.0 to 14.0.0; unfortunately GraphQL 0.13.0 is an AWS Amplify API dependency.
When building my project (I'm using npm and webpack), I got this warning,
WARNING in configuration
The 'mode' option has not been set, webpack will fallback to 'production' for this value. Set 'mode' option to 'development' or 'production' to enable defaults
for each environment.
You can also set it to 'none' to disable any default behavior. Learn more: https://webpack.js.org/configuration/mode/
which seems to be related to the schema error, as these posts indicate that the quickest fix for the error is to set NODE_ENV to production in your environment (mode is set from the NODE_ENV environment variable in the webpack config); a quick sketch follows the links below:
https://github.com/aws-amplify/amplify-js/issues/1445
https://github.com/aws-amplify/amplify-js/issues/3963
How to do that:
How to set NODE_ENV to production/development in OS X
How can I set NODE_ENV=production on Windows?
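For example, on macOS/Linux a quick way to try this with the commands used earlier in this post would be the sketch below (assuming the Sapper webpack config reads NODE_ENV for its mode, as in the default template):
# build and run the production bundle with NODE_ENV set explicitly
NODE_ENV=production yarn run build
NODE_ENV=production node __sapper__/build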
Or you can mess with the webpack config directly:
https://gist.github.com/jmannau/8039787e29190f751aa971b4a91a8901
Unfortunately, some posts in those GitHub issues point out that the environment variable change might not work out for a packaged app, specifically on mobile.
These posts suggest that disabling the mangler might be the next best solution:
https://github.com/graphql/graphql-js/issues/1182
https://github.com/rmosolgo/graphiql-rails/issues/58
For anyone just trying to get the basic Sapper and Amplify setup going, to reproduce this error or otherwise, I built mine up with:
npm install -g @aws-amplify/cli
npx degit "sveltejs/sapper-template#webpack" my-app
npm install
npm install aws-amplify
npm install lodash (Amplify with webpack seems to need this)
amplify configure
npm run build
amplify init (dev environment, VS Code, javascript, no framework, src directory, __sapper__\build distribution directory, default AWS profile). This generates aws-exports.js.
In src/client.js:
import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';
Amplify.configure(awsconfig);
npm run build