How to use PNPM with Google Cloud Build? - google-cloud-platform

I'd like to migrate to PNPM, however, I can't find a way to use its lockfile on Google Cloud. My current cloudbuild config is the following:
steps:
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:latest"
  entrypoint: 'gcloud'
  args: ["app", "deploy"]
timeout: "1600s"
As far as I know, these official images only support Yarn and NPM. Is there an easy way to replace Yarn with PNPM here?
I looked on the Cloud Builders GitHub repo, but there's no PNPM there either.

If I understand correctly, the App Engine standard Node.js runtime(s) require that you use npm or yarn; PNPM therefore can't be used on the standard environment.
https://cloud.google.com/appengine/docs/standard/nodejs/specifying-dependencies
If you want to use App Engine with a different package manager, you can use the flexible environment and define a custom runtime. This essentially allows you to define a container image to deploy to App Engine, and it may be anything that exposes an HTTP server on port 8080.
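A minimal sketch of such a custom runtime, assuming a Node app whose entry point is server.js (the file name and the node:16 base image are assumptions, not requirements):
app.yaml:
runtime: custom
env: flex
Dockerfile:
FROM node:16
WORKDIR /app
COPY . .
# install dependencies with pnpm, honoring pnpm-lock.yaml
RUN npm i -g pnpm && pnpm install --frozen-lockfile
# App Engine flex routes traffic to port 8080
EXPOSE 8080
CMD ["node", "server.js"]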

You might be able to use pnpm install followed by npm shrinkwrap. I think gcloud app deploy ignores what's in node_modules in favor of the lockfile, but you could delete node_modules anyway.
npm i -g pnpm && pnpm i && npm shrinkwrap
Note that's npm shrinkwrap, not pnpm shrinkwrap; the latter generates a pnpm-style lockfile, which is what App Engine can't consume in the first place.
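Untested sketch: wired into the question's cloudbuild config, that might look something like this (the node:16 builder step and the rm -rf node_modules cleanup are assumptions):
steps:
# generate an npm-compatible lockfile from the pnpm one
- name: "node:16"
  entrypoint: "bash"
  args: ["-c", "npm i -g pnpm && pnpm i && npm shrinkwrap && rm -rf node_modules"]
# then deploy as before
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk:latest"
  entrypoint: "gcloud"
  args: ["app", "deploy"]
timeout: "1600s"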

Related

AWS EB deploy fails: npm WARN config production Use `--omit=dev` instead

I'm trying to circumvent the well-known issue with the latest versions of npm and AWS Elastic Beanstalk, where npm install fails because it can't find node_modules. I'm using platform hooks with my Nuxt.js application.
It fails when AWS Code Pipeline runs a deploy and returns with this warning:
[ERROR] An error occurred during execution of command [app-deploy] - [Use NPM to install dependencies]. Stop running the command.
Error: Command /bin/sh -c npm --production install failed with error signal: killed.
Stderr: npm WARN config production Use `--omit=dev` instead.
So I've added platform hooks at the app root, but it's still failing. I've also added an environment variable to the Elastic Beanstalk environment:
NODE_ENV=production
Here's what my platform hooks look like. I thought this would work but something is obviously wrong. Can anyone spot it? Thanks for any helpful tips.
The custom-prebuild-script.sh looks like this:
#!/bin/bash
mkdir node_modules
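For reference, on Amazon Linux 2 platforms Elastic Beanstalk expects hooks in the following layout, and each script must be executable (chmod +x), which is worth double-checking here:
.platform/
  hooks/
    prebuild/
      custom-prebuild-script.sh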

Expo won't build with locally installed NPM packages

I am using expo#43.0.3 (and expo-cli#5.0.3) to manage my React Native project, and I have to install an npm package from a local source:
$ npm install /path/to/mypackage
In my package.json the package is successfully linked via
"dependencies": {
...
"myPackage": "file:../../mypackage",
...
}
I can also confirm the package works when installing it into a fresh plain Node project (same Node version, 14.8.2).
Now when I start expo via expo start and navigate to the app it does not throw any error but only a warning:
› Reloading apps
warn No apps connected. Sending "reload" to all React Native apps failed. Make sure your app is running in the simulator or on a phone connected via USB.
When installing the package from the registry, however, everything builds.
I tried the private packages section from the Expo docs, but it only describes how to use private packages from a registry, not local ones.
Anything I'm missing here?
edit:
After resetting the Expo network adapters, it loads the bundle, but it now says it can't find the package:
Unable to resolve module myPackage from /home/user/path/to/myPackage/file.js: myPackage could not be found within the project or in these directories:
node_modules
If you are sure the module exists, try these steps:
1. Clear watchman watches: watchman watch-del-all
2. Delete node_modules and run yarn install
3. Reset Metro's cache: yarn start --reset-cache
4. Remove the cache: rm -rf /tmp/metro-*
However, I'm not using watchman, I'm not using yarn, and removing the metro-* folders from /tmp did not make a difference.
As it turned out in this issue on GitHub it can be solved via npm pack:
run npm pack inside of your library and then npm install path/to/the/packed/file.tgz from your project
Which worked fine for the setup I described in the question.
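Concretely, with the paths from the question, the workflow looks like this (the .tgz file name comes from the name and version in the library's package.json, so it's illustrative here):
cd ../../mypackage
npm pack                                        # produces e.g. mypackage-1.0.0.tgz
cd -                                            # back to the app
npm install ../../mypackage/mypackage-1.0.0.tgz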

How to make GitLab Windows shared runners to build faster?

Background
I have a CI pipeline for a C++ library I've been developing. So far, I can distribute this lib to Linux and Windows systems. Since I use GitLab to build, test, and package my lib, I'd like my Windows builds to run faster, but I have no clue how to do that.
Currently, I use the following script for my Windows builds:
.windows_template:
  tags:
    - windows
  before_script:
    - choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
    - choco install python --pre -y
    - choco install git -y
    - $env:ChocolateyInstall = Convert-Path "$((Get-Command choco).Path)\..\.."; Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"; refreshenv
    - python -m pip install --upgrade pip
    - pip install conan monotonic
The problem
Any build with the script above can take up to 10 minutes; worse, I have two stages, each taking the same amount of time. This means my whole CI pipeline takes 20 minutes to finish because of the slowness of the Windows builds.
Ideal solution
EVERYTHING in my before_script can be cached or stored as an image. I only need some hints on how to do it properly.
Additional information
I use the following tools for my builds:
CMake: to support my build process;
Python3: to test and build packages;
Conan (requires Python3): to support the creation of packages with several features, as well as distribute them;
Git: to download Googletest in the CMake configuration step (this is already provided in the cookbooks, so I might just remove this extra installation step from my before_script);
Googletest (requires Python3): testing library;
Visual Studio Dev Tools: to compile the library (this is already in the cookbooks).
Installing packages like this (whether OS packages through apt-get install, or pip, or anything else) is generally against best practice for CI/CD jobs, because every job that runs has to repeat the same work, costing more and more time as you run more pipelines, as you've seen already.
A few alternatives are to search for an existing image that has everything you need (possible but not likely with more dependencies), split up your job into pieces that might be solved by an image with just one or two dependencies, or create a custom docker image to use in your jobs. I answered a similar question with an example a few weeks ago here: "Unable to locate package git" when running GitLab CI/CD pipeline
But here's an example Dockerfile with Windows:
# Dockerfile
FROM mcr.microsoft.com/windows
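# placeholder: install Chocolatey however you'd normally install it (see the note below)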
RUN ./install_chocolatey.sh
RUN choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
RUN choco install python --pre -y
RUN choco install git -y
...
The FROM line says that our new image extends the mcr.microsoft.com/windows base image. You can extend any image you have access to, even if it already extends another image (in fact, that's how most images work: they start with something small, like a base OS installation, then add the things that image needs; the official PHP images, for example, start from a Debian-based image and then install the necessary PHP packages).
The first RUN line is just an example. I'm not a Windows user and don't have experience installing Chocolatey, but here you'd do whatever you'd normally do to install it locally. The remaining RUN lines install whatever else you need.
Then run
docker build /path/to/dockerfile-dir -t mygroup/mytag:version
The path you supply needs to be the directory that contains the Dockerfile, not the Dockerfile itself. The -t flag sets the image's tag once it's built (though you can also do that afterwards with a separate docker tag command, shown below).
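For reference, tagging separately would look like this, where <image-id> is whatever docker build printed at the end:
docker tag <image-id> mygroup/mytag:version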
Next, you'll have to log into whichever registry you're using: Docker Hub (https://docs.docker.com/docker-hub/repos/), the GitLab Container Registry (https://docs.gitlab.com/ee/user/packages/container_registry/), a private registry your employer may support, or any other option.
docker login my.docker.hub.com
Now you can push the image to the registry:
docker push my.docker.hub.com/mygroup/mytag:version
You'll have to review the information in the docs about telling your Gitlab runner or pipelines how to authenticate with the registry (unless it's Public on Docker Hub or you use the Gitlab Container Registry) https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
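For instance, one documented option for a private registry is a DOCKER_AUTH_CONFIG CI/CD variable holding a Docker config.json payload; the host and credentials below are placeholders:
{
  "auths": {
    "my.docker.hub.com": {
      "auth": "<base64 of username:password>"
    }
  }
}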
Once all that's done, you can use your new image in your CI jobs, and everything we put into the image will be ready to use:
.windows_template:
  image: my.docker.hub.com/mygroup/mytag:version
  tags:
    - windows
  ...

Install composer dependencies while deploying

I'm using Elastic Beanstalk to deploy my application as a Single Docker Application.
My Dockerfile does composer install while deploying, but I get a Could not authenticate against github.com error.
I use these lines in my Dockerfile to install my dependencies:
WORKDIR /www
RUN ["composer", "install", "-o"]
How would I solve this issue?
I think you need to configure Composer inside your container with your key or something like that; remember that inside the container you're effectively on another OS, and you don't have your public keys etc.
I'd try installing Composer itself from source rather than via git (as you don't have keys).
try this:
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
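If the dependencies themselves have to come from GitHub, an alternative sketch is to pass a token into the image and let Composer authenticate with it (GITHUB_TOKEN is a hypothetical build argument):
ARG GITHUB_TOKEN
RUN composer config -g github-oauth.github.com "$GITHUB_TOKEN"
RUN composer install -o --prefer-dist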

How do I make bower install work with aws.push?

As a starting point for making my own app that uses MEAN.JS, I went to the MEAN.JS website and used their Yeoman generator to create the template/sample app. Following the instructions, I had the sample application running out of the box on my local desktop machine within minutes. To complete the exercise, I tried to deploy the sample app to an AWS/EC2 instance before making any changes to it. I have used the command-line deployment tools in the past and liked them. It is also nice that you can now just select an EC2 Linux instance with Node and npm already installed and ready.
After checking the sample into git, I run "git aws.push" to deploy the app.
The problem is in the package.json the line:
"postinstall": "bower install --config.interactive=false"
In the eb-activity.log:
npm WARN cannot run in wd meansample#0.0.1 bower install --config.interactive=false (wd=/tmp/deployment/application)
The result is that AngularJS ends up not getting installed in /public/lib.
The first thing I tried was giving the full path in the package.json file: node_modules/bower/bin/bower. This didn't help and resulted in the same error. Note that other commands like "grunt" don't need the full path specified in package.json, and they work.
I don't understand enough of the black-box magic that aws.push does to know why this error is happening. For example, what user does it run as? What permissions does that user have? What options, if any, does it use when it runs npm install?
I did figure out a work-around, but it adds a lot of extra steps that shouldn't be required if aws.push were able to run bower install directly. Basically, I can manually run the bower install in an ssh session on my EC2 instance, set the owner/group on the installed files, and restart the server.
Work-around steps:
1) On the local command prompt, run git aws.push. Wait for the unsuccessful deployment to finish.
2) Connect ssh client to EC2 instance. From the command prompt:
cd /var/app/current
/* NOTE: if I don't use sudo the ec2user I am logged in as does not have permission to create /public/lib needed to install AngularJS into*/
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
/* NOTE: just changing the owner and group to match the same as the other files that aws.push deployed */
sudo chown -R nodejs public/lib
sudo chgrp -R nodejs public/lib
3) From the AWS dashboard, select the correct EC2 instance, Action = Restart App Server(s).
Now AngularJS is installed and the sample app works.
How do I eliminate the extra steps and make it so aws.push can do the bower install successfully?
I experienced the same problem when trying to publish my Node.js app on a private server running CentOS as the root user. The same error is fired by "postinstall": "./node_modules/bower/bin/bower install" in my package.json file, so the only solution that worked for me was to use both of the following options to avoid the error:
1: use --allow-root option for bower install command
"postinstall": "./node_modules/bower/bin/bower --allow-root install"
2: use --unsafe-perm option for npm install command
npm install --unsafe-perm
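Put together, that might look like this (standard npm file locations; the .npmrc option applies to the npm versions this question concerns):
In package.json:
  "scripts": {
    "postinstall": "./node_modules/bower/bin/bower --allow-root install"
  }
In .npmrc, so npm install runs as if --unsafe-perm were passed:
  unsafe-perm=true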