Vercel dynamic build command based on stage - expo

I have an Expo app where the web component is hosted through Vercel. I use the Vercel GitHub integration for automatic deployment. Expo has a different build command for staging builds and production builds, and Vercel doesn't appear to support environment/stage-based build commands. Am I missing something that makes this possible, or does anyone have another way of handling this?

I was in a similar situation and managed to solve it with a shell script.
Here is an example you can try out:
#!/bin/bash
if [[ "$VERCEL_ENV" == "production" ]]; then
  echo "Building production"
  yarn build:production
else
  echo "Building staging"
  yarn build:staging
fi
And let's say you called the file vercel.sh; you would then configure Vercel's buildCommand to be sh vercel.sh.
Basically, from the shell script you'll be able to use any of the system environment variables Vercel provides.
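For completeness, here is roughly how that could be wired up in vercel.json (the buildCommand setting can also be set in the project's dashboard; treat this as a minimal sketch assuming the script sits at the repository root):
{
  "buildCommand": "sh vercel.sh"
}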

AWS elasticbeanstalk hooks: postdeploy works, predeploy doesn't

I am using an Elastic Beanstalk app on the Amazon Linux 2 platform, and I need to clone a directory during deployment to get config files for my app.
I added a predeploy hook so that the files are there when the app starts after deployment: /.platform/hooks/predeploy/01_import
After deployment with the predeploy hook, the files are not there.
When I run the exact same script in a postdeploy hook, the files are there.
So the command works, I see the predeploy hook is running (I see the echo text in the log), but the files are not present. Anyone knows why?
#!/bin/bash
mkdir /var/app/current/config
echo Adding github in known hosts
ssh-keyscan -H github.com >> /home/webapp/.ssh/known_hosts
echo Done Adding github in known hosts
echo deleting old flows
echo cloning
git -c core.sshCommand="ssh -i /etc/pki/tls/certs/githubKey" clone -b dev --single-branch <mygithub> /var/app/current/config
echo done cloning
In the predeploy stage, the new code is deployed to /var/app/staging, not /var/app/current.
/var/app/current is actually overwritten by staging if the new staging deployment is successful.
So in predeploy, I've cloned to staging instead, and it works.
This is not well documented in AWS docs; this helped me.
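For reference, a minimal version of the hook adjusted as described above might look like this (a sketch based on the question's script; the repository URL placeholder is left as-is):
#!/bin/bash
# clone into /var/app/staging, which is where the new code lives during predeploy
mkdir -p /var/app/staging/config
echo Adding github in known hosts
ssh-keyscan -H github.com >> /home/webapp/.ssh/known_hosts
echo Done Adding github in known hosts
echo cloning
git -c core.sshCommand="ssh -i /etc/pki/tls/certs/githubKey" clone -b dev --single-branch <mygithub> /var/app/staging/config
echo done cloning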

AWS EB (Elastic Beanstalk) CLI not working in the command line of Git Bash

The AWS EB (Elastic Beanstalk) CLI is not running in Git Bash (Windows 10). I have successfully installed the AWS EB CLI following the AWS documentation at https://github.com/aws/aws-elastic-beanstalk-cli-setup/blob/master/README.md . At the end I set the environment variables as mentioned in the doc, so the "eb" command works from Windows PowerShell. But when I try to access the "eb" command from the Git Bash / IntelliJ bash prompt, it does not work.
Working fine with Windows PowerShell:
PS C:\> eb --version
EB CLI 3.19.2 (Python 3.7.3)
Environment variable set as below under "User Variable" -> "Path":
While trying to access the "eb" from Git Bash the error is as below:
$ eb
bash: eb: command not found
$ echo $PATH
.....
......
/c/Users/xxxxxx/.ebcli-virtual-env/executables:
I have restarted the system and the command-line interfaces multiple times.
Can someone please let me know if there is an issue with the environment variable setup, or do I need to configure something additional in the Bash environment?
After a lot of trial and error with the different solutions available on the internet, along with the AWS doc suggestions, I can finally use "eb" from Git Bash on Windows 10. The problem was fixed after I added the location below to my environment variable Path:
C:\Users\XXXX\AppData\Roaming\Python\Python37\Scripts
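If the Windows Path change still isn't picked up by Git Bash, the same directory can also be appended to PATH in ~/.bashrc (a sketch; the username and Python version are placeholders matching the path above):
# append the EB CLI scripts directory (POSIX-style path) to PATH for Git Bash
echo 'export PATH="$PATH:/c/Users/XXXX/AppData/Roaming/Python/Python37/Scripts"' >> ~/.bashrc
source ~/.bashrc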
The issue for me was a username with a space. The path would then look like this: C:\Users\fname lastname\.ebcli-virtual-env\executables. The problem came about because the .bat files created by the AWS script did not wrap the path in double quotes, so Windows interprets it as multiple parameters.
I had to go edit eb.bat and path_exporter.bat and wrap the directives like this (in eb.bat):
CALL "C:\Users\fname lastname\.ebcli-virtual-env\Scripts\activate.bat"
#start CALL "C:\Users\fname lastname\.ebcli-virtual-env\Scripts\eb.exe" %args%
The EB cli seems to work properly now.

cloud run build and building docker images while building a docker image

I recently found out that a Google Cloud Build happens while the docker image is being built (not, as I thought, that it would build my image and then execute my image to do all the building). That was in this post:
quick start in google cloud build
So, I have a Dockerfile that is really simple now, like so:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
RUN ["/bin/bash", "./downloadAndExtract.sh"]
and I have a single downloadAndExtract.sh that downloads any artifacts (zip files) that were built in the last monobuild run (only modified servers are built, OR servers that depend on changes in the last CI build, e.g. downstream libraries may have changed). The first line just lists the URLs of the zip files I need in a file...
curl "https://circleci.com/api/v1.1/project/butbucket/Twilio/orderly/latest/artifacts?circle-token=$token" | grep -o 'https://[^"]*zip' > artifacts.txt
while read url; do
  echo "Downloading url=$url"
  zipFile=${url/*\//}
  projectName=${zipFile/.zip/}
  echo "Zip filename=$zipFile"
  echo "projectName=$projectName"
  wget "$url?circle-token=$token"
  mv "$zipFile?circle-token=$token" "$zipFile"
  unzip "$zipFile"
  rm "$zipFile"
  cd "$projectName"
  ./deployGcloud.sh
  cd ..
done <artifacts.txt
echo "DONE"
Of course, the deployGcloud.sh script has these commands in it, so this means we are building docker images WHILE building the Google Cloud Build docker image (which still seems funny to me)...
docker build . --tag gcr.io/twix/authservice
docker push gcr.io/twix/authservice
gcloud run deploy staging-admin --region us-west1 --image gcr.io/twix/authservice --platform managed
Both docker commands seem to be erroring out with this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
while the gcloud command seems to be very happy doing a deploy but just using a previous image we deployed at that location.
So, how to get around this error so my build will work and build N images and deploy them all to cloud run?
Oh, I finally figured it out. Google has this weird thing in its cloudbuild.yaml config files of "use this docker image to run a curl command, then on the next step use this OTHER docker image to run some other command", and so on, using 5 different images. This is all very confusing, so instead I realized I had to figure out how to create my ONE docker image and just run it as a command. So I modified the Dockerfile above to have an ENTRYPOINT instead, then docker build and docker push my image into Google. Then I have a cloudbuild.yaml with a single step that just runs that image.
In this way, we can tweak our builds easily within our docker image that is just run. This is way simpler than the complex model that Google had set up, as it becomes: do your build in the container however you like, and install whatever tools you need in the one docker image.
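A rough sketch of that setup, reusing the Dockerfile from the question (the image name in the cloudbuild.yaml step is illustrative, not the asker's actual registry path):
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
COPY . /monobuild/
WORKDIR /monobuild
# run the whole download/build/deploy flow when the image is executed as a build step
ENTRYPOINT ["/bin/bash", "./downloadAndExtract.sh"]
and the single-step cloudbuild.yaml then just runs that image:
steps:
- name: 'gcr.io/my-project/monobuild'  # the image built and pushed above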
i.e. beware the Google quick starts, which honestly, IMHO, really overcomplicate it compared to CircleCI and other systems (of course, that is just an opinion, and to each their own).

GitHub Cloud Build Integration with multiple cloudbuild.yamls in monorepo

GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    for d in */; do
      config="${d}cloudbuild.yaml"
      if [[ ! -f "${config}" ]]; then
        continue
      fi
      echo "Building $d ... "
      (
        gcloud builds submit $d --config=${config}
      ) &
    done
    wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to not only detect changes, but also any services that depend on that change. Here is what we are doing:
Requiring every service to have a Makefile with a build command.
Putting a cloudbuild.yaml at the root of the mono repo.
We then run a custom build step with this little tool (old but still seems to work) https://github.com/jharlap/affected which lists out all packages that have changed and all packages that depend on those packages, etc.
Then a shell script runs make build on any service that is affected by the change (see the sketch below).
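A minimal sketch of that last step, assuming some earlier step has written the affected service directories, one per line, to a file (how that list is produced depends on the change-detection tool you use):
# run the build target of every affected service
while read -r svc; do
  echo "Building $svc"
  make -C "$svc" build
done < affected_services.txt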
So far it is working well, but I totally understand if this doesn't fit your workflow.
Another option many people use is Bazel. Not the most simple tool, but especially great if you have many different languages or build processes across your mono repo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.
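For example, a trigger per service can be created along these lines (flag names may vary slightly between gcloud versions; the repo owner/name and branch pattern are placeholders):
gcloud builds triggers create github \
  --repo-owner="my-org" \
  --repo-name="my-monorepo" \
  --branch-pattern="^main$" \
  --build-config="services/api/cloudbuild.yaml"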

Docker on EC2, RUN command in dockerfile not reading environment variable

I have two Elastic Beanstalk environments on AWS: development and production. I'm running a GlassFish server on each instance, and it is requested that the same application package be deployable in the production and the development environment, without requiring two different .EAR files. The two instances differ in size: dev has a micro instance while production has a medium instance, therefore I need to deploy two different configuration files for GlassFish, one for each environment.
The main problem is that the file has to be in the GlassFish config directory before the server starts, so I thought it would be better to move it while the container was being created.
Of course each environment uses a docker container to host the GlassFish instance, so my first thought was to configure an environment variable for Elastic Beanstalk. In this case:
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my Dockerfile I put this statement in a RUN command:
RUN if [ "$ypenvironment"="pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment"="dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the Elastic Beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again: the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use entrypoint (or at least use a sh -c 'if...' syntax): that is for runtime execution, not compile-time image build.
2/ to use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then docker build --build-arg ypenvironment=dev ... myDevImage
You will build 2 different images (based on the same Dockerfile)
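For instance, the two images could be built from the one Dockerfile like this (the tags are illustrative):
docker build --build-arg ypenvironment=dev -t myapp:dev .
docker build --build-arg ypenvironment=pro -t myapp:pro .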
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile can still declare the variable (ENV needs a default value), for example:
ENV ypenvironment=dev
But you need to run your one image with:
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your entrypoint) includes the if [ "$ypenvironment" = "pro" ] ; then... you mention in your question (note the spaces around =, which the shell test requires), plus the actual launch (in the foreground) of your app.
Your script needs to not exit right away, or your container would switch to exit status right after having started.
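A minimal sketch of such an entrypoint script, reusing the paths from the question (the GlassFish start command and its path are assumptions about the install layout, adjust to yours):
#!/bin/bash
# pick the right GlassFish domain.xml based on the runtime environment variable
if [ "$ypenvironment" = "pro" ] ; then
    mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
    mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
# start the server in the foreground so the container keeps running
exec /usr/local/glassfish/bin/asadmin start-domain --verbose domain1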
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT commands are special build-time commands which tell Docker what command to execute when a container is deployed from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
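A sketch of that sed-based variant, assuming domain.xml contains placeholder tokens you control (the token and variable names here are hypothetical):
#!/bin/bash
# substitute runtime environment values into domain.xml, then start as usual
DOMAIN_XML=/usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
sed -i "s|@MAX_HEAP@|${MAX_HEAP:-512m}|g" "$DOMAIN_XML"
exec /usr/local/glassfish/bin/asadmin start-domain --verbose domain1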