We have set up a CI/CD pipeline in Amplify, and since 22nd Dec the backend build has been failing with the error "Failed to pull the backend", as per the attached screenshot.
Expected behavior: the build should complete successfully. I'm also attaching a screenshot of the last successful build.
I tried to redeploy the last successful build, but that also failed and gave the same error.
Version details:
Node.js: 16.18.1
Amplify CLI Version: 10.5.2
OS: Amazon Linux 2
NOTE: The project works fine locally, and the amplify pull command also runs successfully. Locally I'm using Windows.
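For context, this is the command that works locally versus the step the console runs (a sketch; the app ID and env name are placeholders, and amplifyPush is the helper from the default Amplify build spec):
# works locally on Windows
amplify pull --appId <app-id> --envName <env-name>
# what the Amplify console backend build runs by default, and where the pull fails
amplifyPush --simple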
Thank you.
I use GitLab Runner for running CI jobs on AWS EC2 spot instances, using its autoscaling feature with Docker Machine.
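For context, the runner uses the docker+machine executor and is configured in config.toml roughly like this (a sketch with placeholder URL/token and abbreviated MachineOptions, not the literal file):
concurrent = 10

[[runners]]
  name = "AWS EC2 runner"
  url = "https://gitlab.example.com/"   # placeholder
  token = "xxx"                         # placeholder
  executor = "docker+machine"
  [runners.docker]
    image = "alpine:latest"
  [runners.machine]
    IdleCount = 0
    IdleTime = 1800
    MachineDriver = "amazonec2"
    MachineName = "gitlab-ci-%s"        # %s is replaced with a unique suffix
    MachineOptions = [
      "amazonec2-region=eu-central-1",
      "amazonec2-instance-type=m5.large",
      "amazonec2-request-spot-instance=true"
    ]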
All of a sudden, today GitLab CI fails to run jobs and shows me the following job output for every job I try to start:
Running with gitlab-runner 14.9.1 (f188edd7)
on AWS EC2 runner ...
Preparing the "docker+machine" executor
ERROR: Preparation failed: exit status 1
Will be retried in 3s ...
ERROR: Preparation failed: exit status 1
Will be retried in 3s ...
ERROR: Preparation failed: exit status 1
Will be retried in 3s ...
ERROR: Job failed (system failure): exit status 1
I can see in the AWS console that the EC2 instances do get created, but the instances always get stopped again immediately by GitLab Runner.
The GitLab Runner system logs show me the following errors:
ERROR: Machine creation failed error=exit status 1 name=runner-eauzytys-gitlab-ci-1651050768-f84b471e time=1m2.409578844s
ERROR: Error creating machine: Error running provisioning: error installing docker: driver=amazonec2 name=runner-xxxxxxxx-gitlab-ci-1651050768-f84b471e operation=create
So the error seems to be somehow related to Docker Machine. Upgrading GitLab Runner as well as GitLab's Docker Machine fork to the newest versions does not fix the error. I'm using GitLab 14.8 and tried GitLab Runner 14.9 and 14.10.
What can be the reason for this?
Update:
In the meantime, GitLab have released a new version of their Docker Machine fork which upgrades the default AMI to Ubuntu 20.04. That means that upgrading Docker Machine to the latest version released by GitLab will fix the issue without changing your runner configuration. The latest release can be found here.
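Upgrading is just a matter of replacing the docker-machine binary on the runner manager host (a sketch; take the actual download URL from the release page):
# the URL below is a placeholder - use the Linux x86_64 asset from the release page
sudo curl -L -o /usr/local/bin/docker-machine "<download URL from the release page>"
sudo chmod +x /usr/local/bin/docker-machine
docker-machine version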
Original Workaround/fix:
Explicitly specify the AMI in your runner configuration and do not rely on the default one anymore, i.e. add something like "amazonec2-ami=ami-02584c1c9d05efa69" to your MachineOptions:
MachineOptions = [
"amazonec2-access-key=xxx",
"amazonec2-secret-key=xxx",
"amazonec2-region=eu-central-1",
"amazonec2-vpc-id=vpc-xxx",
"amazonec2-subnet-id=subnet-xxx",
"amazonec2-use-private-address=true",
"amazonec2-tags=runner-manager-name,gitlab-aws-autoscaler,gitlab,true,gitlab-runner-autoscale,true",
"amazonec2-security-group=ci-runners",
"amazonec2-instance-type=m5.large",
"amazonec2-ami=ami-02584c1c9d05efa69", # Ubuntu 20.04 for amd64 in eu-central-1
"amazonec2-request-spot-instance=true",
"amazonec2-spot-price=0.045"
]
You can get a list of Ubuntu AMI IDs here. Be sure to select one that fits your AWS region and instance architecture and is supported by Docker.
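If you prefer the command line over the web page, a query along these lines (assuming the AWS CLI is configured; 099720109477 is Canonical's account ID) returns the most recent Ubuntu 20.04 amd64 AMI for a given region:
aws ec2 describe-images \
  --owners 099720109477 \
  --region eu-central-1 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" \
  --query "sort_by(Images, &CreationDate)[-1].[ImageId,Name]" \
  --output text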
Explanation:
The default AMI that GitLab Runner / the Docker Machine EC2 driver use is Ubuntu 16.04. The install script for Docker, which is available on https://get.docker.com/ and which Docker Machine relies on, seems to have stopped supporting Ubuntu 16.04 recently. Thus, the installation of Docker fails on the EC2 instance spawned by Docker Machine and the job cannot run.
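You can reproduce the failing provisioning step by hand on a throwaway Ubuntu 16.04 instance; this is roughly what the Docker installation boils down to and should show the same failure there:
# roughly the step Docker Machine performs while provisioning the new instance
curl -fsSL https://get.docker.com -o install-docker.sh
sudo sh install-docker.sh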
See also this GitLab issue.
Azure and GCP suffer from similar problems.
Make sure to select an AMI for Ubuntu (not Debian) and that your AWS account is subscribed to it.
What I did:
Subscribe in the AWS Marketplace to an Ubuntu Amazon Machine Image (Ubuntu 20.04 LTS - Focal).
Select "Launch instance", choose the region, and copy the AMI ID shown.
I have had the same issue since yesterday.
It could be related to GitLab releasing 15.0 with breaking changes (going live on GitLab.com sometime between April 23 – May 22)
https://about.gitlab.com/blog/2022/04/18/gitlab-releases-15-breaking-changes/
but there is no mention there of a missing AMI field that needs to be added to MachineOptions.
Adding the AMI field solved the issue on my side.
Just wanted to add as well: go here for the Ubuntu AMI that corresponds to your region. AMIs are region-specific.
As Moritz pointed out:
Adding:
MachineOptions = [
"amazonec2-ami=ami-02584c1c9d05efa69",
]
solves the issue.
All of a sudden I cannot get GCP local cloud builds running.
I've tried updating to the latest versions of the various pieces:
Docker Desktop 2.5.01 Engine 19.03.13
Google Cloud SDK 317.0.0
cloud-build-local 0.5.2
And I have applied all available Windows updates for my current build, 2004 (19041.572).
If I do a dry run, all is successful and there are no issues.
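For reference, the dry run that succeeds is just the same invocation with --dryrun left at its default of true:
cloud-build-local --config=cloudbuild-hosting-prod.yaml --dryrun=true .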
When I do a full run, a busybox container fires up, its status changes to EXITED (0), and that's where it all just stops.
Terminal output below
cloud-build-local --config=cloudbuild-hosting-prod.yaml --dryrun=false .
2020/11/10 15:12:44 Warning: The server docker version installed (19.03.13) is different from the one used in GCB (19.03.8)
2020/11/10 15:12:44 Warning: The client docker version installed (19.03.13) is different from the one used in GCB (19.03.8)
A colleague of mine is having the same issue, yet if I run the same build on my laptop everything works fine, so it's not the YAML file (which hasn't changed anyway) or a code issue.
Docker Desktop 2.5.0.0 Engine 19.03.13
Google Cloud SDK 301.0.0
cloud-build-local
Any advice on how to troubleshoot what the issue could be?
I'm working on a project that I haven't touched in about 4 months. Previously everything in the deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (I don't need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it gives the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on version 1.5 and earlier. Make sure that your Docker client version is above 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
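To confirm which client version the Beanstalk instance is actually running, you can SSH in and ask Docker directly (a sketch; replace my-env with your environment name):
eb ssh my-env        # my-env is a placeholder for your environment name
docker --version     # the reported client version should be newer than 1.5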
We have a Jenkins master/slave configuration and Perforce is installed on all masters. We just had an unrelated incident that made us renew our .p4tickets on all masters and slaves, and we came to find that Perforce had been removed by someone on our team about a week ago without telling anyone.
Our jobs are set up to wipe the workspaces on the slaves completely clean every time a build occurs, so that we can issue a p4 sync every time. We build several times a day. Perforce is installed on both the masters and the slaves.
The problem is that the master that had Perforce missing has been doing builds successfully for a week now.
I have been operating under the assumption that, with the architecture we have, Perforce does a push from the master to the slave, since the jobs are kept on the master. Is this incorrect?
Regards,
-Caolan.
You don't need the Perforce client on the Jenkins master unless it's set up to run builds that need to pull code from Perforce. If all your builds run on slaves, you don't need Perforce on the master.
If you are using the new p4 plugin, you don't need to install any p4 clients on the master or the slaves. The p4 plugin uses the native P4Java API to talk directly to the Perforce server.