I am creating a CodeBuild job to run AWS canary tests. I have added the command start-canary --name to run the tests, but the build fails with "command not found".
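For context, start-canary is not a standalone binary but a subcommand of the AWS CLI's synthetics namespace, so the fully qualified call in the buildspec would look something like this (a sketch with a hypothetical canary name):
aws synthetics start-canary --name my-canary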
This is a strange one.
A Google Cloud Run deployment run from the gcloud command line on my macOS machine works, while the identical command, run with the identical gcloud version using a service account inside our Alpine-based CI/CD GitLab runner container/executor, crashes and complains about unrecognized arguments.
With the arguments copied and pasted, why does gcloud (within the Alpine GitLab runner/executor container) fail with unrecognized arguments when my local install works fine?
As background:
We run CI/CD within a GitLab CI runner whose Docker executor deploys our final container. It previously used kubectl to push that container to a GCP-managed Kubernetes cluster, which was expensive, so we moved the production container to Cloud Run, which was cheaper.
Now I am working on resetting our CI/CD deployments and ran into the above issue while attempting to deploy a container from within our GitLab CI pipeline.
The gcloud command that works looks like this (on my local Mac):
gcloud run deploy site-production \
--platform=managed \
--allow-unauthenticated \
--image=us.gcr.io/some-site-333333/site:master \
--region=us-east1
That same (EXACT) command on the GitLab runner gets me:
ERROR: (gcloud.run.deploy) unrecognized arguments:
--platform=managed
--allow-unauthenticated
--image=us.gcr.io/some-site-333333/site:master
--region=us-east1
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
Seems super weird, and I was pretty sure I must have had a typo or something, but the command itself was copied (and modified) from Google's own Cloud Run docs.
If I am missing something dumb, let me know. Until then, my plan is to start shaving off optional flags to see which of those parameters it's complaining about. Ideas are appreciated!
Try making it a one-liner command, like:
gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1
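One plausible cause (an assumption, not something the error proves): if the multi-line version was pasted into a .gitlab-ci.yml script: block, YAML folding or CRLF line endings can break the backslash continuations, so the shell hands gcloud the remaining flags as literal text rather than separate arguments. The one-liner sidesteps that entirely, e.g. in a sketch of the CI job (hypothetical job name):
deploy-production:
  script:
    - gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1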
I'm running AWS SAM, and when I use sam build --use-container I get the following error:
Starting Build inside a container
Building function 'SamTutorialFunction'
Build Failed
Error: Docker is unreachable. Docker needs to be running to build inside a container
I ran sudo service docker start beforehand and still get the same error.
I had the same issue. The problem was that Docker was installed to run as the root user, while AWS SAM tries to access it as your logged-in user. You can set Docker to run as a non-root user (without sudo) by adding your user to the docker group. See https://docs.docker.com/engine/install/linux-postinstall/
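For example, the relevant commands from the linked post-install guide:
sudo groupadd docker            # create the docker group if it does not exist yet
sudo usermod -aG docker $USER   # add your login user to the docker group
newgrp docker                   # pick up the new group membership without logging out
docker run hello-world          # verify docker now works without sudo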
If you are running Ubuntu on WSL2, you need to enable the integration between Docker and WSL2 in order to run
sam build --use-container
Steps:
Download Docker Desktop https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe
Go to Settings => Resources => WSL integration.
Check "Enable integration with additional distros" and turn it on for your distro.
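After enabling the integration, a quick sanity check from inside the Ubuntu distro before retrying the build:
docker version            # should print both Client and Server sections
sam build --use-container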
I installed a third-party tool (ecs-deploy, via pip install ecs-deploy). When I deploy using the command ecs deploy demo-cluster demo-service in a command prompt it works fine, but when I try to deploy with Jenkins I get this error:
/tmp/jenkins5062380414579854312.sh: line 13: ecs: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
The Jenkins service typically runs under the user jenkins.
You have installed the package as the ec2-user, so the jenkins user may not have the package on its own PATH, or may lack permission to execute the file.
You can correct this in one of two ways (see the sketch after this list):
Use sudo to elevate permissions and install the package globally, then set the path in /etc/environment.
Interactively log in as the jenkins user and install the package under that account.
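A minimal sketch of both options, assuming the pip-based install from the question:
# Option 1: install globally with elevated permissions
sudo pip install ecs-deploy
# Option 2: install under the jenkins account itself
sudo su - jenkins
pip install --user ecs-deploy   # lands in ~/.local/bin, which must be on the jenkins user's PATH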
You need to run the full AWS CLI command:
aws ecs deploy --cluster demo-cluster --service demo-service
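Note that the AWS CLI's native ecs deploy subcommand also expects a task definition and a CodeDeploy appspec, so a fuller sketch (with hypothetical file names) would be:
aws ecs deploy --cluster demo-cluster --service demo-service --task-definition task-def.json --codedeploy-appspec appspec.yaml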
I have a service on Google Cloud Run that I am able to deploy manually through the Google Cloud Console UI using an image in Container Registry, but deployment from the CLI fails. Here is the command I am using and the error I get; I am not able to understand what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: spec.revisionTemplate.spec.container.ports should be empty
field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue
Here's my SDK version:
$ gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multi-stage Docker build. Here's my Dockerfile:
# Stage 1: build a fully static Linux binary
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    /usr/local/bin/go build -a \
    -ldflags '-w -extldflags "-static"' \
    -o /root/go/bin/executable ./cmds/project/main.go

# Stage 2: copy just the binary into a minimal Alpine runtime image
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
CMD executable
I had this same problem, and I assume it was because my Cloud Run service had originally been created before some update, back before I had run gcloud components update.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via the terminal). I noticed that the ports: definition disappeared from the YAML once I did this.
After this I could do deployments normally.
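To confirm the stale ports: block is really gone, you can inspect the live service spec; a sketch using the placeholder names from the question:
gcloud run services describe service-name --platform managed --region region-name --format yaml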
This was a bug in Cloud Run. It has been fixed, and deploying with the CLI is working for me now. Here's the link to the issue I raised with Google Cloud, which has a response from them: https://issuetracker.google.com/issues/144069696
I have installed the AWS CLI on my Windows slave in Jenkins. To verify it, I run the following command on the command line of the Windows machine and get this output:
C:\> aws --version
aws-cli/1.11.122 Python/2.7.9 Windows/2008ServerR2 botocore/1.5.85
I am running an AWS CLI command in an Execute Windows batch command step in the Jenkins job, and the job fails for the following reason:
C:\Users\ADMINI~1\AppData\Local\Temp\2\hudson1929374596375903011.sh: line 6:
aws: command not found
Build step 'Execute shell' marked build as failure
The aws command I am running is:
aws cloudformation validate-template --template-body file://file1.json
I also checked the PATH variable on the Windows machine, and it contains the AWS CLI path.
My goal is to run AWS CLI commands via a Jenkins job. Can somebody help me with this?
It's possible that Jenkins has a different %PATH% from the one you see when you are logged in.
Try finding your path via Jenkins: create a job, and in the script that runs, echo %PATH% to see what Jenkins thinks your path is.
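For example, in a batch build step (where is the Windows counterpart of which):
echo %PATH%
where aws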
You can modify Jenkins' environment variables, including %PATH%, see https://stackoverflow.com/a/5819768/8207662
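As a quick workaround you can also prepend the path inside the build step itself; a sketch assuming the default AWS CLI v1 install location (adjust to your machine):
set PATH=%PATH%;C:\Program Files\Amazon\AWSCLI
aws cloudformation validate-template --template-body file://file1.json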