Concourse tries to pull docker image using wrong sha256 digest and fails

I am running Concourse 3.10.0, which I installed with the official helm chart, on GKE. I am getting this error, which refers to the wrong sha256:
Pulling registry.hub.docker.com/linkyard/concourse-helm-release#sha256:c47e868ec58fcf81b3b0d597bd10a91fc1908da4c13561e7834584997d1fcb9d...
Error response from daemon: pull access denied for registry.hub.docker.com/linkyard/concourse-helm-release, repository does not exist or may require 'docker login'
If I run docker pull linkyard/concourse-helm-resource:2.8.2-3 locally, it works, but downloads a different sha256.
It looks to me like I have run into issue 33 in Concourse's docker-image-resource, but that was fixed two years ago.
I had a look at the Concourse Dockerfile and the Helm chart, but I couldn't figure out how docker-image-resource gets included in the Concourse deployment.
How can I upgrade docker-image-resource to see if that fixes this bug?

This was actually just a simple typo (concourse-helm-release instead of concourse-helm-resource), but the error messages were misleading.
For future reference, the docker-image-resource is baked into the Concourse Docker image by BOSH, and the relevant version can be found in this file.
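For anyone hitting the same error, a minimal sketch of the corrected resource_type entry is below; the resource_type name is hypothetical, and the tag is the one from the question:

resource_types:
  - name: helm                                        # hypothetical name
    type: docker-image
    source:
      repository: linkyard/concourse-helm-resource    # "resource", not "release"
      tag: 2.8.2-3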


Created a pipeline using AWS Copilot; the original push worked, but when I make changes to the code and push them to GitHub they don't show up

I would appreciate any help with this:
I've followed the guide for AWS Copilot here: https://aws.github.io/copilot-cli/docs/getting-started/first-app-tutorial/ and then the guide for creating a pipeline and connecting it to GitHub here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/. That all appears to have worked, and I can view the React app I'm working on at the URL indicated in AWS.
My problem is that when I make changes to my code and push them to the tracked GitHub branch, the changes don't appear when viewing the app at that URL. However, when I push to GitHub, the pipeline does register that a change has occurred: it indicates that a change has been made and goes through the flow of creating a new build. But whatever I try, the changes don't actually show up.
I assume I'm missing something simple here, and that for some reason Docker is building the app from the original code. But I can't figure out why that would be. Maybe something is wrong with my Dockerfile?
My Dockerfile looks like this:
FROM node:16.14
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm i
COPY . ./
CMD ["npm", "run", "server"]
My understanding of how this should work is that I push new code to GitHub, the AWS pipeline picks it up, a new image is built from that code, and that image is then used to create a container hosted on ECS. But clearly I am missing something.
copilot deploy does work. I'm unsure whether:
1. the pipeline is building successfully (it does not throw an error in the console) and then just not hosting the result at the same URL as copilot deploy, or
2. the pipeline is hitting an error that simply doesn't show up in the pipeline console.
Digging into the logs I find this:
echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
which seems to point towards the second option. Any suggestions on how to resolve whatever is going on in the container, if that is the problem?
The error suggests that I check the build logs, but these are the build logs. Are there more granular build logs I can examine?
When running containers in ECS, unless your container is already crashing because of an error, the service often won't pick up code changes from your new image unless you force a new deployment. You can do this from the command line using the AWS CLI:
aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment --profile <aws_profile_name>
Note that --profile is optional if you're using your default AWS CLI configuration profile.
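If you want to confirm the rollout afterwards, something along these lines (same placeholder names) shows the service's active deployments, so you can see the new task definition replacing the old one:
aws ecs describe-services --cluster <cluster_name> --services <service_name> --query "services[0].deployments"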

AWS CloudFormation: How to run cfn-nag locally on Windows

I have a CloudFormation template with all the resources and details for the project.
I have cfn-lint set up locally and it is running perfectly fine. However, when I push code changes, the build fails at the deployment stage because cfn-nag flags some simple issues that could easily be fixed.
I'm using a Windows machine and I need a way to run cfn-nag locally, so that I can check these issues just like I do with cfn-lint and fix them locally, instead of waiting 40 minutes for the build to reach the deployment stage.
I looked at several posts online and found these two helpful:
https://stelligent.com/2018/03/23/validating-aws-cloudformation-templates-with-cfn_nag-and-mu/
https://github.com/stelligent/cfn_nag
What is the difference between cfn-nag and cfn-lint, and why does lint not fail on what cfn-nag is complaining about?
The links above have some instructions involving Ruby and Homebrew, but I'm using Node.js and felt lost. Please help.
cfn-nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure, for example:
IAM rules that are too permissive (wildcards)
Security group rules that are too permissive (wildcards)
Access logs that aren't enabled
Encryption that isn't enabled
cfn-lint scans an AWS CloudFormation template by processing a collection of rules, where each rule handles a specific check or validation of the template. It validates against the AWS CloudFormation resource specification, and the collection of rules can be extended with custom rules using the --append-rules argument.
Examples: whitespace, alignment (YAML), type checks, valid values for resource properties, and other best practices.
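As a rough illustration of the difference, assuming cfn-lint is installed via pip and cfn_nag via its Ruby gem (which provides the cfn_nag_scan executable), the two are invoked roughly like this against a placeholder template.yaml:
cfn-lint template.yaml
cfn_nag_scan --input-path template.yaml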
Those two links you provided above have all the information needed, just not presented directly for a Node.js developer on a Windows machine.
Step 1: Pull the Docker image stelligent/cfn_nag.
Step 2: Add a script for cfn-nag to your package.json, for example:
"scripts": {
  "cfn:nag": "cfn-nag"
}
If you're using docker-compose.yml, add the cfn_nag image details like below:
cfn-nag:
  image: "stelligent/cfn_nag"
  volumes:
    - ./path_of_cfn_file_to_copy:/path_to_copy_to
  command: ${COMMAND:-/path_to_copy_to/cfn_file}
Then set the script in package.json to run via docker-compose:
"cfn:nag": "docker-compose run --rm cfn-nag"

yum fails to fetch mirror list 403 Amazon Linux

Edit: seems to be working now. Discussion here https://forums.aws.amazon.com/thread.jspa?threadID=344200
I'm finding that Amazon Linux's yum cannot retrieve the mirror list, failing with a 403 error.
Going to http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list in a browser does indeed produce a 403 error.
This is running from a local Docker environment, so no S3 VPC endpoint is involved.
What can I do about this?
To reproduce:
docker run -it --entrypoint bash amazonlinux:latest
yum update
This produces the following:
bash-4.2# yum update
Loaded plugins: ovl, priorities
Could not retrieve mirrorlist http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list error was
14: HTTP Error 403 - Forbidden
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
((truncated long output))
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
It would seem the files on AWS's S3 bucket at this location have been removed or the access revoked.
This has now been resolved by AWS.

Docker executable not found in PATH when using AWS Batch/ECS

I am trying to run a simple Dockerized Python script with AWS batch.
Is there a problem with my Docker image?
I have built the Docker image locally and it runs fine. I pushed the image to an AWS repository, and pulling this remote image to my local machine also runs correctly.
Problem
I have set up my compute environment, job queue, and job definition, but I get this error
CannotStartContainerError: Error response from daemon:
OCI runtime create failed: container_linux.go:370:
starting container process caused:
exec: "docker": executable file not found in $PATH: unknown
when I run
["docker","run","-t","111111111111.dkr.ecr.us-region-X.amazonaws.com/myimage:latest","python3","hello_world.py","--MSG","ok"]
Is Docker installed?
I am using the ECS_AL2 image type. When I start an EC2 instance with this AMI and ssh into it, I can see that Docker is already installed; docker run works fine, for instance.
Is there a (generic) problem with my compute environment, job queue, or job definition?
When I instead try to run the command echo hello, this works fine.
Appreciate any advice/help you can provide.
UPDATE - ANSWER
#samtoddler helped me to realize that I only needed
["python3","hello_world.py","--MSG","ok"]
in the Command field of the job definition.
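For context, a rough sketch of the relevant part of the job definition (the job definition name is made up, and the image URI and resource values are placeholders carried over from the question):
{
  "jobDefinitionName": "hello-world",
  "type": "container",
  "containerProperties": {
    "image": "111111111111.dkr.ecr.us-region-X.amazonaws.com/myimage:latest",
    "command": ["python3", "hello_world.py", "--MSG", "ok"],
    "resourceRequirements": [
      { "type": "VCPU", "value": "1" },
      { "type": "MEMORY", "value": "2048" }
    ]
  }
}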
This error:
CannotStartContainerError: Error response from daemon:
means it is coming from the Docker daemon, so Docker is doing its job.
It seems you have some trouble with your Docker image: how it is packaged and how you are trying to pass all those arguments.
Please check the Docker Image CMD section on how to use ENTRYPOINT and CMD.
There is some explanation in this question: docker-oci-runtime-create-failed-container-linux-go349-starting-container-pro
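As an illustration of that ENTRYPOINT/CMD split (a sketch only, since the original Dockerfile isn't shown), you can bake the interpreter and script into ENTRYPOINT and leave only the default arguments in CMD, so the Batch Command simply overrides the arguments:
FROM python:3.9-slim
WORKDIR /app
COPY hello_world.py .
# Fixed part of the invocation
ENTRYPOINT ["python3", "hello_world.py"]
# Default arguments, overridden by the Batch job's Command (e.g. ["--MSG", "ok"])
CMD ["--MSG", "hello"]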

Setting up JupyterHub Docker using one of the Jupyter stacks

I'm trying to get JupyterHub up and running. Python 2.7 kernels are required, so basically anything from the docker-stacks repo would be great. The documentation mentions that these images can work with JupyterHub using DockerSpawner, but I can't quite see how it all fits together. Is anyone aware of a simple step-by-step guide to get this working?
To use any Docker image, first pull it from Docker Hub: docker pull jupyter/scipy-notebook
Now install DockerSpawner: pip install dockerspawner
Then add the necessary lines to jupyterhub_config.py
(https://github.com/jupyterhub/dockerspawner/blob/master/README.md)
To use a specific Docker image, this line does the magic: c.DockerSpawner.image = 'jupyter/scipy-notebook'
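Putting it together, a minimal jupyterhub_config.py sketch might look like the following; the Docker network name is an assumption and should match whatever network your Hub container is attached to:
# jupyterhub_config.py (sketch)
c = get_config()

# Spawn each user's notebook server in its own Docker container
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'

# The docker-stacks image to launch for each user
c.DockerSpawner.image = 'jupyter/scipy-notebook'

# Containers need to reach the Hub; 'jupyterhub' is an assumed Docker network
# shared with the Hub container
c.DockerSpawner.network_name = 'jupyterhub'
c.JupyterHub.hub_ip = '0.0.0.0'

# Remove stopped user containers instead of leaving them around
c.DockerSpawner.remove = True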