I recently migrated my application from one AWS account to another, during which the docker registry URL changed from docker.app.prod.aws.abc.com to docker-hd.app.prod.aws.abc.com.
We have GitLab application hosted on AWS.
The runners/instances on AWS can push and pull images from the new docker registry without any issues, but the on-prem runners fail with a "Forbidden" error.
Can someone please help me fix this issue?
I have updated the DNS records for the new docker registry but am still getting the forbidden error.
It was working fine before the migration.
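Since the AWS runners work and only the on-prem runners fail, a few checks run from an on-prem machine may narrow it down. This is only a diagnostic sketch: the registry hostnames are taken from the question, and everything else is an assumption.

```shell
# 1. Confirm DNS resolves the new registry to the expected address
#    (on-prem resolvers may still be serving a stale or split-horizon record).
nslookup docker-hd.app.prod.aws.abc.com

# 2. Check what the registry itself returns. A 403 here usually points to an
#    IP allow-list / firewall rule that only admits AWS-side traffic;
#    a 401 points to missing or bad credentials.
curl -sv https://docker-hd.app.prod.aws.abc.com/v2/ -o /dev/null 2>&1 | grep "< HTTP"

# 3. Re-authenticate against the new hostname explicitly. A login saved in
#    ~/.docker/config.json for the old registry URL does not carry over.
docker logout docker.app.prod.aws.abc.com
docker login docker-hd.app.prod.aws.abc.com
```

If step 2 returns 403 even with valid credentials, compare the security group / firewall rules for the new registry against the old one: the migration may have dropped the rule that admitted your on-prem IP range.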
I am very new to working with AWS, but I am trying to set up an ECS service on EC2, connected to a GitHub Action that deploys my Python app to the service.
I am currently creating an ECS cluster [as described by GitHub][1].
During the creation of said cluster the setup asks me for an Image (`repository-url/image:tag`).
What does that mean exactly? I've been looking online for multiple hours but don't understand where I can find said image.
Filling in `12345.dkr.ecr.us-east-2.amazonaws.com/My-Repo:latest` returns a `CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref, not found`.
Could someone help me understand?
Edit: I am completely new to AWS, so I apologise if any info is missing; I can add whatever is needed to the post.
That would be the docker image (docker image repository and image tag) to deploy to your ECS service. You can't just make that up; it has to be a repository and image that already exist. You should be creating a docker image that contains your Python app and pushing that image to an image repository somewhere, such as AWS ECR. You need to do that before you look into deploying anything on AWS ECS.
Also, you may be overcomplicating things a lot by using EC2 instead of Fargate.
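As a sketch of that workflow (account ID, region, and repository name are placeholders; adjust to yours). Note that ECR repository names must be lowercase, so `My-Repo` as written in the question would be rejected:

```shell
AWS_ACCOUNT=123456789012
REGION=us-east-2
REPO=my-repo   # ECR repo names must be lowercase

# Create the repository (once).
aws ecr create-repository --repository-name "$REPO" --region "$REGION"

# Authenticate docker with ECR (the token is valid for 12 hours).
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

# Build, tag, and push the image containing your Python app.
docker build -t "$REPO:latest" .
docker tag "$REPO:latest" "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```

The value the ECS setup then expects for `repository-url/image:tag` is what you pushed, e.g. `123456789012.dkr.ecr.us-east-2.amazonaws.com/my-repo:latest`.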
I am trying to deploy Strapi to the AWS Fargate serverless environment through GitLab CI, using an AWS Aurora MySQL database for the DB integration. The database is up and running properly, but when my CI/CD pipeline deploys to Fargate, the app cannot connect to my DB. I am injecting env variables into the Task Definition through Secrets Manager.
I am getting the error "getaddrinfo ENOTFOUND" in the CloudWatch logs. Not sure what to do, as everything goes through CI/CD only. Do I need to mention anything about my database in the database.js or server.js file, in the Dockerfile, or in the GitLab CI configuration?
I know this is a very specific setup, but I would appreciate it if anyone could help me out.
Thanks,
Tushar
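getaddrinfo ENOTFOUND usually means the hostname handed to the MySQL client never resolved, i.e. the DATABASE_HOST value the container actually sees is empty or wrong. One thing to check is that database.js reads the values the task definition injects. Below is a sketch of a Strapi v4 config/database.js; the file layout follows Strapi conventions, but the variable names are assumptions and must match the names in your task definition's `secrets` section:

```shell
# Sketch: write a Strapi v4 database config that reads the DB connection
# from the environment (the values injected by the task definition).
cat > config/database.js <<'EOF'
module.exports = ({ env }) => ({
  connection: {
    client: 'mysql',
    connection: {
      // Must be the Aurora cluster endpoint; if this env var is missing,
      // the client tries to resolve an empty/garbage hostname -> ENOTFOUND.
      host: env('DATABASE_HOST'),
      port: env.int('DATABASE_PORT', 3306),
      database: env('DATABASE_NAME'),
      user: env('DATABASE_USERNAME'),
      password: env('DATABASE_PASSWORD'),
      ssl: env.bool('DATABASE_SSL', false),
    },
  },
});
EOF
```

Also verify the secret's value really is the Aurora endpoint hostname (not a full JDBC URL or an empty string), and that the Fargate task runs in subnets that can resolve and reach it.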
I have created a website with VS Code in Node.js, using TypeScript.
Now I want to deploy it on AWS. I have read so many things about EC2, Cloud9, Elastic Beanstalk, etc...
So I'm totally lost about what to use to deploy my website.
Honestly I'm a programmer, not a site manager or sysops.
Right now I have created two EC2 instances: one with a key name and one with no key name.
In the Elastic Beanstalk, I have a button Upload and Deploy.
Can someone show me how to turn my project into a valid package to upload and deploy?
I have never deployed a website (normally the sysops did it at my job), so I don't know how to produce a correct distribution package.
Do I need to create both the EC2 instances and a Beanstalk application?
Thanks
If you go with Elastic Beanstalk, it will take care of creating the EC2 instances for you.
It actually takes care of creating EC2 instances, a DB, load balancers, CloudWatch monitoring and much more. That is pretty much what it does: it bundles multiple AWS services and offers one panel of administration.
To get started with EB you should install the EB CLI.
Then you should:
- go to your directory and run `eb init application-name`. This starts an EB CLI wizard asking in which region you want to deploy, what kind of DB you want and so on
- after that, run `eb create envname` to create a new environment for your newly created application
- at this point, head to the EB AWS panel and configure the start command for your app; it is usually something like `npm run prod`
- because you're using TS, there are a few steps to take before you can deploy: run `npm run build`, or whatever command you have for transpiling from TS to JS. You'll be deploying the compiled scripts, not your source code
- now you are ready to deploy: run `eb deploy`. As this is your only env it should work; when you have multiple envs you can do `eb deploy envname`. For a list of all envs, run `eb list`

There are quite a few steps to take care of before deploying, and any of them can cause issues.
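The steps above as a command sketch (the application, environment, and directory names are placeholders):

```shell
pip install awsebcli       # install the EB CLI

cd my-website
eb init my-application     # wizard: region, platform, DB, ...

npm run build              # transpile TS -> JS before deploying

eb create my-env           # create and deploy a new environment
eb deploy                  # redeploy; with multiple envs: eb deploy my-env
eb list                    # list all environments
```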
If your website contains only static pages you can use Amazon S3 to deploy your website.
You can put your build files in S3 bucket directly and enable static web hosting.
This will allow anyone to access your website from a URL globally; for this you also have to make your bucket public.
Alternatively, you can use CloudFront to keep your bucket private while still allowing access to the bucket through the CloudFront URL.
You can refer to the links below for hosting a website through S3.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
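A minimal sketch with the AWS CLI (the bucket name and build directory are placeholders; for the public-bucket approach you still need a public-read bucket policy, and with the CloudFront approach you skip that and attach an origin access control instead):

```shell
# Create the bucket.
aws s3 mb s3://my-website-bucket

# Upload the build output.
aws s3 sync ./build s3://my-website-bucket

# Enable static website hosting with index/error documents.
aws s3 website s3://my-website-bucket \
  --index-document index.html --error-document error.html
```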
I have an image on Amazon's Elastic Container Registry (ECR) that I want to deploy as a Docker service in my Docker single-node swarm. Currently the service is running an older version of the image's latest tag, but I've since uploaded a newer version of the latest tag to ECR.
Running docker service update --force my_service on my swarm node, which uses image XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest, results in:
image XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest could not be accessed on a registry to record its digest. Each node will access XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest independently,
possibly leading to different nodes running different versions of the image.
This appears to prevent the node from pulling a new copy of the latest tag from the registry, and the service from properly updating.
I'm properly logged in with docker login to ECR, and running docker pull XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest works fine (and returns a digest).
Why is docker service update unable to read the digest from the ECR registry despite the image being available?
I had the same problem, but I solved it by using --with-registry-auth.
After you have logged in with docker login, try running the same update command with --with-registry-auth:
https://github.com/moby/moby/issues/34153#issuecomment-316047924
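A sketch of the full sequence, assuming an ECR registry in us-east-1 as in the question (run on the swarm manager; ECR login tokens expire after 12 hours, so the login must be fresh):

```shell
# Refresh ECR credentials.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com

# --with-registry-auth forwards those credentials to the swarm, so the
# manager can resolve the image digest and each node can pull the new tag.
docker service update --force --with-registry-auth my_service
```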
I followed this and this to create a WordPress website through AWS Elastic Beanstalk.
What I did:
1. Create a WordPress site in localhost
2. Create an app in Elastic Beanstalk with RDS
3. Export the database locally and import it into RDS
4. Initialize git in the WordPress folder locally
5. Download the Elastic Beanstalk command line tool and add it to the WordPress folder
6. Run git aws.config
7. Run git aws.push
It worked well until step 7, where I got the following error message:
Updating the AWS Elastic Beanstalk environment mywordpress-env... Error: Failed to get the Amazon S3 bucket name
Can anybody explain what this means, and how to solve the problem? Thanks.
I had this problem today, when my Ubuntu laptop didn't update the time automatically, so there was a one-hour difference compared to the real time :).
I hope you solved the problem before my answer :)
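For context: AWS request signing rejects requests from clients whose clocks are skewed by more than a few minutes (a RequestTimeTooSkewed error), which older tooling can surface as a generic S3 failure like the one above. On Ubuntu, a sketch of checking and re-enabling time sync:

```shell
# Check whether the system clock is NTP-synchronized.
timedatectl status

# Re-enable automatic time synchronization.
sudo timedatectl set-ntp true
```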