Publishing a microservice in AWS ECS

I"m trying to push a microservice in a ECS Cluster in AWS, following this tutorial:
https://aws.amazon.com/pt/getting-started/projects/break-monolith-app-microservices-ecs-docker-ec2/module-one/
I cloned the repository, logged in to AWS from the AWS CLI, and ran the commands step by step.
Then I receive the message "no basic auth credentials".
Has anybody faced this issue?

This happens because you haven't authenticated your Docker client to your registry.
To solve this, go to the ECR console in AWS and open your repository. There you should find a button called View push commands. It gives you ready-to-use, copy-and-paste commands to authenticate, build, tag and push your image to ECR. The commands are provided for Linux, macOS and Windows.
The description of the commands for authentication is here: https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth
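For reference, with recent versions of the AWS CLI those commands typically look like the following (a sketch with placeholder values: account ID 123456789012, region us-east-1, repository my-repo):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t my-repo .
docker tag my-repo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest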

Related

Uploading Docker Images to AWS ECR

I am trying to create a workflow where developers in my organisation can upload Docker images to our AWS ECR. The following commands work:
Step-1: Get Token
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <repo-url>
Step-2: Tag the already built image
docker tag <local-image:tag> <ecr-repo-url>:latest
Step-3: Finally Push
docker push <ecr-repo-url>:latest
Now this works absolutely fine.
However, as I am trying to automate the above steps, I will NOT have the AWS CLI configured on end users' machines, so Step-1 will fail for the end user.
So two quick queries:
Can I get the token from a remote machine so that Step-2 and Step-3 can happen from the client?
Can I do all three steps remotely, with a service that uploads the local Docker image to the remote server, which in turn takes care of tag and push?
I'm hoping that the end user will have Docker installed.
In that case you can make use of the AWS CLI Docker image to obtain the token from ECR.
The token itself is just a temporary password, so it is valid as Docker credentials regardless of whether the AWS CLI that generated it ran on the remote server or elsewhere.
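A minimal sketch of that approach, using the official amazon/aws-cli image and forwarding credentials through environment variables (the placeholders mirror the ones in the question):

docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY amazon/aws-cli ecr get-login-password --region <region> | docker login --username AWS --password-stdin <repo-url>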
You also have the option of using an AWS SDK such as Boto3, packaged with a small application to perform this action, although you would need to ensure that the host has the relevant language runtime available.
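For example, a minimal Boto3 sketch (the function name and default region are illustrative; credentials must be available through the usual provider chain):

import base64
import boto3

def get_ecr_credentials(region="us-east-1"):
    # GetAuthorizationToken returns a base64-encoded "AWS:<password>" pair,
    # valid for 12 hours, plus the registry endpoint to log in against.
    ecr = boto3.client("ecr", region_name=region)
    auth = ecr.get_authorization_token()["authorizationData"][0]
    user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
    return user, password, auth["proxyEndpoint"]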
Alternatively, if you want this to be automated, you could look at using a CI/CD pipeline.
GitHub has Actions, Bitbucket has Pipelines and GitLab has arguably the most CI/CD functionality built in. These services would perform all of the above actions for you.
As a final suggestion, you could use CodeBuild within a CodePipeline to build your image and then tag and push it to ECR for you. This is run automatically by a trigger and does not require any permanent infrastructure.
More information about this option is available in the article Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source.

Launch Jupyter Notebooks in AWS Sagemaker from a Custom Webapplication

We have a requirement where we are building a web portal/platform that will use services from AWS and Git, as both will host certain content to allow users to search for certain artifacts.
We also want to allow a user, after they have searched for certain artifacts (let's say certain Jupyter notebooks), to launch these notebooks from our web application. Note that the notebooks are in a different domain, i.e. the AWS Console application hosts them.
Now, when a user clicks on a notebook link from the web portal search, it should open the Jupyter notebook in a notebook instance in a new tab.
We understand there is an integration between AWS SageMaker and Git, so repos that store notebooks can be configured. When the user performs a search in the web app, it picks up the results from a GitHub API call.
The same repos can also be added to the SageMaker-GitHub integration through the AWS Console, so when a user launches a notebook they will see the GitHub repos as well.
I understand we can call the SageMaker API either through an SDK or a REST API (not sure there is a REST API interface; still exploring that). See a CLI call example:
aws sagemaker create-presigned-notebook-instance-url --notebook-instance-name notebook-sagemaker-git
This gives me a response URL: "AuthorizedUrl": "https://notebook-sagemaker-git.notebook.us-east-2.sagemaker.aws?authToken=eyJhbGciOiJIUzI1NiJ9..."
However, when I open this URL it asks me again for the AWS Console username and password. I feel that when a user logs in to the web app, they have already authenticated through the AWS API as well as the Git API.
So there should be no need to re-authenticate when they connect to the AWS Console to access their notebooks.
Is this something that can be circumvented using single sign-on, etc.?
thanks,
Aakash
The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the AWS console sign-in page. See https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreatePresignedNotebookInstanceUrl.html
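For a web application, a minimal Boto3 sketch would generate the URL on the server immediately before redirecting the browser (the instance name is taken from the question; the session expiration value is illustrative):

import boto3

# The returned URL must be opened within 5 minutes, so generate it on
# demand, right before redirecting the user's browser to it.
sm = boto3.client("sagemaker", region_name="us-east-2")
resp = sm.create_presigned_notebook_instance_url(
    NotebookInstanceName="notebook-sagemaker-git",
    SessionExpirationDurationInSeconds=1800,
)
redirect_url = resp["AuthorizedUrl"]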
Jun

AWS Amplify with repository in different account - assume role

I have gone through the documents and couldn't find a solution for this.
I have two accounts, dev and prod. My Amplify app exists in dev but the CodeCommit repository exists in prod. Is there any way to connect them?
I have configured assume-role and have also tried using temporary credentials in a different profile, connecting with:
aws amplify create-app --name app-name-in-dev --repository repo-in-prod
aws amplify create-app --name app-name-in-dev --repository repo-in-prod --iam-service-role-arn arn:aws:sts::prod:assumed-role/CrossAccountRepositoryContributorRole/cross-account
The problem remains the same. It seems impossible to connect Amplify with CodeCommit unless the repository and the Amplify app exist in the same account.
Is there any way to achieve this, or is it really not configurable?
references:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html
https://forums.aws.amazon.com/thread.jspa?threadID=300224
In case anyone comes looking for the same:
After creating a ticket with AWS, I received a response that this is not currently possible, as Amplify is still a newer service and only allows repositories from the same account.
I have tried setting this up at my end and observed the same: I was able to connect only to repositories in the same account. I did further research and can confirm that, currently, we cannot integrate a cross-account CodeCommit repository with Amplify applications.

Problem mirroring Bitbucket repo to GCP Cloud Source Repo

I'm attempting to set up CI/CD for a GCP Cloud Function and an App Engine deployment. The repo is in Bitbucket and I am following the instructions found here to create a mirror between my Bitbucket repo and a GCP Cloud Source repo.
Using the GCP Cloud Source "Connect external repository" UI, I am able to select my GCP project, select Bitbucket as the Git provider, connect to Bitbucket using my credentials (I am an admin on the Bitbucket repo), and select the desired Bitbucket repo. Then, when I click "Connect selected repository", there is about a 30s delay and finally a simple "Failed to connect repositories" error message with no further explanation as to why. GCP logging shows nothing.
Any ideas would be appreciated.
Thanks
Ensure that you have enabled the source repos API. In retrospect this is obvious, because the Bitbucket webhooks need to call out to Google's API to announce when changes occur on the repo.
The GCP API is called Cloud Source Repositories API, and the service name is sourcerepo.googleapis.com
https://console.cloud.google.com/apis/api/sourcerepo.googleapis.com
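If you prefer the command line, the API can also be enabled with gcloud (assuming the Cloud SDK is installed and pointed at the relevant project):

gcloud services enable sourcerepo.googleapis.com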

Kubernetes 1.2alpha8 AWS Container Registry Integration

I was trying to integrate Kubernetes with the AWS Container Registry. From what I have read, it sounds like it should be set up automatically if the cluster is deployed to AWS, which mine is.
I also granted the IAM roles the necessary permissions to pull from ECR, but I still get "unable to pull image" when trying to deploy on Kubernetes. It also says authentication failed.
Really, I just wanted to see if anyone else is having issues, or if someone was able to pull an image from AWS ECR and how they accomplished it.