How to remove the server list from AWS Migration Hub - amazon-web-services

I tried to use AWS Migration Hub to practice a migration from Azure to AWS, and used the AWS Application Migration Service (MGN) for the practice run.
The migration was unsuccessful and I aborted it before discovery had completed and before replication was initiated.
I've cleaned up all the source servers, but I am unable to delete or remove the replicated servers from Migration Hub.
They are still showing up under the "Data collectors" tab. I have attached screenshots for reference.
Any help here would be great.
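
If the console won't let you clear them out, one option is to remove the leftover source servers through the Application Migration Service API and stop any discovery agents still feeding the "Data collectors" tab. A rough sketch with boto3 (assuming the servers were registered through MGN and your credentials/region are already configured; all IDs come from the list calls):

```python
import boto3

mgn = boto3.client("mgn")              # Application Migration Service
discovery = boto3.client("discovery")  # Application Discovery Service ("Data collectors")

# Disconnect and delete every source server MGN still knows about.
for server in mgn.describe_source_servers(filters={})["items"]:
    server_id = server["sourceServerID"]
    mgn.disconnect_from_service(sourceServerID=server_id)  # tears down replication resources
    mgn.delete_source_server(sourceServerID=server_id)     # removes the entry entirely
    # mgn.mark_as_archived(sourceServerID=server_id)        # alternative: just hide it

# Stop data collection for any discovery agents still listed.
agent_ids = [a["agentId"] for a in discovery.describe_agents()["agentsInfo"]]
if agent_ids:
    discovery.stop_data_collection_by_agent_ids(agentIds=agent_ids)
```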

Related

Unable to push or pull images to/from private docker registry

I recently migrated my application from one AWS account to another, and as part of that I changed the docker registry URL from docker.app.prod.aws.abc.com to docker-hd.app.prod.aws.abc.com.
We have a GitLab application hosted on AWS.
The runners/instances that are on AWS can push and pull images from the new docker registry without any issues, but the runners that are on-prem are getting a "Forbidden" error.
Can someone please help me fix this issue?
I have updated the DNS records for the new docker registry but am still getting the forbidden error.
It was working fine before the migration.
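
To narrow down whether this is a DNS or an authorization problem, one quick check from an on-prem runner is to resolve the new name and hit the Docker Registry v2 ping endpoint directly. This is only a diagnostic sketch (the hostname is the one from the question; everything else is generic): a DNS failure points at the on-prem resolver, a 401 means the registry is reachable but wants credentials, and a 403 "Forbidden" usually means something in front of the registry (an ALB rule, WAF, or IP allowlist) is rejecting the on-prem source address.

```python
import socket
import urllib.error
import urllib.request

REGISTRY = "docker-hd.app.prod.aws.abc.com"  # new registry name from the question

# 1. Does the on-prem resolver know the new name at all?
try:
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(REGISTRY, 443)})
    print(f"{REGISTRY} resolves to {addresses}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS resolution failed: {exc}")

# 2. What does the registry's v2 ping endpoint return from this network?
try:
    with urllib.request.urlopen(f"https://{REGISTRY}/v2/", timeout=10) as resp:
        print(f"HTTP {resp.status} - registry reachable")
except urllib.error.HTTPError as exc:
    # 401 = reachable but needs auth; 403 = request is being blocked for this source
    print(f"HTTP {exc.code} - {exc.reason}")
except urllib.error.URLError as exc:
    print(f"Connection failed: {exc.reason}")
```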

Strapi deployment on AWS Fargate (Serverless)-Aurora MySQL (Serverless)

I am trying to deploy Strapi to an AWS Fargate serverless environment through GitLab CI, using an AWS Aurora MySQL database for the DB integration. The database is up and running properly, but when my CI/CD pipeline deploys to Fargate, the application is somehow not able to connect to my DB. I am injecting env variables into the Task Definition through Secrets Manager.
I am getting the error "getaddrinfo ENOTFOUND" in the CloudWatch logs. I'm not sure what to do, as everything runs through CI/CD only. Do I need to mention anything about my database in the database.js or server.js file, in the Dockerfile, or in the GitLab CI configuration?
I know this is a very specific setup, but I'd appreciate it if anyone could help me out.
Thanks,
Tushar
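
Since "getaddrinfo ENOTFOUND" means the container cannot resolve the hostname it is handed for the database, it is worth confirming what value actually reaches the task before changing database.js. A hedged sketch (the secret name and JSON keys below are assumptions; use whatever your Task Definition actually references in Secrets Manager), run from a machine in the same VPC/subnets as the Fargate task:

```python
import json
import socket

import boto3

SECRET_ID = "prod/strapi/database"  # assumed name of the secret referenced by the task definition

secrets = boto3.client("secretsmanager")
secret = json.loads(secrets.get_secret_value(SecretId=SECRET_ID)["SecretString"])

host = secret.get("DATABASE_HOST") or secret.get("host")               # assumed key names
port = int(secret.get("DATABASE_PORT") or secret.get("port") or 3306)  # assumed key names
print(f"Task should be connecting to {host}:{port}")

# If this resolution fails, the issue is DNS/VPC (or a wrong host value), not Strapi itself.
try:
    resolved = sorted({r[4][0] for r in socket.getaddrinfo(host, port)})
    print(f"{host} resolves to {resolved}")
except socket.gaierror as exc:
    print(f"Cannot resolve {host}: {exc}")
```

Common culprits are an env variable name in the Task Definition that doesn't match what database.js reads, or a secret that never contained the Aurora cluster endpoint in the first place.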

Required: a CloudFormation template for setting up Database Migration Service

I am trying to build a CloudFormation template to automate database migration (RDS) and replication using DMS (for backup and DR purposes) across different accounts in AWS.
I have tested the complete setup manually and it works fine in the pre-prod environment. However, I need to automate it before I can take it to the production environment.
So, could someone please share a CF template (which I can further customise according to my requirements) for DMS that also creates all the required Database Migration Service (DMS) resources, including RDS, the replication instances, endpoints and tasks?
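
I can't hand over a production-ready template, but as a sketch of the pieces such a template has to declare (a replication subnet group, a replication instance, source/target endpoints, and a task), here is the equivalent set of calls in boto3. Every identifier, hostname, class size, and credential below is a placeholder, and each block maps onto the AWS::DMS::ReplicationSubnetGroup, AWS::DMS::ReplicationInstance, AWS::DMS::Endpoint, and AWS::DMS::ReplicationTask CloudFormation resource types.

```python
import json

import boto3

dms = boto3.client("dms")

# Replication subnet group and replication instance.
dms.create_replication_subnet_group(
    ReplicationSubnetGroupIdentifier="dms-subnets",
    ReplicationSubnetGroupDescription="Subnets for DMS replication",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
)
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="rds-dr-replication",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=50,
    ReplicationSubnetGroupIdentifier="dms-subnets",
)["ReplicationInstance"]
dms.get_waiter("replication_instance_available").wait(
    Filters=[{"Name": "replication-instance-id", "Values": ["rds-dr-replication"]}]
)

# Source and target endpoints.
source = dms.create_endpoint(
    EndpointIdentifier="source-rds",
    EndpointType="source",
    EngineName="mysql",
    ServerName="source-db.example.internal",  # placeholder RDS endpoint
    Port=3306,
    Username="dms_user",
    Password="CHANGE_ME",
)["Endpoint"]
target = dms.create_endpoint(
    EndpointIdentifier="target-rds",
    EndpointType="target",
    EngineName="mysql",
    ServerName="target-db.example.internal",  # placeholder RDS endpoint in the other account
    Port=3306,
    Username="dms_user",
    Password="CHANGE_ME",
)["Endpoint"]

# Replication task: full load plus ongoing replication of every table.
dms.create_replication_task(
    ReplicationTaskIdentifier="full-load-and-cdc",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```

Note that DMS itself does not create the RDS instances; the template (or a separate stack) still has to provision those.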

AWS: How do I continuously deploy a static website on AWS

I have a GitHub repo with static website contents (i.e. I am trying not to use EC2, but AWS's static website hosting instead). Now I want to automatically deploy it to AWS any time I change and push something to the master branch of my GitHub repo.
Any experience or ideas on doing this?
I do this for many projects by using a Jenkins server. I happen to run it on another EC2 instance, but you could also run it on-premise if you prefer.
GitHub notifies the Jenkins server that a check-in has occurred, and a Jenkins job deploys all the files to the proper places and also notifies me by SMS (or email) that a deployment has happened.
(Jenkins is not the only tool that can do this; there are others.)
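
For the actual deploy step (whatever triggers it, Jenkins or otherwise), a minimal sketch of what the job can run is below, assuming the site is served from an S3 bucket behind a CloudFront distribution; the bucket name, distribution ID, and build directory are placeholders.

```python
import mimetypes
import time
from pathlib import Path

import boto3

BUCKET = "my-static-site-bucket"     # placeholder bucket configured for website hosting
DISTRIBUTION_ID = "E1234567890ABC"   # placeholder CloudFront distribution
SITE_DIR = Path("public")            # wherever the built site files live in the repo

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Upload every file, using the path relative to the build dir as the object key.
for path in SITE_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(SITE_DIR).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "binary/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})
        print(f"uploaded s3://{BUCKET}/{key}")

# Invalidate cached copies so the new content is served right away.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # any unique string
    },
)
```

The same script works just as well from GitHub Actions or AWS CodePipeline if you'd rather not run a Jenkins server.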

AWS Aurora migration during deployment

I would like to take a step towards no-downtime continuous deployment of my application.
I have an app running on an EC2 instance that connects to an Aurora database. During a deployment I need to run the database migration scripts and update the app running on EC2. How can I update both of them without causing any downtime? I can probably configure Elastic Beanstalk/CodeDeploy to update the EC2 instance in such a way that, for a while, I will in fact have two instances of my app running on two separate EC2 instances, but that still gives me only one instance of the database. If I run my migration scripts, this may break one of the instances of my application, and if the deployment fails for some reason, I might not be able to revert the changes made to the database.
So basically the question is: what is the right way to apply the SQL migration scripts without causing any downtime?
That would depend on your specific situation and the type of DB migration. One route you can follow, taken directly from the AWS documentation (a sketch of the corresponding API calls follows after the steps), is:
1. Clone your Aurora DB using the AWS-provided API.
2. Apply your migration script to the new DB.
3. Deploy your new source code version to a new Elastic Beanstalk environment.
4. Test your new source code against the new DB.
5. Swap the two environments' CNAMEs using the API provided by AWS or via your DNS; you may want to synchronize the data changes from the old DB to the new DB before doing so.
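
As a rough sketch of steps 1 and 5 through the API (cluster names, instance class, and environment names are placeholders; this assumes an Aurora MySQL cluster and two Elastic Beanstalk environments), the clone is a copy-on-write point-in-time restore and the cutover is a CNAME swap:

```python
import boto3

rds = boto3.client("rds")
eb = boto3.client("elasticbeanstalk")

# Step 1: clone the Aurora cluster (copy-on-write shares storage with the source).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="myapp-aurora-clone",      # placeholder clone name
    SourceDBClusterIdentifier="myapp-aurora",      # placeholder existing cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
# A clone starts with no instances; add one so the new environment can connect.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-aurora-clone-1",
    DBClusterIdentifier="myapp-aurora-clone",
    DBInstanceClass="db.r5.large",                 # placeholder size
    Engine="aurora-mysql",
)

# Steps 2-4: run the migration scripts against the clone, deploy the new app
# version to a second Elastic Beanstalk environment pointed at it, and test.

# Step 5: swap CNAMEs so traffic moves to the environment using the migrated DB.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-prod",            # placeholder current environment
    DestinationEnvironmentName="myapp-prod-new",   # placeholder new environment
)
```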