Google Cloud Compute Engine VM automatically getting created after destroying - google-cloud-platform

I was trying to install Tectonic on a CoreOS image in Google Compute Engine. It didn't work out well and I made some configuration mistakes, so I tried to delete the VM, but now it keeps creating these instances.
Here is the operations log I found in the Google Cloud Compute admin panel,
which suggests that they are in some kind of target pool that is recreating these instances on deletion.
Can anyone tell me how to fix this and permanently delete all the instances?

We have discussed this issue in this thread, and it seems you have been using a managed instance group with an autoscaler and load balancer enabled. The autoscaler is set up to recreate the instances according to its configuration.
In order to delete the entire instance group, you have to delete all resources that are using it first, such as the load balancer, backend services, etc.
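A minimal sketch of that clean-up with the gcloud CLI (driven from Python here), assuming a forwarding rule, backend service, and managed instance group named `my-forwarding-rule`, `my-backend-service`, and `my-mig`; substitute the names and zone/region that your operations log shows:

```python
# Placeholders: my-forwarding-rule, my-backend-service, my-mig, and the
# zone/region; replace them with the names from your project.
import subprocess

def run(*args):
    """Echo and run a gcloud command, failing fast on errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Delete the load balancer front end that points at the backend service.
run("gcloud", "compute", "forwarding-rules", "delete", "my-forwarding-rule",
    "--region=us-central1", "--quiet")

# 2. Delete the backend service that references the instance group.
run("gcloud", "compute", "backend-services", "delete", "my-backend-service",
    "--global", "--quiet")

# 3. Now the managed instance group can be deleted; the instances it manages
#    are deleted with it and are no longer recreated.
run("gcloud", "compute", "instance-groups", "managed", "delete", "my-mig",
    "--zone=us-central1-a", "--quiet")

# If the log shows a target pool instead of a backend service, the equivalent
# command is: gcloud compute target-pools delete NAME --region=REGION
```

Deleting the managed instance group should also remove its autoscaler, so nothing is left to recreate the VMs.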

Related

AWS EC2 Auto scaling philosophy

Hello there!
I'm at the beginning of my investigation of AWS, but one of the concepts is unclear to me, so I want to ask for help understanding the functionality.
I have a PHP web application installed on EC2.
My application is heavily loaded and I need to use a load balancer for the best performance. How to set this up is clear. The code for my application is hosted on GitLab.
After setting up EC2 and the load balancer, I want to use Auto Scaling.
So, I need to use an Auto Scaling group.
Main question: what should I do next? As I understand it, I need to somehow create a new instance, but I need a correct image for the instance with all dependencies and source code.
Code auto-deploy is also a big question. When a new feature is merged, I need to run the GitLab pipeline and somehow deliver the code to the new EC2 instance.
So what do I need to read and investigate to be able to deploy new code to a new EC2 instance automatically? Does AWS provide tools for this?
Thank you for the help with my journey.
Regards,
Mavis.
You can begin with this link https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html which explains how to create an Auto Scaling group based on an EC2 instance.
In short, you can generate an AMI (Amazon Machine Image) from your current EC2 instance (the one hosting PHP) and create a launch configuration/launch template for your Auto Scaling group.
Next, you can add a load balancer to distribute traffic to these instances; you can associate it with target groups and your Auto Scaling group: https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html
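As a rough illustration of those steps, here is a hedged boto3 sketch; the instance ID, AMI name, subnet, and target group ARN are placeholders, not real resources:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Bake an AMI from the EC2 instance that already runs the PHP app,
#    and wait until it is usable.
image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                         Name="php-app-ami-v1")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Create a launch template that uses that AMI.
ec2.create_launch_template(
    LaunchTemplateName="php-app-template",
    LaunchTemplateData={"ImageId": image["ImageId"],
                        "InstanceType": "t3.small"})

# 3. Create the Auto Scaling group from the launch template and attach
#    the load balancer's target group so new instances receive traffic.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="php-app-asg",
    LaunchTemplate={"LaunchTemplateName": "php-app-template",
                    "Version": "$Latest"},
    MinSize=1, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
    TargetGroupARNs=[
        # placeholder target group ARN
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/php-app/0123456789abcdef"
    ])
```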
For the auto-deploy, you can automate within your pipeline creating a new launch configuration, or fetching the latest version of your PHP code from S3 or another location in the user data. You can use GitLab CI or CodeDeploy, which is a perfect candidate for this kind of work.
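For the pipeline side, one hedged option is a small script the GitLab CI job calls after uploading a build bundle to S3, triggering a CodeDeploy deployment; the application name, deployment group, bucket, and key below are illustrative only:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Trigger a deployment of the bundle that the CI job just uploaded to S3.
codedeploy.create_deployment(
    applicationName="php-app",              # placeholder application
    deploymentGroupName="php-app-asg-group",  # placeholder deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-deploy-bucket",
            "key": "php-app/build-123.zip",
            "bundleType": "zip",
        },
    },
)
```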
Be aware also that the Auto Scaling group is stateless (it creates and terminates instances), so you must store your images and assets in a shared location like S3, a database, or EFS; if an instance is unhealthy or terminated by the ASG, you may otherwise lose data.

Turn off Google Cloud Memorystore?

This might be a stupid question.
I'm just curious. I'm new to Redis and would like to experiment with it.
However, I would like to turn the instance on and off whenever I am experimenting as I want to save on costs rather than have the instance running all the time.
But I don't see a stop button like other products have, such as Compute Engine.
Is there a reason for this?
Thank you
You won't be able to manage a Cloud Memorystore for Redis instance like a Compute Engine instance; they are different products with different billing requirements, so you can't stop a Cloud Memorystore for Redis instance.
If you are only interested in learning more about Redis, you can always install Redis on a Compute Engine instance (see the following tutorial for a clear path on how to accomplish this, or this other tutorial for how to do it with Docker) and afterwards delete the Compute Engine instance so that charges stop accruing.
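If you go that route, a minimal sketch with the gcloud CLI might look like this; the instance name, zone, and machine type are placeholders:

```python
import subprocess

# Install Redis on first boot via a startup script.
startup_script = "#! /bin/bash\napt-get update && apt-get install -y redis-server"

# Create a small VM for experimenting with Redis.
subprocess.run([
    "gcloud", "compute", "instances", "create", "redis-playground",
    "--zone=us-central1-a",
    "--machine-type=e2-small",
    "--metadata=startup-script=" + startup_script,
], check=True)

# When you are done experimenting, delete it so charges stop accruing.
subprocess.run([
    "gcloud", "compute", "instances", "delete", "redis-playground",
    "--zone=us-central1-a", "--quiet",
], check=True)
```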
To avoid incurring charges to your Google Cloud account for the resources used in this quickstart:
Go to the Memorystore for Redis page in the Cloud Console.
Memorystore for Redis
Click the instance ID of the instance you want to delete.
Click the Delete button.
In the prompt that appears, enter the instance ID.
Click Delete.
https://cloud.google.com/memorystore/docs/redis/quickstart-console#clean_up
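The same clean-up can be scripted with the gcloud CLI, assuming an instance ID of `my-redis-instance` in `us-central1` (substitute your own):

```python
import subprocess

# Delete the Memorystore for Redis instance so it stops accruing charges.
subprocess.run([
    "gcloud", "redis", "instances", "delete", "my-redis-instance",
    "--region=us-central1", "--quiet",
], check=True)
```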

GCloud Compute Engine won't delete or stop. Keeps respawning

I have a Google Compute Engine VM instance that will not stop or be killed.
I don't know where it came from and I can't delete it or pause it. I don't have anything running on it, nor does it have anything scheduled with it.
`gke-cluster-1-default-pool-...`
That is a VM from Google Container Engine. In the left menu, navigate to Container Engine and check if you have any clusters created. If a cluster was created and then removed, it is possible that the VM did not get cleaned up properly.
In your dashboard, there should be an Activity tab. You can use this to filter the activity on the account to see if someone created a Google Container Engine cluster.
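A hedged sketch of that check with the gcloud CLI: list the clusters, and if a leftover one matches the prefix in the VM's name, delete it so its node VMs stop respawning. The cluster name and zone below are placeholders:

```python
import subprocess

# List clusters; a VM named gke-<cluster>-default-pool-... belongs to the
# cluster embedded in its name.
subprocess.run(["gcloud", "container", "clusters", "list"], check=True)

# Deleting the cluster removes its managed node VMs as well.
subprocess.run([
    "gcloud", "container", "clusters", "delete", "cluster-1",
    "--zone=us-central1-a", "--quiet",
], check=True)
```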

How does the AWS EC2 Auto Scaling synchronisation work automatically?

We started our WordPress blog some time ago with only a single EC2 instance and a Multi-AZ RDS database.
The traffic increased with heavy ups and downs (up to 1,500 users per minute), so we decided to use EC2 Auto Scaling. Here is our problem: every time we change some code, we have to create a new AMI for the Auto Scaling group and terminate all instances so the new instances start with the new AMI data.
Is there an easy way to synchronize all instances automatically when changing some code on one of them? Perhaps OpsWorks could do that, but I have no experience with it. I already searched for a couple of days for a tutorial but could not find anything helpful.
You could configure your AMI to download the latest code on startup, so that you don't have to constantly update the AMI.
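One hedged way to implement that with boto3 is to publish a new launch template version whose user data pulls the latest code from S3 on boot, so the AMI itself never changes; the template name, bucket, and paths are placeholders:

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# Shell script run on first boot: fetch and unpack the latest code bundle.
user_data = """#!/bin/bash
aws s3 cp s3://my-deploy-bucket/wordpress-latest.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html
"""

# Add a new version to the existing launch template; UserData must be
# base64-encoded in the launch template data.
ec2.create_launch_template_version(
    LaunchTemplateName="blog-template",   # placeholder template name
    SourceVersion="$Latest",
    LaunchTemplateData={
        "UserData": base64.b64encode(user_data.encode()).decode()
    },
)
```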
Or you could just use Elastic Beanstalk and let it manage all this stuff for you.
If you want an easy way to deploy changes to instances in your Auto Scaling group, I would recommend using CodeDeploy.
CodeDeploy integrates nicely with Auto Scaling. If a scale-out event occurs, it will start a deployment to the newly launched instance and won't bring that instance into service in the Auto Scaling group until the deployment has finished.
The deployments can be as simple as changing the code, or they can do more thanks to CodeDeploy's deployment hooks.
Also, you can have CodeDeploy grab your code from S3, GitHub or CodeCommit.
CodeDeploy is pretty easy to set up and the documentation is great:
Docs: Auto Scaling Integration
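For reference, wiring CodeDeploy to an Auto Scaling group can be scripted too; a hedged boto3 sketch, with the application name, deployment group, service role ARN, and ASG name as placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Associate the deployment group with the Auto Scaling group so each newly
# launched instance is deployed to before entering service.
codedeploy.create_deployment_group(
    applicationName="my-app",
    deploymentGroupName="my-app-asg-group",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
    autoScalingGroups=["my-app-asg"],
    deploymentConfigName="CodeDeployDefault.OneAtATime",
)
```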

How to deploy to autoscaling group with only one active node without downtime

There are two questions about AWS autoscaling + deployment which I cannot clearly answer:
I'm currently trying to figure out what's the best strategy to deploy to an EC2 instance behind an ELB which is the only member of an autoscaling group, without downtime.
Right now the EC2 setup is done with Puppet, including the deployment of the application, triggered after a successful build by Jenkins.
The best solution I have found is to check via a script how many instances are registered with the ELB. If a single one is registered, spawn a new one, which runs Puppet on startup (so the new node is up to date), and kill the old node.
How to deploy (autoscaling EC2 behind an ELB) without delivering two different versions of the application?
Possible solution: check via a script how many EC2 instances are registered with the ELB, spawn the same number of instances, register all new instances and deregister all old ones.
My experience with AWS teaches me that AWS has a service for everything. So are there any services out there to accomplish my requirements, or are my solutions just inconvenient workarounds?
You can create an entirely new environment with its own ELB and when it's ready and checked, you switch the DNS record to the new ELB.
Note that for a brief time (60 seconds or so, depending on the TTL of your DNS record) some users will see your old version while others will see the new version.
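A hedged sketch of that DNS cut-over with boto3 and Route 53, assuming a CNAME record and a low TTL; the hosted zone ID, record name, and ELB DNS name are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Point the record at the new environment's ELB once it has passed checks.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 60,  # a low TTL keeps the cut-over window short
                "ResourceRecords": [
                    {"Value": "new-env-123456.eu-west-1.elb.amazonaws.com"}
                ],
            },
        }]
    },
)
```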
In the end there were two possible solutions. Both of them would temporarily deliver two versions of the app.
Use AWS CodeDeploy to perform a sequential deployment (one instance after another). This solution offers the possibility to roll back to a previous state and visually shows the state and results of the deployment.
Create a Python script to get the registered nodes (using Boto) and run the appropriate Puppet script on them (using Fabric); a sketch of the node-discovery part follows below. This solution offers more control over the deployment, but requires some time to build these scripts, and there can be bugs.
For now I chose AWS CodeDeploy because it's already available and, hopefully, well tested.
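For completeness, here is a hedged sketch of option 2's node-discovery step using boto3 (rather than the legacy boto mentioned above); the load balancer name is a placeholder, and Fabric/Puppet would then run against the resulting hosts:

```python
import boto3

elb = boto3.client("elb")      # classic ELB API
ec2 = boto3.resource("ec2")

# Find the instances currently registered and in service with the ELB.
health = elb.describe_instance_health(LoadBalancerName="my-elb")
instance_ids = [state["InstanceId"] for state in health["InstanceStates"]
                if state["State"] == "InService"]

# Resolve the public DNS names that Fabric would connect to.
hosts = [ec2.Instance(i).public_dns_name for i in instance_ids]
print(hosts)
```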