I have created a terraform stack with modules, and within this stack I create:
3 instances (000, 001, 002)
An instance group per Google Compute Engine (GCE) instance
An internal load balancer (ILB) backend
A forwarding rule
Everything gets created fine. However, as part of our image recycling we have to destroy a single instance out of those 3 created above and rebuild it with a new image.
My question here is: when I try to destroy an instance with
destroy -force -target=module.gce_test.google_compute_instance.test_initial_follower-02
it also deletes the ILB backend and forwarding rule that the other instances are using.
Any ideas? Can I delete just the instance along with its instance group, leaving the backend and forwarding rule alone? Is this possible?
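For illustration, what I would like is a targeted destroy that touches only the instance and its instance group, something like this (the instance-group resource address below is an assumption about my module layout):
# hypothetical instance-group address shown only as an example
terraform destroy -force \
  -target=module.gce_test.google_compute_instance.test_initial_follower-02 \
  -target=module.gce_test.google_compute_instance_group.test_follower_group-02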
Thanks
In GCP, I have a GKE with a workload configured. My service definition has the following annotation which automatically creates the network endpoint groups for me:
cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "app2-service-80-neg"}}}'
I can then attach this to a load balancer backend service and everything works well. However, shouldn't these network endpoint groups disappear when I delete the underlying service/deployment/pods? They seem to stick around after I delete everything at the Kubernetes level, which causes issues for my Terraform: I run terraform destroy regularly, and it can't delete everything because these objects are still kicking around, preventing me from deleting my VPC.
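For reference, this is how I check for the leftover NEGs after tearing everything down at the Kubernetes level (just a sketch; the filter uses the NEG name from the annotation above):
# list the NEGs still hanging around in the project
gcloud compute network-endpoint-groups list
# optionally narrow it down to the NEG created by the annotation
gcloud compute network-endpoint-groups list --filter="name=app2-service-80-neg"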
As @Vishal Bulbule mentioned, the NEG has to be deleted separately.
As per this official doc:
Note that the NEG cannot be deleted if there are backend services referencing it.
So:
When a GKE Service is deleted, the associated NEG will not be garbage collected if the NEG is still referenced by a backend service.
Dereference the NEG from the backend service to allow NEG deletion (see the sketch below).
When a cluster is deleted, standalone NEGs are not deleted automatically and need to be deleted manually.
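A hedged sketch of the dereference step mentioned above (the backend-service name, NEG name, zone and --global scope are placeholders):
# detach the NEG from the backend service so the NEG can be deleted
gcloud compute backend-services remove-backend my-backend-service \
  --network-endpoint-group=app2-service-80-neg \
  --network-endpoint-group-zone=us-central1-a \
  --global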
Use this HTTP request to delete the NEG, or else refer to this doc for help deleting it manually.
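For example, a hedged sketch of that HTTP request against the Compute Engine API (the project, zone and NEG name are placeholders):
# delete the NEG via the Compute Engine API; this only succeeds once nothing references it
curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/networkEndpointGroups/app2-service-80-neg"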
You can also use the command below to delete a network endpoint group named my-neg:
gcloud compute network-endpoint-groups delete my-neg --zone=us-central1-a
The backend service for the load balancer and the actual NEG are two separate resources, so the NEG has to be deleted separately. This is the case even if you add VM instance IPs to the NEG.
I have created a cluster to run our test environment on AWS ECS and everything seems to work fine, including zero-downtime deploys. But I realised that when I change instance types in CloudFormation for this cluster, it brings all the instances down, and my ELB starts to fail because there are no instances running to serve the requests.
The cluster is running on Spot Instances, so my question is: is there by any chance a way to update instance types for Spot Instances without taking the whole cluster down?
Do you have an Auto Scaling group? It would allow you to change the launch template or launch configuration to use the new instance type. Then you would set the ASG desired and minimum counts to a higher number, let the new instance type spin up and go into service in the target group, then delete the old instances and set your Auto Scaling numbers back to normal.
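A hedged sketch of that flow with the AWS CLI (the group name, template name and counts are placeholders):
# point the ASG at a launch template version that uses the new instance type,
# then temporarily raise the minimum/desired capacity
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --launch-template LaunchTemplateName=my-ecs-template,Version='$Latest' \
  --min-size 4 --desired-capacity 4
# once the new instances are InService in the target group, scale back down
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --min-size 2 --desired-capacity 2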
Without an ASG, you could launch a new instance manually and place that instance in the ECS target group. Confirm that it joins the cluster and is running your service and task, then delete the old instance.
You might want to break this activity into smaller chunks and do it one instance at a time. You can write a small CloudFormation template as well, because by default, if you update the instance type, your instances will be restarted, so to keep zero downtime you may have to do it one at a time.
However, there are two other ways that I can think of here, but both will cost you money.
ASG: Create a new Auto Scaling group, or use the existing one and change the launch configuration.
Blue/green deployment: Create the exact same set of resources, but this time with the updated instance type, and use Route 53's weighted routing policy to control the traffic (see the sketch below).
It solely depends upon the requirement: if you can spend the money, go with the two approaches above; otherwise stick with the small deployments.
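For the blue/green option above, a hedged sketch of shifting a slice of traffic with Route 53 (the hosted zone ID, record name and ELB DNS names are placeholders):
# send roughly 10% of traffic to the new ("green") stack via a weighted record
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "SetIdentifier": "green",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{"Value": "green-elb.example.com"}]
    }
  }]
}'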
I am a beginner with AWS ELB. Can someone help me understand how ELB automatically creates a new instance depending on the traffic or CPU usage? Also, when it creates new instances, how does it copy code from an existing instance?
Any link/article will also be appreciated.
Thanks in advance.
How does ELB automatically create a new instance depending on the traffic or CPU usage?
ELB does not create the new instance; the new instance is created by the launch configuration and Auto Scaling group rules that you have set: https://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html
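A hedged sketch of the two pieces involved, assuming a launch template named web-template already exists (all names, subnet IDs and the target-group ARN are placeholders):
# the Auto Scaling group decides how many instances run and registers them with the load balancer
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version='$Latest' \
  --min-size 1 --max-size 4 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123
# a target-tracking policy adds/removes instances to hold average CPU around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name keep-cpu-at-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'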
Also, when it creates a new instance, how does it copy code from an existing instance?
When a new instance gets created from the AMI, you have to add the replication mechanism yourself, either using user data scripts (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) or by running a cron job on that instance that copies the files to it from an S3 bucket.
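A hedged example of such a user data script, assuming the application code lives in an S3 bucket (the bucket, path and service name are made up):
#!/bin/bash
# runs on first boot; pulls the application code onto the new instance
aws s3 sync s3://my-app-bucket/releases/current /var/www/app
systemctl restart my-app.service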
Is it possible to do Auto Scaling with static IPs in AWS? The newly created instances should either have a pre-defined IP or pick one from a pool of pre-defined IPs.
We are trying to set up ZooKeeper in production, with 5 ZooKeeper instances. Each one should have a static IP, which is hard-coded in the Kafka AMI/data bag that we use. It should also support Auto Scaling, so that if one of the ZooKeeper nodes goes down, a new one is spawned with the same IP or with one from a pool of IPs. For this we have decided to go with 1 ZooKeeper instance per Auto Scaling group, but the problem is with the IP.
If this is the wrong way, please suggest the right way. Thanks in advance!
One method would be to maintain a user data script on each instance and have each instance assign itself an Elastic IP from a set of EIPs reserved for this purpose. This user data script would be referenced in the ASG's launch configuration and would run on launch.
Say the user data script is called "/scripts/assignEIP.sh". Using the AWS CLI, you would have it consult the pool to see which EIPs are available and which ones are not (already in use). Then it would assign itself one of the available EIPs.
For ease of IP management, you could keep the pool of IPs in a simple text properties file on S3, and have the instance download and consult that list when the instance starts.
Keep in mind that each instance will need to be assigned an IAM instance profile that allows it to consult the list and associate EIPs with itself.
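A minimal sketch of what /scripts/assignEIP.sh could look like, assuming the pool is a text file of EIP allocation IDs in S3 (the bucket and file names are placeholders, and the script uses the IMDSv1 metadata endpoint):
#!/bin/bash
# figure out which instance we are
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# pull the pool of allocation IDs (one per line) from S3
aws s3 cp s3://my-config-bucket/zookeeper-eips.txt /tmp/eips.txt
# take the first EIP in the pool that has no association yet
for ALLOC_ID in $(cat /tmp/eips.txt); do
  ASSOC=$(aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" \
          --query 'Addresses[0].AssociationId' --output text)
  if [ "$ASSOC" = "None" ]; then
    aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"
    break
  fi
done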
OK, an odd thing is happening on AWS.
I downloaded the AWS .NET developer tools and created an elastic beanstalk default instance.
I then, for one reason or another, created another instance via the Visual Studio interface, and that instance is where all the client's code/configuration resides. I then returned to the default instance created by Elastic Beanstalk and terminated it. An hour later, I logged back on and another default instance was up and running. It seems that AWS has detected that I terminated the instance and has spawned another; some sort of check seems to be in place.
Can somebody tell me what is going on here and how to completely remove the default instance (and its termination protection behavior)?
Thanks.
I've experienced something similar. If the instance was created through Elastic Beanstalk, you need to go to the Elastic Beanstalk screen in the AWS console and remove the application from there first. If you just terminate the instance from the EC2 screen, Elastic Beanstalk probably thinks that the instance crashed and launches a new one.
If Beanstalk is not enabled, then most probably the instance is being created by Auto Scaling, which is part of the EC2 service itself.
Go to Auto Scaling and first delete the Auto Scaling group and the launch configuration associated with that instance.
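With the AWS CLI this would look roughly like the following (the group and launch configuration names are placeholders):
# deleting the group terminates its instances and stops it from replacing them
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-default-asg --force-delete
aws autoscaling delete-launch-configuration --launch-configuration-name my-default-lc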
As described here, this is caused by the Auto Scaling group's desired/minimum setting of 1 instance. What that means is that the Auto Scaling group will always keep one instance running; if you delete the only one, it will create another. To prevent this, go to the EC2 dashboard and, in the left sidebar, scroll down to Auto Scaling Groups under the AUTO SCALING menu and click it. You will see the list of groups, and you can click each one to see which instances it contains. Find the group that your instance is in and either delete it (that happens when an environment is deleted as well) or change its desired and minimum counts from 1 to 0 and save. That is it.
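The CLI equivalent of setting the counts to 0 would be something like this (the group name is a placeholder):
# with min and desired at 0 the group will no longer replace terminated instances
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-default-asg \
  --min-size 0 --desired-capacity 0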
Check your Auto Scaling groups. This can also happen if you have created node groups for your EKS cluster.