I have an AWS CodeBuild project that is connected to a VPC. Now I'm trying to understand how I can get the availability zone of the CodeBuild build at execution time. Is this possible?
It turns out there is an Environment Variable, CODEBUILD_VPC_AZ, that holds this value.
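For example, in a buildspec command (or any other shell step of the build) you can read it like any regular environment variable; note it is only populated for VPC-connected builds:
# Print the availability zone the build is running in
echo "Build running in AZ: $CODEBUILD_VPC_AZ"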
Based on the AWS documentation, there are two environment variables that hold the AWS Region where the build is running (AWS_DEFAULT_REGION and AWS_REGION). I don't think there is any way to get the actual availability zone.
But is there something specific you want to achieve by knowing the availability zone? Maybe we can suggest a solution for that.
Related
I have been facing this issue since yesterday. This is the exact error: Failed to start feature-config: A e2-micro VM instance is currently unavailable in the us-central1-a zone. Alternatively, you can try your request again with a different VM hardware configuration or at a later time. For more information, see the troubleshooting documentation.
I had scheduled my Google Compute Engine instance to turn on and off at specific times using the instance scheduler, but now I am locked out of it. I cannot even create a machine image to deploy in another zone.
I changed the machine configuration. From your answer I could see that resources might not be available in the us-central1 zone, possibly due to demand. I changed the configuration to n2-highcpu-2 (2 vCPUs, 2 GB memory).
In the end, it seems this was a general issue that multiple users experienced in us-central1, among other regions.
This thread has more discussion, and it seems the situation got worse over the weekend.
As suggested in the comments, changing the zone/region/hardware can help, but not always, since this also depends on any constraints you may have.
As the error suggests, there aren't any available resources in that region. I contacted GCP support after facing the same issue and got the following response:
Google Cloud Support: Upon further checking, the reason the e2-medium VM instance is currently unavailable is that there are limited VMs available in specific zones and regions. The best we can do is to try at another time or select a different zone so that the VMs you want to use will start. Rest assured that there is nothing wrong with your account; it is the us-central1 zone that does not have the VMs you selected available at the moment.
If possible, try deploying a different instance type or to a different region. Those who need an instance in us-central1 (for Qwiklabs?) might have to wait until more instances are available.
Similar issue here, but coming from a terraform apply. I've tried multiple zones and every one says both 'e2-small' and 'e2-micro' instances are unavailable. It seems Google completely fumbled the "cloud game" here, since AWS doesn't have this problem EVER! (Not that I like using AWS; it's just "ick" compared to Google.)
In AWS we have a requirement to build VMs with Terraform and increase the count of VMs if utilisation increases. I have a couple of questions about this:
If we place VMs in the same placement group, does that ensure they are built in different availability zones within the region (we only want them in our local region)? Or is there a better way of ensuring this, so that I don't have to specify the availability zone for each VM manually in the Terraform code and can still achieve an even distribution of VMs across availability zones?
How can I use the count feature in Terraform to increase the number of these VMs? I have created a backup vault and monitoring with CloudWatch, and I also need to ensure that each subsequent VM that gets added is automatically enrolled in these as well.
My VM in Google Cloud can't start due to the error shown below.
"Starting VM instance failed. Error: The zone does not have enough
resources available to fulfill the request. Try a different zone, or
try again later."
I then wanted to start the VM in another zone by changing its zone using this method, but that requires the VM to be running.
The problem is that I can't start the VM. What else can I try?
It looks like Google is having issues with limited external IPs. Try removing the external IP before starting the instance. Then create an external IP and attach it to your instance.
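If you'd rather do this from the CLI, the steps look roughly like the following; the instance, address, and zone names are placeholders, and the default access config is usually named "External NAT":
# Remove the external IP (access config) from the stopped instance
gcloud compute instances delete-access-config my-vm --zone us-central1-a --access-config-name "External NAT"
# Start the instance without an external IP
gcloud compute instances start my-vm --zone us-central1-a
# Reserve a new static external IP and attach it to the running instance
gcloud compute addresses create my-vm-ip --region us-central1
IP=$(gcloud compute addresses describe my-vm-ip --region us-central1 --format='get(address)')
gcloud compute instances add-access-config my-vm --zone us-central1-a --address "$IP"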
You’ve just encountered a stockout issue. A stockout means that the particular GCP datacenter in that zone has reached its resource limit.
The Google Cloud Platform team is there to make sure that there are available resources in all zones. This type of issue is rare. When a situation like this occurs or is about to occur, the team is notified immediately; the issue is investigated and quickly fixed.
I recommend deploying and balancing your workload across multiple zones or regions to reduce the likelihood of a stockout. Please review the documentation, which outlines how to build resilient and scalable architectures on Google Cloud Platform.
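For example, a regional managed instance group spreads identical VMs across several zones in a region automatically; the group and template names below are placeholders:
# Create a regional MIG that distributes 3 instances across zones in us-central1
gcloud compute instance-groups managed create my-group --region us-central1 --template my-instance-template --size 3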
You may also try again later, once resources are available again in the region.
This being said, I see that you’ve posted your question on November 9th. That was a long time ago. Can you confirm if your issue is fixed now? It is very rare for stockouts to last this long.
I have created an environment in AWS Cloud9 with a Python Lambda function.
This was working fine and for several days I was adding functionality.
However one day the environment failed to open. After several minutes of loading it displayed an error message:
This is taking longer than expected.
If you think there might be an issue, contact AWS Support.
It might be caused by VPC configuration issues.
Please check documentation:
https://docs.aws.amazon.com/cloud9/latest/user-guide/vpc-settings.html?icmpid=docs_ac9_console
I looked at the suggested link, but I don't think the VPC is the issue. I didn't make any changes to it. Moreover, I am able to create new environments and open them.
Any ideas how to solve this?
Turns out the problem was the default t2.micro (1 GiB RAM) instance that is used to run Cloud9. I was probably running out of memory. Moving my environment to t2.small (2 GiB RAM) solved the problem.
Documentation on moving environments:
https://docs.aws.amazon.com/cloud9/latest/user-guide/move-environment.html
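If you prefer the CLI over the console steps in that guide, resizing the underlying EC2 instance looks roughly like this (the instance ID is a placeholder; close the environment first so the instance can be stopped):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# Change the instance type from t2.micro to t2.small
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.small\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0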
I had the error message:
"This is taking longer than expected. The delay may be caused by high CPU usage in your environment, or your T2 or T3 instance is running out of burstable CPU capacity credits, or there are VPC configuration issues."
What I did to solve it was to attach my internet gateway to the VPC and give that VPC a public subnet.
I found this link useful for solving the issue, particularly where it states the VPC requirements for AWS Cloud9: https://docs.aws.amazon.com/cloud9/latest/user-guide/vpc-settings.html?icmpid=docs_ac9_console
I agree with the answer above but just to expand with details on what I did:
Created a VPC attached to an internet gateway
Created a route table and associated it with the subnet
Gave the route table the local route for the subnet and a default route to the internet gateway (making the subnet public)
This solved my problem.
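Roughly, those steps map to the following AWS CLI calls (all resource IDs below are placeholders):
# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
# Create a route table, add a default route to the internet gateway,
# and associate it with the subnet to make the subnet public
aws ec2 create-route-table --vpc-id vpc-0abc123
aws ec2 create-route --route-table-id rtb-0abc123 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
aws ec2 associate-route-table --route-table-id rtb-0abc123 --subnet-id subnet-0abc123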
My solution was different:
I changed the region from N. Virginia to Ohio and that fixed the problem. But it could have been a timing issue where N. Virginia was having problems.
There might be some hung processes blowing up memory.
Reboot the instance and try reloading the environment.
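From the CLI that is just (instance ID is a placeholder):
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0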
Here's what I have in AWS:
Application ELB
Auto Scaling Group with 2 instances in different availability zones (Windows IIS servers)
Launch Config pointing to AMI_A
all associated back-end stuff configured (VPC, subnets, security groups, etc.)
Everything works. However, when I need to make an update or change to the servers, I currently have to manually create a new AMI_B, create a new LaunchConfig using AMI_B, update the AutoScalingGroup to use the new LaunchConfig, increase the min number of instances to 4, wait for them to become available, and then decrease the number back to 2 to kill off the old instances.
I'd really love to automate this process. Amazon gave me some links to CLI stuff, and I'm able to script the AMI creation, create the LaunchConfig, and update the AutoScalingGroup...but I don't see an easy way to script spinning up the new instances.
After some searching, I found some CloudFormation templates that look like they'd do what I want, but most do more, and it's a bit confusing to me.
Should I be exploring CloudFormation? Is there a simple guide I can follow to get started? Or should I stay with the scripting I have started?
PS - sorry if this is a repeated question. Things change frequently at AWS, so sometimes the older responses may not be the current best answers.
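For reference, the manual cycle described in the question maps roughly to the AWS CLI calls below; every name and ID is a placeholder, and this is an untested sketch rather than a drop-in script:
# Bake a new AMI from the updated instance
AMI_ID=$(aws ec2 create-image --instance-id i-0123456789abcdef0 --name "app-$(date +%Y%m%d%H%M)" --query ImageId --output text)
aws ec2 wait image-available --image-ids "$AMI_ID"
# Create a launch configuration pointing at the new AMI
aws autoscaling create-launch-configuration --launch-configuration-name "lc-$AMI_ID" --image-id "$AMI_ID" --instance-type t3.medium
# Point the ASG at the new launch configuration, scale out, then back in
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name "lc-$AMI_ID"
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --min-size 4
# ...wait for the new instances to pass health checks, then:
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --min-size 2 --desired-capacity 2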
You have a number of options to automate the process of updating the instances in an Auto Scaling Group to a new or updated Launch Configuration:
CloudFormation
If you do want to use CloudFormation to manage updates to your Auto Scaling Group's instances, refer to the UpdatePolicy attribute of the AWS::AutoScaling::AutoScalingGroup Resource for documentation, and the "What are some recommended best practices for performing Auto Scaling group rolling updates?" page in the AWS Knowledge Center for more advice.
If you'd also like to script the creation/update of your AMI within a CloudFormation resource, see my answer to the question, "Create AMI image as part of a cloudformation stack".
Note, however, that CloudFormation is not a simple tool: it's a complex, relatively low-level service for orchestrating AWS resources, and migrating your existing scripts to it will likely take some time investment due to its steep learning curve.
Elastic Beanstalk
If simplicity is most important, then I'd suggest you evaluate Elastic Beanstalk, which also supports both rolling and immutable updates during deployments, in a more fully managed, console-oriented, platform-as-a-service environment. Refer to my answer to the question, "What is the difference between Elastic Beanstalk and CloudFormation for a .NET project?" for further comparisons between CloudFormation and Elastic Beanstalk.
CodeDeploy
If you want a solution for updating instances in an auto-scaling group that you can plug into existing scripts, AWS CodeDeploy might be worth looking into. You install an agent on your instances, then trigger deployments through the API/CLI/Console and it manages deploying application updates to your fleet of instances. See Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy for a complete tutorial. While CodeDeploy supports 'in-place' deployments and 'blue-green' deployments (see Working With Deployments for details), I think this service assumes an approach of swapping out S3-hosted application packages onto a static base AMI rather than replacing AMIs on each deployment. So it might not be the best fit for your AMI-swapping use case, but perhaps worth looking into anyway.
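For a taste of that workflow, triggering a deployment from the CLI looks roughly like this once an application and deployment group exist (all names are placeholders):
aws deploy create-deployment --application-name my-app --deployment-group-name my-asg-group --s3-location bucket=my-bucket,key=app.zip,bundleType=zip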
You want a custom Termination policy on the Auto Scaling Group.
OldestLaunchConfiguration. Auto Scaling terminates instances that have the oldest launch configuration. This policy is useful when you're updating a group and phasing out the instances from a previous configuration.
To customize a termination policy using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, choose Auto Scaling Groups.
Select the Auto Scaling group.
For Actions, choose Edit.
On the Details tab, locate Termination Policies. Choose one or more termination policies. If you choose multiple policies, list them in the order that you would like them to apply. If you use the Default policy, make it the last one in the list.
Choose Save.
On the CLI
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --termination-policies "OldestLaunchConfiguration"
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
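With OldestLaunchConfiguration in place, a rolling replacement reduces to scaling out and back in (the group name is a placeholder):
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 4
# Wait for the new instances to come into service, then scale back;
# the policy ensures instances from the old launch configuration are terminated first
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 2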
We use Ansible's ec2_asg module for that purpose. It has replace_all_instances and replace_batch_size settings for exactly this. Per the documentation:
In a rolling fashion, replace all instances that used the old launch configuration with one from the new launch configuration.
It increases the ASG size by replace_batch_size and waits for the new instances to be up and running.
After that, it terminates a batch of old instances, waits for the replacements, and repeats, until all old instances are replaced.
Once that's done the ASG size is reduced back to the expected size.
If you provide target_group_arns, the module will check the health of the instances in the target groups before moving on to the next batch.
Edit: in order to maintain the desired number of instances, we first set the minimum size equal to the desired capacity.