I am working on a use case where we have an auto-scaling environment in Google Cloud Platform. The thing is that right now I don't know how to deploy a new version of the application into that GCP auto-scaling environment; the code is on GitHub.
Previously it was deployed through Jenkins, but since we configured auto-scaling it's no longer possible to deploy via Jenkins. Can anyone help me with this?
I was thinking that we could build a new VM image each time and deploy it by adding that image to a new instance group, but that seems quite complicated.
As @guillaume blaquiere suggested, you have to create a new instance template and perform a rollout over your managed instance group.
To create a new instance template follow the steps below:
In the Google Cloud console, go to the Instance templates page.
Click Create instance template.
Enter values for the following fields, or accept the default values. The default values change based on the machine family that you select.
Click Create to create the template.
Also refer to this link for more information.
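If you prefer to script the rollout, here is a minimal sketch using the google-api-python-client library; the project, zone, group and template names below are placeholder assumptions, not values from the question. It points the managed instance group at the new template and starts a proactive rolling update:

```python
# A minimal sketch, assuming google-api-python-client and placeholder names.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

PROJECT, ZONE = "my-project", "us-central1-a"
MIG, NEW_TEMPLATE = "app-mig", "app-template-v2"

template_url = f"projects/{PROJECT}/global/instanceTemplates/{NEW_TEMPLATE}"

# Patch the managed instance group: declare the new template as the target
# version and use a PROACTIVE update policy so running VMs are replaced.
compute.instanceGroupManagers().patch(
    project=PROJECT,
    zone=ZONE,
    instanceGroupManager=MIG,
    body={
        "versions": [{"instanceTemplate": template_url}],
        "updatePolicy": {"type": "PROACTIVE"},
    },
).execute()
```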
I'm trying to make an API call in Python (inside a Cloud Function) to do various things, and part of the information I'd like to pass along is whether the VM was created from something in the Marketplace.
The use case is this: the user is in the GCP Console in Compute Engine. They click on Marketplace in the left column of the display, which brings up VMs to choose from. The user picks one (say "Ubuntu 20.04 LTS (Focal)"). The display shows information about the VM with a "Launch" button. When they click that, they are taken to the "Create an instance" page, where they continue making choices and eventually create the VM.
This creates a log entry that the client's security group checks inside a Cloud Function. When I look at the log entry for beta.compute.instances.insert, I don't see anything about it being created via Marketplace. If I make an API call to get the instance, there's nothing in the returned object that shows that either. Does anyone know of a way to determine this?
It depends on what you mean by "via Marketplace". In general, the Marketplace offer is usually a Deployment Manager template and an image in a public project (public projects are available only to partners publishing to Marketplace). So if you deploy a Marketplace VM solution you will have:
a VM whose source image lives in some project outside your org; but this will also match VMs created manually from that image (does that match your "via Marketplace" definition?) and VMs created from custom images your individual users have access to. Hint: the service account assigned to your function will also have access to all public images, but usually not to images shared between users.
a Deployment Manager deployment - that's a nice one, as such deployments carry some Marketplace-specific labels. The problem is that the deployment metadata can be deleted without deleting the deployed resources. And there's the case you mentioned, where some Marketplace listings are just redirections to deploying a single VM.
I'm afraid there's no way to detect whether an Ubuntu VM was deployed after visiting the Marketplace, after clicking the add VM button, from the CLI, or with Terraform - for GCE each of these was simply an API call inserting a new instance.
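If it helps, here is a minimal sketch of the source-image heuristic above, assuming google-api-python-client; the project, zone and instance names are hypothetical. It fetches an instance's boot disk and checks which project its source image came from:

```python
# A minimal sketch; PROJECT, ZONE and INSTANCE are hypothetical placeholders.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-vm"

instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=INSTANCE).execute()

# Find the boot disk and fetch the full disk resource, which carries the
# sourceImage the disk was created from (if any).
boot = next(d for d in instance["disks"] if d.get("boot"))
disk_name = boot["source"].rsplit("/", 1)[-1]
disk = compute.disks().get(project=PROJECT, zone=ZONE, disk=disk_name).execute()

source_image = disk.get("sourceImage", "")
# e.g. ".../projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-..."
image_project = (source_image.split("/projects/")[-1].split("/")[0]
                 if source_image else None)

# Heuristic only: an image project outside your org suggests a public or
# Marketplace image, but it cannot prove the Console Marketplace page was used.
print(image_project)
```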
Hello there!
I'm at the beginning of my investigation of AWS, and one of the concepts is unclear to me, so I want to ask for help understanding the functionality.
I have a PHP web application installed on EC2.
My application is under heavy load and I need to use a load balancer for best performance. How to set this up is clear. The code of my application is hosted on GitLab.
After setting up EC2 and the load balancer, I want to use auto scaling.
So, I need to use an Auto Scaling group.
Main question: what should I do next? As I understand it, I need to somehow create a new instance, but I need a correct image for the instance with all dependencies and source code.
Code auto-deployment is also a big question. When a new feature is merged, I need to run the GitLab pipeline and somehow deliver the code to the new EC2 instance.
So what do I need to read and investigate to be able to deploy new code to a new EC2 instance automatically? Does AWS provide tools for this?
Thank you for the help with my journey.
Regards,
Mavis.
You can begin with this link https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html which explains how to create an Auto Scaling group based on an EC2 instance.
In short, you can generate an AMI (Amazon Machine Image) from your current EC2 instance (hosting PHP) and create a launch configuration/launch template for your Auto Scaling group.
Next, you can add a load balancer to distribute traffic to these instances; you can associate it with target groups and your Auto Scaling group https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html
For auto-deployment, you can automate your pipeline to create a new launch configuration, or fetch the latest version of your PHP code from S3 or another location in the user data part. You may use GitLab CI or CodeDeploy, which is the perfect candidate for this kind of task.
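For example, here is a minimal sketch of that AMI-based rollout with boto3; the instance ID, names and instance type are hypothetical placeholders, not values from the question:

```python
# A minimal sketch of an AMI-based rollout, assuming boto3 and
# hypothetical resource names; adapt to your own environment.
import time
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Bake a new AMI from the instance that already has PHP + code installed.
image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                         Name=f"php-app-{int(time.time())}")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Create a new launch configuration pointing at the new AMI. The user
#    data could instead pull the latest code from S3 on boot.
lc_name = f"php-app-lc-{int(time.time())}"
autoscaling.create_launch_configuration(
    LaunchConfigurationName=lc_name,
    ImageId=image["ImageId"],
    InstanceType="t3.small",
)

# 3. Point the Auto Scaling group at the new launch configuration; new
#    instances launched by the ASG will then use the new image.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="php-app-asg",
    LaunchConfigurationName=lc_name,
)
```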
Be aware also that the Auto Scaling group is stateless (it creates/terminates instances), so you must store your images and assets in a shared location like S3, a database, or EFS, because if an instance is unhealthy or terminated by the ASG, you may lose data.
I'm planning to use Deployment Manager to deploy a new project for each of our clients.
I'm just wondering: can I do the following using Deployment Manager, or put it into a script/YAML file, so it deploys all the components at once through the command shell?
create a new GCP project
create a VPC for the client with custom subnet assigned
create a VM and set the network to the custom VPC/subnet
create an App Engine app with different services using the yaml file
create storage buckets
create a Cloud SQL for PostgreSQL instance
What I've tried so far: I can deploy the VM through Deployment Manager, and I can create the components individually using the command line, but not through Deployment Manager in one single step.
Thanks for your help.
Deployment Manager should work perfectly for this type of setup. There are a few minor caveats though.
You need to have a project in place from which you can run Deployment Manager
You will need to grant the Deployment Manager service account all the required permissions before creating the deployment (such as Project Creator at the org level). The service account is [PROJECT_NUMBER]@cloudservices.gserviceaccount.com
Next, you will want to call each of the resources individually in your Deployment Manager manifest; luckily, all of these resource APIs are supported by DM:
Projects to create the project.
** All following resources should make a reference to this resource to create a dependency, so that DM does not try to create them before the project exists... which would result in a failure
VPC and VMs: use something like this
** This includes adding GKE clusters at the end and a VPC peering you won't need, but it demonstrates the creation of a VPC, subnets, firewall rules and a VM
App Engine
GCS Bucket
SQL instance
As long as your overall config is less than 1 MB, you can place all these resources into a single config.
If you are new to DM, I recommend trying each of these resources individually to make sure that you have the syntax correct. Trying to debug syntax errors with multiple resources is much more difficult.
I also recommend using the --preview flag before creating or updating resources so that you can make sure that your configurations or changes will come into effect the way you planned.
Finally, you can either write all this directly into a YAML config, or you can create templates using either Jinja or Python 2, which can be imported into your config.yaml
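As an illustration, here is a minimal sketch of such a Python template; the org ID and resource names are hypothetical, and billing setup is omitted. It shows how a dependent resource references the project so that DM orders the creation correctly:

```python
# A minimal Deployment Manager Python template sketch; the org ID and
# names below are hypothetical placeholders, and billing setup is omitted.
def GenerateConfig(context):
    project_id = context.env["deployment"] + "-client-project"
    resources = [
        {
            "name": "client-project",
            "type": "cloudresourcemanager.v1.project",
            "properties": {
                "name": project_id,
                "projectId": project_id,
                "parent": {"type": "organization", "id": "123456789012"},
            },
        },
        {
            "name": "client-bucket",
            "type": "storage.v1.bucket",
            "properties": {
                # Referencing the project resource creates the dependency,
                # so DM waits for the project before creating the bucket.
                "project": "$(ref.client-project.projectId)",
                "name": project_id + "-assets",
            },
        },
    ]
    return {"resources": resources}
```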
Please take a look at the Deployment Manager Cloud Foundation Toolkit, which is a set of well-designed templates.
I have two EC2 servers named EC2-WebServer-1 and EC2-WebServer-2 inside the same VPC, in two different subnets, served by an Application Load Balancer.
When I make small changes to the first server, I then have to manually make the same change on the other server too. Otherwise I have to create an AMI and launch a new server from that AMI.
I think creating an AMI each time I make a small change is not the appropriate approach.
Are there any other tools in AWS, or third-party tools, that can automatically replicate the changes made on Server 1 to Server 2? I am currently using a CentOS AMI.
I would suggest looking into CloudFormation. You can define your EC2 instance, which IAM roles you want it to have, and a whole lot of other stuff. Once that is done, you can just run the CloudFormation script and AWS will provision the EC2 instance with your defined settings automatically. CloudFormation link
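For instance, here is a minimal sketch with boto3 that provisions a deliberately tiny template; the AMI ID is a placeholder for your CentOS AMI:

```python
# A minimal sketch, assuming boto3; the template is a deliberately tiny
# CloudFormation document with a hypothetical AMI ID.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # your CentOS AMI
                "InstanceType": "t3.small",
            },
        }
    },
}

cloudformation = boto3.client("cloudformation")

# CloudFormation creates (and later updates/deletes) everything in the
# template as one unit, so both servers stay defined in one place.
cloudformation.create_stack(
    StackName="webserver-stack",
    TemplateBody=json.dumps(template),
)
```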
You should be looking into CodeDeploy https://aws.amazon.com/codedeploy/getting-started/?nc=sn&loc=4 and possibly combine it with CodePipeline. Here is a starting point for deciding whether you need one or both: https://forums.aws.amazon.com/thread.jspa?threadID=172485
We use the AWS CloudFormation service to initialize our stack, and have set up the Auto Scaling service to bring up new app servers when load rises.
My understanding is that Auto Scaling can only start a predefined AMI as new instances. These instances could differ from the other running instances, because we may have deployed updated packages/source code on those instances.
How can I bring the new instances up to date?
Should I update the AMIs every time I deploy something new to the running instances? Or is there any way to trigger auto-deployment on new instances (OpsWorks) when auto scaling?
I am new to AWS, so pardon me if my question is rudimentary.
There are multiple ways of doing this. My preferred approach is never to touch the servers directly, but instead create a new AMI whenever I deploy a new version of the software.
To do this, use the AutoScalingRollingUpdate update policy on the Auto Scaling group. When you then change the ImageId of the launch configuration, AWS will automatically replace your old servers with new ones as a rolling upgrade.
I have a simple deploy script that creates a new AMI, replaces the ImageId in the template, and then does a stack update - AWS takes care of the rest.
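A minimal sketch of such a deploy script with boto3, assuming the stack exposes the AMI as an ImageId parameter; the instance ID and stack name are hypothetical:

```python
# A minimal deploy-script sketch, assuming boto3, hypothetical names, and
# a stack template that takes the AMI as an ImageId parameter.
import time
import boto3

ec2 = boto3.client("ec2")
cloudformation = boto3.client("cloudformation")

# 1. Bake a new AMI from the instance carrying the latest code.
image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                         Name=f"app-{int(time.time())}")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Update the stack with the new ImageId. With an AutoScalingRollingUpdate
#    UpdatePolicy on the ASG in the template, CloudFormation replaces the
#    old instances as a rolling upgrade.
cloudformation.update_stack(
    StackName="app-stack",
    UsePreviousTemplate=True,
    Parameters=[{"ParameterKey": "ImageId",
                 "ParameterValue": image["ImageId"]}],
)
```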
When creating EC2 instances from Beanstalk, it automatically creates an Auto Scaling group and a launch configuration based on the specified environment selections. Creating the instance from the base AMI is done using custom code called user data, which includes the shell script that creates folders and installs the relevant software.
You can add new shell scripts or commands there to do your custom work before a new instance starts. This way it is much simpler; e.g. you can run yum update before starting an instance.
To find the user data section:
Go to the EC2 Console -> go to the Launch Configurations section (on the left) -> select the correct launch configuration and copy it -> click View user data -> add your scripts and commands as required -> modify the relevant Auto Scaling group to point to the new launch configuration
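The same copy-and-modify flow can also be scripted; here is a minimal sketch with boto3, with hypothetical launch configuration and ASG names:

```python
# A minimal sketch of the copy-and-modify flow via boto3, using
# hypothetical launch configuration and Auto Scaling group names.
import base64
import boto3

autoscaling = boto3.client("autoscaling")

# 1. Read the existing launch configuration and its user data
#    (returned base64-encoded by the API).
old = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=["web-lc-v1"]
)["LaunchConfigurations"][0]
user_data = base64.b64decode(old["UserData"]).decode()

# 2. Append the extra bootstrap commands (e.g. yum update).
user_data += "\nyum update -y\n"

# 3. Create a copy with the modified user data...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",
    ImageId=old["ImageId"],
    InstanceType=old["InstanceType"],
    UserData=user_data,
)

# 4. ...and point the Auto Scaling group at it.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)
```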