I'm trying to create a GCP instance template that has the most recent version of my repo on it. My repository is private, and I can't figure out how to clone it onto the instances in the instance group. I don't think I can use SSH, because the machines will be randomly destroyed and created, so the generated keys would be inconsistent. What's the best way to do this?
An Instance Template is based on an Image. This image can be a clean Ubuntu/Windows/Debian copy or a custom image created by you.
That said, I can think of two ways for you to get your repository in there.
1. Using a custom image.
In essence, a snapshot of an instance with your latest code and dependencies installed on it.
There are two paths you can go with here.
a. Create a custom image after you clone the repository to the instance. You might need to do that for every code update.
b. An alternative is to use some sort of network file system (NFS/SMB). This will usually require more resources, like another server that is always available.
2. If you want to avoid creating images, or as a solution to the issue mentioned in 1a, you can set up a startup script that runs on the server at boot (creation) time to clone or pull the latest copy (see the sketch after the links below).
There are pros and cons to both; only you can tell what is best for your case. I hope this gets you in the right direction.
Read more about creating an image here.
Read more about startup scripts here.
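As a rough illustration of option 2, a startup script along these lines could work; the repo URL, secret name, and install path are placeholders, and it assumes the repository can be cloned over HTTPS with a deploy token stored in Secret Manager:

#!/bin/bash
# Hypothetical startup script: runs on every boot of instances created from
# the template, so newly created instances always pull the latest code.
set -e
APP_DIR=/opt/myapp                                                          # placeholder install path
TOKEN=$(gcloud secrets versions access latest --secret=git-deploy-token)    # assumed Secret Manager secret
REPO_URL="https://${TOKEN}@github.com/example/myrepo.git"                   # placeholder repo; token-over-HTTPS form varies by git host
if [ -d "$APP_DIR/.git" ]; then
  git -C "$APP_DIR" pull --ff-only
else
  git clone "$REPO_URL" "$APP_DIR"
fi

The script can then be attached when creating the template, for example:

gcloud compute instance-templates create my-template \
    --image-family=debian-11 --image-project=debian-cloud \
    --scopes=cloud-platform \
    --metadata-from-file=startup-script=startup.sh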
My use case is:
I have a trained model which I want to use to run inference on small messages.
I am not sure where I should keep my model when using Cloud Run:
1. Inside the container
2. On Cloud Storage, downloaded at container start
3. Cloud Storage mounted as a local directory
I was able to write and run code successfully for options 1 and 2.
I tried option 3 but had no luck there; I am following this tutorial: https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
In my case the entry point is a Pub/Sub event, and that is where I am not able to make it work.
But before exploring it further, I would like to know which approach is better here, or whether there is another, better solution.
Thanks for the valuable comments, they helped a lot.
If the model is static, it is better to bundle it with the container. Downloading it from a storage bucket or mounting a file system means the model is downloaded again whenever a new container is spun up.
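For option 1, a rough sketch of the build flow, assuming the bucket, project, and service names below are placeholders and that your Dockerfile COPYs the ./model directory into the image:

# Pull the trained model into the build context so it gets baked into the image
gsutil cp gs://my-model-bucket/model.joblib ./model/model.joblib

# Build the image and deploy it to Cloud Run
gcloud builds submit --tag gcr.io/my-project/inference-service
gcloud run deploy inference-service \
    --image gcr.io/my-project/inference-service \
    --region us-central1

This way each revision of the service carries its own model version, and cold starts don't pay the download cost.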
I have two GCP projects communicating with each other over a Classic VPN, and I'd like to duplicate this entire configuration to another GCP account with two projects. So in addition to the tunnels and gateways, I have one network in each project to duplicate, some firewall rules, and a custom routing rule in one project.
I've found how I can largely dump these using various
gcloud compute [networks | vpn-tunnels | target-vpn-gateways] describe
commands, but looking at the create commands, they don't seem set up to accept piped input or to consume this output as a file, not to mention that some items won't be applicable in the new projects.
I'm not just trying to save time; I'm trying to make sure I don't miss anything, and I also want a hard copy of sorts of my current configuration.
Is there any way to do this? Thank you!
As clarified in other similar cases - like here and here - it's not possible to clone or duplicate entire projects in Google Cloud Platform. As explained in those cases, you can use Terraformer to generate Terraform files from existing infrastructure (reverse Terraform) and then recreate the resources in your new projects, as explained here.
To summarize, you can try this CLI as a possible alternative to copy part of your structure, but as emphasized in this answer here, there is no automatic way or magic tool that will copy everything, so your VM configurations, your app contents, and your data won't be duplicated.
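As a starting point for the "hard copy" part, one hedged approach is to dump every relevant resource type to YAML with gcloud (the project ID is a placeholder):

PROJECT=my-project   # placeholder project ID
mkdir -p config-dump
for RES in networks vpn-tunnels target-vpn-gateways firewall-rules routes; do
  # "list" with --format=yaml emits the full resource definitions, not just the default table columns
  gcloud compute $RES list --project "$PROJECT" --format=yaml > "config-dump/$RES.yaml"
done

These dumps aren't directly consumable by the create commands, but they give you a reviewable record to rebuild from, or to diff against what Terraformer generates.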
I need to move more than 50 compute instances from a Google Cloud project to another one, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
1. Get all instances in the source project
2. For each instance, get the machine type and the list of attached disks
3. For each disk, create a disk image
4. Create a new instance of the same machine type in the target project, using the first disk image as the source
5. Attach the remaining disk images to the new instance (in the same order they were created)
I've been checking both Terraform and Ansible, but I have the feeling that neither of them supports creating disk images, meaning that I could only use them for the last two steps.
I'd like to avoid writing a shell script because it doesn't seem like a robust option, but I can't find tools that can help me with the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems you can't do that on already-created machines; you have to recreate them to change the network.
There is no GCP tool to migrate instances from one project to another.
I was able to find, however, an Ansible module to create images.
In Ansible:
You can specify the “source_disk” when creating a “gcp_compute_image”, as mentioned here.
Frederic
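If you would rather not bring in Ansible, the same image-then-recreate flow (steps 3-5 above) can be sketched with plain gcloud. Everything below - project IDs, zone, machine type, disk and subnet names - is a placeholder, and sharing images across projects requires granting roles/compute.imageUser on the source project:

SRC=source-project        # placeholder source project
DST=target-project        # placeholder target project
ZONE=us-central1-a

# Step 3: create an image from each attached disk (disk names are placeholders)
for DISK in my-instance-boot my-instance-data1; do
  gcloud compute images create "img-$DISK" \
      --project "$SRC" \
      --source-disk "$DISK" --source-disk-zone "$ZONE"
done

# Step 4: recreate the instance in the target project from the boot-disk image
gcloud compute instances create my-instance \
    --project "$DST" --zone "$ZONE" \
    --machine-type n1-standard-4 \
    --image img-my-instance-boot --image-project "$SRC" \
    --subnet my-new-subnet

# Step 5: recreate the remaining disks from their images and attach them
gcloud compute disks create my-instance-data1 \
    --project "$DST" --zone "$ZONE" \
    --image img-my-instance-data1 --image-project "$SRC"
gcloud compute instances attach-disk my-instance \
    --project "$DST" --zone "$ZONE" --disk my-instance-data1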
Let's say I've created an AMI from one of my EC2 instances. Now I can add this manually to the LB or let the Auto Scaling group do it for me (based on the conditions I've provided). Up to this point everything is fine.
Now, let's say my developers have added new functionality and I pull the new code onto the existing instances. Note that the AMI is not updated at this point and still has the old code. My question is about how I should handle this situation so that when the Auto Scaling group creates a new instance from my AMI, it will have the latest code.
Two ways come into my mind, please let me know if you have any other solutions:
a) Keep AMIs updated all the time, meaning that whenever there's a pull request, the old AMI is removed (deleted) and replaced with a new one.
b) Have a startup script (cloud-init) on the AMI that pulls the latest code from the repository on initial launch (by storing the repository credentials on the instance and pulling the code directly from git).
Which of these methods is better? And if neither is good, what's the best practice to achieve this goal?
Given that almost anything in AWS can be automated using the API, it again comes down to the specific use case at hand.
At the outset, the usual recommendation is to have a base AMI with the necessary packages installed and configured, plus an init script that downloads the source code so it is always the latest. The very important factor to account for here is the time taken to check out or pull the code, configure the instance, and make it ready for work. If that time is very long, this strategy is a bad fit for auto scaling, since the warm-up time combined with Auto Scaling and CloudWatch statistics may produce results you don't expect (maybe, maybe not, but the probability is not zero). That is when you might consider baking a new AMI frequently; it minimizes the time instances need to get ready to handle traffic.
I would recommend measuring to see which is more convenient and cost effective. It costs real money to pull down an instance and relaunch it from an AMI; that's the tradeoff you need to weigh.
I have answered somewhat open-endedly, because the question is also somewhat open-ended.
People have started using Chef, Ansible, and Puppet, which perform configuration management. These tools add a different level of automation altogether; you may want to explore that option as well. A similar approach is to use Docker or other containers.
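For the "bake a new AMI frequently" path, the snapshot step itself is a one-liner with the AWS CLI (instance ID and image name are placeholders):

# Create a new AMI from the already-updated instance; --no-reboot avoids
# stopping the instance, at the cost of a non-quiesced filesystem snapshot.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "myapp-2016-01-15" \
    --no-reboot

You would then create a new launch configuration pointing at the new AMI ID and attach it to the Auto Scaling group.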
a) Keep AMIs updated all the time, meaning that whenever there's a pull request, the old AMI is removed (deleted) and replaced with a new one.
You shouldn't store your source code in the AMI. That introduces a maintenance nightmare and issues with autoscaling as you have identified.
b) Have a startup script (cloud-init) on the AMI that pulls the latest code from the repository on initial launch (by storing the repository credentials on the instance and pulling the code directly from git).
Which of these methods is better? And if neither is good, what's the best practice to achieve this goal?
Your second item, downloading the source on server startup, is the correct way to go about this.
Other options would be the use of Amazon CodeDeploy or some other deployment service to deploy updates. A deployment service could also be used to deploy updates to existing instances while allowing new instances to download the latest code automatically at startup.
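A minimal sketch of option (b) as user data, assuming a placeholder repository URL and install path, and that credentials come from somewhere safer than the AMI itself (for example an instance-profile-protected store):

#!/bin/bash
# cloud-init user data: runs at first boot of each instance the Auto Scaling
# group launches, so every new instance starts with the current code.
set -euo pipefail
APP_DIR=/opt/myapp                               # placeholder install path
REPO_URL="https://github.com/example/myapp.git"  # placeholder repository
git clone --depth 1 "$REPO_URL" "$APP_DIR"
# ... install dependencies here (app-specific), then start the service
systemctl start myapp.service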
We are currently setting up a private cloud using Heat in combination with OpenStack, but we are struggling with the "AWS::ElasticLoadBalancing::LoadBalancer" resource when setting up a load balancer with Heat, because this resource type has no user data and seems to use the F17-x86_64-cfntools image by default (can I change it?). Since we are behind a proxy, and cfn-init starts trying to install some packages via yum (like haproxy) when bootstrapping the image, we need to set a proxy before cfn-init starts. Is there any solution to this problem (other than patching the above image while keeping its name unchanged)?
Thx!
This question is answered here: https://answers.launchpad.net/heat/+question/237480 (no, you can't change it, but you can try using the quantum LBaaS instead)
And for people following along, the current version of the Heat source code that I've just looked at requires that the F20-x86_64-cfntools image be present.