Cloud Build uses a worker pool of VMs that cannot access my private Compute Engine resources. So, is there any way to run Cloud Build on my own VMs, or any other solution for this?
While waiting for the custom worker-pool feature you mentioned in your previous question to become publicly available, you can use the custom builder remote-builder.
You'll first need to build the builder image, which you can then use in your Cloud Build steps. When using the remote-builder image, the following happens:
1. A temporary SSH key is created in your Container Builder workspace.
2. An instance is launched with your configured flags.
3. The workspace is copied to the remote instance.
4. Your command is run inside that instance's workspace.
5. The workspace is copied back to your Container Builder workspace.
The build steps using this builder image will therefore run on a VM instance in your project's network and will be able to access other resources, provided your network configuration allows it.
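Once you've built the builder once (e.g. with gcloud builds submit from the remote-builder source directory), a minimal cloudbuild.yaml step looks like the following, per the pattern in the remote-builder README; the command itself is a placeholder:

steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND=./run-integration-tests.sh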
Edit: The cos image used in the example cloudbuild.yaml file seems to include it, so you'd be able to run it directly. If you'd like to customize your instances with specific software, you have several options (see the sketch after this list):
- You can create an instance template (based on a custom image that includes the software, or with a startup script that installs it at boot time) and specify that instance template in INSTANCE_ARGS in your cloudbuild.yaml.
- You can use a standard image and simply pass a startup script that installs the software in INSTANCE_ARGS.
- You can install it within a shell script executed in your build step.
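For illustration, a sketch of the first option, assuming an instance template named my-builder-template already exists in your project:

steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND=./build.sh
  - INSTANCE_ARGS=--source-instance-template my-builder-template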
Why can't you just fix the access issue? You can configure Cloud Build to create build workers within your VPC, inside your own cloud infrastructure.
See the following video, which explains how this works:
https://youtu.be/IUKCbq1WNWc?t=820
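For reference, a private worker pool peered with your VPC can be created along these lines (pool, region, project, and network names are placeholders; the video covers the full setup):

gcloud builds worker-pools create my-pool \
    --region=us-central1 \
    --peered-network=projects/my-project/global/networks/my-vpc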
Hope this helps.
I am planning to use Azure VMSS to deploy a set of Spring Boot apps. I plan to create a custom Linux VM image with all the required software/utilities as well as the required directory structure, and to configure this image in the VMSS. We use Jenkins as our CI/CD tool and Git as the source code repository. What is the best way to build and deploy these Spring Boot apps on VMSS?
I think one way is to write a Custom Script Extension that downloads the code from the Git repo and then starts the Spring Boot apps. I believe this script will then be executed every time a new VM is provisioned.
But what about cases where multiple VMs are already running on top of the minimum scale instance count? I believe a manual restart will not trigger the CSE script on these already-running VMs, right?
Could anyone advise the best way to handle this?
Also once a VM is deallocated due to auto scale down, what is the best/cost optimal way to back up the log files from VM to storage (blob or file share)?
You could enable Automatically tear down virtual machines after every use under organization settings/project settings >> Agent pools >> (your VMSS agent pool) >> Settings. Then a new VM instance is used for every job: after running a job, the VM goes offline and is reimaged before it picks up another job. The Custom Script Extension is executed on every virtual machine in the scale set immediately after it is created or reimaged. Here is the reference document: Create the scale set agent pool.
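If you need to attach your own Custom Script Extension to the scale set, it can be done with the Azure CLI along these lines (resource names and the script URL are placeholders):

az vmss extension set \
  --resource-group my-rg \
  --vmss-name my-vmss \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris":["https://example.com/deploy.sh"],"commandToExecute":"bash deploy.sh"}'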
To back up the log files from a VM, you could refer to Troubleshoot and support for the relevant file paths on the target virtual machine.
I have several projects that run on Google Cloud Run. Cloud Build connects each instance to a corresponding branch of a Git repository. Each time a commit is pushed to a branch, a build is triggered to update the Cloud Run instance.
I'd like to be able to show information about the build within the Cloud Run application (e.g. branch and commit that the build has been built from). How can I pass this information from the repo/commit/build to the instance?
As @guillaume blaquiere stated in his comment:
You have the information in Cloud Build when it runs. You can get the data and paste them in your container somewhere. Then you have to serve them. Depends on your implementation.
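For example, trigger-based builds expose default substitutions such as $BRANCH_NAME and $COMMIT_SHA. A sketch of a cloudbuild.yaml that bakes them into the image and also passes them to Cloud Run as environment variables (the service name, image name, and region are placeholders; the Dockerfile is assumed to declare matching ARG entries):

steps:
# Build the image, passing commit info as build args.
- name: gcr.io/cloud-builders/docker
  args: ['build',
         '--build-arg', 'COMMIT_SHA=$COMMIT_SHA',
         '--build-arg', 'BRANCH_NAME=$BRANCH_NAME',
         '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
# Deploy, exposing the same info as runtime environment variables.
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: gcloud
  args: ['run', 'deploy', 'my-app',
         '--image', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA',
         '--region', 'us-central1',
         '--update-env-vars', 'BRANCH_NAME=$BRANCH_NAME,COMMIT_SHA=$COMMIT_SHA']

The application can then read BRANCH_NAME and COMMIT_SHA from its environment at runtime and display them.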
I would like to include an environment variable on a Google VM which is running a JupyterLab notebook - this variable needs to be present before the notebook is started.
So setting it in the terminal or in the notebook does not work.
I have also tried to modify the bashrc with no luck.
To have an environment variable set on your Compute Engine instance from boot, you might be interested in startup scripts.
Startup scripts are automated tasks that are performed when your instance boots up. You can set one when creating the instance, under the Automation section; if the instance is already created, open your instance details in the Compute Engine console and, under Custom metadata, click Add item.
Steps to create startup scripts can be found here and here.
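For illustration, a startup script along these lines would make the variable visible to the notebook server. This sketch assumes JupyterLab runs as a systemd service named jupyter.service (as on the Deep Learning VM images) and uses a placeholder path:

#!/bin/bash
# Add the variable to the Jupyter service's environment via a systemd drop-in,
# then restart the service so it picks up the change.
mkdir -p /etc/systemd/system/jupyter.service.d
cat > /etc/systemd/system/jupyter.service.d/env.conf <<'EOF'
[Service]
Environment="BASE_DIR=/home/jupyter/data"
EOF
systemctl daemon-reload
systemctl restart jupyter

You can attach the script to an existing instance with gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file startup-script=startup.sh and reboot.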
If you mean Google Colab, one solution is to set it with Python, for example:
import os
# Set the variable for this Python process and any child processes.
os.environ["BASE_DIR"] = "/content/drive/MyDrive/"
I want to create users on a Windows Server instance on Google Cloud during instance creation. I searched the Google Cloud documentation and other sites but could not find an answer. I am aware of startup scripts, but those run every time the machine boots up. Please help.
You can use a GCP startup script to do it. Please have a look at the documentation Running startup scripts. For example, you can easily add a user John and add him to the group Remote Desktop Users by using metadata:
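A sketch of that metadata (reconstructed here; windows-startup-script-cmd is Compute Engine's standard metadata key for cmd startup scripts on Windows):

windows-startup-script-cmd:
  net user John fadf24as.FD* /add
  net localgroup "Remote Desktop Users" John /add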
and, as a result, you'll be able to log in via RDP to your VM instance with username John and password fadf24as.FD*.
By default, such a script will be executed during each boot cycle of the VM instance:
Compute Engine lets you create and run your own startup scripts on your virtual machine (VM) instances to perform automated tasks every time your instance boots up.
To change this default behavior, you can add an additional step, such as creating a folder or file to use as a flag: if the folder or file already exists, the rest of the script is skipped. For such logic, PowerShell is more suitable than cmd, and the final script could be downloaded from a Google Cloud Storage bucket.
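A minimal sketch of that run-once pattern, as a windows-startup-script-ps1 metadata value (the flag path is arbitrary; the user and password carry over from the cmd example above):

$flag = 'C:\provisioned.flag'
if (-not (Test-Path $flag)) {
    # First boot only: create the user and grant RDP access.
    net user John fadf24as.FD* /add
    net localgroup 'Remote Desktop Users' John /add
    # Leave a marker so subsequent boots skip this block.
    New-Item -ItemType File -Path $flag | Out-Null
}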
I am currently using AWS Elastic Beanstalk, and I was curious how it knows internally, when an instance fires up (or is created automatically by scaling), to unpack the zip I uploaded as a version. Is there some environment setting that looks up my zip in my S3 bucket and then unpacks it automatically for every instance running in that environment?
If so, could this be used to automate a task such as running an SQL query on boot-up (instance deployment) too? Are these automated tasks changeable or viewable at all?
Thanks
I don't know how Beanstalk knows which version to download and unpack, but running a task on start-up is trivial. Check out cloud-init, a tool written by Canonical for Ubuntu that's now packaged in Amazon Linux. It allows you to pass arbitrary shell scripts into the UserData section of the instance configuration, and those shell scripts will run on startup.
It's a great way to bootstrap instances on startup, which avoids the soul-sucking misery of managing AMIs.
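For illustration, a minimal UserData script (the commands are placeholders; cloud-init runs it as root on first boot):

#!/bin/bash
# Install a package and record that bootstrapping ran.
yum install -y htop
echo "bootstrapped at $(date)" >> /var/log/bootstrap.log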
A quick (possibly non-applicable) warning: If you're running a SQL query on a database that lives on the beanstalk AMI, you're pretty much guaranteed to lose your database at some point. Those machines are designed to be entirely transient. Do not put databases on them. See this answer for more details.
Since your goal seems to be to run custom configuration tasks, the answer is yes, there is a way to do that. You can define custom actions in an .ebextensions file packaged with your app. For example, you can configure a command to run every time a new machine is deployed:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands
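A sketch of such a file, e.g. .ebextensions/tasks.config inside your app zip (the SQL file is a placeholder, and the RDS_* variables assume an RDS instance is attached to the environment):

container_commands:
  01_run_sql:
    command: "mysql -h $RDS_HOSTNAME -u $RDS_USERNAME -p$RDS_PASSWORD $RDS_DB_NAME < setup.sql"
    leader_only: true

container_commands run after the application version is unpacked but before it is deployed, and leader_only: true restricts the command to a single instance per deployment, which is usually what you want for database changes.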