I have a WorkSpace in AWS WorkSpaces with a lot of configuration files, installed software, and files with templates, shell scripts, and code, so it's fully configured.
My problem is that when I create an image, I lose everything but the installed software. Does anybody know how I can create a backup of my AWS WorkSpace, so I don't have to reconfigure the desktop in the terrible case where my images and my WorkSpaces are accidentally removed?
Thanks.
As per the official docs,
A custom image contains only the OS, software, and settings for the WorkSpace. A custom bundle is a combination of both that custom image and the hardware from which a WorkSpace can be launched.
It seems the image does not carry forward personal settings such as the wallpaper or browser settings; I experienced this myself.
However, if you are worried about losing whatever configuration you have done should the WorkSpace become unhealthy, you can use the Rebuild or Restore option.
By default, AWS takes automatic snapshots of the root and user volumes of your WorkSpace every 12 hours.
You can read more about this here.
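As a rough sketch (the WorkSpace ID below is just a placeholder), restoring or rebuilding from those automatic snapshots can be done with the AWS CLI:

# Restore recreates the root and user volumes from their most recent snapshots.
aws workspaces restore-workspace --workspace-id ws-xxxxxxxxx

# Rebuild recreates the root volume from the bundle image and restores the
# user volume from its latest snapshot.
aws workspaces rebuild-workspaces \
    --rebuild-workspace-requests WorkspaceId=ws-xxxxxxxxx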
in the terrible case where my images and my WorkSpaces are accidentally removed
If your WorkSpace is deleted/terminated, no data can be retrieved.
I am planning to use Azure VMSS to deploy a set of Spring Boot apps. I plan to create a custom Linux VM image with all the required software/utilities as well as the required directory structure, and to configure this image in the VMSS. We use Jenkins as our CI/CD tool and Git as the source code repo. What is the best way to build and deploy these Spring Boot apps on VMSS?
I think one way is to write a Custom Script Extension that downloads the code from the Git repo and then starts these Spring Boot apps. I believe this script will then get executed every time a new VM is provisioned.
But what about cases where multiple VMs are already running on top of the minimum scale instance count? I believe a manual restart will not trigger the CSE script on these already-running VMs, right?
Could anyone advise the best way to handle this?
Also, once a VM is deallocated due to auto scale-down, what is the best/most cost-effective way to back up the log files from the VM to storage (blob or file share)?
You could enable Automatically tear down virtual machines after every use in the organization settings/project settings >> agent pool >> VMSS agent pool >> settings. Then a new VM instance is used for every job: after running a job, the VM goes offline and is reimaged before it picks up another job. The Custom Script Extension will be executed on every virtual machine in the scale set immediately after it is created or reimaged. Here is the reference document: Create the scale set agent pool.
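As a rough sketch only (the repo URL, branch, directories, and jar location are placeholders, not taken from your setup), the Custom Script Extension payload could look something like this:

#!/bin/bash
# Hypothetical CSE payload: fetch the code and start the Spring Boot app.
set -euo pipefail

APP_DIR=/opt/myapp
if [ -d "$APP_DIR/.git" ]; then
    git -C "$APP_DIR" pull
else
    git clone --branch main https://github.com/example/spring-boot-app.git "$APP_DIR"
fi

cd "$APP_DIR"
./mvnw -q package -DskipTests
nohup java -jar target/*.jar > /var/log/myapp.log 2>&1 &

Because the extension runs on every create/reimage, each fresh instance starts with the latest code without any manual step.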
To back up the log files from the VM, you could refer to Troubleshoot and support for the related file paths on the target virtual machine.
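For the log files, one minimal sketch (storage account name, container, and log path are placeholders) is to push them to Blob Storage with the Azure CLI, for example from a shutdown hook or as the last step of a job:

# Upload everything under the app's log directory to a blob container.
az storage blob upload-batch \
    --account-name mystorageaccount \
    --destination vmss-logs \
    --source /var/log/myapp \
    --auth-mode login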
This question assumes you have used Google Drive Sync, or at least know what files it creates in your cloud drive.
While using rclone to sync a local Ubuntu directory to a Google Drive (a.k.a. gdrive) location, I found that rclone wasn't able to create a directory there (error googleapi: Error 500: Internal Error, internalError; the Google Cloud Platform API console revealed that the gdrive API call drive.files.create was failing).
By location I mean the root of the directory structure that the Google Drive Sync app creates in the cloud (e.g. the Computers/laptopName/ level of, say, Computers/laptopName/(syncedFolder1, syncedFolder2, ...)). In the current case, the gdrive sync app (famously unavailable on Linux) was running on a separate Windows machine. It was in this location that rclone wasn't able to create a dir.
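For illustration only (the remote name and paths are placeholders), the failing operation had roughly this shape:

# Creating a directory under the Backup and Sync area fails:
rclone mkdir "gdrive:Computers/laptopName/newFolder"
# googleapi: Error 500: Internal Error, internalError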
Forget rclone. Trying to manually create the folder in the web app also fails as follows:
Working...
Could not execute action
Why is this happening, and how can I achieve this: making a directory in the cloud location where gdrive sync has put all my synced folders?
Basically, you can't. I found an explanation here:
If I am correct in my suspicion, there are a few things you have to understand:
Even though you may be able to create folders inside the Computers isolated containers, doing so will immediately create that folder not only in your cloud, but on that computer/device. Any changes to anything inside the Computers container will automatically be synced to the device/computer the container is linked to, just like any change on the device/computer side is also synced to the cloud.
It is not possible to create anything at the "root" level of each container in the cloud. If that were permitted then the actual preferences set in Backup & Sync would have to be magically altered to add that folder to the preferences. Thus this is not done.
So while folders inside the synced folders may be created, no new modifications may be made in the "root" dir.
I have a project created via the console. The project is on hold and might be revived in 6 months. I would like to download it to local storage, delete it, and then upload it to the cloud if/when the project is taken off hold. I have not been able to get gsutil working on my machine. (I'm a newbie to the SDK.)
I know how to delete the project ("Shut down"), and I have saved parts of it (i.e. the MySQL database, Cloud Functions, etc.), but I can't figure out how to save the project with its users, permissions, and settings intact.
Is there a way? (I am concerned about costs while it is not used, otherwise I would leave it.)
I have Google Cloud Build and Kubernetes Engine set up in my project, and I want to back up my builds to another project. I am doing this in order to have a backup in case of a disaster, so I will be able to restore the builds.
I noticed that all of the builds are saved into a bucket called: artifacts.{project-id}.appspot.com
Option I came up with
Making a transfer of this bucket into another project.
This will physically back up these builds.
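Something like this is what I have in mind (bucket names are placeholders):

# Copy the Cloud Build artifacts bucket into a bucket owned by the backup project.
gsutil -m rsync -r gs://artifacts.source-project.appspot.com gs://my-backup-bucket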
Questions
If the original project gets deleted, will this be enough for me to restore the builds? How will I do that?
What other ways can I back up these builds?
Cloud Build creates a Docker image and uploads it to Google Container Registry.
Answer to 1:
Yes. If the bucket is transferred from project A to project B, then even if project A is deleted, the images in project B will not be affected.
Answer to 2:
You can copy it from one Container Registry location to another, or download it to your local computer.
To copy a Docker image in Container Registry to another location, you can use the following command from your Cloud Shell:
gcloud container images add-tag \
[SOURCE_HOSTNAME]/[SOURCE_PROJECT-ID]/[SOURCE_IMAGE]:[SOURCE_TAG] \
[DESTINATION_HOSTNAME]/[DESTINATION_PROJECT-ID]/[DESTINATION_IMAGE]:[DESTINATION_TAG]
The hostnames will be one of gcr.io, eu.gcr.io, us.gcr.io, or asia.gcr.io.
The project IDs are the source and destination project IDs, and the image and tags are the ones you choose for the image.
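To download an image to your local computer instead, a minimal sketch (project IDs, image name, and tag are placeholders) would be:

# Let Docker authenticate against Container Registry, then pull the image locally.
gcloud auth configure-docker
docker pull gcr.io/source-project-id/my-image:latest

# Optionally re-tag and push it into the backup project's registry.
docker tag gcr.io/source-project-id/my-image:latest gcr.io/backup-project-id/my-image:latest
docker push gcr.io/backup-project-id/my-image:latest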
I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without having to upload it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large-scale deployments, one typically sets configuration using a purpose-built tool like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled with a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools then.
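As a minimal sketch (paths, variable names, and the start command are placeholders, assuming your settings read these values via os.environ), you could keep an env file on the server outside the repository and load it on each deploy:

# One-time, on the server: create an env file the repo never sees.
cat > /etc/myapp/production.env <<'EOF'
DJANGO_SECRET_KEY=change-me
DATABASE_URL=postgres://user:pass@localhost/mydb
EOF
chmod 600 /etc/myapp/production.env

# On each deploy: pull the code, load the environment, and (re)start the app.
cd /srv/myapp && git pull
set -a; . /etc/myapp/production.env; set +a
python manage.py migrate --noinput
gunicorn myproject.wsgi --daemon

Your production settings would then pull these values from the environment instead of hard-coding them, so the file that remains in the repository contains nothing sensitive.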