When an image is created from a compressed RAW image stored in a gcs bucket, is an instance spun up in the background to validate the image? I would like to understand how the image creation process works and if Google adds some software on top of what's in the RAW image.
Our documentation on importing Boot Disk Images to Compute Engine explains, in its overview section, all the steps needed to understand how the image creation process works. That should answer your question "I would like to understand how the image creation process works".
Reviewing those steps in detail will also address the remaining questions: no instance is spun up in the background to validate the image, and Google doesn't add any software on top of what's in the RAW image.
Customers are responsible for:
1- Planning for the import path
2- Preparing the boot disk so it can boot in the Compute Engine environment
3- Creating and compressing the boot disk
4- Uploading the image (see the sketch after this list)
5- Using the imported image to create a VM
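For illustration only, here is a minimal Python sketch of the upload and image-registration part of that list (step 4 plus the image half of step 5). It assumes Application Default Credentials, the google-cloud-storage and google-api-python-client libraries, and placeholder project, bucket, and file names:

```python
# Minimal sketch: upload a compressed RAW boot disk and register it as a
# custom image. All names below are placeholders.
from google.cloud import storage
from googleapiclient import discovery

PROJECT = "my-project"        # placeholder project ID
BUCKET = "my-images"          # placeholder GCS bucket
ARCHIVE = "my-image.tar.gz"   # gzipped tarball containing disk.raw

# Step 4: upload the compressed boot disk image to Cloud Storage.
storage.Client(project=PROJECT).bucket(BUCKET).blob(ARCHIVE).upload_from_filename(ARCHIVE)

# Register the uploaded file as a custom image. This call records the file
# as-is; as noted above, no instance is booted to validate it and no extra
# software is layered on top of the RAW contents.
compute = discovery.build("compute", "v1")
compute.images().insert(
    project=PROJECT,
    body={
        "name": "my-imported-image",
        "rawDisk": {"source": f"https://storage.googleapis.com/{BUCKET}/{ARCHIVE}"},
    },
).execute()
```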
I've looked for this across the web a few times, and I feel like it hasn't been asked exactly, or I may just be getting bogged down in the wrong syntax. Hoping to get an easy answer here (even "you can't get this" is an acceptable answer).
The variations from the base CentOS image are listed here: Link to GCP
However, they don't actually provide a download for this image. I'm trying to get a local VM running in VMWare with this image.
I feel as though they'd provide this to their clients to make it easier to prepare for use of their product, but I'm not finding it anywhere.
If anyone could toss me a link to a pre-configured CentOS ISO with the minor changes, I'd definitely take that as an alternative. I'm just not confident in my skills with Linux enough to configure the firewall properly :)
GCP doesn't support exporting Google-provided images. However, it does support exporting custom images.
I don't have any hands-on experience with image export, but I think the following approach works.
Create custom images
You can create custom images based on your GCE VM instance.
Go to Navigation menu -> Compute Engine -> Images page.
On this page you can create a custom image from a disk or a snapshot.
Select one and create a custom image.
Export your image
After the custom image is created successfully, go to the custom image's page and click "Export" at the top.
Select the export format and the GCS destination, then click Export.
Now you have an image file in Google Cloud Storage.
Download the image file and import it into your local VM (a sketch of the download step follows).
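If it helps, that final download step could look roughly like this in Python, assuming the google-cloud-storage library, Application Default Credentials, and placeholder bucket and file names:

```python
# Minimal sketch: download the exported image file from Cloud Storage so it
# can be imported into a local hypervisor. Names are placeholders.
from google.cloud import storage

client = storage.Client(project="my-project")                # placeholder project
blob = client.bucket("my-export-bucket").blob("centos-custom.vmdk")
blob.download_to_filename("centos-custom.vmdk")               # local copy for VMware
```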
I want to build a Google Cloud image using Packer, but I can't seem to find a way for Packer to add additional disks with the googlecompute builder. This is required because I want a persistent disk for the application to store data on.
Is it something that can be done through startup_script or any other way?
GCE images only support one disk.
Please check here for an open feature request to support this.
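As a possible workaround that the feature request does not cover (shown here only as a hedged sketch with placeholder names), the persistent data disk can be created and attached when a VM is launched from the Packer-built image, for example through the Compute Engine v1 API with google-api-python-client:

```python
# Hypothetical sketch: since the built image only describes the boot disk,
# create and attach the data disk at deploy time. Names are placeholders.
from googleapiclient import discovery

PROJECT, ZONE = "my-project", "us-central1-a"
compute = discovery.build("compute", "v1")

# Create a blank persistent disk for application data.
# (In real code, poll the returned operation until it completes.)
compute.disks().insert(
    project=PROJECT, zone=ZONE,
    body={"name": "app-data", "sizeGb": "200"},
).execute()

# Attach it to the instance that was created from the custom image.
compute.instances().attachDisk(
    project=PROJECT, zone=ZONE, instance="app-vm",
    body={"source": f"projects/{PROJECT}/zones/{ZONE}/disks/app-data"},
).execute()
```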
I was looking at the prices in the calculator when I saw, under Free, "... or User Provided OS". I want to know how I can upload an OS to Google Compute Engine.
There are a number of ways to import/migrate a custom instance to GCP. Two possible and popular solutions would be either migrating with Velostrata or using CloudEndure.
To upload or import a boot disk image to Compute Engine, you may use the following process:
Plan your import path. You must identify where you are going to prepare your boot disk image before you upload it, and how you are going to connect to that image after it boots in the Compute Engine environment.
Prepare your boot disk so it can boot within the Compute Engine environment and so you can access it after it boots.
Create and compress the boot disk image file (see the sketch after these steps).
Upload the image file to Google Cloud Storage [1] and import the image to Compute Engine as a new custom image [2].
Use the imported image to create a virtual machine instance and make sure it boots properly.
If the image does not successfully boot, you can troubleshoot the issue by attaching the boot disk image to another instance and reconfiguring it.
Optimize the image [3] and install the Linux Guest Environment [4] so that your imported operating system image can communicate with the metadata server and use additional Compute Engine features.
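As an illustration of step 3 only, here is a minimal Python sketch. It assumes the raw boot disk has already been written to a local file named disk.raw; Compute Engine expects exactly that file name inside a gzipped tarball:

```python
# Minimal sketch: package disk.raw as a gzipped tarball for upload.
import tarfile

# GNU tar format matches the archive layout the import process expects.
with tarfile.open("my-image.tar.gz", "w:gz", format=tarfile.GNU_FORMAT) as tar:
    tar.add("disk.raw")
```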
You may visit this link to learn how to import Boot Disk Images to Compute Engine [5].
[1] https://cloud.google.com/storage/
[2] https://cloud.google.com/compute/docs/images#custom_images
[3] https://cloud.google.com/compute/docs/images/configuring-imported-images
[4] https://cloud.google.com/compute/docs/images/configuring-imported-images#install_guest_environment
[5] https://cloud.google.com/compute/docs/images/import-existing-image
Here's my situation. I've been working on building a service at work that takes dynamically generated images and outputs animations as mp4 or gif. The user has the options of setting dimensions, time for each frame, etc.
I have this working currently with ffmpeg. It works OK, but it is difficult (and potentially expensive) to scale, largely due to ffmpeg's CPU/memory requirements.
I just spent some time experimenting with AWS's Elastic Transcoder. It doesn't seem to like static image files (jpg, png) as source material in jobs. The file types aren't listed under the available Preset options either.
I'm sure that I could adapt the existing architecture to save the static images as video files (sound isn't needed) and upload those. That will still require ffmpeg to be in the pipeline though.
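For reference, the kind of ffmpeg step being described might look roughly like this; the frame naming, frame rate, and codec options are assumptions rather than details from the question:

```python
# Hypothetical sketch: turn a sequence of generated frames into an mp4.
import subprocess

SECONDS_PER_FRAME = 2  # placeholder: user-selected time per frame

subprocess.run(
    [
        "ffmpeg",
        "-framerate", f"1/{SECONDS_PER_FRAME}",  # one input frame every N seconds
        "-i", "frame%03d.png",                   # frame000.png, frame001.png, ...
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",                   # broad player compatibility
        "animation.mp4",
    ],
    check=True,
)
```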
Are there any other AWS services that might work with my needs and allow the use of Elastic Transcoder?
I have an Android app which uploads images taken by the camera to AWS S3. I would like to be able to keep the image if it contains the face of the user, and only the face of the user (i.e. a selfie; unfortunately Android does not save which camera was used in the EXIF data).
I have found code to do this on Android, but that seems like an unnecessary number of network calls. Seeing as I am using S3, it seems like there should be a way to have S3 do it for me automatically, i.e., every image uploaded to a folder is automatically run through Rekognition, stored if it matches the reference image, and deleted otherwise.
The service is so new, however, and the documentation so sparse, that I cannot find any docs describing whether this is possible. Does anyone know?
You can do the following:
S3 upload event -> trigger Lambda -> call the Rekognition CompareFaces API -> based on a confidence score threshold -> decide to delete or retain (sketched below).
Points to note:
You need to have a reference image stored in S3
If there are too many images being uploaded, you can see whether AWS Batch is better suited; if you are OK with not doing it in real time, spot instances should be preferable.
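A minimal sketch of the Lambda step in that pipeline, assuming boto3, a reference image already stored in S3, and a placeholder 90% similarity threshold:

```python
# Hedged sketch of the Lambda in the pipeline above. Bucket layout, reference
# key, and threshold are assumptions, not part of the answer.
import boto3

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

REFERENCE_BUCKET = "my-app-config"     # placeholder: where the reference selfie lives
REFERENCE_KEY = "reference/user.jpg"   # placeholder
SIMILARITY_THRESHOLD = 90              # placeholder cutoff


def handler(event, context):
    # The S3 put event carries the bucket and key of the uploaded image.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    response = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": REFERENCE_BUCKET, "Name": REFERENCE_KEY}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": key}},
        SimilarityThreshold=SIMILARITY_THRESHOLD,
    )

    # Retain the upload only if at least one face matches the reference image.
    if not response["FaceMatches"]:
        s3.delete_object(Bucket=bucket, Key=key)
```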
I'm working with Rekognition as well. As best I can tell from your question, CompareFaces or SearchFaces could be used to determine whether to store or delete the image. As far as getting Rekognition to auto-run on a specific folder, I guess it could start with S3 invoking Lambda, but I'm not sure what additional AWS services would be required beyond...