My work requires me to install software that checks that the hard drive is encrypted.
I hear the virtual disk files are encrypted by VMware itself, but the software won't be able to detect that. (I have no way to check right now.)
Is it possible to use an actual encrypted hard drive from within the OS with VMware?
First, check which tools your encryption detector supports (VeraCrypt? BitLocker? ...).
If VMware's own encryption is not listed, you can perform disk encryption within the guest using a supported tool. This is probably the easiest way to make this software happy.
Test this in a disposable VM first to make sure there are no big performance issues and that the encryption is detected.
I recommend keeping the VMware encryption as it is and, if needed, lowering the guest encryption strength, so that you still have encrypted snapshots (something I expect from a VMware-encrypted machine...). Without that, an accidental snapshot of a running VM will leak sensitive data in plain text to your disk; cleaning up that mess is hard and, in extreme cases, requires physical destruction of the disk.
Note that using a known-broken cipher should be flagged by any serious disk encryption detector.
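For a Linux guest, here is a minimal sketch of in-guest encryption of a secondary data disk with LUKS (cryptsetup); the device path, mapper name, and mount point are placeholders, and the tool you actually use should be one your detector supports:

    # WARNING: luksFormat destroys any existing data on the device.
    sudo cryptsetup luksFormat /dev/sdb            # initialize LUKS on the raw virtual disk
    sudo cryptsetup open /dev/sdb cryptdata        # unlock it as /dev/mapper/cryptdata
    sudo mkfs.ext4 /dev/mapper/cryptdata           # create a filesystem on the mapped device
    sudo mkdir -p /mnt/secure
    sudo mount /dev/mapper/cryptdata /mnt/secure   # mount it where the sensitive data will live

    # Sanity checks from inside the OS:
    lsblk -o NAME,TYPE,MOUNTPOINT                  # the unlocked device shows TYPE "crypt"
    sudo cryptsetup status cryptdata               # reports cipher, key size and backing device

The same idea applies to a Windows guest with BitLocker or VeraCrypt; what matters is that the encryption runs inside the guest where the detector can see it.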
Procedure
Connect to vCenter Server by using the vSphere Client.
Right-click the virtual machine that you want to change and select VM Policies > Edit VM Storage Policies.
You can set the storage policy for the virtual machine files, represented by VM home, and the storage policy for virtual disks.
Select the storage policy.
To encrypt the VM and its hard disks, select an encryption storage policy and click OK.
To encrypt the VM but not the virtual disks, toggle on Configure per disk, select the encryption storage policy for VM Home and other storage policies for the virtual disks, and click OK.
You cannot encrypt the virtual disk of an unencrypted VM.
If you prefer, you can encrypt the virtual machine, or both virtual machine and disks, from the Edit Settings menu in the vSphere Client.
Right-click the virtual machine and select Edit Settings.
Select the VM Options tab, and open Encryption. Choose an encryption policy. If you deselect all disks, only the VM home is encrypted.
Click OK.
Related
I'm using an always-free VM on Google Cloud (e2-micro). When creating the instance, there's an option to enable the Confidential Computing service, but that requires an N2D machine type, which is not part of the always-free resources.
Does that mean Google can read my VM's data?
In other words, without that option enabled, what can Google read on my VM?
I'm not worried about system health monitoring data. I'm only concerned with files and folders that I put there.
Google has written policies that describe what they can access and when. Google also provides the ability to log their access.
Confidential Computing is a different type of technology that is not related to Google accessing your data.
Start with this page which provides additional links:
Creating trust through transparency
This Whitepaper is a good read. Page 9 answers your question:
Trusting your data with Google Cloud Platform
You may have heard of Encryption in Transit, or Encryption at Rest. Confidential Computing additionally encrypts data while it's being processed within the VM (encryption in use).
You need to use N2D machine types because Confidential VMs rely on features available on the AMD EPYC processors.
A Confidential Virtual Machine (Confidential VM) is a type of N2D Compute Engine VM running on hosts based on the second generation of AMD Epyc processors, code-named "Rome." Using AMD Secure Encrypted Virtualization (SEV), Confidential VM features built-in optimization of both performance and security for enterprise-class high memory workloads, as well as inline memory encryption that doesn't introduce significant performance penalty to those workloads.
You can select the Confidential VM service when creating a new VM using the Google Cloud Console, the Compute Engine API, or the gcloud command-line tool.
You can find more details here.
You can check their privacy document here.
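As an illustration of the gcloud route mentioned above, here is a sketch of creating a Confidential VM; the instance name, zone, and image family are placeholders, and the --confidential-compute flag and N2D requirement reflect the docs at the time, so double-check the current gcloud reference:

    # Create an SEV-based Confidential VM (requires an N2D machine type).
    gcloud compute instances create example-confidential-vm \
        --zone=us-central1-a \
        --machine-type=n2d-standard-2 \
        --confidential-compute \
        --maintenance-policy=TERMINATE \
        --image-family=ubuntu-2004-lts \
        --image-project=ubuntu-os-cloud
    # TERMINATE is used because Confidential VMs could not live-migrate at launch.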
I am a beginner with AWS and I have a question about EBS volumes. I know that when we create an EBS volume, there is an option to enable encryption (the default is unencrypted). From a security standpoint it is better to enable encryption of the EBS volume, so why is EBS not forced to be encrypted? What are the use cases/reasons for choosing an unencrypted EBS volume?
My guess is that it would be because Amazon EBS encryption was not always available. It was a feature added at some point, so the ability to use a non-encrypted volume remains.
Encrypted volumes also make some tasks more difficult, such as sharing AMIs publicly or between accounts. There are plenty of reasons to offer non-encrypted volumes.
Therefore, it would not be a good idea to "force" encryption.
However, you are welcome to force encryption within your organization, but be aware that there may be times when you do not want it activated.
This is likely down to the fact that it might change the way in which users can interact with their resources (and is technically a breaking change as the previous default was unencrypted volumes), so a user should understand these changes before they actually start using encryption.
Encryption in AWS is actively encouraged, but enabling it for services such as EBS does change some workflows; for example, snapshots will be encrypted, and if you need to migrate them between regions (or accounts) for DR, there are additional steps to allow this.
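For example, moving an encrypted snapshot to another region for DR means re-encrypting it with a key that lives in the destination region; a sketch with the AWS CLI (the snapshot ID, regions, and key alias are placeholders):

    # Copy an encrypted snapshot from us-east-1 to eu-west-1,
    # re-encrypting it with a KMS key in the destination region.
    aws ec2 copy-snapshot \
        --source-region us-east-1 \
        --source-snapshot-id snap-0123456789abcdef0 \
        --region eu-west-1 \
        --encrypted \
        --kms-key-id alias/my-dr-key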
Regarding security, yes, it is better, primarily at the physical level of AWS. It means that should anyone gain physical access to the storage, they will not be able to access the data without the key used to encrypt the volume. However, should someone SSH into your server, it will behave as normal.
AWS has added a feature for users who want this to be the default: you need to opt in to default encryption to enable it.
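A sketch of that opt-in with the AWS CLI (it is a per-account, per-region setting, so repeat it in each region you use):

    # Turn on EBS encryption by default for this account in this region.
    aws ec2 enable-ebs-encryption-by-default --region eu-west-1

    # Confirm the setting and see which KMS key new volumes will use.
    aws ec2 get-ebs-encryption-by-default --region eu-west-1
    aws ec2 get-ebs-default-kms-key-id --region eu-west-1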
According to the AMD SEV API specification [1], the guest owner authenticates the AMD platform and verifies the integrity measurement of the launched VM guest, and later encrypts the disk encryption key and sends it to the guest (this flow is shown in Appendix A). However, when searching through the docs of Google Confidential VM [2] I could not find any information about either authenticating the platform or sending the wrapped disk encryption keys to the guest.
My specific question is: in the Google Confidential VM implementation, which party generates the disk encryption key? How can the guest owner verify the launch and generate the disk encryption key? If the key is generated by the firmware under the platform provider's control, Google Cloud Platform (GCP) in this case, then the user does not gain any additional security/privacy protection from GCP insiders (as claimed in the docs [2]).
P.S. A bug in the docs: to get support one is advised to post on Stack Overflow with the "confidential-vm-tag" [3]; however, no such tag exists as of 2020-07-29.
[1] AMD Secure Encrypted Virtualisation API v0.24 https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf
[2] https://cloud.google.com/compute/confidential-vm/docs/about-cvm
[3] https://cloud.google.com/compute/confidential-vm/docs/getting-support
I totally agree with Nico. One way would be to extract the PDH and PEK keys from the Confidential VM, but so far I have not found any way to do this.
mebius99's answer is correct. I think, based on your reply, that you expect something unique for Confidential VMs. This is not the case, and that is ultimately why Confidential VMs are so powerful... you don't need to drastically change your existing tooling/orchestration. Google's implementation is flexible, so you can use disk encryption in a variety of ways... But Google does not allow users to provide the LAUNCH_SECRET per the AMD doc.
I don't think this is "going against the spec", Appendix A of the AMD spec says
The following flow charts are provided to illustrate how the usage of the SEV API might be implemented. Note that these are only examples and there may be other implementation strategies.
Unless I am completely missing the boat on what you are asking...
Technically, the answer to the questions depends on which of three available approaches is chosen:
Google-managed keys;
Customer-managed keys (CMEK);
Customer-supplied keys (CSEK).
Practically, data protection issues are rarely purely technical. The crucial points are the data sensitivity level and the Data Protection and Privacy agreement between the customer and the cloud provider. In other words, whether the customer can trust Google in terms of data protection, or whether, due to existing compliance policies, the Cloud must be considered a hostile environment.
Google-managed keys (default). Google uses its infrastructure to generate and manage keys for the customer automatically. The customer has no control over the encryption keys.
Customer-managed keys (CMEK). Google uses its infrastructure to create, maintain and rotate keys for the customer. But CMEK gives the customer control over the keys via Cloud KMS. KMS used for CMEK is a cloud-hosted service that helps customers to ensure the lifecycle of encryption keys: generate, rotate, disable, revoke. Thus the customer gets more control over protected data, because the customer, for instance, can quickly terminate access to data by disabling or destroying the CMEK key.
Customer-supplied keys (CSEK). Data is encrypted using the keys owned by the customer. These keys are not sent to Google, but are stored and managed outside of the Google Cloud Platform. Key maintenance, rotation and deprecation is the responsibility of the customer.
The downside of the customer-managed or customer-supplied keys is that access to the encrypted data can be lost due to an unintentional deletion or loss of the key.
Cloud HSM. Encryption keys can be securely stored on a fully managed Hardware Security Modules in Google datacenters. Customers are provided with strong guarantees that their keys cannot leave the boundary of certified HSMs, and that their keys cannot be accessed by malicious persons or insiders. Permissions on key resources are managed with IAM.
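As a concrete illustration of the CMEK and CSEK options above, a sketch with gcloud (the key ring, key, disk names, project, and the locally generated key file are placeholders):

    # CMEK: create a key in Cloud KMS and use it to encrypt a new disk.
    gcloud kms keyrings create my-ring --location=us-central1
    gcloud kms keys create my-disk-key --location=us-central1 \
        --keyring=my-ring --purpose=encryption
    # (The Compute Engine service agent needs the CryptoKey Encrypter/Decrypter role on the key.)
    gcloud compute disks create cmek-disk --zone=us-central1-a --size=100GB \
        --kms-key=projects/MY_PROJECT/locations/us-central1/keyRings/my-ring/cryptoKeys/my-disk-key

    # CSEK: supply your own key material; the key file is a JSON file holding a
    # base64-encoded 256-bit key per the CSEK docs, and it is never stored by Google.
    gcloud compute disks create csek-disk --zone=us-central1-a --size=100GB \
        --csek-key-file=./my-csek-keys.json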
Update
The document [1] provided by AMD is a technical preview. It shouldn't be considered an established standard. That is why cloud providers are not required to strictly follow this specification, and the "Confidential VM" offering, being based on a technical preview, is reasonably still in Beta.
For those who are concerned with verifying the platform by following the example implementation from AMD's technical preview, Google provides a validation mechanism:
Google Cloud > Confidential VM > Doc > Validating Confidential VMs using Cloud Monitoring:
Cloud Monitoring and Cloud Logging let you monitor and validate your Confidential VM instances.
Integrity monitoring is a feature of both Shielded VM and Confidential VM that helps you understand and make decisions about the state of your VM instances. You can view integrity reports in Cloud Monitoring and set alerts on integrity failures. You can review the details of integrity monitoring results in Cloud Logging.
Confidential VM generates a unique type of integrity validation event, called a launch attestation report event. Every time an AMD Secure Encrypted Virtualization (SEV)-based Confidential VM boots, a launch attestation report event is generated as part of the integrity validation events for the VM.
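A sketch of pulling those integrity events with gcloud; I'm assuming the Shielded VM integrity log name here, so verify the exact filter in Cloud Logging for your project:

    # List recent integrity validation events (which include the launch
    # attestation report event for SEV-based Confidential VMs).
    gcloud logging read \
      'resource.type="gce_instance" AND logName:"compute.googleapis.com%2Fshielded_vm_integrity"' \
      --limit=10 --format=json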
AMD SEV does not deal with the generation of disk encryption keys. With the CSEK approach, the keys are generated by the customer. The role of AMD SEV is to provide a safe environment with encrypted memory in order to protect the customer-supplied key while it is in use. To safely encrypt memory, AMD SEV relies on 2nd Gen AMD EPYC™ processors: "These keys are generated by the AMD Secure Processor during VM creation and reside solely within it, making them unavailable to Google or any VMs running on the host."
Please see links below for more details:
Google Cloud Blog > Introducing Google Cloud Confidential Computing with Confidential VMs
Google Cloud > Confidential VM > Doc > Confidential VMs and Compute Engine
Google Cloud > Compute Engine > Doc > Encrypt disks with customer-supplied encryption keys
SUSE > AMD Secure Encrypted Virtualization (AMD-SEV) Guide
I'm a bit confused about what purpose the AMI serves.
Is an AMI something that provides a platform with a particular OS and other configuration for accessing the instance?
An Amazon Machine Image (AMI) is basically a copy of the disk that will be attached to a newly-launched instance. It is normally just the boot disk, but an AMI can actually contain multiple disk images.
The AMI is 'copied' to the disk of the newly launched instance. (Not quite accurate, but you can think of it that way.) Changes to the local disk do not impact the AMI.
AWS provides a number of AMIs with pre-loaded operating systems such as Windows, Amazon Linux and Ubuntu. Some of them contain additional software, such as Windows with SQL Server.
There are also community AMIs that are created by somebody other than AWS but shared with all users. For example, a company might load a demo version of their software onto an AMI so customers can simply launch an Amazon EC2 instance and have all the software already loaded and configured.
An AMI is actually just a Snapshot, plus additional metadata. However, a Snapshot can only be restored to an Amazon EBS volume, whereas an AMI can be used to launch an instance. The Amazon EC2 service will then load the disk and attach it to the new instance.
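A sketch of both directions with the AWS CLI (the AMI ID, key pair, and instance ID are placeholders):

    # Launch a new instance from an existing AMI; the image is copied to the
    # instance's boot volume and the AMI itself is left untouched.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --key-name my-key-pair

    # Go the other way: capture the current instance (root volume snapshot
    # plus metadata) as a new AMI you can launch from later.
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "my-configured-server-v1"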
It is pretty much what its name implies - a machine image. There is, for example, a variety of Linux images. You can use an image to create a Linux instance. The AMI is not "used up" during use - it can be used any number of times. There are also images that have an operating system such as Linux plus software - for example a database server, a closed-source server, or pretty much anything you can imagine.
Think of the AMI as something you would use as the source for a copy machine. On the source paper there may be a little or a lot. The copier creates a new page that has whatever was on the source page. And you can make any number of copies.
Access to the instance varies with the AMI. A Linux one usually opens an SSH port, while a Windows one usually uses some sort of remote desktop. The AWS console can guide you a bit, but usually you'll need some documentation to know how to use the instance created from the AMI.
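For example, connecting to an instance launched from a typical Linux AMI (the key file, user name, and address are placeholders; the default user depends on the AMI, e.g. ec2-user for Amazon Linux, ubuntu for Ubuntu):

    # SSH to the instance using the key pair chosen at launch.
    ssh -i ~/.ssh/my-key-pair.pem ec2-user@203.0.113.10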
I am trying to start a virtual machine on Google Cloud. I get an error that there aren't enough resources to fulfill my request.
I have been using Google Cloud for about one week to study and try automated trading systems through Metatrader5 on a Linux server.
I was able to use my machine via VNC server, even this morning, but suddenly all my machines (all in the same location) started to show an error when trying to start:
The zone 'projects/metatrader-227016/zones/southamerica-east1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
I read about moving my instance to another region, but it's not a simple procedure. What is strange is that my VM is really small and lightweight.
Unfortunately this problem appears with Google Cloud Compute once in a while. You have several options:
Wait. The resource will eventually be available.
Resize your instance to a different size. A different instance size might be available.
Change regions.
If you have paid support, open a support ticket with Google Cloud Support.
The smaller instance sizes are cheaper and therefore in higher demand.
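A sketch of the "resize" and "change zone" options above with gcloud (the instance name, zones, and machine type are placeholders; the instance must be stopped before its machine type can change):

    # See which zones and machine types are on offer nearby.
    gcloud compute zones list --filter="region:southamerica-east1"
    gcloud compute machine-types list --zones=southamerica-east1-b

    # Try a different machine type in the same zone.
    gcloud compute instances stop my-vm --zone=southamerica-east1-b
    gcloud compute instances set-machine-type my-vm \
        --zone=southamerica-east1-b --machine-type=e2-small
    gcloud compute instances start my-vm --zone=southamerica-east1-b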
To move an instance to a different region:
Login to the Google Cloud Console. Go to Compute Engine -> Disks.
Select your disk for the instance you plan to move.
At the top of the screen click CREATE IMAGE. Give the image a name. For Family enter anything you want but remember it.
Once the image creation completes, create a new Compute Engine VM in the region that you want. When creating the new VM, under Boot disk, click Change. You will find your image under the tab Custom images.
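The same move can be scripted with gcloud (the disk, image, family, instance, and zone names are placeholders):

    # 1. Create an image from the existing boot disk.
    #    (Stop the VM first, or add --force, so the disk is in a consistent state.)
    gcloud compute images create my-vm-image \
        --source-disk=my-vm-disk \
        --source-disk-zone=southamerica-east1-b \
        --family=my-vm-family

    # 2. Launch a new VM from that image in a zone of another region.
    gcloud compute instances create my-vm-moved \
        --zone=us-east1-b \
        --machine-type=e2-micro \
        --image=my-vm-image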