Unable to delete virtual machine image and storage

I was trying to delete a virtual machine image and got an "Internal Server Error" message. Then I opened the image's storage account, and the page kept redirecting me to the following:
You are seeing this message because we cannot determine if your account must comply with the requirements of the Federal Information Security Management Act (FISMA). You are accessing an information system that may contain U.S. Government data. System usage may be monitored, recorded, and subject to audit. Unauthorized use of the system is prohibited and may be subject to criminal and civil penalties. Use of the system indicates consent to monitoring and recording. Administrative personnel remotely accessing the Microsoft Azure environment:
1) Shall maintain their remote computer in a secure manner, in accordance with organizational security policies and procedures as defined in Microsoft Remote Connectivity Security Policies.
2) Shall only access the Microsoft Azure environment in execution of operational, deployment and support responsibilities using only administrative applications or tools directly related to performing these responsibilities, and
3) Shall not store, transfer into, or process in the Microsoft Azure environment data exceeding a FIPS 199 Moderate security categorization (FISMA Controlled Unclassified Information).
(Screenshot of this error omitted.)
First, there is no government data in my image; the storage account was created automatically by the wizard when I created the image.
I have tried again over several days, and it never lets me delete it.
In addition, it keeps draining my Azure credit. I am not sure what I should do to delete this image and its storage.
Please give me some advice. Thanks.
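Update: as a possible workaround, I may try deleting from the SDK instead of the portal. A minimal sketch with the Azure SDK for Python (all names are placeholders; if this is a classic/ASM-era image, the tooling differs):

```python
# Hypothetical workaround: delete the image and its storage account via the
# Azure SDK for Python instead of the portal. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id)
storage = StorageManagementClient(credential, subscription_id)

# Delete the image first (long-running operation; wait for completion).
compute.images.begin_delete(resource_group, "<image-name>").result()

# Then delete the storage account the wizard created for it.
storage.storage_accounts.delete(resource_group, "<storage-account-name>")
```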

Related

vMotion vs Active State Migration

I am in the process of evaluating vendors for upgrading our existing VMware environment. In a conversation with a provider, he told me that vMotion was not possible without a separate SAN appliance or vSAN (the latter requiring 6+ hosts and expensive licensing).
Under the impression that our 3-host cluster already had vMotion licensing and capability, I tried to "vMotion" a running Windows VM using the vSphere client. I was able to "migrate" both the VM and its disk to a new host and datastore respectively, but nowhere did I see the term "vMotion" in the Recent Tasks log at the bottom of the UI. What I did see there was "Migrating Virtual Machine - Active State" and I was able to maintain an RDC connection and interact with the VM all through the migration process.
My question: Am I misunderstanding the term vMotion? Is it different from migration in an "active state"?
Also, assuming vMotion is an unattended convenience and seeing as we already have an image-level backup solution for our VMs and my company is okay with manually restoring those VMs from a backup (as opposed to the convenience of an "instant," unattended, back-end restoration), is vMotion worth the investment in a dedicated SAN server if we're already capable of "live migration" on demand?
And don't worry about selling me on all the benefits of a SAN. Believe me, I'm already with you on that. The people over here who sign the checks just have different priorities is all.
TWIMC: We're in a 3-host cluster, ESXi 6.0 on all. Enterprise Plus licensing.
vMotion is VMware's branding for the ability to migrate powered-on (running) virtual machines from one ESX/ESXi host to another. The vSphere UI does not refer to the actual operation as vMotion, except in a few places where the branding matters, e.g. when configuring a feature called Enhanced vMotion Compatibility (EVC) or when enabling vMotion traffic on a specific VMkernel virtual network adapter.
On the point about vSAN / a physical SAN being mandatory: you already confirmed that you can migrate the VMDKs of a live VM, so it is not strictly necessary. The official docs have a section about the limitations of simultaneous compute + storage migration: https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcenterhost.doc/GUID-9F1D4A3B-3392-46A3-8720-73CBFA000A3C.html.
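For what it's worth, the same shared-nothing live migration can be driven through the vSphere API; here is a minimal pyVmomi sketch, assuming placeholder vCenter credentials and object names:

```python
# Illustrative sketch of a combined compute + storage live migration
# ("shared-nothing vMotion") via pyVmomi. Credentials and names are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "my-running-vm")
target_host = find_by_name(vim.HostSystem, "esxi-02.example.com")
target_ds = find_by_name(vim.Datastore, "datastore2")

# Move both the running VM and its disks in one operation.
spec = vim.vm.RelocateSpec(host=target_host, datastore=target_ds,
                           pool=target_host.parent.resourcePool)
WaitForTask(vm.RelocateVM_Task(spec=spec,
            priority=vim.VirtualMachine.MovePriority.defaultPriority))
Disconnect(si)
```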
I'd bet that migration is faster when only the memory image of a powered-on VM has to move; this matters especially in automated DRS setups, where VMs are migrated automatically based on a pre-configured policy. Users on Reddit seem to have tested this: https://www.reddit.com/r/vmware/comments/matict/vmware_drs_cluster_without_shared_storage_das/gru579m/?utm_source=reddit&utm_medium=web2x&context=3.
Note that I am a VMware employee (albeit not in sales), and you'd probably want a different, unbiased opinion about the product's merits ;)

Is the content on disk in cloud (Azure, AWS) zeroized prior to re-releasing to other users?

I wanted to know whether cloud platforms such as Azure and Amazon zeroize the content of the hard disk whenever an 'instance' is 'deleted', before making it available to other users.
I tried the 'dd' command on an Amazon Lightsail instance, and it appears that the raw data is indeed zeroized. However, I was not sure whether that was by chance (I just tried a few random offsets and lengths) or whether they actually take care to do it.
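For reference, here is roughly what my spot-check did, rewritten as a Python sketch (the device path and probe count are arbitrary; run as root against a raw, unmounted device):

```python
# Spot-check a raw block device for nonzero bytes at random offsets,
# similar to sampling it with 'dd'. Device path is illustrative.
import os
import random

DEV = "/dev/xvdf"        # hypothetical freshly attached volume
PROBES = 16              # number of random offsets to sample
CHUNK = 1024 * 1024      # bytes to read per probe

fd = os.open(DEV, os.O_RDONLY)
try:
    disk_size = os.lseek(fd, 0, os.SEEK_END)
    for _ in range(PROBES):
        offset = random.randrange(0, max(disk_size - CHUNK, 1))
        os.lseek(fd, offset, os.SEEK_SET)
        data = os.read(fd, CHUNK)
        nonzero = sum(1 for b in data if b != 0)
        print(f"offset {offset}: {nonzero} nonzero bytes of {len(data)}")
finally:
    os.close(fd)
```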
The concern is, if I leave passwords in configuration files, then someone who comes along would be able to read them (theoretically). Same goes for data in a database.
Generally, the solution Azure uses to address this concern is storage encryption.
Your data is encrypted by default at the platform level with a key specific to your subscription; when the data or resource is removed, whether or not the storage is zeroed, it is effectively inaccessible to a resource deployed on the same storage in another subscription.

Stackdriver Logging Client Libraries - What happens during Google Downtime?

If you embed the Stackdriver client library in your application and the Google Stackdriver API has downtime (Google documentation indicates 99.95% availability, i.e. about 21.92 minutes of downtime per month):
My question is: What will happen to my application during the downtime? Will logging info build up in memory? Will it cause application errors or will it discard the log data and continue on?
Logging API downtimes can have different root causes and consequences. Google's system engineers have mechanisms in place to track them and take mitigation actions, so that downtime and its consequences are minimal, but Google cannot guarantee prevention of data loss in every outage related to the Logging API.
Hopefully your application and pipeline can withstand the expected downtime of up to about 22 minutes a month (99.95% SLA), as per the internal SLOs and SLAs of GCP.
All three scenarios you listed are plausible. During such a period, your application may receive 500 responses when sending logs, so it has to be able to deal with this kind of failure.
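For illustration, here is a minimal sketch of that kind of defensive handling, assuming the google-cloud-logging Python client and a hypothetical local spool file for later replay (this is not an official resilience mechanism):

```python
# Guard a synchronous log write so an API outage spools the entry locally
# instead of raising. The spool path and log name are hypothetical.
import json
import time
from google.cloud import logging as gcl
from google.api_core import exceptions

client = gcl.Client()
logger = client.logger("app-log")

def log_safely(payload, spool_path="/var/tmp/log-spool.jsonl"):
    try:
        logger.log_struct(payload)  # synchronous write to the Logging API
    except exceptions.GoogleAPICallError:
        # 5xx or similar during an outage: keep the entry for later replay.
        with open(spool_path, "a") as f:
            f.write(json.dumps({"ts": time.time(), **payload}) + "\n")

log_safely({"event": "user_login", "status": "ok"})
```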
If the logging data manages to reach Google's platform but an outage prevents it from being accessible, then Google's team will do their best to release backlogs, repopulate data, etc. They will post a general notice on https://status.cloud.google.com/.
If the issue is caused by the logging agent not sending data to the platform, then the logging data may not be retrievable. It could still be an infrastructure outage in one of the GCP products, or it could be something other than an outage, such as your application or its underlying host running out of resources, or the logging agent being corrupted; those cases are not covered by the Stackdriver SLA [1].
If the pipeline that ingests data from the Logging API is backlogged, that can itself cause an outage, but the GCP team will do their best to make the data accessible after the outage ends.
If you suspect the Logging API is malfunctioning, please contact support, create a new incident in the issue tracker [2], or inspect the open issues [3], where Google's product team provides live updates. Links below:
[1] SLA exclusions: https://cloud.google.com/stackdriver/sla#sla_exclusions
[2] Create a new incident: https://issuetracker.google.com/issues/new?component=187203&template=0
[3] Open issues: https://issuetracker.google.com/savedsearches/559764

How to automatically scale a virtual machine vertically based on memory metrics

A virtual machine on Azure can monitor the guest operating system's data, such as CPU and memory usage, once guest OS diagnostic data collection is enabled. Now I want to know how to use memory usage to automatically scale the virtual machine vertically.
First, enable guest OS diagnostics data collection so that more disk, CPU, and memory data can be collected. If it was not checked during creation, it can be configured under Diagnostics settings in the VM's Monitoring panel.
Next, click Alert rules under Monitoring to create an alert rule. You can choose your own metric; since we are scaling based on memory here, choose the percentage of memory used.
Select a threshold with the condition "greater than". This means that when memory usage exceeds the threshold, the alert fires and executes the action. The period is the time range over which the metric data is aggregated.
Finally, choose the action to take; Azure provides many built-in runbooks for this. Since we need to scale up here, select "Scale up VM", then select one of your Automation accounts or create a new one. A rough sketch of what that runbook effectively does is shown below.
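Under the hood, "Scale up VM" boils down to changing the VM's size. A rough, hypothetical equivalent using the Azure SDK for Python (resource names and the target size are placeholders; note that resizing restarts the VM):

```python
# Hypothetical sketch of a vertical scale-up: change the VM size and push
# the update. Names and the target size are placeholders; resizing reboots
# the VM.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm = compute.virtual_machines.get("<resource-group>", "<vm-name>")
vm.hardware_profile.vm_size = "Standard_DS2_v2"  # hypothetical next size up
compute.virtual_machines.begin_create_or_update(
    "<resource-group>", "<vm-name>", vm).result()
```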
The above is how to automatically scale up the virtual machine by monitoring memory usage. For more information, please refer to:
https://docs.azure.cn/zh-cn/virtual-machines/windows/monitor
https://docs.azure.cn/zh-cn/virtual-machines/windows/tutorial-monitoring
Besides, if you are a Microsoft partner, there is a free channel for Azure queries: aka.ms/devchat. They support online chat and email.

Google Vision privacy: image deletion

I'm planning to use Google Vision for document recognition.
For example, I will upload a driver's license; I should get all the text data back and be able to verify that it is a driver's license and not the cover of a magazine.
The question is: does Google Vision have an API for deleting uploaded images?
Does Google Vision fit my case if I have some security requirements?
If you use Google's Mobile Vision API, text and face detection are done on-device rather than the image being uploaded:
https://developers.google.com/vision/
For those wondering about the same problem, you can check their data usage policy here:
https://cloud.google.com/vision/docs/data-usage
My reading of the Google APIs Terms of Service indicates that you will not be able to delete the images.
5b. Submission of Content
Some of our APIs allow the submission of content. Google does not acquire any ownership of any intellectual property rights in the content that you submit to our APIs through your API Client, except as expressly provided in the Terms. For the sole purpose of enabling Google to provide, secure, and improve the APIs (and the related service(s)) and only in accordance with the applicable Google privacy policies, you give Google a perpetual, irrevocable, worldwide, sublicensable, royalty-free, and non-exclusive license to Use content submitted, posted, or displayed to or from the APIs through your API Client. "Use" means use, host, store, modify, communicate, and publish. Before you submit content to our APIs through your API Client, you will ensure that you have the necessary rights (including the necessary rights from your end users) to grant us the license.
Being able to "publish" your driver's licenses is probably not something you want.
The above terms are also completely at odds with the GDPR, under which the user has the right to delete and modify their data.
7a. Google Privacy Policies
By using our APIs, Google may use submitted information in accordance with our privacy policies.
Note that those privacy policies are the ones that govern normal consumer users, not Google Cloud specifically. In plain terms (and IANAL), it means Google assumes that, for whatever content you give them, your end user has agreed to anything Google does for a user who directly uses, say, Google Docs.
That's another indication that it's impossible to use their APIs and be GDPR compliant.
This should solve your issue:
tl;dr "The stored image is typically deleted in a few hours."
Will the image I send to the Cloud Vision API, the results or other information about the request itself, be stored on Google servers? If so, how long and where is the information kept, and do I have access to it?
When you send an image to Cloud Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. The stored image is typically deleted in a few hours. Google also temporarily logs some metadata about your Vision API requests (such as the time the request was received and the size of the request) to improve our service and combat abuse.
Some of the other answers are a bit outdated, so I'm adding my own. The data usage FAQ states:
When you send an image to Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. For asynchronous offline batch operations, the stored image is typically deleted right after the processing is done, with a failsafe Time to live (TTL) of a few hours. For online (immediate response) operations, the image data is processed in memory and not persisted to disk.
If you use the synchronous Vision API methods, the image is never persisted in the Vision API, so there is nothing to delete. If you use the asynchronous Vision API methods, the image is only persisted during the operation and is deleted immediately after the operation completes, with a fail-safe TTL of a few hours. Either way there is nothing for the user to delete; the Vision API takes care of deleting the data for you.
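For completeness, here is a minimal sketch of a synchronous (online) text-detection call with the google-cloud-vision Python client, which per the FAQ above is processed in memory and not persisted (the file name is a placeholder):

```python
# Synchronous Vision API call: the image is processed in memory and, per
# the data usage FAQ, not persisted to disk. File name is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("drivers_license.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)  # online, immediate response
print(response.full_text_annotation.text)
```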
A related question that sometimes comes up is about enforcing processing to take place in a particular region. You can see the answer here: Google Vision: How to enforce processing in EU
It depends on your security requirements and the exact privacy law one needs to abide by. In my case it was HIPAA; one needs to jump through a lot of hoops, but according to https://cloud.google.com/security/compliance/hipaa, the Google Cloud Vision API is a HIPAA-covered product.