I am trying to share a volume (disk) between multiple VMs. Is there any VMware API or SDK solution to do this? I tried the VI Java SDK, but I only found basic operations such as VM creation, deletion, and cloning.
Thanks in advance!
From the API/SDK side, check out the ReconfigVM_Task method. It's complex to use, but it lets you reconfigure each virtual disk separately.
In the VirtualMachineConfigSpec object, you provide a deviceChange entry representing the disk you want to add or modify. You then add the same disk (the same VMDK filename in the case of a VMDK, or the same physical device in the case of an RDM) to the other VM.
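To make the deviceChange idea concrete, here is a rough sketch using pyVmomi (VMware's Python SDK); the same objects exist in VI Java under com.vmware.vim25, so the structure carries over. The vCenter host, credentials, VM name, controller key, unit number, and VMDK path are placeholders, not values from your environment.

```python
# Rough sketch: attach an existing VMDK (already used by another VM) to a second VM
# via ReconfigVM_Task. All names and keys below are assumptions for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# Find the second VM that should see the disk already attached to the first VM.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "second-vm")

# Describe the existing VMDK as a device to add (same backing file as on the first VM).
disk = vim.vm.device.VirtualDisk()
backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.fileName = "[datastore1] shared/shared-disk.vmdk"
backing.diskMode = "independent_persistent"
# backing.sharing = "sharingMultiWriter"        # may be needed for true multi-writer access
disk.backing = backing
disk.controllerKey = 1000                       # key of the VM's existing SCSI controller
disk.unitNumber = 1                             # a free unit on that controller

spec_change = vim.vm.device.VirtualDeviceSpec()
spec_change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
spec_change.device = disk

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec_change]))
print("reconfigure task:", task.info.key)
Disconnect(si)
```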
If you've never used the vSphere SDK then it can be a bit daunting. Post specific follow up questions (with your code) if needed.
Is using PowerCLI out of the question? It has a New-HardDisk cmdlet.
I am a data analyst. My company is moving all data science work to a cloud provider (it could be Azure, GCP, or AWS). All the data science tools, such as Jupyter Notebook, will be installed in the cloud environment (there will be no local installations of Python or Jupyter Notebook on my laptop).
For most of my work, I will be reading/ingesting relational database tables directly from an on-premises database. Most of my data analysis work does not require GPU instances for data processing. Sometimes I also do simple research or exploratory analysis, such as data cleaning in Jupyter notebooks, again without any need for GPU instances.
I would like to find out whether it is possible to do such activities without incurring pay-per-use costs or unnecessary expenses for my company on their cloud data science platform, given that none of my tasks use GPUs. Please advise, thank you.
EDIT: It is difficult to work and develop locally with Jupyter on my company PC because I do not have permission to install Python packages (this usually has to be requested for approval, which is very painful and takes a very long time).
Jupyter Notebook can be installed in the cloud, but also on-premises or on your workstation. Either way you pay for the resources, whether they are in the cloud, on-premises, or on your workstation.
Of course, if you add large disks, GPUs, CPUs, or memory, it costs more! The issue isn't really the cost; it is where you want to run your notebook.
There is one (bad) alternative: with Colab you get a free Jupyter Notebook instance. But, AFAIK, it is not private; the instances are public, and if you work with company data you risk data leakage. (Not certain, to be validated, but it's not a recommended solution in any case.)
EDIT 1
Considering your latest comment, I am wondering whether you really need a Jupyter notebook to run your code.
Indeed, Jupyter is simply an IDE: you could write your script locally, even the one that needs a GPU, and run it against production data on a Compute Engine VM that you provision only for that process. At the end of the script, destroy the VM. No Jupyter notebook environment is needed for that, no?
EDIT 2
Thanks to your note, I understand that developing locally isn't an option. In this case, I recommend using a managed Jupyter Notebook solution. You can provision such a VM on Google Cloud if you want, and you can choose different machine types, with or without a GPU.
The principle is the same: when you stop working with your instance, stop it. You will only pay for the storage (the disk) while the instance is down.
The development principle can be the same: use a small CPU/GPU for development, and when you have to process big data, run your script on a powerful VM. Because you pay only while the VM is running, you can optimize cost this way.
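As a minimal sketch of that principle, stopping and starting the VM can even be scripted with the Compute API via google-api-python-client; the project, zone, and instance names below are placeholders, and authentication is assumed to come from Application Default Credentials.

```python
# Minimal sketch: stop the notebook VM when you finish working so you only pay for
# its disk, and start it again later. Names are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Stopping the instance halts CPU/GPU billing; the boot disk (and its cost) remains.
compute.instances().stop(
    project="my-project", zone="us-central1-a", instance="notebook-vm"
).execute()

# Start it again the next time you need to work.
compute.instances().start(
    project="my-project", zone="us-central1-a", instance="notebook-vm"
).execute()
```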
In addition to Guillaume's answer, if you want to keep track of, or plan ahead for, the costs that will occur while using instances, you can use Google Cloud Platform's pricing calculator:
https://cloud.google.com/products/calculator?hl=en
With this, you can choose the product you're interested in and the components you want in your setup (e.g. how much RAM, how much storage capacity, which CPU) in case you choose to use GCP Compute Engine, select your location, and check whether that location's pricing suits your company's budget.
If you want to have more information regarding Google Cloud Platform pricing, you can check out this link:
https://cloud.google.com/compute/all-pricing#compute-optimized_machine_types
I have developed a Django API which accepts images from a live-feed camera as base64 strings in the request. In the API, each image is converted into a NumPy array and passed to a machine learning model, i.e. object detection using the TensorFlow Object Detection API. The response is simple text listing the detected objects.
I need a GPU-based cloud instance where I can deploy this application for fast processing to achieve real-time results. I have searched a lot but found no such resource. I believe Google Cloud instances can be connected to a live API, but I am not sure exactly how.
Thanks
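For reference, the base64-to-NumPy conversion described in the question typically looks something like the sketch below; the use of Pillow and the function name are assumptions for illustration, not the asker's actual code.

```python
# Hypothetical sketch of the decode step described above: base64 string -> NumPy array.
import base64
import io

import numpy as np
from PIL import Image

def decode_image(b64_string: str) -> np.ndarray:
    """Decode a base64-encoded image into an RGB NumPy array."""
    raw = base64.b64decode(b64_string)
    img = Image.open(io.BytesIO(raw)).convert("RGB")
    return np.asarray(img)  # shape (height, width, 3), dtype uint8
```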
I assume that you're using a GPU locally, or wherever your Django application is hosted.
The first thing is to make sure that you are using tensorflow-gpu and that all the necessary CUDA setup is done.
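A quick way to verify this is to ask TensorFlow which GPUs it can see; if the list is empty, the CUDA/driver setup is not complete (this assumes TF 2.x; with the older separate tensorflow-gpu package on TF 1.x, tf.test.is_gpu_available() serves the same purpose).

```python
# Check that TensorFlow can actually see the GPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```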
You can start your GPU instance easily on Google Cloud Platform (GCP). There are multiple ways to do this.
Quick option
Search for notebooks and start a new instance with the required GPU and RAM.
Instead of the notebook instance, you can set up the instance separately if you need a specific OS and more flexibility in choosing the machine.
To access the instance with SSH, simply add your SSH public key to the Metadata section, which can be seen when you open the instance details.
Set up Django as you would on any server. To test it, simply run the development server on host 0.0.0.0 and your preferred port (e.g. python manage.py runserver 0.0.0.0:8000).
You can access the APIs via the external IP of the machine, which can be found on the instance details page.
Some suggestions
While the first option is quick and dirty, it's not recommended to use that in production.
It is better to use a deployment service such as TensorFlow Serving along with Kubeflow.
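For illustration, a client-side call to a TensorFlow Serving REST endpoint looks roughly like this; the host, port, and model name ("detector") are placeholders, not part of any existing deployment.

```python
# Hypothetical request to a TensorFlow Serving REST predict endpoint.
import numpy as np
import requests

image = np.zeros((300, 300, 3), dtype=np.uint8)  # stand-in for a decoded camera frame

resp = requests.post(
    "http://tf-serving-host:8501/v1/models/detector:predict",
    json={"instances": [image.tolist()]},
)
print(resp.json())  # detections returned by the served model
```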
If you think that you're handling the inference properly yourself, then make sure that you load-balance the server properly. Use NGINX or any other good web server along with gunicorn/uWSGI.
You can use Redis for queue management. When someone calls the API, it is not guaranteed that a GPU is free for the inference. It is fine to skip this when you have very few hits on the API per second, but when scaling up, think of 50 requests per second, which a single GPU can't handle at a time; then a queue system helps.
All requests should go to Redis first, and the GPU worker takes the jobs to be done from the queue. If required, you can always scale the GPUs.
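A rough, hypothetical sketch of that pattern with redis-py follows; the queue name, host, and job format are assumptions, not a prescribed design.

```python
# Sketch of the queue pattern described above: the API enqueues jobs into Redis,
# and a GPU worker consumes them as it becomes free.
import json

import redis

r = redis.Redis(host="localhost", port=6379)

# API side: enqueue the inference job instead of calling the model directly.
def enqueue_job(image_b64: str) -> None:
    r.rpush("inference-jobs", json.dumps({"image": image_b64}))

# GPU worker side: block until a job arrives, then run the detector on it.
def worker_loop(run_detector) -> None:
    while True:
        _, payload = r.blpop("inference-jobs")   # blocks until a job is available
        job = json.loads(payload)
        run_detector(job["image"])
```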
Google Cloud actually offers Cloud GPUs. If you are looking to perform more demanding computations with applications that require real-time capabilities, I would suggest you look into the following link for more information.
https://cloud.google.com/gpu/
Compute Engine also provides GPUs that can be added to your virtual machine instances. Use GPUs to accelerate specific workloads on your instances such as Machine Learning and data processing.
https://cloud.google.com/compute/docs/gpus/
However, if your application requires a lot of resources, you'll need to make sure you have enough GPU quota available in your project, and pick a zone where GPUs are available. If you need much more computing power, you can submit a request for a quota increase. https://cloud.google.com/compute/docs/gpus/add-gpus#create-new-gpu-instance
Since you would be using the TensorFlow API for your application on ML Engine, I would advise you to take a look at the link below. It provides instructions for creating a Deep Learning VM instance with TensorFlow and other tools pre-installed.
https://cloud.google.com/ai-platform/deep-learning-vm/docs/tensorflow_start_instance
Good afternoon, colleagues!
I am faced with a new task: I need to combine two types of hypervisors under one management console. I want to create, delete, clone, and perform other operations on VMs from this common management console. The console must be free or open source.
Our hypervisors are VMware and Proxmox.
Maybe someone has faced such a challenge before; I would appreciate your advice.
Thanks for your answers!
Sounds like what you need is to add a cloud manager on top of your hypervisors. OpenStack was created with some of this in mind.
With OpenStack you can manage varied hypervisor technologies in one place. OpenStack is a bit of a beast to set up, though.
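For illustration, once OpenStack is in place you can drive VM lifecycle operations through one API regardless of the backing hypervisor, for example with the openstacksdk library; the cloud name, image, flavor, and network below are placeholders that would come from your own clouds.yaml and deployment.

```python
# Hypothetical sketch: create, list, and delete VMs through OpenStack's unified API.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a VM (which hypervisor backs it is OpenStack's concern, not the caller's).
server = conn.compute.create_server(
    name="demo-vm",
    image_id=conn.compute.find_image("centos-7").id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("private").id}],
)
server = conn.compute.wait_for_server(server)

# List and delete VMs from the same API.
for s in conn.compute.servers():
    print(s.name, s.status)
conn.compute.delete_server(server)
```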
You might want to start with this video which goes over the advantages of a VMWare + OpenStack deployment.
I am doing some research on VMware VSAN because we are looking at our options for storage, and I am getting mixed answers when I Google. We are building a new host in our new office and starting fresh. In our old setup we had an HP server host with a few drives; ESXi connected to a SAN, and we used a combination of both for storing VMs and files. We did not use VSAN, but with the new setup it is definitely an option. We are looking at an HP ProLiant DL380 Gen9 server that is capable of holding several drives. If I loaded this up with large drives and set up VSAN, would that be a good option for a file storage server? This host will also run several other VMs as well.
So, basically you want to do a hardware refresh and reconfigure the system architecture. Correct me if I'm wrong.
If so, then IMHO the best way is to go with one of the hyper-converged solutions. I see three options here:
SimpliVity (https://www.simplivity.com/). It's really good, but it was too expensive for one of the projects I had. Also, its performance is mostly bottlenecked by the proprietary component (an FPGA), which in most cases means a lack of flexibility.
VMware VSAN (I'm sure you don't need a link for that :) ). According to my friend who works at VMware, it is usually considered for big deployments, so if that is your case, go for it.
StarWind Hyper-Converged Appliance (https://www.starwindsoftware.com/starwind-hyperconverged-appliance). That one is SMB-oriented. It combines commodity Dell hardware with a bundle of software. Since everything is commodity, it is easy to handle.
I hope that helps.
P.S. I'm not sure if this is the best place to ask this question; possibly Server Fault would be a better place.
Fault-tolerant file storage is possible with VMware Virtual SAN, but it's kind of expensive. Either way, VMware does solve storage redundancy for running VMs, but it does not solve the issue of exporting the SMB 3.0 or NFS v4.1 mount points you'll need; you have to use dedicated VMs for that. FreeBSD / Linux for NFS (or SMB via Samba) and Windows / Hyper-V Server for SMB 3.0 will do the trick!
There was a similar discussion on Reddit some time ago with lots of good thoughts:
https://www.reddit.com/r/vmware/comments/4223ol/virtual_san_for_file_servers/
I want to learn Apache Nutch, and I have an account at Amazon Web Services (AWS). I have three machines at AWS: one is micro-sized, another is small, and the third is medium. I want to start with the small one and install Nutch, Hadoop, and HBase on it. My machines run CentOS 6.
There is a related question here, but it is not what I am asking: Nutch 2.1 (HBase, SOLR) with Amazon Web Services
I want to learn which approach is better. I want to install everything on the small machine and then add the micro one. On the other hand, I don't have any experience with Nutch; maybe I should work locally first, or is there a possibility of using both my machine and AWS (does it cost more, i.e. is copying data out of AWS charged)?
When I want to implement a wrapper for Nutch, should I install it locally (to have the source code) and run it on AWS?
Any ideas?
It sounds like you're facing a steep learning curve.
For one, you admit that you're just learning Nutch, so I would recommend you install CentOS on a physical box at home and play around there.
On the other hand, you are pondering the use of a micro AWS instance, which will not be useful in running a CPU/memory intensive application like Nutch. Read about AWS micro instances here.
My suggestion is to stick to a single physical box solution at home and work on scripting your solution before moving on to an AWS instance.