I never had this issue until recently, but now, when creating a VM, the option to add a GPU is always greyed out and not clickable.
This is what it looks like. What is causing this?
It is not because there are no GPUs in my region; I checked a lot of them. I also don't think it's an issue with my account, since I CAN create GPU instances through the Marketplace.
It is probably because of the machine type you have selected. You can only attach GPUs to general-purpose N1 machine types; GPUs are not supported for other machine types. Feel free to check this documentation for reference.
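If you're scripting the VM creation rather than using the Console, the same constraint applies: you have to pick an N1 machine type before a GPU can be attached. Here is a rough sketch using the google-cloud-compute Python client; the project, zone, accelerator type, and instance name are placeholders, and the boot disk and network setup are omitted.

```python
# Rough sketch, not a complete create-instance call: it only shows the fields
# relevant to GPU attachment. Project, zone, names and counts are placeholders.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"

instance = compute_v1.Instance()
instance.name = "gpu-test"
# GPUs can only be attached to N1 machine types, so pick one explicitly.
instance.machine_type = f"zones/{zone}/machineTypes/n1-standard-4"
instance.guest_accelerators = [
    compute_v1.AcceleratorConfig(
        accelerator_count=1,
        accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
    )
]
# GPU instances must terminate (not live-migrate) on host maintenance.
instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

# A boot disk and a network interface still need to be added to `instance`
# before passing it to compute_v1.InstancesClient().insert(...).
```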
I'm planning to subscribe to Google Colab Pro to get more GPU memory for some research. But I was wondering: if I exhaust my 100 compute units on the first day due to continuous GPU usage, can I still use a GPU in Google Colab?
If anyone knows, or has already tried to use a GPU after reaching 0 compute units, is it still possible to use a GPU? Please kindly share your experience.
From what I understood reading here: Google colab subscription
What happens if I don't have computing units? All users can access Colab resources based on availability. If you do not have computing units, you can only use Colab resources reserved for non-paying users.
So, if you finish your compute units you'll be downgraded to the free version of Colab. In that case, you can still be assigned a GPU as a free user, but with the usual free-tier limitations.
Another useful link for understanding the usage of compute units a little better is this one: What exactly is a compute unit?
I'll be trying the paid version in a few days for at least a month to see if it is really worth it.
Hope it helps!
I'm trying to use Google Cloud ML in GPU mode.
When I train with the BASIC_GPU tier, I get many error logs.
But training still works.
I am not sure whether training actually ran well in GPU mode.
This is the error log history:
[screenshot: error logs]
This is part of the device placement log (config.log_device_placement):
[screenshot: device placement log]
I also tried training with the complex_model_m_gpu tier.
I get similar error logs to BASIC_GPU.
But I can't see /gpu:1, /gpu:2, or /gpu:3 when I print the device placement log; I can only see /gpu:0.
The important thing is that BASIC_GPU and complex_model_m_gpu take the same time to run.
I wonder whether training actually ran in GPU mode or whether something is wrong.
Sorry for my English. If anyone knows the problem, please help me.
Thank you.
Please refer to TensorFlow's performance guide for optimizing for GPUs for tips on how to make the most of your GPUs.
A couple of things to note:
You can turn on logging of device placement to see which ops get assigned to which devices (see the sketch below). This is a great way to check that ops are actually assigned to GPUs and that you are using all GPUs when you have multiple GPUs.
TensorBoard also provides information about device placement, so that is another way to check that you are using all GPUs.
When using multiple GPUs, you need to make sure you are assigning ops to all of them; the TensorFlow guide provides more information on this topic.
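For example, here is a minimal TensorFlow 1.x-style sketch (matching the Cloud ML Engine runtime of that era) that pins work onto each of the four GPUs a complex_model_m_gpu machine provides and turns on device placement logging; the matmul workload is only a placeholder for your per-GPU model code.

```python
# Minimal sketch, TensorFlow 1.x API. The matmul towers stand in for real
# per-GPU model-building code; 4 matches the GPU count of complex_model_m_gpu.
import tensorflow as tf

towers = []
for i in range(4):
    # Explicitly pin this tower's ops to one GPU so all GPUs get work.
    with tf.device('/gpu:%d' % i):
        a = tf.random_normal([1024, 1024])
        b = tf.random_normal([1024, 1024])
        towers.append(tf.matmul(a, b))

total = tf.add_n(towers)

# log_device_placement prints the device each op was assigned to, so you can
# verify ops really landed on /gpu:0 .. /gpu:3 and not on the CPU.
config = tf.ConfigProto(log_device_placement=True,
                        allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(total))
```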
I am trying to set the table_cache option; however, I cannot find table_cache in the RDS parameters. Where can I change this option?
Thank you.
The table_cache system variable was deprecated and renamed table_open_cache back in MySQL 5.1, and was still called table_open_cache in 5.6.
It's in the RDS parameter group.
However, it's very rare that this is an appropriate value to tweak. It has long been known to scale negatively: the larger you make it in pursuit of an "optimum" configuration, the worse the server can perform.
If you're using a tuning script, the odds are extremely high that you're operating on bad advice if changing that value has been recommended. Tuning scripts in general are notorious for their well-intentioned, but ill-conceived, bad advice.
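If you do decide to change it anyway, here is a rough sketch of setting it through a custom parameter group with boto3; the group name and the value are placeholders, and the parameter must live in a custom group (not the default one) that your instance actually uses.

```python
# Rough sketch using boto3; "my-custom-mysql56" and the value 4000 are
# placeholders. The change only affects instances using this parameter group.
import boto3

rds = boto3.client("rds")
rds.modify_db_parameter_group(
    DBParameterGroupName="my-custom-mysql56",
    Parameters=[{
        "ParameterName": "table_open_cache",
        "ParameterValue": "4000",
        "ApplyMethod": "immediate",  # table_open_cache is dynamic in MySQL 5.6
    }],
)
```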
I have a program that is using a configuration file.
I would like to tie the configuration file to the PC, so that copying the file to another PC with the same configuration won't work.
I know that the Windows Activation mechanism monitors the hardware to detect changes and that it can tolerate some minor changes to the hardware.
Is there any library that can help me do that?
My other option is to use WMI to get the hardware configuration and program my own tolerance mechanism.
Thanks a lot,
Nicolas
Microsoft Software Licensing and Protection Services has functionality to bind a license to hardware. It might be worth looking into. Here's a blog posting that might be of interest to you as well.
If you wish to restrict the use of data to a particular PC you'll have to implement this yourself, or find a third-party solution that can do this. There are no general Windows API's that offer this functionality.
You'll need to define what you currently call a "machine."
If I replace the CPU, memory, and hard drive, is it still the same computer? Network adaptor, video card?
What defines a machine?
There are many, many licensing libraries out there to do this for you, but almost all are for pay (because, ostensibly, you'd only ever want to protect commercial software this way). Check out what RSA, Verisign, and even Microsoft have to offer. The Windows API does not expose this, ostensibly to prevent hacking.
Alternately, do it yourself. It's not hard to do, the difficult part is defining what you believe a machine to be.
If you decide to track 5 things (HD, network card, video card, motherboard, memory sticks) and you allow 3 changes before requiring a new license, then users can duplicate the hard drive, take two of those components out, put them in a new machine, replace them with new parts in the old machine, and run your program on two separate PCs.
So it does require some thought.
-Adam
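To make the "track N components, allow K changes" idea above concrete, here is a rough Python sketch. It assumes the third-party wmi package on Windows; the particular WMI classes queried and the tolerance threshold are illustrative choices, not a recommendation.

```python
# Rough sketch, Windows-only, assuming `pip install wmi`. The five tracked
# components and the threshold of 3 matches are purely illustrative.
import hashlib
import json
import wmi

def component_ids():
    c = wmi.WMI()
    ids = {
        "cpu":   [p.ProcessorId for p in c.Win32_Processor()],
        "board": [b.SerialNumber for b in c.Win32_BaseBoard()],
        "disk":  [d.SerialNumber for d in c.Win32_DiskDrive()],
        "nic":   [n.MACAddress for n in
                  c.Win32_NetworkAdapterConfiguration(IPEnabled=True)],
        "bios":  [b.SerialNumber for b in c.Win32_BIOS()],
    }
    # Hash each component list so the stored fingerprint doesn't expose raw serials.
    return {k: hashlib.sha256(json.dumps(v, sort_keys=True).encode()).hexdigest()
            for k, v in ids.items()}

def matches(stored, current, required=3):
    # Require at least `required` of the 5 tracked components to match, so a
    # couple of hardware upgrades don't invalidate the configuration file.
    same = sum(1 for k in stored if stored.get(k) == current.get(k))
    return same >= required
```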
If the machine has a network card you could always check its MAC address. This is supposed to be unique, and checking it as part of the program's startup routine should guarantee that it only works on one machine at a time... even if you remove the network card and put it in another machine, it will then only work in that machine. This will prevent network card upgrades, though.
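As a quick illustration, Python's standard library can read a MAC address without WMI; note that uuid.getnode() may return a random value if no hardware address can be found, so treat this only as a sketch.

```python
# Minimal sketch: uuid.getnode() returns one interface's MAC as a 48-bit int.
import uuid

def current_mac():
    node = uuid.getnode()
    return ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -1, -8))

print(current_mac())  # compare against the value stored alongside the config file
```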
Maybe you could just keep something in the registry? Like the last modification timestamp for this file: if there's no entry in the registry or the timestamps do not match, then fall back to defaults. Would that work? (There's more than one way to skin a cat ;) )
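A minimal sketch of that registry idea (Windows-only; the key path and value name are made up for illustration):

```python
# Sketch only: compare the config file's mtime against a copy kept in the
# registry. KEY_PATH and "ConfigMTime" are hypothetical names.
import os
import winreg

KEY_PATH = r"Software\MyApp"

def store_config_mtime(config_path):
    mtime = str(os.path.getmtime(config_path))
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, "ConfigMTime", 0, winreg.REG_SZ, mtime)

def config_matches_registry(config_path):
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            stored, _ = winreg.QueryValueEx(key, "ConfigMTime")
    except FileNotFoundError:
        return False  # no registry entry: fall back to defaults
    return stored == str(os.path.getmtime(config_path))
```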
We are a startup company and have not yet invested in hardware resources to prepare our dev and testing environments. The suggestion is to buy a high-end server, install VMware ESX, and deploy multiple VMs for build, TFS, database, etc. for the testing, staging, and dev environments.
We are still not sure what specs to go with, e.g. RAM, whether a SAN is needed, HD, processor, etc.
Please advise.
You haven't really given much information to go on. It all depends on what type of applications you're developing, resource usage, need to configure different environments, etc.
Virtualization provides cost savings when you're looking to consolidate underutilized hardware. If each environment is sitting idle most of the time, then it makes sense to virtualize them.
However, if each of your build/TFS/testing/staging/dev environments will be heavily used by all developers simultaneously during the working day, then there might not be as many cost savings from virtualizing everything.
My advice would be if you're not sure, then don't do it. You can always virtualize later and reuse the hardware.
Your hardware requirements will somewhat depend on what kind of reliability you want for this stuff. If you're using this to run everything, I'd recommend having at least two machines you split the VMs over, and if you're using N servers normally, you should be able to get by on N-1 of them for the time it takes your vendor to replace the bad parts.
At the low end, that's 2 servers. If you want higher reliability (i.e. less downtime), then a SAN of some kind to store the data on is going to be required (all the live-migration options I've seen are SAN-based). If you can live with the 'manual' method (power down both servers, move the drives from server1 to server2, power up server2, reconfigure the VMs to use less memory, and start them up), then you don't really need the SAN route.
At the end of the day, your biggest sizing requirement will be HD and RAM. Your HD footprint will be relatively fixed (at least in most kinds of a dev/test environment), and your RAM footprint should be relatively fixed as well (though extra here is always nice). CPU is usually one thing you can skimp on a little bit if you have to, so long as you're willing to wait for builds and the like.
The other nice thing about going all virtualized is that you can start with a pair of big servers and grow out as your needs change. Need to give your dev environment more power? Get another server and split the VMs up. Need to simulate a 4-node cluster? Lower the memory usage of the existing node and spin up 3 copies.
At this point, unless I needed very high-end performance (i.e. I needed to consider clustering high-end physical servers for performance), I'd go with a virtualized environment. With the virtualization extensions on modern CPUs and OS/hypervisor support for them, the performance hit is not that big if done correctly.
This is a very open ended question that really has a best answer of ... "It depends".
If you have the money to get individual machines for everything you need then go that route. You can scale back a little on the hardware with this option.
If you don't have the money to get individual machines, then you may want to look at a top-end server for this. If this is your route, I would look at a quad machine with at least 8GB of RAM and multiple NICs. You can go with a server box that has multiple hard drive bays so you can set up multiple RAID arrays. I recommend RAID 5 so that you have redundancy.
With something like this you can run multiple VMWare sessions without much of a problem.
I setup a 10TB box at my last job. It had 2 NICs, 8GB, and was a quad machine. Everything included cost about 9.5K
If you can't afford to buy the individual machines, then you probably are not in a good position to make a reasonable start with virtualisation.
One way you can do it is to take the minimum requirements for all your systems (i.e. TFS, mail, web, etc.) and add them all together; that total is roughly half of the minimum server you need to host all those systems. Double it and you'll be near something that will get you by, and if you have spare cash, double or triple the RAM, since most OSes run better with more RAM up to a certain ceiling. Also think about buying expandable storage of some kind and aim for it to be half populated to start with, which keeps the initial cost per GB down and leaves room for cheaper expansion in the future.
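As a purely illustrative worked example of that "add the minimums and double it" rule (the numbers below are made up, not recommendations; use the real minimums from each product's documentation):

```python
# Illustrative only: replace these with the real minimum RAM requirements
# of your own TFS, build, database, web/staging, and dev VMs.
minimums_gb = {"TFS": 4, "build": 4, "database": 8, "web/staging": 2, "dev": 8}

baseline_gb = sum(minimums_gb.values())  # 26 GB: bare minimum to host everything
recommended_gb = 2 * baseline_gb         # 52 GB: the "double it" rule of thumb
print(baseline_gb, recommended_gb)
```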
You can also buy servers that take multiple CPUs but only fit the minimum number of CPUs to start with. Also go for as many cores per CPU as you can get, for thermal, physical, and licensing efficiency.
I appreciate this is a very late reply, but as I didn't see many ESX answers here I wanted to post one; my post applies equally to Hyper-V etc.