VirtualBox: programmatically change resolution from inside guest

I'm setting up a Linux VM that will be accessed via XRDP. The client user will only have access to the VM through RDP. I want them to be able to resize the guest display, but I haven't found a way to do that from inside the guest. How does one change the guest resolution from inside the guest? I have Guest Additions installed but haven't been able to find any utilities that would help.
From the host you can run this:
VBoxManage controlvm "Arch Linux" setvideomodehint 1440 900 32
But, like I said earlier, the host will be inaccessible to the user.
Any ideas?

The user should have access to xrandr, which will list the available video modes. If the RDP client supports resizing after connecting, this should let them see the change immediately; otherwise the VM should retain the setting across a disconnect and reconnect. Running xrandr without any arguments gives a list of the available resolutions. E.g.:
% xrandr
Screen 0: minimum 640 x 480, current 1280 x 1024, maximum 1280 x 1024
default connected 1280x1024+0+0 0mm x 0mm
1280x1024 0.0*
1024x768 0.0
800x600 0.0
640x480 0.0
Supplying an -s n parameter then sets the resolution, e.g. xrandr -s 1 will set the resolution to 1024x768 in this example.
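As a minimal sketch from inside the guest (the 1440x900 size is only an example and must appear in your mode list):
# Select a mode by index or by size
xrandr -s 1            # second entry in the list above: 1024x768
xrandr -s 1440x900     # or name the size directly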
You can also add resolutions by using setextradata like this:
VBoxManage setextradata virtualmachine CustomVideoMode1 1120x986x32
Multiple modes/resolutions can be set by incrementing the 1 at the end, as sketched below. Just be sure you have the Guest Additions installed, otherwise this might not work as intended.
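For example, a sketch using the VM name from the question (run on the host, e.g. by an administrator before handing the VM over, since the user can't reach the host; the sizes are arbitrary examples):
VBoxManage setextradata "Arch Linux" CustomVideoMode1 1440x900x32
VBoxManage setextradata "Arch Linux" CustomVideoMode2 1680x1050x32
VBoxManage setextradata "Arch Linux" CustomVideoMode3 1920x1080x32
The new modes should then show up in xrandr's list inside the guest.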

Related

Change HDMI color space using X11 API

The nvidia-settings tool offers the possibility to change the HDMI color space to RGB or YCbCr444, as shown in this picture.
I wonder if there is a way to do the same using the X11 API (i.e., modifying the color space of the HDMI output/screen)?
Modifying your Screen Colorspace
1. Programmatically
Xlib doesn't provide a way to modify the screen hardware's color space.
So you have to use your vendor-specific API, if one exists.
Luckily, NVIDIA provides an API, aptly named NVAPI.
It was designed for Windows, but for some years now it has been available on Linux too:
nvapi-open-source-sdk
NVIDIA also provides a (somewhat outdated) programmer's guide:
PG-5116-001_v02_public.pdf
which is more explicit than the one you'll get from the SDK.
2. Or manually at startup
2.1 Using xorg.conf config options
According to xconfigoptions from the NVIDIA documentation (see your nvidia package), these options
"may be specified either in the Screen or Device sections of the X config file"
so you could have something like:
Section "Device"
Identifier "card0"
Driver "nvidia"
# ...
Option "ColorSpace" "YCbCr444"
# ...
EndSection
2.2 Using the nvidia-settings tool
You can query your current settings:
$ nvidia-settings --query=CurrentColorSpace
from which you will retrieve the display number and its current value.
Then you can modify it from a shell script (e.g. ~/.xinitrc) or from the command line:
$ nvidia-settings --assign=:0[dpy:2]/ColorSpace=0
where :0 is your X11 $DISPLAY and dpy:2 is the display you want to modify.
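For example, a minimal ~/.xinitrc sketch; the dpy:2 index and the value 0 are just the ones shown above, and the final line is a placeholder for whatever session you normally start:
# Apply the color space before the session starts (check --query for your dpy index and value)
nvidia-settings --assign=":0[dpy:2]/ColorSpace=0"
exec your-window-manager   # placeholder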

freeglut (something): failed to open display ''

I compiled some C++ code under Linux (Ubuntu) and everything is fine as long as a monitor is connected to my PC.
My code shows some graphics and then saves screenshots of them. The runtime graphics are not important to me, only the screenshots.
But if I run the code remotely, I get the following runtime error:
freeglut (something): failed to open display ''
If I forward X (ssh -v -X), everything is fine. But what if I don't do that?
How do I get around it? I don't care whether anything is displayed or not.
Is it possible to define a temporary virtual screen on the remote computer, or to get around this problem in some other way? I just need the screenshot files.
I suggest you try Xvfb as your X server on the remote machine.
Quote from this answer: Does using Xvfb to run OpenGL effects version?
Xvfb is an X server whose whole purpose is to provide X11 services without having dedicated graphics hardware
This allows you to have both a GL context and a window without using a GPU
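A minimal sketch using xvfb-run, which starts Xvfb and sets DISPLAY for you (the program name and screen geometry are placeholders):
# Run the program against a virtual framebuffer; nothing needs to be displayed anywhere
xvfb-run -a -s "-screen 0 1280x1024x24" ./my_program
# Or start Xvfb by hand and point DISPLAY at it:
Xvfb :99 -screen 0 1280x1024x24 &
DISPLAY=:99 ./my_program
Rendering happens in software, so no GPU or monitor is needed, and the screenshot files are still written.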

VirtualBox .vdi drive does not grow

I have a CentOS guest system on a Windows host. The virtual drive was originally 20 GB; I then resized it to 60 GB using the VBoxManage utility.
VirtualBox reports the expected virtual size (see picture below), but the guest system keeps reporting that it's out of storage space. Copying in files beyond the limit fails, and df reports only 20 GB for some reason.
Are there any extra steps I need to take to actually increase the size of the drive?
In my case, after resizing the partition with GParted and booting up the VM, I also had to extend the logical volume and then grow the file system to the available size:
(logged in as root)
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /dev/centos/root
'df -h' then shows the new size.
Your disk now has more space, but your partition still does not. You also have to resize the partition for the additional "physical" space to become available to the OS. Since your question doesn't say which CentOS version you use, assuming it's CentOS 7, I recommend lvextend:
sudo lvextend -L+40G -r /dev/mapper/centos-root
Also, I recommend taking a snapshot of your VM's hard drive before making any actual changes, so you can quickly roll back and start over.
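For reference, the whole chain (partition → physical volume → logical volume → filesystem) might look like this sketch; the device and volume names are assumptions, so check lsblk and lvdisplay first:
# 1. Grow the partition holding the LVM physical volume (growpart is in cloud-utils-growpart)
sudo growpart /dev/sda 2
# 2. Tell LVM the physical volume got bigger
sudo pvresize /dev/sda2
# 3. Extend the logical volume and grow the filesystem in one step (-r)
sudo lvextend -l +100%FREE -r /dev/mapper/centos-root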

OpenGL and MultiGPU

We are trying to set up a server with multiple Tesla M2050s to run OpenGL.
The current setup is as follows: Ubuntu 12.04 with NVIDIA drivers. We have set up xorg.conf with separate devices identified by bus ID.
We have then tied one X server to each display, which in turn is tied to one device, and our code attaches to each of these X servers. But somehow only one X session works correctly. The other produces garbled output, and while watching nvidia-smi we notice that when the garbled output is being produced the GPUs are not used at all.
Could someone verify that our setup seems reasonable? The other thing we noticed is that it is always the first X server started that has the issue.
EDIT: This is in headless mode.
A problem with multiple X servers is that each server may grab the active VT and hence disable the other X servers' rendering output. This can be avoided, but I think in your situation good old "Zaphod mode" would suit your needs far better:
Zaphod mode is a single X server controlling multiple Devices, each with its own Monitor forming a Screen, joined in a single screen layout. This is not TwinView or Xinerama! In Zaphod mode you cannot move windows between Screens, i.e. each Screen acts on its own.
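A sketch of what such an xorg.conf could look like; the BusIDs are placeholders that you would take from lspci, as in your existing config:
Section "Device"
    Identifier "gpu0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"
EndSection
Section "Device"
    Identifier "gpu1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection
Section "Screen"
    Identifier "Screen0"
    Device     "gpu0"
EndSection
Section "Screen"
    Identifier "Screen1"
    Device     "gpu1"
EndSection
Section "ServerLayout"
    Identifier "Zaphod"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
EndSection
Each Screen then shows up as :0.0 and :0.1, so you can pin each instance of your code to one GPU via DISPLAY.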

Open a custom Finder Window on plugging in USB

I would like to open a custom Finder window upon plugging in a USB drive, without activating any AppleScript on the Mac.
I know for sure that it is possible because I have a USB drive from VMware (the VMware Fusion app) that does exactly this: as soon as I plug in the Fusion USB drive, it opens a custom Finder window.
Could someone help me out with this please?
Have you taken a look at this?
http://jordan.broughs.net/archives/2008/03/creating-cross-platform-windows-and-mac-installer-cds
It boils down to creating a custom .dmg and then writing it to the USB key with the following command:
sudo dd if=~/path/to/image.dmg of=/dev/rdisk{N} bs=1m
where {N} should be replaced with the correct disk number, which you can look up with diskutil list.
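A sketch of the full sequence; N is the disk number from diskutil list, and be very sure it is the USB key, since dd overwrites whatever it is pointed at:
diskutil list                       # identify the USB key's disk number N
diskutil unmountDisk /dev/diskN     # the disk must be unmounted before raw writing
sudo dd if=~/path/to/image.dmg of=/dev/rdiskN bs=1m
diskutil eject /dev/diskN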
That's all there is to it.