Using Linux on VirtualBox as primary workspace?

I will be getting a new Windows computer for work but am interested in doing as much of my actual work as possible (email, LibreOffice, basic data analysis, and Python scripting with Anaconda) in a Linux VM under VirtualBox. Let's say my new computer comes with 8-12 GB of RAM and a 256 GB hard drive. Should I allocate as much of those resources as possible to the VM, since I'm not expecting/hoping to do much with the Windows setup? In other words, what considerations are there in making a Linux VM my primary computing workspace?

You can allocate disk space dynamically, so the virtual disk won't take more host space than is actually needed. Memory can be adjusted easily, so start with the recommended value and adjust as required. Start with a minimal Linux install and add what you need for your work. This arrangement works well, and you can use the shared folders feature so that you can edit files in the VM using native Windows tools if desired. I find it very handy for trying out distros and applications that you can blow away later.
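If you prefer scripting the setup, the VBoxManage CLI that ships with VirtualBox can do all of this from the host. Below is a minimal Python sketch driving it; the VM name "ubuntu-dev" and the host path are hypothetical, and memory changes require the VM to be powered off:

```python
import subprocess

VM = "ubuntu-dev"  # hypothetical VM name

def vboxmanage(*args):
    """Run a VBoxManage command, raising if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Give the (powered-off) VM 8 GB of RAM and 2 CPUs.
vboxmanage("modifyvm", VM, "--memory", "8192", "--cpus", "2")

# Share a host folder with the guest; with Guest Additions installed it
# auto-mounts (e.g. under /media/sf_work on most Linux guests).
vboxmanage("sharedfolder", "add", VM, "--name", "work",
           "--hostpath", r"C:\Users\me\work", "--automount")
```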

Related

Using an ST uPSD3200 series MCU for the first time

I've received a DK3200 kit recently. I know it's old, but I would like to start using it to have more of a challenge than just Arduino. It came with the board, an ST FlashLINK FL-101B, and some cables. I do not have the install disc, but I found the PSDsoft Express software online. It doesn't work with current Windows 7 64-bit. If I could use my Arduino to program it, that would be great! Or maybe just do it through USB or the parallel port? I've read that ST provides an STM32 library used to help write code. If that works for my MCU, I'll use that.
Thanks
ST is a company that loves to reorganize their website and break links, but a datasheet for a part that seems to be from the same family as the one on your board is available from a toolchain vendor at:
http://www.keil.com/dd/docs/datashts/st/upsd321x_ds.pdf
Page 118 seems to indicate that programs can be loaded using JTAG In-System Configuration commands, which may be somewhat standardized, though quick searching isn't producing many leads. A reference is also made to ST's AN1153, which would be worth trying to locate a copy of; however, it's not entirely clear whether it says anything about the programming, or whether it only covers the two additional optional signals that were added for hardware acceleration of the interface.
In theory, if you can find sufficient information about this device (looking at related devices may provide clues), you should be able to craft a programmer from an Arduino or anything else that can be adapted to twiddle bits at the appropriate voltage/threshold levels.
In practice, you may be better off trying to find an old Windows XP box, or even trying to run that software in a virtual/emulated environment and trapping the I/O access, either to proxy it to real hardware or to figure out what it is doing and reverse-engineer your own programmer.
But unless you have a large installed base of boards using these chips, or particularly need some unusual feature of them (I thought I saw something about a built-in CPLD?), it's really not going to be worth the effort.

Creating a File System "Driver"

I'm looking to create a "driver", I guess, for a custom file system on a physical disk for Windows. I don't know the best way to explain it, but the device already has proper drivers for Windows to communicate with it; what I want is for the user to be able to plug the device into their PC, have it show up in My Computer, and have full support for browsing the device.
I realize it's probably a little scary to think of someone who doesn't know the basics of doing something like this even asking the question, but I already have classes constructed for reading the file system within my own app... I just want everything to be more centralized and to require no extra work from the end user. Does anyone have a good guide for creating a project like this?
The closest thing I know of to what I understand from your description is an installable file system (IFS), like the Ext2 installable file system that allows Windows computers to work with Linux-originating ext2 (and to a certain degree ext3) filesystems.
Maybe that can serve as a starting point for your investigations.
As an alternative approach there's the Shell extension which is a lot less complicated than the IFS. The now-defunct GMail shell extension used that approach, and even though it's become nonfunctional due to changes in GMail, it can still serve as inspiration.
Your options are:
Create a kernel-mode file system driver. 9-12 months of work for an experienced developer.
Use a framework and do everything in user mode. A couple of weeks of work to get a prototype working. The only drawback of this approach is that it's slower than a kernel-mode driver. You can play with Dokan, mentioned elsewhere in this thread, or you can use our Callback File System for commercial-grade development.
I think you need to look through the Windows Driver Kit documentation (and related subjects) to figure out exactly what you're looking to create.
If you're intending to rely on the drivers that already exist, i.e. you don't need to actually execute your code in kernel land to communicate with the device, I would recommend you take a look at Dokan, a FUSE-like framework for Windows.
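To give a feel for the user-mode route, here is a minimal read-only filesystem written against the Python fusepy binding. This is FUSE on Linux/macOS rather than Dokan itself, but Dokan exposes an analogous callback model on Windows (and, as I understand it, also offers a FUSE-compatible wrapper), so the shape of the code carries over. The mount point is a placeholder:

```python
import errno
import stat
from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

DATA = b"hello from user space\n"

class HelloFS(Operations):
    """One file, hello.txt, served entirely from memory."""

    def getattr(self, path, fh=None):
        if path == "/":
            return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
        if path == "/hello.txt":
            return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                        st_size=len(DATA))
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return DATA[offset:offset + size]

if __name__ == "__main__":
    # Mount point is hypothetical; create it first (mkdir /tmp/hello).
    FUSE(HelloFS(), "/tmp/hello", foreground=True, ro=True)
```

Every filesystem operation the OS issues (stat, directory listing, read) arrives as a callback; a kernel-mode driver does the same job but with far more ceremony.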
If you do indeed need to run in kernel space and communicate directly with the hardware, you probably want to download the Windows DDK (Driver Development Kit). Keep in mind that drivers for communicating with a block device and filesystem drivers are separate things, and it sounds like you're talking about the filesystem itself. I believe that anything you run in kernel space will not have access to the C++ runtime, which means you can only use a subset of C++ for kernel drivers.

Restrict functionality to a certain computer

I have a program that is using a configuration file.
I would like to tie the configuration file to the PC, so copying the file on another PC with the same configuration won't work.
I know that the Windows activation mechanism monitors hardware to detect changes and that it can tolerate some minor changes to the hardware.
Is there any library that can help me do that?
My other option is to use WMI to get the hardware configuration and to program my own tolerance mechanism.
Thanks a lot,
Nicolas
Microsoft Software Licensing and Protection Services has functionality to bind a license to hardware. It might be worth looking into. Here's a blog posting that might be of interest to you as well.
If you wish to restrict the use of data to a particular PC you'll have to implement this yourself, or find a third-party solution that can do this. There are no general Windows API's that offer this functionality.
You'll need to define what you currently call a "machine."
If I replace the CPU, memory, and hard drive, is it still the same computer? Network adaptor, video card?
What defines a machine?
There are many, many licensing libraries out there that do this for you, but almost all are paid (because, ostensibly, you'd only ever want to protect commercial software this way). Check out what RSA, VeriSign, and even Microsoft have to offer. The Windows API does not expose this, ostensibly to prevent hacking.
Alternately, do it yourself. It's not hard to do, the difficult part is defining what you believe a machine to be.
If you decide to track five things (hard drive, network card, video card, motherboard, memory sticks) and you allow three changes before requiring a new license, then users can duplicate the hard drive, take two of those components out, put them in a new machine, replace them with new parts in the old machine, and run your program on two separate PCs.
So it does require some thought.
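As a rough sketch of such a scheme, using the WMI route the question mentions via the third-party Python wmi module: the component set, hashing, and tolerance value here are illustrative assumptions, and some properties (e.g. disk serial numbers) can be empty on older Windows versions:

```python
import hashlib
import wmi  # third-party module (pip install WMI); Windows only

TOLERANCE = 1  # how many components may change before we refuse to run

def component_ids():
    """One identifier per tracked component; the exact set is illustrative."""
    c = wmi.WMI()
    return {
        "cpu": c.Win32_Processor()[0].ProcessorId or "",
        "board": c.Win32_BaseBoard()[0].SerialNumber or "",
        "disk": c.Win32_DiskDrive()[0].SerialNumber or "",
        "mac": c.Win32_NetworkAdapterConfiguration(IPEnabled=True)[0].MACAddress or "",
    }

def fingerprint():
    """Hash each component separately so partial matches can be counted."""
    return {k: hashlib.sha256(v.encode()).hexdigest()
            for k, v in component_ids().items()}

def machine_matches(stored):
    """Accept the machine if at most TOLERANCE components changed."""
    current = fingerprint()
    changed = sum(1 for k in stored if stored[k] != current.get(k, ""))
    return changed <= TOLERANCE

# Usage: on first run, store fingerprint() alongside the config (ideally
# signed or encrypted); on later runs, check machine_matches(stored).
```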
-Adam
If the machine has a network card you could always check its MAC address. This is supposed to be unique, and checking it as part of the program's startup routine should guarantee that the program only works on one machine at a time... even if you remove the network card and put it in another machine, it will then only work in that machine. This will prevent network card upgrades, though.
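For what it's worth, Python's standard library can read a MAC address without WMI, though uuid.getnode() silently falls back to a random value (with the multicast bit set) when it cannot find one, so that case needs checking:

```python
import uuid

mac = uuid.getnode()  # 48-bit integer
if (mac >> 40) & 0x01:
    # Multicast bit set: no real MAC was found, this is a random fallback.
    print("no MAC address available")
else:
    print(f"MAC address: {mac:012x}")
```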
Maybe you could just keep something in the registry? Like the last modification timestamp for the file: if there's no entry in the registry or the timestamps do not match, then fall back to defaults. Would that work? (There's more than one way to skin a cat ;) )
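A minimal sketch of that registry idea, with hypothetical file and key names; note that a determined user can copy the registry entry along with the file, so this only deters casual copying:

```python
import os
import winreg

CONFIG_PATH = r"C:\ProgramData\MyApp\app.cfg"  # hypothetical file name
REG_KEY = r"Software\MyApp"                     # hypothetical key name

def save_stamp():
    """Record the config file's modification time in the registry."""
    stamp = str(os.path.getmtime(CONFIG_PATH))
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, REG_KEY) as key:
        winreg.SetValueEx(key, "ConfigStamp", 0, winreg.REG_SZ, stamp)

def stamp_matches():
    """True only if the registry entry exists and matches the file on disk."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_KEY) as key:
            stored, _ = winreg.QueryValueEx(key, "ConfigStamp")
    except OSError:  # key or value missing: fall back to defaults
        return False
    return stored == str(os.path.getmtime(CONFIG_PATH))
```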

What are the recommended HW specs for virtualization?

We are a startup company and haven't yet invested in HW resources to prepare our dev and testing environments. The suggestion is to buy a high-end server, install VMware ESX, and deploy multiple VMs for build, TFS, database, ... for the testing, staging, and dev environments.
We are still not sure what specs to go with, e.g. RAM, whether a SAN is needed, HD, processor, etc.
Please advise.
You haven't really given much information to go on. It all depends on what type of applications you're developing, resource usage, need to configure different environments, etc.
Virtualization provides cost savings when you're looking to consolidate underutilized hardware. If each environment is sitting idle most of the time, then it makes sense to virtualize them.
However, if each of your build/TFS/testing/staging/dev environments will be heavily used by all developers simultaneously during the working day, then there might not be as many cost savings from virtualizing everything.
My advice would be if you're not sure, then don't do it. You can always virtualize later and reuse the hardware.
Your hardware requirements will somewhat depend on what kind of reliability you want for this stuff. If you're using this to run everything, I'd recommend having at least two machines you split the VMs over, and if you're using N servers normally, you should be able to get by on N-1 of them for the time it takes your vendor to replace the bad parts.
At the low-end, that's 2 servers. If you want higher reliability (ie. less downtime), then a SAN of some kind to store the data on is going to be required (all the live migration stuff I've seen is SAN-based). If you can live with the 'manual' method (power down both servers, move drives from server1 to server2, power up server2, reconfigure VMs to use less memory and start up), then you don't really need the SAN route.
At the end of the day, your biggest sizing requirement will be HD and RAM. Your HD footprint will be relatively fixed (at least in most kinds of a dev/test environment), and your RAM footprint should be relatively fixed as well (though extra here is always nice). CPU is usually one thing you can skimp on a little bit if you have to, so long as you're willing to wait for builds and the like.
The other nice thing about going all virtualized is that you can start with a pair of big servers and grow out as your needs change. Need to give your dev environment more power? Get another server and split the VMs up. Need to simulate a 4-node cluster? Lower the memory usage of the existing node and spin up 3 copies.
At this point, unless I needed very high-end performance (i.e. I needed to consider clustering high-end physical servers), I'd go with a virtualized environment. With the virtualization extensions on modern CPUs and OS/hypervisor support for them, the performance hit is not that big if done correctly.
This is a very open ended question that really has a best answer of ... "It depends".
If you have the money to get individual machines for everything you need then go that route. You can scale back a little on the hardware with this option.
If you don't have the money to get individual machines, then you may want to look at a top-end server for this. If this is your route, I would look at a quad machine with at least 8 GB RAM and multiple NICs. You can go with a server box that has multiple hard drive bays and set up multiple RAID arrays. I recommend RAID 5 so that you have redundancy.
With something like this you can run multiple VMware sessions without much of a problem.
I set up a 10 TB box at my last job. It had 2 NICs and 8 GB of RAM and was a quad machine. Everything included cost about $9.5K.
If you can't afford to buy the individual machines, then you are probably not in a good position to start with virtualisation.
One way you can do it is to take the minimum requirements for all your systems (i.e. TFS, mail, web, etc.) and add them all together; that will give you an idea of half the minimum server you need to host all those systems. Double it and you'll be near something that will get you by; if you have spare cash, double or triple the RAM. Most OSes run better with more RAM, up to a particular ceiling. Think about buying expandable storage of some kind and aim for it to be half populated to start with, which keeps the initial cost per GB down and allows for expansion at lower cost in the future. (A worked example of this sizing arithmetic follows below.)
You can also buy servers that take multiple CPUs but only put in the minimum number of CPUs to start. Also, go for as many cores per CPU as you can get, for thermal, physical, and licensing efficiency.
I appreciate this is a very late reply but as I didn't see many ESX answers here I wanted to post a reply though my post equally relates to Hyper-V etc.
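To make the "add the minimums, then double" rule of thumb above concrete, a trivial worked example; every figure here is hypothetical:

```python
# Hypothetical per-VM minimum RAM requirements, in GB.
vm_min_ram = {"TFS": 4, "build": 2, "database": 4, "web": 2, "staging": 2}

minimum = sum(vm_min_ram.values())  # 14 GB: just enough to boot everything
suggested = 2 * minimum             # 28 GB: doubled for real-world headroom
print(f"bare minimum: {minimum} GB, suggested host RAM: {suggested} GB")
```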

Guide to New Vista Features [closed]

I'm an MFC programmer. I just had my first taste of Vista (on a VPC... yup, I'm late). I should be evaluating things in depth in the coming days. On taking a casual look, I noticed a few major changes:
The shell is new
User Account Control (UAC)
Event Viewer has changed (would like more info on this)
What other new features should I look out for from a programmer's point of view?
There's a significant set of changes depending on what sort of software you write.
It's never a bad idea to check out the Windows Logo Certification (for Vista). There's a link to the software technical requirements here. It always gives you a bit of an idea of what to avoid doing (and what to design for).
In my opinion, Vista mostly started to enforce [existing] Logo certification requirements, in particular:
Don't write to HKLM
Don't save application data under the Program Files directory
Don't assume administrative permissions
Do save data to the user's application data directory (see the sketch below)
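For instance, a short sketch that resolves the per-user application data directory at run time rather than hard-coding it ("MyApp" is a hypothetical application name):

```python
import os

# Resolve the per-user application data directory instead of hard-coding
# "C:\Documents and Settings\..."; Windows itself sets APPDATA.
appdata = os.environ["APPDATA"]                # e.g. C:\Users\nick\AppData\Roaming
settings_dir = os.path.join(appdata, "MyApp")  # "MyApp" is hypothetical
os.makedirs(settings_dir, exist_ok=True)
```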
Regarding User Account Control (new to Vista), it's also a good idea to get across manifest files. The best resource I could find on them is this blog entry here.
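For reference, a minimal application manifest of the kind that entry discusses. Declaring asInvoker means the app runs with the launching user's rights and never triggers a UAC elevation prompt; it can be embedded as an RT_MANIFEST resource or placed alongside the executable as MyApp.exe.manifest (a hypothetical name):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- asInvoker: run with the caller's token; use requireAdministrator
             only if the app genuinely needs elevation -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```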
Windows drivers are under higher scrutiny in Windows Vista and pretty much require certification, IMHO.
The TCP/IP stack was rewritten, and so was the audio subsystem (multimedia streaming, etc.). There are the obvious advances in graphics, plus the inclusion of DirectX 10 and the usual rollout of an updated Media Player, etc.
Sorry, I also forgot to mention that Microsoft replaced ActiveSync (for Windows Mobile) with a completely new framework in Vista.
Vista is much more strict about enforcing rules that you were supposed to follow for XP anyway.
For example, you're not supposed to do anything that requires write access to your program's install folder. In XP a lot of programmers got away with breaking that rule because so many users run as administrator, but Vista will actually enforce it. A bunch of folders did move around ("Users" instead of "Documents and Settings", My Documents is different, etc.), but if you're using the correct methods to retrieve those paths, rather than assuming they're always in the same place, you'll be fine.
Perhaps wikipedia's Features new to Windows Vista and possibly Features removed from Windows Vista will be of use to you.
Processes and resources have "integrity levels". A process is only able to access resources at or under its own integrity level.
If you ever do any work with IE extensions, this will become a PITA when you want to access something and discover that everything has a higher integrity level than IE in protected mode (the default).
Well, from a programmer's point of view, WPF is built into the system. That means that if you target an app at version 3.0 of the .NET Framework, it should be able to install on Vista without a .NET Framework install.
DirectX 10 is also new in Vista, but I assume if you didn't know that, you probably won't be programming against it.
Search is pervasive. Numerous kernel improvements. SuperFetch (friggin' awesome if you have enough RAM). IMO Vista goes to sleep and wakes up a LOT more easily and reliably than XP ever did. I/O priority: apps like antivirus scanners and search indexers can now request lower priority for disk access than they could in XP or before. That makes the user experience much more enjoyable when something is indexing the drive or a scan is running. All in all, Vista is good stuff IF you have gobs and gobs of memory to throw at it. I run Vista x64 with 4 GB of RAM, and I actually like it.
The audio subsystem has been redeveloped, so if you do anything audio-related it is worth checking very carefully that everything still works.
Although many of the older API calls still work, some may not work as expected.
As a simple example, sound devices have much longer and more descriptive names than in XP, but if you continue to use the older APIs then you may find these longer names are truncated.
Oh, yeah. There's a completely different driver model where much of the code is kicked out of kernel space and back into userland, to prevent poor drivers from trampling over the system. So if you do any driver work it's almost like starting over from scratch.
1. Machines with Vista usually have more RAM; this is good news for you :)
2. The Program Files path is split in two: \Program Files (x86)\ and \Program Files\
3. My Documents has changed
Virtualization is also an interesting and necessary feature of Vista.