What tools are available to sit on top of VMware or Xen (or other VM managers) to monitor the VMs?
I know there are a few solutions like Netuitive, CA Infrastructure Manager / eHealth, and Nimsoft - what are their areas of application, and how popular are they?
CA offers root-cause analysis of potential problems with the host. Is there something that performs other types of analysis, or that even attempts to fix the problems?
Alex, which VMs are you talking about in particular: Xen, VMware, VirtualPC, etc.? If you just want to monitor performance and status, you can treat them like regular servers and use Nagios or something similar. But I haven't heard of a tool that monitors their performance specifically as VMs across different architectures.
You can also set up Nagios, GroundWork, or Splunk to monitor your hosts if you have a heterogeneous environment with both VMware and Xen.
Could you clarify or expand a little bit?
Based on your comment below:
OK, I see; then the tools I mentioned may work for you. I use GroundWork - it's based on Nagios, but has nicer, more developed graphs and history analysis than Nagios, and works with LDAP :-) (see monitoringforge.org - both commercial and free - for over 2000 extensions, including some support for Xen and VMware).
Also check out Splunk (they have a community version), and there is another interesting tool called Spiceworks. I tried both, but like GroundWork more for my needs.
To be honest, I think Splunk is closer to what you are looking for, since it is primarily a data collector: once the data is in, you can do what you want with it, such as run reports in MySQL or write a nifty web app that draws graphs.
P.S.:
A word of warning: I run the GroundWork community version on Debian, and it still required a 2.8 GHz dual-core CPU and 4 GB of RAM to run properly. It's something of a memory hog, but other than that it's been running without a hitch for a year.
You might want to look at "BMC ProactiveNet".
Pretty expensive, but they have some components that monitor VMware VMs (not sure if you can get a trial or something).
(http://www.bmc.com/products/product-listing/ProactiveNet-Performance-Management.html)
VMware monitoring can mean different things. If you are looking to monitor the hypervisor, the built-in tools from VMware (vCenter, for example) are pretty good. If you want to look into the VMs in depth, or you want to monitor other elements of the infrastructure (e.g., applications), you should consider a third-party tool. For instance, we use the eG VMware Monitoring tool - http://www.eginnovations.com/web/vmware.htm
We plan to give one of our users remote access to Solid Edge, as he works from home while the other users are in our offices. Currently he has his own PC at home and has to come in regularly.
I already tried XenDesktop and XenApp (but not the latest release, as it is not available for demo purposes; only the 6.0 release, which I will try today).
Also, regarding XenApp coupled with OpenGL (which Solid Edge needs): is the setup process complex? I could not find any specific documentation covering that usage.
The main factor is bandwidth: XenApp doesn't seem to provide bandwidth-compression features like XenDesktop does, which can decrease bandwidth usage to 2 Mbit/s; judging from our LAN, I have doubts it will be usable.
Well, do you see another, easier way to do what I would like to set up?
Could you provide me with some details that could help?
If you are on Windows, I'd suggest plain VNC on the desktop machine. Yes, I know it sounds stupid. For a terminal-server kind of access, NVIDIA provides appropriate combinations of hardware and software.
On Linux there's another possibility: Xpra. Simply put, Xpra is a special kind of compositing manager that uses a different display as the composition surface. It can operate over low-bandwidth links by using efficient video codecs for compression. Usually Xpra uses a virtual-framebuffer X server to run the clients on, but it can also be used in combination with an X server that uses a GPU.
However, each user needs their own X server for this. So if the system's GPU is used, only one user can use Xpra at a time (since the NVIDIA drivers now claim to support hybrid graphics, it may be possible to exploit that somehow; I haven't researched it yet, though).
So the user starts Xpra with
xpra --start-child=xterm --vfb=/usr/bin/X start :100
and can then connect to the remote machine with
xpra attach ssh:remote-host:100
I am trying to look at different licensing models and their potential technical C++ implementations.
Suppose you have a desktop application that includes several algorithms (A1, A2, A3) and communicates with some server (potentially in the cloud). These "local" algorithms may be used independently. Is there any solution or framework out there that would allow us to bill their usage independently?
For example, one user uses algorithms A2 and A3. Before saving files, the software computes the final bill, sends it to some server, asks the user to pay it, and generates the results file.
This would allow us to ship a potentially expensive piece of software "for free" and spare users the risk of spending an enormous amount of money upfront without being sure the software will actually be heavily used.
Related question: what are the risks?
Your pricing model is feasible at large scale, and it is essentially what the cloud already offers.
I don't think a native application would be scalable or practical with this model.
Software that is too costly to license per user is usually sold with a floating licence and a cap on the number of simultaneous users.
Pay-as-you-use is great, but it is the same as cloud computing, and then the question is simple:
Do you want to reinvent the wheel?
If you don't want to invest in your own cloud server, you can easily put your application on someone else's cloud.
If you are ready to invest in building and maintaining your own cloud, then you should go ahead.
Edit:
You can expose a web service that calculates the price to be incurred, together with the payment method. I would personally use Java or C# for this purpose, since both have good support for web services; for the wrapper around the C++ code I would use JNI or C++/CLI.
Another thing: I have not come across any open-source tool for this, as each web service has its own requirements. You can reuse the architecture, but there is no ready-made implementation.
The flow is: C++ code -> web service -> price billing -> result returned to caller.
Regarding technical difficulties:
It would not be possible to do everything purely in C++; you will likely need several other tools alongside it.
Consider such a scenario:
The program processes the data on the customer's computer, produces some cryptic (encrypted) intermediate data at this stage, and calls home (your server).
The server decodes the data, performs the final analysis, and sends the client the message: "It will cost you X dollars to see the result. Do you want to proceed?"
If yes, the client makes the payment and gets the result.
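As a rough illustration of the client side of that scenario, here is a minimal C++ sketch of a per-algorithm usage meter. Everything in it is hypothetical: UsageMeter, sendInvoiceToServer() and the per-call prices are names invented for this example, not part of any existing framework, and a real implementation would need an actual HTTP client, authentication, and some tamper resistance.

// usage_meter.cpp - hypothetical sketch of per-algorithm metered billing.
#include <iostream>
#include <map>
#include <string>

class UsageMeter {
public:
    // Record one invocation of the named algorithm.
    void record(const std::string& algo) { ++counts_[algo]; }

    // Compute the bill from a per-call price list.
    double bill(const std::map<std::string, double>& prices) const {
        double total = 0.0;
        for (const auto& [algo, n] : counts_)
            total += n * prices.at(algo);
        return total;
    }

private:
    std::map<std::string, int> counts_;
};

// Stub: a real version would POST the usage record to the vendor's
// billing web service and block until payment is confirmed.
bool sendInvoiceToServer(double amount) {
    std::cout << "Invoice sent: $" << amount << " - awaiting payment...\n";
    return true;  // pretend the user paid
}

int main() {
    UsageMeter meter;
    meter.record("A2");  // the user ran algorithm A2 once
    meter.record("A3");  // ...and A3 twice
    meter.record("A3");

    const std::map<std::string, double> prices{
        {"A1", 0.50}, {"A2", 1.25}, {"A3", 2.00}};

    const double amount = meter.bill(prices);
    if (sendInvoiceToServer(amount))
        std::cout << "Payment confirmed - writing results file.\n";
}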
I need to create an application that will be reading and writing files (C++/MFC), but I need the process not to appear in Process Monitor (the Sysinternals tool).
From the reactions of others, I now gather that this seems "illegal", but that is the request of the client I'm dealing with, so I guess I just have to satisfy the client's request.
One of the uses of Process Monitor is to find and remove malicious software that tries to hide from the user:
Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and much more. Its uniquely powerful features will make Process Monitor a core utility in your system troubleshooting and malware hunting toolkit.
I am not saying that what you want to do is impossible, rather that you are trying to do something that feels a bit dishonest.
That being said, I would like you to consider that you are trying to hide a process from a utility written to find anything and everything, by folks who are a lot smarter than you and me.
I'll assume you're not planning to do anything malicious. If that's the case, it's important you don't hide your application from diagnostic tools. You can't guarantee your application is bug free. Even if it is, you can't predict its interaction with other applications. Because of that, you should leave it visible so other technical people can troubleshoot if something goes wrong.
Regarding your comment, "so, I guess I just have to satisfy the client's request" - not if it's illegal or technically dangerous for them. You need to protect yourself and them from bad judgment.
Process Monitor reads data at a very low level, so to hide from it you would have to take over certain NT kernel structures and methods so that they report different information to Process Monitor than what Windows itself sees. Doing this is platform- and version-dependent (i.e., Windows XP SP1 differs from Windows XP SP2, which differs from Vista x64, and so on). It's nearly impossible to do correctly without creating an incredible number of system-instability issues.
While it's not strictly illegal, every company that has done this and been discovered (and you will be discovered) has faced a backlash from users and security professionals. And although not explicitly illegal, the kinds of changes required can open severe security holes on end users' machines; should they suffer major system crashes or be exposed to hackers or viruses, you may be legally liable for the damage.
Possible semi-legitimate applications (though I wouldn't want my name associated with them) where you would want to keep people from seeing a process are DRM enforcers and nanny-cam-style monitors for kids and errant spouses.
That said, I don't think your client really wants you to subvert such an important system. They likely want something less rootkit-like, but they picked up the vocabulary watching "24" and have failed to adequately express what it is they want done.
My advice would be to go back to them for clarification. If they do indeed want something to be completely undetectable then you need to decide based on your own conscience whether to proceed or leave the client.
I have recently been asked to develop an application that will have to integrate with Sage Line 50 financial software.
I've done some googling and I am surprised at the lack of info on interfacing with Sage from Java or .NET.
Is Sage such a black box that you need to sign up to a Sage Developer program before you get any info?
Are there any open source options to allow apps to talk to Sage?
Any info appreciated.
Cheers
Paul
There's a new methodology Sage are moving to called SData. You can read about it at http://sdata.sage.com/
The long term aspiration is that SData will provide full CRUD facilities and simplify integration between different Sage programs (of which there are many!) and therefore provide a consistent web service that 3rd party applications can be integrated with too.
Looking on the Sage UK site I found the following Developer SDK.
The upshot is that you need to use .NET if you want to use the SDK.
The problem is that the SDK is only available under the Developer Programme, which starts at £1500: here's the brochure.
However the developer programme does give you free copies of the Sage software for development purposes, so I can see the benefits if your business is Sage integration.
Another option is an add-on for Sage, sold by Sage for £299:
http://shop.sage.co.uk/pdf/connect_for_Sage_50.pdf
This gives an XML import/export facility, which may be enough for my purposes.
I've done quite a bit with Sage Line 50 v9 (a couple of versions old, I know). Sage provide an ODBC driver which you can happily talk to with ADO and ADO.NET. The driver is, however, read-only, which may or may not be an issue for you. There do seem to be some limitations with SQL queries, though; in particular, double joins don't work (a JOIN b JOIN c) and need to be flattened out. Also, the DISTINCT keyword doesn't seem to be recognised. Hope this is of some use.
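For illustration, here is a minimal C++ sketch of querying that driver through the raw ODBC C API (rather than the ADO/ADO.NET route mentioned above). The "SageLine50" DSN, the credentials, and the SALES_LEDGER table and column names are placeholders I've assumed for the example; check the driver's documentation for the actual schema.

// sage_odbc.cpp - hypothetical sketch: reading Sage Line 50 data over ODBC.
// The DSN, credentials, and table/column names below are placeholders.
// Link against odbc32.lib on Windows.
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <iostream>

int main() {
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;

    // Set up the ODBC environment and connection handles.
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // Connect via a pre-configured DSN (name is a placeholder).
    SQLCHAR connStr[] = "DSN=SageLine50;UID=manager;PWD=;";
    SQLRETURN rc = SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                    NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
    if (!SQL_SUCCEEDED(rc)) { std::cerr << "connect failed\n"; return 1; }

    // The driver is read-only, so only SELECTs will work. Keep the
    // query flat: nested joins and DISTINCT reportedly fail.
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt,
        (SQLCHAR*)"SELECT ACCOUNT_REF, NAME FROM SALES_LEDGER", SQL_NTS);

    SQLCHAR ref[16], name[64];
    SQLBindCol(stmt, 1, SQL_C_CHAR, ref, sizeof(ref), NULL);
    SQLBindCol(stmt, 2, SQL_C_CHAR, name, sizeof(name), NULL);
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        std::cout << ref << "  " << name << "\n";

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
}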
Going back a few years, but Sage also used to provide a read-write API (not ODBC based) for accessing the data in their products.
I'm not surprised that you need to join the developer program: Sage is a traditional closed-source commercial application, so it's unlikely to have open-source options available for it.
Joining the dev program used to be free for Sage customers, which the people you are working for should be, surely...?
EDIT - yikes, not free any more
I currently use VMware workstation to create separate workspaces for various clients that I do work for. So for a given client I only install software needed for a specific job, and don't have to worry about software A for client #1 mucking up software B for client #2.
With an adequately sized hard drive this works well, but I am not sure I am using VMware to its best advantage. The steps I go through when setting up for a new job are:
Do a Windows update on a base VMware image that I have saved away.
Make a full clone of the base image and rename it for the new job.
Launch the new image and configure/install as necessary.
After doing this for a while I now have a collection of VMware images sitting around that are all at different update levels, unless I manually go into each image and kick off an update cycle. And if there is some new tool that I want in all of my images, I also have to go around and do multiple manual installs. But I feel secure in knowing that each image is self-contained (albeit taking 10+ GB at a hit), and that if anything happens to a single image, an issue cannot propagate into any other image. (Note that I do regular backups of all my images.)
So my question is: am I doing this the best way, or should I consider linked clones so that I only have to do a Windows update or a common application install on my base system? What are the pros and cons of each way of organizing things?
In addition, although I try not to keep data files inside the images' local disks, I find that the share to my local hard drive seems very slow compared to the image's file system, so I tend to leave work inside the image. Should I force myself not to leave data inside the image? Or, put another way, how much corruption can a VMware image take before any single file in the image's filesystem becomes inaccessible?
Edit
Some thoughts I have had since posting this question
With a full clone I can archive old work away and remove it from my primary hard drive.
Linked clones take up a lot less space than a full clone, but I am concerned about archiving a linked clone away somewhere else.
The time taken to make a full clone is not too significant (5-10 minutes), and I generally do it on a weekly basis.
However, I also tend to do a lot of "let's see what happens with a clean install", especially when I am documenting how to install applications for the client.
I have found that my VMware images cause a lot of fragmentation on my hard drive, so I also end up doing a lot more defragmentation than before I used VMware. I am not sure whether using linked clones would reduce this.
I'd stick with your current system. In this situation, having isolated images gives you a lot more flexibility. It might cost you some more time doing updates and installs, but it will be worth it. And that's mostly stuff that you can have going in the background while you do other things, so if you manage your time well the time spent on that should be negligible.
Also, it's probably a good idea to keep your images on their own disk or at least on their own partition. If you do that it shouldn't have any effect on fragmentation on the rest of your system.
This really depends on what kind of, and how many, projects and clients you have. Building a new VM for every client doesn't scale well if you have dozens of clients, since you'll have to keep them all up to date.
I'd be wary of keeping files spread between the host and VMs as you mention though. It's better to keep all your dependencies in one place.
I'm interested to see others' VM strategies here too.
I work for CohesiveFT, the guys who make the Elastic Server platform - so I am biased - but we use the platform to deliver projects to partners and customers. It allows us to set up assembly-time components for different projects and then build them into VMs on the fly for VMware, Parallels, Xen and EC2. The service has a tagging feature so you can tag software packages, server specifications and templates and keep your assets straight.
You can also create assembly portals (think a content management system for assembling virtual servers) which you can control or even let customers have access to customizing their own virtual servers.
http://www.elasticserver.com
As an aside, have a quick browse of virt-manager, just to see what else is out there; you never know, you might even like it. I think such a tool can give you a bigger kick in performance and fewer disk-fragmentation issues. You would have to face a steep learning curve and the conversion time to make it all work, though.
If updates are your main time sink, try WSUS; it is nothing to do with VMs as such, but it helps with deploying Windows updates.
Lastly, check out Hanselman's blog post on Invirtus, "Virtual Machine Optimization at its best".