Solid Edge remote access for only one user: best way? [closed]

We plan to give one of our users remote access to Solid Edge, as he works from home while the other users are in our offices. Currently, he has his own PC at home and has to come here regularly.
I already tried XenDesktop and XenApp (but not the latest release, as it is not available for demo purposes; only the 6.0 release, which I will try today).
Also, is the setup process for XenApp coupled with OpenGL (which Solid Edge needs) complex? I could not find any specific documentation about that usage.
The main factor is bandwidth: XenApp doesn't seem to provide bandwidth-compression features like XenDesktop does, which can decrease bandwidth usage to 2 Mbit/s. Outside our LAN, I have doubts it will be usable.
Well, do you see another, easier way to do what I would like to set up?
Could you provide me with some details that could help?

If you're on Windows, I'd suggest plain VNC on a desktop machine. Yes, I know it sounds stupid. For terminal-server-style access, NVIDIA provides appropriate combinations of hardware and software.
On Linux there's another possibility: Xpra. Simply put, Xpra is a special kind of compositing manager that uses a different display as the composition surface. It can operate over low-bandwidth links by using efficient video codecs for compression. Usually Xpra uses a virtual-framebuffer X server to run the clients on, but it can also be used in combination with an X server using a GPU.
However, each user gets their own X server for this. So if the system's GPU is used, only one user can use Xpra at a time (actually, since the NVidia drivers now claim to support hybrid graphics, it may be possible to exploit that somehow – I haven't researched that yet, though).
So the user starts Xpra using
xpra --start-child=xterm --vfb=/usr/bin/X start :100
And can then connect to the remote machine
xpra attach ssh:remote-host:100

Related

Getting started with network programming in Qt [closed]

I just started a summer job at my university doing work in a lab, and I've been given a rather large, vague problem to tackle without much guidance, so I was hoping someone could help point me in the right direction.
Essentially it's a 3D world built in Qt using VTK (the Visualization Toolkit) for use in therapy and rehab, and my task is to find a way to network two or more instances of the program so that users can share the same 3D environment (essentially a networked video game).
The professor wants it to be secure, for latency to be as low as possible, and for the program to record data after a session is complete.
So far I was thinking (without much experience) of doing a client/server model built in Qt, but I'm not sure where to start.
Q1:
Should I use Boost.Asio or the Qt library for networking?
Q2:
Are there any concepts I should be mindful of from the get-go for security, and network programming in general? (I've heard good things about Beej's Guide, and books by W. Richard Stevens)
Trying to answer your first question: it depends on which platforms you are targeting (Windows, Linux, OS X...). You could use the native OS socket APIs (BSD sockets or Winsock), but Qt provides very good abstractions over these, so for the sake of simplicity I would stick with it. I'm unfamiliar with Boost.Asio, but I'm pretty sure Qt can provide all you need regardless of the platform you intend to target.
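For illustration, here is a minimal sketch of an asynchronous Qt TCP echo server (the port number and echo behaviour are invented for this example, not taken from the question). Because QTcpSocket signals fire on the event loop, no extra reader threads are needed:

#include <QCoreApplication>
#include <QHostAddress>
#include <QTcpServer>
#include <QTcpSocket>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QTcpServer server;
    QObject::connect(&server, &QTcpServer::newConnection, [&server]() {
        QTcpSocket *client = server.nextPendingConnection();
        // Echo every chunk of data back to the sender as it arrives.
        QObject::connect(client, &QTcpSocket::readyRead, client, [client]() {
            client->write(client->readAll());
        });
        // Clean the socket up once the peer disconnects.
        QObject::connect(client, &QTcpSocket::disconnected,
                         client, &QObject::deleteLater);
    });
    server.listen(QHostAddress::Any, 5000);  // port 5000 is arbitrary

    return app.exec();
}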
As for the second question, you have to carefully analyze what kind of information you want to transmit and what will characterize the exchange.
If you intend to make it like a video game where players interact in real time, you are bound to use UDP sockets (although some data can go missing, UDP allows for "real-time" communication). Control messages can travel over TCP, as the impact of latency won't be so critical there and you want them to be sent reliably; so consider having two sockets (one TCP, one UDP) if that's the case, and use each for its purpose, as in the sketch below.
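As a sketch of the UDP half of that split (the peer address, port, and payload layout are all made up for illustration), per-tick state updates could be sent like this:

#include <QDataStream>
#include <QHostAddress>
#include <QUdpSocket>

// Hypothetical example: push a player's position over UDP each tick,
// tolerating occasional loss; reliable control traffic stays on TCP.
void sendPositionUpdate(QUdpSocket &socket, float x, float y, float z) {
    QByteArray datagram;
    QDataStream out(&datagram, QIODevice::WriteOnly);
    out << x << y << z;  // serialize this tick's state
    socket.writeDatagram(datagram,
                         QHostAddress("192.0.2.10"),  // example peer
                         45454);                      // arbitrary port
}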
Those resources you linked are quite informative, but assuming you already know TCP and UDP and their features, I would suggest you brush up on your multithreading skills. Qt offers good infrastructure for this, but topics like asynchronous I/O (how to implement selectable sockets) may help you create a better design that removes a lot of the overhead of additional threads (especially for reading).
These are my 2 cents.
Good luck with your project; I'm sure it will be a good chance for you to learn and put some theory into practice.

Usage-based Licensing framework [closed]

I am just trying to look at different licensing models and potential technical C++ implementations.
Suppose you have a desktop application including several algorithms (A1, A2, A3). This application communicates with some server (potentially in the cloud). These "local" algos may be used independently. Is there any solution/framework out there which would allow us to bill their usage independently?
For example, one user uses algos A2 and A3. Before saving files, the software computes the final bill, sends it to some server, asks the user to pay it, and generates the results file.
This would make it possible to ship a potentially expensive piece of software "for free" to users, without the risk of them spending an enormous amount of money upfront while unsure whether the software will actually be heavily used.
Related question: what are the risks?
Though your pricing model is feasible at large scale, and is probably the same as what the cloud offers,
I don't think any native application would be scalable/feasible with this model.
Most licenses for software that is too costly to buy for each user are instead floating licenses with a cap on the number of simultaneous users.
Pay-as-you-use is great, but it is the same as cloud computing, and then the question is simple:
do you want to reinvent the wheel?
Unless you want to invest in your own cloud server, you can easily put your application on someone else's cloud.
If you are ready to invest in building and maintaining your own cloud, then you should go ahead.
Edit:
You can use a web service plus a payment method: expose a web service that calculates the price to be incurred. I would personally use Java or C# for this purpose,
as both have plenty of support for it; for the wrapper around the C++ code I would use JNI or C++/CLI.
Another thing: I have not come across any open-source tool for this, as each web service has its own requirements. You can copy the architecture, but there is no ready-made implementation.
C++ code -> web service -> price billing -> result returned to caller.
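As a rough client-side sketch of that pipeline in C++ (the endpoint URL and JSON payload are invented for illustration, and libcurl merely stands in for whatever HTTP client you prefer):

#include <curl/curl.h>
#include <string>

// Hypothetical sketch: report one algorithm run to a billing web
// service. The URL and payload format are assumptions, not a real API.
bool reportUsage(const std::string &userId, const std::string &algoId) {
    CURL *curl = curl_easy_init();
    if (!curl) return false;

    const std::string payload =
        "{\"user\":\"" + userId + "\",\"algorithm\":\"" + algoId + "\"}";

    struct curl_slist *headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "https://billing.example.com/usage");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());

    const CURLcode res = curl_easy_perform(curl);  // POST the usage record

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}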
Regarding technical difficulties:
it would not be possible to do everything in C++; you may require many other tools alongside it.
Consider such a scenario:
The program processes the data on the customer's computer, produces some cryptic data at this stage and calls home (your server).
The server decodes the data, performs the final analysis, and sends the client the message "It will cost you X dollars to see the result. Do you want to proceed?"
If yes, the client makes the payment and gets the result.
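A client-side sketch of that handshake could look like the following; every function here is a hypothetical placeholder (stubbed out so the sketch compiles) for your own transport, crypto, and payment layers:

#include <iostream>
#include <string>

// Hypothetical placeholders for the call-home flow described above.
std::string encryptIntermediateData(const std::string &d) { return d; }
std::string sendBlobAndGetQuote(const std::string &)      { return "X dollars"; }
bool userAcceptsPrice(const std::string &)                { return true; }
bool processPayment()                                     { return true; }
std::string fetchDecodedResult()                          { return "result"; }

bool runBilledAnalysis(const std::string &intermediate) {
    const std::string blob  = encryptIntermediateData(intermediate);
    const std::string quote = sendBlobAndGetQuote(blob);
    if (!userAcceptsPrice(quote) || !processPayment())
        return false;  // user declined the quote or payment failed
    std::cout << fetchDecodedResult() << '\n';  // show the paid result
    return true;
}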

Virtual machine monitoring/optimization - what tools? [closed]

What tools are available to sit on top of VMware or Xen (or other VM managers) and monitor the VMs?
I know there are a few solutions like Netuitive, CA Infrastructure Manager / eHealth, and Nimsoft - what are their areas of application, and how popular are they?
CA has root-cause analysis of potential problems with the host. Is there something that performs other types of analysis or even attempts to fix the problems?
Alex, which VMs are you talking about in particular? Xen, VMware, Virtual PC, etc.? If you just want to monitor performance and status, you can treat them like regular servers - nagios or something. But I haven't heard of a tool that would monitor their "performance as VMs" across various architectures.
You can also set up nagios or GroundWork or splunk or something to monitor your hosts if you have a heterogeneous environment with both VMware and Xen.
Could you clarify or expand a little bit?
Based on your comment below:
OK, I see - then the tools I mentioned may work for you. I use GroundWork - it's based on nagios, but in addition has some nicer, more developed graphs and history analysis than nagios, and it works with LDAP :-) (see monitoringforge.org - both commercial and free - for over 2000 extensions, including some support for Xen and VMware).
Also check out splunk (they have a community version), and there is another interesting tool called Spiceworks. I tried both, but like GroundWork more for my needs.
To be honest, I think splunk is more of what you are looking for, since it's more of a data collector; you can then do what you want with that data - like run reports in MySQL or write some nifty web app that draws graphs and such.
P.S.:
A word of warning - I run the GroundWork community version on Debian, and it still required a 2.8 GHz dual-core CPU and 4 GB of RAM to run properly. It's kind of a memory hog, but other than that it's been running without a hitch for a year.
You might want to look at "BMC ProactiveNet".
It's pretty expensive, but they have some tooling that monitors VMware VMs (not sure if you can get a trial or something).
(http://www.bmc.com/products/product-listing/ProactiveNet-Performance-Management.html)
VMware monitoring can mean different things. If you are looking to monitor the hypervisor, the built-in tools from VMware (vCenter, for example) are pretty good. If you want to look into the VMs in depth, or you want to monitor other elements of the infrastructure (e.g., applications), you should consider a third-party tool. For instance, we use the eG VMware Monitoring tool - http://www.eginnovations.com/web/vmware.htm

A process hidden from Process Monitor [closed]

I need to create an application which will be reading and writing to files (C++/MFC), but I need the process not to appear in Process Monitor (the tool from Sysinternals).
From the reactions of others, I now gather that this seems "illegal", but that is the request of the client I'm dealing with, so I guess I just have to satisfy it.
One of the uses of Process Monitor is to find and remove malicious software that tries to hide from the user:
Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and much more. Its uniquely powerful features will make Process Monitor a core utility in your system troubleshooting and malware hunting toolkit.
I am not saying that what you want to do is impossible, rather that you are trying to do something that feels a bit dishonest.
That being said, I would like you to consider that you are trying to hide a process from a utility written to find anything and everything, by folks who are a lot smarter than you and me.
I'll assume you're not planning to do anything malicious. If that's the case, it's important you don't hide your application from diagnostic tools. You can't guarantee your application is bug free. Even if it is, you can't predict its interaction with other applications. Because of that, you should leave it visible so other technical people can troubleshoot if something goes wrong.
Regarding your comment, "so, I guess I just have to satisfy the client's request" - not if it's illegal or technically dangerous for them. You need to protect yourself and them from bad judgment.
Process Monitor reads data at a very low level, so to hide from it you would have to take over certain NT kernel structures and methods so that they report different information to Process Monitor than what Windows itself sees. Doing this is platform- and version-dependent (i.e., Windows XP SP1 is different from Windows XP SP2, which is different from Vista x64, etc.). It's nearly impossible to do correctly without creating an incredible number of system-instability issues.
While it's not strictly illegal, every company that has done it and been discovered (which you will be) has faced a lot of backlash and criticism from users and security professionals. Again, while not explicitly illegal, the kinds of changes required can open severe security holes on end users' machines. Should they suffer major system crashes or be exposed to hackers/viruses, you may be legally liable for the damage.
Possible semi-legitimate (though I wouldn't want my name associated with them) applications you would want to keep people from seeing are DRM enforcers and nanny-cam style monitors for kids and errant spouses.
That said, I don't think your client really wants you to subvert such an important system. They likely want something less rootkit-like but they picked up the vocabulary watching "24" and have failed to adequately express what it is they want done.
My advice would be to go back to them for clarification. If they do indeed want something to be completely undetectable then you need to decide based on your own conscience whether to proceed or leave the client.

How do you organize VMware Workstation images? [closed]

I currently use VMware workstation to create separate workspaces for various clients that I do work for. So for a given client I only install software needed for a specific job, and don't have to worry about software A for client #1 mucking up software B for client #2.
With an adequate sized hard drive this works well but I am not sure I am using VMware to its best advantage. The steps I go through when setting up for a new job are:
Do a Windows Update on a base VMware image that I have saved away.
Make a full clone of the base image and rename it for the new job.
Launch the new image and configure/install as necessary.
After doing this for a while, I now have a collection of VMware images sitting around that are all at different update levels unless I manually go into each image and kick off an update cycle. And if there is some new tool that I want in all of my images, I also have to go around and do multiple manual installs. But I feel secure knowing that each image is self-contained (albeit at 10+ GB a hit) and that if anything happens to a single image, the issue cannot propagate into any other image. (Note that I do make regular backups of all my images.)
So my question is: am I doing this the best way, or should I consider linked clones so that I only have to do a Windows update or a common application install on my base system? What are the pros and cons of each way of organizing things?
In addition, although I try not to keep data files inside the image's local disks, I find that the share to my local hard drive seems very slow compared to the image's file system, hence I tend to leave work inside the image. Should I force myself not to leave data inside the image? Or, put another way, how much corruption can a VMware image take before any single file in the image's filesystem becomes inaccessible?
Edit
Some thoughts I have had since posting this question
With a full clone, I can archive old work away and remove it from my primary hard drive.
Linked clones take up a lot less space than a full clone, but I am concerned about archiving a linked clone away somewhere else.
The time taken to make a full clone is not too significant (5-10 minutes), and I generally do it on a weekly basis.
However, I also tend to do a lot of "let's see what happens with a clean install", especially when I am documenting how to install applications for the client.
I have found that my VMware images cause a lot of fragmentation on my hard drive, so I also end up doing a lot more defragmentation than before I used VMware. I am not sure whether using linked clones would reduce this.
I'd stick with your current system. In this situation, having isolated images gives you a lot more flexibility. It might cost you some more time doing updates and installs, but it will be worth it. And that's mostly stuff that you can have going in the background while you do other things, so if you manage your time well the time spent on that should be negligible.
Also, it's probably a good idea to keep your images on their own disk or at least on their own partition. If you do that it shouldn't have any effect on fragmentation on the rest of your system.
This is really going to depend on what kind of and how many projects and clients you have. Building a new VM for every client doesn't scale well if you have dozens of clients, since you'll have to be keeping them all up to date.
I'd be wary of keeping files spread between the host and VMs as you mention though. It's better to keep all your dependencies in one place.
I'm interested to see others' VM strategies here too.
I work for CohesiveFT, the guys who make the Elastic Server platform - so I am biased - but we use the platform to deliver projects to partners and customers. It allows us to set up assembly-time components for different projects and then build them into VMs on the fly for VMware, Parallels, Xen and EC2. The service has a tagging feature so you can tag software packages, server specifications and templates and keep your assets straight.
You can also create assembly portals (think a content management system for assembling virtual servers) which you can control or even let customers have access to customizing their own virtual servers.
http://www.elasticserver.com
As an aside, you can have a quick browse of virt-manager, just to see what else is out there... you never know, you might even like it. I think such a tool can give you a bigger kick in performance and fewer disk-defragmentation issues.
You would have to accept a steep learning curve and the conversion time to make it all work, though.
If updates are your main time sink, try WSUS; it's not related to VMs itself, but it helps with deploying Windows updates.
Lastly, check out Hanselman's blog post on Invirtus, "Virtual Machine Optimization at its best".