Deploying a Custom Program to a Hosting Service - C++

I am a total newbie when it comes to servers and hosting, although I have some experience programming in C, Java, etc. So excuse me if the question is 'absurd'.
I recently bought service from a hosting site, namely this (hostmds). I have some code I've written in C++ and I want to run it on the hosting site. So my question is:
Is this possible, or will I have to rewrite everything in a new language?
What should my approach be?
Edit: I have a shared hosting account.

You will have to get a "virtual private server" account from your host in order to do this. This will enable you to compile your program on the hosting machine and run it, essentially as if the server were a separate machine under your control.
This means you will also be responsible for maintaining your own HTTP server program (such as Apache, if running on a Linux/Unix host), and your own database servers and other support.
If you have a "shared hosting" account (the most common low-cost option) with SSH support, you may be able to compile your program, and even run it, but you will be subject to the whims (capricious or otherwise) of the administrators of your system (that is, you may find that libraries you need are removed or moved around).

What type of hosting is this?
What kind of application is it? Is it a daemon?
Depending on the amount of access rights you have, you can run the code in the cgi-bin folder or through the shell of the server.
Depending on the OS/compiler you've used to write your code, you might have to modify some things so that it'll work on the target OS. You should probably add some more details. :)

Many hosting services provide CGI/FastCGI/SCGI support that can be used for running C++ web apps. However, it depends on your host whether you can actually do this, as it may be difficult to get binaries built on some other system to run on the web hosting service (if you can even upload them in the first place).
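To make the CGI route concrete, here is a minimal sketch of what a C++ CGI program can look like. The use of QUERY_STRING and the "headers, blank line, body" response layout follow the CGI convention, but where the binary has to live (typically a cgi-bin directory) and how it is invoked depend entirely on the host.

    #include <cstdlib>
    #include <iostream>

    // Minimal CGI program: the web server passes request data through
    // environment variables and reads the response (headers, a blank line,
    // then the body) from standard output.
    int main() {
        const char* query = std::getenv("QUERY_STRING");  // may be null

        std::cout << "Content-Type: text/html\r\n\r\n";   // blank line ends the headers
        std::cout << "<html><body><h1>Hello from C++</h1>"
                  << "<p>Query string: " << (query ? query : "(none)")
                  << "</p></body></html>\n";
        return 0;
    }

You would compile this (ideally on the host itself, or on a system with matching libraries), upload the binary to the cgi-bin directory, and mark it executable.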
On shell services and virtual servers you can also run daemons (that listen directly on a port), but especially on shell services you cannot listen on low ports (below 1024), for security reasons.
Notice that the cheapest hosting packages generally only allow PHP at most, so you will need something more expensive for more access.
It is best to ask the hosting provider for further information, as these things differ wildly from one host to another.

Linux webserver sandbox for Python

I'm very new to web servers and I would like to offer my users the ability to use Python through my website.
My main problem is that Python is not harmless, even if it is armless (sorry for this very bad play on words). So I need some kind of sandbox, but for me this is more a concept than a technique I know how to apply.
So what would be the best way to do that?
Sandboxing
You will need support from the operating system to effectively sandbox an application.
On FreeBSD, you can use a jail. This has proven to be quite secure over the years. (While there have been vulnerabilities in this system, the consensus is that it is not possible for a program to break out of a jail without outside help. <1>)
On Linux you can use LXC.
On MS-Windows you could use Sandboxie.
Have a look at the comparison of virtualization technologies before deciding what to do. Personally I'd suggest only using technologies that offer "root privilege isolation".
Another possibility would be to use a virtual machine. But that would probably have more overhead.
And no matter what you use, you still need the firewall on the host to redirect some traffic to the sandbox/virtual machine.
Python security
CPython itself
The source code for the standard CPython is regularly audited by Coverity. In their 2012 scan they found 0.005 defects per 1000 lines of code. The average for open source projects is 0.69 defects per 1000 lines, and 1 defect per 1000 lines is accepted as a good industry standard.
So CPython itself doesn't have many defects.
Web programming with Python
The OWASP Python security project has identified security concerns in the CPython source code, as well as security concerns in modules and functions.
The built-in eval() function deserves special mention here.
It executes the Python code given to it as a string, without any checks or restrictions. So while it is sometimes very useful, this function should never, ever be given untrusted input from the web!
For instance, don't be tempted to use it to give your web app a built in calculator.
Their top-10 list of web app vulnerabilities also makes for interesting reading.
<1> From the documentation;
Jails are a powerful tool, but they are not a security panacea. While it is not possible for a jailed process to break out on its own, there are several ways in which an unprivileged user outside the jail can cooperate with a privileged user inside the jail to obtain elevated privileges in the host environment.
Most of these attacks can be mitigated by ensuring that the jail root is not accessible to unprivileged users in the host environment. As a general rule, untrusted users with privileged access to a jail should not be given access to the host environment.
You can run a CGI-enabled server on the local machine with a single command:
python -m http.server 8000 --cgi
It will serve whatever is in the working directory where you executed the command.
You can browse the pages served by pointing your browser to localhost:8000
If you are using Python 3.4 or later, you can restrict this to the local interface so that no one from the outside can access the pages:
python -m http.server 8000 --cgi --bind 127.0.0.1

How to use GSSAPI Kerberos in C/C++ client-server cross-platform programs? [closed]

I had to work "sporadically" with Heimdal / MIT GSSAPI for Kerberos authentication over the past couple of years. I had to build an application that was to be used as a web service running on a Linux box, and serve client applications like browsers running on Windows and/or Linux desktops and workstations. Surely not the easiest of beasts to tame. Eventually, when summarizing my work, I could record that the difficulties came from challenges in multiple dimensions. Getting started with GSSAPI programming is truly a challenge, just because of poor documentation and practically non-existent tutorials. Googling mostly results in either some theoretical discussion of what Kerberos is, or leads to content written with the presumption that you already know everything besides some particular semantic issue.
Some really good hacks around here helped me, so I suppose it would be a good idea to summarize the stuff from a developer's perspective and share it here as some sort of a wiki, to give something back to this fantastic place and to fellow programmers.
I haven't really done a wiki like this before, and I am surely no authority on GSSAPI or Kerberos, so please be kind, but more than that please contribute and correct my mistakes. Site editors, I am counting on you to do your magic ;)
Getting your project completed successfully will require 3 specific things to be done correctly:
Setup of your test environment
Setup of your libraries
Your code
As I said already, such projects are beasts, just because all three haven't been put together on the same page anywhere.
OK, so let's begin at the beginning.
Unavoidable theory for a newbie
GSSAPI helps a client application provide credentials that let a server authoritatively identify the user. This is extremely useful because server applications can tailor their responses per user if they wish to. Very naturally, therefore, both the client and the server applications must be kerberized, or as some would state, Kerberos-aware.
Kerberos-based authentication requires both the client and server applications to be members of a Kerberos realm. The KDC (Key Distribution Center) is the designated authority that rules the realm. Microsoft's Active Directory servers are among the most commonly encountered examples of a KDC, though you can of course be using a *NIX based KDC. But surely without a KDC there can be no Kerberos business at all. Desktops, servers and workstations identify each other as long as all of them remain joined into the domain.
For your initial experiments, set up the client and server applications in the same realm.
Kerberos authentication can also be used across realms, by creating trusts between the KDCs of those realms, or even by merging keytabs from different KDCs that do not trust each other. Your code will not really need any change to accommodate such different and complex-sounding scenarios.
Kerberos authentication basically works via "tickets" (or tokens). When a member joins the realm, the KDC grants tokens to each of them. These tickets are unique; time and FQDN are essential factors in them.
Before you even think of the very first line of your code make sure you have got these two right:
Pitfall #1 When you set up your development and test environment, make sure everything is tested and addressed by FQDN. For example, if you want to check connectivity, ping using the FQDN, not the IP. Needless to say, therefore, they must all have the same DNS service configuration.
Pitfall #2 Make sure all the host systems (those running your KDC, client software and server software) have the same time server. Time synchronization is something that one forgets, and realizes is amiss only after a lot of hair-splitting and head-banging!
Both the client and server applications need Kerberos keytabs. So if your application is going to run on a *NIX host and be part of a Microsoft domain, you have to get a Kerberos keytab generated before we start to look at the remaining preparatory steps for GSS programming.
The Step-by-Step Guide to Kerberos 5 (krb5 1.0) Interoperability is an absolute must-read.
GSS-API Programming Guide is an excellent bookmark.
Depending upon your *NIX distribution, you can install the headers and libraries for building your code. My suggestion, however, is to download the source and build it yourself. Yes, you might not get it right in one go, but it surely is worth the trouble.
Pitfall #3 Make sure that your application is running in a Kerberos-aware environment.
I really learnt this the hard way, but maybe because I am not so smart. In my earliest stages of struggling with GSSAPI programming, I had discovered that Kerberos keytabs were absolutely necessary for making my application Kerberos-aware. But I simply couldn't find anything about how to load these keytabs in my application. You know why?!! Because no such API exists!!!
Because: The application is to be run in an environment which is aware of the keytabs.
OK, let me make this simple: your application that is supposed to do the GSSAPI / Kerberos things has to run after you have set the environment variable KRB5_KTNAME to the path where you have stored the keytabs. So either you do something like:
export KRB5_KTNAME=<path/to/your/keytab>
or make use of setenv to set KRB5_KTNAME in your application, sufficiently early that it happens before the very first line of your code that uses GSSAPI is run.
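For example, a minimal sketch (the keytab path here is only a placeholder):

    #include <cstdlib>   // setenv

    int main() {
        // Must be set before the first GSSAPI/Kerberos call reads the environment.
        setenv("KRB5_KTNAME", "/etc/krb5/my-service.keytab", 1 /* overwrite */);

        // ... the rest of the application, including all GSSAPI calls ...
        return 0;
    }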
We are now ready to do the necessary things in the application's code.
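To give an idea of what the client side typically looks like, here is a rough, incomplete sketch of importing the target name and starting context establishment with gss_init_sec_context. The service principal "HTTP@server.example.com" and the way tokens get exchanged with the server are placeholders of mine, not part of the original write-up:

    #include <gssapi/gssapi.h>
    #include <cstring>
    // Link with -lgssapi_krb5 (MIT) or -lgssapi (Heimdal).

    int main() {
        OM_uint32 maj, min;

        // Import the service principal we want to talk to (placeholder name).
        char service[] = "HTTP@server.example.com";
        gss_buffer_desc name_buf;
        name_buf.value  = service;
        name_buf.length = std::strlen(service);

        gss_name_t target_name = GSS_C_NO_NAME;
        maj = gss_import_name(&min, &name_buf,
                              GSS_C_NT_HOSTBASED_SERVICE, &target_name);
        if (GSS_ERROR(maj)) return 1;

        // First round of context establishment; the default credential comes
        // from the caller's Kerberos-aware environment (ticket cache / keytab).
        gss_ctx_id_t    ctx          = GSS_C_NO_CONTEXT;
        gss_buffer_desc input_token  = GSS_C_EMPTY_BUFFER;
        gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;
        OM_uint32       ret_flags    = 0;

        maj = gss_init_sec_context(&min,
                                   GSS_C_NO_CREDENTIAL,    // default credentials
                                   &ctx,
                                   target_name,
                                   GSS_C_NO_OID,            // default mechanism (Kerberos)
                                   GSS_C_MUTUAL_FLAG | GSS_C_REPLAY_FLAG,
                                   0,                        // default lifetime
                                   GSS_C_NO_CHANNEL_BINDINGS,
                                   &input_token,
                                   NULL,                     // actual mech type (unused here)
                                   &output_token,
                                   &ret_flags,
                                   NULL);                    // time_rec (unused here)

        // While maj == GSS_S_CONTINUE_NEEDED: send output_token to the server
        // over your own protocol, feed the server's reply back in as
        // input_token, and call gss_init_sec_context again.

        gss_release_buffer(&min, &output_token);
        gss_release_name(&min, &target_name);
        return GSS_ERROR(maj) ? 1 : 0;
    }

The server side is the mirror image: it acquires credentials for its own service principal (this is where the keytab comes in) and runs a similar loop around gss_accept_sec_context.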
I understand there are quite a few other aspects that must be reviewed by the application developer, to write and test an application. I know of a few environment variables, that can be important.
Can anybody please shed some more light upon that?

Multi-CFML engine test environment

Does anyone have a good way to set up multiple CFML engines, and versions of them, together in a suitable environment for cross-testing a CFML-based application?
Ideally, I'd like this to be Ubuntu Server based, as I'm using it with VirtualBox (under Windows 7). Plus, it'd be helpful if it were possible to switch between them, so my laptop can cope with one at a time rather than all running at once. I'm thinking of the following:
Adobe ColdFusion 9
Adobe ColdFusion 10
Railo 3.3.x
Railo 4.x
OpenBD 2.x
I'd also like to get them serving from the same shared directory, so I don't have to have a copy of the code for each engine. Cheers
You mentioned being able to "switch between, so my laptop can cope with one at a time rather than all running at once", I'm guessing that you are thinking that each one will run on a different VM, or that they might require a huge amount of memory. I don't think you need to worry about that. Unless you require that they be on different machines, I think you could do this all on one VM and with one instance of a servlet container (like Tomcat).
From a high-level view, here is how I would do it.
Install Tomcat
Create or download .wars for each of the engines.
Deploy said .wars to that one instance of Tomcat
Set up Tomcat to use each of those servlets from a different host name (server.xml)
Create a code directory outside of Tomcat for your one copy of the code
Set up a Symbolic link in each webapp to link the code folder into the servlet
You should then be able to hit the same source from each engine by visiting the different host names in the browser.
I may be missing something. It has been a long time since I set something like this up. You'll likely need to make a bunch of tweaks (JVM settings, switching to the Sun/Oracle JVM vs. OpenJDK, etc.).
I don't think running this many engines will cause you great trouble. In my experience, for development, I have had 3 instances of CF9 running on Tomcat using only 189 MB of RAM, and each additional instance did not increase that number by a third; far less. It would not surprise me if you could run all of those handily with less than 512 MB of RAM, possibly even 256 MB if you are really hurting for memory.
I hope this helps.
For ColdFusion 10, Railo and OpenBD you would be looking at deploying with standalone installations of Tomcat, Jetty or JBoss.
For ColdFusion 9, probably the easiest solution is the "Enterprise Multiserver configuration" setup.
With these kinds of installations they are pretty much platform-agnostic.
The things to be aware of are the web server, proxy and JNDI ports that are used by each installation, but only if you want to run more than one server at a time.
After that, it's whether you are bothered about proxying from Apache or Nginx to the server instances, and the connector you want to use.
No idea if this helps...
Since you've mentioned VirtualBox, I'll share my personal approach to this task. It includes a few fairly simple steps:
Install Ubuntu Server as VirtualBox guest (host is also Ubuntu).
Set up only basic software like the JVM and updates. Set up virtual machine networking as a bridged adapter to use my Wi-Fi connection.
Configure my Wi-Fi router DHCP to assign static IP for MAC address of the virtual machine.
Add an entry to my (host) system hosts file: ip_assigned_to_vm virtual.ubuntu
Set up guest additions and mount my ~/www directory inside the machine to access web applications.
Now, when I need another machine for experiments, or some other configuration of software (I've tested ACF 10 and Railo 4 this way) I do two things:
Clone existing clean machine.
Make sure it is using the same MAC address with the bridged interface.
That's it.
It doesn't matter which of the machines I run; they can all be accessed as http://virtual.ubuntu (of course, it requires proper web-server configuration on the guest). At the same time they are independent, and it is completely safe to do anything I wish and test anything that runs on Ubuntu.
Obvious downsides are that I can run just one machine at a time, plus much more disk space is used. Not a problem to me.
I've tried the approach with Tomcat and multiple WARs, but it has a couple of issues: I can't use different JVM and Tomcat settings, and if I screw up the setup, all the Tomcat hosts go down.
Hope this helps.

Django web server -- where should I draw the line between production and development?

I know that it's bad to use the Django web server in production. There's been at least one Stack Overflow question on this already.
But I'm wondering about where to draw the line between development and production? If I'm only allowing HTTP access to one (or a few) IP addresses, then I know I'm in development. What if I open it to all IP addresses, but only e-mail a couple friends to see what they think of what I've built?
As far as I can tell, the problems with using the Django server are:
It's single-threaded
Security
I don't think (1) is likely to be an issue if I'm only sharing it with a few people. For (2), what's the worst-case scenario? Does it make a difference that I'm running on an Amazon EC2 server that I could very easily restore from a backup if something bad happened?
Well, the answer is actually very simple: you've left development when you have something you must protect, such as real users' personal information or real data in your database that you'd be afraid to lose.
Security isn't a concern until these things are present. The rule about not using the dev server in "production" is guidance, not mandatory. You can fire up the dev server in your production environment any time you want. However, you'd be silly to do so and then open up universal access to it, once your site is truly live and in use by the world.
Setting up mod_wsgi (or some other WSGI container) on a development machine takes all of 5 minutes, and can help you sort out deployment issues before you actually reach deployment. So really, why ever use the development server if you don't have to?

How to read the list of running processes on a remote computer in C++

What can be done to discover and list all running processes on a remote computer?
One idea is to have a server listening for our request on the remote machine, and the other is to use SSH.
The problem is I don't know whether there will be such a server running on the remote machine, and I cannot use SSH because it needs authentication.
Is there any other way out?
If you
cannot install a server program on the remote machine
cannot use anything that requires authentication
then you should not be allowed to know the list of all running processes on a machine. That request would be a security nightmare!
You can do something much simpler without (as many) security problems: scan the publicly available ports for programs that are running. Programs like nmap (nmap.org) let you learn a fair bit of information about the publicly running programs on machines.
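As a very rough illustration of that idea (nothing like a real scanner such as nmap; the target address is a placeholder and the blocking connect() calls make it slow), a naive TCP connect probe looks like this:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Naive TCP "connect scan": a port is reported open if connect() succeeds.
    int main() {
        const char* target = "192.0.2.10";  // placeholder address
        for (int port = 1; port <= 1024; ++port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) return 1;

            sockaddr_in addr;
            std::memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(static_cast<uint16_t>(port));
            inet_pton(AF_INET, target, &addr.sin_addr);

            if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
                std::printf("port %d open\n", port);

            close(fd);
        }
        return 0;
    }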
I have done something similar in the past using SNMP. I don't have the specifics in front of me, but something like "snmpwalk -v2 -c public hostname prTable" got me the process table. I recall later configuring SNMP to generate errors when the number of processes didn't meet our specified requirement, like httpd must have at least 1 and less than 50.
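If the remote machine runs an SNMP agent that exposes host information, one quick-and-dirty way to pull the process list from a C++ program is to shell out to snmpwalk and read its output. This is only a sketch; the host name and community string are placeholders, and I'm using the HOST-RESOURCES-MIB hrSWRunName column rather than the prTable mentioned above (which table is available depends on how the agent is configured):

    #include <cstdio>
    #include <iostream>

    // Rough idea only: run snmpwalk as a child process and print its output,
    // one line per running-process entry reported by the agent.
    int main() {
        const char* cmd =
            "snmpwalk -v 2c -c public remote-host HOST-RESOURCES-MIB::hrSWRunName";

        FILE* pipe = popen(cmd, "r");
        if (!pipe) {
            std::perror("popen");
            return 1;
        }

        char line[512];
        while (std::fgets(line, sizeof(line), pipe) != nullptr)
            std::cout << line;

        return pclose(pipe) == 0 ? 0 : 1;
    }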
I suggest you look at the code for a remote login program, rlogin. You could remotely log in to an account that has the privileges you need. Once logged in, you can fetch a list of processes.
This looks like a good application for a script rather than a C or C++ program.