Best way to pass variable to remote (Golang) app from Django View - django

I'm creating a Django web application that will assist in bare-metal server deployments: a bare-metal server will PXE boot into a custom LiveCD and send a cURL command to register itself with a DRF REST API.
When Django receives the POST request, it will remotely start a Go app that finds the bare-metal server based on the entries in the REST API and then starts configuring it. What would be the best way to identify/introduce the bare-metal server to my Go app?
My thought is to either pass a parameter that identifies the server, so the Go app can pull that server's info from the REST API, or to add a Boolean field to the REST API so the Go app looks for entries that are TRUE and flips them to FALSE when it starts setting up the bare-metal server.
Would that be the best way to get this done, or is there a better way?
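To make the second option concrete, here is roughly what I have in mind on the DRF side (the model, field, and endpoint names are just placeholders):

# Hypothetical DRF sketch of the "Boolean field" approach (names are made up).
from django.db import models
from rest_framework import serializers, viewsets

class BareMetalServer(models.Model):
    mac_address = models.CharField(max_length=17, unique=True)
    ip_address = models.GenericIPAddressField(null=True, blank=True)
    needs_setup = models.BooleanField(default=True)  # the Go app would flip this to False

class BareMetalServerSerializer(serializers.ModelSerializer):
    class Meta:
        model = BareMetalServer
        fields = ["id", "mac_address", "ip_address", "needs_setup"]

class BareMetalServerViewSet(viewsets.ModelViewSet):
    # The LiveCD's cURL POST creates an entry; the Go app can then query
    # for needs_setup=true entries and PATCH them to false as it starts work.
    queryset = BareMetalServer.objects.all()
    serializer_class = BareMetalServerSerializer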

Actually, PXELINUX comes with an identification mechanism based on the system's MAC address, and the configuration can be customized accordingly. Since you need to keep an inventory of your bare metal servers anyway (port security, anyone? ;) ), you should already know the MAC addresses of all the interfaces on them.
Your config directory usually looks like this (the path prefix may differ):
/srv/pxe/pxelinux.cfg/default
Now, what happens is that your system starts up, sends a DHCP request, and gets an offer containing the DHCP options "next-server" and "filename". When the system accepts said offer, it connects to the "next-server" and requests "filename", usually pxelinux.0. Here is your first potential hook: write a TFTP server that handles the request and registers your system.
Once pxelinux.0 is executed, it reads the above config file. But here is the thing: say the MAC address of the system is 23:67:33:5a:cc:e8, and the file
/srv/pxe/pxelinux.cfg/01-23-67-33-5a-cc-e8
exists (the 01- prefix is the ARP hardware type), then that file will be read instead. Which is your second hook: the request will show up in the TFTP server's log.
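To give an idea of what such a hook could look like, here is a rough sketch of a registration-only TFTP listener in Python; the /api/servers/ endpoint and payload fields are assumptions, and a real deployment would still need a proper TFTP server to actually serve the files:

# Sketch only: watches for PXELINUX config requests and registers the MAC.
# It does NOT serve files; run a real TFTP server for that. The API endpoint
# and payload fields are hypothetical. Uses the third-party requests library.
import socket
import struct
import requests

RRQ = 1  # TFTP opcode for a read request

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 69))  # needs root; pick another port for testing

while True:
    data, addr = sock.recvfrom(1024)
    if struct.unpack("!H", data[:2])[0] != RRQ:
        continue
    filename = data[2:].split(b"\x00")[0].decode(errors="replace")
    # PXELINUX requests pxelinux.cfg/01-<mac-with-dashes> among other names
    if "pxelinux.cfg/01-" in filename:
        mac = filename.rsplit("01-", 1)[1].replace("-", ":")
        requests.post("http://django-host/api/servers/",
                      json={"mac_address": mac, "ip_address": addr[0]})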
Regardless of whether the default or a system-specific config file is used, we are basically talking about a SYSLINUX-style config file. Assuming you use Kickstart to install the system, it will look something like this:
default linux
prompt 0
timeout 1
label linux
kernel /images/yourdistro/vmlinuz
ipappend 2
append initrd=/images/yourdistro/initrd.img console=ttyS0,115200
Now, here is the thing: you have several ways to execute a custom program on boot:
Append the path to your executable to the append parameter. By convention, the kernel passes all parameters it does not recognize on to PID 1. Though I have not tested whether systemd adheres to that convention and simply executes a parameter it does not recognize, I assume as much.
cron. Most cron implementations nowadays support the @reboot time definition.
the init system, be it systemd, OpenRC, or good ol' SysV init.
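Whichever of those options you use to launch it, the registration program itself can stay tiny. A hedged sketch, assuming a DRF endpoint at /api/servers/ (the curl one-liner from the question would do the same job):

# Hypothetical boot-time registration script; the endpoint and field names
# are placeholders. Launch it from the kernel append line, an @reboot cron
# entry, or an init/systemd unit, per the options above.
import json
import urllib.request

def first_mac(interface="eth0"):
    # Read the MAC address straight from sysfs
    with open("/sys/class/net/%s/address" % interface) as f:
        return f.read().strip()

payload = json.dumps({"mac_address": first_mac(), "needs_setup": True}).encode()
req = urllib.request.Request(
    "http://django-host/api/servers/",
    data=payload,
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)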
Last but not least, how to configure the machine: I strongly advise against reinventing the wheel. I had quite similar requirements in a (closed-source) project. We used Kickstart to do the basic system installation and simply fired a curl command at Ansible Tower after reboot, triggering the more detailed configuration. Since we had a DHCP server with the MAC, an IP reserved for said MAC, and a hostname readily configured (dnsmasq, cough, cough), that was not much of a problem. Basically, all we had to do manually was register the MAC address, assign an IP and a hostname, then fire up the machine.

Related

DLL injection for browser alone

I want to be able to type www.mydomain.com into my web browser but have the actual traffic go to something.mydomain.com. I thought to maybe inject a DLL into the browser process (firefox.exe). I tried some methods like hooking and DLL injection using CreateRemoteThread, etc., but since I'm a newbie, especially when it comes to C++ or assembly-level languages, I couldn't understand much of it. The ones I could understand are no longer compatible with Win 7 or higher. Could someone help me by pointing me in the right direction?
All I want to know is how to intercept/manipulate an outgoing URL request from the browser. I found that TCP/IP first creates a socket using the socket() function and then calls connect(). Is there a way to intercept that?
I want this to be easy, simple, and compatible with Windows XP through 10. If it's not easy, I'm okay with building different code for different versions. If the script is cross-platform, it would be even more awesome.
I don't think what you want to do (or more precisely, the way you want to do it) is possible without owning the domain and setting up an HTTP redirect on the server.
Modifying the hosts file, or setting up your own DNS server and having the machine or its router use it to resolve DNS queries, is really the only way, but...
Depending on the browser, this may not even be possible. Current versions of Firefox and Chrome implement DNS prefetching, which essentially means they come preloaded with a bunch of popular DNS entries for faster page-load times.

Run Linux command remotely from Window based application

I want to run a Linux command remotely from a Windows-based Qt C++ application, programmatically. What is the simplest way to do it?
You need some sort of server on the Linux machine, and your Windows machine will be a client. I'd say the easiest way would be to just write a PHP script that runs your command, drop it in your www root, and have your Windows machine fetch that URL.
At the end of the day, without knowing your requirements with regard to security and the kinds of commands you'll be running, it's very difficult to give a definitive answer to this question.
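If you would rather not pull in PHP, the same idea can be sketched with nothing but Python's standard library; the command whitelist below is made up, and whitelisting is the bare minimum given the security caveat above:

# Rough sketch of the "script behind a URL" idea using Python's standard
# library instead of PHP. Only whitelisted commands can be run.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED = {"uptime": ["uptime"], "disk": ["df", "-h"]}  # illustrative whitelist

class CommandHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.lstrip("/")
        if name not in ALLOWED:
            self.send_error(404, "unknown command")
            return
        output = subprocess.run(ALLOWED[name], capture_output=True).stdout
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(output)

HTTPServer(("0.0.0.0", 8080), CommandHandler).serve_forever()

The Windows/Qt client then only needs to fetch a URL like http://linux-host:8080/uptime.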
Simply connect to a telnet server on the Linux machine using sockets, and send the commands.
This actually requires very little code. Check the Java version here:
Sending telnet commands and reading the response with Java
You can do something similar with Qt/C++ as well.
A simple server-side program that handles the requests and then runs them, e.g. via the system() function, would be the "remote" part of the solution.
On the client side, a simple text field handled by a function that connects to the server and sends the command-run request.
The most important thing in this solution will be to take care of security.
One way to do it is to have a client-server model: the server resides on Linux and the client can be your computer. That way you can send commands to the server and have its output returned to you. That's one way I think of this problem.
Use UPnP to get past the firewall (or use NAT traversal or UDP/TCP hole punching). Otherwise (without forwarding the port) it would be impossible to reach the server.
The second option is to write your own RSH/SSH utility (or use PuTTY or other pre-existing software).
You could use Plink on Windows, whatever the version; if you can run PuTTY, you can run Plink (see the PuTTY Plink documentation). Using the executable, you can automate things. Otherwise, if you're looking for a specific programming language, you'd still depend on some SSH library. If you're writing your own installer, you could include plink.exe in it and distribute it with your application.
From the documentation page:
Z:\sysosd>plink login.example.com 'echo "Hello World"'
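If you want to drive it from code rather than a terminal, the same call can be automated; here is a quick sketch (the host and command are placeholders, and in a Qt application QProcess would play the role subprocess plays here):

# Sketch of automating plink from code; host and command are placeholders.
import subprocess

result = subprocess.run(
    ["plink", "-batch", "user@login.example.com", 'echo "Hello World"'],
    capture_output=True, text=True,
)
print(result.stdout)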

Realtime server push with Socket IO (or Strophe.js), XMPP and Django

I have a couple of Android and iOS native mobile applications that I wrote which connect directly to an XMPP server that I host. They push and pull realtime data through XMPP. I also use some of the XMPP XEP extensions. For other operations, I have a Django application running on the same server, which all the mobile applications consume through an HTTP REST interface. I use Celery and Redis on the Django side to do some operations asynchronously (like heavy batched writes to my DB).
This all works fine and dandy. Yay.
But now I want to write a web front-end for all of this, so I started researching my options, and well - there are so many ways to skin this cat that I wanted to check with the SO community first.
The idea of a JS library that gives me a unified API for socket communication (i.e. it tries different WebSocket implementations or falls back to Flash) appeals to me, hence why I mention Socket.IO. The idea of having to run a Node.js server, well, not so much (one more thing to learn), but if I have to, I definitely will. I know that some people use gevent as a replacement for the Node server. Others decide to write a small Node.js service which they connect to the rest of their stack. I would probably do this.
Another option is to use a JS XMPP library like Strophe, which I don't think has a Flash fallback. Also, I would need to research what this means for my server.
I have read several of the Stack Overflow answers on how to do Comet with Django - hence why it seems that there are several options.
The question is:
If I want the advantages of Socket.IO behavior (with the fallbacks), I want to push realtime data to the web client (data which is being fed to the server through XMPP), and I want to use Django, what is my best option?
Update: The XMPP server that I use is ejabberd, which also supports BOSH. I realize that I could use Strophe.js, and thus my communication would go over a kind of long-polling HTTP connection instead of WebSockets. As far as I can tell, there are some open source XMPP-over-WebSocket libraries, but AFAIK the community is not as active as Socket.IO's.
Update 2: The browsers that I need to support are only modern browsers. I guess this means that the Flash fallback will not be that important, which leans me towards Strophe.js.
I think once you get your hands dirty with some Node you'll find that straying from Node for socket.io is going to be much harder. There are very easy-to-use XMPP modules for Node ready to go (see https://github.com/astro/node-xmpp). Remember, Node is all JavaScript, so you're probably familiar with programming in it already.
Personally, I've had some memory-leak issues using Node 0.6 or higher; Node 0.4 worked without those issues. If you are new to GitHub (as I was before playing with Node), here is how you would get going with a Node server.
Getting Node
Log in to your Linux box and cd to your favorite directory (I'll assume /)
git clone https://github.com/joyent/node.git
cd /node
git tag -l (this will list all available versions of node)
git checkout v0.6.16 (this checks out the 0.6.16 version of node; you could replace that with v0.4.12, for example, if you have memory issues)
./configure
make
make install
You'll need certain development tools to build it, such as g++, but at this point you'll have a working node command.
Installing Node Modules like xmpp
Node has a nice number of modules, and most things have already been written for you. There is a search facility at http://search.npmjs.org, or you can access all modules directly from your shell using the npm command. npm is Node's tool for installing and managing Node modules. You can type npm search xmpp to search for all XMPP modules, for instance. To install a basic XMPP library for Node you would do npm install node-xmpp. By the way, most GitHub node module pages include instructions in the front-page README file.
Keeping Node Running in Production
This threw me when I first started out. If any errors are not caught, Node will simply die. So, you can either:
1. Make sure there are no errors whatsoever, or that they are all caught (unlikely, because even Node itself will error), or
2. Use the uncaughtException handler to catch these problems. You would use code like this in your program:
var util = require("util");  // needed for util.log below

process.addListener("uncaughtException", function (err) {
    // Log the exception instead of letting the process die
    util.log("Uncaught exception: " + err);
    console.log(err.stack);
    console.log(typeof(this));
    // maybe email me?
});
Be Extra Safe and Use Forever
Even with the uncaughtException handler, your program might still die in production: memory running out, segfaults, who knows what. That's where it pays to use something like the wonderful Node module called Forever (see https://github.com/nodejitsu/forever). You can type npm install forever -g to install it. Note the -g option, which puts Forever in the global node module directory; without -g it puts the module in the current working directory. You'll then be able to type something like (assuming your node program is called my_program.js) forever start my_program.js, and Forever will make sure that if your program dies it gets restarted.
Not sure why you'd need a Flash fallback if you're going to do BOSH (XEP-0124, XEP-0206), which is what strophe.js does. If you don't need to support IE7, you can do CORS from strophe.js, and you don't even need a same-origin proxy. IE6 will work because it's insecure, and IE8+ supports a just-barely-working form of CORS.
To get information from Django through XMPP to your client, make a component connection (XEP-0114) to your server from your Django app using your favorite Python XMPP library, such as SleekXMPP. Arrange for that connection to be relatively long-lived, for performance (i.e. don't create a new one for each client connection). Send protocol as needed.
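A bare-bones sketch of such a component connection with SleekXMPP might look like the following; the component JID, shared secret, host, and port are placeholders for whatever your XMPP server is configured with:

# Hedged sketch of an XEP-0114 component connection using SleekXMPP.
# JID, secret, host, and port are placeholders.
from sleekxmpp.componentxmpp import ComponentXMPP

class PushComponent(ComponentXMPP):
    def __init__(self):
        super(PushComponent, self).__init__(
            "push.example.com", "component-secret", "localhost", 5347)
        self.add_event_handler("session_start", self.start)

    def start(self, event):
        pass  # handshake done; keep this connection around and reuse it

    def notify(self, user_jid, text):
        # Call this from your Django/Celery code when there is something to push
        self.send_message(mto=user_jid, mbody=text, mfrom=self.boundjid)

xmpp = PushComponent()
if xmpp.connect():
    xmpp.process(block=False)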
You didn't mention what XMPP server you're using. XMPP servers that don't support BOSH are getting rare, but if you've got one, you might need Punjab as a BOSH-to-XMPP proxy, or you might want to switch to a newer server, such as Prosody.
First of all, full disclosure: I work for a company called PubNub, which I'm going to mention shortly.
There is a whole range of hosted bidirectional messaging services (sometimes called IaaS - Infrastructure as a Service) that I think are worth considering: Pusher, Firebase, Flotype, PubNub, and others. I'm reasonably confident you could use any of them for what you want to accomplish. Firebase has a built-in database that ties right into their service, which is a pretty cool feature, but probably not useful for your particular use case (I assume you already have a database on your backend).
I can't speak too heavily about our competitors, but as far as wanting a JavaScript library on the frontend that communicates with your Python backend goes, we (PubNub) provide very similar APIs in both languages that communicate over the same data bus in the cloud. So you can send messages with Python and catch them with JavaScript, or vice versa. We even wrote a PubNub-hosted version of socket.io, which you could use instead of our vanilla JavaScript API and which would still tie into your Django backend in about 10 lines of code.
Finally, the nice thing about using an IaaS (or at least us; again, I'm not certain about the others) is that we handle the tricky scaling problem for you. If you reach the point of a million simultaneous users and need to push something to them in real time, you'll find that's no problem.
We are using real-time push as well, with Django and Celery. When I first created the architecture, I also researched my options. Eventually, I decided that I'd rather focus on getting the app just right than on fiddling around with devops work. There are several services out there that offer hosted real-time push technology that can be easily integrated with any app.
I chose PubNub and I couldn't be happier. They support socket.io for the client side and have a Python lib I use from Django and Celery workers. They also have SDKs you could use from native mobile apps.
I know, you already have a working setup in place. But I'm betting that the time it will take you to replace your current setup with such a hosted solution will be less than the time it will take you to find a good solution for what you're looking for and implement it. Also, keep in mind the maintenance costs down the road (especially if you opt for a lib which is not well maintained).
True, you will be paying for the service, but the price is very reasonable and you will be getting a solid service with nice perks like colocation.
I'm not affiliated with that company, just a happy customer. There are other similar services out there.

Which one can I choose? SSH or AMQP?

My application runs in Windows and is implemented using C++/Qt.
The application will invoke another application deployed on a Linux server, which in turn will invoke some third-party tools. The Linux server application will send status updates based on the running of those third-party tools. Usually the third-party application will run for hours and the updates will be sent at various stages. The Linux server may also have to send some files in addition to the status updates, and the Windows client will also send some files required for running those third-party tools.
I planned to implement this with libssh2, since file transfers can be done and applications can be executed using libssh2_channel_exec(). Updates can be sent and received through non-blocking socket transfers. Also, the transfers must be secured and password-authenticated, so I thought SSH would meet my requirements.
I also looked into Apache Qpid, which implements AMQP. Messaging seems more appropriate for my status updates, since the updates are infrequent. But I am not so sure about secured connections, password authentication, and application invocation.
So, which one should I choose between these two? Or is there a better option available? I am not very used to network programming, so any pointers or links regarding this are welcome.
Have you considered some web-based solutions like XML-RPC, REST, SOAP, or others? Note that you can either have a constant network connection and stream updates, or just make your client ask for updates as often as it needs.
Also, I think that building a solution based on one of these protocols will make coding easier - no need for low-level solutions when you have great libraries. As for the security part, I would consider SSL, as used by HTTPS, to be secure enough. Of course you can also do it hybrid-style, for example an SSH tunnel to the server plus SSH key authorization.
But if you are sure you want SSH or AMQP, then use the first one - I think it has better security. Also, try not to use username/password authentication; instead use the keys mentioned above.
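To make the "great libraries" point concrete, here is a tiny sketch of the polling variant using XML-RPC from the Python standard library; the job IDs and status strings are made up, and in practice you would put this behind HTTPS or the SSH tunnel mentioned above:

# Server side (on the Linux box): expose a status-polling call over XML-RPC.
# The job/status structure is purely illustrative.
from xmlrpc.server import SimpleXMLRPCServer

STATUS = {"job-1": "running third-party tool, stage 2 of 5"}

def get_status(job_id):
    return STATUS.get(job_id, "unknown job")

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(get_status)
server.serve_forever()

The Windows/Qt client (or, for testing, a few lines of Python using xmlrpc.client.ServerProxy("http://linux-host:8000/")) can then simply call get_status("job-1") as often as it likes.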
Start with SSH, and then consider layering other protocols on top. You can use SSH port forwarding to create a VPN connection to a server, and maybe that will make it easier to use something like AMQP or 0MQ.

how to read list of running processes on a remote computer in C++

What can be done to discover and list all running processes on a remote computer?
One idea is to have a server on the remote machine listening for our requests, and the other is to use SSH.
The problem is I don't know whether such a server will be running on the remote machine, and I cannot use SSH because it needs authentication.
Is there any other way?
If you
cannot install a server program on the remote machine
cannot use anything that requires authentication
then you should not be allowed to know the list of all running processes on that machine. That request would be a security nightmare!
You can do something much simpler without (as many) security problems: scan the publicly available ports for programs that are running. Tools like nmap (nmap.org) tell you a fair bit about the publicly reachable programs on a machine.
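For a rough idea of what that looks like without nmap, a few lines of socket code can probe a handful of well-known ports; this is purely illustrative (the host name is a placeholder), and nmap does it far better:

# Crude port probe for illustration only; the host name is a placeholder.
import socket

WELL_KNOWN = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def probe(host):
    open_ports = []
    for port, name in WELL_KNOWN.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                open_ports.append((port, name))
    return open_ports

print(probe("remote-host.example.com"))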
I have done something similar in the past using SNMP. I don't have the specifics in front of me, but something like "snmpwalk -v2c -c public hostname prTable" got me the process table. I recall later configuring SNMP to generate errors when the number of processes didn't meet our specified requirements, e.g. httpd must have at least 1 and fewer than 50 processes.
I suggest you look at the code for remote login, rlogin. You could remotely log in to an account that has the privileges you need. Once logged in, you can fetch the list of processes.
This looks like a good application for a script rather than a C or C++ program.