I'm interested in how I might send requests from a Python web server to a constantly running C++ program. Basically, users should be able to send "orders" via their browser to the web server. The web server then needs to forward those orders to a constantly running application written in C++. Eventually the C++ program should be able to send order results back to the web server, which can forward the results to the user's browser if it's still connected.
I've thought about having the web server record pending orders to a database, which the C++ program polls for changes. That doesn't seem very efficient, though, and I believe it will have issues with too many users. Is there some method/technology that is typically used for this type of situation?
You have a few options:
1. API
This is the more traditional option: you have some form of API built into your website, and your C++ program contacts that API to receive and update orders. You would probably want this if your C++ program isn't hosted on the same server. However, you will need to keep the API secure from outside parties accessing it to fake orders, etc.
2. Shared file or database
If your application is running on the same server, you could have both programs access a database or flat file.
3. Sockets (TCP)
This method is likely overkill: you have your C++ program act as a TCP server, and your Python program connects to it and sends it the orders as they come in. Be aware that programming this option is significantly harder than the previous ones; however, it provides an instant response that the others don't. A minimal sketch follows.
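By way of illustration, here is a minimal sketch of the TCP approach, assuming a POSIX system; the port number, the loopback-only bind, and the one-order-per-line framing are illustrative choices, not requirements:

```cpp
// Minimal TCP order server: listens on one port, reads
// newline-delimited orders, and writes a result line back.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // only the local web server may connect
    addr.sin_port = htons(5555);                    // illustrative port
    bind(listener, (sockaddr*)&addr, sizeof addr);
    listen(listener, 16);

    for (;;) {
        int client = accept(listener, nullptr, nullptr);
        if (client < 0) continue;
        char buf[1024];
        ssize_t n;
        while ((n = recv(client, buf, sizeof buf - 1, 0)) > 0) {
            buf[n] = '\0';
            printf("order received: %s", buf);  // hand off to the order logic here
            const char* reply = "OK\n";         // result goes back to the web server
            send(client, reply, strlen(reply), 0);
        }
        close(client);
    }
}
```

The Python side would then open one connection to 127.0.0.1:5555 and write each order as a line.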
It's easy to implement a BaseHTTPServer in Python: use a pipe to communicate with the C++ process, and proxy clients' requests via the web server.
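A sketch of the C++ side of that pipe, assuming the web server spawns the C++ process and writes one order per line to its stdin (the line framing is an assumption for illustration):

```cpp
// C++ side of the pipe: reads newline-delimited orders from stdin
// (the pipe from the web server) and writes results to stdout.
#include <iostream>
#include <string>

int main() {
    std::string order;
    while (std::getline(std::cin, order)) {  // blocks until the web server writes a line
        // ... process the order here ...
        std::cout << "result for: " << order << std::endl;  // std::endl flushes through the pipe
    }
}
```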
I have been using the Firebase C++ SDK's Auth and Realtime Database (for Windows) in a simple test application. After a successful authentication, every new message (node) arrives from the cloud within just a few milliseconds, until the following happens:
I leave my computer untouched in an idle state.
Due to the energy settings it goes to sleep after 10-15 minutes. (I don't want to change the settings!)
After I wake it up again, the network connection is re-established for all other background applications (like Skype, Outlook, etc.).
It seems Firebase's connection is NOT re-established.
Is there any built-in function to get a notification from Firebase when it has lost the connection, and to re-login and re-connect to the database either automatically or manually?
I guess it has a background keep-alive connection to check network status, but I couldn't find any useful information about it. The documentation says it can keep everything synced even in offline mode.
any built-in function to get a notification from Firebase when it has lost the connection[?]
For that you'd attach a listener to the virtual .info/connected node, as shown here: https://firebase.google.com/docs/database/android/offline-capabilities#section-connection-state. Somehow this section is missing from the C++ documentation, which is why I linked you to the Android version.
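Roughly, with the C++ SDK it could look like the following sketch; I'm assuming the same virtual .info/connected path behaves as in the Android SDK, and ConnectionListener is just a name I made up:

```cpp
// Watches the virtual .info/connected node; its boolean value flips
// to false when the realtime connection drops and back to true once
// the SDK has re-established it.
#include "firebase/database.h"

class ConnectionListener : public firebase::database::ValueListener {
 public:
  void OnValueChanged(const firebase::database::DataSnapshot& snapshot) override {
    if (snapshot.value().bool_value()) {
      // connected again: realtime updates should resume
    } else {
      // connection lost: surface it in the UI, or trigger your own re-login
    }
  }
  void OnCancelled(const firebase::database::Error& error,
                   const char* error_message) override {
    // listener was cancelled (e.g. permissions); error_message has details
  }
};

void WatchConnection(firebase::database::Database* database) {
  static ConnectionListener listener;  // must outlive the subscription
  database->GetReference(".info/connected").AddValueListener(&listener);
}
```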
I'm an absolute beginner in the field and have a couple of questions that I'm really looking forward to having answered.
Is IPFS a distributed or a decentralized file system? Which of these is more suitable for file systems in general?
Is there a record of all the hashes on the IPFS network? How does my request travel through the network?
How could blockchain fit in with IPFS? Has it been implemented already?
If we become an interplanetary species, IPFS could be the protocol we use to communicate with each other. It is a new protocol that could upgrade the entire internet.
HTTP is the most popular protocol on the internet. You know how when you go to a website it says HTTP at the beginning of the URL bar? That's because your web browser is using the HTTP protocol to retrieve the page. The protocol was created by Tim Berners-Lee in 1989, and it defines two entities: a client and a server. When you type http://google.com into your browser (the client), it uses HTTP to request Google's main page, and if the request is successful, Google's server uses HTTP to send the page back as a response. This protocol is the backbone of the world wide web.

But HTTP is not good enough anymore; in fact, it is totally broken and has made the web completely centralised. These centralised servers are getting more and more powerful by absorbing all of our data. Each server is given a unique IP address which defines its location, and because data is location-addressed, if that location gets shut down, the data is lost forever.

But maybe we could make a permanent web, a web where links never die. That's the vision of IPFS. It's a peer-to-peer protocol with no central entity: people connect to each other directly. That means if you built a website on IPFS, it could never be shut down by anyone, even if the government shuts down the internet during protests. People could still communicate with each other offline with IPFS, and the data would be owned by us, the people, not by any group.

And because it's peer to peer, data transfer is much faster. The network gives data a content address, not an IP address, so if you want to load a website, instead of your computer requesting it from a server across the world, it finds the nearest copy and retrieves it from there directly; and if multiple people have copies of it, your computer requests it from all of them at the same time. The more peers, the faster the download. Videos would load so much faster, and you could download games ten times faster. It is just better in every way.
I want to develop a server in C++ for an application of mine. I'm not really familiar with networking concepts. This server is going to be a simple one, and I'll use one of the networking libraries out there. I just couldn't figure out the keywords needed to research the following issue:
Let's say there are 100 users on 100 different computers, all sharing the same internet connection, behind the same router. They all decide to open my client and connect to my server. How do you deal with this if you want to keep the connections open and on the same port?
For the purposes of your server, it doesn't make any difference whether those 100 connections are all coming from the same computer, from the same router, or from totally separate networks.
While the server side of the connection will use the same port for all of these, each connection will have a different combination of client-side IP address and port. In the case you describe, where all 100 are behind the same router using the same IP address, the router will take care of making sure they all have different client-side port numbers. You can read about network address translation (NAT) if you want to learn the details of one common way that is done.
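To make that concrete, here's a minimal sketch with POSIX sockets: the server listens on one port, and each accept() call returns a fresh socket whose peer address (the client-side IP and port pair) is what tells the 100 connections apart. The port number is an arbitrary example:

```cpp
// One listening port; accept() hands back a distinct socket per
// connection, identified by the client-side address:port pair.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);  // the single well-known server port
    bind(listener, (sockaddr*)&addr, sizeof addr);
    listen(listener, SOMAXCONN);

    for (;;) {
        sockaddr_in peer{};
        socklen_t len = sizeof peer;
        int conn = accept(listener, (sockaddr*)&peer, &len);  // new socket per client
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
        // The 100 users behind one router share this IP but show up
        // with different (NAT-assigned) port numbers:
        printf("connection from %s:%d\n", ip, ntohs(peer.sin_port));
        close(conn);  // a real server would keep this open and serve the client
    }
}
```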
This kind of server programming is not easy and requires network skills. You can have a look at this tutorial. It's C and Unix, but it shows the functions you'll need to use:
socket interface for network access
listening for / accepting new connections
forking new processes to handle the different clients (although in C++ you'd probably look for multithreading, which is more efficient for this kind of task); see the sketch after this list.
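For instance, a sketch of the multithreaded variant using C++11's std::thread; handle_client is a placeholder for your per-client protocol logic:

```cpp
// Thread-per-client variant of the accept loop.
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <thread>

// Placeholder: each client's session logic runs here on its own thread.
void handle_client(int conn) {
    char buf[512];
    ssize_t n;
    while ((n = recv(conn, buf, sizeof buf, 0)) > 0)
        send(conn, buf, n, 0);  // echo, standing in for real protocol work
    close(conn);
}

void serve_forever(int listener) {
    for (;;) {
        int conn = accept(listener, nullptr, nullptr);
        if (conn < 0) continue;
        // One detached thread per client; a thread pool scales better
        // once client counts get large.
        std::thread(handle_client, conn).detach();
    }
}
```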
Hi, I am a little confused about finalizing the software architecture of one of the projects I am planning to build. The solution is something like this:
There are two devices, A and B, in separate home networks (private addresses). A acts as the data source and provides access to the data over TCP to authenticated users. B is used by the user to fetch the data from A via a web browser. Authentication is not the problem at hand now.
There is a central server S with a public IP address, which acts as a relay between A and B. S hosts a web application, served by a web server, that is accessed by B. When the browser requests the data, the web app needs to fetch it from device A.
There is an application on the same server S which has a TCP connection established with device A. So for the web app to get the data, one approach is to request it from this application, which in turn fetches it from A. For this I can expose a web service from this application which can fetch the data.
First question: is this approach good enough, or is there a better alternative?
Second: as the application with the TCP connection might want to talk to device A for updates or other things, I want a thread of execution running in this application in parallel with the web service context, meaning that when the web app calls the web service it will perform the job, but the application might also want to do something on its own without being triggered by the web app. Something like the sketch below is what I have in mind.
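A minimal sketch of that parallel thread, where poll_device_a and the five-second interval are placeholders I made up:

```cpp
// A background thread that talks to device A on its own schedule,
// independently of incoming web service calls.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

void poll_device_a() {  // placeholder name
    while (running) {
        // ... use the established TCP connection to ask A for updates ...
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}

int main() {
    std::thread background(poll_device_a);
    // ... start the web service here; its handlers run in parallel and
    //     share state with the background thread (guard it with a mutex) ...
    running = false;  // on shutdown
    background.join();
}
```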
I might have missed something basic, as I am new to web services.
I want to create two programs in Qt: one server and one client. My server program inserts user and customer information, like fingerprints and other important data, and in the client, users and customers use their information for working on some privacy-related stuff. These programs must send information over the network.

So I am thinking of using PostgreSQL for the database on the server, with the client just connecting to the database and getting the needed information for login, etc.

And now these are my problems:
My network connection must be secure so that no one can extract the data sent to the client. (I think Postgres handles this for me, am I right?)
I want the client to have an offline mode, so I don't mind if I must set up another PostgreSQL database on the client PC; but then how can I tell Postgres to update itself from the server, or vice versa?
Finally, what's the best solution, do you think?

Thanks a lot.
Wow, that's a bit open-ended. See https://stackoverflow.com/faq#dontask. Keep your questions specific and focused. Open-ended I-could-write-a-book-on-this questions will get closed.
Quick version:
My network connection must be secure so that no one can extract the data sent to the client. (I think Postgres handles this for me, am I right?)
Correctly used, SSL will give you one-way trust, where the client can verify the identity of the server. The server must still rely on passwords to identify the client, but it can do that over SSL.
You can use client certificates for true two-way verification.
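As an illustration, on the client side this is controlled by the libpq connection string (a sketch; the host name and certificate paths are placeholders):

```cpp
#include <libpq-fe.h>
#include <cstdio>

int main() {
    // verify-full makes libpq check both that the link is encrypted and
    // that the server certificate matches the host name, validated
    // against the CA cert shipped with the client. sslcert/sslkey is
    // the client certificate for the two-way case.
    PGconn* conn = PQconnectdb(
        "host=db.example.com dbname=app user=appuser "
        "sslmode=verify-full sslrootcert=/etc/app/root.crt "
        "sslcert=/etc/app/client.crt sslkey=/etc/app/client.key");
    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    PQfinish(conn);
}
```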
If you're doing anything privacy sensitive consider using your own self-signed CA and distributing the CA cert through known-secure means. There are too many suborned sub-CAs signing wildcard certificates for nations to use in transparent SSL decryption for me to trust SSL CAs for things like protecting dissidents and human rights workers when they're using an Internet connection supplied or controlled by someone hostile to them.
Don't take my word on this; read up on it carefully.
I want the client to have an offline mode, so I don't mind if I must set up another PostgreSQL database on the client PC; but then how can I tell Postgres to update itself from the server, or vice versa?
It sounds like you want asynchronous replication with intermittent connections.
This is hard. I recommend doing it at the application level, where you can implement application-specific sync schedules and conflict-resolution logic. You can use trigger-maintained change-list tables to keep a record of what changed since the databases last saw each other. Don't use timestamps to keep things in sync, as clock drift between server and client will cause you to miss changes. You might want to use something like the pgq ticker on the master DB.
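A sketch of the trigger-maintained change-list idea, run once from the application over libpq; every table, function, and trigger name here is invented for illustration, and it assumes a customers table with an integer id primary key:

```cpp
#include <libpq-fe.h>

// Creates a change-list table plus a trigger that appends a row on
// every insert/update/delete of "customers", so the client can later
// ask for all changes with change_id greater than the last one it saw.
bool install_change_log(PGconn* conn) {
    const char* ddl =
        "CREATE TABLE IF NOT EXISTS customer_changes ("
        "  change_id   bigserial PRIMARY KEY,"
        "  customer_id integer   NOT NULL,"
        "  op          text      NOT NULL);"  // 'INSERT', 'UPDATE' or 'DELETE'
        "CREATE OR REPLACE FUNCTION log_customer_change() RETURNS trigger AS $$ "
        "BEGIN "
        "  IF TG_OP = 'DELETE' THEN "
        "    INSERT INTO customer_changes(customer_id, op) VALUES (OLD.id, TG_OP); "
        "    RETURN OLD; "
        "  END IF; "
        "  INSERT INTO customer_changes(customer_id, op) VALUES (NEW.id, TG_OP); "
        "  RETURN NEW; "
        "END; "
        "$$ LANGUAGE plpgsql;"
        "DROP TRIGGER IF EXISTS customers_change_log ON customers;"
        "CREATE TRIGGER customers_change_log"
        "  AFTER INSERT OR UPDATE OR DELETE ON customers"
        "  FOR EACH ROW EXECUTE PROCEDURE log_customer_change();";
    PGresult* res = PQexec(conn, ddl);
    bool ok = PQresultStatus(res) == PGRES_COMMAND_OK;
    PQclear(res);
    return ok;
}
```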
Finally, what's the best solution, do you think?
Too open-ended; not enough info provided to even start to answer.