PostgreSQL server and client offline mode [closed] - C++

I want to create two programs in Qt: a server and a client. The server program stores user and customer information such as fingerprints and other sensitive data, and on the client side, users and customers use that information for some privacy-related work. The programs must exchange this information over the network.
So I'm thinking of using PostgreSQL for the database on the server, with the client just connecting to the database and fetching whatever it needs (for login and so on).
Now these are my problems:
My network connection must be secure, so that no one can extract the data sent to the client. (I think PostgreSQL handles this for me; am I right?)
I want the client to have an offline mode, and I don't mind setting up another PostgreSQL database on the client PC, but then how can I tell PostgreSQL to update itself from the server, or vice versa?
Finally, what do you think is the best solution?
Thanks a lot.

Wow, that's a bit open-ended. See https://stackoverflow.com/faq#dontask . Keep your questions specific and focused. Open-ended, I-could-write-a-book-on-this questions will get closed.
Quick version:
My network connection must be secure, so that no one can extract the data sent to the client. (I think PostgreSQL handles this for me; am I right?)
Correctly used SSL will give you one-way trust, where the client can verify the identity of the server. The server must still rely on passwords to identify the client, but it can do that over SSL.
You can use client certificates for true two-way verification.
If you're doing anything privacy-sensitive, consider using your own self-signed CA and distributing the CA certificate through known-secure means. Too many suborned sub-CAs have signed wildcard certificates for nations to use in transparent SSL decryption for me to trust public SSL CAs for things like protecting dissidents and human rights workers when they're using an Internet connection supplied or controlled by someone hostile to them.
Don't take my word on this; read up on it carefully.
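For the connection-level piece, here is a minimal sketch using libpq directly (Qt's QPSQL driver is built on libpq, so the same keywords apply). It assumes the server already has ssl = on and that you have distributed your self-signed CA certificate to the client machine; the hostname, database name, user, and file paths below are placeholders, not anything from your setup:

    #include <libpq-fe.h>
    #include <cstdio>

    int main()
    {
        // sslmode=verify-full requires encryption AND checks that the server
        // certificate matches the hostname and chains up to our own CA.
        // The hostname, user, and paths are placeholders for illustration.
        const char *conninfo =
            "host=db.example.internal dbname=customers user=appuser "
            "sslmode=verify-full "
            "sslrootcert=/etc/myapp/root.crt "   // self-signed CA, distributed out of band
            "sslcert=/etc/myapp/client.crt "     // client certificate for two-way trust
            "sslkey=/etc/myapp/client.key";

        PGconn *conn = PQconnectdb(conninfo);
        if (PQstatus(conn) != CONNECTION_OK) {
            std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        std::printf("connected over SSL\n");
        PQfinish(conn);
        return 0;
    }

The client-certificate lines only buy you anything if the server's pg_hba.conf has a hostssl entry that actually demands one; otherwise the server falls back to password authentication over the encrypted channel, as described above.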
I want the client to have an offline mode, and I don't mind setting up another PostgreSQL database on the client PC, but then how can I tell PostgreSQL to update itself from the server, or vice versa?
It sounds like you want asynchronous replication with intermittent connections.
This is hard. I recommend doing it at the application level, where you can implement application-specific sync schedules and conflict-resolution logic. You can use trigger-maintained change-list tables to keep a record of what changed since the databases last saw each other; a rough sketch of that approach follows below. Don't use timestamps to keep things in sync, as clock drift between server and client will cause you to miss changes. You might want to use something like the pgq ticker on the master DB.
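A rough sketch of what the application-level pull could look like, again with libpq from the C++ client. It assumes a made-up change_log(id bigserial, tbl text, payload text) table maintained by triggers on the server, and that the client persists the highest id it has already applied; the table and column names are purely illustrative:

    #include <libpq-fe.h>
    #include <cstdio>
    #include <string>

    // Pull every change the client has not seen yet and return the new high-water mark.
    // lastApplied is the highest change_log.id applied during a previous sync.
    long long pullChanges(PGconn *server, long long lastApplied)
    {
        std::string cursor = std::to_string(lastApplied);
        const char *params[1] = { cursor.c_str() };

        PGresult *res = PQexecParams(server,
            "SELECT id, tbl, payload FROM change_log WHERE id > $1 ORDER BY id",
            1, nullptr, params, nullptr, nullptr, 0);

        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            std::fprintf(stderr, "pull failed: %s", PQerrorMessage(server));
            PQclear(res);
            return lastApplied;
        }

        long long newMark = lastApplied;
        for (int i = 0; i < PQntuples(res); ++i) {
            // Apply the row to the local client database here, using whatever
            // conflict-resolution rule the application calls for.
            newMark = std::stoll(PQgetvalue(res, i, 0));
            std::printf("apply change %s to %s: %s\n",
                        PQgetvalue(res, i, 0), PQgetvalue(res, i, 1), PQgetvalue(res, i, 2));
        }
        PQclear(res);
        return newMark;   // persist this locally so the next sync starts from here
    }

The same idea runs in the other direction for pushing the client's offline changes back to the server; the hard part is the conflict-resolution policy, which only you can decide.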
Finally, what do you think is the best solution?
Too open-ended; not enough info provided to even start to answer.

Related

IPFS, blockchain and file searching questions [closed]

I'm an absolute beginner in the field and had a couple of questions that I'm really looking forward to having answers to.
Is IPFS a distributed or a decentralized file system? Which one of these options is more suitable to file systems in general?
Is there a record of all the hashes on the IPFS network? How does my request travel through the network?
How could blockchain fit in with IPFS? Has it been implemented already?
If we become an interplanetary species, IPFS could be the protocol we use to communicate with each other. It is a new protocol that could upgrade the entire internet.
HTTP is the most popular protocol on the internet. You know how you go to a website and it says HTTP at the beginning of the URL bar? That's because your web browser is using the HTTP protocol to retrieve the page. The protocol was created by Tim Berners-Lee in 1989 and defines two entities: a client and a server. When you type http://google.com into your browser (the client), it uses HTTP to request Google's main page, and if the request is successful Google's server uses HTTP to send it back as a response. This protocol is the backbone of the World Wide Web.
But HTTP is not good enough anymore; in fact, it has made the web completely centralised. Centralised servers grow more and more powerful by absorbing all of our data. A server is given a unique IP address that defines its location, and because data is location-addressed, if that location gets shut down, the data is lost forever.
But maybe we could make a permanent web, a web where links never die. That's the vision of IPFS. It is a peer-to-peer protocol with no central entity; people connect to each other directly. That means if you build a website on IPFS, it can't be shut down by anyone. Even if a government shuts down the internet during protests, people can still communicate with each other offline with IPFS, and the data is owned by us, the people, not by any single group. And because it's peer to peer, data transfer is much faster: the network gives data a content address, not an IP address, so when you want to load a website, instead of your computer requesting it from a server across the world, it finds the nearest copy and retrieves it from there directly, and if multiple people have copies of it, your computer requests it from all of them at the same time. The more peers, the faster the download. Videos would load much faster, and you could download games many times faster. It is just better in every way.

How to handle a single connection from multiple clients on a single IP using TCP [closed]

I want to develop a server for an application of mine in C++. I'm not really familiar with networking concepts. This server is going to be a simple one, and I'll use one of the networking libraries out there. I just couldn't figure out the necessary keywords to research the following issue:
Let's say that there are 100 users on 100 different computers, all sharing the same internet connection, behind the same router. They all decide to open my client to connect to my server. How do you deal with this if you want to keep the connections open and on the same port?
For the purposes of your server, it doesn't make any difference whether those 100 connections are all coming from the same computer, from the same router, or from totally separate networks.
While the server side of the connection will use the same port for all of these, each connection will have a different combination of client side IP address and port. In the case you describe, where all 100 are behind the same router using the same IP address, the router will take care of making sure they all have different client side port numbers. You can read about network address translation (NAT) if you want to learn the details about one common way that is done.
This kind of server programming is not easy and requires network skills. You can have a look at this tutorial. It's C and Unix, but it shows the functions you'll need to use:
the socket interface for network access
listening for and accepting new connections
forking new processes to handle the different clients (although in C++ you'd probably look for multithreading, which is more efficient for this kind of task); a minimal threaded sketch follows below.
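A minimal sketch of the thread-per-client idea on POSIX sockets (Linux/Unix assumed; the port number and echo behaviour are arbitrary). Note how each accepted socket carries its own client (IP address, port) pair, which is exactly why 100 clients behind one router can all share the server's single listening port:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstdio>
    #include <thread>

    // Handle one client on its own thread; the accepted socket already
    // identifies the client by its unique (IP address, port) combination.
    static void serveClient(int fd, sockaddr_in peer)
    {
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
        std::printf("client %s:%u connected\n", ip, (unsigned)ntohs(peer.sin_port));

        char buf[1024];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(fd, buf, n);              // echo back whatever the client sends
        close(fd);
    }

    int main()
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5000);        // arbitrary port for the example

        if (bind(listener, (sockaddr *)&addr, sizeof addr) < 0 || listen(listener, 64) < 0) {
            std::perror("bind/listen");
            return 1;
        }

        for (;;) {
            sockaddr_in peer{};
            socklen_t len = sizeof peer;
            int fd = accept(listener, (sockaddr *)&peer, &len);
            if (fd >= 0)
                std::thread(serveClient, fd, peer).detach();   // one thread per client
        }
    }

Compile with -pthread on Linux. A thread per client is fine for a simple server; at larger scale you would look at an event loop or a thread pool instead.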

Best architectural choice for communication b/w a web client and a TCP server via relay [closed]

Hi, I am a little confused about finalizing the software architecture of one of the projects I am planning to build. The solution is something like this:
There are two devices, A and B, in separate home networks (private addresses). A acts as the data source and provides access to the data over TCP to authenticated users. B is used by the user to fetch the data from A via a web browser. Authentication is not the problem at hand right now.
There is a central server S with a public IP address, which acts as a relay between A and B. S hosts a web application, served by a web server, that B accesses. When the browser requests the data, the web app needs to fetch it from device A.
There is an application on the same server S which has a TCP connection established with device A. So for the web app to get the data, one approach is to request it from this application, which in turn fetches it from A. For this I can expose a web service from this application that fetches the data.
First question: is this approach good enough, or is there a better alternative?
Second: as the application with the TCP connection might want to talk to device A for updates or other things, I want a thread of execution in this application that runs in parallel with the web service context, meaning that when the web app calls the web service it will perform the job, but the application might also want to do something on its own without being triggered by the web app.
I might have missed something basic, as I am new to web services.

Connect web server to another application [closed]

I'm interested in how I might send requests from a web server using Python to a constantly running C++ program. Basically, users should be able to send "orders" via their browser to the web server. The web server then needs to forward those orders to a constantly running application written in C++. Eventually the C++ program should be able to send order results back to the web server, which can forward the results to the user's browser if they're still connected.
I've thought about having the web server record pending orders to a database which the C++ program polls for changes. That doesn't seem very efficient, though, and I believe it will have issues with too many users. Is there some method/technology that is typically used for this type of situation?
You have a few options:
1. API
This is the more traditional option: you have some form of API built into your website, and your C++ program contacts the API to receive and update orders. You would probably want to use this if your C++ program isn't hosted on the same server. However, you will need to keep the API secure from outside parties accessing it to fake orders, etc.
2. Shared file or database
If your application is running on the same server, you could have both programs access a database or flat file.
3. Sockets (TCP)
This method is likely overkill: you have your C++ program act as a TCP server, and your Python program connects to it and sends it the orders as they come in. Be aware that programming this option is significantly harder than the previous options, but it provides an instant response that the others don't.
It's easy to implement a BaseHTTPServer in Python, use a pipe to communicate with the C++ process, and proxy_pass clients' requests via the web server; a sketch of the pipe side follows below.
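A minimal sketch of the pipe idea from the C++ end, assuming a named pipe (FIFO) at a made-up path that the Python web server writes one order per line into; the path and the order format are placeholders, not part of any existing API:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Long-running C++ order processor. It reads newline-delimited orders from
    // a named pipe created beforehand with: mkfifo /tmp/orders.fifo
    // (the path and the one-line order format are illustrative only).
    int main()
    {
        const std::string fifoPath = "/tmp/orders.fifo";

        for (;;) {
            std::ifstream fifo(fifoPath);      // blocks until a writer opens the FIFO
            if (!fifo) {
                std::cerr << "cannot open " << fifoPath << '\n';
                return 1;
            }
            std::string order;
            while (std::getline(fifo, order)) {
                // Process the order here and write the result wherever the web
                // server expects it (another pipe, a database row, ...).
                std::cout << "processing order: " << order << '\n';
            }
            // The writer closed the pipe; loop and wait for the next batch.
        }
    }

On the Python side the web server simply opens the same path and writes one line per order; whether that beats database polling depends mostly on whether both processes really live on the same machine.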

How to test an internet application on the local computer (Windows 7)? [closed]

This application sends data periodically to a server. What I need to do is set up a testing environment on the local development machine so that I can check that the correct packets are being sent in each situation. I thought a good approach would be a server VM set up on the local computer which would receive the packets and respond just like the real thing, but the problem is how to route the packets of an application running on Windows to the VM. I don't want to modify my application code. I just want to have Windows pass on the packets it receives from the application to the VM, or otherwise to another application that will do the testing. Is this possible? If not, please let me know about any other solution(s) to this problem.
If you're running a decent VM you should be able to give it an IP address visible from the host, and configure it so that you can run web servers on it, ssh to it, etc.
Look at the networking features of your VM. Or find a tutorial on how to do this, such as this one for VirtualBox:
http://www.tolaris.com/2009/03/05/using-host-networking-and-nat-with-virtualbox/
Well, it's something of a hack, but you can use ARP poisoning (a man-in-the-middle attack) to sniff packets. There is a tool named Cain & Abel which can do this for you. I've used this tool to sniff packets between two non-PC machines. Use it at your own risk, and if your anti-virus tool alerts, know that the tool has no virus; what it does is simply detected as one.
Edit: Please note that my approach doesn't require a VM server.