I'm an absolute beginner in the field and have a couple of questions I'd really like answered.
Is IPFS a distributed or a decentralized file system? Which of the two is more suitable for file systems in general?
Is there a record of all the hashes on the IPFS network? How does my request travel through the network?
How could blockchain fit in with IPFS? Has that been implemented already?
If we become an interplanetary species, IPFS could be the protocol we use to communicate with each other. It is a new protocol that could upgrade the entire internet.
HTTP is the most popular protocol on the internet. You know how you go to a website and it says HTTP at the beginning of the URL bar? That's because your web browser is using the HTTP protocol to retrieve the page. The protocol was created by Tim Berners-Lee in 1989, and it defines two entities: a client and a server. If a request is successful, a response is sent back. So when you type http://google.com into your browser (the client), it uses HTTP to request the Google main page, and Google's server uses HTTP to send it back to you as a response. This protocol is the backbone of the World Wide Web.

But HTTP is not good enough anymore; in fact, it is badly broken and has made the web almost completely centralised. These centralised servers are getting more and more powerful by absorbing all of our data. Each server is given a unique IP address, which defines its location, and because data is location-addressed, if that location gets shut down, that data is lost forever.

But maybe we could make a permanent web, a web where links never die. That's the vision of IPFS. It's a peer-to-peer protocol; there is no central entity, and people connect to each other directly. That means if you built a website on IPFS, it could never be shut down by anyone, even if the government shuts down the internet during protests. People could still communicate with each other offline with IPFS, and the data would be owned by us, the people, not by any single group.

And because it's peer-to-peer, data transfer can be much faster. The network gives data a content address, not an IP address. So if you want to load a website, instead of your computer requesting it from a server across the world, it finds the nearest copy and retrieves it from there directly, and if multiple people have copies of it, your computer requests it from all of them at the same time. The more peers, the faster the download: videos would load much faster, and you could download games many times faster. It is just better in every way.
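To make "content address" concrete, here is a minimal sketch in Python of the idea: the address of a piece of data is derived from a cryptographic hash of the data itself, so any peer holding the same bytes can serve it. This uses a plain SHA-256 hex digest for illustration; real IPFS content IDs use multihash encoding and chunk large files.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from the content itself,
    # not from where the content happens to be stored.
    return hashlib.sha256(data).hexdigest()

# A toy content-addressed store: every peer holding the same bytes
# computes the same key, so a request can be served by any of them.
store = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Integrity is verifiable: re-hashing must reproduce the address.
    assert content_address(data) == addr
    return data

addr = put(b"<html>my permanent page</html>")
print(addr, get(addr))
```

Contrast this with a URL: the key says nothing about *where* the data lives, only *what* it is, which is why a dead host doesn't have to mean a dead link.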
Today I am thinking about connecting two computers without TCP/IP. What I am really looking for is a connection without IP; if I manage to connect without IP, the network would be untraceable.
My full question is:
Is it possible to connect two computers over the internet without TCP/IP?
Maybe this scenario is impossible because of the ISP; I don't know.
If it is possible, it could be a competitor to the Internet.
From the first line of the Wikipedia article on the Internet:
The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to link several billion devices worldwide.
The internet is built upon the IP framework. You can't "not use" IP through the internet. That's like saying you want to use the postal system without addresses. Without the IP framework, there is no way to tell devices apart or any standard format to route packets anywhere at all. This is not to say that it is the only way to establish networked communications; it's just the most popular and most widely used way.
Regarding the first part of your question, is it possible to connect two computers without TCP/IP? There are plenty of ways this is done, e.g. Bluetooth, RS-232, proprietary RF communications and so forth.
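For instance, a direct RS-232 serial link between two machines involves no IP addressing at all. A minimal sketch using the third-party pyserial package (the port name /dev/ttyUSB0 is an assumption; on Windows it would be something like COM3):

```python
import serial  # third-party: pip install pyserial

# Open a direct serial link to the other computer; no IP involved.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as link:
    link.write(b"hello, other computer\n")  # send raw bytes
    reply = link.readline()                 # read until newline or timeout
    print(reply)
```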
As for a competitor of the Internet: is that really such a good idea? For once we have one system that is universally compatible with all devices around the globe (almost!). I don't think the rest of the world would be keen on a brand new system unless it is much, much better (in which case it would probably be folded into the Internet Protocol Suite anyway).
Hi, I am a little confused about finalizing the software architecture of one of the projects I am planning to build. The solution is something like this:
There are two devices, A and B, in separate home networks (private addresses). A acts as the data source and provides access to the data over TCP to authenticated users. B is used by the user to fetch the data from A via a web browser. Authentication is not the problem at hand right now.
There is a central server S with a public IP address, which acts as a relay between A and B. S hosts a web application, served by a web server, that B accesses. When the browser requests the data, the web app needs to fetch it from device A.
There is an application on the same server S which has a TCP connection established with device A. So basically, one approach for the web app to get the data is to request it from this application, which in turn fetches it from A. For this I can expose a web service from this application that fetches the data.
First question: is this approach good enough, or is there a better alternative?
Second: as the application holding the TCP connection might want to talk to device A for updates or other things, I want a thread of execution in this application that runs in parallel with the web service context. That is, when the web app calls the web service it will perform the job, but the application might also want to do something on its own without being triggered by the web app.
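To make that concrete, here is a minimal sketch of what I have in mind on S, in Python. Device A's address and the tiny request/reply framing are placeholders I made up for illustration:

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder address for device A's TCP endpoint.
DEVICE_A = ("device-a.example.net", 9000)

# One persistent TCP connection to device A, shared by the web service
# and the background worker; the lock serialises access to it.
conn = socket.create_connection(DEVICE_A)
conn_lock = threading.Lock()

def fetch_from_a(request: bytes) -> bytes:
    with conn_lock:
        conn.sendall(request + b"\n")
        return conn.recv(4096)  # toy framing: one reply per request

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browser on B asks S; S asks A over the existing TCP link.
        data = fetch_from_a(b"GET_DATA")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(data)

def background_worker():
    # Runs in parallel with the web service: talks to A on its own
    # schedule, without being triggered by the web app.
    while True:
        fetch_from_a(b"POLL_UPDATES")
        time.sleep(30)

threading.Thread(target=background_worker, daemon=True).start()
HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```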
I might have missed something basic, as I am new to web services.
I'm interested in how I might send requests from a web server using Python to a constantly running C++ program. Basically, users should be able to send "orders" via their browser to the web server. The web server then needs to forward those orders to a constantly running application written in C++. Eventually the C++ program should be able to send order results back to the web server, which can forward the results to the user's browser if they're still connected.
I've thought about having the web server record pending orders to a database which the C++ program polls for changes. That doesn't seem very efficient, though, and I believe it will have issues with too many users. Is there some method/technology that is typically used for this type of situation?
You have a few options:
1. API
This is the more traditional option: you have some form of API built into your website, and your C++ program contacts the API to receive and update orders. You would probably want to use this if your C++ program isn't hosted on the same server. However, you will need to keep the API secure from outside parties accessing it to fake orders and the like.
2. Shared file or database
If your application is running on the same server you could have both programs access a database or flat-file.
3. Sockets (TCP)
This method is likely overkill: your C++ program acts as a TCP server, and your Python program connects to it and sends it the orders as they come in. Be aware that programming this option is significantly harder than the previous options; however, it provides an instant response that the others don't.
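As a rough illustration of option 3 (a sketch, not production code: the port number and the newline-delimited order format are assumptions), the Python side could look like this, with the C++ side being a standard TCP accept loop:

```python
import socket

def send_order(order: str) -> str:
    # Forward one order to the C++ TCP server and wait for its reply.
    # Address and one-line-per-order framing are assumptions.
    with socket.create_connection(("127.0.0.1", 5555), timeout=5) as sock:
        sock.sendall(order.encode() + b"\n")
        reply = sock.makefile("r").readline()
    return reply.strip()

print(send_order("BUY 10 WIDGETS"))
```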
It's easy to implement a BaseHTTPServer in Python, use a pipe to communicate with the C++ process, and proxy_pass clients' requests via the web server.
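A minimal sketch of that idea, assuming Python 3's http.server and a long-running C++ binary (./order_engine is a made-up name) that reads one order per line on stdin and writes one result line per order on stdout:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Launch the C++ process once and keep pipes to it.
# "./order_engine" is a placeholder for the real binary.
engine = subprocess.Popen(
    ["./order_engine"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

class OrderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        order = self.rfile.read(length).decode()
        engine.stdin.write(order + "\n")   # hand the order to C++
        engine.stdin.flush()
        result = engine.stdout.readline()  # one result line per order
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.encode())

HTTPServer(("127.0.0.1", 8000), OrderHandler).serve_forever()
```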
I want to create two programs in Qt, one a server and the other a client. The server program stores user and customer information such as fingerprints and other important data; on the client, users and customers use that information for privacy-sensitive work. These programs must send information over the network.
So I am thinking of using PostgreSQL for the database on the server, and the client just connects to the database and gets the information it needs (login etc.).
Now, these are my problems:
My network connection must be secure so that no one can extract the data sent to the client. (I think Postgres handles this for me; am I right?)
I want the client to have an offline mode, so I don't mind if I must set up another PostgreSQL database on the client PC; but then how can I tell Postgres to update itself from the server, or vice versa?
Finally, what do you think is the best solution?
Thanks a lot.
Wow, that's a bit open-ended. See https://stackoverflow.com/faq#dontask. Keep your questions specific and focused; open-ended, I-could-write-a-book-on-this questions will get closed.
Quick version:
My network connection must be secure so that no one can extract the data sent to the client. (I think Postgres handles this for me; am I right?)
Correctly used SSL will give you one-way trust, where the client can verify the identity of the server. The server must still rely on passwords to identify the client, but it can do that over SSL.
You can use client certificates for true two-way verification.
If you're doing anything privacy sensitive consider using your own self-signed CA and distributing the CA cert through known-secure means. There are too many suborned sub-CAs signing wildcard certificates for nations to use in transparent SSL decryption for me to trust SSL CAs for things like protecting dissidents and human rights workers when they're using an Internet connection supplied or controlled by someone hostile to them.
Don't take my word on this; read up on it carefully.
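For the PostgreSQL side specifically, a client connection that verifies the server's certificate against your own CA might look like this. This is a sketch assuming the psycopg2 driver; sslmode/sslrootcert and friends are standard libpq connection parameters, while the host name and file paths are placeholders:

```python
import psycopg2  # third-party PostgreSQL driver

# verify-full: encrypt the connection, verify the server certificate
# against our own CA, and check that the hostname matches.
conn = psycopg2.connect(
    host="db.example.com",          # placeholder host
    dbname="appdb",
    user="appuser",
    password="secret",
    sslmode="verify-full",
    sslrootcert="/etc/app/ca.crt",  # your self-signed CA certificate
    sslcert="/etc/app/client.crt",  # optional: client certificate for
    sslkey="/etc/app/client.key",   # true two-way verification
)
```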
I want the client to have an offline mode, so I don't mind if I must set up another PostgreSQL database on the client PC; but then how can I tell Postgres to update itself from the server, or vice versa?
It sounds like you want asynchronous replication with intermittent connections.
This is hard. I recommend doing it at the application level, where you can implement application-specific sync schedules and conflict resolution logic. You can use trigger-maintained change-list tables to keep a record of what changed since the DBs last saw each other. Don't use timestamps to keep in sync, as clock drift between server and client will cause you to miss changes. You might want to use something like the pgq ticker on the master DB.
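As a sketch of the application-level approach (the changelog table, its columns, and the merge helper are all invented for illustration; the table would be populated by triggers on the master):

```python
import psycopg2  # third-party PostgreSQL driver

def apply_change(cur, table_name, row_data):
    # Placeholder: application-specific merge / conflict resolution.
    pass

def pull_changes(master, client, last_seen_id):
    # master/client are open psycopg2 connections. A monotonically
    # increasing changelog id avoids the clock-drift problem that
    # timestamp-based syncing has.
    with master.cursor() as cur:
        cur.execute(
            "SELECT id, table_name, row_data FROM changelog"
            " WHERE id > %s ORDER BY id",
            (last_seen_id,),
        )
        changes = cur.fetchall()
    with client.cursor() as cur:
        for change_id, table_name, row_data in changes:
            apply_change(cur, table_name, row_data)
            last_seen_id = change_id
    client.commit()
    return last_seen_id  # persist so the next sync resumes from here
```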
Finally, what do you think is the best solution?
Too open ended, not enough info provided to even start to answer.
As an example, see the reference documentation for one of PayPal's APIs:
http://www.paypalobjects.com/en_US/ebook/PP_NVPAPI_DeveloperGuide/Appx_fieldreference.html#2824913
The question is: why do they need the client's IP address as a parameter? Doesn't the server get it as part of the HTTP protocol?
UPDATE: Just realized the example I gave wasn't so good. I'm talking about instances where the client is talking directly to the web service. I'll close the question.
I'm not sure about PayPal specifically, but one use case for a service requiring the client's IP is that the server needs to do fraud detection (too many requests coming from the same end user), but the source IP on the packets is that of an aggregator, not the end users' actual IPs. Perhaps the aggregator has NATted clients behind it (possibly mobile devices, who knows). The server will want the aggregator to send it the IPs of its clients.
There may be other cases; this is the only one I know of.
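Concretely, the aggregator has to pass the end user's address along explicitly, since the service only sees the aggregator's own source IP on the TCP connection. A sketch using the third-party requests library; the URL and field name are invented, while X-Forwarded-For is the conventional header for this:

```python
import requests  # third-party HTTP client

def relay_order(order: dict, end_user_ip: str):
    # The service sees only the aggregator's source IP on the wire,
    # so the real end-user address travels inside the request itself.
    return requests.post(
        "https://api.example.com/orders",            # placeholder URL
        json={**order, "end_user_ip": end_user_ip},  # placeholder field
        headers={"X-Forwarded-For": end_user_ip},
    )
```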
They want to be able to identify the end user, usually to protect both you and them from abuse: to detect fraud attempts (too many requests coming from the same IP) and to find the culprit after the fact (in cases of criminal activity, ISPs in many countries are required to reveal user information for an IP to the investigating authorities).
Of course you could do the logging yourself, but considering the general state of security awareness on the internet, I understand that they're not trusting you to do it well enough.