Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
I just logged into my online credit card account and was getting ready to make a payment, but I needed to add a new payment method first. In doing so, out of curiosity, I opened the Chrome Developer Tools and looked at the Network tab to view the request data I was sending. It seems that everything I entered (credit card number, bank account number, bank routing number, etc.) was sent directly to their servers in plain text.
Is this legal? I thought it was against the law to send or store this kind of information on your servers, let alone send it over the internet in plain text, since that can be intercepted.
I'd like someone with more knowledge on the subject to explain this to me please, as I may be misinformed.
Edit: I guess a better question may be: are members of the FDIC allowed to store such information on their own servers? According to their legal information, they are a member of the FDIC.
The communication between your Chrome browser and the bank's site is expected to go over HTTPS, i.e. a secure connection. Always check this before entering payment details anywhere on the web.
The DevTools Network tab shows form fields and values as the browser assembles them, before TLS encryption is applied, so you will see the request body in plain text there even when the request is sent over HTTPS. On the wire, the data is encrypted.
At some stage of a payment flow you inevitably have to enter card or bank account details. The payment services that process those details should be PCI DSS compliant and, depending on the situation, may store them, usually encrypted on their side.
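To see for yourself that the transport is encrypted, you can inspect the TLS session a host negotiates. Here is a minimal sketch using Python's standard library; the hostname is a hypothetical placeholder, not your bank's real endpoint.

    # Minimal sketch: confirm a host speaks TLS and inspect its certificate.
    # "payments.example-bank.com" is a hypothetical placeholder host.
    import socket
    import ssl

    host = "payments.example-bank.com"

    context = ssl.create_default_context()  # verifies the certificate chain and hostname
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS version:", tls.version())
            cert = tls.getpeercert()
            print("Issued to:", dict(pair[0] for pair in cert["subject"]))
            print("Expires:", cert["notAfter"])

Everything DevTools shows you afterwards is this same data displayed on your side of that encrypted channel.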
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I'm an absolute beginner in this field and have a couple of questions that I'd really like answered.
Is IPFS a distributed or a decentralized file system? Which of these is more suitable for file systems in general?
Is there a record of all the hashes on the IPFS network? How does my request travel through the network?
How could blockchain fit in with IPFS? Has it been implemented already?
If we become an interplanetary species, IPFS could be the protocol we use to communicate with each other. It is a new protocol that could upgrade the entire internet.
HTTP is the most popular protocol on the internet. You know how you go to a website and it says HTTP at the beginning of the URL bar? That's because your web browser is using the HTTP protocol to retrieve the page. The protocol was created by Tim Berners-Lee in 1989, and it defines two entities: a client and a server. When you type http://google.com into your browser (the client), it uses HTTP to request Google's main page, and if the request is successful, Google's server uses HTTP to send the page back as a response. This protocol is the backbone of the World Wide Web.

But HTTP is not good enough anymore; in fact, it is totally broken and has made the web completely centralised. Centralised servers grow more and more powerful by absorbing all of our data. A server is given a unique IP address that defines its location, and because data is location-addressed, if that location gets shut down, the data is lost forever.

But maybe we could make a permanent web, a web where links never die. That's the vision of IPFS. It's a peer-to-peer protocol with no central entity; people connect to each other directly. That means if you built a website on IPFS, it could never be shut down by anyone. Even if a government shuts down the internet during protests, people could still communicate with each other offline with IPFS, and the data would be owned by us, the people, not by any one group.

And because it's peer-to-peer, data transfer is much faster. The network gives data a content address, not an IP address, so if you want to load a website, instead of your computer requesting it from a server across the world, it finds the nearest copy and retrieves it from there directly. If multiple people have copies, your computer requests it from all of them at the same time: the more peers, the faster the download. Videos would load so much faster, and you could download games many times faster. It is just better in every way.
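As a hedged illustration of the content-addressing idea this answer describes: the address of a piece of data is derived from the data itself, so identical content resolves to the same identifier on any node. Real IPFS CIDs layer multihash and multibase encoding on top of a plain digest, so the helper below is a simplification, not the actual CID algorithm.

    # Content addressing in miniature: the identifier is a hash of the bytes.
    import hashlib

    def content_address(data: bytes) -> str:
        # Simplified stand-in for an IPFS CID (which also encodes the
        # hash function and base encoding via multihash/multibase).
        return hashlib.sha256(data).hexdigest()

    page = b"<html><body>hello</body></html>"
    print(content_address(page))  # same bytes -> same address, on any node

    # HTTP (location addressing) asks "what is at this server?";
    # IPFS (content addressing) asks "who has the data with this hash?"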
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
We have just started using MailGun in earnest.
About 16% of my emails are being rejected (550s and 530s).
When I look into the problem, I'm told that certain antispam services (Spamhaus) have blacklisted the IP we are using.
I have looked at my logs and found that we have sent about 6,000 emails over the last six months. We have had zero spam complaints. We are just testing at this point. We are using both the SMTP and HTTP APIs.
The only explanation I can come up with is that the IP address was occupied by someone prior to me, and they had a bit of a spam problem.
Am I missing something?
Is there any way to get the good folks at Mailgun to change the IP? (I have put in support tickets over the last three days and have not gotten a response.)
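One way to verify the "inherited a dirty IP" theory is a standard DNSBL lookup: reverse the IP's octets and query them as a subdomain of the blocklist zone; any A record in the reply means the IP is listed. A minimal sketch in Python; the IP below is a documentation placeholder, so substitute your actual Mailgun sending IP.

    # Hedged sketch: query Spamhaus ZEN the standard DNSBL way.
    import socket

    def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any A record means "listed"
            return True
        except socket.gaierror:
            return False

    print(is_listed("203.0.113.45"))  # placeholder IP from TEST-NET-3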
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I want to create two programs in Qt: one server and one client. The server program stores user and customer information such as fingerprints and other important data; on the client, users and customers use that information for privacy-sensitive work. The programs must send this information over the network.
My plan is to use PostgreSQL for the database on the server, with the client connecting to that database directly to get the information it needs for login and so on.
Now, these are my problems:
1. My network connection must be secure; no one should be able to extract the data sent to the client. (I think Postgres handles this for me; am I right?)
2. I want the client to have an offline mode, so I don't mind setting up another PostgreSQL database on the client PC. But then how can I tell Postgres to update itself from the server, or vice versa?
3. Finally, what is the best solution in your opinion?
Thanks a lot.
Wow, that's a bit open-ended. See https://stackoverflow.com/faq#dontask . Keep your questions specific and focused. Open-ended I-could-write-a-book-on-this questions will get closed.
Quick version:
My network connection must be secure; no one should be able to extract the data sent to the client. (I think Postgres handles this for me; am I right?)
Correctly used SSL will give you one-way trust, where the client can verify the identity of the server. The server must still rely on passwords to identify the client, but it can do that over SSL.
You can use client certificates for true two-way verification.
If you're doing anything privacy sensitive consider using your own self-signed CA and distributing the CA cert through known-secure means. There are too many suborned sub-CAs signing wildcard certificates for nations to use in transparent SSL decryption for me to trust SSL CAs for things like protecting dissidents and human rights workers when they're using an Internet connection supplied or controlled by someone hostile to them.
Don't take my word on this; read up on it carefully.
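As a concrete illustration, here is a minimal sketch of a TLS connection with server verification plus a client certificate, shown with Python and psycopg2. The sslmode/sslrootcert/sslcert/sslkey parameters are plain libpq options, so a Qt client would pass the same ones through its own PostgreSQL driver. Host, paths, and credentials are placeholders.

    # Hedged sketch: two-way verified TLS connection to PostgreSQL.
    import psycopg2

    conn = psycopg2.connect(
        host="db.example.com",          # placeholder server
        dbname="appdb",
        user="appuser",
        password="secret",
        sslmode="verify-full",          # require TLS; verify cert chain and hostname
        sslrootcert="/etc/app/ca.crt",  # your self-signed CA, distributed out of band
        sslcert="/etc/app/client.crt",  # client certificate for two-way verification
        sslkey="/etc/app/client.key",
    )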
I want the client to have an offline mode, so I don't mind setting up another PostgreSQL database on the client PC. But then how can I tell Postgres to update itself from the server, or vice versa?
It sounds like you want asynchronous replication with intermittent connections.
This is hard. I recommend doing it at the application level, where you can implement application-specific sync schedules and conflict-resolution logic. You can use trigger-maintained change-list tables to keep a record of what changed since the databases last saw each other; a sketch follows below. Don't use timestamps to keep things in sync, as clock drift between server and client will cause you to miss changes. You might want to use something like the pgq ticker on the master DB.
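For concreteness, here is a hedged sketch of what such a trigger-maintained change-list might look like, created from Python via psycopg2. Table and column names are illustrative, it assumes an existing customers table with an integer id, and the actual sync and conflict-resolution logic around it is application-specific.

    # Hedged sketch: record every change to "customers" in a change-list
    # table so intermittently connected peers can catch up later.
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS customer_changes (
        change_id   bigserial PRIMARY KEY,
        customer_id integer NOT NULL,
        op          char(1) NOT NULL  -- 'I'nsert, 'U'pdate, 'D'elete
    );

    CREATE OR REPLACE FUNCTION log_customer_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO customer_changes (customer_id, op) VALUES (OLD.id, 'D');
            RETURN OLD;
        END IF;
        INSERT INTO customer_changes (customer_id, op) VALUES (NEW.id, LEFT(TG_OP, 1));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS customer_changelog ON customers;
    CREATE TRIGGER customer_changelog
        AFTER INSERT OR UPDATE OR DELETE ON customers
        FOR EACH ROW EXECUTE PROCEDURE log_customer_change();
    """

    with psycopg2.connect("dbname=appdb") as conn:  # placeholder DSN
        with conn.cursor() as cur:
            cur.execute(DDL)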
Finally, what is the best solution in your opinion?
Too open ended, not enough info provided to even start to answer.
Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 11 years ago.
Improve this question
I'm trying to raise the limits for my application, because after about 50 requests the app becomes non-functional. How can I do this?
This is the error:
Fatal error: Uncaught OAuthException: (#341) Feed action request limit reached
What #DMCS says is correct: there is an internal system at Facebook that monitors user feedback, meaning invites that users have declined, users that have blocked or hidden your application's stories in their feeds, and possibly some sort of ratio between users who installed the application and those who removed it.
I was having trouble with these limits when I was developing the part of my application that handles user invites. After testing the application by sending requests and accepting/declining them quite a bit, I noticed some limitations being enforced, before my application even went live!
That was when I learned about sandbox mode in the application settings panel. When an application is in sandbox mode, none of these calculations and limitations are enforced, so you can test your invitation systems without worrying about being deemed a "bad" application. In addition, only those users you have granted access via the developer app roles tab ( https://developers.facebook.com/apps/YOUR_APP_ID/permissions ) will be able to see and use the application. Another thing to note is that invites sent from an application in sandbox mode will not be received by users who have not been granted access.
Facebook rates these limits on a per-application level. At F8 in 2009, people asked about these limits and how they are calculated, but everyone was shot down; the algorithm is very hush-hush. The best way to get a limit raised is by being a good app, and the way to get it lowered is by being a bad app. Facebook uses a lot of user feedback to help determine whether you're being naughty or nice.
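For completeness, here is a minimal sketch of detecting this specific error when posting through the Graph API, using Python's requests library. The backoff helper is illustrative, not Facebook's documented remedy; the real fix is having the limit raised, as described above.

    # Hedged sketch: back off when the Graph API returns error code 341
    # ("Feed action request limit reached").
    import time
    import requests

    def post_feed(user_id: str, token: str, message: str, retries: int = 3):
        url = "https://graph.facebook.com/%s/feed" % user_id
        for attempt in range(retries):
            body = requests.post(
                url, data={"message": message, "access_token": token}
            ).json()
            if body.get("error", {}).get("code") == 341:
                time.sleep(60 * (attempt + 1))  # wait and retry later
                continue
            return body
        raise RuntimeError("feed action limit still in effect after retries")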
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
As an example, see the reference documentation for one of paypal's APIs:
http://www.paypalobjects.com/en_US/ebook/PP_NVPAPI_DeveloperGuide/Appx_fieldreference.html#2824913
The question is, why do they need it? Doesn't the server get it as part of the HTTP protocol?
UPDATE: Just realized the example I gave wasn't so good. I'm talking about instances where the client is talking directly to the web service. I'll close the question.
I'm not sure about PayPal specifically, but one use case for a service requiring the client's IP is fraud detection (too many requests coming from the same end user) when the source IP on the packets belongs to an aggregator rather than to the end users themselves. Perhaps the aggregator has NATted clients behind it (possibly mobile devices, who knows). The server will then want the aggregator to send it the IPs of its clients.
There may be other cases; this is the only one I know of.
They want to be able to identify the end user, usually to protect both you and them from abuse: to detect fraud attempts (too many requests coming from the same IP) and to be able to find the culprit after the fact (in the case of criminal activity, ISPs in many countries are required to reveal user information for an IP to the investigating authorities).
Of course you could do the logging yourself, but considering the general state of security awareness on the internet, I understand that they're not trusting you to do it well enough.
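A minimal sketch of that aggregator scenario: a backend calling a payment API on behalf of its own users passes the end user's IP along explicitly, since the provider otherwise only sees the backend's source address. The endpoint and field names here are hypothetical placeholders (PayPal's NVP API exposes a similar IPADDRESS field).

    # Hedged sketch: forward the end user's IP to a third-party API.
    import requests

    API_URL = "https://api.payment-provider.example/charge"  # placeholder

    def charge_on_behalf_of(user_ip: str, amount: str) -> str:
        payload = {
            "AMT": amount,          # illustrative field names
            "IPADDRESS": user_ip,   # the end user's IP, not this server's
        }
        return requests.post(API_URL, data=payload, timeout=10).text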