Question
If client A is using the TLS 1.2 protocol and client B is using the SFTP protocol (which runs over SSH),
can client A and client B send files between each other without a security conflict, or do they need to use the same protocol?
(SFTP is a layer on top of the SSH protocol)
The SSH protocol and TLS have nothing to do with each other. Each is a layer sitting on top of TCP. Both provide the same function: to create a secure channel/tunnel for the communication of arbitrary byte streams.
If a client is "speaking" TLS, then it must be talking to a server "speaking" the server-side of the TLS protocol.
Likewise, if a client is "speaking" SSH, it can only be talking to a server speaking SSH. This is the meaning of "protocol" -- a well-defined set of rules for communications. A client speaking one protocol cannot communicate with a server speaking a different protocol. An FTP client cannot speak FTP with a server speaking the IMAP protocol. It would make no sense, just like it makes no sense for a TLS client to be speaking with an SSH server.
It is possible, however, to tunnel TLS through SSH.
See: https://www.example-code.com/csharp/socket_tlsSshTunnel.asp
Or you can tunnel other protocols through SSH.
See: https://www.example-code.com/csharp/sshTunnel.asp
You can also do other things, like run SFTP through an HTTP proxy:
https://www.example-code.com/csharp/sftp_http_proxy.asp
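The linked examples are C#; as an illustration of the same idea in Java, here is a minimal sketch of local port forwarding with the JSch library (my library choice, not from the linked pages; hosts, credentials, and ports are placeholders). Any TLS client that connects to the forwarded local port is carried through the encrypted SSH channel:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsOverSshTunnel {
    public static void main(String[] args) throws Exception {
        // Open an SSH session to the tunnel host (placeholder credentials).
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "ssh.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only
        session.connect();

        // Forward local port 8443 through SSH to the TLS server's port 443.
        int localPort = session.setPortForwardingL(8443, "tls-server.internal", 443);

        // Speak TLS through the tunnel: the TLS handshake runs end to end
        // while its bytes travel inside the SSH channel.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("127.0.0.1", localPort)) {
            socket.startHandshake();
            System.out.println("TLS inside SSH: " + socket.getSession().getCipherSuite());
        }
        session.disconnect();
    }
}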
Related
I work on a remote server via SSH. I ran a service locally on that remote server, but how can I hit the service's APIs from Postman on my local machine?
I am able to make curl requests from the remote server, but I am not able to do the SSH tunneling in Postman. What steps should I follow?
Both SSH and HTTP are protocols for communication between a client and a server. The basic difference between SSH and HTTP:
I guess you know, but just for others/clarification: SSH means "Secure Shell". It has a built-in username/password authentication system to establish a connection, and it uses port 22 to perform the negotiation and authentication for the connection. Authentication with the remote system can also be done by providing a public key from your machine.
The default port most web servers listen on for requests is either port 80 for HTTP or port 443 for HTTPS.
To make it work
You can either expose a port on your remote server by defining a firewall rule (even though 80 should probably be open) and make your server listen for incoming requests on that port.
OR
if you don't want to make it publicly available, put both your remote server and your local machine in the same VPN network -- your server still needs to listen for HTTP requests on some port, though.
If you are not using some kind of reverse proxy, make sure to specify the port you are contacting the server on, e.g. http://localhost:8080
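For completeness (the usual approach, assuming your remote service listens on, say, port 8080; the port numbers here are placeholders): open a local forward with your SSH client, e.g. ssh -L 9090:localhost:8080 user@remote-server, and then point Postman at http://localhost:9090. Postman itself needs no tunneling support, because the tunnel is transparent to it.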
I have a Java application configured with some self signed certificates that communicates with ActiveMQ version 5.13.0 over SSL/TLS version 1.2. The relevant self signed certificates reside in their respective keystores and truststores. This connection over TLS works just fine on my local Windows machine, clients without the proper certificates are unable to communicate with the broker and clients with the proper certificates can.
However, this does not work when the same code and keystores are used on an AWS EC2 instance. I have the same version of ActiveMQ installed there and am using the very same keystores and truststores on the broker and client side. Clients without any certificates configured are able to connect to the broker and communicate.
I would like to understand if SSL/TLS for ActiveMQ must be configured differently on a Linux machine or if there is something else that I am missing.
Snippets from the activemq.xml file that enable ActiveMQ to use SSL/TLS:
<managementContext>
    <managementContext createConnector="false"/>
</managementContext>

<sslContext>
    <sslContext keyStore="file:${activemq.base}/conf/broker.ks"
        keyStorePassword="changeit"
        trustStore="file:${activemq.base}/conf/broker.ts"
        trustStorePassword="changeit"/>
</sslContext>

<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxInactivityDuration=300000&amp;wireFormat.maxFrameSize=104857600&amp;jms.messagePrioritySupported=false"/>
    <transportConnector name="ssl" uri="ssl://0.0.0.0:61714?transport.enabledProtocols=TLSv1.2"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Answering my own query.
I handle the Java client, and that client connects to port 61714, which is designated for SSL.
The folks dealing with the IoT device side told me that these devices default to port 1883 for MQTT connections and port 8883 for secure MQTT connections.
This can be configured by adding the line below to the transport connectors:
<transportConnector name="mqtt+ssl" uri="mqtt+ssl://0.0.0.0:8883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
The device has some constraints due to which it cannot connect to an SSL port and publish MQTT messages. The Java client on the other hand has no issues connecting to the SSL port and publishing and consuming MQTT messages, so adding the above line resolved this.
If needed, one could comment out the transport connector for port 1883 so that no clients without the needed certificates are able to connect to the MQTT broker.
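For anyone wiring up the Java client side, connecting to the SSL connector looks roughly like this with ActiveMQSslConnectionFactory from the activemq-client library -- a minimal sketch, with placeholder broker address, store paths, passwords, and queue name rather than the actual setup:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQSslConnectionFactory;

public class SslClientSketch {
    public static void main(String[] args) throws Exception {
        // Point the factory at the broker's TLS connector (placeholder host).
        ActiveMQSslConnectionFactory factory = new ActiveMQSslConnectionFactory();
        factory.setBrokerURL("ssl://broker.example.com:61714");

        // Client-side keystore/truststore, mirroring the broker's sslContext.
        factory.setKeyStore("file:client.ks");
        factory.setKeyStorePassword("changeit");
        factory.setTrustStore("file:client.ts");
        factory.setTrustStorePassword("changeit");

        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
            producer.send(session.createTextMessage("hello over TLSv1.2"));
        } finally {
            connection.close();
        }
    }
}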
I have an app which uses a backend (a REST web service) on a public server. Currently I am using 8080 as the incoming port and asked myself if this is correct. In theory I could choose almost any port, but it is advisable to use a non-reserved port.
I once heard that calling a web service on an "exotic" port could be blocked in a public WLAN due to firewall/proxy rules. Could that really happen?
Would it make sense to use port 443 for the web service? (I use an SSL certificate on my backend.)
This concept is pretty difficult to tackle; there are a lot of options when considering networked services. I'd advise against using a well-known port for your web service in general, although in the case of REST there is a case to be made.
As you mentioned, obscure port numbers can be blocked inside certain networks by strict sysadmins. Operating your service over TLS on port 443 is a secure and reliable way to access your API from within such a network.
Given that REST is an HTTP(S) API, and that port 443 is designated for HTTPS traffic, using 443 for an HTTPS REST API seems appropriate.
TL;DR: It's okay to use the well-known HTTP(S) ports, 80 and 443, for your REST API.
I hope you can help.
My CloudWatch example is below.
(image capture: SSH connection flow logs involving 172.0.0.10)
As you can see, CloudWatch logs both request and response packets.
In this case, everyone knows that packets showing 22 as the destination port are response packets, because port 22 is the well-known SSH server port.
However, if it is not a well-known port number, you will not be able to distinguish between request and response packets. How do you distinguish them in that case? The CloudWatch log alone does not show me how. No matter how I google it, I cannot find a way. Please advise.
In this case, everyone knows that packets showing 22 as the destination port are response packets, because port 22 is the well-known SSH server port.
That's not actually correct. It's the opposite.
The server side of a TCP connection uses the well-known port, not the client,¹ and thus the well-known port is the destination of a request and the source of a response.
Packets with the source port of 22 would be the SSH "response" (server → client) packets. Packets with the destination port of 22 would be the SSH "request" (client → server) packets.
When I make a request to a web server, my source port is ephemeral but the destination port is 80. Responses come from source port 80.
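You can see this from the client side with a few lines of code; a minimal Java sketch (the host is a placeholder):

import java.net.Socket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a server on its well-known port (placeholder host).
        try (Socket socket = new Socket("example.com", 80)) {
            // The destination port is the fixed, well-known one...
            System.out.println("destination port: " + socket.getPort());
            // ...while the OS picks an ephemeral source port, e.g. 51234.
            System.out.println("source port: " + socket.getLocalPort());
        }
    }
}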
But of course, the argument can be made that the terms "request" and "response" don't properly apply to packets, but rather to what a packet contains -- and that is protocol-specific. In many cases, the client does the requesting and the server does the responding, but that correlation does not cleanly map down to the low layers of the protocol stack.
In the case of TCP, one side is always listening for connections, usually on a specific port, and that port is usually known to you, if not a "well-known" port, because you are the one who created the service and configured it to listen there.
As these flow log records do not capture the flags that are needed to discern the source and dest of the SYN... SYN+ACK... ACK sequence, you can't ascertain who originated the connection.
Even with no knowledge of the well-known-ness or other significance of port 22, it is still easy to conclude from your logs that 172.0.0.10 has a TCP socket listening on that port and that numerous other clients are connecting to it from their ephemeral ports... and you can confirm that it is listening by running netstat -tln on that machine.
¹ Not the client, most of the time. There are cases where a server daemon is also a client and will use the well-known port as its source port for outgoing connections, so source and destination might be the same in such a case. I believe Sendmail might be an example of this, at least in some cases, but these are exceptions.
I am looking to add VPN support to my software.
I know PPTP and OpenVPN; both create a system-wide binding, installing a TAP driver so that all applications route their traffic through them.
How could I implement VPN support for just my application? Is there any library, example, hint, or way to do it?
My software is written in C++/MFC, using the standard CAsyncSocket.
Forwarding incoming connections to your application is relatively easy:
stunnel allows you to forward traffic to specific ports through an SSL tunnel. It requires that you run it on both ends, though.
Most decent SSH clients, such as OpenSSH or PuTTY also support port forwarding, with the added advantage that any remote SSH server can usually act as the other end of the tunnel without any modifications.
You can also use OpenVPN and other VPN solutions, but this requires specific forwarding rules to be added to the remote server.
Forwarding outgoing connections, though, is trickier without modifying your application. The proper way to do it is to implement the SOCKS protocol, preferably SOCKS5. Alternatively, you can use an external application, such as FreeCap, to redirect any connections from your application.
After you do that, you can forward your connections to any SOCKS server. Most SSH clients, for example, allow you to use the SOCKS protocol to route outgoing connections through the remote server.
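To illustrate the per-application idea (a sketch in Java rather than C++/MFC, just to show the shape of it; the proxy could be one opened with OpenSSH's -D dynamic forwarding): the socket library performs the SOCKS handshake, and only the sockets you mark go through the proxy -- no TAP driver, no system-wide binding.

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.Socket;

public class SocksClientSketch {
    public static void main(String[] args) throws Exception {
        // A local SOCKS5 proxy, e.g. opened with "ssh -D 1080 user@host".
        Proxy socksProxy = new Proxy(Proxy.Type.SOCKS,
                new InetSocketAddress("127.0.0.1", 1080));

        // Only this socket is routed through the proxy; the rest of the
        // system is unaffected.
        try (Socket socket = new Socket(socksProxy)) {
            socket.connect(new InetSocketAddress("destination.example.com", 80));
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.0\r\nHost: destination.example.com\r\n\r\n".getBytes());
            out.flush();
        }
    }
}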
As a sidenote, OpenVPN servers do not necessarily become the default gateway for all your traffic. Some do push such a route table entry to the clients, but it can be changed. In my own OpenVPN setup I only use the VPN to access the private network and do not route everything through it.
If you can force your application to bind all outgoing sockets to one or more specific ports, you could use IP filtering rules on your system to route any connections from those ports through the VPN.
EDIT:
Tunneling UDP packets is somewhat more difficult. Typically you need a proxy process on both the remote server and the local client that will tunnel incoming and outgoing connections through a persistent TCP connection.
Your best bet would be a full SOCKS5 client implementation in your application, including the UDP-ASSOCIATE command for UDP packets. Then you will have to find a SOCKS5 proxy that supports tunnelling.
I have occasionally used Delegate, which seems to be the Swiss Army knife of proxies. As far as I know, it supports the UDP-ASSOCIATE command in its SOCKS5 implementation, and it also supports connecting two Delegate processes through a TCP connection. It is also available for both Linux and Windows. I don't remember if it can also encrypt that TCP connection, but you could always tunnel it through stunnel or SSH if you need to.
If you have system administrator rights on a remote VPN server, however, you could probably have a simpler set-up:
Have your P2P application bind its outgoing UDP sockets to the client VPN interface (see the sketch after this list). You may need to set up a secondary default route for that interface. This way your application's outgoing packets will go through the remote server.
Have the remote server forward incoming UDP packets to specific ports through the VPN connection back to you.
This should be a simpler set-up, although if you really care about anonymity you might be interested in ensuring your P2P application does not leak DNS or other requests that can be tracked.
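For the first point above, binding an outgoing UDP socket to the VPN interface's address is straightforward; here is a minimal Java sketch with a placeholder VPN address and peer (the same idea applies to CAsyncSocket::Create with a bind address). Note that the OS route table still decides the egress interface, which is why the secondary route mentioned above may be needed:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class VpnBoundUdpSketch {
    public static void main(String[] args) throws Exception {
        // Bind to the VPN interface's local address (placeholder); port 0
        // lets the OS pick an ephemeral port on that interface.
        try (DatagramSocket socket =
                new DatagramSocket(new InetSocketAddress("10.8.0.2", 0))) {
            byte[] payload = "ping".getBytes();
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("peer.example.com"), 9000));
        }
    }
}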
Put SSH connectivity in your app or use SSL. You'll have to use a protocol/service instead of VPN technology. Good luck!
I think you simply need SSL: http://www.openssl.org/
OpenVPN is based on SSL, but it is a full VPN.
The question is: what do you need? If you need encryption (an application-private connection) and not a VPN (virtual private network), go for SSL.
Hints can be found here:
Adding SSL support to existing TCP & UDP code?
http://sctp.fh-muenster.de/dtls-samples.html
http://fixunix.com/openssl/152877-ssl-udp-traffic.html
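In the spirit of the first link (adding SSL to existing TCP code), the Java/JSSE version of the idea is to wrap an already-connected plain socket in TLS -- a minimal sketch with placeholder names:

import java.net.Socket;

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsUpgradeSketch {
    public static void main(String[] args) throws Exception {
        // An existing plain TCP connection (placeholder host/port).
        Socket plain = new Socket("server.example.com", 8443);

        // Layer TLS on top of the connected socket; the final "true"
        // closes the underlying socket when the TLS socket is closed.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket tls = (SSLSocket) factory.createSocket(
                plain, "server.example.com", 8443, true)) {
            tls.startHandshake();
            System.out.println("negotiated: " + tls.getSession().getProtocol());
        }
    }
}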