I have NAT configured to run when loading up my favorite Linux distribution in VirtualBox. This allows outgoing connections to work successfully.
How do I allow incoming connections to this box, like, say, web traffic? The IP address is 10.0.2.15. A ping from my main box times out.
VirtualBox (after version 1.3.8, anyway) will let you map incoming connections in the NAT configuration. There's an excellent tutorial on Aviran's Place that describes the steps to configure port mapping.
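In recent VirtualBox versions you can also add such a NAT port-forwarding rule from the command line with VBoxManage; a minimal sketch, assuming a VM named "MyLinuxVM" (run while the VM is powered off) and a web server on guest port 80:
VBoxManage modifyvm "MyLinuxVM" --natpf1 "guestweb,tcp,,8080,,80"
After that, browsing to http://localhost:8080 on the host should reach the web server at 10.0.2.15:80 inside the guest.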
I'm trying to get the TCP timestamp from packets for clock-skew purposes in my application, which is hosted on EC2. In my network I have an ALB.
So my question is: how do I get TCP-level packet information in my app, given that the ALB filters out all the OSI layers except the application level (HTTP)?
If the only reason to get access to the TCP packets is to detect timestamps and correct clock drift, I would suggest configuring your EC2 instance to use an NTP time server instead.
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
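For example, on an instance running chrony you can point it at the link-local Amazon Time Sync endpoint described in that post; a minimal sketch, assuming chrony is installed and its config lives at /etc/chrony.conf:
# /etc/chrony.conf: prefer the Amazon Time Sync Service endpoint
server 169.254.169.123 prefer iburst
Then restart the daemon and verify:
sudo systemctl restart chronyd
chronyc tracking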
That being said, the ALB is not "removing" TCP information from network packets. HTTP connections made to your application are still transported over IP and TCP. If you need low-level access to network packets from an app, I would suggest looking at the libpcap library, which is used by tcpdump and many other tools to capture network traffic on an interface.
https://www.tcpdump.org/
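To see the TCP timestamp option on the wire without writing any pcap code, you can run tcpdump in verbose mode on the instance itself; a quick illustration (interface and port are placeholders):
sudo tcpdump -i eth0 -nn -v 'tcp port 80'
The output prints each segment's TCP options, including the timestamp values (TS val / TS ecr), which is the same information you would extract programmatically with libpcap.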
[UPDATED to include comments]
It is important to understand that the TCP connection between your client and the ALB is terminated at the ALB. The ALB creates a second TCP connection to forward HTTP requests to your EC2 instance. The ALB does not remove information from TCP/IP; it just creates a second, independent, new connection. Usually the only information you want to propagate from the initial TCP connection is the source IP address. The ALB, like most load balancers and proxies, captures this information from the original connection (the one received from the client) and embeds it in an HTTP header called X-Forwarded-For.
This is documented at https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
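For illustration, a request that traversed the load balancer arrives at your instance with a header along these lines (the address is made up; each proxy in the chain appends an entry):
X-Forwarded-For: 203.0.113.7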
If you want to capture other information from the original connection, I am afraid it will not be possible with an ALB (though I would also be very curious about the use case, i.e. WHAT you're trying to achieve).
I am relatively new to AWS and I've been looking at quite a few tutorials for the past couple of days, trying to figure out how to make my AWS Ubuntu instance accessible from the browser.
What I've done:
1st: I configured security groups to accept all traffic for SSH, HTTP, and HTTPS, just to see if the public DNS listed in the instance is accessible.
2nd: I changed the IP of my instance to an Elastic IP.
3rd: I wrote a simple node.js file that listens on port 9000 and console.logs 'hello world'.
For some reason SSH works, and I can run my node.js file, but again I cannot access the remote instance from the browser.
Any help would be greatly appreciated since I've been on this for a couple of days
Thanks!
Thank you everyone for the quick responses!
My issue was I did not include a TCP rule to my specific port. Now I am able to access that port via ec2-DNSNAME:9123.
And, just to clarify, if I want that DNS name reachable by all traffic, I should specify 'Anywhere' as the source for the TCP rule, correct?
I configured security groups to accept all traffic for ssh, http, https
In security groups, "HTTP" does not mean "HTTP on any port"... it means "any traffic on TCP port 80" -- 80 being the standard IANA-assigned port for HTTP.
Security groups are not aware of the type of traffic you are passing, only the IP protocol (e.g. TCP, UDP, ICMP, GRE, etc.), the port number (for protocols that use port numbers), and any protocol-specific information (such as ICMP message types).
You need a rule allowing traffic to port 9000.
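For instance, the rule can be added from the AWS CLI; a sketch, assuming your security group ID is sg-0123456789abcdef0 (substitute your own, and narrow the CIDR if you don't want the port open to the whole internet):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9000 --cidr 0.0.0.0/0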
First, go to your EC2 instance and see if curl http://localhost:9000 works (the port your app listens on).
Also, if you are exposing your Node.js app on port 9000, did you open port 9000 in the security group as well?
A few things to check:
Security groups
Subnet NACLs (these can function as a subnet-level firewall, but unless you've messed with these they should allow all traffic)
On the server, if you run netstat -na | grep <PORT>, do you see your application listening on the correct port?
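If the app is bound to all interfaces, the output should look roughly like this (illustrative, for a Node.js server on port 9000):
$ netstat -na | grep 9000
tcp6       0      0 :::9000                 :::*                    LISTEN
If you see 127.0.0.1:9000 instead, the app is only listening on localhost and will not be reachable from outside.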
You may also check your system for a firewall that could be short-circuiting the requests.
If the above doesn't point you towards the issue, you can grab tcpdump and filter it just for requests coming from your web browser (e.g. after installing it: tcpdump -vvn host 10.20.30.40 and port 8000 -- substitute your IP and port). This will let you know whether you're running into a network issue (packets aren't reaching the server) or whether it's something with the app.
I'd also recommend using IP addresses while doing your initial troubleshooting. That way you can establish that it is not a network/server configuration issue before going into DNS.
I have created a VPC with public and private subnets on AWS. All app servers are in private subnets and all outbound requests have to be through an internet-facing NAT instance.
At the moment, our project requires the app servers to access a ftp server provided by a service provider.
I have tried several ways to manage that, but with no luck. What I did was open a port range, say 40000-60000, on both the NAT and app security groups, as well as the standard FTP ports 20-21.
User authentication passes, but I cannot list directory contents from the app servers.
I am able to access the FTP server from the NAT instance, no problem at all.
So what should I do to make it work?
@JohnRotenstein is absolutely correct that you should use Passive FTP if you can. If, like me, you're stuck with a client who insists that you use Active FTP because the FTP site they want you to connect to has been running since 1990 and changing it now is completely unreasonable, then read on.
AWS's NAT servers don't support a machine in a private subnet connecting using Active FTP. Full stop. If you ask me, it's a bug, but if you ask AWS support they say it's an unsupported feature.
The solution we finally came up with (and it works) is to:
Attach an Elastic Network Interface (ENI) in a public subnet to your EC2 instance in the private subnet
So now your EC2 instance has 2 network adapters, 2 internal IPs, etc.
Let's call this new ENI your "public ENI"
Attach a dedicated elastic IP to your new public ENI
Let's assume you get 54.54.54.54 and the new public ENI's internal IP address is 10.1.1.10
Add a route in your operating system's networking configuration to only use the new public ENI
In Windows, the command will look like this, assuming the evil active FTP server you're trying to connect to is at 8.1.1.1:
route add 8.1.1.1 mask 255.255.255.255 10.1.1.1 metric 2
This adds a host route saying that all traffic to the FTP server at 8.1.1.1 (subnet mask 255.255.255.255, i.e. this IP and only this IP) should go via the gateway 10.1.1.1, which is reached through your second NIC.
Fed up yet? Yeah, me too, but now comes the hard part. The OS doesn't know the public IP address of the public ENI. So you need to teach your FTP client to send the PORT command with the public IP. For example, if using cURL, use the --ftp-port option like so:
curl -v --ftp-port 54.54.54.54 ftp://8.1.1.1 --user myusername:mypass
And voila! You can now connect to a nightmare active FTP site from an EC2 machine that is (almost entirely) in a private subnet.
Try using Passive (PASV) mode on FTP.
From Slacksite: Active FTP vs. Passive FTP, a Definitive Explanation:
In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20.
Thus, the traffic is trying to communicate on an additional port that is not passed through the NAT. Passive mode, instead, creates an outbound connection, which will be permitted through the NAT.
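To illustrate, with cURL (reusing the example addresses from the answer above) you can request passive mode explicitly; cURL actually defaults to passive for FTP, so the flag mainly documents the intent:
curl -v --ftp-pasv ftp://8.1.1.1/ --user myusername:mypass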
I am trying to set up an Elastic Load Balancer to route requests to a cluster of node.js servers running Primus.io with SockJS to manage real-time communications.
I have set up the load balancer to listen with the following configuration:
HTTPS 8084 -> HTTPS 8084 (The port used on my node.js servers)
SSL 443 -> TCP 80
My understanding is that the only way to get websockets to work through ELB is via SSL->TCP, hence the above configuration.
I have correctly enabled the new proxy protocol for ELB as described here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
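For reference, on a classic ELB this is done roughly as follows with the AWS CLI; a sketch, assuming a load balancer named my-elb and a backend instance port of 80 (substitute your own values):
aws elb create-load-balancer-policy --load-balancer-name my-elb --policy-name EnableProxyProtocol --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb --instance-port 80 --policy-names EnableProxyProtocol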
When trying to connect to the server from a client, an HTTPS request is initially sent and then, from what I can gather, it should be upgraded to websockets. But the request simply fails when I send it to the load balancer address.
If I send the initial Primus connection request to the ip of a single nodejs server like so:
var primus = new Primus('https://ip.address.of.single.server:8084');
The request is correctly returned and is upgraded to websockets correctly.
When I switch the IP address to that of the balancer, it fails and the initial HTTPS request to the node.js server returns nothing. I assume this means the websocket connection could not be established, but to be honest I have little experience in this area, so I could be completely wrong.
Does anyone have any idea what I am doing wrong?
Thanks in advance
Have you clustered your Node.js instances? For example, if you use Socket.IO you should use a clustered session store. I'm actually also currently investigating the same thing with SockJS running on top of Vert.x.
The underlying problem is that Amazon ELB won't remember any previous forwards (in contrast to sticky sessions on top of HTTP), which means that connections at the TCP level can be forwarded to any of the cluster's nodes. Yes, a single TCP channel would be okay. But frameworks like Socket.IO do a little bit more to support sessions (which do not exist in WebSockets) and multiple transport layers (HTTP polling, WebSockets, and so on).
I am looking to add VPN support to my software.
I know PPTP and OpenVPN; both make a system-wide binding, installing a TAP driver so that all applications route their traffic through it.
How could I implement VPN support for just my application? Is there any library, example, hint, or way to do it?
My software is written in C++/MFC, using the standard CAsyncSocket.
Forwarding incoming connections to your application is relatively easy:
stunnel allows you to forward traffic to specific ports through an SSL tunnel. It requires that you run it on both ends, though.
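A minimal sketch of the client-side stunnel configuration, assuming your app connects to local port 5000 and the remote stunnel instance listens on port 5443 (all names and ports are placeholders):
; stunnel.conf on the client machine
[myapp]
client = yes
accept = 127.0.0.1:5000
connect = remote.example.com:5443
Your application then connects to 127.0.0.1:5000 and the traffic crosses the network encrypted.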
Most decent SSH clients, such as OpenSSH or PuTTY also support port forwarding, with the added advantage that any remote SSH server can usually act as the other end of the tunnel without any modifications.
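With OpenSSH, for example, one command sets up either direction; a sketch with placeholder hosts and ports:
# publish your local app (port 5000) on the remote server's port 5000
ssh -R 5000:localhost:5000 user@remote.example.com
# or pull a remote service back to your local machine
ssh -L 5000:localhost:5000 user@remote.example.com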
You can also use OpenVPN and other VPN solutions, but this requires specific forwarding rules to be added to the remote server.
Forwarding outgoing connections, though, is trickier without modifying your application. The proper way to do it is to implement the SOCKS protocol, preferably SOCKS5. Alternatively, you can use an external application, such as FreeCap, to redirect any connections from your application.
After you do that, you can forward your connections to any SOCKS server. Most SSH clients, for example, allow you to use the SOCKS protocol to route outgoing connections through the remote server.
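For instance, OpenSSH can act as that SOCKS server via dynamic forwarding; a sketch (host is a placeholder):
# start a local SOCKS5 proxy on port 1080 that tunnels through the remote server
ssh -D 1080 user@remote.example.com
Your application (or FreeCap on its behalf) then sends its connections to the SOCKS5 proxy at 127.0.0.1:1080.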
As a sidenote, OpenVPN servers do not necessarily become the default gateway for all your traffic. Some do push such a route table entry to the clients, but it can be changed. In my own OpenVPN setup I only use the VPN to access the private network and do not route everything through it.
If you can force your application to bind all outgoing sockets to one or more specific ports, you could use IP filtering rules on your system to route any connections from those ports through the VPN.
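On Linux that idea can be sketched with an iptables mark plus a policy-routing table; this assumes the app binds its outgoing sockets to source port 5000 and the VPN interface is tun0 (both placeholders):
# mark packets the app sends from source port 5000
iptables -t mangle -A OUTPUT -p tcp --sport 5000 -j MARK --set-mark 0x1
# send marked packets out through the VPN via routing table 100
ip rule add fwmark 0x1 table 100
ip route add default dev tun0 table 100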
EDIT:
Tunneling UDP packets is somewhat more difficult. Typically you need a proxy process on both the remote server and the local client that will tunnel incoming and outgoing connections through a persistent TCP connection.
Your best bet would be a full SOCKS5 client implementation in your application, including the UDP-ASSOCIATE command for UDP packets. Then you will have to find a SOCKS5 proxy that supports tunnelling.
I have occasionally used Delegate, which seems to be the Swiss Army knife of proxies. As far as I know, it supports the UDP-ASSOCIATE command in its SOCKS5 implementation, and it also supports connecting two Delegate processes through a TCP connection. It is also available for both Linux and Windows. I don't remember if it can also encrypt that TCP connection, but you could always tunnel that one through stunnel or SSH if you need to.
If you have system administrator rights on a remote VPN server, however, you could probably have a simpler set-up:
Have your P2P application bind its outgoing UDP sockets to the client VPN interface. You may need to set up a secondary default route for that interface. This way your application's outgoing packets will go through the remote server.
Have the remote server forward incoming UDP packets on specific ports through the VPN connection back to you.
This should be a simpler set-up, although if you really care about anonymity you might be interested in ensuring your P2P application does not leak DNS or other requests that can be tracked.
Put SSH connectivity in your app or use SSL. You'll have to use a protocol/service instead of VPN technology. Good luck!
I think you simply need SSL: http://www.openssl.org/
OpenVPN is based on SSL - but it is a full VPN.
The question is: what do you need? If you need encryption (an application-private connection) and not a VPN (virtual private network), go for SSL.
Hints can be found here:
Adding SSL support to existing TCP & UDP code?
http://sctp.fh-muenster.de/dtls-samples.html
http://fixunix.com/openssl/152877-ssl-udp-traffic.html