I have a self-hosted WCF service on a PC in the office. The service address is the PC's IPv4 address plus the port plus /service/ERP.
So that the client application can reach it from inside the LAN, it uses the server's address as mentioned above; to reach it from outside the LAN, I use the public IPv4 address or a URL from some DNS provider, and on the router I forward the port to the PC where the service is hosted. So far everything works perfectly. The problem is that the notebooks must access the service from inside the LAN and sometimes from outside, but if I use the public IPv4 address from within the network I cannot reach the service.
My current solution is to have two configurations (two icons), one for internal access and one for external access, and the user picks one depending on their location. But I would like a single configuration that somehow detects whether the client is on the LAN or outside, or, failing that, some way to reach the service from inside the LAN using the public IPv4 address.
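One way to collapse this into a single configuration is to probe the internal address first and fall back to the public one if it does not answer. Below is a minimal sketch of that idea in Python; the addresses, port, and DDNS name are placeholders, and the same probe-and-fallback logic can be ported into the WCF client itself.

    import socket

    LAN_HOST = "192.168.1.50"          # internal IPv4 of the PC hosting the service (assumed)
    PUBLIC_HOST = "myoffice.ddns.net"  # public IPv4 or DDNS name used from outside (assumed)
    PORT = 8080                        # port the WCF service listens on (assumed)

    def reachable(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Probe the internal address first; if it answers we are on the LAN,
    # otherwise fall back to the public address behind the router's port forward.
    host = LAN_HOST if reachable(LAN_HOST, PORT) else PUBLIC_HOST
    service_url = "http://{}:{}/service/ERP".format(host, PORT)
    print("Using endpoint:", service_url)

The probe adds at most the timeout to startup when the client is outside the LAN, which is usually an acceptable trade-off for having a single icon.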
I have whitelisted my mobile data IP address in the Load Balancer security group and I can access my application. But my mobile data IP address keeps changing when I am travelling, and I cannot keep whitelisting my new IP address every time in order to reach the mobile application's backend server running on EC2.
So, how can this situation be tackled?
One suggestion to tackle this: open the service that your mobile app uses to the public internet (instead of restricting it to the mobile device's IP) and add some kind of authentication to your service so that only your mobile app can access it.
For example, send a user ID/password in your service call that only your mobile app knows, and validate the user ID and password server-side, so no one else can access the service.
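As a rough illustration of that idea, here is a minimal server-side sketch in Python/Flask; the endpoint name and credentials are invented for the example, and the real service would do the equivalent check in whatever stack it already runs on.

    import hmac
    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)

    # Credentials shared only with the mobile app (illustrative; store these securely).
    APP_USER = "mobile-app"
    APP_PASSWORD = "change-me"

    @app.route("/api/data")
    def data():
        auth = request.authorization  # HTTP Basic credentials sent by the app
        if (auth is None or auth.username != APP_USER
                or not hmac.compare_digest(auth.password or "", APP_PASSWORD)):
            abort(401)  # anyone without the shared credentials is rejected
        return jsonify({"status": "ok"})

With something like this in place, the security group can allow the port from 0.0.0.0/0, because the credential check (ideally over HTTPS) gates access rather than the caller's IP.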
I've created a server using C++ and Crow that uses WebSockets to communicate with the client (which is an Ionic app). I've been doing everything through localhost, but now I want to deploy the web app to my iPhone and have it communicate with the server. How do I get the URL of the server for the client to use in its WebSocket so it can talk to the server?
In most cases, when you want to host it for production, you would deploy it to a hosting provider (e.g. Azure, AWS, Heroku...).
Once you set up a server with the hosting provider they will provide you with the IP address and/or a URL to connect to the hosted service which you can use in your application.
Well, if your server is a web host, VPS, cloud, or dedicated server, it has a static IP address which you can use as the address in your client's WebSocket. A better approach is to point a domain name at the server's IP address so that your client can always find the server.
If you are instead trying to connect to your local machine behind a router or modem, your server is probably behind NAT. Find the appropriate port-forwarding configuration for your router and forward the public TCP port to your local address, then use your public IP address as the WebSocket address (see "what is my IP").
Also, if your public IP address is dynamic and might change over time, services like noip.com let you create a free domain name to use in your client, so it can always find the right IP address.
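For illustration, here is a minimal client sketch using the Python websockets package; the DDNS name and port are placeholders, and an Ionic app would do the same thing with its own WebSocket API, just pointing at the hostname instead of localhost.

    import asyncio
    import websockets  # third-party package: pip install websockets

    # DDNS name pointing at your public IP, with the router forwarding this port
    # to the machine running the Crow server (both values assumed).
    SERVER_URL = "ws://myserver.ddns.net:8080"

    async def main():
        async with websockets.connect(SERVER_URL) as ws:
            await ws.send("hello")
            print(await ws.recv())

    asyncio.run(main())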
I have just started with AWS EC2.
I have deployed a simple web app on EC2 which listens on port 12345.
After SSH-ing into the instance from my Windows machine using PuTTY, I ran curl against the app's endpoint from inside the instance; with localhost, the private IP address, and the public DNS name it works fine. It does not work with the public IPv4 address, though.
I now want to make the app accessible from the internet via a browser. But when I use the public DNS name, or the public IPv4 address along with the port, it is not accessible; I only get the message "This site can't be reached" in Chrome.
I have an inbound rule set up for the security group associated with the instance, which allows all traffic, on all protocols, on all ports (0-65535), with the Source set to Custom and an IP range of 0.0.0.0/0.
I have also added another rule with the same attributes, except that the Source is Custom with an IP range of ::/0.
Can someone advise me on the right way to do this?
Basically, my problem is that I want to do the following things:
Develop a web service on my WorkSpace
Then give a demo of that web service from my AWS WorkSpace via a public IP like
http://172.23.0.1:8090
I want an IP for my WorkSpace through which the web app or web service hosted locally on that machine can be accessed from anywhere on the internet.
Is that possible? If not, please tell me an alternative.
Below are the steps that you should follow:
Select Assign Public IP while creating the new instance
In the assigned Security Group's settings, open port 8090 for 0.0.0.0/0 (meaning accessible from anywhere) with the protocol you will be using (TCP, UDP, etc.)
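For the security-group step, a rough scripted equivalent using boto3 (the group ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound TCP on port 8090 from anywhere (0.0.0.0/0) on the instance's security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8090,
            "ToPort": 8090,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )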
I have created a VPC with public and private subnets on AWS. All app servers are in private subnets and all outbound requests have to be through an internet-facing NAT instance.
At the moment, our project requires the app servers to access an FTP server provided by a service provider.
I have tried several ways to manage that, but with no luck. What I have done is open a port range, say 40000-60000, on both the NAT and app security groups, as well as the standard FTP ports 20-21.
User authentication succeeds, but I cannot list directory contents from the app servers.
I am able to access the FTP server from the NAT instance with no problem at all.
So what should I do to make it work?
@JohnRotenstein is absolutely correct that you should use Passive FTP if you can. If, like me, you're stuck with a client who insists on Active FTP because the FTP site they want you to connect to has been running since 1990 and changing it now is completely unreasonable, then read on.
AWS's NAT servers don't support a machine in a private subnet connecting using Active FTP. Full stop. If you ask me, it's a bug, but if you ask AWS support they say it's an unsupported feature.
The solution we finally came up with (and it works) is to:
Add an Elastic Network Interface (ENI) in a public subnet onto your EC2 instance in the private subnet (a scripted sketch of these ENI/EIP steps appears after the walkthrough)
So now your EC2 instance has 2 network adapters, 2 internal IPs, etc.
Let's call this new ENI your "public ENI"
Attach a dedicated elastic IP to your new public ENI
Let's assume you get 54.54.54.54 and the new public ENI's internal IP address is 10.1.1.10
Add a route in your operating system's networking configuration so that traffic to the FTP server uses only the new public ENI
In Windows, the command will look like this, assuming the evil active FTP server you're trying to connect to is at 8.1.1.1:
route add 8.1.1.1 mask 255.255.255.254 10.1.1.1 metric 2
This says that all traffic to the FTP server at 8.1.1.1, matched with subnet mask 255.255.255.254 (i.e. effectively just that IP), should go via the gateway 10.1.1.1 on your second network adapter (the public ENI)
Fed up yet? Yeah, me too, but now comes the hard part. The OS doesn't know its public IP address for the public ENI, so you need to teach your FTP client to send the PORT command with the public IP. For example, if using curl, use the --ftp-port option like so:
curl -v --ftp-port 54.54.54.54 ftp://8.1.1.1 --user myusername:mypass
And voila! You can now connect to a nightmare active FTP site from an EC2 machine that is (almost entirely) in a private subnet.
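For steps 1-3 above (creating the public ENI, attaching it as a second adapter, and giving it a dedicated Elastic IP), here is a rough boto3 sketch; all IDs are placeholders and the console achieves the same thing.

    import boto3

    ec2 = boto3.client("ec2")

    # 1. Create an ENI in a *public* subnet (placeholder subnet and security group IDs).
    eni = ec2.create_network_interface(
        SubnetId="subnet-0aaa1111bbb22222c",
        Groups=["sg-0123456789abcdef0"],
    )["NetworkInterface"]

    # 2. Attach it to the instance in the private subnet as the second adapter.
    ec2.attach_network_interface(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        InstanceId="i-0123456789abcdef0",
        DeviceIndex=1,
    )

    # 3. Allocate an Elastic IP and associate it with the new public ENI.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=eip["AllocationId"],
        NetworkInterfaceId=eni["NetworkInterfaceId"],
    )

    print("Public IP:", eip["PublicIp"], "- ENI private IP:", eni["PrivateIpAddress"])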
Try using Passive (PASV) mode on FTP.
From Slacksite: Active FTP vs. Passive FTP, a Definitive Explanation:
In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20.
Thus, the traffic is trying to communicate on an additional port that is not passed through the NAT. Passive mode, instead, creates an outbound connection, which will then be permitted through the NAT.
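For illustration, a minimal passive-mode sketch with Python's ftplib (host and credentials are placeholders); in passive mode the data connection is opened outbound by the client, which is why it gets through the NAT:

    from ftplib import FTP

    ftp = FTP("ftp.example.com", timeout=30)  # placeholder FTP host
    ftp.login("myusername", "mypass")         # placeholder credentials
    ftp.set_pasv(True)   # ftplib uses passive mode by default; shown here explicitly
    print(ftp.nlst())    # directory listing works because the data channel is outbound
    ftp.quit()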