Make API requests on a remote server from Postman

I work on a remote server via SSH and have a service running locally on it. How can I hit the service's APIs on the remote server from Postman on my local machine?
I can make curl requests from the remote server itself, but I haven't managed to set up SSH tunneling in Postman. What steps should I follow?

Both SSH and HTTP are protocols for communicating between a client and a server, but there is a basic difference between them.
You probably know this already, but for others/clarification: SSH means "Secure Shell". It has a built-in username/password authentication system to establish a connection, and it uses port 22 by default to perform the negotiation and authentication for the connection. Alternatively, you can authenticate against the remote system with a key from your machine.
Web servers, on the other hand, listen for requests on port 80 for HTTP or port 443 for HTTPS by default.
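For example, the default ports already show up when you connect with standard tools (user and remote-host are placeholders):

ssh user@remote-host                       # SSH: negotiates and authenticates on port 22 by default
ssh -i ~/.ssh/id_ed25519 user@remote-host  # the same, with explicit key-based authentication
curl http://remote-host/                   # HTTP: port 80 by default
curl https://remote-host/                  # HTTPS: port 443 by default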
To make it work
You can either expose a port on your remote server by defining a firewall rule (though port 80 should probably be open already) and make your server listen for incoming requests on that port - this is the option if you want the service publicly available -
OR
put both your remote server and your local machine in the same VPN network - your server still needs to listen for HTTP requests on some port, but nothing is exposed to the public.
Since you already have SSH access, a third option is to tunnel the service's port over SSH, as sketched below.
In any case, if you are not using some kind of reverse proxy, make sure to specify the port you are contacting the server on, e.g. http://localhost:8080
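A minimal sketch of that tunnel, assuming the service listens on port 8080 on the remote host (user and remote-host are placeholders):

ssh -L 8080:localhost:8080 user@remote-host
# While this session stays open, every request to localhost:8080 on your
# local machine is forwarded through the SSH connection to port 8080 on
# the remote server.

Postman has no built-in SSH tunneling, so keep the tunnel open in a terminal and simply point Postman at http://localhost:8080 as if the service were running locally.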

Related

exposing kestrel server deployed as web job for external interaction

I have deployed an application hosting a Kestrel server, bound to a specific port, as a WebJob. I want to access that port in order to reach the APIs implemented in that application.
If I try to bind to port 443 it fails; on other ports the server starts but can't interact with external requests. Is there any way I can expose this port to listen for incoming requests?
Azure Web Apps only support ports 443 and 80, and WebJobs are hosted in Azure App Service.
After a lot of searching and trying, I can tell you with certainty that other ports cannot be used.
For more details, you can read the posts below:
Opening ports to Azure Web Job
Is it possible to use an Azure Web Job to listen on a public socket
The above posts describe the port restrictions on WebJobs.
Since you want the WebJob to monitor and process incoming requests, my suggestion is that the WebJob monitors ports 443 and 80 instead of binding its own port. You can use a raw socket to monitor all requests, analyze whether the request content contains instructions that need to be executed, and then proceed to the next business operation.
If you already have a completed project, you can also choose a VM or Cloud Services instead.
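To see the restriction in practice, you can probe an App Service host from outside (myapp.azurewebsites.net is a placeholder name):

curl -m 5 https://myapp.azurewebsites.net/        # port 443: answered
curl -m 5 http://myapp.azurewebsites.net/         # port 80: answered
curl -m 5 http://myapp.azurewebsites.net:8080/    # any other port: times out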

Could I use port 443 for my webservice or is this port reserved?

I have an app which uses a backend (a REST web service) on a public server. Currently I am using 8080 as the incoming port and asked myself if this is correct. In theory I could choose almost any port - theoretically... but it is advisable to use a non-reserved port.
I once heard that calling a web service on an "exotic" port could be blocked in a public WLAN due to firewall/proxy rules. Could that really happen?
Would it make sense to use port 443 for the web service? (I use an SSL certificate on my backend.)
This is difficult to tackle in general, since there are a lot of options when considering networked services. I'd advise against using a well-known port for your web service in general, although in the case of REST there is a case to be made.
As you mentioned, obscure port numbers can be blocked inside certain networks by strict sys admins. Operating your service over TLS on port 443 is a secure and reliable way to access your api from within a network.
Being that REST is an http(s) api, and being that port 443 is designated for https traffic, using 443 for https-REST api seems appropriate.
TLDR; It's okay to use the well known http(s) ports, 80 and 443, for your REST api
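To illustrate the blocking concern from the question: from inside a restrictive network, the difference can look like this (api.example.com is a placeholder):

curl -m 5 https://api.example.com/v1/ping        # port 443: usually passes firewalls and proxies
curl -m 5 https://api.example.com:8080/v1/ping   # "exotic" port: may hang until the timeout behind strict rules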

tunnelling and normal http server on the same server

I am using go-http-tunnel for local tunneling. My server domain is, let's say, xyz.com, and clients connect to *.xyz.com.
What I want to achieve is to use xyz.com for a normal Apache server and subdomain.xyz.com for tunneling. (I want to achieve it using only ports 80 and 443.)

Why don't websockets work on the cloud?

I developed our websocket project on WildFly. When we test it on localhost or within our local network, everything works fine. But when I deployed it on AWS, websockets no longer work. We can access other HTML pages, but when we connect to "ws://ip/project location", Chrome just reports a handshake error. I have experienced the same websocket problem on Jelastic hosting too. My questions are:
Why is this happening?
Is the websocket protocol not stable enough?
Is there any suitable hosting for websocket projects in Java?
So far, load balancers don't forward websocket headers. To make WS work you must have a public IP address and no other services in front of your application.
I suggest you try deploying to the cloud provider Heroku - their sample app code using node.js and websockets will get you up and running quickly. A locally running websocket app which uses a specific port - say 8888 - will run fine on Heroku with:
var port = process.env.PORT || 8888;
as Heroku internally will deploy your app with a run-time generated port, visible via the PORT environment variable.
If you are using node.js with websockets, I suggest the einaros ws implementation:
var WebSocketServer = require("ws").Server;
which seamlessly handles the notion of the ws port vs. the http port.
Currently, ELB doesn't support websockets in HTTP mode. To be able to handle websockets you need to configure the ELB in TCP mode (the payload of the TCP connection is sent directly to the server, so the ELB doesn't interfere with the HTTP and WS flow). With this setup you won't be able to see the caller's IP, though.
Without the ELB, websockets work perfectly (AWS only sees IP traffic and the OS only TCP); we haven't changed anything from a plain old HTTP server in order to use WS (except the WS handling code in the web server).
To know if you are using an ELB, look at the bill - AWS can provide you a lot of very interesting services, for a fee.
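One way to check where the handshake breaks is to send the upgrade request by hand and look for the 101 response (the address and path are placeholders; the key below is the sample nonce from RFC 6455):

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://your-instance-ip/project-location
# A working endpoint answers "HTTP/1.1 101 Switching Protocols". Run this
# once against the instance directly and once through the load balancer -
# if only the direct request upgrades, the ELB (in HTTP mode) is what
# breaks the handshake.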

Front-Ending an app server on AWS EC2

I have 2 instances set up in EC2. One is running nginx and has an association with the elastic IP address, so it's publicly accessible.
The other doesn't have a web server but has a RESTful server running on port 8080.
Both belong to a security group (MongoDB-2-2-2-AutogenByAWSMP-) with these rules:
Ports   Protocol   Source
22      tcp        0.0.0.0/0
80      tcp        0.0.0.0/0
8080    tcp        0.0.0.0/0
If I understand that right, then port 8080 should be open.
I ssh'd onto my web box (the one with nginx running) to test access to my RESTful server on the other instance's port 8080, so I tried:
curl http://10.151.87.76:8080/1/tlc/ping
curl http://ip-10-151-87-76:8080/1/tlc/ping
curl http://ip-10-151-87-76.ec2.internal:8080/1/tlc/ping
All of these gave me "couldn't connect to host" errors.
If I log onto the RESTful box directly and do the following, it works.
curl localhost:8080/1/tlc/ping
So I know my service is up and healthy.
Any ideas why I can't see port 8080 from the other instance are appreciated.
Make sure instances are in the same availability zone. If not, you may need to access the instance by public DNS name (something like ec2-XXX-XX-XXX-XXX.YYY.amazonaws.com).
Make sure 10.151.87.76 is the correct IP. Note that this will probably change after the instance is stopped and started again.
Make sure your headless service is publicly available - it may listen on localhost:8080 only, but it should listen on 0.0.0.0:8080. Try nmap 10.151.87.76 -p 8080 from the other instance; it should list 8080 as an open port.
"Make sure your headless service is publicly available" - so this is the reason. What web server are you using for the REST API? If it is Apache, make sure the config says Listen 8080, not Listen 1.2.3.4:8080. If it is a standalone app, make sure it can listen on all interfaces - some frameworks listen on localhost by default. – hudolejev
This! Buried deep (deep) within my code was a piece of the server wired to "localhost". Changed that to key off hostname and all was well! Happy.
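For anyone debugging the same symptom, a quick way to check the binding on the service instance (8080 is the port from this question):

sudo ss -tlnp | grep 8080
# 127.0.0.1:8080 means the service only accepts connections from the
# instance itself; 0.0.0.0:8080 (or *:8080) means it accepts connections
# from other machines, security group permitting.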