I'm trying to create a web app on AWS and I'm running into port issues. I would like to have multiple apps providing different services on different ports. I've created a website (on the same instance) to receive a text query and pass it to my app on port 3000. The app listening on 3000 is written in CherryPy.
We are using a VPN to provide security for the AWS instance. When logged into the VPN, everything works fine. The web page loads, the query returns the correct data. When I disconnect from the VPN, or someone else goes to the page, the page still loads, but queries to the service time out.
I've used netstat to make sure the service is listening, but I'm not sure what could be blocking traffic. I've worked through the CORS issues, as evidenced by the fact that it works when I'm signed into the VPN.
What can I check now?
When I disconnect from the VPN, or someone else goes to the page, the page still loads, but queries to the service time out.
My assumption is that the web server and the app are on the same server.
It sounds very much like the connection from web server to app is happening via a routed IP address rather than localhost. In addition to being slower, it's also hitting your firewall rules.
Configure the web server to access your app on localhost:3000 and the issue should clear up.
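For example, if the front end is served by nginx (which the asker confirms below), a minimal sketch is a location block that proxies to the loopback address; the /query path here is only a placeholder for whatever path the form posts to:

location /query {
    # forward the browser's request to the CherryPy app bound to localhost,
    # so only ports 80/443 ever need to be reachable from outside
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

With this in place the browser never talks to port 3000 directly, the security group only needs to expose the web ports, and the query becomes a same-origin request, which also sidesteps the CORS issues mentioned below.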
I actually got it working. I have an AWS instance with nginx and CherryPy. When the user goes to a web address, the nginx page loads with a form for a query string. When they submit a string, the string is POSTed to a CherryPy service running on port 3000. The CherryPy service does some computations and returns a result via JSON.
I thought I had opened up everything completely for testing, but I was having so many issues. It turned out that having CherryPy set
"Access-Control-Allow-Origin" = "*"
wasn't working; instead, I needed to set it to the specific origin of the calling page.
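A minimal CherryPy sketch of that fix; the class name, tool name, and origin value below are placeholders rather than the asker's actual code:

import cherrypy

# Replace with the exact scheme://host[:port] of the page that makes the POST.
ALLOWED_ORIGIN = "http://www.example.com"

def cors():
    # Return one specific origin instead of "*".
    cherrypy.response.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN

cherrypy.tools.cors = cherrypy.Tool("before_finalize", cors)

class QueryService:
    @cherrypy.expose
    @cherrypy.tools.cors()
    @cherrypy.tools.json_out()
    def query(self, q=""):
        return {"result": q}  # stand-in for the real computation

if __name__ == "__main__":
    cherrypy.config.update({"server.socket_host": "0.0.0.0",  # reachable on port 3000 from the browser
                            "server.socket_port": 3000})
    cherrypy.quickstart(QueryService())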
For the network gurus out there, I'd like to ask some questions about a somewhat unique setup where the server sends a request to a client on localhost on a certain port.
My understanding of some network fundamentals is cloudy, and I hope you can help me out.
[Diagram: a static website hosted on a server in Amazon's data center, and my local machine running a Docker VM with an nginx container listening on port 8001.]
Basically, there's a static website hosted in AWS S3, and at some point this website will send a request to https://localhost:8001.
I was expecting it to connect to the nginx container listening on port 8001 on my local machine, but it results in a 504 gateway timeout error.
My questions are:
Is it possible for a remote server to directly send data to a client at a particular port by addressing it as localhost?
How is it possible for the static website to communicate to my local docker container?
Thanks in advance.
In the setup you show, in the context of a Web site, localhost isn't anything in your picture at all: it's the desktop machine running the end user's Web browser.
More generally, you show several boxes in your diagram – "local machine", "Docker VM", "individual container", "server in Amazon's data center" – and within each of these boxes, if they make an outbound request to localhost, it reaches back to itself.
You have two basic options here:
(1) Set up a separate (Route 53) DNS name for your back-end service, and use that https://backend.example.com/... host name in your front-end application.
(2) Set up an HTTP reverse proxy that forwards /, /assets, ... to S3, and /api to the back-end service. In your front-end application use only the HTTP path with no host name at all.
The second option is more work to set up, but once you've set it up, it's much easier to develop code for. Webpack has a similar "proxy the backend" option for day-to-day development. This setup means the front-end application itself doesn't care where it's running, and you don't need to rebuild the application if the URL changes (or an individual developer needs to run it on their local system).
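A minimal nginx sketch of option (2); the bucket endpoint and back-end host below are placeholders, not real names:

server {
    listen 80;
    server_name app.example.com;

    # front-end assets come from the S3 static website endpoint
    location / {
        proxy_pass http://my-bucket.s3-website-us-east-1.amazonaws.com;
        proxy_set_header Host my-bucket.s3-website-us-east-1.amazonaws.com;
    }

    # API calls go to the back-end service; the front end only ever
    # requests the path /api, with no host name at all
    location /api {
        proxy_pass http://backend.internal:8001;
    }
}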
I'm quite new to AWS and have just started working with EC2 instances. I have a web application with a separate frontend and backend. I first hosted the backend on an EC2 instance; it is a Symfony-framework-based REST API application. I installed all the dependencies and the application is now running. But when I check it by making some API calls with Postman, the application does not seem to work as intended; I get the following response from Postman. I have also configured the security group properly.
When I start the Symfony app, it says [OK] Server listening on http://127.0.0.1:8000.
Can't figure out why this is happening. Can someone help me here?
You are running your application through the CLI (Symfony web server bundle); by default this binds to 127.0.0.1, which can't be accessed from outside. To fix this, you must bind to your server's public IP/hostname and port:
php bin/console server:start 192.168.1.1:8000 # replace with your ip
You can also bind to all of your IP addresses using 0.0.0.0.
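For example, a sketch that listens on every interface (the EC2 security group still has to allow inbound TCP 8000 for it to be reachable):

php bin/console server:start 0.0.0.0:8000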
But keep in mind, you should not use the built-in server for production; it's slow and less secure. Use a real web server instead, such as Apache or Nginx.
I have a small Linux server acting as a reverse proxy running Nginx. The main server behind Nginx runs a website in ASP.NET with a forms-authentication login and an instance of ArcServer, which exposes some REST services on port 6080.
Is it possible to allow traffic to port 6080 through Nginx only to people who have a session cookie from the ASP.NET login? Basically, I only want logged-in users to be able to access those REST services, not the whole wide web.
If someone could point me in the right direction, I am running short on ideas.
Thanks.
The following works quite well, but naturally it is only a bit of obfuscation and doesn't replace proper security checks deeper in the app.
location /url/to/secret/ {
    # only proxy through to the backend if the forms-auth cookie is present
    if ($cookie_secretCookieName) {
        proxy_pass http://serverhere;
    }
}
This wouldn't prevent anyone who knows the cookie name from getting access (e.g. someone who used to be a user and isn't anymore), but it can be a nice extra step that takes a bit of load off your servers.
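As written, a request without the cookie never reaches proxy_pass and simply falls through to whatever else the location would serve (typically a 404). If you'd rather reject it explicitly, a sketch using the same placeholder cookie and upstream names:

location /url/to/secret/ {
    # reject outright when the cookie is missing or empty
    if ($cookie_secretCookieName = "") {
        return 403;
    }
    proxy_pass http://serverhere;
}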
CFHTTP on my new CF 9 server is failing. I get back "408 Request Time-out" when attempting to connect to the test page on the server via its internal or external IP. I am not using SSL and using the standard port 80.
My old CF 9 server can connect to itself fine but it also fails if attempting to connect to the new server.
If I RDP into the server, I am able to pull up the same test page via a web browser or via telnet to that IP on port 80.
I suspect that this is a firewall issue. I'd like to know how CF makes an HTTP request under the hood before I talk to the hosting team: what service is making the call, what port does it run under, and so on?
You don't say what operating system you are running, but if it is Windows, I'd take a look at the Windows Firewall settings on your new machine and disable the firewall. That will let you check whether the firewall is indeed in the way.
If that works you can then try and add a firewall exception for the application, i.e. JRun.
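If the firewall does turn out to be the problem, the exception can be added from the command line; this is only a sketch, and the jrun.exe path is a guess you would need to adjust for your ColdFusion install:

netsh advfirewall firewall add rule name="ColdFusion 9 JRun outbound HTTP" dir=out action=allow program="C:\JRun4\bin\jrun.exe" protocol=TCP remoteport=80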
Hope that helps.
I have a web service running under IIS7 on a server with a host header set so that it receives requests made to http://myserver1.mydomain.com.
I've set Windows Integrated Authentication to Enabled and everything else (basic, anonymous, etc.) to Disabled.
I'm testing the web service using a powershell script, and it works fine when I run it from my workstation against http://myserver1.mydomain.com
However, when I run the same exact script on the IIS server itself, I get a 401-Unauthorized message.
In addition, I've tried installing the web service on a second server, myserver2.mydomain.com. Again I can call my test script fine from BOTH my workstation and from myserver1.
So it seems the only issue is when the client is on the same box as the web server itself - somehow the Windows credentials are not being passed or recognized.
I tried playing with IE settings on myserver1 (checked and unchecked 'Enable Windows Integrated Authentication', and added the URL to Local Sites). That did not seem to have an effect.
When I look at the IIS logs, I see the 401 unauthorized line but very little other information.
I see basically the same behavior when testing with IE (v9) - works from my workstation but not when IE is running on the IIS server.
I found the answer after several hours:
By default, there is something called a loopback check which will reject Windows authentication if the host header used for the site does not match the local host's name. This behavior is only seen when the client is on the local host. The check is there to defeat possible reflection attacks.
More details here:
http://support.microsoft.com/kb/896861
The KB article discusses ways to disable the loopback check, but I ended up just switching from host headers to ports to distinguish the different sites on the IIS server.
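For anyone who prefers to keep host headers, the two registry workarounds from that KB article look roughly like this in PowerShell (run elevated, then restart IIS; the host name is just this question's example):

# Method 1 from KB 896861: whitelist specific host names for loopback use.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0' `
    -Name 'BackConnectionHostNames' -PropertyType MultiString `
    -Value @('myserver1.mydomain.com') -Force

# Method 2: disable the loopback check entirely (less targeted).
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' `
    -Name 'DisableLoopbackCheck' -PropertyType DWord -Value 1 -Force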
Thanks to those who gave assistance.
Try checking the actual credential that is being passed when you are running on the server itself. Oftentimes you will be running under some system account that doesn't have access to the resource in question.
For example, on your box your credentials are running as...
MYDOMAIN\MYNAME
and the server will be something like...
SYSTEM\SYSTEM_ACCOUNT
and so this will fail because 'SYSTEM\SYSTEM_ACCOUNT' doesn't have credentials.
If this is the case, you can fix the problem in one of two ways.
(1) Give 'SYSTEM\SYSTEM_ACCOUNT' access to the resource in question. Most people would avoid this strategy due to security concerns (which is why the account has no access in the first place).
(2) Impersonate, or change the credentials of the client manually to something that does have access to the resource, 'MYDOMAIN\MYNAME' for example. This is what most people would probably go with, including myself.
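If the test script uses Invoke-WebRequest (an assumption; the original script isn't shown), passing an explicit credential instead of whatever account the script happens to run under on the server looks like this:

# Prompt for a domain account that does have access, then call the service with it.
$cred = Get-Credential 'MYDOMAIN\MYNAME'
Invoke-WebRequest -Uri 'http://myserver1.mydomain.com/' -Credential $cred -UseBasicParsing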