Disable port 9443 in API Manager - WSO2

I don't want to expose ports to users.
Therefore I want to use "https://hostname/devportal" and "https://hostname/publisher" rather than "https://hostname:9443/devportal" and "https://hostname:9443/publisher". What should I do?
Please help me with this.
Thanks. (API Manager version: 4.0)

You need to add the following two properties to deployment.toml if you want to start the WSO2 server on port 443. (Tip: start from a fresh pack; if you try this on a pack you have already started, you may have to change already-registered callback URLs.)
[transport.https.properties]
proxyPort = 443
port = 443
Also, for an application to bind to a port below 1024 (in this case 443), it needs root permissions. Hence you will have to start the server as a user with root access.
Having said that, this is not a good deployment pattern, as it exposes your servers directly to external access. You should instead deploy a load balancer (LB) fronting the WSO2 servers and expose only the LB to external users.
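For example, a minimal Nginx front end for this could look as follows (a sketch only; the hostname am.example.com, the certificate paths, and the back-end address localhost:9443 are placeholders/assumptions):

```nginx
server {
    listen 443 ssl;
    server_name am.example.com;                  # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/am.crt;   # placeholder cert paths
    ssl_certificate_key /etc/nginx/ssl/am.key;

    location / {
        # Forward to API Manager's default HTTPS port on the back end
        proxy_pass https://localhost:9443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With something like this in front, the WSO2 server can stay on its default port 9443, only the LB listens on 443, and no root permissions are needed on the WSO2 side.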

Related

How to allow calls to Cognito from an AWS ECS container instance?

I have a setup with an ALB and a target group created by ECS. I'm using Fargate and created a build pipeline by following this article. My app is built with .NET Core and has an Angular frontend. I got all this working and am able to deploy my code changes, but I'm a bit stuck with the following issue.
I'm using Cognito for authentication with a custom domain that I set for the hosted UI. From the browser, when I try to hit a secured endpoint, I get a 504 Gateway Timeout error, and the redirect to Cognito never happens in the browser. All of this works fine when I run the application on localhost.
When I looked at the logs, I noticed the following exception:
System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'https://cognito-idp.<region>.amazonaws.com/<region_and_a_code>/.well-known/openid-configuration'
Apparently, it can't establish a connection to Cognito. My containers are using only port 80, my target group instances are also using port 80, ALB uses HTTPS on 443 which directs the traffic to the target group, and for ALB port 80 I just redirect to 443.
I tried a few different things, like setting the authority value instead of the metadata address, tried using a BackChannelHttpHandler to execute the HTTPS call, tried updating the port mappings to allow communication on 443, but somehow it seems that it gets overridden by the task definition that I have created when I set up the build pipeline. The network mode in my task definition is now awsvpc, and if I try to set it to host, it will complain that I can't use it with Fargate.
What do I need to do to allow the HTTPS request from my Docker container instances to reach Cognito?
You are trying to set this up with a public ALB. This setup works with a private NLB, and might work with a private ALB as well. You can then set up VPC private links (AWS PrivateLink) to reach the services you need access to.
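Independently of that, it is worth checking that the tasks have any outbound path to Cognito at all: with Fargate in awsvpc mode, the task's security group needs an egress rule for port 443 (the default security group allows all egress, so this only matters if egress was restricted), and tasks in a private subnet need a NAT gateway or equivalent. A sketch with the AWS CLI (the security-group ID is a placeholder):

```shell
# Allow outbound HTTPS from the task's security group (placeholder group ID)
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```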

VM instance group configured to listen on ports 80 and 8080

I have configured my VM so that two applications run on one VM.
The first app listens on ip:80.
The second app listens on ip:8080.
I have enabled the ports on the VM instance group like this.
I have my load balancer configured with two front-end rules like this.
I want to map ip1:80 to my port-80 application and ip2:8080 to the port-8080 application.
When I try accessing my application using the load balancer's IP address, it always shows me the port-8080 application.
I have two backend services running.
Please help me here, Google team. I'm a newbie.
If you want to use IP addresses rather than URLs/domain(s) to reach your web applications, then URL maps cannot help implement your design, as a URL map forwards the request to the correct backend service using host values (example.com) and path values (/path) in the destination URL.
That being said, you can add one more Target Proxy to your LB resources to route incoming requests directly to the desired backend services. This will allow you to keep your minimum number of instances as one VM.
For more information, visit this article.
I had a similar problem and had to add a second backend.
So I have two backends: one for port 80, the other for port 8080. And I have one managed instance group.
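A rough sketch of the extra LB resources with gcloud (all names and the address are placeholders, the two backend services are assumed to exist already, and a URL map with only a default service does no host/path matching):

```shell
# Route everything arriving at ip2:8080 to the port-8080 backend service
gcloud compute url-maps create map-8080 --default-service=backend-8080
gcloud compute target-http-proxies create proxy-8080 --url-map=map-8080
gcloud compute forwarding-rules create rule-8080 \
    --global --target-http-proxy=proxy-8080 \
    --ports=8080 --address=ip2
```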

Enabling SSL on Rails 4 with AWS Load Balancer, Nginx and Puma

I have tried unsuccessfully to configure SSL for my project.
My AWS load balancer is configured correctly and accepts the certificate keys. I have configured the listeners to route both port 80 traffic and port 443 traffic to my port 80 on the instance.
I would imagine that no further modification is necessary on the instance (Nginx and Puma), since everything is routed to port 80 on the instance. I have seen examples where the certificate is installed on the instances, but I understand the load balancer is the SSL termination point, so this is not necessary.
When accessing via http://www.example.com everything works fine. However, accessing via https://www.example.com times out.
I would appreciate some help with the proper high-level setup.
Edit: I have not received any response to this question. I assume it is too general?
I would appreciate confirmation that the high-level reasoning I am using is the right one: I should install the certificate on the load balancer only and configure the load balancer to accept connections on port 443, BUT route everything internally on port 80 to the web server instances.
I just stumbled over this question as I had the same problem: all requests to https://myapp.com timed out and I could not figure out why. Here, in short, is how I achieved (forced) HTTPS in a Rails app on AWS:
My app:
Rails 5 with config.force_ssl = true enabled (production.rb), so all connections coming in over HTTP get rerouted to HTTPS in the Rails app. No need to set up complicated nginx rules. The same app used the gem 'rack-ssl-enforcer' when it was on Rails 4.2.
Side note: AWS load balancers used plain HTTP GET requests in the past to check the health of the instances (today they support HTTPS). Therefore, exception rules had to be defined for the SSL enforcement in Rails 5: config.ssl_options = { redirect: { exclude: -> request { request.path =~ /health-check/ } } } (in production.rb), with a corresponding route to a controller in the Rails app.
Side note to the side note: in Rails 5, the initializer new_framework_defaults.rb already defines "ssl_options". Make sure to deactivate that before using the "ssl_options" rule in production.rb.
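The exclusion rule above can be sanity-checked in plain Ruby (a sketch; Struct stands in for the Rails request object, of which only #path is used here):

```ruby
# Minimal stand-in for the Rails request object (only #path is needed).
Request = Struct.new(:path)

# The same exclusion proc used in ssl_options above: skip the HTTPS
# redirect for the load balancer's health-check path.
exclude = ->(request) { request.path =~ /health-check/ }

puts exclude.call(Request.new("/health-check")) ? "excluded" : "redirected"  # excluded
puts exclude.call(Request.new("/orders")) ? "excluded" : "redirected"        # redirected
```

Note that =~ returns a match position (truthy) or nil, which is all the redirect logic needs.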
AWS:
An Elastic Beanstalk setup on AWS with a valid cert on the load balancer, using two listener rules:
HTTP 80 requests on the LB get directed to HTTP 80 on the instances
HTTPS 443 requests on the LB get directed to HTTP 80 on the instances (this is where the certificate is applied)
You see that the load balancer is the SSL termination point. All requests coming in over HTTP will go through the LB and will then be redirected to HTTPS automatically by the Rails app.
The thing no one tells you
With this in place, the HTTPS request will still time out (here I spent days! figuring out why). In the end it was an extremely simple issue: the security group of the load balancer (in the AWS console -> EC2 -> Security Groups) only accepted requests on port 80 (HTTP) -> just activate port 443 (HTTPS) as well. It should then work (at least it did for me).
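That last step can also be done from the AWS CLI (a sketch; the security-group ID is a placeholder):

```shell
# Allow inbound HTTPS on the load balancer's security group (placeholder ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```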
I don't know if you solved your problem, but for whoever may find this question, here is what I did to get it working.
I've been reading all day and found a mix of two configurations that are working at the moment.
Basically you need to configure nginx to redirect to HTTPS, but some of the recommended configurations do nothing to the nginx config.
Basically I'm using this gist's configuration:
https://gist.github.com/petelacey/e35c98f9a35063a89fa9
But from this configuration I added the command to restart the nginx server:
https://gist.github.com/KeithP/f8534c04d20c2b4e4b1d
My take on this is that by the time the eb deploy process copies the config files, nginx has already started(?), making those changes useless. Hence the need to restart it manually; if someone has a better approach, let us know.
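One way to make that restart part of the deployment itself is a container command, which runs after the new config files have been copied but before the deployment finishes (a sketch; the file and command names are arbitrary):

```yaml
# .ebextensions/99_restart_nginx.config
container_commands:
  99_restart_nginx:
    command: "service nginx restart"
```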
Michael Fehr's answer worked and should be the accepted one. I had the same problem; adding config.force_ssl = true was what I had missed. One remark: you don't need to add the EBS configuration file they say you have to add when you are using the load balancer. That can be misleading, and it is not specified in the docs.

AWS - Virtual Host point to localhost

I have a SoapUI MockService that works pretty well locally (by that I mean on the AWS machine). I've done testing on the machine, and the service returns the proper XML.
The service is set to respond to localhost:8081
Now I'm trying to access that service from a browser and I can't.
I think I need to set up a virtual host (Apache) that listens on port 8081 and redirects to the mock service, but I can't figure out how to do that.
Any help would be appreciated.
Thanks
You need to open up port 8081 in the security group for your instance.

AWS ELB fails health check with HTTP ports other than 80

We've been using AWS for a while and have also set up many ELBs. The problem we have is that we have multiple sites running on multiple servers. All IIS 7.5 sites are on each of the 3 web servers. We utilize ports 80 and 443, with all site bindings set up correctly with domains/subdomains. We have an ELB for each site.
The problem is that each ELB is currently set up with a health check of HTTP:80/, so each ELB is not really checking the health of its respective site.
What we'd like to do is set up each site to listen on a different extra port (i.e., 8082, 8083, etc.) and have each ELB's health check hit the site's extra port (i.e., HTTP:8082/, HTTP:8083/, etc.). The ports are opened correctly on the firewalls and in the security groups, and we can hit each site on its respective extra port (i.e., http://web1.mysite.com:8082/).
AWS's documentation says you should be able to do what we're trying to do, but the health checks of the instances don't pass. I've even gone so far as to define a listener for the respective port. Just to confirm: I can set the check to HTTP:80/ and the instance comes back "InService", but when I change it to HTTP:8082/, it immediately goes out of service. This is driving me nuts, so any help would be greatly appreciated.
We determined that we can fix this by setting up each site's extra port binding to its corresponding port with a blank host name, and by ensuring that the "Default Site" is turned off (in our case it was). The binding hostname had been set to the site's name before (i.e., www.mysite.com).
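For example, such an extra binding can be added with the IIS PowerShell module (a sketch; the site name and port come from the example above, and it assumes the WebAdministration module is available on the server):

```powershell
Import-Module WebAdministration

# Extra health-check binding on port 8082 with a blank host name,
# alongside the site's normal www.mysite.com:80 binding
New-WebBinding -Name "www.mysite.com" -Protocol http -Port 8082 -HostHeader ""
```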