Get variable value by manifest in Pivotal Cloud Foundry

I needed to change the actuator management port, so I need to specify the port explicitly in the URL that Pivotal's health check requests. My application runs on port 8080 and my actuator on port 8091. How do I specify the port explicitly in the health check request URL?
I'm trying to do so in the manifest.yml:
health-check-type: http
health-check-http-endpoint: ${CF_INSTANCE_IP}:8091/actuator/health
However, this approach does not work.

You cannot put a port into the health check endpoint. The health-check-http-endpoint property takes an endpoint (i.e. the path part of a URL), not a full URL.
The error you get if you try to set the health check endpoint to a full URL confirms this:
{
   "description": "The app is invalid: health_check_http_endpoint HTTP health check endpoint is not a valid URI path: http://www.google.com/",
   "error_code": "CF-AppInvalid",
   "code": 100001
}
It's looking for a URI path [1]. Example acceptable values: "/", "/health", "/info", etc.
You can also see this in the code, where the health check loads the port from the CF_INSTANCE_PORTS environment variable.
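In other words, give the manifest only the path and let the platform supply the port. A minimal manifest.yml sketch (the app name is illustrative):

applications:
- name: my-app
  health-check-type: http
  # path only; the port comes from CF_INSTANCE_PORTS, not from this value
  health-check-http-endpoint: /actuator/health

The check is then made against a port taken from CF_INSTANCE_PORTS rather than one you specify in the endpoint.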

Related

WSO2 API Manager 3.2.0 Registered callback does not match with customUrl behind a proxy

The problem I am facing is that after changing the hostname and configuring the reverse proxy as described here and here, as well as following the troubleshooting guide here to resolve the 'registered callback does not match' error, I am unable to get any further.
I've followed a number of other examples of how to configure nginx and add the reverseProxy property to the settings.js configs but with no luck.
As you can see below, if I go to https://example.com/publisher I keep getting the error 'The registered callback does not match'.
Here is what I have the callback regex set to:
regexp=(https://example.com/publisher/services/auth/callback/login|https://example.com/publisher/services/auth/callback/logout)
If I inspect the authorize request query, I can see that the redirect_uri is being set to 127.0.0.1, and I suspect that is the problem: when I add that URL to the service provider regex callback it works, but this is not suitable in a non-local environment.
And here is the request query (where I suspect the main issue lies - note redirect_uri):
https://example.com/oauth2/authorize?response_type=code&client_id=1obvNiUMBcJwMa3euoHjrsckuGIa&scope=apim:api_create%20apim:api_delete%20apim:api_import_export%20apim:api_product_import_export%20apim:api_publish%20apim:api_view%20apim:app_import_export%20apim:client_certificates_add%20apim:client_certificates_update%20apim:client_certificates_view%20apim:document_create%20apim:document_manage%20apim:ep_certificates_add%20apim:ep_certificates_update%20apim:ep_certificates_view%20apim:external_services_discover%20apim:mediation_policy_create%20apim:mediation_policy_manage%20apim:mediation_policy_view%20apim:pub_alert_manage%20apim:publisher_settings%20apim:shared_scope_manage%20apim:subscription_block%20apim:subscription_view%20apim:threat_protection_policy_create%20apim:threat_protection_policy_manage%20openid&state=/&redirect_uri=https://127.0.0.1/publisher/services/auth/callback/login
Here is how my deployment.toml is configured (I've replaced my actual domain with example.com):
Note that I had to remove the ports to work behind the proxy.
And here is my settings.js:
I added the reverseProxy property as suggested in a GitHub issue.
And here is my nginx conf:
This is a known limitation. The steps to resolve the issue are documented here: https://apim.docs.wso2.com/en/latest/troubleshooting/troubleshooting-invalid-callback-error/#troubleshooting-registered-callback-does-not-match-with-the-provided-url-error
The reason for this error comes down to a missing X-Forwarded-For header. I ended up changing the forwardedHeader in settings.js to Host, as that was the header being passed by my proxy server.
Thanks for the detailed question, "user3745065".
I was having exactly the same issue you described in this post, and I think I've nailed the problem down.
As you mentioned, the issue is with the forwardedHeader, which in your case you switched to Host.
But checking the product documentation, the sample they provide is the following:
customUrl: { // Dynamically set the redirect origin according to the forwardedHeader host|proxyPort combination
    enabled: true,
    forwardedHeader: 'X-Forwarded-Host',
},
It took me a while to notice that the forwardedHeader is supposed to be 'X-Forwarded-Host', not 'X-Forwarded-For' as it is by default.
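For that header to reach WSO2, the proxy has to actually send it. A minimal nginx location sketch, assuming the server runs on its default port 9443 (the upstream address is illustrative):

location / {
    proxy_pass https://localhost:9443;
    # forward the original host so customUrl can rebuild the redirect origin
    proxy_set_header X-Forwarded-Host $host;
}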
A few other things I needed to tweak that weren't clear in the documentation for changing the hostname (here): I had to remove the port variable ${mgt.transport.https.port} from the devportal URL.
That's also outlined in installation step 5, here. Still, it's worth showing explicitly:
from:
[apim.devportal]
url = "https://{Your Domain}:${mgt.transport.https.port}/devportal"
to:
[apim.devportal]
url = "https://{Your Domain}/devportal"
Otherwise, when it tries to redirect to the portal (for instance, from the publisher), it constructs the URL with the port number, and that default port 9443 isn't going to work on your proxy (tested on nginx with the settings provided in the documentation here), which is listening and expecting calls on port 443.
Things that I noticed you configured but that are perhaps not necessary:
Set the apim.idp settings
Set the reverseProxy settings
Set the apim.gateway.environment settings (not related to the callback URL issue; this is meant for configuring the runtime gateway URLs)
Last but not least, when following the "Troubleshooting 'Registered callback does not match with the provided url' error" guide, you again need to remove the port number from the URL; otherwise you will hit the same proxy issue mentioned above.
Just my 2 cents! ;)

AWS ELB unhealthy 4xx

I am using express.js and GraphQL for a backend server that listens on port 4000. I am trying to connect my server to an ELB, but the status comes back unhealthy with error code 400 or 404. These are my ELB settings:
(screenshots: target group, registered targets, health check settings)
If I change the path to /graphql it returns 400, and just / returns 404. The API itself still works with no errors when I call it, but it seems like something I should fix. Could anyone please tell me what I can do?
The standard way to handle this case is to define a path in your express application, for example /health or /ping, that returns HTTP status code 200.
If /graphql does not return HTTP status code 200, the LB will mark the target unhealthy.
You have two options:
Update Success codes to 400, and the LB will mark the target healthy.
Define a path like / or /health that returns status code 200.
In Express, for example:
app.get('/health', function(req, res) {
    res.status(200).send("instance is healthy");
});
Then set the LB health check path to /health.
Or, if you want to avoid changing the application, the following will work.
Apollo supports a simple health check by default at /.well-known/apollo/server-health (source), so you can change the path in the target group from / to that URL.
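For instance, a sketch of that target group change using the AWS CLI (the ARN is a placeholder to substitute with your own):

aws elbv2 modify-target-group \
    --target-group-arn <your-target-group-arn> \
    --health-check-path /.well-known/apollo/server-health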

AWS Load Balancer health check fails for url with #

I have set up my EC2 load balancer health check to point to a URL with a # in it, like /#/applications.
When I ssh into the box and curl the URL I get a response code of 200.
However the load balancer gives this error:
Health checks failed with these codes: [400]
If I change the health check URL to be / then the load balancer say it is fine.
I suspect it could be a URL-encoding issue. Are there any restrictions on what characters are allowed in the URL?
# is not a valid character in the request URI. The # symbol marks the beginning of the URL fragment.
When you access a URL with #, the URI is truncated by the client at the # before it is sent to the server. Servers never see this -- it's for client-side use only. (curl strips the fragment too, which is why your curl test returned 200: the request that actually went out was for /.)
It is thus invalid in a health check, and the server is correct to reject it as 400 Bad Request. Access a URL on your site with a fragment from a browser and you will notice that the # is never logged by the web server, because the browser doesn't send it.
If for some reason you actually need a URL-encoded #, that would be written as %23, but I would not expect this to be what you are looking for.

AWS ELB health check failing with 302 code

Hi, I created an ALB listener on 443 and a target group with an instance on port 7070 (non-SSL).
I can access instanceip:7070 without a problem, but https://elb-dns-name is not reachable, and the instance health check also fails with a 302 code.
The ALB listener is HTTPS and the instance speaks plain HTTP.
When I browse to https://dns-name it redirects to http://elb-dns-name.
You get a 302 when the server performs a URL redirect; an ELB health check looks for the configured success code (200 by default) for the check to pass. In an ALB, this can be configured under the health check settings in the ELB console.
To modify the health check settings of a target group using the console:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
On the Health checks tab, choose Edit.
On the Edit target group page, modify the setting Success Codes to 302 or as needed, and then choose Save.
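Equivalently, a sketch using the AWS CLI (the target group ARN is a placeholder):

aws elbv2 modify-target-group \
    --target-group-arn <your-target-group-arn> \
    --matcher HttpCode=200,302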
I was stuck with the same problem in AWS ALB (Health checks failed with these codes: [302]).
Configuration:
Tomcat 9 servers that are listening on port 80 only
ALB health check path was set to "/my_app_name" expecting to serve health check from the application's root index page.
My configured health page is not expected to do any redirects, but to return HTTP/200 if server is healthy and HTTP/500 if unhealthy.
The proposed solution of just adding HTTP/302 as a success code is absolutely WRONG and misleading.
It means the page's internal health check logic isn't run at all; an HTTP/302 redirect merely shows the server's general ability to respond.
The problem was in the Tomcat server itself, which, for a request to "/my_app_name", was redirecting with HTTP/302 to "/my_app_name/" (note the slash at the end).
So setting the health check path to "/my_app_name/" fixed the problem: the health check logic runs and HTTP/200 is returned.
Add this annotation to your Ingress; it will extend the success codes and the nodes will be marked healthy:
alb.ingress.kubernetes.io/success-codes: 200,404,301,302
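In context, a minimal Ingress metadata sketch (the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # quoted, since annotation values must be strings
    alb.ingress.kubernetes.io/success-codes: "200,404,301,302"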
I ran into the same issue recently, and as suggested by @SudharsanSivasankaran we edited the health check settings at the target group level.
But we kept 200 as the only status code and instead updated the path to point directly at the page the redirect goes to.
For instance, if a website hosted on instance:80 requires the user to be logged in and redirects to the /login page, all we need to do is set the health check path to /login.
I had a similar case where I'm offloading TLS on the ELB and then sending traffic to port 80 as plain HTTP. I was always getting the 302 code from the ELB.
You can change the status code for the target group and specify the success code as 302, but I don't think that is a very good idea: you may encounter a different status code after changing some configuration in your Apache or htaccess files, which could cause your instance to be put out of service. The goal of a health check is to identify faulty servers and remove them from the production environment.
This solution worked great for me: https://stackoverflow.com/a/48140513/14033386
Cited below with more explanation:
Enable the mod_rewrite module. In most Linux distros it's enabled by default when you install Apache, but check for it anyway: https://stackoverflow.com/a/5758551/14033386
LoadModule rewrite_module modules/mod_rewrite.so
and then add the following to your virtual host.
ErrorDocument 200 "ok"
RewriteEngine On
RewriteRule "/AWS-HEALTH-CHECK-URL" - [R=200]
AWS-HEALTH-CHECK-URL is the one you specify in the health check settings.
This solution will always return a 200 code for that specific URL as long as your server is active and serving requests.
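To verify locally after reloading Apache (using whichever path you configured):

curl -i http://localhost/AWS-HEALTH-CHECK-URL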
In my case I had a domain www.domain.com
but by default, when you access the domain and you are not logged in, you are immediately redirected to www.domain.com/login,
and that is what caused the problem.
So you have 2 options:
Go to your AWS target group -> health check and change the default path / to the new one, which in my case was /login. I'm fairly confident that if the login endpoint works, the website works too.
Go to your AWS target group -> health check and change the default status code from 200 to 200,302. It is definitely the less appropriate way but still acceptable, depending on the case.

Django and Elastic Beanstalk URL health checks

I have a Django webapp. It runs inside Docker on Elastic Beanstalk.
I'd like to specify a health check URL for slightly more advanced health checking than "can the ELB establish a TCP connection".
Entirely reasonably, the ELB does this by connecting to the instance over HTTP, using the instance's hostname (e.g. ec2-127-0-0-1.compute-1.amazonaws.com) as the Host header.
Django has ALLOWED_HOSTS which validates the Host header of incoming requests. I set this to my application's external domain via environment variable.
Unsurprisingly and entirely reasonably, Django thus rejects ELB URL health checks due to lack of matching Host.
We don't want to disable ALLOWED_HOSTS because we'd like to be able to trust get_host().
The solutions so far seem to be:
Somehow persuade Django to not care about ALLOWED_HOSTS for certain specific paths (i.e. the health check URL)
Do something funky like calling the EC2 info API on startup to get the host's FQDN and append it to ALLOWED_HOSTS
Neither of these seem particularly pleasant. Can anyone recommend a better / existing solution?
(For the avoidance of doubt, I believe this problem to be identical to the scenario of "Disabled ALLOWED_HOSTS, fronting HTTPD that filters on host" - I want the health check to hit Django, not a fronting HTTPD)
If the ELB health check is sending its request with a host header containing the elastic beanstalk domain (*.elasticbeanstalk.com, or an EC2 domain *.amazonaws.com) then the standard ALLOWED_HOSTS can still be used with a wildcard entry of '.amazonaws.com' or '.elasticbeanstalk.com'.
In my case I received plain IPv4 addresses as the health check hosts, so a different solution was needed. If you can't predict the host at all (and it might be safer to assume you can't), you would need to take a route such as one of the following.
You can use Apache to handle approved hosts instead of propagating ambiguous requests to Django. Since the host header is intended to be the hostname of the server receiving the request, this solution changes the header of valid requests to use the expected site hostname. With elastic beanstalk you'll need to configure Apache using .ebextensions as described here. Under the .ebextensions directory in your project root, add the following to a .config file.
files:
  "/etc/httpd/conf.d/eb_healthcheck.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      <If "req('User-Agent') == 'ELB-HealthChecker/1.0' && %{REQUEST_URI} == '/status/'">
        RequestHeader set Host "example.com"
      </If>
Replace /status/ with your health check URL and example.com with your site's domain. This tells Apache to inspect all incoming requests and rewrite the Host header on those that carry the health check user agent and request the health check URL.
If you would really prefer not to configure Apache, you could write custom middleware to authenticate health checks. The middleware would have to override Django's CommonMiddleware, which calls HttpRequest's get_host() method to validate the request's host. You could do something like this:
from django.middleware.common import CommonMiddleware

class CommonOverrideMiddleware(CommonMiddleware):
    def process_request(self, request):
        # Run the usual processing (including host validation) for everything
        # except ELB health check requests to the health check path.
        if not ('HTTP_USER_AGENT' in request.META
                and request.META['HTTP_USER_AGENT'] == 'ELB-HealthChecker/1.0'
                and request.get_full_path() == '/status/'):
            return super().process_request(request)

This simply allows health check requests to skip the host validation. You'd then replace django.middleware.common.CommonMiddleware with path.CommonOverrideMiddleware in your settings.py.
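For instance, in settings.py (the myapp.middleware module path is hypothetical; use wherever you put the class):

MIDDLEWARE = [
    # ... other middleware ...
    'myapp.middleware.CommonOverrideMiddleware',  # instead of 'django.middleware.common.CommonMiddleware'
    # ... other middleware ...
]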
I would recommend using the Apache configuration approach to avoid any details in the middleware, and to completely isolate Django from host issues.
This is what I use, and it works well:
import socket
local_ip = str(socket.gethostbyname(socket.gethostname()))
ALLOWED_HOSTS=[local_ip, '.mydomain.com', 'mydomain.elasticbeanstalk.com' ]
where you replace mydomain and mydomain.elasticbeanstalk.com with your own.