WSO2 API Manager 3.2.0 Registered callback does not match with customUrl behind a proxy

The problem I am facing is that after changing the hostname and configuring the reverse proxy as described here and here, as well as following the troubleshooting guide here to resolve the 'registered callback does not match' error, I am unable to get any further.
I've followed a number of other examples of how to configure nginx and add the reverseProxy property to the settings.js configs but with no luck.
As you can see below, if I go to https://example.com/publisher I keep getting the error 'The registered callback does not match'.
Here is what I have the callback regex set to:
regexp=(https://example.com/publisher/services/auth/callback/login|https://example.com/publisher/services/auth/callback/logout)
If I inspect the authorize request query, I can see that the redirect_uri is being set to 127.0.0.1, and I suspect that is the problem: when I add that URL to the service provider's regex callback it works, but that is not suitable in a non-local environment.
And here is the request query (where I suspect the main issue lies - note redirect_uri):
https://example.com/oauth2/authorize?response_type=code&client_id=1obvNiUMBcJwMa3euoHjrsckuGIa&scope=apim:api_create%20apim:api_delete%20apim:api_import_export%20apim:api_product_import_export%20apim:api_publish%20apim:api_view%20apim:app_import_export%20apim:client_certificates_add%20apim:client_certificates_update%20apim:client_certificates_view%20apim:document_create%20apim:document_manage%20apim:ep_certificates_add%20apim:ep_certificates_update%20apim:ep_certificates_view%20apim:external_services_discover%20apim:mediation_policy_create%20apim:mediation_policy_manage%20apim:mediation_policy_view%20apim:pub_alert_manage%20apim:publisher_settings%20apim:shared_scope_manage%20apim:subscription_block%20apim:subscription_view%20apim:threat_protection_policy_create%20apim:threat_protection_policy_manage%20openid&state=/&redirect_uri=https://127.0.0.1/publisher/services/auth/callback/login
Here is how my deployment.toml is configured (I've replaced my actual domain with example.com):
Note: I had to remove the ports to make it work behind the proxy.
And here is my settings.js:
I added the reverseProxy property as suggested in a GitHub issue.
And here is my nginx conf:

This is a known limitation. Please find the steps to resolve the issue - https://apim.docs.wso2.com/en/latest/troubleshooting/troubleshooting-invalid-callback-error/#troubleshooting-registered-callback-does-not-match-with-the-provided-url-error

The reason for this error comes down to a missing X-Forwarded-For header. I ended up changing the forwardedHeader in settings.js to Host, as that header was being passed from my proxy server.
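In settings.js, that change would look roughly like this (a sketch only; it assumes the proxy passes the original hostname in the Host header, and the surrounding customUrl properties may differ by APIM version):
customUrl: {
    enabled: true,
    forwardedHeader: 'Host',
},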

Thanks for the detailed question "user3745065".
I was having exactly the same issue you described in this post, and I think I've nailed the problem down.
As you mentioned, the issue is with the forwardedHeader, which in your case you switched to Host.
But checking the product documentation, the sample they provide is the following:
customUrl: { // Dynamically set the redirect origin according to the forwardedHeader host|proxyPort combination
    enabled: true,
    forwardedHeader: 'X-Forwarded-Host',
},
It took me a while to notice that the forwardedHeader is supposed to be 'X-Forwarded-Host', not 'X-Forwarded-For' as it is by default.
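Note that the proxy also has to actually send that header. With nginx, that means something along these lines in the location block that proxies to API Manager (a sketch; the upstream address is a placeholder):
location /publisher/ {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://127.0.0.1:9443;   # placeholder upstream; point this at your APIM node
}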
A few other things I needed to tweak weren't clear in the documentation for changing the hostname (here): I had to remove the port variable ${mgt.transport.https.port} from the devportal URL.
That's also outlined in installation step 5, here. Still, it's worth mentioning:
from:
[apim.devportal]
url = "https://{Your Domain}:${mgt.transport.https.port}/devportal"
to:
url = "https://{Your Domain}/devportal"
otherwise, when it tries to redirect to the portal (for instance, from the publisher), it constructs the URL with the port number, and that default port 9443 isn't going to work on your proxy (tested on nginx with the settings provided in the documentation here), which is listening and expecting calls on port 443.
Things that I noticed you configured but are perhaps not necessary:
Set the apim.idp settings
Set the reverseProxy settings
Set the apim.gateway.environment settings (not related to the callback URL issue; this is meant for configuring the runtime gateway URLs)
Last but not least, when following "Troubleshooting 'Registered callback does not match with the provided url' error", you again need to remove the port number from the URL; otherwise you will hit the same proxy issue mentioned above.
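For instance, if the service provider's callback regex still contains the default HTTPS port 9443, the change would be (using the example.com placeholder from the question):
from:
regexp=(https://example.com:9443/publisher/services/auth/callback/login|https://example.com:9443/publisher/services/auth/callback/logout)
to:
regexp=(https://example.com/publisher/services/auth/callback/login|https://example.com/publisher/services/auth/callback/logout)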
Just my 2 cents! ;)

Related

Using a custom domain for a Django page through Traefik

I have my first custom domain (it's through GoDaddy).
I've hooked it up to Cloudflare.
I want to connect to it with Traefik.
I have a Django webpage that works fine on port 8000, so I switched it over to 80 and no dice. Trying to connect to my custom domain just hangs, and the port gives me a 404 error.
The Traefik dashboard looks fine and so do my records on Cloudflare (as far as I can tell; I've never done this before).
I was hoping someone could help me connect to my Django page through my custom domain. Is there anything I've done in the evidence provided below that looks wrong?
Is there anything else you would need to see?
Or any steps I've missed?
I receive this error from Traefik as the Docker container starts:
traefik2 | time="2023-02-13T14:08:29Z" level=error msg="Unable to obtain ACME certificate for domains \"tgmjack.com\": unable to generate a certificate for the domains [tgmjack.com]: error: one or more domains had a problem:\n[tgmjack.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: 2606:4700:3033::ac43:a864: Invalid response from http://tgmjack.com/.well-known/acme-challenge/PnsiuL5AtrJXM9UQNrLvhlGdm1MpJ8ZS6i_atIVWCA4: \"<!doctype html><html lang=\\\"en\\\"><head><meta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\" /><meta name=\\\"viewport\\\" c\"\n" providerName=myhttpchallenge.acme ACME CA="https://acme-v02.api.letsencrypt.org/directory" routerName=frontend#docker rule="Host(`tgmjack.com`)"
According to ChatGPT, the required file is an ACME challenge file and it should be present at the URL specified in the log message: "http://tgmjack.com/.well-known/acme-challenge/qC1w4L8-pPVgXvXmWm55u6ETasZWK2iCqJUfZNArY5U".
Investigating, I believe the following few lines from my command line show that the only file on my computer called acme.json is here:
[ec2-user@ip-172-31-19-18 letsencrypt]$ sudo find / -name "acme.json"
/home/ec2-user/thing4/new_ui_51_fix_backend_for_8081/running_prices/TRAEFIK/letsencrypt/acme.json
And there is no "acme-challenge" file anywhere.
So is TRAEFIK/letsencrypt/acme.json the correct file? Because the path looks miles away from what it should be. I didn't make it.
#####################################
extra info below
#################################
Below is a collection of screenshots of each thing I've stated above.
Do you have any advice or questions?
PS :)
This happens on my local machine and on Amazon Linux EC2 containers; I have all my ports open (on the AWS end of things).
Some considerations:
GoDaddy's DNS pointing is ignored if you are using Cloudflare for your DNS, so we only look at Cloudflare's.
On Cloudflare you need to remove that random IP you found set as the A record.
You don't need to change your container port from 8000 to 80; instead, manage it with an "ingress" or a webserver (nginx, for example) that proxy passes to localhost:8000 (see the nginx sketch below).
Traefik probably already has an "ingress" used for provisioning the certificate, which is why it returns an error on ".well-known/acme-challenge". This file is used to verify actual ownership of a domain, which is needed to generate a valid SSL/TLS certificate.
To do this you need to make sure that when you call your server at localhost:8000/.well-known/acme-challenge it returns the file with the unique key. You can find this information in the Traefik documentation (https://doc.traefik.io/traefik/https/acme/); this is a link to the tutorial.
I recommend you start by checking the Cloudflare configuration and removing anything that is not useful to you.
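As a rough illustration of that proxy pass with nginx (a sketch only; the domain and the Django port come from the question, everything else is an assumption):
server {
    listen 80;
    server_name tgmjack.com;

    location / {
        # keep the Django container on port 8000 and just proxy to it
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}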
I hope I have been of some help to you!

Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in, and it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from #keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which informs the backend which protocol was used in the original request
Tell express to trust what the proxy is saying by configuring the trust proxy setting
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
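As a sketch of what that could look like in the Keystone config (assuming your version of @keystone-6/core exposes server.extendExpressApp; on the nginx side, proxy_set_header X-Forwarded-Proto $scheme; is the usual way to forward the protocol):
server: {
  extendExpressApp: (app) => {
    // tell Express to trust the X-Forwarded-* headers set by the reverse proxy
    app.set('trust proxy', true);
  },
},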
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
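For example, in the Keystone config that could look roughly like this (a sketch; the frontend origin below is a placeholder):
server: {
  cors: {
    origin: ['https://www.example.com'], // the origin your Next.js frontend is served from
    credentials: true, // needed so the browser will send the session cookie cross-origin
  },
},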
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

Configuring WSO2 IS behind a reverse proxy at some context

I am trying to set up WSO2 Identity Server behind a reverse proxy for SSL offloading. For example, if WSO2 IS is available at https://<some-ip>:9443/, I am trying to put it behind a reverse proxy with an address such as https://<domain name>/is/. Note the context path /is and SSL port 443. I thought this would be trivial, but sadly I am unable to find any conclusive documentation for achieving it.
My applications use OIDC to connect to WSO2 IS, with Azure Application Gateway as the reverse proxy. Typically all API calls work well, but none of the UI (or flows involving redirections) works because of the context path. I can fix redirects by URL rewriting at the reverse proxy, but that still doesn't solve the problem. For example, the login page will appear, but an XHR call from it will go to /logincontext instead of /is/logincontext. Where can I set up the proxy context path in WSO2 IS? I already tried setting it in the .toml file (the equivalent of setting it in carbon.xml), but it seems to affect only the Management Portal.
The WSO2 IS documentation talks about setting it up behind nginx, but that documentation does not use any path context. I could find reverse proxy documentation for other WSO2 products, such as WSO2 API Manager, but it only involves updating carbon.xml, and that doesn't work for WSO2 IS. I am not a Java person and hence find it difficult to figure out the web app organization of WSO2.
Any help, link to documentation, or guide on setting this up with a proxy context would be useful.
I know this answer comes a little late, but I recently had a similar issue and here is how I made it work; maybe it will be helpful for someone. I was using WSO2 IS 5.11.0.
Note:
I checked similar questions on Stack Overflow and found a few, but none was enough by itself for my case.
Maybe the solution I came up with is not the best or the most correct but it is the only one I could make work.
Here's how I did it, assuming the context path is is:
Open Carbon Management Console and go to Identity Providers -> Resident. Then, go to Inbound Authentication Configuration -> OAuth2/OpenID Connect Configuration. Here, change the hostname under Identity Provider Entity ID to https://domain_name:443/is/<remaining path>.
Make sure the port number is consistently present or absent both here and in the client application. If there is a mismatch between the two, for some reason, it won't work (or at least it didn't for me).
Open the file deployment.toml and modify it as follows:
under the [server] section, add your proxy context at the end of the base_path url, e.g. base_path = "https://$ref{server.hostname}:${carbon.management.port}/is";
of course, also add proxy_context_path = "is" (actually, this last line should be enough but for some reason in my case it wasn't, so I had to modify the base path too);
under [transport.https.properties] add proxyPort="443".
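Put together, the relevant parts of deployment.toml would look roughly like this (a sketch based on the steps above; the hostname is a placeholder):
[server]
hostname = "domain_name"   # assumed to already point at the proxied domain
base_path = "https://$ref{server.hostname}:${carbon.management.port}/is"
proxy_context_path = "is"

[transport.https.properties]
proxyPort = "443"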
For the record, I also turned off compression, by adding:
[transport.http.properties]
compression="off"
[transport.https.properties]
...
compression="off"
and set the token issuer URL equal to the entity id set up in Carbon, with:
[oauth]
use_entityid_as_issuer_in_oidc_discovery = true
but found out that these last two steps (turning off compression and setting the entity id as issuer) weren't needed.
Disable the csrf guard by setting org.owasp.csrfguard.Enabled = false
in the file /repository/resources/conf/templates/repository/conf/security/Owasp.CsrfGuard.Carbon.properties.j2.
This step was necessary for me to avoid the 403 Error after logging in on the Carbon Console (turning off compression didn't work).
Lastly, if you use nginx as a reverse proxy (as I did), add these two lines in the location block used for WSO2:
proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
These are needed (or at least were for me) because some URLs are not under the context path. In particular, the last one allows you to open the Carbon Console at https://domain_name/is/carbon/.
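For context, the whole location block could look something like this (a sketch; the upstream host is a placeholder, and only the two proxy_redirect lines come from the steps above):
location /is/ {
    proxy_pass https://wso2is-host:9443/;   # placeholder upstream; strips the /is/ prefix before forwarding
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
    proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
}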
References:
wso2 api manger carbon page gives 403 Forbidden
WSO2 Identity Server login returns a 403
WSO2 Identity Server port configuration
To understand the template-based configuration model adopted from version 5.9.0 onwards, see:
https://apim.docs.wso2.com/en/latest/reference/understanding-the-new-configuration-model/
https://mcvidanagama.medium.com/understand-wso2-api-managers-new-configuration-model-6425a2710faa
Here are some useful configuration mappings from the old xml to the new toml based model:
https://github.com/ayshsandu/samples/tree/master/config-mapping

Cubesviewer configuration for proper authentication

I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP status 403 error for the URL http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I'd need to add the REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the CubesViewer tag v0.10 from the GitHub repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works separately on its own.
I have reproduced this. This error may occur when you use different URLs to access a website and to access related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While it's the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer" but I recommend you to use hostnames and not IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.

Connecting a DD-WRT router to a Squid proxy running on AWS

I am trying to get a Linksys router with the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over the WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show relevant part):
While trying to retrieve the URL: /
Note the slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), that returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the root domain has been removed, and the string to the right of the domain name is being searched on.
For some background, I have installed Squid with as generic a configuration as possible, and have done it on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch my network on my Mac to use this Squid proxy, it works fine. Only the connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...