SimpleSAML_Error_Error: UNHANDLEDEXCEPTION --Destination in response doesn't match the current URL - drupal-8

I am receiving the error below when I try to log in from an IdP.
Caused by: Exception: Destination in response doesn't match the current URL. Destination is "http://example.com/simplesaml/module.php/saml/sp/saml2-acs.php/SP", current URL is "https://example.com:16116/simplesaml/module.php/saml/sp/saml2-acs.php/SP".
I am using Drupal 8 and simplesamlphp_auth module.

URL matching in SimpleSAML is very particular.
Notice that one request uses the 'https' protocol prefix while the other uses 'http'.
Additionally, the port number reflected in the URL could throw the comparison off.
I last worked with this technology a year and a half ago, but we made use of hosts-file switching to target different environments (Dev, QA, Production, etc.). By avoiding custom ports we had few problems on that front.
Checking the source, you can observe that the error thrown stems from a very literal string comparison:
https://github.com/simplesamlphp/simplesamlphp/blob/4bbcdd84ebda4c876542af582cac53bbd7056160/modules/saml/lib/Message.php#L585

Check the baseurlpath value in config.php of the SimpleSAMLphp library.
Also cross-verify the RelayState in authsources.php.
If required, the metadata also needs to be updated.
ref: https://groups.google.com/g/simplesamlphp/c/AcWUojq2aWg
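For reference, baseurlpath in config.php can be set to a full URL to force the scheme, host, and port that SimpleSAMLphp uses when reconstructing the current URL for this comparison. A minimal sketch, assuming the HTTPS host and port shown in the error message:

```php
// config/config.php -- sketch; substitute your real host and port.
// A full URL here overrides SimpleSAMLphp's auto-detection, so the
// reconstructed "current URL" can match the Destination in the response.
$config = [
    'baseurlpath' => 'https://example.com:16116/simplesaml/',
    // ... rest of the configuration unchanged ...
];
```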

Related

Request data seemingly dirty in multithreaded flask app

We are seeing a random error that seems to be caused by two requests' data getting mixed up. We receive a request for quoting shipping costs on an Order, but the request fails because the requested Order is not accessible by the requesting account. I'm looking for anyone who can provide an inkling of what might be happening here; I haven't found anything on Google, the official Flask help channels, or SO that looks like what we're experiencing.
We're deployed on AWS, with apache, mod_wsgi, 1 process, 15 threads, about 10 instances.
Here's the code that sends the email:
msg = f"Order ID {self.shipping.order.id} is not valid for this Account {self.user.account_id}"
body = f"Error:<br/>{msg}<br/>Request Data:<br/>{request.data}<br/>Headers:<br/>{request.headers}"
send_email(msg, body, "devops@*******.com")
request_data = None
The problem is that in that scenario we email ourselves with the error and the request data, and the request data we're getting, in many cases, could never have landed in that particular piece of code. It can be, for example, a request from the frontend to get the current user's settings, one that makes no reference to any orders, never mind trying to get a shipping quote for one.
Comparing the application logs with apache's access_log, we see that, in all cases, we got two requests on the same instance, one requesting the quoting, and another which is the request that is actually getting logged. We don't know whether these two requests are processed by the same thread in rapid succession, or by different threads, but they come so close together that I think the latter is much more probable. We have no way of univocally tying the access_log entries with the application logging, so far, so we don't know which one of the requests is logging the error, but the fact is that we're getting routed to a view that does not correspond to the request's content (i.e., we're not sure whether the quoting request is getting the wrong request object, or if the other one is getting routed to the wrong view).
Another fact that is of interest is that we use graphql, so part of the routing is done after flask/werkzeug do theirs, but the body we get from flask.request at the moment the error shows up does not correspond with the graphql function/mutation that gets executed. But this also happens in views mapped directly through flask. The user is looked up by the flask-login workflow at the very beginning, and it corresponds to the "bad" request (i.e., the one not for quoting).
The actual issue was a bug in one of the python-graphql libraries (promise), not in Flask, Werkzeug, or Apache. It was not the request data that was "moving" to a different thread, but a different thread trying to resolve the promise for a query that was supposed to be handled elsewhere.
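To illustrate the mechanism with a minimal standard-library sketch (this is not the actual promise bug, just the underlying thread-locality): Flask's request is a context local bound to the handling thread, so work resolved on a different thread sees that thread's state, not the original request's:

```python
# Sketch: context locals are per-thread. A value stored by the request
# thread is invisible to a worker thread, which is why work resolved on
# the wrong thread sees the wrong -- or no -- request.
import threading

request_ctx = threading.local()

def handle_request(body):
    # Set by the "request" thread, as a framework would on dispatch.
    request_ctx.body = body

def resolve_on_other_thread(out):
    # A different thread: the attribute set above does not exist here.
    out["has_body"] = hasattr(request_ctx, "body")

handle_request(b"order=42")
out = {}
t = threading.Thread(target=resolve_on_other_thread, args=(out,))
t.start()
t.join()
print(out["has_body"])  # False: the worker thread never saw the request
```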

Sitecore 9 Page Test in "Experience Optimization"

Sitecore Experience Optimization Page Testing.
I am trying to test a simple page I created in Experience Optimization. Locally everything works fine, but when I tried the same steps in an Azure environment, it gave me errors like the ones below.
HTTP400: BAD REQUEST - The request could not be processed by the server due to invalid syntax.
(XHR)POST - https://mc-4b5a2ea8-f571-4c60-bf02-50220a-cm.azurewebsites.net/api/sitecore/Settings/SetUserProfileKey
HTTP500: SERVER ERROR - The server encountered an unexpected condition that prevented it from fulfilling the request.
(XHR)GET - https://mc-4b5a2ea8-f571-4c60-bf02-50220a-cm.azurewebsites.net/sitecore/shell/api/ct/ItemInfo/GetByUri?datauri=sitecore%3A%2F%2F%7BED7C2C82-114C-4FB1-96A7-6CCF1F37317B%7D%3Fver%3D2%26lang%3Den&_=1518266879976
HTTP500: SERVER ERROR - The server encountered an unexpected condition that prevented it from fulfilling the request.
(XHR)GET - https://mc-4b5a2ea8-f571-4c60-bf02-50220a-cm.azurewebsites.net/sitecore/shell/api/ct/ItemInfo/GetByUri?datauri=sitecore%3A%2F%2F%7BED7C2C82-114C-4FB1-96A7-6CCF1F37317B%7D%3Fver%3D2%26lang%3Den&_=1518266879972
Please check the attached screenshots for the same.
Update for all viewers:
Thanks to @Pete Navarra for the answer.
This is a stretch... But I think you are hitting the URL max length, with the length of your domain name. Can you shorten your machine names?
I will try it and keep you updated on the same.

Mapping Lambda output in API Gateway gives server error

I have an AWS API Gateway setup, served by a Python Lambda function. For successful responses the Lambda returns a response of the form:
"200:{\"somekey\": \"somevalue\"}"
By default, the Integration Response settings in the gateway console have just one rule configured with a Lambda Error Regex of .* mapping to a response status of 200. This works fine.
The problem is when I try to change that to 200.* (with a view to enabling more specific codes going forward). Now I get a
{"message": "Internal server error"}
every time I hit the gateway with any request (resulting in a 200 or not).
No error logs are written to CloudWatch.
I want to know how I can map Lambda outputs to HTTP status codes successfully in AWS API Gateway.
If you do a test run in API gateway, the test run log should show an error similar to:
Sat Nov 21 07:41:25 UTC 2015 : Execution failed due to configuration
error: No match for output mapping and no default output mapping
configured
The problem is there is no longer a default mapping to catch a successful response. The intention of the error regex is to catch errors and map them to status codes. The error regex searches the errorMessage property of the failed JSON response. If you were to set the status code 400 to be the default (blank), you would find your successful response always mapping to a status code of 400.
You're best off leaving 200 as the default (blank) regex. Then set specific regex values to check for your different error messages. For example, you can have multiple specific error-regex status codes and a generic catch-all that results in 500s.
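To make the matching logic concrete, here is a rough Python sketch (the handler and message are hypothetical, and this is not the gateway's actual code): the Lambda Error Regex is tested against the errorMessage of a failed invocation, so raising with a status prefix in the message is what lets a pattern like 400.* route it:

```python
# Sketch (not API Gateway's real internals): the Lambda Error Regex is
# matched against the `errorMessage` field of a failed invocation.
import re

def handler(event, context):
    if "order_id" not in event:
        # This message surfaces as `errorMessage` in the integration
        # response, which the error regex is tested against.
        raise Exception("400: missing order_id")
    return {"somekey": "somevalue"}

# Simulate a failed invocation and the regex test:
try:
    handler({}, None)
except Exception as exc:
    error_payload = {"errorMessage": str(exc)}

matches_400 = re.match(r"400.*", error_payload["errorMessage"]) is not None
print(matches_400)  # True -> mapped to the 400 integration response
```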
For those who tried everything suggested on this question and couldn't make it work (like me), check thedevkit's comment on this post (it saved my day):
https://forums.aws.amazon.com/thread.jspa?threadID=192918
Reproducing it entirely below:
I've had issues with this myself, and I believe that the newline
characters are the culprit.
foo.* will match occurrences of "foo" followed by any characters
EXCEPT newline. Typically this is solved by adding the '/s' flag, i.e.
"foo.*/s", but the Lambda error regex doesn't seem to respect this.
As an alternative you can use something like: foo(.|\n)*
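The behavior quoted above can be reproduced with Python's re module (a sketch; the Lambda error regex engine may differ in other details, but the dot-vs-newline rule is the same):

```python
# Sketch: '.' does not match '\n' by default, so "foo.*" misses
# multi-line error messages, while "foo(.|\n)*" catches them.
import re

msg = "foo: something broke\nTraceback (most recent call last): ..."
plain = re.fullmatch(r"foo.*", msg) is not None        # stops at '\n'
workaround = re.fullmatch(r"foo(.|\n)*", msg) is not None
print(plain, workaround)  # False True
```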

Monit http response content regex behavior

I am using a Logstash + Elasticsearch stack to aggregate logs from a few interrelated apps.
I am trying to get Monit to alert whenever the word 'ERROR' is returned as part of an Elasticsearch REST query from Monit, but the 'content' regex check does not seem to be working for me. (I am sending email and SMS alerts from Monit via M/Monit.)
I know my Monit and M/Monit instances are configured properly because I can get alerts for server pings and file checksum changes, etc. just fine.
My Monit Elasticsearch HTTP query looks like this:
check host elasticsearch_error with address 12.34.56.789
if failed
url http://12.34.56.789:9200/_search?q=severity%3AERROR%20AND%20timestamp%3A>now-2d
and content = "ERROR"
then alert
BTW, %20 escapes 'space', %3A escapes ':'
My Logstash only has error log entries that are between one and two days old, i.e., when I run
http://12.34.56.789:9200/_search?q=severity%3AERROR%20AND%20timestamp%3A>now-2d
in the browser, I see errors (with the word 'ERROR') in the response body, but when I run
http://12.34.56.789:9200/_search?q=severity%3AERROR%20AND%20timestamp%3A>now-1d
I do not. (Note the one-day difference.) This is expected behavior. Note: my response body is a JSON with the "ERROR" string in a child element a few levels down. I don't know if this affects how Monit processes the regex.
When I run the check as above I see
'elasticsearch_error' failed protocol test [HTTP] at
INET[12.34.56.789:9200/_search
q=severity%3AERROR%20AND%20timestamp%3A>now-2d]
via TCP -- HTTP error: Regular expression doesn't match:
regexec() failed to match
in the log. Good: Content == "ERROR" is true, and I can alert from this (even though I find the Connection failed message in the Monit browser dashboard a little irritating; it should say something like Regex failure).
The Problem
When I 'monit reload' and run the check with
url http://12.34.56.789:9200/_search?q=severity%3AERROR%20AND%20timestamp%3A>now-1d
I STILL get the regexec() failed to match error as above. Note that no "ERROR" string is returned in the response body, so Content == "ERROR" is false. Why does this check fail? Any light shed on this issue will be appreciated!
The Answer
Turns out this problem is about URL encoding for the Elasticsearch query.
I used url http://12.34.56.789:9200/_search?q=severity:ERROR&timestamp:>now-36d in the check to get Monit to make a request that looks like 12.34.56.789:9200/_search?q=severity:ERROR&timestamp:%3Enow-36d. Note the change in encoding. This seems to work.
The actual URL used by monit can be seen by starting monit in debug mode using monit -vI.
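The encoding difference can be checked with a few lines of Python (a sketch; the safe character set passed to quote is chosen here to mirror what monit appears to send, and only the '>' ends up percent-encoded):

```python
# Sketch: '>' becomes %3E while the rest of the query is left alone.
from urllib.parse import quote

query = "q=severity:ERROR&timestamp:>now-36d"
encoded = quote(query, safe=":&=?/")
print(encoded)  # q=severity:ERROR&timestamp:%3Enow-36d
```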
Side Question
The 'content' object seems to respect '=' and '==' and '!='. '=' is referenced in the documentation, but a lot of third-party examples use '=='. What is the most correct use?
Side Question Answer
The helpful folks on the M/Monit team advise that "=" is an alias for "==" in the Monit configuration file.
I added the solution I found to my question above.

Setting Tomcat 7 sessionid and value to be identified via Hardware Load Balancing for session affinity

Although easily done from my perspective with IIS, I'm a total noob to Tomcat and have no idea how to set static values for cookie contents. Yes I've read the security implications and eventually will access via SSL so I'm not concerned. Plus I've read the Servlet 3.0 spec about not changing the value and I accept that.
In IIS I would simply set a HTTP Header named Set-Cookie with an arbitrary setting of WebServerSID and a value of 1001.
Then in the load balancer VIP containing this group of real servers, set the value WebServerSID at the VIP level, and for the first web server a cookie value of 1001 and so one for the remaining machines 1002 for server 2, 1003 for server 3.
This achieves session affinity via cookies until the client closes the browser.
How can this be done with Tomcat 7.0.22?
I see a great deal of configuration changes have occurred between Tomcat 6.x and 7.x with regard to cookies and how they're set up. I've tried the following after extensive research over the last week.
In web.xml: (this will disable URL rewriting under Tomcat 7.x)
<tracking-mode>COOKIE</tracking-mode> under the default session element
In context.xml: (cookies is true by default but I was explicit as I can't get it working)
cookies=true
sessionCookiePath=/
sessionCookieName=WebServerSID
sessionCookieName=1001
I have 2 entries in context.xml for sessionCookieName because the equivalent commands from Tomcat 6.x look like they've been merged into one.
See http://tomcat.apache.org/migration-7.html#Tomcat_7.0.x_configuration_file_differences
Extract:
org.apache.catalina.SESSION_COOKIE_NAME system property: This has been removed. An equivalent effect can be obtained by configuring the sessionCookieName attribute for the global context.xml (in CATALINA_BASE/conf/context.xml).
org.apache.catalina.SESSION_PARAMETER_NAME system property: This has been removed. An equivalent effect can be obtained by configuring the sessionCookieName attribute for the global context.xml (in CATALINA_BASE/conf/context.xml).
If this is not right then I simply do not understand the syntax that is required and I cannot find anywhere that will simply spell it out in plain black and white.
Under Tomcat 6.x, I would have used Java Options in the config like:
-Dorg.apache.catalina.SESSION_COOKIE_NAME=WebServerSID
-Dorg.apache.catalina.SESSION_PARAMETER_NAME=1001
The application I'm using does not have any of these values set elsewhere so it's not the application.
All these settings are in the context/web/server.xml files at the Catalina base.
At the end of the day what I need to see in the response headers under Set-Cookies: (as seen using Fiddler) is:
WebServerSID=1001
NOT
JSESSIONID=as8sd9787ksjds9d8sdjks89s898
Thanks in advance.
Regards
The best you can do purely with configuration is to set the jvmRoute attribute of the Engine which will add the constant value to the end of the session ID. Most load-balancers can handle that. It would look like:
JSESSIONID=as8sd9787ksjds9d8sdjks89s898.route1
If that isn't good enough and you need WebServerSID=1001 you'll have to write a ServletFilter and configure that to add the header on every response.
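For reference, the jvmRoute approach is a one-attribute change in server.xml (a sketch; "1001" stands in for whatever node identifier your load balancer expects):

```xml
<!-- conf/server.xml (sketch): jvmRoute appends ".1001" to every session
     ID issued by this node, e.g. JSESSIONID=as8sd9787...s898.1001 -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="1001">
  <!-- existing Host and Realm elements unchanged -->
</Engine>
```

Each node in the pool gets its own jvmRoute value (1001, 1002, 1003, ...), and the load balancer keys affinity on the session-ID suffix instead of a dedicated cookie.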