wso2am-2.6.0 invalid jwt signature - wso2

Enabling JWT signature for backend services in WSO2AM-2.6.0 (6.x branch)
<JWTConfiguration>
    <EnableJWTGeneration>true</EnableJWTGeneration>
    <JWTHeader>X-JWT-Assertion</JWTHeader>
    <SignatureAlgorithm>SHA256withRSA</SignatureAlgorithm>
    <JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.JWTGenerator</JWTGeneratorImpl>
</JWTConfiguration>
However, developers complain that the signature is not valid (according to the JOSE library). I tested the token on the jwt.io page and it also reports the signature as invalid.
I see that signature generation has changed since the previous version (wso2am-2.1.0) (no external framework is used any more), but after the change the signature is not considered valid by other frameworks (jose4j, jwt.io).
Is there any way to configure wso2am to create a signature that other libraries can validate?
Edit:
I see the JWT token is signed using APIMJWTGenerator only, though that doesn't help make the token verifiable.
The exception is
"stacktrace": "org.jose4j.jwt.consumer.InvalidJwtException:
Unable to process JOSE object (cause: org.jose4j.lang.UnresolvableKeyException:
The X.509 Certificate Thumbprint header(s) in the JWS do not identify any of the provided Certificates - x5t=NTA3YzJmZDk0OTg4N2ViNWRlY2M4N2NlMDdjMmNlNjliOTRkYjM1OA vs. SHA-1 thumbs:[UHwv2UmIfrXezIfOB8LOablNs1g].)
Does the validation failure have something to do with the x5t header attribute?
Edit2: apparently the x5t header is expected to contain the certificate's SHA-1 thumbprint; the provided value NTA3YzJmZDk0OTg4N2ViNWRlY2M4N2NlMDdjMmNlNjliOTRkYjM1OA is too long to be a SHA-1 digest, so it is invalid.
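The length mismatch above can be reproduced with a short stdlib sketch (the certificate bytes are a made-up placeholder): per RFC 7515, x5t must be the base64url encoding of the raw SHA-1 digest of the DER certificate, while the rejected value looks like a base64 encoding of the digest's hex string instead.

```python
import base64
import hashlib

# Hypothetical certificate bytes; in practice this is the DER-encoded X.509 cert.
cert_der = b"example certificate bytes"

digest = hashlib.sha1(cert_der).digest()  # 20 raw bytes

# Correct x5t per RFC 7515: base64url of the raw SHA-1 digest, padding stripped.
correct_x5t = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# What a broken generator can produce: base64 of the 40-char *hex string*
# of the digest, which no JOSE library will match against the certificate.
broken_x5t = base64.b64encode(hashlib.sha1(cert_der).hexdigest().encode()).decode()

print(len(correct_x5t), correct_x5t)  # 27 characters
print(len(broken_x5t), broken_x5t)   # 56 characters (incl. padding)
```

The 27-character form is what jose4j compares against its SHA-1 thumbprints; the 54-plus-character form matches the rejected value in the stack trace.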
Edit3:
The issue seems related to https://github.com/wso2/carbon-apimgt/issues/5535, whose fix apparently breaks compatibility with backend services (and the frameworks they use); I am preparing a fix.

Fixed with pull request https://github.com/gusto2/carbon-apimgt/pull/1 (maybe not perfect, but working and tested).

Related

Exclude headers from s3v4 signature calculation

We are using an on-prem S3-compatible storage server on an intranet, and we want to expose its intranet URL to the internet, so we set up a reverse proxy with a mapping to the intranet URL. The intranet URL works perfectly, but the internet URL returns a 403 error:
The request signature we calculated does not match the signature you provided. Check your Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 0a440c7f:15cc604b1e2:12d3af:24d; S3 Extended Request ID: null), S3 Extended Request ID: null
After debugging, we found that the proxy modifies the Host header used to calculate the signature in order to redirect the request to the intranet URL...
So my question is: how can we suppress some headers from the V4 signature calculation using the AWS SDK or the Boto3 client? Or is there a better architecture for exposing an on-prem S3 service?
Thanks in advance.
Amir.
There are essentially two solutions to this.
The first one is easier: sign the request for the internal URL, then just use simple string prefix replacement to rewrite the host part of the signed URL to point it to the hostname of the external proxy. When the proxy rewrites the Host header, it will end up rewriting it back to exactly what you signed.
It is, I assume, common knowledge that signed URLs are immune to tampering, for all practical purposes: you can't change anything about a signed URL without invalidating it... but that's not what this is. The change is temporary, and the proxy's net effect is to undo the change.
The alternate solution requires the proxy, or another service in the chain (before the storage service), to know the signing keys and secrets, so that it can first validate the incoming request and, if valid, modify the request and then generate a new signature that the service will accept.
I once wrote a service to do exactly this: when a request was for HEAD, the proxy would use the same key and secret (which it knew) to generate a signature for the same request, but with GET. If it matched the signature in the incoming request, the proxy would replace the existing signature with a signature for a HEAD request -- thus allowing the client to use a URL originally signed for a GET request to make either a GET or a HEAD request, something S3 does not natively support, since a GET and a HEAD for the same object require two different signed URLs.
The concept is the same, though -- generate a signature in the proxy for what the client is requesting, to validate the incoming signature, and then re-sign the request as needed. The solution I built used HAProxy's Lua integration to examine and modify the request in flight.

Invalid format when attempting to create AWS Signature V4 to sign API Gateway URL POST request

I am using the code found here to base my signing class on:
https://gist.github.com/yvanin/0bdf68c1139ad698519e
From this I have been able to build an Authorisation header, but when this is passed along with my POST request, it fails with the error
The format of value 'Redacted' is invalid.
When I compare my authorisation header composition to an example found on the internet, it looks pretty much spot on (although I don't use a Content-Type header, as the request has no payload). Can anyone assist with where I might be going wrong?
The internet example is the top one below; mine is underneath. I have changed the relevant access key and signature data to obfuscate the real values. (EDIT: obviously I'm trying to reach the API Gateway service, not IAM, which is why I have execute-api in the header; I have also tried apigateway and neither has any effect...)
// AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7
// AWS4-HMAC-SHA256 Credential=AAAAI7T5JMMLSKVKA6EQ/20180302/ap-southeast-2/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=856bc41f18582836b56a02d9563c8f4f621fce7338ae2ec3afabe254a1543667
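For reference, the whole signing chain behind a header like the second example can be sketched in stdlib Python (all request values here are hypothetical placeholders mirroring that example; per the SigV4 spec, the canonical headers block ends with its own newline):

```python
import hashlib
import hmac

# Hypothetical request values shaped like the second example header above.
access_key = "AAAAI7T5JMMLSKVKA6EQ"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
region, service = "ap-southeast-2", "execute-api"
host = "abc123.execute-api.ap-southeast-2.amazonaws.com"
method, canonical_uri, canonical_query = "POST", "/prod/resource", ""
amz_date = "20180302T120000Z"   # value of the x-amz-date header
date_stamp = amz_date[:8]       # credential-scope date

payload_hash = hashlib.sha256(b"").hexdigest()  # no payload -> hash of ""

canonical_headers = f"host:{host}\nx-amz-date:{amz_date}\n"
signed_headers = "host;x-amz-date"
canonical_request = "\n".join([method, canonical_uri, canonical_query,
                               canonical_headers, signed_headers, payload_hash])

scope = f"{date_stamp}/{region}/{service}/aws4_request"
string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                            hashlib.sha256(canonical_request.encode()).hexdigest()])

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

# Derive the signing key: secret -> date -> region -> service -> "aws4_request"
k = _hmac(("AWS4" + secret_key).encode(), date_stamp)
k = _hmac(k, region)
k = _hmac(k, service)
k = _hmac(k, "aws4_request")
signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

authorization = (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
                 f"SignedHeaders={signed_headers}, Signature={signature}")
print(authorization)
```

Any divergence in the canonical request (extra headers, a mis-parsed query string, a missing payload hash) changes the final signature, which is consistent with the ParseQueryString culprit found below.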
EDIT: Solved! So, when I first used the Sig V4 code at the github link, there was an extension function ParseQueryString that wouldn't resolve. It was because I was missing a reference to System.Net.Http.Formatting. Attempts to locate this library and add it failed, so I wrote my own extension method to do what I believed that function was doing.
Clearly my version of ParseQueryString was not right, because I finally solved the issue of the missing reference by locating a very specific version of the assembly to add via NuGet; any other version produced the following error:
Unable to find a version of 'Microsoft.AspNet.WebApi.Client' that is compatible with 'System.Net.Http.Formatting
The specific version I required was:
PM> Install-Package System.Net.Http.Formatting -Version 4.0.20505
Once that was installed, and the ParseQueryString extension method I wrote was replaced with the standard one, voilà! I now have a response from my API Gateway using IAM authorisation. It's a beautiful day :)

Browser support of nextnonce directive in HTTP digest authentication

I've written a C++-based HTTP server (or, to rephrase, spilled another drop in the ocean) and encountered an issue with HTTP digest authentication.
According to the HTTP authentication RFC, using the nextnonce directive in the Authentication-Info header is a valid way of implementing a single-use nonce mechanism. I've implemented this according to the RFC, but both Chrome and Firefox seem to ignore the directive and issue all further requests with the initial nonce, thus triggering unneeded 401 responses. An example illustration with Firefox:
First request - my server returns 401 and issues the initial nonce a1f778b2afc8590e4a64f414f663128b
Firefox successfully authenticates and gets a reply with the Authentication-Info: nextnonce="0b72e74afbcab33a5aba05d4db03b801" header
Firefox issues a new request to fetch image from the returned html - still the initial nonce c1587dd7be6251fa715540e0d6121aa5 is used and thus a reply with a new nonce and a flag that the provided nonce is expired is sent back.
Same scenario as for the first image request.
Now authentication succeeds with the new nonce.
The authentication succeeds for the second request as well.
As can be seen in the images, even though I reply with Authentication-Info: nextnonce="0b72e74afbcab33a5aba05d4db03b801" upon a successful authorization on the first request, the next two requests still use the original nonce instead of the provided nextnonce value. Has anyone had a similar experience? I am most certainly doing something wrong; even though the RFC says the client SHOULD reply with the provided nextnonce value, and thus it is not mandatory, I highly doubt that the most popular browsers do not use this technique.
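For context, the response hash a digest client sends is bound to the nonce, which is why a request that reuses the initial nonce fails once the server retires it. A stdlib sketch of the basic RFC 2617 MD5 scheme (without qop, with made-up credentials):

```python
import hashlib

def md5(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Basic digest response (no qop): response = MD5(HA1 ":" nonce ":" HA2),
# so the nonce is baked into the hash the client sends.
def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5(f"{user}:{realm}:{password}")
    ha2 = md5(f"{method}:{uri}")
    return md5(f"{ha1}:{nonce}:{ha2}")

# A compliant client SHOULD switch to the nextnonce from Authentication-Info;
# a client that keeps the initial nonce produces a response the server
# now considers stale. (Credentials and URI here are hypothetical.)
initial = digest_response("user", "realm", "pw", "GET", "/img.png",
                          "a1f778b2afc8590e4a64f414f663128b")
fresh   = digest_response("user", "realm", "pw", "GET", "/img.png",
                          "0b72e74afbcab33a5aba05d4db03b801")
assert initial != fresh  # different nonce -> different response hash
```

This is why the server can only answer such a request with stale=true and a fresh nonce, costing the extra round trip observed above.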
Looks like it's a known Firefox bug that's been open since 2001.
Bug 150605 - digest authentication problem: Mozilla ignores the nextnonce parameter of Authentication-Info Response Header.
which is a duplicate of
Bug 116177 - next nonce digest auth test fails

Use single certificate in WS Security

I'm working on WS-Security configurations in SoapUI. Under Signature, there is an option called "Use Single Certificate for signing". I tried checking and unchecking it, but the requests are the same; I can't find the difference. When should I use that option?
I tried to search on google, I couldn't find the answer. Pardon me if my understanding is wrong.
After a little searching together with the OP, it seems we found the answer.
Checking this option adds a specific <wsse:BinarySecurityToken> element to the <wsse:Security> header, specifying a certificate (in SoapUI's case, the certificate used to perform the signature).
From the OASIS spec we can see the definition of this element:
3.1 Token types
This profile defines the syntax of, and processing rules for, three types of binary security token using the URI values specified in Table 2 (note that URI fragments are relative to the URI for this specification).
3.1.1 X509v3 Token Type
The type of the end-entity that is authenticated by a certificate used in this manner is a matter of policy that is outside the scope of this specification.
In this document there is also a sample of the <wsse:BinarySecurityToken> node added to the <wsse:Security> header, which is basically a <wsse:BinarySecurityToken> with a ValueType="wsse:X509v3" attribute and the certificate encoded as Base64 in the text value of the node:
<wsse:BinarySecurityToken
    wsu:Id="binarytoken"
    ValueType="wsse:X509v3"
    EncodingType="wsse:Base64Binary">
  MIIEZzCCA9CgAwIBAgIQEmtJZc0…
</wsse:BinarySecurityToken>
The reason wsu:Id appears in the SOAP body:
- This attribute, defined as type xsd:ID, provides a well-known attribute for specifying the local ID of an element.
- It is used to locate elements in the message, e.g. correlating signatures to security tokens.
- XML Schema defines several ID and referencing data types, but they require the consumer to have or obtain the schema definition.
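For illustration, a token node like the sample above can be generated with stdlib Python (the namespace URIs are the standard WS-Security 2004 ones; the certificate bytes are a made-up placeholder):

```python
import base64
import xml.etree.ElementTree as ET

# Standard WS-Security 2004 namespace URIs.
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")
WSU = ("http://docs.oasis-open.org/wss/2004/01/"
       "oasis-200401-wss-wssecurity-utility-1.0.xsd")
ET.register_namespace("wsse", WSSE)
ET.register_namespace("wsu", WSU)

# Hypothetical placeholder; in practice this is the DER-encoded signing cert.
cert_der = b"hypothetical DER-encoded certificate"

# Build the BinarySecurityToken with the certificate Base64-encoded as text.
token = ET.Element(f"{{{WSSE}}}BinarySecurityToken", {
    f"{{{WSU}}}Id": "binarytoken",
    "ValueType": "wsse:X509v3",
    "EncodingType": "wsse:Base64Binary",
})
token.text = base64.b64encode(cert_der).decode()

xml_str = ET.tostring(token).decode()
print(xml_str)
```

The wsu:Id is what a ds:Reference in the signature can point at, which is how signatures are correlated to security tokens as described above.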

WSO2 - simple endpoint fails

I am trying to set up a simple API test against a local endpoint. I have created the sample API (phone number lookup) and that works fine.
http://192.168.1.11:8080/api/simpleTest is my endpoint, and the WSO2 service also runs on 192.168.1.11, but when I test it in the Publisher it always fails. This is a simple GET with no parameters.
I can run it from a browser or CURL (outside of WSO2) and it works fine.
Thanks.
I assume you are talking about clicking the Test button when providing the Backend Endpoint in the API Publisher.
The way the Test button works at the moment (as far as I understand) is that it invokes the HTTP HEAD method on the provided endpoint (because, according to RFC 2616, "This method is often used for testing hypertext links for validity, accessibility, and recent modification.")
Then it checks the response. If the response is valid, or is 405 (Method Not Allowed), the URL is marked as valid.
Thus sometimes, if the backend does not properly follow the RFC, otherwise-working URLs may be declared invalid by the test because of that improper HEAD response evaluation. Obviously, this is just a check for your convenience, and you can ignore it if you know the endpoint works for the methods and resources you need.
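That check can be sketched as follows (a stdlib approximation of the described behavior, not the Publisher's actual code):

```python
import urllib.error
import urllib.request

# Approximation of the Test button: send HEAD and treat any successful
# response, or 405 Method Not Allowed, as "endpoint looks valid".
def endpoint_looks_valid(url: str, timeout: float = 5.0) -> bool:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except urllib.error.HTTPError as e:
        return e.code == 405  # backend rejects HEAD, but the URL is reachable
    except OSError:
        return False          # connection refused, timeout, DNS failure, ...
```

A backend that answers HEAD with, say, 404 or 500 while serving GET correctly would fail this probe, matching the behavior described above.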
P.S. Checked it on API Cloud but behavior is identical to downloadable API Manager.