I'm using django-openid-provider (https://bitbucket.org/romke/django_openid_provider/) and need to test its features before deploying to a real server. I've tried to construct a POST request following the OpenID 2.0 specification and send it to Django's test server to get an OpenID token.
My POST request looks like this:
http://192.168.232.151:8008/openid/
BODY:
openid.ns:http://specs.openid.net/auth/2.0
openid.mode:associate
openid.assoc_type:HMAC-SHA256
openid.session_type:DH-SHA256
I also tried providing it with the public key parameters (openid.dh_modulus, openid.dh_gen, openid.dh_consumer_public) for the Diffie-Hellman algorithm, and sniffing the traffic of an OpenID authentication to get additional keys for the request, but I always got a 500 Internal Server Error
with
Exception Type: ProtocolError
Exception Value:
No mode value in message <openid.message.Message {('http://openid.net/signon/1.0', u'ns:http://specs.openid.net/auth/2.0\nopenid.mode:associate\nopenid.assoc_type:HMAC-SHA256\nopenid.session_type:DH-SHA256'): u'DH-SHA256EABv%252BfEoZlgh%252BeU71rlInEppkiuX\nopenid.dh_modulus:ANz5OguIOXLsDhmYmsWizjEOHTdxfo2Vcbt2I3MYZuYe91ouJ4mLBX%2BYkcLiemOcPym2CBRYHNOyyjmG0mg3BVd9RcLn5S3IHHoXGHblzqdLFEi%2F368Ygo79JRnxTkXjgmY0rxlJ5bU1zIKaSDuKdiI%2BXUkKJX8Fvf8W8vsixYOr\nopenid.dh_gen:Ag%3D%3D\nopenid.dh_consumer_public:AJs12O5ypo2N%2FL0RJiiOgu9llg2dFsnjthyH49dx6FXz52iDXNkS7gquOm6KEr%2BUfTmktyVMA5DrZwJ%2BrX1jk7sKmXJMmi9%2B7N5fa0wvz%2Fi6nrvg8Oqw31kh%2BtbD9ansUeATSlCfUoRCqeUHEABv%2BfEoZlgh%2BeU71rlInEppkiuX'}>
Debugging the django-openid module, I've discovered that constructing the Message object raises this error, but I cannot find parameter values that satisfy the openid-provider server.
Can you show me what I'm doing wrong? Am I taking the hard way; can I use something that emulates a consumer site with an OpenID client locally? Or does someone have a correct example of such a POST request?
Thanks
You are probably best off using a publicly accessible OpenID consumer or an OpenID client library to test django-openid-provider, since constructing an OpenID request manually is inconvenient.
In the past, I've used mod_auth_openid (an Apache module) for testing against django_openid_provider; it works well.
If you are really intent on manually constructing the HTTP requests against the OpenID endpoint:
The checkid_setup request shown below is an indirect request, so it is passed via GET, not POST.
The parameters go in the query string, not in the body. (The newline-separated key:value encoding you are sending is the format OpenID uses for direct responses, not requests, which is why the Message parser rejects your body.)
Using httpie, here's an example of a valid request against an OpenID provider, assuming:
The OpenID endpoint is http://192.168.232.151:8008/openid/
You've used django-openid-provider to create an openid called myopenid
The OpenID consumer (relying party) is http://www.example.com/protected/
The OpenID consumer is protected using mod_auth_openid
Here's the initial request:
$ http get http://192.168.232.151:8008/openid/ \
openid.assoc_handle=={HMAC-SHA256}{42a4370e}{G804lQ====} \
openid.claimed_id==http://192.168.232.151:8008/openid/myopenid/ \
openid.identity==http://192.168.232.151:8008/openid/myopenid/ \
openid.mode==checkid_setup \
openid.ns==http://specs.openid.net/auth/2.0 \
openid.realm==http://www.example.com/protected/ \
openid.return_to==http://www.example.com/protected/?modauthopenid.nonce=qAgqlNCdLl \
openid.trust_root==http://www.example.com/protected/
This is equivalent to:
$ curl 'http://192.168.232.151:8008/openid/?openid.assoc_handle=%7BHMAC-SHA256%7D%7B42a4370e%7D%7BG804lQ%3D%3D%3D%3D%7D&openid.claimed_id=http%3A%2F%2F192.168.232.151%3A8008%2Fopenid%2Fmyopenid%2F&openid.identity=http%3A%2F%2F192.168.232.151%3A8008%2Fopenid%2Fmyopenid%2F&openid.mode=checkid_setup&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.realm=http%3A%2F%2Fwww.example.com%2Fprotected%2F&openid.return_to=http%3A%2F%2Fwww.example.com%2Fprotected%2F%3Fmodauthopenid.nonce%3DqAgqlNCdLl&openid.trust_root=http%3A%2F%2Fwww.example.com%2Fprotected%2F'
Note that the openid.assoc_handle and modauthopenid.nonce values are not valid in this example; you'd have to generate proper values for those.
If this succeeds, the server should redirect you via 302 to http://www.example.com/protected/ with a number of query parameters.
Also note that this is only the initial step in the protocol; there are additional requests involved.
But you really don't want to manually craft these OpenID HTTP requests. Use an OpenID library or an OpenID consumer instead.
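If you'd rather script the consumer side than run mod_auth_openid, the python-openid library (which, as far as I recall, django-openid-provider itself builds on) can generate a correct checkid_setup URL for you. A minimal sketch, assuming the same endpoint and a myopenid identity as above:

# Minimal sketch using the python-openid library (assumed installed) to
# build a valid checkid_setup request against the provider above.
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

session = {}  # any dict works as a session store for a quick test
oidconsumer = consumer.Consumer(session, MemoryStore())

# Discovery: fetches the identity page and locates the OpenID endpoint
auth_request = oidconsumer.begin('http://192.168.232.151:8008/openid/myopenid/')

# Build the URL the user would be redirected to (the checkid_setup request)
print(auth_request.redirectURL(
    realm='http://www.example.com/protected/',
    return_to='http://www.example.com/protected/'))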
[Diagram: infrastructure of the system]
Expected:
I want to block requests that do not come from Server FE (domain.com).
For example: if users make a request from another app such as Postman, it should respond with 403 and an "access denied" message.
I used ALB rules; they work, but users can cheat with Postman.
I also used AWS WAF to detect requests, but it doesn't work.
Is there any way to block requests from Postman or other apps?
We could generate a secret key and check it between Server FE and Server BE, but users can see it in the headers, simulate those headers in Postman, and call the API successfully.
Current solution:
I use an Application Load Balancer rule to check the Host and Origin headers, but users can add these values in Postman and the request succeeds.
[Screenshot: ALB rule]
When I add an Origin value matching the one set on the ALB, the request succeeds.
[Screenshot: Postman request succeeding]
[Screenshot: Postman request denied]
Users can cheat and call the API successfully.
Thanks for reading. Please suggest any solution for this one. Thanks a lot.
No. HTTP servers have no way to know what client is being used to make any HTTP request. Any HTTP client (browsers, Postman, curl, whatever) is capable of making exactly the same requests as any other.
The User-Agent header is a superficial way to distinguish clients, but it's easy enough for Postman or any other HTTP client to spoof the User-Agent header to one that makes the request look like it is coming from a web browser.
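For illustration, here's how trivially any client can claim to be a browser (the URL is a placeholder):

import requests  # any HTTP client can send any User-Agent it likes

resp = requests.get(
    'https://example.com/api',  # placeholder URL
    headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                           'AppleWebKit/537.36 (KHTML, like Gecko) '
                           'Chrome/120.0 Safari/537.36'})
print(resp.status_code)  # the server sees a "browser" request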
You can only make it more challenging. Some tools that thwart this behavior include Google reCAPTCHA and Cloudflare's Browser Integrity Check, but they're not bulletproof and ultimately aren't 100% effective at stopping people from using tools or automation to access your site in unintended ways. At the end of the day, you're limited to what can be done with HTTP, and Postman can do everything at the HTTP layer.
We are using Envoy access logs (https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage). Does Envoy validate the fields that are passed to the access logs, e.g. the field format?
I'm asking for basic security reasons: to verify that if I use, for example, %REQ(:METHOD), I will get a real HTTP method like GET or POST and not something like FOO, or that %START_TIME% is in a time format and I will not get something else.
I think it's related to this Envoy code:
https://github.com/envoyproxy/envoy/blob/24bfe51fc0953f47ba7547f02442254b6744bed6/source/common/access_log/access_log_impl.cc#L54
I'm asking because we send the access log data to another system, and from a security perspective we want to verify that the data is as defined in the access log format and that no one can change it,
e.g. that an IP is in a real IP format, a path is in path format, and a URL is in URL format.
I'm not sure I understand the question. Envoy doesn't have to validate anything, as it is the one generating those logs. Envoy is an HTTP proxy that receives a request and performs routing/rewriting/auth/drop/etc. actions based on its configuration (configured by VirtualService / DestinationRule / EnvoyFilter if we're talking about Istio). After acting, it generates the log entry and fills in the fields with details about the original request and the actions taken.
Also, there is no such thing as a 'real' HTTP method. An HTTP method is just a string, and it can hold any value. Envoy is just a proxy that sits between the client and the application and passes requests through (unless you explicitly configure it to, e.g., drop some method).
How a method is treated depends on the application that receives it. GET/POST/HEAD are commonly associated with standard HTTP and static pages; PUT/DELETE/PATCH are used in REST APIs. But nothing prevents you from developing an application that accepts a 'FOOBAR' method and runs some code for it.
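For example, most HTTP clients will happily put an arbitrary string on the wire as the method, and Envoy will log whatever arrived (the URL is a placeholder):

import requests

# The method is sent verbatim; whether it means anything is up to the server.
resp = requests.request('FOOBAR', 'http://example.com/')  # placeholder URL
print(resp.status_code)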
We are using an on-prem S3-compatible storage server on an intranet, and we want to expose this intranet URL to the internet, so we used a reverse proxy with a mapping to the intranet URL. When we test the intranet URL it works perfectly, but when we test the internet URL we get this 403 error:
The request signature we calculated does not match the signature you provided. Check your Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 0a440c7f:15cc604b1e2:12d3af:24d; S3 Extended Request ID: null), S3 Extended Request ID: null
After debugging, we found that the proxy modifies the Host header, which is used to calculate the signature, in order to route the request to the intranet URL.
So my question is: how can I suppress some headers from the V4 signature calculation using the AWS SDK or a Boto3 client? Or is there a better architecture for exposing an on-prem S3 service?
Thanks in advance.
Amir.
There are essentially two solutions to this.
The first one is easier: sign the request for the internal URL, then just use simple string prefix replacement to rewrite the host part of the signed URL to point it to the hostname of the external proxy. When the proxy rewrites the Host header, it will end up rewriting it back to exactly what you signed.
It is, I assume, common knowledge that signed URLs are immune to tampering, for all practical purposes: you can't change anything about a signed URL without invalidating it... but that's not what this is. The change is temporary, and the proxy's net effect is to undo the change.
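Here's a minimal sketch of that first solution using boto3; the endpoint hostnames and credentials are placeholders for your intranet storage server and external proxy:

import boto3
from botocore.client import Config

INTERNAL = 'http://s3.intranet.local'  # placeholder: on-prem S3 endpoint
EXTERNAL = 'https://s3.example.com'    # placeholder: public reverse proxy

# Sign against the internal endpoint the proxy will rewrite the Host back to
s3 = boto3.client('s3',
                  endpoint_url=INTERNAL,
                  aws_access_key_id='ACCESS_KEY',      # placeholder
                  aws_secret_access_key='SECRET_KEY',  # placeholder
                  config=Config(signature_version='s3v4'))

url = s3.generate_presigned_url('get_object',
                                Params={'Bucket': 'mybucket', 'Key': 'mykey'},
                                ExpiresIn=3600)

# Simple string prefix replacement: only the host changes; the signature
# (which covers the internal Host value) stays intact.
public_url = url.replace(INTERNAL, EXTERNAL, 1)
print(public_url)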
The alternate solution requires the proxy (or another service in the chain, before the storage service) to know the signing keys and secrets, so that it can first validate the incoming request and, if it is valid, modify the request and generate a new signature that the service will accept. I once wrote a service to do this: when a request was for HEAD, the proxy would use the same key and secret (which it knew) to generate a signature for the same request, but with GET. If that matched the signature in the incoming request, the proxy would replace the existing signature with a signature for a HEAD request -- thus allowing the client to use a URL originally signed for a GET request to make either a GET or a HEAD request, something S3 does not natively support, since a GET and a HEAD for the same object require two different signed URLs. The concept is the same, though: generate a signature in the proxy for what the client is requesting, to validate the incoming signature, and then re-sign the request as needed. The solution I built used HAProxy's Lua integration to examine and modify the request in flight.
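If you go this second route, botocore's signing internals can be reused to validate and re-sign in the middle. A rough sketch using header-based SigV4 (the HEAD/GET service described above did the equivalent for query-string-signed URLs; all names here are placeholders):

from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

creds = Credentials('ACCESS_KEY', 'SECRET_KEY')  # placeholder credentials

# The request as the storage server should see it (e.g. method switched
# from GET to HEAD, host already rewritten to the internal name).
request = AWSRequest(method='HEAD',
                     url='http://s3.intranet.local/mybucket/mykey')
SigV4Auth(creds, 's3', 'us-east-1').add_auth(request)

print(request.headers['Authorization'])  # fresh signature for the new request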
So I am using Burp Suite to intercept a request to
stage.training.com/ats/getAllStates.html?countryCode=CR
Once intercepted, I change the hostname to localhost:4502.
The localhost service uses authentication, which I have already added to Platform Authentication under
User Options --> Platform Authentication
However I keep getting a 400 Bad Request response.
Any idea what's going wrong here?
Firstly, we need to understand what causes an HTTP 400 Bad Request response.
400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications.
To set up authentication, you can check the following settings inside Burp Suite:
User options > Connections > Platform Authentication > Add
Destination host: target URL
Authentication type: Basic, NTLMv1, NTLMv2 or Digest
Username and Password
Both sides need updating: both the intercepted request and the user options. Otherwise, you will continue to receive errors.
I am trying to set up a simple API test against a local endpoint. I have created the sample API (phone number lookup) and that works fine.
http://192.168.1.11:8080/api/simpleTest is my endpoint, and the WSO2 service also runs on 192.168.1.11 ... but when I test it in the Publisher, it always fails. This is a simple GET with no parameters.
I can run it from a browser or CURL (outside of WSO2) and it works fine.
Thanks.
I assume you're talking about clicking the Test button when providing the Backend Endpoint in API Publisher.
The way that Test button works at the moment (as far as I understand) is that it invokes the HTTP HEAD method on the provided endpoint (because, according to RFC 2616, "This method is often used for testing hypertext links for validity, accessibility, and recent modification.").
Then it checks the response. If the response is valid or 405 (Method Not Allowed), the URL is marked as valid.
Thus, if the backend does not properly follow the RFC, you might sometimes see otherwise-working URLs declared invalid during the test because of that improper HEAD response evaluation. Obviously, this is just a check for your convenience, and you can ignore it if you know the endpoint works for the methods and resources you need.
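If you want to see what the Test button sees, you can reproduce the check by hand. A quick sketch, with the acceptance rule being my reading of the behavior described above:

import requests

# Mimic the Test button: send HEAD and accept success or 405.
resp = requests.head('http://192.168.1.11:8080/api/simpleTest', timeout=5)
print(resp.status_code)
print('valid' if resp.ok or resp.status_code == 405 else 'invalid')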
P.S. I checked this on API Cloud, but the behavior is identical to the downloadable API Manager.