I've set up a private Composer repository via Gemfury, but when I try to download one of the packages (using composer require) I receive the following error:
[Composer\Downloader\TransportException]
The 'https://s3.amazonaws.com:443/gemfury/gems/[SOME_STRING]/[VENDOR][PACKAGE]_[VERSION]_zip?Signature=SIGNATURE&Expires=1481739039&AWSAccessKeyId=[AWS_ACCESS_KEY]' URL could not be accessed: HTTP/1.1 400 Bad Request
P.S.
I know that the authentication worked because Composer does receive the package.json file (the latest version is recognized).
Any help would be appreciated.
Short answer: You may see this issue if you are using auth.json to store your Gemfury token. At this time, the only way to work around this issue is to embed the token directly into your repository URL in composer.json.
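For example, in composer.json (a sketch; TOKEN and USERNAME are placeholders, so use the repository URL from your own Gemfury account):
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://TOKEN@php.fury.io/USERNAME/"
        }
    ]
}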
Long answer: It doesn't work because of a bug in the Composer CLI. In this particular use case, when Composer follows Gemfury's redirect from your private php.fury.io repo to a secure S3 download, it includes the Authorization header with your Gemfury token. This header conflicts with S3's authentication model and results in a 400 Bad Request response.
Resending the Authorization header on a redirect from one host to another is a fairly significant security concern, and I recommend you reset your Gemfury token and stop using the auth.json authentication method until this issue is resolved.
Related
I am trying to get HLS/DASH streams working via Google Cloud CDN for a video-on-demand solution. The files/manifests sit in a Google Cloud Storage bucket and everything looks properly configured, since I followed every step of the documentation: https://cloud.google.com/cdn/docs/using-signed-cookies.
Now I am using equivalent Node.js code (from "Google Cloud CDN signed cookies with bucket as backend") to create a signed cookie with the proper signing key name and value, which I previously set up in Google Cloud. The cookie gets sent to my load balancer backend in Google Cloud.
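For context, the signing logic is roughly this (a simplified sketch of the Node.js code I adapted; the parameter values are placeholders for my real ones):
const crypto = require('crypto');

// Build the Cloud-CDN-Cookie value for a URL prefix (a sketch, not the exact sample code).
function signCookie(urlPrefix, keyName, base64Key, expiresUnixSeconds) {
  const base64url = (buf) => buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_');

  // The signing key from the Cloud Console is base64url-encoded; decode it before signing.
  const keyBytes = Buffer.from(base64Key.replace(/-/g, '+').replace(/_/g, '/'), 'base64');

  const policy = 'URLPrefix=' + base64url(Buffer.from(urlPrefix)) +
                 ':Expires=' + expiresUnixSeconds +
                 ':KeyName=' + keyName;

  const signature = base64url(crypto.createHmac('sha1', keyBytes).update(policy).digest());

  return 'Cloud-CDN-Cookie=' + policy + ':Signature=' + signature;
}

// Example with placeholder values; the result is what goes into the Set-Cookie header.
console.log(signCookie('http://cdn.myurl.com/', 'mykeyname', 'mybase64encodedkey', 1614110180));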
Sadly, I always get a 403 response saying <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message></Error>.
Further info:
signed URLs/cookies are activated on the load balancer backend
the IAM role in the bucket for the cdn-account is set to "objectViewer"
the signing key is created, saved, and used to sign the cookie
Would really appreciate any help on this.
Edit:
I just tried the exact Python code Google provides for creating the signed cookies, from https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/cdn/snippets.py, with the following params:
Call: sign_cookie('http://cdn.myurl.com/', 'mykeyname', 'mybase64encodedkey', 1614110180545)
The key is copied directly from Google, since I generated it there.
The load balancer log shows invalid_signed_cookie.
I'm running into the same problem.
The weird thing is that it fails only in web browsers. I've seen Google Chrome and Safari return a 403 even though the requests include the cookies, while the same request with the exact same cookie in curl returns 200. So it seems to misbehave only in web browsers. I'm asking GCP support about this right now, but I'm not getting a good answer.
Edit:
As a result of several hypotheses and tests, I found out that the cookie library I use URL-encodes values when it formats them into the Set-Cookie header, so a cookie that Cloud CDN cannot understand is sent. After writing the value into the Set-Cookie header without URL encoding, web browsers can now retrieve the content.
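For illustration, if you happen to be using Express (an assumption on my part; the same idea applies to other frameworks), the value can be sent unencoded like this:
// Sketch: send the Cloud-CDN-Cookie value without URL-encoding it, because Cloud CDN
// cannot parse a percent-encoded value (the ':' and '=' separators become %3A and %3D).
app.get('/stream-auth', (req, res) => {
  // signedValue looks like: URLPrefix=<b64>:Expires=<ts>:KeyName=<name>:Signature=<b64>
  const signedValue = buildSignedCookieValue(); // placeholder for your own signing code

  // Option 1: res.cookie() with a pass-through encoder (the default is encodeURIComponent).
  res.cookie('Cloud-CDN-Cookie', signedValue, {
    domain: '.myurl.com', // placeholder domain
    path: '/',
    httpOnly: true,
    encode: String,       // leave the value as-is instead of URL-encoding it
  });

  // Option 2: write the Set-Cookie header yourself.
  // res.setHeader('Set-Cookie', 'Cloud-CDN-Cookie=' + signedValue + '; Domain=.myurl.com; Path=/; HttpOnly');

  res.sendStatus(204);
});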
I am playing around with Postman to get some insight into how things work behind the curtain and ran into what I believe is an issue, but wanted to ask before I create a new issue on GitHub.
I am intercepting the request from my browser to the same site using the Postman Interceptor to use the request values in the native app. I have cookies enabled and the site (the whole domain) whitelisted.
When I use the history to resend the same request that was captured, I get an auth error caused by the fact that the cookies are not included in the request (I found that out by checking the cURL code snippet). I believe the reason is that the cookies are set under a different subdomain than the one the request is sent to.
I will try to include some pictures to clarify. My question here is:
Am I missing something / did I set something up in the wrong way,
or is this an issue and should I create one in the official Postman GitHub repo?
cURL request
Cookies in Postman Native App
You should check whether the cookie is being sent not with the code snippet but with the Postman console:
It is indeed sending the cookies.
I've created an API with C++ and the following library: https://github.com/yhirose/cpp-httplib
In the API I've added a header to responses for CORS:
svr.Post("/suggest", [&dr](const Request &req, Response &res) {
    res.set_header("Access-Control-Allow-Origin", "(origin here)");
    // ... build and set the response body ...
});
(origin here) is the origin of the site making the request.
On the browser side I've also enabled an extension to bypass CORS. But when trying to make an AJAX request to the API, I still get this error in my browser console:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://192.168.1.10:10120/suggest. (Reason: CORS request did not succeed).
The AJAX request is done through a script written in the Tampermonkey extension to work on a specific website.
Do I need to modify headers on the server hosting the website? Have I done something wrong on the C++ side?
It might also be important to mention that the code worked before. All I did was come back to it another day with a different local IP address (which I reprogrammed into the C++ API).
I tried it again to answer @sideshowbarker and it gave me a new error about self-signed certificates. After adding the exception, it worked.
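For reference, the failing call was essentially this (simplified; the payload is a placeholder):
// Simplified version of the userscript request. With the self-signed certificate not yet
// trusted, the request fails at the network level and the browser reports it as
// "CORS request did not succeed" rather than as a certificate error.
fetch('https://192.168.1.10:10120/suggest', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'example' }), // placeholder payload
})
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('Request failed:', err));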
So, I've recently taken over the front-end of a project in which the previous front-end developer always did his debugging with a localhost instance connecting to a remote staging back-end.
Right now we are improving some security issues and are using CSRF tokens.
For each POST, PUT, or DELETE request, I'm first GETting a CSRF endpoint, which sets a JSESSIONID HttpOnly cookie and returns a CSRF token in the response body. For the subsequent request, the CSRF token goes into the request header and the cookie, of course, gets sent automatically.
Now, my code works fine deployed on the remote staging front-end. But this new functionality has totally prevented me from debugging with a local instance, because the cookie won't work when I GET the CSRF token from localhost, since that is a different domain.
This requires me to deploy every single change of code to the staging front-end. A very uncomfortable workflow for the usual trial-and-error fixes.
Adding the remote host as a localhost alias in my hosts file also doesn't work, because that routes all my requests for the remote to my own machine, which doesn't run an instance of the remote backend.
I would've thought there would be a Chrome extension or something like that for problems like these, but since I couldn't find any I wondered if I'm missing a very obvious point here.
Okay, it seems that this was a really general issue:
Set withCredentials to true for the request to the remote and use a Chrome plugin to overwrite the response headers to:
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Credentials: true
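Concretely, the local debugging calls end up looking like this (a sketch; the endpoint paths and the CSRF header name are placeholders for whatever the backend actually uses):
// Sketch of the CSRF flow from http://localhost against the remote staging backend.
// This only works with the exact-origin + allow-credentials headers shown above,
// because a wildcard "*" origin is not allowed when credentials are included.
async function csrfPost(url, payload) {
  // 1. GET the CSRF endpoint; credentials: 'include' stores the JSESSIONID cookie.
  const csrfRes = await fetch('https://staging.example.com/api/csrf', {
    credentials: 'include',
  });
  const { token } = await csrfRes.json(); // response body shape is an assumption

  // 2. Send the actual request with the token header; the cookie is sent automatically.
  return fetch(url, {
    method: 'POST',
    credentials: 'include', // the fetch equivalent of withCredentials = true
    headers: {
      'Content-Type': 'application/json',
      'X-CSRF-TOKEN': token, // header name is an assumption
    },
    body: JSON.stringify(payload),
  });
}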
I've been using WSO2 API Manager 1.9.1 for the past month on a static IP and we liked it enough to put it on Azure behind a full qualified domain name. As we are still only using for internal purposes, we shut the VM down during off hours to save money. Our Azure setup does not guarantee the same IP address each time the VM restarts. The FQDN allows us to always reach https://api.mydomain.com regardless of what happens with the VM IP.
I updated the appropriate config files to the FQDN and everything seems to be working well. However, the one issue I have and cannot seem to resolve is calling APIs from the API Console. No matter what I do, I get a response as below.
Response Body
no content
Response Code
0
Response Headers
{
"error": "no response from server"
}
Mysteriously, I can successfully make the same calls from the command line or SoapUI, so it's something unique about the API Console. I can't seem to find anything useful in the logs or by googling. I do see a recurring error, but it's not very clear or even complete (it seems to cut off).
[2015-11-17 21:33:21,768] ERROR - AsyncDataPublisher Reconnection failed for
Happy to provide further inputs / info. Any suggestions on root cause or where to look is appreciated. Thanks in advance for your help!
Edit #1 - adding screenshots from Chrome
The API Console may not be giving you a response due to the following issues:
If you are using HTTPS, you have to open the gateway URL in the browser and accept the certificate before invoking the API from the API Console (this is the case when there is no signed certificate on the gateway).
A CORS issue, which may be because your domain is not in the allowed origins returned in the response to the OPTIONS call.
If you create an API with an HTTPS backend, you have to import the endpoint's SSL certificate into client-truststore.jks.
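For the last point, importing the backend certificate looks something like this (assuming the default WSO2 truststore location and password; adjust the paths and alias to your installation and restart the server afterwards):
keytool -import -trustcacerts -alias mybackend \
  -file /path/to/backend-cert.pem \
  -keystore <APIM_HOME>/repository/resources/security/client-truststore.jks \
  -storepass wso2carbon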