AWS SDK: How to get the response body for a 404

I'm attempting to access a remote GraphQL server used by a publicly available web site. I've pieced together the appropriate code to interact with the database and can run it locally successfully. It involves me getting some createCognitoIdentity() credentials and then using those credentials to send a GraphQL query. Works like a charm and I get the data I'm looking for... until deployed to prod.
Once in prod, the same code produces a 404 error and I'm unable to even try to query the db because getting the credentials fails with:
Error retrieving credentials from the instance profile metadata service. (Client error: GET http://169.254.169.254/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://ww (truncated...) )
Here's my code to recreate it:
$sdk = new \Aws\Sdk([
    'region'  => 'us-east-2',
    'version' => 'latest',
]);
$result = $sdk->createCognitoIdentity()->getCredentialsForIdentity([
    'IdentityId' => 'us-east-2:3945b61f-5ad6-4e57-b7bf-2d01874e94d4',
]);
My production environment is hosted within AWS, so I suspect the 404 might be because the code is running inside AWS? It seems strange to add such a restriction. I'd like to examine whatever XML is present in the response body, but I'm having trouble obtaining the full body.
How can I echo out the response body when a 404 is encountered?

The issue you're having is that there are no IAM credentials associated with the EC2 instance. I have an EC2 instance with an IAM role attached. To check this I run:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
On the instances with an IAM role attached, I get the role name that is attached - nothing but the name (i.e. no HTML or anything else). On another instance that has no credentials, running the same command gives me:
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
which looks like what you're getting. This is from the command line but cURL should get you almost the exact same thing in PHP.
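If you want to run that same check from PHP and see the full body (rather than the truncated message the SDK gives you), a minimal sketch using the curl extension could look like the following; it simply mirrors the shell command above and echoes whatever comes back, 404 page included:
<?php
// Sketch only: fetch the instance metadata endpoint directly (the same URL
// the SDK and the curl command above hit) and echo the body even on a 404.
$url = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // the link-local address can hang outside EC2
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$error = curl_error($ch);
curl_close($ch);
echo "HTTP " . $status . "\n";
echo ($body === false) ? $error : $body;        // full, untruncated response body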
Edit based on the comments
It sounds like the challenge is ultimately that your development environment has credentials set but your EC2 doesn't. The error is a bit misleading as it's ultimately a permission denied (as there are no credentials) but it's surfaced as a 404 because there isn't anything in the metadata.
There is more information here regarding the use of instance profiles. As you're using Docker to deploy, and based on this post, the container should be able to use the same instance profile as if you were running natively.
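If attaching an instance profile (or exposing it to the container) isn't an option, you can also hand the SDK credentials explicitly so it never falls back to the metadata service. A rough sketch, assuming you supply the keys through the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables and load the SDK via the usual Composer autoloader (the 'credentials' array is the SDK's documented client option; everything else mirrors your original snippet):
<?php
require 'vendor/autoload.php';

// Sketch: pass credentials explicitly (here read from environment variables)
// so the SDK never needs to query http://169.254.169.254/.
$sdk = new \Aws\Sdk([
    'region'      => 'us-east-2',
    'version'     => 'latest',
    'credentials' => [
        'key'    => getenv('AWS_ACCESS_KEY_ID'),
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
    ],
]);

$result = $sdk->createCognitoIdentity()->getCredentialsForIdentity([
    'IdentityId' => 'us-east-2:3945b61f-5ad6-4e57-b7bf-2d01874e94d4',
]);
Note that the default credential provider chain already checks those two environment variables before it ever touches the metadata service, so simply setting them in the container environment should also make your original code work unchanged.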

Related

405 error for POST on Docker Container in Cloud Run

I tested a container I built locally. It accepts a POST request with a file and returns another processed file.
I uploaded the container to Artifact Registry on GCP. I have been trying to make some POST requests from my computer to test the service. Here is the cURL command below; I see the same issue with various client libraries. The same request works when I use a local port instead of the Cloud Run URL.
curl --globoff https://SERVICE_NAME.a.run.app \
  -X POST \
  -H "content-type: application/json" \
  -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  -d '{"filename": "RANDOM_FILE_NAME.pdf"}'
I am receiving the 405 pasted below.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 405 HTTP method POST is not supported by this URL</title>
</head>
<body><h2>HTTP ERROR 405</h2>
<p>Problem accessing /. Reason:
<pre> HTTP method POST is not supported by this URL</pre></p>
</body>
</html>
What am I doing wrong? I haven't seen any further options on Cloud Run that I need to update, and I'm confident my container accepts POST.
The “HTTP ERROR 405” error occurs when the web server is configured in a way that does not allow you to perform a specific action for a particular URL.
It's an HTTP response status code indicating that the server recognizes the request method (in your case, the POST that takes a PDF as input and returns parsed JSON) but the target resource does not support it.
Also, make sure that the service account used by Pub/Sub has the proper IAM permissions (for example, the Cloud Run Invoker role) so that it can actually trigger your app, i.e. make the POST requests your setup requires.
I would also suggest checking this tutorial, which outlines step by step how to achieve this.

CloudFront: The request could not be satisfied

I am coming across this problem: I have a chat server which needs to communicate with a Lambda service hosted in AWS, but CloudFront throws the following error.
BODY: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Bad request.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: h5kPdVnMXwh-P7e7mxQ5LL1gj9fAupp_MNAPxmxufI74W4WhE_MByw==
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>
This is how my request goes in the application.
const http = require('http');

const options = {
  hostname: 'xxx.uat.com',
  port: '443',
  path: '/qa/addMessage',
  method: 'POST'
};

const req = http.request(options, (res) => {
  // handle the response here
});

req.end();
The chat server.js is hosted on EC2. What is the issue here?
I was facing the same error; I solved it by removing the body from my Postman request.
require('http');
That is an HTTP client -- not an HTTPS client.
Specifying port 443 doesn't result in an HTTPS request, even though port 443 is the assigned port for HTTPS. It just makes an ordinary HTTP request against destination port 443.
This isn't a valid thing to do, so CloudFront is returning a Bad Request error.
You almost certainly want to require('https');.
I have seen this problem before. It happens for one of the following reasons:
Invalid protocol (using http instead of https)
Unknown HTTP verb: make sure the endpoint has POST implemented, in your case. If you are using API Gateway, make sure you have deployed it.
In my case, I had the same problem as @Kireeti K and solved it by removing the body from my Postman request.
It seems that CloudFront throws an error if you send a GET request with a body. If you want to use a body, you will need to change your method to something other than GET; for me POST worked perfectly, the error was gone, and I was able to read the body.
I encountered the same problem; this thread worked for me.
This error message:
"The request could not be satisfied. Bad Request."
is from the client and the error can occur due to one of the following reasons:
The request is initiated over HTTP, but the CloudFront distribution is configured to allow only HTTPS requests.
The requested alternate domain name (CNAME) isn't associated with the CloudFront distribution.
(In my case, the reason was #2).
For me, the problem was that I restarted the EC2 instance, which changed the instance ID, but my CloudFront origin was still pointing to the previous one. Once I updated it, it worked fine.
In my case, I have a client-side load balancer when calling CloudFront. As a result, I am calling CF by IP address instead of hostname.
I checked with the Amazon AWS Support team; in this case, CF rejects the request and returns "403 Error, The request could not be satisfied".

How to configure AWS S3 to allow POST to work like GET

Facebook states in their canvas setup documentation:
Our servers will make an HTTP POST request to this web address. The retrieved result will be displayed within the Canvas frame on Facebook.
My application is hosted on AWS S3 as a static website using the following CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Already I'm having an issue here. GET requests work perfectly, but POSTing to http://my-bucket-name.s3-website-us-east-1.amazonaws.com kicks back:
<html>
<head>
<title>405 Method Not Allowed</title>
</head>
<body>
<h1>405 Method Not Allowed</h1>
<ul>
<li>Code: MethodNotAllowed</li>
<li>Message: The specified method is not allowed against this resource.</li>
<li>Method: POST</li>
<li>ResourceType: OBJECT</li>
<li>RequestId: 94159551A72424C7</li>
<li>HostId: +Lcz+XaAzL97Y47OZFvaTwqz4Z7r5koaJGHjZJBBMOTUHyThTfKbZG6IxJtYEbtsXWcb/bFxeI8=</li>
</ul>
<hr/>
</body>
</html>
Step 1: ^ I think I need to get this working.
but wait, there's more
Facebook also requires a secure URL, so for this I went to CloudFront.
My configuration looks like this:
Just like when working with S3 directly, making GET requests to https://app-cloudfront-id.cloudfront.net/ works like a champ, but POSTing kicks back this:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>MethodNotAllowed</Code>
<Message>The specified method is not allowed against this resource.</Message>
<Method>POST</Method>
<ResourceType>OBJECT</ResourceType>
<RequestId>657E87A80AFBB3B0</RequestId>
<HostId>SY2g4smvhr06kAAQYVMsYeQZ+pSKbIIvsh/OaPBiMADGt5UKut0sXSZkFsnFmcRXQ2PFBVgPK4M=</HostId>
</Error>
Viewing the app on facebook.com shows:
Am I missing something?
So - I too thought this should be easy and well supported by AWS in 2016. Apparently, from all the reading I've done, we're wrong.
There's no way to serve the index page for a Facebook app from S3 - with or without CloudFront.
It might be possible to serve the index page from an alternate origin (i.e. your own httpd running somewhere) through CloudFront and everything else from S3 - but I haven't tried to dig into that rabbit hole. And if you're still having to run your own HA httpd, the complexity might not be worth it, depending on your asset scale. See e.g. http://www.bucketexplorer.com/documentation/cloudfront--how-to-create-distributions-post-distribution-with-multiple-origin-servers.html
You -can- use CloudFront in front of your own origin httpd serving the static content, to take advantage of the cache and edge distribution - it will just forward the POST (and PUT, etc.) to your origin and bypass the edge cache.
These answers are old, circa 2011, but I can't find any evidence that anything has changed with this:
https://forums.aws.amazon.com/thread.jspa?messageID=228988&#228988
https://forums.aws.amazon.com/thread.jspa?threadID=62525
Hopefully we can get some activity on this thread to prove me wrong; I could use this right now too.
I have a similar situation, using a single-page JS application, where all unresolved requests should normally be handled by the main page, /index.html.
The underlying problem is that S3 does not treat a POST like a GET. POST is a modifying request. There is a way to configure S3 to handle POST, but that is intended for modifying S3, not for a read-only request like GET.
In order to handle the POST requests, I created an AWS CloudFront custom error response that sends the errors back to /index.html with a 200 HTTP response code. That way the POST request will go to the main page and be managed by the application. I have done the same thing for the 403 and 404 errors.
Edit the CloudFront distribution, go to Error Pages, and create three different custom error responses as described above.
FYI, you can easily add a dynamic side via CloudFront, avoiding all CORS issues.
CloudFront can mix both static and dynamic content through the behaviors.
Try creating the page as the response object of a Lambda function and use API Gateway to create a route that handles the page processing.
Leave the static content on S3, use CloudFront for SSL support, and use Lambda for any dynamic page processing.

AWS S3 gracefully handle 403 after getSignedUrl expired

I'm trying to gracefully handle the 403 returned when visiting an S3 resource via an expired URL. Currently it returns the Amazon XML error page. I have uploaded a 403.html resource and thought I could redirect to that.
The bucket resources are assets saved/fetched by my app. Still, after reading the docs, I set the bucket properties to serve the bucket as a static website and uploaded a 403.html to the bucket root. All public permissions are blocked, except public GET access to the resource 403.html. In the bucket properties, under website settings, I set 403.html as the error page. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html redirects correctly to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html
However, when I use aws-sdk js/node and call the method getSignedUrl('getObject', params) to generate the signed URL, it returns a different host URL: https://<bucket>.s3.amazonaws.com/. Visiting expired resources via this method does not redirect to 403.html. I'm guessing that the different host address is the reason it is not automatically redirecting.
I have also set up a static website routing rule with the condition:
<Condition>
    <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
    <ReplaceKeyWith>403.html</ReplaceKeyWith>
</Redirect>
Still, that's not redirecting the signed URLs, so I'm at a loss as to how to gracefully handle these expired URLs. Any help would be greatly appreciated.
S3 buckets have 2 public-facing interfaces, REST and website. That is the difference between the two hostnames, and the difference in behavior you are seeing.
They have two different feature sets.
feature            REST Endpoint     Website Endpoint
----------------   ---------------   ------------------------------------
Access control     yes               no, public content only
Error messages     XML               HTML
Redirection        no                yes, bucket, rule, and object-level
Request types      all supported     GET and HEAD only
Root of bucket     lists keys        returns index document
SSL                yes               no
Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
So, as you can see from the table, the REST endpoint supports signed URLs, but not friendly errors, while the website endpoint supports friendly errors, but not signed URLs. The two can't be mixed and matched, so what you're trying to do isn't natively supported by S3.
I have worked around this limitation by passing all requests for the bucket through HAProxy on an EC2 instance and on to the REST endpoint for the bucket.
When a 403 error message is returned, the proxy modifies the response body XML using the new embedded Lua interpreter, adding this before the <Error> tag.
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n
The file /error.xsl is publicly readable, and uses browser-side XSLT to render a pretty error response.
The proxy also injects a couple of additional tags into the xml, <ProxyTime> and <ProxyHTTPCode> for use in the output. The resulting XML looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>
<Error><ProxyTime>2015-10-13T17:36:01Z</ProxyTime><ProxyHTTPCode>403</ProxyHTTPCode><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9D3E05D20C1BD6AC</RequestId><HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId></Error>
Then I vary the output shown to the user with XSL tests to determine what error condition S3 has thrown:
<xsl:if test="//Code = 'AccessDenied'">
<p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
</xsl:if>
And the final result looks like this:
The above is a general "Access Denied" because no credentials were supplied. Here's an example of an expired signature.
I don't include the HostId in the output, since it's ugly and noisy, and, if I ever need it, the proxy captured and logged it for me, and I can cross-reference to the request-id.
As a bonus, of course, running the requests through my proxy means I can use my own domain name and my own SSL certificate when serving up bucket content, and I have real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no additional charge for the extra step of data transfer, and I've been very happy with this setup.

Flex does not recognize crossdomain.xml policy file

I'm using Flex 3 and I want to access a web service on another server. I've imported the web service (Data->Import) successfully into my application, but when I'm accessing the functions in the code itself I get the following error:
Warning: Domain ... does not specify a meta-policy. Applying default meta-policy "all".
This configuration is deprecated ...
Error: Request for resource at ... by requestor from ... is denied due to lack of policy file permissions
Security sandbox violation
Connection to ... halted - not permitted from ...
I've put the "crossdomain.xml" policy file in the root directory of the server that the web service is installed on. This is the content of this file:
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*" secure="false" />
</cross-domain-policy>
I've called Security.loadPolicyFile() in my code and am still getting this error. Any suggestions?
Try this:
<?xml version="1.0" ?>
<cross-domain-policy>
<site-control permitted-cross-domain-policies="master-only"/>
<allow-access-from domain="*"/>
<allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
Can you check that you are not getting a 404 when requesting the crossdomain.xml file? Just type http://servername:port/crossdomain.xml in your browser and confirm that you get the XML file back in the browser and not a 404.