AWS S3 is working on my localhost and on my live website, but my development server (which has EXACTLY the same configuration) is throwing the following error: http://xxx.xx.xxx.xxx/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response: Error retrieving credentials from the instance profile metadata server.
Localhost URL is http://localhost/example
Live URL is http://www.example.com
Development URL is http://dev.example.com
Why would this work on localhost and live but not my development server?
Here is my sample code:
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = 'example';
$s3Client = new S3Client([
    'region' => 'us-west-2',
    'version' => '2006-03-01',
    'key' => 'xxxxxxxxxxxxxxxxxxxxxxxx',
    'secret' => 'xxxxxxxxxxxxxxxxxxxxxxxx',
]);
$uniqueFileName = uniqid().'.txt';
$s3Client->putObject([
    'Bucket' => $bucket,
    'Key' => 'dev/'.$uniqueFileName,
    'Body' => 'this is the body!',
]);
Here is the policy:
{
    "Version": "2012-10-17",
    "Id": "Policyxxxxxxxxx",
    "Statement": [
        {
            "Sid": "Stmtxxxxxxxxx",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example/*"
        }
    ]
}
The values returned by http://169.254.169.254/latest/meta-data/iam/security-credentials/ are associated with the Role assigned to the EC2 instance when the instance was first launched.
Since you are receiving a 404 Not Found response, it is likely that your Development server does not have a Role assigned. You can check in the EC2 management console -- just click on the instance, then look at the details pane and find the Role value.
If you wish to launch a new server that is "fully" identical, use the Launch More Like This command in the Actions menu; it will also copy the Role setting.
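If you want to confirm from code rather than the console whether the development instance has a role, here is a minimal boto3 sketch (the instance ID is a placeholder, and it assumes credentials that can call ec2:DescribeInstances):

import boto3

# Hypothetical instance ID for the development server
instance_id = 'i-0123456789abcdef0'

ec2 = boto3.client('ec2', region_name='us-west-2')
reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
instance = reservations[0]['Instances'][0]

# 'IamInstanceProfile' is only present when a role/instance profile is attached
profile = instance.get('IamInstanceProfile')
if profile:
    print('Instance profile attached:', profile['Arn'])
else:
    print('No IAM role attached - this would explain the 404 from the metadata endpoint')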
I am creating a simple API Gateway and trying to apply auth to it. I created an IAM user (called postman-user) and generated credentials for it (an AccessKeyId and SecretAccessKey).
My IAM User policy is like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:*",
            "Resource": "*"
        }
    ]
}
and in my API Gateway I applied the resource policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<my account id>:root",
                    "arn:aws:iam::<my account id>:user/postman-user"
                ]
            },
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-west-2:<my account id>:<my api g id>/*"
        }
    ]
}
I applied the access key ID and secret access key in Postman.
Then the problem comes: no matter how I call the API endpoint using the AWS credentials of this IAM user, I always get this error:
User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-west-2:******
I thought Postman had failed to sign the request with AWS SigV4, so I tried this in Python:
import requests
from aws_requests_auth.aws_auth import AWSRequestsAuth

url = 'https://<apig id>.execute-api.us-west-2.amazonaws.com/beta/query/'
auth = AWSRequestsAuth(aws_access_key='<my key id>',
                       aws_secret_access_key='<my secret key>',
                       aws_host='ec2.amazonaws.com',
                       aws_region='us-west-2',
                       aws_service='api')
response = requests.get(url, auth=auth)
This error just never goes away for me:
User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-west-2:******
Can anyone tell me what I missed? I clicked Deploy API in Resources to the beta stage 100 times ...
I tried Python, I tried Postman, nothing works.
It sounds like there is something missing on the API Gateway side. It may be that you haven't configured IAM auth correctly on the HTTP method you are trying to use. It may also be that the resource policy is not attached to the API Gateway. Note that if the policy is updated and reattached, you need to redeploy the API Gateway.
Link:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-create-attach.html
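Redeploying after updating the resource policy can also be scripted; a minimal boto3 sketch (the REST API ID is a placeholder, the stage name is the beta stage from the question):

import boto3

apigateway = boto3.client('apigateway', region_name='us-west-2')

# Creating a deployment to the existing stage picks up the updated resource policy
apigateway.create_deployment(
    restApiId='a1b2c3d4e5',   # placeholder REST API ID
    stageName='beta',
    description='redeploy after attaching/updating the resource policy'
)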
This is an API Gateway config issue:
Resources -> click on the method -> Method Request -> Authorization: it used to be None; changing it to AWS IAM made this work.
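Separately, note that the Python attempt in the question signs for ec2.amazonaws.com with service 'api'; SigV4 has to be signed for the host actually being called and for the execute-api service. A sketch under that assumption, keeping the question's placeholders:

import requests
from aws_requests_auth.aws_auth import AWSRequestsAuth

api_host = '<apig id>.execute-api.us-west-2.amazonaws.com'  # the invoke host, not ec2.amazonaws.com
url = 'https://' + api_host + '/beta/query/'

auth = AWSRequestsAuth(aws_access_key='<my key id>',
                       aws_secret_access_key='<my secret key>',
                       aws_host=api_host,            # must match the host being called
                       aws_region='us-west-2',
                       aws_service='execute-api')    # API Gateway's SigV4 service name

response = requests.get(url, auth=auth)
print(response.status_code, response.text)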
I'd like to copy files to/from an S3 bucket from servers in my VPC without having to add credentials to each server in my cloud.
I followed the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/ to set up a policy for a newly created VPC endpoint. As far as I can tell, I did everything correctly and added a good bucket policy to my bucket. I double checked the routing table settings. All appears good.
But perhaps I don't understand what this is supposed to do. When I type in:
aws s3 cp s3://My_Bucket_Name/some.pdf .
I just get:
fatal error: Unable to locate credentials
from my server in the vpc.
Here is the anonymized bucket policy I have:
{
    "Version": "2012-10-17",
    "Id": "Policy232323232323",
    "Statement": [
        {
            "Sid": "Stmt1607462615603",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567789:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::MyBucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpce": "vpce-123456c789"
                }
            }
        }
    ]
}
Try appending: --no-sign-request
From Command line options - AWS Command Line Interface:
A Boolean switch that disables signing the HTTP requests to the AWS service endpoint. This prevents credentials from being loaded.
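If you want the same behaviour from Python rather than the CLI, the boto3 equivalent is an unsigned client; a minimal sketch using the bucket and key from the question:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Disable request signing entirely, mirroring the CLI's --no-sign-request flag
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
s3.download_file('My_Bucket_Name', 'some.pdf', 'some.pdf')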
I am trying to deploy my website to AWS with CloudFront and Route 53. The site is deployed and running on https://higgle.io
However, the assets are not loading; the images are throwing 403s. How do I fix it?
I am using Serverless with serverless-next.js, and I was following one of their blog posts to set it up.
So far I added a serverless.yml at the root level which has:
higgle:
  component: serverless-next.js
and my next.config.js looks like
module.exports = {
  target: 'serverless',
  webpack: (config) => {
    config.module.rules.push({
      test: /\.svg$/,
      use: ['#svgr/webpack'],
    });
    return config;
  },
};
While the folder structure looks like:
- root
  - .next
  - pages
    - _document.js
    - index.js
  - public
    - static
      - favicon.ico
  - next.config.js
  - package.json
  - serverless.yml
Any idea how to fix this?
Thanks
S3 is returning a 403 because your items are private.
1. Change your S3 items to public. Check that they are accessible via your S3 static hosting address.
2. Step 1 will fix any static resources. If you are running a single-page application, you will also need to return your index page when S3 returns a 404: in CloudFront, go to Error Pages, create a custom error response, choose the 404 response, choose the option to customize the response, make the response code 200 and the response page path your index page.
Your bucket policy should be:
{
    "Version": "2012-10-17",
    "Id": "Policy1517754859350",
    "Statement": [
        {
            "Sid": "Stmt1517754856505",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
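If you prefer to apply that policy from code instead of the console, a minimal boto3 sketch (YOUR-BUCKET-NAME is the placeholder from the policy above):

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Id": "Policy1517754859350",
    "Statement": [
        {
            "Sid": "Stmt1517754856505",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='YOUR-BUCKET-NAME', Policy=json.dumps(bucket_policy))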
I'm trying to make a post_to_connection request from an ECS task to an API Gateway WebSocket @connections API, but I am unable to do so from the task; I get a Forbidden response every time.
This ECS task has a role attached to it with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "execute-api:ManageConnections",
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}
Tests I've done:
I copied the credentials found on this container (access_key_id, secret_access_key and the session token) and made the requests from my local machine, which was successful.
I created a credentials object, passed it to the API Gateway client, and made the request from inside the container, but it failed again with a Forbidden response.
To make these requests I'm using the same version of the aws-sdk-apigatewaymanagementapi gem.
require 'aws-sdk-apigatewaymanagementapi'

client = Aws::ApiGatewayManagementApi::Client.new(endpoint: url, region: 'eu-west-1')
resp = client.post_to_connection({ data: "{type: 'hello'}", connection_id: "conn_id" })
At this point, I'm out of ideas as I don't have much experience with AWS. Can you think of anything I could try?
Fixed this by creating a Custom Domain which was pointed to this API Gateway instance.
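For anyone hitting the same Forbidden response without a custom domain: the @connections management endpoint normally includes the deployed stage. A minimal Python/boto3 sketch of the equivalent call (the API ID, stage and connection ID are placeholders):

import boto3

# The @connections management endpoint includes the deployed stage
endpoint = 'https://a1b2c3d4e5.execute-api.eu-west-1.amazonaws.com/production'

client = boto3.client('apigatewaymanagementapi',
                      endpoint_url=endpoint,
                      region_name='eu-west-1')

client.post_to_connection(
    Data=b'{"type": "hello"}',
    ConnectionId='conn_id'
)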
I am having issues using STS temporary credentials to initiate a connection to AWS IoT, whilst keeping things secure.
I have already successfully connected embedded devices using certificates with policies.
But when I come to try connecting via the browser, using a pre-signed URL, I have hit a stumbling block.
Below is a code snippet from a Lambda function which first authenticates the request (not shown), and then builds the URL using STS credentials via assumeRole.
Using my generated URL along with the Paho JavaScript client, I have been successful up to the point of receiving a "101 Switching Protocols" response in the browser, but the connection is then terminated instead of switching to WebSockets.
Any help or guidance anyone out there can provide me with would be much appreciated.
const iot = new AWS.Iot();
const sts = new AWS.STS({region: 'eu-west-1'});

const params = {
  DurationSeconds: 3600,
  ExternalId: displayId,
  Policy: JSON.stringify(
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "iot:*"
          ],
          "Resource": [
            "*"
          ]
        },
        /*{
          "Effect": "Allow",
          "Action": [
            "iot:Connect"
          ],
          "Resource": [
            "arn:aws:iot:eu-west-1:ACCID:client/" + display._id
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "iot:Receive"
          ],
          "Resource": [
            "*"
          ]
        }*/
      ]
    }
  ),
  RoleArn: "arn:aws:iam::ACCID:role/iot_websocket_url_role",
  RoleSessionName: displayId + '-' + Date.now()
};

sts.assumeRole(params, function(err, stsData) {
  if (err) {
    fail(err, db);
    return;
  }

  console.log(stsData);

  const AWS_IOT_ENDPOINT_HOST = 'REDACTED.iot.eu-west-1.amazonaws.com';

  var url = v4.createPresignedURL(
    'GET',
    AWS_IOT_ENDPOINT_HOST,
    '/mqtt',
    'iotdata',
    crypto.createHash('sha256').update('', 'utf8').digest('hex'),
    {
      key: stsData.Credentials.AccessKeyId,
      secret: stsData.Credentials.SecretAccessKey,
      protocol: 'wss',
      expires: 3600,
      region: 'eu-west-1'
    }
  );

  url += '&X-Amz-Security-Token=' + encodeURIComponent(stsData.Credentials.SessionToken);

  console.log(url);

  context.succeed({url: url});
});
Edit: If it helps, I just checked inside the "Frames" window in Chrome debugger, after selecting the request which returns a 101 code. It shows a single frame: "Binary Frame (Opcode 2, mask)".
Does this Opcode refer to MQTT control code 2 AKA "CONNACK"? I am not an expert at MQTT (yet!).
I realised my mistake by reading the docs on STS.
If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass.
The RoleARN that is supplied must also allow the actions that you are requesting via STS assumeRole.
i.e. the role behind RoleArn could allow iot:*, and then when you assume the role you can narrow the permissions down to, for instance, iot:Connect on specific resources.
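To make that concrete, a minimal Python/boto3 sketch of the same idea: the role behind RoleArn allows iot:*, and the session policy passed to assumeRole narrows the effective permissions to the intersection of the two (the role ARN, account ID and client ID are placeholders):

import json
import boto3

sts = boto3.client('sts', region_name='eu-west-1')

# Session policy: effective permissions = role policy INTERSECT this policy
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iot:Connect"],
            "Resource": ["arn:aws:iot:eu-west-1:ACCID:client/my-display-id"]
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Receive"],
            "Resource": ["*"]
        }
    ]
}

resp = sts.assume_role(
    RoleArn='arn:aws:iam::ACCID:role/iot_websocket_url_role',
    RoleSessionName='my-display-id-session',
    Policy=json.dumps(session_policy),
    DurationSeconds=3600
)
creds = resp['Credentials']  # AccessKeyId, SecretAccessKey, SessionToken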