Symfony AWS S3 - Change signature_version

I use an AWS S3 bucket with Symfony 3.4, and when I send a file I get this error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
I think I need to change 'signature_version' to v4 (https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_configuration.html#signature-version), but I don't know how.
config.yml:
aws:
    version: 'latest'
    region: 'eu-west-3'
    credentials: false
    Sqs:
        credentials: "@a_service"
sendFile.php
use Aws\S3\S3Client;

public function __construct(S3Client $s3Client)
{
    $this->s3Client = $s3Client;
}

public function sendFile($dataBase64)
{
    $this->s3Client->putObject([
        'Bucket' => $monbucket,
        'Key' => $key,
        'Body' => $dataBase64,
        'ACL' => 'public-read',
    ]);
}
Bundle version: "aws/aws-sdk-php-symfony": "^2.0",
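One way to set it (a sketch, not verified against every bundle version): the bundle hands its configuration to Aws\Sdk, which accepts per-service options keyed by service name, just like the Sqs block above. Adding an S3 block with signature_version should therefore do it; treating S3 as such a key here is an assumption based on that behaviour, not something quoted from the bundle docs.
aws:
    version: 'latest'
    region: 'eu-west-3'
    credentials: false
    S3:
        signature_version: 'v4'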

Related

How to set up AWS-SDK credentials in NextJS

I need to upload some files to S3 from a Next.js application. Since it is server side, I am under the impression that simply setting environment variables should work, but it doesn't. I know there are other alternatives, like assigning a role to EC2, but I want to use accessKeyID and secretKey.
This is my next.config.js
module.exports = {
    env: {
        // ..others
        AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
    },
    serverRuntimeConfig: {
        // ..others
        AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY
    }
}
This is my config/index.js
export default {
    // ...others
    awsClientID: process.env.AWS_ACCESS_KEY_ID,
    awsClientSecret: process.env.AWS_SECRET_ACCESS_KEY
}
This is how I use it in my code:
import AWS from 'aws-sdk'
import config from '../config'

AWS.config.update({
    accessKeyId: config.awsClientID,
    secretAccessKey: config.awsClientSecret,
});

const S3 = new AWS.S3()

const params = {
    Bucket: "bucketName",
    Key: "some key",
    Body: fileObject,
    ContentType: fileObject.type,
    ACL: 'public-read'
}

await S3.upload(params).promise()
I am getting this error:
Unhandled Rejection (CredentialsError): Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
If I hard code the credentials in code, it works fine.
How can I make it work correctly?
Looks like the Vercel docs are currently outdated (they cover AWS SDK v2 instead of v3). You can pass the credentials object to the AWS service when you instantiate it. Use environment variable names that are not reserved, for example by prefixing them with the name of your app.
.env.local
YOUR_APP_AWS_ACCESS_KEY_ID=[your key]
YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret]
Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client.
import { S3Client } from '@aws-sdk/client-s3'
...
const s3 = new S3Client({
    region: 'us-east-1',
    credentials: {
        accessKeyId: process.env.YOUR_APP_AWS_ACCESS_KEY_ID ?? '',
        secretAccessKey: process.env.YOUR_APP_AWS_SECRET_ACCESS_KEY ?? '',
    },
})
(Note: the undefined check is there so TypeScript stays happy.)
Are you possibly hosting this app via Vercel?
As per the Vercel docs, some environment variables are reserved by Vercel:
https://vercel.com/docs/concepts/projects/environment-variables#reserved-environment-variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Maybe that's the reason why it is not picking up those env vars.
I was able to work around this by adding my own custom env variables to .env.local and then referencing those variables:
AWS.config.update({
    region: 'us-east-1',
    credentials: {
        accessKeyId: process.env.MY_AWS_ACCESS_KEY,
        secretAccessKey: process.env.MY_AWS_SECRET_KEY
    }
});
As a last step, you would need to add these in the Vercel UI.
This is obviously not an ideal solution and is not recommended by AWS.
https://vercel.com/support/articles/how-can-i-use-aws-sdk-environment-variables-on-vercel
If I'm not mistaken, you want to make AWS_ACCESS_KEY_ID a runtime variable as well. Currently it is a build-time variable, which won't be accessible in your Node application at runtime.
// replace this
env: {
    // ..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},

// with this
module.exports = {
    serverRuntimeConfig: {
        // ..others
        AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
    }
}
Reference: https://nextjs.org/docs/api-reference/next.config.js/environment-variables
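One caveat to the snippet above, based on the Next.js docs rather than the original answer: values placed in serverRuntimeConfig are not exposed on process.env at runtime; they are read through next/config. A minimal sketch of what config/index.js might then look like, assuming both keys were moved into serverRuntimeConfig:
// config/index.js (sketch)
import getConfig from 'next/config'

const { serverRuntimeConfig } = getConfig()

export default {
    // ...others
    awsClientID: serverRuntimeConfig.AWS_ACCESS_KEY_ID,
    awsClientSecret: serverRuntimeConfig.AWS_SECRET_ACCESS_KEY
}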

Uploading a file to an S3 bucket without using the AWS console, using Node.js

I am new to AWS; I need to do the above assignment, so I want to know the procedure to complete it. What prerequisite software and tools do I need?
Please suggest a simple approach. Thanks in advance, and happy coding.
You have to configure your AWS credentials before using AWS with the aws-sdk npm package, using the method below:
import AWS from "aws-sdk";

const s3 = new AWS.S3({
    accessKeyId: YOUR_ACCESS_KEY_ID_OF_AMAZON,
    secretAccessKey: YOUR_AMAZON_SECRET_ACCESS_KEY,
    signatureVersion: "v4",
    region: YOUR_AMAZON_REGION // e.g. "us-east-1"
});

export { s3 };
Then call s3 and make the upload request using the code below:
const uploadReq: any = {
    Bucket: "YOUR_BUCKET",
    Key: "FILE_NAME",
    Body: "FILE_STREAM",
    ACL: "public-read", // publicly readable
};

await new Promise((resolve, reject) => {
    s3.upload(uploadReq).send(async (err: any, data: any) => {
        if (err) {
            console.log("err", err);
            reject(err);
        } else {
            // database call
            resolve("STATUS");
        }
    });
});

Possible to set up multiple AWS configurations in Rails?

Is there a way to set up multiple AWS configurations in a Rails environment? For example:
AWS account #1
Aws.config.update({
    region: aws_config[:region_1],
    credentials: Aws::Credentials.new(aws_config[:access_key_id_1], aws_config[:secret_access_key_1])
})
AWS account #2
Aws.config.update({
    region: aws_config[:region_2],
    credentials: Aws::Credentials.new(aws_config[:access_key_id_2], aws_config[:secret_access_key_2])
})
I tried setting the second specification manually in the model, but it doesn't log in with the new credentials for some reason:
foo = Aws::S3::Client.new(
    region: aws_config[:region_2],
    access_key_id: aws_config[:access_key_id_2],
    secret_access_key: aws_config[:secret_access_key_2]
)
bar = foo.list_objects_v2(bucket: aws_config[:bucket_2])
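For reference, Aws.config.update mutates one global configuration, so calling it twice simply makes the second update win. A sketch of keeping two independently configured clients instead (essentially what the snippet above attempts; the variable names are illustrative):
# Each client carries its own region and credentials; the global Aws.config is untouched.
client_1 = Aws::S3::Client.new(
    region: aws_config[:region_1],
    credentials: Aws::Credentials.new(aws_config[:access_key_id_1], aws_config[:secret_access_key_1])
)

client_2 = Aws::S3::Client.new(
    region: aws_config[:region_2],
    credentials: Aws::Credentials.new(aws_config[:access_key_id_2], aws_config[:secret_access_key_2])
)

client_2.list_objects_v2(bucket: aws_config[:bucket_2])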

Issues generating CloudFront signed URLs; always Access Denied

I’m having issues generating signed URLs with CloudFront. Whatever I try, I just get an “Access Denied” response.
I’ve created a distribution in CloudFront, and a CloudFront key pair ID. I’ve downloaded the private and public keys for that key pair ID.
In a simple PHP script, I’m trying the following:
use Aws\CloudFront\CloudFrontClient;

$cloudfront = new CloudFrontClient([
    'credentials' => [
        'key' => '[redacted]',    // Access key ID of IAM user with Administrator policy
        'secret' => '[redacted]', // Secret access key of same IAM user
    ],
    'debug' => true,
    'region' => 'eu-west-1',
    'version' => 'latest',
]);

$expires = strtotime('+6 hours');
$resource = 'https://[redacted].cloudfront.net/mp4/bunny-trailer.mp4';

$url = $cloudfront->getSignedUrl([
    'url' => $resource,
    'policy' => json_encode([
        'Statement' => [
            [
                'Resource' => $resource,
                'Condition' => [
                    'DateLessThan' => [
                        'AWS:EpochTime' => $expires,
                    ],
                ],
            ],
        ],
    ]),
    'expires' => $expires,
    'key_pair_id' => '[redacted]', // Access key ID of CloudFront key pair
    'private_key' => '[redacted]', // Relative path to pk-[redacted].pem file
]);
But when visiting the generated URL, it just always gives me an error in the browser with a code of “AccessDenied”.
What am I doing wrong?
Discovered what the issue was. The objects in my S3 bucket weren’t publicly-accessible, and I hadn’t added an Origin Access Identity, so CloudFront couldn’t pull the objects from my origin (my S3 bucket) to cache them.
As soon as I added an Origin Access Identity and added it to my S3 bucket’s policy, my objects immediately became accessible through my CloudFront distribution via signed URLs.
Relevant documentation: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-creating-oai
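For the record, a sketch of the bucket-policy statement that grants the OAI read access, following the linked documentation (the OAI ID and bucket name below are placeholders):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}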

Elasticsearch AWS with Elastica

Is it possible to connect to an Amazon Elasticsearch domain with Elastica and the "AWS Account access policy"?
When I use "Allow open access to the domain", it works.
$elasticaClient = new \Elastica\Client([
    'connections' => [
        [
            'transport' => 'Https',
            'host' => 'search-xxxxxxxx-zzzzzzzz.us-west-2.es.amazonaws.com',
            'port' => '',
            'curl' => [
                CURLOPT_SSL_VERIFYPEER => false,
            ],
        ],
    ],
]);
But I don't know how to set the required Authorization header when I use the "AWS Account access policy".
I am using the FriendsOfSymfony FOSElasticaBundle for Symfony. I solved that problem by using AwsAuthV4 as the transport, like this:
fos_elastica:
    clients:
        default:
            host: "YOURHOST.eu-west-1.es.amazonaws.com"
            port: 9200
            transport: "AwsAuthV4"
            aws_access_key_id: "YOUR_AWS_KEY"
            aws_secret_access_key: "YOUR_AWS_SECRET"
            aws_region: "eu-west-1"
This is not implemented yet, as it needs more than just setting the headers. It is best to follow the issue in the Elastica repository for progress: https://github.com/ruflin/Elastica/issues/948
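Since FOSElasticaBundle builds on Elastica, the AwsAuthV4 transport used in the bundle config above comes from Elastica itself. If you use Elastica directly rather than through the bundle, the client configuration might look like the sketch below; this assumes an Elastica version recent enough to ship that transport, and that the option names mirror the bundle config above.
// Sketch only: assumes Elastica ships the AwsAuthV4 transport and accepts these option names.
$elasticaClient = new \Elastica\Client([
    'host' => 'YOURHOST.eu-west-1.es.amazonaws.com',
    'port' => 9200,
    'transport' => 'AwsAuthV4',
    'aws_access_key_id' => 'YOUR_AWS_KEY',
    'aws_secret_access_key' => 'YOUR_AWS_SECRET',
    'aws_region' => 'eu-west-1',
]);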