I have a PHP app running on an AWS EC2 instance. I want to upload a file to the instance and then save that file to an S3 bucket. I can get the file up to the EC2 instance with no problem, but for the life of me I cannot figure out how to use the putObject call to move it to the S3 bucket... I am very new to AWS, so if anyone can give me a pointer, that'd help out a lot!
You have three options for how to transfer data from the EC2 instance to S3 using PHP:
1. Use the AWS PHP SDK
This is the preferred option. The SDK is released by Amazon and includes an easy-to-use interface for all of their services. It also allows you to use the EC2 instance's IAM role for credentials, meaning you don't necessarily need to store your API keys etc. in the code. Example use:
// Requires the AWS SDK for PHP (installed via Composer, for example).
require 'vendor/autoload.php';

// Use an Aws\Sdk instance to create the S3Client object. With no explicit
// credentials, the SDK falls back to the instance's IAM role on EC2.
$sdk = new Aws\Sdk([
    'region'  => 'us-east-1', // adjust to your bucket's region
    'version' => 'latest',
]);
$s3Client = $sdk->createS3();

// Send a PutObject request and get the result object.
$result = $s3Client->putObject([
    'Bucket' => 'my-bucket',
    'Key'    => 'my-key',
    'Body'   => file_get_contents($pathToYourFile)
]);

// Download the contents of the object.
$result = $s3Client->getObject([
    'Bucket' => 'my-bucket',
    'Key'    => 'my-key'
]);

// Print the body of the result by indexing into the result object.
echo $result['Body'];
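As a side note, for larger files the SDK also accepts a 'SourceFile' parameter pointing at the file path instead of 'Body', which streams the file rather than loading it all into memory first.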
2. Use the EC2 instance's AWS CLI
This is less ideal, but still doable. I wouldn't recommend it, as it means using PHP to shell out and assuming that the AWS CLI is installed and configured on the instance (Amazon Linux AMIs include it by default; other AMIs may not). Example:
shell_exec('aws s3 cp /path/to/your/file.txt s3://bucket-name/path/your/file.txt');
3. Bake your own S3 API call using REST
This is probably the next best option after using the SDK, although it requires you to store the credentials in code, formulate the HMAC signatures yourself, and ensure that your requests match Amazon's API specification. Note that the PHP SDK does all of this for you, but this way you don't need the whole SDK installed.
If you go this way, you'll need to read up on how to sign your requests; the sketch below shows roughly what's involved.
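Here is a minimal Signature Version 4 sketch for a PutObject call over raw REST, assuming placeholder credentials, bucket, key and region (an illustration of the signing steps, not production code):

// Placeholders: substitute your own values.
$accessKey = 'AKIA...';
$secretKey = 'your-secret-key';
$region    = 'us-east-1';
$service   = 's3';
$bucket    = 'my-bucket';
$key       = 'my-key';
$body      = file_get_contents('/path/to/your/file.txt');

$host        = "$bucket.s3.amazonaws.com";
$amzDate     = gmdate('Ymd\THis\Z');
$dateStamp   = gmdate('Ymd');
$payloadHash = hash('sha256', $body);

// 1. Canonical request (headers must be listed in alphabetical order).
$canonicalRequest = implode("\n", [
    'PUT',
    '/' . $key,
    '',                             // no query string
    "host:$host\nx-amz-content-sha256:$payloadHash\nx-amz-date:$amzDate\n",
    'host;x-amz-content-sha256;x-amz-date',
    $payloadHash,
]);

// 2. String to sign, scoped to date/region/service.
$scope        = "$dateStamp/$region/$service/aws4_request";
$stringToSign = implode("\n", [
    'AWS4-HMAC-SHA256',
    $amzDate,
    $scope,
    hash('sha256', $canonicalRequest),
]);

// 3. Derive the signing key through the HMAC chain.
$kDate     = hash_hmac('sha256', $dateStamp, 'AWS4' . $secretKey, true);
$kRegion   = hash_hmac('sha256', $region, $kDate, true);
$kService  = hash_hmac('sha256', $service, $kRegion, true);
$kSigning  = hash_hmac('sha256', 'aws4_request', $kService, true);
$signature = hash_hmac('sha256', $stringToSign, $kSigning);

// 4. Send the PUT with the Authorization header.
$authorization = "AWS4-HMAC-SHA256 Credential=$accessKey/$scope, "
    . 'SignedHeaders=host;x-amz-content-sha256;x-amz-date, '
    . "Signature=$signature";

$ch = curl_init("https://$host/$key");
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_POSTFIELDS     => $body,
    CURLOPT_HTTPHEADER     => [
        "Authorization: $authorization",
        "x-amz-content-sha256: $payloadHash",
        "x-amz-date: $amzDate",
    ],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);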
Related
I am on a federated account that only allows 60-minute access tokens. This makes using AWS difficult, since I have to constantly log back in with MFA, even for the AWS CLI on my machine. I'm fairly certain that any programmatic secret access key and token I generate would be useless after an hour.
I am writing a .NET program (.NET Framework 4.8) that will run on an EC2 instance to read and write from an S3 bucket. The documentation gives this example to initialize the AmazonS3Client:
// Before running this app:
// - Credentials must be specified in an AWS profile. If you use a profile other than
// the [default] profile, also set the AWS_PROFILE environment variable.
// - An AWS Region must be specified either in the [default] profile
// or by setting the AWS_REGION environment variable.
var s3client = new AmazonS3Client();
I've looked into Secrets Manager and Parameter Store, but those wouldn't help if the programmatic access keys go inactive after an hour. Perhaps there is another way to give the program access to S3 and the SDK...
If I cannot use access keys and tokens stored in a file, could I use the IAM access that the AWS CLI uses? For example, I can type aws s3 ls s3://mybucket into PowerShell to list and read files from S3 on the EC2 instance. Could the .NET SDK use the same credentials to access the S3 bucket?
I am trying to work with the new Instance Metadata Service Version 2 (IMDSv2) API.
It works as expected when I try to query the metadata manually as described on Retrieve instance metadata - Amazon Elastic Compute Cloud.
However, if I try to query for the instance tags it fails with error message:
Couldn't find AWS credentials in environment, credentials file, or IAM role
The tags query is done by the Rusoto SDK that I am using, which works when I set --http-tokens to optional, as described on Configure the instance metadata options - Amazon Elastic Compute Cloud.
I don't fully understand why setting the machine to work with IMDSv2 would affect the DescribeTags request, as I believe it's not using the same API - so I am guessing that's a side effect.
If I try to do a manual query using curl (instead of using the SDK):
https://ec2.amazonaws.com/?Action=DescribeTags&Filter.1.Name=resource-id&Filter.1.Value.1=ami-1a2b3c4d
I get:
The action DescribeTags is not valid for this web service
Thanks :)
The library that I was using (Rusoto SDK 0.47.0) doesn't support fetching the needed credentials when the host is set to work with IMDSv2.
The workaround was to manually query for the IAM role credentials.
First, you get the token (note that this is a PUT request, with the "X-aws-ec2-metadata-token-ttl-seconds" header set to the desired token lifetime):
PUT /latest/api/token
Next, use the token header "X-aws-ec2-metadata-token" with the value from the previous step, and query the attached role name:
GET /latest/meta-data/iam/security-credentials/
Afterwards, use the role name from the previous query (and don't forget to set the token header), and query:
GET /latest/meta-data/iam/security-credentials/<query 2 result>
This will return the following data (shown as the Rust struct it deserializes into):
struct SecurityCredentials {
#[serde(rename = "AccessKeyId")]
access_key_id: String,
#[serde(rename = "SecretAccessKey")]
secret_access_key: String,
#[serde(rename = "Token")]
token: String,
}
Then what I needed to do was build a custom credentials provider using that data (but this part is lib-specific).
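To make the sequence concrete, here is a minimal sketch of those three calls, written in PHP purely for illustration (the original code is Rust/Rusoto, but the endpoint, paths, and headers are the documented IMDSv2 ones; the helper function is hypothetical):

// Hypothetical helper: issue one request against the instance metadata service.
function imdsRequest(string $method, string $path, array $headers = []): string {
    $ch = curl_init('http://169.254.169.254' . $path);
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => $method,
        CURLOPT_HTTPHEADER     => $headers,
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}

// 1. Fetch a session token (a PUT, not a GET).
$token = imdsRequest('PUT', '/latest/api/token',
    ['X-aws-ec2-metadata-token-ttl-seconds: 21600']);

// 2. Look up the name of the IAM role attached to the instance.
$role = trim(imdsRequest('GET', '/latest/meta-data/iam/security-credentials/',
    ["X-aws-ec2-metadata-token: $token"]));

// 3. Fetch the temporary credentials for that role.
$credentials = json_decode(imdsRequest('GET',
    "/latest/meta-data/iam/security-credentials/$role",
    ["X-aws-ec2-metadata-token: $token"]), true);

// $credentials now contains AccessKeyId, SecretAccessKey and Token,
// matching the SecurityCredentials struct above.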
I want to extend aws-cli with custom commands. Is that possible? I mean, does aws-cli have some library to extend its functionality? For example, I want to obtain the size in MB of my files in the bucket bucketmio. Maybe I could create some extension for that, e.g.:
aws s3 bucketmio get-size-all-files
get-size-all-files is the new extension that I want.
Thanks.
Yes, you can Create and use AWS CLI aliases, which are "shortcuts you can create in the AWS Command Line Interface (AWS CLI) to shorten commands or scripts that you frequently use. You create aliases in the alias file located in your configuration folder."
Here is an example that sends a message via Amazon SNS:
textalert =
!f() {
aws sns publish --message "${1}" --phone-number ${2}
}; f
You can also write a shell script that calls the AWS CLI and adds additional logic around those calls.
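For the size question specifically, the CLI can already report a bucket's total size via aws s3 ls s3://bucketmio --recursive --summarize --human-readable, and that command could be wrapped in an alias the same way as the example above.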
I'm having a problem with my AWS credentials. I used the credentials file that I created at ~/.aws/credentials, just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried moving my credentials file to the /var/www directory, even though it is not my web server directory. Nothing worked; I was still getting the same error.
As a second solution, I saw that we could call the CredentialProvider directly and indicate the credentials path on the client:
https://forums.aws.amazon.com/thread.jspa?messageID=583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I also saw that we could use the CredentialProvider's default provider instead of indicating a path:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried and I keep getting the same error:
Cannot read credentials from /.aws/credentials
In case you need this information: I'm using aws/aws-sdk-php (3.2.5), and the service I'm trying to use is AWS Elastic Transcoder. My EC2 instance is Ubuntu 14.04, running a Symfony application deployed using Capifony.
Before trying on this production server, I tried it on a development server, where it works perfectly with only the ~/.aws/credentials file. The development server is an exact copy of the production server, except that it doesn't use Capifony for deployment (it's just a normal git clone of the project) and it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owners of the credentials file were the same on both servers, and they are. I even tried a 777 to see if it would change anything, but nothing.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services, and in fact you should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role's policies later.
Update: you can now also attach an IAM role to an instance after it has been launched, so the original restriction (no role assignment after launch) no longer applies.
It is still considered a best practice not to deploy actual credentials to an EC2 instance.
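As a minimal sketch (assuming SDK v3 installed via Composer, and the region/version values that appear elsewhere in this thread), the client can then be created without any credentials at all; the SDK's default provider chain picks up the role's temporary credentials from the instance metadata:

require 'vendor/autoload.php';

use Aws\ElasticTranscoder\ElasticTranscoderClient;

// No 'credentials' entry: on EC2 the SDK falls back to the instance profile
// (the IAM role attached to the instance) automatically.
$client = new ElasticTranscoderClient([
    'region'  => 'eu-west-1',
    'version' => '2012-09-25',
]);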
If this can help someone, I managed to make my .ini file work this way:

require 'vendor/autoload.php';

use Aws\Credentials\CredentialProvider;
use Aws\ElasticTranscoder\ElasticTranscoderClient;

$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';

// Build an ini-file provider and memoize it so the file is only read once.
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region'      => 'eu-west-1',
    'version'     => '2012-09-25',
    'credentials' => $provider
));
The CredentialProvider is explained on this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server, while on the other it does.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
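For reference, the file the SDK looks for is a plain ini file at ~/.aws/credentials shaped like this (keys elided):

[default]
aws_access_key_id = <KEY>
aws_secret_access_key = <KEY>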
Two ways of solving this problem in Node.js.
This will pick up the credentials from /home/{USER}/.aws/credentials, using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({profile: 'default'});
...
The hardcoded way:
var lambda = new aws.Lambda({
  region: 'us-east-1',
  accessKeyId: <KEY>,
  secretAccessKey: <KEY>
});
Hi Cloud Computing Geeks,
Is there any way of mounting/connecting an S3 bucket to EC2 instances using the Java AWS SDK (not by using CLI commands/ec2-api-tools)? I do have all the Java SDKs required. I successfully created a bucket using the Java AWS SDK, and now I want to connect it to my EC2 instances in the North Virginia region. I didn't find any way to do it, but I hope there must be some way.
Cheers,
Hammy
You don't "mount S3 buckets", they don't work that way for they are not filesystems. Amazon S3 is a service for key-based storage. You put objects (files) with a key and retrieve them back with the same key. The key is merely a name that you assign arbitrarily and you can include "/" markers in the names to simulate a hierarchy of files. For example, a key could be folder1/subfolder/file1.txt. For illustration I show the basic operations with the java sdk.
First of all you must set up your amazon credentials and get an S3 client:
AWSCredentials credentials = new BasicAWSCredentials("your_accessKey", "your_secretKey");
AmazonS3Client s3client = new AmazonS3Client(credentials);
Store a file:
File file = new File("some_local_path/file1.txt");
String fileKey = "folder1/subfolder/file1.txt";
s3client.putObject("bucket_name", fileKey, file);
Retrieve back the file:
S3ObjectInputStream objectInputStream = s3client.getObject("bucket_name", fileKey).getObjectContent();
You can read the InputStream or save it to a file.
List the objects of a (simulated) folder: See my answer in another question.