AWS S3 put object (video) with pre-signed URL not working? - amazon-web-services

I am uploading a video to an S3 bucket via the PHP S3 API with a pre-signed URL.
The mp4 video is uploaded to S3 successfully, but it does not stream
and no error of any kind is reported.
Here are the details.
This is my PHP file that creates the pre-signed URL for the S3 putObject method:
require 'aws/aws-autoloader.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;

$s3Client = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => [
        'key' => 'XXXXXXX',
        'secret' => 'XXXXXXX'
    ]
]);

/*echo '<pre>';
print_r($_FILES);die;*/

if (!$_FILES['file']['tmp_name'] || $_FILES['file']['tmp_name'] == '') {
    echo json_encode(array('status' => 'false', 'message' => 'file path is required!'));die;
} else {
    $SourceFile = $_FILES['file']['tmp_name'];
    $key = $_FILES['file']['name'];
    $size = $_FILES['file']['size'];
}

try {
    $cmd = $s3Client->getCommand('putObject', [
        'Bucket' => 's3-signed-test',
        'Key' => $key,
        'SourceFile' => $SourceFile,
        'debug' => false,
        'ACL' => 'public-read-write',
        'ContentType' => 'video/mp4',
        'CacheControl' => 'no-cache',
        'ContentLength' => $size
    ]);
    $request = $s3Client->createPresignedRequest($cmd, '+120 minutes');
    // Get the actual pre-signed URL
    $presignedUrl = (string) $request->getUri();
} catch (AwsException $e) { // catch the imported AwsException (S3Exception was never imported)
    echo $e->getMessage() . "\n";die;
}

echo json_encode(array('status' => 'true', 'signedUrl' => $presignedUrl));die;
This code works and uploads the mp4 video to the S3 bucket.
But when I try to access that video after the upload, it does not work.
I have also tried a getObject pre-signed URL, but that does not work either.
Here are the S3 object URLs:
(1) getObject pre-signed URL:
https://s3-signed-test.s3.ap-south-1.amazonaws.com/file.mp4?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIVUO7AT4W4MCPDIA%2F20180402%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20180402T112848Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7200&X-Amz-Signature=d6b877f9bba5dd2221381f10017c8659fe42342d81f7af940d8693478679a8fc
(2) S3 direct object URL:
https://s3.ap-south-1.amazonaws.com/s3-signed-test/file.mp4
My problem is that I am unable to access the video that I uploaded to the S3 bucket with the pre-signed URL; the bucket permission is public and it is accessible from all origins.
If someone has a solution for this, please let me know.
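For reference, this is roughly how the signed URL is meant to be consumed on the client side - an HTTP PUT of the raw file bytes. A minimal PHP cURL sketch of that step (illustrative only, not my actual client code; the local path is hypothetical):
// Illustrative sketch only: PUT the raw bytes of the local file to the pre-signed URL.
$localFile = '/tmp/file.mp4'; // hypothetical local path
$fh = fopen($localFile, 'rb');
$ch = curl_init($presignedUrl);
curl_setopt_array($ch, [
    CURLOPT_PUT => true,                                // a plain PUT body, not multipart/form-data
    CURLOPT_INFILE => $fh,
    CURLOPT_INFILESIZE => filesize($localFile),
    CURLOPT_HTTPHEADER => ['Content-Type: video/mp4'],  // match the ContentType used when signing
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);        // expect 200 on success
curl_close($ch);
fclose($fh);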

Related

Net::Amazon::S3::Client Produces Error on initiate_multipart_upload

There seems to be an issue triggering initiate_multipart_upload in Net::Amazon::S3::Client. When I do, I receive the error:
Can't call method "findvalue" on an undefined value at /usr/local/share/perl5/Net/Amazon/S3/Operation/Object/Upload/Create/Response.pm line 17.
I can do a normal put on the object without any error. All else seems to be functional, with lists of the bucket working, etc. Here is a code snippet of what I'm trying:
use Net::Amazon::S3;

our $s3 = Net::Amazon::S3->new({
    aws_access_key_id     => $config{'access_key_id'},
    aws_secret_access_key => $config{'access_key'},
    retry                 => 1
});
our $s3client = Net::Amazon::S3::Client->new( s3 => $s3 );

my $bucket = $s3client->bucket( name => $bucketName );
my $s3object = $bucket->object( key => 'test.txt' );

print 'Key: ' . $s3object->key;
my $uploadId = $s3object->initiate_multipart_upload;
Net::Amazon::S3::Client doesn't object to creating the bucket or the object.
I've been able to initialize a multipart upload using the Paws::S3 library on this bucket just fine. I switched to Net::Amazon::S3::Client since Paws::S3 seems to get stuck at around the 21% mark of uploading my multipart file.

Unable to move files completely to S3 bucket using AWS SDK for PHP

I'm trying to get past a roadblock related to S3.
Background: From my mobile app, users can pick up to 10 images and post them to the server. These are received and moved to the relevant folders on an AWS instance - let's call it instance "A" for the purpose of this discussion. As each file is moved to instance "A" we call "moveFileToS3". The code looks something like this:
foreach ($_FILES as $file)
{
    $file_info = $file['name'];
    $extension = pathinfo($file['name'], PATHINFO_EXTENSION);
    $destination_file_name = $imagesMoved . '-' . $sku . '.' . $extension;
    $file_path = $images_directory . $destination_file_name;
    if (move_uploaded_file($file['tmp_name'], $file_path))
    {
        $imagesMoved++;
        // Move the file to S3
        moveFileToS3($destination_file_name, $images_directory, $parent_folder . '/' . $images_folder . '/');
    }
    if (intval($imagesMoved) == intval($totalFileCount))
    {
        $image_moved = true;
        // Begin to update the DB
    }
}
When this call (moveFileToS3) is NOT made, all the files selected by the user make it to instance "A".
But when this call (moveFileToS3) IS made, not all the files selected by the user make it to instance "A", and only a few files from instance "A" get moved to the S3 location. None of the instructions after $image_moved = true get executed either.
Any assistance in getting past this situation would be very much appreciated. I have attached the file that contains the method "moveFileToS3" for quick reference.
<?php
require '../api/vendor/autoload.php';

use Aws\Exception\MultipartUploadException; // SDK v3 namespace; Aws\Common\Exception\... is the SDK v2 path
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

function moveFileToS3($fileName, $fileLocation, $targetLocation)
{
    date_default_timezone_set('Asia/Kolkata');

    $region = 'xxxxx';
    $version = 'xxxxx';
    $bucket = 'xx-xxxxxx-xxxxx';
    $endpoint = 'xxxxxxxx.s3-accelerate.amazonaws.com';
    $key = 'xxxxxx';
    $secret = 'xxxxxxxxx';
    $fileFullPathLocal = $fileLocation . $fileName;

    $s3 = new S3Client([
        'version' => $version,
        'region' => $region,
        'debug' => true,
        'credentials' => [
            'key' => $key,
            'secret' => $secret,
        ]
    ]);

    // Prepare the upload parameters.
    $uploader = new MultipartUploader($s3, $fileFullPathLocal, [
        'bucket' => $bucket,
        'key' => $targetLocation . $fileName
    ]);

    // Perform the upload.
    try
    {
        $responseLogFile = fopen("../log/S3UploadLog_" . date("Y-m-d") . ".log", "a+");
        fwrite($responseLogFile, '[' . date("Y-m-d H:i:s") . ']: Upload Started : ' . $fileName . PHP_EOL . PHP_EOL);
        fclose($responseLogFile);

        $result = $uploader->upload();

        $responseLogFile = fopen("../log/S3UploadLog_" . date("Y-m-d") . ".log", "a+");
        fwrite($responseLogFile, '[' . date("Y-m-d H:i:s") . ']: Upload Finished : ' . $fileName . PHP_EOL . PHP_EOL);
        // fwrite($responseLogFile, '['.date("Y-m-d H:i:s").']: Upload Result : '.$result. PHP_EOL. PHP_EOL);
        fwrite($responseLogFile, '[' . date("Y-m-d H:i:s") . ']: Object Url : ' . $result['ObjectURL'] . PHP_EOL . PHP_EOL);
        fclose($responseLogFile);

        // echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
        unlink($fileFullPathLocal);
    }
    catch (MultipartUploadException $e)
    {
        $responseLogFile = fopen("../log/S3UploadLog_" . date("Y-m-d") . ".log", "a+");
        fwrite($responseLogFile, '[' . date("Y-m-d H:i:s") . ']: Upload Failed : ' . $fileName . PHP_EOL . PHP_EOL);
        fclose($responseLogFile);
        echo $e->getMessage() . PHP_EOL;
    }
}
?>
This seems to be an issue when you move a file from your local dev server to a bucket, but when run in the AWS environment it works without any issues.

How to get the AWS S3 bucket location via a PHP API call?

I have been searching the internet for how to get the AWS S3 bucket region with an API call, or directly in PHP using their library, but have had no luck finding the info.
I have the following info available:
Account credentials, bucket name, access key + secret. This is for multiple buckets that I have access to, and I need to get the region programmatically, so logging in to the AWS console and checking is not an option.
Assuming you have an instance of the AWS PHP Client in $client, you should be able to find the location with $client->getBucketLocation().
Here is some example code:
<?php
$result = $client->getBucketLocation([
    'Bucket' => 'yourBucket',
]);
The result will look like this:
[
    'LocationConstraint' => 'the-region-of-your-bucket',
]
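One thing to keep in mind: buckets in us-east-1 report an empty LocationConstraint, so you may want a fallback along these lines:
$region = $result['LocationConstraint'];
if (empty($region)) {
    $region = 'us-east-1'; // us-east-1 buckets return an empty LocationConstraint
}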
When you create an S3 client, you can use any of the available AWS regions, even one you don't otherwise use.
$s3Client = new Aws\S3\S3MultiRegionClient([
    'version' => 'latest',
    'region' => 'us-east-1',
    'credentials' => [
        'key' => $accessKey,
        'secret' => $secretKey,
    ],
]);

$region = $s3Client->determineBucketRegion($bucketname);

How to get the file from Amazon AWS S3 bucket from URL?

I've created an Amazon S3 bucket and I've uploaded files/images from a mobile phone app. I have to show posts with a lot of images, and the images are automatically bound to image URLs. But I don't know how to get the URLs, because the images should not be public and so cannot be shown directly. How can I show them in my app?
$cmd = $client->getCommand('GetObject', [
    'Bucket' => 'myinstaclassbucket',
    'Key' => 'e12e682c-936d-4a97-a049-6f104dd7c904.jpg',
]);

$request = $client->createPresignedRequest($cmd, $timetoexpire);
$presignedurl = (string) $request->getUri();
echo $presignedurl;
First of all, you need to use the AWS PHP SDK. Also make sure you have a valid access key and secret key.
Then everything is straightforward.
$bucket = 'some-bucket';
$key = 'mainFolder/subFolder/file.xx';

// Init client (SDK v2 style configuration)
$client = S3Client::factory([
    'key' => '*YOUR ACCESS KEY*',
    'secret' => '*YOUR SECRET KEY*',
]);

if ($client->doesObjectExist($bucket, $key)) {
    // If you pass an expiry time you will get a signed URL
    $url = $client->getObjectUrl($bucket, $key, time() + (60 * 60 * 2));
} else {
    $url = null;
}

Location to put credentials file for AWS PHP SDK

I created an EC2 Ubuntu instance.
The following is working using the AWS 2.6 SDK for PHP:
$client = DynamoDbClient::factory(array(
    'key' => 'xxx',
    'secret' => 'xxx',
    'region' => 'eu-west-1'
));
I created a credentials file in ~/.aws/credentials.
I put this in /home/ubuntu/.aws/credentials
[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx
Trying the following does not work and gives an InstanceProfileCredentialsException:
$client = DynamoDbClient::factory(array(
    'profile' => 'default',
    'region' => 'eu-west-1'
));
There is a user www-data and a user ubuntu.
In what folder should I put the credentials file?
One solution is to set the credentials in the Apache environment file:
sudo nano /etc/apache2/envvars
Add the environment variables:
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"
Then restart Apache:
sudo service apache2 restart
After that, the following works:
$client = DynamoDbClient::factory(array(
    'region' => 'eu-west-1'
));
If you are calling the API from an EC2 instance, you should use IAM roles.
Using IAM roles is the preferred technique for providing credentials
to applications running on Amazon EC2. IAM roles remove the need to
worry about credential management from your application. They allow an
instance to "assume" a role by retrieving temporary credentials from
the EC2 instance's metadata server. These temporary credentials, often
referred to as instance profile credentials, allow access to the
actions and resources that the role's policy allows.
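With a role attached to the instance, the client call shown above needs no explicit credentials at all; for example (a minimal sketch, assuming the SDK v2 factory style used in this question):
// No key/secret and no profile: the SDK retrieves temporary
// credentials from the EC2 instance metadata service.
$client = DynamoDbClient::factory(array(
    'region' => 'eu-west-1'
));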
This is way too late, but the solution I found for shared servers where you can't actually use environment vars is to define a custom ini file location, like this:
require(__DIR__ . '/AWSSDK/aws-autoloader.php');

use Aws\Credentials\CredentialProvider;
use Aws\S3\S3Client;

$profile = 'default';
$path = '/path/to/credentials';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = new \Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-west-2',
    'credentials' => $provider
]);
Note that you could even define different profiles with this method.
Documentation HERE
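For example, the ini file at $path could hold several profiles, and you would just change the first argument to CredentialProvider::ini() (the profile name below is made up):
[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx

[project1]
aws_access_key_id = yyy
aws_secret_access_key = yyy
Then select it with:
$provider = CredentialProvider::ini('project1', $path);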
I have a non-EC2 server that accesses SQS and needs credentials. I can't use env vars because there are various people with differing rights who run on this server, and env vars are global. For the same reason I don't think I can use an AWS credentials file stored under a user's home (although I also couldn't figure out how to make that work for the user www-data).
What I have done is set up a small file AWS_Creds.php
<?php
define ("AWS_KEY", "MY KEY HERE");
define ("AWS_SECRET", "MY SECRET");
?>
The file is stored outside of the webroot, included with include('ABSOLUTEPATH/AWS_Creds.php'), and I pass the hard-wired constants to the client factory.
Elegant? No. Done? Yes.
EDIT
I forgot to mention: I gitignore AWS_Creds.php so that it doesn't go into our repo.
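To illustrate, that hard-wired reference to the client factory looks roughly like this (the SQS client class and the region below are just placeholders for whatever the code actually uses):
include('ABSOLUTEPATH/AWS_Creds.php'); // defines AWS_KEY and AWS_SECRET

$client = Aws\Sqs\SqsClient::factory(array(
    'key'    => AWS_KEY,
    'secret' => AWS_SECRET,
    'region' => 'us-east-1' // placeholder region
));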
Basically, you can use it like this:
$client = DynamoDbClient::factory(array(
    'key' => 'aws_key',
    'secret' => 'aws_secret',
    'region' => 'us-east-1'
));
But per the documentation:
Starting with the AWS SDK for PHP version 2.6.2, you can use an AWS credentials file to specify your credentials. This is a special, INI-formatted file stored under your HOME directory, and is a good way to manage credentials for your development environment. The file should be placed at ~/.aws/credentials, where ~ represents your HOME directory.
And the usage:
$dynamoDbClient = DynamoDbClient::factory(array(
    'profile' => 'project1',
    'region' => 'us-west-2',
));
More info: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/credentials.html
After looking at the source code of Credential.php in aws/aws-sdk-php/src:
PHP cannot directly access the /root folder by default. You can set $_SERVER['HOME'] = '[your new home path]' in your PHP and put the credentials file in newHomePath/.aws/credentials.
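In other words, something along these lines (the path below is just an example):
$_SERVER['HOME'] = '/var/www'; // example: a home path PHP can read, containing .aws/credentials

$client = DynamoDbClient::factory(array(
    'profile' => 'default',
    'region'  => 'eu-west-1'
));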
require('vendor/autoload.php');

use Aws\Ec2\Ec2Client;

$credentials = new Aws\Credentials\Credentials('Your Access Key', 'Your Secret Key'); // place both keys here

$ec2Client = new Aws\Ec2\Ec2Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => $credentials
]);

$result = $ec2Client->describeKeyPairs();
echo '<pre>';
print_r($result);
Reference site: https://docs.aws.amazon.com/aws-sdk-php/v2/guide/credentials.html#passing-credentials-into-a-client-factory-method