I am trying to send some text to AWS Polly to convert to speech and then save that mp3 file to S3. That part seems to work now.
// Send text to AWS Polly
$client_polly = new Aws\Polly\PollyClient([
    'region'      => 'us-west-2',
    'version'     => 'latest',
    'credentials' => [
        'key'    => $aws_useKey,
        'secret' => $aws_secret,
    ]
]);

$text  = 'Test. Test. This is a sample text to be synthesized.';
$voice = 'Matthew';

$result_polly = $client_polly->startSpeechSynthesisTask([
    'Text'               => $text,
    'TextType'           => 'text',
    'OutputFormat'       => 'mp3',
    'OutputS3BucketName' => $aws_bucket,
    'OutputS3KeyPrefix'  => 'files/audio/',
    'VoiceId'            => $voice,
    'ACL'                => 'public-read'
]);

echo $result_polly['ObjectURL'];
I'm also trying to accomplish a couple of other things:
1. Make the mp3 file publicly accessible. Currently I have to go to the AWS console and click the "Make Public" button; 'ACL' => 'public-read' doesn't seem to work for me.
2. Return the full URL of the mp3 file. For some reason $result_polly['ObjectURL'] doesn't get any value.
What am I missing?
There is no ACL field in the StartSpeechSynthesisTask call:
$result = $client->startSpeechSynthesisTask([
    'LanguageCode' => 'arb|cmn-CN|cy-GB|da-DK|de-DE|en-AU|en-GB|en-GB-WLS|en-IN|en-US|es-ES|es-MX|es-US|fr-CA|fr-FR|is-IS|it-IT|ja-JP|hi-IN|ko-KR|nb-NO|nl-NL|pl-PL|pt-BR|pt-PT|ro-RO|ru-RU|sv-SE|tr-TR',
    'LexiconNames' => ['<string>', ...],
    'OutputFormat' => 'json|mp3|ogg_vorbis|pcm', // REQUIRED
    'OutputS3BucketName' => '<string>', // REQUIRED
    'OutputS3KeyPrefix' => '<string>',
    'SampleRate' => '<string>',
    'SnsTopicArn' => '<string>',
    'SpeechMarkTypes' => ['<string>', ...],
    'Text' => '<string>', // REQUIRED
    'TextType' => 'ssml|text',
    'VoiceId' => 'Aditi|Amy|Astrid|Bianca|Brian|Carla|Carmen|Celine|Chantal|Conchita|Cristiano|Dora|Emma|Enrique|Ewa|Filiz|Geraint|Giorgio|Gwyneth|Hans|Ines|Ivy|Jacek|Jan|Joanna|Joey|Justin|Karl|Kendra|Kimberly|Lea|Liv|Lotte|Lucia|Mads|Maja|Marlene|Mathieu|Matthew|Maxim|Mia|Miguel|Mizuki|Naja|Nicole|Penelope|Raveena|Ricardo|Ruben|Russell|Salli|Seoyeon|Takumi|Tatyana|Vicki|Vitoria|Zeina|Zhiyu', // REQUIRED
]);
Therefore, you will either need to make another call to Amazon S3 to change the ACL of the object, or use an Amazon S3 Bucket Policy to make the bucket (or a path within the bucket) public.
The output location is given in the OutputUri field (NOT OutputUrl -- URI vs URL).
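For illustration, here is a minimal sketch of both steps, reusing the question's variables and assuming a separately constructed Aws\S3\S3Client in $client_s3; the polling interval and the key derivation from the path-style URL are assumptions:

// A sketch only. startSpeechSynthesisTask is asynchronous, so the mp3
// does not exist in S3 until the task completes.
$taskId = $result_polly['SynthesisTask']['TaskId'];

do {
    sleep(5); // assumed polling interval
    $task  = $client_polly->getSpeechSynthesisTask(['TaskId' => $taskId]);
    $state = $task['SynthesisTask']['TaskStatus']; // scheduled|inProgress|completed|failed
} while ($state === 'scheduled' || $state === 'inProgress');

if ($state === 'completed') {
    // OutputUri holds the full URL of the generated mp3.
    $url = $task['SynthesisTask']['OutputUri'];
    echo $url;

    // Derive the object key from the URL path (assumes the path-style
    // form "/bucket/key") and make the object public with a second S3 call.
    $path = ltrim(parse_url($url, PHP_URL_PATH), '/');
    $key  = substr($path, strlen($aws_bucket) + 1);

    $client_s3->putObjectAcl([
        'Bucket' => $aws_bucket,
        'Key'    => $key,
        'ACL'    => 'public-read',
    ]);
}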
Does anybody know how to solve this? I get an error message after trying to convert the image to an AMI. (Note: I already tried the import via the command line and it works fine.)
I'm trying to convert an image from raw to an AMI using Laravel 8. Here is the information:
1.) "aws/aws-sdk-php": "^3.231"
2.) php 8
3.) my code to process the conversion
public function startConversion(string $s3DiskImageName, string $description): void
{
    $client = ImportExportClient::factory(array(
        'credentials' => array(
            'key'    => config('filesystems.disks.s3.key'),
            'secret' => config('filesystems.disks.s3.secret'),
        ),
        'region'  => config('filesystems.disks.s3.region'),
        'version' => 'latest'
    ));

    $bucket = config('filesystems.disks.s3.bucket');

    $client->createJob([
        'JobType'      => 'Import',
        'ValidateOnly' => false,
        'Manifest'     => json_encode([
            [
                'Description' => "$s3DiskImageName - $description",
                'Format'      => 'raw',
                'Url'         => 's3://' . $bucket . '/' . $s3DiskImageName
            ]
        ])
    ]);
}
Thanks in advance!
I'm trying to work with the AWS PHP SDK, using PostObjectV4 to upload an image from the client's browser.
$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-2',
    'credentials' => [
        'key'    => S3_KEY,
        'secret' => S3_SECRET,
    ]
]);

$bucket = 'bucketsomewhere.coim';

$formInputs = ['acl' => 'public-read'];

$options = [
    ['acl' => 'public-read'],
    ['bucket' => $bucket],
    ['starts-with', '$key', 'Users/'],
    ['starts-with', '$Content-Type', 'image/'],
    ['starts-with', '$Cache-Control', 'max-age='],
];

$expires = '+40 minutes';

$postObject = new \Aws\S3\PostObjectV4(
    $client,
    $bucket,
    $formInputs,
    $options,
    $expires
);
After sending the POST request from JS, it gives me the following error:
Invalid according to Policy: Policy Condition failed: ["starts-with", "$Cache-Control", "max-age=31536000"]
I read about it in their docs, but with no success. Help is needed :)
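For reference, every starts-with condition in the policy must be matched by a field actually present in the submitted POST form. Below is a minimal sketch of rendering the form from the $postObject built above; the key, Content-Type and Cache-Control values are made-up examples chosen to satisfy those prefixes:

// Sketch only: getFormAttributes()/getFormInputs() are PostObjectV4's own
// accessors; the concrete field values below are assumed examples.
$attributes = $postObject->getFormAttributes(); // action, method, enctype
$inputs     = $postObject->getFormInputs();     // policy, signature, acl, ...

$inputs['key']           = 'Users/example.jpg'; // must match 'Users/'
$inputs['Content-Type']  = 'image/jpeg';        // must match 'image/'
$inputs['Cache-Control'] = 'max-age=31536000';  // must match 'max-age='

echo '<form action="' . $attributes['action'] . '" method="' . $attributes['method'] . '" enctype="' . $attributes['enctype'] . '">';
foreach ($inputs as $name => $value) {
    echo '<input type="hidden" name="' . htmlspecialchars($name) . '" value="' . htmlspecialchars($value) . '"/>';
}
echo '<input type="file" name="file"/><input type="submit"/></form>';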
I want to create a presigned S3 URL as mentioned here:
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-presigned-url.html
My code is quite similar to the example in that guide:
$sdk = new Aws\Sdk([
    'region'  => 'eu-west-2',
    'version' => 'latest',
]);

$s3Client = $sdk->createS3();

$cmd = $s3Client->getCommand('GetObject', [
    'Bucket' => 'books.com',
    'Key'    => 'testKey'
]);

$request = $s3Client->createPresignedRequest($cmd, '+20 minutes');

// Get the actual presigned URL
$presignedUrl = (string) $request->getUri();
The above generates urls like so:
https://s3.eu-west-2.amazonaws.com/books.com/testKey?X-Amz-Content-Sha256=....
This is as expected. However, my S3 bucket has Static Website Hosting enabled, and I use a CNAME record that allows me to use a different base URL.
Therefore I want the following URL instead:
http://books.com/my-bucket/testKey?X-Amz-Content-Sha256=....
How can I do this?
You can set the endpoint to your bucket domain name:
$sdk = new Aws\Sdk([
    'region'          => 'eu-west-2',
    'version'         => 'latest',
    'endpoint'        => 'http://books.com',
    'bucket_endpoint' => true
]);
This will generate a signed URL that looks like this:
http://books.com/testKey?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI2V4Lxxxxxxxxxxx%2F20171116%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20171116T191003Z&X-Amz-SignedHeaders=host&X-Amz-Expires=1200&X-Amz-Signature=0b735cb661b1d2e25c7f5b477d4c657f160a85aa53bee3ea91244340f6d37dee
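Putting this together with the question's own code, a sketch of the full flow under the same assumptions (bucket and key taken from the question):

$sdk = new Aws\Sdk([
    'region'          => 'eu-west-2',
    'version'         => 'latest',
    'endpoint'        => 'http://books.com',
    'bucket_endpoint' => true,
]);

$s3Client = $sdk->createS3();

// With bucket_endpoint set, the bucket name is not prepended to the path.
$cmd = $s3Client->getCommand('GetObject', [
    'Bucket' => 'books.com',
    'Key'    => 'testKey',
]);

$presignedUrl = (string) $s3Client->createPresignedRequest($cmd, '+20 minutes')->getUri();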
I am using the barryvdh elfinder package to display all the files and folders from my AWS S3 bucket. In elfinder's config I have defined the root as follows:
[
    'driver'     => 'Flysystem',
    'path'       => '',
    'defaults'   => array('read' => true, 'write' => true),
    'filesystem' => new \League\Flysystem\Filesystem(
        new \League\Flysystem\AwsS3v2\AwsS3Adapter(
            \Aws\S3\S3Client::factory(array(
                'key'    => 'key',
                'secret' => 'secret'
            )),
            'bucket-name'
        )
    )
]
This seems to work fine: all the files are being displayed, but the folders are not listed. If I create a folder, it shows an error message, yet the folder is actually created in the bucket; it just never shows any folders.
Can anyone help me with a solution?
Logstash version 1.5.0.1
I am trying to use the logstash s3 input plugin to download cloudfront logs and the cloudfront codec plugin to filter the stream.
I installed the cloudfront codec with bin/plugin install logstash-codec-cloudfront.
I am getting the following: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".
Here is the full error message from /var/logs/logstash/logstash.log
{:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}
My logstash config file: /etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "cloudfront"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
I'm successfully using a similar s3 input stream to get my CloudTrail logs into Logstash, based on the answer from a Stack Overflow post.
CloudFront logfile from s3 (I only included the header from the file):
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type
The header looks like it is basically in the correct format, based on lines 26-29 of cloudfront_spec.rb in the cloudfront plugin's GitHub repo and the official AWS CloudFront Access Logs docs.
Any ideas? Thanks!
[UPDATE 9/23/2015]
Based on this post I tried using the gzip_lines codec plugin, installed with bin/plugin install logstash-codec-gzip_lines, and parsing the file with a filter; unfortunately I am getting the exact same error. It looks like the issue is the log file's first character being #.
For the record, here is the new attempt, including an updated pattern for parsing the cloudfront logfile due to four new fields:
/etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "gzip_lines"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  grok {
    type => "cloudfront"
    pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
  }
  mutate {
    type => "cloudfront"
    add_field => [ "listener_timestamp", "%{date} %{time}" ]
  }
  date {
    type => "cloudfront"
    match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
  }
}
(This question should probably be marked as a duplicate, but until then I'll copy my answer to the same question on ServerFault.)
I had the same issue; changing from
codec => "gzip_lines"
to
codec => "plain"
in the input fixed it for me. It looks like the S3 input automatically uncompresses gzip files: https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
FTR, here is the full config that is working for me:
input {
  s3 {
    bucket => "[BUCKET NAME]"
    delete => false
    interval => 60 # seconds
    prefix => "CloudFront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "plain"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  if [type] == "cloudfront" {
    if ( ("#Version: 1.0" in [message]) or ("#Fields: date" in [message]) ) {
      drop {}
    }
    grok {
      match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
    }
    mutate {
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "listener_timestamp", "%{date} %{time}" ]
    }
    date {
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
    }
    date {
      locale => "en"
      timezone => "UCT"
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
  }
}