Livewire "Content Too Large" error: large file not uploading - laravel-livewire

I am new to Livewire and I want to upload an image larger than 15 MB.
I have gone through the Livewire documentation, which says you can change the upload file size:
'temporary_file_upload' => [
    'disk' => null,        // Example: 'local', 's3'              Default: 'default'
    'rules' => ['required', 'file', 'max:1024000'], // Example: ['file', 'mimes:png,jpg']  Default: ['required', 'file', 'max:12288'] (12MB)
    'directory' => null,   // Example: 'tmp'                      Default: 'livewire-tmp'
    'middleware' => null,  // Example: 'throttle:5,1'             Default: 'throttle:60,1'
    'preview_mimes' => [   // Supported file types for temporary pre-signed file URLs.
        'png', 'gif', 'bmp', 'svg', 'wav', 'mp4',
        'mov', 'avi', 'wmv', 'mp3', 'm4a',
        'jpg', 'jpeg', 'mpga', 'webp', 'wma',
    ],
    'max_upload_time' => 60, // Max duration (in minutes) before an upload gets invalidated.
],
But the large file still does not upload.

I had the same issue. Changes in config/livewire.php aren't enough; you also need to change php.ini. In php.ini, find and set:
post_max_size = 16M
upload_max_filesize = 16M
and then restart the php-fpm service (or restart your machine). That fixed it for me.
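To confirm the new limits are actually in effect after the restart, a quick check like this may help (a sketch; the route path is arbitrary):
// Temporary route (sketch) to verify the effective PHP limits after restarting php-fpm.
Route::get('/check-limits', function () {
    return [
        'upload_max_filesize' => ini_get('upload_max_filesize'),
        'post_max_size'       => ini_get('post_max_size'),
    ];
});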
P.S. If you don't know where php.ini is, create a route, for example:
Route::get('/test/page', function () {
    dd(phpinfo());
});
and look for "Configuration File (php.ini) Path".
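Also worth noting: raising the config/livewire.php and php.ini limits still leaves any per-component validation in place, so the component's own rules need to allow the larger file as well. A minimal sketch (the component and property names here are hypothetical, not from the question):
use Livewire\Component;
use Livewire\WithFileUploads;

class UploadPhoto extends Component
{
    use WithFileUploads;

    public $photo;

    public function save()
    {
        // 16384 KB = 16 MB; must stay within post_max_size / upload_max_filesize.
        $this->validate([
            'photo' => 'required|image|max:16384',
        ]);

        $this->photo->store('photos');
    }

    public function render()
    {
        return view('livewire.upload-photo');
    }
}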

Related

AWS Transcription and Content Redaction

I wondered if someone could help.
I am using the PHP AWS SDK v3 (just updated to the latest version), and it works as it should: I can send an MP3 and it returns a JSON file with the transcription.
However, I want to use Content Redaction, and no matter what I try I can't seem to get it to work. I don't get any errors; my code looks like this:
$transcribe->startTranscriptionJob([
    'ContentRedaction' => [
        'RedactionType' => 'PII',
        'RedactionOutput' => 'redacted'
    ],
    'LanguageCode' => 'lang',
    'Media' => [
        'MediaFileUri' => 'someurl',
    ],
    'MediaFormat' => 'file',
    'OutputBucketName' => 'bucket_name',
    'Settings' => [
        'ChannelIdentification' => true,
        'SplitChannelTranscription' => true,
    ],
    'TranscriptionJobName' => 'output_filename'
]);
I don't get any errors; it just transcribes the file without Content Redaction.
From their docs:
"To enable content redaction using the API, complete the request parameters of the ContentRedaction object in the StartTranscriptionJob operation. See the request syntax for the StartTranscriptionJob action for more information. To see if content redaction has been enabled for a particular transcription job, use GetTranscriptionJob. To see which jobs have content redaction enabled, use ListTranscriptionJobs."

Save AWS Polly mp3 file to S3

I am trying to send some text to AWS Polly to convert it to speech and then save the resulting mp3 file to S3. That part seems to work now.
// Send text to AWS Polly
$client_polly = new Aws\Polly\PollyClient([
    'region' => 'us-west-2',
    'version' => 'latest',
    'credentials' => [
        'key' => $aws_useKey,
        'secret' => $aws_secret,
    ]
]);

$text = 'Test. Test. This is a sample text to be synthesized.';
$voice = 'Matthew';

$result_polly = $client_polly->startSpeechSynthesisTask([
    'Text' => $text,
    'TextType' => 'text',
    'OutputFormat' => 'mp3',
    'OutputS3BucketName' => $aws_bucket,
    'OutputS3KeyPrefix' => 'files/audio/',
    'VoiceId' => $voice,
    'ACL' => 'public-read'
]);

echo $result_polly['ObjectURL'];
I'm also trying to accomplish a couple of other things:
1. Make the mp3 file publicly accessible. Currently I have to go to the AWS console and click the "Make Public" button; it seems that 'ACL' => 'public-read' doesn't work for me.
2. Return the full URL of the mp3 file. For some reason $result_polly['ObjectURL']; doesn't get any value.
What am I missing?
There is no ACL field in the StartSpeechSynthesisTask call:
$result = $client->startSpeechSynthesisTask([
    'LanguageCode' => 'arb|cmn-CN|cy-GB|da-DK|de-DE|en-AU|en-GB|en-GB-WLS|en-IN|en-US|es-ES|es-MX|es-US|fr-CA|fr-FR|is-IS|it-IT|ja-JP|hi-IN|ko-KR|nb-NO|nl-NL|pl-PL|pt-BR|pt-PT|ro-RO|ru-RU|sv-SE|tr-TR',
    'LexiconNames' => ['<string>', ...],
    'OutputFormat' => 'json|mp3|ogg_vorbis|pcm', // REQUIRED
    'OutputS3BucketName' => '<string>', // REQUIRED
    'OutputS3KeyPrefix' => '<string>',
    'SampleRate' => '<string>',
    'SnsTopicArn' => '<string>',
    'SpeechMarkTypes' => ['<string>', ...],
    'Text' => '<string>', // REQUIRED
    'TextType' => 'ssml|text',
    'VoiceId' => 'Aditi|Amy|Astrid|Bianca|Brian|Carla|Carmen|Celine|Chantal|Conchita|Cristiano|Dora|Emma|Enrique|Ewa|Filiz|Geraint|Giorgio|Gwyneth|Hans|Ines|Ivy|Jacek|Jan|Joanna|Joey|Justin|Karl|Kendra|Kimberly|Lea|Liv|Lotte|Lucia|Mads|Maja|Marlene|Mathieu|Matthew|Maxim|Mia|Miguel|Mizuki|Naja|Nicole|Penelope|Raveena|Ricardo|Ruben|Russell|Salli|Seoyeon|Takumi|Tatyana|Vicki|Vitoria|Zeina|Zhiyu', // REQUIRED
]);
Therefore, you will either need to make another call to Amazon S3 to change the ACL of the object, or use an Amazon S3 Bucket Policy to make the bucket (or a path within the bucket) public.
The output location is given in the OutputUri field (NOT OutputUrl -- URI vs URL).
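For illustration, a sketch of both follow-up steps, assuming the same credentials and that the synthesis task has finished (the key derivation and variable names are mine, not from the answer):
// 1. Fetch the task to read its OutputUri (the field the answer points to).
$task = $client_polly->getSpeechSynthesisTask([
    'TaskId' => $result_polly['SynthesisTask']['TaskId'],
]);
$outputUri = $task['SynthesisTask']['OutputUri'];
echo $outputUri;

// 2. Make the object public with a separate S3 call (or use a bucket policy instead).
// Assumes a path-style OutputUri (https://s3.<region>.amazonaws.com/<bucket>/<key>).
$path = parse_url($outputUri, PHP_URL_PATH);
$key  = ltrim(substr($path, strlen('/' . $aws_bucket)), '/');

$client_s3 = new Aws\S3\S3Client([
    'region'      => 'us-west-2',
    'version'     => 'latest',
    'credentials' => ['key' => $aws_useKey, 'secret' => $aws_secret],
]);
$client_s3->putObjectAcl([
    'Bucket' => $aws_bucket,
    'Key'    => $key,
    'ACL'    => 'public-read',
]);
Note that the MP3 may not exist yet until the task's TaskStatus reports "completed".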

How to resolve Recoverable fatal error: Object of class Drupal\Core\Link could not be converted to string in Drupal\Component\Utility\Xss::filter?

I'm following the How to Configure SimpleSAMLphp for Drupal 8 on Acquia instructions. I'm at the bottom where it says, "SimpleSAMLphp_auth module settings. I personally recommend to store configuration for SimpleSAMLphp_auth module settings in settings.php." Once I copied the code from that snippet to my settings.php file (pasted at the bottom) and pushed it to Acquia, I got this error when I tried to log in via the dev.mysite.com/user URL.
The website encountered an unexpected error. Please try again later. Recoverable fatal error: Object of class Drupal\Core\Link could not be converted to string in Drupal\Component\Utility\Xss::filter() (line 67 of core/lib/Drupal/Component/Utility/Xss.php).
The code shown below is what I have in my settings.php file.
$config['simplesamlphp_auth.settings'] = [
  // Basic settings.
  'activate' => TRUE, // Enable or disable SAML login.
  'auth_source' => 'default-sp',
  'login_link_display_name' => 'Login with your SSO account',
  'register_users' => TRUE,
  'debug' => FALSE,
  // Local authentication.
  'allow' => [
    'default_login' => TRUE,
    'set_drupal_pwd' => TRUE,
    'default_login_users' => '',
    'default_login_roles' => [
      'authenticated' => FALSE,
      'administrator' => 'administrator',
    ],
  ],
  'logout_goto_url' => '',
  // User info and syncing.
  // `unique_id` is specified in Transient format, otherwise this should be `UPN`.
  // Please talk to your SSO administrators about which format you should be using.
  'unique_id' => 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn',
  'user_name' => 'uid',
  'mail_attr' => 'mail',
  'sync' => [
    'mail' => FALSE,
    'user_name' => FALSE,
  ],
];
If I comment out this whole block of code in my settings.php file, I can log in to my dev.mysite.com/user Drupal site. One other thing I'm not clear on: do I check the "Activate authentication via SimpleSAMLphp" option first and then copy the code snippet to my settings.php file and push to Acquia, or the other way around?
Any help is much appreciated.
It seems that updating to version 8.x-3.0-rc2 resolves the error above. However, it looks like it introduces another issue: "This site can't be reached", with the site redirected to port 80 instead.

elfinder not displaying directories of AWS S3 with Laravel

I am using the barryvdh elfinder package to display all the files and folders from my AWS S3 bucket. In elfinder's config I have defined the root as follows:
[
    'driver' => 'Flysystem',
    'path' => '',
    'defaults' => array('read' => true, 'write' => true),
    'filesystem' => new \League\Flysystem\Filesystem(
        new \League\Flysystem\AwsS3v2\AwsS3Adapter(
            \Aws\S3\S3Client::factory(array(
                'key' => 'key',
                'secret' => 'secret'
            )),
            'bucket-name'
        )
    )
]
This seems to work fine and all the files are displayed, but the folders are not listed. If I create a folder, an error message is shown even though the folder is actually created in the bucket; it still doesn't show any folders.
Can anyone help me with a solution?

logstash cloudfront codec plugin: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read"

Logstash version 1.5.0.1
I am trying to use the logstash s3 input plugin to download cloudfront logs and the cloudfront codec plugin to filter the stream.
I installed the cloudfront codec with bin/plugin install logstash-codec-cloudfront.
I am getting the following: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".
Here is the full error message from /var/logs/logstash/logstash.log
{:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}
My logstash config file: /etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "cloudfront"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
I'm successfully using a similar s3 input stream, based on an answer to a Stack Overflow post, to get my CloudTrail logs into logstash.
CloudFront logfile from s3 (I only included the header from the file):
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type
The header looks like it is basically the correct format based on lines 26-29 from the cloudfront plugin github repo cloudfront_spec.rb
and the official AWS CloudFront Access Logs docs.
Any ideas? Thanks!
[UPDATE 9/23/2015]
Based on this post I tried using the gzip_lines codec plugin, installed with bin/plugin install logstash-codec-gzip_lines, and parsing the file with a filter; unfortunately I am getting the exact same error. It looks like it is an issue with the first character of the log file being #.
For the record, here is the new attempt, including an updated pattern for parsing the CloudFront log file, which now has four new fields:
/etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "gzip_lines"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  grok {
    type => "cloudfront"
    pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
  }
  mutate {
    type => "cloudfront"
    add_field => [ "listener_timestamp", "%{date} %{time}" ]
  }
  date {
    type => "cloudfront"
    match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
  }
}
(This question should probably be marked as a duplicate, but until then I'll copy my answer to the same question on Server Fault.)
I had the same issue. Changing from
codec => "gzip_lines"
to
codec => "plain"
in the input fixed it for me. It looks like the S3 input automatically uncompresses gzip files: https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
For the record, here is the full config that is working for me:
input {
  s3 {
    bucket => "[BUCKET NAME]"
    delete => false
    interval => 60 # seconds
    prefix => "CloudFront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "plain"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  if [type] == "cloudfront" {
    if ( ("#Version: 1.0" in [message]) or ("#Fields: date" in [message]) ) {
      drop {}
    }
    grok {
      match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
    }
    mutate {
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "listener_timestamp", "%{date} %{time}" ]
    }
    date {
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
    }
    date {
      locale => "en"
      timezone => "UCT"
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
  }
}