I am using Amazon SES to send emails from a custom PHP project, and I am facing a couple of issues.
1) Amazon SES only lets me send small PDF files. Where can I change the file size limit? I am unable to find it.
2) Amazon SES only seems to allow PDF files to be sent. Whenever I try to send any other file type, it says "illegal file name". Please tell me how to fix this.
Thanks in advance.
Any help would be highly appreciated.
The AWS SES message size limit is 10 MB. It allows PDFs and many other file types, but there are restrictions.
You can read more here: http://aws.amazon.com/ses/faqs/#49
If you need to send a restricted file type, you can rename the file before it goes out, but the recipient would have to know enough to rename it back when it arrives (which is a pain), so I use a backup SMTP server in those cases.
While the default is 10 MB, as of 2021 it is possible to request that Amazon increase your maximum message size to up to 40 MB, as per https://aws.amazon.com/ses/faqs/#49.
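In case it helps to see the moving parts, here is a minimal sketch of sending an attachment through SES using Python and boto3 (the question's project is PHP, so treat this purely as an illustration of the flow; the region, addresses and file name are placeholders). The size limit applies to the whole encoded message, not per attachment, and it cannot be raised in code beyond your account's maximum; a blocked attachment type (for example an executable) is rejected by SES no matter which SDK you use.

    # Sketch only: build a raw MIME message with an attachment and send it via
    # SendRawEmail. SES rejects the message if it exceeds the account's size
    # limit or if the attachment has a blocked type.
    import boto3
    from email.mime.application import MIMEApplication
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    ses = boto3.client("ses", region_name="us-east-1")   # region is an assumption

    msg = MIMEMultipart()
    msg["Subject"] = "Report"
    msg["From"] = "sender@example.com"                   # must be a verified identity
    msg["To"] = "recipient@example.com"
    msg.attach(MIMEText("Please find the report attached.", "plain"))

    with open("report.pdf", "rb") as f:                  # any non-blocked file type works
        part = MIMEApplication(f.read())
    part.add_header("Content-Disposition", "attachment", filename="report.pdf")
    msg.attach(part)

    raw = msg.as_bytes()
    if len(raw) > 10 * 1024 * 1024:                      # default limit; more only after a quota increase
        raise ValueError("Encoded message exceeds the SES message size limit")

    ses.send_raw_email(RawMessage={"Data": raw})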
In AWS, what is the file size limit when adding a file from the command line?
I am fetching the schema DDL using dbms_metadata.fetch and trying to add it as a file in AWS CodeCommit using the PutFile REST API: https://docs.aws.amazon.com/codecommit/latest/APIReference/API_PutFile.html
For larger schemas (> 60 KB), everything runs without any error, but when I look in the AWS console I do not see the file I created, which means the file is not actually getting created.
Any idea how I can overcome this?
The limits are described on the Quota page for AWS CodeCommit. For individual files the limit is 6 MB, so you should have received an error message if you were trying to upload a file larger than this. Below is the error from the CLI, but it will be similar when using the API directly.
An error occurred (FileContentSizeLimitExceededException) when calling the PutFile operation: The maximum file size for adding a file from the AWS CodeCommit console or using the PutFile API is 6 MB. For files larger than 6 MB but smaller than 2 GB, use a Git client.
Or, via the console, you will see a similar error message.
If you're saying though that the operation was successful, but you're not seeing the file in CodeCommit, the problem is probably not related to the file size.
Please check if you've followed the right Git procedures for committing and pushing the file. And make sure that you're viewing the same branch as the one that you've pushed the file to.
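If you are calling the API from a script, it is also worth checking the response and exceptions explicitly rather than assuming success. Below is a minimal boto3 sketch (the question uses the REST API directly, but the call shape is the same; the repository, branch and file names are placeholders) that refuses files over the 6 MB PutFile limit up front and surfaces any service error:

    import boto3
    from botocore.exceptions import ClientError

    codecommit = boto3.client("codecommit")
    REPO, BRANCH, PATH = "my-repo", "main", "ddl/schema.sql"   # placeholders

    with open("schema.sql", "rb") as f:
        content = f.read()

    if len(content) > 6 * 1024 * 1024:
        raise ValueError("PutFile only accepts files up to 6 MB; use a Git client for larger files")

    # PutFile needs the current tip of the branch as the parent commit.
    parent = codecommit.get_branch(repositoryName=REPO, branchName=BRANCH)["branch"]["commitId"]

    try:
        resp = codecommit.put_file(
            repositoryName=REPO,
            branchName=BRANCH,
            filePath=PATH,
            fileContent=content,
            parentCommitId=parent,
            commitMessage="Add schema DDL",
        )
        print("Committed", resp["commitId"])
    except ClientError as err:
        # e.g. FileContentSizeLimitExceededException, SameFileContentException, ...
        print("PutFile failed:", err)
        raise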
I am trying to build a video/audio/image upload feature for a mobile application. Currently we have set the file size limit to 1 GB for video and 50 MB for audio and images. These uploaded files will be stored in an S3 bucket, and we will use the Amazon CloudFront CDN to serve them to users.
I want to compress/optimize the size of the media content using some AWS service after it is stored in the S3 bucket. Ideally, I would also like to put some restrictions on the output files, e.g. no video file should be greater than 200 MB or have a resolution above 720p. Can someone please advise which AWS service I should use, with some helpful links if available? Thanks
The AWS Elemental MediaConvert service transcodes files on demand. The service supports output templates which can specify output parameters, including resolution, so guaranteeing a 720p maximum resolution is simple.
Amazon S3 supports event notifications that can trigger other AWS actions, such as running a Lambda function when a new file arrives in a bucket. The Lambda function can load and customize a job template, then submit a transcoding job to MediaConvert to transcode the newly arrived file (a sketch of such a function follows). See https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html for details.
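As a rough illustration of that flow, here is a sketch of a Lambda handler (Python/boto3) that reacts to the S3 upload event and submits a MediaConvert job from a pre-built job template; the template name, role ARN and output settings are assumptions, and the 720p/bitrate constraints would live in the template itself:

    import urllib.parse
    import boto3

    # MediaConvert uses per-account endpoints; look one up once per container.
    mc_endpoint = boto3.client("mediaconvert").describe_endpoints(MaxResults=1)["Endpoints"][0]["Url"]
    mediaconvert = boto3.client("mediaconvert", endpoint_url=mc_endpoint)

    JOB_TEMPLATE = "compress-to-720p-qvbr"                        # assumed template name
    ROLE_ARN = "arn:aws:iam::123456789012:role/MediaConvertRole"  # assumed role

    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            mediaconvert.create_job(
                JobTemplate=JOB_TEMPLATE,
                Role=ROLE_ARN,
                Settings={
                    # Only the input is overridden here; output groups, QVBR
                    # settings, max bitrate and the 720p cap come from the template.
                    "Inputs": [{"FileInput": f"s3://{bucket}/{key}"}]
                },
            )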
Limiting the size of an output file is not currently a feature within MediaConvert, but you could leverage other AWS tools to do this. Checking the size of a transcoded output could be done with another Lambda function when the output file arrives in a certain bucket (see the sketch below). This second Lambda function could then decide to re-transcode the input file with more aggressive job settings (higher compression, a different codec, time clipping, etc.) in order to produce a smaller output file.
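A sketch of that follow-up check, again as a Lambda triggered by the output bucket (the 200 MB budget and the delete-on-oversize reaction are assumptions; re-submitting the source with lower-bitrate settings is the other obvious reaction):

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")
    MAX_BYTES = 200 * 1024 * 1024  # 200 MB budget from the question

    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            size = record["s3"]["object"]["size"]  # S3 event notifications include the object size
            if size > MAX_BYTES:
                print(f"{key} is {size} bytes, over budget; removing it")
                s3.delete_object(Bucket=bucket, Key=key)
                # ...or re-queue the source file here for a more aggressive transcode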
Since file size is a factor for you, I recommend using QVBR or VBR rate control with a maximum bitrate cap, which lets you better predict the worst-case file size at a given quality, duration & bitrate. You can allocate your '200 MB' per-file budget in different ways. For example, you could make 800 seconds (~13 min) of 2 Mbps video, or 1600 seconds (~26 min) of 1 Mbps video, et cetera. You may want to consider several quality tiers, or have your job-assembly Lambda function do the math for you based on the input file duration, which could be determined using mediainfo, ffprobe or other utilities.
FYI there are three ways customers can obtain help with AWS solution design and implementation:
[a] AWS Paid Professional Services - There is a large global AWS ProServices team able to help via paid service engagements.
The fastest way to start this dialog is by submitting the AWS Sales team 'contact me' form found here, and specifying 'Sales Support' : https://aws.amazon.com/contact-us/
[b] AWS Certified Consulting Partners -- AWS certified partners with expertise in many verticals. See search tool & listings here: https://iq.aws.amazon.com/services
[c] AWS Solutions Architects -- these services are focused on Enterprise-level AWS accounts. The Sales contact form in item [a] is the best way to engage them. Purchasing AWS Enterprise Support entitles the customer to a dedicated TAM/SA combination.
I have seen a lot of Apache Beam examples where you read data from Pub/Sub and write to a GCS bucket; however, is there any example of using KafkaIO and writing to a GCS bucket?
Specifically, one where I can parse the message and put it in the appropriate bucket based on the message content?
For example:
message = {type="type_x", some other attributes....}
message = {type="type_y", some other attributes....}
type_x --> goes to bucket x
type_y --> goes to bucket y
My use case is streaming data from Kafka to a GCS bucket, so if someone can suggest a better way to do it in GCP, that is welcome too.
Thanks.
Regards,
Anant.
You can use Secor to load messages into a GCS bucket. Secor is also able to parse incoming messages and put them under different paths in the same bucket.
You can take a look at the example present here - https://github.com/0x0ece/beam-starter/blob/master/src/main/java/com/dataradiant/beam/examples/StreamWordCount.java
Once you have read the data elements, if you want to write to multiple destinations based on a specific data value, you can use multiple outputs with TupleTagList; the details can be found here: https://beam.apache.org/documentation/programming-guide/#additional-outputs
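The linked example and TupleTagList are for the Java SDK. If you are on the Python SDK instead, the same routing can be sketched with tagged outputs; ReadFromKafka is a cross-language transform, the broker/topic/bucket names below are placeholders, and streaming file writes generally need windowing (or fileio.WriteToFiles), so treat this as a starting point rather than a finished pipeline:

    import json

    import apache_beam as beam
    from apache_beam.io.kafka import ReadFromKafka
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms.window import FixedWindows


    class RouteByType(beam.DoFn):
        # Route each Kafka record to an output tag based on its "type" field.
        def process(self, kv):
            _key, value = kv                      # ReadFromKafka yields (key, value) byte pairs
            record = json.loads(value.decode("utf-8"))
            if record.get("type") == "type_x":
                yield beam.pvalue.TaggedOutput("type_x", record)
            else:
                yield beam.pvalue.TaggedOutput("type_y", record)


    def run():
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as p:
            routed = (
                p
                | "ReadKafka" >> ReadFromKafka(
                    consumer_config={"bootstrap.servers": "broker:9092"},
                    topics=["events"],
                )
                | "Route" >> beam.ParDo(RouteByType()).with_outputs("type_x", "type_y")
            )
            for tag, bucket in [("type_x", "gs://bucket-x/events"),
                                ("type_y", "gs://bucket-y/events")]:
                (
                    routed[tag]
                    | f"Window_{tag}" >> beam.WindowInto(FixedWindows(60))
                    | f"ToJson_{tag}" >> beam.Map(json.dumps)
                    | f"Write_{tag}" >> beam.io.WriteToText(f"{bucket}/part")
                )


    if __name__ == "__main__":
        run()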
I'm trying to find reports on my Amazon S3 usage, but Amazon only provides a simple summary of the usage, such as the amount of storage/transfer in a particular month. I need a breakdown of this data by file, for example:
abc.mp3 : 123 GET request / 0.12Mb transferred
hello.mp4 : 345 GET request / 0.32Mb transferred
fun.docx : 834 GET request / 0.20Mb transferred
Also, I need to know where these GET requests are coming from, so I can better monitor and control the S3 usage. For example:
abc.mp3:
53 GET request from http://www.example.com/music/page1.html
70 GET request from http://www.example.com/music/page2.html
Any tools/methods for achieving this? Thanks!
You need to enable S3 access logs: http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
Then you should be able to parse the logs for the information you want. Once you start getting logs, there are many options for parsing them; here are a few that I found with a quick search (or you can roll your own, as in the sketch after this list):
s3stat
s3-logs-analyzer
Loggly S3 support
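If you prefer a quick home-grown report, the access logs are plain space-delimited text, so a short script can aggregate GET counts, bytes transferred and referrers per key. A sketch in Python (the field positions follow the classic log format; newer log lines append extra fields at the end, which this ignores):

    import re
    import sys
    from collections import Counter, defaultdict

    # Each log field is either a [bracketed] timestamp, a "quoted" string, or a bare token.
    FIELD = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

    gets = Counter()                      # key -> number of GET requests
    bytes_sent = Counter()                # key -> total bytes sent
    referrers = defaultdict(Counter)      # key -> referrer -> count

    for line in sys.stdin:                # e.g. cat logs/* | python report.py
        f = FIELD.findall(line)
        if len(f) < 16 or f[6] != "REST.GET.OBJECT":
            continue
        key = f[7]
        gets[key] += 1
        if f[11] != "-":                  # bytes sent may be '-' for 304s etc.
            bytes_sent[key] += int(f[11])
        referrers[key][f[15].strip('"')] += 1

    for key, count in gets.most_common():
        print(f"{key}: {count} GET requests / {bytes_sent[key] / 1e6:.2f} MB transferred")
        for ref, n in referrers[key].most_common(5):
            print(f"    {n} from {ref}")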
I know of limiting the upload size of an object using this method: http://doc.s3.amazonaws.com/proposals/post.html#Limiting_Uploaded_Content
But I would like to know how it can be done while generating a pre-signed URL using the S3 SDK on the server side as an IAM user.
This URL from the SDK has no such option in its parameters: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
Neither does this one: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property
Please note: I already know of this answer: AWS S3 Pre-signed URL content-length, and it is NOT what I am looking for.
The V4 signing protocol offers the option to include arbitrary headers in the signature. See:
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
So, if you know the exact Content-Length in advance, you can include it in the signed URL. Based on some experiments with curl, S3 will truncate the file if you send more than is specified in the Content-Length header. Here is an example of a V4 signature with multiple headers in the signature:
http://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html
You may not be able to limit the content upload size ex ante, especially considering POST and multipart uploads. You could use AWS Lambda to create an ex-post solution: set up a Lambda function to receive notifications from the S3 bucket, have the function check the object size, and have the function delete the object or take some other action.
Here's some documentation on Handling Amazon S3 Events Using AWS Lambda.
For any other wanderers that end up on this thread: if you set the Content-Length attribute when sending the request from your client, there are a few possibilities:
The Content-Length is calculated automatically, and S3 will store up to 5 GB per file.
The Content-Length is manually set by your client, which means one of these three scenarios will occur:
The Content-Length matches your actual file size and S3 stores it.
The Content-Length is less than your actual file size, so S3 will truncate your file to fit it.
The Content-Length is larger than your actual file size, and you will receive a 400 Bad Request
In any case, a malicious user can bypass your client and manually send an HTTP request with whatever headers they want, including a much larger Content-Length than you may be expecting. Signed URLs do not protect against this! The only way is to set up a POST policy (a sketch follows below). Official docs here: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
More details here: https://janac.medium.com/sending-files-directly-from-client-to-amazon-s3-signed-urls-4bf2cb81ddc3?postPublishedType=initial
Alternatively, you can have a Lambda that automatically deletes files that are larger than expected.
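To make the POST-policy route concrete, here is a small boto3 sketch (the question above references the JavaScript SDK; this is only meant to show the shape of the policy, and the bucket, key and 10 MB limit are placeholders). S3 itself rejects an upload whose body falls outside content-length-range, so no cleanup Lambda is needed for that particular case:

    import boto3

    s3 = boto3.client("s3")

    post = s3.generate_presigned_post(
        Bucket="my-upload-bucket",                                    # placeholder bucket
        Key="uploads/video.mp4",                                      # placeholder key
        Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],   # at most 10 MB
        ExpiresIn=300,                                                # policy valid for 5 minutes
    )

    # The client then POSTs multipart/form-data to post["url"] with all of
    # post["fields"] plus the file itself, e.g. with the requests library:
    #   requests.post(post["url"], data=post["fields"],
    #                 files={"file": open("video.mp4", "rb")})
    print(post["url"])
    print(post["fields"])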