I have a Lambda function for making thumbnails from videos, but I'm running into an issue when I try to use the mediainfo module. I get the same error when I test the function:
/var/task/node_modules/mediainfo-wrapper/lib/linux64/mediainfo: Permission denied.
I changed the permissions of the whole folder before zipping it (first to 644, then to 755 and 777), but nothing changes. Could I get some kind of advice on what could be causing the issue?
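Purely as a hedged suggestion (an assumption, not a confirmed diagnosis): one commonly reported cause is that the zip tool drops the Unix mode bits, so the binary lands in /var/task without its executable bit no matter what was set beforehand. Below is a minimal sketch of forcing the mode while building the deployment zip; the source directory, output name, and binary path are placeholders.

import os
import zipfile

# Placeholder paths - adjust to your own package layout.
SOURCE_DIR = "lambda-package"
EXECUTABLE = "node_modules/mediainfo-wrapper/lib/linux64/mediainfo"

with zipfile.ZipFile("function.zip", "w") as zf:
    for root, _, files in os.walk(SOURCE_DIR):
        for name in files:
            full_path = os.path.join(root, name)
            arcname = os.path.relpath(full_path, SOURCE_DIR).replace(os.sep, "/")
            info = zipfile.ZipInfo(arcname)
            info.compress_type = zipfile.ZIP_DEFLATED
            # Store the Unix mode in the zip entry: 0o755 keeps the executable
            # bit on the mediainfo binary, 0o644 for everything else.
            mode = 0o755 if arcname == EXECUTABLE else 0o644
            info.external_attr = mode << 16
            with open(full_path, "rb") as f:
                zf.writestr(info, f.read())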
I am new to using AWS (and Stack Overflow), and I am following the tutorial for Machine Learning, where it asks you to create a bucket. However, it keeps saying "Error Access Denied" when I try to create the bucket: it lets me fill out some properties, and then still says Access Denied. I have researched this question carefully and for quite a while; many suggestions say to correct the code for "Sid", "Action", "Effect", "Allow", etc. However, I do not understand whether this is my problem, and if it is, WHERE to write this code. I will show some screenshots of what my screen shows me, and I opened up a file that shows code related to buckets. Thank you so much and I will be reading every answer carefully. I apologize.
Screenshot of my screen when attempting to create bucket
List of files when I clicked on this file named "alphaindex.h"
The likely answer to why you can't create a bucket is that your IAM user does not have the appropriate permissions to do so. Whoever controls your account can add them to your IAM user.
I would suggest at minimum the following:
s3:CreateBucket
s3:ListAllMyBuckets
s3:PutObject
s3:GetObject
s3:DeleteObject
Though I can't guarantee that those are sufficient to do what you're trying to do.
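To make the "where do I write this" part concrete, here is a rough sketch (an illustration, not an exact prescription) of attaching those actions to the user as an inline policy with boto3; the user name, policy name, and bucket name are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Placeholder names - substitute your own IAM user and bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketCreationAndListing",
            "Effect": "Allow",
            "Action": ["s3:CreateBucket", "s3:ListAllMyBuckets"],
            "Resource": "*",
        },
        {
            "Sid": "AllowObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::your-bucket-name/*",
        },
    ],
}

iam.put_user_policy(
    UserName="your-iam-user",
    PolicyName="MinimalS3Access",
    PolicyDocument=json.dumps(policy),
)

The same JSON document can also be pasted into the IAM console as an inline policy on the user, which is where the "Sid"/"Action"/"Effect" keys mentioned in the question belong.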
Alternatively, you can be granted unlimited access to a specific bucket created for you. Instructions for doing so are here:
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this one; more specifically, the S3 upload applies only to the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an 'updated' message but nothing actually happens - there's no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting my keys for the X's, obviously).
This method of running the project appears to be slightly different to the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant the API user permission to upload.
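As a hedged sketch of what such a grant could look like (the bucket name and user ARN are placeholders, and depending on your setup an IAM user policy may be the better place for it):

import json
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and IAM user ARN for the API user.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadsFromApiUser",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/grails-demo-user"},
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::your-upload-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="your-upload-bucket", Policy=json.dumps(bucket_policy))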
I know this question may have been asked multiple times, but I tried those solutions and they didn't work out. Therefore, I am asking it in a new thread in the hope of a definitive solution.
I have created an IAM user with S3 read-only permission (Get and List on all S3 resources), but when I try to access S3 from an EMR cluster using an HDFS command, it throws an "Error Code 403 Forbidden" exception for certain folders. People in other posts have answered that it is a permission issue, but I haven't found the right solution there, since my error is "Forbidden" rather than "Access Denied". The error appears only for certain folders (containing objects) inside a bucket and for certain empty folders. I also observed that if I use the native calls instead, it works normally, as follows:
Exception "Forbidden" when using s3a calls:
hdfs dfs -ls s3a://<bucketname>/<folder>
No error when using the native s3 and s3n calls:
hdfs dfs -ls s3://<bucketname>/<folder>
hdfs dfs -ls s3n://<bucketname>/<folder>
Similar behavior has also been observed for empty folders. I understand that on S3 only objects are physical files, whereas "buckets and folders" are just placeholders. However, if I create a new empty folder, the s3a call doesn't throw this exception.
P.S. - The root account's access keys bypass this exception.
I'd recommend you file a JIRA on issues.apache.org, HADOOP project, component fs/s3, with the exact Hadoop version you are using. Add the stack trace as the first comment, as that's the only way we could begin to work out what is happening.
FWIW, we haven't tested restricted permissions other than simple read-only and R/W; mixing permissions down the path is inevitably going to break things, as the client code expects to be able to HEAD, GET & LIST anything in the bucket.
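To illustrate that point (an illustration, not something the answer above prescribes): a read-only policy that plays well with s3a generally has to cover both the bucket itself, for listing, and its objects, for GET/HEAD. A sketch with a placeholder bucket name:

# A sketch of a read-only S3 policy covering both bucket-level listing
# and object-level reads; the bucket name is a placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListTheBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::your-bucket-name",
        },
        {
            "Sid": "ReadTheObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::your-bucket-name/*",
        },
    ],
}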
BTW, the Hadoop S3 clients all mock empty directories by creating 0-byte objects with a "/" suffix, e.g. "folder/", then use a HEAD on that object to probe for an empty directory. When data is added under an empty dir, the mock parent dir is DELETE-d.
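For illustration only (the bucket and prefix below are placeholders), this is the kind of zero-byte directory marker being described, created and probed by hand with boto3:

import boto3

s3 = boto3.client("s3")

# Create the kind of zero-byte "directory marker" object described above.
s3.put_object(Bucket="your-bucket-name", Key="some/empty/folder/", Body=b"")

# A HEAD request on the marker is how a client can probe for the empty directory.
s3.head_object(Bucket="your-bucket-name", Key="some/empty/folder/")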
I am searching for a specific file in an S3 bucket that has a lot of files. In my application I get a 403 Access Denied error, and with s3cmd I get a 403 (Forbidden) error if I try to get a file from the bucket. My problem is that I am not sure whether the permissions are the problem (because I can get other files) or the file simply isn't present in the bucket. I have started searching in the Amazon console interface, but I have been scrolling for hours and have not yet arrived at "4...." (I am still at "39..."), and the file I am looking for is in a folder "C03215".
So, is there a faster way to verify that the file exists in the bucket? Or is there a way to auto-scroll while doing something else (because if I do not scroll, nothing new loads)?
P.S.: I have no permission to list with s3cmd.
Regarding accelerating the scrolling in the console:
Like you, I have many thousands of objects that take an eternity to scroll through in the console.
I recently discovered, though, how to jump straight to a specific path/folder in the console, which is going to save my mouse finger and my sanity!
This only works for folders, though, not the actual leaf objects themselves.
In the URL bar of your browser when viewing a bucket you will see something like:
console.aws.amazon.com/s3/home?region=eu-west-1#&bucket=your-bucket-name&prefix=
If you append your object's path after the prefix and hit Enter, you would assume it should jump to that object, but it does nothing (in Chrome at least).
However, if you append your object's path after the prefix, hit Enter, and then hit refresh (F5), the console will reload at your specified location.
e.g.
console.aws.amazon.com/s3/home?region=eu-west-1#&bucket=your-bucket-name&prefix=development/2015-04/TestEvent/93edfcbg-5e27-42d3-a2f9-3d86a63d27f9/
There was much joy in our office when this was figured out!
The only "faster way" is to have the s3:ListBucket permission on the bucket, because, as you have noticed, S3's response to a GET request is intentionally ambiguous if you don't.
If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don’t have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
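A quick way to see this documented behavior from code rather than the console, assuming you at least have s3:GetObject on the key (the bucket and key below are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Placeholder bucket and key, e.g. something under the "C03215" folder.
bucket, key = "your-bucket-name", "C03215/your-file-name"

try:
    s3.head_object(Bucket=bucket, Key=key)
    print("Object exists")
except ClientError as e:
    status = e.response["ResponseMetadata"]["HTTPStatusCode"]
    if status == 404:
        print("Object does not exist (you have s3:ListBucket)")
    elif status == 403:
        print("Missing object or missing permission - ambiguous without s3:ListBucket")
    else:
        raise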
Also, there's no way to accelerate scrolling in the console.
Here's my setup:
I am trying to copy files from an external web server to an S3 bucket using Data Pipeline.
To do this I'm using a ShellCommandActivity, which uses a script to download the files to the output bucket specified in the pipeline. In the script I use the environment variable ${OUTPUT1_STAGING_DIR} to address the bucket. Of course, I set 'staging' to true in my pipeline.
When the script finishes, the state of the activity becomes "FAILED" with the following error:
Staging local files to S3 failed. The request signature we calculated does not match the signature you provided. Check your key and signing method
When I look in the stdout file, I can see that my script finished successfully; only the staging to the bucket did not work.
I reckon this could be a permission problem with the bucket, but I have no idea what I need to change!
I came across some discussions where people got this error because the path to the bucket was configured incorrectly, so this is how I set it in the pipeline DataNode's Directory Path:
s3://testBucket
Is this correct?
I would appreciate any help here!
The problem was the DataNode's Directory Path: it cannot be just a bucket, but HAS to be a directory inside that bucket.
Like this:
s3://testBucket/test
Great work with the error messages, Amazon!