My S3 bucket and AWS Rekognition Custom Labels model are both in us-east-1. My Lambda function, also in us-east-1, is triggered by an upload to the S3 bucket. I pasted the auto-generated Python code from the model into my Lambda function. I have even tried granting full access to my S3 bucket (allowing public access with full permissions), but when the Lambda is triggered I get this exception:
[ERROR] InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the DetectCustomLabels operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 80, in lambda_handler
label_count=show_custom_labels(model,bucket,photo, min_confidence)
File "/var/task/lambda_function.py", line 59, in show_custom_labels
response = client.detect_custom_labels(Image={'S3Object': {'Bucket': bucket, 'Name': photo}},
File "/var/runtime/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
I am printing out my bucket name and key in the logs and they look fine. My key includes a folder path (folder1/folder2/image.jpg).
How can I get past this error?
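For reference, a minimal sketch of the kind of handler described above (the model ARN, confidence value and structure here are illustrative, not the original code); note that the object key arrives URL-encoded in the S3 event notification, so keys with folder paths or special characters should be unquoted before being passed to Rekognition:

```python
import urllib.parse

import boto3

rekognition = boto3.client("rekognition")

# Illustrative ARN -- the real value comes from the Custom Labels console.
MODEL_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-project.2021-01-01T00.00.00/1234567890123"


def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # Object keys are URL-encoded in the S3 event notification.
    photo = urllib.parse.unquote_plus(record["object"]["key"])

    response = rekognition.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"S3Object": {"Bucket": bucket, "Name": photo}},
        MinConfidence=50,
    )
    return len(response["CustomLabels"])
```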
I have been trying to set up an upsert job in AWS Glue, which uses PySpark to create and update tables in the data lake catalog database (in Lake Formation). The job also applies LF-Tags to the resources (tables and columns).
The error I keep receiving is always about insufficient permissions, e.g.:
2022-01-18 21:11:58,381 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last):
File "/tmp/driver.py", line 186, in <module>
run()
File "/tmp/driver.py", line 182, in run
main(sys.argv)
File "/tmp/driver.py", line 173, in main
cf.process_spark(spark)
File "/tmp/spark_pyutil-0.0.1-py3-none-any.zip/spark_pyutil/conf_process.py", line 772, in process_spark
self.__write_output_spark(spark, df_udf)
File "/tmp/spark_pyutil-0.0.1-py3-none-any.zip/spark_pyutil/conf_process.py", line 543, in __write_output_spark
r = lk.add_lf_tags_to_resource(Resource=table_resource, LFTags = lf_tags)
File "/home/spark/.local/lib/python3.7/site-packages/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.AccessDeniedException: An error occurred (AccessDeniedException) when calling the AddLFTagsToResource operation: Insufficient Glue permissions to access table gene_genesymbol
I also tried unchecking the options
> Use only IAM access control for new databases
> Use only IAM access control for new tables in new databases
and adding Associate and Describe permissions to the role for all LF-Tags in Lake Formation.
In another scenario, I tried adding different policies to the IAM role and checking the IAM access control options, and that also results in insufficient permissions.
Does anybody see something off in what I'm doing?
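For reference, the failing call in the traceback has roughly this shape (a minimal sketch with illustrative database, table and tag names; the real job builds these dynamically):

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Illustrative resource and tag values; the table name is the one from the error.
table_resource = {
    "Table": {
        "DatabaseName": "my_catalog_db",
        "Name": "gene_genesymbol",
    }
}
lf_tags = [
    {"TagKey": "domain", "TagValues": ["genomics"]},
]

# The role running the job needs Lake Formation permissions on the table
# itself, in addition to Associate/Describe on the LF-Tags, for this to succeed.
response = lakeformation.add_lf_tags_to_resource(
    Resource=table_resource,
    LFTags=lf_tags,
)
print(response.get("Failures", []))
```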
I'm trying to have tags automatically added to objects after they are uploaded, following this guide:
https://heywoodonline.com/posts/Automatically%20Tagging%20Uploads%20to%20S3.html
But when the function runs I get the following error:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the PutObjectTagging operation: Access Denied
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 37, in lambda_handler
raise e
File "/var/task/lambda_function.py", line 22, in lambda_handler
response = s3.put_object_tagging(
File "/var/runtime/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name)
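For context, the call that fails at line 22 of the handler is essentially a put_object_tagging request of this shape (a stripped-down sketch with an illustrative tag, not the exact code from the guide):

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Tagging the freshly uploaded object requires s3:PutObjectTagging on the
    # object's ARN (arn:aws:s3:::bucket/key), not just on logging resources.
    response = s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": [{"Key": "uploaded-by", "Value": "lambda"}]},
    )
    return response
```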
I've checked the role that was created for the function, found in the Configuration menu:
[screenshot: config menu Role]
Editing that role brings me to this policy, to which I've added a bunch of Actions.
Anything I add to the policy seems to be ignored as I continue to get the same Access Denied error.
Other similar posts on Stack Overflow do not mention which policy needs to be edited, but when I search the roles there is only one with the name I gave it, so it has to be the one.
What am I missing?
EDIT: FIXED!
I added a new resource to the above policy and it worked as needed.
"Resource": [
"arn:aws:logs:us-east-1:367384020442:log-group:/aws/lambda/addTagPostUpload:*",
"arn:aws:s3:::*/*"
]
Your policy only mentions 'logs' resources and no S3 resources. Unless you specify which S3 resources your S3 actions apply to, it does not matter what you put in Actions. Right now the policy says you have S3 and logs action permissions on the specified CloudWatch log group and nothing else.
I'm trying to use s3fs in Python to connect to an S3 bucket. The associated credentials are saved in a profile called 'pete' in ~/.aws/credentials:
[default]
aws_access_key_id=****
aws_secret_access_key=****
[pete]
aws_access_key_id=****
aws_secret_access_key=****
This seems to work in AWS CLI (on Windows):
$>aws s3 ls s3://my-bucket/ --profile pete
PRE other-test-folder/
PRE test-folder/
But I get a permission denied error when I run what should be the equivalent code using the s3fs package in Python:
import s3fs
import requests
s3 = s3fs.core.S3FileSystem(profile = 'pete')
s3.ls('my-bucket')
I get this error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 504, in _lsdir
async for i in it:
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\paginate.py", line 32, in __anext__
response = await self._make_request(current_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<ipython-input-9-4627a44a7ac3>", line 5, in <module>
s3.ls('ma-baseball')
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 993, in ls
files = maybe_sync(self._ls, self, path, refresh=refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 97, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 68, in sync
raise exc.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 52, in f
result[0] = await future
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 676, in _ls
return await self._lsdir(path, refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 527, in _lsdir
raise translate_boto_error(e) from e
PermissionError: Access Denied
I have to assume it's not a config issue within s3 because I can access s3 through the CLI. So something must be off with my s3fs code, but I can't find a whole lot of documentation on profiles in s3fs to figure out what's going on. Any help is of course appreciated.
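One thing worth trying to narrow this down (a suggestion under assumptions, not a confirmed fix): let botocore resolve the profile through the AWS_PROFILE environment variable instead of the s3fs keyword argument, and see whether the listing then matches the CLI:

```python
import os

import s3fs

# Let botocore pick up the 'pete' profile from ~/.aws/credentials itself,
# bypassing the s3fs 'profile' keyword argument entirely. This must be set
# before the filesystem (and its underlying session) is created.
os.environ["AWS_PROFILE"] = "pete"

fs = s3fs.S3FileSystem()
print(fs.ls("my-bucket"))
```

If this works while the keyword argument does not, the problem is in how the profile is passed to s3fs rather than in the credentials themselves.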
import boto3

for bucket in boto3.resource('s3').buckets.all():
    print(bucket.name)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\boto3\resources\collection.py", line 83, in __iter__
for page in self.pages():
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\boto3\resources\collection.py", line 161, in pages
pages = [getattr(client, self._py_operation_name)(**params)]
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\botocore\client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\botocore\client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (NotSignedUp) when calling the ListBuckets operation: Your account is not signed up for the S3 service. You must sign up before you can use S3.
You've not signed up for S3. You'll need to visit the AWS Console first, sign up, and wait for the activation email. See the documentation for more details.
Take a look at your error message; the last line says you are not signed up:
botocore.exceptions.ClientError: An error occurred (NotSignedUp) when calling the ListBuckets operation: Your account is not signed up for the S3 service. You must sign up before you can use S3.
What you need to do is sign up for the S3 service, as described in the S3 documentation.
When I try to run a very simple Python script to get an object from an S3 bucket:
import boto3
s3 = boto3.resource('s3',
                    region_name="eu-east-1",
                    verify=False,
                    aws_access_key_id="QxxxxxxxxxxxxxxxxxxxxxxxxFY=",
                    aws_secret_access_key="c1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxYw==")
obj = s3.Object('3gxxxxxxxxxxs7', 'dk5xxxxxxxxxxn94')
result = obj.get()['Body'].read().decode('utf-8')
print(result)
I got an error:
$ python3 script.py
Traceback (most recent call last):
File "script.py", line 7, in <module>
result = obj.get()['Body'].read().decode('utf-8')
File "//anaconda3/lib/python3.7/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "//anaconda3/lib/python3.7/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "//anaconda3/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "//anaconda3/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError:
An error occurred (AuthorizationHeaderMalformed)
when calling the GetObject operation:
The authorization header is malformed; the authorization component
"Credential=QUtJxxxxxxxxxxxxxxxxxlZPUFY=/20191005/us-east-1/s3/aws4_request"
is malformed.
I'm not sure what could be causing it. It's worth adding that:
I don't know what the bucket's region is (don't ask why), but I tried manually connecting to all of them (by changing the default region name to each region in turn) without success.
I don't have access to the bucket configuration, or to anything in the AWS console. I just have the key ID, secret, bucket name, and object name.
An AWS-Access-Key-ID always begins with AKIA for IAM users or ASIA for temporary credentials from Security Token Service, as noted in IAM Identifiers in the AWS Identity and Access Management User Guide.
The value you're using does not appear to be either of these, since it starts with QUtJ..., so it isn't the value you should be using here. You appear to be using something that isn't an AWS-Access-Key-ID.
Not long ago I had a similar problem, because I am 90% sure this is a task from a recruitment interview ;)
For all future travelers applying to this company: these credentials are encrypted; unfortunately I forgot the encryption type, but it is surely a very common one.
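If the hint above is right that the value is simply encoded rather than a raw key, a quick sanity check looks like this (base64 is only a guess at the encoding; the value below is built from AWS's documented example key ID, not from the question):

```python
import base64

# Hypothetical value, constructed by base64-encoding AWS's documented example
# key ID "AKIAIOSFODNN7EXAMPLE" -- not the (redacted) value from the question.
candidate = "QUtJQUlPU0ZPRE5ON0VYQU1QTEU="

decoded = base64.b64decode(candidate).decode("utf-8", errors="replace")
print(decoded)                      # AKIAIOSFODNN7EXAMPLE
print(decoded.startswith("AKIA"))   # True -> looks like an IAM access key ID
```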