Below is the simple code snippet I'm trying to run; it raises an exception.
I've configured AWS credentials on my local machine, and I can describe the same stack in the AWS OpsWorks UI. Can someone help with what the reason could be?
import boto3

client = boto3.client('opsworks')
response = client.describe_stack_summary(
    StackId="6efce529-0b77-43dc-981b-ff20b906c4ae"
)
print(response)
Stack trace for the error:
Traceback (most recent call last):
  File "botoTest.py", line 9, in <module>
    StackId="6efce529-0b77-43dc-981b-ff20b906c4ae"
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/client.py", line 320, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/client.py", line 623, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the DescribeStackSummary operation: Unable to find stack with ID 6efce529-0b77-43dc-981b-ff20b906c4ae
EMR released a new cluster version today, but when I attempt to upgrade to the latest release using the contributed EMR create-job-flow operator I'm hitting:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/airflow/dags/plugins/operators/shippo_emr_operators.py", line 133, in execute
return super().execute(context)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/emr_create_job_flow.py", line 81, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/emr.py", line 88, in create_job_flow
response = self.get_conn().run_job_flow(**config)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the RunJobFlow operation: The supplied release label is invalid: emr-6.8.0.
Looking at the EMR contribution code I don't see any hard-coded values, so I'm not sure why we're hitting this error. Has the label format changed, and if so, where can I find the exact string?
EDIT: The plot thickens. If I run aws emr list-release-labels I get
NextToken: AAIAAdZ_6MGjAhReZYcOrXICLpYU98iQO_ZB3kCK65qEWRH9MrJLdi_r-alVGb1AZlnFg0vsdxRUzdBLt-SyQ3TznUBM8Ncu7n94pJVQykbWe_TapxBi2WpUkcZfRAcxYgcg6TwejeaxGKcbysA89Jc9M3vIlVQetGgY1zQESS2Dq3P9vxvsOo3xxZoTqnmOVjs24Hy1hPM8zfzoUfH7MMomXkqhU5MHZ0cG3Aee5F51LtNS0_NBge399SiDYwhz1W2RB2tAjDc=
ReleaseLabels:
- emr-6.7.0
- emr-6.6.0
- emr-6.5.0
- emr-6.4.0
Which suggests that the release label has been added to the docs but not actually rolled out to the API yet?
EMR rolls out new versions to a few regions first; you are probably trying to launch the cluster in a region where that release isn't available yet.
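One way to fail fast in the DAG is to check the requested label against what the target region actually offers before calling RunJobFlow. A sketch using the labels from the question's CLI output; in practice you would fetch them with the EMR client's list_release_labels call for your region:

```python
# Labels reported by `aws emr list-release-labels` in the question's region;
# in a real check, fetch these via boto3.client("emr").list_release_labels().
AVAILABLE_LABELS = ["emr-6.7.0", "emr-6.6.0", "emr-6.5.0", "emr-6.4.0"]

def is_release_available(label, available=AVAILABLE_LABELS):
    """True if the EMR release label is offered in the target region."""
    return label in available

print(is_release_available("emr-6.8.0"))  # False: the label isn't in this region yet
```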
I'm trying to use s3fs in python to connect to an s3 bucket. The associated credentials are saved in a profile called 'pete' in ~/.aws/credentials:
[default]
aws_access_key_id=****
aws_secret_access_key=****
[pete]
aws_access_key_id=****
aws_secret_access_key=****
This seems to work in AWS CLI (on Windows):
$>aws s3 ls s3://my-bucket/ --profile pete
PRE other-test-folder/
PRE test-folder/
But I get a permission-denied error when I use what should be equivalent code with the s3fs package in Python:
import s3fs

s3 = s3fs.core.S3FileSystem(profile='pete')
s3.ls('my-bucket')
I get this error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 504, in _lsdir
async for i in it:
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\paginate.py", line 32, in __anext__
response = await self._make_request(current_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<ipython-input-9-4627a44a7ac3>", line 5, in <module>
s3.ls('ma-baseball')
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 993, in ls
files = maybe_sync(self._ls, self, path, refresh=refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 97, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 68, in sync
raise exc.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 52, in f
result[0] = await future
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 676, in _ls
return await self._lsdir(path, refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 527, in _lsdir
raise translate_boto_error(e) from e
PermissionError: Access Denied
I have to assume it's not a config issue within s3 because I can access s3 through the CLI. So something must be off with my s3fs code, but I can't find a whole lot of documentation on profiles in s3fs to figure out what's going on. Any help is of course appreciated.
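One quick sanity check, independent of s3fs, is confirming that the profile name you pass actually exists in the shared credentials file botocore reads (on Windows that is %UserProfile%\.aws\credentials). A stdlib-only sketch:

```python
import configparser
import os

# Parse the shared credentials file the AWS SDKs read.
cfg = configparser.ConfigParser()
cfg.read(os.path.expanduser("~/.aws/credentials"))

print(cfg.sections())   # should include 'pete'
print("pete" in cfg)    # True if the section was found
```

If the section is present, the next suspect is which credentials were actually picked up: setting the AWS_PROFILE environment variable before creating the filesystem is another way to force the choice, since the CLI and s3fs both end up issuing the same ListObjectsV2 call.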
I have a question regarding the DataAccessRoleArn setting in the boto3 start_transcription_job function.
Here is my code below:
transcribe.start_transcription_job(
    TranscriptionJobName=transcriptname,
    Media={"MediaFileUri": s3_url},
    MediaFormat=file_type,
    OutputBucketName=outputbucket,
    Settings={
        'ShowSpeakerLabels': True,
        'MaxSpeakerLabels': 2
    },
    JobExecutionSettings={
        'AllowDeferredExecution': True,
        'DataAccessRoleArn': 'arn:aws:iam::358110801253:role/service-role/transcribe-role-k5easa7b'
    },
    LanguageCode=language
)
If I comment out the JobExecutionSettings portion, it works perfectly. But I want to turn on AllowDeferredExecution, which requires assigning a DataAccessRoleArn. The role I assign here has full access to Lambda and S3, but I am still receiving the error below:
[ERROR] ClientError: An error occurred (AccessDeniedException) when calling the StartTranscriptionJob operation: User: arn:aws:sts::358110801253:assumed-role/transcribe-role-k5easa7b/transcribe is not authorized to perform: iam:PassRole on resource: arn:aws:iam::358110801253:role/service-role/transcribe-role-k5easa7b
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 43, in lambda_handler
transcribe.start_transcription_job(TranscriptionJobName=transcriptname,
File "/var/runtime/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 626, in _make_api_call
raise error_class(parsed_response, operation_name)
END RequestId: 88e3bb78-60c1-42e5-a2e1-717918b6f7b9
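The error message itself names the blocker: the identity making the StartTranscriptionJob call (here, the Lambda's assumed role) needs iam:PassRole on the role named in DataAccessRoleArn; S3 and Lambda access on the passed role is not enough. A sketch of the policy statement to attach to the calling role, built as a dict for clarity, using the ARN from the error message:

```python
import json

# iam:PassRole must be granted to the *caller* (the Lambda execution role),
# scoped to the role being passed in DataAccessRoleArn.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::358110801253:role/service-role/transcribe-role-k5easa7b",
        }
    ],
}
print(json.dumps(pass_role_policy, indent=2))
```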
I am trying to deploy a simple Django application with Zappa (https://romandc.com/zappa-django-guide/) and I am getting the following error. Is there a permission issue or some other problem with the dev setup?
Traceback (most recent call last):
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 753, in deploy
function_name=self.lambda_name)
File "e:\personal\envs\py3\lib\site-packages\zappa\core.py", line 1286, in get_lambda_function
FunctionName=function_name)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetFunction operation: Function not found: arn:aws:lambda:ap-south-1:122866061462:function:frankie-dev
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 2778, in handle
sys.exit(cli.handle())
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 512, in handle
self.dispatch_command(self.command, stage)
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 549, in dispatch_command
self.deploy(self.vargs['zip'])
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 786, in deploy
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
File "e:\personal\envs\py3\lib\site-packages\zappa\core.py", line 1069, in create_lambda_function
response = self.lambda_client.create_function(**kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 586, in _make_api_call
api_params, operation_model, context=request_context)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 641, in _convert_to_request_dict
api_params, operation_model)
File "e:\personal\envs\py3\lib\site-packages\botocore\validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "Layers", must be one of: FunctionName, Runtime, Role, Handler, Code, Description, Timeout, MemorySize, Publish, VpcConfig, DeadLetterConfig, Environment, KMSKeyArn, TracingConfig, Tags
I had the exact same error trying to deploy a Flask app using Zappa, and then realized that I was using an old botocore package version. I changed all the package versions in my requirements.txt file to the ones on Zappa's GitHub page, and that fixed the issue for me!
import boto3

for bucket in boto3.resource('s3').buckets.all():
    print(bucket.name)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\boto3\resources\collection.py", line 83, in __iter__
for page in self.pages():
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\boto3\resources\collection.py", line 161, in pages
pages = [getattr(client, self._py_operation_name)(**params)]
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\botocore\client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\Users\icode\PycharmProjects\AWS\venv\lib\site-packages\botocore\client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (NotSignedUp) when calling the ListBuckets operation: Your account is not signed up for the S3 service. You must sign up before you can use S3.
You've not signed up for S3. You'll need to visit the AWS Console first, sign up, and wait for the activation email. See the documentation for more details.
Take a look at your error message; the last line says you are not signed up:
botocore.exceptions.ClientError: An error occurred (NotSignedUp) when calling the ListBuckets operation: Your account is not signed up for the S3 service. You must sign up before you can use S3.
What you need to do, per the S3 documentation, is complete the S3 sign-up from the AWS Console.