Hi, I am trying to retrieve a secret named "test" from AWS Secrets Manager in a specific region using the Python SDK (boto3), and I am getting the error below when running a simple Python script:
#!/usr/bin/env python
import base64

import boto3
from botocore.exceptions import ClientError


def get_secret():
    secret_name = "test"
    region_name = "tesdas"
    print("inside rds secret")

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
    # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
    # We rethrow the exception by default.
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
        # Decrypts secret using the associated KMS CMK.
        # Depending on whether the secret is a string or binary, one of these fields will be populated.
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
            print("RDS Secret")
            print(secret)
        else:
            decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
    except ClientError:
        # Rethrow by default, as noted in the comment above.
        raise


get_secret()
Below is the error I am getting while executing the script:
Traceback (most recent call last):
File "test.py", line 60, in <module>
get_secret()
File "test.py", line 18, in get_secret
region_name=region_name
File "/usr/lib/python2.7/site-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/usr/lib/python2.7/site-packages/botocore/session.py", line 836, in create_client
client_config=config, api_version=api_version)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 64, in create_client
service_model = self._load_service_model(service_name, api_version)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 97, in _load_service_model
api_version=api_version)
File "/usr/lib/python2.7/site-packages/botocore/loaders.py", line 132, in _wrapper
data = func(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/botocore/loaders.py", line 378, in load_service_model
known_service_names=', '.join(sorted(known_services)))
botocore.exceptions.UnknownServiceError: Unknown service: 'secretsmanager'. Valid service names are: acm, apigateway, application-autoscaling, appstream, athena, autoscaling, batch, budgets, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codebuild, codecommit, codedeploy, codepipeline, codestar, cognito-identity, cognito-idp, cognito-sync, config, cur, datapipeline, dax, devicefarm, directconnect, discovery, dms, ds, dynamodb, dynamodbstreams, ec2, ecr, ecs, efs, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, es, events, firehose, gamelift, glacier, greengrass, health, iam, importexport, inspector, iot, iot-data, kinesis, kinesisanalytics, kms, lambda, lex-models, lex-runtime, lightsail, logs, machinelearning, marketplace-entitlement, marketplacecommerceanalytics, meteringmarketplace, mturk, opsworks, opsworkscm, organizations, pinpoint, polly, rds, redshift, rekognition, resourcegroupstaggingapi, route53, route53domains, s3, sdb, servicecatalog, ses, shield, sms, snowball, sns, sqs, ssm, stepfunctions, storagegateway, sts, support, swf, waf, waf-regional, workdocs, workspaces, xray
It looks like @Narsireddy is correct: the line region_name = "tesdas" does not specify a valid region. The region should look something like "us-east-1" or "us-west-2".
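For what it's worth, here is a minimal sketch of the corrected setup, assuming the secret actually lives in us-east-1 (swap in the region where it was created). Session.get_available_regions() is also a handy check, because the UnknownServiceError above lists no secretsmanager at all, which can also mean the installed boto3/botocore is simply too old to know about the service:

import boto3

session = boto3.session.Session()

# Regions in which the installed SDK knows about Secrets Manager; if the service
# is missing entirely, the boto3/botocore install may be too old.
print(session.get_available_regions('secretsmanager'))

client = session.client(
    service_name='secretsmanager',
    region_name='us-east-1'  # assumption: replace with the secret's real region
)
print(client.get_secret_value(SecretId='test')['SecretString'])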
I have a gRPC service deployed on Google Cloud Run which I want to call from Composer.
I have assigned the roles/iam.serviceAccountTokenCreator role to the service account which my composer worker nodes are running under, and I'm not mounting any custom service key files or setting the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Using the JWT_GOOGLE authentication option in the airflow gRPC hook I get the following error:
[2022-05-31 14:20:16,082] {grpc.py:90} INFO - Calling gRPC service
[2022-05-31 14:20:16,097] {taskinstance.py:1152} ERROR - 'Credentials' object has no attribute 'signer_email'
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/airflow/airflow/providers/grpc/operators/grpc.py", line 95, in execute
for response in responses:
File "/usr/local/lib/airflow/airflow/providers/grpc/hooks/grpc.py", line 136, in run
with self.get_conn() as channel:
File "/usr/local/lib/airflow/airflow/providers/grpc/hooks/grpc.py", line 104, in get_conn
jwt_creds = google_auth_jwt.OnDemandCredentials.from_signing_credentials(credentials)
File "/opt/python3.6/lib/python3.6/site-packages/google/auth/jwt.py", line 695, in from_signing_credentials
kwargs.setdefault("issuer", credentials.signer_email)
AttributeError: 'Credentials' object has no attribute 'signer_email'
[2022-05-31 14:20:16,100] {taskinstance.py:1196} INFO - Marking task as FAILED. dag_id=example_dag, task_id=example_task, execution_date=20220531T135709, start_date=20220531T142015, end_date=20220531T142016
[2022-05-31 14:20:23,826] {local_task_job.py:102} INFO - Task exited with return code 1
Does anyone have any idea how/why my credentials aren't including the field I need?
Found a solution to this after discussing with Google Cloud: essentially, it looks like the JWT_GOOGLE authentication method isn't set up for GCE service accounts, so I went down the CUSTOM authentication route instead:
import google.auth.transport.grpc
import google.auth.transport.requests
import google.oauth2.credentials
import google.oauth2.id_token

from airflow.providers.grpc.operators.grpc import GrpcOperator


def connection_func(conn):
    """Custom connection function for gRPC authentication.

    Args:
        conn: Airflow Connection object

    Returns:
        An instantiated gRPC channel for making calls to our remote service.
    """
    request = google.auth.transport.requests.Request()
    if not str(conn.host).startswith("https://"):
        audience = f"https://{conn.host}"
    else:
        audience = conn.host
    token = google.oauth2.id_token.fetch_id_token(request, audience)
    creds = google.oauth2.credentials.Credentials(token)
    base_url = conn.host
    if conn.port:
        base_url = f"{base_url}:{conn.port}"
    channel = google.auth.transport.grpc.secure_authorized_channel(
        creds, None, base_url
    )
    return channel


return GrpcOperator(
    ...
    custom_connection_func=connection_func,
)
This uses the approach seen here to fetch an ID token for a given audience, then create a set of credentials from there and finally instantiate the gRPC secure channel for use in the operator.
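If it helps, the ID-token part can be verified on its own outside Airflow first; a small, hypothetical sketch (the Cloud Run URL below is a placeholder):

import google.auth.transport.requests
import google.oauth2.id_token

# Placeholder audience: use your actual Cloud Run service URL here.
audience = "https://my-service-abc123-uc.a.run.app"

request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(request, audience)
print(token[:20] + "...")  # just confirm a token comes back for the audience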
I have a Python Cloud Function which uses a KMS key to decrypt some authentication tokens for other services from the environment, as in https://dev.to/googlecloud/using-secrets-in-google-cloud-functions-5aem
I keep getting a 403 Permission Denied whenever I run my function. When I call the function locally on my computer it works fine. I've tried adding the "Cloud KMS CryptoKey Decrypter" role to the default Compute Engine service account but that didn't work.
Any other ideas?
Edit: here's some code that shows what I'm doing. The environment variables are stored in an environment.yaml file which I point to when I run gcloud functions deploy.
import base64
import os

import boto3
from google.cloud import kms


def decrypt_secret(key: str, secret: str):
    kms_client = kms.KeyManagementServiceClient()
    decrypted = kms_client.decrypt(key, base64.b64decode(secret))
    return decrypted.plaintext.decode("ascii")


def do_kms_stuff():
    key = os.environ["KMS_RESOURCE_NAME"]
    session = boto3.Session(
        profile_name="my-profile",
        aws_access_key_id=decrypt_secret(
            key, os.environ["AWS_ACCESS_KEY_ID_ENCRYPTED"]
        ),
        aws_secret_access_key=decrypt_secret(
            key, os.environ["AWS_SECRET_ACCESS_KEY_ENCRYPTED"]
        ),
    )
    # ...
And here's the error from the Cloud Functions console:
File "<string>", line 3, in raise_from:
google.api_core.exceptions.PermissionDenied: 403 Permission 'cloudkms.cryptoKeyVersions.useToDecrypt' denied on resource 'projects/my-project/locations/my-location1/keyRings/my-keyring/cryptoKeys/my-key' (or it may not exist).
  at error_remapped_callable (/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:59)
  at func_with_timeout (/env/local/lib/python3.7/site-packages/google/api_core/timeout.py:214)
  at retry_target (/env/local/lib/python3.7/site-packages/google/api_core/retry.py:182)
  at retry_wrapped_func (/env/local/lib/python3.7/site-packages/google/api_core/retry.py:277)
  at __call__ (/env/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py:143)
  at decrypt (/env/local/lib/python3.7/site-packages/google/cloud/kms_v1/gapic/key_management_service_client.py:1816)
  at decrypt_secret (/user_code/kms_stuff.py:17)
  at do_kms_stuff (/user_code/kms_stuff.py:48)
  at my_cloud_function (/user_code/main.py:46)
  at call_user_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py:214)
  at invoke_user_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py:217)
  at run_background_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py:383)
As @DazWilkin and @pessolato mentioned, the issue was that I was using the wrong service account. Once I changed the function to use the default AppSpot service account, everything worked smoothly.
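For anyone debugging the same thing, one way to confirm which service account a deployed function actually runs as is to ask the metadata server from inside the function. This is only a sketch of that check (it assumes the requests library is in your dependencies), not part of the original fix:

import requests


def current_service_account():
    """Return the email of the service account this function runs as."""
    resp = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/"
        "instance/service-accounts/default/email",
        headers={"Metadata-Flavor": "Google"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.text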
Application: Google App Engine Python standard environment
Purpose: Access Google APIs (not Cloud APIs) through the google-api-python-client, e.g. Sheets API v4, by using a service account and impersonating a user, because the app is supposed to act on behalf of this user (2-legged auth; the user won't be asked to grant access).
I've got a setup that runs in the production environment, but it only runs on the local development server (dev_appserver.py) if a certain environment variable is removed. I'm looking for a solution that works without adding/removing the environment variable.
The service account was created for the app and configured with domain-wide delegation (DWD) in the Admin Console. The Sheets API is turned on for this project.
Of the many quick-starts, samples, and references available, it was only after reading the Google Auth Library for Python (google-auth) documentation that I noticed the missing parts (an environment variable and the SSL library) and finally got the code running in production.
The app code will use the private key JSON file that was downloaded from Cloud Console IAM.
requirements.txt
# as suggested by almost all docs, but this isn't everything we need:
google-api-python-client==1.6.5
google-auth==1.4.0
google-auth-httplib2==0.0.3
app.yaml
env_variables:
  # enable socket support of paid app, needed for OAuth2 service-accounts
  # see google-auth documentation, v1.4.1, chapter 1.2.4
  GAE_USE_SOCKETS_HTTPLIB: true
  # some other stuff

libraries:
  # to make HTTPS calls to other services, needed for OAuth2 service-accounts
  # see google-auth documentation, v1.4.1, chapter 1.2.4
  - name: ssl
    version: latest
appengine_config.py (partial sample for Sheets API v4 access)
import os

import googleapiclient.discovery
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
APP_ROOT_DIR = os.path.abspath(os.path.dirname(__file__))
SERVICE_ACCOUNT_FILE = "service-account-private-key.json"

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES
)
# impersonate user@example.com (G Suite domain account)
credentials = credentials.with_subject('user@example.com')

service = googleapiclient.discovery.build('sheets', 'v4', credentials=credentials)
# until here, the code works in production and on the local dev server

result = service.spreadsheets().values().get(
    spreadsheetId="DOC-ID-HERE", range="A1:C5"
).execute()
# execute() will work only in production;
# on the local dev server, it will raise a ResponseNotReady exception
traceback
ERROR 2018-03-05 16:32:03,183 wsgi.py:263]
Traceback (most recent call last):
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 351, in __getattr__
self._update_configs()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 287, in _update_configs
self._registry.initialize()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 160, in initialize
import_func(self._modname)
File "/Users/user/git/project/gae/appengine_config.py", line 143, in <module>
spreadsheetId=spreadsheetId, range=rangeName).execute()
File "/Users/user/git/project/gae/_lib/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Users/user/git/project/gae/_lib/googleapiclient/http.py", line 839, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/Users/user/git/project/gae/_lib/googleapiclient/http.py", line 166, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/Users/user/git/project/gae/_lib/google_auth_httplib2.py", line 187, in request
self._request, method, uri, request_headers)
File "/Users/user/git/project/gae/_lib/google/auth/credentials.py", line 121, in before_request
self.refresh(request)
File "/Users/user/git/project/gae/_lib/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/Users/user/git/project/gae/_lib/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/Users/user/git/project/gae/_lib/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/Users/user/git/project/gae/_lib/google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1659, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1399, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1355, in _conn_request
response = conn.getresponse()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/python_std_lib/httplib.py", line 1121, in getresponse
raise ResponseNotReady()
I have figured out that if I delete GAE_USE_SOCKETS_HTTPLIB from app.yaml's env_variables list, the code will work on local development server (but not in production anymore).
Am I doing something wrong here? Could I use the same code (maybe with a small switch) for both environments, without manually adding/removing the variable from app.yaml?
Purpose: Access Google APIs (not Cloud APIs) through the google-api-python-client, e.g. Sheets API v4, ….
Here they explain that:
Private, broadcast, multicast, and Google IP ranges (except those whitelisted below), are blocked:
Google Public DNS: 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844 port 53
Gmail SMTPS: smtp.gmail.com port 465 and 587
Gmail POP3S: pop.gmail.com port 995
Gmail IMAPS: imap.gmail.com port 993
I have figured out that if I delete GAE_USE_SOCKETS_HTTPLIB from app.yaml's env_variables list, the code will work on local development server (but not in production anymore).
This is explained here:
Using sockets with the development server
You can run and test code using sockets on the development server, without using any special command line parameters.
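If a single code path for both environments is the goal, one possible building block is detecting the development server at runtime via the SERVER_SOFTWARE environment variable. This is only a sketch; whether GAE_USE_SOCKETS_HTTPLIB itself can be driven from such a runtime switch rather than from app.yaml is something you would need to verify:

import os


def is_dev_appserver():
    # On dev_appserver.py SERVER_SOFTWARE looks like "Development/2.0";
    # in production it looks like "Google App Engine/1.9.x".
    return os.environ.get('SERVER_SOFTWARE', '').startswith('Development')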
Finally, this question and the accepted answer describe a similar scenario.
Hope this helps you :-)
I'm trying to access S3 (via the s3a protocol) from PySpark (version 2.2.0) and I'm having some difficulty.
I'm using the Hadoop and AWS SDK packages.
pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2
Here is what my code looks like:
sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY)
rdd = sc.textFile('s3a://spark-test-project/large-file.csv')
print(rdd.first().show())
I get this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1361, in first
rs = self.take(1)
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1313, in take
totalParts = self.getNumPartitions()
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 385, in getNumPartitions
return self._jrdd.partitions().size()
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o34.partitions.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 32750D3DED4067BD, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: jAhO0tWTblPEUehF1Bul9WZj/9G7woaHFVxb8gzsOpekam82V/Rm9zLgdLDNsGZ6mPizGZmo6xI=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Is this a bug in the AWS Java SDK? I'm new to Spark, so I don't know if there is a way to get better logging information from AWS other than AWS Error Code: null.
For what it's worth, I have this line in my spark-defaults.conf file on AWS:
spark.jars.packages com.amazonaws:aws-java-sdk:1.11.99,org.apache.hadoop:hadoop-aws:2.7.2
I also made sure that the security group I'm using when setting up my EC2 instance has access to S3.
After those two things, I've had no issues reading files from s3:
%pyspark
df = spark.read.csv("s3a://my_bucket/name/")
Alternatively, if you use AWS EMR, you should be able to access s3 right out of the box:
%pyspark
df = spark.read.csv("s3://my_bucket/name/")
"Bad request" is the message to fear from S3, it means "This didn't work and we won't tell you why".
There's a whole section on troubleshooting S3A in the docs.
If your bucket is hosted somewhere that only supports the S3 "v4" auth protocol (Frankfurt, London, Seoul), then you need to set the fs.s3a.endpoint field to that of the specific region ... the doc has details.
Otherwise, try using s3a://landsat-pds/scene_list.gz as a source. It's a public CSV file which doesn't need authentication. If you can't see it, then you are in serious trouble.
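For reference, a rough sketch of both suggestions in the pyspark shell from the question; the endpoint value below is only an example for a v4-only region, and the landsat-pds read is an independent sanity check:

# (1) If the bucket lives in a v4-only region, point s3a at that region's endpoint.
#     "s3.eu-central-1.amazonaws.com" (Frankfurt) is just an example value.
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

# (2) Independent sanity check: read the public Landsat scene list mentioned above.
#     If even this fails, the problem is the s3a setup itself, not your bucket.
print(sc.textFile("s3a://landsat-pds/scene_list.gz").first())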
I was trying to read a parquet file from s3 using pyspark and this worked for me.
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .config('spark.master', 'local') \
    .config('spark.app.name', 's3app') \
    .config('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:3.3.4,org.apache.hadoop:hadoop-common:3.3.4') \
    .getOrCreate()
sc = spark.sparkContext
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'access-key')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'secret-key')
df = spark.read.format('parquet').load('s3a://path-to-s3')
df.show()
I am trying to pass boto3 a list of bucket names and have it first enable versioning on each bucket, then enable a lifecycle policy on each.
I have run aws configure and have two profiles, both of which are current, active user profiles with all the necessary permissions. The one I want to use is named "default".
import boto3

# Create session
s3 = boto3.resource('s3')

# Bucket list
buckets = ['BUCKET-NAME']

# iterate through list of buckets
for bucket in buckets:
    # Enable Versioning
    bucketVersioning = s3.BucketVersioning('bucket')
    bucketVersioning.enable()

    # Current lifecycle configuration
    lifecycleConfig = s3.BucketLifecycle(bucket)
    lifecycleConfig.add_rule = {
        'Rules': [
            {
                'Status': 'Enabled',
                'NoncurrentVersionTransition': {
                    'NoncurrentDays': 7,
                    'StorageClass': 'GLACIER'
                },
                'NoncurrentVersionExpiration': {
                    'NoncurrentDays': 30
                }
            }
        ]
    }

    # Configure Lifecycle
    bucket.configure_lifecycle(lifecycleConfig)

print "Versioning and lifecycle have been enabled for buckets."
When I run this I get the following error:
Traceback (most recent call last):
File "putVersioning.py", line 27, in <module>
bucketVersioning.enable()
File "/usr/local/lib/python2.7/dist-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/home/user/.local/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/user/.local/lib/python2.7/site-packages/botocore/client.py", line 557, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutBucketVersioning operation: Access Denied
My profiles have full privileges, so that shouldn't be a problem. Is there something else I need to do to pass credentials? Thanks, everyone!
To set the versioning state, you must be the bucket owner.
The above statement means that to use the PutBucketVersioning operation to enable versioning, you must be the owner of the bucket.
Use the command below to check the owner of the bucket. If you are the owner, you should be able to set the versioning state to ENABLED / SUSPENDED.
aws s3api get-bucket-acl --bucket yourBucketName
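Since the question is already using boto3, the same check can be done from Python as well; a small sketch:

import boto3

# Who owns the bucket...
print(boto3.client('s3').get_bucket_acl(Bucket='yourBucketName')['Owner'])

# ...versus the identity you are actually calling with (useful when several profiles exist).
print(boto3.client('sts').get_caller_identity())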
Ok, notionquest is correct; however, it appears I also goofed up in my code by quoting a variable:
bucketVersioning = s3.BucketVersioning('bucket')
should be
bucketVersioning = s3.BucketVersioning(bucket)
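Putting both answers together, a possible corrected version of the loop is sketched below. It switches the lifecycle part to the lower-level client call put_bucket_lifecycle_configuration, since (as far as I can tell) assigning to BucketLifecycle.add_rule as in the question does not call the API at all; treat this as a sketch rather than a drop-in replacement.

import boto3

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')

buckets = ['BUCKET-NAME']

for bucket in buckets:
    # Enable versioning (note: the loop variable, not the string 'bucket')
    s3.BucketVersioning(bucket).enable()

    # Apply lifecycle rules for noncurrent object versions
    s3_client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            'Rules': [
                {
                    'ID': 'noncurrent-to-glacier-then-expire',
                    'Filter': {'Prefix': ''},  # apply to all objects
                    'Status': 'Enabled',
                    'NoncurrentVersionTransitions': [
                        {'NoncurrentDays': 7, 'StorageClass': 'GLACIER'}
                    ],
                    'NoncurrentVersionExpiration': {'NoncurrentDays': 30},
                }
            ]
        },
    )

print("Versioning and lifecycle have been enabled for buckets.")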