I am facing an issue where my SSO session expires early when I create a session programmatically using boto3, but NOT when I use the awscli.
python version: 3.8.12
boto3 version: 1.21.46
awscli version: aws-cli/2.4.27 Python/3.8.8 Darwin/21.6.0 exe/x86_64 prompt/off
Sample boto3 code (boto3-test.py)
import boto3
session = boto3.Session(profile_name='RoleA')
sts = session.client('sts')
print(sts.get_caller_identity())
Steps to reproduce:
aws sso login --profile RoleA
aws sts get-caller-identity --profile RoleA (SUCCESS)
python boto3-test.py (SUCCESS)
WAIT FOR 1 HOUR ...
aws sts get-caller-identity --profile RoleA (SUCCESS)
python boto3-test.py (FAIL)
I have checked ~/.aws/sso/cache and ~/.aws/cli/cache; the expiresAt and Expiration values in both cache files are still valid. I expect boto3 to discover the token cache the same way the awscli does, but it seems it does not. Any clue why the two are not in sync?
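For anyone who wants to check the same caches programmatically, here is a minimal sketch. It assumes the default cache locations mentioned above, and that SSO cache entries are JSON files carrying expiresAt while CLI cache entries nest Expiration under Credentials:
import json
import pathlib

for cache_dir in ("~/.aws/sso/cache", "~/.aws/cli/cache"):
    for cache_file in pathlib.Path(cache_dir).expanduser().glob("*.json"):
        entry = json.loads(cache_file.read_text())
        # SSO entries use "expiresAt"; CLI entries nest "Expiration" under "Credentials"
        expiry = entry.get("expiresAt") or entry.get("Credentials", {}).get("Expiration")
        print(cache_file.name, expiry)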
Error from boto3-test.py
Traceback (most recent call last):
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/credentials.py", line 2056, in _get_credentials
response = client.get_role_credentials(**kwargs)
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/client.py", line 745, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.UnauthorizedException: An error occurred (UnauthorizedException) when calling the GetRoleCredentials operation: Session token not found or invalid
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 731, in _make_api_call
http, parsed_response = self._make_request(
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 751, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 107, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 180, in _send_request
request = self.create_request(request_dict, operation_model)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 120, in create_request
self._event_emitter.emit(event_name, request=request,
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 358, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 229, in emit
return self._emit(event_name, kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 212, in _emit
response = handler(**kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 95, in handler
return self.sign(operation_name, request)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 159, in sign
auth = self.get_auth_instance(**kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 239, in get_auth_instance
frozen_credentials = self._credentials.get_frozen_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 632, in get_frozen_credentials
self._refresh()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 527, in _refresh
self._protected_refresh(is_mandatory=is_mandatory_refresh)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 543, in _protected_refresh
metadata = self._refresh_using()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 684, in fetch_credentials
return self._get_cached_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 694, in _get_cached_credentials
response = self._get_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 2058, in _get_credentials
raise UnauthorizedSSOTokenError()
botocore.exceptions.UnauthorizedSSOTokenError: The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
To all: not sure what the issue was, but I resolved it by removing ~/.aws/sso and ~/.aws/cli entirely and letting them be recreated. I have also upgraded boto3 to version 1.24.90.
Things just work by themselves after that.
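For reference, the reset amounted to the following steps (paths are the default cache locations; profile name as in the steps above):
rm -rf ~/.aws/sso ~/.aws/cli
pip install --upgrade boto3
aws sso login --profile RoleA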
Related
I'm trying to get AWS's Textract table-export suggestion working, from this link.
I'm a complete newbie with AWS's solutions and with the command prompt, so I'm trying to do exactly as they suggest. I'm running it from Python, so I'm using this piece of code:
import os
k = os.system("python textract_python_table_parser.py my_pdf_file_path.pdf")
print(k)
The code runs and I get an Image loaded my_pdf_file_path.pdf, but at some point it fails on credential matters:
Traceback (most recent call last):
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 108, in <module>
main(file_name)
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 94, in main
table_csv = get_table_csv_results(file_name)
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 53, in get_table_csv_results
response = client.analyze_document(Document={'Bytes': bytes_test}, FeatureTypes=['TABLES'])
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm aware I didn't pass any credentials, and that's natural, but where should I pass them, and what would be the right syntax for that when calling the script through Python's os? Amazon's example doesn't say anything about it.
It depends on where you run your code. For example:
local computer - use the aws configure CLI command to set your credentials (see the sketch after this list)
EC2 instance - use instance role
lambda function - use lambda execution role
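If you just want the example running on a local machine, boto3 also accepts credentials directly when you create the client. A minimal sketch with placeholder values; prefer aws configure or environment variables over hardcoding for anything beyond a quick test:
import boto3

# Placeholder credentials and region; replace with your own values
client = boto3.client(
    'textract',
    region_name='us-east-1',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)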
I'm following the guide here: https://cloudprovider.dask.org/en/latest/packer.html#ec2cluster-with-rapids
In particular, I set up my instance with Packer, and am now trying to run the final piece of code:
from dask_cloudprovider.aws import EC2Cluster
from dask.distributed import Client

cluster = EC2Cluster(
    ami=pack_ami,  # AMI ID provided by Packer
    region="eu-west-2",
    docker_image="rapidsai/rapidsai:cuda10.1-runtime-ubuntu18.04-py3.8",
    instance_type="p3.2xlarge",
    bootstrap=False,
    filesystem_size=120,
)
cluster.scale(1)
client = Client(cluster)
Note that I had to add the region to stop it complaining. Unfortunately, now I get this error:
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the RunInstances operation: User data is limited to 16384 bytes
while creating the scheduler instance.
Full trace here:
Creating scheduler instance
Traceback (most recent call last):
File "tpotmodel.py", line 124, in <module>
main()
File "tpotmodel.py", line 83, in main
bootstrap=False,
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/dask_cloudprovider/aws/ec2.py", line 474, in __init__
super().__init__(**kwargs)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/dask_cloudprovider/generic/vmcluster.py", line 284, in __init__
super().__init__(**kwargs, security=self.security)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/deploy/spec.py", line 281, in __init__
self.sync(self._start)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/deploy/cluster.py", line 189, in sync
return sync(self.loop, func, *args, **kwargs)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/utils.py", line 340, in sync
raise exc.with_traceback(tb)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/utils.py", line 324, in f
result[0] = yield future
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/tornado/gen.py", line 762, in run
value = future.result()
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/dask_cloudprovider/generic/vmcluster.py", line 324, in _start
await super()._start()
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/deploy/spec.py", line 309, in _start
self.scheduler = await self.scheduler
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/distributed/deploy/spec.py", line 71, in _
await self.start()
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/dask_cloudprovider/generic/vmcluster.py", line 86, in start
ip = await self.create_vm()
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/dask_cloudprovider/aws/ec2.py", line 139, in create_vm
response = await client.run_instances(**vm_kwargs)
File "/home/simon/.conda/envs/tpot-cuml/lib/python3.7/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the RunInstances operation: User data is limited to 16384 bytes
I'm on Arch with conda, if that changes anything.
The Dask community is tracking this problem in github.com/dask/dask-cloudprovider/issues/249, and a potential solution is github.com/dask/distributed/pull/4465, which should resolve the issue.
We have a Glue Python Shell pipeline job trying to invoke AWS AppFlow through the boto3 APIs. The challenge is that we need to design our solution without internet access within Glue.
Previously we had challenges upgrading the boto3 APIs in AWS Glue Shell to the latest version without internet access, but we found a solution and documented it in the following link. In our architecture, we use AWS Python Shell as our lightweight data-pipeline engine, leveraging the boto3 APIs:
Git Glue Boto3 Bug & Solution
The following AppFlow API Python code works perfectly in our local Jupyter notebooks, where the AppFlow API is invoked over the internet.
# Extra code to update the boto3 version, as per the link above, is omitted
import boto3

print(boto3.__version__)
client = boto3.client('appflow')
response = client.describe_flow(
    flowName='sf_non_pii'
)
But when we use the same code in AWS Glue, it fails because it cannot access the internet:
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://appflow.us-east-2.amazonaws.com/describe-flow"
Please find the full logs below:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/botocore/httpsession.py", line 263, in send
chunked=self._chunked(request.headers),
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.6/site-packages/urllib3/util/retry.py", line 386, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.6/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 167, in _new_conn
% (self.host, self.timeout),
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f5f8802c0f0>, 'Connection to appflow.us-east-2.amazonaws.com timed out. (connect timeout=60)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 123, in <module>
runpy.run_path(temp_file_path, run_name='__main__')
File "/usr/local/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/local/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/glue-python-scripts-a3cgmro3/ref_appflow.py", line 44, in <module>
File "/tmp/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/tmp/botocore/client.py", line 663, in _make_api_call
operation_model, request_dict, request_context)
File "/tmp/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/tmp/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/tmp/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/tmp/botocore/endpoint.py", line 256, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/tmp/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/tmp/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/tmp/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/tmp/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/tmp/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/tmp/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/tmp/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/tmp/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/tmp/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/tmp/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/tmp/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/tmp/botocore/httpsession.py", line 287, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://appflow.us-east-2.amazonaws.com/describe-flow"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 142, in <module>
raise e_type(e_value).with_traceback(new_stack)
TypeError: __init__() takes 1 positional argument but 2 were given
We had faced similar issues with Redshift, AWS Athena, SSM, etc., but for those services VPC endpoints are available. Creating the endpoints enabled our Glue job to connect privately within the VPC. Please refer to the link below for the supported products:
AWS services that you can use with AWS PrivateLink
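For those services, once the interface endpoint has private DNS enabled no code change is needed; otherwise a client can be pointed at the endpoint explicitly via endpoint_url. A sketch of the pattern we use (the endpoint URL shown is illustrative, not a real endpoint):
import boto3

# Illustrative endpoint URL; use the DNS name of your own interface endpoint
ssm = boto3.client(
    'ssm',
    region_name='us-east-2',
    endpoint_url='https://vpce-0123456789abcdef0-example.ssm.us-east-2.vpce.amazonaws.com',
)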
But since AppFlow is a newer product, there is no endpoint for it, only some unclear documentation about a throwaway create-and-delete VPC endpoint, linked below:
Private Amazon AppFlow flows
Please let us know how we can invoke the boto3 APIs for AppFlow without going through the internet.
Hi, I'm trying to access a serverless API. I got as far as creating a virtual environment, activating it, and putting my credentials in. But when I try to deploy with AWS Chalice, this is what I get:
Creating deployment package.
Traceback (most recent call last):
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\cli\__init__.py", line 599, in main
return cli(obj={})
File "c:\users\jerom\desktop\venv\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "c:\users\jerom\desktop\venv\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "c:\users\jerom\desktop\venv\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\users\jerom\desktop\venv\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\users\jerom\desktop\venv\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "c:\users\jerom\desktop\venv\lib\site-packages\click\decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\cli\__init__.py", line 206, in deploy
deployed_values = d.deploy(config, chalice_stage_name=stage)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\deployer.py", line 353, in deploy
return self._deploy(config, chalice_stage_name)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\deployer.py", line 364, in _deploy
plan = self._plan_stage.execute(resources)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\planner.py", line 139, in execute
result = handler(resource)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\planner.py", line 195, in _plan_lambdafunction
if not self._remote_state.resource_exists(resource):
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\planner.py", line 61, in resource_exists
result = handler(resource)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\deploy\planner.py", line 94, in _resource_exists_lambdafunction
return self._client.lambda_function_exists(resource.function_name)
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\awsclient.py", line 103, in lambda_function_exists
client = self._client('lambda')
File "c:\users\jerom\desktop\venv\lib\site-packages\chalice\awsclient.py", line 708, in _client
self._client_cache[service_name] = self._session.create_client(
File "c:\users\jerom\desktop\venv\lib\site-packages\botocore\session.py", line 831, in create_client
client = client_creator.create_client(
File "c:\users\jerom\desktop\venv\lib\site-packages\botocore\client.py", line 83, in create_client
client_args = self._get_client_args(
File "c:\users\jerom\desktop\venv\lib\site-packages\botocore\client.py", line 285, in _get_client_args
return args_creator.get_client_args(
File "c:\users\jerom\desktop\venv\lib\site-packages\botocore\args.py", line 99, in get_client_args
endpoint = endpoint_creator.create_endpoint(
File "c:\users\jerom\desktop\venv\lib\site-packages\botocore\endpoint.py", line 286, in create_endpoint
raise ValueError("Invalid endpoint: %s" % endpoint_url)
ValueError: Invalid endpoint: https://lambda.New Jersey.amazonaws.com
Does anyone have any idea how to solve this?
It would appear that you provided an invalid value for "Region" when storing your credentials.
The region name forms part of the URL when connecting to AWS services, which is why your code is trying to access https://lambda.New Jersey.amazonaws.com. (New Jersey is not a valid Region.)
To fix:
Use the AWS CLI aws configure command to update your credentials.
In the Region field, provide a region code from the list of AWS Endpoints, such as us-west-2 or eu-west-2.
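You can also pin the region in code to rule out a bad config file. A minimal sketch:
import boto3

# Region codes look like 'us-west-2', not place names like 'New Jersey'
lambda_client = boto3.client('lambda', region_name='us-west-2')
print(lambda_client.meta.endpoint_url)  # https://lambda.us-west-2.amazonaws.com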
I created a new config file:
$ sudo vi ~/.boto
There I pasted my credentials (as written in the readthedocs for boto):
[Credentials]
aws_access_key_id = YOURACCESSKEY
aws_secret_access_key = YOURSECRETKEY
I'm trying to check the connection:
import boto
boto.set_stream_logger('boto')
s3 = boto.connect_s3("us-east-1")
and this is the output:
2014-11-26 14:05:49,532 boto [DEBUG]:Using access key provided by client.
2014-11-26 14:05:49,532 boto [DEBUG]:Retrieving credentials from metadata server.
2014-11-26 14:05:50,539 boto [ERROR]:Caught exception reading instance data
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2014-11-26 14:05:50,540 boto [ERROR]:Unable to read instance data, giving up
Traceback (most recent call last):
File "/Users/user/PycharmProjects/project/untitled.py", line 8, in <module>
s3 = boto.connect_s3("us-east-1")
File "/Library/Python/2.7/site-packages/boto/__init__.py", line 141, in connect_s3
return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/Library/Python/2.7/site-packages/boto/auth.py", line 975, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
Why is it not finding the credentials?
Is there something I did wrong?
Your issue is:
The string 'us-east-1' you provide as the first argument is treated as the aws_access_key_id.
What you want is:
First, create a connection; note that a connection has no region or location info in it.
conn = boto.connect_s3('your_access_key', 'your_secret_key')
Then, when you want to do something with the bucket, pass the location info as an argument.
from boto.s3.connection import Location
conn.create_bucket('mybucket', location=Location.USWest)
or:
conn.create_bucket('mybucket', location='us-west-1')
By default, the location is the empty string, which is interpreted as the US Classic Region, the original S3 region. However, by specifying another location at the time the bucket is created, you can instruct S3 to create the bucket in that location.
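Alternatively, since the credentials are already in your ~/.boto file, you can call connect_s3() with no arguments and boto will read them from the [Credentials] section. A minimal sketch:
import boto
from boto.s3.connection import Location

# Keys are read from ~/.boto, so none are passed explicitly here
conn = boto.connect_s3()
conn.create_bucket('mybucket', location=Location.USWest)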