AssertionError: INTERNAL: No default project is specified - python-2.7

I'm new to Airflow. I'm trying to run a SQL query and store the result in a BigQuery table.
I'm getting the following error and am not sure where to set the default_project_id.
Please help me.
Error:
Traceback (most recent call last):
  File "/usr/local/bin/airflow", line 28, in <module>
    args.func(args)
  File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 585, in test
    ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
  File "/usr/local/lib/python2.7/dist-packages/airflow/utils/db.py", line 53, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 1374, in run
    result = task_copy.execute(context=context)
  File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/operators/bigquery_operator.py", line 82, in execute
    self.allow_large_results, self.udf_config, self.use_legacy_sql)
  File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 228, in run_query
    default_project_id=self.project_id)
  File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 917, in _split_tablename
    assert default_project_id is not None, "INTERNAL: No default project is specified"
AssertionError: INTERNAL: No default project is specified
Code:
sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='''
    #standardSQL
    SELECT ID, Name, Group, Mark, RATIO_TO_REPORT(Mark) OVER(PARTITION BY Group) AS percent FROM `tensile-site-168620.temp.marks`
    ''',
    destination_dataset_table='temp.percentage',
    dag=dag
)

EDIT: I finally fixed this problem by simply adding the bigquery_conn_id='bigquery' parameter to the BigQueryOperator task, after running the code below in a separate Python script.
Apparently you need to specify your project ID under Admin -> Connections in the Airflow UI. You must do this as a JSON object such as "project" : "".
Personally I can't get the webserver working on GCP, so that route is infeasible for me. There is a programmatic solution here:
from airflow.models import Connection
from airflow.settings import Session

session = Session()
gcp_conn = Connection(
    conn_id='bigquery',
    conn_type='google_cloud_platform',
    extra='{"extra__google_cloud_platform__project":"<YOUR PROJECT HERE>"}')

if not session.query(Connection).filter(
        Connection.conn_id == gcp_conn.conn_id).first():
    session.add(gcp_conn)
    session.commit()
These suggestions are from a similar question here.
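With that connection in place, the only change to the operator from the question is the extra bigquery_conn_id parameter. A rough sketch (based on the code above, not a verified snippet):

sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    bigquery_conn_id='bigquery',  # the conn_id created by the script above
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='...',                    # same query as in the question
    destination_dataset_table='temp.percentage',
    dag=dag
)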

I get the same error when running Airflow locally. My solution is to add the following connection string as an environment variable:
AIRFLOW_CONN_BIGQUERY_DEFAULT="google-cloud-platform://?extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
BigQueryOperator uses the "bigquery_default" connection. When it is not configured, a local Airflow install falls back to an internal version of the connection that lacks the project_id property. As you can see, the connection string above supplies that property.
On startup, Airflow loads environment variables that start with "AIRFLOW_" into memory. This mechanism can be used to override Airflow properties and provide connections when running locally, as explained in the Airflow documentation here. Note that this also works when running Airflow directly, without starting the web server.
So I have set up environment variables for all my connections, for example AIRFLOW_CONN_MYSQL_DEFAULT. I have put them into a .env file that gets sourced by my IDE, but putting them into your .bash_profile would work fine too.
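For illustration, such a .env file simply contains one AIRFLOW_CONN_* line per connection, for example (the URIs here are placeholders, not real values):

AIRFLOW_CONN_BIGQUERY_DEFAULT="google-cloud-platform://?extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
AIRFLOW_CONN_MYSQL_DEFAULT="mysql://<user>:<password>@<host>:3306/<schema>"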
When you look inside an Airflow instance on Cloud Composer, you will see that the "bigquery_default" connection there has the project_id property set. That's why BigQueryOperator works when running through Cloud Composer.
(I am on airflow 1.10.2 and BigQuery 1.10.2)


MLflow proxied artifact access: Unable to locate credentials

I am using MLflow to track my experiments, with an S3 bucket as the artifact store. To access it, I want to use proxied artifact access, as described in the docs. However, this does not work for me, because the client still looks for credentials locally (even though the server should handle this).
Expected Behaviour
As described in the docs, I would expect that I do not need to specify my AWS credentials locally, since the server handles this for me. From the docs:
This eliminates the need to allow end users to have direct path access to a remote object store (e.g., s3, adls, gcs, hdfs) for artifact handling and eliminates the need for an end-user to provide access credentials to interact with an underlying object store.
Actual Behaviour / Error
Whenever I run an experiment on my machine, I am running into the following error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
So the error is happening locally. However, this should not happen, since the server is supposed to handle the auth instead of me storing my credentials locally. I would also expect not to need the boto3 library locally at all.
Solutions Tried
I am aware that I need to create a new experiment, because existing experiments might still use a different artifact location, as proposed in this SO answer as well as in the note in the docs. Creating a new experiment did not solve the error for me. Whenever I run the experiment, I get an explicit log in the console confirming this:
INFO mlflow.tracking.fluent: Experiment with name 'test' does not exist. Creating a new experiment.
Related questions (#1 and #2) refer to a different scenario, which is also described in the docs.
Server Config
The server runs on a kubernetes pod with the following config:
mlflow server \
    --host 0.0.0.0 \
    --port 5000 \
    --backend-store-uri postgresql://user:pw@endpoint \
    --artifacts-destination s3://my_bucket/artifacts \
    --serve-artifacts \
    --default-artifact-root s3://my_bucket/artifacts
I would expect my config to be correct, looking at doc page 1 and page 2.
I am able to see the MLflow UI if I forward the port to my local machine. I also see the experiment runs marked as failed because of the error above.
My Code
The relevant part of my code which fails is the logging of the model:
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("test2")
...
# this works
mlflow.log_params(hyperparameters)
model = self._train(model_name, hyperparameters, X_train, y_train)
y_pred = model.predict(X_test)
self._evaluate(y_test, y_pred)
# this fails with the error from above
mlflow.sklearn.log_model(model, "artifacts")
Question
I am probably overlooking something. Do I need to indicate locally that I want to use proxied artifact access? If yes, how do I do this? Is there something I have missed?
Full Traceback
  File "/dir/venv/lib/python3.9/site-packages/mlflow/models/model.py", line 295, in log
    mlflow.tracking.fluent.log_artifacts(local_path, artifact_path)
  File "/dir/venv/lib/python3.9/site-packages/mlflow/tracking/fluent.py", line 726, in log_artifacts
    MlflowClient().log_artifacts(run_id, local_dir, artifact_path)
  File "/dir/venv/lib/python3.9/site-packages/mlflow/tracking/client.py", line 1001, in log_artifacts
    self._tracking_client.log_artifacts(run_id, local_dir, artifact_path)
  File "/dir/venv/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py", line 346, in log_artifacts
    self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path)
  File "/dir/venv/lib/python3.9/site-packages/mlflow/store/artifact/s3_artifact_repo.py", line 141, in log_artifacts
    self._upload_file(
  File "/dir/venv/lib/python3.9/site-packages/mlflow/store/artifact/s3_artifact_repo.py", line 117, in _upload_file
    s3_client.upload_file(Filename=local_file, Bucket=bucket, Key=key, ExtraArgs=extra_args)
  File "/dir/venv/lib/python3.9/site-packages/boto3/s3/inject.py", line 143, in upload_file
    return transfer.upload_file(
  File "/dir/venv/lib/python3.9/site-packages/boto3/s3/transfer.py", line 288, in upload_file
    future.result()
  File "/dir/venv/lib/python3.9/site-packages/s3transfer/futures.py", line 103, in result
    return self._coordinator.result()
  File "/dir/venv/lib/python3.9/site-packages/s3transfer/futures.py", line 266, in result
    raise self._exception
  File "/dir/venv/lib/python3.9/site-packages/s3transfer/tasks.py", line 139, in __call__
    return self._execute_main(kwargs)
  File "/dir/venv/lib/python3.9/site-packages/s3transfer/tasks.py", line 162, in _execute_main
    return_value = self._main(**kwargs)
  File "/dir/venv/lib/python3.9/site-packages/s3transfer/upload.py", line 758, in _main
    client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
  File "/dir/venv/lib/python3.9/site-packages/botocore/client.py", line 508, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/dir/venv/lib/python3.9/site-packages/botocore/client.py", line 898, in _make_api_call
    http, parsed_response = self._make_request(
  File "/dir/venv/lib/python3.9/site-packages/botocore/client.py", line 921, in _make_request
    return self._endpoint.make_request(operation_model, request_dict)
  File "/dir/venv/lib/python3.9/site-packages/botocore/endpoint.py", line 119, in make_request
    return self._send_request(request_dict, operation_model)
  File "/dir/venv/lib/python3.9/site-packages/botocore/endpoint.py", line 198, in _send_request
    request = self.create_request(request_dict, operation_model)
  File "/dir/venv/lib/python3.9/site-packages/botocore/endpoint.py", line 134, in create_request
    self._event_emitter.emit(
  File "/dir/venv/lib/python3.9/site-packages/botocore/hooks.py", line 412, in emit
    return self._emitter.emit(aliased_event_name, **kwargs)
  File "/dir/venv/lib/python3.9/site-packages/botocore/hooks.py", line 256, in emit
    return self._emit(event_name, kwargs)
  File "/dir/venv/lib/python3.9/site-packages/botocore/hooks.py", line 239, in _emit
    response = handler(**kwargs)
  File "/dir/venv/lib/python3.9/site-packages/botocore/signers.py", line 103, in handler
    return self.sign(operation_name, request)
  File "/dir/venv/lib/python3.9/site-packages/botocore/signers.py", line 187, in sign
    auth.add_auth(request)
  File "/dir/venv/lib/python3.9/site-packages/botocore/auth.py", line 407, in add_auth
    raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
The problem is that the server is started with the wrong parameters: --default-artifact-root needs to either be removed or set to mlflow-artifacts:/.
From mlflow server --help:
--default-artifact-root URI Directory in which to store artifacts for any
new experiments created. For tracking server
backends that rely on SQL, this option is
required in order to store artifacts. Note that
this flag does not impact already-created
experiments with any previous configuration of
an MLflow server instance. By default, data
will be logged to the mlflow-artifacts:/ uri
proxy if the --serve-artifacts option is
enabled. Otherwise, the default location will
be ./mlruns.
I had the same problem, and the accepted answer did not solve my issue.
Neither removing --default-artifact-root nor setting it to mlflow-artifacts:/ instead of s3 worked for me. Moreover, it gave me an error saying that since I have a remote backend-store-uri I need to set --default-artifact-root when running the mlflow server.
How I solved it: I find the error self-explanatory. The reason it says it is unable to find credentials is that MLflow uses boto3 underneath to do all the transactions. Since I had set up my environment variables in a .env file, just loading that file was enough for me and solved the issue (a sketch of what such a file can contain follows the note below). If you have a similar scenario, run the following commands before starting your mlflow server:
set -a
source .env
set +a
This will load the environment variables and you will be good to go.
Note:
I was using remote servers for both the backend store and the artifact storage, namely Postgres and MinIO.
For a remote backend, backend-store-uri is required; otherwise you will not be able to start your mlflow server.
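For reference, with a MinIO (S3-compatible) artifact store such a .env file typically contains entries along these lines; the variable names are the standard boto3/MLflow ones, and the values are placeholders:

AWS_ACCESS_KEY_ID=<minio-access-key>
AWS_SECRET_ACCESS_KEY=<minio-secret-key>
MLFLOW_S3_ENDPOINT_URL=http://<minio-host>:9000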
The answer by @bk_ helped me. I ended up with the following command to get my tracking server running with a proxied connection for artifact storage:
mlflow server \
    --backend-store-uri postgresql://postgres:postgres@postgres:5432/mlflow \
    --default-artifact-root mlflow-artifacts:/ \
    --serve-artifacts \
    --host 0.0.0.0

Using Google Cloud Run (or any cloud service) with a Docker Compose app containing a simple Python script

I have a relatively simple Docker application using docker-compose that I would like to deploy. All it contains is a Python script that I would like to run automatically every day and that doesn't require user input.
I would like to use Google Cloud Run to deploy this application, since it doesn't need to be online 24/7, but I'm not sure if it's compatible with a docker-compose.yml.
Here is my docker-compose file:
version: '3.9'

secrets:
  venmo_api_key:
    file: ./secrets/venmo_api_token.txt

services:
  app:
    build: ./app
    secrets:
      - venmo_api_key
As you can see, I need docker-compose so that my secrets can be used in my container just by running docker-compose up. It runs fine locally!
To deploy my image to Google Container Registry, I've run:
docker-compose build
docker tag cb3605 gcr.io/venmoscription-v2/venmoscription-service
docker push gcr.io/venmoscription-v2/venmoscription-service
In Google Cloud Run, I selected the GCR URL and left all the other options as default just to see if my container could run online. However, I got this error in Google Cloud Run Logs:
False for Revision venmoscription-service-00001-qig with message: Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I also got this error message after:
Traceback (most recent call last):
  File "/app/main.py", line 13, in <module>
    client = Client(access_token = get_docker_secret("venmo_api_key"))
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/venmo.py", line 15, in __init__
    self.__profile = self.user.get_my_profile()
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/apis/user_api.py", line 26, in get_my_profile
    response = self.__api_client.call_api(resource_path=resource_path,
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 58, in call_api
    return self.__call_api(resource_path=resource_path, method=method,
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 103, in __call_api
    processed_response = self.request(method, url, session,
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 139, in request
    validated_response = self.__validate_response(response, ok_error_codes=ok_error_codes)
  File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 170, in __validate_response
    raise HttpCodeError(response=response)
venmo_api.models.exception.HttpCodeError: HTTP Status code is invalid. Could not make the request because -> 401 Unauthorized.
Basically, the container in Google Cloud Run was unable to access the secret that I defined in docker-compose.yml.
Does anyone know what I should be doing, or can you explain how to get my docker-compose app up and running with a serverless solution? Thank you!!
Cloud Run doesn't support multiple containers like docker-compose does, so you'll need to deploy a single container that accomplishes your goal. Cloud Run does expect that your container starts up and listens on a port (like a web application) or else it won't start.
This page has a good step-by-step example of a simple Python app that can be deployed on Cloud Run and listens on a port:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy/python
You can also make your venmo secret available at a path for your service by using Google Secret Manager.
https://cloud.google.com/run/docs/configuring/secrets
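For orientation, here is a minimal sketch of what a port-listening entrypoint for that single container could look like. The secret path (/secrets/venmo_api_key) and the handler body are illustrative assumptions: it assumes the secret is mounted as a file via Secret Manager and that Cloud Run supplies the port in the PORT environment variable.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_secret(path="/secrets/venmo_api_key"):
    # Hypothetical mount path configured in the Cloud Run service's secret settings.
    with open(path) as f:
        return f.read().strip()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        api_key = read_secret()
        # ... run the daily job here using api_key ...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()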
I hope that helps get you started.
Best,
Josh

Airflow: How to unpause DAG from python script

I'm creating a Python script to generate DAGs (it generates a new Python file with the DAG specification) from templates. It all works fine, except that I need the DAG to be generated as unpaused.
I've searched and tried to run shell commands in the script like this:
bash_command1 = 'airflow list_dags'
bash_command2 = 'airflow trigger_dag ' + str(DAG_ID)
bash_command3 = 'airflow list_tasks ' + str(DAG_ID)
bash_command4 = 'airflow unpause ' + str(DAG_ID)
subprocess.call(bash_command1.split())
subprocess.call(bash_command2.split())
subprocess.call(bash_command3.split())
subprocess.call(bash_command4.split())
But every time I create a new DAG, it is shown as paused in the web UI.
From the research I've done, the command airflow unpause <dag_id> should solve the problem, but when the script executes it, I get this error:
Traceback (most recent call last):
  File "/home/cubo/anaconda2/bin/airflow", line 28, in <module>
    args.func(args)
  File "/home/cubo/anaconda2/lib/python2.7/site-packages/airflow/bin/cli.py", line 303, in unpause
    set_is_paused(False, args, dag)
  File "/home/cubo/anaconda2/lib/python2.7/site-packages/airflow/bin/cli.py", line 312, in set_is_paused
    dm.is_paused = is_paused
AttributeError: 'NoneType' object has no attribute 'is_paused'
But when I execute the same airflow unpause <dag_id> command in the terminal it works fine, and it prints:
Dag: <DAG: DAG_ID>, paused: False
Any help would be greatly appreciated.
Airflow (1.8 and newer) pauses new DAGs by default. If you want DAGs to be unpaused at creation, you can override the Airflow config to restore the prior behavior.
Here's the link that walks you through setting configuration options. You want to set the core configuration setting dags_are_paused_at_creation to False.
We use the environment variable approach on my team.
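With Airflow's AIRFLOW__SECTION__KEY environment-variable naming convention, that override is a single variable exported wherever the scheduler and webserver run, for example:

AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION=False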

gcloud 403 permission errors with wrong project

I used to work at a company where I had previously set up gcloud with gcloud init or gcloud auth login (I don't recall which one). We were using Google Container Engine (GKE).
I've since left the company and been removed from the permissions on that project.
Now today, I wanted to setup a brand new app engine for myself unrelated to the previous company.
Why is it that I can't run any commands without getting the error below? gcloud init, gcloud auth login, or even gcloud --help and gcloud config list all display errors. It seems like it's trying to log in to my previous company's project with gcloud container clusters, but I'm not typing that command at all, I'm in a different zone, and I'm interested in a different project. Where is my gcloud config getting these defaults from?
Is this a case where I need to delete my .config/gcloud folder? That seems like a rather extreme solution just to log in to a different project.
Traceback (most recent call last):
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 65, in <module>
    main()
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 61, in main
    sys.exit(googlecloudsdk.gcloud_main.main())
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 130, in main
    gcloud_cli = CreateCLI([])
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 119, in CreateCLI
    generated_cli = loader.Generate()
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 329, in Generate
    cli = self.__MakeCLI(top_group)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 517, in __MakeCLI
    log.AddFileLogging(self.__logs_dir)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 676, in AddFileLogging
    _log_manager.AddLogsDir(logs_dir=logs_dir)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 365, in AddLogsDir
    self._CleanUpLogs(logs_dir)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 386, in _CleanUpLogs
    self._CleanLogsDir(logs_dir)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 412, in _CleanLogsDir
    os.remove(log_file_path)
OSError: [Errno 13] Permission denied: '/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log'
And the log file:
/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log
2017-07-27 19:07:37,252 DEBUG root Loaded Command Group: ['gcloud', 'container']
2017-07-27 19:07:37,253 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters']
2017-07-27 19:07:37,254 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters', 'get_credentials']
2017-07-27 19:07:37,330 DEBUG root Running [gcloud.container.clusters.get-credentials] with arguments: [--project: "REMOVED_PROJECT", --zone: "DIFFERENT_ZONE", NAME: "REMOVED_CLUSTER_NAME"]
2017-07-27 19:07:37,331 INFO ___FILE_ONLY___ Fetching cluster endpoint and auth data.
2017-07-27 19:07:37,591 DEBUG root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
Traceback (most recent call last):
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 712, in Execute
    resources = args.calliope_command.Run(cli=self, args=args)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 871, in Run
    resources = command_instance.Run(args)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/surface/container/clusters/get_credentials.py", line 69, in Run
    cluster = adapter.GetCluster(cluster_ref)
  File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 213, in GetCluster
    raise api_error
HttpException: ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
2017-07-27 19:07:37,596 ERROR root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
I had to delete my .config/gcloud folder to make this work, although I don't believe that is a good "solution".
Okay, so I'm not sure if things have changed, but I ran into a similar issue. Please try this before nuking your configuration.
gcloud supports multiple accounts and you can see what account is active by running gcloud auth list.
    ACTIVE  ACCOUNT
    *       Work-Email@company.com
            Personal-Email@gmail.com
If you are not on the correct one, you can run
$ gcloud config set account Personal-Email@gmail.com
and it'll set the correct account. Running gcloud auth list again should now show ACTIVE next to your personal account.
If you haven't authenticated with your personal account yet, you'll need to log in. You can run gcloud auth login Personal-Email@gmail.com, follow the flow from there, and then return to the above.
Make sure to set PROJECT_ID or whatever other settings you may need when switching.
Now, from there I found it's STILL possible that you might not be authenticated correctly. For this you may need to restart your terminal session; in my case simply doing a source ~/.bash_profile was sufficient. (Perhaps I needed to do this to refresh the GOOGLE_APPLICATION_CREDENTIALS environment variable, but I'm not sure.)
Hope this helps. Try this before nuking your configuration.
Instead of deleting the whole .config/gcloud folder, rename or delete just the .config/gcloud/logs folder and try again.
This solution worked for me :)

boto3 throws error when packaged under rpm

I am using boto3 in my project, and when I package it as an RPM it raises an error while initializing the EC2 client.
<class 'botocore.exceptions.DataNotFoundError'>: Unable to load data for: _endpoints. Traceback - Traceback (most recent call last):
  File "roboClientLib/boto/awsDRLib.py", line 186, in _get_ec2_client
  File "boto3/__init__.py", line 79, in client
  File "boto3/session.py", line 200, in client
  File "botocore/session.py", line 789, in create_client
  File "botocore/session.py", line 682, in get_component
  File "botocore/session.py", line 809, in get_component
  File "botocore/session.py", line 179, in <lambda>
  File "botocore/session.py", line 475, in get_data
  File "botocore/loaders.py", line 119, in _wrapper
  File "botocore/loaders.py", line 377, in load_data
DataNotFoundError: Unable to load data for: _endpoints
Can anyone help me here? Probably boto3 requires some runtime resolution that it is not able to do from inside the RPM.
I tried setting LD_LIBRARY_PATH in /etc/environment, but that is not working:
export LD_LIBRARY_PATH="/usr/lib/python2.6/site-packages/boto3:/usr/lib/python2.6/site-packages/boto3-1.2.3.dist-info:/usr/lib/python2.6/site-packages/botocore:
I faced the same issue:
botocore.exceptions.DataNotFoundError: Unable to load data for: ec2/2016-04-01/service-2
From this I figured out that the corresponding data directory was missing. Updating botocore by running the following solved my issue:
pip install --upgrade botocore
Botocore depends on a set of service definition files that it uses to generate clients on the fly. Boto3 further depends on another set of files that it uses to generate resource clients. You will need to include these in any installs of boto3 or botocore. The files will need to be located in the 'data' folder of the root of the respective library.
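As a quick sanity check that the packaged install actually contains those files, something along these lines (an illustrative snippet, not part of the original answer) should show a populated data directory for each library:

import os

import boto3
import botocore

# Both libraries ship their JSON models in a 'data' directory next to __init__.py.
for pkg in (botocore, boto3):
    data_dir = os.path.join(os.path.dirname(pkg.__file__), "data")
    print(pkg.__name__, data_dir, os.path.isdir(data_dir))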
I faced a similar issue, which was due to an old version of botocore. Once I updated it, it started working.
Please consider using the command below:
pip install --upgrade botocore
Also, please ensure you have set up a boto configuration profile.
Boto searches for credentials in the following order:
Passing credentials as parameters in the boto3.client() method (see the sketch after this list)
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
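For example, the first option in that list looks roughly like this (the credential values and region are placeholders for illustration only; prefer the shared credential file, environment variables, or an IAM role in real code):

import boto3

# Explicit credentials passed directly to the client constructor.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)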