Boto3 does not use specified region - amazon-web-services

I have the following script to list trails from CloudTrail:
import boto3
import os
os.environ['AWS_DEFAULT_REGION'] = 'us-east-2'
current_session = boto3.session.Session(profile_name='production')
client = current_session.client('cloudtrail')
response = client.list_trails()
print(response)
This only gives me the list of trails in us-east-1.
I have tried setting the region by passing it as an argument to the session and also by setting it as an environment variable on the command line, but it still only looks at us-east-1.
Any suggestions?

I suspect your profile does not have a region associated with it. For this reason, the session instantiation uses us-east-1 as a default.
To fix this, explicitly specify the region name in the session instantiation:
current_session = boto3.session.Session(profile_name='production', region_name='us-east-2')
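Alternatively, set the region on the profile itself in ~/.aws/config so that every session built from that profile picks it up without code changes. A minimal sketch, assuming the production profile from the question:
[profile production]
region = us-east-2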

Define it in the session config, which accepts the following:
aws_access_key_id - A specific AWS access key ID.
aws_secret_access_key - A specific AWS secret access key.
region_name - The AWS Region where you want to create new connections.
profile_name - The profile to use when creating your session.
So just add region_name to your example.
See:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/session.html#session-configurations
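Applied to the original script, that looks like the following minimal sketch, assuming the production profile's credentials are otherwise valid:
import boto3
# pass the region explicitly so the session does not fall back to us-east-1
current_session = boto3.session.Session(profile_name='production', region_name='us-east-2')
client = current_session.client('cloudtrail')
# client.meta.region_name confirms which region the client actually resolved
print(client.meta.region_name)
print(client.list_trails())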

Related

How to: Terraform a snowflake_stage and use an AWS IAM role ARN as the credentials

I am trying to Terraform a snowflake_stage and use the ARN from an IAM role, which was also Terraformed, as the credential.
The Snowflake SQL works when I use:
create stage dev
URL='s3://name_of_bucket/'
storage_integration = dev_integration
credentials=(AWS_ROLE='arn:aws:iam:999999999999:role/service-role-name')
encryption=(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')
FILE_FORMAT=DATABASE.PUBLIC.SCHEMA.FORMAT_NAME
COPY_OPTION=(ON_ERROR='CONTINUE' PURGE='FALSE' RETURN_FAILED_ONLY='TRUE');
but when I try to write an equivalent Terraform resource "snowflake_stage" using:
resource "snowflake_stage" "stage" {
  name = "dev"
  url = "s3://name_of_bucket/"
  storage_integration = "dev_integration"
  schema = "public"
  credentials = "AWS_ROLE='aws_iam_role.snowflake_stage.arn'"
  encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')
  file_format = "DATABASE.PUBLIC.SCHEMA.FORMAT_NAME"
  copy_options = "(ON_ERROR='CONTINUE' PURGE='FALSE' RETURN_FAILED_ONLY='TRUE')"
}
I get:
SQL compilation error: invalid value [Not a property list: TOK_LIST] for parameter '{1}
The value on the encryption seems to need the "AWS_ROLE='..'" to be valid.
I've tried just using :
credentials = aws_iam_role.snowflake_stage.arn
but got a different set of errors.
How do I combine
credentials = "AWS_ROLE='
with
aws_iam_role.snowflake_stage.arn
and then append the closing
)"
for the credentials value?
First, you are missing a closing " in encryption. It should be:
encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')"
Second, for the role:
credentials = "AWS_ROLE='${aws_iam_role.snowflake_stage.arn}'"
A bit late on this, but encryption should be:
encryption = "TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key'"
rather than:
encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')"
Moreover, using a storage integration on its own will be fine as long as you configure it with the appropriate role and permissions on that role (S3, KMS, and STS policy documents).
Then you can get rid of the encryption and credentials fields.

Finding Canonical ID of the account using CDK

I'm writing a custom S3 bucket policy using AWS CDK that requires the canonical ID of the account as a key parameter. I can get the account ID programmatically using CDK core; you may refer to the Python sample below.
cid = core.Aws.ACCOUNT_ID
Is there any way we can get the same for the canonical ID?
Update:
I've found a workaround using an S3 API call. I've added the following code to my CDK stack. It may be helpful to someone.
def find_canonical_id(self):
    s3_client = boto3.client('s3')
    return s3_client.list_buckets()['Owner']['ID']
I found 2 ways to get the canonical ID (boto3):
Method 1: through the ListBuckets API (also mentioned by the author in the update).
This method is recommended by AWS as well.
import boto3
client = boto3.client("s3")
response = client.list_buckets()
canonical_id = response["Owner"]["ID"]
Method 2: through the GetBucketAcl API
import boto3
client = boto3.client("s3")
response = client.get_bucket_acl(
    Bucket='sample-bucket'  # must be a bucket in your account
)
canonical_id = response["Owner"]["ID"]

Is there any way to fetch tags of an RDS instance using boto3?

rds_client = boto3.client('rds', 'us-east-1')
instance_info = rds_client.describe_db_instances( DBInstanceIdentifier='**myinstancename**')
But instance_info doesn't contain any of the tags I set on the RDS instance. I want to fetch the instances that have env='production' and exclude those with env='test'. Is there any method in boto3 that fetches the tags as well?
Only through boto3.client("rds").list_tags_for_resource
Lists all tags on an Amazon RDS resource.
ResourceName (string) --
The Amazon RDS resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) .
import boto3
rds_client = boto3.client('rds', 'us-east-1')
db_instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='**myinstancename**')
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(
        ResourceName=each_db['DBInstanceArn'],
        Filters=[{
            'Name': 'env',
            'Values': [
                'production',
            ]
        }])
Either do a simple exclusion on top of that simple filter (a client-side version is sketched below), or dig through the documentation to build a more complicated JMESPath filter using paginators.
Note: AWS resource tagging is not implemented uniformly across services, so you must always refer to the boto3 documentation for each resource type.
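A minimal sketch of that client-side exclusion, assuming default credentials and the us-east-1 region used above:
import boto3
rds_client = boto3.client('rds', 'us-east-1')
production_instances = []
for db in rds_client.describe_db_instances()['DBInstances']:
    # list_tags_for_resource returns the tags as a TagList of Key/Value pairs
    tags = rds_client.list_tags_for_resource(ResourceName=db['DBInstanceArn'])['TagList']
    tag_map = {tag['Key']: tag['Value'] for tag in tags}
    # keep env=production, skip env=test and everything else
    if tag_map.get('env') == 'production':
        production_instances.append(db['DBInstanceIdentifier'])
print(production_instances)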
This Python program will show you how to list all RDS instances, their type, and their status.
list_rds_instances.py
import boto3
# connect to the RDS service
client = boto3.client('rds')
# rds_instance holds all RDS instance information as a dictionary
rds_instance = client.describe_db_instances()
all_list = rds_instance['DBInstances']
print('RDS Instance Name \t| Instance Type \t| Status')
for i in all_list:
    dbInstanceName = i['DBInstanceIdentifier']
    dbInstanceEngine = i['DBInstanceClass']
    dbInstanceStatus = i['DBInstanceStatus']
    print('%s \t| %s \t| %s' % (dbInstanceName, dbInstanceEngine, dbInstanceStatus))
Important note: while working with boto3 you need to set up your credentials in two files, ~/.aws/credentials and ~/.aws/config.
~/.aws/credentials
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
~/.aws/config
[default]
region=ap-south-1

Setting .authorize_egress() with protocol set to all

I am trying to execute the following code
def createSecurityGroup(self, securitygroupname):
    conn = boto3.resource('ec2')
    response = conn.create_security_group(GroupName=securitygroupname, Description='test')

VPC_NAT_SecurityObject = createSecurityGroup("mysecurity_group")
response_egress_all = VPC_NAT_SecurityObject.authorize_egress(
    IpPermissions=[{'IpProtocol': '-1'}])
and I am getting the exception below.
EXCEPTION:
An error occurred (InvalidParameterValue) when calling the AuthorizeSecurityGroupEgress operation: Only Amazon VPC security
groups may be used with this operation.
I tried several different combinations but was not able to set the protocol to all. I used '-1' as explained in the boto3 documentation. Can somebody please suggest how to get this done?
(UPDATE)
1. boto3.resource("ec2") is actually a high-level class that wraps the client class. You must create an explicit instantiation using boto3.resource("ec2").Vpc in order to attach to a specific VPC ID, e.g.
import boto3
ec2_resource = boto3.resource("ec2")
myvpc = ec2_resource.Vpc("vpc-xxxxxxxx")
response = myvpc.create_security_group(
    GroupName=securitygroupname,
    Description='test')
2. Sometimes it is more straightforward to use boto3.client("ec2"). If you check the boto3 EC2 client create_security_group, you will see this:
response = client.create_security_group(
    DryRun=True|False,
    GroupName='string',
    Description='string',
    VpcId='string'
)
If you use an automation script/template to rebuild the VPC, e.g. salt-cloud, you need to give the VPC a tag Name so the boto3 script can look it up automatically. This will save a lot of hassle when AWS migrates AWS resource IDs from 8 alphanumeric characters to 12 or 15.
Another option is using CloudFormation, which lets you put everything into a template, with variables, to recreate the VPC stack.
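Putting the two points above together, here is a minimal sketch; the VPC ID and group name are placeholders, not values from the question:
import boto3
ec2_resource = boto3.resource('ec2')
myvpc = ec2_resource.Vpc('vpc-xxxxxxxx')  # placeholder VPC ID
# creating the group through the Vpc resource makes it a VPC security group
sg = myvpc.create_security_group(GroupName='mysecurity_group', Description='test')
# '-1' means all protocols; a fully specified permission also needs a CIDR range.
# New VPC security groups already allow all egress by default, so this call can
# fail as a duplicate unless that default rule was revoked first.
sg.authorize_egress(
    IpPermissions=[{
        'IpProtocol': '-1',
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
    }])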

Where can I find a profile name in AWS for credentials file?

I want to have a credentials file. It looks like this:
[default]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY
aws_session_token = TOKEN
[Alice]
aws_access_key_id = Alice_access_key_ID
aws_secret_access_key = Alice_secret_access_key
[Bob]
aws_access_key_id = Bob_access_key_ID
aws_secret_access_key = Bob_secret_access_key
I read an article where the author refers to [Alice] and [Bob] as profile names, but it doesn't say a word about where to get them in AWS. In my newly created AWS account I only have the following; no profile name anywhere:
Where can I find those profile names? Could the Account Name or Account ID in the screenshot be a profile name, or is the profile name the email I used at registration?
You just make up whatever labels you want. They are only used locally by the SDK to figure out which set of credentials to read from the local credential store. AWS doesn't know anything about them.
They are arbitrary labels; I use the IAM username as the profile name for ease of use.
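For example, once an [Alice] section exists in ~/.aws/credentials, you select it by that label. A minimal sketch:
import boto3
# 'Alice' is just the section header from ~/.aws/credentials; AWS itself never sees it
session = boto3.session.Session(profile_name='Alice')
print(session.client('sts').get_caller_identity()['Account'])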