I want to use DynamoDB from an EC2 instance in Python. I have tested it locally, setting up my DynamoDB resource with:
dynamodb = boto3.resource('dynamodb', aws_access_key_id=ACCESS_ID,
                          aws_secret_access_key=ACCESS_KEY,
                          region_name='us-west-2', endpoint_url='http://localhost:8000')
I am wondering if, once it is running on an EC2 instance, the endpoint_url should be changed (to something different than http://localhost:8000), or if I should set up the resource in a completely different way. Thank you!
Firstly, you should avoid putting credentials in your source code. This can lead to security breaches and is difficult to update. Instead:
When running on an Amazon EC2 instance: Assign an IAM Role to the instance. The code will automatically find credentials.
When running on your own system: Store credentials in the ~/.aws/credentials file (or run aws configure to create the file).
If you wish to connect to the DynamoDB service itself, leave out the endpoint_url parameter. I assume that you have been using DynamoDB Local, which runs on your own computer; to use the 'real' DynamoDB, omit endpoint_url and boto3 will use the regional AWS endpoint.
Also, it is a good idea to include a region, such as:
dynamodb = boto3.resource('dynamodb', region_name='ap-southeast-2')
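For example, once the instance has an IAM Role attached, a call like this needs no credentials at all. A minimal sketch, assuming a hypothetical table called 'orders' with an 'order_id' partition key:

import boto3

# Credentials are picked up automatically from the instance's IAM Role
# (or from ~/.aws/credentials when running locally), so none appear in the code.
dynamodb = boto3.resource('dynamodb', region_name='ap-southeast-2')

table = dynamodb.Table('orders')  # hypothetical table name
table.put_item(Item={'order_id': '123', 'status': 'NEW'})
response = table.get_item(Key={'order_id': '123'})
print(response.get('Item'))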
Currently I have the EC2 instance in a private subnet within a VPC. This EC2 instance has to get a "template-file" from a GitHub repo. Ideally, I would like it to fetch the "template-file" only when changes are made; in short, a change should tell the EC2 instance to fetch the new template. What is the best way to accomplish this?
I was thinking of using GitHub Actions to sync the changes into an S3 bucket and have the EC2 instance constantly pull from it.
You can use SNS to handle the event of a new object being created in the bucket, and make sure that the EC2 instance consumes this event.
You can try this approach:
Sync the changes to the S3 bucket.
Configure bucket notifications on upload using SNS.
Run a script on your EC2 instance that runs continuously and checks for new notifications; when one arrives, download the updated file to the instance (see the sketch below).
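Since SNS pushes notifications rather than holding them for polling, the usual way to "check" from EC2 is to subscribe an SQS queue to the SNS topic and have the script poll that queue. A rough sketch, where the queue URL, region and download path are placeholders:

import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')  # region is an assumption
s3 = boto3.client('s3')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/template-updates'  # placeholder

while True:
    # Long-poll the SQS queue that is subscribed to the SNS topic
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        body = json.loads(msg['Body'])          # SNS envelope
        s3_event = json.loads(body['Message'])  # the S3 event notification inside it
        for record in s3_event.get('Records', []):
            bucket = record['s3']['bucket']['name']
            key = record['s3']['object']['key']
            s3.download_file(bucket, key, '/tmp/template-file')  # placeholder local path
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])

Alternatively, S3 event notifications can target SQS directly, in which case msg['Body'] is the S3 event itself and the SNS envelope step goes away.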
As of the date of this question I'm using the most recent version of the AWS CLI (2.4.6) running on macOS. According to the v2 docs the Instances that are returned should include properties like InstanceLifecycle, Licenses, MetadataOptions -> PlatformDetails and several others that are missing for me. While I'm getting back most data, some fields are absent... I've tried this in two separate AWS accounts and I have admin IAM creds that I'm using locally, so why does the aws ec2 describe-instances call not return all of the fields listed in the docs?
Not all fields are returned for every EC2 instance; it depends on how your EC2 instances were provisioned.
For example:
InstanceLifecycle: only returned if the instance was provisioned as a Spot Instance or a Scheduled Instance.
Licenses: only returned if you used BYOL (Bring Your Own License) when provisioning the EC2 instance.
In general, the docs describe every possible field the EC2 API can return, but which fields actually appear depends on the parameters of your provisioned EC2 instance.
For example, try provisioning a Spot Instance and then querying its instance lifecycle.
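You can see the same behaviour from boto3: the optional fields are simply absent keys in the response, so read them defensively. A small sketch (the region is an assumption):

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')  # region is an assumption

for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        # Keys such as InstanceLifecycle and Licenses only exist for instances
        # provisioned that way, so use .get() instead of direct indexing.
        print(instance['InstanceId'],
              instance.get('InstanceLifecycle', 'on-demand (key absent)'),
              instance.get('Licenses', 'no license configurations (key absent)'))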
I have a question.
So I am trying to automate my RDS connection string with Terraform, but the database identifier is randomly generated per region in each AWS account.
Is it possible to know the database identifier beforehand? And if so, is there a way I can automate it?
Below is my current script:
sudo socat TCP-LISTEN:5432,reuseaddr,fork TCP4:env0.cvhjdfmcu7ed.us-east-1.rds.amazonaws.com:5432
Currently I'm using the script below, which Terraform feeds with variables via my userdata .tpl file:
sudo nohup socat TCP-LISTEN:${port},reuseaddr,fork TCP4:${name}.${connection}.${aws_region}.rds.amazonaws.com:${port}
Can someone suggest a way I can automate the ${connection} variable, so that I can deploy this in any AWS account and region and don't have to worry about what the identifier will be?
You would use the endpoint attribute available in either the aws_db_instance or aws_rds_cluster Terraform resource to access the hostname:port RDS endpoint in Terraform.
If you are not creating/managing the RDS instance in the same Terraform configuration where you need access to the endpoint address, then you would use the corresponding Terraform data source instead of a Terraform resource; it will look up the RDS information and make the endpoint value available within the rest of your Terraform template.
I would like to use AWS SAM to set up my serverless application. I have used it with DynamoDB before. That was very easy, since all I had to do was set up a DynamoDB table as a resource and then link it to the Lambda functions. AWS SAM seems to know where the table is located. I was even able to run the functions on my local machine using the SAM CLI.
With RDS it's a lot harder. The RDS Aurora instance I am using sits behind a specific endpoint, in a specific subnet, with security groups in my VPC, protected by specific roles.
Now, from what I understand, it's AWS SAM's job to use my template.yml to generate the roles and organize the access rules for me.
But I don't think RDS is supported by AWS SAM by default, which means I would either be unable to test locally or would need VPN access to the AWS VPC, which I am not a massive fan of, since it might be a real security risk.
I know RDS Proxies exist, which can be created in AWS SAM, but they would also need VPC access, so they just kick the problem down the road.
So how can I connect my AWS SAM project to RDS and, if possible, execute the Lambda functions on my machine?
What I'm trying to do
I am working on a lambda function which will simply register some metadata about files which are uploaded onto an s3 bucket. This is not about actually processing the data in the files yet. To start with, I just want to register the fact that certain files have been uploaded or not. Then I want to connect that metadata to QuickSight just so that we can have a nice visual about which files have been uploaded.
What I've done so far
This part is fairly easy:
Some simple Python code with the pymysql module
Chalice to manage the process of creating and updating the lambda function
I created the database
Where I'm stuck
QuickSight is somehow external to AWS in general. So I had to create the RDS (mysql) in the DMZ of our VPC.
I have configured the security group so that the DB is accessible both from QuickSight and from my own laptop.
But the lambda function can't connect.
I configured the right policy for the role, so that the lambda can connect with IAM
I tested that policy with the simulator
But of course the lambda function is going to have some kind of dynamic IP and that needs to be in the security group
Any ideas?
Am I even thinking about this right?
Two things.
You shouldn't have to put your RDS in a DMZ. See this article about granting QuickSight access to your RDS: https://docs.aws.amazon.com/quicksight/latest/user/enabling-access-rds.html
In order for a Lambda function to access something in a VPC (like an RDS instance), the Lambda must have a VPC configuration. https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
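For point 2, the VPC settings can be applied in the console, from your deployment tooling (Chalice can also set this in its config), or via the API. A minimal boto3 sketch, where the function name, subnet ID and security group ID are all placeholders:

import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')  # region is an assumption

# Attach the function to the same VPC/subnets as the RDS instance so it can reach
# the database over the network; the security group below must be allowed inbound
# on the MySQL port by the RDS instance's security group.
lambda_client.update_function_configuration(
    FunctionName='register-file-metadata',            # placeholder function name
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0'],     # placeholder private subnet
        'SecurityGroupIds': ['sg-0123456789abcdef0'],  # placeholder security group
    },
)

Once the function is in the VPC there is no dynamic IP to whitelist: you simply allow the Lambda's security group as a source in the database's security group.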