AWS Secrets Manager Pricing

I want to use AWS RDS Proxy with RDS Postgres, for which I have to create at least one secret in AWS Secrets Manager. I understand that one secret costs $0.40 per month. However, I could not work out the pricing for the API calls made. How many API calls would be made for this minimal setup per month? Is it one call per connection made? And does it depend on the RDS instance class? For me it is db.t3.micro.
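If it helps to measure rather than estimate: below is a minimal sketch (an assumption about how you might check, not an official pricing tool) that counts the GetSecretValue events CloudTrail recorded over the last day. The region and time window are placeholders, and RDS Proxy is expected to cache the credentials it fetches, so the count should not grow one-to-one with client connections.

```python
# Count actual Secrets Manager reads by looking up GetSecretValue events
# in CloudTrail for the last 24 hours. Region and window are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # placeholder region

end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

count = 0
for page in cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetSecretValue"}],
    StartTime=start,
    EndTime=end,
):
    count += len(page["Events"])

print(f"GetSecretValue calls in the last 24 hours: {count}")
```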

Related

Best practices for AWS cross-account RDS snapshot sharing programmatically

I have the following AWS cross-account use case, where account A belongs to another team and my team owns account B:
1. AWS account A would like to copy/share a snapshot of an AWS RDS Oracle instance to AWS account B.
2. Process/curate the data by restoring the snapshot into an RDS instance and running an AWS Step Functions workflow in account B.
3. Share the snapshot back to AWS account A from account B.
I am using boto3 APIs and have working Step Functions code, and I am looking for advice on solving steps 1) and 3). I am thinking of asking the account A team to write a Lambda which shares the snapshot and triggers a CloudWatch event that account B listens to, plus another Lambda to share the snapshot back to account A programmatically. I am not sure if that is an optimal approach or if there is a better way.
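For reference, the programmatic part of steps 1) and 3) comes down to a couple of boto3 calls. A minimal sketch, assuming an unencrypted manual snapshot and placeholder identifiers and account IDs (encrypted snapshots additionally need the KMS key shared):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

SNAPSHOT_ID = "oracle-manual-snapshot"   # placeholder manual snapshot identifier
OTHER_ACCOUNT_ID = "111122223333"        # placeholder ID of the account to share with

# Grant the other account permission to copy/restore the snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier=SNAPSHOT_ID,
    AttributeName="restore",
    ValuesToAdd=[OTHER_ACCOUNT_ID],
)

# The receiving account then copies the shared snapshot into its own account
# (run with that account's credentials):
# rds.copy_db_snapshot(
#     SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:444455556666:snapshot:oracle-manual-snapshot",
#     TargetDBSnapshotIdentifier="oracle-manual-snapshot-copy",
# )
```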
For an "optimal approach" - Have you considered using AWS backup? If the accounts are under the same organization AWS backup can do this all for you.
See here: https://docs.aws.amazon.com/aws-backup/latest/devguide/manage-cross-account.html
and here: https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/
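A hedged sketch of what the AWS Backup route looks like from boto3, copying a recovery point from a vault in one account to a vault in the other. The vault names, ARNs, and IAM role are placeholders, and the destination vault's access policy must allow the cross-account copy:

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")  # placeholder region

backup.start_copy_job(
    # Placeholder recovery point (e.g. an RDS snapshot managed by AWS Backup).
    RecoveryPointArn="arn:aws:rds:us-east-1:111122223333:snapshot:awsbackup:job-00000000",
    SourceBackupVaultName="account-b-vault",  # placeholder
    DestinationBackupVaultArn="arn:aws:backup:us-east-1:444455556666:backup-vault:account-a-vault",  # placeholder
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",  # placeholder
)
```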

Connect to RDS from a local AWS SAM instance

I would like to use AWS SAM to set up my serverless application. I have used it with DynamoDB before. That was very easy, since all I had to do was define a DynamoDB table as a resource and then link it to the Lambda functions. AWS SAM seems to know where the table is located. I was even able to run the functions on my local machine using the SAM CLI.
With RDS it's a lot harder. The RDS Aurora instance I am using sits behind a specific endpoint, in a specific subnet, with security groups in my VPC, protected by specific roles.
Now, from what I understand, it's AWS SAM's job to use my template.yml to generate the roles and organize access rules for me.
But I don't think RDS is supported by AWS SAM by default, which means I would either be unable to test locally or would need VPN access to the AWS VPC, which I am not a massive fan of, since it might be a real security risk.
I know RDS Proxies exist and can be created with AWS SAM, but they also need VPC access, so they just kick the problem down the road.
So how can I connect my AWS SAM project to RDS and, if possible, execute the Lambda functions on my machine?
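One common workaround (an approach sketch, not an official SAM feature) is to keep the database endpoint out of the code and inject it through environment variables: the deployed function gets the real Aurora (or RDS Proxy) endpoint from the template, while `sam local invoke --env-vars env.json` points the same handler at a local Postgres or an SSH tunnel. A minimal handler, assuming psycopg2 and hypothetical variable names:

```python
import os

import psycopg2


def lambda_handler(event, context):
    # DB_HOST is the Aurora/RDS Proxy endpoint when deployed; locally it can be
    # e.g. "host.docker.internal" so the SAM container reaches a local Postgres.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        port=int(os.environ.get("DB_PORT", "5432")),
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT version();")
            (version,) = cur.fetchone()
        return {"statusCode": 200, "body": version}
    finally:
        conn.close()
```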

AWS SSM parameter store reliability

I am looking at using AWS SSM Parameter Store to store secrets such as database connection strings for applications deployed on EC2, Elastic Beanstalk, Fargate Docker containers, etc.
The linked document states that the service is Highly scalable, available, and durable, but I can't find more details on what exactly that means. For example, is it replicated across all regions?
Is it best to:
a) read secrets from the parameter store at application startup (i.e. rely on it being highly available and scalable, even if, say, another region has gone down)?
or
b) read and store secrets locally when the application is deployed? Arguably less secure, but it means that any unavailability of the Parameter Store service would only impact deployment of new versions.
If you want to go with the Parameter Store, go with your option a), and fail the app if the GetParameter call fails. (This happens; I have seen rate limiting on Parameter Store API requests.) See here.
Or
The best option is AWS Secrets Manager. Secrets Manager is a superset of the Parameter Store: it supports RDS password rotation and more. It is also paid.
Just checked the unthrottled throughput of SSM. It is not documented, but it is about 50 req/s.
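A minimal sketch of option a), reading the secret once at startup and failing fast if Parameter Store is throttled or unavailable. The parameter name and retry settings are assumptions:

```python
import sys

import boto3
from botocore.config import Config

# Standard retry mode backs off automatically on throttling errors.
ssm = boto3.client("ssm", config=Config(retries={"max_attempts": 5, "mode": "standard"}))

try:
    resp = ssm.get_parameter(
        Name="/myapp/prod/db-connection-string",  # placeholder parameter name
        WithDecryption=True,
    )
    DB_CONNECTION_STRING = resp["Parameter"]["Value"]
except Exception as exc:
    # Fail the app rather than starting without its configuration.
    print(f"Could not read parameter from SSM: {exc}", file=sys.stderr)
    sys.exit(1)
```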

Multiple Hashicorp Vault servers in different AZs in AWS

I have 3 Availability Zones in my AWS VPC and I would like to run Vault with S3 as the storage backend. I would like to run 3 Vault servers (one for each zone), all of them syncing to the same S3 bucket. Is this HA scenario possible for Vault?
I read that Vault doesn't support HA using S3 as the backend and might need to use Consul (which runs 3 servers by default). A bit confused about this. All I want is to run multiple Vault servers all storing/reading secrets from the same S3 bucket.
Thanks for your inputs.
Abdul
Note that you could use DynamoDB to get an Amazon-managed service with HA support:
High Availability – the DynamoDB storage backend supports high availability. Because DynamoDB uses the time on the Vault node to implement the session lifetimes on its locks, significant clock skew across Vault nodes could cause contention issues on the lock.
https://www.vaultproject.io/docs/configuration/storage/dynamodb.html
There are several storage backends in Vault, and only some of them support HA, like Consul. However, if a backend doesn't support HA, that doesn't mean it can't be used at all.
So, if you need to run multiple Vault instances, each one independent of the others, you should be able to use S3 as a storage backend. But if you need HA, you need to use Consul or another backend that supports HA.
Hope this helps.
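If you do go the "multiple independent Vault servers" route, clients have to do their own failover. A small sketch (hypothetical hostnames, and explicitly not real Vault HA, which needs an HA-capable backend such as Consul or DynamoDB) that probes each server's sys/health endpoint and uses the first initialized, unsealed one:

```python
import requests

VAULT_SERVERS = [
    "https://vault-az1.example.internal:8200",  # placeholder hostnames
    "https://vault-az2.example.internal:8200",
    "https://vault-az3.example.internal:8200",
]


def pick_vault_server():
    for url in VAULT_SERVERS:
        try:
            # /v1/sys/health returns 200 for an initialized, unsealed, active node.
            resp = requests.get(f"{url}/v1/sys/health", timeout=2)
            if resp.status_code == 200:
                return url
        except requests.RequestException:
            continue
    raise RuntimeError("No healthy Vault server reachable")


print(pick_vault_server())
```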

Does your Amazon Redshift database need to be in the same region as your Machine Learning model?

When trying to use Amazon Redshift to create a datasource for my Machine Learning model, I encountered the following error when testing the access of my IAM role:
There is no '' cluster, or the cluster is not in the same region as your Amazon ML service. Specify a cluster in the same region as the Amazon ML service.
Is there any way around this? It would be a huge pain, since all of our development team's data is stored in a region that Machine Learning doesn't work in.
That's an interesting situation to be in.
What you can probably do:
1) Wait for Amazon Web Services to support Amazon ML in your preferred region (that's a long wait, though).
2) Or create a backup plan for your Redshift data. Amazon Redshift provides default tools to back up your cluster via snapshots to Amazon Simple Storage Service (Amazon S3). These snapshots can be restored in any AZ in that region or copied automatically to other regions, wherever you want (in your case, the region where your ML is running); see the sketch below.
There is (probably) no other way to use Amazon ML with Redshift in a different region.
Hope it will help!
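A hedged sketch of option 2) with boto3: enable automated cross-region snapshot copy on the cluster so snapshots land in a region where Amazon ML is available, then restore one there. The cluster identifier, regions, and retention period are placeholders:

```python
import boto3

# Client in the cluster's current (source) region.
redshift = boto3.client("redshift", region_name="eu-west-1")  # placeholder source region

redshift.enable_snapshot_copy(
    ClusterIdentifier="my-redshift-cluster",  # placeholder
    DestinationRegion="us-east-1",            # placeholder region where Amazon ML is available
    RetentionPeriod=7,                        # keep copied snapshots for 7 days
)

# Later, in the destination region, restore a copied snapshot into a new cluster
# and point the ML datasource at that cluster:
# boto3.client("redshift", region_name="us-east-1").restore_from_cluster_snapshot(
#     ClusterIdentifier="my-redshift-cluster-ml",
#     SnapshotIdentifier="<copied-snapshot-id>",
# )
```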