Change RDS to publicly accessible - amazon-web-services

I am a newbie in Amazon Web Services and have some questions related to Amazon RDS:
1. How can we use the AWS API to create an RDS instance and pass the 'publicly accessible' parameter to it? I know that the CLI has a -pub flag (CLI-RDS) which can be used, but what about when we are not using the CLI and are going to use a programming language like Node.js?
2. Is it possible to change the publicly-accessible setting of an RDS instance? I mean, if we have already created an RDS instance as private, can we change it later? If yes, how? I also read the discussion here (RDS to public), where the suggestion was to delete the current RDS instance with a final snapshot and then restore the snapshot as publicly accessible. That's not possible in my case. Is there any other way? We want to change the publicly-accessible setting dynamically because of some security issues.

This API call is available in all clients (console, SDKs, CLI, ...). Here is the documentation for Node.js; check the PubliclyAccessible parameter:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDS.html#modifyDBInstance-property
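For example, a minimal Node.js sketch (AWS SDK for JavaScript v2) that flips an existing instance to publicly accessible; the instance identifier is a placeholder:

```javascript
const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

const params = {
  DBInstanceIdentifier: 'my-db-instance', // hypothetical identifier
  PubliclyAccessible: true,               // set to false to make it private again
  ApplyImmediately: true                  // apply now instead of at the next maintenance window
};

rds.modifyDBInstance(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.DBInstance.PubliclyAccessible);
});
```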
It is certainly possible. However, as the CloudFormation documentation notes, that change can require replacing the instance, so expect and plan for some downtime:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-publiclyaccessible

Related

How to add some new code to an existing EC2 instance

Bear with me; what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a Linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have essentially just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH/SCP/SFTP to move files to and from them. You can use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance: if you look in the web console you can find its IP(s), its security groups (firewall rules), and the key pair name. Hopefully they gave you the keys. You need these to SSH in (there is also a sketch below for pulling these details via the API).
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys you'll have to create an AMI image of the instance so you can create a new one with a key pair you do have.
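If you would rather look those details up with the SDK than click through the console, a minimal Node.js sketch along these lines can pull the public IP, key pair name, and security groups (the instance ID is a placeholder):

```javascript
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Hypothetical instance ID; take the real one from the EC2 console
// or from describeInstances called without filters.
ec2.describeInstances({ InstanceIds: ['i-0123456789abcdef0'] }, (err, data) => {
  if (err) return console.error(err);
  const instance = data.Reservations[0].Instances[0];
  console.log('Public IP:', instance.PublicIpAddress);
  console.log('Key pair:', instance.KeyName);
  console.log('Security groups:', instance.SecurityGroups.map(g => g.GroupName));
});
```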
Amazon has a set of tools for you in the AWS CodeSuite (CodeCommit, CodeBuild, CodeDeploy, CodePipeline).
The tool used for "deploying" the code is AWS CodeDeploy. With this service you install an agent onto your host; when triggered, it pulls down an artifact of a code base and installs it on matching hosts. You can even specify additional commands through the hooks system.
But maybe you also want this to be triggered automatically? CodeDeploy can be orchestrated using the CodePipeline tool.

AWS SSM parameter store reliability

I am looking at using AWS SSM Parameter Store to store secrets such as database connection strings for applications deployed on EC2, Elastic Beanstalk, Fargate Docker containers, etc.
The linked document states that the service is "highly scalable, available, and durable", but I can't find more details on what exactly that means. For example, is it replicated across all regions?
Is it best to:
a) read secrets from the parameter store at application startup (i.e. rely on it being highly available and scalable, even if, say, another region has gone down)?
or
b) read and store secrets locally when the application is deployed? Arguably less secure, but it means that any unavailability of the Parameter Store service would only impact deployment of new versions.
If you want to go with Parameter Store, go with your option (a), and fail the app if the get-parameter call fails. (This happens; I have seen rate limiting on Parameter Store API requests.) See here.
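As a sketch of option (a), assuming the Node.js SDK and a hypothetical parameter name stored as a SecureString, the startup code might look like this:

```javascript
const AWS = require('aws-sdk');
const ssm = new AWS.SSM({ region: 'us-east-1' });

async function loadDbConnectionString() {
  // Hypothetical parameter name; WithDecryption is needed for SecureString values.
  const result = await ssm.getParameter({
    Name: '/myapp/db-connection-string',
    WithDecryption: true
  }).promise();
  return result.Parameter.Value;
}

loadDbConnectionString()
  .then(conn => startApp(conn)) // startApp is your own bootstrap function (placeholder)
  .catch(err => {
    // Fail hard at startup if the call is throttled or Parameter Store is unavailable.
    console.error('Could not read secrets from Parameter Store', err);
    process.exit(1);
  });
```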
Or
The best option is AWS Secrets Manager. Secrets Manager is a superset of Parameter Store: it supports RDS password rotation and much more. Note that it is a paid service.
Just checked the unthrottled throughput of SSM. It is not in the spec, but it is ca. 50 req/s.

Create new VM with the AWS API

Is there a way to create or destroy virtual machines with the AWS API? I could not find a very comprehensive resource on it. I am trying to use this to run a dynamic web host.
Yes, this is possible using the EC2 API ('RunInstances' docs here). Alternatively, if you're dealing with an expanding and contracting pool of instances, you might want to look into Auto Scaling Groups (docs here).
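A minimal Node.js sketch of launching and later terminating an instance; the AMI ID and key pair name are placeholders:

```javascript
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Launch one instance from a placeholder AMI.
ec2.runInstances({
  ImageId: 'ami-0123456789abcdef0',
  InstanceType: 't2.micro',
  KeyName: 'my-key-pair',
  MinCount: 1,
  MaxCount: 1
}, (err, data) => {
  if (err) return console.error(err);
  const instanceId = data.Instances[0].InstanceId;
  console.log('Launched', instanceId);

  // Later, destroy it again.
  ec2.terminateInstances({ InstanceIds: [instanceId] }, console.log);
});
```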

Is it possible to launch an RDS instance without a VPC?

I'm trying to insert records into a Postgres database in RDS from a Lambda function. My Node.js lambda function works correctly when run locally, but the database connection times out when run in AWS.
I've read several articles and tutorials which suggest that AWS Lambda functions cannot access RDS instances that are within a VPC. For example: http://ashiina.github.io/2015/01/amazon-lambda-first-impression/
Unfortunately, it seems I am unable to create an RDS instance that exists outside of a VPC. In this dropdown I would expect to be able to select an option for "No VPC" or something along those lines.
Has this option been removed? Perhaps I have missed a step?
You can create a publicly accessible RDS instance. Then you should be able to access it from anywhere, inside or outside AWS. I believe that would get around your issue with Lambda. You are asked whether the instance needs to be publicly accessible when you create a new RDS instance via the web console.
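The same flag can be set from code. A minimal Node.js sketch of creating a publicly accessible Postgres instance (identifier, credentials, and sizing are placeholders; the instance's security group must still allow inbound connections on the Postgres port):

```javascript
const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

rds.createDBInstance({
  DBInstanceIdentifier: 'my-public-postgres', // placeholder
  DBInstanceClass: 'db.t2.micro',
  Engine: 'postgres',
  MasterUsername: 'master',                   // placeholder credentials
  MasterUserPassword: 'change-me-now',
  AllocatedStorage: 20,
  PubliclyAccessible: true                    // the same option the console asks about
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.DBInstance.DBInstanceStatus);
});
```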
Or you could just wait a few weeks, as Lambda within a VPC is supposed to be enabled "later this year".
Edit: Note that newer Amazon accounts are restricted to VPC only resources. You can't create EC2 or RDS instances outside of a VPC anymore. That's why you don't see the "No VPC" option anymore.
Second Edit: VPC access for Lambda functions is now generally available.
This question is from a while back, but for those of you who are using MySQL, you can now connect AWS Lambda with Aurora Serverless without a VPC by using the new Data API. Take a look at this example for details https://coderecipe.ai/architectures/77374273
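Roughly, a query over the Data API with the Node.js SDK looks like the sketch below; the cluster ARN, secret ARN, database name, and SQL are placeholders:

```javascript
const AWS = require('aws-sdk');
const rdsData = new AWS.RDSDataService({ region: 'us-east-1' });

// No VPC configuration needed on the Lambda side: this is a plain HTTPS API call.
rdsData.executeStatement({
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless',
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret',
  database: 'mydb',
  sql: 'SELECT id, name FROM users LIMIT 10'
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.records);
});
```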

Create AWS cache clusters in VPC with CloudFormation

I am creating an AWS stack inside a VPC using CloudFormation and need to create ElastiCache clusters in it. I have investigated and there is no support in CloudFormation for creating cache clusters in VPCs.
Our "workaround" was to create the cache cluster when some "fixed" instance (like a bastion, for example) bootstraps, using cloud-init and the Amazon ElastiCache CLI tools (elasticache-create-cache-subnet-group, elasticache-create-cache-cluster). Then, when front-end machines bootstrap (we are using autoscaling), they use elasticache-describe-cache-clusters to get the cache cluster nodes and update their configuration.
I would like to know if you have different solutions to this problem.
VPC support has now been added for ElastiCache in CloudFormation templates.
To launch an AWS::ElastiCache::CacheCluster in your VPC, create an AWS::ElastiCache::SubnetGroup that defines which subnets of your VPC you want ElastiCache to use, and assign it to the CacheSubnetGroupName property of the AWS::ElastiCache::CacheCluster.
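For example, a minimal template sketch; the subnet IDs, security group, engine, and node settings are placeholders:

```yaml
Resources:
  CacheSubnetGroup:
    Type: AWS::ElastiCache::SubnetGroup
    Properties:
      Description: Subnets for the cache cluster
      SubnetIds:
        - subnet-0123456789abcdef0   # placeholder VPC subnets
        - subnet-0fedcba9876543210
  CacheCluster:
    Type: AWS::ElastiCache::CacheCluster
    Properties:
      Engine: redis
      CacheNodeType: cache.t2.micro
      NumCacheNodes: 1
      CacheSubnetGroupName: !Ref CacheSubnetGroup
      VpcSecurityGroupIds:
        - sg-0123456789abcdef0       # placeholder security group
```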
Your workaround is a reasonable one (and shows that you seem to be in control of your AWS operations already).
You could eventually improve on your custom solution by means of the dedicated CustomResource type, a special AWS CloudFormation resource that provides a way for a template developer to include resources in an AWS CloudFormation stack that are provided by a source other than Amazon Web Services. The AWS CloudFormation Custom Resource Walkthrough provides a good overview of what this is all about, how it works, and what's required to implement your own.
The benefit of using this facade for a custom resource (i.e. the Amazon ElastiCache cluster in your case) is that its entire lifecycle (create/update/delete) can be handled in a similar and controlled fashion, just like any officially supported CloudFormation resource type; e.g. resource creation failures would be handled transparently from the perspective of the entire stack.
However, for the use case at hand you might actually just want to wait for official support to become available:
AWS has announced VPC support for ElastiCache in the context of the recent major Amazon EC2 Update - Virtual Private Clouds for Everyone!, which boils down to Default VPCs for (Almost) Everyone.
We want every EC2 user to be able to benefit from the advanced networking and other features of Amazon VPC that I outlined above. To enable this, starting soon, instances for new AWS customers (and existing customers launching in new Regions) will be launched into the "EC2-VPC" platform. [...]
You don’t need to create a VPC beforehand - simply launch EC2 instances or provision Elastic Load Balancers, RDS databases, or ElastiCache clusters like you would in EC2-Classic and we’ll create a VPC for you at no extra charge. We’ll launch your resources into that VPC [...] [emphasis mine]
This update sort of implies that any new services will likely also be available in VPC right away going forward (otherwise the new EC2-VPC platform wouldn't work automatically for new customers as envisioned).
Accordingly I'd expect the CloudFormation team to follow suit and complete/amend their support for deployment to VPC going forward as well.
My solution for this has been to have a controller process that polls a message queue, which is subscribed to the SNS topic to which I send CloudFormation event notifications (click Advanced in the console when you create a CloudFormation stack to send notifications to an SNS topic).
I pass the required parameters as tags on AWS::EC2::Subnet and have the controller pick them up when the subnet is created. I execute the setup when an AWS::CloudFormation::WaitConditionHandle is created, and use its PhysicalResourceId to cURL a PUT request to satisfy an AWS::CloudFormation::WaitCondition.
It works somewhat, but it doesn't handle resource deletion in ElastiCache, because there is no AWS::CloudFormation::WaitCondition analogue for stack deletion. That remains a manual operational procedure with my approach.
The CustomResource approach looks more polished, but requires an endpoint, which I don't have. If you can put together an endpoint, that looks like the way to go.