To create a Kubernetes cluster in AWS, I use the setup script at https://get.k8s.io. That script creates a new VPC automatically, but I want to create the Kubernetes cluster inside an existing VPC in AWS. Is there a way to do that?
I checked the /kubernetes/cluster/aws/config-default.sh file, but there don't seem to be any environment variables related to the VPC.
You can set this environment variable (we are using version 1.1.8):
export VPC_ID=vpc-YOURID
Note that Kubernetes normally creates a VPC with the 172.20.0.0/16 CIDR block, and I think it expects your existing VPC to use that range as well.
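For example, the full invocation might look like this (a minimal sketch; the VPC ID is a placeholder, and KUBERNETES_PROVIDER=aws is assumed to be the provider setting the script reads):

# Assumed usage; substitute your own VPC ID
export KUBERNETES_PROVIDER=aws
export VPC_ID=vpc-0a1b2c3d
curl -sS https://get.k8s.io | bash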
EC2 instances are used as the backend for Kubernetes in the AWS cloud. You can always launch the necessary number of instances manually and deploy any service on top of them.
The following article describes how to launch your EC2 instance:
http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-launch-instance.html
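If you prefer the CLI to the console walkthrough, a minimal sketch of launching an instance into an existing VPC might look like this (all IDs are placeholders for your own AMI, subnet, and key pair):

# All IDs below are placeholders
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --subnet-id subnet-0123456789abcdef0 \
  --key-name my-key-pair \
  --count 1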
By the way, Amazon already provides a managed service similar to Kubernetes, based on Docker. I suggest you consider using it.
More information here:
https://aws.amazon.com/ecs/details/
I built and deployed my Django application with AWS Elastic Beanstalk. It has a connection to a Postgres DB in RDS, set up through the Elastic Beanstalk console.
When I click Configuration -> Network in Elastic Beanstalk, I see: "This environment is not part of a VPC."
How can I make it a part of a VPC? Thanks!
NOTE: very new to this ;)
You have to recreate the Elastic Beanstalk environment and pick the VPC during creation. It is not possible to move an existing environment into a VPC.
But unless you have access to EC2-Classic, the EC2 servers that were launched are already in a VPC; they are just in the default VPC. As far as Elastic Beanstalk is concerned, though, it seems oblivious to this.
I am not sure whether any features are exclusively available to VPC environments. My suggestion is to keep using your current environment, and if you happen to recreate it later for some other reason, try picking a VPC and see if it offers anything new.
As @stefansundin already explained, you can't move an existing Elastic Beanstalk environment into a custom VPC; you have to create a new one.
These are general steps to consider:
Create a custom VPC with public and private subnets as described in the docs: VPC with public and private subnets (NAT). The NAT is needed so that instances and RDS in the private subnets can reach the internet, while no inbound internet traffic is allowed. This ensures that your instances and RDS are not accessible from the outside.
Create the new RDS instance external to EB. This is good practice, as otherwise the lifetime of your RDS instance is coupled to the EB environment. A starting point is the following AWS documentation: Launching and connecting to an external Amazon RDS instance in a default VPC.
Create the new EB environment and make sure to customize its settings to use the VPC. Pass the RDS endpoint to the EB instances using environment variables. Depending on how you want to handle the password to the RDS instance, there are a few options, ranging from environment variables (low security) through SSM Parameter Store (free) to AWS Secrets Manager (not free).
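A minimal sketch of the Parameter Store option (all names, endpoints, and values are hypothetical placeholders):

# Store the RDS password as an encrypted SecureString parameter
aws ssm put-parameter \
  --name /myapp/prod/DB_PASSWORD \
  --type SecureString \
  --value 'REPLACE_ME'

# Pass the RDS endpoint to the EB instances as an environment variable
aws elasticbeanstalk update-environment \
  --environment-name myapp-prod \
  --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=DB_HOST,Value=mydb.abc123.ap-southeast-2.rds.amazonaws.com

The application then reads DB_HOST from its environment and fetches the password from Parameter Store at startup; note that the instance profile must allow ssm:GetParameter for this to work.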
Setting this all up correctly can be difficult for someone new to AWS, but with patience and practice it can be done. Thus, I would recommend starting with the default VPC, as you have now. Then, once you are comfortable working with an external RDS instance, think about creating a custom VPC as described.
I have an existing environment that I used for development, which was set up using the AWS console (the web interface). I would like to create a new environment for production or staging based on this configuration.
How can I extract the development configuration for use as a template, and manipulate it in order to create a new production environment?
Manually recreating the environment as before would be error prone, and would also leave it undocumented.
I would prefer to extract the current config using the AWS CLI or similar and then manipulate it:
Rename where applicable
Remove irrelevant configurations, such as the default VPC.
My current configuration, from what I recall, consists of:
VPC
Internet Gateway
Private and public subnets
Routing Rules
ACL
Security Policies
RDS, MariaDB
Secure Key Store
Classic ELB, with a certificate
ECR container registry (Docker)
ECS Cluster
Auto Scaling group
EC2 Auto Scaling launch definition
CloudWatch
CloudFormation templates are good for replicating the same resources in different regions or accounts.
You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.
There is a beta tool, CloudFormer, that attempts to generate a CloudFormation template from the resources that already exist in your account.
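To give a feel for the workflow, here is a deliberately minimal sketch; the template declares only a VPC, whereas a real template for the stack above would declare each of the listed resources:

# Write a tiny template, then create a stack from it
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
EOF

aws cloudformation create-stack \
  --stack-name staging-env \
  --template-body file://template.yaml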
I'm trying to create tables automatically with npm migrate whenever we deploy changes with the Serverless Framework. This worked fine when I used a regular Aurora database, but since I moved to Aurora Serverless RDS (Sydney region) it's not working at all, because Aurora Serverless RDS runs inside a VPC, so any Lambda function that needs to access it must be in the same VPC.
PS: we're using Github Action as pipeline to deploy everything to Lambda.
Please let me know how to solve this issue. Thanks.
There are only two basic ways that you can approach this: open a tunnel into the VPC or run your updates inside the VPC. Here are some of the approaches to each that I've used in the past:
Tunnel into the VPC:
VPN, such as OpenVPN.
Relatively easy to set up, but it is designed to connect two networks together, and it represents an always-on charge for the server. This would work well if you're running the migrations from, say, your corporate network, but it is not something you want to try to configure for GitHub Actions (or any third-party build tool).
Bastion host
This is an EC2 instance that runs in a public subnet and exposes SSH to the world. You make an SSH connection to the bastion and then tunnel whatever protocol you want underneath it. It is typically run as an "always on" instance, but you can start and stop it programmatically.
I think this would add a lot of complexity to your build. Assuming that you just want to run it on demand, you'd need a script that starts the instance and waits for it to be ready to accept connections. You would probably also want to adjust the security group ingress rules to allow traffic only from your build machine (whose IP is likely to change with each build). Then you'd have to open the tunnel by running ssh in the background, and close it again after the build is done.
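A sketch of what that tunnel could look like, using an SSH control socket so the build script can close it cleanly (hostnames are placeholders, and the migration command is whatever your project uses):

# Open a background tunnel: local port 3306 -> Aurora endpoint, via the bastion
ssh -M -S /tmp/db-tunnel.sock -f -N \
  -L 3306:aurora.cluster-abc123.ap-southeast-2.rds.amazonaws.com:3306 \
  ec2-user@bastion.example.com

# Run the migration against the local end of the tunnel
DB_HOST=127.0.0.1 DB_PORT=3306 npm run migrate

# Tear the tunnel down once the build is done
ssh -S /tmp/db-tunnel.sock -O exit ec2-user@bastion.example.com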
Running the migration inside the VPC:
The simplest approach (IMO) is to move your build inside the VPC using CodeBuild. If you do this, you'll need a NAT so that the build can talk to the outside world. It's also not as easy as it should be to configure CodeBuild to talk to GitHub (there's one manual step where you need to provide an access token).
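Attaching the build to the VPC is a project-level setting. A hedged sketch, with placeholder names, IDs, and role ARN:

# All names, IDs, and ARNs are placeholders
aws codebuild create-project \
  --name db-migrate \
  --source type=GITHUB,location=https://github.com/your-org/your-repo.git \
  --artifacts type=NO_ARTIFACTS \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL \
  --service-role arn:aws:iam::123456789012:role/codebuild-service-role \
  --vpc-config 'vpcId=vpc-0abc1234,subnets=[subnet-0abc1234],securityGroupIds=[sg-0abc1234]'

The subnets here should be the VPC's private subnets, which is why the NAT is needed for outbound traffic.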
If you're doing a containerized deployment with ECS, then I recommend packaging your migrations in a container and deploying it onto the same cluster that runs the application. You'd then trigger the run with aws ecs run-task (I assume there's something similar for EKS, but I haven't used it).
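For example, a one-off Fargate run of a hypothetical db-migrate task definition, placed in the same subnets as the database (use --launch-type EC2 instead if your cluster runs on instances):

# IDs are placeholders for the database's subnets and security group
aws ecs run-task \
  --cluster app-cluster \
  --task-definition db-migrate \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234]}'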
If you aren't already working with ECS/EKS, then you can implement the same idea with AWS Batch.
Here is an example of how you could approach database schema migration using Amazon API Gateway, AWS Lambda, Amazon Aurora Serverless (MySQL), and the Python CDK.
I am trying to develop Spring Cloud microservices, and I plan to deploy them to the AWS cloud. While reading about AWS resources, I found that ECS provides a configuration-light environment for deploying microservices compared to EC2. My doubts are:
Can I choose ECS for deploying all of my services without configuration?
To create an ECS service, is an EC2 instance mandatory? Can I use ECS in my account without creating an EC2 VM? In other words, is ECS an alternative to EC2?
ECS is a service that offers clustering of VMs for Docker containers and manages the container lifecycle.
1) Yes. You can use ECS for your service deployment; it needs some basic configuration, which is a one-time task.
2) No. To run Docker containers you need EC2 instances; without them it's not possible. The EC2 instances are managed by ECS, so you only need to provide some configuration such as the region, security group, etc.
For complete configuration and deployment steps, refer to the link below:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
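As a rough sketch of what that one-time configuration boils down to once a cluster exists (the names and image are placeholders, not part of the official walkthrough):

# Register a task definition for one microservice
aws ecs register-task-definition \
  --family spring-service \
  --container-definitions '[{"name":"app","image":"your-repo/spring-service:latest","memory":512,"essential":true,"portMappings":[{"containerPort":8080}]}]'

# Run two copies of it as a long-lived service on an existing cluster
aws ecs create-service \
  --cluster my-cluster \
  --service-name spring-service \
  --task-definition spring-service \
  --desired-count 2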
I'm new to AWS, and I want to deploy a web application on an EC2 instance.
So far I've tried Elastic Beanstalk, but AWS always requires me to create a new environment for the application instead of letting me choose an existing EC2 instance that I created before.
Actually, my main purpose is to set up a security group that allows HTTPS access, and I don't know how to apply it to the environment's instance.
Any help is greatly welcome. :)
That is not currently viable: you'd need to set up an AMI based on your instance and use that custom AMI for Beanstalk, which is not a trivial task. If you need to run a custom environment on Elastic Beanstalk, using Docker would be much easier.
But none of that is required to set up a security group allowing HTTPS; you can configure security groups and HTTP(S) listeners for the ELB in your environment's configuration.
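For a classic ELB, the HTTPS listener and certificate can be set through option settings. A sketch with a placeholder environment name and certificate ARN:

# Adds an HTTPS listener on port 443 that forwards to port 80 on the instances
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elb:listener:443,OptionName=ListenerProtocol,Value=HTTPS \
    Namespace=aws:elb:listener:443,OptionName=InstancePort,Value=80 \
    Namespace=aws:elb:listener:443,OptionName=SSLCertificateId,Value=arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE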