I have a Terraform codebase which deploys a private EKS cluster, a bastion host and other AWS services. I have also added a few security groups in Terraform. One of the security groups allows inbound traffic from my home IP to the bastion host so that I can SSH onto that node. This security group is called bastionSG, and that part works fine.
However, initially I am unable to run kubectl from my bastion host, which is the node I use for my Kubernetes development against the EKS cluster. The reason is that my EKS cluster is private and only allows communication from nodes in the same VPC, so I need a security group rule that allows communication from my bastion host to the cluster control plane, which is where bastionSG comes in.
So my routine now is: once Terraform has deployed everything, I find the automatically generated EKS security group and add bastionSG as an inbound rule to it through the AWS Console (UI).
I would like to NOT have to do this through the UI, as I am already using Terraform to deploy my entire infrastructure.
I know I can query an existing security group like this:
data "aws_security_group" "selectedSG" {
id = var.security_group_id
}
In this case, let's say selectedSG is the security group created by EKS once Terraform has completed the apply. I would like to then add an inbound rule for bastionSG to it without overwriting the rules EKS added automatically.
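One way to do that, sketched under the assumption that the bastion group's ID is available as var.bastionSG_id and that the bastion only needs to reach the Kubernetes API on port 443, is a standalone rule resource attached to the looked-up group:
# Adds a single ingress rule to the EKS-created security group
resource "aws_security_group_rule" "bastion_to_eks_api" {
  type                     = "ingress"
  description              = "Bastion host to EKS control plane"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = var.bastionSG_id                     # bastionSG, passed in as a variable
  security_group_id        = data.aws_security_group.selectedSG.id
}
Because aws_security_group_rule manages individual rules rather than the whole group, it does not overwrite the rules EKS created.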
UPDATE:
EKS node group
resource "aws_eks_node_group" "flmd_node_group" {
cluster_name = var.cluster_name
node_group_name = var.node_group_name
node_role_arn = var.node_pool_role_arn
subnet_ids = [var.flmd_private_subnet_id]
instance_types = ["t2.small"]
scaling_config {
desired_size = 3
max_size = 3
min_size = 3
}
update_config {
max_unavailable = 1
}
remote_access {
ec2_ssh_key = "MyPemFile"
source_security_group_ids = [
var.allow_tls_id,
var.allow_http_id,
var.allow_ssh_id,
var.bastionSG_id
]
}
tags = {
"Name" = "flmd-eks-node"
}
}
As shown above, the EKS node group has the bastionSG security group in it, which I expected to allow the connection from my bastion host to the EKS control plane.
EKS Cluster
resource "aws_eks_cluster" "flmd_cluster" {
name = var.cluster_name
role_arn = var.role_arn
vpc_config {
subnet_ids =[var.flmd_private_subnet_id, var.flmd_public_subnet_id, var.flmd_public_subnet_2_id]
endpoint_private_access = true
endpoint_public_access = false
security_group_ids = [ var.bastionSG_id]
}
}
bastionSG_id is an output of the security group created below, which is passed into the code above as a variable.
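In sketch form, that wiring might look something like this (assuming the security group and the EKS cluster are defined in separate modules; only the output and matching variable are shown):
# In the module that defines bastionSG (shown below)
output "bastionSG_id" {
  value = aws_security_group.bastionSG.id
}

# In the module that creates the EKS cluster
variable "bastionSG_id" {
  type        = string
  description = "ID of the bastion security group"
}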
BastionSG security group
resource "aws_security_group" "bastionSG" {
name = "Home to bastion"
description = "Allow SSH - Home to Bastion"
vpc_id = var.vpc_id
ingress {
description = "Home to bastion"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [<MY HOME IP address>]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "Home to bastion"
}
}
Let's start by creating a public security group.
################################################################################
# Create the Security Group
################################################################################
resource "aws_security_group" "public" {
vpc_id = local.vpc_id
name = format("${var.name}-${var.public_security_group_suffix}-SG")
description = format("${var.name}-${var.public_security_group_suffix}-SG")
dynamic "ingress" {
for_each = var.public_security_group_ingress
content {
cidr_blocks = lookup(ingress.value, "cidr_blocks", [])
ipv6_cidr_blocks = lookup(ingress.value, "ipv6_cidr_blocks", [])
from_port = lookup(ingress.value, "from_port", 0)
to_port = lookup(ingress.value, "to_port", 0)
protocol = lookup(ingress.value, "protocol", "-1")
}
}
dynamic "egress" {
for_each = var.public_security_group_egress
content {
cidr_blocks = lookup(egress.value, "cidr_blocks", [])
ipv6_cidr_blocks = lookup(egress.value, "ipv6_cidr_blocks", [])
from_port = lookup(egress.value, "from_port", 0)
to_port = lookup(egress.value, "to_port", 0)
protocol = lookup(egress.value, "protocol", "-1")
}
}
tags = merge(
{
"Name" = format(
"${var.name}-${var.public_security_group_suffix}-SG",
)
},
var.tags,
)
}
Now create a private security group, allowing inbound traffic from the public security group and outbound traffic to the ElastiCache and RDS security groups.
resource "aws_security_group" "private" {
vpc_id = local.vpc_id
name = format("${var.name}-${var.private_security_group_suffix}-SG")
description = format("${var.name}-${var.private_security_group_suffix}-SG")
ingress {
security_groups = [aws_security_group.public.id]
from_port = 0
to_port = 0
protocol = "-1"
}
dynamic "ingress" {
for_each = var.private_security_group_ingress
content {
cidr_blocks = lookup(ingress.value, "cidr_blocks", [])
ipv6_cidr_blocks = lookup(ingress.value, "ipv6_cidr_blocks", [])
from_port = lookup(ingress.value, "from_port", 0)
to_port = lookup(ingress.value, "to_port", 0)
protocol = lookup(ingress.value, "protocol", "-1")
}
}
dynamic "egress" {
for_each = var.private_security_group_egress
content {
cidr_blocks = lookup(egress.value, "cidr_blocks", [])
ipv6_cidr_blocks = lookup(egress.value, "ipv6_cidr_blocks", [])
from_port = lookup(egress.value, "from_port", 0)
to_port = lookup(egress.value, "to_port", 0)
protocol = lookup(egress.value, "protocol", "-1")
}
}
egress {
security_groups = [aws_security_group.elsaticache_private.id] # it communicates via network interfaces
from_port = 6379 # redis port
to_port = 6379
protocol = "tcp"
}
egress {
security_groups = [aws_security_group.rds_mysql_private.id]
from_port = 3306
to_port = 3306
protocol = "tcp"
}
tags = merge(
{
"Name" = format(
"${var.name}-${var.private_security_group_suffix}-SG"
)
},
var.tags,
)
depends_on = [aws_security_group.elsaticache_private, aws_security_group.rds_mysql_private]
}
Next, create the ElastiCache security group with just an egress rule, and add the ingress rule from the private security group as a separate aws_security_group_rule, since that resolves the circular dependency. The same goes for the RDS security group.
resource "aws_security_group" "elsaticache_private" {
vpc_id = local.vpc_id
name = format("${var.name}-${var.private_security_group_suffix}-elasticache-SG")
description = format("${var.name}-${var.private_security_group_suffix}-elasticache-SG")
egress {
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
from_port = 0
to_port = 0
protocol = "-1"
}
tags = merge(
{
"Name" = format(
"${var.name}-${var.public_security_group_suffix}-elasticache-SG",
)
},
var.tags,
)
}
resource "aws_security_group_rule" "elsaticache_private_rule" {
type = "ingress"
from_port = 6379 # redis port
to_port = 6379
protocol = "tcp"
source_security_group_id = aws_security_group.private.id
security_group_id = aws_security_group.elsaticache_private.id
depends_on = [aws_security_group.private]
}
resource "aws_security_group" "rds_mysql_private" {
vpc_id = local.vpc_id
name = format("${var.name}-${var.private_security_group_suffix}-rds-mysql-SG")
description = format("${var.name}-${var.private_security_group_suffix}-rds-mysql-SG")
egress {
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
from_port = 0
to_port = 0
protocol = "-1"
}
tags = merge(
{
"Name" = format(
"${var.name}-${var.public_security_group_suffix}-rds-mysql-SG",
)
},
var.tags,
)
}
resource "aws_security_group_rule" "rds_mysql_private_rule" {
type = "ingress"
from_port = 3306 # mysql / aurora port
to_port = 3306
protocol = "tcp"
source_security_group_id = aws_security_group.private.id
security_group_id = aws_security_group.rds_mysql_private.id
depends_on = [aws_security_group.private]
}
There was a simpler solution.
Query AWS using a Terraform data source to get the ID of the security group, then use that ID to create an aws_security_group_rule in Terraform with the required inbound rule.
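A hedged sketch of that approach: read the cluster (and with it the ID of the security group EKS created) through a data source, then attach the inbound rule to that group with aws_security_group_rule, which manages single rules and leaves the automatically created rules alone. The names here are illustrative.
# Look up the cluster after it has been created
data "aws_eks_cluster" "selected" {
  name = var.cluster_name
}

# Add the bastion ingress rule to the EKS-managed security group
resource "aws_security_group_rule" "bastion_to_control_plane" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = var.bastionSG_id
  security_group_id        = data.aws_eks_cluster.selected.vpc_config[0].cluster_security_group_id
}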
I am currently learning Terraform and I need help with the code below. I want to create a simple architecture: an Auto Scaling group of EC2 instances behind an Application Load Balancer. The setup completes, but when I try to access the application endpoint, the request times out. When I tried to access the EC2 instances directly, I was unable to (because the instances were in a security group allowing access from the ALB security group only). I changed the instance security group ingress rules, ran the user_data script manually, and then reverted the security group changes to complete my setup.
My question is: why does my setup not work with the code below? Is access being restricted by the load balancer security group, or is my launch configuration block incorrect?
data "aws_ami" "amazon-linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-kernel-5.10-hvm-2.0.20220426.0-x86_64-gp2"]
}
}
data "aws_availability_zones" "available" {
state = "available"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.0"
name = "main-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
public_subnets = ["10.0.4.0/24","10.0.5.0/24","10.0.6.0/24"]
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_launch_configuration" "TestLC" {
name_prefix = "Lab-Instance-"
image_id = data.aws_ami.amazon-linux.id
instance_type = "t2.nano"
key_name = "CloudformationKeyPair"
user_data = file("./user_data.sh")
security_groups = [aws_security_group.TestInstanceSG.id]
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "TestASG" {
min_size = 1
max_size = 3
desired_capacity = 2
launch_configuration = aws_launch_configuration.TestLC.name
vpc_zone_identifier = module.vpc.public_subnets
}
resource "aws_lb_listener" "TestListener"{
load_balancer_arn = aws_lb.TestLB.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.TestTG.arn
}
}
resource "aws_lb" "TestLB" {
name = "Lab-App-Load-Balancer"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
subnets = module.vpc.public_subnets
}
resource "aws_lb_target_group" "TestTG" {
name = "LabTargetGroup"
port = "80"
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
}
resource "aws_autoscaling_attachment" "TestAutoScalingAttachment" {
autoscaling_group_name = aws_autoscaling_group.TestASG.id
lb_target_group_arn = aws_lb_target_group.TestTG.arn
}
resource "aws_security_group" "TestInstanceSG" {
name = "LAB-Instance-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
vpc_id = module.vpc.vpc_id
}
resource "aws_security_group" "TestLoadBalanceSG" {
name = "LAB-LoadBalancer-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = module.vpc.vpc_id
}
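One thing that stands out in the code above: the instance security group's only egress rule points at the load balancer's security group, so the instances have no outbound path to the internet, and a user_data script that downloads packages would hang, which would match the manual workaround described in the question. A hedged sketch of the instance security group with open egress (assuming the script only needs outbound access; ingress stays restricted to the ALB):
resource "aws_security_group" "TestInstanceSG" {
  name   = "LAB-Instance-SecurityGroup"
  vpc_id = module.vpc.vpc_id

  # Only the load balancer may reach the instances on port 80
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.TestLoadBalanceSG.id]
  }

  # Unrestricted egress so user_data can download packages
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}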
I am running Tableau Server 2021-1-2 on an EC2 instance.
I can connect using the default public IP on port 80, and also on port 8850 for the Tableau TSM UI, and the same using the hostname I defined. The only issue is that, despite following several guides, I can't connect using HTTPS.
I set up the ports on the security group, the load balancer and the certificate. I waited for hours, as I had read that the SSL certificate could take more than half an hour to become active, and still nothing.
I can connect using:
http://my_domain.domain
But not:
https://my_domain.domain
I receive the following error in the browser: Can't connect to the server https://my_domain.domain.
I run curl -i https://my_domain.domain
It returns:
curl: (7) Failed to connect to my_domain.domain port 443: Connection refused
The security group of my instance opens the ports you can see in the Terraform below.
Here is my Terraform setup.
I did the EC2 setup with:
resource "aws_instance" "tableau" {
ami = var.ami
instance_type = var.instance_type
associate_public_ip_address = true
key_name = var.key_name
subnet_id = compact(split(",", var.public_subnets))[0]
vpc_security_group_ids = [aws_security_group.tableau-sg.id]
root_block_device{
volume_size = var.volume_size
}
tags = {
Name = var.namespace
}
}
I created the load balancer setup using:
resource "aws_lb" "tableau-lb" {
name = "${var.namespace}-alb"
load_balancer_type = "application"
internal = false
subnets = compact(split(",", var.public_subnets))
security_groups = [aws_security_group.tableau-sg.id]
ip_address_type = "ipv4"
enable_cross_zone_load_balancing = true
lifecycle {
create_before_destroy = true
}
idle_timeout = 300
}
resource "aws_alb_listener" "https" {
depends_on = [aws_alb_target_group.target-group]
load_balancer_arn = aws_lb.tableau-lb.arn
protocol = "HTTPS"
port = "443"
ssl_policy = "my_ssl_policy"
certificate_arn = "arn:xxxx"
default_action {
target_group_arn = aws_alb_target_group.target-group.arn
type = "forward"
}
lifecycle {
ignore_changes = [
default_action.0.target_group_arn,
]
}
}
resource "aws_alb_target_group" "target-group" {
name = "${var.namespace}-group"
port = 80
protocol = "HTTP"
vpc_id = var.vpc_id
target_type = "instance"
health_check {
healthy_threshold = var.health_check_healthy_threshold
unhealthy_threshold = var.health_check_unhealthy_threshold
timeout = var.health_check_timeout
interval = var.health_check_interval
path = var.path
}
tags = {
Name = var.namespace
}
lifecycle {
create_before_destroy = false
}
depends_on = [aws_lb.tableau-lb]
}
resource "aws_lb_target_group_attachment" "tableau-attachment" {
target_group_arn = aws_alb_target_group.target-group.arn
target_id = aws_instance.tableau.id
port = 80
}
The security group:
resource "aws_security_group" "tableau-sg" {
name_prefix = "${var.namespace}-sg"
tags = {
Name = var.namespace
}
vpc_id = var.vpc_id
# SSH from anywhere
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# TSM UI (8850) from anywhere
ingress {
from_port = 8850
to_port = 8850
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# 443 secure access from anywhere
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Outbound internet access
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
lifecycle {
create_before_destroy = true
}
}
I also set up a hostname using:
resource "aws_route53_record" "tableau-record-dns" {
zone_id = var.route_53_zone_id
name = "example.hostname"
type = "A"
ttl = "300"
records = [aws_instance.tableau.public_ip]
}
resource "aws_route53_record" "tableau-record-dns-https" {
zone_id = var.route_53_zone_id
name = "asdf.example.hostname"
type = "CNAME"
ttl = "300"
records = ["asdf.acm-validations.aws."]
}
Finally solved the issue; it was related to the A record. I was assigning an IP address there, and it's not possible to route through the load balancer that way. I pointed the record at the load balancer instead and it worked fine.
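For reference, a sketch of what the corrected record could look like, pointing the A record at the ALB as an alias instead of at the instance's public IP (zone and record names as in the question):
resource "aws_route53_record" "tableau-record-dns" {
  zone_id = var.route_53_zone_id
  name    = "example.hostname"
  type    = "A"

  # Alias to the load balancer instead of a fixed instance IP
  alias {
    name                   = aws_lb.tableau-lb.dns_name
    zone_id                = aws_lb.tableau-lb.zone_id
    evaluate_target_health = true
  }
}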
I've been trying to create an RDS instance using Terraform. The problem I've been struggling with is that every time I create a new instance, it's not reachable.
I'm creating it in a subnet group containing public and private subnets, the security group has a rule allowing access from my IP, and there is an internet gateway in the VPC.
The strangest thing is that to fix it, I just need to change the instance class in the AWS console to any other class, e.g. from db.t2.small to db.t2.micro, and it suddenly starts working.
Here is a fragment of my code:
resource "aws_db_subnet_group" "dbSubnetGroup" {
name = "${var.prefix}-db-subnet-group"
subnet_ids = concat(aws_subnet.publicSubnet.*.id, aws_subnet.privateSubnet.*.id)
tags = var.defaultTags
}
resource "aws_security_group" "rdsSecurityGroup" {
name = "${var.prefix}-rds-sg"
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 1433
to_port = 1433
protocol = "tcp"
security_groups = [aws_eks_cluster.eksCluster.vpc_config[0].cluster_security_group_id]
}
ingress {
from_port = 1433
to_port = 1433
protocol = "tcp"
cidr_blocks = [var.myIP]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = var.defaultTags
}
resource "random_password" "rdsPassword" {
length = 32
special = true
override_special = "!#$%&*()-_=+[]{}<>:?"
}
resource "aws_db_instance" "dbInstance" {
allocated_storage = 20
storage_type = "gp2"
engine = var.dbInstanceEngine
license_model = "license-included"
instance_class = var.dbInstanceType
identifier = "${var.prefix}-db-instance"
username = var.dbUserName
password = random_password.rdsPassword.result
tags = var.defaultTags
db_subnet_group_name = aws_db_subnet_group.dbSubnetGroup.name
vpc_security_group_ids = [aws_security_group.rdsSecurityGroup.id]
skip_final_snapshot = true
allow_major_version_upgrade = true
copy_tags_to_snapshot = true
performance_insights_enabled = true
max_allocated_storage = 1000
enabled_cloudwatch_logs_exports = ["error"]
publicly_accessible = true
}
Am I doing something wrong, or could it be a bug in the AWS provider?
If you want the RDS instance to be publicly connectable, the DB subnet group must contain only public subnets.
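A sketch of the adjusted subnet group, reusing the resources from the question and keeping only the public subnets:
resource "aws_db_subnet_group" "dbSubnetGroup" {
  name       = "${var.prefix}-db-subnet-group"
  subnet_ids = aws_subnet.publicSubnet.*.id   # public subnets only
  tags       = var.defaultTags
}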
I am running a Spring Cloud Config server on AWS, which is just a Docker container running a Spring Boot app. It reads properties from a Git repo. Our client applications read config from the server on startup and intermittently at runtime. About a third of the time, the client apps will time out when pulling config at startup, causing the app to crash. At runtime, the apps seem to succeed about 4 out of 5 times, though they will just use existing config if a request fails.
I am using an EC2 instance behind an ALB which handles SSL termination. I was originally using a t3.micro, but upgraded to an m5.large, guessing that the t3 class might not support continuous availability.
The ALB required 2 subnets, so I created a second one with nothing in it initially. I am unsure if the ALB will attempt to route to the second subnet at some point, which could be causing the failures. The target group is using a health check which returns correctly, but I don't know enough about ALBs to rule out round-robining to an empty subnet. I attempted to create a second EC2 instance in the second subnet to parallel my first config server; however, I was unable to SSH into the second instance even though it uses the same security group and config as the first. I'm not sure why that failed, but I'm guessing there is something else wrong with my setup.
All infrastructure was deployed with Terraform, which I have included below.
resources.tf
provider "aws" {
region = "us-east-2"
version = ">= 2.38.0"
}
data "aws_ami" "amzn_linux" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-hvm-2.0.*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["137112412989"]
}
resource "aws_vpc" "config-vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_security_group" "config_sg" {
name = "config-sg"
description = "http, https, and ssh"
vpc_id = aws_vpc.config-vpc.id
ingress {
from_port = 9000
to_port = 9000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_subnet" "subnet-alpha" {
cidr_block = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 1)
vpc_id = aws_vpc.config-vpc.id
availability_zone = "us-east-2a"
}
resource "aws_subnet" "subnet-beta" {
cidr_block = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 2)
vpc_id = aws_vpc.config-vpc.id
availability_zone = "us-east-2b"
}
resource "aws_internet_gateway" "config-vpc-ig" {
vpc_id = aws_vpc.config-vpc.id
}
resource "aws_route_table" "config-vpc-rt" {
vpc_id = aws_vpc.config-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.config-vpc-ig.id
}
}
resource "aws_route_table_association" "subnet-association-alpha" {
subnet_id = aws_subnet.subnet-alpha.id
route_table_id = aws_route_table.config-vpc-rt.id
}
resource "aws_route_table_association" "subnet-association-beta" {
subnet_id = aws_subnet.subnet-beta.id
route_table_id = aws_route_table.config-vpc-rt.id
}
resource "aws_alb" "alb" {
name = "config-alb"
subnets = [aws_subnet.subnet-alpha.id, aws_subnet.subnet-beta.id]
security_groups = [aws_security_group.config_sg.id]
}
resource "aws_alb_target_group" "alb_target_group" {
name = "config-tg"
port = 9000
protocol = "HTTP"
vpc_id = aws_vpc.config-vpc.id
health_check {
enabled = true
path = "/actuator/health"
port = 9000
protocol = "HTTP"
}
}
resource "aws_instance" "config_server_alpha" {
ami = data.aws_ami.amzn_linux.id
instance_type = "m5.large"
vpc_security_group_ids = [aws_security_group.config_sg.id]
key_name = "config-ssh"
subnet_id = aws_subnet.subnet-alpha.id
associate_public_ip_address = true
}
resource "aws_instance" "config_server_beta" {
ami = data.aws_ami.amzn_linux.id
instance_type = "m5.large"
vpc_security_group_ids = [aws_security_group.config_sg.id]
key_name = "config-ssh"
subnet_id = aws_subnet.subnet-beta.id
associate_public_ip_address = true
}
resource "aws_alb_target_group_attachment" "config-target-alpha" {
target_group_arn = aws_alb_target_group.alb_target_group.arn
target_id = aws_instance.config_server_alpha.id
port = 9000
}
resource "aws_alb_target_group_attachment" "config-target-beta" {
target_group_arn = aws_alb_target_group.alb_target_group.arn
target_id = aws_instance.config_server_beta.id
port = 9000
}
resource "aws_alb_listener" "alb_listener_80" {
load_balancer_arn = aws_alb.alb.arn
port = 80
default_action {
type = "redirect"
redirect {
port = 443
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_alb_listener" "alb_listener_8080" {
load_balancer_arn = aws_alb.alb.arn
port = 8080
default_action {
type = "redirect"
redirect {
port = 443
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_alb_listener" "alb_listener_https" {
load_balancer_arn = aws_alb.alb.arn
port = 443
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = "arn:..."
default_action {
target_group_arn = aws_alb_target_group.alb_target_group.arn
type = "forward"
}
}
config server
@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication {
public static void main(String[] args) {
SpringApplication.run(ConfigserverApplication.class, args);
}
}
application.yml
spring:
profiles:
active: local
---
spring:
profiles: local, default, cloud
cloud:
config:
server:
git:
uri: ...
searchPaths: '{application}/{profile}'
username: ...
password:...
security:
user:
name: admin
password: ...
server:
port: 9000
management:
endpoint:
health:
show-details: always
info:
git:
mode: FULL
bootstrap.yml
spring:
application:
name: config-server
encrypt:
key: |
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----