I am unable to get the ALB URL. The address is empty - amazon-web-services

I am trying to follow this guide:
https://aws.amazon.com/blogs/containers/using-alb-ingress-controller-with-amazon-eks-on-fargate/
Steps below:
Cluster provisioning
AWS_REGION=us-east-1
CLUSTER_NAME=eks-fargate-alb-demo
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --fargate
kubectl get svc
You should get the following response:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   16h
Set up OIDC provider with the cluster and create the IAM policy used by the ALB Ingress Controller
wget -O alb-ingress-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-ingress-iam-policy.json
STACK_NAME=eksctl-$CLUSTER_NAME-cluster
VPC_ID=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" | jq -r '[.Stacks[0].Outputs[] | {key: .OutputKey, value: .OutputValue}] | from_entries' | jq -r '.VPC')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
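Before generating the manifests below, it is worth confirming these variables resolved to real values (empty values here are a common cause of the problem described later):
echo "VPC_ID=$VPC_ID"
echo "AWS_ACCOUNT_ID=$AWS_ACCOUNT_ID"
# Both should print non-empty values; an empty VPC_ID leads to a controller
# manifest with a blank --aws-vpc-id flag further down.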
cat > rbac-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
EOF
kubectl apply -f rbac-role.yaml
These commands will create two resources for us and the output should be similar to this:
clusterrole.rbac.authorization.k8s.io/alb-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/alb-ingress-controller created
And finally the Kubernetes Service Account:
eksctl create iamserviceaccount \
--name alb-ingress-controller \
--namespace kube-system \
--cluster $CLUSTER_NAME \
--attach-policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy \
--approve
This eksctl command will deploy a new CloudFormation stack with an IAM role. Wait for it to finish before continuing with the next steps.
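Once the stack is created, the service account should exist in kube-system and carry an eks.amazonaws.com/role-arn annotation; a quick way to check, assuming the names used above:
kubectl describe serviceaccount alb-ingress-controller -n kube-system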
Deploy the ALB Ingress Controller
Let’s now deploy the ALB Ingress Controller to our cluster:
cat > alb-ingress-controller.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --cluster-name=$CLUSTER_NAME
            - --aws-vpc-id=$VPC_ID
            - --aws-region=$AWS_REGION
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
      serviceAccountName: alb-ingress-controller
EOF
kubectl apply -f alb-ingress-controller.yaml
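A quick way to verify the controller is actually running before continuing (using the names from the manifest above):
kubectl get pods -n kube-system -l app.kubernetes.io/name=alb-ingress-controller
kubectl logs -n kube-system deployment/alb-ingress-controller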
Deploy sample application to the cluster
Now that we have our ingress controller running, we can deploy the application to the cluster and create an ingress resource to expose it.
Let’s start with a deployment:
cat > nginx-deployment.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "nginx-deployment"
  namespace: "default"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: Always
          name: "nginx"
          ports:
            - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml
The output should be similar to:
deployment.extensions/nginx-deployment created
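A quick check that the pods were actually scheduled on Fargate before moving on (this step is not in the guide; Fargate pods can take a minute or two to start):
kubectl get pods -l app=nginx -o wide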
Then, let’s create a service so we can expose the NGINX pods:
cat > nginx-service.yaml <<-EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
EOF
kubectl apply -f nginx-service.yaml
The output will be similar to:
service/nginx-service created
Finally, let’s create our ingress resource:
cat > nginx-ingress.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "nginx-service"
              servicePort: 80
EOF
kubectl apply -f nginx-ingress.yaml
The output will be:
ingress.extensions/nginx-ingress created
Once everything is done, you will be able to get the ALB URL by running the following command:
kubectl get ingress nginx-ingress
The output of this command will be similar to this one:
NAME            HOSTS   ADDRESS                                                                PORTS   AGE
nginx-ingress   *       5e07dbe1-default-ngnxingr-2e9-113757324.us-east-2.elb.amazonaws.com   80      9s
But I am unable to get the ALB URL in this step:
kubectl get ingress nginx-ingress
Any help? Thanks in advance.

I had the same issue and I fixed it by updating alb-ingress-controller.yaml. I replaced $CLUSTER_NAME, $VPC_ID, and $AWS_REGION with their actual values.
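In other words, if those shell variables were empty when the heredoc was rendered, the controller starts with blank flags and never provisions an ALB. A rough way to confirm and fix this (deployment name and namespace as in the manifests above; the VPC ID below is only a placeholder):
# Inspect the flags the controller was actually started with:
kubectl get deployment alb-ingress-controller -n kube-system -o yaml | grep -A 4 'args:'
# If any flag is blank, hard-code the values in alb-ingress-controller.yaml, e.g.:
#   --cluster-name=eks-fargate-alb-demo
#   --aws-vpc-id=vpc-xxxxxxxx
#   --aws-region=us-east-1
# then re-apply and watch the controller logs and ingress events for ALB creation errors:
kubectl apply -f alb-ingress-controller.yaml
kubectl logs -n kube-system deployment/alb-ingress-controller
kubectl describe ingress nginx-ingress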

Related

AWS Elastic Load Balancer name not getting reflected after creation

We are trying to create an Istio load balancer with the help of a YAML file. After running the YAML file, the name of the load balancer specified in the service annotation is not getting reflected on the AWS console. What could be the possible reason?
We have used the below code to create the load balancer:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: TEXT
  profile: demo
  hub: <jfrog-repo>
  namespace: istio-system
  value:
    global:
      imagePullSecrets: ["mysecret"]
  components:
    ingressGateways:
      - name: istio-ingressgateway
        k8s:
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
            service.beta.kubernetes.io/aws-load-balancer-internal: "true"
            service.beta.kubernetes.io/aws-load-balancer-name: "my-istio-lb"
            service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-abc012"
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
            service.beta.kubernetes.io/aws-load-balancer-subnets: sub-abc01234,sub-qwe56789
            service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
            service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "mybucket"

Unable to connect to AWS services from EKS

I have created an EKS cluster using eksctl. I am following these steps to establish connectivity to AWS services like S3 and CloudWatch using Spring Boot.
1. Create EKS using eksctl - this has my service account details and OIDC enabled.
2. List the service accounts to see if they were created fine.
3. Create a deployment using the account name.
4. Create a service.
I am seeing a 403 in the logs:
User: arn:aws:sts:xxx/xxxx is not authorized to perform:
cloudformation:DescribeStackResources because no identity-based policy allows
the cloudformation:DescribeStackResources action (Service: AmazonCloudFormation; Status Code: 403;
Error Code: AccessDenied; Request ID: xxxx)
Can I get some help here to troubleshoot this issue, please?
What I have figured out after posting this issue is that my node, which is provisioned by eksctl, has an IAM role applied to it. This is the role my app is picking up due to the default credential chain.
What I still haven't figured out is how to enable the apps in the pod to assume the service account role.
YAML for #1
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: name
  region: ap-south-1
availabilityZones: ["xxxx", "xxxx", "xxxx"]
managedNodeGroups:
  - name: c5large-nodes
    desiredCapacity: 1
    instanceType: c5.large
    labels:
      node-type: large
    volumeSize: 5
cloudWatch:
  clusterLogging:
    enableTypes: [ "*" ]
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: cluster-autoscaler
        namespace: kube-system
        labels: {aws-usage: "autoscaling"}
      wellKnownPolicies:
        autoScaler: true
      roleName: eksctl-cluster-autoscaler-role
      roleOnly: true
    - metadata:
        name: backend-stage-iam-role
        namespace: backend-stage
        labels: { aws-usage: "all-backend-allow" }
      attachPolicyARNs:
        - "arn:aws:iam::xxxx"
    - metadata:
        name: mq-access
        namespace: backend-stage
        labels: { aws-usage: "MQ" }
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AmazonMQFullAccess"
YAML for deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
  namespace: backend-stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: backend-stage-iam-role
      containers:
        - image: xxx/my-app:latest
          imagePullPolicy: Always
          name: my-app
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: stage
YAML for service
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: backend-stage
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
The role is defined like this for now:
- Effect: Allow
  Action:
    - cloudformation:*
  Resource: "*"
I did further debugging by describing the pod, and I can see the role passed as an environment variable:
AWS_ROLE_ARN: arn:aws:iam::MYACCOUNT:role/MyRole
Just add the missing permission to the arn:aws:sts:xxx/xxxx assumed role.
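For example, something along these lines attaches the missing action to the role the pod assumes (MyRole is taken from the AWS_ROLE_ARN above; the policy name is made up for illustration):
# Attach an inline policy allowing the action reported in the 403 error:
aws iam put-role-policy \
  --role-name MyRole \
  --policy-name allow-cfn-describe \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"cloudformation:DescribeStackResources","Resource":"*"}]}'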

Dynamically set External dns for EKS fargate ingress alb using external-dns.alpha.kubernetes.io

I am trying to set up external DNS from an EKS manifest file.
I created an EKS cluster and created 3 Fargate profiles: default, kube-system, and dev.
CoreDNS pods are up and running.
I then installed AWS Load Balancer Controller by following this doc.
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
The load balancer controller came up in kube-system.
I then installed external-dns deployment using the following manifest file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/eks-externaldnscontrollerRole-XST756O4A65B
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: bitnami/external-dns:0.7.1
          args:
            - --source=service
            - --source=ingress
            #- --domain-filter=xxxxxxxxxx.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            #- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=my-identifier
          #securityContext:
          #  fsGroup: 65534
I tried both namespaces, kube-system and dev, for external-dns; it came up fine in both.
I then deployed the application and ingress manifest files, again trying both namespaces, kube-system and dev.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
            limits:
              memory: "500Mi"
              cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  namespace: kube-system
  annotations:
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
----------
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  namespace: kube-system
  annotations:
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0xxxxxxxxxx:certificate/0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
    # SSL Redirect Setting
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # External DNS - For creating a Record Set in Route53
    external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com
    # For Fargate
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /* # SSL Redirect Setting
            pathType: ImplementationSpecific
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
All pods came up fine, but it is not dynamically registering the DNS alias for the ALB.
Can you please guide me on what I am doing wrong?
First, check the ingress itself works. Check the AWS load balancers and load balancer target groups. The target group targets should be active.
If you do a kubectl get ingress, this should also output the DNS name of the load balancer that was created.
Use curl to check this URL works!
The annotation external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com does not work for ingresses. It is only valid for services. But you don't need it. Just specify the host field in the ingress spec.rules. In your example there is no such property. Specify it!
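A minimal sketch of what that could look like, assuming palb.xxxxxxx.com is the hostname ExternalDNS should register (only the rules section of the ingress above changes):
spec:
  rules:
    - host: palb.xxxxxxx.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80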

Kubernetes ingress in AWS

Please help me to deal with the accessibility of my simple application.
I created a YAML file with the application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
          ports:
            - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Service
apiVersion: v1
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-service
              servicePort: 80
          - path: /hello
            backend:
              serviceName: myapp-service
              servicePort: 80
Then I created a k8s cluster via kops like this; all k8s services came up, and I can access the master:
kops create cluster \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro \
  --master-count=1 \
  --zones=us-east-1a \
  --name=${KOPS_CLUSTER_NAME}
In the end, I can't reach the application on port 80; it says the connection is refused!
Can someone tell me what the problem is? The YAML above fully works, but only in the minikube environment.
Indeed you have created an Ingress resource, but I presume you have not first deployed the NGINX Ingress Controller for your self-managed cluster on AWS. It's explained here how to do this in general.
In case of a Kubernetes cluster bootstrapped with kops, things are more complex: you need to modify the existing cluster to use a dedicated kops add-on, kube-ingress-aws-controller, as explained on their GitHub project page here.
In its current form, your app can be reached only via the Node/AWS instance external IP on a port assigned from the default NodePort range (30000-32767). You can check the currently assigned port via kubectl get svc myapp-service, but this requires opening it first in the firewall (the default inbound rules deny all traffic apart from SSH). Based on your deploy/service manifest files:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
myapp-service   NodePort   100.64.187.80   <none>        80:32076/TCP   37m
With port 32076 open in the inbound rules of the Security Group assigned to my instance, I can now reach the app on the NodePort:
curl <node_external_ip>:32076
Hostname: myapp-test-f87bcbd44-8nxpn
Pod Information:
-no pod information available-
Server values:
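For reference, opening that NodePort in the instance's Security Group can be done with something like this (the security group ID is a placeholder; 32076 is the port shown in the kubectl get svc output above):
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 32076 \
  --cidr 0.0.0.0/0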

kubectl - cert manager - credentials not found

I want to have TLS termination enabled on ingress (on top of Kubernetes) on Google Cloud Platform.
My ingress cluster is working, but my cert-manager is failing with the error message:
textPayload: "2018/07/05 22:04:00 Error while processing certificate during sync: Error while creating ACME client for 'domain': Error while initializing challenge provider googlecloud: Unable to get Google Cloud client: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /opt/google/kube-cert-manager.json: no such file or directory
"
This is what I did in order to get into the current state:
created cluster, deployment, service, ingress
executed:
gcloud --project 'project' iam service-accounts create kube-cert-manager-sv-security --display-name "kube-cert-manager-sv-security"
gcloud --project 'project' iam service-accounts keys create ~/.config/gcloud/kube-cert-manager-sv-security.json --iam-account kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com
gcloud --project 'project' projects add-iam-policy-binding --member serviceAccount:kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com --role roles/dns.admin
kubectl create secret generic kube-cert-manager-sv-security-secret --from-file=/home/perre/.config/gcloud/kube-cert-manager-sv-security.json
and created the following resources:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-cert-manager-sv-security-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-cert-manager-sv-security
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security
rules:
  - apiGroups: ["*"]
    resources: ["certificates", "ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["*"]
    resources: ["secrets"]
    verbs: ["get", "list", "create", "update", "delete"]
  - apiGroups: ["*"]
    resources: ["events"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security-service-account
subjects:
  - kind: ServiceAccount
    namespace: default
    name: kube-cert-manager-sv-security
roleRef:
  kind: ClusterRole
  name: kube-cert-manager-sv-security
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.stable.k8s.psg.io
spec:
  scope: Namespaced
  group: stable.k8s.psg.io
  version: v1
  names:
    kind: Certificate
    plural: certificates
    singular: certificate
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kube-cert-manager-sv-security
  name: kube-cert-manager-sv-security
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-cert-manager-sv-security
      name: kube-cert-manager-sv-security
    spec:
      serviceAccount: kube-cert-manager-sv-security
      containers:
        - name: kube-cert-manager
          env:
            - name: GCE_PROJECT
              value: solidair-vlaanderen-207315
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /opt/google/kube-cert-manager.json
          image: bcawthra/kube-cert-manager:2017-12-10
          args:
            - "-data-dir=/var/lib/cert-manager-sv-security"
            #- "-acme-url=https://acme-staging.api.letsencrypt.org/directory"
            # NOTE: the URL above points to the staging server, where you won't get real certs.
            # Uncomment the line below to use the production LetsEncrypt server:
            - "-acme-url=https://acme-v01.api.letsencrypt.org/directory"
            # You can run multiple instances of kube-cert-manager for the same namespace(s),
            # each watching for a different value for the 'class' label
            - "-class=kube-cert-manager"
            # You can choose to monitor only some namespaces, otherwise all namespaces will be monitored
            #- "-namespaces=default,test"
            # If you set a default email, you can omit the field/annotation from Certificates/Ingresses
            - "-default-email=viae.it@gmail.com"
            # If you set a default provider, you can omit the field/annotation from Certificates/Ingresses
            - "-default-provider=googlecloud"
          volumeMounts:
            - name: data-sv-security
              mountPath: /var/lib/cert-manager-sv-security
            - name: google-application-credentials
              mountPath: /opt/google
      volumes:
        - name: data-sv-security
          persistentVolumeClaim:
            claimName: kube-cert-manager-sv-security-data
        - name: google-application-credentials
          secret:
            secretName: kube-cert-manager-sv-security-secret
Does anyone know what I'm missing?
Your secret resource kube-cert-manager-sv-security-secret probably contains a JSON file named kube-cert-manager-sv-security.json, which does not match the GOOGLE_APPLICATION_CREDENTIALS value. You can confirm the file name in the secret resource with kubectl get secret -o yaml YOUR-SECRET-NAME.
If you change the file path to the actual file name, cert-manager will work fine.
- name: GOOGLE_APPLICATION_CREDENTIALS
  # value: /opt/google/kube-cert-manager.json
  value: /opt/google/kube-cert-manager-sv-security.json