I'm investigating a move to Kubernetes (coming from AWS ECS), but I haven't solved the problem of local development depending on internal services.
Let me elaborate:
When developing and testing a microservice, before it is deployed as a Kubernetes Service, I want to be able to talk to other, internal Kubernetes Services. Since there are more than 20 microservices, I have a Kubernetes cluster running the latest development versions; I can't run everything in Minikube.
Example:
I'm developing a user-service which needs access to the email-service. The email-service is already running on Kubernetes as an internal service.
So before the user-service is deployed, I want to be able to talk to the internal email-service for dev/testing. Since I'm developing outside the cluster, I can't make use of Kubernetes' nice service-discovery environment variables.
We already have a VPN in place that restricts the DEV environment to testers/developers only. Could I use this VPN to provide access to the Kubernetes Service IP addresses? The Kubernetes DEV environment is in the same VPC as the VPN.
If you deploy your internal services as type NodePort, you can access them over your VPN via that nodePort. NodePorts can be dynamically allocated, or you can set them to 'static' values that are known to you up front.
When developing an app on your local machine, you can access the dependent service by that NodePort.
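As a minimal sketch (the email-service name, label, and ports here are assumptions, not your actual manifests), a NodePort Service with a fixed port could look like this:

apiVersion: v1
kind: Service
metadata:
  name: email-service
spec:
  type: NodePort
  selector:
    app: email-service   # assumes the email-service pods carry this label
  ports:
    - port: 8080         # port other in-cluster services use
      targetPort: 8080   # container port (assumption)
      nodePort: 30080    # static port reachable on every node's IP over the VPN

Your locally running user-service would then point at any-node-ip:30080 across the VPN.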
As an alternative, you can use port-forwarding from kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod to your local machine. (Note: this only handles traffic to a pod, not a service.)
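For example (the pod name is a placeholder; this assumes the email-service container listens on 8080):

kubectl port-forward email-service-pod-name 8080:8080   # email-service is now reachable at localhost:8080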
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes developers have kubectl access to the staging/dev cluster.
I have read other questions about this that all mention enabling service discovery, but my issue is a little different: how do I go about setting this up for my current Fargate deployments?
I have four Spring Boot API containers built via Gradle, pushed to ECR, and deployed to ECS Fargate, with Terraform IaC setting up the appropriate resources. Three of these containerized APIs have environment variables set to reference the fourth container, so they make an API call out of their own container to that one service. DNS and a 443 load balancer are set up for these deployments.
I have created a new service in the cluster containing the API that needs to be discovered. I enabled service discovery, created a local Cloud Map A record for the API, and then set the environment variable in each of the other containers to use that local A record URL, e.g., ecsservicename.local. Additionally, I have run dig against that service from the other APIs and it returns an IP, so I am sure that part is working.
My questions are as follows:
(1) Since only one service really needs to be picked up by the others, was it correct to set up service discovery on that one API and not the others, or should I set up service discovery on all of the APIs?
(2) Even if Route 53 is set up, should this be an A record or an SRV record? I was confused by the documentation as to when to use which on AWS.
(3) Is there a better or easier approach to use for inter-container communication that I am missing?
That's correct. Service discovery should be set up only for the one service; the others don't need it, as you are not connecting to those other services.
An SRV record also gives the port. From the docs:
if the task definition that your service task specifies uses the bridge or host network mode, an SRV record is the only supported DNS record type.
I think your architecture is well thought out, and I can't think of anything "easier" or better.
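For reference, since you mentioned Terraform IaC, the Cloud Map wiring described above might look roughly like this (namespace, service, and variable names are assumptions, not your actual code):

resource "aws_service_discovery_private_dns_namespace" "local" {
  name = "local"            # produces names like ecsservicename.local
  vpc  = var.vpc_id
}

resource "aws_service_discovery_service" "api" {
  name = "ecsservicename"
  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.local.id
    dns_records {
      ttl  = 10
      type = "A"            # A records work with awsvpc (Fargate); bridge/host modes require SRV
    }
  }
  health_check_custom_config {
    failure_threshold = 1
  }
}

resource "aws_ecs_service" "api" {
  # ... existing Fargate service definition ...
  service_registries {
    registry_arn = aws_service_discovery_service.api.arn
  }
}

Only the service being discovered needs the service_registries block; the other three keep their plain environment-variable reference to ecsservicename.local.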
I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler, by-convention DNS names? For example, could I have each service in Cloud Run show up on my VPC as a single first-level DNS name like apione and apitwo, rather than a longer DNS name that I'd then have to pass in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple: two or more long-lived services on Cloud Run doing non-HTTP TCP/UDP communication.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests. You can't use other network protocols (SSH to log into instances, for example, or raw TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC), thanks to the Serverless VPC Connector.
The Serverless VPC Connector lets you bridge the Google Cloud managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where you have your own resources (Compute Engine, GKE node pools, ...).
Thus you can have a Cloud Run service reach a Kubernetes pod on GKE through a TCP connection, if that's your requirement.
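As a rough sketch of the setup (connector name, region, IP range, and service/image names are placeholders), it comes down to two gcloud commands: create the connector, then attach it to the service at deploy time:

# create a connector in your VPC (the /28 range must be unused in the network)
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --network default \
  --range 10.8.0.0/28

# deploy the Cloud Run service with egress routed through that connector
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --vpc-connector my-connector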
About service discovery: it's not there yet, but Google is actively working on it, and Ahmet (Google Cloud Developer Advocate for Cloud Run) recently released a tool for that. Nothing is really built in, though.
We are using Spring Cloud Netflix Eureka for Service Registration. We will be deploying all microservices in GCP (Google Cloud).
Environment
We have Eureka Servers running as a cluster.
Each Eureka Server registers itself as a client to its peer in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8762/eureka
Client microservices register/enroll themselves by providing the Eureka Server IPs in application.properties:
eureka.client.service-url.default-zone=http://xx.xx.xx.xxx:8761/eureka,http://xx.xx.xx.xxx:8762/eureka
Since IP addresses and hostnames are dynamic in the cloud, can we configure the Eureka Servers in the cluster without using IP addresses/hostnames?
Please provide a sample configuration to use in Google Cloud.
Google Cloud maintains an internal DNS resolver for subnets (if you are using the default OS images).
So you can use hostnames to resolve IP addresses, like prod-redis-2.c.project-<id>.internal.
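For example (instance names here are placeholders following that internal-DNS pattern; the ports are taken from your snippets above), the client-side defaultZone could read:

eureka.client.service-url.default-zone=http://eureka-1.c.project-<id>.internal:8761/eureka,http://eureka-2.c.project-<id>.internal:8762/eureka

This stays stable across instance restarts as long as the instance names stay the same.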
You will probably need to configure links between subnets to avoid making the IP addresses public.
I have not used GCP, but I have implemented and deployed Spring Cloud on PCF (which, at a high level, is pretty much the same as GCP).
You cannot make defaultZone completely dynamic. Why? Because these properties are picked up during application startup.
There needs to be something (some service or database) in your architecture that tells your services the dynamic hostnames/IP addresses of other services. In your case, that is the Eureka server. All services need to know the address (hostname/IP address) of the Eureka service. Now, if the Eureka server's hostname is dynamic, how will your services learn the Eureka server's new hostname when it changes?
You'll have to update the address of the Eureka server manually. At most, you can externalize defaultZone to a centralized configuration server (or something similar); that way you only have to update the new address in one place.
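A rough sketch of that externalization, assuming a Spring Cloud Config server and the classic bootstrap mechanism (names and URLs are placeholders; newer Spring Cloud versions use spring.config.import=configserver:... instead): each service keeps only the config server's address locally, e.g. in bootstrap.properties:

spring.application.name=user-service
spring.cloud.config.uri=http://config-server.c.project-<id>.internal:8888

The config server's backing repository then holds the shared eureka.client.service-url.default-zone property, so when the Eureka address changes you update it in exactly one place (and restart or refresh the clients, since the property is read at startup).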
Let me start this by saying I am fairly new to k8s. I'm using kops on AWS.
I currently have 3 deployments on a cluster.
Frontend nginx image serving an Angular web app. One pod. External service.
Socket.IO server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
API that is requested by both the Socket.IO server and the web application. Internal service (should it be external?).
The Socket.IO deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files without having to change them each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate the deployment.)
If I understood correctly, your frontend web application depends on the API server and sends requests to it; since the Angular app runs in the user's browser, those requests come from outside the cluster. In that case, your API service should be available from outside the cluster, which means it should be exposed as a NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.
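As a minimal sketch (the api name, label, and ports are assumptions, not your actual manifests), exposing the API via a LoadBalancer Service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: LoadBalancer   # provisions an AWS ELB when running on kops/AWS
  selector:
    app: api           # assumes the API pods carry this label
  ports:
    - port: 80         # port the browser-side web app calls
      targetPort: 8080 # container port the API listens on (assumption)

The web app would then call the load balancer's address (ideally behind a stable DNS record) instead of a cluster IP, which also avoids rewriting addresses when deployments are recreated.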
I'm getting ready to set up HashiCorp Vault with my web application, and while the examples HashiCorp provides make sense, I'm a little unclear on what the intended production setup should be.
In my case, I have:
a small handful of AWS EC2 instances serving my web application
a couple EC2 instances serving Jenkins for continuous deployment
and I need:
My configuration software (Ansible) and Jenkins to be able to read secrets during deployment
to allow employees in the company to read secrets as needed, and potentially, generate temporary ones for certain types of access.
I'll probably be using S3 as a storage backend for Vault.
The types of questions I have are:
Should Vault be running on all my EC2 instances, listening at 127.0.0.1:8200?
Or do I create an instance (maybe two for availability) that just runs Vault, and have other instances/services connect to it as needed for secret access?
If I need employees to be able to access secrets from their local machines, how does that work? Do they set up Vault locally against the S3 storage, or should they be hitting the REST API of the remote servers from point 2 to access their secrets?
And to be clear: if any machine that's running Vault is restarted, Vault would need to be unsealed again, which seems to be a manual process involving some number of key holders?
Vault runs in a client-server architecture, so you should have a dedicated cluster of Vault servers (usually 3 is suitable for small to medium installations) running in high-availability (HA) mode.
The Vault servers should bind to the internal private IP, not 127.0.0.1, since otherwise they won't be accessible from within your VPC. You definitely do not want to bind to 0.0.0.0, since that could make Vault publicly accessible if your instance has a public IP.
You'll want to bind to the address that is advertised on the certificate, whether that's an IP or a DNS name. You should only communicate with Vault over TLS in production-grade infrastructure.
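A minimal sketch of the listener stanza in the Vault server configuration (the address and certificate paths are placeholders; this assumes TLS certificates are already provisioned):

listener "tcp" {
  address       = "10.0.1.10:8200"            # internal private IP, not 127.0.0.1 or 0.0.0.0
  tls_cert_file = "/etc/vault/tls/vault.crt"  # certificate matching the advertised IP or DNS name
  tls_key_file  = "/etc/vault/tls/vault.key"
}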
Any and all requests go through those Vault servers. If other users need to communicate with Vault, they should connect to the VPC via a VPN or bastion host and issue requests against it.
When a machine that is running Vault is restarted, Vault does need to be unsealed. This is why you should run Vault in HA mode, so another server can accept requests. You can set up monitoring and alerting to notify you when a server needs to be unsealed (Vault returns a special status code).
You can also read the production hardening guide for more tips.
Specifically for points 3 and 4:
They would talk to the Vault API, which is running on one or more machines in your cluster (if you run it in HA mode with multiple machines, only one node is active at any time). You would provision them with some kind of authentication, using one of the available authentication backends, such as LDAP.
Yes. By default, and in its recommended configuration, if any of the Vault nodes in your cluster is restarted or stopped, you will need to unseal it with however many keys are required, depending on how many key shards were generated when you stood up Vault.
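For reference, checking seal status and unsealing from the CLI looks like this (the unseal command is run once per required key share, typically by different key holders):

vault status            # reports Sealed: true and exits non-zero while the node is sealed
vault operator unseal   # prompts for one unseal key; repeat until the threshold is reached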