Istio limits access to Google Cloud resources - google-cloud-platform

I have a service running on Google Container Engine (Kubernetes). It accesses Google Cloud Storage and works fine.
On the same Kubernetes cluster, I installed Istio 0.1 following https://istio.io/v-0.1/docs/tasks/installing-istio.html
I deploy my service via kube-inject:
kubectl create -f <(istioctl kube-inject -f myservice.yaml)
But now my service cannot access Google Cloud Storage any more. I get the following error message:
java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
To me it looks like kube-inject and the sidecar do something that prevents my service from accessing information about the Google Cloud project it is running in. As far as I can see, the sidecar is the only difference.
The service still works when deployed without kube-inject.
What could cause this?

You may want to configure access to your external services as explained in Enabling Egress Traffic: either define them as Kubernetes external services, or use istioctl --includeIPRanges to exclude external traffic from being controlled by Istio. A sketch of the second option follows.
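For example, a minimal sketch of the --includeIPRanges variant (the flag spelling comes from later 0.x releases, and 10.0.0.0/8 is an assumed in-cluster range; check your cluster's service CIDR first):

kubectl create -f <(istioctl kube-inject --includeIPRanges=10.0.0.0/8 -f myservice.yaml)

With only the in-cluster range redirected through the sidecar, calls to external IPs such as the metadata server and Cloud Storage bypass Istio entirely.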

Related

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler by-convention DNS names? For example, could I have each service in Cloud Run manifest on my VPC as a single first level DNS name like apione and apitwo rather than a larger DNS name that I'd then have to hint in through my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple: two or more long-lived services on Cloud Run doing non-HTTP TCP/UDP communication.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (SSH to log into instances, for example, or TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC, thanks to the Serverless VPC Connector).
The Serverless VPC Connector lets you bridge the Google Cloud managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where you have your own instances (Compute Engine, GKE node pools, ...).
Thus you can have a Cloud Run service reach Kubernetes pods on GKE through a TCP connection, if that's your requirement.
As for service discovery, it doesn't exist yet, but Google is actively working on it, and Ahmet (Google Cloud Developer Advocate on Cloud Run) recently released a tool for it. Nothing is really built in, though.
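If you do go the Serverless VPC Connector route for the outbound side, the setup is roughly this (project, region, connector name, and IP range are all placeholders):

# Create a connector in the same region as the service.
gcloud compute networks vpc-access connectors create my-connector \
--region us-central1 \
--network default \
--range 10.8.0.0/28

# Deploy the Cloud Run service with the connector attached.
gcloud run deploy apione \
--image gcr.io/my-project/apione \
--vpc-connector my-connector

Note this only affects egress from the service into your VPC; inbound traffic to the service itself remains HTTP only.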

How to move Java REST application to Google Cloud

I have a Java REST API application and want to move it to the cloud.
But I don't understand which tutorial to use.
I already have a Docker image in Container Registry, built by Jib, and I want to connect it to some cloud database (Cloud SQL/Spanner).
How do I change these connection properties for the cloud?
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://localhost:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
We do this through the Cloud SQL Proxy Docker image: https://cloud.google.com/sql/docs/mysql/connect-docker
1. Enable the Cloud SQL Admin API.
2. Install the mysql client on the Compute Engine instance or client machine, if it is not already installed.
3. If needed, install the Docker client.
4. Install the Proxy Docker image from the Google Container Registry.
5. If you are running the Proxy Docker image on a local machine (not a Compute Engine instance), or your Compute Engine instance does not have the proper scopes, create a Google Cloud Platform service account.
6. Go to the Cloud SQL Instances page in the Google Cloud Console.
7. Select the instance to open its Instance details page and copy the Instance connection name.
8. Start the proxy:
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:3306 -credential_file=/config
Start the client:
mysql -u <USERNAME> -p --host 127.0.0.1
Then connect using
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
If you want to reach a Cloud SQL database from a GKE cluster, you have two solutions:
You can configure a private IP on Cloud SQL and then reach it directly at this IP. For this, your GKE cluster must be configured as VPC-native.
You can attach a sidecar to your main container which opens a Cloud SQL Proxy connection to your database. This solution is quite similar to the answer from #ParthMehta. Here is the description (and the GitHub example) of this sidecar configuration; a sketch follows below.
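A minimal sketch of that sidecar variant (deployment name, image, and instance connection name are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: gcr.io/my-project/myservice   # your application image
      - name: cloud-sql-proxy                # sidecar; shares localhost with the app
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306"]
EOF

The application container then keeps db.url=jdbc:mysql://127.0.0.1:3306/db, exactly as in the Docker-based answer above.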
For Spanner, it's different, because you can't use a private IP or the Cloud SQL Proxy binary. You have the details on this page for the configuration and the dependencies.
As you can see, you connect to your instance directly with the resource definition (/projects/..../instance/.......). Your config file should look like this:
db.driver=com.google.cloud.spanner.jdbc.JdbcDriver
db.url=jdbc:cloudspanner:/projects/{YOUR_PROJECT_ID}/instances/{YOUR_INSTANCE_ID}/databases/{YOUR_DATABASE_ID}
db.dialect=com.google.cloud.spanner.hibernate.SpannerDialect
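If you want to sanity-check those project/instance/database IDs before wiring them into the application, gcloud can query the database directly (the IDs are placeholders):

gcloud spanner databases execute-sql {YOUR_DATABASE_ID} \
--instance={YOUR_INSTANCE_ID} --sql='SELECT 1'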

Unable to access REST service deployed in docker swarm in AWS

I used the CloudFormation template provided by Docker for AWS setup & prerequisites to set up a Docker swarm.
I created a REST service using Tibco BusinessWorks Container Edition and deployed it into the swarm by creating a docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully but the CloudWatch logs show the below exception:
I have tried passing the JVM environment variable in the Dockerfile as:
ENV JAVA_OPTS="-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using curl:
curl -X GET --header 'Accept: application/json' 'URL of AWS load balancer : port which I exposed while creating the service/resource URI'
But I am getting the below message:
The REST service works fine when I do docker run.
I have checked the Security Groups of the manager and load-balancer. The load-balancer has inbound open to all traffic and for the manager I opened HTTP connections.
I am not able to figure out what I have missed. Can anyone please help?
As mentioned in Deploy services to swarm, if you read along, you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service's ports directly on the swarm node using mode=host and also set published=<PORT>, this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was: the image does not expose the port after running service create --publish, and hence the swarm routing layer cannot reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
Which eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer to allow communication over the desired protocols, so you can access your applications deployed inside the container.
For my case it was the HTTP protocol; enabling port 8087 on the load balancer served the purpose.
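Since mode=host bypasses the routing mesh, the service is only reachable on the node actually running the task. Roughly, you can locate that node and test it like this (the host is a placeholder to fill in):

docker service ps tuesday   # shows which node runs the task
curl -X GET --header 'Accept: application/json' \
'http://<that-node-public-ip>:8087/<resource-URI>'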

Pre-deploy development communication with an Internal Kubernetes service

I'm investigating a move to Kubernetes (coming from AWS ECS). But I haven't solved the local development issue when depending on internal services.
Let me elaborate:
When developing and testing microservices, before they are deployed as a Kubernetes Service I want to be able to talk to other, internal Kubernetes Services. As there are > 20 microservices I have a Kubernetes cluster running latest development versions. I can't run a MiniKube.
Example:
I'm developing a user-service which needs access to the email-service. The email-service is already on Kubernetes and is an internal service.
So before the user-service is deployed, I want to be able to talk to the internal email-service for dev/testing. I can't make use of the nice K8s service-discovery env vars.
As we currently already have a VPN up to restrict the DEV env to testers/developers only, could I use this VPN to provide access to the Kubernetes Service IP addresses? The Kubernetes DEV env is on the same VPC the VPN is in.
If you deploy your internal services as type NodePort, then you can access them over your VPN via that nodePort. NodePorts can be dynamically allocated or you can customize them to be 'static' where they are known by you up front.
When developing an app on your local machine, you can access the dependent service by that NodePort, as sketched below.
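A minimal sketch of such a service, assuming a hypothetical email-service container listening on 8080 and a static nodePort picked from the default 30000-32767 range:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: email-service
spec:
  type: NodePort
  selector:
    app: email-service
  ports:
  - port: 80          # in-cluster service port
    targetPort: 8080  # container port
    nodePort: 30080   # static, so it's known up front
EOF

# From a VPN-connected dev machine, against any node's IP:
curl http://<node-ip>:30080/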
As an alternative, you can use port-forwarding from kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod to your local machine. (Note: this only handles traffic to a pod, not a service.)
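For example (the pod name is a placeholder):

kubectl port-forward email-service-pod-abc123 8080:8080
# then talk to the pod at http://localhost:8080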
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes developers have kubectl access to the staging/dev cluster.
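With the classic 1.x Telepresence CLI, a session looked roughly like this (the deployment name is an assumption):

telepresence --swap-deployment user-service --run-shell
# inside this shell, email-service resolves just as it would in-cluster

This swaps the in-cluster deployment for a proxy, so code running locally can use the normal cluster DNS names.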

Web facing application on Kubernetes and AWS

It seems like the best way to deploy an external-facing application on Google Cloud would be to create an external load balancer with this line in the service configuration:
{
...
"createExternalLoadBalancer": true
...
}
This doesn't seem to work for AWS. I'm getting the following error when creating the service:
requested an external service, but no cloud provider supplied
I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can point a domain name at it, and so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB.
What's the recommended way of doing this on AWS?
This is still a work in progress.
Please see:
https://github.com/GoogleCloudPlatform/kubernetes/pull/2672
for a proposal that starts to add support for AWS ELBs to Kubernetes. We're working to get that pull request integrated.
Thanks!