I have a Java REST API application and want to move it to the cloud, but I don't understand which tutorial to use.
I already have a Docker image in Container Registry, built with Jib, and I want to connect it to a cloud database (Cloud SQL/Spanner).
How do I change these connection properties for the cloud?
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://localhost:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
You can do this with the Cloud SQL Proxy Docker image: https://cloud.google.com/sql/docs/mysql/connect-docker
1. Enable the Cloud SQL Admin API.
2. Install the mysql client on the Compute Engine instance or client machine, if it is not already installed.
3. If needed, install the Docker client.
4. Install the Proxy Docker image from the Google Container Registry.
5. If you are running the Proxy Docker image on a local machine (not a Compute Engine instance), or your Compute Engine instance does not have the proper scopes, create a Google Cloud Platform service account.
6. Go to the Cloud SQL Instances page in the Google Cloud Console.
7. Select the instance to open its Instance details page and copy the instance connection name.
8. Start the proxy:
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:3306 -credential_file=/config
Start the client
mysql -u <USERNAME> -p --host 127.0.0.1
Then connect using
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
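For completeness, here is a minimal plain-JDBC sketch that uses these properties. It assumes the proxy container above is listening on 127.0.0.1:3306, that mysql-connector-java (the com.mysql.cj.jdbc.Driver) is on the classpath, and that the database name, username and password are the placeholders from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CloudSqlProxySmokeTest {
    public static void main(String[] args) throws Exception {
        // The proxy container forwards 127.0.0.1:3306 to the Cloud SQL instance,
        // so the JDBC URL looks exactly like a local MySQL connection.
        String url = "jdbc:mysql://127.0.0.1:3306/db";
        try (Connection conn = DriverManager.getConnection(url, "usrname", "pswd");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            if (rs.next()) {
                System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
            }
        }
    }
}

If this prints the result, the proxy and the credentials are working, and your application can use the same db.url.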
If you want to reach a Cloud SQL database from a GKE cluster, you have two solutions:
You can configure a private IP on Cloud SQL and then reach the instance directly on that IP. For this, your GKE cluster must be configured as VPC-native (a sample of the resulting properties follows these two options).
You can attach a sidecar container to your main container that opens a Cloud SQL Proxy connection to your database. This solution is quite similar to #ParthMehta's answer above. Here is the description (and the GitHub example) of this sidecar configuration.
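For the first option, only the host in the JDBC URL changes. A sketch of the properties, where <CLOUD_SQL_PRIVATE_IP> is a placeholder for the private IP shown on the instance details page:

db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://<CLOUD_SQL_PRIVATE_IP>:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model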
Spanner is different because you can't use a private IP or the Cloud SQL Proxy binary. You can find details on this page for the configuration and the dependencies.
As you can see, you connect to your instance directly with the resource definition (/projects/..../instances/.......). Your config file should look like this:
db.driver=com.google.cloud.spanner.jdbc.JdbcDriver
db.url=jdbc:cloudspanner:/projects/{YOUR_PROJECT_ID}/instances/{YOUR_INSTANCE_ID}/databases/{YOUR_DATABASE_ID}
db.dialect=com.google.cloud.spanner.hibernate.SpannerDialect
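A minimal JDBC sketch against that URL follows. It assumes the com.google.cloud:google-cloud-spanner-jdbc driver (plus the google-cloud-spanner-hibernate-dialect artifact if you use the Hibernate dialect above) is on the classpath, and the project, instance and database IDs are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Authentication comes from the environment (GOOGLE_APPLICATION_CREDENTIALS
        // or the workload's service account), not from db.username/db.password.
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-db";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            if (rs.next()) {
                System.out.println("Connected to Spanner, SELECT 1 returned " + rs.getLong(1));
            }
        }
    }
}

Note that there are no db.username/db.password entries for Spanner; credentials are picked up from the environment.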
Related
I'm running Cloud Run services locally with the Cloud Code plugin for IntelliJ (PyCharm), but the locally deployed service cannot connect to the Redis instance running in Docker:
redis.exceptions.ConnectionError: Error 111 connecting to 127.0.0.1:6379. Connection refused.
I can connect to the locally running Redis instance from a Python shell; it's just the Cloud Run service running in Minikube/Docker that cannot seem to connect to it.
Any ideas?
Edit, since people are suggesting completely unrelated posts: the locally running Cloud Run instance uses Docker and Minikube to run, and is automatically configured by Cloud Code for IntelliJ. I suspect that Cloud Code for IntelliJ puts Cloud Run instances into an environment that cannot access services running on macOS localhost (but can access the Internet), which is why I tagged those specific items in the post. Please limit suggestions to ones that take these items into account.
If you check the Docker networks using:
docker network list
You'll see a network called cloud-run-dev-internal. You need to connect your Redis container to that network. To do that, run this command (this instruction assumes that your container name is some-redis):
docker network connect cloud-run-dev-internal some-redis
Double check that your container is connected to the network:
docker network inspect cloud-run-dev-internal
Then connect to the Redis host using the container name:
import os
import redis
...
redis_host = os.environ.get('REDISHOST', 'some-redis')
redis_port = int(os.environ.get('REDISPORT', 6379))
redis_client = redis.StrictRedis(host=redis_host, port=redis_port)
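If you would rather not rely on the container name as the in-code default, you can also set REDISHOST=some-redis (and REDISPORT if needed) as environment variables in the Cloud Code run configuration; the snippet above reads both before falling back to its defaults.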
I have a PostgreSQL database on the Google Cloud Platform (Cloud SQL). I'm currently managing this database through pgAdmin, installed on my laptop. I've added the IP address of my laptop to the whitelist on the Cloud SQL settings page. This all works.
The problem is: when I go somewhere else and I connect to a different network, the IP address changes and I cannot connect to the postgresql database (through pgadmin) from my laptop.
Is there someone who knows a (secure) solution, involving a proxy server (or something else), to connect from my laptop (and only my laptop) to my PostgreSQL database, even if I'm not on a whitelisted network (IP address)? Maybe I can set up a VM instance, install a proxy server and use that? But I have no clue where to start (or what to search for).
You have many options for connecting to a Cloud SQL instance from external applications, such as a public IP address with SSL, a public IP address without SSL, the Cloud SQL Proxy, etc. You can see all of them here.
Among all the connection options there is the Cloud SQL Proxy; it provides secure access to your instances without the need for authorized networks or configuring SSL on your part.
You only need to follow the steps listed here and you will be able to connect to your Cloud SQL instance using the proxy:
Enable the Cloud SQL Admin API in your console.
Install the proxy client on your local machine (Linux):
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
Determine how you will authenticate the proxy. You can use a service account or let the Cloud SDK take care of the authentication.
However, if required by your authentication method, create a service account.
Determine how you will specify your instances for the proxy. Your options for instance specification depend on your operating system and environment.
Start the proxy using either TCP sockets or Unix sockets (see the example after these steps).
Take note that as of this writing, Cloud SQL Proxy does not support Unix sockets on Windows.
Update your application to connect to Cloud SQL using the proxy.
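For example, to start the proxy on a TCP socket for a PostgreSQL instance (the instance connection name is a placeholder and 5432 is the default PostgreSQL port), the invocation looks roughly like this:

./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:5432

You can then point pgAdmin at 127.0.0.1:5432 as if the database were running locally, from any network, without touching the authorized-networks whitelist.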
I want to set up a self-managed private Docker registry on an EC2 instance without using the AWS ECR/ECS services, i.e. using the docker registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with Docker for Windows installed.
Please note:
The EC2 instance is hosted in a private subnet.
I have already created an AWS ALB with a self-signed OpenSSL certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed docker registry using below command:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think port 443 is being forwarded to 8080, because when I open https://<domain-name>/v2/_catalog in the browser I get output in JSON format.
Currently, the catalog is empty because there is no image pushed in the registry.
I expect this Docker registry hosted on the AWS EC2 instance to be accessible remotely, i.e. from a remote Windows machine as well.
Any references/suggestions/steps to achieve my task would be really helpful.
Hoping for a quick resolution.
Thanks and Regards,
Rohan Shetty
I have resolved the issue by following the below steps:
Added the --insecure-registry parameter in the docker.service file.
Created a new directory "certs.d/my-domain-name" under /etc/docker.
(Please note: the domain name is the one at which the Docker registry is to be accessed.)
Placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory (see the example after these steps).
Restarted Docker.
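For illustration, the layout could look like this, where registry.example.com is a hypothetical domain name and ca.crt follows Docker's certs.d naming convention for trusting a registry certificate:

mkdir -p /etc/docker/certs.d/registry.example.com
cp domain.crt /etc/docker/certs.d/registry.example.com/ca.crt
systemctl restart docker

After the restart, docker login and docker push against registry.example.com should accept the self-signed certificate.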
I used the CloudFormation template provided by Docker for AWS (setup & prerequisites) to set up a Docker swarm.
I created a REST service using TIBCO BusinessWorks Container Edition and deployed it into the swarm by creating a Docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully, but the CloudWatch logs show an exception.
I have tried passing the JVM environment variable in the Dockerfile as :
ENV JAVA_OPTS= "-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using curl:
curl -X GET --header 'Accept: application/json' 'URL of AWS load balancer : port which I exposed while creating the service/resource URI'
But I am getting an error message instead of a response.
The REST service works fine when I do docker run.
I have checked the Security Groups of the manager and load-balancer. The load-balancer has inbound open to all traffic and for the manager I opened HTTP connections.
I am not able to figure out what I have missed. Can anyone please help?
As mentioned in Deploy services to swarm, if you read along, you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service's ports directly on the swarm node using mode=host and also set published=<PORT> this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was: the image does not expose the port after running service create --publish, and hence the swarm routing layer cannot reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
Which eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer to allow communication through the desired protocols so you can access the applications deployed inside the container.
In my case it was the HTTP protocol; enabling port 8087 on the load balancer served the purpose.
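To verify, a request along these lines (the load balancer DNS name and resource URI are placeholders) should now reach the service:

curl -X GET --header 'Accept: application/json' 'http://<load-balancer-DNS-name>:8087/<resource-URI>'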
I have a service running on Google Container Engine (Kubernetes). It accesses Google Cloud Storage and works fine.
On the same Kubernetes cluster, I installed Istio 0.1 following https://istio.io/v-0.1/docs/tasks/installing-istio.html
I deploy my service via kube-inject:
kubectl create -f <(istioctl kube-inject -f myservice.yaml)
But now my service cannot access Google Cloud Storage any more. I get the following error message:
java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
To me it looks like kube-inject and the sidecar do something so that my service can no longer access information about the Google Cloud project it is running in. As far as I can see, the sidecar is the only difference.
The service still works when deployed without kube-inject.
What can cause this effect?
You may want to configure access to your external services as explained in Enabling Egress Traffic: either define them as Kubernetes external services, or use istioctl --includeIPRanges to exclude external traffic from being controlled by Istio.
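As a rough sketch of the second approach (the 10.0.0.0/8 range is only an example; use your cluster's actual internal IP ranges, and note that the exact flag syntax depends on your Istio version):

kubectl create -f <(istioctl kube-inject -f myservice.yaml --includeIPRanges=10.0.0.0/8)

With only the in-cluster ranges routed through the sidecar, calls to Google Cloud Storage go out directly instead of being intercepted by the sidecar.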