Google Cloud - Private service connection for CloudSQL using Deployment Manager - google-cloud-platform

I'm trying to automate the deployment of a system using Deployment Manager. In essence, it consists of:
One compute instance running a proxy server
A second compute instance running the app itself (private IP only)
A CloudSQL instance hosting the database (MySQL)
In their existing environments, the database is configured with a private IP address, and private service access is set up in the network so that the compute instance can access the DB by its private IP.
I've managed to get the two instances and the CloudSQL instance running, but I'm struggling to get the private IP set up on the SQL instance. I've got the following:
- name: database
  type: sqladmin.v1beta4.instance
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: {{ properties["region"] }}
    databaseVersion: {{ properties["dbType"] }}
    settings:
      tier: db-n1-standard-1
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      storageAutoResize: true
      replicationType: SYNCHRONOUS
      locationPreference:
        zone: {{ properties['zone'] }}
      ipConfiguration:
        privateNetwork: {{ properties["network"] }}
However, when I try to build this, I receive the error:
Failed to create subnetwork. Please create Service Networking
connection with service 'servicenetworking.googleapis.com' from
consumer project '' network '' again
I've tried to dig through the documentation to find how to create this connection using Deployment Manager, but I'm at a loss! I got as far as creating a private address range for peering:
- name: google-managed-services-<network_name>
  type: compute.beta.globalAddress
  properties:
    network: $(ref.<network_name>.selfLink)
    purpose: VPC_PEERING
    addressType: INTERNAL
    prefixLength: 16
and this appears to create the reservation for private service links correctly, but I can't find the final piece of the puzzle, the actual peer connection to Google's network. The documentation suggests the CLI call I need is:
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=[RESERVED_RANGE_NAME] \
    --network=[VPC_NETWORK] \
    --project=[PROJECT_ID]
but as far as I can tell, Deployment Manager doesn't support this API.
Has anyone had success with automating this sort of setup before? Pointers to relevant documentation that I might have missed are of course welcome!

The servicenetworking.googleapis.com API is not currently supported by Deployment Manager, nor is it available as a supported GCP type, so this can't be done through DM for now. I recommend filing a feature request for it, since it's a relatively new API.
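If you want to experiment in the meantime, Deployment Manager's gcp-types provider has been reported to expose some newer APIs. Purely as an untested sketch (the type name, API version, and fields here are assumptions to verify against your project, not a confirmed DM feature), the peering connection might look like:
- name: private-services-connection
  type: gcp-types/servicenetworking-v1beta:services.connections
  metadata:
    dependsOn:
    - google-managed-services-<network_name>   # the reserved range created above
  properties:
    parent: services/servicenetworking.googleapis.com
    network: $(ref.<network_name>.selfLink)
    reservedPeeringRanges:
    - google-managed-services-<network_name>
If that type isn't available in your project, the fallback is to run the gcloud services vpc-peerings connect command from the question once, outside of DM.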

The config below works for me, after setting up private services access as described in https://cloud.google.com/sql/docs/mysql/configure-private-ip#configure-access:
ipConfiguration:
  privateNetwork: "internal"
  ipv4Enabled: false
  authorizedNetworks: null

Related

Why can't I connect to my AWS Redshift Serverless cluster from my laptop?

I've set up a Redshift Serverless cluster w/ a workgroup and a namespace.
I turned on the "Publicly Accessible" option
I've created an inbound rule for port 5439 with Source set to 0.0.0.0/0
I've created an IAM credential for access to Redshift
I ran aws config and added the keys
But when I run
aws redshift-data list-databases --cluster-identifier default --database dev --db-user admin --endpoint http://default.530158470050.us-east-1.redshift-serverless.amazonaws.com:5439/dev
I get this error:
Connection was closed before we received a valid response from endpoint URL: "http://default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com:5439/dev".
In Node, when trying to use the AWS.RedshiftDataClient to do the same thing, I get this:
{
code: 'TimeoutError',
path: null,
host: 'default.XXXXXXX.us-east-1.redshift-serverless.amazonaws.com',
port: 5439,
localAddress: undefined,
time: 2022-07-09T02:20:47.397Z,
region: 'us-east-1',
hostname: 'default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com',
retryable: true
}
What am I missing?
What Security Group and VPC have you configured for your Redshift Serverless Cluster?
Make sure the Security Group allows traffic from "My IP" so that you can reach the VPC.
If that is not enough, check that the cluster is deployed on public subnets: an Internet Gateway should be attached to the VPC, the route tables should eventually route traffic to it, and the "Publicly Accessible" option should be enabled.
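For reference, the inbound rule described above could be sketched in CloudFormation YAML roughly as follows (the resource names are placeholders; locking CidrIp to your own IP is safer than 0.0.0.0/0):
# Sketch: security group ingress for Redshift Serverless on port 5439.
# "RedshiftSecurityGroup" is a placeholder for the workgroup's security group.
RedshiftIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref RedshiftSecurityGroup
    IpProtocol: tcp
    FromPort: 5439
    ToPort: 5439
    CidrIp: 203.0.113.10/32   # replace with your "My IP" address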

K8s service of type ELB stuck at "in progress"

I deployed a K8s service with type LoadBalancer, on a K8s cluster running on an EC2 instance. The service is stuck in the "pending" state.
Does the service type 'ELB' require any stipulation in terms of AWS configuration parameters?
Yes. Typically you need the option --cloud-provider=aws on:
All kubelets
kube-apiserver
kube-controller-manager
Also, you have to make sure that all your K8s instances (master/nodes) have an AWS instance role that allows them to create/remove ELBs and routes (full access to EC2 will do).
Then you need to make sure all your nodes are tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Key: k8s.io/role/node, Value: 1 (For nodes only)
Key: kubernetes.io/cluster/kubernetes, Value: owned
Make sure your subnet is also tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Also, in your Kubernetes node definition, you should have something like this:
ProviderID: aws:///<aws-region>/<instance-id>
Generally, all of the above is not needed if you are using the Kubernetes Cloud Controller Manager, which is in beta as of K8s 1.13.0.
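To make the flag concrete, here is a minimal sketch of the relevant fragment of a static pod manifest for kube-controller-manager (the image version and the omitted flags are placeholders, not a drop-in file):
# Sketch: static pod fragment with the AWS cloud provider enabled.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.13.0   # placeholder version
    command:
    - kube-controller-manager
    - --cloud-provider=aws
    # ...keep the rest of your existing flags here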

I have set up a Kubernetes cluster and dashboard on two EC2 instances but I'm not able to access the Kubernetes dashboard UI in a browser

I have set up a Kubernetes (1.9) cluster on two EC2 servers (Ubuntu 16.04) and have installed the dashboard. The cluster is working fine, and I get output when I do curl localhost:8001 on the master machine, but I'm not able to access the UI for the Kubernetes dashboard in my laptop's browser at masternode_public_ip:8001.
My security group contains my machine's IP.
Both the master and the slave node are in the Ready state.
I know there are a lot of other ways to deploy an application on a Kubernetes cluster; however, I want to explore this particular option for POC purposes.
I need to access the Kubernetes dashboard UI and the nginx application which is deployed on this cluster.
So, my question: is there something else I need to add to my security group, or do I need to do something more on my master machine?
Also, it would be great if someone could shed some light on private and public IPs, which one should be used to access the application, and how the two are related.
Here is a screenshot of the deployment details (describe deployment output).
This is an extensive topic ranging from Kubernetes Services (NodePort or LoadBalancer for this case) to Ingress Controllers and such. But there is a simple, quick and clean way to access your dashboard without all that.
Use either kubectl proxy or kubectl port-forward to access the dashboard via the embedded kube-apiserver proxy, or forward directly from localhost to the pod itself.
Found out the answer
Sorry for the delayed reply
I was trying to access the web application through its container's port, but in Kubernetes there is the concept of a NodePort: if your container is running on port 8080, the service will expose it on a node port in the default range 30000-32767.
All you need to do is add the details to your service definition and expose the service:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world   # added: without a selector the service has no endpoints
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
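With that in place, the application should be reachable at masternode_public_ip:30001, provided the security group also allows inbound traffic on that node port.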

IP Address specification in deployment of Spring Cloud microservice

I am trying to develop a Spring Cloud microservice. I built a sample demo of a Spring Cloud project using a Zuul proxy, a Eureka server, and Hystrix. I added my service as a client of the Eureka server and applied the routing. All are working well. Now I need to deploy it to my AWS EC2 machine. Locally, I added the default zone URL in the application.properties file like the following:
eureka.client.serviceUrl.defaultZone=http://localhost:8071/eureka/
When I move to my EC2 machine, or use AWS ECS, how can I modify this IP address so the configuration points at the cloud environment? I am also using ports like localhost:8090 and 8091 for the Zuul and Turbine dashboard projects, etc. So how do I need to change these URLs when deploying to the cloud?
We use domains. So you would point an A-record of api.yourdomain.com at the IP address or load balancer alias that is supporting your services.
Why? When we decided to change infrastructure we are able to change a DNS entry rather than modify all of our microservices' configurations. We recently moved from Eureka/Zuul to AWS's ALB. Using domains allowed us to run both environments in parallel and cutover with no down time. In the event there was a failure in the new environment, the old one was still running and we could cut back with a simple A-record change.
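As a sketch of what that looks like in practice (CloudFormation YAML with placeholder names; "LoadBalancer" is assumed to be your ALB resource):
# Sketch: alias A-record pointing api.yourdomain.com at an ALB.
ApiRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: yourdomain.com.
    Name: api.yourdomain.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt LoadBalancer.DNSName
      HostedZoneId: !GetAtt LoadBalancer.CanonicalHostedZoneID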
In your application.yml file you can configure different profiles so that you can test locally and then in ECS you can define the profile to use when creating the task definition.
First here is an example of how you can configure your application.yml file to be able to run on different profiles:
############# for running locally ################
server:
  port: 1234
logging:
  file: logs/example.log
  level:
    com.example: INFO
endpoints:
  health:
    sensitive: true
spring:
  datasource:
    url: jdbc:mysql://example.us-east-1.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
security:
  oauth2:
    client:
      clientId: example
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
    resource:
      userInfoUri: http://localhost:9999/uaa/user
--- # document separator so both profiles can live in one file
########## For deployment in Docker containers/ECS ########
spring:
  profiles: prod
  datasource:
    url: jdbc:mysql://example.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
prodnetwork:
  ipAddress: api.yourdomain.com
security:
  oauth2:
    client:
      clientId: exampleid
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/token
      userAuthorizationUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/authorize
    resource:
      userInfoUri: https://${prodnetwork.ipAddress}/v1/uaa/user
Second: Setting up ECS to use your Prod profile:
When you build your Docker container, tag it with your new profile's name, in this case "prod".
Third: Create a task definition, and define your Docker tag in the repo URL and your new profile in your container run command:
Now when you work on your application on your local machine, you can run it with "localhost", and when you deploy it to ECS you can define your new domain/IP to be used in the run command in your container definition.
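To illustrate, a container definition fragment along these lines (CloudFormation-style YAML; the names and image URL are placeholders) could activate the prod profile:
# Sketch: ECS container definition overriding the command to select the profile.
ContainerDefinitions:
  - Name: example-service
    Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/example:prod   # your tagged image
    Command:
      - java
      - -jar
      - /app.jar
      - --spring.profiles.active=prod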

Kubernetes Multinode CoreOS guide doesn't create ELBs in AWS

The CoreOS Multinode Cluster guide appears to have a problem. When I create a cluster and configure connectivity, everything appears fine -- however, I'm unable to create an ELB by exposing a service:
$ kubectl expose rc my-nginx --port 80 --type=LoadBalancer
service "my-nginx" exposed
$ kubectl describe services
Name: my-nginx
Namespace: temp
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.6.247
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32224/TCP
Endpoints: 10.244.37.2:80,10.244.73.2:80
Session Affinity: None
No events.
The IP line that says 10.100.6.247 looks promising, but no ELB is actually created in my account. I can otherwise interact with the cluster just fine, so it seems bizarre. A "kubectl get services" listing is similar -- it shows the private IP (same as above) but the EXTERNAL_IP column is empty.
Ultimately, my goal is a solution that allows me to easily configure my VPC (ie. private subnets with NAT instances) and if I can get this working, it'd be easy enough to drop into CloudFormation since it's based on user-data. The official method of kube-up doesn't leave room for VPC-level customization in a repeatable way.
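(For reference, the declarative equivalent of the kubectl expose call above is a Service manifest roughly like this sketch:)
# Sketch: declarative form of the expose command used above.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer
  selector:
    run: my-nginx
  ports:
  - port: 80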
Unfortunately, that getting-started guide isn't nearly as up to date as the kube-up implementation. For instance, I don't see a --cloud-provider=aws flag anywhere, and the kubernetes-controller-manager would need that in order to know to call the AWS APIs.
You may want to check out the official CoreOS on AWS guide:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
If you hit a deadend or find a problem, I recommend asking in the AWS Special Interest Group forum:
https://groups.google.com/forum/#!forum/kubernetes-sig-aws