I'd like to use GCP Cloud Run behind a Load Balancer, but I haven't found a way to create a backend service (or anything similar) to connect them. I found an approach that uses Anthos, but I'd prefer to do this without it:
https://cloud.google.com/solutions/integrating-https-load-balancing-with-istio-and-cloud-run-for-anthos-deployed-on-gke
Is this possible? What other options are there?
Finally, this integration is possible with Serverless Network Endpoint Groups (NEGs) [07-09-2020].
The feature is still in beta, but it looks pretty nice:
Concepts:
https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts
Setup:
https://cloud.google.com/load-balancing/docs/negs/setting-up-serverless-negs
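For the impatient, the setup guide boils down to roughly the following (a sketch with placeholder names; the NEG, backend service, Cloud Run service, and region are all assumptions):

    # Create a serverless NEG that points at the Cloud Run service
    gcloud beta compute network-endpoint-groups create my-cloud-run-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --cloud-run-service=my-service

    # Create a backend service and attach the NEG to it
    gcloud compute backend-services create my-backend-service --global

    gcloud beta compute backend-services add-backend my-backend-service \
        --global \
        --network-endpoint-group=my-cloud-run-neg \
        --network-endpoint-group-region=us-central1

    # A URL map (plus the HTTPS proxy and forwarding rule from the guide)
    # then routes load balancer traffic to that backend service
    gcloud compute url-maps create my-url-map --default-service=my-backend-service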
I am using the AWS Managed Prometheus service (AMP) and set up a Prometheus server on my EKS cluster to collect and write metrics to my AMP workspace, using the Helm chart as per the tutorial from AWS. All works fine; I am also connected to a Grafana instance running in the cluster and I can see the metrics no problem.
However, my use case is to query metrics from my web application, which runs on the cluster, and to display said metrics using my own chart widgets. In other words, I don't want to use Grafana.
So I was thinking of using the AWS SDK (Java in my case, https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/amp/model/package-summary.html), which works fine (I can list my workspaces, etc.), except it doesn't have any method for querying metrics!?
The documentation indeed mentions that this is not available out of the box (!) and basically redirects you to Grafana...
This seems fairly odd to me, as the most basic use case would be to run some queries, no? Am I missing something here? Do I need to craft my own HTTP requests for this?
FYI, I ended up doing the query manually, building an SdkHttpFullRequest and signing it with an Aws4Signer. It works OK, but I wonder why this couldn't be included in the SDK directly... The only gotcha was to make sure to specify "aps" as the signing name when creating the Aws4SignerParams.
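In case it helps anyone, a minimal sketch of that approach looks like this (the workspace ID, region, and PromQL query are placeholders, and error handling is omitted):

    import java.net.URI;

    import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
    import software.amazon.awssdk.auth.signer.Aws4Signer;
    import software.amazon.awssdk.auth.signer.params.Aws4SignerParams;
    import software.amazon.awssdk.http.HttpExecuteRequest;
    import software.amazon.awssdk.http.HttpExecuteResponse;
    import software.amazon.awssdk.http.SdkHttpClient;
    import software.amazon.awssdk.http.SdkHttpFullRequest;
    import software.amazon.awssdk.http.SdkHttpMethod;
    import software.amazon.awssdk.http.apache.ApacheHttpClient; // 'apache-client' module
    import software.amazon.awssdk.regions.Region;

    public class AmpQueryExample {
        public static void main(String[] args) throws Exception {
            Region region = Region.EU_WEST_1;       // placeholder region
            String workspaceId = "ws-0000-example"; // placeholder workspace ID

            // Plain Prometheus HTTP API request against the AMP query endpoint
            SdkHttpFullRequest request = SdkHttpFullRequest.builder()
                    .method(SdkHttpMethod.GET)
                    .uri(URI.create("https://aps-workspaces." + region.id()
                            + ".amazonaws.com/workspaces/" + workspaceId + "/api/v1/query"))
                    .appendRawQueryParameter("query", "up") // placeholder PromQL
                    .build();

            // SigV4-sign the request; the signing name must be "aps"
            Aws4SignerParams signerParams = Aws4SignerParams.builder()
                    .awsCredentials(DefaultCredentialsProvider.create().resolveCredentials())
                    .signingName("aps")
                    .signingRegion(region)
                    .build();
            SdkHttpFullRequest signed = Aws4Signer.create().sign(request, signerParams);

            // Execute with the SDK's HTTP client and print the JSON response
            try (SdkHttpClient http = ApacheHttpClient.create()) {
                HttpExecuteResponse response = http.prepareRequest(
                        HttpExecuteRequest.builder().request(signed).build()).call();
                System.out.println(new String(response.responseBody().get().readAllBytes()));
            }
        }
    }

The key detail is the signingName("aps") line; with any other signing name the request fails signature validation.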
There is a project that was migrated from a legacy platform to GCP.
On GCP everything runs as microservices, maybe around 40-50 of them.
I would like to automate testing of these microservices, but there are no endpoints exposed in this project.
How could you automate a microservice where there are no endpoints?
What type of architecture could you use to test this?
DB: Firestore (NoSQL)
Thanks
M
In my view, you can do it the following way:
Use a ClusterIP or NodePort Service to access those pods.
Spin up a new pod that talks to your target pods.
You can restrict pod-to-pod communication based on labels by enabling a network policy.
You can use Calico as the network policy agent.
You can view the logs of your testing pod with kubectl logs [pod name], through your cloud provider's logging service, or even via a logging DaemonSet you install.
The testing pod can send traffic periodically: either use a thread that calls the target service and then sleeps for a while, or use a Kubernetes CronJob to call the target service, as in the sketch below. Which one to pick depends on your use case.
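A minimal CronJob-style probe can be created with a one-liner (a sketch; the service name, namespace, port, and path are assumptions):

    kubectl create cronjob target-probe \
        --image=curlimages/curl \
        --schedule="*/5 * * * *" \
        -- curl -fsS http://target-service.default.svc.cluster.local:8080/health

The output of each run is then available via kubectl logs on the job's pods.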
Let me know if this meets your requirements, or if there is more you can elaborate on.
In terms of finding out how to test your microservices on the Google Cloud Platform, I would suggest referencing our documentation on "Microservices Architecture on Google App Engine", as it will explain and guide you on how to implement your services on GCP. You may also look into this document, as it provides the best practices for designing APIs to communicate between microservices.
Additionally, user "ARINDAM BANERJEE" has a great example you can follow as well.
This is my first time using GCP, and I'm trying to put my project into production, but I'm running into problems getting WebSocket communication working. I've been googling around and I'm still unclear on whether Cloud Run on GKE supports inbound/outbound WebSocket connections. The limitations docs say that Cloud Run (fully managed) does not work with inbound WebSockets, but they say nothing about Cloud Run on GKE having issues with WebSockets.
I can post my ingress config and so on, but I'm not really sure what exactly is relevant here; I've just followed the getting-started guide, so everything is still mostly set to the defaults.
The short answer is no: inbound WebSockets are not supported, although outbound WebSockets do work. This is a known issue on Cloud Run. You can use plain GKE or App Engine Flex as recommended alternatives.
The short answer, as of January 2021, is yes! You will need to use the beta API when deploying your service. Details are here: https://cloud.google.com/blog/products/serverless/cloud-run-gets-websockets-http-2-and-grpc-bidirectional-streams
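Concretely, at the time that meant deploying through the beta command (a sketch; the service name, image, and region are placeholders, and the timeout caps how long a single WebSocket connection can stay open):

    gcloud beta run deploy my-ws-service \
        --image=gcr.io/my-project/my-ws-image \
        --platform=managed \
        --region=us-central1 \
        --timeout=3600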
I am new to AWS and the variety of options is overwhelming.
I run my app locally in Docker. Now I want to move it to AWS so I can access it in a browser remotely. What is the easiest configuration for my case?
If you are new to AWS, I suggest it would be better to take a minute to understand what AWS is and how it works.
However, for your scenario, assuming you are comfortable with Docker, you can follow this tutorial. AWS has a container service called ECS, and I suggest you stick with it.
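Whichever ECS flavor you end up with, the first step is pushing your local image to ECR, which looks roughly like this (the account ID, region, and image name are placeholders):

    aws ecr create-repository --repository-name my-app
    aws ecr get-login-password --region us-east-1 \
        | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest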
I am trying to access Kafka and third-party services (e.g., InfluxDB) running in GKE from a Dataflow pipeline.
I have a DNS server for service discovery, also running in GKE. I also have a route in my network to access the GKE IP range from Dataflow instances, and this is working fine. I can manually nslookup from the Dataflow instances using my custom server without issues.
However, I cannot find a proper way to set up an additional DNS server when running my Dataflow pipeline. How could I achieve that, so that KafkaIO and similar sources/writers can resolve hostnames against my custom DNS?
sun.net.spi.nameservice.nameservers is tricky to use, because it must be set very early on, before the name service is statically instantiated. I would pass it via java -D, but Dataflow is going to run the code itself directly.
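For reference, the legacy mechanism I mean looks roughly like this (a sketch; the resolver IP and hostname are placeholders, and these JDK 8-only properties were removed in JDK 9):

    public class CustomDnsExample {
        // Must run before InetAddress is used for the first time anywhere
        // in the JVM, hence the static initializer.
        static {
            System.setProperty("sun.net.spi.nameservice.provider.1", "dns,sun");
            System.setProperty("sun.net.spi.nameservice.nameservers", "10.0.0.53");
        }

        public static void main(String[] args) throws Exception {
            // Resolved via 10.0.0.53 only if no lookup happened earlier
            System.out.println(java.net.InetAddress.getByName("kafka.gke.example"));
        }
    }

On Dataflow there is no equivalent hook that is guaranteed to run before the worker's first lookup, which is exactly the problem.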
In addition, I would not want to simply replace the system's resolvers, but merely append a new one to the GCP project-specific resolvers that the instance comes preconfigured with.
Finally, I have not found any way to use a startup script with Dataflow instances, as you can with a regular GCE instance.
I can't think of a way today to specify a custom DNS server in a VM other than editing the /etc/resolv.conf file [1] in the box. I don't know if it is possible to share the default network; if it is, machines are available at hostName.c.[PROJECT_ID].internal, which may serve your purpose if hostName is stable [2].
[1] https://cloud.google.com/compute/docs/networking#internal_dns_and_resolvconf
[2] https://cloud.google.com/compute/docs/networking