How do I make an app installed by azure/draft integrate with Istio?
Specifically, the official Istio documentation says:
https://istio.io/docs/setup/kubernetes/quick-start.html
If you do not have the Istio-Initializer installed, you must use istioctl kube-inject to manually inject Envoy containers into your application pods before deploying them:
kubectl create -f <(istioctl kube-inject -f <your-app-spec>.yaml)
What/where should I modify in the Helm chart folder created by azure/draft to make it work with Istio?
The answer is not specific to Azure.
There are two ways to integrate Istio with an app:
1. Deploy the Istio initializer before deploying your app (if the app is already deployed, undeploy it, deploy the initializer, and then deploy your app again). Run kubectl create -f install/kubernetes/istio-initializer.yaml. From that moment on, every pod deployed in the cluster will be integrated with Istio.
2. Integrate Istio with particular apps only, not with every app. For each app you want integrated with Istio, instead of running kubectl create -f app.yaml as you normally would, run kubectl create -f <(istioctl kube-inject -f app.yaml) (a sketch for a draft-generated chart follows below).
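For a chart scaffolded by azure/draft, one way to apply option 2 is to render the chart with helm template and pipe the result through istioctl kube-inject before creating the resources. This is only a rough sketch: it bypasses Helm's release tracking, and ./charts/mychart is a placeholder for whatever chart folder draft generated for you.
# Render the draft-generated chart to plain Kubernetes manifests
helm template ./charts/mychart > app.yaml
# Inject the Envoy sidecar into the rendered pods and create the resources
kubectl create -f <(istioctl kube-inject -f app.yaml)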
Related
We have installed Istio manually on a GKE cluster. We want to install/add the Istio Stackdriver adapter so that Istio metrics are available on the Stackdriver monitoring dashboard in GCP. I am not able to get the metrics despite adding the CRDs as mentioned in
https://github.com/GoogleCloudPlatform/istio-samples/blob/master/common/install_istio.sh
git clone https://github.com/istio/installer && cd installer
helm template istio-telemetry/mixer-telemetry --execute=templates/stackdriver.yaml -f global.yaml --set mixer.adapters.stackdriver.enabled=true --namespace istio-system | kubectl apply -f -
I feel we are missing the authentication part. Can anyone help in resolving this?
I was unable to replicate your setup, and I noticed that the Istio version downloaded by the script was 1.4.2, which is not supported by GKE at this moment.
Nonetheless, I'd recommend checking this document for troubleshooting and consulting this guide to get Istio installed on GKE.
You should also be aware of a couple of limitations when using Istio on GKE.
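As a hedged first check (the resource kinds, deployment name, and container name below are the Istio 1.4 mixer defaults and may differ in your install), you can verify that the Stackdriver handler/rule/instance objects were actually created and look for authentication or permission errors in the mixer telemetry logs:
# Mixer configuration objects that the stackdriver template should have created
kubectl get handlers.config.istio.io,rules.config.istio.io,instances.config.istio.io -n istio-system | grep -i stackdriver
# Authentication/permission problems usually show up in the mixer logs
kubectl logs -n istio-system deploy/istio-telemetry -c mixer | grep -i -E "stackdriver|permission"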
I have a service created on Google Cloud Run that I am able to deploy manually through the Google Cloud Console UI using an image on Container Registry. But deployment from the CLI is failing. Here is the command I am using and the error I get. I am not able to understand what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: spec.revisionTemplate.spec.container.ports should be empty
field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue
Here's my SDK Version
$gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multistage docker build. Here's my Dockerfile:
# Build stage: compile a static Go binary
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    /usr/local/bin/go build -a \
    -ldflags '-w -extldflags "-static"' \
    -o /root/go/bin/executable ./cmds/project/main.go

# Runtime stage: copy only the compiled binary into a minimal Alpine image
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
CMD executable
I had this same problem, and I assume it was because I had an older Cloud Run deployment that was created before some update, back before I had run gcloud components update.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via terminal). I noticed that the ports: definition disappeared from the YAML once I did this.
After this I could do deployments normally.
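If you prefer to do that workaround entirely from the terminal, here is a sketch with placeholder service/region/image values; the describe step is just to confirm whether a stale ports: block is present in the service spec:
# Check the current service spec for a leftover ports: block
gcloud beta run services describe service-name --platform managed --region region-name --format yaml | grep -n ports
# Delete the service and deploy it again from scratch
gcloud beta run services delete service-name --platform managed --region region-name
gcloud beta run deploy service-name --platform managed --region region-name --image image-url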
This was a bug in Cloud Run. It has been fixed and deploying with CLI is working for me now. Here's the link to the issue I had raised with Google Cloud which has a response from them https://issuetracker.google.com/issues/144069696.
I want to run a service on Google Cloud Run that uses Cloud Memorystore as a cache.
I created a Memorystore instance in the same region as Cloud Run and used the example code to connect: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/memorystore/redis/main.go. This didn't work.
Next I created a Serverless VPC Access connector, which didn't help. I use Cloud Run without a GKE cluster, so I can't change any configuration.
Is there a way to connect from Cloud Run to Memorystore?
To connect Cloud Run (fully managed) to Memorystore you need to use the mechanism called "Serverless VPC Access" or a "VPC Connector".
As of May 2020, Cloud Run (fully managed) has Beta support for Serverless VPC Access. See Connecting to a VPC Network for more information.
Alternatives to using this Beta include:
Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
Stay within fully managed Serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
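A rough sketch of the Beta path on fully managed Cloud Run, with placeholder names, region, and IP range, and assuming the Serverless VPC Access API is enabled in the project:
# Create a Serverless VPC Access connector in the same region and VPC
# network as the Memorystore instance
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28
# Deploy (or redeploy) the Cloud Run service with the connector attached so
# it can reach the Memorystore instance's private IP
gcloud beta run deploy my-service \
  --image gcr.io/my-project/my-image \
  --platform managed \
  --region us-central1 \
  --vpc-connector my-connector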
While waiting for serverless VPC connectors on Cloud Run (Google said yesterday that announcements would be made in the near term), you can connect to Memorystore from Cloud Run using an SSH tunnel via GCE.
The basic approach is the following.
First, create a forwarder instance on GCE:
gcloud compute instances create vpc-forwarder --machine-type=f1-micro --zone=us-central1-a
Don't forget to open port 22 in your firewall policies (it's open by default).
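If it is not open in your network, a minimal (and deliberately broad) rule looks like this; you would normally narrow it down with target tags and source ranges:
# Allow inbound SSH to instances in the default network
gcloud compute firewall-rules create allow-ssh --allow tcp:22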
Then install the gcloud CLI via your Dockerfile.
Here is an example for a Rails app. The Dockerfile makes use of a script for the entrypoint.
# Use the official lightweight Ruby image.
# https://hub.docker.com/_/ruby
FROM ruby:2.5.5
# Install gcloud
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
# Generate SSH key to be used by the SSH tunnel (see entrypoint.sh)
RUN mkdir -p /home/.ssh && ssh-keygen -b 2048 -t rsa -f /home/.ssh/google_compute_engine -q -N ""
# Install bundler
RUN gem update --system
RUN gem install bundler
# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD ["bash", "entrypoint.sh"]
Finally, open an SSH tunnel to Redis in your entrypoint.sh script:
#!/bin/bash
# Memorystore config
MEMORYSTORE_IP=10.0.0.5
MEMORYSTORE_REMOTE_PORT=6379
MEMORYSTORE_LOCAL_PORT=6379
# Forwarder config
FORWARDER_ID=vpc-forwarder
FORWARDER_ZONE=us-central1-a
# Start tunnel to Redis Memorystore in background
gcloud compute ssh \
--zone=${FORWARDER_ZONE} \
--ssh-flag="-N -L ${MEMORYSTORE_LOCAL_PORT}:${MEMORYSTORE_IP}:${MEMORYSTORE_REMOTE_PORT}" \
${FORWARDER_ID} &
# Run migrations and start Puma
bundle exec rake db:migrate && bundle exec puma -p 8080
With the solution above Memorystore will be available to your application on localhost:6379.
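How the app picks that up depends on your Redis client configuration; for the Rails example above it could simply be an environment variable (REDIS_URL here is an assumption about how your app reads its Redis config):
# Point the Redis client at the local end of the SSH tunnel
export REDIS_URL=redis://localhost:6379/0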
There are a few caveats, though:
This approach requires the service account configured on your Cloud Run service to have the roles/compute.instanceAdmin role, which is quite powerful.
The SSH keys are baked into the image to speed up container boot time. That's not ideal.
There is no failover if your forwarder crashes.
I've written up a longer and more elaborate approach in a blog post that improves overall security and adds failover capabilities. The solution uses plain SSH instead of the gcloud CLI.
If you need Redis inside your VPC, you can also spin up Redis on Compute Engine yourself.
It's more costly (especially for a cluster) than Redis Cloud, but a workable temporary solution if you have to keep the data in your VPC.
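A minimal sketch of that alternative for a single, non-HA instance (names, zone, and machine type are placeholders; reaching it from Cloud Run still requires one of the connectivity options above):
# Run a single Redis container on a small Compute Engine VM inside your VPC
gcloud compute instances create-with-container redis-vm \
  --zone us-central1-a \
  --machine-type g1-small \
  --container-image redis:5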
I've been struggling with configuring Kubernetes for many hours and I don't know how to move forward.
What I did :
I created a few services using Spring Cloud
I created Docker images for each service
I pushed those images to Docker Hub
I launched a Kubernetes cluster on AWS by running
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
Command kubectl cluster-info shows that it actually works.
I created Kubernetes pods for each service. The command kubectl get pods shows that all pods have the status Running.
The problem is that when I log to my AWS account I don't see any running instance, although I can see kubernetes-staging created in my S3 bucket.
My goal is to actually access my services externally, not just on localhost. How can I do that?
You should be able to see instances, of course. As @kichik mentioned, check whether your AWS console is showing the same region as the one used by the deployment scripts.
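A quick way to check from the CLI which region the instances actually ended up in (us-west-2 is the historical default zone family for the kube-up scripts, so treat it as an assumption and adjust as needed):
# List instances and their state in the region the deployment scripts used
aws ec2 describe-instances --region us-west-2 \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table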
To use your services/applications, the next step is to expose them to the public with Kubernetes Services, as described here and here.
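For example, a sketch of exposing one of the deployments through an AWS load balancer (deployment name and ports are placeholders):
# Create a Service of type LoadBalancer; on AWS this provisions an ELB
kubectl expose deployment my-service --type=LoadBalancer --port=80 --target-port=8080
# The external hostname column shows the public endpoint once it is ready
kubectl get service my-service -o wide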
I am setting up a CI/CD pipeline for my microservices. Currently I use Travis CI to pull the code from GitHub upon check-in, build the Docker image, and push it to Docker Hub. I tried using Docker Cloud (previously known as Tutum), which provides an automatic deployment feature to AWS EC2 instances, but the deployment sometimes recreates the container and the service endpoint URL changes, which is not desirable.
I am exploring Amazon's ECS and its tasks, but I cannot find any reference for how to set up continuous deployment to ECS when a new image is pushed to Docker Hub.
Does anybody have experience doing this setup?
With ECS you would basically have CI detect a change to Docker Hub and update your task definition/service.
For this I use the wonderful ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
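Wired into the Travis build, the last step can be a small deploy script that pushes the image and then calls ecs-deploy; a sketch assuming the referenced environment variables are configured in the Travis settings and that the ecs-deploy script is checked into the repo:
#!/bin/bash
set -euo pipefail
# Push the freshly built image to Docker Hub
echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
docker push "$DOCKER_IMAGE_NAME"
# Update the ECS task definition/service to use the new image and wait
# for the service to stabilize
./ecs-deploy -k "$AWS_KEY" -s "$AWS_SECRET" -r "$AWS_REGION" \
  -c "$CLUSTER_NAME" -n "$SERVICE_NAME" -i "$DOCKER_IMAGE_NAME"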