I have already implemented a Consul cluster in AWS with auto-join and EC2 tagging.
https://www.consul.io/docs/agent/cloud-auto-join.html
Now I am trying to do the same in a VMware/vSphere environment. I have done everything as the tutorial describes, but the issue is the tags: it seems that the tags created through the UI are not the tags Consul actually sees. As far as I understood, there are different tags that can only be applied via the REST API.
Has anyone managed to build a Consul cluster in vSphere with the auto-join function? I have searched all over the internet for user stories on this topic, but found none.
Thank you in advance.
So this works. It turns out the issue was on my side: I used a complicated password and it was not rendered correctly. More info here:
https://github.com/hashicorp/consul/issues/4486
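For reference, the auto-join configuration I ended up with looks roughly like the sketch below. The category/tag names, vCenter host, and credentials are placeholders, not my real values:

```hcl
# Sketch of a Consul agent config for vSphere auto-join.
# category_name/tag_name must match a tag created via the vSphere
# tagging (REST) API, and the password should avoid characters that
# need escaping (see the linked GitHub issue).
retry_join = [
  "provider=vsphere category_name=consul tag_name=consul-server host=vcenter.example.local user=consul-ro password=plain-ascii-password insecure_ssl=true"
]
```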
I am using the AWS Managed Prometheus (AMP) service and set up a Prometheus server on my EKS cluster to collect and write metrics to my AMP workspace, using the Helm chart, as per the tutorial from AWS. All works fine; I am also connecting to a Grafana instance running on the cluster and I can see the metrics, no problem.
However, my use case is to query metrics from my web application which runs on the cluster and to display the said metrics using my own diagram widgets. In other words, I don't want to use Grafana.
So I was thinking of using the AWS SDK (Java in my case, https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/amp/model/package-summary.html), which works fine (I can list my workspaces, etc.), except it doesn't have any method for querying metrics!?
The documentation indeed mentions that this is not out of the box (!) and basically redirects to Grafana...
This seems fairly odd to me, as the basic use case would be to run some queries, no? Am I missing something here? Do I need to create my own HTTP requests for this?
FYI, I ended up doing the query manually, creating an SdkHttpFullRequest and using an AWS4Signer to sign it. Works OK, but I wonder why it couldn't be included in the SDK directly... The only gotcha was to make sure to specify "aps" as the signing name when creating the Aws4SignerParams.
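For anyone else hitting this, here is a sketch of what the manual signing can look like with the AWS SDK for Java v2. The workspace ID, region, and endpoint are made-up examples, and error handling and the actual HTTP execution are omitted:

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.signer.Aws4Signer;
import software.amazon.awssdk.auth.signer.params.Aws4SignerParams;
import software.amazon.awssdk.http.SdkHttpFullRequest;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.regions.Region;

public class AmpQuerySketch {
    public static void main(String[] args) {
        // Hypothetical workspace ID and region.
        String endpoint = "https://aps-workspaces.eu-west-1.amazonaws.com"
                + "/workspaces/ws-12345678-abcd/api/v1/query";

        // Build the unsigned request for a simple PromQL query.
        SdkHttpFullRequest unsigned = SdkHttpFullRequest.builder()
                .method(SdkHttpMethod.GET)
                .uri(URI.create(endpoint))
                .appendRawQueryParameter("query", "up")
                .build();

        // Sign it with SigV4; the gotcha: the signing name must be "aps".
        Aws4SignerParams params = Aws4SignerParams.builder()
                .awsCredentials(DefaultCredentialsProvider.create().resolveCredentials())
                .signingName("aps")
                .signingRegion(Region.EU_WEST_1)
                .build();
        SdkHttpFullRequest signed = Aws4Signer.create().sign(unsigned, params);

        // 'signed' now carries the SigV4 Authorization/X-Amz-Date headers
        // and can be executed with any SdkHttpClient implementation.
    }
}
```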
I managed to get multicluster istio working following the documentation.
However, this requires the kubeconfig of each cluster to be set up on the others. I am looking for an alternative to doing that. Based on presentations from solo.io and the Admiral project, it seems it might be possible to set up ServiceEntries to accomplish this manually. Istio docs are scarce in this area. Does anyone have pointers on how to make this work?
There are some advantages to setting up the discovery manually or through our CD processes:
if one cluster gets compromised, the credentials to the other clusters don't leak
it lets us limit which services are discovered
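For what it's worth, the manual approach I was experimenting with looks roughly like the ServiceEntry below, modeled on the replicated-control-planes multicluster docs. The hostname, VIP, and gateway IP are placeholders I made up:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-remote
spec:
  hosts:
  # Hypothetical .global name for a service in the remote cluster.
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - number: 8000
    name: http
    protocol: HTTP
  resolution: STATIC
  addresses:
  # Arbitrary non-routable VIP assigned to this remote service.
  - 240.0.0.2
  endpoints:
  # Reachable address of the remote cluster's istio-ingressgateway;
  # 15443 is the dedicated multicluster passthrough port.
  - address: 192.0.2.10
    ports:
      http: 15443
```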
I posted the question on twitter as well and hope to get some feedback from the Istio contributors.
As per Admiral docs:
Admiral acts as a controller watching k8s clusters that have a credential stored as a secret object in the namespace Admiral is running in. Admiral delivers Istio configuration to each cluster to enable services to communicate.
No matter how you manage the control-plane configuration (manually or with a controller), you have to store and provision credentials somehow; in this case, with the use of Secrets.
You can store your secrets securely in git with sealed-secrets.
You can read more here.
This is my first time using GCP. I'm trying to put my project into production and am running into problems getting WebSocket communication working. I've been googling around and I'm super unclear on whether Cloud Run on GKE supports inbound/outbound WebSocket connections. The limitations docs say that Cloud Run fully managed does not work with inbound WebSockets, but say nothing about Cloud Run on GKE having issues with WebSockets.
I can post my ingress config and such; I'm not really sure exactly what is relevant, but I've just followed their getting-started guide, so everything is still mostly set to the defaults.
The short answer is no. Outbound WebSockets do work, however. This is a known issue on Cloud Run. You can use plain GKE or App Engine Flex as recommended alternatives.
The short answer, as of January 2021, is yes! You will need to use the beta api when deploying your service. Details are here: https://cloud.google.com/blog/products/serverless/cloud-run-gets-websockets-http-2-and-grpc-bidirectional-streams
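At the time of that blog post the support was exposed through the beta API surface, so the deploy has to go through the beta command track. The service, image, and region below are made-up example values:

```
# Deploy via the beta track so the WebSockets support applies.
gcloud beta run deploy my-ws-service \
  --image gcr.io/my-project/my-ws-image \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```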
I am a newbie studying the ELK stack.
Recently, I succeeded in loading logs into Elasticsearch through Logback on a Spring Boot project. But this amounts to sending logs from Logback straight to the _bulk URI that the Elasticsearch service exposes.
But there is still something I don't know: how do I approach and configure Logstash in the AWS Elasticsearch Service?
I don't know where Logstash is located in the AWS Elasticsearch Service. Does Logstash even exist there?
So I asked my developer friends and a developer group, but I didn't get the answers I wanted.
One developer friend insisted that Logstash exists inside the AWS Elasticsearch service, commenting, "Why are you trying to create it in the wrong place, such as on a separate EC2 instance?"
Before asking more questions, I followed the advice Elasticsearch experts give: visit the official website and watch the video clips.
Some might call me stupid, but I tried various things to figure this out on my own.
I watched all the getting-started videos about the ELK stack on the official Elastic website.
I looked for any Logstash information in the AWS Elasticsearch Service reference, but all I found was the logstash-output-amazon-es plug-in, in the topic below.
https://docs.aws.amazon.com/ko_kr/elasticsearch-service/latest/developerguide/es-kibana.html
I keep trying to figure it out, and I'm just panicking, having had little sleep for a few weeks.
And finally, here is my question. I'm thinking of two scenarios.
If Logstash does not exist in the AWS Elasticsearch service:
First, deploy the Spring Boot application to my own EC2 instance.
Second, install Logstash on this EC2 instance and configure the pipeline through logstash.conf to load logs into Elasticsearch in my AWS Elasticsearch service.
If Logstash does exist in the AWS Elasticsearch service, I wonder how to get at logstash.conf, because I want to set the input, filter, and output as I like.
Please help me.
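In case it helps frame the question: if Logstash has to run on my own EC2 instance, I imagine the logstash.conf would look something like this sketch. The port, domain endpoint, and index name are placeholders, and the amazon_es output comes from the logstash-output-amazon-es plug-in mentioned above:

```
# Hypothetical pipeline: receive JSON log events over TCP and
# forward them to an AWS Elasticsearch Service domain.
input {
  tcp {
    port  => 5000
    codec => json_lines
  }
}

filter {
  # Parsing/enrichment would go here.
}

output {
  amazon_es {
    hosts  => ["my-domain.ap-northeast-2.es.amazonaws.com"]
    region => "ap-northeast-2"
    index  => "springboot-logs-%{+YYYY.MM.dd}"
  }
}
```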
You would need to run Logstash on your own infrastructure. But the real question here is: do you really need Logstash? Change your Logback appender to write out JSON logs and then use Filebeat to ship the data directly to Elasticsearch; no Logstash needed.
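A sketch of the Filebeat side of that suggestion, assuming the Spring Boot app writes JSON log lines to a file. The log path and domain endpoint are placeholders:

```yaml
# filebeat.yml sketch: ship JSON log lines straight to Elasticsearch.
filebeat.inputs:
- type: log
  paths:
    - /var/log/myapp/*.json
  json.keys_under_root: true   # lift the JSON fields to the top level
  json.add_error_key: true

output.elasticsearch:
  # Hypothetical AWS Elasticsearch Service endpoint.
  hosts: ["https://my-domain.ap-northeast-2.es.amazonaws.com:443"]
  protocol: "https"
```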
There are quite a few resources on deploying AMIs on EC2, but are there any solutions for incremental code updates to a PHP/Java based website?
Suppose I have 10 EC2 instances, all running PHP/Java based websites with docroots local to each instance. I may want to do numerous code deployments throughout the day.
I don't want to create a new AMI copy and scale that up to new instances each time I have a code update.
Any leads on how best to do this would be greatly appreciated. We use Subversion as our main code repository, and in the past we've simply done an SVN update/checkout when we were on one or two servers.
Thanks.
You should check out Elastic Beanstalk. Essentially you just package up your WAR or other code artifact, upload it to a bucket via AWS's command line/Eclipse integration, and the deployment is performed automatically.
http://aws.amazon.com/elasticbeanstalk/
Elastic Beanstalk is designed to do exactly this for you. We use the Java/Tomcat flavor, but it also supports PHP, Ruby, and Python environments. It has a web console that lets you deploy code (it even keeps a history of deployments), and it also has a Git tool to deploy code from the command line.
It also has monitoring, load balancing, and auto scaling all built in, controlled through just a few web-form entries.
Have you considered using a tool designed to manage this sort of thing for you? Puppet is well regarded in this area.
Have a look here:
https://puppetlabs.com/puppet/what-is-puppet/
(No I am not a Puppet Labs employee :))
Capistrano is a great tool for deploying code to multiple servers at once. Chef and Puppet are great tools for setting up those servers with databases, webservers, etc.
Go for Capistrano. It's a good way to deploy your code to multiple servers.
As already mentioned Elastic Beanstalk is a good option if you just want a webserver and don't want to worry about the details.
Also, take a look at AWS CodeDeploy. You can have much more control over the lifecycle of your instance and you'd be looking at something very similar to what you have now (a set of EC2 instances that you setup). You can even get automatic deployments on instance launch with Auto Scaling.
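To give a flavor of CodeDeploy: each revision ships with an appspec.yml describing where files go and which lifecycle hooks run. Everything below (destination path, script names) is illustrative:

```yaml
# appspec.yml sketch for deploying a PHP docroot to EC2 instances.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh    # hypothetical helper script
      timeout: 60
  AfterInstall:
    - location: scripts/start_server.sh   # hypothetical helper script
      timeout: 60
      runas: root
```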
You can use either Capistrano or Travis CI.