We are scraping the metrics of many istio-proxy sidecars with Prometheus. Since this is a lot of metrics, we would like to compress the payload to save some bandwidth.
Out of the box the stats endpoint does not seem to be compressed with Istio 1.8.2:
$ kubectl exec -it my-pod-0 -c server -- curl -o /dev/null -vsS --compressed http://127.0.0.1:15090/stats/prometheus
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 15090 (#0)
> GET /stats/prometheus HTTP/1.1
> Host: 127.0.0.1:15090
> User-Agent: curl/7.61.1
> Accept: */*
> Accept-Encoding: deflate, gzip
>
< HTTP/1.1 200 OK
< content-type: text/plain; charset=UTF-8
< cache-control: no-cache, max-age=0
< x-content-type-options: nosniff
< date: Fri, 19 Feb 2021 10:42:25 GMT
< server: envoy
< x-envoy-upstream-service-time: 2
< transfer-encoding: chunked
<
{ [26267 bytes data]
* Connection #0 to host 127.0.0.1 left intact
How do I get the sidecar to compress the stats traffic?
So far I have tried adding an EnvoyFilter, but I honestly have no idea about the Envoy internals, and I failed to find docs that help me understand them.
My understanding is that I have to add the compressor filter to this:
$ istioctl proxy-config listeners maintenance-0 --port 15090 -o json | gron
json = [];
json[0] = {};
json[0].address = {};
json[0].address.socketAddress = {};
json[0].address.socketAddress.address = "0.0.0.0";
json[0].address.socketAddress.portValue = 15090;
json[0].filterChains = [];
json[0].filterChains[0] = {};
json[0].filterChains[0].filters = [];
json[0].filterChains[0].filters[0] = {};
json[0].filterChains[0].filters[0].name = "envoy.filters.network.http_connection_manager";
json[0].filterChains[0].filters[0].typedConfig = {};
json[0].filterChains[0].filters[0].typedConfig.httpFilters = [];
json[0].filterChains[0].filters[0].typedConfig.httpFilters[0] = {};
json[0].filterChains[0].filters[0].typedConfig.httpFilters[0].name = "envoy.filters.http.router";
json[0].filterChains[0].filters[0].typedConfig.httpFilters[0].typedConfig = {};
json[0].filterChains[0].filters[0].typedConfig.httpFilters[0].typedConfig["@type"] = "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router";
json[0].filterChains[0].filters[0].typedConfig.routeConfig = {};
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts = [];
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0] = {};
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].domains = [];
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].domains[0] = "*";
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].name = "backend";
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes = [];
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes[0] = {};
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes[0].match = {};
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes[0].match.prefix = "/stats/prometheus";
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes[0].route = {};
json[0].filterChains[0].filters[0].typedConfig.routeConfig.virtualHosts[0].routes[0].route.cluster = "prometheus_stats";
json[0].filterChains[0].filters[0].typedConfig.statPrefix = "stats";
json[0].filterChains[0].filters[0].typedConfig["@type"] = "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager";
I have tried creating the filter a few times; this is my latest attempt:
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gzip
spec:
  workloadSelector:
    labels:
      app: my-pod
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.compressor
          typed_config:
            '@type': type.googleapis.com/envoy.extensions.filters.http.compressor.v3.Compressor
            compressor_library:
              name: text_optimized
              typed_config:
                '@type': type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip
            remove_accept_encoding_header: true
I do not really know what to put into the .spec.configPatches.match section, and the patch and applyTo sections are probably wrong too.
With help in an Istio issue, we made it work. As it turns out, the stats listener on port 15090 comes from Envoy's static bootstrap, which EnvoyFilter patches apparently do not reach, so the solution is a custom bootstrap override instead. I am copying my original response from: https://github.com/istio/istio/issues/30987#issuecomment-822517456
I got a working example, and our network usage went down from ~20 MBytes/s to ~30 KBytes/s (yes, from mega to kilo 🔥). First I thought there was an error somewhere, but the data was complete, and I did a short check on the CLI:
$ kubectl exec elasticsearch-0 -c istio-proxy -- timeout 1 curl -Ss --fail --compressed -w '%{size_download}' -i http://localhost:14090/stats/prometheus | tail -n 1
7763
$ kubectl exec elasticsearch-0 -c istio-proxy -- timeout 1 curl -Ss --fail -w '%{size_download}' -i http://localhost:14090/stats/prometheus | tail -n 1
330315
It is only 2.35% of the original size, and both responses have the same number of lines!
Here is the custom bootstrap; it needs to be referenced from each pod via the sidecar.istio.io/bootstrapOverride: "istio-custom-bootstrap-config" annotation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-custom-bootstrap-config
  namespace: default
data:
  custom_bootstrap.json: |-
    {
      "staticResources": {
        "listeners": [
          {
            "address": {
              "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 14090
              }
            },
            "filterChains": [
              {
                "filters": [
                  {
                    "name": "envoy.filters.network.http_connection_manager",
                    "typedConfig": {
                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                      "httpFilters": [
                        {
                          "name": "envoy.filters.http.compressor",
                          "typed_config": {
                            "@type": "type.googleapis.com/envoy.extensions.filters.http.compressor.v3.Compressor",
                            "compressor_library": {
                              "name": "text_optimized",
                              "typed_config": {
                                "@type": "type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip"
                              }
                            },
                            "remove_accept_encoding_header": true
                          }
                        },
                        {
                          "name": "envoy.filters.http.router",
                          "typedConfig": {
                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                          }
                        }
                      ],
                      "routeConfig": {
                        "virtualHosts": [
                          {
                            "domains": [
                              "*"
                            ],
                            "name": "backend",
                            "routes": [
                              {
                                "match": {
                                  "prefix": "/stats/prometheus"
                                },
                                "route": {
                                  "cluster": "prometheus_stats"
                                }
                              }
                            ]
                          }
                        ]
                      },
                      "statPrefix": "stats"
                    }
                  }
                ]
              }
            ]
          }
        ]
      }
    }
I had to change the port (from 15090 to 14090); a separate listener is also easier to maintain if anything gets added to staticResources.listeners in future updates.
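For reference, here is a minimal sketch of where that annotation goes on a workload (abbreviated; the Deployment name is just a placeholder, and selector/containers are omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/bootstrapOverride: "istio-custom-bootstrap-config"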
I am getting the error below when sending a request with the PUT method.
1 < 405
1 < Allow: GET,HEAD,POST,OPTIONS
1 < Content-Type: text/html; charset=iso-8859-1
1 < Date: Mon, 17 Aug 2020 04:01:07 GMT
1 < Server: Apache
1 < Vary: Accept-Encoding
1 < Via: 1.1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method PUT is not allowed for the URL /xyz/abc/def/-1.</p>
</body></html>
[Fatal Error] :1:50: White spaces are required between publicId and systemId.
13:01:07.437 [main] WARN com.intuit.karate - xml parsing failed, response data type set to string: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 50; White spaces are required between publicId and systemId.
My Request Body looks like:
{
"i": {
"x1": {
"q": 10
},
"x2": {
"q": 50
}
}
}
Can anyone guide me to resolve the issue?
NOTE:
The API supports the PUT method, and the request we are trying is the PUT one.
The same request works fine with Postman.
My code that makes the request:
Scenario: Common Action
* string testdata = read(sourceFilename)
* def arr = LIB.getJsonArrayForElement(testdata, 'testCases')
* eval logTC(arr[cnt].caseId)
#setting c
* def c = arr[cnt].request.c == null ? 'Y' : arr[cnt].request.c
#setting b
* def b = arr[cnt].request.b == null ? 'Z' : arr[cnt].request.b
#setting s
* def s = arr[cnt].request.s == null ? '' : arr[cnt].request.s
#setting i
* def i = arr[cnt].request.i == null ? '' : arr[cnt].request.i
#setting d
* def d = arr[cnt].request.d == null ? false : arr[cnt].request.d
#setting v
* def v = arr[cnt].request.v == null ? '' : arr[cnt].request.v
#setting f
* def f = arr[cnt].request.f == null ? false : arr[cnt].request.f
#setting es
* def es = arr[cnt].expected.s == null ? 999999 : arr[cnt].expected.s
* eval if(es == 999999 && typeof ess != 'undefined') karate.set('es', ess)
* eval if(es == 999999) karate.set('es', 200)
#preparing endpoint
* def endpoint = utils.getUrl(c, b, s, i, v, d)
Given url endpoint
And request arr[cnt].request.body
When method PUT
Then assert responseStatus == es
# for the cases where we are expecting 404
* if (es == 404) karate.abort()
# for cases != 404
* def expRes = '<empty>'
* eval if(es != 204) karate.set('expRes', arr[cnt].expected.body)
* def actRes = '<empty>'
* eval if(es != 204) karate.set('actRes', response)
And match actRes == expRes
cURL
curl --location --request PUT 'http://host-url/c/Y/b/X/s/2/i/1/v/-1' \
--header 'X-Client-Id: test' \
--header 'Content-Type: application/json' \
--data-raw '{
"i": {
"x1": {
"q": 10
},
"x2": {
"q": 50
}
}
}'
This request works absolutely fine for me, so most likely your server was expecting something else, or maybe it is an actual bug in your server. Please work with the team who owns the server; you should be able to find the issue in no time. Here is what I tried:
* url 'http://httpbin.org'
* header X-Client-Id = 'test'
* path 'put'
* request
"""
{
"i": {
"x1": {
"q": 10
},
"x2": {
"q": 50
}
}
}
"""
* method put
Other troubleshooting tips:
Postman adds some headers automatically, such as Accept - watch out for that and add them if needed
Karate appends ; charset=UTF-8 to the Content-Type header by default, which in rare cases the server does not like (most likely a bug on your server-side). You can disable this with * configure charset = null - see https://stackoverflow.com/a/53651454/143475
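For example, both tips combined in a feature file (a sketch; the Accept value is just a guess at what Postman sent):

Background:
  * configure charset = null
  * header Accept = 'application/json'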
Let's suppose that I have a bucket with many folders and objects.
The bucket's access setting is "Objects can be public". If I want to know whether there is at least one public object, or to list all public objects, how should I do this? Is there any way to do it automatically?
It appears that you would need to loop through every object and call GetObjectAcl().
You'd preferably do it in a programming language, but here is an example with the AWS CLI:
aws s3api get-object-acl --bucket my-bucket --key foo.txt
{
  "Owner": {
    "DisplayName": "...",
    "ID": "..."
  },
  "Grants": [
    {
      "Grantee": {
        "DisplayName": "...",
        "ID": "...",
        "Type": "CanonicalUser"
      },
      "Permission": "FULL_CONTROL"
    },
    {
      "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
      },
      "Permission": "READ"
    }
  ]
}
I granted the READ permission by using Make Public in the S3 management console. Please note that objects could also be made public via a Bucket Policy, which would not show up in the ACL.
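To cover that case, the GetBucketPolicyStatus API reports whether the bucket policy makes the bucket public, e.g. with the CLI:

aws s3api get-bucket-policy-status --bucket my-bucket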
Use the listObjectsV2 method from the AWS SDK to do this with JavaScript:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjectsV2-property. There is an SDK for every major language (Java, etc.); use the one that you know.
var params = {
  Bucket: "examplebucket",
  MaxKeys: 2
};
s3.listObjectsV2(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    // bucket isn't empty
    if (data.Contents && data.Contents.length > 0)
      console.log(data); // successful response
  }
  /*
  data = {
    Contents: [
      {
        ETag: "\"70ee1738b6b21e2c8a43f3a5ab0eee71\"",
        Key: "happyface.jpg",
        LastModified: <Date Representation>,
        Size: 11,
        StorageClass: "STANDARD"
      },
      {
        ETag: "\"becf17f89c30367a9a44495d62ed521a-1\"",
        Key: "test.jpg",
        LastModified: <Date Representation>,
        Size: 4192256,
        StorageClass: "STANDARD"
      }
    ],
    IsTruncated: true,
    KeyCount: 2,
    MaxKeys: 2,
    Name: "examplebucket",
    NextContinuationToken: "1w41l63U0xa8q7smH50vCxyTQqdxo69O3EmK28Bi5PcROI4wI/EyIJg==",
    Prefix: ""
  }
  */
});
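Note that a single listObjectsV2 call returns at most 1000 keys (fewer if MaxKeys is set), so for a bucket with many objects you would page through with the continuation token. A rough sketch in the same callback style:

// Sketch: recursively page through all keys via NextContinuationToken.
function listAllObjects(s3, bucket, token) {
  var params = { Bucket: bucket };
  if (token) params.ContinuationToken = token;
  s3.listObjectsV2(params, function(err, data) {
    if (err) return console.log(err, err.stack);
    (data.Contents || []).forEach(function(obj) {
      console.log(obj.Key);
    });
    if (data.IsTruncated) listAllObjects(s3, bucket, data.NextContinuationToken);
  });
}

listAllObjects(s3, "examplebucket");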
Building off of John's answer, you might find this helpful:
import concurrent.futures

import boto3

BUCKETS = [
    "TODO"
]


def get_num_objs(bucket):
    """Count the objects in a bucket via the list_objects_v2 paginator."""
    num_objs = 0
    s3_client = boto3.client("s3")
    paginator = s3_client.get_paginator("list_objects_v2")
    for res in paginator.paginate(
        Bucket=bucket,
    ):
        if "Contents" not in res:
            print(f"""No contents in res={res}""")
            continue
        num_objs += len(res["Contents"])
    return num_objs


for BUCKET in BUCKETS:
    print(f"Analyzing bucket={BUCKET}...")
    num_objs = get_num_objs(BUCKET)
    print(f"BUCKET={BUCKET} has num_objs={num_objs}")
    # if num_objs > 10_000:
    #     raise Exception(f"num_objs={num_objs}")

s3_client = boto3.client("s3")


# Note: this reuses the module-level BUCKET left over from the loop above,
# so the ACL scan below runs against the last bucket in BUCKETS.
def assert_no_public_obj(res):
    """Raise if any object in a list_objects_v2 page has a public ACL grant."""
    if res["ResponseMetadata"]["HTTPStatusCode"] != 200:
        raise Exception(res)
    if "Contents" not in res:
        print(f"""No contents in res={res}""")
        return
    print(f"""Fetched page with {len(res["Contents"])} objs...""")
    for i, obj in enumerate(res["Contents"]):
        if i % 100 == 0:
            print(f"""Fetching {i}-th obj in page...""")
        res = s3_client.get_object_acl(Bucket=BUCKET, Key=obj["Key"])
        for grant in res["Grants"]:
            # Amazon S3 considers a bucket or object ACL public if it grants any permissions
            # to members of the predefined AllUsers or AuthenticatedUsers groups.
            # https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html#access-control-block-public-access-policy-status
            uri = grant["Grantee"].get("URI")
            if not uri:
                continue
            if "AllUsers" in uri or "AuthenticatedUsers" in uri:
                raise Exception(f"""Grantee={grant["Grantee"]} found for {BUCKET}/{obj["Key"]}""")


paginator = s3_client.get_paginator("list_objects_v2")
with concurrent.futures.ThreadPoolExecutor() as executor:
    for res in paginator.paginate(
        Bucket=BUCKET,
    ):
        executor.submit(assert_no_public_obj, res)
Waiting for create [operation-1544424409972-57ca55456bd22-84bb0f13-64975fdc]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1544424409972-57ca55456bd22-84bb0f13-64975fdc]: errors:
- code: CONDITION_NOT_MET
location: /deployments/infrastructure/resources/practice-gke-clusters->$.properties->$.cluster.name
message: |-
InputMapping for field [cluster.name] for method [create] could not be set from input, mapping was: [$.ifNull($.resource.properties.cluster.name, $.resource.name)
], and evaluation context was:
{
  "deployment" : {
    "id" : 4291795636642362677,
    "name" : "infrastructure"
  },
  "intent" : "CREATE",
  "matches" : [ ],
  "project" : "resources-practice",
  "requestId" : "",
  "resource" : {
    "name" : "practice-gke-clusters",
    "properties" : {
      "initialNodeCount" : 1,
      "location" : "asia-east2-a",
      "loggingService" : "logging.googleapis.com",
      "monitoringService" : "monitoring.googleapis.com",
      "network" : "$(ref.practice-gke-network.selfLink)",
      "subnetwork" : "$(ref.practice-gke-network-subnet-1.selfLink)"
    },
    "self" : { }
  }
}
I always experience this when I try to create a GKE cluster from Deployment Manager with the Jinja template below:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    network: $(ref.practice-gke-network.selfLink)
    subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
    initialNodeCount: 1
    loggingService: logging.googleapis.com
    monitoringService: monitoring.googleapis.com
    location: asia-east2-a
You are missing the cluster block - the container.v1.cluster type expects the cluster's settings (including its name) to be nested under properties.cluster:

properties:
  cluster:
    name: practice-gke-clusters
    initialNodeCount: 3
    nodeConfig:
      oauthScopes:
      - https://www.googleapis.com/auth/compute
      - https://www.googleapis.com/auth/devstorage.read_only
      - https://www.googleapis.com/auth/logging.write
      - https://www.googleapis.com/auth/monitoring
Modify the initialNodeCount and oauthScopes as required.
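Putting it together, the resource from the question would then look roughly like this (a sketch; I am assuming the zone-based form of container.v1.cluster, where zone sits alongside cluster in properties):

resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      initialNodeCount: 1
      loggingService: logging.googleapis.com
      monitoringService: monitoring.googleapis.com
      network: $(ref.practice-gke-network.selfLink)
      subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)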
I'm trying to create a one-node etcd cluster on AWS using CoreOS cloud-config. I have created a Route53 record set with the value etcd.uday.com, which is an alias to the ELB that points to the EC2 instance. etcd is running successfully, but when I run the etcd member list command I get the error below:
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://etcd.udayvishwakarma.com:2379 \
--cacert=./ca.pem \
--cert=etcd-client.pem \
--key=etcd-client-key.pem
Error: context deadline exceeded
However, it lists the members when the --insecure-skip-tls-verify flag is added to the etcdctl member list command. I have generated the certificates with cfssl using the configs below.
ca.json
{
  "CN": "Root CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "UK",
      "L": "London",
      "O": "Kubernetes",
      "OU": "CA"
    }
  ],
  "ca": {
    "expiry": "87658h"
  }
}
ca-config.json
{
  "signing": {
    "default": {
      "expiry": "2190h"
    },
    "profiles": {
      "client": {
        "expiry": "8760h",
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ]
      },
      "server": {
        "expiry": "8760h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ]
      },
      "peer": {
        "expiry": "8760h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "ca": {
        "usages": [
          "signing",
          "digital signature",
          "cert sign",
          "crl sign"
        ],
        "expiry": "26280h",
        "is_ca": true
      }
    }
  }
}
etcd-member.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "etcd.uday.com"
  ],
  "names": [
    {
      "O": "Kubernetes"
    }
  ]
}
etcd-client.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "etcd.uday.com"
  ],
  "names": [
    {
      "O": "Kubernetes"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
  -config=ca-config.json -profile=peer \
  etcd-member.json | cfssljson -bare etcd-member

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
  -config=ca-config.json -profile=client \
  etcd-client.json | cfssljson -bare etcd-client
My etcd-member.service systemd unit cloud-config is below:
units:
  - name: etcd-member.service
    drop-ins:
      - name: aws-etcd-cluster.conf
        content: |
          [Service]
          Environment=ETCD_USER=etcd
          Environment=ETCD_NAME=%H
          Environment=ETCD_IMAGE_TAG=v3.1.12
          Environment=ETCD_SSL_DIR=/etc/etcd/ssl
          Environment=ETCD_CA_FILE=/etc/ssl/certs/ca.pem
          Environment=ETCD_CERT_FILE=/etc/ssl/certs/etcd-client.pem
          Environment=ETCD_KEY_FILE=/etc/ssl/certs/etcd-client-key.pem
          Environment=ETCD_CLIENT_CERT_AUTH=true
          Environment=ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
          Environment=ETCD_PEER_CA_FILE=/etc/ssl/certs/ca.pem
          Environment=ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd-member.pem
          Environment=ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd-member-key.pem
          Environment=ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
          Environment=ETCD_INITIAL_CLUSTER_STATE=new
          Environment=ETCD_INITIAL_CLUSTER=%H=https://%H:2380
          Environment=ETCD_DATA_DIR=/var/lib/etcd3
          Environment=ETCD_LISTEN_CLIENT_URLS=https://%H:2379,https://127.0.0.1:2379
          Environment=ETCD_ADVERTISE_CLIENT_URLS=https://%H:2379
          Environment=ETCD_LISTEN_PEER_URLS=https://%H:2380
          Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=https://%H:2380
          PermissionsStartOnly=true
          Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/etcd-member-wrapper.uuid"
          ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
          ExecStartPre=/usr/bin/sed -i 's/^ETCDCTL_ENDPOINT.*$/ETCDCTL_ENDPOINT=https:\/\/%H:2379/' /etc/environment
          ExecStartPre=/usr/bin/mkdir -p /var/lib/etcd3
          ExecStartPre=/usr/bin/chown -R etcd:etcd /var/lib/etcd3
          ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
    enable: true
    command: start
Is the cert generation wrong, or have I missed something?
The certificates are generated for etcd.uday.com, but you are trying to connect using etcd.udayvishwakarma.com, a name the certificate is not valid for. Change the endpoint in etcdctl from etcd.udayvishwakarma.com to etcd.uday.com.
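That is, using the same flags as in the question:

ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://etcd.uday.com:2379 \
  --cacert=./ca.pem \
  --cert=etcd-client.pem \
  --key=etcd-client-key.pem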
I ran into the same problem today. This is probably not going to be useful for you now, but it will be for anybody who runs into the same problem in the future.
I think you might be missing etcd.udayvishwakarma.com from the cert you pass via --cert=etcd-client.pem.
To verify that etcd.udayvishwakarma.com exists in your cert, you can run:
openssl x509 -in etcd-client.pem -text -noout
and you should be able to see it under X509v3 Subject Alternative Name. If you don't, you will probably need to recreate the certificate adding that DNS name.
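If it is missing, regenerating the client cert with both names in -hostname should do it (a sketch based on the cfssl commands from the question):

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -hostname="etcd.uday.com,etcd.udayvishwakarma.com" \
  -config=ca-config.json -profile=client \
  etcd-client.json | cfssljson -bare etcd-client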
While running .\etcdctl.exe put key value, I got this error:
Error: context deadline exceeded
Before running etcdctl.exe, you should run etcd.exe first. In my case, that made it work.
Currently I am writing a test library to test configuration settings. I would like to set only a few parameters of a Firehose delivery stream, like SizeInMBs and IntervalInSeconds, while all other parameters remain the same. Is there a simple way to do it?
I wrote the following method, which re-reads the current Lambda processor ARN so the ProcessingConfiguration can be sent back unchanged alongside the new buffering hints:
def set_firehose_buffering_hints(self, size_mb, interval_sec):
    # Look up the current Lambda processor ARN so the ProcessingConfiguration
    # can be re-sent unchanged.
    response = self._firehose_client.describe_delivery_stream(DeliveryStreamName=self.firehose)
    lambdaarn = (response['DeliveryStreamDescription']
                 ['Destinations'][0]['ExtendedS3DestinationDescription']
                 ['ProcessingConfiguration']['Processors'][0]['Parameters'][0]['ParameterValue'])
    response = self._firehose_client.update_destination(
        DeliveryStreamName=self.firehose,
        CurrentDeliveryStreamVersionId=response['DeliveryStreamDescription']['VersionId'],
        DestinationId=response['DeliveryStreamDescription']['Destinations'][0]['DestinationId'],
        ExtendedS3DestinationUpdate={
            'BufferingHints': {
                'IntervalInSeconds': interval_sec,
                'SizeInMBs': size_mb
            },
            'ProcessingConfiguration': {
                'Processors': [{
                    'Type': 'Lambda',
                    'Parameters': [
                        {
                            'ParameterName': 'LambdaArn',
                            'ParameterValue': lambdaarn
                        },
                        {
                            'ParameterName': 'BufferIntervalInSeconds',
                            'ParameterValue': str(interval_sec)
                        },
                        {
                            'ParameterName': 'BufferSizeInMBs',
                            'ParameterValue': str(size_mb)
                        }]
                }]
            }})