I am trying to configure the Logstash OpenSearch output plugin to index data into OpenSearch 2.3.0 provisioned by AWS. Access to the AWS OpenSearch Service domain is controlled by IAM. I followed this configuration sample and built the following configuration:
output {
  opensearch {
    hosts => ["vpc-my-opensearch-domain-XXXXXXXX.us-east-1.es.amazonaws.com:443"]
    auth_type => {
      type => 'aws_iam'
      aws_access_key_id => 'MY_ACCESS_KEY_ID'
      aws_secret_access_key => 'MY_ACCESS_SECRET_KEY'
      region => 'us-east-1'
    }
    index => "program_search_01"
    document_id => "%{ruleProgramId}-%{ruleId}"
    routing => "%{[programRulesRelationship][parent]}"
    template_name => "my-template"
    template => "/usr/share/logstash/pipeline/template/my-template.json"
    manage_template => true
    template_overwrite => true
  }
}
After starting Logstash I am getting the following logs:
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "HTTP/1.1 400 Bad Request[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "Server: awselb/2.0[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "Date: Tue, 07 Feb 2023 05:00:05 GMT[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "Content-Type: text/html[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "Content-Length: 220[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "Connection: close[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.wire ][programAvailability] http-outgoing-25053 << "[\r][\n]"
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << HTTP/1.1 400 Bad Request
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << Server: awselb/2.0
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << Date: Tue, 07 Feb 2023 05:00:05 GMT
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << Content-Type: text/html
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << Content-Length: 220
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.headers ][programAvailability] http-outgoing-25053 << Connection: close
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.impl.conn.DefaultManagedHttpClientConnection][programAvailability] http-outgoing-25053: Close connection
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.impl.execchain.MainClientExec][programAvailability] Connection discarded
[2023-02-07T05:00:05,306][DEBUG][org.apache.http.impl.conn.PoolingHttpClientConnectionManager][programAvailability] Connection released: [id: 25053][route: {}->http://vpc-my-opensearch-domain-XXXXXXXXXX-east-1.es.amazonaws.com:443][total available: 0; route allocated: 0 of 100; total allocated: 0 of 1000]
According to the logs, the plugin uses http instead of https. I suspect this is the root cause. If so, can anyone tell me what I am missing in my plugin configuration?
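If the plugin really does fall back to plain HTTP when the hosts entry has no scheme, forcing TLS explicitly might help. A minimal sketch (assuming the plugin honours an https:// scheme in hosts and the ssl setting, as its documentation describes; only the changed lines are shown):
output {
  opensearch {
    hosts => ["https://vpc-my-opensearch-domain-XXXXXXXX.us-east-1.es.amazonaws.com:443"]
    ssl => true
    # ... remaining settings unchanged
  }
}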
I am running Logstash as a Docker container; the existing Dockerfile looks like this:
FROM opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.3
LABEL Name=cas-logstash
COPY mssql-jdbc-9.4.0.jre11.jar /usr/share/logstash/drivers/
USER root
RUN chown -R logstash:root /usr/share/logstash/drivers
RUN sed -i 's/path = "#{template_endpoint}\/#{name}"/path = "\/#{template_endpoint}\/#{name}"/' /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client.rb
USER logstash
I have a REST API Gateway and a Lambda fetching an image from S3 after doing some checks.
I noticed that serving binary files is not that easy, and I found a lot of posts about this topic on SO and other sites.
What I understand and already configured is the following:
My Lambda returns a JSON object with isBase64Encoded:
return {
  statusCode: 200,
  headers: {
    "Content-Type": mimeType, // mimeType is here image/jpeg
  },
  body: data.toString("base64"), // data is a buffer;
  // I also tried data.toString("binary") and data.toString()
  isBase64Encoded: true,
}
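For comparison, here is a minimal sketch of the relevant handler part (the bucket and key names are placeholders, and it assumes the Node.js runtime with AWS SDK v2). The important detail is that the S3 Body buffer is converted with toString("base64") and isBase64Encoded is set to true; toString("binary") or toString() would corrupt the bytes:
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = async () => {
  // Fetch the image from S3; Body is a Buffer in the Node.js runtime
  const obj = await s3.getObject({ Bucket: "my-bucket", Key: "profile.jpg" }).promise();
  return {
    statusCode: 200,
    headers: { "Content-Type": "image/jpeg" },
    body: obj.Body.toString("base64"), // base64 is the only safe conversion here
    isBase64Encoded: true,
  };
};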
In my API Gateway I set the binary media types to the following:
Resources:
  MyApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      ...
      BinaryMediaTypes:
        - "application/octet"
        - "image/jpeg"
        - "image/png"
        - "image/gif"
        - "image/*"
I can also see these settings in the AWS Console.
When I now query my API Gateway, I only receive a base64-encoded image.
❯ curl https://myurl/test/profile-picture?size=small -H "Accept: image/jpeg" -v
* TCP_NODELAY set
* Connected to xxx port 443 (#0)
...
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x558f76b7fea0)
> GET /test/profile-picture?size=small HTTP/2
> Host: xxx
> user-agent: curl/7.68.0
> accept: image/jpeg
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< date: Sun, 28 Feb 2021 21:37:04 GMT
< content-type: image/jpeg
< content-length: 3940
< x-amzn-requestid: bc54ee15-5c41-4828-a7b2-c4978af07176
< x-amzn-trace-id: Root=1-603c0d00-2f468936671986a95532641e;Sampled=0
<
/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAf/AABEIAD8AZAMBEQACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4+Tl5ufo6ery8/T19vf4+fr/2gAMAwEAAhEDEQA/AP8AP/oAKACgAoAKACgAoAKACgAoAKAHKMnHPTtQA/YPf9P8KAIqACgAoAKACgAoAKACgD2H4E/BPxX+0H8SNF+GXg6/8P6Vq+ssMar4pvbuw0PT43ubWxhkvZtO0/VtSdrnUL6x0+0ttP0y+vLq9vLaCG3YuSvicQ57h+HMrrZricLj8bSozo0/q+W4dYnEzlWqKnBqM6lGjTpxcr1K1etSo046ymm0n914dcA5l4l8UYbhXKc04fyjGYnC43GRxnEuaRyrL1SwGHlia1KFX2VevisbVpwccJgcHh8Ri8TU92nScYzlH6w+LP8AwTz8a/B/U/C+l694k13Uv7emEOo6vpvwl8dadonh9UurG3nvtQfxfF4Y1uHSVS/juLbVNR0HSrW+ijne2LRwzSR/M4fxAwGIfK/7LwVT2aqSoZhxBl9PEwT25qWEWNjZ6LmjUlG7Svc/dF9FXihYSti6ceOM6pQxM8NSxPCnhfnmdZfiZU4qVR0sdjMxyWKlFqpF0qlKNRezcnFRaZn/AAg/Ye0r4ueJdJ8K2nxqtfD+p3vi8eFtRl1P4e6xPY6Ravs2+JnmtdbM2o6EhFybiXTba4nhjgiYQySX9hFcebmXiU8phTrYzKsFDDVK0aUcRT4goV4JSteq/YYGo3CHNFzUVKok0+Rn3fDv0MaHFc8wwmR+IHENTNsvymrmNXK8x8KcxyWvLE0/aezy3nzTirDQp18S6co0K9Xkw0pRnB1FODi6n7Qn7B3iT4A+L9T8JP8AFPwF43uNM1q80aS50NdTthMltELiHVUtriKWWPTb+1P2m0Nw0N6EGy6sraVkjesu8UMvxuIxNCplmJprDqE4YnDYnD4zC4mlVjzU6lGa9hVtOPvKM6MJqLTaXMr8PEP0JeKcpyfJc3y7jPJ8Y849tSnlWbZRmeR5rl+Mwk1Tx2DxUKc83y+c8HVfsqtbD5jVoTqqUaVSfJUdOlpX7NPwrt7C2/4SL4m6lqGpTxi4eXw5ZR2VhCj/ACi2eHXNO+2faYZEkEkqs1vLGYpIypZ400xPiG/af7BllStRUYqUq/tKc41bXlBKnCpGUUnFqSkrttWsk3jlX0Q6lPDzjxXxngcszH6xU9hRy14bG4etgLQ+r4pzxFXDVKVSrNVlKg4S5Iwg3PmlKEPhav00/ioKACgAoAKACgAoAKAOn8GeM/E/w98UaL4y8G6xd6D4k8PahbanpepWbhZIbm1mSZEmicNBd2kxQRXlhdxzWV9bNJa3cE1vLJEwB+/Hgn4m+B/2vfCfxY+MXgfwXqQ+Iw8ArN+1H4M0/Vb2B9B8GgWtne+PfCF7cyC80y7j8R2fhvTPh74uttavfD/ha51rTfCnxK8N+Evhd8PvAc8qq4fC4ylLD4rDUMRRnFKdGvQp1qU1FqVpQnGUZpNRnyyT1jda3Z6GVZ5nnD+Ow+aZFnGaZNmWEm6mEzHKcwxeXY7CVJwnTlUw+KwlalWoSlTqVKUpU6kW6dSUG+WUk/Nfh/8AD7xF4Jmh0W9vJNO1bTPin8KtA0fx3ouqp4dvNf8Ahdqnh6LR/iv8UdWivr7S9R1Gx+FevajoMfitriItceI/GHg+Dwhq9xpEmv6vq/j4zhThrG01h8VkeV1qMFzQpVMJRdNJSTsouCjFSUYtJK2mtnZH3+V+Nni9k1eti8t8SeM8LisQo/WMTTz/ADGWIrOHMo+1rVK86lRw558rlJtOTt8TPBP2ydY8X2nwb+EHifxVb6toXxU1ObwhN4ss9csRpfjLS5dH0rxp4ciufHFqIbeSbW/H83hUeNnsNYgvVstDutETTr69ttR1G5vFh+G8gwdOpRwmTZbhqdSHs5RoYSjRfs3FpRUoQjONk3ZwkmtHF3SZnmXjL4rZxicBi828ROMczrZbiFi8Iswz/McXQo11UjU5lha9eeGnGbhFVadSjOnVjeFWE4ScX8++Av2nfC+n6G9t438M6hcast9MYJPDdroy2H9ntBbeWJF1mSe5S6+1fa2kWJxaiJofKRW8wV8RjvDhyruWXZtiMPQkuaVPES9pJVXKV+WVKnSXs1D2cYqSlO8ZOU3dW/pzhr6YOX/2aocb+H+V5nm9KqqVHF5HSq4PDTwMKFBU3iKWMzLESeMlifrdSrOi6WH9nUpQp0YOEnL4nr9RP4YCgAoAKACgAoAKACgAoA/Rb/glDqniGx/bu+C+maFq6aRaeK4PiF4V8Wm+kmTw5feCNS+Gvi6bxPa+M44bmzMng2DT7NtV8QyN* Connection #0 to host xn5gb6d7e6.execute-api.eu-central-1.amazonaws.com left intact
cwNptjpjazbT217ptrdQJ/qttHvbRgfvPpXwH8QeOPiL4KtPhd4E1aXTfCXhzU/Anw6vNZvEaz8I/A7XoPEVrcPr1rf6gsvivw9pnwq8W+In+JnjHT9X13xdoHxruPCn9qxeJLnRPh9BZ17Re85NaprTS9mm0t7bcyv00tbQTi9Er9Guu/e2t+r0V7+enwX/AMFwfgP8Uvg/4j8K6UPhC3wu+Bvhi1+H1hpvijxVL4C0Dxl8W/H/AIx+H0XjaKztNAs9K8MeOPEcHwM8G6xYfCvxVqepaV4gvPDviazul8X+LtRvPEvh2WeedSb76+ez6u1r6q3W3ezGotK7vbu1a7d3f567aadD+famAUAFABQAUAFABQAUAFABQB9ifsL/ALTfh/8AZC+Pdv8AHHXfAl78RptI8A/Erwdo/hq01jT9BSWb4p+DdV+GHiO6udV1HRPEUdm1v4B8XeMBpFwmjaitt4kk0S6vtP1PSYNR0y9TV9PPXv8ALzvb5DTtr1tp210d/K1/npsfS/xf/wCCsXx48VzySfBU6v8AAa4vIdQk1TxbpfiPSdc+J93r3iDRZdB8aeL7Dx7ongnwFN4f8Y+P7GeYeMPF+i6TaeLbqGQ6JpGv6L4Uz4dJZf1+nb5W/Ky+/XfzPzG8S+KvFXjDVNS1zxd4l8QeKtZ1jV9S8QatrHiPWdR1zVNU17WWgfWNb1LUNTubq7vdX1V7W2bUtSuZZby+a3ga6mlMUe1gc7QAUAFABQAUAFABQAUAFABQA9Mc5xxjGaAJaAEbofoaAIKACgAoAKACgAoAKACgAoAKACgAyR0JFAClie9ACUAFABQAUAFABQAUAFABQAUAFABQAUAFAH//2Q==
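As a quick sanity check (a sketch; the URL is the same test endpoint), the base64 body can be decoded locally to confirm the Lambda returns a valid JPEG and only the API Gateway decoding step is missing:
curl -s "https://myurl/test/profile-picture?size=small" -H "Accept: image/jpeg" | base64 -d > /tmp/profile.jpg
file /tmp/profile.jpg   # should report: JPEG image data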
Is there something wrong?
I am not able to figure out the error :-(
I cannot create a shared domain in Cloud Foundry, and every pushed app gets a health check connection refused.
I had a working Cloud Foundry environment based on OpenStack IaaS. Everything worked as expected. I took my deployment files and, after some time, deployed them successfully on VMware vSphere 7. The problem is that every app I push has problems with its health check:
2020-10-29T16:55:01.43+0000 [CELL/0] OUT Cell 938b869c-5a68-40cc-9486-c5bc0d53a73a successfully destroyed container for instance 44e9c2a6-b54d-4fc4-4118-6d6d
2020-10-29T16:55:36.55+0000 [CELL/0] OUT Cell 938b869c-5a68-40cc-9486-c5bc0d53a73a creating container for instance 17f161a2-9788-426d-414d-6c33
2020-10-29T16:55:37.18+0000 [CELL/0] OUT Cell 938b869c-5a68-40cc-9486-c5bc0d53a73a successfully created container for instance 17f161a2-9788-426d-414d-6c33
2020-10-29T16:55:37.47+0000 [CELL/0] OUT Downloading droplet...
2020-10-29T16:55:37.75+0000 [CELL/0] OUT Downloaded droplet
2020-10-29T16:55:37.75+0000 [CELL/0] OUT Starting health monitoring of container
2020-10-29T16:56:38.45+0000 [HEALTH/0] ERR Failed to make TCP connection to port 8080: connection refused
2020-10-29T16:56:38.45+0000 [CELL/0] ERR Timed out after 1m0s: health check never passed.
2020-10-29T16:56:38.46+0000 [CELL/SSHD/0] OUT Exit status 0
2020-10-29T16:56:38.48+0000 [APP/PROC/WEB/0] OUT Exit status 143
I am also not able to create any shared domains:
bash-5.0# cf create-shared-domain tcp.cf.test-env.net --router-group default-tcp -v
REQUEST: [2020-10-29T17:03:33Z]
GET /v2/info HTTP/1.1
Host: api.cf.test-env.net
Accept: application/json
User-Agent: cf/6.47.2+d526c2cb3.2019-11-05 (go1.12.12; amd64 linux)
RESPONSE: [2020-10-29T17:03:33Z]
HTTP/1.1 200 OK
Content-Length: 561
Content-Type: application/json;charset=utf-8
Date: Thu, 29 Oct 2020 17:03:33 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 4badb79b-2faf-4623-6c3c-ce5fa3223cd5::dc43d2c9-c902-4429-9d65-d9a0060983c5
{
"api_version": "2.144.0",
"app_ssh_endpoint": "ssh.cf.test-env.net:2222",
"app_ssh_host_key_fingerprint": "ae:a3:ed:ad:37:d3:8a:7b:ed:b4:e5:d2:25:e5:8c:d0",
"app_ssh_oauth_client": "ssh-proxy",
"authorization_endpoint": "https://login.cf.test-env.net",
"build": "",
"description": "",
"doppler_logging_endpoint": "wss://doppler.cf.test-env.net:443",
"min_cli_version": null,
"min_recommended_cli_version": null,
"name": "",
"osbapi_version": "2.15",
"routing_endpoint": "https://api.cf.test-env.net/routing",
"support": "",
"token_endpoint": "https://uaa.cf.test-env.net",
"version": 0
}
REQUEST: [2020-10-29T17:03:33Z]
GET /login HTTP/1.1
Host: login.cf.test-env.net
Accept: application/json
Connection: close
User-Agent: cf/6.47.2+d526c2cb3.2019-11-05 (go1.12.12; amd64 linux)
RESPONSE: [2020-10-29T17:03:34Z]
HTTP/1.1 200 OK
Cache-Control: no-store
Content-Language: en-US
Content-Length: 384
Content-Type: application/json;charset=UTF-8
Date: Thu, 29 Oct 2020 17:03:34 GMT
Set-Cookie: X-Uaa-Csrf=NJlSPAjspn7m8oWuQdKsVD; Max-Age=86400; Expires=Fri, 30-Oct-2020 17:03:34 GMT; Path=/; Secure; HttpOnly
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 577d4d31-ec30-477e-6f44-c0dd9306270d
X-Xss-Protection: 1; mode=block
{
"app": {
"version": "74.12.0"
},
"commit_id": "7311e68",
"entityID": "login.cf.test-env.net",
"idpDefinitions": {},
"links": {
"login": "https://login.cf.test-env.net",
"passwd": "/forgot_password",
"register": "/create_account",
"uaa": "https://uaa.cf.test-env.net"
},
"prompts": {
"password": "[PRIVATE DATA HIDDEN]",
"username": [
"text",
"Email"
]
},
"timestamp": "2019-12-02T22:53:03+0000",
"zone_name": "uaa"
}
Creating shared domain tcp.cf.test-env.net as admin...
REQUEST: [2020-10-29T17:03:34Z]
GET /routing/v1/router_groups?name=default-tcp HTTP/1.1
Host: api.cf.test-env.net
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Connection: close
Content-Type: application/json
User-Agent: cf/6.47.2+d526c2cb3.2019-11-05 (go1.12.12; amd64 linux)
[application/json Content Hidden]
RESPONSE: [2020-10-29T17:03:34Z]
HTTP/1.1 200 OK
Content-Length: 114
Content-Type: application/json
Date: Thu, 29 Oct 2020 17:03:34 GMT
X-Vcap-Request-Id: 9459b068-0987-4f5e-7dee-1efdb5ca6fb8
[
{
"guid": "343ba1e8-88a7-4003-6db6-4feabedd072b",
"name": "default-tcp",
"reservable_ports": "1024-2048",
"type": "tcp"
}
]
REQUEST: [2020-10-29T17:03:34Z]
POST /v2/shared_domains HTTP/1.1
Host: api.cf.test-env.net
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: cf/6.47.2+d526c2cb3.2019-11-05 (go1.12.12; amd64 linux)
{
"internal": false,
"name": "tcp.cf.test-env.net",
"router_group_guid": "343ba1e8-88a7-4003-6db6-4feabedd072b"
}
RESPONSE: [2020-10-29T17:04:04Z]
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
Error unmarshalling the following into a cloud controller error: <html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
FAILED
I suspect a network configuration issue that blocks some internal CF components from connecting to each other. There are no firewalls or blocking rules in VMware. I can also ping and make SSH connections between the BOSH-created VMs.
Any ideas what else I can do?
The problem was with the DNAT and SNAT rules on VMware NSX-T. If any internal VM asked for the DNS name "api.cf.test-env.net", it got the remote (public) IP address as the answer. When the connection was being established, the internal VM addressed api.cf.test-env.net by its public IP address but got the local one back in the second stage of the TCP three-way handshake, which caused a TCP RST. After creating the DNAT and SNAT rules correctly, everything works as expected. I am still wondering why "api.cf.test-env.net" is not answered by bosh-dns with the internal address. Does anyone know why this is and how it can be changed?
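This would be expected if bosh-dns only answers authoritatively for BOSH-managed names and configured aliases, and forwards everything else (including the system domain) to its recursors, which return the public IP. A quick check from one of the BOSH-managed VMs (a sketch; 169.254.0.2 is the default bosh-dns listen address):
dig +short api.cf.test-env.net @169.254.0.2   # what bosh-dns answers on the VM
dig +short api.cf.test-env.net @8.8.8.8       # what a public recursor answers, for comparison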
I'm having trouble configuring Istio to go through our corporate proxy while doing TLS origination. I created a demo project that reproduces this use case and shows:
[NO-PROXY] https requests to www.wikipedia.org work
[PROXY] https requests to www.wikipedia.org work
[NO-PROXY] http requests to www.wikipedia.org work using tls origination
[PROXY] http requests to www.wikipedia.org don't work
To set up the demo project, launch istio/start.sh.
I followed this guide for the proxy
and this guide for TLS origination,
but I haven't been able to make these two features work together.
Any clues about what I am doing wrong, or whether this is not possible in Istio?
My current guess is that having the proxy configured with the TCP protocol disables the Istio features required to do TLS origination; see the sketch below.
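For reference, the proxy guide registers the corporate proxy roughly like this (a sketch; the hostname and port are placeholders), which makes the traffic to the proxy opaque TCP from the sidecar's point of view:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: corporate-proxy
spec:
  hosts:
  - proxy.corp.internal   # hypothetical proxy hostname
  ports:
  - number: 3128
    name: tcp
    protocol: TCP         # opaque TCP, so HTTP/TLS-origination policies don't apply
  resolution: DNS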
I'm also starting to play with egress gateways and will update this if it works.
Meanwhile this is what you should see with the demo project:
https no proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "curl -I https://www.wikipedia.org 2>/dev/null | head -n 1"
log istio-proxy
[2020-01-31T12:00:39.247Z] "- - -" 0 - "-" "-" 850 4413 576 - "-" "-" "-" "-" "91.198.174.192:443" outbound|443||www.wikipedia.org 10.1.21.153:33064 91.198.174.192:443 10.1.21.153:33062 www.wikipedia.org -
http no proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "curl -I http://www.wikipedia.org 2>/dev/null | head -n 1"
log istio-proxy
[2020-01-31T12:02:17.012Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 181 180 "-" "curl/7.64.0" "09dddb0e-94b2-9f52-8505-e2a790f2d0c6" "www.wikipedia.org" "91.198.174.192:443" outbound|443|tls-origination|www.wikipedia.org - 91.198.174.192:80 10.1.21.153:45598 - -
https proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "https_proxy=$PROXY curl -I https://www.wikipedia.org 2>/dev/null | head -n 1"
istio log
[2020-01-31T12:04:38.819Z] "- - -" 0 - "-" "-" 976 4429 253 - "-" "-" "-" "-" "10.1.21.154:3128" outbound|3128||proxy 10.1.21.153:41184 10.1.21.154:3128 10.1.21.153:41182 - -
squid-proxy log
1580472279.072 252 10.1.21.153 TCP_TUNNEL/200 4429 CONNECT www.wikipedia.org:443 - HIER_DIRECT/91.198.174.192 -
http proxy - doesn't work
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "http_proxy=$PROXY curl -I http://www.wikipedia.org 2>/dev/null | head -n 1"
istio log
[2020-01-31T12:06:40.069Z] "- - -" 0 - "-" "-" 136 681 88 - "-" "-" "-" "-" "10.1.21.154:3128" outbound|3128||proxy 10.1.21.153:42398 10.1.21.154:3128 10.1.21.153:42396 - -
squid-proxy log
1580472400.158 85 10.1.21.153 TCP_MISS/301 681 HEAD http://www.wikipedia.org/ - HIER_DIRECT/91.198.174.192 -
I use Wikipedia because it's clear what kind of request it is receiving by looking at the response code: I get 301 for http requests and 200 for https requests.
EDIT:
Microk8s
microk8s.kubectl version: 1.17
microk8s.istioctl version: 1.3.4
I was having the same trouble on IBM Cloud Private:
kubectl version: 1.12
istioctl version: 1.2.2
The curl command you are executing inside your pod does not have the -L flag, which follows redirects.
According to the Istio documentation:
Notice the -L flag of curl which instructs curl to follow redirects. In this case, the server returned a redirect response (301 Moved Permanently) for the HTTP request to http://edition.cnn.com/politics. The redirect response instructs the client to send an additional request, this time using HTTPS, to https://edition.cnn.com/politics. For the second request, the server returned the requested content and a 200 OK status code.
So by adding the -L flag to the curl command, we can get output that follows the redirects, like this:
$ curl -I -L http://wikipedia.org
HTTP/1.1 301 TLS Redirect
Date: Mon, 03 Feb 2020 12:04:56 GMT
Server: Varnish
X-Varnish: 107796482
X-Cache: cp3058 int
X-Cache-Status: int-front
Server-Timing: cache;desc="int-front"
Set-Cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
Set-Cookie: WMF-Last-Access-Global=03-Feb-2020;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
X-Client-IP: REDACTED
Location: https://wikipedia.org/
Content-Length: 0
Connection: keep-alive
HTTP/2 301
date: Sun, 02 Feb 2020 15:48:09 GMT
content-type: text/html; charset=iso-8859-1
content-length: 234
server: mw1333.eqiad.wmnet
location: https://www.wikipedia.org/
vary: X-Forwarded-Proto
x-ats-timestamp: 1580658489
x-varnish: 325572256 37161482
age: 73007
x-cache: cp3062 miss, cp3050 hit/75960
x-cache-status: hit-front
server-timing: cache;desc="hit-front"
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
set-cookie: WMF-Last-Access-Global=03-Feb-2020;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
x-client-ip: REDACTED
set-cookie: GeoIP=REDACTED; Path=/; secure; Domain=.wikipedia.org
HTTP/2 200
date: Mon, 03 Feb 2020 01:38:38 GMT
cache-control: s-maxage=86400, must-revalidate, max-age=3600
server: ATS/8.0.5
x-ats-timestamp: 1580693918
etag: W/"12be8-59c0633ed3519"
content-type: text/html
last-modified: Mon, 13 Jan 2020 14:22:18 GMT
backend-timing: D=320 t=1579084179579408
vary: Accept-Encoding
x-varnish: 335524839 907054142
age: 37578
x-cache: cp3062 miss, cp3050 hit/406421
x-cache-status: hit-front
server-timing: cache;desc="hit-front"
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
x-client-ip: REDACTED
accept-ranges: bytes
So there might be nothing wrong with your configuration.
Try to use the following command:
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "http_proxy=$PROXY curl -I -L http://www.wikipedia.org 2>/dev/null | head -n 1"
Hopefully this will show you whether your cluster configuration is working with your corporate proxy.
Edit:
Check your Squid configuration for HTTP access. According to the Squid documentation:
Allowing or Denying access based on defined access lists
To allow or deny a message received on an HTTP, HTTPS, or FTP port:
http_access allow|deny [!]aclname ...
NOTE on default values:
If there are no "access" lines present, the default is to deny the
request.
If none of the "access" lines cause a match, the default is the
opposite of the last line in the list. If the last line was deny, the
default is allow. Conversely, if the last line is allow, the default
will be deny. For these reasons, it is a good idea to have an "deny
all" entry at the end of your access lists to avoid potential
confusion.
This clause supports both fast and slow acl types. See
http://wiki.squid-cache.org/SquidFaq/SquidAcl for details.
In proxy.yaml you have the following:
http_access deny CONNECT !SSL_ports
It denies CONNECT to anything other than secure SSL ports.
I suggest modifying the Squid configuration snippet to match the ports/protocols you are using, as this could be the reason why HTTP requests through the proxy are not working; for example, see the sketch below.
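A hypothetical squid.conf adjustment that also permits plain HTTP and CONNECT to an additional port (the extra port number is only a placeholder for whatever the mesh actually uses):
acl SSL_ports port 443
acl SSL_ports port 8443        # hypothetical extra port used for TLS origination
acl Safe_ports port 80         # plain HTTP
acl Safe_ports port 443
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports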
Hope this helps.
I am currently setting up an Nginx server on a Google Compute Engine instance behind Google's Load Balancer/CDN combo:
Website visitor <---> CDN <---> Load Balancer <---> Nginx on Google Compute Engine
I would like to redirect the visitor from https://www.example.org/ to either https://www.example.org/de/ or https://www.example.org/en/ depending on the value of the Accept-Language HTTP header in the client's request. For this purpose, I am using the following code in the nginx.conf configuration file:
set $language_suffix "en";

if ($http_accept_language ~* "^de") {
    set $language_suffix "de";
}

location = / {
    add_header Vary "Accept-Language";
    return 303 https://www.example.org/$language_suffix/;
}
But the above config leads to a 502 error:
~> curl -I https://www.example.org/
HTTP/2 502
content-type: text/html; charset=UTF-8
referrer-policy: no-referrer
content-length: 332
date: Mon, 11 Jun 2018 09:57:55 GMT
alt-svc: clear
How can I fix this?
UPDATE:
XXX.XXX.XXX.XXX - "HEAD https://www.XXXXXXX.com/" 502 106 "curl/7.60.0" {
httpRequest: {
cacheLookup: true
remoteIp: "XXX.XXX.XXX.XXX"
requestMethod: "HEAD"
requestSize: "38"
requestUrl: "https://www.XXXXXXX.com/"
responseSize: "106"
status: 502
userAgent: "curl/7.60.0"
}
insertId: "XXXXXXXXXXXXX"
jsonPayload: {
#type: "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
statusDetails: "failed_to_pick_backend"
}
logName: "projects/crack-triode-XXXXXXXX/logs/requests"
receiveTimestamp: "2018-06-11T03:33:10.864056419Z"
resource: {
labels: {
backend_service_name: ""
forwarding_rule_name: "XXX-werbserver-ipv4-https"
project_id: "crack-triode-XXXXXXXX"
target_proxy_name: "XXX-werbserver-loadbalancer-target-proxy-2"
url_map_name: "XXX-werbserver-loadbalancer"
zone: "global"
}
type: "http_load_balancer"
}
severity: "WARNING"
spanId: "XXXXXXXXXXXXXX"
timestamp: "2018-06-11T03:33:10.088466141Z"
trace: "projects/crack-triode-XXXXXXXX/traces/XXXXXXXXXXXXXXX"
}
You have to change the health check request URI from / to something else that returns HTTP status 200. I am now using /robots.txt. The setting can be changed at:
https://console.cloud.google.com/compute/healthChecks
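If the health check is managed with gcloud rather than the console, the equivalent change is roughly this (a sketch; my-lb-health-check is a placeholder name):
gcloud compute health-checks update http my-lb-health-check --request-path=/robots.txt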
I'm trying to use Google's Admin SDK to create an orgunit using a shell script. My script is as follows:
# Obtain a token we can use to modify the organisation
auth_header=`oauth2l header --json "..." "admin.directory.orgunit"`
customer_id=...
curl -v -H "Content-Type: application/json" -X POST \
--data-binary "#google-orgunits/technical.json" \
--header "$auth_header" \
"https://www.googleapis.com/admin/directory/v1/customer/$customer_id/orgunits"
This produces the output:
* Trying 216.58.196.138...
* Connected to www.googleapis.com (216.58.196.138) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 704 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.googleapis.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: C=US,ST=California,L=Mountain View,O=Google Inc,CN=*.googleapis.com
* start date: Wed, 05 Apr 2017 17:01:30 GMT
* expire date: Wed, 28 Jun 2017 16:56:00 GMT
* issuer: C=US,O=Google Inc,CN=Google Internet Authority G2
* compression: NULL
* ALPN, server accepted to use http/1.1
> POST /admin/directory/v1/customer/.../orgunits HTTP/1.1
> Host: www.googleapis.com
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Type: application/json
> Authorization: Bearer ...
> Content-Length: 157
>
* upload completely sent off: 157 out of 157 bytes
< HTTP/1.1 401 Unauthorized
< Vary: X-Origin
< WWW-Authenticate: Bearer realm="https://accounts.google.com/", error=invalid_token
< Content-Type: application/json; charset=UTF-8
< Date: Sat, 15 Apr 2017 06:26:27 GMT
< Expires: Sat, 15 Apr 2017 06:26:27 GMT
< Cache-Control: private, max-age=0
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
< Accept-Ranges: none
< Vary: Origin,Accept-Encoding
< Transfer-Encoding: chunked
<
{
"error": {
"errors": [
{
"domain": "global",
"reason": "required",
"message": "Login Required",
"locationType": "header",
"location": "Authorization"
}
],
"code": 401,
"message": "Login Required"
}
}
There must be some problem here: I appear to be obtaining a valid token (it looks like ya29.ElouBGKFig-nXZ9uykyGoDr0hxAxG5PMJTUh3VmtAtj2SAdYEbH2Coumjp5XoaF232oVx3--2EpTyNi5NgFBNrLINJij9tGL3-64MshEXjHhvkH-1NESoxPeVAU). I've followed all of the instructions here, enabled API access, authorized my API client, everything; but it is still not working. Where have I gone wrong?
Try checking the documentation about Directory API: Authorize Requests
Every request your application sends to the Directory API must include an authorization token. The token also identifies your application to Google.
Here's the OAuth 2.0 scope information for the Directory API:
https://www.googleapis.com/auth/admin.directory.orgunit - Global scope for access to all organization unit operations.
https://www.googleapis.com/auth/admin.directory.orgunit.readonly - Scope for only retrieving organization units.
You can check the OAuth 2.0 Playground, an interactive demonstration of using OAuth 2.0 with Google (including the option to use your own client credentials). Also, there are many quickstarts that can help you properly authorize a request to the Admin SDK.
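As a quick sanity check (a sketch; ACCESS_TOKEN is a placeholder for the token produced by oauth2l), the tokeninfo endpoint shows which scopes the token actually carries and whether it has expired:
curl -s "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=ACCESS_TOKEN"
# The response should list the admin.directory.orgunit scope and a non-zero expires_in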
Hope this helps.