How to set up Varnish caching for GraphQL in Django

Here is my django-benchmark project, in which I implemented simple REST API and GraphQL endpoints. In front of my application I put Varnish for caching. Caching works well for the REST HTTP endpoints but does not work for GraphQL. Here is my Varnish configuration. What am I doing wrong?
vcl 4.1;

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "0.0.0.0";
    .port = "8080";
}

sub vcl_hash {
    # For multi-site configurations, so sites do not cache each other's content
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Cache POST requests coming to the endpoint located at /api/graphql
    if (req.method == "POST" && req.url ~ "/api/graphql") {
        call process_graphql_headers;
    }
}

# TODO: Find a way to cache GraphQL requests (see the sketch below).
sub process_graphql_headers {
}

sub vcl_recv {
    # # Bypass authenticated requests; these should not be cached by default
    # if (req.http.Authorization ~ "^Bearer") {
    #     return (pass);
    # }
}

sub vcl_backend_response {
}

sub vcl_deliver {
}
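From what I have read, Varnish only caches GET and HEAD out of the box, so a POST to /api/graphql is passed straight to the backend regardless of what vcl_hash does; caching the POST body seems to require the bodyaccess vmod from varnish-modules. A minimal sketch of the direction I am exploring (the 500KB body limit is an arbitrary choice, and the csrftoken Set-Cookie visible in the response below would still block caching):

import std;
import bodyaccess;

sub vcl_recv {
    unset req.http.X-Body-Len;
    if (req.method == "POST" && req.url ~ "/api/graphql") {
        # Buffer the request body so it can be hashed (arbitrary 500KB cap).
        std.cache_req_body(500KB);
        set req.http.X-Body-Len = bodyaccess.len_req_body();
        # Force a cache lookup even though this is a POST.
        return (hash);
    }
}

sub vcl_hash {
    if (req.http.X-Body-Len) {
        # Include the GraphQL query body in the cache key.
        bodyaccess.hash_req_body();
    }
}

sub vcl_backend_fetch {
    if (bereq.http.X-Body-Len) {
        # Varnish turns cache-miss fetches into GET; restore the method.
        set bereq.method = "POST";
    }
}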
Application Setup
Set up the project and seed the database.
$ git clone https://github.com/ldynia/django-benchmark.git
$ cd django-benchmark/
$ docker-compose up -d
$ docker exec -it django-benchmark python manage.py seed 100
Application Testing
Query the GraphQL endpoint directly (port 8080), or pass through Varnish first (port 8888).
# Query graphql endpoint directly
$ curl 'http://localhost:8080/api/graphql/' \
-X 'POST' \
-H 'Content-Type: application/json' \
--data-raw '{"query":"query { allDummy { results { id } }}"}'
# Query graphql endpoint hitting varnish first
$ curl 'http://localhost:8888/api/graphql/' \
-X 'POST' \
-H 'Content-Type: application/json' \
--data-raw '{"query":"query { allDummy { results { id } }}"}'
Response
{
  "data": {
    "allDummy": {
      "results": [
        { "id": "1" },
        { "id": "2" },
        { "id": "3" },
        ...
      ]
    }
  }
}
HTTP Request & Response Headers
$ curl 'http://localhost:8888/api/graphql/' -X 'POST' -H 'Content-Type: application/json' --data-raw '{"query":"query { allDummy { results { id } }}"}' -v > /dev/null
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:8888...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8888 (#0)
> POST /api/graphql/ HTTP/1.1
> Host: localhost:8888
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 48
>
} [48 bytes data]
* upload completely sent off: 48 out of 48 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Mon, 08 Mar 2021 13:45:42 GMT
< Server: WSGIServer/0.2 CPython/3.7.9
< Content-Type: application/json
< Vary: Cookie
< X-Frame-Options: DENY
< Content-Length: 17128
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Set-Cookie: csrftoken=86bgA2o83BavIOTq7Wf59pXxZPeJ65byTMt286UbyKfPSo9O1uefGw8gMP99plbL; expires=Mon, 07 Mar 2022 13:45:42 GMT; Max-Age=31449600; Path=/; SameSite=Lax
< X-Varnish: 32791
< Age: 0
< Via: 1.1 varnish (Varnish/6.4)
< Accept-Ranges: bytes
< Connection: keep-alive
<
{ [17128 bytes data]
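In the output above, Age: 0 and a single X-Varnish transaction ID indicate the response came from the Django backend, not the cache (a hit shows a growing Age and two IDs in X-Varnish). A quick, rough way I check for hits (same query as above; run it twice and compare):

$ curl -s -D - -o /dev/null 'http://localhost:8888/api/graphql/' \
  -H 'Content-Type: application/json' \
  --data-raw '{"query":"query { allDummy { results { id } }}"}' | grep -iE '^(age|x-varnish)'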

Related

API Gateway Lambda integration does not serve binary file

I have a REST API gateway and a Lambda fetching an image from S3 after doing some checks.
I noticed that serving binary files is not so easy, and found a lot of reports about this topic on SO and other sites.
What I understand and have already configured is the following:
my Lambda returns a JSON object with isBase64Encoded set
return {
    statusCode: 200,
    headers: {
        "Content-Type": mimeType, // mimeType here is image/jpeg
    },
    body: data.toString("base64"), // data is a Buffer;
    // I also tried data.toString("binary") and data.toString()
    isBase64Encoded: true,
}
In my API Gateway I set the binary media types as follows:
Resources:
  MyApiGateway:
    Type: AWS::ApiGateway::RestApi
    Properties:
      ...
      BinaryMediaTypes:
        - "application/octet"
        - "image/jpeg"
        - "image/png"
        - "image/gif"
        - "image/*"
I can also see these settings in the AWS Console.
When I now query my API Gateway, I only receive a base64-encoded image.
❯ curl https://myurl/test/profile-picture?size=small -H "Accept: image/jpeg" -v
* TCP_NODELAY set
* Connected to xxx port 443 (#0)
...
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x558f76b7fea0)
> GET /test/profile-picture?size=small HTTP/2
> Host: xxx
> user-agent: curl/7.68.0
> accept: image/jpeg
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< date: Sun, 28 Feb 2021 21:37:04 GMT
< content-type: image/jpeg
< content-length: 3940
< x-amzn-requestid: bc54ee15-5c41-4828-a7b2-c4978af07176
< x-amzn-trace-id: Root=1-603c0d00-2f468936671986a95532641e;Sampled=0
<
/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAEBAQEBAQEBAQEBAQEBAQEBAQEB... (base64-encoded JPEG, truncated) ...UAFAH//2Q==
* Connection #0 to host xn5gb6d7e6.execute-api.eu-central-1.amazonaws.com left intact
Is there something wrong?
I am not able to figure the error out :-(

Why is the endpoint returning the error below while processing the request?

Details: I have added the Data Mapper in-process module to my WSO2 project. But when I send the request JSON from the command prompt to my back-end service, I get the error below from the endpoint.
In the console window of Integration Studio:
Details: From the logs below, I can say it passes through a log module just before the endpoint.
[2020-02-18 15:25:14,521] INFO {org.apache.synapse.mediators.builtin.LogMediator} - message = Routing to clemency medical center
[2020-02-18 15:46:22,301] INFO {org.apache.synapse.mediators.builtin.LogMediator} - message = Routing to clemency medical center
In the command prompt I get this error:
F:\WS02\WSO2 Integration Studio\Request_JSON\HelathCare\Transforming Message Content>curl -v -X POST --data @request.json http://localhost:8280/healthcare/categories/surgery/reserve --header "Content-Type:application/json"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8280 (#0)
> POST /healthcare/categories/surgery/reserve HTTP/1.1
> Host: localhost:8280
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Type:application/json
> Content-Length: 200
>
* upload completely sent off: 200 out of 200 bytes
< HTTP/1.1 500 Internal Server Error
< Accept-Ranges: none
< Access-Control-Allow-Methods: POST
< Set-Cookie: SERVERID=s0; path=/
< Access-Control-Allow-Headers: content-type
< Content-Type: application/octet-stream
< Via: HTTP/1.1 forward.http.proxy:8080
< Date: Tue, 18 Feb 2020 10:16:27 GMT
< Transfer-Encoding: chunked
<
Error in executing request: POST /clemency/categories/surgery/reserve
* Connection #0 to host localhost left intact
Below are the request and response JSON content I have used.
Request content (the client sends content in the format below):
{
  "name": "John Doe",
  "dob": "1940-03-19",
  "ssn": "234-23-525",
  "address": "California",
  "phone": "8770586755",
  "email": "johndoe@gmail.com",
  "doctor": "thomas collins",
  "hospital": "grand oak community hospital"
}
The response we expect from the back-end service via the Data Mapper:
{
  "patient": {
    "name": "John Doe",
    "dob": "1990-03-19",
    "ssn": "234-23-525",
    "address": "California",
    "phone": "8770586755",
    "email": "johndoe@gmail.com"
  },
  "doctor": "thomas collins",
  "hospital": "grand oak community hospital"
}
When using the Data Mapper approach, be sure about the input and output schemas you provide, and after successfully mapping input and output for the XML-to-JSON conversion (or vice versa), check the Data Mapper's properties to make sure the input and output types are set as you require.
By default both are set to XML (see the illustrative snippet below).
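For illustration only, the mediator in the resulting Synapse configuration should end up with JSON on both sides; the registry paths here are placeholders, not from your project:

<datamapper config="gov:datamapper/RequestMapping.dmc"
            inputSchema="gov:datamapper/RequestMapping_inputSchema.json"
            inputType="JSON"
            outputSchema="gov:datamapper/RequestMapping_outputSchema.json"
            outputType="JSON"/>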

Istio access external sites using TLS origination behind corporate proxy

I'm having trouble configuring Istio to go through our corporate proxy while doing TLS origination. I created a demo project that reproduces this use case and shows:
[NO-PROXY] https requests to www.wikipedia.org work
[PROXY] https requests to www.wikipedia.org work
[NO-PROXY] http requests to www.wikipedia.org work using tls origination
[PROXY] http requests to www.wikipedia.org don't work
To set up the demo project, launch istio/start.sh.
I followed this guide for the proxy
and this guide for TLS origination,
but I haven't been able to make these two features work together.
Any clues as to what I've been doing wrong, or whether this is not possible in Istio?
My current guess is that having the proxy configured with the TCP protocol disables the Istio features required to do TLS origination.
I'm also starting to play with egress gateways and will update this if it works.
Meanwhile this is what you should see with the demo project:
https no proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "curl -I https://www.wikipedia.org 2>/dev/null | head -n 1"
log istio-proxy
[2020-01-31T12:00:39.247Z] "- - -" 0 - "-" "-" 850 4413 576 - "-" "-" "-" "-" "91.198.174.192:443" outbound|443||www.wikipedia.org 10.1.21.153:33064 91.198.174.192:443 10.1.21.153:33062 www.wikipedia.org -
http no proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "curl -I http://www.wikipedia.org 2>/dev/null | head -n 1"
log istio-proxy
[2020-01-31T12:02:17.012Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 181 180 "-" "curl/7.64.0" "09dddb0e-94b2-9f52-8505-e2a790f2d0c6" "www.wikipedia.org" "91.198.174.192:443" outbound|443|tls-origination|www.wikipedia.org - 91.198.174.192:80 10.1.21.153:45598 - -
https proxy - works
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "https_proxy=$PROXY curl -I https://www.wikipedia.org 2>/dev/null | head -n 1"
istio log
[2020-01-31T12:04:38.819Z] "- - -" 0 - "-" "-" 976 4429 253 - "-" "-" "-" "-" "10.1.21.154:3128" outbound|3128||proxy 10.1.21.153:41184 10.1.21.154:3128 10.1.21.153:41182 - -
squid-proxy log
1580472279.072 252 10.1.21.153 TCP_TUNNEL/200 4429 CONNECT www.wikipedia.org:443 - HIER_DIRECT/91.198.174.192 -
http proxy - won't work
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "http_proxy=$PROXY curl -I http://www.wikipedia.org 2>/dev/null | head -n 1"
istio log
[2020-01-31T12:06:40.069Z] "- - -" 0 - "-" "-" 136 681 88 - "-" "-" "-" "-" "10.1.21.154:3128" outbound|3128||proxy 10.1.21.153:42398 10.1.21.154:3128 10.1.21.153:42396 - -
squid-proxy log
1580472400.158 85 10.1.21.153 TCP_MISS/301 681 HEAD http://www.wikipedia.org/ - HIER_DIRECT/91.198.174.192 -
I use Wikipedia because it's clear what kind of request it's receiving by looking at the response code: I get 301 for HTTP requests and 200 for HTTPS requests.
EDIT:
Microk8s
microk8s.kubectl version: 1.17
microk8s.istioctl version: 1.3.4
I was having the same troubles on IBM Cloud Private
kubectl version: 1.12
istioctl version: 1.2.2
The curl command you are executing inside your pod does not have the -L flag that follows redirects.
According to the Istio documentation:
Notice the -L flag of curl which instructs curl to follow redirects. In this case, the server returned a redirect response (301 Moved Permanently) for the HTTP request to http://edition.cnn.com/politics. The redirect response instructs the client to send an additional request, this time using HTTPS, to https://edition.cnn.com/politics. For the second request, the server returned the requested content and a 200 OK status code.
So by adding the -L flag to the curl command we can get output that follows redirections, like this:
$ curl -I -L http://wikipedia.org
HTTP/1.1 301 TLS Redirect
Date: Mon, 03 Feb 2020 12:04:56 GMT
Server: Varnish
X-Varnish: 107796482
X-Cache: cp3058 int
X-Cache-Status: int-front
Server-Timing: cache;desc="int-front"
Set-Cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
Set-Cookie: WMF-Last-Access-Global=03-Feb-2020;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
X-Client-IP: REDACTED
Location: https://wikipedia.org/
Content-Length: 0
Connection: keep-alive
HTTP/2 301
date: Sun, 02 Feb 2020 15:48:09 GMT
content-type: text/html; charset=iso-8859-1
content-length: 234
server: mw1333.eqiad.wmnet
location: https://www.wikipedia.org/
vary: X-Forwarded-Proto
x-ats-timestamp: 1580658489
x-varnish: 325572256 37161482
age: 73007
x-cache: cp3062 miss, cp3050 hit/75960
x-cache-status: hit-front
server-timing: cache;desc="hit-front"
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
set-cookie: WMF-Last-Access-Global=03-Feb-2020;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
x-client-ip: REDACTED
set-cookie: GeoIP=REDACTED; Path=/; secure; Domain=.wikipedia.org
HTTP/2 200
date: Mon, 03 Feb 2020 01:38:38 GMT
cache-control: s-maxage=86400, must-revalidate, max-age=3600
server: ATS/8.0.5
x-ats-timestamp: 1580693918
etag: W/"12be8-59c0633ed3519"
content-type: text/html
last-modified: Mon, 13 Jan 2020 14:22:18 GMT
backend-timing: D=320 t=1579084179579408
vary: Accept-Encoding
x-varnish: 335524839 907054142
age: 37578
x-cache: cp3062 miss, cp3050 hit/406421
x-cache-status: hit-front
server-timing: cache;desc="hit-front"
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=03-Feb-2020;Path=/;HttpOnly;secure;Expires=Fri, 06 Mar 2020 12:00:00 GMT
x-client-ip: REDACTED
accept-ranges: bytes
So there might be nothing wrong with your configuration.
Try to use the following command:
microk8s.kubectl exec -it $(microk8s.kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- sh -c "http_proxy=$PROXY curl -I -L http://www.wikipedia.org 2>/dev/null | head -n 1"
Hopefully this will show you whether your cluster configuration is working with your corporate proxy.
Edit:
Check your Squid configuration for HTTP access. According to the Squid documentation:
Allowing or Denying access based on defined access lists
To allow or deny a message received on an HTTP, HTTPS, or FTP port:
http_access allow|deny [!]aclname ...
NOTE on default values:
If there are no "access" lines present, the default is to deny the
request.
If none of the "access" lines cause a match, the default is the
opposite of the last line in the list. If the last line was deny, the
default is allow. Conversely, if the last line is allow, the default
will be deny. For these reasons, it is a good idea to have an "deny
all" entry at the end of your access lists to avoid potential
confusion.
This clause supports both fast and slow acl types. See
http://wiki.squid-cache.org/SquidFaq/SquidAcl for details.
In proxy.yaml you have the following:
http_access deny CONNECT !SSL_ports
It denies CONNECT to ports other than the secure SSL ports.
I suggest modifying the Squid configuration snippet to match the ports/protocols you are using (see the illustrative ACLs below), as this could be the reason why HTTP requests through the proxy are not working.
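For illustration only (not taken from your proxy.yaml), the default-style Squid ACLs around that line look roughly like this; widening SSL_ports or Safe_ports is where I would experiment:

# illustrative default-style Squid ACLs, not your actual proxy.yaml
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
# the line quoted above; e.g. "acl SSL_ports port 80" would relax it
http_access deny CONNECT !SSL_ports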
Hope this helps.

Why does the APIMCLI import process give me 403 Forbidden?

I am getting 403 Forbidden when I try to import an API into my distributed APIM instance.
I have 4 VMs running CentOS 7 and JDK 8:
1st - PostgreSQL
2nd - WSO2 IS with Key manager
3rd - WSO2 APIMMANAGER (2 instances - APIMStore and APIMPublisher)
4th - WSO2 APIMWORKER (2 instances - APIMGateway and APIMTrafficManager)
1. After starting all servers OK, I create an 'env' for APIMCLI as follows:
apimcli add-env -n apimm_hml --registration https://apimmanager:9444/client-registration/v0.14/register --apim https://apimmanager:9444 --token https://apimmanager:8244/token --import-export https://apimmanager:9444/api-import-export-2.6.0-v2 --admin https://apimmanager:9444/api/am/admin/v0.14 --api_list https://apimmanager:9444/api/am/publisher/v0.14/apis --app_list https://apimmanager:9444/api/am/store/v0.14/applications
2. I added my exported APIs to ~/.wso2apimcli/exported/apis
3. I get the token OK from APIM:
curl -X POST -c cookies http://apimmanager:9764/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin' -k -v
* About to connect() to apimmanager port 9764 (#0)
* Trying 10.61.1.68...
* Connected to apimmanager (10.61.1.68) port 9764 (#0)
> POST /publisher/site/blocks/user/login/ajax/login.jag HTTP/1.1
> User-Agent: curl/7.29.0
> Host: apimmanager:9764
> Accept: */*
> Content-Length: 42
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 200 OK
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-store, no-cache, must-revalidate, private
* Added cookie JSESSIONID="E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A" for domain apimmanager, path /publisher, expire 0
< Set-Cookie: JSESSIONID=E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A; Path=/publisher; HttpOnly
< Content-Type: application/json;charset=UTF-8
< Content-Length: 17
< Date: Mon, 21 Oct 2019 14:32:47 GMT
< Server: WSO2 Carbon Server
<
* Connection #0 to host apimmanager left intact
{"error" : false}
4. I get 403 Forbidden after trying to import an API:
apimcli import-api -f APIM_ABC_v1.0.zip -e apimm_hml -u admin -p admin -k --preserve-provider=false --verbose
[INFO]: Insecure: true
[INFO]: import-api called
[INFO]: Environment: 'apimm_hml'
[INFO]: Import URL: https://apimmanager:9444/api-import-export-2.6.0-v2/import-api?preserveProvider=false
[INFO]: Source Environment: ConsentimentoService_v1.0.zip
ZipFilePath: /home/centos/.wso2apimcli/exported/apis/ConsentimentoService_v1.0.zip
Error importing API.
Status: 403 Forbidden
Error importing API
[ERROR]: 403 Forbidden
From the APIMPublisher log file I get:
WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:<anonymous>, ip:10.61.1.68, method:POST, uri:/api-import-export-2.6.0-v2/import-api, error:required token is missing from the request) {org.owasp.csrfguard.log.JavaLogger}
After the creation of the environment (i.e., step 1), try to log in to the environment using the command below:
apimcli login <environment> -u <username> -p <password>
In your case:
apimcli login apimm_hml -u admin -p admin -k
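Once the login succeeds, retrying your import command from step 4 unchanged should get past the CSRF check (both commands are taken from the question):

apimcli login apimm_hml -u admin -p admin -k
apimcli import-api -f APIM_ABC_v1.0.zip -e apimm_hml -k --preserve-provider=false --verbose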

401 When trying to create an orgunit using Google API

I'm trying to use Google's Admin SDK to create an orgunit using a shell script. My script is as follows:
# Obtain a token we can use to modify the organisation
auth_header=`oauth2l header --json "..." "admin.directory.orgunit"`
customer_id=...
curl -v -H "Content-Type: application/json" -X POST \
--data-binary "@google-orgunits/technical.json" \
--header "$auth_header" \
"https://www.googleapis.com/admin/directory/v1/customer/$customer_id/orgunits"
This produces the output:
* Trying 216.58.196.138...
* Connected to www.googleapis.com (216.58.196.138) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 704 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.googleapis.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: C=US,ST=California,L=Mountain View,O=Google Inc,CN=*.googleapis.com
* start date: Wed, 05 Apr 2017 17:01:30 GMT
* expire date: Wed, 28 Jun 2017 16:56:00 GMT
* issuer: C=US,O=Google Inc,CN=Google Internet Authority G2
* compression: NULL
* ALPN, server accepted to use http/1.1
> POST /admin/directory/v1/customer/.../orgunits HTTP/1.1
> Host: www.googleapis.com
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Type: application/json
> Authorization: Bearer ...
> Content-Length: 157
>
* upload completely sent off: 157 out of 157 bytes
< HTTP/1.1 401 Unauthorized
< Vary: X-Origin
< WWW-Authenticate: Bearer realm="https://accounts.google.com/", error=invalid_token
< Content-Type: application/json; charset=UTF-8
< Date: Sat, 15 Apr 2017 06:26:27 GMT
< Expires: Sat, 15 Apr 2017 06:26:27 GMT
< Cache-Control: private, max-age=0
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
< Accept-Ranges: none
< Vary: Origin,Accept-Encoding
< Transfer-Encoding: chunked
<
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "Login Required",
        "locationType": "header",
        "location": "Authorization"
      }
    ],
    "code": 401,
    "message": "Login Required"
  }
}
There must be some problem here: I appear to be obtaining a valid token (it looks like ya29.ElouBGKFig-nXZ9uykyGoDr0hxAxG5PMJTUh3VmtAtj2SAdYEbH2Coumjp5XoaF232oVx3--2EpTyNi5NgFBNrLINJij9tGL3-64MshEXjHhvkH-1NESoxPeVAU). I've followed all of the instructions here, enabled API access, authorized my API client, everything; but it's still not working. Where have I gone wrong?
Try checking the documentation about Directory API: Authorize Requests
Every request your application sends to the Directory API must include an authorization token. The token also identifies your application to Google.
Here's the OAuth 2.0 scope information for the Directory API:
https://www.googleapis.com/auth/admin.directory.orgunit - Global scope for access to all organization unit operations.
https://www.googleapis.com/auth/admin.directory.orgunit.readonly - Scope for only retrieving organization units.
You can check the OAuth 2.0 Playground, an interactive demonstration of using OAuth 2.0 with Google (including the option to use your own client credentials). Also, there are many quickstarts that can help you properly authorize a request for the Admin SDK.
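As a quick check, you could pass the full scope URL from the list above to oauth2l instead of the short name (assuming your oauth2l version accepts either form; everything else is copied from your script):

auth_header=`oauth2l header --json "..." "https://www.googleapis.com/auth/admin.directory.orgunit"`
customer_id=...
curl -v -H "Content-Type: application/json" -X POST \
  --data-binary "@google-orgunits/technical.json" \
  --header "$auth_header" \
  "https://www.googleapis.com/admin/directory/v1/customer/$customer_id/orgunits"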
Hope this helps.