I'm trying to host and serve webfonts (specifically, Font Awesome) for my Django project on Heroku from AWS S3, and I'm having difficulty overcoming the dreaded Firefox cross-domain font-loading issue. I've tried all the documented, accepted solutions and none of them are working for me.
The recommended solution I keep seeing is to edit the CORS configuration on my S3 bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://myapp.herokuapp.com</AllowedOrigin>
        <AllowedOrigin>https://www.myapp.herokuapp.com</AllowedOrigin>
        <AllowedOrigin>https://myapp.com</AllowedOrigin>
        <AllowedOrigin>https://www.myapp.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
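(For reference, the same rules can also be applied programmatically; this is a minimal boto3 sketch, with my bucket name as a placeholder, not the exact code I used.)

import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="my_bucket",  # placeholder
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": [
                "https://myapp.herokuapp.com",
                "https://www.myapp.herokuapp.com",
                "https://myapp.com",
                "https://www.myapp.com",
            ],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["Authorization"],
            "MaxAgeSeconds": 3000,
        }]
    },
)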
I've tried different variations of these settings and Firefox still gives me an HTTP 403 Forbidden when the page is loaded from https://www.myapp.com:
Request URL: https://my_bucket.s3.amazonaws.com/css/fontawesome-webfont-webfont.ttf
Request Method: GET
Status Code: HTTP/1.1 403 Forbidden
With that request, the response headers do include "Access-Control-Allow-Credentials: true".
Is there another CORS rule I need to declare for Firefox to accept the fonts from S3? When I curl the font file I don't see anything helpful for troubleshooting this:
> https://s3.amazonaws.com/my_bucket/font/fontawesome-webfont.eot
* About to connect() to s3.amazonaws.com port 443 (#0)
* Trying xxx.xx.xx.xxxx... connected
* Connected to s3.amazonaws.com (xxx.xx.xx.xxx) port 443 (#0)
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DES-CBC3-SHA
* Server certificate:
* subject: C=US; ST=Washington; L=Seattle; O=Amazon.com Inc.; CN=s3.amazonaws.com
* start date: 2010-10-08 00:00:00 GMT
* expire date: 2013-10-07 23:59:59 GMT
* common name: s3.amazonaws.com (matched)
* issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)09; CN=VeriSign Class 3 Secure Server CA - G2
* SSL certificate verify ok.
> GET /my_bucket/font/fontawesome-webfont.eot HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5
> Host: s3.amazonaws.com
> Accept: */*
> Origin: https://www.myapp.com
>
< HTTP/1.1 200 OK
< x-amz-id-2: XxMCWhqMsTGMMmAQnSHT/+RO7aluQSRyZ5wTAseMKM5cpavE+NkBQCuD8ykiIIDE
< x-amz-request-id: 90FF2C1C85254815
< Date: Mon, 22 Jul 2013 01:54:53 GMT
< Access-Control-Allow-Origin: https://www.myapp.com
< Access-Control-Allow-Methods: GET
< Access-Control-Max-Age: 3000
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
< Last-Modified: Mon, 22 Jul 2013 01:44:31 GMT
< ETag: "455808250694e5760bd92b3ce1f070b6"
< Accept-Ranges: bytes
< Content-Type: application/octet-stream
< Content-Length: 25395
< Server: AmazonS3
<
[binary font data]
Is there another way to set Access-Control-Allow-Origin that might get this working?
If you are restricting access to specific HTTP referrers in your bucket policy, add your bucket URL to the referer list as well. For example:
"Condition": {
"StringLike": {
"aws:Referer": [
"http://my_bucket.s3.amazonaws.com/*",
"https://my_bucket.s3.amazonaws.com/*",
"http://www.example.com/*",
"https://www.example.com/*",
]
}
}
Check the response headers in Firefox. It turns out that the referer for the font request is your CSS file, which is hosted on the S3 bucket, not on your domain.
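For example, a rough sketch of what the full policy statement could look like once the bucket's own URL is in the referer list (boto3; the bucket name and domains are placeholders, not your actual values):

import json
import boto3

bucket = "my_bucket"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetForSiteAndBucketReferers",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*",
        "Condition": {
            "StringLike": {
                "aws:Referer": [
                    "http://my_bucket.s3.amazonaws.com/*",
                    "https://my_bucket.s3.amazonaws.com/*",
                    "http://www.example.com/*",
                    "https://www.example.com/*"
                ]
            }
        }
    }]
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))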
I created a bucket in S3 and enabled static website hosting. I uploaded two HTML files:
page1.html
This is page1
page2.html
This is page2
I added the metadata x-amz-website-redirect-location = /page2.html to the page1.html object in the S3 console.
When I visit http://bucket-name.s3-website.Region.amazonaws.com/page1.html in Chrome, it does not redirect (I get the page1 content, not page2). I followed the documentation and searched around this question: https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
Thanks in advance.
[screenshot of page1 metadata]
[screenshot of my bucket settings]
Output of curl -v against the site:
$ curl -v https://aws-redirect-test.s3.ap-northeast-1.amazonaws.com/page1.html
* Trying 3.5.154.185...
* TCP_NODELAY set
* Connected to aws-redirect-test.s3.ap-northeast-1.amazonaws.com (3.5.154.185) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.s3-ap-northeast-1.amazonaws.com
* start date: Dec 9 00:00:00 2021 GMT
* expire date: Dec 2 23:59:59 2022 GMT
* subjectAltName: host "aws-redirect-test.s3.ap-northeast-1.amazonaws.com" matched cert's "*.s3.ap-northeast-1.amazonaws.com"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET /page1.html HTTP/1.1
> Host: aws-redirect-test.s3.ap-northeast-1.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< x-amz-id-2: jWBm/e0Rdb2BB3R/nFffH8/YS2+f1AgXFHQfT6bUzmMK9tMZDtSNYprUp4Ka6m9xMKookshlWwo=
< x-amz-request-id: T4JG7K11X2FTBCA8
< Date: Thu, 09 Jun 2022 02:15:03 GMT
< Last-Modified: Thu, 09 Jun 2022 02:12:42 GMT
< ETag: "a12ac1ca5226842e56871deaa4d9ef9c"
< x-amz-website-redirect-location: /page2.html
< Accept-Ranges: bytes
< Content-Type: text/html
< Server: AmazonS3
< Content-Length: 14
<
This is page1
* Connection #0 to host aws-redirect-test.s3.ap-northeast-1.amazonaws.com left intact
* Closing connection 0
You are not using the website endpoint. I tested the following URL and it works:
http://aws-redirect-test.s3-website-ap-northeast-1.amazonaws.com/page1.html
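For reference, a rough boto3 sketch of the distinction: the x-amz-website-redirect-location metadata is only honored on the website endpoint, while the REST endpoint returns the header but still serves the object. The bucket and region are taken from your output; treat the rest as a sketch, not tested code.

import boto3

bucket, region = "aws-redirect-test", "ap-northeast-1"
s3 = boto3.client("s3", region_name=region)

# Set the redirect at upload time (equivalent to the console metadata you added).
s3.put_object(
    Bucket=bucket,
    Key="page1.html",
    Body=b"This is page1",
    ContentType="text/html",
    WebsiteRedirectLocation="/page2.html",
)

# Redirects to /page2.html (website endpoint):
print("http://" + bucket + ".s3-website-" + region + ".amazonaws.com/page1.html")
# Serves the page1 content and merely echoes the header (REST endpoint):
print("https://" + bucket + ".s3." + region + ".amazonaws.com/page1.html")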
I have an API Gateway endpoint that returns 200 for me, but when it is called by a third party they get a 403.
I make the request via curl and Python requests and get a 200 for both.
Bash:
curl -X POST -v --http1.1 https://939pd1ndql.execute-api.us-east-1.amazonaws.com/default/bitbucket-events
Python:
requests.post('https://939pd1ndql.execute-api.us-east-1.amazonaws.com/default/bitbucket-events', ...)
I get a 200 response for each request.
However, when a third party calls the endpoint they get:
HTTPSConnectionPool(host='939pd1ndql.execute-api.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: /default/bitbucket-events (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 403 Forbidden',)))
The third party is Bitbucket - I am trying to create a Bitbucket app (really just a JSON payload telling Bitbucket to create a webhook).
I do not have control over how Bitbucket performs the request, and the process is fairly opaque, but I pointed it at ngrok and intercepted the request it makes:
POST /default/bitbucket-events HTTP/1.1
Host: 939pd1ndql.execute-api.us-east-1.amazonaws.com
User-Agent: python-requests/2.22.0
Content-Length: 2292
Accept: */*
Accept-Encoding: gzip, deflate
Content-Type: application/json
Sentry-Trace: 00-41043c2935294252aa25ac44716a2300-86324af91ef0493e-00
X-Forwarded-For: 104.192.142.247
X-Forwarded-Proto: https
X-Newrelic-Id: VwMGVVZSGwQJVFVXDwcPXg==
X-Newrelic-Transaction: PxQPB1daXQMHVwRWAQkDUQUIFB8EBw8RVU4aWl4JDVcDUgoEBVcLVlNXDkNKQQoBBlZRAAQHFTs=
{LOTS OF JSON HERE}
Nothing in the request that Bitbucket sends looks like it could cause this problem.
The response I get from the curl command is:
* Trying 3.84.56.177...
* TCP_NODELAY set
* Connected to 939pd1ndql.execute-api.us-east-1.amazonaws.com (3.84.56.177) port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=*.execute-api.us-east-1.amazonaws.com
* start date: Jul 22 00:00:00 2021 GMT
* expire date: Aug 20 23:59:59 2022 GMT
* subjectAltName: host "939pd1ndql.execute-api.us-east-1.amazonaws.com" matched cert's "*.execute-api.us-east-1.amazonaws.com"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> POST /default/bitbucket-events HTTP/1.1
> Host: 939pd1ndql.execute-api.us-east-1.amazonaws.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 12 Apr 2022 22:00:39 GMT
< Content-Type: application/json
< Content-Length: 0
< Connection: keep-alive
< x-amzn-RequestId: 78585bb0-5db4-4273-9333-45ef8b44952d
< Access-Control-Allow-Origin: *
< x-amz-apigw-id: QfN1IHrSoAMFrMw=
I have now reduced the API Gateway to just a mock endpoint that returns a 200 response, and I have set logging to be very verbose.
But I only see log entries for the curl and Python requests I make. The Bitbucket request does not produce a log line.
Could this mean the Bitbucket request is being rejected by AWS before my API Gateway handles it? I have no WAF enabled.
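One thing I can still check (a rough boto3 sketch, not something I have run yet) is whether a resource policy is attached to the API that could deny requests before they ever reach the integration:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")
api = apigw.get_rest_api(restApiId="939pd1ndql")

# A resource policy, if any, is returned here; no "policy" key means none is attached.
print(api.get("policy"))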
As you can tell I am running out of ideas.
I replicated your setup, but with my own API Gateway. I was able to install the app though, so I strongly suspect it is something to do with your API Gateway setup.
I am using the exact same app descriptor, with only the URL being different.
{
    "key": "codereview.doctor.staging",
    "name": "Code Review Doctor Staging",
    "description": "Target lambdas with 'staging' version alias",
    "vendor": {
        "name": "Code Review Doctor",
        "url": "https://codereview.doctor"
    },
    "baseUrl": "https://fj7987nlx3.execute-api.ap-southeast-1.amazonaws.com",
    "authentication": {
        "type": "jwt"
    },
    "lifecycle": {
        "installed": "/default/bitbucket-events",
        "uninstalled": "/default/bitbucket-events"
    },
    "modules": {
        "webhooks": [
            {
                "event": "pullrequest:created",
                "url": "/default/bitbucket-events"
            },
            {
                "event": "pullrequest:updated",
                "url": "/default/bitbucket-events"
            },
            {
                "event": "pullrequest:fulfilled",
                "url": "/default/bitbucket-events"
            }
        ]
    },
    "scopes": ["account", "repository", "pullrequest"],
    "contexts": ["account"]
}
My API GW POST configuration looks exactly like yours, so the difference may be somewhere else.
Note that I have deleted my API GW stage, so you will not be able to test using mine for now.
I am trying to upload a file to S3 with a pre-signed URL using curl.
It returns success when I run the following command:
❯ curl -v -X PUT --upload-file [file directory] '[pre-signed url]'
* Trying [port]...
* TCP_NODELAY set
* Connected to bucket-name.s3.region.amazonaws.com (ip address) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; ST=Washington; L=Seattle; O=Amazon.com, Inc.; CN=*.region.amazonaws.com
* start date: Nov 9 00:00:00 2019 GMT
* expire date: Dec 10 12:00:00 2020 GMT
* subjectAltName: host "bucket-name.s3.region.amazonaws.com" matched cert's "*.s3.region.amazonaws.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Baltimore CA-2 G2
* SSL certificate verify ok.
> PUT [pre-signed url] HTTP/1.1
> Host: bucket-name.s3.region.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Type: image/png
> Content-Length: 145701
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< x-amz-id-2: hogehoge
< x-amz-request-id: hugahuga
< Date: Mon, 17 Feb 2020 08:09:01 GMT
< ETag: "hogehuga"
< Content-Length: 0
< Server: AmazonS3
<
* Connection #0 to host bucket-name.s3.region.amazonaws.com left intact
* Closing connection 0
But when I look at S3, the file is not uploaded.
I want to know how to upload the file to S3 correctly.
[Update]
I added the x-amz-acl: bucket-owner-full-control header in curl and set <AllowedHeader>x-amz-acl</AllowedHeader> in the S3 bucket CORS configuration:
curl -v -X PUT -H 'x-amz-acl: bucket-owner-full-control' --upload-file [file directory] '[pre-signed url]'
but it returns an error:
<Error><Code>AccessDenied</Code><Message>There were headers present in the request which were not signed</Message><HeadersNotSigned>x-amz-acl</HeadersNotSigned>
Also, I notice that my pre-signed URL does not have the file name in its path. Is that a correct pre-signed URL?
My implementation to generate the pre-signed URL is like this:
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    Bucket: aws.String(bucketName),
    Key:    aws.String(key),
})
url, err := req.Presign(expires)
Do I need to add an ACL inside the PutObjectInput struct?
This issue is resolved by including the file name at the end of the S3 object path (the key) when the URL is generated.
For example (Go):
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    Bucket: aws.String(bucketName),                   // bucket name only
    Key:    aws.String("hogehoge/fugafuga/filename"), // object key ends with the file name
})
url, err := req.Presign(expires)
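For comparison, the same idea in Python with boto3 (a sketch only; the bucket, key and expiry are placeholders, and anything you plan to send as a header, such as x-amz-acl or Content-Type, must be included in Params so it is signed into the URL):

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-bucket",                    # bucket name only
        "Key": "hogehoge/fugafuga/filename.png",  # object key ends with the file name
        "ContentType": "image/png",
        # "ACL": "bucket-owner-full-control",     # only if you will send x-amz-acl
    },
    ExpiresIn=3600,
)
print(url)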
I have a very simple Lambda function that facilitates short URL redirection, like so:
var env = process.env.NODE_ENV

exports.handler = async function (event) {
    var mappings = {
        "": "https://example.com",
        "/": "https://example.com",
        "/article1": "https://example.com/articles/article-title",
        "/podcasts": "https://example.com/podcasts"
    }
    return {
        body: null,
        headers: {
            "Location": mappings[event.path] || "https://example.com/four-oh-four"
        },
        isBase64Encoded: false,
        statusCode: 301
    }
}
The URL redirects just fine for all routes except the homepage (with or without a slash). For the homepage, I get a "Missing Authentication Token" error from API Gateway (or rather CloudFront).
Curling doesn't appear to reveal anything. (I updated the curl output below; my bad, I had originally left the redirect in.)
$ curl -v https://short.url/
* Trying xxx.xx.xxx.xx...
* TCP_NODELAY set
* Connected to short.url (xxx.xx.xxx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /path/to/ca-certificates.crt
CApath: /path/to/certs
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / xxxxxxxxxxxx-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.ib.run
* start date: Apr 5 00:00:00 2019 GMT
* expire date: May 5 12:00:00 2020 GMT
* subjectAltName: host "short.url" matched cert's "short.url"
* issuer: xxx; O=xxx; OU=xxx; CN=xxx
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle xxxxxxxx)
> GET / HTTP/2
> Host: short.url
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 403
< content-type: application/json
< content-length: 42
< date: Sat, 20 Jul 2019 03:51:44 GMT
< x-amzn-requestid: xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxxx
< x-amzn-errortype: MissingAuthenticationTokenException
< x-amz-apigw-id: xxxxxxxxxxxxxx_
< x-cache: Error from cloudfront
< via: 1.1 xxxxxxxxxxxxxxxxxxxxxx.cloudfront.net (CloudFront)
< x-amz-cf-pop: xxxxx-xx
< x-amz-cf-id: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx===
<
* Connection #0 to host short.url left intact
{"message":"Missing Authentication Token"}
The response "Missing Authentication Token" is misleading.
It suggests that you need to provide an Token.
The real error is, that your routes in Api gateway are not setup properly.
So it is basically an Route not found from api-gateway.
You need to provide a Route for "/" with a method or the any method and redirect it to the Lambda function. You probably setup an subroute but no route for "/"
At the moment the curl is hitting the url "/" with the method GET and Api-Gateway does not know how to route this call so it answers with: "Missing Authentication Token".
You can reproduce this behavior with every non existent route. Try: /sdfsdfsdf for example. You will get the same error.
Setup the route and you shoud be fine.
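For example, a rough sketch (boto3) of what wiring an ANY method on the root resource to the Lambda could look like; the API ID, Lambda ARN and stage name are placeholders you would replace with your own:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")
api_id = "abc123"  # placeholder
lambda_arn = "arn:aws:lambda:us-east-1:111111111111:function:short-url"  # placeholder

# Find the root ("/") resource of the API.
resources = apigw.get_resources(restApiId=api_id)["items"]
root_id = next(r["id"] for r in resources if r["path"] == "/")

# Add an ANY method on "/" and proxy it to the Lambda.
apigw.put_method(restApiId=api_id, resourceId=root_id,
                 httpMethod="ANY", authorizationType="NONE")
apigw.put_integration(
    restApiId=api_id, resourceId=root_id, httpMethod="ANY",
    type="AWS_PROXY", integrationHttpMethod="POST",
    uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
        + lambda_arn + "/invocations",
)

# Redeploy the stage so the new route takes effect.
# (The Lambda also needs a resource-based permission allowing API Gateway to invoke it.)
apigw.create_deployment(restApiId=api_id, stageName="prod")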
I hope I could help you!
Dominik
I have a vCenter Server version 5.5. I am trying to generate a session ID to authenticate REST API requests using the following command:
curl -kv -X POST -H 'Accept: application/json' --basic -u me@abc.co.in:myPass! $VCENTER/rest/com/vmware/cis/session
where $VCENTER=https://vc
Here is the output I get:
* Hostname was NOT found in DNS cache
* Trying 1.2.3.4...
* Connected to vc (1.2.3.4) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: O=VMware, Inc.; OU=vCenterServer_2014.12.24_203443; CN=VMware default certificate; emailAddress=support@vmware.com
* start date: 2014-12-24 04:44:30 GMT
* expire date: 2024-12-22 04:44:32 GMT
* issuer: O=VMware, Inc.; OU=vCenterServer_2014.12.24_203443; CN=VC.xyz.co.in; emailAddress=support@vmware.com
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Server auth using Basic with user 'me@xyz.co.in'
> POST /rest/com/vmware/cis/session HTTP/1.1
> Authorization: Basic YWthbmtzaGFfamFpbkBwZXJzaXN0ZW50LmNvLmluOmFra2FTZXAyMDE3IQ==
> User-Agent: curl/7.35.0
> Host: pt-vc
> Accept: application/json
>
< HTTP/1.1 400 Bad Request
< Date: Thu, 13 Jul 2017 09:33:40 GMT
< Connection: close
< Content-Type: text; charset=plain
< Content-Length: 0
<
* Closing connection 0
* SSLv3, TLS alert, Client hello (1):
Looking at the output, I am not sure what is going wrong or where. Is this because my password contains an "!" that needs to be converted into its hexadecimal equivalent?
The exclamation point should be fine to pass in a password.
The issue could be due to vCenter 5.5 not having any RESTful endpoints available; the REST API was introduced in vSphere/vCenter 6.0.
Your output would look closer to the following if you were on a 6.0 or 6.5 environment:
curl -kv -X POST -H 'Accept: application/json' --basic -u user@domain.lab:VMware1! https://vcsa01.domain.lab/rest/com/vmware/cis/session
* Trying 10.159.13.52...
* Connected to vcsa01.domain.lab (10.159.13.52) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate: VCSA01
* Server auth using Basic with user 'user@domain.lab'
> POST /rest/com/vmware/cis/session HTTP/1.1
> Host: vcsa01.domain.lab
> Authorization: Basic ZWNrQGNwYnUubGFiOlZNd2FyZTEh
> User-Agent: curl/7.43.0
> Accept: application/json
>
< HTTP/1.1 200 OK
< Date: Thu, 13 Jul 2017 18:47:18 GMT
< Set-Cookie: vmware-api-session-id=37e6921e6a3905b47ba356aaad19d3d6;Path=/rest;Secure;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Content-Type: application/json
< Transfer-Encoding: chunked
<
* Connection #0 to host vcsa01.domain.lab left intact
{"value":"37e6921e6a3905b47ba356aaad19d3d6"}