I am trying to explore the product API and create an API using curl commands.
I followed the documentation to create the access token, and it worked fine up to that point.
But when I try to invoke a publisher service using the command below, I get an error:
curl -v -k -H "Authorization: Bearer <access token obtained>" http://<host ip address>:9763/api/am/store/v0.14/apis
* About to connect() to <host ip address> port 9763 (#0)
* Trying <host ip address>...
* Connected to <host ip address> (<host ip address>) port 9763 (#0)
> GET /api/am/store/v0.14/apis HTTP/1.1
> User-Agent: curl/7.29.0
> Host: <host ip address>:9763
> Accept: */*
> Authorization: Bearer <access token obtained>
>
< HTTP/1.1 401 Unauthorized
< Date: Thu, 31 Jan 2019 15:03:08 GMT
< Content-Type: application/json
< Transfer-Encoding: chunked
< Server: WSO2 Carbon Server
<
* Connection #0 to host <host ip address> left intact
{"code":401,"message":"","description":"Unauthenticated request","moreInfo":"","error":[]}
I have rechecked the login credentials used when generating the client key, and the client key and secret used when generating the token. I am not sure what is going wrong. Can anyone help?
Make sure you have provided the relevant scope when generating the token. Every resource has a particular scope, as defined in https://docs.wso2.com/display/AM260/apidocs/publisher/#!/operations#APICollection#apisGet. In this case the scope is apim:api_view. In the token response, check whether you have obtained the relevant scope.
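For reference, a token request that asks for that scope might look roughly like this (a sketch only: the consumer key and secret, user credentials, and gateway host/port are placeholders, and the token endpoint may differ in your deployment):

# Request a token carrying the apim:api_view scope (password grant; all values are placeholders)
curl -k -X POST https://<host ip address>:8243/token \
  -H "Authorization: Basic <base64 of consumerKey:consumerSecret>" \
  -d "grant_type=password&username=<username>&password=<password>&scope=apim:api_view"
# The JSON response should list "scope":"apim:api_view" next to the access token.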
I am new to ESBs and Apache Synapse. I have a simple REST API that handles a GET method and returns a simple JSON response. I tried to create a proxy for it with Apache Synapse.
The configuration is given below.
<proxy name="SampeJsonProxy">
    <target>
        <endpoint>
            <address uri="http://localhost:8081/kafka/publish/hello" format="json" methods="GET"/>
        </endpoint>
        <inSequence>
            <log level="full"/>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
When I curl the REST API directly, I get the response:
curl -v http://127.0.0.1:8081/kafka/publish/hello
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET /kafka/publish/hello HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 48
< Date: Wed, 10 Feb 2021 13:48:27 GMT
<
{"name": "John", "age": 31, "city": "New York"}* Connection #0 to host 127.0.0.1 left intact
When I curl the Synapse server, I get the following, with no response body:
curl -v http://127.0.0.1:8280/kafka/publish/hello
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8280 (#0)
> GET /kafka/publish/hello HTTP/1.1
> Host: 127.0.0.1:8280
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 202 Accepted
< Date: Wed, 10 Feb 2021 13:48:18 GMT
< Server: Synapse-PT-HttpComponents-NIO
< Transfer-Encoding: chunked
<
* Connection #0 to host 127.0.0.1 left intact
The log on the Synapse server is shown below.
INFO LogMediator To: /kafka/publish/hello, MessageID: urn:uuid:46af1619-6bbb-4fe9-b00f-2ec1e7e938a3, Direction: request
I ran the configuration by editing synapse_sample_150.xml, replacing its content with the proxy above, and running it as synapse.bat -sample 150.
I don't understand why this is not working. Can someone help me understand the problem? I referred to the second example here.
I found the problem. I was hitting the wrong URL. Sending the curl request to http://127.0.0.1:8280/services/SampeJsonProxy gives the right output.
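In other words, the proxy service is exposed under the /services context, so the request looks roughly like this (host and port as in the sample setup above):

# Call the proxy service by its name under /services
curl -v http://127.0.0.1:8280/services/SampeJsonProxy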
Vue.js is running inside a Docker container served by:
CMD [ "http-server", "dist" ]
When using axios inside Vue.js mounted() to do a GET request against a Flask API, the request shows up as "blocked" in the network tab; accessing other REST APIs works fine.
Testing with curl (localhost:6000 being the Flask server):
- curl is running from my real host and connecting to the container
curl -H "Origin: http://localhost:5000"
-H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested With"
-X OPTIONS --verbose http://localhost:6000/todo/api/v1.0/wheel/40
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 6000 (#0)
> OPTIONS /todo/api/v1.0/wheel/40 HTTP/1.1
> Host: localhost:6000
> User-Agent: curl/7.58.0
> Accept: */*
> Origin: http://localhost:5000
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Allow: OPTIONS, GET, HEAD
< Access-Control-Allow-Origin: http://localhost:5000
< Access-Control-Allow-Headers: X-Requested-With
< Access-Control-Allow-Methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
< Vary: Origin
< Content-Length: 0
< Server: Werkzeug/1.0.0 Python/3.8.2
< Date: Sun, 15 Mar 2020 15:43:44 GMT
<
* Closing connection 0
From what I've read so far (for example here: 1), the headers look OK for an unauthenticated GET request.
This one gets the real data:
curl -H "Origin: http://l:5000" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -v http://localhost:6000/todo/api/v1.0/wheel/40
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 6000 (#0)
> GET /todo/api/v1.0/wheel/40 HTTP/1.1
> Host: localhost:6000
> User-Agent: curl/7.58.0
> Accept: */*
> Origin: http://l:5000
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 39
< Access-Control-Allow-Origin: http://l:5000
< Vary: Origin
< Server: Werkzeug/1.0.0 Python/3.8.2
< Date: Sun, 15 Mar 2020 15:56:43 GMT
<
{"result":{"model 1":0,"model 2":150}}
* Closing connection 0
Manipulating the -H "Origin: ..." header to this:
-H "Origin: http://l:5000"
also gets a normal reply. Isn't that a good test?
As it turns out, Mozilla blocks certain ports for certain protocols, as shown here:
https://developer.mozilla.org/en-US/docs/Mozilla/Mozilla_Port_Blocking
Port 6000 is on that list as the "x11" port, reserved for X11 and not to be used for XHR. So any port not on that list should do the trick.
The reason:
CERT issued Vulnerability Note VU#476267 for a "cross-protocol" scripting attack known as the HTML Form Protocol Attack.
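If you want to keep everything else as it is, the simplest fix is to serve the Flask API on a port that is not on the block list, for example (assuming a standard Flask app and the route from the question; port 6001 is an arbitrary unblocked choice):

# Run the API on a port browsers do not block (6001 chosen arbitrarily)
flask run --host=0.0.0.0 --port=6001
# Then point axios/curl at the new port
curl -v http://localhost:6001/todo/api/v1.0/wheel/40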
Install flask-cors and, in your app's __init__.py file, add these lines and you'll be good to go.
$ pip install flask-cors
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)
Read more about CORS here: MDN CORS
I have an ALB with a target group and an ECS cluster running a PHP API.
I am trying to query the API for a CSV response, but I am getting truncated results if the request comes through the ALB.
When I SSH into the EC2 instance running the cluster and run curl manually (going through the load balancer), the response gets truncated:
curl -sSL -D - 'https://my.domain.com/api/export?token=foobar&start_date=01-01-2015&end_date=01-01-2019' \
-H 'Content-Type: application/json' \
-H 'cache-control: no-cache' -o /dev/null
I am getting these headers:
HTTP/2 200
date: Wed, 21 Nov 2018 20:25:27 GMT
content-type: text/csv; charset=utf-8
content-length: 173019
server: nginx
content-transfer-encoding: binary
content-description: File Transfer
content-disposition: attachment;filename=export.csv
cache-control: private, must-revalidate
etag: "b90d0da7b482da96e1a478d59eedd0d16552fbfd"
strict-transport-security: max-age=2592000; includeSubDomains; preload
content-security-policy-report-only: default-src 'self';
x-frame-options: DENY
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: origin
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
If I run the same curl against the container (running locally, not through the ALB):
curl -sSL -D - 'http://localhost:32776/api/export?token=foobar&start_date=01-01-2015&end_date=01-01-2019' \
-H 'Content-Type: application/json' \
-H 'cache-control: no-cache' -o /dev/null
Response:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/csv; charset=utf-8
Content-Length: 173019
Connection: keep-alive
Content-Transfer-Encoding: binary
Content-Description: File Transfer
content-disposition: attachment;filename=export.csv
Cache-Control: private, must-revalidate
Date: Wed, 21 Nov 2018 20:36:55 GMT
ETag: "b90d0da7b482da96e1a478d59eedd0d16552fbfd"
Strict-Transport-Security: max-age=2592000; includeSubDomains; preload
Content-Security-Policy-Report-Only: default-src 'self;
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Referrer-Policy: origin
When I compare them, there is a difference in the HTTP version. I tried switching to HTTP/1.1 on the ALB, but I still get the same (or a similar) issue: curl: (18) transfer closed with 130451 bytes remaining to read.
Another difference is the Keep-Alive option. I am not sure if this is an attribute I can enable on the ALB.
When I try to return a different response (a complex, really long web page), it goes through the ALB without a problem (not truncated). According to the error message, when the ALB has HTTP/1.1 enabled the response is truncated every time after 42568 bytes.
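For reference, the HTTP version difference can also be tested from the client side by forcing curl to use HTTP/1.1 against the ALB (same placeholder URL as above):

# Force HTTP/1.1 on the client to see whether the truncation is protocol-specific
curl --http1.1 -sSL -D - 'https://my.domain.com/api/export?token=foobar&start_date=01-01-2015&end_date=01-01-2019' -o /dev/null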
Any ideas?
UPDATE
If I leave out the Content-Type header in the response, it doesn't get truncated.
return new Response($content, Response::HTTP_OK, [
    # Works without this:
    # 'Content-Type' => 'text/csv; charset=utf-8',
    'Content-Transfer-Encoding' => 'binary',
    'Content-Description' => 'File Transfer',
    'Content-Disposition' => "attachment;filename=export.csv",
    'Content-Length' => strlen($content),
]);
UPDATE 2
Changing the response Content-Type to be text/html returns the response properly.
So after some joyful debugging, I found this in the Nginx logs from the container:
nginx stderr | 2018/11/22 01:03:59 [warn] 39#39: *65 an upstream response is
buffered to a temporary file /var/tmp/nginx/fastcgi/4/01/0000000014 while reading
upstream, client: 10.1.1.163, server: _, request: "GET /api/export?
token=foobar&start_date=01-01-2015&end_date=01-01-2019 HTTP/1.1", upstream:
"fastcgi://unix:/var/run/php-fpm.sock:", host: "my.domain.com"
Which can basically be solved by baking these two lines into my nginx config:
client_body_temp_path /tmp 1 2;
fastcgi_temp_path /tmp 1 2;
Why this was happening only for CSV output will remain a mystery.
Thanks for the help!
You should enable keep-alive on your EC2 instances.
You can enable HTTP keep-alive in the web server settings for your EC2
instances.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#connection-idle-timeout
Also double-check that the Content-Length header is accurate. An incorrect size here will result in the error you are seeing.
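One quick way to check is to compare the advertised Content-Length with the number of bytes actually received, for example (using the placeholder URL from the question):

# Save headers and body separately, then compare the claimed size with the received size
curl -sSL -D headers.txt 'https://my.domain.com/api/export?token=foobar&start_date=01-01-2015&end_date=01-01-2019' -o export.csv
grep -i '^content-length' headers.txt   # size the server claims
wc -c export.csv                        # size actually received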
I have a simple API Gateway that sends the data to an HTTP endpoint (Express/Node).
For testing I'm using curl, which is great. Sending the curl request without CORS headers works like a charm; however, if I try to mimic a CORS preflight in curl, I get an HTTP 500 and have no idea why. These are both requests:
curl -v -H "X-Api-Key: myapikey" -H "Origin: example.com" "https://apigatewayid.execute-api.us-west-2.amazonaws.com/dev/path/prettyParam?anotherParam=1"
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to apigatewayid.execute-api.us-west-2.amazonaws.com (x.x.x.x) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.execute-api.us-west-2.amazonaws.com
* Server certificate: Symantec Class 3 Secure Server CA - G4
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> GET /dev/path/prettyParam?anotherParam=1 HTTP/1.1
> Host: apigatewayid.execute-api.us-west-2.amazonaws.com
> User-Agent: curl/7.51.0
> Accept: */*
> X-Api-Key: myapikey
> Origin: example.com
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 64
< Connection: keep-alive
< Date: Fri, 21 Jul 2017 00:28:50 GMT
< x-amzn-RequestId: numbers-6dab-11e7-b411-b7f8fd6c0cc3
< Access-Control-Allow-Origin: *
< X-Amzn-Trace-Id: Root=1-5morenumbersletters3e8be5c86a2c72781a0b356
< X-Cache: Miss from cloudfront
< Via: 1.1 numbersletters7a8621aabe6b30d2f5a48.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: numberslettersUk3Bs9dL4KJR4QccPmILA4tJUjO0X_h7cQc9DxA==
<
* Curl_http_done: called premature == 0
* Connection #0 to host apigatewayid.execute-api.us-west-2.amazonaws.com left intact
{"resultDataFromServer":"dataReceived!"}
curl -H "Origin: example.com" -H "X-Api-Key: myapikey" -H "Access-Control-Request-Method: GET" -H "Access-Control-Request-Headers: X-Requested-With" -X OPTIONS --verbose "https://apigatewayid.execute-api.us-west-2.amazonaws.com/dev/path/prettyParam?anotherParam=1"
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to apigatewayid.execute-api.us-west-2.amazonaws.com (x.x.x.x) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.execute-api.us-west-2.amazonaws.com
* Server certificate: Symantec Class 3 Secure Server CA - G4
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> OPTIONS /dev/path/prettyParam?anotherParam=1 HTTP/1.1
> Host: apigatewayid.execute-api.us-west-2.amazonaws.com
> User-Agent: curl/7.51.0
> Accept: */*
> X-Api-Key: myapikey
> Access-Control-Request-Method: GET
> Access-Control-Request-Headers: X-Requested-With
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: application/json
< Content-Length: 36
< Connection: keep-alive
< Date: Fri, 21 Jul 2017 00:29:07 GMT
< x-amzn-RequestId: numbers-6dab-11e7-b411-b7f8fd6c0cc3
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token
< Access-Control-Allow-Methods: GET,OPTIONS
< X-Cache: Miss from cloudfront
< Via: 1.1 numbersletters7a8621aabe6b30d2f5a48.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: numberslettersUk3Bs9dL4KJR4QccPmILA4tJUjO0X_h7cQc9DxA==
<
* Curl_http_done: called premature == 0
* Connection #0 to host apigatewayid.execute-api.us-west-2.amazonaws.com left intact
{"message": "Internal server error"}
I really don't understand what I'm doing wrong. I enabled CORS in the API Gateway, and CORS is enabled in Express as well, so I'm not sure what is going on.
#Raul, did you test your API method via API Gateway? Try deploying your API again and test it from the API Gateway console itself by providing the URL parameter. If you get the same {"message": "Internal server error"}, there is a problem with the code. Sometimes it might look like a CORS issue, but it could actually be a Lambda or backend logic error.
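It can also help to rule out API Gateway entirely by sending the same preflight directly to the Express backend (the host and port below are placeholders for wherever the Express app is reachable):

# Send the preflight straight to the Express app to see whether it handles OPTIONS itself
curl -v -X OPTIONS "http://<express-host>:<port>/path/prettyParam?anotherParam=1" \
  -H "Origin: example.com" \
  -H "Access-Control-Request-Method: GET" \
  -H "Access-Control-Request-Headers: X-Requested-With"
# A 200 here but a 500 through API Gateway points at the gateway integration rather than the backend.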
Amazon recently rolled out a new feature on CloudFront that supports custom SSL certificates at no charge using SNI (Server Name Indication).
I set up my distribution with a free Class 1 certificate from StartSSL and everything was working, until I noticed that the site would go down a short time after being deployed. Running SSL Checker reports that my certificate is working properly.
But then I would hit an error page when trying to access the site via HTTPS (it works for the first request, then goes down on subsequent attempts to connect).
Here's the verbose output when accessing with SSL (succeeds on the index):
$ curl -I -v -ssl https://wikichen.is
* Adding handle: conn: 0x7f9f82804000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f9f82804000) send_pipe: 1, recv_pipe: 0
* About to connect() to wikichen.is port 443 (#0)
* Trying 54.230.141.222...
* Connected to wikichen.is (54.230.141.222) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_RC4_128_MD5
* Server certificate: www.wikichen.is (6w984WNu7vM5OrdU)
* Server certificate: StartCom Class 1 Primary Intermediate Server CA
* Server certificate: StartCom Certification Authority
> HEAD / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: wikichen.is
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
Content-Type: text/html; charset=utf-8
< Content-Length: 1153
Content-Length: 1153
< Connection: keep-alive
Connection: keep-alive
< Date: Sun, 09 Mar 2014 16:09:54 GMT
Date: Sun, 09 Mar 2014 16:09:54 GMT
< Cache-Control: max-age=120
Cache-Control: max-age=120
< Content-Encoding: gzip
Content-Encoding: gzip
< Last-Modified: Wed, 05 Mar 2014 20:40:48 GMT
Last-Modified: Wed, 05 Mar 2014 20:40:48 GMT
< ETag: "34685bc45353d1030d3a515ddba78f3e"
ETag: "34685bc45353d1030d3a515ddba78f3e"
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
Server: AmazonS3
< Age: 4244
Age: 4244
< X-Cache: Hit from cloudfront
X-Cache: Hit from cloudfront
< Via: 1.1 4f672256eaca5524999342dc8678cdd2.cloudfront.net (CloudFront)
Via: 1.1 4f672256eaca5524999342dc8678cdd2.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: h4TEULH44TCi7m2lL42A8lO-5-Gmx8iY2M2C1AOmRlK543zFN6jCtQ==
X-Amz-Cf-Id: h4TEULH44TCi7m2lL42A8lO-5-Gmx8iY2M2C1AOmRlK543zFN6jCtQ==
<
* Connection #0 to host wikichen.is left intact
Then it fails on other pages:
$ curl -i -v https://wikichen.is/writing/index.html
* Adding handle: conn: 0x7fa153804000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fa153804000) send_pipe: 1, recv_pipe: 0
* About to connect() to wikichen.is port 443 (#0)
* Trying 54.230.140.160...
* Connected to wikichen.is (54.230.140.160) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_RC4_128_MD5
* Server certificate: www.wikichen.is (6w984WNu7vM5OrdU)
* Server certificate: StartCom Class 1 Primary Intermediate Server CA
* Server certificate: StartCom Certification Authority
> GET /writing/index.html HTTP/1.1
> User-Agent: curl/7.30.0
> Host: wikichen.is
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
HTTP/1.1 502 Bad Gateway
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 472
Content-Length: 472
< Connection: keep-alive
Connection: keep-alive
* Server CloudFront is not blacklisted
< Server: CloudFront
Server: CloudFront
< Date: Sun, 09 Mar 2014 17:54:41 GMT
Date: Sun, 09 Mar 2014 17:54:41 GMT
< Age: 6
Age: 6
< X-Cache: Error from cloudfront
X-Cache: Error from cloudfront
< Via: 1.1 9096435f28f91f92bacdf76122de09ee.cloudfront.net (CloudFront)
Via: 1.1 9096435f28f91f92bacdf76122de09ee.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: iAUOQbP8O4A0pI9KGvVz0VgBT1TW-j0yVDa7vdSvIAuxnKOyQghtnw==
X-Amz-Cf-Id: iAUOQbP8O4A0pI9KGvVz0VgBT1TW-j0yVDa7vdSvIAuxnKOyQghtnw==
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
</BODY></HTML>
<BR clear="all">
<HR noshade size="1px">
<ADDRESS>
Generated by cloudfront (CloudFront)
</ADDRESS>
* Connection #0 to host wikichen.is left intact
</BODY></HTML>%
I would love some pointers on where to start troubleshooting.
A kind rep by the name of Alastair#AWS from the AWS CloudFront forums solved this for me:
I have identified your CloudFront distribution and the S3 bucket acting as the origin for this distribution.
I can re-create and explain the intermittent '502 Bad Gateway' response you are receiving.
This response is returned by CloudFront when you attempt to access a URL using the HTTPS protocol that is not currently cached by CloudFront. The reason for this error is CloudFront is attempting to contact your origin using the HTTPS protocol, and this is failing.
The reason for this failure is you have configured your origin as an S3 bucket, but you are using the "Custom Origin" type and directing to the S3 website URL for this bucket. If you attempt to hit your S3 website URL using HTTPS, you will note this does not work. S3 website hosting only supports serving content using the HTTP protocol (http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff).
Now, the intermittent page load behavior you are seeing is due to CloudFront returning the pages it currently has in its cache. You should be able to re-create this scenario as follows:
1. Hit a page on your site using HTTPS. You should get a '502 Bad Gateway' error back.
2. Hit the same page using HTTP. You should see the page.
3. Hit the page again using HTTPS. You should now get the expected result, as CF has served the content from its cache rather than attempting to contact your origin.
To resolve this issue, please try the following:
1. Open the CloudFront Management Console and open your distribution.
2. Navigate to the Origins tab, select your origin and click "Edit".
3. Modify the "Origin Protocol Policy" to "HTTP Only".
4. Save the changes and wait about 15 minutes for the change to take effect.
5. Test.
My expectation is this will force CloudFront to contact your origin using HTTP only. I have tested this in my environment with an S3 website hosted bucket and I can successfully load content via both HTTP and HTTPS.
Here's the link to the original forum thread.
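As a quick sanity check, you can hit the S3 website endpoint directly (the bucket name and region below are placeholders): the HTTP request succeeds while the HTTPS one fails, which is exactly what CloudFront runs into.

# S3 website endpoints serve HTTP only (bucket name and region are placeholders)
curl -I http://<bucket-name>.s3-website-<region>.amazonaws.com/writing/index.html    # should return 200
curl -I https://<bucket-name>.s3-website-<region>.amazonaws.com/writing/index.html   # should fail to connect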
I had a similar issue and, as #Michael-sqlbot suggested, switched from a custom origin to S3. That did not, by itself, resolve the issue.
In addition to switching the origin, Andrew from AWS support said that aliases work better than CNAMEs. I had been using CNAMEs. When I switched to aliases (one for IPv4 and one for IPv6), it worked. Here is the Route 53 documentation for CloudFront that shows how to set up aliases for CloudFront.
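For reference, an alias record pointing a domain at a CloudFront distribution can be created with the AWS CLI roughly as sketched below (the hosted zone ID, domain, and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets; repeat with "Type": "AAAA" for the IPv6 record):

# Create an IPv4 alias record for the CloudFront distribution (placeholders as noted above)
cat > alias.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234abcd.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> --change-batch file://alias.json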
I was struggling a bit to get a proper setup with my own SSL certificate, but this article was the most helpful. Just pay attention to the details:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/tutorial-redirecting-dns-queries.html