Kibana won't connect to Elasticsearch on Amazon's Elasticsearch Service - amazon-web-services

After switching from hosting my own Elasticsearch cluster to Amazon's Elasticsearch Service,
my Kibana dashboards (versions 4.0.2 and 4.1.2) won't load, and I'm receiving the following error in kibana.log:
{
  "name": "Kibana",
  "hostname": "logs.example.co",
  "pid": 8037,
  "level": 60,
  "err": {
    "message": "Not Found",
    "name": "Error",
    "stack": "Error: Not Found\n at respond (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:235:15)\n at checkRespForFailure (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:203:7)\n at HttpConnector.<anonymous> (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/connectors\/http.js:156:7)\n at IncomingMessage.bound (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/node_modules\/lodash-node\/modern\/internals\/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"
  },
  "msg": "",
  "time": "2015-10-14T20:48:40.169Z",
  "v": 0
}
Unfortunately, this error is not very helpful. I assume it's a wrapped HTTP 404, but a 404 for what?
How can I connect a Kibana install to Amazon's Elasticsearch Service?

Here are some things to keep in mind when using Amazon's Elasticsearch Service (an example kibana.yml that pulls these together follows the notes below):
Modifications to the access policies take a non-deterministic amount of time to apply. After making policy changes, I've found it's good to wait at least 15 minutes after the domain's status is no longer "processing".
It listens on port 80 for HTTP requests and not the standard port 9200. Be sure that your elasticsearch_url configuration directive reflects this, e.g.:
elasticsearch_url: "http://es.example.co:80"
It's very likely that your Kibana instance does not have the necessary permissions to create the index it needs to show you a dashboard -- this is close to the root of the issue. Check the indices on the Elasticsearch domain (e.g. via http://es.example.co/_cat/indices) and look for one that matches your kibana_index config directive.
For instance, if your kibana_index directive is set to .kibana-4 and you don't see a line like the following:
green open .kibana-4 1 1 3 2 30.3kb 17.2kb
then your Kibana instance was not able to create the index it needs. If you go to the dashboard for the Elasticsearch service in the AWS console and click on the Kibana link, it will likely create a .kibana-4 index for you.
You can specify this index in your existing Kibana's configuration, at which point you will likely run into the next issue.
Requests from your existing Kibana install will likely be rejected for missing authentication, with an error like:
Kibana: Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header.
You can configure request signing for Kibana; see Amazon's general documentation on signing API requests for more help.
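Putting those notes together, a kibana.yml for this setup might look roughly like the following sketch (the hostnames and index name are placeholders, not values from the original setup):

# kibana.yml (sketch; adjust the hostnames and the index to your own domain)
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://es.example.co:80"
kibana_index: ".kibana-4"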
It's worth noting that the error messaging is reportedly better in Kibana 4.2, but as that's still in beta and Amazon's Elasticsearch Service was only recently released, the above should be helpful in debugging.

Related

GCP CDN - server requests return a 404

I followed the guide to set up an external global, App Engine based load balancer. I linked it to Google's CDN by ticking the little box in the LB configuration settings.
Now, when I load my domain name, it says CANNOT GET /. The request returns a 404, along with some CSP error messages: The page’s settings blocked the loading of a resource at inline (“default-src”).
It was working well before adding the CDN. So, I'm assuming my app server configuration is fine.
In the Load Balancer details, there is a little chart under the Monitoring section, with how traffic flows.
It shows traffic coming from the 3 global regions, going to the frontend of the LB, then to / (unknown) and / (unmatched) as URL Rule, then to the backend service I defined, and finally to a backend instance labelled NO_BACKEND_SELECTED.
I'm guessing the issue comes from either the URL Rule or the Backend Instance, but there is little in the docs to help troubleshoot.
I followed the docs to set up the LB. Settings are pretty simple using App Engine, so there is little room for error. But I may still have missed something.
In the 'create serverless NEG' step, I did select App Engine, with default as the service name (although I'm not sure what default actually refers to).
Any idea what's missing?
EDIT :
So, in the load balancing menu, I go to the 'Backends' section at the top and select my backend. Here I have the list of 'General properties' of my backend. Except, under 'Backends', it says the following: "Backends contain instance groups of VMs or network endpoint groups. This backend service has no backends yet", followed by an edit link.
From there, I can click the edit link, which redirects me to the 'Backend service edit' menu. I DO have a backend selected in there. I did create a serverless NEG using App Engine.
So, what's missing? Is there anything wrong with Google's serverless backend?
I want to help you with the issue you are facing.
If the responses from your external backend are not being cached by Cloud CDN, ensure that:
- You have enabled Cloud CDN on the backend service containing the NEG that points to your external backend by setting enableCDN to true (done, as per your description; a gcloud sketch follows this list).
- Responses served by your external backend meet Cloud CDN caching requirements. For example, you are sending Cache-Control: public, max-age=3600 response headers from the origin.
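For the first point, a minimal sketch of enabling Cloud CDN from the command line (the backend service name below is a placeholder):

# Hypothetical backend service name; this sets enableCDN to true on the LB's backend service
gcloud compute backend-services update my-backend-service --global --enable-cdn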
The current implementation of Cloud CDN stores responses in cache if all of the following are true:
- Served by: a backend service, backend bucket, or an external backend with Cloud CDN enabled.
- In response to: a GET request.
- Status code: 200, 203, 204, 206, 300, 301, 302, 307, 308, 404, 405, 410, 421, 451, or 501.
- Freshness: the response has a Cache-Control header with a max-age or s-maxage directive, or an Expires header with a timestamp in the future. For cacheable responses without an age (for example, with no-cache), the public directive must be explicitly provided. With the CACHE_ALL_STATIC cache mode, if no freshness directives are present, a successful response with a static content type is still eligible for caching. With the FORCE_CACHE_ALL cache mode, any successful response is eligible for caching. If negative caching is enabled and the status code matches one for which negative caching specifies a TTL, the response is eligible for caching, even without explicit freshness directives.
- Content: contains a valid Content-Length, Content-Range, or Transfer-Encoding: chunked header. For example, a Content-Length header that correctly matches the size of the response.
- Size: less than or equal to the maximum size. For responses with sizes between 10 MB and 5 TB, see the additional cacheability constraints described in byte range requests.
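As one way to satisfy the freshness requirement above, the origin has to send explicit caching headers. A minimal sketch for a Node.js/Express origin (an illustration under that assumption, not your actual App Engine service):

// Hypothetical Express origin: mark responses as publicly cacheable for one hour
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.set("Cache-Control", "public, max-age=3600");
  res.send("Hello from the origin");
});

app.listen(process.env.PORT || 8080);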
Please validate the URL mapping too.
This is an example for reference; adapt it to your project.
Create a YAML file /tmp/http-lb.yaml, making sure to substitute PROJECT_ID with your project ID.
When a user requests path /*, the path gets rewritten in the backend to the actual location of the content, which is /love-to-fetch/*.
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/cats
hostRules:
- hosts:
  - '*'
  pathMatcher: path-matcher-1
name: http-lb
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/cats
  name: path-matcher-1
  pathRules:
  - paths:
    - /*
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /love-to-fetch/
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/dogs
tests:
- description: Test routing to backend bucket, dogs
  host: example.com
  path: /love-to-fetch/test
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendBuckets/dogs
Validate the URL map.
gcloud compute url-maps validate --source /tmp/http-lb.yaml
If the tests pass and the command outputs a success message, save the changes to the URL map.
Update the URL map.
gcloud compute url-maps import http-lb \
    --source /tmp/http-lb.yaml \
    --global
For more details, see the Using URL maps documentation.

Custom domain (with SSL) for Cloud Functions [duplicate]

I can't see any option anywhere to set up a custom domain for my Google Cloud Function when using HTTP Triggers. Seems like a fairly major omission. Is there any way to use a custom domain instead of their location-project.cloudfunctions.net domain or some workaround to the same effect?
I read an article suggesting using a CDN in front of the function with the function URL specified as the pull zone. This would work, but would introduce unnecessary cost - and in my scenario none of the content can be cached, so using a CDN is far from ideal.
If you connect your Cloud project with Firebase, you can connect your HTTP-triggered Cloud Functions to Firebase Hosting to get vanity URLs.
Using Cloudflare Workers (CDN, reverse proxy)
Why? Because it not only allows you to set up a reverse proxy over your Cloud Function but also lets you configure things like server-side rendering (SSR) at CDN edge locations, hydrating the API response for the initial (SPA) page load, CSRF protection, DDoS protection, advanced caching strategies, etc.
Add your domain to Cloudflare; then go to DNS settings and add an A record pointing to 192.0.2.1, with the Cloudflare proxy enabled for that record (orange icon).
Create a Cloudflare Worker script similar to this:
function handleRequest(request) {
const url = new URL(request.url);
url.protocol = "https:";
url.hostname = "us-central1-example.cloudfunctions.net";
url.pathname = `/app${url.pathname}`;
return fetch(new Request(url.toString(), request));
}
addEventListener("fetch", (event) => {
event.respondWith(handleRequest(event.request));
});
Finally, open the Workers tab in the Cloudflare dashboard and add a new route mapping your domain URL (pattern) to this worker script, e.g. example.com/* => proxy (script)
For a complete example, refer to GraphQL API and Relay Starter Kit (see web/workers).
Also, vote for "Allow me to put a Custom Domain on my Cloud Function" in the GCF issue tracker.
Another way to do it while avoiding Firebase is to put a load balancer in front of the Cloud Function or Cloud Run and use a "Serverless network endpoint group" as the backend for the load balancer.
Once you have the load balancer set up, just modify the DNS record of your domain to point to the load balancer and you are good to go.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless
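A rough sketch of the commands involved (the names, region, and function are placeholders; the linked guide covers the full setup, including the URL map, forwarding rule, and SSL certificate):

# Hypothetical names; create a serverless NEG pointing at the Cloud Function
gcloud compute network-endpoint-groups create my-function-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-name=my-function

# Create a backend service and attach the NEG to it
gcloud compute backend-services create my-backend-service --global
gcloud compute backend-services add-backend my-backend-service \
    --global \
    --network-endpoint-group=my-function-neg \
    --network-endpoint-group-region=us-central1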
It's been a while since this was asked.
Yes, you can now use a custom domain for your Google Cloud Functions.
Go over to Firebase and associate your project with Firebase. What we are interested in here is Hosting. Install the Firebase CLI as per the Firebase documentation (very good and sweet docs there).
Now create your project and, as you may have noticed in the docs, to add Firebase to your project you run firebase init. Select Hosting and that's it.
Once you are done, look for the firebase.json file, then customize it like this:
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "/myfunction/custom",
        "function": "myfunction"
      }
    ]
  }
}
By default, you get a domain like https://project-name.web.app, but you can add your own domain in the console.
Now deploy your site. Since you are probably not interested in the web hosting part, you can leave the public folder as is. Your function will then be reachable like this:
Function to execute > myfunction
Custom url > https://example.com/myfunction/custom
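The deploy step itself is just the usual Firebase CLI command (assuming Hosting was initialized as above):

firebase deploy --only hosting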
If you don't mind the final appearance of the URL, you could also set up a CNAME DNS record.
function.yourdomain.com -> us-central1******.cloudfunctions.net
then you could call it like
function.yourdomain.com/function-1/?message=Hello+World

HTTP Destination for External Webservice in HANA Cloud Platform

I want to call a web service hosted on w3schools from an SAPUI5 application using Web IDE.
WSDL url is: https://www.w3schools.com/xml/tempconvert.asmx?WSDL
When I used the URL directly in the UI5 code, I got an "Access-Control-Allow-Origin" error, as the URL belongs to a different domain.
So I decided to create an HTTP destination for the WSDL, reference the alias in neo-app.json, and make use of that alias in the Web IDE code.
So, I have created the following HTTP destination in the cloud platform cockpit:
HTTP Destination created in Cockpit
neo-app.json
{
  "path": "/w3schools",
  "target": {
    "type": "destination",
    "name": "w3schools",
    "entryPath": "/"
  },
  "description": "W3SChools WS Temperature Conversion API"
}
In my controller, I have referenced the destination in the AJAX call as follows:
url: "/w3schools/xml/tempconvert.asmx?WSDL",
However, the service seems unreachable from the code: I can see this request failing (red status) in the "Network" tab of Google Chrome.
To cross-check, I tried to open the destination using the application test URL with the suffix /w3schools/xml/tempconvert.asmx?WSDL.
However, I got a 404 error code.
With this, I came to the conclusion that the issue is with the HTTP destination configured in the cloud cockpit.
I have tried various options (the URL as https instead of http, giving a different name in WebIDESystem, etc.), but nothing worked out in my favor. In all these cases, the destination shows a green status when I try the "Check Connection" option of the HTTP destination.
Can someone please tell me how to resolve this? I would like to stick with the HTTP destination approach here, as it gives me the flexibility to configure the service from an admin perspective, which in turn makes maintenance easier.
Regards,
Faddy
Remove WebIDESystem from the HTTP destination. It should work then.
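For reference, a minimal destination configuration along those lines might look like this (the values are assumptions for this particular service, not taken from the original screenshot):

Name:           w3schools
Type:           HTTP
URL:            https://www.w3schools.com
Proxy Type:     Internet
Authentication: NoAuthentication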

Validate AWS Client Certificate from API Gateway in SailsJS, behind Elastic Load Balance

I have a small API Gateway endpoint that sends a client certificate to a backend server. This backend server runs on Sails behind an ELB. What I want to do is filter some of the routes in Sails with a policy that looks for the certificate in the request: if it is not sent, reject the request; if it is (validating against the public key), allow it to continue.
In the docs of AWS API Gateway (http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html), they suggest to use client-certificate-auth (https://www.npmjs.com/package/client-certificate-auth), but I can't find a way to use it on my Sails backend.
I've tried using serverOptions in Sails, but it crashes with:
error: TypeError: "listener" argument must be a function
So I'm pretty much lost on this one. If anyone has experience with it, please advise :)
Thanks
Step 0:
As mentioned in some of the comments, you'll need to change the ELB to a classic ELB and load balance TCP on port 443 instead of HTTPS, so that TLS terminates at Sails and the client certificate actually reaches Node.
Step 1: You'll need to launch Sails with HTTPS. The config looks something like this:
http: {
  serverOptions: {
    requestCert: true,
    rejectUnauthorized: true
  }
},
ssl: {
  key: key.Body.toString(),
  cert: cert.Body.toString()
}
So either config/http.js and config/ssl.js, or pass those to sails.lift().
Step 2: Run the middleware. There are lots of different ways to do this, but since you say you only want it on some routes, I'd create a policy that runs client-certificate-auth, or, possibly even better, just call req.connection.getPeerCertificate() and check the things you want to check. No need to use an extra npm module when you don't have to. A sketch of such a policy follows the documentation link below.
Here's the policy documentation:
https://sailsjs.com/documentation/concepts/policies
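A minimal sketch of that kind of policy (the file name and the specific certificate checks are assumptions; adapt them to whatever your gateway's client certificate actually contains):

// api/policies/requireClientCert.js (hypothetical name)
module.exports = function (req, res, next) {
  // With requestCert: true in serverOptions, Node exposes the peer certificate on the TLS socket.
  var cert = req.connection.getPeerCertificate && req.connection.getPeerCertificate();
  if (!cert || Object.keys(cert).length === 0) {
    return res.forbidden('A client certificate is required.');
  }
  // Example check: compare the certificate's subject CN against the expected value.
  if (!cert.subject || cert.subject.CN !== 'ApiGateway') {
    return res.forbidden('Client certificate not recognized.');
  }
  return next();
};

Apply it to just the routes you care about in config/policies.js.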

API Console Issue

I've been using WSO2 API Manager 1.9.1 for the past month on a static IP, and we liked it enough to put it on Azure behind a fully qualified domain name. As we are still only using it for internal purposes, we shut the VM down during off hours to save money. Our Azure setup does not guarantee the same IP address each time the VM restarts. The FQDN allows us to always reach https://api.mydomain.com regardless of what happens with the VM IP.
I updated the appropriate config files to the FQDN and everything seems to be working well. However! The one issue I have and cannot seem to resolve is calling APIs from the API Console. No matter what I do, I get a response like the one below.
Response Body: no content
Response Code: 0
Response Headers:
{
  "error": "no response from server"
}
Mysteriously, I can successfully make the same calls from the command line or SoapUI, so it's something unique to the API Console. I can't seem to find anything useful in the logs or by googling. I do see a recurring error, but it's not very clear or even complete (it seems to be cut off):
[2015-11-17 21:33:21,768] ERROR - AsyncDataPublisher Reconnection failed for
Happy to provide further inputs / info. Any suggestions on root cause or where to look is appreciated. Thanks in advance for your help!
Edit #1 - adding screenshots from Chrome
The API Console may not be giving you a response due to the following issues:
- If you are using HTTPS, you have to open the gateway URL in the browser and accept the certificate before invoking the API from the API Console (this applies when there is no signed certificate on the gateway).
- A CORS issue, which may be because your domain is not in the Access-Control-Allow-Origin header of the OPTIONS response.
- If you create an API with an HTTPS backend, you have to import the endpoint's SSL certificate into client-truststore.jks (see the keytool sketch below).
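For that last point, the import can be done with keytool. A sketch, assuming the certificate file name, the default keystore location, and the default keystore password of a stock installation (adjust to your setup):

keytool -importcert -file backend-cert.pem \
    -keystore repository/resources/security/client-truststore.jks \
    -alias backendcert -storepass wso2carbon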