HTTP Destination for External Webservice in HANA Cloud Platform - web-services

I want to call a web service hosted at w3schools from a SAPUI5 application in Web IDE.
The WSDL URL is: https://www.w3schools.com/xml/tempconvert.asmx?WSDL
When I used the URL directly in the UI5 code, I got an "Access-Control-Allow-Origin" error, since the URL belongs to a different domain.
So I decided to create an HTTP destination for the WSDL, reference its alias in neo-app.json, and use that alias in the Web IDE code.
I created the following HTTP destination in the cloud platform cockpit:
HTTP Destination created in Cockpit
neo-app.json
{
"path": "/w3schools",
"target": {
"type": "destination",
"name": "w3schools",
"entryPath": "/"
},
"description": "W3SChools WS Temperature Conversion API"
}
In my controller, I reference the destination in the AJAX call as follows:
url: "/w3schools/xml/tempconvert.asmx?WSDL",
However, the call fails: the request shows up with a red status in the "Network" tab of Google Chrome.
To cross-check, I tried to open the destination via the application's test URL with the suffix /w3schools/xml/tempconvert.asmx?WSDL.
However, I got a 404 error.
From this I concluded that the issue is with the HTTP destination configured in the cloud cockpit.
I have tried various options (the URL as https instead of http, a different name in WebIDESystem, etc.), but nothing worked. In all these cases the destination shows a green status when I use the "Check Connection" option of the HTTP destination.
Can someone please tell me how to resolve this? I would like to stick with the HTTP destination approach, as it lets the service be configured from an admin perspective, which in turn makes maintenance easier.
Regards,
Faddy

Remove the WebIDESystem property from the HTTP destination; it should then work.
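As a quick sanity check that the backend path itself is fine (so that the 404 through the app really points at the destination or route configuration), you can fetch the WSDL directly, outside the UI5 app. A minimal sketch in Python, with nothing HCP-specific assumed:

import requests

# Fetch the WSDL directly from w3schools, bypassing the HTTP destination.
# A 200 here means the path is correct and the 404 seen through the app
# comes from the destination/route configuration on the platform side.
resp = requests.get("https://www.w3schools.com/xml/tempconvert.asmx?WSDL")
print(resp.status_code, resp.headers.get("Content-Type"))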

Related

Using AWS EC2 API (https) with EC2 Launch Templates - The parameter LaunchTemplate is not recognized

I am extremely new to APIs and cannot figure this out.
I have a use case where I want to call the AWS EC2 REST API over HTTPS to start an instance from a launch template. I am using the "RunInstances" action to attempt this. I have gone through the documentation and included all of the required parameters.
I then added "ImageId" to the test and got a good run (no issues with authentication, etc). The default instance was created.
The issue is that I cannot figure out how to get the API to accept the "LaunchTemplate" option. I either get "The parameter LaunchTemplate is not recognized" or a "400 - bad request" error.
In Postman I have:
Postman screenshot with the RunInstances call set up and the full URL visible
The "LaunchTemplateSpecification" object documentation (also linked from the RunInstances documentation) describes this parameter.
Can someone help me figure out how to construct the API request for the "LaunchTemplate" parameter?
Also, my web searches have turned up only CLI examples, no HTTPS (web URL) examples (and none for this parameter in the docs either). If someone found a link that I could not, please send it my way.
Thanks!
#stdunbar was correct as far as the LaunchTemplate.LaunchTemplateName parameter is concerned.
I submitted a case to AWS support and received a response: I was missing the "Version" parameter.
The correct parameter set is as follows:
https://us-east-1.ec2.amazonaws.com/?
Action=RunInstances&
MaxCount=1&
MinCount=1&
LaunchTemplate.LaunchTemplateName=Testing-Template&
Version=2016-11-15
Here is an updated image of Postman (for learners like me).
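For anyone doing the same thing programmatically instead of through the raw query API, the equivalent call with boto3 looks roughly like this (a sketch, using the template name from above). Note that boto3 sends the EC2 API Version parameter for you; the optional "Version" key inside LaunchTemplate refers to the launch template version, not the API version shown above.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from the named launch template.
response = ec2.run_instances(
    MinCount=1,
    MaxCount=1,
    LaunchTemplate={
        "LaunchTemplateName": "Testing-Template",
        # "Version": "$Latest",  # optional: pin a specific template version
    },
)
print(response["Instances"][0]["InstanceId"])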

How to get FixedResponseConfig in boto3 to work?

I am trying to create an integration between an EC2 ALB and Lambda functions, and in part of my code I am trying to use the method:
modify_listener() documentation available here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.modify_listener
In that part I am using the DefaultActions FixedResponseConfig, with which I am trying to display a simple "hello world" in HTML. The way this gets triggered in the code is: if my target group is unhealthy, display the fixed response. Permissions have been set up and everything looks fine, because when I run the function I get a success message. But when I open the application from my Okta portal I don't get that response (hello world); I get a normal 503 Service Temporarily Unavailable.
How can I direct that fixed response to the front end of the app when it is not working? The purpose of this is to display a maintenance page when the target group is down.
Thanks for the responses; please feel free to ask any questions.
You can't customize the ALB's error messages through a fixed response. Instead, you should consider two options:
use CloudFront in front of your ALB to set up a Custom Error Page for Specific HTTP Status Codes
use Route53 DNS failover when your ALB becomes unhealthy.
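For reference, the fixed-response default action the question is attempting looks roughly like this with boto3's modify_listener (a sketch; the listener ARN is a placeholder). As noted above, this only replaces the listener's default action; it does not change the 503 page the ALB itself generates:

import boto3

elbv2 = boto3.client("elbv2")

# Replace the listener's default action with a fixed HTML response.
# The ARN below is a placeholder -- substitute your own listener ARN.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    DefaultActions=[
        {
            "Type": "fixed-response",
            "FixedResponseConfig": {
                "StatusCode": "200",
                "ContentType": "text/html",
                "MessageBody": "<html><body><h1>Hello world</h1></body></html>",
            },
        }
    ],
)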

Why am I getting a CustomOriginConfig instead of S3OriginConfig?

Following this article, I'm trying to serve my website's static content from multiple regions.
The lambda function in that article is trying to modify the property of an object within this path:
event.Records[0].cf.request.origin.s3
The problem is that my Lambda function is not receiving such a property. Instead, I'm getting:
event.Records[0].cf.request.origin.custom
Apparently, this means that I'm receiving a CustomOriginConfig while the article expects an S3OriginConfig. I'm not sure what these two mean, but the UI depicted in the article for the "Edit Origin" page is totally different from mine.
The article shows this:
And I've got this:
Can someone please help me find why I'm receiving a CustomOriginConfig instead of an S3OriginConfig?
CloudFront only considers the origin to be an S3 Origin if the Origin Domain Name is the REST endpoint for the bucket -- e.g. ${bucketname}.s3.amazonaws.com. This is the configuration that supports authentication of requests on the back-side of CloudFront using an Origin Access Identity.
If you are using S3's web site hosting features (index and error documents, and/or redirects) then you use the web site hosting endpoint for the bucket, e.g. ${bucketname}.s3-website.${region}.amazonaws.com. CloudFront actually treats this configuration as a Custom Origin, the same as if you are using any (non-S3) web service as the origin server. Origin Access Identity and S3 website endpoints are not compatible with each other.
The console options change depending on whether the console sees that you're creating an S3 or Custom Origin (based on the hostname).
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
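If you want to confirm which origin type CloudFront is handing to your function, a trivial origin-request handler can just inspect the event shape referenced in the question. A sketch in Python (Lambda@Edge also supports Python runtimes; the article's function is presumably Node.js):

# Minimal Lambda@Edge origin-request handler that reports whether CloudFront
# passed an S3 origin (REST endpoint) or a custom origin (e.g. the S3
# website hosting endpoint) and returns the request unchanged.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    origin = request["origin"]
    if "s3" in origin:
        print("S3 origin:", origin["s3"]["domainName"])
    elif "custom" in origin:
        print("Custom origin:", origin["custom"]["domainName"])
    return request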

API Console Issue

I've been using WSO2 API Manager 1.9.1 for the past month on a static IP, and we liked it enough to put it on Azure behind a fully qualified domain name. As we are still only using it for internal purposes, we shut the VM down during off hours to save money. Our Azure setup does not guarantee the same IP address each time the VM restarts, so the FQDN allows us to always reach https://api.mydomain.com regardless of what happens with the VM IP.
I updated the appropriate config files to the FQDN and everything seems to be working well. However, the one issue I have and cannot seem to resolve is calling APIs from the API Console. No matter what I do, I get a response like the one below:
Response Body
no content
Response Code
0
Response Headers
{
"error": "no response from server"
}
Mysteriously, I can successfully make the same calls from the command line or SoapUI, so it's something unique to the API Console. I can't find anything useful in the logs or by googling. I do see a recurring error, but it's not very clear or even complete (it seems to be cut off).
[2015-11-17 21:33:21,768] ERROR - AsyncDataPublisher Reconnection failed for
Happy to provide further input/info. Any suggestions on the root cause or where to look are appreciated. Thanks in advance for your help!
Edit #1 - adding screenshots from Chrome
The API Console may not be giving you a response due to the following issues:
If you are using HTTPS, you have to open the gateway URL in the browser and accept the certificate before invoking the API from the API Console (this applies when the gateway has no signed certificate).
A CORS issue, which may be because your domain is not in the Access-Control-Allow-Origin header of the response to the OPTIONS call (see the sketch after this list for a quick way to check this).
If you create an API with an HTTPS backend, you have to import the endpoint's SSL certificate into client-truststore.jks.
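For the CORS point above, a quick way to see what the gateway actually returns for the preflight, outside the browser, is an OPTIONS probe like the one below (a sketch; the gateway URL, port, and Origin value are placeholders for your own setup):

import requests

# Placeholder gateway endpoint and console origin -- substitute your own.
resp = requests.options(
    "https://api.mydomain.com:8243/myapi/1.0/resource",
    headers={
        "Origin": "https://api.mydomain.com:9443",
        "Access-Control-Request-Method": "GET",
    },
    verify=False,  # skip verification if the gateway uses a self-signed certificate
)
print(resp.status_code, resp.headers.get("Access-Control-Allow-Origin"))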

Kibana won't connect to Elasticsearch on Amazon's Elasticsearch Service

After switching from hosting my own Elasticsearch cluster to Amazon's Elasticsearch Service,
my Kibana dashboards (versions 4.0.2 and 4.1.2) won't load, and I'm receiving the following error in kibana.log:
{
"name": "Kibana",
"hostname": "logs.example.co",
"pid": 8037,
"level": 60,
"err": {
"message": "Not Found",
"name": "Error",
"stack": "Error: Not Found\n at respond (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:235:15)\n at checkRespForFailure (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:203:7)\n at HttpConnector.<anonymous> (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/connectors\/http.js:156:7)\n at IncomingMessage.bound (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/node_modules\/lodash-node\/modern\/internals\/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"
},
"msg": "",
"time": "2015-10-14T20:48:40.169Z",
"v": 0
}
Unfortunately, this error is not very helpful. I assume it's a wrapped HTTP 404, but for what?
How can I connect a Kibana install to Amazon's Elasticsearch Service?
Here are some things to keep in mind when using Amazon's Elasticsearch Service:
Modifications to the access policies take a non-deterministic amount of time. I've found that it's good to wait at least 15 minutes after making policy changes, once the status is no longer "processing".
It listens on port 80 for HTTP requests and not the standard port 9200. Be sure that your elasticsearch_url configuration directive reflects this, e.g.:
elasticsearch_url: "http://es.example.co:80"
It's very likely that your Kibana instance will not have the necessary permissions to create the index that it needs to show you a dashboard -- this is toward the root of the issue. Check the indices on the Elasticsearch domain and look for a line that matches the kibana_index config directive (e.g. via http://es.example.co/_cat/indices; a small probe is sketched after this list).
For instance, if your kibana_index directive is set to .kibana-4 and you don't see a line like the following:
green open .kibana-4 1 1 3 2 30.3kb 17.2kb
then your Kibana instance was not able to create the index it needs. If you go to the Elasticsearch Service dashboard in the AWS console and click on the Kibana link, it will likely create a .kibana-4 index for you.
You can then point your existing Kibana's configuration at this index, and you should run into the next point.
Your existing Kibana install will likely fail authentication, because the service expects signed requests; the error looks like this:
Kibana: Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header.
You can configure this in Kibana, and the general documentation on signing API requests has more details.
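To make the index check from the third point concrete, here is a small probe you can run from the Kibana host (a sketch; the endpoint and index name are the example values used in this answer, substitute your own):

import requests

# Example values from above -- substitute your own ES endpoint and kibana_index.
ES_URL = "http://es.example.co:80"
KIBANA_INDEX = ".kibana-4"

lines = requests.get(ES_URL + "/_cat/indices").text.splitlines()
if any(KIBANA_INDEX in line for line in lines):
    print("Kibana index exists")
else:
    print("Kibana index is missing -- Kibana could not create it; "
          "check the access policy or open the Kibana link in the AWS console")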
It's worth noting that the error messaging is reportedly better in Kibana 4.2, but as that's still in beta and Amazon's Elasticsearch Service was only recently released, the above should be helpful for debugging.