WSO2 APIM is not invoking the backend when the hostname is changed.
My APIM server runs in a Docker container on an AWS EC2 instance; the backend is an Azure App Service. When I configure the API gateway with localhost on the EC2 instance, the published API in the gateway can invoke the backend and fetch data without any issue.
When I make the following changes and try out the same API from AWS, it returns a 400 error with nothing in the logs (the relevant deployment.toml entries are sketched below):
- Change the hostname in deployment.toml and the required gateway URLs.
- Create a new keystore for SSL communication using a CA-signed certificate and import it into client-truststore.
- Change the secondary keystore to the new one.
- Build and run the Docker image with the modified keystores and deployment.toml.
I created the image using the Dockerfile at https://github.com/wso2/docker-apim.git under dockerfiles/ubuntu/apim and then made the required changes in deployment.toml and the Dockerfile.
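For context, a minimal sketch of how those changes usually look in deployment.toml; all values here are illustrative and the section names follow the APIM 4.x configuration model, so verify them against your version:

    [server]
    hostname = "apim.example.com"        # new CA-signed hostname (illustrative)

    [keystore.tls]                       # keystore used for SSL communication
    file_name = "newkeystore.jks"        # illustrative file name
    type = "JKS"
    password = "changeit"
    alias = "apim.example.com"
    key_password = "changeit"

    [truststore]
    file_name = "client-truststore.jks"  # truststore holding the imported CA cert
    type = "JKS"
    password = "wso2carbon"

The gateway environment URLs under [[apim.gateway.environment]] must carry the same hostname as the certificate.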
The API works fine via curl and from Postman; it returns the 400 error only when invoked from the Publisher/Devportal UI.
Related
So I've deployed my WSO2 APIM instance to an Azure VM and changed the hostname, and the gateway hostname, to match the VM.
I'm now having problems during request execution: I'm getting a CORS error.
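For what it's worth, gateway CORS behaviour is configured in deployment.toml; the block below mirrors the stock APIM 4.x defaults as I understand them, so treat it as a sketch and check it against your version:

    [apim.cors]
    allow_origins = "*"
    allow_methods = ["GET", "PUT", "POST", "DELETE", "PATCH", "OPTIONS"]
    allow_headers = ["authorization", "Access-Control-Allow-Origin", "Content-Type", "SOAPAction"]
    allow_credentials = false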
I have a Spring app A running in Elastic Beanstalk.
A internally calls app B over HTTP, and that works fine.
I then added a listener to app B's load balancer and enabled HTTPS.
Now A is unable to call B over HTTPS and gets a certificate exception.
Please let me know whether I need to change app A to disable certificate validation, or whether there is another way.
App A calls app B using WebClient. Both apps run in Elastic Beanstalk.
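For the specific question about disabling certificate validation, a commonly used sketch is to hand WebClient a Netty SslContext built with InsecureTrustManagerFactory; the class name here is illustrative, and this approach is for testing only:

    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
    import org.springframework.http.client.reactive.ReactorClientHttpConnector;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.netty.http.client.HttpClient;
    import javax.net.ssl.SSLException;

    public class InsecureWebClientFactory {

        // WARNING: trusts every certificate; acceptable for local testing,
        // never for production traffic.
        public static WebClient insecureWebClient() throws SSLException {
            SslContext sslContext = SslContextBuilder.forClient()
                    .trustManager(InsecureTrustManagerFactory.INSTANCE)
                    .build();
            HttpClient httpClient = HttpClient.create()
                    .secure(spec -> spec.sslContext(sslContext));
            return WebClient.builder()
                    .clientConnector(new ReactorClientHttpConnector(httpClient))
                    .build();
        }
    }

The safer fix, as the answer below notes, is to give B a certificate that A's default truststore already trusts.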
You need to have your own domain (e.g. myapp.org); you can't use HTTPS with the default EB domain AWS provides, because you can't obtain a certificate for it. Once you have your own domain, you can get an SSL certificate using AWS ACM. The full procedure for setting up HTTPS on EB is described in the AWS docs.
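For reference, once you own the domain, requesting the certificate from ACM is a single AWS CLI call (domain name illustrative):

    aws acm request-certificate \
        --domain-name myapp.org \
        --validation-method DNS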
How can I send an HTTPS request from one deployment to another using AWS Lightsail's private domain?
I've created two AWS Lightsail container deployments from two Docker images. I'd like to send an HTTPS request from one deployment ("sender") to the other ("receiver"). This works fine when the receiver's public endpoint is enabled. However, I don't want to expose this service to the public; instead I want to route traffic over AWS Lightsail's private domain.
My problem is that when I send an HTTPS request from the "sender" to the "receiver"'s private domain (<service_name>.service.local:<port>), I get https://<service_name>.service.local:52020/tester/status net::ERR_NAME_NOT_RESOLVED on the "sender"'s HTML page. According to the Lightsail docs (section "Private domain"), this should be accessible to my "Lightsail resources in the same AWS Region as your service".
I've found a similar question and answer on Stack Overflow. I tried that answer in my region but failed, because the Lightsail container requires HTTPS while .service.local requires HTTP. After creating an Amazon Linux instance, I succeeded in making an HTTP request but failed to make an HTTPS request (screenshot below). Meanwhile, Lightsail strictly requires HTTPS.
If I force an HTTP request from the HTTPS webpage, Chrome raises a Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error. I can work around the HTTPS problem by using Next.js API routes, but that doesn't feel secure because Next.js API routes are publicly accessible.
Is there anything I may be missing here?
Things I've verified:
- The image is up and running and works fine when connected to via its public domain.
- Both the instance and the container service run in the same region.
Thank you in advance.
Some screenshots:
- Running dig inside the Docker entrypoint script
- Error message when the sender sends an HTTP request to the receiver
I made my two AWS Lightsail containers, a frontend container running Next.js and a backend container running Flask, talk to each other using the following steps:
Launch a Lightsail instance using Amazon Linux in the region where the container service is deployed. Copy /etc/resolv.conf from this Amazon Linux instance, and update the Dockerfile to overwrite /etc/resolv.conf inside the container (sketched after these steps).
To make the API request over HTTP instead of HTTPS, and to get around the Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error, I used Next.js API routes to relay the request. So, for instance, a page on the frontend container makes an API request to /api on the same container, and the /api route makes an HTTP request to the backend container.
The API route is coded with security measures so that users cannot use it to reach arbitrary endpoints in the backend container.
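A sketch of the Dockerfile change from the first step; the paths and start command are illustrative, and the copy happens at container start because Docker mounts its own resolv.conf over /etc/resolv.conf at runtime:

    # Bundle the resolv.conf captured from the Amazon Linux instance
    COPY resolv.conf /opt/resolv.conf

    # Overwrite the runtime resolv.conf before starting the app so the
    # container can resolve *.service.local names
    ENTRYPOINT ["/bin/sh", "-c", "cp /opt/resolv.conf /etc/resolv.conf && exec node server.js"]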
https "pages" are often mixed content where resources such as pictures are drawn from the http folders not the "https" site folder, hence the request to get such a resource is http because of its configuration by location, so it will be called by http to obtain and then not be crypted (see server configuration for https folder location that requires access to it by that protocol).
Of protocols, the message from another post implies and may be that to communicate "privately" is NOT a web service for public so such communications require using ssl:// secure protocol (alike using ssh://) NOT https:// secure public web server protocol of both require certificate. (hazard a guess)
ssl may be what is used privately across local.
The following AWS links recommend having different accounts for development and for the service:
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
https://aws.amazon.com/cli/
I have hosted my website (both frontend and backend) on an EC2 instance and am using an SSL certificate from AWS Certificate Manager. Before installing this certificate I was able to access the route that sends a "get" query to the EC2 backend. Now I cannot open this route; the website is somehow unable to fetch data from the backend.
Also, everything works completely fine when I open the website using the EC2 instance's IP address.
I think I must change something in the certificate setup so that the site can fetch data from the backend. Please let me know what should be done.
I am using React and Node.js.
When I deployed my WAR file to AWS using Amazon Elastic Beanstalk, only HTTP GET is processed. HTTP POST is rejected with a 405 error, Method Not Allowed.
The WAR being deployed is basically JAX-RS web services using the Jersey framework. When deployed locally, everything works normally.
I am using all resources under the AWS Free Tier: EC2, S3, RDS (MySQL), and the Amazon Linux AMI with Tomcat.
Is there any configuration in AWS that I need to set to allow HTTP POST?
I already tried modifying Tomcat's web.xml to set readonly to false, but that still did not fix the problem (the change is sketched below).
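For reference, that change is usually made on the DefaultServlet entry in Tomcat's conf/web.xml, along these lines:

    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
        <init-param>
            <param-name>readonly</param-name>
            <param-value>false</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

Note that readonly governs the DefaultServlet's handling of write methods for static content, so it may not affect POSTs routed to a Jersey servlet.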
Is this something to do with security, such as IAM roles and permissions?
I really hope the solution requires changing only config files and not modifying source code.
Thank you in advance for your answer :)
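For completeness, a minimal Jersey resource of the kind described, which can help confirm whether the 405 comes from the application or from the environment; the path and class name are illustrative:

    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // Test resource: if POST /echo returns 200 locally but 405 on EB,
    // the problem is environmental rather than in the JAX-RS code.
    @Path("/echo")
    public class EchoResource {

        @POST
        @Consumes(MediaType.TEXT_PLAIN)
        @Produces(MediaType.TEXT_PLAIN)
        public Response echo(String body) {
            return Response.ok("received: " + body).build();
        }
    }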