We have a back-end website deployed on AWS. I deploy a front-end website on a local Tomcat and send requests to the back-end website to fetch some object data via a homemade SOAP API. Does this work?
Yes. Essentially you are accessing a remote API from your local environment. After the deployment in AWS, make sure the security group allows the relevant protocol and port number to be reached remotely.
By default these ports are not open.
Looks like you are trying to connect to a SOAP web service hosted in AWS. There is no reason it shouldn't work. The only thing is that you have to properly configure the AWS security groups attached to your back-end server to allow connections from your front-end website: use the front-end server's IP address as the source in your security group rule. You might also have to allow outgoing connections from the network where your front-end server is hosted, if it is protected by a firewall.
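As a rough sketch of that rule in AWS CDK (TypeScript): the VPC lookup, the front-end IP 203.0.113.10, and the SOAP port 8080 are all assumptions for illustration, not values from the question.

```typescript
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

export class BackendSgStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Look up the VPC the back-end runs in (fromLookup needs account/region in props.env).
    const vpc = ec2.Vpc.fromLookup(this, "Vpc", { isDefault: true });

    const sg = new ec2.SecurityGroup(this, "BackendSg", {
      vpc,
      description: "Allow SOAP calls from the front-end host only",
      allowAllOutbound: true,
    });

    // Assumed values: the front-end host's public IP and the SOAP endpoint's port.
    sg.addIngressRule(
      ec2.Peer.ipv4("203.0.113.10/32"),
      ec2.Port.tcp(8080),
      "Local Tomcat front-end"
    );
  }
}
```

You can just as well add the same rule by hand in the EC2 console; the CDK form is only a compact way to show the source and port.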
Related question:
I have an on-premise server with multiple web services running on it. I want my AWS applications to access it, so I have exposed the server to the internet. But now anyone can connect and make requests to the web services. How do I make only my AWS services able to access my on-premise web services, with every request that cannot be authenticated blocked by the on-premise server?
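One common pattern, on top of real authentication, is to accept requests only from the fixed egress IPs of your AWS environment (for example a NAT gateway's Elastic IP). A minimal sketch in Express/TypeScript, where the allowlisted addresses and the route are invented for illustration:

```typescript
import express, { Request, Response, NextFunction } from "express";

// Sketch only: allow requests solely from known AWS egress IPs
// (made-up values; use your NAT gateway / Elastic IPs in practice).
const ALLOWED_IPS = new Set(["198.51.100.7", "198.51.100.8"]);

function awsOnly(req: Request, res: Response, next: NextFunction) {
  // req.ip may arrive as an IPv6-mapped address like "::ffff:198.51.100.7".
  // If the server sits behind a reverse proxy, configure Express's
  // "trust proxy" setting so req.ip reflects the real client address.
  const ip = (req.ip ?? "").replace(/^::ffff:/, "");
  if (ALLOWED_IPS.has(ip)) return next();
  res.status(403).send("Forbidden");
}

const app = express();
app.use(awsOnly);

app.get("/service/status", (_req, res) => {
  res.json({ ok: true });
});

app.listen(8080);
```

A site-to-site VPN or AWS Direct Connect between the VPC and the on-premise network is the more robust option, since the server then never has to be exposed to the internet at all.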
So, I'm working on a hackathon project right now, and for the demo, I've spun up a NodeJS Express server on an EC2 via Elastic Beanstalk. When testing the server's API with our front-end locally, it worked perfectly fine.
Now we've deployed our front-end to AWS Amplify, set up a domain name in Route53, and hooked everything up. When we go to the domain, our front-end looks great, but when we try the functionality that connects to our server's API, we get a net::ERR_SSL_PROTOCOL_ERROR.
Doing some research, it looks like we have to set up a certificate on the Classic Load Balancer that sits in front of the EC2 instance. So I requested a certificate and created a listener on the Load Balancer as follows:
| Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port |
| --- | --- | --- | --- |
| HTTPS | 443 | HTTPS | 3000 |
But now I realize that, set up this way, I still have no idea how to point the React front-end's API calls at the Load Balancer instead of the EC2 instance, or whether the listener is set up correctly. Would anyone have an idea of what steps we should take here?
For the details of the app, the backend is a pretty straightforward Express App with CORS enabled, and the frontend is a fairly standard React project, nothing special about either of them.
Instance Protocol should be HTTP. So your setup uses HTTPS only between client and CLB:
Client --- (HTTPS) ---> CLB --- (HTTP) ---> EC2
Also, for a proper HTTPS setup, you need to use your own domain. You can't use the default domain Elastic Beanstalk provides for your application.
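To answer the remaining part of the question: point the React app at a DNS name that resolves to the load balancer, not at the instance. A minimal sketch, assuming a Route53 alias record api.example.com targeting the CLB and Create React App's env-variable convention:

```typescript
// Sketch only: "api.example.com" is an assumed Route53 alias record
// pointing at the Classic Load Balancer (not at the EC2 instance).
// REACT_APP_API_BASE follows Create React App's env conventions.
const API_BASE =
  process.env.REACT_APP_API_BASE ?? "https://api.example.com";

export async function fetchItems(): Promise<unknown> {
  const res = await fetch(`${API_BASE}/items`);
  if (!res.ok) {
    throw new Error(`API request failed with status ${res.status}`);
  }
  return res.json();
}
```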
How can I send HTTPS requests from one deployment to another using AWS Lightsail's private domain?
I've created two AWS Lightsail container deployments from two Docker images. I'd like to send HTTPS requests from one deployment ("sender") to the other ("receiver"). This works fine when the receiver's public endpoint is enabled. However, I don't want to expose this service to the public; instead I want to route traffic over AWS Lightsail's private domain.
My problem is that when I try to send an HTTPS request from the "sender" to the "receiver"'s private domain (<service_name>.service.local:<port>), I get https://<service_name>.service.local:52020/tester/status net::ERR_NAME_NOT_RESOLVED on the "sender"'s HTML page. According to the Lightsail docs (section "Private domain"), this should be accessible to my "Lightsail resources in the same AWS Region as your service".
I've found a similar question and answer on Stack Overflow. I tried that answer in my region but failed, because the Lightsail container requires https while .service.local requires http. After creating an Amazon Linux instance, I succeeded in making http requests but failed to make https requests (screenshots below). Meanwhile, Lightsail strictly asks you to use https.
If I force an http request from an https webpage, Chrome generates a Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error. I can work around the https problem by using Next.js API routes, but this doesn't feel secure because Next.js API routes are publicly accessible.
Is there anything that I may be missing here?
Things I've verified:
The image is up and running and works fine when connecting to it using the public domain
I'm running both the instance and the container service in the same region
Thank you in advance.
Some screenshots (captions only, images not reproduced): running dig inside the Docker entrypoint script; the error message when the sender sends an http request to the receiver.
I made my two AWS Lightsail containers, a Frontend Container running Next.js and a Backend Container running Flask, talk to each other using the following steps:
Launch a Lightsail "instance" using "Amazon Linux" in the region where I want to deploy my containers. Copy /etc/resolv.conf from this "Amazon Linux" instance, and update the Dockerfile to overwrite the /etc/resolv.conf file inside my Docker image.
To make API requests using http instead of https, and to work around the Mixed content: The page at ... was loaded over HTTPS but requested an insecure ... error, I used a Next.js API route to relay the API request. So, for instance, a page on the Frontend Container makes an API request to /api on the same container, and the /api route then makes an http request to the Backend Container.
The API route was coded with security measures so that users cannot abuse it to reach arbitrary endpoints in the Backend Container (a sketch of this relay follows below).
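A minimal sketch of such a relay route (Next.js pages router, TypeScript); the private hostname backend.service.local, the port, and the path allowlist are assumptions for illustration:

```typescript
import type { NextApiRequest, NextApiResponse } from "next";

// Sketch only: assumed Lightsail private domain + port of the Flask backend.
const BACKEND = "http://backend.service.local:52020";

// Allowlist so the relay can't be used to reach arbitrary backend endpoints.
const ALLOWED_PATHS = new Set(["/tester/status"]);

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const path = typeof req.query.path === "string" ? req.query.path : "";
  if (!ALLOWED_PATHS.has(path)) {
    return res.status(404).json({ error: "Unknown endpoint" });
  }

  // The browser talks HTTPS to this route; only this server-side hop is http.
  const upstream = await fetch(`${BACKEND}${path}`);
  res.status(upstream.status).json(await upstream.json());
}
```

Because only the server-side hop speaks http, the browser still sees a fully-https page and the Mixed Content error goes away.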
https "pages" are often mixed content where resources such as pictures are drawn from the http folders not the "https" site folder, hence the request to get such a resource is http because of its configuration by location, so it will be called by http to obtain and then not be crypted (see server configuration for https folder location that requires access to it by that protocol).
Regarding protocols, the message from the other post may imply that communicating "privately" is not a public web service, so such communication would use the ssl:// secure protocol (much like using ssh://) rather than https://, the secure protocol of a public web server; both require a certificate. (Hazarding a guess.)
Raw ssl may be what is used privately across the local network.
The following AWS links recommend having different accounts for development and for the service:
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
https://aws.amazon.com/cli/
First of all, I'm in no way an expert at security or networking, so any advice would be appreciated.
I'm developing an iOS app that communicates with an API hosted on an AWS EC2 Linux machine.
The API is deployed using **FastAPI + Docker**.
Currently, I'm able to communicate with my remote API using HTTP requests to my server's public IP address (after opening port 80 for TCP) and transfer data between the client and my server.
One of my app's features requires sending a private cookie from the client to the server.
Since having the cookie allows potential attackers to make requests on behalf of the client, I intend to transfer the cookie securely with HTTPS.
I have several questions:
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
The FastAPI "Deploy with Docker" docs recommend this article for implementing TLS for the server (using Docker Swarm Mode and Traefik).Is that guide relevant for my use-case?
In that article, it says to "Define a server name using a subdomain of a domain you own". Do I really need to own a domain to implement HTTPS? Can't I just keep using the server's IP address to communicate with it?
Thanks!
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
With HTTP, all traffic between your clients and the EC2 instance is in plain text. With HTTPS the traffic is encrypted, so the cookie (and everything else) is protected in transit.
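On top of HTTPS, you can mark the cookie itself Secure (and HttpOnly) so clients will only ever transmit it over TLS. The question's server is FastAPI, so the Express/TypeScript sketch below is purely illustrative; the cookie name and value are placeholders:

```typescript
import express from "express";

const app = express();

app.post("/login", (_req, res) => {
  // Sketch only: "session" and its value are placeholders.
  res.cookie("session", "opaque-token", {
    httpOnly: true, // not readable from JavaScript
    secure: true,   // only ever sent over HTTPS
    sameSite: "strict",
  });
  res.sendStatus(204);
});

app.listen(3000);
```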
FastAPI "Deploy with Docker"
Sadly, I can't comment on the article.
Do I really need to own a domain to implement HTTPS?
Yes. SSL certificates can only be registered for domains that you own. You can't get a certificate for a domain that is not yours.
I have a website written in AngularJS which sends API requests to another server application. If I want users to connect to the website through https, do I have to make the API server https as well? I have already requested an SSL certificate on AWS for my website's address and applied it on the load balancer of the website instance (not the API server instance). Do I have to request another certificate for my API server?
Thanks.
It is recommended that the communication between the client and server happens over https, especially if private data is being transmitted, such as login data.
Regarding certificates: in order for https to work, the common name (CN) or a subject alternative name (SAN) in the certificate must match the fully qualified domain name of your server's URL. So yes, you need a new certificate created specifically for your back-end server.
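With AWS Certificate Manager that simply means requesting a second certificate whose name matches the API server's domain and attaching it to that server's load balancer. A hedged CDK (TypeScript) sketch, where example.com and api.example.com are placeholders, not domains from the question:

```typescript
import * as cdk from "aws-cdk-lib";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as route53 from "aws-cdk-lib/aws-route53";

export class ApiCertStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Assumed hosted zone; fromLookup needs account/region in props.env.
    const zone = route53.HostedZone.fromLookup(this, "Zone", {
      domainName: "example.com",
    });

    // A separate ACM certificate for the API server's domain,
    // validated automatically via DNS records in the hosted zone.
    new acm.Certificate(this, "ApiCert", {
      domainName: "api.example.com",
      validation: acm.CertificateValidation.fromDns(zone),
    });
  }
}
```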