Secure connection from S3 to EC2 on AWS - amazon-web-services

I'm sure this is a fairly simple question regarding EC2 and S3 on AWS.
I have a static website hosted on S3 that connects to a MongoDB server on an EC2 instance, which I want to secure. Currently the instance is open to the entire internet (0.0.0.0/0) on port 27017, the MongoDB default. For security reasons I want to restrict inbound traffic to requests from the S3 static website only. Apparently S3 does not supply fixed IP addresses, which is causing a problem.
My only thought was to open the port to all IP ranges for the S3 region I am in. This AWS doc explains how to find them, although they are subject to change without notice:
http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
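(For reference, the published file is JSON; pulling out the S3 prefixes for one region would look something like this sketch, shown here with a tiny embedded sample instead of a live download.)

```python
# Sketch: filtering AWS's published ip-ranges.json for the S3 service in one
# region. In practice you would download the file from the URL above; a small
# embedded sample is used here for illustration only.
import json

def s3_cidrs(ip_ranges, region):
    """Return the IPv4 CIDR blocks published for S3 in the given region."""
    return [
        p["ip_prefix"]
        for p in ip_ranges["prefixes"]
        if p["service"] == "S3" and p["region"] == region
    ]

sample = json.loads("""{
  "prefixes": [
    {"ip_prefix": "52.92.16.0/20", "region": "us-east-1", "service": "S3"},
    {"ip_prefix": "52.94.0.0/22",  "region": "us-east-1", "service": "AMAZON"},
    {"ip_prefix": "54.231.0.0/16", "region": "eu-west-1", "service": "S3"}
  ]
}""")

if __name__ == "__main__":
    print(s3_cidrs(sample, "us-east-1"))  # ['52.92.16.0/20']
```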
Would this be the way to proceed, or am I missing something obvious here? Another way to assign an IP to S3, perhaps?

S3 is a storage service, not a compute service, so it cannot make a request to your MongoDB. When S3 serves a static webpage, your browser renders it, and when a user clicks a link that connects to your MongoDB, the request goes to MongoDB from the user's computer.
So MongoDB sees the request coming from the user's IP. Since you do not know where users are coming from (or their IP ranges), you have no choice but to accept traffic from any IP.

I think it is not possible to allow only your S3-hosted site to access the DB inside EC2, since S3 does not offer an IP address for you.
So it's better to try an alternative solution: instead of accessing the DB directly, proxy through an HTTPS service inside your EC2 instance and restrict the inbound traffic on your MongoDB port.
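A minimal sketch of that proxy idea, using only the Python standard library. The route scheme, collection whitelist, and port are hypothetical, and the actual MongoDB call (e.g. pymongo against 127.0.0.1:27017) is left as a placeholder:

```python
# Instead of exposing port 27017, the browser talks to a small API on the
# EC2 instance, and only that API talks to MongoDB (bound to localhost).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_COLLECTIONS = {"articles", "comments"}  # hypothetical whitelist

def validate_request(path):
    """Expose a fixed set of read-only endpoints, never raw DB access."""
    parts = path.strip("/").split("/")
    return len(parts) == 2 and parts[0] == "api" and parts[1] in ALLOWED_COLLECTIONS

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not validate_request(self.path):
            self.send_error(404)
            return
        # Placeholder: here you would query MongoDB on 127.0.0.1:27017
        # (e.g. with pymongo) and serialize the real result.
        body = json.dumps({"ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Lets the S3-hosted pages call this API; ideally pin this to
        # your S3 website origin rather than "*".
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In production, terminate TLS in front of this (e.g. nginx or an ALB).
    HTTPServer(("127.0.0.1", 8080), ApiHandler).serve_forever()
```

With something like this in place, the security group only needs to allow 80/443 to the API, and port 27017 can be closed to the internet entirely.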

S3 won't make requests to your MongoDB server on the EC2 instance. From my understanding, your JS files in the browser request the MongoDB running on the EC2 instance. In that case you have to add response headers in the MongoDB configuration to allow CORS.

Related

AWS host static website and access via DirectConnect

I want to host a static website and access it via Direct Connect with a custom domain + HTTPS. I think CloudFront + S3 is not suitable in this case, as traffic would go through the internet (correct me if I'm wrong). What/where should I host my website? Thanks in advance.
I am not sure you need Direct Connect for your use case. Direct Connect is for connecting an on-premises data center to AWS with a private connection. It takes a lot of work to set up: a telecom provider installs a router at an AWS location, connects it to AWS's equipment, etc. This is a big project and costs money. I highly doubt you need it to host a static website.
You can host your static website in S3, buy a domain name in Route 53, and map your S3 bucket to this domain name so the site is accessible on the internet (as a public site). There are many tutorials showing how to set this up.

AWS ElasticSearch Request not giving response in Postman

https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
I am following the document above to connect to my AWS Elasticsearch domain via Postman.
What I want to achieve: send a request and get the response back.
I have configured everything related to authentication as well, but it still times out.
It gives the error 'Could not get any response'.
My Postman settings related to SSL are also correct.
Sample URL :
https://vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com/elasticsearch_index/_search?source={"query":{"bool":{"should":[{"multi_match":{"query":"abc","fields":["name.suggestion"],"fuzziness":1}}]}},"size":10,"_source":["name"],"highlight":{"fields":{"name.suggestion":{}},"pre_tags":["\u003Cem\u003E"],"post_tags":["\u003C\/em\u003E"]}}&source_content_type=application/json
Since your ES domain is in a VPC, you can't access it from the internet. Security groups and "allowing the port" are unfortunately not enough.
The following is written in the docs:
If you try to access the endpoint in a web browser, however, you might find that the request times out. To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Some options to consider are:
Set up a bastion host in the VPC's public subnet and open an SSH tunnel from your local machine to the ES endpoint through the bastion host. This is the easiest ad-hoc proxy solution mentioned in the docs.
Access the ES domain directly from the bastion host (e.g., via remote desktop).
Set up a proxy server that forwards requests from the internet to the ES domain.
For creating and managing an ES domain, you can refer to this documentation.
While creating the ES domain, in the Network configuration section you can choose either VPC access or Public access. If you select Public access, you can secure your domain with an access policy that only allows specific users or IP addresses to access it.
To know more about access policies, you can refer to this SO answer.
So, if you create your ES domain outside a VPC, with Public access you can easily send requests and get responses through Postman, without adding any Authorization.
The endpoint in the URL is the one generated when you created your ES domain.
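A sketch of sending a search like the one URL-encoded in the question from code rather than Postman. The endpoint below is hypothetical, and a public-access domain with an open access policy is assumed:

```python
# Sketch: the same bool/multi_match search that the question's Postman URL
# encodes, sent as a request body instead of a query-string parameter.
import json
from urllib import request

def build_suggest_query(text, size=10):
    """Build the search body used in the question's URL."""
    return {
        "query": {"bool": {"should": [
            {"multi_match": {"query": text,
                             "fields": ["name.suggestion"],
                             "fuzziness": 1}},
        ]}},
        "size": size,
        "_source": ["name"],
        "highlight": {"fields": {"name.suggestion": {}},
                      "pre_tags": ["<em>"], "post_tags": ["</em>"]},
    }

if __name__ == "__main__":
    endpoint = "https://search-mydomain-xxxx.ap-southeast-1.es.amazonaws.com"  # hypothetical
    req = request.Request(
        endpoint + "/elasticsearch_index/_search",
        data=json.dumps(build_suggest_query("abc")).encode(),
        headers={"Content-Type": "application/json"},
    )
    print(request.urlopen(req).read().decode())
```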
You can then create an index, add data into the index, and use the Get Mapping API to get the mapping of the created index.
Now you can check from your AWS console that this index has been created in the ES domain.
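Sketched below are those basic index operations, again using only the standard library. The endpoint and index name are hypothetical, and the same public-access domain is assumed:

```python
# The three calls: create an index, add a document, get the mapping.
import json
from urllib import request

ENDPOINT = "https://search-mydomain-xxxx.ap-southeast-1.es.amazonaws.com"  # hypothetical

def index_ops(index, doc_id, doc):
    """Return (method, path, body) for each of the three operations."""
    return [
        ("PUT", f"/{index}", None),               # create an index
        ("PUT", f"/{index}/_doc/{doc_id}", doc),  # add data into the index
        ("GET", f"/{index}/_mapping", None),      # get the mapping
    ]

def run(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = request.Request(ENDPOINT + path, data=data, method=method,
                          headers={"Content-Type": "application/json"})
    return json.loads(request.urlopen(req).read())

if __name__ == "__main__":
    for method, path, body in index_ops("movies", 1, {"title": "My Neighbor Totoro"}):
        print(run(method, path, body))
```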

How to use an IAM certificate in AWS?

I have an EC2, hosting a simple http server.
I want to make use of HTTPS so my traffic is encrypted, but I made the mistake of buying a domain via AWS and generating a certificate for it via AWS.
A mistake because it seems I cannot simply import that certificate into my EC2 instance (maybe because, if AWS gave me the cert as a file, I could use it in any number of my applications).
So what do I have to do in order to use it?
Move my web application behind an Elastic Load Balancer? Use a container to host it?
Which is the least expensive option?

Dynamic Apache site with route 53 and cloud front not working

I am a beginner with AWS and need some help with my first project. Below is the architecture of my environment; the code runs great on GoDaddy, and I am trying to move it to AWS. Please understand that things are working, but not the way I want.
Setup
Two Amazon Linux servers in private subnets (like the one in the Associate practice lab)
One S3 bucket mapped to /var/www/html, synced via an aws s3 sync cron job
Route 53 registered domain name: sample.com
CloudFront distribution created: https://sample.cloudfront.net
Internet-facing Application Load Balancer with SSL (listener ports 80, 443)
ACM SSL certificate issued for *.sample.com and www.sample.com
RDS instance with everything configured (security groups, etc.)
Issues
Below are the issues:
As soon as I enable the HTTP-to-HTTPS redirect in httpd.conf as suggested by the Amazon guys (using this documentation), I get a Bad Gateway error.
Images are not getting delivered by CloudFront. I tried to redirect in .htaccess and get Access Denied, although I created an origin access identity and updated the bucket policy.
The load balancer's DNS name gets exposed even though Route 53 is mapped to the load balancer.
In httpd.conf, AllowOverride All has been used.
Syncing S3 with /var/www/html changes file permissions on Linux whenever a new file is uploaded.
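(For context on the first issue: a common cause of redirect loops and Bad Gateway errors behind an ALB is redirecting based on the local port, since the ALB terminates HTTPS and always reaches Apache over plain HTTP. The usual pattern, sketched here assuming mod_rewrite is enabled, keys off the X-Forwarded-Proto header instead:)

```apache
# Redirect only requests that arrived at the ALB over plain HTTP
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```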

Dropbox API access from Amazon Cloud

I am building a project which will use the Dropbox API to read and write files to and from Dropbox. I have noticed that the endpoint URL is backed by an Amazon ELB, and I am wondering: is there an AWS-internal API I could use, which might save both me and Dropbox some money by making requests internal to Amazon rather than external?
The host of the Dropbox API is api.dropbox.com, which resolves to 199.47.218.158.
That does not look like it belongs to one of the EC2 public IP ranges.
See: https://forums.aws.amazon.com/ann.jspa?annID=1528
Anyway, even if it did, it is not possible to determine the internal IP unless they publish the Elastic IP's DNS name (which looks like ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com).
A little-known tip:
If you query an Elastic IP's DNS name from within an EC2 instance, you will get the internal IP.