Dropbox API access from Amazon Cloud - amazon-web-services

I am building a project that will use the Dropbox API to read and write files to and from Dropbox. I have noticed that the endpoint URL is backed by an Amazon ELB, and I am wondering whether there is an AWS-internal API I could use, which might save both me and Dropbox some money by keeping requests internal to Amazon rather than going over the public internet.

The Dropbox API host is api.dropbox.com, which resolves to 199.47.218.158.
That does not look like it belongs to one of the EC2 public IP ranges.
See: https://forums.aws.amazon.com/ann.jspa?annID=1528
Anyway, even if it did, it would not be possible to determine the internal IP unless they published the Elastic IP's DNS name (which looks like ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com).
A little known tip:
If you query an Elastic IP's DNS name from within an EC2 instance, you will get an internal IP.
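To illustrate the naming scheme behind that tip: an EC2 public DNS name embeds the instance's public IP in the hostname itself, so you can recover the address without any lookup. A minimal sketch (the hostname below is a made-up example):

```javascript
// Extract the embedded public IP from an EC2 public DNS name,
// e.g. ec2-54-12-34-56.us-west-2.compute.amazonaws.com -> 54.12.34.56
function publicIpFromEc2DnsName(hostname) {
  const match = hostname.match(/^ec2-(\d+)-(\d+)-(\d+)-(\d+)\./);
  if (!match) return null; // not an EC2 public DNS name
  return match.slice(1, 5).join('.');
}

console.log(publicIpFromEc2DnsName('ec2-54-12-34-56.us-west-2.compute.amazonaws.com'));
```

Resolving that same hostname from inside EC2 is what returns the private IP instead, which is the tip above.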

Related

AWS host static website and access via DirectConnect

I want to host a static website and will access it via DirectConnect with a custom domain + HTTPS. I think CloudFront + S3 is not suitable in this case as traffic will go through internet (correct me if I'm wrong). What/where should I host my website? Thanks in advance.
I am not sure you need Direct Connect for your use case. Direct Connect is for connecting an on-premises data center to AWS over a private connection. It takes a lot of work to set up: a telecom provider installs a router at an AWS Direct Connect location and cross-connects it with AWS's equipment, and so on. This is a big project and costs money. I highly doubt you need it to host a static website.
You can host your static website in S3, buy a domain name in Route 53, and map your S3 bucket to that domain name so you can access the site on the internet (as a public site). There are many tutorials on setting this up.
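As a sketch of the S3 side: a bucket configured for static website hosting also needs a policy allowing public reads. Something like the following (the bucket name is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

With this attached and static website hosting enabled on the bucket, a Route 53 alias record can point the domain at the bucket's website endpoint.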

How do I restrict GCP load balancer access by domain, domain's IP or GKE ingress IP?

I have assets in a Google Cloud Storage bucket. I created a subdomain (aaa.bbb.com) and a load balancer using that subdomain to point to that bucket (e.g. aaa.bbb.com/image-to-read.png). I also have an application using Google Kubernetes Engine. The goal is to make sure all users are blocked except that application on GKE. And all the GKE application is doing is reading the url of the assets to display them. How do I achieve that?
Things I've tried:
Setting GCS cors for the bucket
It turns out this only restricts by domain if people are signed into Google with the domain.
Workload Identity
This has just not worked for me. I also have an API service in the same GKE cluster that uses it, and I'm able to upload fine with it. However, a plain <img /> tag whose source points at the GCS bucket ignores Workload Identity as far as I can tell.
Cloud Armor
This seems the most promising. I have successfully restricted by IP address but, unfortunately, the only IP address I'm able to restrict by is my actual local computer. I believe that means the request headers are sending my computer's IP address to the load balancer. But what I am trying to do is restrict access by the application's load balancer IP address or even by the origin domain (preferred).
What I'm asking is probably a basic networking question, but I'm no wiz at all the devops/infrastructure concepts so any help would be appreciated. Thanks!
You have two options:
Cloud Storage authorization
Deploy an HTTP(S) Load Balancer + Cloud Armor.
I am not sure what you mean by GKE ingress IP.
The simplest is to add Authorization in your GKE application when accessing Cloud Storage.
Authorization:
Service Account OAuth Access Token
Signed URLs.
Both methods are easy to implement.
Note: Workload Identity Federation also generates service account OAuth access tokens. Use that method if you need to federate credentials from another OAuth authority to Google. For a GKE application, however, Signed URLs or service account OAuth access tokens are probably the correct solution.

Whitelist IP of AWS API Gateway API endpoint in company's firewall

I have built an AWS API Gateway endpoint that one of the machines on my company's network will hit to POST data at regular intervals. The office firewall blocks it when I try from the office network through Postman (on mobile hotspot or other Wi-Fi it works seamlessly, since there is no firewall in the way), so I have to give the network security team a range of IP addresses to whitelist before the machine can reach the API endpoint.
Where to get the IPs? Will they be constant or changing? Since it is a long process of raising tickets for IPs to be white-listed by the network security team, is there a smooth process on the same?
Also, is there a risk associated to this way of data push from on-prem to cloud? I've already implemented AWS IAM Authorization and also API-Key for security and access control. If there still is a risk, how to make this process totally secured?
Please help!
Unfortunately, you cannot give a static IP to an API Gateway endpoint, as it can change without notice; that is by AWS design. What you can do in that case is put a reverse proxy with an Elastic IP in front of it, which transparently routes your HTTP traffic to the API Gateway. (You then need a domain name and a certificate, because you will no longer be using the API Gateway hostname.)
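A minimal sketch of such a reverse proxy in nginx (the domain, certificate paths, and API Gateway hostname are all placeholders):

```
server {
    listen 443 ssl;
    server_name api.example.com;  # your own domain, pointed at the Elastic IP

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        # Forward to the API Gateway endpoint; SNI must be enabled so the
        # TLS handshake with API Gateway uses its hostname.
        proxy_pass https://abc123.execute-api.eu-central-1.amazonaws.com;
        proxy_ssl_server_name on;
        proxy_set_header Host abc123.execute-api.eu-central-1.amazonaws.com;
    }
}
```

The office firewall then only needs to whitelist the single Elastic IP attached to the proxy instance.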
Also, is there a risk associated to this way of data push from on-prem to cloud? I've already implemented AWS IAM Authorization and also API-Key for security and access control. If there still is a risk, how to make this process totally secured?
Nothing is totally secure in any organization, but to protect data in transit you should use an encrypted channel such as HTTPS (which API Gateway supports natively). That is also why you need a domain name and a certificate for the proxy.
how about:
https://www.reddit.com/r/aws/comments/8f1pve/whitelisting_aws_api_gateway/
API Gateway is proxied through CloudFront, so you could whitelist the IP ranges published for the CLOUDFRONT service. The IPs are rotated, so you'll need to update your whitelist whenever they change.
There is an SNS topic that you can subscribe to that sends out the IP ranges of AWS services whenever they are updated.
https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/
https://aws.amazon.com/blogs/aws/subscribe-to-aws-public-ip-address-changes-via-amazon-sns/
Unless it's a regional APIGW endpoint.
This explains the IPs of AWS https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
You need the ranges for the "EC2" service for the region where your API Gateway is deployed.
It was not clear to me which of the IP ranges in the JSON belong to API Gateway endpoints. I ran a few experiments with nslookup and found that you need the ranges for the "EC2" service, using a script like this one:
const IPCIDR = require("ip-cidr");
const ipRanges = require('./ips.json'); // local copy of ip-ranges.json

const relevantRanges = ipRanges.prefixes.filter(el => {
  const cidr = new IPCIDR(el.ip_prefix);
  // These IPs came from `nslookup` against my endpoint; they belong to EC2 in the region.
  return cidr.contains('3.5.136.1') || cidr.contains('18.192.0.1');
});

console.log(JSON.stringify(relevantRanges, undefined, 2));
You could use it for experiments to find out what service and ranges are responsible for an endpoint.
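Alternatively, since each entry in ip-ranges.json already carries `service` and `region` fields, you can filter without any CIDR arithmetic. A sketch, where the inline sample mimics the shape of the published JSON and the region is a placeholder:

```javascript
// Filter AWS ip-ranges.json entries by the `service` and `region`
// fields that each prefix entry carries.
const sample = { // shape of https://ip-ranges.amazonaws.com/ip-ranges.json
  prefixes: [
    { ip_prefix: '3.5.136.0/22',  region: 'eu-central-1', service: 'EC2' },
    { ip_prefix: '18.192.0.0/15', region: 'eu-central-1', service: 'EC2' },
    { ip_prefix: '52.94.0.0/22',  region: 'eu-central-1', service: 'DYNAMODB' },
  ],
};

function rangesFor(data, service, region) {
  return data.prefixes
    .filter(el => el.service === service && el.region === region)
    .map(el => el.ip_prefix);
}

console.log(rangesFor(sample, 'EC2', 'eu-central-1'));
```

Point this at the real downloaded file instead of the sample to get the full whitelist for your region.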

How to use an IAM certificate in AWS?

I have an EC2, hosting a simple http server.
I want to make use of the HTTPS so to have my traffic hidden, but I made the mistake of buying a domain via AWS and to generate a certificate for it via AWS.
A mistake because, it seems, I cannot simply import that certificate into my EC2 instance (perhaps because, if AWS handed me the certificate as a file, I could reuse it in any number of my applications).
So, what I have to do in order to use it?
Move my web application behind an Elastic Load Balancer? Use a container to host it?
Which is the least expensive?

Secure connection from S3 to EC2 on AWS

I'm sure this is a fairly simple question regarding EC2 and S3 on AWS.
I have a static website hosted on S3 which connects to a MongoDB server on an EC2 instance that I want to secure. Currently it's open to the whole internet (0.0.0.0/0) on port 27017, the MongoDB default. For security reasons, however, I want to restrict inbound traffic to only requests from the S3 static website. Apparently S3 does not supply fixed addresses, which is causing a problem.
My only thought was to open the port to all IP ranges for the S3 region I am in. This doc on AWS explains how to find these. Although they are subject to change without notice.
http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Would this be the way to proceed or am I missing something obvious here. Another way to assign an IP to S3 perhaps?
S3 is a storage service, not a compute service, so it cannot make a request to your MongoDB. When S3 serves static webpages, your browser renders them, and when a user clicks a link that connects to your MongoDB, the request goes to MongoDB from the user's computer.
So MongoDB sees the request coming from the user's IP. Since you do not know where the user is coming from (or the IP range), you have no choice but to listen to traffic from any IP.
I think it is not possible to allow only your S3-hosted site to access the DB inside EC2, since S3 does not give you an IP address.
So it's better to try an alternative: instead of accessing the DB directly, proxy through an HTTPS service inside your EC2 instance and restrict inbound traffic on your MongoDB port.
S3 won't request your MongoDB server on the EC2 instance. From my understanding, your JS files in the browser request the MongoDB running on the EC2 instance. In that case you have to add message headers in the MongoDB configuration to allow CORS.