GCP Cloud Armor deny main domain https://mma.mydomain.com/ - google-cloud-platform

Is there a way in GCP Cloud Armor to deny the main domain https://mma.mydomain.com/ and allow only the web services below?
1. https://mma.mydomain.com/v1/teststudio/developer - POST
2. https://mma.mydomain.com/v1/teststudio/developer - GET
3. https://mma.mydomain.com/v1/teststudio/developer - PATCH
4. https://mma.mydomain.com/v1/teststudio/developer/app - POST
5. https://mma.mydomain.com/v1/teststudio/developer/app - GET
I have set the following rules in Google Cloud Armor (Network Security services):
Priority 28 - Deny - request.path.matches('https://mma.mydomain.com/') - "Deny access from Internet to https://mma.mydomain.com"
Priority 31 - Allow - request.path.matches('/v1/devstudio/developer') - "Allow access from Internet to /v1/teststudio/developer"
Priority 32 - Allow - request.path.matches('/v1/devstudio/developer') - "Allow access from Internet to /v1/teststudio/developer/app"
I am referring to https://cloud.google.com/armor/docs/rules-language-reference. Please guide with examples.
Thanks in Advance.
Best Regards,
Kaushal

Assuming the numbers on the right are the rule priorities, Cloud Armor evaluates rules in priority order and stops at the first match. In your case the deny rule (priority 28) matches first, so the request is denied and the other rules are never considered. Consider reversing the flow: put the more specific allow rules first and have the "default" hostname deny rule fire last.
Consider an allow rule like this:
request.headers['host'].matches('mma.mydomain.com') && request.path.lower().urlDecode().contains('/v1/devstudio/developer') && request.method == "GET"
And if you want to block other requests, have your request.path.matches('https://mma.mydomain.com/') rule fire after it, at a higher priority number.
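For example, here is a sketch of one possible ordering (assuming the /v1/teststudio/developer endpoints from your question and that priorities 10 and 20 are free in your policy; note that your configured expressions reference /v1/devstudio/... while your endpoints are under /v1/teststudio/..., and that request.path only carries the path portion of the URL, which is why the hostname is matched via the host header):
Priority 10, action allow:
request.headers['host'].matches('mma.mydomain.com') && request.path.lower().urlDecode().contains('/v1/teststudio/developer') && (request.method == 'POST' || request.method == 'GET' || request.method == 'PATCH')
Priority 20, action deny(403):
request.headers['host'].matches('mma.mydomain.com')
Because the allow expression uses contains(), it also covers /v1/teststudio/developer/app; anything else on that host falls through to the priority 20 deny, and traffic for other hosts continues to the default rule.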

Related

Security Group to allow traffic from CloudFront has different number of inbound rules from one account to another

I followed this article to use Lambda and SNS to manage my Security Group for allowing traffic from CloudFront. After setting it up for multiple accounts, I noticed that the number of inbound rules in each account differs, with some having 50+ rules and others having 100+. However, the number of rules doesn't seem to correspond with the IP ranges.
I've already checked that the maximum number of rules per Security Group is 200 and that the Lambda function didn't timeout. Has anyone else encountered this issue, or is it normal to have varying numbers of inbound rules for the same Security Group across different accounts?
Looking at the code for the Lambda function, I would expect them to be the same for every account. I thought they might be different depending on the region, but that doesn't appear to be the case.
HOWEVER, you no longer need to do this! The blog post you are following is from 2020, and as of Feb 2022 Amazon manages this list for you. All you have to do is add the managed prefix list com.amazonaws.global.cloudfront.origin-facing in a single rule in your security group.
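If you prefer to script it instead of using the console, something along these lines should work (a sketch using boto3; the security group ID, region, and port 443 are placeholder assumptions, and the prefix-list ID is looked up first because it differs per region):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region of the security group (placeholder)

# Look up the ID of the CloudFront origin-facing managed prefix list
response = ec2.describe_managed_prefix_lists(
    Filters=[{"Name": "prefix-list-name",
              "Values": ["com.amazonaws.global.cloudfront.origin-facing"]}]
)
prefix_list_id = response["PrefixLists"][0]["PrefixListId"]

# Add a single ingress rule that allows HTTPS only from CloudFront origin-facing ranges
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list_id,
                           "Description": "CloudFront origin-facing"}],
    }],
)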

Cloud Armor logs aren't very clear when rule is set as "Preview only"

I'm deploying WAF with Cloud Armor and I realized that the rules can be created in a "Preview only" mode and that there are Cloud Armor entries in Cloud Logging.
The problem is that when I create a "Preview only" rule and that rule is matched by some request, I cannot distinguish, in the logs, between the requests that matched a specific rule and normal, ordinary requests. They all look pretty much the same.
Are there any logging attributes that only exist (or have specific values) when a request matches a specific rule in these cases? The only way I have found to explicitly check which rules a request matched is to uncheck the "Preview only" flag, which is not ideal in production while testing.
When you have rules configured in Cloud Armor set to "Preview", Cloud Logging will record what the rule would have done if enabled.
This Cloud Logging filter will show you entries that were denied by Cloud Armor:
resource.type="http_load_balancer"
jsonPayload.statusDetails="denied_by_security_policy"
This Cloud Logging filter will show you entries that would have been denied by Cloud Armor:
resource.type="http_load_balancer"
jsonPayload.previewSecurityPolicy.outcome="DENY"
In Cloud Logging, set the resource.type to "http_load_balancer" and delete the second filter line to see all entries.
Expand one of the entries:
Look for "jsonPayload.enforcedSecurityPolicy". This is the Cloud Armor Policy.
Look for "jsonPayload.previewSecurityPolicy". This provides details on the rule priority which tells you the rule and the outcome if the rule was not in preview.
Example screenshot: (not reproduced here)

Should I use nginx reverse proxy for cloud object storage?

I am currently designing the image-storage architecture for my service.
According to one article, it is a good idea to move all image upload and download traffic to external cloud object storage:
https://medium.com/@jgefroh/software-architecture-image-uploading-67997101a034
I have noticed that there are many cloud object storage providers:
- Amazon S3
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Alibaba Object Storage
- Oracle Object Storage
- IBM Object Storage
- Backblaze B2 Object
- Exoscale Object Storage
- Aruba Object Storage
- OVH Object Storage
- DreamHost DreamObjects
- Rackspace Cloud Files
- Digital Ocean Spaces
- Wasabi Hot Object Storage
My first choice was Amazon S3, because almost all of my system infrastructure is located on AWS.
However, I see a lot of problems with this object storage.
(Please correct me if I am wrong on any point below.)
1) Expensive log delivery
AWS charges for all operational requests. If I have to pay for all requests, I would like to see all request logs, and I would like to get those logs as fast as possible. AWS S3 provides log delivery, but with a big delay, and each log is delivered as a separate file in another S3 bucket, so each log is a separate S3 write request. Write requests are more expensive; they cost approximately $5 per 1M requests. Another option is to trigger an AWS Lambda function whenever a request is made, but that is an additional cost of $0.20 per 1M Lambda invocations. In summary, in my opinion the log delivery of S3 requests is way too expensive.
2) Cannot configure maximum object content-length globally for a whole bucket.
I have not found a way to configure a maximum object size (content-length) restriction for a whole bucket. In short, I want to be able to block the upload of files larger than a specified limit for a chosen bucket. I know that it is possible to specify the content-length of an uploaded file in a presigned PUT URL, but I think this should be configurable globally for a whole bucket.
3) Cannot configure a request rate limit per IP address per minute directly on a bucket.
Because all S3 requests are chargeable, I would like to be able to limit the number of requests made to my bucket from a single IP address. I want to prevent massive uploads and downloads from one IP address, and I want this to be configurable for a whole bucket.
I know that this functionality can be provided by AWS WAF attached to CloudFront, but WAF-inspected requests are way too expensive: you have to pay $0.60 per 1M inspected requests, while direct Amazon S3 requests cost about $0.40 per 1M requests. So it is completely not profitable to use AWS WAF as a rate-limit option for S3 requests, i.e. as "wallet protection" against DoS attacks.
4) Cannot create "one time - upload" presigned URL.
Generated presigned URLs can be used multiple times as long as they have not expired, which means that you can upload a file many times using the same presigned URL. It would be great if the AWS S3 API provided a way to create "one-time upload" presigned URLs. I know that I can implement such "one-time upload" functionality myself, for example as described here: https://serverless.com/blog/s3-one-time-signed-url/
However, in my opinion such functionality should be provided directly by the S3 API.
5) Every request to S3 is chargeable!
Let's say you created a private bucket. No one can access the data in it, however anybody on the internet can run bulk requests against your bucket, and Amazon will charge you for all of those forbidden 403 requests! It is not very comfortable that someone can "drain my wallet" at any time just by knowing the name of my bucket. It is far from secure, especially if you give someone a direct S3 presigned URL containing the bucket address: everyone who knows the name of a bucket can run bulk 403 requests and drain my wallet. Someone already asked about this here, and I guess it is still a problem:
https://forums.aws.amazon.com/message.jspa?messageID=58518
In my opinion forbidden 403 requests should not be chargeable at all!
6) Cannot block network traffic to S3 via NACL rules
Because every request to S3 is chargeable, I would like to be able to completely block network traffic to my S3 bucket at a lower network layer. Because S3 buckets cannot be placed in a private VPC, I cannot block traffic from a particular IP address via NACL rules. In my opinion AWS should provide such NACL rules for S3 buckets (and I mean network ACL rules, not bucket ACL rules that only block at the application layer).
Because of all these problems, I am considering using nginx as a proxy for all requests made to my private S3 buckets.
Advantages of this solution:
- I can rate-limit requests to S3 for free, however I want
- I can cache images on my nginx for free - fewer requests to S3
- I can add an extra layer of security with lua-resty-waf (https://github.com/p0pr0ck5/lua-resty-waf)
- I can quickly cut off requests with a request body greater than a specified size
- I can provide additional request authentication with OpenResty (custom Lua code can be executed on each request)
- I can easily and quickly obtain all access logs from my EC2 nginx machine and forward them to CloudWatch using the CloudWatch agent.
Disadvantages of this solution:
- I have to route all S3 traffic through my EC2 machines and scale my EC2 nginx machines with an Auto Scaling group.
- Direct traffic to the S3 bucket is still possible from the internet for everyone who knows my bucket name (there is no way to hide an S3 bucket in a private network).
MY QUESTIONS
Do you think that such an approach, with an nginx reverse proxy server in front of object storage, is good? Or is a better way to just find an alternative cloud object storage provider and not proxy object storage requests at all?
I would be very thankful for recommendations of alternative storage providers. For each recommended provider, the following information would be preferred:
Object storage provider name
A. What is the price for INGRESS traffic?
B. What is the price for EGRESS traffic?
C. What is the price for REQUESTS?
D. What payment options are available?
E. Are there any long-term agreements?
F. Where are the data centers located?
G. Does it provide an S3-compatible API?
H. Does it provide full access to all request logs?
I. Does it provide a configurable rate limit per IP address per minute for a bucket?
J. Does it allow hiding the object storage in a private network, or allowing network traffic only from particular IP addresses?
In my opinion a PERFECT cloud object storage provider should:
1) Provide access logs of all requests made on a bucket (IP address, response code, content-length, etc.)
2) Provide the possibility to rate-limit bucket requests per IP address per minute
3) Provide the possibility to cut off traffic from malicious IP addresses at the network layer
4) Provide the possibility to hide object storage buckets in a private network, or give access only to specified IP addresses
5) Not charge for forbidden 403 requests
I would be very thankful for all the answers, comments and recommendations.
Best regards
Using nginx as a reverse proxy for cloud object storage is a good idea for many use cases, and you can find some guides online on how to do so (at least with S3).
I am not familiar with all the features available from every cloud storage provider, but I doubt that any of them will give you all the features and flexibility you have with nginx.
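As a rough illustration, a minimal nginx front end for a private bucket could look something like the sketch below (the domain, bucket name, limits and cache sizes are placeholders, and authenticating the proxied requests towards S3, e.g. via pre-signed URLs or request signing with an instance role, is left out):

# Rate limit: 10 requests/second per client IP, kept in a 10 MB shared zone
limit_req_zone $binary_remote_addr zone=s3_ratelimit:10m rate=10r/s;

# Local cache for objects fetched from the bucket
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3_cache:50m max_size=10g inactive=60m;

server {
    listen 443 ssl;
    server_name images.example.com;                   # placeholder domain
    ssl_certificate     /etc/nginx/certs/images.crt;  # placeholder certificate paths
    ssl_certificate_key /etc/nginx/certs/images.key;

    client_max_body_size 10m;                         # reject request bodies larger than 10 MB

    location / {
        limit_req zone=s3_ratelimit burst=20 nodelay; # allow short bursts, then return 503

        proxy_cache s3_cache;
        proxy_cache_valid 200 60m;                    # cache successful responses for an hour

        proxy_set_header Host my-bucket-com.s3.amazonaws.com;   # placeholder bucket
        proxy_pass https://my-bucket-com.s3.amazonaws.com;
    }
}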
Regarding your disadvantages:
Scaling is always an issue, but benchmark tests show that nginx can handle a lot of throughput even on small machines.
There are solutions for that in AWS. First make your S3 bucket private, and then you can either:
- Allow access to your bucket only from the EC2 instance(s) running your nginx servers, or
- Generate pre-signed URLs for your S3 bucket and serve them to your clients using nginx (see the sketch below).
Note that both solutions for your second problem require some development.
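For the pre-signed URL option, here is a minimal sketch with boto3 (the bucket and key names are placeholders); note that generate_presigned_post also accepts a content-length-range condition, which covers the maximum-upload-size requirement from your point 2:

import boto3

s3 = boto3.client("s3")

# Pre-signed download URL, valid for 15 minutes
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "images/photo.jpg"},
    ExpiresIn=900,
)

# Pre-signed upload form with the object size capped at 10 MB
upload = s3.generate_presigned_post(
    Bucket="my-private-bucket",
    Key="uploads/photo.jpg",
    Conditions=[["content-length-range", 1, 10 * 1024 * 1024]],
    ExpiresIn=900,
)
# upload["url"] and upload["fields"] are then embedded in the client's POST form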
If you have an AWS infrastructure and want to implement an on-prem S3-compatible API, you can look into MinIO.
It is a performant object store which protects data through erasure coding.

Amazon WAF setup

I have a requirement to allow a URL pattern only for a set of IPs.
For example, the pattern is /helloworld.
For the /helloworld pattern, only certain IPs should be allowed; all other IPs must be blocked. Is this possible with Amazon AWS? I tried creating a string match condition, but I could not find an IP-based rule for this condition.
Could you please let us know whether it is possible in AWS WAF?
Yes, this is possible.
1) Create a rule that "Whitelists" a list of IPs.
2) Follow that rule with another rule that "Blacklists" everyone for your protected URL.
Create an IP Address Condition Set containing the allowed IP addresses, then a rule that allows access to anyone in that set.
Create a string match condition with your protected strings, and create a rule that blocks anyone who references your URL string in the URI.
Make sure that the Whitelist rule comes before the Blacklist rule.
You could also use Lambda@Edge; it can execute custom logic before the response is returned to the client.
It requires a CloudFront distribution.
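If you go the Lambda@Edge route, a viewer-request function along these lines could do it (a sketch only; the allowed IPs and the /helloworld prefix are placeholders, and the function must be deployed in us-east-1 and attached to your CloudFront distribution):

# Viewer-request handler: only the allow-listed IPs may reach /helloworld
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder allow-list

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    if request["uri"].startswith("/helloworld") and request["clientIp"] not in ALLOWED_IPS:
        # Short-circuit with a generated 403 response instead of forwarding to the origin
        return {
            "status": "403",
            "statusDescription": "Forbidden",
            "headers": {
                "content-type": [{"key": "Content-Type", "value": "text/plain"}],
            },
            "body": "Forbidden",
        }

    # Allowed IPs, and every other path, pass through unchanged
    return request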

Possible to allow client upload to S3 over https AND have a CNAME alias for the bucket?

OK, so I have an Amazon S3 bucket to which I want to allow users to upload files directly from the client over HTTPS.
In order to do this it became apparent that I would have to change the bucket name from a format using periods to a format using dashes. So:
my.bucket.com
became:
my-bucket-com
This is required due to a limitation of HTTPS certificate validation, which can't deal with periods in the bucket name when resolving the S3 endpoint (the wildcard certificate for *.s3.amazonaws.com only covers a single label).
So everything is peachy, except now I'd like to allow access to those files while hiding the fact that they are being stored on Amazon S3.
The obvious choice seems to be to use Route 53 zone configuration records to add a CNAME record pointing my URL at the bucket, given that I already have the bucket.com domain:
my.bucket.com > CNAME > my-bucket-com.s3.amazonaws.com
However, I now seem to have hit another limitation, in that Amazon seems to insist that the name of the CNAME record must match the bucket name exactly, so the above example will not work.
My temporary solution is to use a reverse proxy on an EC2 instance while traffic volumes are low. But this is not a good long-term solution, as it means that all S3 access is funneled through the proxy server, causing extra server load and data transfer charges. Not to mention the solution really isn't scalable when traffic volumes start to increase.
So is it possible to achieve both of my goals above or are they mutually exclusive?
If I want to be able to upload directly from clients over HTTPS, can I then not hide the S3 URL from end users accessing that content, and vice versa?
Well, there simply doesn't seem to be a straightforward way of achieving this.
There are two possible solutions:
1.) Put your S3 bucket behind Amazon CloudFront. This does incur a lot more charges, albeit with the added benefit of lower-latency regional access to your content.
2.) The solution we will go with is simply to split the bucket in two: one for uploads from HTTPS clients (my-bucket-com), and one for CNAME-aliased access to that content (my.bucket.com). This keeps the costs down, although it will involve extra steps in organising the content before it can be accessed.