Google Cloud Armor - missing default rule - google-cloud-platform

I've created a Cloud Armor security policy, but it does not have a default rule. I am confused because this contradicts the documentation:
https://cloud.google.com/compute/docs/reference/rest/beta/securityPolicies
A list of rules that belong to this policy. There must always be a default rule (rule with priority 2147483647 and match "*"). If no rules are provided when creating a security policy, a default rule with action "allow" will be added.
$ gcloud beta compute security-policies describe healthcheck
---
creationTimestamp: ''
description: ''
fingerprint: ...
id: '.....'
kind: compute#securityPolicy
labelFingerprint: .....
name: healthcheck
rules:
- action: deny(404)
  description: Block requests to /health
  kind: compute#securityPolicyRule
  match:
    expr:
      expression: request.path.matches('/health')
  preview: false
  priority: 1000
selfLink: https://www.googleapis.com/compute/....
Based on my tests, the default behaviour seems to be Allow. Is this default rule hidden or am I missing something?
The rule was created with Terraform but I don't think it matters.

The answer to your question lies in the fact that there are different ways to create a Cloud Armor policy. For example, if you create a policy through the Cloud Console, you are required to choose the default rule type prior to creating the policy.
In your case, the policy was created using Terraform, which creates it in effectively the same way as running gcloud commands from Cloud Shell: both allow a Cloud Armor policy to be created without a default rule specified.
If a Cloud Armor policy is created without any rule specified (default or otherwise), then a default “Allow” rule is added automatically. This is the behavior documented in the REST resource link you shared. One thing to take note of: it may take a few minutes before the default “Allow” rule is visible. In my testing it took at least two minutes to become visible in the Console and through:
gcloud compute security-policies describe [POLICY_NAME]
Typically during Cloud Armor policy creation, a default rule is specified with the desired behavior (step 2 of the Console workflow). The policy you shared appears not to have fully updated in the Console yet, which is why it does not show the default “Allow” rule. However, based on the description of your setup, a default “Allow” rule would have been applied when the policy was created by Terraform.
You can always choose to change the behavior of the default rule from “Allow” to “Deny-404” (or “Deny-502”), using the command:
gcloud compute security-policies rules update 2147483647 --security-policy [POLICY_NAME] --action "deny-404"
(2147483647 is the default rule priority, max int32)
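If you just want to confirm the default rule exists once it has propagated, you can also describe it by its priority (using the policy name from your question):
gcloud compute security-policies rules describe 2147483647 --security-policy healthcheck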

Related

GCP - Deny Permissions for Specific Resources

How do I set up explicit deny permissions for a specific resource in GCP? For example, I have 2 GKE clusters in my project, say "dev-gke" and "qa-gke". How do I ensure that folks on the team are denied permission to update/delete the "qa-gke" cluster while they can continue to do so on the "dev-gke" cluster?
I contemplated setting up a deny policy as explained here, using denialCondition and resource.matchTag referencing a tag attached to the "qa-gke" cluster:
"denialCondition": {
"title": "QA Setup",
"expression": "resource.matchTag('12345678/env', 'ts-qa')"
But as explained here, tags are defined at the organization level, not at the resource level, and I couldn't find an equivalent of resource.matchTag for labels.
Since there was no suitable way to address this with IAM permissions alone, I ended up creating a ClusterRole and ClusterRoleBinding per cluster, bound to the users who should be allowed access; a sketch follows the reference below.
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control
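For illustration, a minimal sketch of that approach, applied inside the QA cluster so team members only get read access there (the role name and the user email are placeholders, not from the question):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-read-only
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: team-read-only-binding
subjects:
- kind: User
  name: dev-user@example.com   # the Google account identity as seen by GKE
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: team-read-only
  apiGroup: rbac.authorization.k8s.io
In the dev cluster, the same users can instead be bound to a broader role (for example the built-in cluster-admin ClusterRole).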
See this doc: https://cloud.google.com/iam/docs/deny-access.
You can probably also attach tags to your clusters; see https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing#gcloud_8.

AWS IAM Role - AccessDenied error in one pod

I have a service account which I am trying to use across multiple pods installed in the same namespace.
One of the pods is created by Airflow KubernetesPodOperator.
The other is created via Helm through Kubernetes deployment.
In the Airflow deployment, I see the IAM role being assigned and DynamoDB tables being created, listed, etc. However, in the second Helm chart deployment (or in a test pod created as shown here), I keep getting an AccessDenied error for CreateTable in DynamoDB.
I can see the AWS role ARN being assigned to the service account, the service account being applied to the pod, and the corresponding token file being created, but I still see the AccessDenied exception:
arn:aws:sts::1234567890:assumed-role/MyCustomRole/aws-sdk-java-1636152310195 is not authorized to perform: dynamodb:CreateTable on resource
ServiceAccount
Name:                mypipeline-service-account
Namespace:           abc-qa-daemons
Labels:              app.kubernetes.io/managed-by=Helm
                     chart=abc-pipeline-main.651
                     heritage=Helm
                     release=ab-qa-pipeline
                     tier=mypipeline
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/MyCustomRole
                     meta.helm.sh/release-name: ab-qa-pipeline
                     meta.helm.sh/release-namespace: abc-qa-daemons
Image pull secrets:  <none>
Mountable secrets:   mypipeline-service-account-token-6gm5b
Tokens:              mypipeline-service-account-token-6gm5b
P.S.: The client code is the same for both the KubernetesPodOperator pod and the Helm chart deployment, i.e. the same Docker image. Other attributes such as nodeSelector, tolerations, and volume mounts are also the same.
The describe pod output for both of them is similar with just some name and label changes.
The KubernetesPodOperator pod has a QoS class of Burstable, while the Helm chart one is BestEffort.
Why do I get AccessDenied in Helm deployment but not in KubernetesPodOperator? How to debug this issue?
Whenever we get an AccessDenied exception, there can be two possible reasons:
You have assigned the wrong role
The assigned role doesn't have the necessary permissions
In my case, the latter was the issue. The permissions assigned to a particular role can be quite granular.
For example, in my case, the DynamoDB tables the role can create/describe are limited to those starting with a specific prefix, not all DynamoDB tables (see the sketch below).
So it is always advisable to check the IAM role permissions whenever you get this error.
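For illustration, a role restricted like this (a hypothetical policy statement; the abc-qa- prefix is a placeholder) would be denied CreateTable on any table outside that prefix:
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:CreateTable",
    "dynamodb:DescribeTable"
  ],
  "Resource": "arn:aws:dynamodb:*:1234567890:table/abc-qa-*"
}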
As mentioned in the question, be sure to check the service account, e.g. by running a test pod with the awscli image.
Keep in mind that there is a credential provider chain used by the AWS SDKs which determines the credentials the application uses. In most cases DefaultAWSCredentialsProviderChain is used, and its order is shown below. Ensure that the SDK is picking up the intended provider (in our case, WebIdentityTokenCredentialsProvider).
// Constructor of com.amazonaws.auth.DefaultAWSCredentialsProviderChain (AWS SDK for Java v1);
// the providers are tried in the order they are passed to super().
public DefaultAWSCredentialsProviderChain() {
    super(new EnvironmentVariableCredentialsProvider(),
          new SystemPropertiesCredentialsProvider(),
          new ProfileCredentialsProvider(),
          WebIdentityTokenCredentialsProvider.create(),
          new EC2ContainerCredentialsProviderWrapper());
}
Additionally, you might want to set the AWS SDK classes to DEBUG level in your logger to see which credentials provider is being picked up and why.
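For example, if Log4j 1.x properties are your logging backend (an assumption; adjust for your logging framework), a single line enables this:
log4j.logger.com.amazonaws.auth=DEBUG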
To check whether the service account is applied to a pod, describe the pod and check whether AWS environment variables such as AWS_REGION, AWS_DEFAULT_REGION, AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE are set on it.
If not, check whether your service account has the eks.amazonaws.com/role-arn annotation by describing that service account.
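For example, using the namespace and service account name from the question (<pod-name> is a placeholder):
kubectl describe pod <pod-name> -n abc-qa-daemons | grep AWS_
kubectl describe serviceaccount mypipeline-service-account -n abc-qa-daemons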

How do I resolve this circular reference in AWS CloudFormation?

I’m creating a generic stack template using CloudFormation, and I’ve hit a rather annoying circular reference.
Overall Requirements:
I want to be able to provision (among a lot of other things) an ECS cluster service that auto-scales using capacity providers; the capacity providers use Auto Scaling groups, and the Auto Scaling groups use a launch template.
I don’t want static resource names. This causes issues if a resource has to be re-created due to an update and that particular resource has to have a unique name.
Problem:
Without the launch template “knowing the cluster name” (via UserData) the service tasks get stuck in a PROVISIONING state.
So we have the first dependency chain:
Launch Template <- Cluster (Name)
But the Cluster has a dependency chain of:
Cluster <- Capacity Provider <- AutoScalingGroup <- Launch Template
Thus, we have a circular reference: Cluster <-> Launch Template
——
One way I can think of to resolve this is to derive the Cluster's name from another resource's name (one that lives outside of this dependency chain, e.g., the target group); that way the name is not static, and it removes the circular reference.
My question is: is there a better way?
It feels like there should be a resource that the cluster can subscribe to and the ec2 instance can publish to, which would remove the circular dependency as well as the need to assign resource names.
There is no such resource to break the dependency, and the cluster name must be pre-defined. This has already been recognized as a problem, and it is part of an open GitHub issue:
[ECS] Full support for Capacity Providers in CloudFormation.
One of the issues noted is:
Break circular dependency so that unnamed clusters can be created
At the moment, one workaround noted there is to partially predefine the name, e.g.:
ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Sub ${AWS::StackName}-ECSCluster

LaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        echo ECS_CLUSTER=${AWS::StackName}-ECSCluster >> /etc/ecs/ecs.config
Alternatively, one could try to solve this by developing a custom resource backed by a Lambda function. You could probably create your unnamed cluster with a launch template (LT) that uses a dummy cluster name; then, once the cluster is running, use the custom resource to create a new version of the LT with the updated cluster name and refresh your Auto Scaling group to use the new LT version. I'm not sure if this would work, but it is at least something that can be considered.
Sharing an update from the GitHub issue. The circular dependency has been broken by introducing a new resource: Cluster Capacity Provider Associations.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-clustercapacityproviderassociations.html
To use it in my example, you:
Create Cluster (without specifying name)
Create Launch Template (using Ref to get cluster name)
Create Auto Scaling Group(s)
Create Capacity Provider(s)
Create Cluster Capacity Provider Associations <- This is new!
The one gotcha is that you have to wait for the new association to be created before you can create a service on the cluster. So be sure that your service "DependsOn" these associations!
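A minimal sketch of how the new resource and that DependsOn might look (the resource names here are placeholders, not from the original template):
CapacityProviderAssociations:
  Type: AWS::ECS::ClusterCapacityProviderAssociations
  Properties:
    Cluster: !Ref ECSCluster
    CapacityProviders:
      - !Ref CapacityProvider
    DefaultCapacityProviderStrategy:
      - CapacityProvider: !Ref CapacityProvider
        Weight: 1

Service:
  Type: AWS::ECS::Service
  DependsOn: CapacityProviderAssociations  # wait for the association before creating the service
  Properties:
    Cluster: !Ref ECSCluster
    TaskDefinition: !Ref TaskDefinition    # placeholder; add your remaining service properties here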

How to create and verify a cross region public certificate through CloudFormation?

I'm attempting to achieve the following through CloudFormation.
From a stack created in an EU region, I want to create (and verify) a public certificate via Route 53 DNS validation in us-east-1, since CloudFront requires certificates in that region. I am aiming to have zero actions performed in the console or the AWS CLI.
The new CloudFormation support for ACM was a little sketchy last week but seems to be working now.
Certificate
Resources:
  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub "${Env}.domain.cloud"
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: !Sub "${Env}.domain.cloud"
          HostedZoneId: !Ref HostedZoneId
All I need to do is use CloudFormation to deploy this into us-east-1 from a stack in a different region. Everything else is ready for this.
I thought that CodePipeline's cross-region support would be great, so I started looking into [this documentation][1]. After setting things up in my template, I got the following error message:
An error occurred while validating the artifact bucket {...} The bucket named is not located in the `us-east-1` AWS region.
To me this makes no sense, as it seems you already need at least a couple of resources to exist in the target region for it to work; cart-before-the-horse behavior. To test this, I created an artifact bucket in the target region by hand and things worked fine, but that requires using the CLI or the console when I'm aiming for a CloudFormation-based solution.
Note: I'm running out of time to write this, so I'll update it when I can in a few hours. Any help before then would be great though.
Sadly, that's required for cross-region CodePipeline. From the docs:
When you create or edit a pipeline, you must have an artifact bucket in the pipeline Region and then you must have one artifact bucket per Region where you plan to execute an action.
If you want to fully automate this through CloudFormation, you either have to use a custom resource to create the buckets in all the regions in advance, or look at stack sets to deploy a single bucket template into multiple regions.
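For the stack-sets route, the per-region template can be as small as the sketch below (the bucket naming scheme is an assumption; CodePipeline just needs one artifact bucket it can reference in each region):
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "codepipeline-artifacts-${AWS::AccountId}-${AWS::Region}"
Outputs:
  ArtifactBucketName:
    Value: !Ref ArtifactBucket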
P.S. Your link does not work, so I'm not sure if you are referring to the same documentation page.

Can we assign a group to a role in AWS

We are using AWS cloud and terraform with ansible to deploy our current infrastructure.
The code is YAML files where we can put whatever works, but my concern is that we cannot apply group policies to a user role. Is this possible, or could it be that the console just does not show the policies applied to a role from a group?
We assigned it in our usual modus operandi, but I believe it does not work; the functionality may instead need to be provided by extra permissions, such as expressly specified IAM/bucket access policies.
user-test:    # this line declares the role
  assume_arn:
    - arn:aws:iam::anonymised:user/test
    - arn:aws:iam::anonymised:user/me
  groups:
    - tf-dev1-group
    - tf-dev2-group
    - tf-dev3-group
  policies:
    - athena-fulladmin-policy
    - support_access
No error messages, just a lack of results.