AWS WAF rate limiting rule with condition doesn't work

I have the following WAF rule:
rule {
  name = "rule4"
  action {
    block {}
  }
  priority = 40
  statement {
    rate_based_statement {
      limit              = 100
      aggregate_key_type = "IP"
      scope_down_statement {
        byte_match_statement {
          field_to_match {
            uri_path {}
          }
          positional_constraint = "STARTS_WITH"
          search_string         = "/search"
          text_transformation {
            priority = 0
            type     = "NONE"
          }
        }
      }
    }
  }
}
I also have a generic rule for limiting the rate:
rule {
  name = "rule5"
  action {
    block {}
  }
  priority = 300
  statement {
    rate_based_statement {
      limit              = 3000
      aggregate_key_type = "IP"
    }
  }
}
When a lot of requests are made to https://myexample.com/search?someQueryParams=here, they are picked up by rule5, not rule4, and I'm not sure why. Any idea?

Related

Terraform MalformedXML: The XML you provided was not well-formed for aws_s3_bucket_lifecycle_configuration

I'm really stuck today on the following error:
MalformedXML: The XML you provided was not well-formed
when applying aws_s3_bucket_lifecycle_configuration via Terraform using hashicorp/aws v4.38.0.
I wanted to set a rule that expires files after 365 days, with file size greater than 0 bytes, for the my_prefix prefix, so the resource definition looks like this:
resource "aws_s3_bucket_lifecycle_configuration" "my-bucket-lifecycle-configuration" {
depends_on = [aws_s3_bucket_versioning.my-bucket-versioning]
bucket = aws_s3_bucket.my_bucket.id
rule {
id = "my_prefix_current_version_config"
filter {
and {
prefix = "my_prefix/"
object_size_greater_than = 0
}
}
expiration {
days = 365
}
status = "Enabled"
}
}
Does anyone have an idea what's wrong with the above definition?
Documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration
Remark: the following definition can be applied without problem (no and block):
resource "aws_s3_bucket_lifecycle_configuration" "my-bucket-lifecycle-configuration" {
depends_on = [aws_s3_bucket_versioning.my-bucket-versioning]
bucket = aws_s3_bucket.my_bucket.id
rule {
id = "my_prefix_current_version_config"
filter {
prefix = "my_prefix/"
}
expiration {
days = 365
}
status = "Enabled"
}
}
From the documentation, you have to specify both ends of the object size range (which, I guess, means specifying both object_size_greater_than and object_size_less_than) as well as the prefix, for example:
filter {
  and {
    prefix                   = "my_prefix/"
    object_size_greater_than = 0
    object_size_less_than    = 500
  }
}
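Applied to the original resource, a corrected definition might look like the sketch below; the object_size_less_than value of 500 is an illustrative placeholder, not a value from the question:
resource "aws_s3_bucket_lifecycle_configuration" "my-bucket-lifecycle-configuration" {
  depends_on = [aws_s3_bucket_versioning.my-bucket-versioning]
  bucket     = aws_s3_bucket.my_bucket.id
  rule {
    id = "my_prefix_current_version_config"
    filter {
      # Both size bounds are given alongside the prefix, which is what the
      # documentation appears to require inside an "and" block.
      and {
        prefix                   = "my_prefix/"
        object_size_greater_than = 0
        object_size_less_than    = 500 # illustrative upper bound
      }
    }
    expiration {
      days = 365
    }
    status = "Enabled"
  }
}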

Terraform - Optional Nested Variable

I'm trying to create a module for SageMaker endpoints. There's an optional object variable called async_inference_config. If you omit it, the endpoint being deployed is synchronous, but if you include it, the endpoint deployed is asynchronous. To satisfy both of these use cases, async_inference_config needs to be an optional block.
I am unsure of how to make this block optional, though.
Any guidance would be greatly appreciated. See the example below for the structure of the optional parameter.
Example:
resource "aws_sagemaker_endpoint_configuration" "sagemaker_endpoint_configuration" {
count = var.create ? 1 : 0
name = var.endpoint_configuration_name
production_variants {
instance_type = var.instance_type
initial_instance_count = var.instance_count
model_name = var.model_name
variant_name = var.variant_name
}
async_inference_config {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
lifecycle {
create_before_destroy = true
ignore_changes = ["name"]
}
tags = var.tags
depends_on = [aws_sagemaker_model.sagemaker_model]
}
Update: here is what I tried based on the suggestion below, which seemed to work:
dynamic "async_inference_config" {
for_each = var.async_inference_config == null ? [] : [true]
content {
output_config {
s3_output_path = lookup(var.async_inference_config, "s3_output_path", null)
}
client_config {
max_concurrent_invocations_per_instance = lookup(var.async_inference_config, "max_concurrent_invocations_per_instance", null)
}
}
}
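For reference, the null check above assumes async_inference_config is declared as a nullable variable. A minimal sketch of such a declaration (the loose any type keeps the lookup calls above working; the description is an assumption, not from the original module):
variable "async_inference_config" {
  description = "Optional async inference settings; leave null for a synchronous endpoint."
  type        = any
  default     = null # omitting the variable keeps the endpoint synchronous
}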
You could use a dynamic block [1] in combination with the for_each meta-argument [2]. It would look something like this:
dynamic "async_inference_config" {
for_each = var.s3_output_path != null && var.max_concurrent_invocations_per_instance != null ? [1] : []
content {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
}
Of course, you could come up with a different variable, say enable_async_inference_config (probably of type bool), and base the for_each on that, e.g.:
dynamic "async_inference_config" {
for_each = var.enable_async_inference_config ? [1] : []
content {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
}
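A minimal declaration for that flag could be the following (name and default are illustrative):
variable "enable_async_inference_config" {
  description = "Whether to render the async_inference_config block."
  type        = bool
  default     = false
}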
[1] https://www.terraform.io/language/expressions/dynamic-blocks
[2] https://www.terraform.io/language/meta-arguments/for_each

How can I add domain names to my Terraform configuration

I want to add a domain to the listener rule in addition to paths. What arguments should I use for this?
resource "aws_alb_listener_rule" "service" {
listener_arn = var.alb_listener_arn
action {
type = "forward"
target_group_arn = aws_alb_target_group.service.arn
}
condition {
path_pattern {
values = ["/login", "/logout"]
}
}
Thank you.
The domain name is specified using host_header:
Contains a single values item which is a list of host header patterns to match.
An example usage from the docs:
condition {
  host_header {
    values = ["my-service.*.terraform.io"]
  }
}
Thanks. This worked.
condition {
  path_pattern {
    values = ["/login", "/logout"]
  }
}
condition {
  host_header {
    values = ["my-service.*.terraform.io"]
  }
}
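For completeness, the full listener rule with both conditions might look like the sketch below, based on the resource from the question; multiple condition blocks on the same rule must all match:
resource "aws_alb_listener_rule" "service" {
  listener_arn = var.alb_listener_arn
  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.service.arn
  }
  # Both conditions must match for the rule to forward the request.
  condition {
    path_pattern {
      values = ["/login", "/logout"]
    }
  }
  condition {
    host_header {
      values = ["my-service.*.terraform.io"]
    }
  }
}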

Akka Cluster sharding not able to register to Coordinator

I am trying to create an Akka sharding cluster. I want to use proxy-only mode on one of the nodes, just to route messages to the shard regions. I am getting the following warning:
[WARN] [02/11/2019 17:04:17.819] [ClusterSystem-akka.actor.default-dispatcher-21] [akka.tcp://ClusterSystem#127.0.0.1:2555/system/sharding/ShardnameProxy] Trying to register to coordinator at [Some(ActorSelection[Anchor(akka.tcp://ClusterSystem#127.0.0.1:2551/), Path(/system/sharding/ShardnameCoordinator/singleton/coordinator)])], but no acknowledgement. Total [1] buffered messages.
**Main.scala:** Starts the cluster using the configuration from application.conf (code added later).
object Main {
  val shardName = "Shardname"
  val role = "Master"
  var shardingProbeLocalRegin: Option[ActorRef] = None

  def main(args: Array[String]): Unit = {
    val conf = ConfigFactory.load()
    val system = ActorSystem("ClusterSystem", conf.getConfig("main"))
    ClusterSharding(system).start(shardName, Test.props, ClusterShardingSettings(system), ShardDetails.extractEntityId, ShardDetails.extractShardId)
  }
}
Test.scala: Entity for the sharding cluster.
object Test {
  def props: Props = Props(classOf[Test])

  class Test extends Actor {
    val log = Logger.getLogger(getClass.getName)

    override def receive = {
      case msg: String =>
        log.info("Message from " + sender().path.toString + " Message is " + msg)
        sender() ! "Done"
    }
  }
}
MessageProducer.scala (proxy-only mode): The message producer sends a message to the shard every second.
object MessageProducer {
  var shardingProbeLocalRegin: Option[ActorRef] = None
  object DoSharding
  def prop: Props = Props(classOf[MessageProducer])
  var numeric: Long = 0

  def main(args: Array[String]): Unit = {
    val conf = ConfigFactory.load
    val system = ActorSystem("ClusterSystem", conf.getConfig("messageProducer"))
    ClusterSharding(system).startProxy(Main.shardName, None, extractEntityId, extractShardId)
    shardingProbeLocalRegin = Some(ClusterSharding(system).shardRegion(Main.shardName))
    val actor = system.actorOf(Props[MessageProducer], "message")
  }
}

class RemoteAddressExtensionImpl(system: ExtendedActorSystem) extends Extension {
  def address = system.provider.getDefaultAddress
}

object RemoteAddressExtension extends ExtensionKey[RemoteAddressExtensionImpl]

class MessageProducer extends Actor {
  val log = Logger.getLogger(getClass.getName)

  override def preStart(): Unit = {
    println("Starting " + self.path.address)
    context.system.scheduler.schedule(10 seconds, 1 second, self, DoSharding)
  }

  override def receive = {
    case DoSharding =>
      log.info("sending message" + MessageProducer.numeric)
      MessageProducer.shardingProbeLocalRegin.foreach(_ ! "" + MessageProducer.numeric)
      MessageProducer.numeric += 1
  }
}
**application.conf:** Configuration file
main {
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      log-remote-lifecycle-events = on
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2551
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem#127.0.0.1:2551"
      ]
      sharding.state-store-mode = ddata
      auto-down-unreachable-after = 1s
    }
    akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
  }
}
messageProducer {
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      log-remote-lifecycle-events = on
      netty.tcp {
        hostname = "192.168.2.96"
        port = 2554
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem#127.0.0.1:2551"
        //, "akka.tcp://ClusterSystem#127.0.0.1:2552"
      ]
      sharding.state-store-mode = ddata
      auto-down-unreachable-after = 1s
    }
    akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
  }
}
Am I doing anything wrong? Is there another way to apply this approach? My main aim is to avoid a single point of failure in my cluster: if any node goes down, it should not affect any other node's state. Can anyone help me with this?
Is it solved?
If not, please check your akka.cluster configuration.
You have to set the config like this; it works for me.
For the proxy:
akka.cluster {
  roles = ["Proxy"]
  sharding {
    role = "Master"
  }
}
For the master:
akka.cluster {
  roles = ["Master"]
  sharding {
    role = "Master"
  }
}
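With the roles configured this way, the proxy's start call should also point at the role that hosts the shard regions. A sketch of the adjusted call in MessageProducer, assuming the role names from the config above:
// Pass the role hosting the shard regions instead of None, so the proxy
// knows which nodes run the ShardnameCoordinator it registers with.
ClusterSharding(system).startProxy(
  Main.shardName,
  Some("Master"),
  ShardDetails.extractEntityId,
  ShardDetails.extractShardId
)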

How do I populate sourceInfo in an SSM Association using Terraform

I am building a very basic Systems Manager association in Terraform, but I do not understand what the sourceInfo field is asking for. It requires a string, but even simple strings like "test" cause it to reject the input.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{"owner":"awslabs","repository":"amazon-ssm","path":"Compliance/InSpec/PortCheck","getOptions":"branch:master"}"
#^this line doesn't work
#sourceInfo = "test"
#^this line doesn't work either
}
}
Instead of escaping all of your strings you could also use the jsonencode function to turn a map into the JSON you want:
locals {
  source_info = {
    owner      = "awslabs"
    repository = "amazon-ssm"
    path       = "Compliance/InSpec/PortCheck"
    getOptions = "branch:master"
  }
}
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "${jsonencode(local.source_info)}"
}
}
I wasn't aware that sourceInfo expects all the inner double quotes to be escaped, or it won't work.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{\"owner\":\"awslabs\",\"repository\":\"amazon-ssm\",\"path\":\"Compliance/InSpec/PortCheck\",\"getOptions\":\"branch:master\"}"
}
}
There is a mistake in the code shared: there should be no equals sign after targets, but there should be one after parameters. The correct syntax of the resource is:
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets {
key = "tag:os"
values = ["linux"]
}
parameters = {
sourceType = "GitHub"
sourceInfo = "${jsonencode(local.source_info)}"
}
}