How to organize Istio rate limiting policy configuration

Let me present a use case: various independent teams deploying to a single cluster.
Team A is responsible for services A1, A2, A3.
Team B is responsible for services B1, B2, B3.
They both intend to apply custom rate limiting to their services. My question is:
Should/could they both define completely different config (consisting of the following five YAML specs), or is part of it common? At the very least, I'm guessing the "quota instance" could be shared while the rest could be different.
From the Istio site (https://istio.io/docs/tasks/policy-enforcement/rate-limiting/#rate-limits), the config is split into:
Client Side
QuotaSpec defines quota name and amount that the client should request.
QuotaSpecBinding conditionally associates QuotaSpec with one or more services.
Mixer Side
quota instance defines how quota is dimensioned by Mixer.
memquota handler defines memquota adapter configuration.
quota rule defines when quota instance is dispatched to the memquota adapter.

As far as I'm concerned, you should be able to do that.
You should start by creating a single Redis handler and a simple rule that limits based on a single dimension, try applying that to the services from team A, and then create a similar rule that applies to team B's services.
Should/could they both define completely different config (consisting of the following five YAML specs), or is part of it common?
Handler - a single one, to connect to Redis or to keep all the rate-limiting info in one place.
Instance - this can be a single one, or there can be two, depending on the use case.
QuotaSpec - as many as there are instances defined.
QuotaSpecBinding - two, or even more.
Rule - depends on the use case; you'll probably start with one, but may eventually use two or more.
If you want to rate limit based on the same dimensions, one shared instance is enough.
A single instance is possible unless the teams want to use different dimensions; then you would obviously need two separate quota instances.
The Istio documentation also describes namespace tenancy; if you like, you could create two separate namespaces, one for team A and one for team B.
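To make this concrete, here is a minimal sketch of a shared handler plus a team-specific instance, QuotaSpec, QuotaSpecBinding, and rule, in the Mixer-era config.istio.io/v1alpha2 style of the linked task. All names, amounts, dimensions, labels, and namespaces are illustrative assumptions; team B would mirror the team-A pieces:

```yaml
# Shared handler: one place (memquota here; redisquota for a Redis backend)
# that holds the quota definitions for both teams.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: teama-quota.instance.istio-system   # assumed instance name
      maxAmount: 500
      validDuration: 1s
---
# Team A's quota instance: defines the dimensions quota is counted by.
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: teama-quota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | "unknown"
---
# Client-side QuotaSpec: how much a client charges per request.
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: teama-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: teama-quota
---
# Binds the QuotaSpec to team A's services (service name/namespace assumed).
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: teama-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: teama-count
    namespace: istio-system
  services:
  - name: a1
    namespace: team-a
---
# Rule: dispatch team A's instance to the shared handler.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: teama-quota-rule
  namespace: istio-system
spec:
  actions:
  - handler: quotahandler
    instances:
    - teama-quota
```

If both teams can live with the same dimensions, they could instead share a single instance and differ only in their QuotaSpec/QuotaSpecBinding, as suggested above.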

AWS network firewall with Suricata rules

I'm looking into implementing AWS Network Firewall with Suricata IPS rules, and I find it really hard to find real examples and ideas of what is relevant regarding rules etc. Our customer puts emphasis on IPS, IDS, and anti-malware.
My setup today is Internet Gateway -> Application Load Balancer -> auto-scaling ECS containers. Correct me if I'm wrong, but the firewall fits in between the IGW and the ALB?
I have spent some time staring at the rule group creation screen in the AWS console, and my initial questions are:
How do I determine what rules are applicable to me?
What is "Capacity" really?
Starting with number one, I believe the rules I can choose from are listed here, and initially I thought that I surely want to use all of the ~30k rules they supply. Thinking about it a bit more, I assume that might affect responsiveness for our end users. So, if I'm thinking IPS, which rule sets are necessary for a web solution with ports 80 and 443 open to the public? The file containing all the "emerging" rules lists about 30k rules, but I hardly think all of them are relevant to me.
Regarding point two, Capacity, Amazon states the following as an explanation:
Maximum processing capacity allowed for the rule group. Estimate the stateful rule group’s capacity requirement as the number of rules you expect to add. You can’t change or exceed this setting when you update the rule group.
Initially I thought that "one capacity" refers to one line (one rule in any rule set), but I later understood that one line itself might require up to 450 "capacity" (I've lost the link where I read/interpreted this).
I understand that this subject is huge, and I'm somewhat of a rookie when it comes to firewalls, but can anyone enlighten me how to approach this? I feel as if I'm not certain what I'm asking about, so please let me know if I need to clarify anything.
I have recently developed an integration between IDSTower (a Suricata & rule-management solution) and AWS Network Firewall, so I can relate to the confusion :)
How do I determine what rules are applicable to me?
The starting point should be the services you are protecting; once you know that, things will be easier. ET Open/Suricata rules can be grouped in different ways: they are published in different files (e.g. emerging-smtp.rules, emerging-sql.rules, etc.) and contain a classtype that classifies the rule (e.g. bad-unknown, misc-attack, etc.), as well as metadata like tags, signature_severity, etc.
Another important thing to point out here is that AWS Network Firewall limits the size of the uploaded rules (in a single stateful rule group) to 2 MB, which will force you to pick and choose your rules.
There are several approaches to deciding which rules to enable:
Using the grouping of rules explained above, start by enabling a small subset, monitor the output, adjust/tune, and enable another subset, until you cover your services; start small and grow the set of enabled rules.
Enable all of the rules (in IDS mode) and assess the alerts, disabling/tuning noisy or useless ones until you reach a state of confidence.
Enable rules that monitor the protocols your system speaks; if you are protecting HTTP-based web services, start by enabling rules that monitor the HTTP protocol ('alert http ...').
If you are applying the above to a production environment, make sure you start with alert-only rules, and once you have removed the false positives you can switch them to drop.
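To illustrate what such an alert-only HTTP rule looks like, here is a hypothetical example (the content match, message, and sid are made up for illustration, not taken from any ET ruleset):

```
# Alert-only (not drop) rule: flag an HTTP User-Agent associated with scanners.
alert http any any -> $HOME_NET any (msg:"Example suspicious User-Agent"; \
    http.user_agent; content:"sqlmap"; nocase; \
    classtype:web-application-attack; sid:1000001; rev:1;)
```

Once a rule like this has proven itself free of false positives in production, the action can be changed from alert to drop.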
What is "Capacity" really?
AWS uses the capacity setting to make sure your cloud Suricata instance can deliver the promised performance, which is largely influenced by the number of enabled rules.
A single stateful rule consumes 1 capacity unit.
Initially I thought that "one capacity" refers to one line (one rule in any rule set), but I later understood that one line itself might require up to 450 "capacity" (I've lost the link where I read/interpreted this).
Yes, Suricata rules (which are stateful rules in the AWS Network Firewall world) consume 1 capacity point per rule line; however, for stateless rules a single rule can consume more, depending on its protocols, sources, and destinations, as mentioned in the AWS docs:
A rule with a protocol that specifies 30 different protocols, a source with 3 settings, a destination with 5 settings, and single or no specifications for the other match settings has a capacity requirement of (30*3*5) = 450.
Here is the AWS Network Firewall Docs link

Related to traefik's container call restriction

I am a beginner.
I built a service on Amazon ECS using Traefik.
https://traefik.io/
I have DNS configured with the Route 53 service.
For example, I set it up like *.test.com.
So I can run it like a.test.com or b.test.com on the web.
But I want to limit the calls per month for each Docker container.
All requests are made through Traefik.
So I think Traefik can handle it.
What part of Traefik can I use?
I would appreciate any information you may have.
I can't comment yet, so:
Some questions for you:
If you limit the number of calls per month, what happens if many calls in a certain week reach the set threshold?
Does your VoIP service support limiting calls per hour or per day from the same user/source?
A reverse proxy will limit calls for active sessions at a given time, which can help you achieve limiting the number of calls, depending on the protocol in use.
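For the request-rate part specifically, Traefik v2 ships a rateLimit middleware, but it works on second-scale windows (average/burst), not monthly quotas; a per-month cap would need accounting outside Traefik. Here is a sketch as Docker labels, where the router, middleware, and image names are assumptions:

```yaml
# Hypothetical docker-compose labels: cap one container's traffic via Traefik v2.
services:
  a:
    image: my-service:latest   # illustrative image
    labels:
      - "traefik.http.routers.a.rule=Host(`a.test.com`)"
      # Allow ~100 req/s on average, with bursts of up to 50 extra requests.
      - "traefik.http.middlewares.a-ratelimit.ratelimit.average=100"
      - "traefik.http.middlewares.a-ratelimit.ratelimit.burst=50"
      - "traefik.http.routers.a.middlewares=a-ratelimit"
```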

Is it possible to create a Siebel inbound web service based on a workflow with more than one operation?

I have a requirement to publish a Siebel inbound web service with only one port; at the same time, the WS has to receive three different operations.
My web services are based on workflows.
As I read in the Bookshelf, only one operation can be added to a single port of a WS based on a WF:
https://docs.oracle.com/cd/E14004_01/books/CRMWeb/CRMWeb_Overview12.html
(see p.5)
However, I've found a vanilla WS that looks like what I need:
FinancialAssetService
Could anyone give me some tips on how to create such a WS?
Is it possible to receive different IOs through the different operations of this WS?
Thanks in advance!
Well, if your web service provides 3 operations, you must be invoking 3 different workflows, right? (It says so in the page you linked: a workflow corresponds to a single Web service operation). Then, yes, you'll need to define 3 "service ports" in your web service.
However, I don't see why that would be a problem at all. I've never done this myself, but you can define the same endpoint URL and HTTP port for each one of the 3 service ports. The external application consuming your service would never notice any difference.
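In WSDL terms, that would mean three service ports whose soap:address elements all share one location. A schematic fragment follows; the service, port, and binding names and the Siebel EAI URL are made-up assumptions, not taken from the question:

```xml
<!-- Three ports, three operations/bindings, one shared endpoint URL. -->
<wsdl:service name="MyWorkflowService">
  <wsdl:port name="CreateOrderPort" binding="tns:CreateOrderBinding">
    <soap:address location="http://siebel.example.com/eai_enu/start.swe"/>
  </wsdl:port>
  <wsdl:port name="UpdateOrderPort" binding="tns:UpdateOrderBinding">
    <soap:address location="http://siebel.example.com/eai_enu/start.swe"/>
  </wsdl:port>
  <wsdl:port name="CancelOrderPort" binding="tns:CancelOrderBinding">
    <soap:address location="http://siebel.example.com/eai_enu/start.swe"/>
  </wsdl:port>
</wsdl:service>
```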
As for your second question, yes, having 3 different workflows would obviously allow you to choose different integration objects for each operation.
On the other hand, if you only have one workflow and need 3 operations because you want it to accept different input structures, then you might want to rethink your solution. Perhaps create 3 tiny workflows (or a BS with 3 operations) to just transform the data to a common IO (using Siebel data mappings), and then pass it to your existing WF.

Microservices service registry registration and discovery

A little domain presentation
I currently have two microservices:
User - manages CRUD on users
Billings - manages CRUD on billings, with a "reference" to the user concerned by the billing
Explanation
When a billing is requested over HTTP, I need to send the full billing object with the user loaded. In this case, and in this specific case, I really need this.
At first I looked around, and it seemed like a good idea to use message queuing for asynchronicity, so the billing service can send on a queue:
"Who is the user with the id 123456? I need to load it."
So my two services could exchange data without really knowing each other, and without knowing each other's "location".
Problems
My first question is: what is the aim of using a service registry in this case? The message queue is able to give us the information without knowing anything at all about the user service's location, no?
When do we need to use service registration?
In the case of the Aggregator pattern with a RESTful API, we can navigate through HATEOAS links. In the case of the Proxy pattern, maybe? When the microservices are fronted by another service?
Assume now that we use the Proxy pattern, with a "frontal service". In this case, it's okay for me to use service registration. But does it mean that the front-end service knows the names of the user service and the billing service in the service registry? Example:
Service User registers as "UserServiceOfHell:http://80.80.80.80/v1/" on ZooKeeper
Service Billing registers as "BillingService:http://90.90.90.90/v4.3/" on ZooKeeper
The front-end service needs to send some requests to the user and billing services, which implies that it needs to know that the user service is "UserServiceOfHell". Is this defined at the beginning of the project?
Last question: can we use multiple microservice patterns in one microservices architecture, or is that bad practice?
NB : Everything I ask is based on http://blog.arungupta.me/microservice-design-patterns/
A lot of good questions!
First of all, I want to answer your last question: multiple patterns are OK when you know what you're doing. It's fine to mix asynchronous queues, HTTP calls, and even binary RPC; it depends on your consistency, availability, and performance requirements. Sometimes you'll see a good fit for simple pub/sub, and sometimes you'll need a distributed lock; microservices are different.
Your example is simple: two microservices need to exchange some information. You chose an asynchronous queue; fine, in that case they don't really need to know about each other. Queues don't require any discovery between consumers.
But we need service discovery in other cases! Take backing services, for example: databases, caches, and actually queues as well. Without service discovery you have probably hardcoded the URL to your queue, and if it goes down you have nothing. For high availability you need, for example, a cluster of nodes replicating your queue. When you add a new node, or an existing node crashes, you should not have to change anything; the service discovery tool should detect that and update the registry.
Consul is a perfect modern service discovery tool: you can just use a custom DNS name to access your backing services, and Consul will perform constant health checks and keep your cluster healthy.
The same rule applies to microservices: when you have a cluster running service A and you need to access it from service B without any queues (for example, with an HTTP call), you have to use service discovery to be sure that the endpoint you use will bring you to a healthy node. So it's a perfect fit for the Aggregator or Proxy patterns from the article you mentioned.
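To make the Consul flow concrete, a minimal (hypothetical) service definition could register the billing service with a health check; the service name, port, and health path here are assumptions:

```json
{
  "service": {
    "name": "billing",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

With this loaded by the local Consul agent, service B could resolve a healthy billing instance through Consul's DNS interface, e.g. `dig @127.0.0.1 -p 8600 billing.service.consul SRV`; only instances passing their health check are returned.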
Probably most of the confusion is caused by the fact that you see "hardcoded" URLs in ZooKeeper and think you need to manage them manually. Modern tools like Consul or etcd allow you to avoid that headache and just rely on them. It's actually achievable with ZooKeeper as well, but it'll require more time and resources to get a similar setup.
PS: please remember the most important rule in microservices: http://martinfowler.com/bliki/MonolithFirst.html

Akka clustering - force actors to stay on specific machines

I've got an Akka application that I will be deploying on many machines. I want each of these applications to communicate with the others by using the distributed publish/subscribe event bus features.
However, if I set the system up for clustering, then I am worried that actors for one application may be created on a different node from the one they started on.
It's really important that an actor is only created on the machine on which the application it belongs to was started.
Basically, I don't want the elasticity or the clustering of actors; I just want the distributed pub/sub. I can see options like singletons or roles mentioned here (http://letitcrash.com/tagged/spotlight22), but I wondered what the recommended way to do this is.
There is currently no feature in Akka which would move your actors around: either you programmatically deploy to a specific machine or you put the deployment into the configuration file; otherwise an actor is created locally, which is what you want.
(Akka may one day get automatic actor tree partitioning, but that is not even specified yet.)
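To show what "local actors, cluster-wide pub/sub" could look like in practice, here is a minimal sketch using Akka's DistributedPubSub mediator. It assumes akka-cluster-tools on the classpath and akka.actor.provider = "cluster" in application.conf; the topic and actor names are made up:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.pubsub.{DistributedPubSub, DistributedPubSubMediator}

// Started on every machine; this actor lives only on the node that created it.
class LocalSubscriber extends Actor {
  import DistributedPubSubMediator.{Subscribe, SubscribeAck}

  // The mediator is a cluster-wide registry; subscribing never moves the actor.
  DistributedPubSub(context.system).mediator ! Subscribe("events", self)

  def receive = {
    case SubscribeAck(Subscribe("events", None, `self`)) =>
      context.system.log.info("Subscribed on this node only")
    case msg =>
      context.system.log.info(s"Received: $msg")
  }
}

object Main extends App {
  val system = ActorSystem("pubsub-demo")
  system.actorOf(Props[LocalSubscriber], "local-subscriber")
}
```

Publishing from any node with DistributedPubSubMediator.Publish("events", payload) then reaches every locally created subscriber, without any actor migration.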
I don't think this is the best way to use elastic clustering, but we have also been considering the same issue, and found that it can be useful to spread actors over the nodes by a hash of the entity id (like database shards). For example, on each node we create one NodeRouterActor that proxies messages to multiple WorkerActors. When we send a message to a NodeRouterActor, it selects the endpoint node by looking it up in a hash table, by key id % nodeCount; the endpoint NodeRouterActor then proxies the message to the specific WorkerActor which controls the entity.
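The node-selection step this answer describes might look like the following sketch (the function and node names are invented for illustration):

```scala
// Pick the node responsible for an entity by hashing its id, shard-style.
def targetNode(entityId: Long, nodes: Vector[String]): String =
  nodes((entityId % nodes.size).toInt)

// e.g. targetNode(123456L, Vector("nodeA", "nodeB", "nodeC")) == "nodeA"
```

Note that a plain modulo reshuffles almost every entity when nodeCount changes; consistent hashing (which Akka offers via its consistent-hashing routers) avoids that.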