WSO2 cluster configuration deployment synchronization filter

In a cluster configuration, every modification is propagated from the manager to the workers. I have a scheduled task which has to run only on the manager.
How can I stop the synchronization with the workers?

Deployment synchronization does not happen as a scheduled task. Once an artifact is uploaded to the manager node, the manager sends a cluster message to all other workers indicating that the repository has been updated. Once the workers receive the message, they update their repositories. So if you want to disable the workers' dep-sync, you need to disable the DeploymentSynchronizer in carbon.xml (repository/conf/):
<DeploymentSynchronizer>
<Enabled>false</Enabled>
...
</DeploymentSynchronizer>
Please refer to this for more details.

Related

Port error while executing update on CloudFormation

I changed some environment variables in the task definition part and executed the change set.
The task definition got updated successfully, but the service update got stuck in CloudFormation.
On checking the events in the cluster, I found that it is adding a new task, but the old one is still running and consuming the port, so it is stuck. What can be done to resolve this? I can always delete the stack and run the CloudFormation script again, but I need to create a pipeline, so I want the stack update to work.
This UPDATE_IN_PROGRESS will take around 3 hours, until the DescribeService API call times out.
If you can't wait, then you need to manually force the Amazon ECS service resource in AWS CloudFormation into a CREATE_COMPLETE state by
setting the desired count of the service to zero in the Amazon ECS console to stop the running tasks. AWS CloudFormation then considers the update successful, because the number of running tasks equals the desired count of zero.
These knowledge-center articles explain the cause of the message and its fix in detail.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-ecs-service-stabilize/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-service-stuck-update-status/?nc1=h_ls
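If you prefer to script that console step, a minimal boto3 sketch that sets the desired count to zero might look like this (the cluster and service names are placeholders):
import boto3

ecs = boto3.client("ecs")

# Placeholder names; replace with your actual cluster and service.
CLUSTER = "my-cluster"
SERVICE = "my-service"

# Setting desiredCount to 0 stops the running tasks, so CloudFormation
# sees running tasks == desired count and considers the update successful.
ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=0)

# Optionally wait until the service has drained its tasks.
ecs.get_waiter("services_stable").wait(cluster=CLUSTER, services=[SERVICE])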

How to change the log level in WSO2 API Manager 3.1.0 in a container environment?

Is there a way to dynamically change the log levels in API Manager in a containerized environment, where the user cannot log in and change the values in log4j2.properties? I am aware that log4j2.properties gets hot deployed when we make changes, but how can we do the same in a Docker/Kubernetes scenario?
There are multiple options to enable logs in running servers. For a server running in a container, you can take one of the following actions:
Access the required containers/pods and make the changes to log4j2.properties (a sketch for this option follows the references below)
Respawn a new cluster with a modified log4j2.properties
Configure the logs of each node by accessing the management console (3.1.0 WUM only) [1]
Configure logs per API in each pod by using the REST API (3.1.0 WUM only) [2]
[1] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging/#enable-logs-for-a-component-via-the-ui
[2] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging-per-api/
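If log4j2.properties happens to be mounted into the pods from a ConfigMap (an assumption about your deployment, not something the APIM images do by default), one way to apply the first option without exec-ing into each pod is to patch the ConfigMap and let the hot deployment pick up the change. A rough sketch with the Python Kubernetes client, using hypothetical names:
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

# Hypothetical ConfigMap/namespace that holds log4j2.properties.
cm = v1.read_namespaced_config_map(name="apim-log4j2", namespace="wso2")
props = cm.data["log4j2.properties"]

# Flip a logger's level; the exact property key depends on the component you want to trace.
props = props.replace("logger.org-apache-synapse.level = INFO",   # hypothetical existing entry
                      "logger.org-apache-synapse.level = DEBUG")

v1.patch_namespaced_config_map(name="apim-log4j2", namespace="wso2",
                               body={"data": {"log4j2.properties": props}})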

Kubernetes: Get mail once deployment is done

Is there a way to get a post-deployment mail in Kubernetes on GCP/AWS?
It has become harder to maintain deployments on Kubernetes as the deployment team grows. Having a post-deployment mail service would ease the process, as it would also say who applied the deployment.
You could try to watch deployment events using https://github.com/bitnami-labs/kubewatch and a webhook handler.
Another option is to implement a customized solution with the Kubernetes API, for instance in Python (https://github.com/kubernetes-client/python), and run it as a separate notification pod in your cluster; see the sketch below.
A third option is to manage the deployment in a CI/CD pipeline where the actual deployment execution step is an "approval" type step; you can then see which user approved it, and the next step in the pipeline after approval could be the email notification.
Approval in CircleCI: https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval
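For the second option, a minimal sketch with the official Python client could look like the following; the namespace and the send_mail() helper are placeholders, not part of any library:
from kubernetes import client, config, watch

config.load_incluster_config()  # use config.load_kube_config() outside the cluster
apps = client.AppsV1Api()

w = watch.Watch()
# Stream deployment events in the "default" namespace (placeholder).
for event in w.stream(apps.list_namespaced_deployment, namespace="default"):
    dep = event["object"]
    ready = dep.status.ready_replicas or 0
    if event["type"] == "MODIFIED" and ready == dep.spec.replicas:
        # send_mail() is a hypothetical helper; plug in your SMTP/SES code here.
        send_mail(f"Deployment {dep.metadata.name} rolled out successfully")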
I don’t think such a feature is built into Kubernetes.
There is a watch mechanism, though, which you could use. Run the following GET query:
https://<api-server-url>/apis/apps/v1/namespaces/<namespace>/deployments?watch=true
The connection will not close, and you’ll get a “notification” about each deployment change; check the status fields. Then you can send the mail or do something else.
You’ll need to pass an authorization token to gain access to the API server. If you have kubectl set up, you can run a local proxy, which then won’t need the token: kubectl proxy.
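As a rough illustration of that endpoint, assuming kubectl proxy is running on its default port 8001, a small Python script can stream the watch events and check the deployment status:
import json
import requests

# kubectl proxy exposes the API server on localhost:8001 without a token.
url = "http://localhost:8001/apis/apps/v1/namespaces/default/deployments"
resp = requests.get(url, params={"watch": "true"}, stream=True)

for line in resp.iter_lines():
    if not line:
        continue
    event = json.loads(line)          # one JSON event per line: {"type": ..., "object": ...}
    dep = event["object"]
    status = dep.get("status", {})
    if status.get("readyReplicas") == dep.get("spec", {}).get("replicas"):
        # A finished rollout: send the mail here (SMTP, SES, webhook, ...).
        print("Deployment", dep["metadata"]["name"], "is ready")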
You can attach handlers to container lifecycle events. Kubernetes supports preStop and postStart events; Kubernetes runs the postStart handler immediately after the container is started. Here is a snippet of the pod spec in the deployment manifest:
spec:
  containers:
  - name: <******>
    image: <******>
    lifecycle:
      postStart:
        exec:
          command: [********]
Considering GCP, one option could be to create a filter to capture the info about your deployment finalization in Stackdriver Logging, and from that filter use the CREATE METRIC option, also in Stackdriver Logging.
With the metric created, use Stackdriver Monitoring to create an alert that sends e-mails. More details are in the official documentation.
It looks like no one has mentioned the "native tool" Kubernetes provides for this yet.
Please note that there is a concept of Audit in Kubernetes.
It provides a security-relevant, chronological set of records documenting the sequence of activities that have affected the system, whether initiated by individual users, administrators or other components of the system.
Each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and processed by a certain backend.
That allows a cluster administrator to answer the following questions:
what happened?
when did it happen?
who initiated it?
on what did it happen?
where was it observed?
from where was it initiated?
to where was it going?
The administrator can specify which events should be recorded and what data they should include with the help of an audit policy.
There are a few backends that persist audit events to external storage:
Log backend, which writes events to disk
Webhook backend, which sends events to an external API
Dynamic backend, which configures webhook backends through an AuditSink API object
If you use the log backend, it is possible to collect the data with tools such as fluentd. With that data you can achieve more than just a post-deployment mail in Kubernetes.
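As a rough illustration of the log backend, assuming audit events are written as JSON lines to /var/log/kubernetes/audit.log (the path is a placeholder; it is set by the kube-apiserver --audit-log-path flag), a small script can pick out deployment changes and who initiated them:
import json

AUDIT_LOG = "/var/log/kubernetes/audit.log"  # placeholder; set by --audit-log-path

with open(AUDIT_LOG) as f:
    for line in f:
        event = json.loads(line)
        ref = event.get("objectRef", {})
        # Each audit event records the verb, the resource and the user who initiated it.
        if ref.get("resource") == "deployments" and event.get("verb") in ("create", "update", "patch"):
            user = event.get("user", {}).get("username", "unknown")
            print(f"{user} changed deployment {ref.get('name')} ({event['verb']}) - send the mail here")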
Hope that helps!

Using AWS ECS service tasks as disposable/consumable workers?

Right now I have a web app running on ECS and have a pretty convoluted method of running background jobs:
I have a single task service that polls an SQS queue. When it reads a message, it attempts to place the requested task on the cluster. If this fails due to lack of available resources, the service backs off/sleeps for a period before trying again.
What I'd like to move to instead is as follows:
Run a multi-task worker service. Each task periodically polls the queue. When a message is received, it runs the job itself (as opposed to trying to schedule a new task) and then exits. The AWS service scheduler would then replenish the service with a new task. This is analogous to gunicorn's prefork model.
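For illustration, each worker task would essentially be a small poll-and-exit script; a boto3 sketch (the queue URL and run_job() handler are placeholders) could look like this:
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

while True:
    # Long-poll the queue for up to 20 seconds.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    messages = resp.get("Messages", [])
    if not messages:
        continue
    msg = messages[0]
    run_job(msg["Body"])  # hypothetical in-process job handler
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    break  # exit so the ECS service scheduler replaces this task with a fresh one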
My only concern is that I may be abusing the concept of services: are planned and frequent service task exits well supported, or should service tasks only exit when something bad happens, like an error?
Thanks

Celery Beat on Amazon ECS

I am using Amazon Web Services ECS (Elastic Container Service).
My task definition contains Application + Redis + Celery containers. Automatic scaling is set up, so at the moment there are three instances with the same mirrored infrastructure. However, there is a demand for scheduled tasks, and Celery Beat would be a great tool for that, since Celery is already in my infrastructure.
But here is the problem: if I add a Celery Beat container together with the other containers (i.e. add it to the task definition), it will be mirrored and multiple instances will execute the same scheduled tasks at the same moment. What would be a solution to this infrastructure problem? Should I create a separate service?
We use single-beat to solve this problem and it works like a charm:
Single-beat is a nice little application that ensures only one instance of your process runs across your servers.
Such as celerybeat (or some kind of daily mail sender, orphan file cleaner etc...) needs to be running only on one server, but if that server gets down, well, you go and start it at another server etc.
You should still set the number of desired tasks for the service to 1.
You can use an ECS task placement strategy to place your Celery Beat task and choose "One Task Per Host". Make sure to set the desired count to 1. In this way, your Celery Beat task will run in only one container in your cluster.
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html
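For example, a boto3 sketch for such a dedicated beat service (all names are placeholders) might look like the following; the distinctInstance constraint is what the console's "One Task Per Host" option maps to:
import boto3

ecs = boto3.client("ecs")

# A separate service that runs only the beat container, kept at exactly one task.
ecs.create_service(
    cluster="my-cluster",                 # placeholder
    serviceName="celery-beat",            # placeholder
    taskDefinition="celery-beat-task:1",  # placeholder task definition with the beat container
    desiredCount=1,
    placementConstraints=[{"type": "distinctInstance"}],
)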
The desired count is the number of tasks you want to run in the cluster. You may set the "Number of tasks" while configuring the service or in the Run Task section. You may refer to the links below for reference.
Configuring service:
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Run Task:
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html
Let me know if you find any issue with it.