Is there a way to send a post-deployment mail in Kubernetes on GCP/AWS?
It has become harder to maintain deployments on Kubernetes as the deployment team grows. A post-deployment mail service would ease the process, since it would also record who applied the deployment.
You could try watching deployment events using https://github.com/bitnami-labs/kubewatch and a webhook handler.
Another option is implementing a customized solution with the Kubernetes API, for instance in Python (https://github.com/kubernetes-client/python), and running it as a separate notification pod in your cluster (a short sketch follows below).
A third option is to manage the deployment in a CI/CD pipeline where the actual deployment execution step is an "approval" type: you can see which user approved it, and the next step in the pipeline after approval could be the e-mail notification.
Approval in CircleCI: https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval
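For the second option, here is a minimal sketch using the Kubernetes Python client. The SMTP relay and e-mail addresses are placeholders, and in-cluster configuration is assumed since it would run as a pod in the cluster:

import smtplib
from email.message import EmailMessage
from kubernetes import client, config, watch

config.load_incluster_config()  # use config.load_kube_config() when running outside the cluster
apps = client.AppsV1Api()

w = watch.Watch()
for event in w.stream(apps.list_deployment_for_all_namespaces):
    dep = event["object"]
    msg = EmailMessage()
    msg["Subject"] = f"Deployment {event['type']}: {dep.metadata.namespace}/{dep.metadata.name}"
    msg["From"] = "k8s-notify@example.com"   # placeholder sender
    msg["To"] = "team@example.com"           # placeholder recipient
    msg.set_content(f"Images: {[c.image for c in dep.spec.template.spec.containers]}")
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder SMTP relay
        smtp.send_message(msg)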
I don't think such a feature is built into Kubernetes.
There is a watch mechanism, though, which you could use. Run the following GET query:
https://<api-server-url>/apis/apps/v1/namespaces/<namespace>/deployments?watch=true
The connection will not close and you’ll get a “notification” about each deployment. Check the status fields. Then you can send the mail or do something else.
You'll need to pass an authorization token to gain access to the API server. If you have kubectl set up, you could run a local proxy (kubectl proxy), which then won't need the token.
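A rough sketch of consuming that watch stream in Python through kubectl proxy (assumes the proxy is listening on the default port 8001, the requests package is installed, and the "default" namespace is used as an example):

import json
import requests

url = "http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments"
with requests.get(url, params={"watch": "true"}, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)        # the watch returns one JSON event per line
        meta = event["object"]["metadata"]
        status = event["object"].get("status", {})
        print(event["type"], meta["namespace"], meta["name"], status.get("readyReplicas"))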
You can attach handlers to container lifecycle events. Kubernetes supports preStop and postStart hooks; the postStart hook runs immediately after the container is started. Here is a snippet of the pod manifest:
spec:
  containers:
  - name: <******>
    image: <******>
    lifecycle:
      postStart:
        exec:
          command: [********]
Considering GCP, one option could be to create a filter in Stackdriver Logging that captures the information about your deployment completion, and then use the CREATE METRIC option, also in Stackdriver Logging.
With the metric created, use Stackdriver Monitoring to create an alert that sends e-mails. More details in the official documentation.
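As a rough sketch, the same log-based metric could also be created programmatically with the google-cloud-logging Python client; the metric name and the filter below are only illustrative and would need to match whatever filter you built in Stackdriver Logging:

from google.cloud import logging

client = logging.Client()
deployment_filter = (
    'resource.type="k8s_cluster" AND '
    'protoPayload.methodName="io.k8s.apps.v1.deployments.create"'
)
metric = client.metric(
    "deployment-created",            # illustrative metric name
    filter_=deployment_filter,
    description="Counts Deployment create calls",
)
if not metric.exists():
    metric.create()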
It looks like no one has mentioned the "native tool" Kubernetes provides for that yet.
Please note that there is a concept of Audit in Kubernetes.
It provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system, whether by individual users, administrators, or other components of the system.
Each request on each stage of its execution generates an event, which is then pre-processed according to a certain policy and processed by certain backend.
That allows the cluster administrator to answer the following questions:
what happened?
when did it happen?
who initiated it?
on what did it happen?
where was it observed?
from where was it initiated?
to where was it going?
The administrator can specify which events should be recorded and what data they should include with the help of an audit policy.
There are a few backends that persist audit events to external storage:
Log backend, which writes events to a disk
Webhook backend, which sends events to an external API
Dynamic backend, which configures webhook backends through an AuditSink API object.
If you use the log backend, it is possible to collect the data with tools such as fluentd. With that data you can achieve more than just a post-deployment mail in Kubernetes.
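For the webhook backend, here is a minimal sketch of a receiver that could sit behind the configured webhook URL and react to Deployment changes. The port and the mail handling are illustrative assumptions; the API server POSTs a JSON EventList to the webhook:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AuditHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        for item in json.loads(body).get("items", []):
            ref = item.get("objectRef", {})
            if ref.get("resource") == "deployments" and item.get("verb") in ("create", "update", "patch"):
                user = item.get("user", {}).get("username", "unknown")
                print(f"{user} ran {item['verb']} on deployment {ref.get('namespace')}/{ref.get('name')}")
                # ...send the post-deployment mail here...
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), AuditHandler).serve_forever()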
Hope that helps!
We are currently deploying multiple instances (a front-end, a back-end, a database, etc.). All are deployed and configured using a CloudFormation script so we can deploy our solution quickly. One of our central servers has multiple connections to other services, and for some of them we expose very simple REST endpoints that reply with 200 or 500 depending on whether the server can connect to the other service or the database (a GET on /dbConnectionStatus, for example).
We would like to call those endpoints periodically and have a view of the results. A little bit like a health check, but without restarting the instance in case of trouble, and possibly with multiple endpoints to check per service.
Is there an AWS service that can achieve that? If not what alternative do you suggest?
AWS CloudWatch Synthetics (synthetic monitoring with canaries) can do what you want. By default it will just perform checks against your endpoints and log the success or failure, without triggering a redeployment or a replacement the way a load balancer health check would.
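As a rough sketch, a canary can be created and started with boto3; everything below (region, bucket, role ARN, handler name, runtime version) is a placeholder and would need to be replaced with your own values:

import boto3

synthetics = boto3.client("synthetics", region_name="eu-west-1")

synthetics.create_canary(
    Name="db-connection-check",                        # placeholder name
    Code={"S3Bucket": "my-canary-code", "S3Key": "canary.zip", "Handler": "canary.handler"},
    ArtifactS3Location="s3://my-canary-artifacts/",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/canary-role",
    Schedule={"Expression": "rate(5 minutes)"},
    RuntimeVersion="syn-python-selenium-1.3",          # check the currently supported runtimes
)
synthetics.start_canary(Name="db-connection-check")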
I'd like to measure the number of times a Docker image has been downloaded from a Google Artifact Registry repository in my GCP project.
Is this possible?
Interesting question.
I think this would be useful too.
I don't think there are any Monitoring metrics for this (no artifactregistry resource type is listed, nor are any metrics listed).
However, you can use Artifact Registry audit logs; you'll need to explicitly enable Data Access logs (see e.g. Docker-GetManifest).
NOTE: I'm unsure whether this can be achieved from gcloud.
Monitoring the developer tools, I learned that Audit Logs are configured in project policies using AuditConfigs. I still don't know whether this functionality is available through gcloud (anyone?), but evidently you can effect these changes directly using API calls, e.g. projects.setIamPolicy:
gcloud projects get-iam-policy ${PROJECT}

auditConfigs:
- auditLogConfigs:
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: artifactregistry.googleapis.com
bindings:
- members:
  - user:me
  role: roles/owner
etag: BwXanQS_YWg=
Then, pull something from the repo and query the logs:
PROJECT=[[YOUR-PROJECT]]
REGION=[[YOUR-REGION]]
REPO=[[YOUR-REPO]]
FILTER="
logName=\"projects/${PROJECT}/logs/cloudaudit.googleapis.com%2Fdata_access\"
protoPayload.methodName=\"Docker-GetManifest\"
"
gcloud logging read "${FILTER}" \
--project=${PROJECT} \
--format="value(timestamp,protoPayload.methodName)"
Yields:
2022-03-20T01:57:16.537400441Z Docker-GetManifest
You ought to be able to create a logs-based metric for these too.
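If you just want a quick count rather than a metric, here is a small sketch with the google-cloud-logging Python client using the same filter as above (the project ID is a placeholder, and Data Access logs must already be enabled):

from google.cloud import logging

PROJECT = "my-project"  # placeholder
client = logging.Client(project=PROJECT)
log_filter = (
    f'logName="projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.methodName="Docker-GetManifest"'
)
pulls = list(client.list_entries(filter_=log_filter))
print(f"{len(pulls)} manifest pulls recorded")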
We do not yet have platform logs for Artifact Registry unfortunately, so using the Cloud Audit Logs (CALs) is the only way to do this today. You can also turn the CALs into log-based metrics and get graphs and metrics that way.
The recommendation to filter by 'Docker-GetManifest' is also correct: it's the only request type of which a Docker pull always has exactly one. There will be a lot of other requests that are related but don't match 1:1. The logs will have all requests (Docker-Token, 0 or more layer pulls), including API requests like ListRepositories, which is called by the UI in every AR region when you load the page.
Unfortunately, the theory about public requests not appearing is correct. CALs are about logging authentication events, and when a request has no authentication whatsoever, CALs are not generated.
First-time asker.
So I've been trying to implement AWS CloudWatch to monitor disk usage on an EC2 instance running EC2 Linux. I'm interested in doing this just using the CloudWatch agent, and I've installed it according to the how-to found here. The install runs fine, and I've made sure I've created an IAM role for the instance as described here. Unfortunately, whenever I run amazon-cloudwatch-agent.service it only sends log files and not the custom used_percent measurement specified. I receive this error when I tail the logs:
2021-06-18T15:41:37Z E! WriteToCloudWatch failure, err: RequestError: send request failed
caused by: Post "https://monitoring.us-west-2.amazonaws.com/": dial tcp 172.17.1.25:443: i/o timeout
I've done my best googlefu but gotten nowhere thus far. If you've got any advice it would be appreciated.
Thank you
Belated answer to my own question. I had to create a security group that would accept traffic from that same security group!
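For reference, a sketch of that change with boto3; the group ID is a placeholder, and the idea is simply to allow HTTPS into the security group from the group itself:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the security group in question

ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": SG_ID}],  # allow HTTPS from the same group
    }],
)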
I had the same issue; it definitely wasn't a network restriction, as I was still able to telnet to the monitoring endpoint.
From AWS docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-iam-roles-for-cloudwatch-agent.html
One role or user enables CloudWatch agent to be installed on a server
and send metrics to CloudWatch. The other role or user is needed to
store your CloudWatch agent configuration in Systems Manager Parameter
Store. Parameter Store enables multiple servers to use one CloudWatch
agent configuration.
If you're using the default CloudWatch agent configuration wizard, you may need the extra CloudWatchAgentAdminRole policy in your role for the agent to connect to the monitoring service.
I have an application that is distributed over two AWS accounts.
One part of the application ingests data from one account into the other account.
The producer part is realised as python lambda microservices.
The consumer part is a spring-boot app in Elastic Beanstalk and additional Python lambdas that further distribute data to external systems after it has been processed by the spring-boot app in Elastic Beanstalk.
I don't have an explicit X-Ray daemon running anywhere.
I am wondering if it is possible to send the X-Ray traces of the one account to the other account so I can monitor my application in one place.
I could not find any hints in the documentation regarding cross-account usage. Is this even doable?
If you are running the X-Ray daemon, you can provide a RoleARN to the daemon, so it assumes that role and sends the data it receives from the X-Ray SDK in Account 1 to Account 2.
However, if you have enabled X-Ray on API Gateway or AWS Lambda, segments generated by these services are sent to the account they run in, and it's not possible to send that data cross-account for these services.
Please let me know if you have questions. If yes, include the architecture flow and solution stack you are using to better guide you.
Thanks,
Yogi
It is possible, but you'd have to run your own X-Ray daemon as a service.
By default, Lambda uses its own X-Ray daemon process to send traces to the account it is running in. However, the X-Ray SDK supports environment variables which can be used to point to a custom X-Ray daemon process instead. These environment variables apply even if the microservice is running inside a Lambda function.
Since your lambdas are written in Python, you can refer to this AWS doc, which talks about an environment variable. You can set its value to the address of the custom X-Ray daemon service.
AWS_XRAY_DAEMON_ADDRESS = x.x.x.x:2000
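The same daemon address can also be set in code with the Python X-Ray SDK instead of the environment variable (the address is the placeholder from above):

from aws_xray_sdk.core import xray_recorder

xray_recorder.configure(daemon_address="x.x.x.x:2000")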
Let's say you want to send traces from Account 1 to Account 2. You can do that by configuring your daemon to assume a role. This role must exist in Account 2 (where you want to send your traces). Then pass this role's ARN in the options when running your X-Ray daemon service in Account 1 (from where the traces are sent). The options to use are mentioned in this AWS doc here.
--role-arn, arn:aws:iam::123456789012:role/xray-cross-account
Make sure you attach permissions in Account 1 as well to send traces to Account 2.
This is now possible with this recent launch: https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-cloudwatch-cross-account-observability-multiple-aws-accounts/.
You can link accounts to another account to share traces, metrics, and logs.
Is there any way to start an AWS Database Migration Service full-load-and-cdc replication task through Terraform? Preferably, this would start automatically upon creation of the task.
The AWS DMS console provides an option to "Start task on create", and the AWS CLI provides a start-replication-task command, but I'm not seeing similar options in the Terraform resources. The aws_dms_replication_task provides the cdc_start_time argument, but I think this may apply only to cdc tasks. I've tried setting this argument to a number of past/current/future timestamps with my full-load-and-cdc replication task, but the task never started (it was merely created and entered the ready state).
I'd be glad to log a feature request to Terraform if this feature is not supported, but wanted to check with the community first to see if I was overlooking an existing way to do this today.
(Note: This question has also been logged to the Terraform Google group.)
I've logged an issue for this feature request:
Terraform AWS Provider #2083: Support Starting AWS Database Migration Service Replication Task
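Until the provider supports it, one workaround is to trigger the start outside Terraform, for example from a small script or a local-exec provisioner, using the same API the CLI's start-replication-task wraps. A sketch only, with a placeholder task ARN:

import boto3

dms = boto3.client("dms", region_name="us-east-1")
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # placeholder
    StartReplicationTaskType="start-replication",  # first run; use "resume-processing" to resume
)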