Best way to set up CloudWatch on 'n' number of servers? - amazon-web-services

We want to set up CloudWatch on more than 50 servers, which in general would mean logging into each server manually. We would like to reduce that manual work.
While browsing we found the two ideas below:
1) OpsWorks (AWS internally uses Chef)
2) Chef
Are the above approaches suitable for what we intend to do?
Which approach is most suitable?
Your suggestions will be of great help... Thank you

We performed this activity using Chef. The process was simple.
There are a number of cookbooks already available on the Chef Supermarket, which are of great help to beginners.
We did not try OpsWorks, so I will not be able to comment on which is the better approach.
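If you would rather avoid a configuration-management tool altogether, AWS Systems Manager Run Command can also push the CloudWatch agent to many instances in one call. A minimal boto3 sketch, assuming the instances already run the SSM agent and have a suitable instance profile; the tag key/value and region are placeholders:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # AWS-ConfigureAWSPackage is a managed SSM document; AmazonCloudWatchAgent
    # is the package name AWS publishes for the CloudWatch agent.
    response = ssm.send_command(
        Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
        DocumentName="AWS-ConfigureAWSPackage",
        Parameters={"action": ["Install"], "name": ["AmazonCloudWatchAgent"]},
    )
    print("Command ID:", response["Command"]["CommandId"])

One call fans out to every instance carrying the tag, so it scales to 50 servers as easily as to 5.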

Related

AWS Patch Manager - rollback

I am preparing a patching plan for one of my customers. If I am using Patch Manager, should I create an AMI/snapshot before patching in case of failure, and do I need to perform a rollback? Thank you in advance for the clarification :)
It's good practice to have regular snapshots of servers in case anything goes wrong. You can use Lambda or AWS Backup for this.
For patching, you need to set a patch baseline appropriate to your needs and your OS. This way you reduce the chance of anything going wrong.
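For example, a pre-patch AMI can be taken with a single boto3 call. A minimal sketch; the instance ID and image name are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Create an AMI of the instance before patching so a failed patch can be
    # rolled back by relaunching from this image.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="pre-patch-backup-2024-01-01",
        NoReboot=True,  # snapshot without stopping the instance
    )
    print("AMI for rollback:", image["ImageId"])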

Store logs from applications in kubernetes

What is the recommended approach to storing the logs of applications deployed on Kubernetes? I read about the ELK stack, but I am not sure about the pros and cons. I need recommendations.
If you are asking specifically about storing application logs in a Kubernetes cluster, there are a few different approaches. First, I would recommend you familiarize yourself with this article in the official Kubernetes documentation.
In my experience with Kubernetes logging, I would suggest you go with the EFK stack (Fluentd/Fluent Bit --> Kafka --> Logstash/Fluentd --> Elasticsearch --> Kibana). It has some initial setup challenges, but once it is up and running it is a highly scalable system where you don't need to worry about the volume of logs you are shipping.
Another approach you can take is shipping logs directly from Fluentd/Fluent Bit/Filebeat to Elasticsearch. The drawback of this approach is that if Elasticsearch has an issue, you may lose your logs.
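A hypothetical sketch of that direct-to-Elasticsearch path, using the official kubernetes and elasticsearch Python clients; the pod name, namespace, index, and endpoint are all assumptions, and a real cluster would run Fluent Bit/Filebeat as a DaemonSet rather than a script like this:

    from kubernetes import client, config
    from elasticsearch import Elasticsearch

    config.load_kube_config()  # use load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    es = Elasticsearch("http://elasticsearch:9200")

    # Pull recent log lines from one pod and index them one by one.
    logs = v1.read_namespaced_pod_log(
        name="my-app-pod", namespace="default", tail_lines=100
    )
    for line in logs.splitlines():
        # If Elasticsearch is down at this point the lines are lost, which is
        # exactly the drawback noted above; the Kafka buffer in the EFK
        # pipeline avoids it.
        es.index(index="app-logs", document={"message": line})  # 8.x client; 7.x uses body=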
I hope it helps.
I want to emphasize the response from @javajon. There is a Katacoda exercise specifically for logging at https://katacoda.com/javajon/courses/kubernetes-observability/efk.
Logging is a very large topic with lots of variables. To get any specific advice, you'll need to say more about your goals for logging. Is it related to performance, compliance, security, debugging, observability, or something else?
Try to build some knowledge yourself. Every storage option has its pros and cons, and we choose between them according to requirements.
Visit https://medium.com/volterra-io/kubernetes-storage-performance-comparison-9e993cb27271 to learn more. It will surely help.

Automate volume snapshots on AWS

I found this excellent service https://app.cloudzy.io/ to back up/create snapshots of my AWS instances on a schedule and to set the retention. It is very simple and easy to use.
I just got an email from them that they are shutting down the service.
Now I'm looking for something similar that is affordable. Any recommendations?
A quick Google search found exactly what you need:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html
It’s fairly easy to roll your own with a scheduled Lambda function. I wrote a script similar to the one here: https://serverlesscode.com/post/lambda-schedule-ebs-snapshot-backups/
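A minimal sketch of such a function, assuming volumes to back up are tagged Backup=true and the function is triggered by a scheduled EventBridge rule; the tag key and description are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        # Find every EBS volume opted in to backups via a tag.
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
        )["Volumes"]
        for vol in volumes:
            snap = ec2.create_snapshot(
                VolumeId=vol["VolumeId"],
                Description="scheduled backup of " + vol["VolumeId"],
            )
            print("Created snapshot:", snap["SnapshotId"])

Retention (deleting snapshots older than N days) can be handled the same way with describe_snapshots and delete_snapshot, or delegated to Amazon Data Lifecycle Manager.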
If you are willing to spend money, N2WS is your next best bet. https://n2ws.com/

What are the best ways to transfer large streaming data files to the cloud

Is it possible to use AWS Kinesis? If so, how can we use it?
Any suggestions? Please reply. Thanks in advance.
Is it possible to use AWS Kinesis?
Yup. You can pretty much use anything for anything these days. Just need a little bit of creativity, that's all.
If so, how can we use it?
A good starting point would be the documentation, but it would really depend on what you want to use it for.
What are the best ways to transfer large streaming data files to the cloud?
That's hard to answer without any details. In the end, you will only be given options and opinions; it'll be your call to figure out what's best. Kinesis is reliable and works for basic use cases, but it can be slower and less flexible than other options. It also costs a pretty penny if you use it unwisely. If you need options, check out Apache Kafka.
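For a feel of what a Kinesis-based transfer looks like, here is a hypothetical producer sketch with boto3; the stream name, file name, and chunk size are assumptions, and note that each Kinesis record is capped at 1 MB:

    import boto3

    kinesis = boto3.client("kinesis")

    # Split a large file into chunks small enough for Kinesis records
    # (the per-record limit is 1 MB) and push them into a stream.
    with open("large-file.dat", "rb") as f:
        seq = 0
        while chunk := f.read(512 * 1024):  # 512 KB chunks
            kinesis.put_record(
                StreamName="file-transfer-stream",
                Data=chunk,
                PartitionKey=f"large-file.dat-{seq}",
            )
            seq += 1

A consumer on the other side then has to reassemble the chunks in order, which is part of why a plain S3 multipart upload is often simpler for discrete files as opposed to continuous streams.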

Amazon Inspector

I would like to use the AWS tool named in the topic. To me it looks like there are two releases of this tool: one with an AWS agent installed on the EC2 instance, which allows tracking security issues, and a new one with some benchmarking and so on. I'm interested in the new one.
I've read the docs and set up a sample test environment, but it still looks a bit unclear to me. I understand that it uses a public database of vulnerabilities, as well as benchmarking, or testing against best practices.
The question is: how can I know that all of that is tested within the lowest 15-minute target duration? Or, in other words, if the time is short, what is tested less?
Does anyone use this tool and want to share knowledge and insights?
A report provided at the end of the testing gives you an overview of the scanning results. The results indicate which of your preselected resources have security issues.
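If you want the same data programmatically rather than from the console report, the Inspector API exposes it. A hedged boto3 sketch against the agent-based (Classic) API, where the template ARN is a placeholder from your own assessment setup:

    import boto3

    inspector = boto3.client("inspector")

    # Kick off an assessment run from an existing assessment template.
    run = inspector.start_assessment_run(
        assessmentTemplateArn="arn:aws:inspector:us-east-1:123456789012:target/0-xxxx/template/0-yyyy",
        assessmentRunName="benchmark-run",
    )

    # Once the run completes (after whatever duration the template sets,
    # e.g. 15 minutes), list the finding ARNs and fetch their details.
    finding_arns = inspector.list_findings(
        assessmentRunArns=[run["assessmentRunArn"]]
    )["findingArns"]
    details = inspector.describe_findings(findingArns=finding_arns[:10])  # max 10 ARNs per call
    for f in details["findings"]:
        print(f["severity"], f["title"])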