I am preparing a patching plan for one of my customers. If I am using Patch Manager, should I create an AMI/snapshot before patching in case of failure, and do I need to perform a rollback? Thank you in advance for the clarification :)
It's good practice to have regular snapshots of servers in case anything goes wrong. You can use Lambda or AWS Backup for this.
For patching, you need to set a patch baseline as per your needs and your OS. This way you reduce the chance of anything going wrong.
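To make the pre-patch snapshot concrete, here is a minimal boto3 sketch; the instance ID and the image-naming scheme are placeholder assumptions, not anything Patch Manager requires:

```python
# Sketch: create a pre-patch AMI (including EBS snapshots) and wait for it.
# The instance ID and image name are illustrative placeholders.
import time

import boto3

ec2 = boto3.client("ec2")

def snapshot_before_patching(instance_id: str) -> str:
    response = ec2.create_image(
        InstanceId=instance_id,
        Name=f"pre-patch-{instance_id}-{int(time.time())}",
        Description="Safety image taken before the Patch Manager run",
        NoReboot=True,  # don't restart the instance; slightly less consistent
    )
    image_id = response["ImageId"]
    # Block until the AMI is usable as a rollback target.
    ec2.get_waiter("image_available").wait(ImageIds=[image_id])
    return image_id

image_id = snapshot_before_patching("i-0123456789abcdef0")
print(f"Rollback AMI: {image_id}")
```

Note that Patch Manager itself doesn't roll patches back, so if an update breaks something, restoring or relaunching from the pre-patch image is your rollback path.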
I need to schedule the backfill in the transfer service at least a few hundred times for several data sources.
The REST API is deprecated and the Python client is not helping either.
How can I automate this?
@Yun Zhang is right. I will elaborate more on this.
The deprecated method required you to set up transfers on a specific schedule. Creating regular transfers was different and used another endpoint.
Now, by using the ScheduleOptions argument (schedule_options in the Python client), we can set the times for transfers to start, so we are able to schedule (or not schedule) the transfers. This is why using that endpoint is the right path.
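For the backfill part specifically, here is a minimal sketch with the google-cloud-bigquery-datatransfer Python client; the project, config names, and date range are placeholders. start_manual_transfer_runs with a time range creates one run per scheduled interval, so a loop over your configs covers the "few hundred times" without the deprecated endpoint:

```python
# Sketch: trigger backfill runs for several transfer configs.
# Project, config names, and the date range are placeholders.
from datetime import datetime, timezone

from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()

config_names = [  # one entry per data source to backfill
    "projects/my-project/locations/us/transferConfigs/config-1",
    "projects/my-project/locations/us/transferConfigs/config-2",
]

time_range = bigquery_datatransfer_v1.StartManualTransferRunsRequest.TimeRange(
    start_time=datetime(2021, 1, 1, tzinfo=timezone.utc),
    end_time=datetime(2021, 4, 1, tzinfo=timezone.utc),
)

for name in config_names:
    response = client.start_manual_transfer_runs(
        request=bigquery_datatransfer_v1.StartManualTransferRunsRequest(
            parent=name,
            requested_time_range=time_range,
        )
    )
    print(f"{name}: started {len(response.runs)} backfill runs")
```

For the regular (non-backfill) schedule, ScheduleOptions on the TransferConfig is where the start/end times go, as described above.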
Hope this is helpful! :)
What is the recommended approach to storing the logs of applications deployed on Kubernetes? I read about the ELK stack, but I'm not sure about the pros and cons. I need recommendations.
If you are asking specifically about storing application logs in a Kubernetes cluster, there are a few different approaches. First, I would recommend you familiarize yourself with this article in the official Kubernetes documentation.
As per my experience with Kubernetes logging, I would suggest you go with the EFK stack (Fluentd/Fluent Bit --> Kafka --> Logstash/Fluentd --> Elasticsearch --> Kibana). It has some initial challenges during setup, but once it is up and running it is a super-scalable system where you don't need to worry about the volume of logs you are shipping.
Another approach you can take is shipping logs directly from Fluentd/Fluent Bit/Filebeat to Elasticsearch. The drawback of this approach is that if Elasticsearch has an issue, you may lose your logs.
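For illustration, a minimal Fluent Bit configuration for that direct-shipping approach might look like the sketch below; the Elasticsearch host, port, and log paths are placeholder assumptions:

```
[INPUT]
    Name            tail
    Path            /var/log/containers/*.log
    Parser          docker
    Tag             kube.*

[FILTER]
    Name            kubernetes
    Match           kube.*

[OUTPUT]
    Name            es
    Match           kube.*
    Host            elasticsearch.logging.svc.cluster.local
    Port            9200
    Logstash_Format On
    Retry_Limit     False
```

Retry_Limit False makes Fluent Bit keep retrying while Elasticsearch is down, but its buffers are finite, which is exactly the gap the Kafka tier in the EFK pipeline above is meant to cover.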
I hope it helps.
I want to emphasize the response from @javajon. There is a Katacoda exercise specifically for logging at https://katacoda.com/javajon/courses/kubernetes-observability/efk.
Logging is a very large topic with lots of variables. To get any specific advice, you'll need to say more about your goals for logging. Is it related to performance, compliance, security, debugging, observability, or something else?
Try to get some knowledge on this by yourself first.
Every storage option has some pros and cons; we choose among them according to requirements.
Visit https://medium.com/volterra-io/kubernetes-storage-performance-comparison-9e993cb27271 and learn more.
It will surely help.
We want to set up CloudWatch on more than 50 servers, which in general would mean doing it manually by logging into each server. We would like to reduce the manual work.
While browsing around, we found the two ideas below:
1) OpsWorks (AWS internally uses Chef)
2) Chef
Are the above approaches correct to achieve what we intend?
Which approach is the most suitable?
Your suggestions will be of great help... Thank you
We performed this activity using Chef. The process was simple.
There are a number of cookbooks already available on the Chef Supermarket, which are of great help to beginners.
We did not try OpsWorks, so I will not be able to comment on which is the better approach.
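For a sense of scale, the core of such a recipe is small. The sketch below is a hypothetical minimal recipe for an RPM-based node, not one of the supermarket cookbooks; the download URL and config paths follow AWS's documented defaults, but the template name is made up:

```ruby
# Sketch: install, configure, and start the CloudWatch agent on an RPM-based node.
remote_file '/tmp/amazon-cloudwatch-agent.rpm' do
  source 'https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm'
end

rpm_package 'amazon-cloudwatch-agent' do
  source '/tmp/amazon-cloudwatch-agent.rpm'
end

# Hypothetical template carrying your metrics/logs configuration.
template '/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json' do
  source 'amazon-cloudwatch-agent.json.erb'
  notifies :run, 'execute[reload-cloudwatch-agent]'
end

execute 'reload-cloudwatch-agent' do
  command '/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl ' \
          '-a fetch-config -m ec2 -s ' \
          '-c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json'
  action :nothing
end
```

Once the nodes are bootstrapped, chef-client runs converge all 50 servers without anyone logging into each one.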
I would like to use the AWS tool named in the topic. To me it looks like there are two releases of this tool: one, with an AWS agent installed on the EC2 instance, allows tracking security issues; the new one adds benchmarking and so on. I'm interested in the new one.
I've read the docs and set up a sample test environment, but it still looks a bit unclear to me. I understand that it uses a public database of vulnerabilities, as well as benchmarking, or testing against best practices.
The question is: how can I know that all of that is tested within the lowest, 15-minute target? Or, in other words, if the time is short, what is tested less?
Does anyone use this tool and want to share knowledge and insights?
A report provided at the end of the testing gives you an overview of the scanning results. The results indicate which of your preselected resources have security issues.
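If the 15-minute preset worries you, the run duration is something you control on the assessment template. Here is a hedged boto3 sketch (this assumes Inspector Classic, and all ARNs are placeholders); rules that rely on collected telemetry simply see more instance activity in a longer run:

```python
# Sketch: create an Inspector Classic assessment template with a longer
# run duration and start a run. All ARNs below are placeholders.
import boto3

inspector = boto3.client("inspector")

template = inspector.create_assessment_template(
    assessmentTargetArn="arn:aws:inspector:us-east-1:111122223333:target/0-EXAMPLE",
    assessmentTemplateName="benchmark-1h",
    durationInSeconds=3600,  # longer than the 15-minute preset
    rulesPackageArns=[
        "arn:aws:inspector:us-east-1:111122223333:rulespackage/0-EXAMPLE",
    ],
)

run = inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
    assessmentRunName="benchmark-1h-run",
)
print(run["assessmentRunArn"])
```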
I have an Amazon RDS instance with instance class db.m1.medium. I would like to downgrade it to db.m1.small to save on costs, since it's not being used much.
When I do this, are there any software changes involved? My concern is that settings will get changed when it downgrades. I don't want anything getting corrupted or MySQL settings getting changed.
Please advise. Thanks!
Your RDS settings will not be automatically changed if you change the instance type. However, you should check the monitoring on the db.m1.medium before downgrading to make sure you'd have enough memory in a db.m1.small. You'd be dropping from 3.75GB to 1.7GB.
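Mechanically, the downgrade is just a modification of the instance class. A boto3 sketch (the identifier is a placeholder):

```python
# Sketch: change the instance class; the identifier is a placeholder.
# ApplyImmediately=False defers the change (and its restart) to the
# next maintenance window.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",
    DBInstanceClass="db.m1.small",
    ApplyImmediately=False,
)
```

Your parameter group stays attached, so any MySQL settings you set explicitly are untouched; just be aware that defaults derived from {DBInstanceClassMemory} (e.g. innodb_buffer_pool_size) will shrink with the smaller class, and the switch itself involves a brief outage when it is applied.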
I wrote up some information and concepts explaining which parts of RDS are the most expensive and how to plan to reduce costs. See if it helps: https://shatteredsilicon.net/blog/2021/06/10/how-to-reduce-rds-costs-on-aws/