I have set up a Managed Instance Group with an initial 3 instances (I installed Lumen inside, and the web server is auto-started) to be used with the GCP load balancer. The LB works great.
However, whenever I need to trace the Lumen logs, I have to SSH into every single instance to view them. Is there a best practice for one centralized store I can consult for the logs?
Can I mount the Lumen logs onto a centralized disk, e.g. a GCP Filestore volume or a Google Cloud Storage bucket, or use Fluentd to ship my logs to GCP Logging?
I'd like to know the best industry practice. Thanks.
Stackdriver is the right option for your case.
https://cloud.google.com/logging/docs/agent/installation#joint-install
Install the Stackdriver logging agent on your Compute Engine instances. You can tail your logs live, and you can also build visualizations and useful analyses on top of them. Stackdriver is the de facto standard for people using GCP. One caveat: keep pricing in mind, so please check the pricing details.
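For example, a minimal sketch: install the agent with Google's one-line installer (check the linked docs for the current script), then drop in a config that tails Lumen's log file. The log path assumes Lumen's default storage/logs/lumen.log under a hypothetical /var/www/lumen app root; adjust both to your setup.

    # Install the Stackdriver logging agent (google-fluentd)
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
    sudo bash install-logging-agent.sh

    # Tail the Lumen log; /var/www/lumen is a placeholder app root
    sudo tee /etc/google-fluentd/config.d/lumen.conf <<'EOF'
    <source>
      @type tail
      path /var/www/lumen/storage/logs/lumen.log
      pos_file /var/lib/google-fluentd/pos/lumen.pos
      read_from_head true
      tag lumen
      <parse>
        @type none
      </parse>
    </source>
    EOF

    sudo service google-fluentd restart

Once the agent restarts, entries show up in the Logs Viewer under the "lumen" tag for every instance in the group, so you no longer need to SSH anywhere.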
I had a failover on my PostgreSQL instance on GCP. I have logs and metrics for the instance, and the metrics all look good, but I don't have a log entry with the reason for the failover (something such as a network failure or a zone outage). Is there any way to know the reason for the failover?
This info is only available if you have a support service (paid only). The steps are:
1. Pay for a support plan.
2. Open a ticket asking for the info.
We have a couple of applications running on AWS. Currently we are redirecting all our logs to a single bucket. However, for ease of access for users, I am thinking of installing the ELK stack on an EC2 instance.
I want to check whether there is an alternative where I don't have to maintain this stack myself.
Scaling won't be an issue, as this is only for logs generated by applications running on AWS, so no heavy ingestion or processing is required; it's mostly log4j logs.
You can go for either the managed Elasticsearch service available in AWS or set up your own on an EC2 instance.
It usually comes down to the price involved and the amount of time you have on hand for setting up and maintaining your own deployment.
With your own setup you can do a lot more configuration than the managed service allows, and it can also help reduce cost.
You can find more info in this blog.
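For reference, spinning up a small managed domain for logs is a single AWS CLI call; the domain name, version, instance type, and sizes below are placeholders, not recommendations:

    aws es create-elasticsearch-domain \
      --domain-name app-logs \
      --elasticsearch-version 6.8 \
      --elasticsearch-cluster-config InstanceType=t2.small.elasticsearch,InstanceCount=2 \
      --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=20

The managed service then handles patching, snapshots, and node replacement, which is exactly the maintenance burden you said you want to avoid.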
I have an AWS account, and have several EC2 servers and an Elasticsearch domain set up to take the syslogs from these servers. However, in CloudWatch and when investigating a specific server instance in the EC2 control panel, I see specific metrics and graphs for things like CPU, memory load, storage use, etc. Is there some way I can pipe this information into my Elasticsearch as well?
Set up Logstash and use this plugin https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way: use the AWS Logs agent to put your syslogs into CloudWatch and stop using Elasticsearch.
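If you go the Logstash route, a minimal sketch of the pipeline might look like the following. The plugin is installed with bin/logstash-plugin install logstash-input-cloudwatch; the option names shown (namespace, metrics, region) follow the plugin's README but may vary between versions, so treat them as assumptions, and the Elasticsearch endpoint is a placeholder.

    input {
      cloudwatch {
        namespace => "AWS/EC2"
        metrics   => ["CPUUtilization", "NetworkIn", "NetworkOut"]
        region    => "us-east-1"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://my-es-endpoint:9200"]   # placeholder endpoint
        index => "cloudwatch-%{+YYYY.MM.dd}"
      }
    }

This polls the CloudWatch API on an interval and indexes each datapoint as a document, so your instance metrics end up searchable next to your syslogs.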
I have to set up EC2 for a medium Rails app running on Apache2, MySQL, Capistrano, and a few background services. I would like to know the best practices that developers usually follow when setting up a Rails app. I want a setup that is easy to scale and covers at least:
auto deployment
security
regular data backup and an easy and quick way to restore the data
server recovery
fault tolerance
I am also interested in how to monitor server status and performance, and any other best practices would also be helpful.
PS: also take into account that my app's database will grow fast.
I think a good look into the AWS docs and in particular the architecture center would be the best place to start. However, let me address as many of your questions as I can.
Database
The easiest way to get a scalable, fault-tolerant database on AWS is to use the Relational Database Service (RDS). You should read the docs and best practices to ensure you get the most out of it, i.e. deploy across multiple AZs.
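For illustration, a Multi-AZ MySQL instance can be created with one AWS CLI call; the identifier, instance class, storage size, and credentials below are all placeholders:

    aws rds create-db-instance \
      --db-instance-identifier myapp-db \
      --db-instance-class db.t2.small \
      --engine mysql \
      --allocated-storage 100 \
      --master-username admin \
      --master-user-password 'change-me' \
      --multi-az

With --multi-az, RDS keeps a synchronous standby in another AZ and fails over to it automatically, which addresses your fault-tolerance requirement for the database tier.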
EC2 Servers
The most recommended way to structure your servers is to decouple them into Web Servers (which serve HTML to users) and App Servers (application logic, usually returning JSON or XML, etc.). See this architecture example.
However, the key is to use an AutoScaling group behind an Elastic Load Balancer.
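A rough sketch with the AWS CLI, assuming you have already created a launch configuration myapp-lc and a classic ELB myapp-elb (both hypothetical names):

    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name myapp-asg \
      --launch-configuration-name myapp-lc \
      --min-size 2 \
      --max-size 10 \
      --desired-capacity 2 \
      --availability-zones us-east-1a us-east-1b \
      --load-balancer-names myapp-elb \
      --health-check-type ELB \
      --health-check-grace-period 300

Setting --health-check-type ELB means the group replaces instances that fail the load balancer's application-level checks, not just EC2's hardware checks.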
Automation
If you want to use Capistrano, just install it on your servers. You could create a pre-configured AMI with it installed, along with whatever else you want; alternatively, you could install it in a deployment script. However, the most recommended method for this kind of thing is to use the AWS OpsWorks service, which is Chef in the cloud.
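If you go the pre-configured AMI route, baking an image from a configured instance is one call (the instance ID is a placeholder):

    aws ec2 create-image \
      --instance-id i-0123456789abcdef0 \
      --name "rails-app-base-v1" \
      --description "Rails + Apache2 + Capistrano preinstalled"

New AutoScaling instances launched from that AMI come up already configured, which keeps scale-out fast.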
Server Recovery & Fault Tolerance
If you use EC2 AutoScaling and a server becomes unavailable, i.e. the hardware fails or it stops responding to EC2 health checks, AutoScaling will automatically terminate it and launch a replacement.
With the addition of the ELB and ELB health checks, instances that stop responding to web requests can be taken out of service by the ELB.
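For example, a classic ELB health check that pulls an instance out of rotation after two failed HTTP probes (the load balancer name and health-check path are placeholders):

    aws elb configure-health-check \
      --load-balancer-name myapp-elb \
      --health-check Target=HTTP:80/healthcheck,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3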
You need to read the docs for more info on this.
Backup and Recovery
For backing up data on EBS volumes attached to EC2 instances, use EBS Snapshots. However, the best architectures keep EC2 instances stateless: they don't store anything except application code, so if they died it wouldn't matter. In these situations all data, including user files, can be stored on S3. On S3 you have a number of backup options, such as Cross-Region Replication and/or data archiving to Glacier.
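For example, snapshotting a volume is a single call that can run from a nightly cron job (the volume ID is a placeholder):

    aws ec2 create-snapshot \
      --volume-id vol-0123456789abcdef0 \
      --description "nightly backup $(date +%F)"

Snapshots are incremental, so after the first one only changed blocks are stored, and any snapshot can be restored to a fresh volume for quick recovery.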
Monitoring
AWS provides CloudWatch, which can give you hypervisor-visible metrics such as network in and out, CPU utilization, and more. If you want more data, you can push custom metrics, e.g. memory usage. In addition to CloudWatch, you could use a server-level monitoring tool.
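As a sketch of a custom metric, the following cron-able snippet pushes memory utilization via the AWS CLI; the namespace is arbitrary, and note that AWS also publishes ready-made monitoring scripts that do this for you:

    # Percentage of memory in use, from free(1)
    MEM_USED=$(free | awk '/Mem:/ {printf "%.1f", $3/$2*100}')
    aws cloudwatch put-metric-data \
      --namespace "Custom/System" \
      --metric-name MemoryUtilization \
      --unit Percent \
      --value "$MEM_USED" \
      --dimensions InstanceId=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)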
Deployment
I recommend AWS CodeDeploy.
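CodeDeploy is driven by an appspec.yml at the root of your revision; a minimal sketch (the destination path and hook scripts are placeholders):

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/myapp
    hooks:
      ApplicationStop:
        - location: scripts/stop_server.sh
          timeout: 60
      AfterInstall:
        - location: scripts/install_dependencies.sh
          timeout: 300
      ApplicationStart:
        - location: scripts/start_server.sh
          timeout: 60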
Security
Use Security Groups to open only the ports you want users to connect on. Also use security groups to lock down important ports, e.g. 22, to a specific set of IPs. You can also use Network ACLs to block undesired traffic. AWS provides more information and suggestions here.
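For example, opening HTTP to the world while restricting SSH to one trusted range (the group ID and CIDR are placeholders):

    # Allow HTTP from anywhere
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 --cidr 0.0.0.0/0

    # Allow SSH only from a trusted office range
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 22 --cidr 203.0.113.0/24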
I also recommend you read this Whitepaper.
I am running a Jersey service on Tomcat via EBS (Elastic Beanstalk) with a load balancer. I am finding it very cumbersome to get at the EC2 instances' catalina files in S3. Currently I need to determine the EC2 instance(s), then work my way to each of the S3 locations, download the files, and only then can I diagnose.
The snapshot doesn't help due to the number of requests that come in: it doesn't hold enough info, and by the time I get the snapshot, the relevant entries have "rolled" off it.
Two questions:
1) Is there an easier approach to log files via AWS? (Increasing the time before rotation, which I don't believe is supported as of now, scripts, etc.)
2) Is there any software or script to access all the logs under a load balancer? I basically want to say "give me all the logs for this EBS environment" and have it fetch all logs for that day from every server behind that load balancer, whether up or down. The clincher is "down": the problem becomes more complex when the load balancer takes an instance down right when the issue occurs.
Thanks!
As an immediate solution to your problem, you can follow the approach suggested in this answer. Essentially, you can modify the logrotate configuration to rotate at a bigger log size using ebextensions.
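A sketch of that approach as an .ebextensions file; the target path below assumes a Tomcat platform whose hourly logrotate config lives under /etc/logrotate.elasticbeanstalk.hourly/, and the file name and log path vary by platform version, so verify them on one of your running instances first:

    # .ebextensions/logrotate.config
    files:
      "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.tomcat.conf":
        mode: "000644"
        owner: root
        group: root
        content: |
          /var/log/tomcat7/*.log {
            size 100M
            missingok
            notifempty
            rotate 5
            compress
          }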
Then snapshot logs should work for you.
Let me know if you need more clarifications on this approach.
AWS released CloudWatch Logs just last week, which enables you to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files:
You can send your existing system, application, and custom log files to CloudWatch Logs and monitor these logs in near real-time. [...] you can store your logs using highly durable, low-cost storage for later access.
See the introductory blog post Store and Monitor OS & Application Log Files with Amazon CloudWatch for an illustrated walk through, which touches on using Elastic Beanstalk and CloudWatch Logs already - this is further detailed in Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.
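In practice this means installing the awslogs agent (the linked Elastic Beanstalk guide shows the full ebextensions wiring) and pointing it at your Tomcat logs. Here is a sketch of one stanza of the agent's awslogs.conf; the group name, file path, and datetime format are placeholders for your platform:

    [catalina]
    file = /var/log/tomcat7/catalina.out
    log_group_name = /myapp/catalina
    log_stream_name = {instance_id}
    datetime_format = %b %d, %Y %I:%M:%S %p

Because every instance streams continuously to the same log group, logs survive instance termination, which solves the "load balancer took the instance down right when the issue occurred" case.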