I just configured CloudWatch for a running EC2 instance (Windows) manually, using the steps given in the AWS CloudWatch documentation, and it worked: the CloudWatch log groups are created and the logs are being written correctly.
Now my question is: is there any way to do the CloudWatch configuration through code (programmatically, using Java), a script, or PowerShell?
If yes, kindly share some samples.
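Not a full answer, but as a rough illustration of what the programmatic route could look like: the log groups and streams you created manually can also be created from Java with the AWS SDK. Below is a minimal sketch using the AWS SDK for Java v2 CloudWatch Logs client; the group and stream names are placeholders, and the caller is assumed to have the relevant CloudWatch Logs IAM permissions. Note this only covers the resource-creation side; configuring the agent on the instance itself is usually done with a script (e.g. PowerShell) or distributed via Systems Manager.

    import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
    import software.amazon.awssdk.services.cloudwatchlogs.model.CreateLogGroupRequest;
    import software.amazon.awssdk.services.cloudwatchlogs.model.CreateLogStreamRequest;
    import software.amazon.awssdk.services.cloudwatchlogs.model.PutRetentionPolicyRequest;

    public class CreateLogGroupExample {
        public static void main(String[] args) {
            // Placeholder names -- use whatever your instance/application should log to.
            String logGroup = "my-windows-instance-logs";
            String logStream = "application";

            try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
                // Create the log group (throws ResourceAlreadyExistsException if it already exists).
                logs.createLogGroup(CreateLogGroupRequest.builder()
                        .logGroupName(logGroup)
                        .build());

                // Optionally set a retention period (in days) on the group.
                logs.putRetentionPolicy(PutRetentionPolicyRequest.builder()
                        .logGroupName(logGroup)
                        .retentionInDays(14)
                        .build());

                // Create a log stream inside the group.
                logs.createLogStream(CreateLogStreamRequest.builder()
                        .logGroupName(logGroup)
                        .logStreamName(logStream)
                        .build());
            }
        }
    }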
We have set up a fluentd agent on a GCP VM to push logs from a syslog server (the VM) to Google Cloud Logging. The current setup works fine and pushes more than 300k log entries to Stackdriver (Google Cloud Logging) per hour.
Due to increased traffic, we are planning to increase the number of VMs behind the load balancer. However, the new VM with the fluentd agent is unable to push logs to Stackdriver. The first time the VM is activated it sends a few entries to Stackdriver, but after that it stops working.
I tried the options below to set up the fluentd agent and resolve the issue:
Create a new VM from scratch and install the fluentd logging agent using the Google Cloud documentation.
Duplicate the already working VM (with the logging agent) by creating an image.
Restart the VM
Reinstall the logging agent
Debugging I did:
I checked all the configuration for the google-fluentd agent. Everything is correct and exactly matches the currently working VM instance.
I checked "/var/log/google-fluentd/google-fluentd.log" for any logging errors, but there are none.
Checked if the logging API is enabled. As there are already a few million logs per day, I assume we are fine on that front.
Checked the CPU and memory consumption. It is close to 0.
I tried all the solutions I could find on Google (there are not many).
It would be great if someone could help me identify where exactly I am going wrong. I have checked the configuration/setup files multiple times and they look fine.
Troubleshooting steps to resolve the issue:
Check whether you are using the latest version of the fluentd agent. If not, try upgrading it. Refer to Upgrading the agent for information.
If you are running very old Compute Engine instances, or Compute Engine instances created without the default credentials, you must complete the Authorizing the agent procedure.
Another point to check is how you are configuring an HTTP proxy. If you are using an HTTP proxy to proxy requests to the Logging and Monitoring APIs, check whether the metadata server is reachable. The metadata server must be reachable directly, without the proxy, when configuring an HTTP proxy.
Check whether you have any log exclusions configured that are preventing the logs from arriving. Refer to Exclusion filters for information.
Try uninstalling the fluentd agent and using the Ops Agent instead (note that it collects syslog logs with no extra setup), and check whether you can see the logs. Combining logging and metrics into a single agent, the Ops Agent uses Fluent Bit for logs, which supports high-throughput logging, and the OpenTelemetry Collector for metrics. Refer to Ops Agent for more information.
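One more check that may help isolate the problem (not part of the list above, and assuming the google-cloud-logging Java client library is available): write a test entry to the Logging API directly from the misbehaving VM, bypassing the agent. If this write fails, the problem is the VM's credentials or API access rather than the fluentd configuration. A minimal sketch, with a hypothetical log name test-log:

    import com.google.cloud.MonitoredResource;
    import com.google.cloud.logging.LogEntry;
    import com.google.cloud.logging.Logging;
    import com.google.cloud.logging.LoggingOptions;
    import com.google.cloud.logging.Payload.StringPayload;
    import com.google.cloud.logging.Severity;

    import java.util.Collections;

    public class LoggingWriteTest {
        public static void main(String[] args) throws Exception {
            // Uses the VM's default service account credentials (Application Default Credentials).
            try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
                LogEntry entry = LogEntry.newBuilder(StringPayload.of("test entry from the new VM"))
                        .setSeverity(Severity.INFO)
                        .setLogName("test-log") // hypothetical log name
                        .setResource(MonitoredResource.newBuilder("global").build())
                        .build();

                // If this throws a permission or authentication error, the agent cannot write either.
                logging.write(Collections.singleton(entry));
                logging.flush();
                System.out.println("Test entry written successfully.");
            }
        }
    }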
I'm currently deploying a couple of apps with Elastic Beanstalk and have some open questions. One thing that bugs me about EB is the logs. I can run eb logs or request the logs from the GUI, but the result is confusing to me since I can't find a way to access the normal stdout of a running process. I'm running a Django app, and it seems like the logs automatically show only entries that are explicitly logged at Warning priority.
In Django, a lot of errors seem to slip through the log system (e.g. failed migrations, failed custom commands, etc.).
Is there any way to access more detailed logs, or to access the stdout of my main process? It would also be OK if they streamed to my terminal, or if I had to SSH into the machine.
I suggest using the CLI to enable CloudWatch Logs with eb logs --cloudwatch-logs enable --cloudwatch-log-source all. This will allow you to see the streaming output of web.stdout.log along with all of your other logs individually.
I'm trying to execute a script on a Google VM through Terraform.
First I tried it via Google startup scripts. But the metadata is visible in the Google Console (startup scripts count as metadata), which means anybody with read access can see that script, and that is not acceptable.
So I tried to get the script from a Storage Account. But for that I need to attach a service account to the VM so that the VM has the rights to access the Storage Account. Now people who have access to the VM also have access to my script as long as the service account is attached to the VM. In order to "detach" the service account I would have to stop the VM. Also, if I don't want to keep the service account permanently attached, I would have to attach it via a script, which requires another stop and start of the VM. This is probably not possible and also really ugly.
I don't understand how the remote-exec provisioner works on GCP VMs, because I have to specify a user and a password to connect to the VM and then execute the script. But the Windows password needs to be set manually via the Google Console, so I can't specify those things at this point in time.
So does anybody know how I can execute a script via Terraform without everybody having access to it?
Greetings :) and Thanks in advance
I ended up just running a gcloud script in which I removed the metadata from the VM after the Terraform apply finished. In my GitLab pipeline I just called the script in the after_script section. Unfortunately, the credentials are visible for approximately 3 minutes.
I have a Python script that runs on an EC2 server. What is the easiest way for me to see print statements from that script? I tried viewing the system log but I don't see anything there, and I can't find anything in CloudWatch. Thanks!
Standard output from arbitrary applications running on EC2 doesn't appear in CloudWatch Logs.
You can install the CloudWatch Logs Agent, configure it to collect logs from given locations, and then configure your app to log to one of those locations.
It is possible to send the logs of an application running on EC2 to CloudWatch directly. For that you need to do the following steps:
Create an IAM role with the relevant permissions and attach it to the Linux instance.
Install the CloudWatch agent on the instance.
Prepare the agent configuration file on the instance.
Start the CloudWatch agent service on the instance.
Monitor the logs using the CloudWatch web console.
For your reference:
http://medium.com/tensult/to-send-linux-logs-to-aws-cloudwatch-17b3ea5f4863
I am using the AWS SDK for Java. I create and run an EC2 instance with a user data script which gets a .jar from an S3 bucket and runs it. When I run the instance it shows me that it is running, but nothing happens. The .jar should create a SimpleDB table and an SQS queue. How do I see what's wrong without connecting through SSH to the instance, or is that the only way to see the logs?
Kind regards,
Snafu
Some of the user-data output may be found in the system log (on the EC2 dashboard, right-click the instance and choose System Log).
You could put a piece of Java code / shell script and/or a cron job in place to upload your logs to S3 (a sketch follows at the end of this answer), but it's best to SSH in to see what's there, at least the first time you run your code.
You can use the MindTerm Java applet to connect directly from the EC2 dashboard (there's a button labeled 'Connect' at the top; it's easy and you don't need to download an SSH client). I would highly recommend getting used to working with SSH because it's the easiest way to see what's inside.
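Regarding the "upload your logs to S3" idea above, here is a minimal sketch using the AWS SDK for Java v2; the bucket name, object key, and local log path are placeholders, and the instance profile is assumed to allow s3:PutObject on that bucket:

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    import java.nio.file.Paths;

    public class UploadLogToS3 {
        public static void main(String[] args) {
            // Placeholder bucket, key, and log path -- adjust to your setup.
            String bucket = "my-app-logs-bucket";
            String key = "instance-logs/app.log";
            String localLogFile = "/var/log/myapp/app.log";

            try (S3Client s3 = S3Client.create()) {
                s3.putObject(PutObjectRequest.builder()
                                .bucket(bucket)
                                .key(key)
                                .build(),
                        RequestBody.fromFile(Paths.get(localLogFile)));
            }
        }
    }

You could run this from cron (or from the app itself on failure) so the logs are viewable in S3 without connecting to the instance.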