I have Apache2 installed on one of my VMs in Google Cloud Platform. I installed the Ops Agent and configured it as below, per the docs:
logging:
  receivers:
    mywebserver:
      type: files
      include_paths:
      - /var/log/apache*/access_log
      - /var/log/apache*/error_log
  service:
    pipelines:
      default_pipeline:
        receivers:
        - mywebserver
But Logs Explorer in GCP isn't showing the logs of this web server. I don't see the service mywebserver as a filter option in the logs dropdown, even for this VM instance.
OS: Ubuntu 18.x LTS
Ops Agent version: latest as of today
What am I missing? Your help is much appreciated.
When I tried to debug using the command cat /var/log/google-cloud-ops-agent/subagents/*.log | grep apache, it returned nothing. It should have shown something similar to the below:
[ info] [input:tail:tail.0] inotify_fs_add(): inode=268631 watch_fd=1 name=/var/log/apache2/access.log
[input:tail:tail.0] inotify_fs_add(): inode=268633 watch_fd=2 name=/var/log/apache2/error.log
This prompted me to go back to the config, and I realized that the Google docs had a typo which I ended up copy-pasting in good faith. If you look at my configuration, the paths contain access_log instead of access.log (and likewise error_log instead of error.log).
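For reference, here are the corrected include_paths as I have them now (only the filenames change, since Apache on Ubuntu writes access.log and error.log), followed by an agent restart to pick up the change:

      include_paths:
      - /var/log/apache*/access.log
      - /var/log/apache*/error.log

sudo systemctl restart google-cloud-ops-agent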
As trivial as it sounds, this killed a good few hours of mine. :Facepalm:
Lesson: even Google docs can contain errors, and something as trivial as this can cost you hours of debugging.
Related
Having issues with adding items to Varnish logging.
I am able to start and stop varnishncsa, and everything looks fine in the Varnish status. However, when attempting to add logging for PURGE requests using the varnishncsa command, I'm getting "command not found" errors.
sudo systemctl status rh-varnish6-varnishncsa
● rh-varnish6-varnishncsa.service - Varnish Cache HTTP accelerator NCSA logging daemon
Loaded: loaded (/../../../../rh-varnish6-varnishncsa.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-08-05 13:42:14 EDT; 3h 1min ago
Main PID: 28620 (varnishncsa)
CGroup: /...../rh-varnish6-varnishncsa.service
└─28620 /../../../../../../varnishncsa -a -w /.../op...
Aug 05 13:42:14 ip-..-...-..-...ec2.internal systemd[1]: Starting Varnish Cac...
Aug 05 13:42:14 ip-..-...-..-...ec2.internal systemd[1]: Started Varnish Cach...
Hint: Some lines were ellipsized, use -l to show in full.
sudo rh-varnish6-varnishncsa -g request -q 'ReqMethod eq "PURGE"'
sudo: rh-varnish6-varnishncsa: command not found
Running the command as sudo varnishncsa -g request -q 'ReqMethod eq "PURGE"' produces the same result.
Has anyone come across this issue before? I'm trying to configure my logs to look into some caching issues.
Appreciate your help.
I'm not sure how you configured Varnish on AWS, but rh-varnish6-varnishncsa is probably not the command you're looking for.
Based on your output, there's a systemd service called rh-varnish6-varnishncsa.service, which starts a varnishncsa process that logs to a specific location.
varnishlog vs varnishncsa
varnishncsa is a useful binary for gathering stats on what happens on a running Varnish system. Because its output is in NCSA format, log processors can easily extract metrics from it.
Because your origin server will get a lot fewer hits, varnishncsa can act as a surrogate for those kinds of access logs.
However, varnishncsa is not really the right tool when you want to perform debugging. The varnishlog binary is a far superior tool for that.
If you want to know what's going on with specific calls, and you want to debug PURGE calls, you can use the same vsl-query syntax you're currently using in your varnishncsa command.
Debugging with varnishlog
In your case, the following command would make sense to debug:
varnishlog -g request -q 'ReqMethod eq "PURGE"'
The output you'll get will be a lot more verbose and actionable.
A couple of years ago I wrote a really extensive blog post on how to leverage varnishlog. You can read it here. It contains information on how to use the different parameters of the varnishlog binary, as well as the vsl-query syntax.
Still use varnishncsa
varnishncsa can still play a role in your setup. If you found what you're looking for using varnishlog, you might want to look out for similar anomalies in the future.
Because of the verbosity of varnishlog, it's tough to persist these kinds of logs on disk. You could set up a separate systemd service that logs these specific cases to a dedicated log file, as sketched below.
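As a sketch of that idea (the unit name and log path here are made up; the flags are standard varnishncsa ones), such a service could look like this:

# /etc/systemd/system/varnishncsa-purge.service (hypothetical unit)
[Unit]
Description=varnishncsa logging for PURGE requests
After=varnish.service

[Service]
# -a: append to the file, -w: write to a file, -q: only log requests matching the vsl-query
ExecStart=/usr/bin/varnishncsa -a -w /var/log/varnish/purge.log -q 'ReqMethod eq "PURGE"'
Restart=on-failure

[Install]
WantedBy=multi-user.target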
As far as your rh-varnish6-varnishncsa service goes: use it as a replacement for the access_log you would usually get when running an Apache or Nginx server.
An extra piece of advice regarding AWS
If you're running Varnish on AWS, it can make a lot of sense to run EC2 instances from the official Varnish Software images.
There are images for Varnish Cache 6, but there are also images for Varnish Enterprise 6, which comes with a lot of extra features.
Have a look at the docs page for Varnish Enterprise: https://docs.varnish-software.com/.
Here's our AWS marketplace profile page, which contains the various EC2 images we offer: https://aws.amazon.com/marketplace/seller-profile?id=263c0208-6a3a-435d-8728-fa2514202fd0
To start with: I am trying to upgrade from version 1.9 to 1.10, so my setup contains two VMs running different versions of Airflow with different port forwarding.
I can access the UI from the VM running 1.9, but I am not able to access the UI from the one running 1.10.
To debug, I want to confirm whether the Airflow webserver is running. If I execute
sudo systemctl start airflow-webserver
it throws no error, but when I look at netstat I am not seeing any process listening on port 8080 (the default).
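For completeness, this is the check I'm running (standard netstat flags: -t TCP, -l listening, -n numeric, -p owning process):

sudo netstat -tlnp | grep 8080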
Also, I have not created any user, as I do not need RBAC authentication. Could that be a problem?
As requested by @kaxil, below is the output of ps aux | grep airflow
Can someone provide some suggestions on how to fix this problem? If you need any further resources I can provide them; I am not sure what is relevant here.
Output of journalctl -u airflow-webserver.service -b
The error message shows that there is an issue with the airflow.cfg file, i.e. there might be a character in your airflow.cfg that is causing the issue. Recheck your config file; if you don't find an issue, post your config file in your question and we will try to figure it out.
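One quick way to hunt for such a character (a suggestion, assuming GNU grep and the default config location ~/airflow/airflow.cfg) is to search the file for any non-ASCII bytes:

grep -nP '[^\x00-\x7F]' ~/airflow/airflow.cfg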
After my app is successfully pushed via cf, I usually need to manually SSH into the container and execute a couple of PHP scripts to clear and warm up my cache, potentially execute some DB schema updates, etc.
Today I found out about Cloud Foundry Tasks, which seem to offer a pretty neat way to do exactly this kind of thing, and I wanted to test whether I can integrate them into my build & deploy script.
So I used cf login, got successfully connected to the right org and space, the app has been pushed and is running, and I tried this command:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
(I tried it with a couple of folder variations, like app/bin/console etc.)
and this was the output:
Creating task for app MYAPP in org MYORG / space MYSPACE as me@myemail...
Unexpected Response
Response Code: 404
FAILED
CF CLI version: 6.32.0
cf logs ArcticTenTestBackend --recent does not output anything (this might be because I have enabled an ELK instance for logging; when I wanted to service-connect to ELK to look up the logs, I found out that the service-connector cf plugin is gone, for which I will open a new ticket).
I created a new issue for that: https://github.com/cloudfoundry/cli/issues/1242
This is not a CF CLI issue. Swisscom Application Cloud does not yet support Cloud Foundry tasks, which explains the 404 you are currently receiving. We will expose this feature of Cloud Foundry in an upcoming release of Swisscom Application Cloud.
In the meantime, maybe you can find a way to execute your one-off tasks (cache warming, DB migrations) at application startup.
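For example (a sketch, not Swisscom-specific advice): Cloud Foundry executes a .profile script from the application root before starting the app, so the one-off commands could run there. Note the --force flag below replaces the --dump-sql from the question, since at startup you would want to actually apply the changes:

# .profile in the application root; runs on every instance start,
# so everything here must be safe to repeat
bin/console cache:clear --env=prod
bin/console doctrine:schema:update --force --env=prod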
As mentioned by @Mathis Kretz, Swisscom has gotten around to enabling cf run-task since this question was posted. They sent out e-mails on 22 November 2018 to announce the feature.
As described in the documentation you linked, you use the following commands to manage tasks:
cf tasks [APP_NAME]
cf run-task [APP_NAME] [COMMAND]
cf terminate-task [APP_NAME] [TASK_ID]
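Applied to the command from the question, that would look something like this (app and task names as used above):

cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
cf tasks MYAPP

cf tasks lists each task's ID and state; that ID is what cf terminate-task expects.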
I have a problem with my application logs on my Cloud Foundry deployment.
I've deployed Cloud Foundry in a somewhat minimized design based on the tiny-aws deployment of https://github.com/cloudfoundry-community/cf-boshworkspace.
I further minimized the deployment and put everything from the "api", "backbone", "health" and "services" VMs together on the api machines.
So I have the following VMs:
api (2 instances)
data (1 instance)
runner (2 instances)
haproxy (1 public and 1 private proxy)
The Cloud Foundry version is 212.
The deployment itself seems to work: I can deploy apps and they start up.
But the logs from my applications don't show up when I run
cf logs my-app --recent
I've tried several log configurations in my Spring Boot app:
- the standard one, without modifications, which should log to STDOUT according to the Spring Boot documentation
- an explicitly set log4j.properties file, likewise configured to log to STDOUT (see the sketch after this list)
- a log4j 2 configuration logging to STDOUT
- a Spring Boot configuration which logs to a file
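For reference, the log4j.properties variant was essentially a minimal console appender like this (a sketch using the standard log4j 1.x class names):

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c - %m%n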
In the last configuration, the file was created and my logs were shown when I ran cf files my-app log/my-app.log.
I tried to debug where my logs get lost, but I couldn't find anything.
The dea_logging_agent seems to run and has the correct NATS location configured, as does the DEA itself.
Loggregator also seems to run fine on the api host and appears to be connected to NATS as well.
So my question is: In which locations should I search to find out where my logs go?
Thank you very much.
Please feel free to redirect me to any other place if this isn't the right one for this question.
Problem: when I log in to the administration panel at localhost:8083 with root/root, I cannot see the existing databases nor the data in them. Also, I have no way to access InfluxDB from the command line.
The line sudo /etc/init.d/influxdb start does not work for my setup either; I have to go into /etc/init.d/ and run sudo ./influxdb start -config=config.toml in order to get the server running.
I've installed InfluxDB v0.8 from https://influxdb.com/docs/v0.8/introduction/installation.html for Ubuntu 14.04.
I've been developing a Clojure program using the Capacitor API just to get started and interact with InfluxDB. It runs well: I can create, delete, insert into and query a database without problems.
netstat -anp | grep LISTEN confirms that ports 8083, 8086, 8090 and 8099 are listening.
I've been Googling all around but cannot manage to find a solution.
Thanks for the support, and enjoy building things!
Problem solved: the databases weren't visible in Firefox, but everything is visible in Chromium!
Why couldn't I access the CLI? I was expecting v0.8 to behave exactly like v0.9.
Your help was appreciated anyway!
For InfluxDB 0.9, the CLI can be started with:
/opt/influxdb/influx
then you can display available databases:
Connected to http://localhost:8086 version 0.9.1
InfluxDB shell 0.9.1
> show databases
name: databases
---------------
name
collectd
graphite
> use collectd
Using database collectd
> show series limit 5
You can try creating a new database from the CLI:
> CREATE DATABASE mydb
or with curl command:
curl -G 'http://localhost:8086/query' --data-urlencode "q=CREATE DATABASE mydb"
The web UI should be available at http://localhost:8083.
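To check what the HTTP API sees, you can also list the databases with curl via the same /query endpoint as above:

curl -G 'http://localhost:8086/query' --data-urlencode "q=SHOW DATABASES"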