sudo apache2 stop doesn't work - django

Not sure why, but when I run:
ubuntu@ip-10-46-206-16:/etc/init.d$ sudo apache2 stop
Usage: apache2 [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S] [-X]
Options:
-D name : define a name for use in <IfDefine name> directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
-X : debug mode (only one worker, do not detach)
it doesn't seem to stop the server. I can still hit the IP and it returns the default page. Is there a reason why?

Try:
sudo service apache2 stop
See here:
https://help.ubuntu.com/community/ApacheMySQLPHP
The older way would be:
sudo /etc/init.d/apache2 stop
Note that when you do sudo apache2 stop, you are running apache2 from your PATH, not from the current folder (usually, . is not in the PATH). Try sudo ./apache2 stop for that.
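To see which file the bare command actually resolves to, a quick sketch (the output shown is illustrative for a typical Ubuntu install):
$ command -v apache2
/usr/sbin/apache2
$ ls -l /etc/init.d/apache2   # the init script is a different file entirely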
See here:
http://www.cyberciti.biz/faq/ubuntu-linux-start-restart-stop-apache-web-server/

This is not a programming question. I guess you have to look for answers at http://serverfault.com

You have to type the full path to the apache2 init script. Example:
sudo /etc/init.d/apache2 stop

Check:
ubuntu@ip-10-46-206-16:/etc/init.d$ ./apache2 stop
This will solve your problem.

One thing is missing in the command; I encountered the same issue:
sudo apache2 stop
Should be
sudo service apache2 stop
Hope no one searches too far for the missing service part of the command :)


Connect to VPN with Podman

Having this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
&& dnf install -y \
openssh-clients \
openvpn \
slirp4netns \
&& dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Building the image with:
podman build -t peque/vpn .
If I try to run it with (note $(pwd), where the VPN configuration and credentials are stored):
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
I get the following error:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Any ideas on how I could fix this? I would not mind changing the base image if that could help (i.e. switching to Alpine or anything else, as long as it allows me to use openvpn for the connection).
System information
Using Podman 1.4.4 (rootless) and Fedora 30 distribution with kernel 5.1.19.
/dev/net/tun permissions
Running the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then, from the container, I can:
# ls -l /dev/ | grep net
drwxr-xr-x. 2 root root 60 Jul 23 07:31 net
I can also list /dev/net, but I get a "permission denied" error:
# ls -l /dev/net
ls: cannot access '/dev/net/tun': Permission denied
total 0
-????????? ? ? ? ? ? tun
Trying --privileged
If I try with --privileged:
podman run -v $(pwd):/vpn:Z --privileged --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then instead of the permission-denied error (errno=13), I get a no-such-file-or-directory error (errno=2):
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
I can effectively verify there is no /dev/net/ directory when using --privileged, even if I pass the --cap-add=NET_ADMIN --device=/dev/net/tun parameters.
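A quick way to see this side by side, as a sketch (it overrides the image's CMD with ls, which is available in the Fedora base image):
$ podman run --rm --cap-add=NET_ADMIN --device=/dev/net/tun peque/vpn ls -l /dev/net
$ podman run --rm --privileged --cap-add=NET_ADMIN --device=/dev/net/tun peque/vpn ls -l /dev/net
The first command lists the tun entry (possibly with a permission error, as shown above), while the second fails because /dev/net is absent.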
Verbose log
This is the log I get when configuring the client with verb 3:
OpenVPN 2.4.7 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
library versions: OpenSSL 1.1.1c FIPS 28 May 2019, LZO 2.08
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:1194
Socket Buffers: R=[212992->212992] S=[212992->212992]
UDP link local (bound): [AF_INET][undef]:0
UDP link remote: [AF_INET]xx.xx.xx.xx:1194
TLS: Initial packet from [AF_INET]xx.xx.xx.xx:1194, sid=3ebc16fc 8cb6d6b1
WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
VERIFY OK: depth=1, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=internal-ca
VERIFY KU OK
Validating certificate extended key usage
++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
VERIFY EKU OK
VERIFY OK: depth=0, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=ovpn.server.address
Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
[ovpn.server.address] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:1194
SENT CONTROL [ovpn.server.address]: 'PUSH_REQUEST' (status=1)
PUSH: Received control message: 'PUSH_REPLY,route xx.xx.xx.xx 255.255.255.0,route xx.xx.xx.0 255.255.255.0,dhcp-option DOMAIN server.net,dhcp-option DNS xx.xx.xx.254,dhcp-option DNS xx.xx.xx.1,dhcp-option DNS xx.xx.xx.1,route-gateway xx.xx.xx.1,topology subnet,ping 10,ping-restart 60,ifconfig xx.xx.xx.24 255.255.255.0,peer-id 1'
OPTIONS IMPORT: timers and/or timeouts modified
OPTIONS IMPORT: --ifconfig/up options modified
OPTIONS IMPORT: route options modified
OPTIONS IMPORT: route-related options modified
OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
OPTIONS IMPORT: peer-id set
OPTIONS IMPORT: adjusting link_mtu to 1624
Outgoing Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Outgoing Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Incoming Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
ROUTE_GATEWAY xx.xx.xx.xx/255.255.255.0 IFACE=tap0 HWADDR=0a:38:ba:e6:4b:5f
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Exiting due to fatal error
Error number may change depending on whether I run the command with --privileged or not.
It turns out that you are blocked by SELinux: after running the client container and trying to access /dev/net/tun inside it, you will get the following AVC denial in the audit log:
type=AVC msg=audit(1563869264.270:833): avc: denied { getattr } for pid=11429 comm="ls" path="/dev/net/tun" dev="devtmpfs" ino=15236 scontext=system_u:system_r:container_t:s0:c502,c803 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=0
To allow your container to configure the tunnel while staying not fully privileged and keeping SELinux enforcing, you need to customize the SELinux policy a bit. However, I did not find an easy way to do this properly.
Luckily, there is a tool called udica, which can generate SELinux policies from container configurations. It does not provide the desired policy on its own and requires some manual intervention, so I will describe how I got the openvpn container working step-by-step.
First, install the required tools:
$ sudo dnf install policycoreutils-python-utils policycoreutils udica
Create the container with the required privileges, then generate the policy for it:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --name ovpn peque/vpn
$ podman inspect ovpn | sudo udica -j - ovpn_container
Policy ovpn_container created!
Please load these modules using:
# semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
Restart the container with: "--security-opt label=type:ovpn_container.process" parameter
Here is the policy which was generated by udica:
$ cat ovpn_container.cil
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
)
Let's try this policy (note the --security-opt option, which tells podman to run the container in the newly created domain):
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Ugh. Here is the problem: the policy generated by udica still does not know about the specific requirements of our container, as they are not reflected in its configuration (well, it could probably infer that you want to allow operations on tun_tap_device_t from the fact that you requested --device /dev/net/tun, but...). So, we need to customize the policy by extending it with a few more statements.
Let's disable SELinux temporarily and run the container to collect the expected denials:
$ sudo setenforce 0
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
These are:
$ sudo grep denied /var/log/audit/audit.log
type=AVC msg=audit(1563889218.937:839): avc: denied { read write } for pid=3272 comm="openvpn" name="tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:840): avc: denied { open } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:841): avc: denied { ioctl } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 ioctlcmd=0x54ca scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.947:842): avc: denied { nlmsg_write } for pid=3273 comm="ip" scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tclass=netlink_route_socket permissive=1
Or more human-readable:
$ sudo grep denied /var/log/audit/audit.log | audit2allow
#============= ovpn_container.process ==============
allow ovpn_container.process self:netlink_route_socket nlmsg_write;
allow ovpn_container.process tun_tap_device_t:chr_file { ioctl open read write };
OK, let's modify the udica-generated policy by adding the advised allows to it (note that here I manually translated the syntax to CIL):
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
; This is our new stuff.
(allow process tun_tap_device_t ( chr_file ( ioctl open read write )))
(allow process self ( netlink_route_socket ( nlmsg_write )))
)
Now we turn SELinux back on, reload the module, and check that the container works correctly when we specify our custom domain:
$ sudo setenforce 1
$ sudo semodule -r ovpn_container
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
Initialization Sequence Completed
Finally, check that other containers still do not have these privileges:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Yay! We keep SELinux on and allow the tunnel configuration only for our specific container.
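As a usage note, if you later want to undo the customization, the module can be removed the same way it was removed before reloading above:
$ sudo semodule -r ovpn_container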

zabbix agent service failed, PID not readable

I'm trying to run zabbix-agent 3.0.4 on CentOS 7. systemd fails to start the Zabbix agent; from journalctl -xe:
PID file /run/zabbix/zabbix_agentd.pid not readable (yes?) after start.
node=localhost.localdomain type=SERVICE_START msg=audit(1475848200.601:17994): pid=1 uid=0 auid=4294967298 ses=...
zabbix-agent.service never wrote its PID file. Failing.
Failed to start Zabbix Agent.
There is no permission error, and I tried re-configuring the PID path to the /tmp folder in zabbix-agent.service and zabbix_agentd.conf; it doesn't work.
Very weird. Anyone have an idea? Thank you in advance.
=====
Investigating a little bit: the PID file should be under the /run/zabbix folder. I created zabbix_agentd.pid manually, and it disappears after 1 second. Really weird.
I had the same issue and it was related to SELinux, so I allowed zabbix_agent_t via semanage:
yum install policycoreutils-python
semanage permissive -a zabbix_agent_t
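To verify it really is SELinux, and to avoid leaving the domain permissive forever, a hedged sketch (the module name zabbix_agent_local is arbitrary):
# check for recent AVC denials mentioning zabbix (requires the audit tools)
sudo ausearch -m avc -ts recent | grep zabbix
# build and load a targeted module from the logged denials
sudo grep zabbix /var/log/audit/audit.log | audit2allow -M zabbix_agent_local
sudo semodule -i zabbix_agent_local.pp
# then drop the permissive exception again
sudo semanage permissive -d zabbix_agent_t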
Giving full permissions (777) to that PID file can also help resolve the issue.
I had this too and it was SELinux; it was disabled but I had to run the command.
That worked for me.
Prerequisites: CentOS 7, zabbix-server 3.4 and zabbix-agent 3.4 running on the same host.
Solution steps:
Install zabbix-server and zabbix-agent (no matter how: via yum or by building from source code).
First check whether separate zabbix users already exist in /etc/passwd. If they do, go to step 4.
Create separate groups and users for zabbix-server and zabbix-agent.
Example (you can pick whatever usernames you like):
groupadd zabbix-agent
useradd -g zabbix-agent zabbix-agent
groupadd zabbix
useradd -g zabbix zabbix
Specify the PID and log file locations in the Zabbix config files. Example:
For zabbix-server: in /etc/zabbix/zabbix_server.conf:
PidFile=/run/zabbix/zabbix_server.pid
LogFile=/var/log/zabbix/zabbix_server.log
For zabbix-agent: in /etc/zabbix/zabbix_agentd.conf:
PidFile=/run/zabbix-agent/zabbix-agent.pid
LogFile=/var/log/zabbix-agent/zabbix-agent.log
Create the appropriate directories (if they haven't been created previously) as specified in the config files, and change the owners of these directories:
mkdir /var/log/zabbix-agent
mkdir /run/zabbix-agent
chown zabbix-agent:zabbix-agent /var/log/zabbix-agent
chown zabbix-agent:zabbix-agent /run/zabbix-agent
mkdir /var/log/zabbix
mkdir /run/zabbix
chown zabbix:zabbix /var/log/zabbix
chown zabbix:zabbix /run/zabbix
Check the systemd configs for the zabbix services and add User= and Group= entries in the [Service] section to set which user the services will run under. Example:
For zabbix-server: /etc/systemd/system/multi-user.target.wants/zabbix-server.service:
[Unit]
Description=Zabbix Server
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_server.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-server
Type=forking
Restart=on-failure
PIDFile=/run/zabbix/zabbix_server.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_server -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
TimeoutSec=0
User=zabbix
Group=zabbix
[Install]
WantedBy=multi-user.target
For zabbix-agent: /etc/systemd/system/multi-user.target.wants/zabbix-agent.service:
[Unit]
Description=Zabbix Agent
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_agentd.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-agent
Type=forking
Restart=on-failure
PIDFile=/run/zabbix-agent/zabbix-agent.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
User=zabbix-agent
Group=zabbix-agent
[Install]
WantedBy=multi-user.target
If there are no such configs, you can find them in /usr/lib/systemd/system/, or enable zabbix-agent.service and thereby create a symlink in the /etc/systemd/system/multi-user.target.wants/ directory pointing to /usr/lib/systemd/system/zabbix-agent.service, as sketched below.
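For example (a sketch using the unit names from above):
sudo systemctl daemon-reload
sudo systemctl enable zabbix-server.service zabbix-agent.service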
Run services:
systemctl start zabbix-server
systemctl start zabbix-agent
Check which users the services were started under (first column):
ps aux | grep zabbix
or via the top command.
Disable SELinux and Firewalld and you're good to go

Unable to set ASG instance hostnames using Troposphere/CloudFormation UserData

Here's the relevant part of my Troposphere file:
LaunchConfiguration = t.add_resource(LaunchConfiguration(
"LaunchConfigA",
ImageId=UBUNTU_IMG,
SecurityGroups=[Ref(SecurityGroup)],
InstanceType="m3.medium",
UserData=Base64(Join('', [
"#cloud-boothook\n",
"#!/bin/bash\n",
"sudo hostname test\n",
"sudo sh -c 'echo test > /etc/hostname'\n",
"sudo sh -c 'echo 127.0.0.1 test >> /etc/hosts'\n",
"sudo touch /var/log/TESTING\n"
])),
))
AutoScalingGroupA = t.add_resource(AutoScalingGroup(
"GroupA",
AvailabilityZones=GetAZs(Ref(AWS_REGION)),
LaunchConfigurationName=Ref(LaunchConfiguration),
MinSize="1",
DesiredCapacity="2",
MaxSize="2",
))
When I create a brand new CloudFormation stack from this template, the hostnames on the instances look like ip-172-XXX-XXX-XXX, the default.
I am certain that the script is running, because of my TESTING file:
atrose@ip-172-31-32-40:~$ ls -la /var/log/TESTING
-rw-r--r-- 1 root root 0 Jul 14 20:10 /var/log/TESTING
If I run the script manually, the hostname is properly set. Like so:
atrose@ip-172-31-32-40:~$ hostname
ip-172-31-32-40
atrose@ip-172-31-32-40:~$ sudo cat /var/lib/cloud/instance/user-data.txt
#cloud-boothook
#!/bin/bash
sudo hostname test
sudo sh -c 'echo test > /etc/hostname'
sudo sh -c 'echo 127.0.0.1 test >> /etc/hosts'
atrose@ip-172-31-32-40:~$ sudo bash /var/lib/cloud/instance/user-data.txt
atrose@ip-172-31-32-40:~$ hostname
test
How can I set hostnames on instances when they first boot into an ASG?
It looks like you're using an Ubuntu AMI, which means cloud-init should have a hostname parameter built into it, and you shouldn't need a shell script to do what you want. I'm going to guess that cloud-init itself is colliding with your script. You should check this out:
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L540
Let me know if you have any questions about how to use that. Thanks!
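For reference, a minimal cloud-config user data along those lines might look like this (a sketch based on the cloud-init documentation linked above; the fqdn value is an assumption):
#cloud-config
hostname: test
fqdn: test.example.com
With this approach cloud-init sets the hostname itself, so the boothook shell script becomes unnecessary.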

Docker env variables not set when logging in via shell

In my settings file I am getting env variables like this:
'NAME': os.environ['PG_DBNAME'], # Database
I am setting them in the docker run command like this:
-e PG_DBNAME="mapp"
Now:
The web app works fine.
If I log into a shell via docker exec ... bash, the env variables are also set.
But if I log in via IP address and port number from an SSH client, I am able to log in but the env variables are not set.
As commented in issue 2569:
This is expected. SSH wipes out the environment as part of the login process.
One way to work around it is to dump the environment variables in /etc/environment (e.g. env | grep _ >> /etc/environment) before starting Supervisor.
Further "login processes" should source this file, and tada! There is your environment.
That env | grep _ >> /etc/environment could be part of a default run script associated (through ENTRYPOINT or CMD) with your image.
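For instance, a minimal entrypoint sketch (the file name and the exec hand-off are assumptions, not from the original issue):
#!/bin/sh
# entrypoint.sh: persist the container's env vars so later SSH login shells can pick them up
env | grep _ >> /etc/environment
# then hand off to the container's main process (supervisord, sshd, ...)
exec "$@"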
Daniel A.A. Pelsmaeker suggests jenkinsci/docker-ssh-agent issue 33 for an approach that selects and sets all environment variables excluding a specific denylist:
For my own uses I changed that line to the following:
env | egrep -v "^(HOME=|USER=|MAIL=|LC_ALL=|LS_COLORS=|LANG=|HOSTNAME=|PWD=|TERM=|SHLVL=|LANGUAGE=|_=)" >> /etc/environment
This takes all environment variables, except those listed, and appends them to /etc/environment, overriding any previously defined there.
I also had the exact same problem. I found that the example on docs.docker.com appends variables by echoing to /etc/profile, which is not the nicest way to do it. So here is my solution:
Docker build:
I execute the docker build with the following command, which also fetches the http_proxy, https_proxy and no_proxy variables from the current shell session. The variables are passed as arguments with the --build-arg option.
[root@localhost dock-centOS]# docker build
--build-arg http_proxy="{{ lookup('env', 'http_proxy')}}"
--build-arg https_proxy="{{ lookup('env', 'https_proxy')}}"
--build-arg no_proxy="{{ lookup('env', 'no_proxy')}}"
-t my_pv_repo:centOS-with-sshd .
Dockerfile:
I use the following Dockerfile snippet for setting the environment variables for all users. The ARG command is used instead of ENV because I don't want Docker to persist my variables in the image; an ARG variable is only available during the docker build. The RUN command creates a bash script which is placed in the /etc/profile.d directory. When a login shell starts in the container, the /etc/profile script runs and sources all readable files in the /etc/profile.d directory.
FROM centos:7.3.1611
ARG http_proxy=$http_proxy
ARG https_proxy=$https_proxy
ARG no_proxy=$no_proxy
ARG JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
ARG DOMAIN_HOME=/home/oracle/w001/D1/app/user_projects/domains/fancy_app_domain
ARG PATH=$PATH:/usr/lib/jvm/jdk1.6.0_45/bin
ARG XAUTHORITY=~/.Xauthority
RUN shebang='#!/usr/bin/env bash'; \
env_vars="export http_proxy=${http_proxy} https_proxy=${https_proxy} no_proxy=${no_proxy}"; \
env_vars+=' JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45 DOMAIN_HOME=/home/oracle/w001/D1/app/user_projects/domains/fancy_app_domain'; \
env_vars+=" PATH=${PATH}:/usr/lib/jvm/jdk1.6.0_45/bin XAUTHORITY=${XAUTHORITY}"; \
echo $shebang$'\n'$env_vars > /etc/profile.d/env_vars.sh
Test result: let's hit the CLI to check whether our environment variables are available during an SSH session.
[root@localhost vagrant]# docker exec -u root -it centOS-container bash
[root@33e7efab489c /]#
[root@33e7efab489c /]#
[root@33e7efab489c /]# cat /etc/profile.d/env_vars.sh
#!/usr/bin/env bash
export http_proxy=http://10.0.2.2:3128 https_proxy=http://10.0.2.2:3128 no_proxy=localhost,127.0.0.1 JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45 DOMAIN_HOME=/home/oracle/w001/D1/app/user_projects/domains/fancy_app_domain PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/jdk1.6.0_45/bin XAUTHORITY=~/.Xauthority
[root@33e7efab489c /]#
[root@33e7efab489c /]#
[root@33e7efab489c /]# printenv
HOSTNAME=33e7efab489c
TERM=xterm
http_proxy=http://10.0.2.2:3128
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/jdk1.6.0_45/bin
DOMAIN_HOME=/home/oracle/w001/D1/app/user_projects/domains/fancy_app_domain
PWD=/
JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
LANG=en_US.UTF-8
https_proxy=http://10.0.2.2:3128
SHLVL=1
HOME=/root
no_proxy=localhost,127.0.0.1
XAUTHORITY=/root/.Xauthority
_=/usr/bin/printenv
[root@33e7efab489c /]#
[root@33e7efab489c /]#
[root@33e7efab489c /]# exit
[root@localhost vagrant]# exit
[vagrant@localhost ~]$ logout
Connection to 127.0.0.1 closed.
me@my-mac$ ssh -X root@localhost -p 7022 -o UserKnownHostsFile=/dev/null -o IdentityFile=/development/workspace/supercalifragilisticexpialidocious-app/.vagrant/machines/default/virtualbox/private_key
The authenticity of host '[localhost]:7022 ([127.0.0.1]:7022)' can't be established.
ECDSA key fingerprint is SHA256:dTd/vsmPTbrA3kPeIfArZMFEgfdlgjGHwMgE3Z5BgBc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:7022' (ECDSA) to the list of known hosts.
/usr/bin/xauth: file /root/.Xauthority does not exist
[root@33e7efab489c ~]# su - oracle
bash-4.2$
bash-4.2$
bash-4.2$ printenv
HOSTNAME=33e7efab489c
SHELL=/bin/bash
TERM=xterm-256color
HISTSIZE=1000
http_proxy=http://10.0.2.2:3128
USER=oracle
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
MAIL=/var/spool/mail/oracle
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/jdk1.6.0_45/bin
DOMAIN_HOME=/home/oracle/w001/D1/app/user_projects/domains/fancy_app_domain
PWD=/home/oracle
JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
LANG=en_US.UTF-8
https_proxy=http://10.0.2.2:3128
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/oracle
no_proxy=localhost,127.0.0.1
LOGNAME=oracle
XAUTHORITY=/home/oracle/.Xauthority
_=/usr/bin/printenv

Where can I find the error logs of nginx, using FastCGI and Django?

I'm using Django with FastCGI + nginx. Where are the logs (errors) stored in this case?
Errors are stored in the nginx log file. You can specify it in the root of the nginx configuration file:
error_log /var/log/nginx/nginx_error.log warn;
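After changing that directive, it's worth validating and reloading the configuration (a quick sketch):
sudo nginx -t          # test the configuration for syntax errors
sudo nginx -s reload   # signal the master process to reload it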
On Mac OS X with Homebrew, the log file was found by default at the following location:
/usr/local/var/log/nginx
I was looking for a different solution.
Error logs, by default, before any configuration is set, on my system (x86 Arch Linux), were found in:
/var/log/nginx/error.log
You can use lsof (list of open files) in most cases to find open log files without knowing the configuration.
Example:
Find the PID of httpd (the same concept applies for nginx and other programs):
$ ps aux | grep httpd
...
root 17970 0.0 0.3 495964 64388 ? Ssl Oct29 3:45 /usr/sbin/httpd
...
Then search for open log files using lsof with the PID:
$ lsof -p 17970 | grep log
httpd 17970 root 2w REG 253,15 2278 6723 /var/log/httpd/error_log
httpd 17970 root 12w REG 253,15 0 1387 /var/log/httpd/access_log
If lsof prints nothing, even though you expected the log files to be found, issue the same command using sudo.
Run this command, to check error logs:
tail -f /var/log/nginx/error.log
My nginx logs are located here:
/usr/local/var/log/nginx/*
You can also check your nginx.conf to see if you have any directives dumping to a custom log.
Run nginx -t to locate your nginx.conf.
# in nginx.conf
error_log /usr/local/var/log/nginx/error.log;
error_log /usr/local/var/log/nginx/error.log notice;
error_log /usr/local/var/log/nginx/error.log info;
Nginx is usually set up in /usr/local or /etc/. The server could be configured to dump logs to /var/log as well.
If you have an alternate location for your nginx install and all else fails, you could use the find command to locate your file of choice.
find /usr/ -path "*/nginx/*" -type f -name '*.log', where /usr/ is the folder you wish to start searching from.
Log locations on common servers:
Apache – /var/log/httpd/
IIS – C:\inetpub\wwwroot\
Node.js – /var/log/nodejs/
nginx – /var/log/nginx/
Passenger – /var/app/support/logs/
Puma – /var/log/puma/
Python – /opt/python/log/
Tomcat – /var/log/tomcat8
Type this command in the terminal:
sudo cat /var/log/nginx/error.log
For Mac OS users, you can type nginx -help in your terminal.
nginx version: nginx/1.21.0
Usage: nginx [-?hvVtTq] [-s signal] [-p prefix]
[-e filename] [-c filename] [-g directives]
Options:
-?,-h : this help
-v : show version and exit
-V : show version and configure options then exit
-t : test configuration and exit
-T : test configuration, dump it and exit
-q : suppress non-error messages during configuration testing
-s signal : send signal to a master process: stop, quit, reopen, reload
-p prefix : set prefix path (default: /opt/homebrew/Cellar/nginx/1.21.0/)
-e filename : set error log file (default: /opt/homebrew/var/log/nginx/error.log)
-c filename : set configuration file (default: /opt/homebrew/etc/nginx/nginx.conf)
-g directives : set global directives out of configuration file
Then, you could find some default paths for the configuration and log files, in this case:
/opt/homebrew/var/log/nginx/error.log
cd /var/log/nginx/
cat error.log
It is good practice to set where the access log should go in the nginx configuration file, using access_log /path/, like this:
keyval $remote_addr:$http_user_agent $seen zone=clients;
server {
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
if ($seen = "") {
set $seen 1;
set $logme 1;
}
access_log /tmp/sslparams.log sslparams if=$logme;
error_log /pathtolog/error.log;
# ...
}
I found it in /usr/local/nginx/logs/*.