How to enable logging in OpenConnect VPN - openconnect

I'm new to openconnect (https://github.com/openconnect/openconnect.git). Can someone please tell me how I can redirect all the logs to a file in openconnect, and how to change the log level?
Thanks in advance.

This is working for me. I am adding --timestamp, which will:
Prepend a timestamp to each progress message
and --syslog, which means:
After tunnel is brought up, use syslog for further progress messages
export vpn_server="<YOUR IP ADDRESS>"
export vpn_username="<YOUR USERNAME>"
sudo openconnect --syslog --timestamp --servercert <YOUR SERVER CERT FINGERPRINT> --protocol=anyconnect -u $vpn_username $vpn_server
Then in another terminal tab, tail the messages
tail -f /var/log/syslog
This bit was taken from https://askubuntu.com/a/1062368
More info about other parameters is here https://www.infradead.org/openconnect/manual.html
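If you want everything in a plain file instead of syslog, a minimal sketch (the log path is just an example) is to raise verbosity with -v, which can be repeated for more detail, and capture stdout and stderr together:
sudo openconnect -vv --timestamp --protocol=anyconnect -u $vpn_username $vpn_server 2>&1 | sudo tee /var/log/openconnect.log
Each additional -v increases the amount of progress output, which is effectively how you change the log level.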

Related

Zabbix Agent Auto Registration - Zabbix Server

I am trying to register the Zabbix agent with the Zabbix server automatically, but it seems I am missing something; when I do it via the UI (manually), it works. Can someone help me with this?
My configurations -
/etc/zabbix/zabbix_agentd.conf
Server=127.0.0.1,{zabbix-server-ip}
ServerActive=DNS Name
HostMetadata=ubuntu (the string with which I am doing the configuration in the UI)
Any thoughts on this would be appreciated.
I wrote a script for auto-registration of the Zabbix agent with the Zabbix server.
For auto discovery, you need to set rules in the Zabbix UI/frontend with HostMetadata under Configuration → Actions (see the auto-registration documentation linked below).
Then you can move ahead with script execution on the agent.
# Note: this script takes zabbixserverip and metadatastring as input at runtime:
#   sh script.sh internal-dns-name free-string
#!/bin/bash
zabbixserverip=$1
metadatastring=$2
apt update -y
apt install -y zabbix-agent
sed -i "s/Server=127.0.0.1/Server=$zabbixserverip/g" /etc/zabbix/zabbix_agentd.conf
echo "ServerActive=$zabbixserverip" >> /etc/zabbix/zabbix_agentd.conf
echo "HostMetadata=$metadatastring" >> /etc/zabbix/zabbix_agentd.conf
systemctl restart zabbix-agent
systemctl status zabbix-agent
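As a quick sanity check after the script runs, the rewritten values should show up in the agent config (the IP below is just an example):
$ grep -E '^(Server|ServerActive|HostMetadata)=' /etc/zabbix/zabbix_agentd.conf
Server=192.0.2.10
ServerActive=192.0.2.10
HostMetadata=ubuntu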
First, you need to set both Server and ServerActive to the IP or DNS name of the Zabbix server.
For auto discovery, you need to set rules in the Zabbix UI/frontend with HostMetadata under Configuration → Actions.
Check if this link helps you:
https://www.zabbix.com/documentation/4.2/manual/discovery/auto_registration
You are configuring Server and ServerActive wrong: both need to be set to the IP/DNS of the Zabbix Proxy/Server, not the local IP/DNS.

Using AWS IoT and Mosquitto client causes a TLS error

I am new to AWS IoT. I am following the example in the link below about setting up JITP (Just In Time Provisioning). Everything goes fine: registering the Root CA, the private key verification certificate, etc.
https://aws.amazon.com/blogs/iot/setting-up-just-in-time-provisioning-with-aws-iot-core/
I don't normally post "what is wrong with my code" kind of questions, but I'm out of ideas with this one. When I try the last step, using the MQTT Mosquitto client to connect and publish to AWS IoT Core with the command below:
mosquitto_pub --cafile root.cert --cert deviceCertAndCACert.crt --key deviceJITPCert.key -h a9bqki6ij1hx9.iot.us-east-1.amazonaws.com -p 8883 -q 1 -t foo/bar -I anyclientID --tls-version tlsv1.2 -m "Hello" -d
I get the error:
Client anyclientID4406 sending CONNECT
Error: A TLS error occurred.
I can't understand whatsoever what the problem with the handshake is here. All the keys and certificates were generated correctly. I have tried it from the start, time and time again. Perhaps I'm missing one obvious step. If anyone with some experience might know what is going wrong, I'd greatly appreciate it.
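One way to get more detail than Mosquitto's generic "A TLS error occurred" is to run the handshake directly with openssl s_client, reusing the same endpoint and files from the question (a debugging sketch, not a fix):
openssl s_client -connect a9bqki6ij1hx9.iot.us-east-1.amazonaws.com:8883 -CAfile root.cert -cert deviceCertAndCACert.crt -key deviceJITPCert.key
The verify results and alerts it prints usually show whether the server rejected the client certificate chain or whether the CA file does not match.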

How to redirect the screen log into syslog in cassandra-snapshotter 1.0.0

I'm using cassandra-snapshotter to take a backup of a cluster and upload it to S3, but when I execute the backup command:
cassandra-snapshotter --aws-access-key-id=xxxxxxxxxxxxx --aws-secret-access-key=xxxxxxxxxx --s3-bucket-name=xxxxx --s3-bucket-region=us-east-1 --s3-base-path=xxxx backup --hosts=xx.xx.xx.xx --keyspace xxxx --user=xxxxx --password=xxxxx
the following logs are printed on screen:
[xx.xx.xx.xx] sudo: cassandra-snapshotter-agent --incremental_backups put --s3-bucket-name=xxxx --s3-bucket-region=us-east-1 --s3-base-path=xxxxxxx/20160531104350/xx.xx.xx.xx --manifest=/tmp/backupmanifest --bufsize=64 --concurrency=4 --aws-access-key-id=xxxxxx --aws-secret-access-key=xxxxx
[xx.xx.xx.xx] out: lzop 1.03
[xx.xx.xx.xx] out: LZO library 2.06
[xx.xx.xx.xx] out: Copyright (C) 1996-2010 Markus Franz Xaver Johannes Oberhumer
[xx.xx.xx.xx] out:
[xx.xx.xx.xx] out: cassandra_snapshotter.agent INFO MSG: Initialized multipart upload for file /var/lib/cassandra/data/test/my_table-3035993026f911e695834dae91308d63/snapshots/20160531124729/test-my_table-ka-24-Index.db to 20160519/20160531124729/xx.xx.xx.xx//var/lib/cassandra/data/test/my_table-3035993026f911e695834dae91308d63/snapshots/20160531124729/test-my_table-ka-24-Index.db.lzo
.......
.......
I want to move this screen log into /dev/log (syslog). How can I move this output into that particular destination with a small change to the code?
Is it possible to redirect the log?
In cassandra_snapshotter 1.0.0 there is already logging code in logging_helper.py; how does it work, and where does it redirect the log?
Can anyone help me solve this problem?
cassandra-snapshotter <args> | logger
It looks like the lzop output is being generated from compressed_pipe in utils.py, and every call run with fabric.api.sudo will echo to stdout (see http://docs.fabfile.org/en/1.11/api/core/operations.html#fabric.operations.run). These aren't handled by the logging handler configured in configure() in logging_helper.py.
To get the stdout of the cassandra-snapshotter call sent to syslog, I would hope that you could just use logger, which, from the man page, is "a shell command interface to the syslog(3) system log module" (see http://linux.die.net/man/1/logger).
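A slightly fuller sketch of that pipe (the tag and priority below are just examples; logger's standard -t and -p options set them):
cassandra-snapshotter <args> 2>&1 | logger -t cassandra-snapshotter -p user.info
The 2>&1 matters because fabric echoes remote output on stdout while errors may land on stderr; logger then forwards every line to syslog, which on Debian/Ubuntu systems typically ends up in /var/log/syslog.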

Django Celery cannot connect to remote RabbitMQ on EC2

I created a RabbitMQ cluster on two instances on EC2. My Django app uses Celery for async tasks, which in turn uses RabbitMQ as the message queue.
Whenever I start celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting following error message related to remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both the cluster nodes. However, it didn't help.
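For reference, the permissions that were actually granted can be double-checked with rabbitmqctl (the vhost is the one from the command above):
sudo rabbitmqctl list_permissions -p myvhost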
In the log file, I see a few entries like the ones below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS to bind it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
Turns out I had not created the appropriate configuration files. In my case (Ubuntu 14.04), I had to create the two configuration files below:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to ssh into the instance. It can be obtained with the ip a command.
$ cat /etc/rabbitmq/rabbitmq.config
[
 {mnesia, [{dump_log_write_threshold, 1000}]},
 {rabbit, [{tcp_listeners, [25672]},
           {loopback_users, []}]}
].
I think the {tcp_listeners, [25672]} entry was one of the most important pieces of configuration that I was missing. (Note that both rabbit settings belong in a single {rabbit, [...]} tuple, as above; listing the rabbit application twice in the config list is not reliable.)
Thanks @dgil for the initial troubleshooting help.
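As a quick check that the new listener actually took effect (the IP below is an example; nc is from the netcat package):
sudo rabbitmqctl status
nc -zv 10.0.0.12 25672
rabbitmqctl status prints the active listeners, and nc confirms the port is reachable from the other node.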
The question has been answered, but I'm just leaving notes on a similar issue I faced, should anybody else find them useful.
I have a Flask app running on EC2 with AMQP as a broker on port 5672 and EC2 ElastiCache memcached as a backend. The AMQP broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set the permissions:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
Then restart the server: sudo service rabbitmq-server restart
In your Flask app, for the Celery configuration:
broker_url=amqp://username:password@localhost:5672// (set as above)
backend=cache+memcached://(ec2 cache url):11211/
(The cache+memcached:// prefix tripped me up; without it I kept getting an import error (cannot import module).)
Open up port 5672 on your EC2 instance in the security group.
Now if you fire up your Celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
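The security-group step can be done in the EC2 console, or with the AWS CLI (a sketch; the group ID and CIDR below are placeholders you'd replace with your own):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5672 --cidr 203.0.113.0/24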

Can't stop service in Vesta Control Panel

Hi everyone,
I have a problem. I stopped the services named, exim, and dovecot, but after a period of time these services auto-started again. I still don't know why this happens, even though I tried searching for this issue and couldn't find anything. Please help me solve this problem.
Thank you so much!
This works for me:
Login as root on your server and force-stop the services:
service named stop
service exim stop
service dovecot stop
Next, configure VestaCP not to start the services when the server is being rebooted:
chkconfig named off
chkconfig exim off
chkconfig dovecot off
And you're done. You can check by rebooting the server. You can also do this with other services:
clamd, spamassassin (if you installed the high-RAM VestaCP version and don't need the mail services)
httpd, nginx, mysqld and vsftpd (if you want to make a DNS-only server)
You get the point. Hope this works. Good luck!
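On newer systemd-based distributions, the chkconfig calls above would be replaced by systemctl (a sketch of the equivalent commands):
sudo systemctl stop named exim dovecot
sudo systemctl disable named exim dovecot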
This happens when you create a web domain but have checked the DNS support and mail support options, so Vesta starts the named and dovecot services. You can just create a cronjob with these commands:
sudo /usr/local/vesta/bin/v-stop-service dovecot
sudo /usr/local/vesta/bin/v-stop-service named
sudo /usr/local/vesta/bin/v-stop-service exim
or, on the server, add these command lines:
JOB='8' MIN='0' HOUR='*/6' DAY='*' MONTH='*' WDAY='*' CMD='sudo /usr/local/vesta/bin/v-stop-service exim' SUSPENDED='no' TIME='12:32:31' DATE='2014-05-22'
JOB='9' MIN='0' HOUR='*/6' DAY='*' MONTH='*' WDAY='*' CMD='sudo /usr/local/vesta/bin/v-stop-service named' SUSPENDED='no' TIME='12:32:05' DATE='2014-05-22'
JOB='10' MIN='0' HOUR='*/6' DAY='*' MONTH='*' WDAY='*' CMD='sudo /usr/local/vesta/bin/v-stop-service dovecot' SUSPENDED='no' TIME='12:31:50' DATE='2014-05-22'
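In plain crontab syntax the same schedule (every 6 hours, on the hour) would look like this, assuming it goes in root's crontab:
0 */6 * * * /usr/local/vesta/bin/v-stop-service exim
0 */6 * * * /usr/local/vesta/bin/v-stop-service named
0 */6 * * * /usr/local/vesta/bin/v-stop-service dovecot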
If you have any issues, send me a message :)