I'm trying to query a modbus device through SNMP, using snmpd's pass directive to run a script that retrieves the data with Python. This is the wrapper script (volt.sh):
#!/bin/bash
# snmpd calls this with "-g OID" for a GET and expects three lines back:
# the OID, the type, and the value.
if [ "$1" = "-g" ]
then
    echo .1.3.6.1.4.1.52612.10.3.1
    echo string
    python /usr/local/bin/readvolt.py
fi
exit 0
And this is what readvolt.py looks like:
#!/usr/bin/python
import minimalmodbus
eqp = minimalmodbus.Instrument('/dev/ttyUSB0',1) # port name, slave address (in decimal)
# skip some other lines for serial port initialization
volt = eqp.read_float(0,4,2) # getting data from serial modbus
print volt
and this is the relevant line from my snmpd.conf:
pass .1.3.6.1.4.1.52612.10.3.1 /bin/sh /usr/local/bin/volt.sh
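For reference, the wrapper and readvolt.py could be collapsed into a single pass handler written entirely in Python. This is only a sketch (it assumes the same OID, serial port and register settings as above, and is not what I actually run):

#!/usr/bin/python
# Sketch of a combined snmpd "pass" handler: answer a GET for the OID
# with the voltage read over Modbus (assumes the settings shown above).
import sys
import minimalmodbus

OID = '.1.3.6.1.4.1.52612.10.3.1'

def read_volt():
    eqp = minimalmodbus.Instrument('/dev/ttyUSB0', 1)  # port name, slave address
    return eqp.read_float(0, 4, 2)                     # register 0, function code 4, 2 registers

if __name__ == '__main__':
    # snmpd invokes the handler as "script -g OID" for a GET request
    if len(sys.argv) >= 3 and sys.argv[1] == '-g' and sys.argv[2] == OID:
        print(OID)
        print('string')
        print(read_volt())
    sys.exit(0)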
My question: I get a traceback from Python saying it could not find the minimalmodbus module, but when I run readvolt.py directly on the host, it works as expected (it prints out the voltage):
pi@raspberrypi:/usr/local/bin $ readvolt.py
220.25
I also tried a simple Python script (test.py), just to make sure snmpd's pass can run a Python script in response to an snmpget from the SNMP manager:
#!/usr/bin/python
import sys
print "test"
It runs OK:
suryo@r50e:~$ snmpwalk -v2c -c public 192.168.1.5 .1.3.6.1.4.1.52612.10.3.1
iso.3.6.1.4.1.52612.10.3.1 = STRING: "test"
suryo@r50e:~$
What is the problem here? It seems that Python cannot import an external module when it is run via the snmpd pass directive.
I'm wondering if this is an access control issue, i.e. Debian-snmp doesn't have the right to access the serial port.
The problem is solved by finding out which user the snmpd daemon runs as. I put whoami in the script and got 'Debian-snmp'; from there it was straightforward: check the group membership by running:
pi@raspberrypi:~$ groups Debian-snmp
Debian-snmp : Debian-snmp
Add Debian-snmp to the dialout group to grant it access to the serial ports:
pi@raspberrypi:~ $ sudo usermod -a -G dialout Debian-snmp
Restart snmpd so it logs in with the new membership, and voilà: it can read the slave modbus device via snmpget.
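A related trick: the Python script can also report for itself which user and groups it is running under, which makes this kind of permission problem obvious straight away. A small sketch using only the standard library:

#!/usr/bin/python
# Print the effective user and its group memberships; useful when the
# script is launched by a daemon such as snmpd rather than your own shell.
import os
import pwd
import grp

pw = pwd.getpwuid(os.geteuid())
primary = grp.getgrgid(pw.pw_gid).gr_name
extra = [g.gr_name for g in grp.getgrall() if pw.pw_name in g.gr_mem]
print('running as %s, groups: %s' % (pw.pw_name, ', '.join([primary] + extra)))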
Instead of the localhost IP, I am using my VM's IP (eth0, 192.168.12.20) to receive trap notifications. I am not receiving any traps when I generate one from outside the VM (using the snmptrap command from another machine), but I can see the SNMP data when I run tcpdump on the VM interface eth0.
If I generate the trap from the same machine using the snmptrap command, I can see the trap data via the PySNMP trap receiver script.
Options tried:
1. Tried binding the port to 0.0.0.0 to receive traps from any machine.
2. Enabled the debugging option in pysnmp to get some idea of the issue; no information is generated when the snmptrap is sent from the outside machine.
The closest scenario to my question is in the following link, which does not have a final solution.
Code:
SNMP v1 and v2c:
from pysnmp.carrier.asynsock.dispatch import AsynsockDispatcher
from pysnmp.carrier.asynsock.dgram import udp, udp6
from pyasn1.codec.ber import decoder
from pysnmp.proto import api
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import ntfrcv
from pysnmp import debug

debug.setLogger(debug.Debug("all"))

### SNMPv2c/SNMPv1 setup

### Callback function for receiving notifications
def v2cv1CallBackFunc(snmpEngine, stateReference, contextEngineId, contextName,
                      varBinds, cbCtx):
    transportDomain, transportAddress = snmpEngine.msgAndPduDsp.getTransportInfo(stateReference)
    print transportDomain, transportAddress
    # Get an execution context...
    execContext = snmpEngine.observer.getExecutionContext(
        'rfc3412.receiveMessage:request'
    )
    # ... and use inner SNMP engine data to figure out peer address
    print('Notification from %s, ContextEngineId "%s", ContextName "%s"'
          % ('#'.join([str(x) for x in execContext['transportAddress']]),
             contextEngineId.prettyPrint(), contextName.prettyPrint()))
    for name, val in varBinds:
        print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))

# Create SNMP engine with autogenerated engineID and pre-bound
# to socket transport dispatcher
snmpEngine = engine.SnmpEngine()

# SNMPv1/2c setup
# SecurityName <-> CommunityName mapping
config.addV1System(snmpEngine, 'my-area', "public")

# Specify security settings per SecurityName (SNMPv2c -> 1)
config.addTargetParams(snmpEngine, 'my-creds', 'my-area', 'noAuthNoPriv', 1)

# Transport setup
# UDP over IPv4, first listening interface/port
config.addSocketTransport(
    snmpEngine,
    udp.domainName + (1, ),
    udp.UdpSocketTransport().openServerMode(('0.0.0.0', 162))
)

# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, v2cv1CallBackFunc)

snmpEngine.transportDispatcher.jobStarted(1)  # this job would never finish

# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
Thanks in advance
I found the issue with the help of my IT team. Basically the API is working perfectly.
The firewalld service was not allowing the packets through. After I added the SNMP ports to the firewall exception list, my code worked.
Commands I used:
sudo firewall-cmd --add-port=161-162/udp --zone=public --permanent
sudo systemctl restart network
sudo systemctl reload firewalld
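For anyone hitting the same symptom (packets visible in tcpdump but nothing in pysnmp), a bare UDP listener is a quick way to tell whether the datagrams ever reach user space; if it prints nothing while tcpdump still shows traffic, a host firewall such as firewalld is the likely culprit. A minimal sketch, bound to the same 0.0.0.0:162 as the receiver (stop the pysnmp receiver first so the port is free):

# Quick check: does anything arrive on UDP port 162 at all?
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 162))   # ports below 1024 need root / CAP_NET_BIND_SERVICE
print('listening on 0.0.0.0:162 ...')
while True:
    data, addr = sock.recvfrom(4096)
    print('%d bytes from %s:%d' % (len(data), addr[0], addr[1]))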
I have installed memcachedb according to Memcachedb: The Complete Guide, and I am able to set and get key/value pairs using telnet, as explained in the guide.
What I really want to do is to set and get the key, value pairs from a python script.
I have memcachedb running on the Ubuntu machine with the following command:
sudo ./memcachedb -vv -u root -H ~/mcDB/ -N
I read that the libmemcached Python client can be used to communicate with memcachedb.
So I am using the following test script:
import memcache
client=memcache.Client([('localhost',21201)]) # port for memcachedb
print "return value " + str(client.set("key","value"))
print "get result " + str(client.get("key"))
But it gives the following output:
return value 0
get result None
I have also tried replacing localhost with 127.0.0.1; that does not work either.
In fact, memcachedb (run with -vv) prints no output at all when I run the Python script, whereas it does when I set and get via telnet.
So how can I connect to memcachedb and run get and set commands from Python?
So instead of python-memcached, I tried pylibmc, and now the script works.
There is probably some problem with python-memcached.
The updated script looks as follows:
import pylibmc
client=pylibmc.Client(["127.0.0.1:21201"]) # port for memcachedb
print "return value " + str(client.set("key","value"))
print "get result " + str(client.get("key"))
I am using Ansijet to automate running an Ansible playbook on a button click. The playbook stops running instances on AWS. When run manually from the command line, the playbook runs fine and does its tasks, but when run through the Ansijet web interface, the following error is encountered:
Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: mkdir -p $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && echo $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742, exited with result 1:
Following is the ansible.cfg configuration:
# some basic default values...
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp/
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
#remote_port = 22
module_lang = C
I tried changing the remote_tmp path to /home/ubuntu/.ansible/tmp, but I still get the same error.
By default, Ansible connects to remote servers as the same user that Ansible itself runs as. In the case of Ansijet, it will try to connect to remote servers as whatever user started Ansijet's node.js process. You can override this by specifying remote_user in a playbook or globally in the ansible.cfg file.
Ansible will try to create the temp directory if it doesn't already exist, but will be unable to if that user does not have a home directory or if their home directory permissions do not allow them write access.
I actually changed the temp directory in my ansible.cfg file to point to a location in /tmp, which works around these sorts of issues.
remote_tmp = /tmp/.ansible-${USER}/tmp
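The remote_user override mentioned above goes in the same [defaults] section of ansible.cfg (the user name below is only an example, use whatever account exists on your servers):

# connect to remote hosts as this user instead of the user running Ansible
remote_user = ubuntu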
I faced the same problem a while ago and solved it like this. One possible cause is that the remote server's /tmp directory is not writable. Run ls -ld /tmp and make sure its output looks something like this:
drwxrwxrwt 7 root root 20480 Feb 4 14:18 /tmp
Here /tmp is owned by root and has 1777 permissions.
Also, for me simply setting remote_tmp = /tmp worked well.
Another check is to make sure $HOME is set in the shell Ansible uses. Ansible runs commands via /bin/sh, not /bin/bash, so make sure $HOME is set in an sh shell.
In my case I needed to login to the server for the first time and change the default password.
Check the ansible user on the remote/client machine, as this error also occurs when that user's password has expired there:
==========
'WARNING: Your password has expired.\nPassword change required but no TTY available.\n')
<*.*.*.*> Failed to connect to the host via ssh: WARNING: Your password has expired.
Password change required but no TTY available.
Actual error:
host_name | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/ansible-$USER `\"&& mkdir /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 && echo ansible-tmp-1655256382.78-15189-162690599720687=\"` echo /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 `\" ), exited with result 1",
    "unreachable": true
}
===========
This happens mainly because there is no home directory for the user on the remote server.
The following steps resolved the issue for me:
Log into the remote server
switch to root
If the user the host (in my case Ansible) is trying to connect as is linux_user, then run the following commands:
mkdir /home/linux_user
chown linux_user:linux_user /home/linux_user
Environment
Linux Mint 17.1
Python 2.7
pyserial 2.7
Arduino UNO rv3
Desired Behaviour
I'm trying to send three values from a Python application to Arduino.
It works when doing the following from a terminal:
$ python
>>> import serial
>>> import struct
>>> ser = serial.Serial('/dev/ttyACM0', 9600)
>>> ser.write(struct.pack('>3B', 255, 0, 0))
Current Behaviour
It doesn't work when using the same code in a Python file, i.e.:
import serial
import struct
ser = serial.Serial('/dev/ttyACM0', 9600)
ser.write(struct.pack('>3B', red_value, green_value, blue_value))
Error Message
$ sudo tail -100 /var/log/apache2/error.log
OSError: [Errno 13] Permission denied: '/dev/ttyACM0'
Troubleshooting
Permissions
Application file:
$ ls -l
-rwxr-xr-x 1 myname mygroupname 114146 Jan 9 19:16 my_application.py
ttyACM0:
ls -l /dev/ttyACM0
crw-rw---- 1 root dialout 166, 0 Jan 9 20:12 /dev/ttyACM0
Groups
Groups the owner is a member of:
$ groups
mygroupname adm dialout cdrom sudo dip plugdev lpadmin sambashare
Due to various suggestions on the internet I also added the owner to the tty group via System Settings > Users and Groups. This had no effect.
Serial Ports Available
$ dmesg | grep tty
[ 0.000000] console [tty0] enabled
[ 3390.614686] cdc_acm 3-2:1.0: ttyACM0: USB ACM device
Update
I can force it to work under the following conditions:
01. Permissions for world must be set to rw, i.e.:
sudo chmod 666 /dev/ttyACM0
02. Arduino IDE serial monitor needs to be open.
However these conditions are not sustainable as:
Permissions are reset each time the USB is connected.
The Arduino IDE serial monitor shouldn't need to be open.
The following fleshes out some of the ideas in the first answer (I tried to add this content to that answer and accept it, but the edits were rejected). I'm not an expert in the area, so please just use this information to support your own research.
You can do one of the following:
01. Alter the permissions on /dev/ttyACM0 so that world has read and write privileges (something you may not want to do), although you may find they reset each time the device is plugged in, e.g.:
sudo chmod 666 /dev/ttyACM0
02. Create a rule in /etc/udev/rules.d that will set the permissions of the device (a restart will be required):
# navigate to rules.d directory
cd /etc/udev/rules.d
#create a new rule file
sudo touch my-newrule.rules
# open the file
sudo vim my-newrule.rules
# add the following
KERNEL=="ttyACM0", MODE="0666"
This also sets permissions for world to read and write, which you may not want to do.
For more information about this approach, see these answers:
https://unix.stackexchange.com/a/48596/92486
https://stackoverflow.com/a/11848003/1063287
03. The third option, which is the one I implemented, adds the Apache user to the dialout group, so that if the script is run by Apache it can access the device.
a) Find the location of your Apache config file, then search for the User setting within that file:
# open file in editor
sudo vim /etc/apache2/apache2.conf
# search for User setting
/User
You may find something like:
# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
b) Quit vim and search for APACHE_RUN_USER in /etc/apache2/envvars (if the above scenario applies):
# open file in editor
sudo vim /etc/apache2/envvars
# search for APACHE_RUN_USER
/APACHE_RUN_USER
You may find something like:
export APACHE_RUN_USER=www-data
c) Add the User www-data to the dialout group:
sudo usermod -a -G dialout www-data
d) Restart.
As the Apache user has been added to the dialout group, the script should now be able to access the device.
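If you want the Apache error log to make the cause obvious the next time this happens, the script can report which user it is running as when the open fails. A small sketch (the device path and baud rate are taken from the question):

# Open the port; if it fails, log the effective user, which under Apache
# will be www-data rather than your interactive login user.
import sys
import getpass
import serial

try:
    ser = serial.Serial('/dev/ttyACM0', 9600)
except (OSError, serial.SerialException) as e:
    sys.stderr.write('could not open /dev/ttyACM0 as user %s: %s\n'
                     % (getpass.getuser(), e))
    raise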
Further Reading
How to find the location of the Apache config file:
https://stackoverflow.com/a/12202042/1063287
The permissions on the application file make no difference to the user that the program runs as.
When you are logged in interactively, you do have permission to use /dev/ttyACM0.
When your script is running (presumably as the apache user), it does not have permission.
You need to alter the permissions on /dev/ttyACM0.
See the second answer to How can I programmatically set permissions on my char device for an example of altering udev rules so the device file gets the correct permissions.
Based on the accepted answer, I was able to just add the following to my setup.sh script:
printf "KERNEL==\"ttyACM0\", MODE=\"0666\"" | sudo tee /etc/udev/rules.d/si-ct.rules
I am using a virtualenv to run a Flask app, and often my localhost port does not work, so I have to do export PORT=500*. What I mean is: after using foreman start a couple of times on a specific port, the port stays engaged, and when I try to start again it keeps retrying to connect and then fails.
I have to change the port every time I hit this problem. Is there a command by which I can free or release the port?
This often happens because foreman doesn't shut down properly. Try looking to see if there are processes still running in the background that might be using the port. For example, if you use foreman to launch a Python app, try:
ps aux | grep python
to see all your running Python processes. You can kill all of them at once with the following command,
ps aux | grep python | tr -s ' ' '\t' | awk '{system("kill " $2)}'
but be careful, as this will kill every Python process you have running.
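A less blunt alternative is to kill only whatever still holds the port. If psutil is available (an extra dependency, not part of the original answer), something along these lines finds the owner of a given port and terminates it:

# Find and terminate the process still listening on the stuck port.
# Requires psutil (pip install psutil); the port number is just an example.
import psutil

PORT = 5000

for proc in psutil.process_iter(['pid', 'name']):
    try:
        conns = proc.connections(kind='inet')
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    if any(c.laddr and c.laddr.port == PORT for c in conns):
        print('killing %s (pid %d)' % (proc.info['name'], proc.pid))
        proc.terminate()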