I am building a C++ application whose purpose is, among other things, to receive SNMP traps. For this I am using the SNMP++ library V3.3 (https://agentpp.com/download.html, C++ APIs SNMP++ 3.4.9).
I was expecting traps using no authentication to be discarded/dropped if the configuration requested some form of authentication, but that does not seem to be the case.
To confirm this behavior I used the provided receive_trap example available in the consoleExamples directory. I commented out every call to
usm->add_usm_user(...)
except for the one with "MD5" as the security name:
usm->add_usm_user("MD5",
SNMP_AUTHPROTOCOL_HMACMD5, SNMP_PRIVPROTOCOL_NONE,
"MD5UserAuthPassword", "");
I then sent a trap (matching the "MD5" security name) to the application using net-snmp:
snmptrap -v 3 -e 0x090807060504030200 -u MD5 -Z 1,1 -l noAuthNoPriv localhost:10162 '' 1.3.6.1.4.1.8072.2.3.0.1 1.3.6.1.4.1.8072.2.3.2.1 i 123456
Since the application's only registered USM user requires MD5 authentication, I would have thought the trap would be refused/dropped/discarded, but it was not:
Trying to register for traps on port 10162.
Waiting for traps/informs...
press return to stop
reason: -7
msg: SNMP++: Received SNMP Notification (trap or inform)
from: 127.0.0.1/45338
ID: 1.3.6.1.4.1.8072.2.3.0.1
Type:167
Oid: 1.3.6.1.4.1.8072.2.3.2.1
Val: 123456
To make sure no "default" USM user was being used instead, I then commented out the remaining
usm->add_usm_user("MD5",
SNMP_AUTHPROTOCOL_HMACMD5, SNMP_PRIVPROTOCOL_NONE,
"MD5UserAuthPassword", "");
and sent my trap again using the same command. This time nothing happened:
Trying to register for traps on port 10162.
Waiting for traps/informs...
press return to stop
SNMPv3 is around 18k lines of RFCs, so it is entirely possible I missed or misunderstood something, but I would expect to be able to specify which security level I require and drop everything that does not match. What am I missing?
EDIT: Additional testing with snmpd
I have done some tests with snmpd and I somehow still get similar results.
I created a user:
net-snmp-create-v3-user -ro -A STrP#SSWRD -a SHA -X STr0ngP#SSWRD -x AES snmpadmin
Then I try with authPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -X STr0ngP#SSWRD -l authPriv -u snmpadmin localhost
The request is accepted.
With authNoPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -l authNoPriv -u snmpadmin localhost
The request is accepted.
With noAuthNoPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -X STr0ngP#SSWR2D -l noAuthNoPriv -u snmpadmin localhost
The request is rejected.
As I understand it, the authNoPriv request must be rejected, but it is accepted; this is incorrect according to what I have read in the RFC and the Cisco SNMPv3 summary.
Disclaimer: I am not an expert, so take the following with a pinch of salt.
I cannot speak for the library you are using, but regarding the SNMPv3 flow:
In SNMPv3 exchanges, the USM is responsible for validation only on the SNMP authoritative engine side, the authoritative role depending on the kind of message:
Agent is authoritative for: GET / SET / TRAP
Receiver is authoritative for: INFORM
RFC 3414 describes the reception of messages in section 3.2:
If the information about the user indicates that it does not
support the securityLevel requested by the caller, then the
usmStatsUnsupportedSecLevels counter is incremented and an error
indication (unsupportedSecurityLevel) together with the OID and value
of the incremented counter is returned to the calling module.
If the securityLevel specifies that the message is to be
authenticated, then the message is authenticated according to the
user’s authentication protocol. To do so a call is made to the
authentication module that implements the user’s authentication
protocol according to the abstract service primitive
So in step 6, the securityLevel is the one carried in the message itself. This means the TRAP is accepted by the USM layer on the receiver side even if authentication is not provided.
It is then up to the consumer of the TRAP to decide whether the message should be processed or not.
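For example, in SNMP++ that decision could live in the notification callback used by receive_trap. This is a minimal sketch, not an official library recipe: it assumes a build with _SNMPv3 defined and that Pdu::get_security_level() and the SNMP_SECURITY_LEVEL_* constants exist in your version (check your headers):

// Sketch: drop notifications below a minimum security level, since the
// USM accepts noAuthNoPriv traps by design on the non-authoritative side.
void callback(int reason, Snmp *snmp, Pdu &pdu, SnmpTarget &target, void *cd)
{
  // SNMP_CLASS_NOTIFICATION is the reason (-7) shown in your output
  if (reason == SNMP_CLASS_NOTIFICATION) {
    if (pdu.get_security_level() < SNMP_SECURITY_LEVEL_AUTH_NOPRIV)
      return; // unauthenticated (noAuthNoPriv) trap: ignore it
    // ... handle the authenticated trap as in the original example ...
  }
}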
I've launched a new AWS EC2 instance (m5.large) based on Matillion's latest AMI (Matillion v1.56.9). The instance comes up fine and I can reach Matillion's login page at https://[internal IP], but I cannot log in with the default credentials, which are supposed to be "ec2-user" and the instance ID ("i-xxxxxx"). The error message is "Invalid username or password".
The EC2 instance has no public IP; that's why I use a private IP.
I can also ssh into the instance.
Can anyone help me find out why login using the default user doesn't work?
I believe the way it's supposed to work is that at first boot the ec2-user password in /usr/share/tomcat8/conf/tomcat-users.xml gets set to the sha512sum of the instance ID. As per your comment, Tobie, that's a good spot, but I think the Matillion documentation is just out of date there, from right back when instance IDs really were just 10 characters long!
I guess it uses the instance metadata service v1 (IMDSv1) to do that, so if IMDSv1 is not available the password might not get set correctly.
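If you want to check, here is a hedged sketch of what first boot presumably does, so you can compare the result against the hash stored in tomcat-users.xml:

# Hash the instance ID fetched from IMDSv1 (printf avoids a trailing newline)
printf '%s' "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)" | sha512sum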
In any case, as long as you can SSH into your server and the Admin / User Configuration is in Internal mode (which is the default), you can fix the password manually like this:
Become root with sudo -i
Create the sha512sum of your chosen password like this:
echo -n "schepo" | sha512sum
Make sure you use the -n, otherwise echo adds a newline and you get the wrong hash. Mine comes out like 55aa...a1cf - (the trailing "-" just means the input came from stdin).
Then stop Tomcat so you can update the password
systemctl stop tomcat8
Fix the relevant line in /usr/share/tomcat8/conf/tomcat-users.xml or add a new one. You have to be really careful to keep the XML valid. Mine ends up like this:
<user username="schepo" password="55aa00778ccb153bc05aa6a8d7ee7c00f008397c5c70ebc8134aa1ba6cf682ac3d35297cbe60b21c00129039e25608056fe4922ebe1f89c7e2c68cf7fbfba1cf" roles="Emerald,API,Admin"/>
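Since a single stray quote here will silently break logins, it may be worth validating the file before restarting. xmllint (from libxml2, usually preinstalled) can check well-formedness:

xmllint --noout /usr/share/tomcat8/conf/tomcat-users.xml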
Then restart Tomcat
systemctl restart tomcat8
It normally takes about 60 seconds to restart. After that you should be able to log in via the UI with your new user and/or password.
I have a Django service that is running under a sudo account. All the operations like job submission, database operations etc. are performed by the sudo account. However, certain scripts should be executed as the actual user for traceability purposes later on. What is the best way to transfer the user's password so that it can be taken as input by a script that does su myid, so that subsequent commands in the script run under that user ID?
For example, my server runs as sys_admin. The password mypassword should be supplied by the user from the webpage and passed on to the script as an argument.
Below is my sample script:
su - myid <<! >/dev/null 2>&1
mypassword
whoami > /dev/tty
!
The above script will print myid and not sys_admin. What is the best and most secure way to perform this operation so that the password is not exposed to the backend either?
Can some form of encryption and decryption be performed on the client or server side with some passphrase?
It sounds like you are directly passing user input through a sudo shell. This is VERY unsafe if you are, as it can lead to OS command injection, as sketched below.
I'm sorry I couldn't directly assist you, but if I've understood correctly, it isn't a great idea to continue with the current approach.
Maybe someone can offer a safer way of handling it? I'm not experienced in Django.
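To illustrate the risk, here is a hedged, purely hypothetical sketch of what can happen if the web-supplied password is spliced into the script text before the shell parses it: a crafted "password" closes the here-document early, and everything after it runs as the service account instead of being harmless input to su:

# attacker-controlled input, sent from the webpage as the "password"
password='!
touch /tmp/injected   # attacker-chosen command'

# the service naively expands it into the script it runs
eval "su - myid <<! >/dev/null 2>&1
$password
whoami > /dev/tty
!"

After expansion, the first "!" terminates the here-document immediately, so touch /tmp/injected and whoami both execute as sys_admin rather than being fed to su.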
I am trying a small example with AWS API Gateway and IAM authorization. API Gateway generated the endpoint below:
https://xyz1234.execute-api.us-east-2.amazonaws.com/Users/users
with a POST action and no parameters.
Initially I had turned off IAM authorization for this POST method, and I verified using Postman that it works.
Then I created a new IAM user and attached the AmazonAPIGatewayInvokeFullAccess policy to it, thereby granting permission to invoke any API. I then enabled IAM authorization for the POST method.
Next, in Postman, I added AWS Signature authorization with the access key, secret key, AWS region us-east-2 and service name execute-api, and tried to execute the request, but I got an InvalidSignatureException error with a 403 return code.
The body contains the following message:
Signature expired: 20170517T062414Z is now earlier than 20170517T062840Z (20170517T063340Z - 5 min.)"
What am I missing?
A request signed with AWS sigV4 includes a timestamp for when the signature was created. Signatures are only valid for a short amount of time after they are created. (This limits the amount of time that a replay attack can be attempted.)
When the signature is validated the timestamp is compared to the current time. If this indicates that the signature was not created recently, then signature validation fails with the error message you mentioned.
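A quick, hedged way to check for skew (assuming curl is available): compare your local UTC clock with the Date header returned by an AWS endpoint. If they differ by more than about five minutes, sigV4 requests will fail this way:

date -u
curl -sI https://sts.amazonaws.com | grep -i '^date:'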
If you get this in a Docker container on Windows that uses WSL, then it may help to fix the WSL clock by running wsl -d docker-desktop -e /sbin/hwclock -s in PowerShell. You can verify this is the cause beforehand by logging into the container, typing date in the terminal and comparing it with your host machine's time.
A common cause of this is when the local clock on the host generating the signature is off by more than a couple of minutes.
You need to synchronize your machine's local clock with NTP.
For example, on an Ubuntu machine:
sudo ntpdate pool.ntp.org
System time goes out of sync quite often, so you need to keep it in sync periodically.
You can run a daily cron job to keep your system time in sync, as mentioned at this link: Periodically synchronize time in Linux
Create a bash script to sync time called ntpdate and put the below into it:
#!/bin/sh
# sync server time
/usr/sbin/ntpdate pool.ntp.org >> /tmp/ntpdate.log
You can place this script anywhere you like and then set up a cron job. I will be putting it into the daily cron directory so that it runs once every day. So my ntpdate script is now in /etc/cron.daily/ntpdate and it will run every day.
Make this script executable
chmod +x /etc/cron.daily/ntpdate
Test it by running the script once and looking for some output in /tmp/ntpdate.log:
/etc/cron.daily/ntpdate
In your log file you should see something like
26 Aug 12:19:06 ntpdate[2191]: adjust time server 206.108.0.131 offset 0.272120 sec
I faced a similar issue when I used the timedatectl command to change the datetime of the underlying machine... The explanations given by MikeD and others are really informative for fixing the issue:
sudo apt install ntp
sudo apt install ntpdate
sudo ntpdate ntp.ubuntu.com
After synchronizing the time with the correct current datetime, the issue will be resolved.
For me, the issue happened while using WSL. The date in WSL was out of sync.
The solution was to run wsl --shutdown and restart Docker.
This one command did the trick
sudo ntpdate pool.ntp.org
Make sure your PC's clock is set correctly. I faced the same issue and then realized my clock wasn't showing the right time for some reason. As soon as I corrected the time, it started working fine again! Hope this helps.
I was also facing this issue. I added
correctClockSkew: true
and the issue was fixed for me:
const nodemailer = require('nodemailer');
const ses = require('nodemailer-ses-transport');

let transporter = nodemailer.createTransport(ses({
    correctClockSkew: true,
    accessKeyId: **,
    secretAccessKey: **,
    region: **
}));
If you are on an AWS EC2 Ubuntu server and somehow unable to fix the time with NTP:
sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
Source: https://askubuntu.com/a/655528
Had this problem on Windows. The current time got out of sync after a power outage. Solved it via: Settings -> Date and time -> Sync now.
For those who face this issue while running Lambda functions (that use other AWS services like DynamoDB) locally with sam local invoke:
The time in the Docker container used by sam may not be in sync with the host. Restarting Docker on the host (Docker Desktop on Windows) should resolve the issue.
I was making AWS API requests from a VM on my local machine. I checked the date was correct and was syncing, but I was still getting the error above. I halted and re-upped my VM and the error went away. I never figured out the exact cause, but "turning it off and back on again" fixed it.
Complementing miked-at-aws's post about AWS sigV4, there are at least two main possible root causes for the clock skew:
Your CPU is overloaded (reaching 99% usage, or on EC2 instances with CPU limits that run out of CPU credits).
Why would this generate time skew? Because between the moment the Amazon SDK creates the timestamp and the moment the request is actually sent there should normally be no more than a few nano- or microseconds, but if your CPU is overwhelmed it may take several seconds or even minutes. For this root cause you will not lose 100% of your events, just some x% that may not be too big.
For the second root cause, which is that your machine's clock simply isn't adjusted, probably 100% of your events are being lost. You just have to make sure that your machine's clock is set and adjusted correctly.
I tried all the solutions related to time sync, but nothing worked out. What I did was set the correctClockSkew option to true while creating the service client. This solved my problem.
For instance:
let dynamodb = new AWS.DynamoDB({correctClockSkew: true});
Hope this will sort it out.
Reference: https://github.com/aws/aws-sdk-js/issues/527
I faced this same problem while fetching video from Amazon Kinesis for my local website. To solve it, I installed chrony on my computer, which fixed the problem. You can see the Amazon chrony setup instructions at the following link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
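For reference, a hedged sketch of the chrony setup that page describes for EC2 (package names vary by distro; 169.254.169.123 is the link-local Amazon Time Sync Service address, reachable only from EC2 instances):

sudo yum install -y chrony
# prefer the Amazon Time Sync Service
echo 'server 169.254.169.123 prefer iburst' | sudo tee -a /etc/chrony.conf
sudo systemctl restart chronyd
chronyc tracking   # verify the offset is now near zero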
What worked for me was to change the time on my computer. I am in the UK, so I put it forward one hour to a European time zone, and then it worked. This is not the best fix, but it let me move forward.
I had set the region to eu-west-2, which is London, so I am not sure why it only worked when I put my computer's time forward an hour. I need to look into that.
Just try updating the system date and time; they might be out of date. Synchronize your clock and reload your console. This worked for me.
This is a question asking for any recent updates/suggestions.
Can this problem be solved using AWS Amplify, the AWS Cognito SDK, or a service worker?
Time synchronization is not working.
We're using S3, SimpleDB and SQS on quite a complicated project.
I'd like to be able to automatically track their usage, to be sure we don't suddenly spend large amounts of money when we didn't intend to (perhaps because of a bug).
Is there a way of reading the usage figures of all Amazon Web Services and/or the current real-time dollar cost of an account from a script?
Or any service or script which provides alerts based on that?
Amazon just announced that you can now "set alarms for any metric that Amazon CloudWatch monitors" (CPU utilization, disk reads and writes, and network traffic, etc). Also, all instances now come with basic monitoring for free.
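Using today's tooling, a hedged sketch of such an alarm with the AWS CLI (hypothetical alarm name, topic and account ID; billing metrics must be enabled and are published in us-east-1) might look like:

aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name estimated-charges-over-100 \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 100 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts

This notifies the given SNS topic once the month's estimated charges cross $100.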
We just released a Lab Management service that adds policies to AWS usage: time limits, max number of instances, max machine sizes, etc. You may want to try it and see if it helps: http://LabSlice.com. As this is a startup, we would really value feedback about how to resolve issues such as yours (i.e. email me if you think the app could be better modified to meet your requirements).
I don't believe there is any direct way to control AWS costs to the dollar. I doubt that Amazon provides an API to get in-depth metrics on usage, as it's obviously not in their interest to help you reduce costs. I actually ran into two instances where surprise costs arose in a company (a bank) due to misconfigured scripts, so I know it can be a problem.
I ran into the same issue with EC2 instances, but addressed it in a different way -- instead of monitoring the instances, I had them automatically kill themselves after a set amount of time. From your description, it sounds like this may not be practical in your environment, but I thought I would share just in case it helps. My AMI was Fedora-based, so I created the following bash script, registered it as a service, and had it run at startup:
#!/bin/bash
# chkconfig: 2345 68 20
# description: 50 Minute Kill

# Source functions
. /etc/rc.d/init.d/functions

RETVAL=0

start()
{
    # Shut down 50 minutes after starting up
    at now + 50 minutes < /root/atshutdown
}

stop()
{
    # Remove all jobs from the at queue because I'm not using at for anything else
    for job in $(atq | awk '{print $1}')
    do
        atrm $job
    done
}

case "$1" in
    start)
        start && success || failure
        echo
        ;;
    stop)
        stop && success || failure
        echo
        ;;
    restart)
        stop && start && success || failure
        echo
        ;;
    status)
        echo $"`atq`"
        ;;
    *)
        echo $"Usage: $0 {start | stop | restart | status}"
        RETVAL=1
        ;;
esac

exit $RETVAL
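For completeness, registering it as a service on a chkconfig-based distro (hypothetical script name) would look something like this; chkconfig reads the "# chkconfig: 2345 68 20" header above to set the run levels:

cp 50minutekill /etc/rc.d/init.d/50minutekill
chmod +x /etc/rc.d/init.d/50minutekill
chkconfig --add 50minutekill
chkconfig 50minutekill on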
You might consider doing something similar to suit your needs. If you do this, be especially careful to stop the service before modifying your image so that the instance does not shut down before you get a chance to re-bundle.
If you wanted, you could have the instances shut down at a fixed time (after everyone leaves work?), or you could pass in a keep-alive length/shutdown time via the -d or -f parameters to ec2-run-instances and parse it out in the script, as sketched below.
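A hedged sketch of that last idea, under the hypothetical convention that the user data is just the number of minutes to live: inside start(), read the value back from the instance metadata service and fall back to 50 if none was supplied:

MINUTES=$(curl -sf http://169.254.169.254/latest/user-data)   # -f: empty on 404
at now + ${MINUTES:-50} minutes < /root/atshutdown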