I have a Django service that runs under a sudo account. All operations, like job submission, database operations, etc., are performed by the sudo account. However, certain scripts should be executed as the user for traceability purposes later on. What is the best way to transfer the user's password so that it can be taken as input to a script that does su myid, with subsequent commands in the script running under that user ID?
For example, my server runs as sys_admin. The password mypassword should be entered by the user on the webpage and passed to the script as an argument.
Below is my sample script:
su - myid <<! >/dev/null 2>&1
mypassword
whoami > /dev/tty
!
The above script prints myid and not sys_admin. What is the best and most secure way to perform this operation so that the password is not exposed to the backend either?
Can some form of encryption and decryption be performed on the client or server side with some passphrase?
It sounds like you are passing user input directly through a sudo shell. If so, this is VERY unsafe, as it can lead to OS command injection.
I'm sorry I couldn't directly assist you, but if I've understood correctly, it isn't a great idea to continue with the current approach.
Maybe someone can offer a safer way of handling it? I'm not experienced in Django.
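For what it's worth, the injection concern is easy to demonstrate. Below is a minimal, hypothetical sketch (none of these names come from the original setup): if the backend builds a shell command by pasting the submitted password into a string, a crafted value breaks out of the intended argument and runs its own commands.
#!/bin/bash
# Hypothetical illustration of OS command injection, not the original code.
# Suppose the "password" the user typed is actually:
user_input='secret; touch /tmp/pwned'
# Unsafe: interpolated into a command string, the shell sees two commands,
# `echo secret` and `touch /tmp/pwned`.
bash -c "echo $user_input"
ls -l /tmp/pwned    # the injected command really ran
Passing the value as a separate argv entry (and never re-interpolating it into another shell string) avoids this particular failure mode, though it does not answer the underlying question of how to handle the password at all.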
I've launched a new AWS EC2 instance (m5.large) based on Matillion's latest AMI (Matillion v1.56.9). The instance comes up fine and I can reach Matillion's login page at https://[internal IP], but I cannot log in with the default credentials, which are supposed to be "ec2-user" and the instance ID ("i-xxxxxx"). The error message is "Invalid username or password".
The EC2 instance has no public IP, which is why I use a private IP.
I can also ssh into the instance.
Can anyone help me find out why logging in with the default user doesn't work?
I believe the way it's supposed to work is that at first boot the ec2-user password in /usr/share/tomcat8/conf/tomcat-users.xml gets set to the sha512sum of the instance ID. As per your comment, Tobie, that's a good spot, but I think the Matillion documentation is just out of date there, from right back when instance IDs really were just 10 characters long!
I guess it uses the instance metadata service v1 to do that, so if IMDSv1 is not available the password might not get set correctly.
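If that is indeed the mechanism, a hedged way to check it from inside the instance is to recompute the expected default hash yourself and compare it with the password attribute in /usr/share/tomcat8/conf/tomcat-users.xml (this assumes IMDSv1 is reachable at the standard metadata endpoint):
# Fetch the instance ID from the IMDSv1 endpoint, then hash it the same way
# first boot is believed to, for comparison against tomcat-users.xml.
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
echo -n "$instance_id" | sha512sum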
In any case, as long as you can SSH into your server and the Admin / User Configuration is in Internal mode (which is the default), you can fix the password manually like this...
Become root with sudo -i
Create the sha512sum of your chosen password like this.
echo -n "schepo" | sha512sum
Make sure you use the -n, otherwise echo adds a newline and you get the wrong hash. Mine comes out like 55aa...a1cf -
Then stop Tomcat so you can update the password
systemctl stop tomcat8
Fix the relevant line in /usr/share/tomcat8/conf/tomcat-users.xml or add a new one. You have to be really careful to keep the XML valid. Mine ends up like this:
<user username="schepo" password="55aa00778ccb153bc05aa6a8d7ee7c00f008397c5c70ebc8134aa1ba6cf682ac3d35297cbe60b21c00129039e25608056fe4922ebe1f89c7e2c68cf7fbfba1cf" roles="Emerald,API,Admin"/>
Then restart Tomcat
systemctl restart tomcat8
It normally takes about 60 seconds to restart. After that you should be able to login via the UI with your new user and/or password.
Prerequisites
I have a script that works with AWS but does not deal with credentials explicitly. It just calls the AWS API, expecting the credentials to be there according to the default credentials provider chain. In fact, the wrapper that calls this script obtains temporary credentials and passes them in environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN).
Problem
The wrapper usually reuses existing credentials, and only asks to re-authenticate explicitly when they are about to expire. So there is a possibility that it passes credentials that have a few minutes left to live, which may not be enough, as the script execution usually takes a long time. Unfortunately, I don't have control over the wrapper, so I would like to make the script check how much time it has left before deciding whether to start or abort early, to prevent failure mid-flight.
AWS doesn't seem to provide a standard way to query "how much time do I have before my current session expires?" If I had control over the wrapper, I would make it pass the expiry date in an environment variable as well. I was hoping that AWS_SESSION_TOKEN was some sort of JWT, but unfortunately it is not, and it does not seem to contain any timestamp.
Can anyone suggest any other ways around the given problem?
You probably need to check for the value of $AWS_SESSION_EXPIRATION
echo $AWS_SESSION_EXPIRATION
It should give you the expiration in Zulu time (Coordinated Universal Time), like below:
2022-05-17T20:20:40Z
Then you would need to compare the date on your system against it.
The instructions below work on macOS; for Linux you may have to modify the date command parameters (a GNU variant is shown after the function).
Let's check the current system time in Zulu Time:
date -u +'%Y-%m-%dT%H:%M:%SZ'
This should give you something like this example:
2022-05-17T20:21:14Z
To automate things, you can create a bash function and add it to your ~/.bashrc or favorite terminal theme:
aws_session_time_left() {
  zulu_time_now=$1
  # Convert both ISO 8601 timestamps to epoch seconds (BSD/macOS date syntax).
  aws_session_expiration_epoch=$(date -j -u -f '%Y-%m-%dT%H:%M:%SZ' "$AWS_SESSION_EXPIRATION" '+%s')
  zulu_time_now_epoch=$(date -j -u -f '%Y-%m-%dT%H:%M:%SZ' "$zulu_time_now" '+%s')
  if [[ $zulu_time_now < $AWS_SESSION_EXPIRATION ]]; then
    # Session still valid: show time remaining.
    secs=$(expr "$aws_session_expiration_epoch" - "$zulu_time_now_epoch")
    echo "+$(printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60)))"
  else
    # Session already expired: show how long ago it expired.
    secs=$(expr "$zulu_time_now_epoch" - "$aws_session_expiration_epoch")
    echo "-$(printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60)))"
  fi
}
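On Linux with GNU coreutils, date has no -j/-f options, but -d accepts the ISO 8601 string directly, so the two epoch conversions (here and in the standalone script further down) would become something like:
# GNU date equivalents of the BSD/macOS conversions above
aws_session_expiration_epoch=$(date -u -d "$AWS_SESSION_EXPIRATION" '+%s')
zulu_time_now_epoch=$(date -u -d "$zulu_time_now" '+%s')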
To check the result of your function, you can run:
aws_session_time_left "`date -u +'%Y-%m-%dT%H:%M:%SZ'`"
When your session is still active, it can give you something like:
+0h:56m:35s
or (when the session has expired) it can give you how long ago it expired:
-28h:13m:42s
NOTE: the hours can be over 24.
BONUS: a simplified standalone script version without parameters is below.
For example, you can also keep it as a standalone file, get-aws-session-time-left.sh:
#!/bin/bash
# Check whether AWS_SESSION_EXPIRATION exists and compare the current system time to it.
# These are the expected result types:
# - "no aws session found" (NOTE: this does not mean there is no aws session open in another terminal)
# - how long until the session expires (for example +0h:59m:45s)
# - how long since the session expired (for example -49h:41m:12s)
# NOTE: the hours do not reset every 24 hours; days are not displayed separately.
# IMPORTANT: the date arguments below work on macOS; other OS types may need adapting.
if [[ $AWS_SESSION_EXPIRATION != '' ]]; then
  zulu_time_now=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
  # Convert both ISO 8601 timestamps to epoch seconds (BSD/macOS date syntax).
  aws_session_expiration_epoch=$(date -j -u -f '%Y-%m-%dT%H:%M:%SZ' "$AWS_SESSION_EXPIRATION" '+%s')
  zulu_time_now_epoch=$(date -j -u -f '%Y-%m-%dT%H:%M:%SZ' "$zulu_time_now" '+%s')
  if [[ $zulu_time_now < $AWS_SESSION_EXPIRATION ]]; then
    # Session still valid: show time remaining.
    secs=$(expr "$aws_session_expiration_epoch" - "$zulu_time_now_epoch")
    echo "+$(printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60)))"
  else
    # Session already expired: show how long ago it expired.
    secs=$(expr "$zulu_time_now_epoch" - "$aws_session_expiration_epoch")
    echo "-$(printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60)))"
  fi
else
  echo "no aws session found"
fi
Then, to check the AWS session expiration, you can just run:
bash get-aws-session-time-left.sh
---EDITS---
IMPORTANT:
Some info about the origin of AWS_SESSION_EXPIRATION can be found here
NOTE: aws sso login does not appear to set this environment variable; I would be curious if someone could clarify whether AWS_SESSION_EXPIRATION is only present when using aws-vault.
You can use aws configure get to get the expiry time:
AWS_SESSION_EXPIRATION=$(aws configure get ${AWS_PROFILE}.x_security_token_expires)
(Obviously, make sure AWS_PROFILE is set to, or replace ${AWS_PROFILE} with, your profile name.)
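Putting that together with the standalone script above, a hedged example (the profile name is a placeholder, and x_security_token_expires is only present when some credential helper has cached it in your AWS config/credentials files):
# "myprofile" is a placeholder; use whichever profile has cached credentials.
export AWS_PROFILE=myprofile
export AWS_SESSION_EXPIRATION=$(aws configure get "${AWS_PROFILE}.x_security_token_expires")
bash get-aws-session-time-left.sh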
I have a requirement to start a workflow concurrently with multiple instances; all instances need to run in parallel. When I run one instance, it runs and the related parameter file is picked up. But when I start another instance to run in parallel with the previous one, it gives the error below.
"Start Workflow Advanced: ERROR: Workflow [wf_name]: Could not start execution of this workflow because the current run on this Integration Service has not completed yet."
I tried doing this using the pmcmd command like below. It starts without any parameter file and without an instance name, but the pmcmd log shows that the workflow started successfully for the given instance.
pmcmd startworkflow -sv 'INT_......' -d 'DOM_......' -u 'venkat' -p MyPass.... -f 'MyFold...' -nowait -rin $inst_name $wf_name
This works fine in our test environment but not in QA. Is there a configuration setting to avoid this behavior?
Please make sure the workflow is properly configured to allow multiple executions: Configure Concurrent Execution has to be enabled and the Allow concurrent run... option needs to be set correctly. If you run with the same instance name, Allow concurrent run with same instance name must be chosen. Otherwise, choose Allow concurrent run only with unique instance name and add the instance name and desired parameter file to the list below it.
In your command I don't see a parameter file, so I assume the latter should be the proper setup; a sketch with one added is below.
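For reference, a hedged sketch of the same call with an explicit parameter file added (the path is a placeholder; pmcmd's -paramfile option expects a path the Integration Service can read):
# Same startworkflow call as above, with a per-instance parameter file.
pmcmd startworkflow -sv 'INT_......' -d 'DOM_......' -u 'venkat' -p MyPass.... \
    -f 'MyFold...' -paramfile "/infa/params/${inst_name}.par" \
    -nowait -rin "$inst_name" "$wf_name"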
The issue was resolved by restarting the Integration Service. We did not restart the Integration Service specifically to fix this issue, but the restart resolved it. When we contacted Informatica support, they provided the KB link below: https://kb.informatica.com/solution/23/Pages/59/501120.aspx
Please find the thread I have opened on the Informatica Network:
https://network.informatica.com/thread/83540
I have a script that is using the python-ldap module.
Here is my basic code that makes a connection to my ldap server:
import ldap

server = 'ldap://example.com'
dn = 'uid=user1,cn=users,cn=accounts,dc=example,dc=com'
pw = "password!"

con = ldap.initialize(server)
con.start_tls_s()
con.simple_bind_s(dn, pw)
This works... but does the actual literal password have to be stored in the variable pw? It seems like a bad idea to have the password sitting right there in a script.
Is there a way to make a secure connection to my LDAP server without storing my actual password in the script?
Placing the password in a separate file with restricted permissions is pretty much it. You can for example source that file from the main script:
. /usr/local/etc/secret-password-here
You could also restrict the permissions of the main script so that only authorized persons can execute it, but it's probably better to do as you suggest and store only the password itself in a restricted file. That way you can allow inspection of the code itself (without sensitive secrets), version-control and copy around the script more easily, etc...
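A minimal sketch of that setup, assuming a shell script reads the secret (the path and variable name just follow the example above, and the placeholder password is the one from the question):
# One-time setup: store only the secret in a file owned by the service account,
# readable by no one else.
printf "LDAP_PW='%s'\n" 'password!' > /usr/local/etc/secret-password-here
chmod 600 /usr/local/etc/secret-password-here
# In the main script: pull the variable in instead of hard-coding it.
. /usr/local/etc/secret-password-here
echo "password is ${#LDAP_PW} characters long"    # placeholder use of $LDAP_PW
The same idea carries over to the Python script in the question: read the restricted file's contents into pw at runtime instead of writing the literal password into the source.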
I've been working on a program in C++ that forks a pty. Everything goes well except for one thing: when root runs the program, the pty logs in as the root user. In the same way, if user 'x' runs the program, the new pty logs in as user 'x'.
How can it start a pty that asks for the user's credentials and logs them in? I know that ssh or tty1 (Ctrl+Alt+F1) does this.
EDIT: Here is how I fork the pty:
http://pastebin.com/3vLQynz2
To be allowed to run something as a different user, you have to have the right to change the uid (man setuid). Normally you can only do this as the 'root' user.
Therefore, if you want to implement something like this, either your program has to run setuid root or you have to use some other executable that is setuid root. For example, you could ask the user which user they want to be, then run /bin/su to ask for that user's password.
BTW: the mentioned /bin/login binary will only work if you are already running as 'root'.