I have an IPA server and client set up, with NFS and autofs installed on both. Whenever I make a user with ipa user-add and then switch to that user, IPA creates a home directory for that user and displays "Creating home directory for user". I want autofs to set up the home directory instead, so that IPA does not need to. My IPA server also acts as an NFS server: I added /home to my /etc/exports and pointed it at my client machine. My IPA client serves as my NFS client and has /home mounted on /mnt/nfs/home. On my client I edited /etc/auto.master and added the line /home /etc/auto.misc. Then I added this line to /etc/auto.misc:
* -fstype=nfs :nameofserver.example.home:/mnt/nfs/home
After all that, I restarted autofs and tried making a user, but when I switch to the new user I now get the message warning: cannot change directory to /home/user: No such file or directory. What am I doing wrong?
With IPA's autofs configuration, it is the individual user's home directory that gets mounted, not the root of all home directories. In your case that means autofs is trying to mount /mnt/nfs/home/newuser, which does not exist yet on the server.
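For comparison, a per-user wildcard map would look something like this (a minimal sketch reusing the server name and export path from the question; adjust to your environment):
/etc/auto.master:
/home /etc/auto.home
/etc/auto.home:
* -fstype=nfs nameofserver.example.home:/mnt/nfs/home/&
The & is replaced by the requested key, so accessing /home/newuser makes autofs try to mount nameofserver.example.home:/mnt/nfs/home/newuser, which fails until that directory exists on the server. That is the chicken-and-egg problem described below.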
AFAIK there is no official workaround for this chicken-and-egg problem. FreeIPA is currently working on a hook/callback system that is supposed to provide a solution to this old and well-known issue.
Since that update isn't available yet, the only known way is to set up a cron script that queries the LDAP service of the IdM server and creates the new home directories. But no one seems to have released code to do so.
Here is the bash script I wrote for this purpose. I run it from a cron job scheduled every minute.
#!/bin/bash
# Timestamp of the last run, so only accounts created since then are processed.
TIMEFILE=/root/scripts/data/ldap_last_check.txt
LASTTIME=$(cat "$TIMEFILE")
CURRENTTIME=$(date +%Y%m%d%H%M%SZ)
echo "$LASTTIME"
# Ask the IdM LDAP server for users created since the last run.
NEWUSERLIST=$(/usr/bin/ldapsearch -LLL -x -h localhost -b "cn=users,cn=accounts,dc=domain,dc=com" "(createTimestamp>=$LASTTIME)" uid)
UID_REGEX="^uid:"
# Mount the NFS home share and list the home directories that already exist.
mount filesrv:/srv/idmhome /mnt/idmhome
OLDUSERLIST=$(ls -1 /mnt/idmhome)
while read -r i_line; do
    HOME_EXIST=false
    if [[ $i_line =~ $UID_REGEX ]]; then
        TMPUSER="$(echo $i_line | awk '{print $NF}')"
        # Skip users that already have a home directory, and the admin account.
        while read -r j_line; do
            if [[ $TMPUSER = $j_line ]]; then
                HOME_EXIST=true
            fi
            if [[ $TMPUSER = "admin" ]]; then
                HOME_EXIST=true
            fi
        done <<< "$OLDUSERLIST"
        if ! $HOME_EXIST; then
            # Create the home directory, seed it from /etc/skel and hand it over to the user.
            mkdir /mnt/idmhome/$TMPUSER
            cp /etc/skel/.* /mnt/idmhome/$TMPUSER/
            chown -R $TMPUSER:$TMPUSER /mnt/idmhome/$TMPUSER/
            ls -lah /mnt/idmhome/$TMPUSER
        fi
    fi
done <<< "$NEWUSERLIST"
umount /mnt/idmhome
echo "$CURRENTTIME" > "$TIMEFILE"
My setup is a little different from yours: my NFS server isn't on the same machine as my IdM server. Just comment out the mount/umount lines, change the paths to yours, and it should work fine.
Consider writing a similar script to erase/archive deleted accounts, along the lines of the sketch below.
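Here is a minimal sketch of what that cleanup could look like. It assumes the same mount point as above and a hypothetical archive directory /mnt/idmhome/.archive; it compares the existing home directories against the uids currently present in LDAP and archives those whose account is gone.
#!/bin/bash
# Sketch: archive home directories whose IdM account no longer exists.
ARCHIVE=/mnt/idmhome/.archive    # hypothetical archive location
mount filesrv:/srv/idmhome /mnt/idmhome
mkdir -p "$ARCHIVE"
# All uids currently known to the IdM LDAP server.
ALLUSERS=$(/usr/bin/ldapsearch -LLL -x -h localhost -b "cn=users,cn=accounts,dc=domain,dc=com" uid | awk '/^uid:/ {print $2}')
for dir in /mnt/idmhome/*; do
    user=$(basename "$dir")
    [ "$user" = "admin" ] && continue
    if ! echo "$ALLUSERS" | grep -qx "$user"; then
        # Account no longer exists: move its home out of the way.
        mv "$dir" "$ARCHIVE/$user.$(date +%Y%m%d)"
    fi
done
umount /mnt/idmhome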
I am trying to run a script at startup on my EC2 instance, using an image I created that runs Ubuntu.
However, the script fails, although when I connect through SSH and run it manually it works.
My user data is:
#!/bin/bash
echo '
#!/bin/bash
sleep 30
sudo apt-get update
cd /etc/apache2/sites-available
sudo sed -i 's/oldurl/newurl/g' 000-default.conf
sudo sed -i 's/oldurl/newurl/g' 000-default.conf
sudo certbot --apache -d url1 -d url2
sudo systemctl restart apache2' > init-ssl.sh
sleep 2 & init-ssl.sh
I stopped my instance and changed my user data to something simple like:
#!/bin/bash
echo 'work' > try1.txt
I didn't see an error but I also didn't see my new try1.txt file.
A script passed via User Data will only be executed on the first boot of the instance. (Actually, the first boot per Instance ID.)
If you want to debug the script, the log file is available in:
/var/log/cloud-init-output.log
Your attempt to redirect to a file with echo ' ... ' > init-ssl.sh is being thwarted by the fact that the script itself contains single quotes ('), which close the echo early. You should use different quoting to avoid this. Or, as @Mornor points out, simply run the commands directly. If you want to sleep for a bit up front, just put the sleep at the start of the script.
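For example, one way to keep the embedded quotes intact is a quoted heredoc (a sketch based on the commands in the question; the /root/init-ssl.sh path is an assumption, and user data already runs as root so sudo is unnecessary):
#!/bin/bash
# Write the helper script with a quoted heredoc ('EOF') so the single quotes
# inside the sed command are preserved verbatim, then run it.
cat > /root/init-ssl.sh <<'EOF'
#!/bin/bash
sleep 30
apt-get update
cd /etc/apache2/sites-available
sed -i 's/oldurl/newurl/g' 000-default.conf
certbot --apache -d url1 -d url2
systemctl restart apache2
EOF
chmod +x /root/init-ssl.sh
/root/init-ssl.sh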
I put the following commands in the user data of an EC2 instance running the RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the filesystem root, which looks to me like $HOME is not set at that point.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The following is the log from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH into the instance, $HOME is set correctly to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as the root user, so $HOME is empty. What you could do is define the HOME env var at the top of your user data script, like this (insert your own user's home directory):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, Java, Git and Docker; all work fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user, for example if your user data sets up files in your home directory:
chown ubuntu ~/.foo/bar.properties
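Applied to the kubeconfig commands from the question, the user data might look roughly like this (a sketch assuming ec2-user is the account that should end up owning the file):
#!/bin/bash
# Cloud-init runs this as root with HOME unset, so set it explicitly.
export HOME=/home/ec2-user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown to ec2-user rather than $(id -u):$(id -g), which would resolve to root here.
chown -R ec2-user:ec2-user $HOME/.kube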
$HOME refers to the home directory of the logged-in user. User data runs as the root user, and in that context $HOME resolves to /. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory path literally (e.g. /home/ec2-user).
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
You are running commands with sudo, which is known to reset environment variables established by your user's shell (such as $HOME), as well as shell-session context such as ssh-agent.
Generally you can make $HOME persist across sudo by adding it to the env_keep settings in your sudoers configuration, i.e. by adding the line below to /etc/sudoers (be careful when modifying this file):
Defaults env_keep=HOME
Otherwise, if you don't want to make the above change, make sure you have the permissions to carry this out without running sudo, or pass an absolute path instead.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef or Puppet.
Alternatively, since this is happening within the user data anyway (where it is unlikely you have already adjusted the sudoers configuration), you should instead just specify the absolute path.
I faced the same issue. Adding this to the user data script helped resolve it; sub-shells will have HOME set thanks to this change to the profile.
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh
In my use case, I am trying to use the $HOME variable to identify my app server path during instance startup.
I am using Google Compute Engine with a startup script that uses the $HOME variable. But it looks like $HOME is not set, or the user is not created yet, while the startup script executes on Google Cloud.
It throws a "$HOME not set" error. Is there any workaround for this? Right now I have to restart the instance after creating it for the first time, so that $HOME is set when I restart. But this is an ugly hack for production.
Could someone help me with this?
The startup script is executed as root, when the user has not been created yet and no user is logged in (you can check this by running $ users at startup and comparing with the output of $ cat /etc/shadow after a reboot).
Honestly I don't understand how just a reboot can make your $HOME be populated at startup time, since on Linux the HOME environment variable is set by the login program:
by login for console, telnet and rlogin sessions
by sshd for SSH connections
by gdm, kdm or xdm for graphical sessions.
However, if you need a reboot and you don't want to do it manually, you can reboot just once after the creation of the machine:
if [ -f flagreboot ]; then
    ...
    your script
    ...
else
    touch flagreboot
    reboot
fi
On the other hand, if you already know what the $HOME path of your application is going to be, you can simply export that variable at startup to populate it manually:
$ export HOME=/home/username
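Putting that together, a startup script might look roughly like this (a minimal sketch; the user name, app directory and launcher script are hypothetical placeholders):
#!/bin/bash
# Set HOME explicitly, since the startup script runs as root before any login.
export HOME=/home/username
# Hypothetical app layout; adjust to your actual paths.
cd "$HOME/app-server" || exit 1
./start-server.sh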
printenv
cd $HOME
touch test.txt
echo $HOME >> test.txt
echo $PWD >> test.txt
printenv > env.txt
I included the above code in my startup script. Strangely, $HOME, $PWD and many other environment variables are not set while the startup script is running. Here are the contents of the files I created during startup.
test.txt:
/
env.txt:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=2
_=/usr/bin/printenv
Here's the output (some values removed) of the printenv command, run immediately after the VM creation.
XDG_SESSION_ID=
HOSTNAME=server1
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/0
USER=
LS_COLORS=
MAIL=/var/spool/mail/xyz
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/*<username>*/.local/bin:/home/*<username>*/bin
PWD=/home/*<username>*
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/*<username>*
LOGNAME=*<username>*
SSH_CONNECTION=
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/printenv
To summarize, not all the environment variables are set at the time the startup script executes; they are populated some time afterwards. I find that weird, but that's how it works.
Boiling my issue down to the simplest case, I'm using Compute Engine with the following startup-script:
#! /bin/bash
sudo useradd -m drupal
su drupal
cd /home/drupal
touch test.txt
I can confirm that the drupal user exists after this runs, and so does the test file. However, I expect the owner of the test file to be 'drupal' (hence the su), yet when I use this as a startup script I can confirm that root is still the owner of the file:
meaning my
su drupal
did not work. sudo su drupal does not make any difference either. I'm using Google Container OS, but the same happens on a Debian 8 image.
sudo su is not a command run within a shell -- it starts a new shell.
That new shell is no longer running your script, and the old shell that is running the script waits for the new one to exit before it continues.
The sudo su command will start a new shell. The old shell waits for the new one to exit and then continues executing the rest of the code.
Your script is running in the 'old' shell, which means these commands:
cd /home/drupal
touch test.txt
are still executed as root and thus the owner of these files is root as well.
You can modify your script to this:
#! /bin/bash
sudo useradd -m drupal
sudo -u drupal bash -c 'cd ~/; touch text2.txt'
and it should work.
The -u flag executes the command as the specified user, in this case 'drupal'.
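If you need to run more than a couple of commands, the same idea can be written with a quoted heredoc instead of a -c string (a sketch along the same lines; test2.txt is just an example file name):
#! /bin/bash
sudo useradd -m drupal
# Run a multi-line block as the drupal user; the quoted 'EOF' keeps the
# block from being expanded by the root shell before it runs.
sudo -u drupal bash <<'EOF'
cd /home/drupal
touch test2.txt
echo "created by $(whoami)" > test2.txt
EOF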
I wrote some notes underneath, but it looks like the above should work:
how to run a script as another user without a password
The other option would be to SSH into your own machine as the other user; you can use sshpass to send the password, or add your own public key to that user's authorized keys.
When I test a similar script:
su [my username]
touch test.txt
It actually logs in as me, and doesn't finish until I press Ctrl-D.
Further testing reveals that the only way to own the file is to invoke the script from the shell, i.e.:
su me
touch test.txt
./test2.sh
test2.sh:
touch test2.txt
gives both files to root, even if I own both scripts.
It follows that everything YOU do is yours; you can't create files as someone else this way.
I have a set of Django log files, for which I have the appropriate logger set to write out messages. However, each time it creates a new log file, the permissions on the file don't allow me to start the shell, and at times they cause issues with Apache.
I have run chmod -Rv 777 on the directory, which sets all the permissions so we can do what we like, but the next log file created goes back to some default.
How can I set the permissions on the log files to be created?
Marc
Permissions on files created by a particular user depend on the umask set for that user.
Now you need to set the appropriate permissions for whoever is running the Apache service. Find that user with:
ps -aux | grep apache | awk '{ print $1 }'
Then, for this particular user running Apache (www-data?):
sudo chown -R your_user:user_running_apache directory
where directory is the root directory of your Django application.
To make sure that all files added to this directory in the future get the correct permissions, run:
sudo chmod -R g+s directory
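For the umask part, a quick way to see the effect (nothing Django-specific, just an illustration of the point above):
# Show the current umask; 0022 means new files get mode 644 and directories 755.
umask
# A umask of 0002 makes newly created files group-writable (664).
umask 0002
touch example.log
ls -l example.log   # -rw-rw-r--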
I faced the same problem: I had issues starting the shell and with Celery due to rotated-log-file permissions. I'm running my Django project through uWSGI (which runs as the www-data user), so I handled it by setting a umask for it (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#umask).
Also, I'm using buildout, so my fix looks like this:
[uwsgi]
recipe = buildout.recipe.uwsgi
xml-socket = /tmp/uwsgi.sock
xml-master = True
xml-chmod-socket = 664
xml-umask = 0002
xml-workers = 3
xml-env = ...
xml-wsgi-file = ...
After this, the log file permissions became 664, so members of the www-data group can also write to it.
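If you're not using buildout, the same settings in a plain uwsgi.ini would look roughly like this (a sketch; the socket path and worker count are just the values from the buildout snippet above):
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
chmod-socket = 664
# umask applies to every file uWSGI creates, including rotated log files
umask = 0002
workers = 3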