Running lynx via sudo

I am trying to run Lynx as the apache user via sudo, but it seems that lynx tries to access my home directory:
$ sudo -u apache lynx
/home/ssmirnov/: No such directory
My home directory has these permissions: drwx------
Can you advise me on how to run Lynx as another user?

You might try sudo's -H option. It sets $HOME to the home directory of the user you're running as. Perhaps lynx is looking for a file there; I'm not sure. (It doesn't seem to have a problem on my machine, but still.)
-i might work as well; it basically sets up the environment as if the user had logged in, including cd'ing to their home directory. Note that this means starting the shell specified for that user, running login scripts, and so on. If the user isn't allowed to log in, this will likely fail.
If you want to run it from your home directory, for example to download something to that location, you will of course have to grant apache access somehow. This can be done on ext* filesystems on most modern Linux systems (without granting everyone access) with something like setfacl -m u:apache:rwx $HOME. In a pinch, you could temporarily put apache in your group and grant group rwx permissions on your home directory, but unless this is your home machine, I wouldn't do that.
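Putting those pieces together, a minimal sketch might look like this (the apache user name comes from your example; the ACL approach assumes your filesystem is mounted with ACL support):
setfacl -m u:apache:rwx $HOME    # let apache read/write/traverse your home directory (only needed if downloads should land there)
sudo -H -u apache lynx           # run lynx as apache, with $HOME pointing at apache's home directory
setfacl -x u:apache $HOME        # remove the ACL entry again when you're done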

$HOME is not set for ec2-user when user data commands run

I put the following commands in the user data of an EC2 instance running a RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the filesystem root, which looks to me like $HOME is not set at that point.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The following is the log from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH into the instance, $HOME is correctly set to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as the root user, so $HOME is empty. What you can do is define the HOME environment variable at the top of your user data script, like this (insert your own user's home directory here):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, Java, Git, and Docker; it all works fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user. For example, if your user data sets up some files in your home directory:
chown ubuntu ~/.foo/bar.properties
$HOME refers to the home directory of the logged-in user. User data runs as the root user, and the root user's $HOME is /. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory as a literal path (for example, /home/ec2-user).
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
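For example, a rewrite of the original commands with the path spelled out might look like this (assuming ec2-user is the intended owner; the sudo prefixes are redundant because user data already runs as root):
mkdir -p /home/ec2-user/.kube
cp -i /etc/kubernetes/admin.conf /home/ec2-user/.kube/config
chown ec2-user:ec2-user /home/ec2-user/.kube/config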
You are running the commands with sudo, which is known to reset environment variables established by your user's shell (such as $HOME), as well as shell-level context such as ssh-agent.
Generally you can ensure $HOME persists when you run sudo by adding it to the env_keep settings in your sudoers configuration, i.e. by adding the line below to /etc/sudoers. Be careful about modifying this file (ideally use visudo).
Defaults env_keep += "HOME"
Otherwise, if you don't want to make the above change, ensure you have the permissions to carry this out without running sudo, or pass in an absolute path instead.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef, or Puppet.
Alternatively, since this is within the user data anyway (so you are unlikely to have already adjusted the sudoers configuration), you could simply specify the path explicitly.
I faced the same issue. Adding the following to the user data script helped resolve it: with this change to the profile, subshells will have HOME set.
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh

Google Cloud permissions

I have a hosted server on Google Cloud Platform (GCP), and I am trying to overwrite some files.
I was able to make a connection through WinSCP, and I'm able to find the directory of the files I need to overwrite; however, all of the files are read-only.
How can I manage the permissions to give myself add/change permissions?
I agree this seems to be related to permissions on the files. I am not able to comment, and wanted to add that if you want to avoid changing the ownership of the directory and files, you can always set up a group as the owner.
Details can be found in this discussion.
Summarizing:
# groupadd mygroup
# useradd -G mygroup user1
# chown -R :mygroup /path/folder
# chmod -R g+rw /path/folder
Creates the new group mygroup
Creates user user1 with mygroup as a supplementary group (to add an existing user instead, use usermod -aG mygroup user1)
Recursively changes the group ownership of /path/folder to mygroup
Recursively grants the group read and write permission on the contents of /path/folder
This will effectively allow you to manage users in mygroup with the appropriate permissions and access.
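A quick way to sanity-check the result (just a verification sketch, using the same names as above):
# id user1
# ls -ld /path/folder
id should list mygroup among user1's groups, and ls should show mygroup as the group owner with write permission. Note that if you added an existing user with usermod, that user has to log out and back in (or run newgrp mygroup) before the new membership takes effect.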
You need to be the owner of the file in order to make changes. For example, if root owns the file, you won't be able to change it (since GCP doesn't allow root access through FTP).
What you should do is make yourself (the user logged in through WinSCP) the owner of the file from the command line, then make your changes. Be careful to make the old owner the owner of the file again afterwards.
For example, on CentOS with WinSCP you would do this:
Log in to your server with WinSCP
Log in to your server through PuTTY or any other command-line client
In PuTTY: sudo chown YOUR_USER /complete/URL/file/in/your/server.XYZ
Make whatever changes you need to make to your file
In PuTTY: sudo chown OLD_USER /complete/URL/file/in/your/server.XYZ
YOUR_USER is the user you are logged in as on WinSCP.
OLD_USER can be apache, root, or whatever it was before.
If you want to upload a new file, you must take ownership of the folder. To do that, do not specify the file in the chown command, for instance:
sudo chown YOUR_USER /complete/URL/folder/
Once you finish, give ownership back to OLD_USER.
This can be a pain, but it is the only way I found to edit the files on my GCP server...
Hope this helps.

How do you use pyenv with Apache and Django?

I'm trying to use pyenv to create a virtual environment to use with Django on Apache (it works great for development outside of Apache). I'm a bit stuck, though, on which user should set up the environments and run them (attempting to run commands via su as www-data fails because that account is "not currently available"). Should I use root (which would be OK because it would just own everything, not run anything), make another user, etc.?
I haven't been able to test, but I'm assuming that I should add the shims path to PATH in /etc/apache2/envvars and then let each site set PYENV_VERSION in its Apache .conf as appropriate.
When you want to run a command as another user, use sudo -u <user> command. In order to use su, that user must be configured in /etc/passwd to have a shell. You can always just do sudo -u www-data bash instead.
With respect to your question about pyenv: you should install pyenv somewhere the apache user has permissions. You will need to create a directory, since www-data is unlikely to have a home directory.
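A minimal sketch of that setup, assuming /opt/pyenv as the install location (the path, and wiring it up through /etc/apache2/envvars as you suggested, are assumptions to adapt):
sudo mkdir -p /opt/pyenv
sudo chown www-data:www-data /opt/pyenv
sudo -u www-data git clone https://github.com/pyenv/pyenv.git /opt/pyenv
# then, in /etc/apache2/envvars:
# export PYENV_ROOT=/opt/pyenv
# export PATH="$PYENV_ROOT/shims:$PYENV_ROOT/bin:$PATH"
As you proposed, each site can then set PYENV_VERSION in its own Apache .conf (or rely on a .python-version file) to pick the interpreter.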

How to make Cygwin the default shell for Jenkins?

I'm trying to come up with a sensible solution for a build written using SCons, which relies on quite a lot of applications being accessible in a Unix-like way, using Unix-like paths and so on. However, when I try to use the SCons plugin or the Git plugin in Jenkins, it invokes them using something like cmd /c git.exe, and this will certainly fail, because Git was installed using Cygwin and is only known in the Cygwin shell, not in CMD. But even if I could make git and the rest available to cmd.exe, other problems arise: the Cygwin version of Git expects paths to have forward slashes and treats backslashes as escape characters. Idiotic Windows filesystem-related issues kick in too (I can't give Jenkins permission to delete my own files!).
So, is there a way to somehow make Jenkins only use Cygwin shell, and never cmd.exe? Or should I be prepared to run some Linux in a VM to have this handled?
You could configure Jenkins to execute a Cygwin shell with a specific command, as follows:
c:\cygwin\bin\mintty --hold always --exec /cygdrive/c/path/to/bash/script.sh
Where script.sh executes all the commands needed for the Jenkins job.
Just for the record, here's what I ended up doing:
Added a SYSTEM user to Cygwin: mkpasswd -u SYSTEM
Edited /etc/passwd by adding the newly created user's home directory to the record. Now it looks something like this:
SYSTEM:*:18:544:,S-1-5-18:/home/SYSTEM:
Copied my own user's configuration settings such as .netrc, .ssh and so on into the SYSTEM home. Then, from Windows Explorer, through an array of popups, I claimed ownership of all of these files for the SYSTEM user. One by one! I love Microsoft!
In Jenkins I now run a wrapper for my build that sets some other environment variables etc. by calling c:\cygwin\bin\bash --login -i /path/to/script/script
Gave up because of other configuration difficulties and made the Jenkins service run under my own user rather than SYSTEM. Here's a blog post on how to do it: http://antagonisticpleiotropy.blogspot.co.il/2012/08/running-jenkins-in-windows-with-regular.html but, basically, you need to open Windows services, find the Jenkins service, open its properties, go to the "Log On" tab and change the account to your own user.
One way to do this is to start your "execute shell" build steps with
#!c:\cygwin\bin\bash --login
The trick is of course that it resets your current directory so you need to
cd `cygpath $WORKSPACE`
to get back to the workspace.
Adding to thon56's good answer: "set -ex" is helpful here.
#!c:\cygwin\bin\bash --login
cd `cygpath $WORKSPACE`
set -ex
Details:
-e exits on error. This is important if you want your jobs to fail on error.
-x echoes each command to the screen, if desired.
You can also use #!c:\cygwin\bin\bash --login -ex, but that echoes a lot of login steps that you most likely don't care to see.

Ubuntu Log creation permission issues after Fabric build

My Django app is built on an Ubuntu VM instance via a Fabric script run from my local dev machine as root with sudo. The Fabric script sets up a folder in:
/var/log/FOLDERNAME
and the app is configured to write all of its log data there.
However, after each build, even though the folder has the right permissions and group (ls -all confirms it), the log files have trouble getting created unless I SSH to the box after each Fabric build and manually run:
sudo chmod 777 /var/log/FOLDERNAME -Rf
... then everything works fine.
Can anyone please shed some light and/or point me in the right direction to solve this?
Cheers!
Use put with mode to set up your log file folder with the right permissions:
put('yourlogfile', 'yourlogfile', mode=0755)
A side note: using chmod 777 is generally not a good idea. If your VM is running Ubuntu, Apache runs as www-data by default. chown www-data plus read/write permissions for that user/group should be enough.
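For instance, a hedged sketch of what the build could run on the box instead of chmod 777 (run as root, e.g. via Fabric's sudo(); www-data is the default Apache user on Ubuntu, so substitute whichever user actually writes the logs):
mkdir -p /var/log/FOLDERNAME
chown -R www-data:www-data /var/log/FOLDERNAME
chmod -R ug+rwX /var/log/FOLDERNAME    # X keeps directories traversable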