During installation of wso2/wso2ei-integrator:6.5.0-centos on OpenShift, the following error appears: /bin/sh: /home/wso2carbon/init.sh: Permission denied.
When I try to change the permissions, I get another error: chmod: changing permissions of '/home/wso2carbon/': Operation not permitted
Does anyone know how to install WSO2 Enterprise Integrator on OpenShift?
Thanks
Looks like the image expects to run as root. By default, an OpenShift cluster runs containers under an arbitrarily assigned user ID.
Indeed, Graham is right: if you look at the startup script, it tries to chmod some directories (which ones depends on the version used). What we did was perform that chmod in the Dockerfile instead, so no chmod is executed in the initialization script. Just make sure any mounted persistent filesystem grants write permission to the runtime user.
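For illustration, a minimal sketch of that Dockerfile-time fix (the path is taken from the error above; the exact directories to touch depend on the EI version, so treat this as an assumption to adapt). Making the files group-owned by the root group and giving the group the same permissions as the owner is the usual pattern for images that run under OpenShift's arbitrarily assigned UIDs:
# Run during the image build (e.g. in a Dockerfile RUN instruction), not in init.sh
chgrp -R 0 /home/wso2carbon
chmod -R g=u /home/wso2carbon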
Actually, which version (GitHub ref of the EI Dockerfile) are you using? Looking at the current version, I don't see a chmod in init.sh anymore.
I created a Laravel 5.4 project and made it live on an AWS server. The issue I face is that I have to give 777 permissions to the storage folder very frequently, and because of this the site stops working properly. I have already given 777 permissions to the storage folder, but somehow the permissions change and the site stops because it cannot write to its log file. Can anyone help me figure out what the issue might be? Thanks in advance.
Giving 777 permissions means you have opened access to ANYONE in the world who can reach your server, with full read/write permissions on your storage.
Instead, you need to give your web server permission to access the directories and files, which you can do in the following way.
Here www-xxx stands for your web server user (for example, www-data):
sudo chown -R www-xxx:www-xxx /path/to/your/laravel/root/directory
Now, in order to grant the storage-level permissions your web server needs, execute the commands below:
sudo chgrp -R www-data storage bootstrap/cache
sudo chmod -R ug+rwx storage bootstrap/cache
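One extra suggestion beyond the original commands: if the permissions keep reverting because new files are created by a different user during deploys, setting the setgid bit on those directories makes newly created files inherit the www-data group (a sketch, assuming the same directory layout as above):
# Set the setgid bit on the directories so new files inherit the group
sudo find storage bootstrap/cache -type d -exec chmod g+s {} +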
All of a sudden I started getting "Permission Denied" issues when trying to run any gcloud command, such as gcloud components update. The issue goes away if I run sudo gcloud components update, but it's not clear to me why sudo is suddenly required. I was actually trying to run a GCMLE experiment and it hit the same error/warning, so I tried updating components and still ran into this issue. I have been travelling for a couple of days and did not make any changes; these same commands worked a few days ago. Further, I did not change my OS (Mac High Sierra 10.13.3). Were there any changes on the Google side that might explain this change in behavior? What is the best course of action to permanently get around this warning?
(conda-env) MacBook-Pro:user$ gcloud components update
WARNING: Could not setup log file in /Users/$USERNAME/.config/gcloud/logs, (IOError: [Errno 13] Permission denied: u'/Users/$USERNAME/.config/gcloud/logs/2018.03.10/XX.XX.XX.XXXXXX.log')
After sudo gcloud components update I was able to kick off a GCMLE experiment, but I still get the same warning (though my job now submits successfully):
WARNING: Could not setup log file in /Users/$USERNAME/.config/gcloud/logs, (IOError: [Errno 13] Permission denied: u'/Users/$USERNAME/.config/gcloud/logs/2018.03.10/XX.XX.XX.XXXXXX.log')
Based on an answer to a similar question, you probably need to change the permissions of the appropriate directories:
sudo chown -R $USER ~/.config/gcloud
That same post suggests that permissions may have gotten out-of-whack by running a gcloud command with sudo.
In most cases the problem is not caused by the ~/.config/gcloud directory, but by the installation directory of gcloud, which is owned by root:
drwxr-xr-x 20 root staff 640 Jun 20 18:22 google-cloud-sdk
Solution:
You must change the ownership of that directory from root to your user by running:
sudo chown -R $USER /Users/$USER/bin/google-cloud-sdk
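To confirm the fix took effect, you can re-check who owns both directories (the install path below matches the one used above; yours may differ):
ls -ld /Users/$USER/bin/google-cloud-sdk ~/.config/gcloud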
I am migrating a Django application from OpenShift v2 to v3. (In case you don't know, Red Hat is shutting down v2 on September 30th; see https://blog.openshift.com/migrate-to-v3-v2-eol/.)
So I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I logged into the failing container with oc debug and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I tried doing it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without going into more detail: any S2I builder image will gladly use a custom supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it, and rebuild the app in OpenShift; it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
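For illustration, a minimal .s2i/bin/run could look like the sketch below (hypothetical; adapt it to whatever your app.sh currently does). As a side benefit, invoking the script through the interpreter does not require the execute bit on the file:
#!/bin/bash
# Custom S2I run script (sketch): hand control over to the existing app.sh
exec /bin/bash /opt/app-root/src/app.sh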
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you are trying to modify the permissions in the deployed pod, not in the builder pod. Deployed pods run under different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence permission denied.
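You can check this yourself from the oc debug shell; the UID reported by id will not match the numeric owner shown by ls -ln:
# Compare the pod's UID with the file's numeric owner
id
ls -ln /opt/app-root/src/app.sh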
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you want to change this part of the build instead of using a custom run script, I suggest you create .s2i/bin/assemble in your project's source code and make it look something like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification (as happened for me), run git update-index --chmod=+x app.sh for it to work.
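To check whether the executable bit is actually recorded in the index, inspect the staged file mode (100755 means executable, 100644 means not):
git ls-files --stage app.sh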
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields:
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because ssh-ing in manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but it seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config; you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not reliably run (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the commands below on a stock Amazon Linux AMI instance, and then save the modified instance (i.e. create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
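For completeness, one way to snapshot the patched instance into a custom AMI from the command line (a sketch; it assumes you have the AWS CLI configured, and the instance ID and image name are placeholders you must replace):
# Create a custom AMI from the patched, running instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "amazon-linux-no-requiretty"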
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).
My Django app is built on an Ubuntu VM instance via a Fabric script run from my local dev machine as root with sudo. The Fabric script sets up a folder in:
/var/log/FOLDERNAME
and the app is set to log all log data into it.
However, after each build, even though the right group and permissions exist on the folder (ls -all confirms it), the log files have trouble getting generated unless I SSH into the box after each Fabric build and manually type:
sudo chmod 777 /var/log/FOLDERNAME -Rf
... then everything works fine.
Can anyone please shed some light and/or point me in the right direction to solve this?
Cheers!
Use put with the mode argument to set up your log file/folder with the right permissions:
put('yourlogfile', 'yourlogfile', mode=0755)
A side note: using chmod 777 is generally not a good idea. If your VM is running Ubuntu, your Apache runs by default as www-data. chown to www-data and read/write permissions for that user/group should be enough.
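For example (a sketch, assuming Apache runs as www-data and FOLDERNAME is the directory from the question), the Fabric-driven build could run something like this on the box instead of chmod 777:
# Hand the log directory to the web server user/group and drop world access
sudo chown -R www-data:www-data /var/log/FOLDERNAME
sudo chmod -R u+rwX,g+rwX,o-rwx /var/log/FOLDERNAME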