PyCharm debug and user permissions - python-2.7

I am trying to debug the Frappe Framework (based on Python) with PyCharm. The debugger works fine and breaks at the selected breakpoints in the code.
At the line
stream = open(self.baseFilename, self.mode)
where it is trying to open a log file for writing, I get Permission denied: /home/frappeuser/frappe-bench/logs/bench.log. I had su'd to root.
The Frappe user is a sudo user; I normally run the command bench start to start Frappe, and it works fine from the command line, but not while I am debugging.
My question is why it is denying access to the log file when I am root. I am not sure whether PyCharm can be configured for different user levels.

The problem is not related to PyCharm; you have to change the permissions of that file so that the user running the debugger can write to it. Try help chmod or visit http://ss64.com/bash/chmod.html
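For example, a minimal sketch of what that could look like, assuming the debugger runs as frappeuser (the path is taken from the error above; adjust the user to whoever owns the PyCharm process):

# give frappeuser ownership of the log file from the error message
sudo chown frappeuser:frappeuser /home/frappeuser/frappe-bench/logs/bench.log
# or, less strictly, let any user write to it
sudo chmod 666 /home/frappeuser/frappe-bench/logs/bench.log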

Unable to connect google compute engine, getting permission denied error

I accidentally changed the permissions of the .ssh folder to 600, and now I am not able to log in to the GCP server through SSH as it gives a permission denied error.
**Connection Failed**
You cannot connect to the VM instance because of an unexpected error. Wait a few moments and then try again.
I tried multiple options, such as SSH troubleshooting on the instance, enabling the serial console, and logging in with an SSH private key.
Thank you in advance.
One of the simplest ways to fix this would be to use a startup script. In this script, just execute chmod 700 /path/to/your/.ssh (see the sketch after the steps below).
The startup scripts are executed with root privileges, so it should be able to fix your problem with .ssh folder permissions.
So, what you need to do:
Set the startup script.
Restart the VM.
Wait a minute or two to make sure the script got executed.
Remove the startup script from the machine. (no need to restart again)
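A minimal sketch of such a startup script (the home directory and user name are assumptions; point it at the actual folder you changed to 600):

#! /bin/bash
# restore standard permissions on the .ssh directory and the keys inside it
chmod 700 /home/youruser/.ssh
chmod 600 /home/youruser/.ssh/authorized_keys

You can attach it from your workstation with gcloud, for example:
gcloud compute instances add-metadata <instance-name> --metadata-from-file startup-script=fix_ssh.sh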
Thank you guys for all your support; my problem got solved by following the document below:
Serial Console with local password using a startup script
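If that document moves, the gist of the approach (as its title suggests) is a startup script that sets a local password so you can log in on the serial console. A sketch; the username and password are placeholders, and you should change the password and remove the script once you are back in:

#! /bin/bash
# set a temporary local password for serial-console login
echo "youruser:temporary-password" | chpasswd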

CentOS7 ccollab with perforce CL update issue

I can't get Code Collaborator to upload files for code review. I suspect I am missing some config. I have been scouring Perforce, SmartBear, and Stack Overflow pages for a couple of hours now with no luck.
CentOS 7
p4 (can't seem to find the version)
Collaborator Enterprise v11.2.11200
My p4 works totally fine; I have been using it for months now to create CLs and submit. But now I need to upload files for code reviews.
Commands I ran to set up ccollab:
wget https://s3.amazonaws.com/downloads.smartbear/collaborator/11.2.11200/ccollab_client_11_2_11200_unix.sh
chmod +x ccollab_client_11_2_11200_unix.sh
./ccollab_client_11_2_11200_unix.sh
(went through the install, accepting/pressing Enter as prompted)
ccollab login https://<codecollaborator_server> <username>
The above logs in fine, no errors.
ccollab --no-browser --scm perforce --server-proxy-host https://codecollaborator_server --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 set
Then I try to upload a file:
ccollab --debug addchangelist new 123456789
and get the following output:
Connecting to server at https://
Connected to Collaborator Enterprise v11.2.11200
Connected as:
Attaching changelists to review
Auto-detecting SCM System for '/my/workspace/path'
Checking client configuration for '/my/workspace/path'.
ERROR: Could not configure SCM system:
SCM system could not be auto-detected, but there was an error: Cannot run program "accurev" (in directory "/my/workspace/path"): error=2, No such file or directory
I tried to find out what the "accurev" package is or how to use it, but no joy.
Accurev is a different source control system. Sounds like Code Collab doesn't know that it's supposed to be using Perforce?
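One thing worth checking along those lines (a guess, based only on the commands shown above): whether the --scm perforce setting from the earlier ccollab ... set call actually took effect for this workspace, since the error shows the client falling back to SCM auto-detection. Re-running the configuration step and confirming Perforce answers from the same directory might narrow it down:

# re-apply the Perforce settings (flags copied from the question)
ccollab --no-browser --scm perforce --p4user <username> --p4charset utf8 --p4client local_workspace_name --p4 /bin/p4 set
# confirm Perforce itself is reachable from the workspace the upload runs in
cd /my/workspace/path && /bin/p4 info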

Cannot chmod file on Openshift online v3 : Operation not permitted

I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container with oc debug and see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without going into more detail: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
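A minimal sketch of such a script, saved as .s2i/bin/run (the app.sh path is taken from the question; invoking it through bash is an assumption that sidesteps the missing execute bit entirely):

#!/bin/bash
# custom S2I run script: start the app via the shell so app.sh
# does not need its own execute permission
exec /bin/bash /opt/app-root/src/app.sh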
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
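You can see this for yourself: run id in the same kind of debug session as in the question and compare the uid it prints with the file owner shown by ls -l; the two won't line up:

> oc debug dc/<my app>
> (app-root)sh-4.2$ id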
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification, as happened for me, you have to use git update-index --chmod=+x app.sh for it to work.
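For completeness, the full sequence might look like this (assuming app.sh sits at the root of your repo):

# mark the file executable locally and in git's index, then push
chmod +x app.sh
git update-index --chmod=+x app.sh
git commit -m "Make app.sh executable"
git push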

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown, and the Rscript is ignored. Removing the last line (the shutdown) causes the GCE instance to start, but the Rscript is still ignored. Running just sudo /usr/bin/Rscript /home/myuser/launch_script.R from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # on Debian
$ sudo /usr/share/google/run-startup-scripts                  # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh was up. Building a 40-second delay into the script and running the script as a regular user (not sudo-type root) seems to have solved this problem for now (see the sketch after this list). Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
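A sketch of that first workaround applied to the startup script from the question (the user name and delay length are assumptions; tune both to your setup):

#! /bin/bash
# give autossh time to bring up the database tunnel first
sleep 40
# run the R job as the regular user rather than root
sudo -u myuser /usr/bin/Rscript /home/myuser/launch_script.R
shutdown -h now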

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to what I encountered with cron -- namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, and thus the default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
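A minimal sketch of what such a plist might look like (the label, script path, user name, and schedule are all placeholders; SessionCreate is the key this answer is about):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.nightly-build</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/builduser/nightly_build.sh</string>
    </array>
    <!-- run the build every night at 02:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
    <!-- run as the build user, not root -->
    <key>UserName</key>
    <string>builduser</string>
    <!-- create a login session so the user's login.keychain is available -->
    <key>SessionCreate</key>
    <true/>
</dict>
</plist>

Load it once with:
sudo launchctl load /Library/LaunchDaemons/com.example.nightly-build.plist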
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see which user the script is launched as.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and then source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
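And a sketch of what set_build_env.sh itself might contain (the values are placeholders; copy the real ones from your .profile):

#!/bin/bash
# everything the build needs that a login shell would normally provide
export HOME=/home/dmitry
export PATH=/usr/local/bin:/usr/bin:/bin:$HOME/bin
export CLASSPATH=$HOME/lib/build-deps.jar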