Fabric fab -f: Can't find any collection named 'fabfile'

After updating Fabric from 1.4 to 2.4, fab -f <file_path>/<fabfile>.py no longer works. It always throws the error Can't find any collection named 'fabfile'!
According to fab --help, the -f option is now: -f STRING, --config=STRING Runtime configuration file to use.
I'm thinking of downgrading to 1.4, since all my projects on other hosts run that version, but I can't find the 1.4 installer online anymore. Can anyone help with this, or suggest a workaround for Fabric 2.4?

This has indeed changed!
With the newer Fabric 2.x you must use -r (--search-root) instead of -f. This behavior comes from pyinvoke; check the pyinvoke documentation for details.
For example, if you are in dir A and your fabfile is in dir B:
dir A
|__dir B
you can call your fabfile tasks by executing this command from the command line:
fab -r ./B/ yourTaskName
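As a runnable sketch of that layout (the directory, file, and task names here are made up), you can recreate it like this; with the fabric package installed, running fab -r ./B hello from inside A would then find the collection:

```shell
# Hypothetical layout: fab is run from A, the fabfile lives in B
mkdir -p A/B
cat > A/B/fabfile.py <<'EOF'
# Fabric 2-style task (needs the fabric package to actually run)
from fabric import task

@task
def hello(c):
    print("hello from dir B")
EOF
# From inside A:  fab -r ./B hello
```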

CLion Full Remote Mode with FreeBSD as the remote host

Currently, the Full Remote Mode of CLion only supports Linux as a remote host OS. Is it possible to have a FreeBSD remote host?
Yes, you can!
However, note that I'm recalling these steps retrospectively, so I may have missed a step or two. Should you encounter any problem, please feel free to leave a comment below.
Rent a FreeBSD server, of course :)
Update your system to the latest release. Otherwise, you may get weird errors like "libdl.so.1" not found when installing packages. The one I'm using is FreeBSD 12.0-RELEASE-p3.
Create a user account. Don't forget to make it a member of wheel, and uncomment the %wheel ALL=(ALL) ALL line in /usr/local/etc/sudoers.
Set up SSH. This step is especially tricky, because we need to use both public-key and password authentication.
Due to a known bug, in some cases, the remote host must use password authentication, or you'll get an error when setting up the toolchain. You can enable password authentication by setting PasswordAuthentication yes in /etc/ssh/sshd_config, followed by a sudo /etc/rc.d/sshd restart.
It appears that CLion synchronizes files between the local and remote host with rsync and SSH. For reasons I cannot explain, this process hangs forever if the host server doesn't support passphrase-less SSH key login. Follow this answer to create an SSH key as an additional way of authentication.
CLion assumes the remote host OS to be Linux, so we must fix some incompatibilities between GNU/Linux and FreeBSD.
Install GNU utilities with sudo pkg install coreutils.
Rename the BSD utility stat with sudo mv /usr/bin/stat /usr/bin/_stat.
Create a "new" file /usr/bin/stat with the content in Snippet 1. This hack exploits the fact that CLion sets the environment variable JETBRAINS_REMOTE_RUN to 1 before running commands on the remote server.
Do sudo chmod a+x /usr/bin/stat to make it executable.
Again, rename the BSD utility ls with sudo mv /bin/ls /bin/_ls.
Create a "new" file /bin/ls with the content in Snippet 2, like before.
Lastly, sudo chmod a+x /bin/ls.
Install the dependencies with sudo pkg install rsync cmake gcc gdb gmake.
Now you can follow the official instructions, and connect to your shiny FreeBSD host!
Snippet 1
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_stat" "$@"
else
    exec "/usr/local/bin/gnustat" "$@"
fi
Snippet 2
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/bin/_ls" "$@"
else
    exec "/usr/local/bin/gls" "$@"
fi
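The dispatch logic shared by these wrapper scripts can be exercised on its own. Below is a minimal runnable sketch (the pick_tool function is made up for illustration): with JETBRAINS_REMOTE_RUN unset, the renamed BSD tool would run; with it set, the GNU replacement would run.

```shell
# Same branch as the wrappers, but echoing which tool would be exec'd
pick_tool() {
    if [ -z "$JETBRAINS_REMOTE_RUN" ]; then
        echo "bsd"
    else
        echo "gnu"
    fi
}

unset JETBRAINS_REMOTE_RUN
pick_tool        # prints "bsd"
JETBRAINS_REMOTE_RUN=1
export JETBRAINS_REMOTE_RUN
pick_tool        # prints "gnu"
```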
Additionally, you need to fix one more incompatibility between GNU/Linux and FreeBSD:
Check that gtar is installed; if not, run sudo pkg install gtar.
Rename the BSD utility tar with sudo mv /usr/bin/tar /usr/bin/_tar.
Create a "new" file /usr/bin/tar with the content in Snippet 3, like before.
Lastly, sudo chmod a+x /usr/bin/tar.
Snippet 3
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_tar" "$@"
else
    exec "/usr/local/bin/gtar" "$@"
fi
Starting with CLion 2020.1, the instructions regarding gnustat and ls are no longer relevant, because CLion 2020.1 includes the proper fixes in the jsch-nio library (https://github.com/lucastheisen/jsch-nio/commit/410cf5cbb489114b5da38c7c05237f6417b9125b).
Starting with CLion 2020.2, CLion no longer uses the tar --dereference option, so the instructions regarding gtar (GNU tar) are also no longer relevant.

How to start odoo server automatically when system is ON

Hi everyone,
How do I start the Odoo server automatically when the system turns on?
I searched Google and found this link: http://www.serpentcs.com/serpentcs-odoo-auto-startup-script-322
I followed each and every step and started the odoo-server:
ps -ax | grep python
5202 ? Sl 0:01 python /home/tejaswini/Odoo_workspace/workspace_8/odoo8/openerp-server --config /etc/odoo-server.conf --logfile /var/log/odoo-server.log
It shows the server process and its path, but when I open 0.0.0.0:8069 or localhost:8069 in the browser, it shows "This site can't be reached".
Please, can anyone help?
Thanks in advance
To start a service automatically when the system turns on, you need to register it with the init system. Try the command below:
sudo update-rc.d <service_name> defaults
In your case:
sudo update-rc.d odoo-server defaults
Hope it will help you.
For the final step, we need to install a script that will be used to start up and shut down the server automatically, and also run the application as the correct user. There is a script you can use in /opt/odoo/debian/init, but it will need a few small modifications to work with the system installed the way I have described above; here is the link.
Similar to the configuration file, you need to either copy it or paste the contents of this script to a file in /etc/init.d/ and call it odoo-server. Once it is in the right place you will need to make it executable and owned by root:
sudo chmod 755 /etc/init.d/odoo-server
sudo chown root: /etc/init.d/odoo-server
In the configuration file there's an entry for the server's log file. We need to create that directory first so that the server has somewhere to log to, and we must also make it writable by the odoo user:
sudo mkdir /var/log/odoo
sudo chown odoo:root /var/log/odoo

Startup script logs location

In Google Cloud Platform's Ubuntu 16.04.1 instance, the output of my startup script was written to /var/log/startupscript.log.
Since the upgrade to 16.04.2, I can't find the logs anymore.
Any idea?
UPDATE from the official documentation:
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
SLES 11 and 12: /var/log/messages
The correct answer (by now) is to use journalctl:
sudo journalctl -u google-startup-scripts.service
You can re-run a startup script like this:
sudo google_metadata_script_runner startup
See also: https://cloud.google.com/compute/docs/instances/startup-scripts/linux
There are two ways to search for the log file (there are probably a lot more, but I know the two below):
locate -i startupscript.log (you may need to update the locate index periodically with sudo updatedb for this option to be optimal)
Search from the root directory: find / -iname startupscript.log -print
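Since that output is interleaved with everything else in /var/log/syslog, grep is the quickest filter. Here is a self-contained sketch against a fabricated sample file (the log lines and the startup-script tag are illustrative, not copied from a real instance):

```shell
# Fabricated syslog sample: only the middle line comes from the startup script
cat > syslog.sample <<'EOF'
Jun  1 10:00:01 myvm systemd[1]: Started Session 1 of user me.
Jun  1 10:00:02 myvm startup-script: INFO startup-script: Return code 0.
Jun  1 10:00:03 myvm sshd[815]: Accepted publickey for me
EOF
grep startup-script syslog.sample    # prints only the middle line
```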

xgboost dmlc-submit eats command quotes (needed to run python job via scl enable)

I use RedHat 6.8 for my YARN cluster nodes, and it uses Python 2.6.6. In order to get Python 2.7 in RedHat one uses Software Collections. To activate a software collection one must do scl enable. To print Python version for example one does scl enable python27 'python -V'
Problem is when I try to submit my Python job to YARN as follows
dmlc-submit --cluster=yarn scl enable python27 'python -V'
It seems to eat up the quotes and produces this error (instead of the expected Python 2.7.8):
Unable to open /etc/scl/prefixes/python!
That is the same output one gets when running the following at the bash prompt on any machine:
scl enable python27 python -V
I'm trying to figure out how to fool argparse into letting this through.
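What is likely happening can be reproduced in plain sh: once a launcher joins the argument vector into a single string and naively re-splits it on whitespace, the quoting around 'python -V' is lost. This is only an illustration of the effect, not dmlc-submit's actual code:

```shell
# The shell originally parses this into 4 words; the last word is "python -V"
set -- scl enable python27 'python -V'
echo "$#"    # prints 4

# A launcher that joins the words and re-splits them loses the quoting:
joined="$*"
set -- $joined
echo "$#"    # prints 5: "python" and "-V" are now separate arguments
```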
I looked into the argparser source code (which dmlc-core and xgboost use), trying to find any possible escaping mechanism. I couldn't find one, but I did find out that dmlc-submit picks up the environment of the job-submission node and recreates it on each of the execution nodes. I looked at scl enable and the /opt/rh/python27/enable script and realized that LD_LIBRARY_PATH is the important variable. PATH is important too, but for some reason it doesn't carry over, so I now specify the full path to my SCL-provided python27.
The following now happily runs on my YARN 2.7.1 cluster:
cd xgboost
dmlc-core/tracker/dmlc-submit --cluster=yarn --num-workers=9 --worker-cores=4 /opt/rh/python27/root/usr/bin/python tests/distributed/test_basic.py

Why don't my custom recipes run on AWS OpsWorks?

I've created a GitHub repo for my simple custom recipe:
my-cookbook/
|- recipes/
|- appsetup.rb
I've added the repo to Custom Chef Recipes as https://github.com/my-github-user/my-github-repo.git
I've added my-cookbook::appsetup to the Setup lifecycle event.
I know it's executed, because it fails to load if I mess up the syntax.
This is my appsetup.rb:
node[:deploy].each do |app_name, deploy|
  script "install_composer" do
    interpreter "bash"
    user "root"
    cwd "#{deploy[:deploy_to]}/current"
    code "curl -sS https://getcomposer.org/installer | php && php composer.phar install --no-dev"
  end
end
When I log into the instance by SSH with the ubuntu user, composer isn't installed.
I've also tried the following (a Node.js install), to no avail:
node[:deploy].each do |app_name, deploy|
  execute "installing node" do
    command "add-apt-repository --yes ppa:chris-lea/node.js && apt-get update && sudo apt-get install python-software-properties python g++ make nodejs"
  end
end
Node doesn't get installed, and there are no errors in the log. The only references to the cookbook in the log say:
[2014-03-31T13:26:04+00:00] INFO: OpsWorks Custom Run List: ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:04+00:00] INFO: New Run List expands to ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:05+00:00] DEBUG: Loading Recipe my-cookbook::appsetup via include_recipe
[2014-03-31T13:26:05+00:00] DEBUG: Found recipe appsetup in cookbook my-cookbook
Am I missing some critical step somewhere? The recipe is clearly recognized and loaded, but doesn't seem to be executed.
(The following are fictitious names: my-github-user, my-github-repo, my-cookbook)
I know you've abandoned the cookbook but I'm almost 100% sure it's because you don't have a metadata.rb file in the root of your cookbook.
Your cookbook name should not contain a dash. I had the same problem; replacing it with '_' solved it for me.
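A sketch combining both suggestions above (the cookbook and file names here are examples, not your real cookbook): a minimal metadata.rb at the cookbook root, with an underscore instead of the dash in the name:

```shell
# Example cookbook skeleton; my_cookbook is a stand-in name
mkdir -p my_cookbook/recipes
cat > my_cookbook/metadata.rb <<'EOF'
name    'my_cookbook'
version '0.1.0'
EOF
# The run list entry would then be my_cookbook::appsetup
```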
If those commands are failing silently, it could be that your use of && is obscuring a failure.
As for add-apt-repository, that is an interactive command. Try using the "--yes" option to answer yes by default, making it no longer interactive.
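The masking effect of && is easy to reproduce in plain sh; here `false` stands in for a failing add-apt-repository:

```shell
# If the first command in an && chain fails, the rest silently never runs,
# and the recipe log contains no output at all from these commands.
if false && echo "apt-get install ... (never reached)"; then
    echo "chain succeeded"
else
    echo "chain failed, with no other output"
fi
```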
If your command does not execute successfully, you will not find the files in the current directory. Check inside the last release folder to see whether they were put there.
It may be prudent to check that you have the right directory setup by changing the cwd to /tmp.