Drush doesn't export command result to a file - Drupal 8

I would like to export the result of a command run by Drush, but it doesn't work.
This command creates an empty file:
drush @sited8bl.preprod ms > ~/www/XXX/exports-bruts/$(date +%Y%m%d_%H%M%S)-migrationsBL.txt --debug
This debug message suggests that the command isn't found:
[debug] Done with bootstrap max in Application::find(): trying to find ms again. [4.39 sec, 31.8 MB]
On another server (multisite) it works; the hosting provider says there is a Drush issue.
Can someone help me with this?
Thanks
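For reference, a minimal sketch of the same redirect with the long command name and stderr merged into the output file, assuming @sited8bl.preprod is a valid site alias and that ms is the shortcut for migrate:status (the --debug output itself goes to stderr):
# Sketch only; the alias, path, and filename pattern are taken from the question above
drush @sited8bl.preprod migrate:status --debug > ~/www/XXX/exports-bruts/$(date +%Y%m%d_%H%M%S)-migrationsBL.txt 2>&1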

Related

How do you run Drush commands via crontab on Drupal 8

I am trying to execute drush commands through a crontab for a Drupal 8 site. These commands work when I call them directly, but when run through my user's crontab I get the following error:
\Drupal::$container is not initialized yet. \Drupal::setContainer() must be
called with a real container.
Other posts suggest this is a bug within older versions of Drush, but I am on 10.3.5.
I have tried a number of things over the past few hours, including reconfiguring cron, but ultimately it seems Drush is not bootstrapping Drupal correctly. I need to be able to run the queue from cron.
This is a test command I'm running which just adds to the Drupal log...
crontab (my user)
* * * * * /var/www/html/vendor/bin/drush scr /var/www/html/scripts/what.php -r /var/www/html/web
what.php
<?php
\Drupal::logger('mymod')->info("CHECKING IN FROM CRON CLI...");
Here is another command, closer to what I'm trying to accomplish...
crontab (my user)
* * * * * /var/www/html/vendor/bin/drush queue:run commerce_recurring -r /var/www/html/web >> /var/www/html/private/logs/cron_commerce_recurring.log
The error I get here is:
Command queue:run was not found. Drush was unable to query the database. As
a result, many commands are unavailable. Re-run your command with --debug
to see relevant log messages.
I get this same error when running this command with Drupal Console.
Any suggestions appreciated.
Thanks.
This turned out to be a database connectivity issue. The tricky part was getting to the error: I had to install an MTA so that cron could write errors to my local user's mailbox, and then add the --debug option so that I could see the real error.
I'm working on a DDEV instance and for some reason the DDEV settings were not being loaded. An issue for another day...
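As a hedged alternative to installing an MTA just to see the error, the crontab entry can merge stderr into the same log file; this is only a sketch reusing the paths from the question (drop --debug once the underlying error is found):
# Sketch: capture stderr alongside stdout so bootstrap/database errors land in the log
* * * * * /var/www/html/vendor/bin/drush queue:run commerce_recurring -r /var/www/html/web --debug >> /var/www/html/private/logs/cron_commerce_recurring.log 2>&1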

CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/free/win-64/repodata.json.bz2>

I'm setting up a virtual environment for Django for the first time. I've installed the Anaconda distribution of Python on my D drive, so initially I set the paths for Python and conda (Scripts) manually in the advanced system settings. But now, when I create the environment using the command
conda create --name mydjang0 django
the command prompt shows an error like this:
C:\Users\AABHA GAUTAM> conda create --name mydjang0 django
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/pro/win-64/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/pro/win-64/repodata.json.bz2 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))'))
If you already have a .condarc file in your root folder, then update the file by running this command:
conda config --set ssl_verify false
If you do not have a .condarc file, create one and then run the above command. I have added both commands below.
conda config --add channels conda-forge
conda config --set ssl_verify false
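To confirm the settings took effect, something like the following should work (conda config --show can also be run with no arguments to print every configuration value):
conda config --show ssl_verify
conda config --show channels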
Instead of using the Command Prompt, use the Anaconda Prompt as administrator and run the same command there. I also faced the same issue with the Command Prompt.
Simply delete the file .condarc from c:\users\user.

"drush cache-rebuild" throwing driver not found error

I am getting [error] could not find driver when running the drush cache-rebuild command.
Below are the Drush and Drupal version details. Appreciate any help. Thanks.
I was using WAMP with PostgreSQL, where I enabled pdo_pgsql and pgsql from the WAMP application menu (WAMP > PHP > PHP extensions). It didn't work for me.
Step 1) Run php -m in a terminal. In my case it didn't list those PHP modules even though they showed as enabled in WAMP, so I went into php.ini and enabled them there.
Step 2) drush cr or drush cache-rebuild clears the cache from the database, so make sure to add the PostgreSQL path to the system environment variables.
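As a rough illustration of Step 1 (the directive names below are the usual ones; older Windows PHP builds use the php_*.dll form, so treat the exact lines as an assumption):
php -m
# Look for pdo_pgsql and pgsql in the output. If they are missing, enable them in the
# php.ini that the CLI actually reads, e.g. uncomment:
#   extension=pdo_pgsql
#   extension=pgsql
# (older PHP builds on Windows use extension=php_pdo_pgsql.dll and extension=php_pgsql.dll)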
Try this:
sudo composer update
then run drush cr
Try using the drush cr command. And if you are using Ubuntu, then use the sudo drush cr command.

Startup script logs location

In Google Cloud Platform's Ubuntu 16.04.1 instance, the output of my startup script was written to /var/log/startupscript.log.
Since they upgraded to 16.04.2, I can't find the logs anymore.
Any ideas?
UPDATE from the official documentation:
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
SLES 11 and 12: /var/log/messages
The correct answer (nowadays) is to use journalctl:
sudo journalctl -u google-startup-scripts.service
You can re-run a startup script like this:
sudo google_metadata_script_runner startup
See also: https://cloud.google.com/compute/docs/instances/startup-scripts/linux
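Two common variations on the journalctl call above, in case they help (-f and -b are standard journalctl flags):
# Follow the startup-script output live
sudo journalctl -u google-startup-scripts.service -f
# Show only entries from the current boot
sudo journalctl -u google-startup-scripts.service -b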
There are two ways to search for the log file (probably a lot more, but I know these two):
locate -i startupscript.log - you may need to update your indexes periodically for this option to work well.
From the root directory: find / -iname startupscript.log -print
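For completeness, a sketch of the index refresh mentioned in the first option (updatedb is part of the mlocate/plocate package and needs root):
sudo updatedb
locate -i startupscript.log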

Cron job OpenShift error

I have a Rails 4 OpenShift application. I am trying to run a cron job. The script runs completely fine when I run it by itself. The script is:
#!/bin/bash
/bin/bash -l -c 'cd $OPENSHIFT_REPO_DIR && bundle exec bin/rails runner -e production "Payment.charge_customers_pay_experts"'
The problem is the log file gives me the following error
Wed Feb 3 22:57:05 EST 2016: START minutely cron run
__________________________________________________________________________
/var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo//.openshift/cron/minutely/charge_customers_pay_experts:
Warning: You're using Rubygems 2.0.14 with Spring. Upgrade to at least Rubygems 2.1.0 and run `gem pristine --all` for better startup performance.
/var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo/vendor/bundle/ruby/gems/spring-1.6.2/lib/spring/sid.rb:39:in `getpgid': Permission denied (Errno::EACCES)
from /var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo/vendor/bundle/ruby/gems/spring-1.6.2/lib/spring/sid.rb:39:in `pgid'
from /var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo/vendor/bundle/ruby/gems/spring-1.6.2/lib/spring/server.rb:78:in `set_pgid'
from /var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo/vendor/bundle/ruby/gems/spring-1.6.2/lib/spring/server.rb:34:in `boot'
from /var/lib/openshift/56a438107628e18b30000111/app-root/runtime/repo/vendor/bundle/ruby/gems/spring-1.6.2/lib/spring/server.rb:14:in `boot'
from -e:1:in `<main>'
__________________________________________________________________________
Wed Feb 3 22:57:06 EST 2016: END minutely cron run - status=0
__________________________________________________________________________
I have made sure the script was executable. I'm not sure if I am missing something. Does anyone have any thoughts?
I don't know that the script being executable necessarily has anything to do with this. It looks like a permissions error more than anything. Does the system user that runs the cron job have the correct permissions? You can test this by logging in as that user (or sudo su - <user>) and then executing the command in the script manually.
/bin/bash -l -c 'cd $OPENSHIFT_REPO_DIR && bundle exec bin/rails runner -e production "Payment.charge_customers_pay_experts"'
Be sure to replace your $OPENSHIFT_REPO_DIR variable with the correct path to your OpenShift repo directory.
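For example, a rough sketch of that test; "gearuser" and the repo path are placeholders for whatever account and directory your cron job actually uses:
# Placeholders: substitute the real account and OpenShift repo path
sudo su - gearuser -c 'cd /path/to/openshift/repo && bundle exec bin/rails runner -e production "Payment.charge_customers_pay_experts"'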
You may just need to either add the user your cronjob runs as to the group that has permissions over the files, or perhaps run the cronjob as a more privileged user (privileged in that it has permissions over the required files).
BTW, I could only post this as an answer as Stack Overflow is telling me I need 50 reputation points to comment.
I fixed this by commenting out the 'spring' gem in my Gemfile. Apparently this is a known issue: https://bugzilla.redhat.com/show_bug.cgi?id=1305544
There is a workaround for the time being, until this issue is resolved. You can edit /usr/libexec/openshift/cartridges/cron/bin/cron_runjobs.sh to add setsid in front of timeout so that it runs setsid timeout ..., as this allows the timeout command to actually change the sid. A sketch of the edit is shown below.
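As an illustration only, the edit amounts to changing the line that invokes timeout; the actual arguments are whatever the cartridge script already passes:
# In /usr/libexec/openshift/cartridges/cron/bin/cron_runjobs.sh (sketch, not exact contents):
#   before:  timeout <existing arguments> <job>
#   after:   setsid timeout <existing arguments> <job>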