I tried to follow this tutorial on publishing a HIT task on MTurk using ParlAI:
How to set up a task with ParlAI
The tutorial says that I should run the following code in order to run the task on sandbox:
$ python run.py -nh 2 -r 0.05 --sandbox --verbose
Normally, it should then download the dataset specified in the agents.py file. But when I run this in PowerShell, the following error occurs:
PS C:\py3\ParlAI\parlai\mturk\tasks\model_evaluator> python run.py -nh 2 -r 0.05 --sandbox --verbose
Traceback (most recent call last):
  File "run.py", line 6, in <module>
    from parlai.core.params import ParlaiParser
ModuleNotFoundError: No module named 'parlai'
PS C:\py3\ParlAI\parlai\mturk\tasks\model_evaluator>
It says that module parlai is missing. How can I make it work?
Thanks for your answer!
Best regards
Rainer
This error means that ParlAI hasn't been installed into your Python path yet; run python setup.py develop in the ParlAI home directory.
However, note that ParlAI doesn't officially support Windows, so you're likely to run into further errors after that.
(If you have questions for the ParlAI team directly, they usually respond quickly to issues opened on the GitHub page.)
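If you haven't set up the repository yet, the whole sequence (assuming the standard GitHub checkout of ParlAI) looks roughly like this:

```shell
# Clone the repository and install it in develop mode,
# which puts the parlai package on the Python path
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI
python setup.py develop
```

After that, python -c "import parlai" should succeed from any directory.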
Related
I am trying to run a BlazeMeter Taurus script with a JMeter script inside via AWS Lambda. I'm hoping there is a way to run bzt via a local installation in /tmp/bzt instead of looking for a bzt installation on the system, which doesn't really exist since it's Lambda.
This is my lambda_handler.py:
import subprocess
import json


def run_taurus_test(event, context):
    subprocess.call(['mkdir', '/tmp/bzt/'])
    subprocess.call(['pip', 'install', '--target', '/tmp/bzt/', 'bzt'])
    # subprocess.call('ls /tmp/bzt/bin'.split())
    subprocess.call(['/tmp/bzt/bin/bzt', 'tests/taurus_test.yaml'])
    return {
        'statusCode': 200,
        'body': json.dumps('Executing Taurus Test hopefully!')
    }
The taurus_test.yaml runs as expected when tested on my computer with bzt installed normally via pip, so I know the issue isn't with the test script. The same traceback as below appears if I uninstall bzt from my system and try to use a local installation targeted in a certain directory.
This is the traceback in the execution results:
Traceback (most recent call last):
  File "/tmp/bzt/bin/bzt", line 5, in <module>
    from bzt.cli import main
ModuleNotFoundError: No module named 'bzt'
So the failure is technically in /tmp/bzt/bin/bzt, the executable itself, and I think that's because it isn't using the local/targeted installation.
So, I'm hoping there is a way to tell bzt to keep using the targeted installation in /tmp/bzt, instead of the executable there handing off to an installation that doesn't exist anywhere else. Feedback on whether AWS Fargate or EC2 would be better suited for this is also appreciated.
Depending on the size of the bzt package, the solutions are:
- Use the recent Lambda Docker (container image) feature; this way, what you run locally is what you get on Lambda.
- Use Lambda layers (similar in spirit to Docker): the layer has the bzt module in its python directory, as described in the Lambda layers documentation.
- When you package your Lambda, instead of uploading a simple Python file, create a ZIP file containing both /path/to/zip_root/lambda_handler.py and the output of pip install --target /path/to/zip_root bzt.
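If you'd rather keep the install-into-/tmp approach from the question, one more angle (a sketch, untested on Lambda itself) is to pass the targeted installation to the child process via PYTHONPATH, so the /tmp/bzt/bin/bzt entry point can resolve from bzt.cli import main against /tmp/bzt instead of the (nonexistent) system site-packages:

```python
import json
import os
import subprocess


def run_taurus_test(event, context):
    target = '/tmp/bzt'  # /tmp is the only writable location in a Lambda container

    # Install bzt into the target directory at runtime
    subprocess.call(['pip', 'install', '--target', target, 'bzt'])

    # Prepend the target to PYTHONPATH so the bzt entry point script
    # can import the bzt package from /tmp/bzt
    env = dict(os.environ)
    env['PYTHONPATH'] = target + os.pathsep + env.get('PYTHONPATH', '')

    subprocess.call([os.path.join(target, 'bin', 'bzt'),
                     'tests/taurus_test.yaml'], env=env)
    return {'statusCode': 200, 'body': json.dumps('Executing Taurus Test!')}
```

Installing on every invocation is slow, so for anything beyond an experiment one of the packaging options above is the better fit.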
Following the instructions in the documentation, I attempt to create my new project, and get the following error:
EDIT: I should not post questions late at night. Added more detail to the terminal output. Prior to this, I verified that pip was upgraded, djangocms-installer was installed, and the virtualenv was set up.
(djangoenv) [ec2-user@web01 ~]$ djangocms jbi
Creating the project
Please wait while I install dependencies
If I am stuck for a long time, please check for connectivity / PyPi issues
Dependencies installed
Creating the project
The installation has failed.
*****************************************************************
Check documentation at https://djangocms-installer.readthedocs.io
*****************************************************************
Traceback (most recent call last):
  File "/home/ec2-user/djangoenv/bin/djangocms", line 8, in <module>
    sys.exit(execute())
  File "/home/ec2-user/djangoenv/lib/python3.7/site-packages/djangocms_installer/main.py", line 44, in execute
    django.setup_database(config_data)
  File "/home/ec2-user/djangoenv/lib/python3.7/site-packages/djangocms_installer/django/__init__.py", line 353, in setup_database
    output = subprocess.check_output(command, env=env, stderr=subprocess.STDOUT)
  File "/usr/lib64/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/home/ec2-user/djangoenv/bin/python', '-W', 'ignore', 'manage.py', 'migrate']' returned non-zero exit status 1.
Using Django inside a virtual environment created on an AWS EC2 instance, running Amazon Linux 2. I'm OK with burning the instance down and using another distro if the issue is the distro.
I am trying to install google-cloud-sdk on Ubuntu 18.04. I am following the official docs given here. When I run ./google-cloud-sdk/install.sh I get the following error:
Welcome to the Google Cloud SDK!
To help improve the quality of this product, we collect anonymized usage data
and anonymized stacktraces when crashes are encountered; additional information
is available at <https://cloud.google.com/sdk/usage-statistics>. This data is
handled in accordance with our privacy policy
<https://policies.google.com/privacy>. You may choose to opt in this
collection now (by choosing 'Y' at the below prompt), or at any time in the
future by running the following command:
gcloud config set disable_usage_reporting false
Do you want to help improve the Google Cloud SDK (y/N)? N
Traceback (most recent call last):
  File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 225, in <module>
    main()
  File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 200, in main
    Prompts(pargs.usage_reporting)
  File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 123, in Prompts
    scope=properties.Scope.INSTALLATION)
  File "/home/vineet/google-cloud-sdk/lib/googlecloudsdk/core/properties.py", line 2406, in PersistProperty
    config.EnsureSDKWriteAccess()
  File "/home/vineet/google-cloud-sdk/lib/googlecloudsdk/core/config.py", line 198, in EnsureSDKWriteAccess
    raise exceptions.RequiresAdminRightsError(sdk_root)
googlecloudsdk.core.exceptions.RequiresAdminRightsError: You cannot perform this action because you do not have permission to modify the Google Cloud SDK installation directory [/home/vineet/google-cloud-sdk].
Re-run the command with sudo: sudo /home/vineet/google-cloud-sdk/bin/gcloud ...
I tried searching Stack Overflow and GitHub issues, but in vain.
I would appreciate any hint to solve it.
As stated in the error message:
Re-run the command with sudo: sudo /home/vineet/google-cloud-sdk/bin/gcloud ...
The install.sh script should be run using sudo.
There are also other alternatives to install the Google Cloud SDK on Ubuntu 18.04, such as installing the package with apt-get, as explained in the documentation.
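For reference, the apt-get route looked like the following at the time of writing (the repository URI and keyring path below come from the install page; check it for the current commands, since they do change):

```shell
# Add the Cloud SDK distribution URI as a package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# Make sure apt can fetch over HTTPS and verify signatures
sudo apt-get install apt-transport-https ca-certificates gnupg

# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

# Install the SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk
```

A package-managed install lives under a system path, so the per-user permission problem from the question doesn't arise.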
I am playing around with pyvmomi and I managed to get the sample script (getallvms.py) working.
I am now trying another script that I found here:
https://raw.githubusercontent.com/vmware/pyvmomi-community-samples/master/samples/vminfo_quick.py
When I run this script I get the following error:
Iwans-Mac:sample iwan-home-folder$ python vminfo_quick.py -s 10.11.11.215 -u pyvmomi-user@sso-iwan.local -p VMware1!
Traceback (most recent call last):
  File "vminfo_quick.py", line 19, in <module>
    from tools import cli
ImportError: No module named tools
I am not sure how I install the module "tools".
Can someone tell me how I should continue?
Thanks,
Iwan
The script you are trying to run is meant to be run from the samples project directory. To have the most success, clone the project:
git clone https://github.com/vmware/pyvmomi-community-samples
cd pyvmomi-community-samples/samples
python vminfo_quick.py xxxx
Once you do that, the import issues will go away. If you look in the samples directory you will find tools/cli.py, which is what the script is trying to import.
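The underlying issue is just Python path resolution: tools is a sibling package inside the samples directory, so it is only importable when that directory is on sys.path, which running the script from inside samples/ gives you for free. A minimal sketch of the mechanism, using a hypothetical throwaway tools package rather than the real one:

```python
import os
import sys
import tempfile

# Build a fake "samples" layout:  <tmpdir>/tools/{__init__.py, cli.py}
samples = tempfile.mkdtemp()
os.makedirs(os.path.join(samples, 'tools'))
with open(os.path.join(samples, 'tools', '__init__.py'), 'w') as f:
    f.write('')
with open(os.path.join(samples, 'tools', 'cli.py'), 'w') as f:
    f.write('def prompt():\n    return "ok"\n')

# Running a script from inside samples/ implicitly puts that directory at the
# front of sys.path; we do the same thing explicitly here.
sys.path.insert(0, samples)
from tools import cli

print(cli.prompt())  # → ok
```

So if you ever need to run vminfo_quick.py from elsewhere, inserting the samples directory into sys.path (or setting PYTHONPATH to it) resolves the import the same way.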
As of Friday 11th of February, 2016, gsutil has suddenly stopped working. I run nightly backups using gsutil, and prior to executing I perform a gcloud components update.
$ gsutil --version
Traceback (most recent call last):
  File "/home/IRUser/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 12, in <module>
    import bootstrapping
  File "/home/IRUser/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 9, in <module>
    import setup
  File "/home/IRUser/google-cloud-sdk/bin/bootstrapping/setup.py", line 41, in <module>
    reload(google)
ImportError: No module named google
If I manually pip install google, gsutil works fine again. However, I wonder why this wasn't handled by gcloud components update.
My question: Isn't gcloud components update supposed to take care of any such dependencies?
I'm on CentOS 7.
This issue has been reported: https://code.google.com/p/google-cloud-sdk/issues/detail?id=538
The "google" package was included in previous releases of the Cloud SDK, but it is no longer needed.
On Python installations which have protobuf installed, the "google" package is auto-imported at startup, and the reload of the existing google package can fail.
By installing "google" with pip you made the reload stop complaining about the missing module, even though the package is not used.
Alternatively you can apply patches suggested in the above issue log.