While updating jobs with Jenkins Job Builder using jenkins-jobs update, I'm getting the error below.
INFO:root:Updating jobs in ['jobs'] ([])
Traceback (most recent call last):
File "/usr/bin/jenkins-jobs", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 191, in main
execute(options, config)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 372, in execute
n_workers=options.n_workers)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 348, in update_jobs
self.load_files(input_fn)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 293, in load_files
self.parser.parse(in_file)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 128, in parse
self.parse_fp(fp)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 105, in parse_fp
cls, dfn = next(iter(item.items()))
AttributeError: 'str' object has no attribute 'items'
Job-Builder Version : 1.6.1
Python Version : 2.7
OS : RHEL 7.1
I've tried this on different machines, but with no luck.
The AttributeError: 'str' object has no attribute 'items' error is very common in Python. It would be more helpful if you shared the code, or at least showed where the error appears.
You are using "Jenkins Job Builder" for configuring jenkins and you are getting error while updating jenkins jobs. The update command is used to deploy the job to jenkins after you have tested the job definition. The update command requires a configuration file.
You should pass that configuration file as it is, not in string format and also the jobs should be in non string format inside configuration file, I mean not in ' OR " single or double quotes.
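To illustrate why the parser fails (a minimal sketch with a made-up job name, not the asker's actual file, sketched here with plain PyYAML): jenkins-job-builder expects every top-level YAML item to be a one-key mapping such as job: {...}; if an entry is quoted so that it parses as a plain string, the next(iter(item.items())) call in parser.py fails exactly as in the traceback above.
import yaml

# A correctly structured item: a one-key mapping, which is what
# jenkins-job-builder's parser expects for each top-level entry.
good = yaml.safe_load("""
- job:
    name: example-job
""")
item = good[0]
cls, dfn = next(iter(item.items()))  # works: cls == 'job'

# A quoted entry parses as a plain string instead of a mapping.
bad = yaml.safe_load("""
- "job: {name: example-job}"
""")
item = bad[0]  # a str, so calling item.items() raises
# AttributeError: 'str' object has no attribute 'items'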
I am trying to execute Python code on a Dataproc cluster via Airflow orchestration.
I am using Airflow 1.10.12 and DataprocWorkflowTemplateInstantiateInlineOperator to instantiate a Dataproc cluster and pass some parameters (templated params as well). The main objective is to run some prediction code.
Note that I upgraded this code to use airflow.providers.google.cloud.operators.dataproc rather than airflow.contrib.operators.dataproc_operator to import DataprocInstantiateInlineWorkflowTemplateOperator, because the former introduced the parameters kwarg that, in theory, permits passing arguments to the cluster. Using the latter in my other scripts, I have no errors, but I cannot pass parameters to the cluster.
...
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocInstantiateInlineWorkflowTemplateOperator,
)
...
workflow_seg_members = make_workflow_template(
    region=REGION,
    dataproc_job_bucket=DATAPROC_JOB_BUCKET,
    python_main_executable_path="segmentation_members/seg_members_prediction.py",
)
op_seg_members_prediction = DataprocInstantiateInlineWorkflowTemplateOperator(
    task_id="seg_members_prediction",
    project_id=PROJECT_ID,
    region=REGION,
    template=workflow_seg_members,
    parameters={
        "execution_date_str": "{{ds_nodash}}",
        "project": "<REDACTED>",
        "dataset": "<REDACTED>",
        "features_table_prefix": "global_features",
        "output_table_prefix": "seg_members_output",
        "path_to_model": "segmentation_members/DecisionTreeClassifier.pkl",
        "bucket_name": "<REDACTED>",
        "model_designation": "Segmentation Members",
    },
)
In seg_members_prediction.py, I use argparse to create the needed arguments.
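For context, a minimal sketch of what the argparse side might look like; the argument names are assumed from the parameters dict above, since the actual seg_members_prediction.py is not shown:
import argparse

# Hypothetical reconstruction: one CLI flag per key in the operator's
# parameters dict above; the real script may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--execution_date_str")
parser.add_argument("--project")
parser.add_argument("--dataset")
parser.add_argument("--features_table_prefix")
parser.add_argument("--output_table_prefix")
parser.add_argument("--path_to_model")
parser.add_argument("--bucket_name")
parser.add_argument("--model_designation")
args = parser.parse_args()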
The error I am getting is:
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1beta2.OrderedJob got str.
My questions are:
How do I fix this MergeFrom() exception?
Is this the right approach to pass parameters from airflow to my dataproc cluster?
Here is the complete stack trace:
File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 373, in inner_wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataproc.py", line 712, in instantiate_inline_workflow_template
metadata=metadata,
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/dataproc_v1beta2/gapic/workflow_template_service_client.py", line 488, in instantiate_inline_workflow_template
request_id=request_id,
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1beta2.OrderedJob got str.
[2022-06-13 12:36:34,696] {taskinstance.py:1197} INFO - Marking task as UP_FOR_RETRY. dag_id=ds_seg_members_integration, task_id=seg_members_prediction, execution_date=20220610T162234, start_date=20220613T123629, end_date=20220613T123634
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "/usr/local/lib/airflow/airflow/bin/airflow", line 37, in <module>
args.func(args)
File "/usr/local/lib/airflow/airflow/utils/cli.py", line 76, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/bin/cli.py", line 561, in run
_run(args, dag, ti)
File "/usr/local/lib/airflow/airflow/bin/cli.py", line 480, in _run
pool=args.pool,
File "/usr/local/lib/airflow/airflow/utils/db.py", line 74, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 986, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/dataproc.py", line 1748, in execute
metadata=self.metadata,
File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 373, in inner_wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataproc.py", line 712, in instantiate_inline_workflow_template
metadata=metadata,
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/dataproc_v1beta2/gapic/workflow_template_service_client.py", line 488, in instantiate_inline_workflow_template
request_id=request_id,
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1beta2.OrderedJob got str.
EDIT:
I tried running the following code, but I still got the same error:
op_seg_members_prediction = DataprocInstantiateInlineWorkflowTemplateOperator(
    task_id="seg_members_prediction",
    project_id=PROJECT_ID,
    region=REGION,
    template=workflow_seg_members,
)
op_seg_members_prediction.execute(context="DEBUG")
After browsing the apache-airflow-providers-google docs, I discovered the package isn't compatible with Airflow 1:
You can install this package on top of an existing Airflow 2
installation (see Requirements below for the minimum Airflow version
supported) via pip install apache-airflow-providers-google
The package supports the following python versions: 3.7,3.8,3.9,3.10
So I upgraded to Airflow 2 in my local environment, and the MergeFrom() error stopped occurring.
This still leaves the question: how do you pass parameters from Airflow to Dataproc on Airflow 1?
On a MacBook Pro (Catalina) with Python 3.8, I'm interested in developing Android apps in Kivy using KivyMD. I took the following steps to install KivyMD:
Created the virtual environment (virtmd), then activated it (source /virtmd/bin/activate)
In this virtual environment, installed the following via pip: Pillow, kivy
Git-cloned KivyMD as per the link suggested on the GitHub page
Ran pip install .
All this ran seamlessly with no hitch or errors. pip freeze shows the following installed items:
certifi==2021.5.30
charset-normalizer==2.0.4
docutils==0.17.1
idna==3.2
Kivy==2.0.0
Kivy-Garden==0.1.4
kivymd # file:///Users/robinhahn/PyProg/kvKivyMD/KivyMD
Pillow==8.3.1
Pygments==2.9.0
requests==2.26.0
urllib3==1.26.6
I was following a Codemy video tutorial and noticed that the presenter's pip freeze showed 4 more entries:
chardet==4.0.0
kivy-deps.angle==0.3.0
kivy-deps.glew==0.3.0
kivy-deps.sdl2==0.3.1
Cd'ed into the /demos/kitchen_sink folder and ran 'python3 main.py', which failed with the error below. The last few lines of the traceback appear to focus on a file called kivytoast.py:
File "main.py", line 144, in <module>
KitchenSinkApp().run()
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/app.py", line 949, in run
self._run_prepare()
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/app.py", line 944, in _run_prepare
self.dispatch('on_start')
File "_event.pyx", line 709, in kivy._event.EventDispatcher.dispatch
File "main.py", line 65, in on_start
Builder.load_file(
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/lang/builder.py", line 306, in load_file
return self.load_string(data, **kwargs)
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/lang/builder.py", line 373, in load_string
parser = Parser(content=string, filename=fn)
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/lang/parser.py", line 402, in __init__
self.parse(content)
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/lang/parser.py", line 508, in parse
self.execute_directives()
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivy/lang/parser.py", line 472, in execute_directives
mod = __import__(package)
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivymd/toast/__init__.py", line 11, in <module>
from .kivytoast import toast
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivymd/toast/kivytoast/__init__.py", line 3, in <module>
from .kivytoast import toast
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivymd/toast/kivytoast/kivytoast.py", line 72, in <module>
class Toast(BaseDialog):
File "/Users/robinhahn/PyProg/kvKivyMD/virtmd/lib/python3.8/site-packages/kivymd/toast/kivytoast/kivytoast.py", line 90, in Toast
self, instance_label: Label, texture_size: list[int, int]
TypeError: 'type' object is not subscriptable
Still somewhat of a novice, and not really sure how to proceed from here.
ETA: I brought this up in VSCode and the last word:
list[int, int]
was squiggly-underlined, indicating it's the offending item. I have no idea what 'type' object is not subscriptable means or how to fix it.
Thanks to all who read and ponder this problem.
I'm on the same course right now and just got to that file with the same issue. I followed the path to the 'kivytoast.py' file and changed the following line (line 89):
def label_check_texture_size(self, instance_label: Label, texture_size: list([int, int])) -> NoReturn:
The kivymd code uses list[int, int] as a type annotation, but subscripting the built-in list like this is only supported on Python 3.9+ (PEP 585); on Python 3.8 it raises TypeError: 'type' object is not subscriptable. Changing it to list([int, int]) works because that is a function call rather than a subscript. You could also just write [int, int] and it would work the same at runtime, since it is already in list format. Good luck with the rest of your course!
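To illustrate the version dependency (a minimal sketch, not the actual kivymd source): PEP 585 generics such as list[int] only became valid at runtime in Python 3.9, so on Python 3.8 an annotation like this must use typing.List (or a plain value, as in the fix above):
from typing import List, NoReturn

from kivy.uix.label import Label  # Label is already imported in kivytoast.py

# On Python 3.8, `texture_size: list[int, int]` raises
# TypeError: 'type' object is not subscriptable at class-definition time.
# A 3.8-compatible spelling of the same annotation (sketch of the method):
def label_check_texture_size(
    self, instance_label: Label, texture_size: List[int]
) -> NoReturn:
    ...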
So I'm working towards automated staging deployments via Jenkins and Ansible. Part of this is using a script called ec2.py from Ansible in order to dynamically retrieve a list of matching servers to deploy to.
SSH-ing into the Jenkins server and running the script as the jenkins user, the script runs as expected. However, running the script from within Jenkins leads to the following error:
ERROR: Inventory script (ec2/ec2.py) had an execution error: Traceback (most recent call last):
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 1262, in <module>
Ec2Inventory()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 159, in __init__
self.do_api_calls_update_cache()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 386, in do_api_calls_update_cache
self.get_instances_by_region(region)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 417, in get_instances_by_region
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/connection.py", line 1181, in get_list
xml.sax.parseString(body, h)
File "/usr/lib/python2.7/xml/sax/__init__.py", line 43, in parseString
parser = make_parser()
File "/usr/lib/python2.7/xml/sax/__init__.py", line 93, in make_parser
raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
I don't know too much about Python, so I'm not sure how to debug this issue further.
So it turns out the issue had to do with Jenkins overwriting the default LD_LIBRARY_PATH variable. By unsetting that variable before running Python, I was able to make the Python app work!
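For example (a sketch: the script path and --list flag match standard ec2.py dynamic-inventory usage, but adjust to your job). In a Jenkins "Execute shell" step the equivalent is running unset LD_LIBRARY_PATH before the python command; from a Python wrapper, you can strip the variable out of the child's environment:
import os
import subprocess

# Copy the current environment, minus the LD_LIBRARY_PATH that Jenkins
# injected, so the dynamic linker can find the system libexpat that
# xml.sax needs when it loads its parser.
env = dict(os.environ)
env.pop("LD_LIBRARY_PATH", None)

subprocess.check_call(["python", "ec2/ec2.py", "--list"], env=env)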
I'm trying to get into the new N1QL Queries for Couchbase in Python.
I got my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket
bucket = Bucket('couchbase://localhost/default')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
    print row
But this produces an OperationNotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here are the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase python library: 2.0.2
cbc: 2.5.1
python: 2.7.8
gcc: 4.2.1
Does anyone have an idea of what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for Node.js where the same issue happened. There was a proposal to enable N1QL for the specific bucket first. Is this also needed in Python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
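One way to verify this (a sketch, assuming default admin credentials and the standard 8091 admin port; on Couchbase 4.x the cluster REST API reports a services list per node):
import requests

# A node must list 'n1ql' (Query) and 'index' (Index) among its
# services for N1QL queries and index creation to be schedulable.
r = requests.get('http://localhost:8091/pools/default',
                 auth=('Administrator', 'password'))
for node in r.json()['nodes']:
    print node['hostname'], node.get('services', [])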
I also got a similar error while trying to create a primary index:
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding a query and index node to the cluster solved the issue.
I'm trying to install the asterisk_click2dial module on Odoo, and this error appears in the log file:
ValueError: Routing: posting a message without model should be with a parent_id (private mesage).
2015-03-09 15:23:38,262 11093 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
File "/home/odoo/odoo/lib/python2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
execute(self.server.app)
File "/home/odoo/odoo/lib/python2.7/site-packages/werkzeug/serving.py", line 165, in execute
application_iter = app(environ, start_response)
File "/opt/odoo/openerp/service/server.py", line 281, in app
return self.app(e, s)
File "/opt/odoo/openerp/service/wsgi_server.py", line 216, in application
return application_unproxied(environ, start_response)
File "/opt/odoo/openerp/service/wsgi_server.py", line 202, in application_unproxied
result = handler(environ, start_response)
File "/opt/odoo/openerp/http.py", line 1274, in __call__
self.load_addons()
File "/opt/odoo/openerp/http.py", line 1293, in load_addons
m = __import__('openerp.addons.' + module)
File "/opt/odoo/openerp/modules/module.py", line 79, in load_module
mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
File "/opt/odoo/addons/base_phone/__init__.py", line 23, in <module>
from . import wizard
File "/opt/odoo/addons/base_phone/wizard/__init__.py", line 23, in <module>
from . import number_not_found
File "/opt/odoo/addons/base_phone/wizard/number_not_found.py", line 25, in <module>
import phonenumbers
ImportError: No module named phonenumbers
The problem is that I installed that module (phonenumbers), plus the py-Asterisk module, without errors, using pip install phonenumbers and pip install py-Asterisk, and yet the error persists.
I noticed I have at least two versions of Python installed (2.6 and 2.7), but both modules are installed for the same version Odoo uses (I can see the modules in the python2.7 CLI when, for example, I import phonenumbers).
Does anybody have any idea what is happening? I'd be grateful for a specific response. Thanks.
Here is the connector's page: OpenERP - Asterisk connector
OK, my bad. During the Odoo installation I created a virtual environment under the Odoo system user account, used solely by the Odoo server, so I just needed to install these modules under that environment. This works for me:
First, let's switch from root to the odoo user, then create a new virtual environment called odoo and activate it:
su - odoo
/usr/local/bin/virtualenv --python=/usr/local/bin/python2.7 odoo
source odoo/bin/activate
(If you already have the virtual environment created, like me, just ignore the second line.) Before starting the module installation, we need to add the path to the PostgreSQL binaries, otherwise the psycopg2 module install will fail (I ignored this one too):
export PATH=/usr/pgsql-9.3/bin:$PATH
Then I can do pip install... Thank you all for your help.
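For reference, the installs from the question are then run inside the activated virtualenv, with the same commands mentioned above:
pip install phonenumbers
pip install py-Asterisk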