java.lang.ClassCastException: [Lorg.drools.rule.Package; cannot be cast to org.drools.rule.Package - drools-guvnor

I have this problem when trying to access a Guvnor model. Here is the code:
RuleAgent ruleAgent = RuleAgent.newRuleAgent("/guvnor.properties");
RuleBase ruleBase = ruleAgent.getRuleBase();
FactType factype = ruleBase.getFactType("sample.Number");
Object obj = factype.newInstance();
factype.set(obj, "numberOne", 1);
factype.set(obj, "numberTwo", 1);
WorkingMemory workingMemory = ruleBase.newStatefulSession();
workingMemory.insert(obj);
workingMemory.fireAllRules();
System.out.println(factype.get(obj, "message"));
The problem appears when executing this line: RuleBase ruleBase = ruleAgent.getRuleBase();
and it throws this exception:
java.lang.ClassCastException: [Lorg.drools.rule.Package; cannot be cast to org.drools.rule.Package
This is my configuration:
jboss-eap-6.1
guvnor-5.5.0.Final-jboss-as-7.0.war
my pom.xml:
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-core</artifactId>
<version>5.5.0.Final</version>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-compiler</artifactId>
<version>5.5.0.Final</version>
</dependency>
Has anybody solved this problem?

I am using Drools version 5.4. This is a known bug in Guvnor 5.4. I believe this issue is fixed in Drools 5.5.0.Beta1.
Please check
https://issues.jboss.org/browse/JBRULES-3590
Regards
Ganesh N
ganeshneelekani#gmail.com

Related

AttributeError (amazon-sagemaker-object-has-no-attribute)

I'm running the code cell below on a SageMaker Notebook instance.
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
boto3.Session().resource('s3').Bucket(bucket_name).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
When I run it, the following error appears:
AttributeError: 'SageMaker' object has no attribute 's3_input'
s3_input_train = sagemaker.input.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
did not work for me, but
s3_input_train = sagemaker.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
did.
Instead of s3_input, use sagemaker.inputs.TrainingInput(parameters).
As per the official GitHub code, the s3_input function was planned to be renamed to TrainingInput. The tutorial documentation may not have been updated for this change. Please try using the TrainingInput class instead.
Replace the line: s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
with:
s3_input_train = sagemaker.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')

AttributeError: 'DataTransferServiceClient' object has no attribute 'project_transfer_config_path'

Got the following code:
import time
from google.protobuf.timestamp_pb2 import Timestamp
from google.cloud import bigquery_datatransfer_v1

def runQuery(parent, requested_run_time):
    client = bigquery_datatransfer_v1.DataTransferServiceClient()
    projectid = '[enter your projectId here]'   # Enter your projectID here
    transferid = '[enter your transferId here]' # Enter your transferId here
    parent = client.project_transfer_config_path(projectid, transferid)
    start_time = bigquery_datatransfer_v1.types.Timestamp(seconds=int(time.time() + 10))
    response = client.start_manual_transfer_runs(parent, requested_run_time=start_time)
    print(response)
We used it in a few different projects and cases and everything worked fine. Today I deployed another function using this code and keep getting the following error:
AttributeError: 'DataTransferServiceClient' object has no attribute
'project_transfer_config_path'
What am I missing?
Thank you!
You are probably using a newer version (2.0.0 or 2.1.0) of the google-cloud-bigquery-datatransfer client library. In these versions, most utility methods have been removed, one of them being project_transfer_config_path.
You can use the method transfer_config_path of the client to achieve the same result.
I would strongly suggest that you study the Migration Guide to 2.0.0 as there might be other changes that you need to make too.
In case you are using version 2.0.0 and not 2.1.0, I would recommend upgrading to the latest since there are breaking changes between them, for example the import paths that were changed in 2.0.0 have been reverted in 2.1.0.
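For reference, here is a minimal, credential-free sketch of the resource path that the 2.x replacement builds. The format string below mirrors what DataTransferServiceClient.transfer_config_path produces; the project and transfer IDs are placeholder values:

```python
def transfer_config_path(project: str, transfer_config: str) -> str:
    # Same resource path string that the 2.x client's
    # transfer_config_path() helper builds.
    return "projects/{project}/transferConfigs/{transfer_config}".format(
        project=project, transfer_config=transfer_config
    )

parent = transfer_config_path("my-project", "my-transfer-id")
print(parent)  # projects/my-project/transferConfigs/my-transfer-id
```

In actual code you would call client.transfer_config_path(projectid, transferid) on the DataTransferServiceClient instance instead of rebuilding the string yourself.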

NoClassDefFoundError: in jenkins-test-harness-htmlunit library

I am developing test cases for one of my plugins. Below is a snippet of the test case:
JenkinsRule.WebClient webClient = rule.createWebClient();
FreeStyleProject p = (FreeStyleProject) rule.jenkins.getItem("WebApp");
HtmlPage page = webClient.getPage(p, "configure");
I am always getting java.lang.NoClassDefFoundError: Could not initialize class com.gargoylesoftware.htmlunit.util.EncodingSniffer. I have checked the jenkins-test-harness-htmlunit dependency and it contains that class.
These are the Jenkins and jenkins-test-harness versions in my plugin:
<properties>
<jenkins.version>2.7.3</jenkins.version>
<java.level>8</java.level>
<jenkins-test-harness.version>2.27</jenkins-test-harness.version>
</properties>
Please help if anybody has a solution for it.

pxssh.pxssh() is producing an error in Python

I get the following errors when I run pxssh.pxssh() in Python. Please let me know what I am missing here.
Exception AttributeError: "'pxssh' object has no attribute 'closed'" in <bound method pxssh.__del__ of <pexpect.pxssh.pxssh object at 0x10d98e910>> ignored
.........
File "/Users/any_user/system/somelibrary_lib.py", line 377, in login
ssh = pxssh.pxssh(maxread=read_buffer, ignore_sighup=False)
TypeError: __init__() got an unexpected keyword argument 'ignore_sighup'
.........
I faced the same problem. The solution was this:
s = pxssh.pxssh(timeout=time_out, maxread=2000000)
s.SSH_OPTS += " -o StrictHostKeyChecking=no"
s.SSH_OPTS += " -o UserKnownHostsFile=/dev/null"
You should add these two options.
I faced the same problem. The solution I found was to use the SSH_OPTS property of the pxssh object instead of passing options as constructor (__init__) arguments.
So my code looks like this:
s = pxssh.pxssh()
s.SSH_OPTS += " -o StrictHostKeyChecking=no"
s.force_password = True
s.login(ip, user, passwd)
It throws neither the AttributeError on module initialization nor the TypeError.
But it still didn't fully work for me. If the remote server has an MOTD or some shell initialization on startup, it breaks the logic of pxssh. I used the following trick:
s.login(ip, user, passwd, original_prompt = "Last login:")
The shell writes a line like Last login: %date% from %ip% on each successful login; I used it to verify a successful logon. Now, using s, you can execute remote commands.
The original (default) value of original_prompt is r"[#$]". You may need to combine the two patterns for pxssh to work correctly, and you may need to add > to original_prompt if you are connecting to an SQL shell or Cisco devices.
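The combination idea can be sanity-checked with plain re, without pexpect or a live SSH server (the prompt lines below are made-up examples):

```python
import re

# Combined prompt: matches the "Last login:" banner as well as
# typical shell prompts (# and $) plus > for SQL/Cisco-style shells.
combined_prompt = re.compile(r"Last login:|[#$>]")

samples = [
    "Last login: Mon Jan  1 10:00:00 from 10.0.0.1",
    "user@host:~$",
    "root@host:~#",
    "mysql>",
]
for line in samples:
    assert combined_prompt.search(line)
print("all prompts matched")
```

You would then pass the same pattern string as original_prompt to s.login().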

Solr Admin not found: "missing core name in path"

I'm running solr on dotcloud for my django app (using haystack) and am running into some trouble. I receive a 404 "missing core name in path" message when trying to access the admin, despite the fact that--as far as I can tell--I only have a single core.
Here is my solr.xml (the cores configuration):
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <!--
    adminPath: RequestHandler path to manage cores.
    If 'null' (or absent), cores will not be manageable via request handler
  -->
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="." shard="shard1"/>
  </cores>
</solr>
When I point my browser at .../solr/collection1/admin, still nothing. But since I've only got a single core shouldn't I just be able to go to .../solr/admin?
I've followed the steps on the haystack "getting started tutorial" as well as the dotcloud solr service docs.
The relevant code in my settings.py:
HAYSTACK_SITECONF = 'gigmash.search_sites'
HAYSTACK_SEARCH_ENGINE = 'solr'
HAYSTACK_SOLR_URL = 'http://35543365.dotcloud.com/solr' #provided by dotcloud
HAYSTACK_INCLUDE_SPELLING = True
HAYSTACK_SEARCH_RESULTS_PER_PAGE = 10
And here's the error I get when I try to test in the interpreter:
>>> from haystack.query import SearchQuerySet
>>> sqs = SearchQuerySet().all()
>>> sqs.count()
Failed to query Solr using '*:*': [Reason: None]
java.lang.NullPointerException
    at org.apache.solr.servlet.SolrServlet.doGet(SolrServlet.java:91)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:297)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
I find it especially amusing that my error output says "reason: none" :-P
Perhaps also relevant: despite having run ./manage.py build_solr_schema and ./manage.py rebuild_index (with rebuild_index accurately reporting the number of models indexed), no data/ directory has been created in my solr/ directory.
Any help would be greatly appreciated. Total newb with solr/haystack/dotcloud/everything!
Solved.
It ended up being an issue with my dotcloud.yml file.