When I run the command
java -Xmx2g -classpath $CLASSPATH:weka.jar weka.gui.GUIChooser
I get these errors:
---Registering Weka Editors---
Trying to add database driver (JDBC): RmiJdbc.RJDriver - Error, not in CLASSPATH?
Trying to add database driver (JDBC): jdbc.idbDriver - Error, not in CLASSPATH?
Trying to add database driver (JDBC): org.gjt.mm.mysql.Driver - Error, not in CLASSPATH?
Trying to add database driver (JDBC): com.mckoi.JDBCDriver - Error, not in CLASSPATH?
Trying to add database driver (JDBC): org.hsqldb.jdbcDriver - Error, not in CLASSPATH?
How do I fix this? I need to load a file in Weka that has more than 1,000 attributes.
The error says nothing about the heap size; it says you are missing a database driver. If you don't need database access, you should be fine. Does the GUI open up? If yes, you can ignore the error messages.
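If the GUI does open but the very wide file still fails to load, the -Xmx flag in your command is the knob to turn for memory; a sketch reusing your own command line (the 4g value is an assumption, size it to your machine):
java -Xmx4g -classpath $CLASSPATH:weka.jar weka.gui.GUIChooser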
I am facing an issue while submitting the job below; can someone please suggest a fix?
Error:
IEF344I KA7LS2W2 INPUT STEP1 SYSUT2 - ALLOCATION FAILED DUE TO DATA FACILITY SYSTEM ERROR
IGD17501I ATTEMPT TO OPEN A UNIX FILE FAILED,
RETURN CODE IS (00000081) REASON CODE IS (0594003D)
FILENAME IS (/ka7a/KA7A.in)
JCL:
//KA7LS2W2 JOB (51,168),'$ACCEPT',CLASS=1,
// MSGCLASS=X,MSGLEVEL=(1,0),NOTIFY=&SYSUID,REGION=0M
// EXPORT SYMLIST=*
// JCLLIB ORDER=SYS2.CI55.SDFHINST
//STEP1 EXEC DFHLS2WS,
// JAVADIR='java/J7.0_64',PATHPREF='',TMPDIR='/ka7a',
// USSDIR='',TMPFILE=&QT.&SYSUID.&QT
//INPUT.SYSUT1 DD *
PDSLIB=//DJPN.KA7A.POC
LANG=COBOL
PGMINT=CHANNEL
PGMNAME=KZHFEN1C
REQMEM=PAYIN
RESPMEM=PAYOUT
MAPPING-LEVEL=2.2
LOGFILE=/home/websrvices/wsbind/payws.log
WSBIND=/home/webservices/wsbind/payws.wsbind
WSDL=/home/webservices/wsdl/payws.wsdl
/*
Based on the Return Code 81 / Reason Code 0594003D, the pathname can't be resolved. The message IGD17501I explains the error; you'll find more information by looking up the Reason Code 0594003D.
You can use BPXMTEXT to look up more detail on the Reason Code. Executing this command in USS, you'll see:
$ bpxmtext 0594003D
BPXFVLKP 05/14/20
JRDirNotFound: A directory in the pathname was not found
Action: One of the directories specified was not found. Verify that the name
specified is spelled correctly.
Per @phunsoft: the same command can also be executed in TSO, which, unlike USS, is not case sensitive.
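For example, from the ISPF command line:
TSO BPXMTEXT 0594003D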
I'd suspect that /ka7a doesn't exist. Is it a case issue? Or perhaps you meant /u/ka7a/ or /home/ka7a?
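If /ka7a really is the temporary directory you want (it is what TMPDIR points at in the JCL), you could create it from USS before rerunning the job; a sketch, assuming you have the authority to create directories there:
$ mkdir /ka7a
$ chmod 755 /ka7a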
I am running a SAS job on Server and getting this error:
Unable to write parameter catalog: SASUSER.PARMS.PARMS.SLIST
Any help/comments will be appreciated
This is a symptom of not having write access to the SASUSER library. It is usually generated by PROC IMPORT, which seems incapable of checking the RSASUSER setting and understanding that the SASUSER library is not writable. It should not cause any trouble.
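If you do want the catalog to be written, one workaround (a sketch; the path is an assumption, and the server configuration may override it) is to start SAS with SASUSER pointed at a directory you can write to:
sas -sasuser /tmp/mysasuser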
I am trying Pig commands on EMR of AWS, but even small commands are not working as I expected. What I did is the following.
Save the following 6 lines as ~/a.csv.
1,2,3
4,2,1
8,3,4
4,3,3
7,2,5
8,4,3
Start Pig
Load the CSV file.
grunt> A = load './a.csv' using PigStorage(',');
16/01/06 13:09:09 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
Dump the variable A.
grunt> dump A;
But this command fails. I expected it to produce the 6 tuples described in a.csv. The dump command prints a lot of INFO lines and ERROR lines. The ERROR lines are the following.
91711 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR pigstats.PigStats: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
91711 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
16/01/06 13:10:08 ERROR mapreduce.MRPigStatsUtil: 1 map reduce job(s) failed!
[...skipped...]
Input(s):
Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv"
Output(s):
Failed to produce result in "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/tmp/temp-718505580/tmp344967938"
[...skipped...]
91718 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
(I have changed the IP-like parts of the hostnames.) The error message seems to say that the load operator also fails.
I have no idea why even the dump operator fails. Can you give me any advice?
Note
I also tried TABs in a.csv instead of commas and executed A = load './a-tab.csv';, but it did not help.
I also tried local mode: $ pig -x local, then A = load 'a.csv' using PigStorage(','); and dump A;. Then I get
Input(s):
Failed to read data from "file:///home/hadoop/a.csv"
If I use the full path, namely A = load '/home/hadoop/a.csv' using PigStorage(',');, then I get
Input(s):
Failed to read data from "/home/hadoop/a.csv"
I have encountered the same problem. You may try switching to the root user with su root, then running ./bin/pig from PIG_HOME to start Pig in MapReduce mode. Alternatively, you can stay with the current user and start Pig with sudo ./bin/pig from PIG_HOME, but then you must export JAVA_HOME and HADOOP_HOME in the ./bin/pig file.
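For example, near the top of ./bin/pig (the paths here are assumptions; substitute your installation's actual locations):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/usr/lib/hadoop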
If you want to use your local file system, you have to start Pig in step 2 as below:
bin/pig -x local
If you start it just as bin/pig, it will look for the file in HDFS. That's why you get the error Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv".
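Alternatively, if you want to stay in the default MapReduce mode, copy the file into HDFS first so the path from the error message resolves; a sketch (the target path follows the error output above):
$ hdfs dfs -mkdir -p /user/hadoop
$ hdfs dfs -put ~/a.csv /user/hadoop/a.csv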
I have an application that I turned into a simple Native Client app a year ago, and I've been trying to get it running again. However, when I try to run it, or any of the example VS projects, the web server fails to start, giving me usage hints for httpd.py and saying "httpd.py: error: unrecognized arguments: 5103".
I wasn't able to find anything about this in the NaCl guide or on the net. I could probably troubleshoot the issue if I could see the script that starts the web server, but I have no idea where it is stored.
The script that starts the server is nacl_sdk\pepper_43\tools\httpd.py. The problem is that the port argument is formatted incorrectly.
The expected format is:
httpd.py [-h] [-C SERVE_DIR] [-p PORT] [--no-dir-check]
But the arguments the add-in actually passes are:
['--no_dir_check', '5103']
where the -p prefix for the port is missing; it should be '-p 5103'.
For a quick fix, add the following line
parser.add_argument('args', nargs=argparse.REMAINDER)
before the parse_args(args) call in the main(args) method in httpd.py.
This keeps the unknown arguments from being parsed as real options and uses the default value for the port instead (5103).
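For context, here is a minimal sketch of how the patched section of main(args) might look; the surrounding argument definitions are assumptions reconstructed from the usage text above, not the exact SDK source:
import argparse
import sys

def main(args):
    parser = argparse.ArgumentParser()
    parser.add_argument('-C', dest='serve_dir', default='.')
    parser.add_argument('-p', dest='port', type=int, default=5103)
    # Both spellings appear above, so accept either form here.
    parser.add_argument('--no-dir-check', '--no_dir_check',
                        action='store_true', dest='no_dir_check')
    # The quick fix: collect stray positionals (such as the bare '5103'
    # the add-in appends) instead of erroring out on them.
    parser.add_argument('args', nargs=argparse.REMAINDER)
    options = parser.parse_args(args)
    print('Serving %s on port %d' % (options.serve_dir, options.port))

if __name__ == '__main__':
    main(sys.argv[1:])
With the extra line in place, parse_args(['--no_dir_check', '5103']) puts '5103' into options.args, and the server falls back to the default port 5103.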
My instance of Play v1.2.3 on Ubuntu was working fine till yesterday. I am not entirely certain whether I installed any new packages on Ubuntu in the meantime. When I now try running play (run/start), I get the exception copied below. I have tried cleaning the tmp directory, but it did not help. Any other thoughts (besides setting up Play again) will be greatly appreciated. Thanks.
Exception in thread "main" play.exceptions.UnexpectedException: Unexpected Error
at play.vfs.VirtualFile.contentAsString(VirtualFile.java:180)
at play.classloading.hash.ClassStateHashCreator.getClassDefsForFile(ClassStateHashCreator.java:83)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:58)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.scan(ClassStateHashCreator.java:63)
at play.classloading.hash.ClassStateHashCreator.computePathHash(ClassStateHashCreator.java:48)
at play.classloading.ApplicationClassloader.computePathHash(ApplicationClassloader.java:371)
at play.classloading.ApplicationClassloader.<init>(ApplicationClassloader.java:62)
at play.Play.init(Play.java:272)
at play.server.Server.main(Server.java:158)
Caused by: java.lang.RuntimeException: java.io.IOException: Input/output error
at play.libs.IO.readContentAsString(IO.java:62)
at play.libs.IO.readContentAsString(IO.java:49)
at play.vfs.VirtualFile.contentAsString(VirtualFile.java:178)
... 10 more
Caused by: java.io.IOException: Input/output error
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:220)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.Reader.read(Reader.java:123)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1364)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1340)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1315)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:525)
at play.libs.IO.readContentAsString(IO.java:60)
One of the Java classes in my project somehow got corrupted. I noticed it when I tried copying the whole directory to another location: the copy operation generated an error message naming the corrupted file (which I'm sure one could have detected through other means as well).
Removing the corrupted file (and replacing it with the same code) restored normal behavior. Hope this helps others.
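If you need to hunt for such a file without copying the whole tree, one sketch (plain find/cat, nothing Play-specific) is to force a read of every file and watch stderr, since the I/O error names the offending path:
$ find . -type f -exec cat {} \; > /dev/null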