ClearCase MVFS error while running the build in a Windows dynamic view

I have the following error while running the build in the dynamic view. It looks like an MVFS caching problem.
The build succeeded after I ran it a few times, but what might be the problem behind it?
pid/tid 900/938} cleartext lookup view= vob= dbid=0x80000173 - error 6
[2013/12/14 02:56:10.233] mvfs: Error: {858 pid/tid 900/938} cleartext pname= \Device\Mup\.......\.s\00030\8000017352aab277sc100-elf2xx.exe
[2013/12/14 02:56:18.603] mvfs: Error: {859 pid/tid 900/938} cleartext lookup view= vob= dbid=0x80000173 - error 6
[2013/12/14 02:56:18.603] mvfs: Error: {860 pid/tid 900/938} cleartext pname= \Device\Mup\.....\.s\00030\8000017352aab277sc100-elf2xx.exe
[2013/12/14 02:56:24.951] mvfs: Error: {861 pid/tid 900/938} mvfs_nt2vfs_opensendirp: failed OPEN irp fop 0xFFFFFA800EF21700 status 0xc000006d
[2013/12/14 02:56:24.951] mvfs: Error: {862 pid/tid 900/938} DriverName: \FileSystem\FltMgr
[2013/12/14 02:56:24.951] mvfs: Error: {863 pid/tid 900/938} FileName: ....\.s\00030\8000011952aab271objcopy.exe

Considering the troublesome objects in the private storage area (.s) of the view storage are .exe files, it is possible the issue is linked to some process keeping a handle on said executables, preventing ClearCase from properly accessing them.
It is best to use two dynamic views: one for accessing data (read access and execution), and one for building (write access and compilation).
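As a sketch of that setup (the view tag is a placeholder, and -stgloc -auto assumes a server storage location is already registered), the second dynamic view could be created and started with:
cleartool mkview -tag build_view -stgloc -auto
cleartool startview build_view
rem the view is then visible under the MVFS drive, e.g. M:\build_view\<your_vob_tag>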
This old thread mentions that restarting the albd_server service can help.
That is akin to stopping and restarting the view (cleartool endview -server).
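A minimal sketch of that stop/restart cycle (the view tag is a placeholder; the albd service itself can be restarted from the Windows Services panel or the ClearCase control panel):
cleartool endview -server build_view
cleartool startview build_view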

Related

Batch JCL error for CICS web services using the CICS web service assistant tool

I am facing an issue while submitting the job below; can someone please suggest a fix?
Error:
IEF344I KA7LS2W2 INPUT STEP1 SYSUT2 - ALLOCATION FAILED DUE TO DATA FACILITY SYSTEM ERROR
IGD17501I ATTEMPT TO OPEN A UNIX FILE FAILED,
RETURN CODE IS (00000081) REASON CODE IS (0594003D)
FILENAME IS (/ka7a/KA7A.in)
JCL:
//KA7LS2W2 JOB (51,168),'$ACCEPT',CLASS=1,
// MSGCLASS=X,MSGLEVEL=(1,0),NOTIFY=&SYSUID,REGION=0M
// EXPORT SYMLIST=*
// JCLLIB ORDER=SYS2.CI55.SDFHINST
//STEP1 EXEC DFHLS2WS,
// JAVADIR='java/J7.0_64',PATHPREF='',TMPDIR='/ka7a',
// USSDIR='',TMPFILE=&QT.&SYSUID.&QT
//INPUT.SYSUT1 DD *
PDSLIB=//DJPN.KA7A.POC
LANG=COBOL
PGMINT=CHANNEL
PGMNAME=KZHFEN1C
REQMEM=PAYIN
RESPMEM=PAYOUT
MAPPING-LEVEL=2.2
LOGFILE=/home/websrvices/wsbind/payws.log
WSBIND=/home/webservices/wsbind/payws.wsbind
WSDL=/home/webservices/wsdl/payws.wsdl
/*
Based on Return Code 81 / Reason Code 0594003D, the pathname can't be resolved.
The message IGD17501I explains the error. You'll find more information by looking up Reason Code 0594003D.
You can use BPXMTEXT to lookup more detail on the Reason Code.
Executing this command in USS you'll see:
$ bpxmtext 0594003D
BPXFVLKP 05/14/20
JRDirNotFound: A directory in the pathname was not found
Action: One of the directories specified was not found. Verify that the name
specified is spelled correctly.
Per @phunsoft: the same command can also be executed in TSO, where it is not case sensitive as it is in USS.
I'd suspect that /ka7a doesn't exist. Is it a case issue? Or perhaps you meant /u/ka7a/ or /home/ka7a?
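A quick way to confirm that suspicion from a USS shell (the paths below are just the candidates named above, not known-good values):
ls -ld /ka7a                 # does the directory actually exist, with this exact case?
ls -ld /u/ka7a /home/ka7a    # or is it really one of these?
mkdir -p /ka7a               # create it if it is genuinely missing (needs appropriate authority)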

C++ querying through SOCI doesn't work on a computer that is not the owner

So I have this code:
soci::session* mSql;
mSql = new soci::session("sqlite3", "database.db");
soci::rowset<soci::row> rs = (mSql->prepare << "SELECT * FROM tab");
When I run this on the machine that created it, it works perfectly.
However, when I run it on another computer, either physically or via ssh, I get this error: ./executable: symbol lookup error: ./executable: undefined symbol: _ZN4soci7details14statement_impl19exchange_for_rowsetERKNS0_8type_ptrINS0_14into_type_baseEEE
The error occurs on the third line.
Database permissions are 777.
Why is this occurring and how can I fix it?
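A symbol lookup error is a runtime dynamic-linking failure: the libsoci shared library found on the second machine does not export that symbol (typically because it is an older or differently built SOCI), so the 777 permissions on the database file are not the issue. A hedged diagnostic sketch, run on the failing machine (the library path is whatever ldd reports, shown here as a placeholder):
ldd ./executable | grep -i soci                             # which libsoci is actually loaded?
nm -D /path/to/libsoci_core.so | grep exchange_for_rowset   # does it export the missing symbol?
echo _ZN4soci7details14statement_impl19exchange_for_rowsetERKNS0_8type_ptrINS0_14into_type_baseEEE | c++filt   # human-readable name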

Error: Unable to write parameter catalog: SASUSER.PARMS.PARMS.SLIST

I am running a SAS job on Server and getting this error:
Unable to write parameter catalog: SASUSER.PARMS.PARMS.SLIST
Any help/comments will be appreciated
This is a symptom of not having write access to the SASUSER library. It is usually generated by PROC IMPORT, which seems incapable of checking the RSASUSER setting and understanding that the SASUSER library is not writable. It should not cause any trouble.
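If the message is still unwanted, one possible workaround (a sketch, assuming you control the SAS invocation on the server and that the directory below is writable by you) is to point SASUSER at a writable location at startup:
# start the session with a user-writable SASUSER library
sas -sasuser ~/sasuser_writable myjob.sas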

Pig's "dump" is not working on AWS

I am trying Pig commands on AWS EMR, but even small commands are not working as I expected. What I did is the following.
Save the following 6 lines as ~/a.csv.
1,2,3
4,2,1
8,3,4
4,3,3
7,2,5
8,4,3
Start Pig
Load the csv file.
grunt> A = load './a.csv' using PigStorage(',');
16/01/06 13:09:09 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
Dump the variable A.
grunt> dump A;
But this command fails. I expected it to produce the 6 tuples described in a.csv. The dump command prints a lot of INFO lines and ERROR lines. The ERROR lines are the following.
91711 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR pigstats.PigStats: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
91711 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
16/01/06 13:10:08 ERROR mapreduce.MRPigStatsUtil: 1 map reduce job(s) failed!
[...skipped...]
Input(s):
Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv"
Output(s):
Failed to produce result in "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/tmp/temp-718505580/tmp344967938"
[...skipped...]
91718 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
(I have masked the IP-like parts.) The error message seems to say that the load operator also fails.
I have no idea why even the dump operator fails. Can you give me any advice?
Note
I also tried using TABs in a.csv instead of commas and executing A = load './a-tab.csv';, but it does not help.
I also tried $ pig -x local, then A = load 'a.csv' using PigStorage(','); and dump A;. Then I get:
Input(s):
Failed to read data from "file:///home/hadoop/a.csv"
If I use the full path, namely A = load '/home/hadoop/a.csv' using PigStorage(',');, then I get
Input(s):
Failed to read data from "/home/hadoop/a.csv"
I have encountered the same problem. You may try su root to use the root user, then run ./bin/pig at PIG_HOME to start Pig in MapReduce mode. Alternatively, you can stay with the current user and run sudo ./bin/pig at PIG_HOME, but then you must export JAVA_HOME and HADOOP_HOME in the ./bin/pig file.
If you want to use your local file system, you should start Pig in step 2 as below:
bin/pig -x local
If you start it just as bin/pig, it will look for the file in HDFS. That's why you get the error Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv".
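Alternatively, if you want to stay in MapReduce mode, the file has to be in HDFS first. A minimal sketch (the target path mirrors the one in the error message; adjust the user directory to your cluster):
hdfs dfs -mkdir -p /user/hadoop            # make sure the HDFS home directory exists
hdfs dfs -put ~/a.csv /user/hadoop/a.csv   # copy the local file into HDFS
Then, in grunt, A = load '/user/hadoop/a.csv' using PigStorage(','); followed by dump A; should be able to read it.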

Google Native Client Visual Studio Add-in: Webserver fails to start because arguments to httpd.py are invalid

I have an application that I turned into a simple Native Client App a year ago, and I've been trying to get it running again. However, when I try to run it, or any of the example VS projects, the web server fails to start, giving me usage hints for httpd.py, and saying "httpd.py: error: unrecognized arguments: 5103".
I wasn't able to find anything about this in the NaCl guide or on the net. I could probably troubleshoot the issue if I could see the script that starts the web server, but I have no idea where it is stored.
The script that starts the server is 'nacl_sdk\pepper_43\tools\httpd.py'. The problem is that the port argument is formatted incorrectly.
The expected format is:
httpd.py [-h] [-C SERVE_DIR] [-p PORT] [--no-dir-check]
But the arguments actually passed by the add-in are:
['--no_dir_check', '5103']
where the '-p' prefix is missing; it should be '-p 5103'.
For a quick fix, add the following line
parser.add_argument('args', nargs=argparse.REMAINDER)
before the parse_args(args) call in the main(args) function of httpd.py.
This keeps the unrecognized argument from causing an error, so the default port value (5103) is used instead.
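For context, here is a self-contained sketch of the argparse behaviour, not the SDK's actual httpd.py (the flag definitions below are illustrative; only the added REMAINDER line is the actual fix):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-p', '--port', type=int, default=5103)
parser.add_argument('--no-dir-check', '--no_dir_check', action='store_true')
# the quick fix: collect anything passed without a flag prefix instead of erroring out
parser.add_argument('args', nargs=argparse.REMAINDER)

options = parser.parse_args(['--no_dir_check', '5103'])
print(options.port)   # 5103 (the default); the bare '5103' ends up in options.args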