How to pass a source value to pre-processing one at a time / iterate within the same taskflow? - informatica

I’ve tried to assign the source row value to an In-Out parameter / mapping variable using SetVariable($$p_file_name, FileName).
But it does not update the value once it is set. I have tried solutions from the web:
- SetVariable($$p_file_name, Null),
- a taskflow variable with an assignment to the mapping variable,
- splitting the mapping into two parts (the first gets the value into the mapping variable and the second tries to use it in the pre-processing command),
- a user parameter file, which sets the value permanently, and
- a default value with an empty string.
Nothing worked consistently to update the In-Out parameter / mapping variable every time a row with a new value is read from the source table.
Note: I have run xcopy manually and it works from the cmd prompt using $$p_file_name as the source.

My recommendation is to script, with variables, the manual xcopy you performed. Admittedly there are many other solutions, but maintaining your own script and variables will give you a consistent result.
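As a rough illustration of that idea, here is a minimal sketch of such a wrapper. It is written in Python only to keep the example short (a .bat or shell script works just as well), and the directories and script name are placeholders, not values from your environment:

import os
import subprocess
import sys

def copy_file(file_name, source_dir=r'C:\staging', target_dir=r'\\server\share'):
    """Run xcopy for a single file passed in by the calling task.

    file_name plays the role of $$p_file_name; source_dir and target_dir
    are placeholders and should come from your own configuration.
    """
    source = os.path.join(source_dir, file_name)
    result = subprocess.run(['xcopy', source, target_dir, '/Y'],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError('xcopy failed: ' + (result.stderr or result.stdout))
    return result.stdout

if __name__ == '__main__':
    # The pre-processing command calls the script with the file name,
    # e.g.  python copy_one_file.py my_source_file.csv
    print(copy_file(sys.argv[1]))

Because the file name arrives as a command-line argument on each invocation, every call gets the current row's value without depending on SetVariable persisting anything between runs.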

Related

C++: copy a value to one string and it populates another

I have a function in which I populate some string variables. The problem is that I set six variables with the source and destination paths, names, and extensions; when I set a further two variables, one of the file variables changes as well.
lAppendStr.assign(ID_MONGODB_APPEND); // "smallfiles=true\0";
lSearchStr.assign(ID_MONGODB_APPEND); // "smallfiles=true\0";
lSrcPath.assign(ID_MONGOODB_PATH); // "/etc/\0" ;
lSrcName.assign(ID_MONGOODB_NAME); // "mongodb\0";
lSrcExt.assign(ID_MONGOODB_EXT); // ".conf\0";
lDestPath.assign(ID_MONGOODB_PATH); // "/etc/\0";
lDestName.assign(ID_MONGOODB_PATH); // "mongodb\0";
lDestExt.assign(ID_DEFAULT); // "def";
When .def is assigned, lAppendStr is also populated with .def.
All the variables are strings initialised with ""; the ID definitions are terminated with \0, but only as a clutching-at-straws measure.
I have tried using the string values instead of the IDs, and moving the list around changes the assignment results, but the variables are still getting cross-contaminated.
This is obviously part of larger code, but my guess is that it is a memory boundary problem. I also posted an issue regarding a call to construct a class, which feels like it could be in the same vein.
This is on Linux, developed on a MacBook Air in an Eclipse Oxygen editor environment.
Any guidance would be greatly appreciated.

Parameter list issue in HP Load Generator

I have a VUGen script that uses a parameter list of type=File with Select Next Row = Unique and Update value on = Once. This file (UsernameAndPassword.dat) is located in a Shared Folder.
When I run the Performance Test with more than 1 VUser, all the VUsers keep only the first record of that parameter. I mean, all the VUsers run with the same username/password values, i.e.:
parameter list
username,password
john,12345
caty,67890
alfred,09876
greg,54321
VUser 1: john,12345
VUser 2: john,12345
VUser 3: john,12345
etc.
However, if I use an internal parameter list (type=File and so on, as above), each VUser obtains a different username value when I run the test.
By internal parameter list, I mean that the .dat file is bundled inside the script.
How can I read the external file sequentially like an internal parameter list?
A file is a file is a file. When you start your virtual user, your parameter file, whether it is sent with your script or referenced on a common drive, will be pulled into RAM on your load generator. This is why you can't write to a parameter file during a test and have the values available for use: the file which is in use is actually in RAM.
Have you tried setting your block size manually to 1, then update once?
Check the software version of your load generator down to the major.minor(patch) level. All sorts of hinky things occur when the controller and the load generators are out of sync at the version level.

How can I pick out a value from consecutive event values using Siddhi

For example, this is the data:
1,1470732420000,0
2,1470732421000,0
3,1470732422000,0
4,1470732423000,86
5,1470732424000,87
6,1470732425000,88
7,1470732426000,84
8,1470732427000,0
9,1470732428000,0
10,1470732429000,0
11,1470732430000,89
12,1470732431000,89
13,1470732432000,87
14,1470732433000,89
15,1470732434000,85
16,1470732435000,89
17,1470732436000,89
18,1470732437000,87
19,1470732438000,86
20,1470732439000,88
21,1470732440000,0
22,1470732441000,0
23,1470732442000,0
24,1470732443000,87
25,1470732444000,85
26,1470732445000,86
27,1470732446000,0
28,1470732447000,0
29,1470732448000,0
30,1470732449000,0
Column one is the id, column two is the timestamp, and column three is the value; there is a 1-second interval between timestamps.
I want to monitor the value of the events. If I find a value >= 85 (e.g. id=4), I start counting; if the next two consecutive values are >= 85 (e.g. id=5 and id=6), then I put the third event's value into the OutputStream (e.g. id=6, value=88, timestamp=1470732425000).
At that point I clear the count and wait for a value lower than 85 (e.g. id=7, value=84); then I start monitoring again. When I find a value >= 85 (e.g. id=11, value=89) I start counting, and if the next two consecutive values are >= 85 (e.g. id=12 and id=13), I put the third event's value into the OutputStream (e.g. id=13, value=87, timestamp=1470732432000), and so on.
That is all I want to do. Before posting this question, I got an answer in this post and tried this code:
from every a1=InputStream[value>=85], a2=InputStream[value>=85]+, a3=InputStream[value<85]
select a2[1].id, a2[1].value
having (not (a2[1] is null))
insert into OutPutStream;
It works, but I found that it only inserts the value into the OutputStream after a value < 85 arrives. What I want is to insert the value immediately once I have three consecutive values >= 85 (I don't want to keep waiting while the following values stay >= 85).
In fact, I just want to record the value of the third second out of three consecutive seconds whose values are >= 85.
I'm using wso2das-3.1.0-SNAPSHOT.
Though DAS (Siddhi) supports sequence/pattern processing, for your requirement you might need to write a custom extension. I have written a sample window processor extension to cater to your requirement (source code). Download it, place siddhi-extension-condition-window-1.0.jar in the <das_home>/repository/components/lib/ directory, and restart the server. Refer to the test case to get an idea of how the extension is used.
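For reference, the emit-on-the-third-consecutive-event behaviour the question asks for can be sketched in plain Python (this is only an illustration of the counting logic, not Siddhi or extension code; the threshold of 85 and the (id, timestamp, value) layout are taken from the question):

def emit_third_consecutive(events, threshold=85):
    """Yield the third event of each run of at least three consecutive
    events whose value is >= threshold, then wait for a value below the
    threshold before arming again."""
    count = 0      # consecutive events at or above the threshold
    armed = True   # ready to emit for the current run
    for event in events:              # event: (id, timestamp, value)
        if event[2] >= threshold:
            count += 1
            if armed and count == 3:
                yield event           # emit immediately on the third event
                armed = False         # only one emit per run
        else:
            count = 0
            armed = True              # a low value re-arms the logic

# With ids 4-8 from the sample data this yields (6, 1470732425000, 88):
sample = [(4, 1470732423000, 86), (5, 1470732424000, 87),
          (6, 1470732425000, 88), (7, 1470732426000, 84),
          (8, 1470732427000, 0)]
print(list(emit_third_consecutive(sample)))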

python win32com shell.SHFileOperation - any way to get the files that were actually deleted?

In the code I maintain I run across:
from win32com.shell import shell, shellcon
# ...
result, nAborted, mapping = shell.SHFileOperation(
    (parent, operation, source, target, flags, None, None))
In Python27\Lib\site-packages\win32comext\shell\ (note win32comext) I just have a shell.pyd binary.
What is the return value of shell.SHFileOperation for a deletion (operation=FO_DELETE in the call above)? Where is the code for shell.pyd?
Can I get the list of files actually deleted from this return value, or do I have to check manually afterwards?
EDIT: the accepted answer answers Q1. Having a look at the source of pywin32-219\com\win32comext\shell\src\shell.cpp, I see that static PyObject *PySHFileOperation() delegates to SHFileOperation, which does not seem to return any info on which files failed to be deleted, so I guess the answer to Q2 is "no".
The ActiveState Python help contains this SHFileOperation description:
shell.SHFileOperation
int, int = SHFileOperation(operation)
Copies, moves, renames, or deletes a file system object.
Parameters
operation : SHFILEOPSTRUCT
Defines the operation to perform.
Return Value
The result is a tuple containing the int result of the function itself, and the result of the fAnyOperationsAborted member after the operation. If Flags contains FOF_WANTMAPPINGHANDLE, the returned tuple will have a 3rd member containing a sequence of 2-tuples with the old and new file names of renamed files. This will only have any content if FOF_RENAMEONCOLLISION was specified, and some filename conflicts actually occurred.
Source code can be downloaded here: http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/ (pywin32-219.zip)
Just unpack and go to .\pywin32-219\com\win32comext\shell\src\
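For what it's worth, here is a minimal sketch of how that return tuple is typically unpacked, based on the call shown in the question and the description above. The paths are made-up examples, and the third tuple member only exists because FOF_WANTMAPPINGHANDLE is passed:

from win32com.shell import shell, shellcon

src = r'C:\temp\source_dir'     # placeholder paths for illustration only
dst = r'C:\temp\dest_dir'

flags = (shellcon.FOF_NOCONFIRMATION |
         shellcon.FOF_RENAMEONCOLLISION |
         shellcon.FOF_WANTMAPPINGHANDLE)

# Same 7-element tuple shape as in the question: hwnd, operation,
# source, target, flags, and the remaining members left as None.
result, aborted, mapping = shell.SHFileOperation(
    (0, shellcon.FO_COPY, src, dst, flags, None, None))

print('result:', result)        # 0 on success
print('aborted:', aborted)      # value of fAnyOperationsAborted
# Only populated when FOF_RENAMEONCOLLISION actually caused renames:
for old_name, new_name in (mapping or []):
    print(old_name, '->', new_name)

As the edit above notes, there is no corresponding member that reports which files an FO_DELETE actually removed, so that still has to be checked separately.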

C++: Rename instead of Delete & Copy when using Sync

Currently I have the following piece of code in my Sync:
...
int index = file.find(remoteDir);
if (index >= 0) {
    file.erase(index, remoteDir.size());
    file.insert(index, localDir);
}
...
// Uses PUT command on the file
Now I want to do the following instead:
If a file is the same as before, except for a rename, don't use the PUT command, but use the Rename command instead
TL;DR: Is there a way to check whether a file is the same as before except for a rename that occurred? So a way to compare both files (with different names) to see if they are the same?
Check the md5sum; if it is different, then the file has been modified.
The md5 checksum of a renamed file will remain the same. Any change in the content of the file will give a different value.
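A minimal sketch of that checksum comparison (written in Python purely to keep it short, not the asker's C++) might look like this:

import hashlib

def md5_of(path, chunk_size=4096):
    """Compute the md5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def same_content(path_a, path_b):
    # Identical content (even under a different file name) gives the same
    # checksum; any change in content gives a different one.
    return md5_of(path_a) == md5_of(path_b)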
I first tried to use Renjith's md5 method, but I couldn't get it working (maybe because my C++ is for Windows instead of Linux, I don't know).
So instead I wrote my own function that does the following:
First, check whether the files are exactly the same size (if not, we can simply return false instead of continuing).
If the sizes match, keep comparing the file buffers in chunks of BUFFER_SIZE (in my case 1024). If the entire contents match, return true.
PS: Make sure to close any open streams before returning. My mistake here was that the code to close one stream came after the return statement (so it was never called), and therefore I got errno 13 when trying to rename the file.
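A rough sketch of that size-then-buffer comparison (again in Python for brevity, not the original C++; the 1024-byte chunk mirrors the BUFFER_SIZE mentioned above):

import os

BUFFER_SIZE = 1024  # mirrors the chunk size mentioned above

def files_identical(path_a, path_b):
    """Return True if both files have the same size and the same bytes."""
    # Cheap check first: files of different sizes can never match.
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False
    # Compare the contents chunk by chunk; 'with' closes both streams
    # before returning (the errno 13 pitfall mentioned above).
    with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
        while True:
            chunk_a = fa.read(BUFFER_SIZE)
            chunk_b = fb.read(BUFFER_SIZE)
            if chunk_a != chunk_b:
                return False
            if not chunk_a:   # both files fully read and equal
                return True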