I'm new to NiFi, so assume I don't know anything. I have it installed locally and I'm trying to build a flow that takes some files from an SFTP server and passes them to Azure Data Lake. I use ListSFTP, FetchSFTP, and PutAzureDataLakeStorage. Later I'll want to put a ValidateCsv between the Fetch and the Put. But for now I have two problems.
The first is that whatever flow I create, I run it once, and if it works, it never works again, even if I shut the server down and bring everything back up. I have to re-create the flow from scratch to do the next test. I don't know why this could be, but it really slows me down.
The second: I want to pick up only files that match a specific name format (TAL_dddd_dddd_dddddddd.0, where the d's are digits). What I do is run the List, and in the Fetch I set its Remote File property to something like this: ./upload/NOMBRE/${filename:matches('TAL_\d{4}\d{4}\d{8}.0')}. But it does not work. I was going to test with ./upload/NOMBRE/${'TAL_\d{4}\d{4}\d{8}.0'} instead, but of course problem 1 occurs, and before redoing the entire flow I have come to ask.
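For reference, written out in full, the pattern I'm aiming at would be something like the regex below (assuming the underscores in the name format are literal and the final dot needs escaping):

TAL_\d{4}_\d{4}_\d{8}\.0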
I would appreciate the help. Thank you very much.
I am using SAS Enterprise Guide and am running a program that runs for a long time (about 30 min).
I want to be able to do the following:
Exit my program at a defined point.
Examine the contents of the Log at this point.
Is there a command that allows me to do the above? I am basically looking for something like a break/exit/quit statement that maintains the SAS Log.
The Enterprise Guide way of doing this is to break your program into multiple smaller program files. Then you run each of those, either individually if you want to be able to see the details before moving on, or in a linked flow if you just want to run the whole thing and see the outputs without stopping (though you could always stop it with the stop button if something came up).
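For example, a log marker at the end of each smaller program makes the break point easy to find when you review the log (the marker text below is just an illustration):

/* end of part 1 of the job */
%put NOTE: ==== end of part 1 - review the log before running part 2 ====;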
I'm currently trying to use the PlUnit test suite and am having trouble turning off the output of my predicates during the tests.
The documentation says that I should be fine with something along the lines of set_test_options([silent(true)]); however, that doesn't seem to have any effect on my test runs.
I tried putting those options into my load_test_files/1 call as well, but it didn't change anything.
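To make this concrete, here is roughly what my setup looks like (simplified; the module and test names are just placeholders):

:- use_module(library(plunit)).

% what I expected to silence the test output
:- set_test_options([silent(true)]).

:- begin_tests(demo).

test(addition) :-
    X is 1 + 1,
    X =:= 2.

:- end_tests(demo).

% run with:        ?- run_tests.
% I also tried:    ?- load_test_files([silent(true)]).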
Any help with this library will be gratefully received, as I seem to be misunderstanding the documentation and am getting nowhere trying to work out what fails here.
How do we rename .xaml and .cs files?
We would like to be able to keep development in sync with the original SketchFlow project, i.e. SketchFlow has features such as the ability to collect client feedback on a per-screen basis, etc.
... I kind of answered my own question here, so I'll post it as a follow-up. I asked the original question 9 hours ago on the MS site without a response... I'm still trying to work out where the best place is to talk to the community, so sorry for the duplicate.
THE ANSWER (IS THERE A BETTER ONE?)
Context: SketchFlow is a prototyping tool. In large teams you may want to keep the prototype separate from the finished version, or there may be a large prototyping phase.
My view is that I really like SketchFlow. It's one of the coolest things I've seen for a while (well done, Microsoft).
... so for me, I want the prototype to become the finished product. I want the designers to step in and make transitions whenever they want. I want the designers to kick the process off, and the developers to put in the detail. I'd like our customers to be able to post feedback at any time during the build process. By the way: get your developers to check out MVVM. It's very cool.
My bet is that the feedback could get lost if you make a breaking change (a file rename) -- so just beware of that. That won't be a problem for us. We'll get our file names to make sense and then mostly leave them alone. Of course, MS could fix this by creating a globally unique id (GUID) for each screen that is created. Perhaps they've done this already. If someone from MS reads this, please put it on your requested features list.
THE ANSWER:
So here is the answer that works for me:
Don't try to hand-edit the .xaml / .cs files, as all the cross-referencing you might be doing with behaviors will break if you aren't really careful. Typical files that need to be modified: .csproj, Sketch.Flow, xxxx.xaml, and xxxx.cs.
To automate it, download a tool like UltraEdit. Alternatively, you might be able to just use VS 2010 (untested).
Steps with UltraEdit:
(BACKUP YOUR PROJECT FIRST)
Search/Replace In Files...
Find in files... "Screen_1_19"
Replace with... "Welcome"
In Files/Types... "."
Directory...
Match Whole Word Only
Hit "Start"
Follow the prompts
Rename the files (.xaml & .cs) to be Welcome.???? (where ???? is .xaml or .cs). Since I use SVN, this gets done for me in one step (no big deal).
If using VS 2010 for steps 1 through 8, be careful to do the longer string replacements first, e.g. Screen_1_19 before Screen_1. I think VS treats _ as a word break; in UltraEdit you'll be fine.
If there's interest, in the spare time that I don't currently have, I could release a quick tool to do this on CodePlex.
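In the meantime, here is a rough sketch of what such a tool might do (untested; the folder, old name, and new name are just example values, and as always, back up the project first):

using System;
using System.IO;

class ScreenRenamer
{
    static void Main()
    {
        // Example values only.
        string projectDir = @"C:\Projects\MyPrototype";
        string oldName = "Screen_1_19";
        string newName = "Welcome";

        // File types that typically reference the screen name.
        string[] patterns = { "*.csproj", "*.Flow", "*.xaml", "*.cs" };

        foreach (string pattern in patterns)
        {
            foreach (string path in Directory.GetFiles(projectDir, pattern, SearchOption.AllDirectories))
            {
                // Replace references inside the file.
                string text = File.ReadAllText(path);
                if (text.Contains(oldName))
                    File.WriteAllText(path, text.Replace(oldName, newName));

                // Rename the file itself if its name contains the old screen name.
                string fileName = Path.GetFileName(path);
                if (fileName.Contains(oldName))
                {
                    string newPath = Path.Combine(Path.GetDirectoryName(path), fileName.Replace(oldName, newName));
                    File.Move(path, newPath);
                }
            }
        }

        Console.WriteLine("Done - reopen the project in Blend and check the screen map.");
    }
}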
** Note: because we are working with XML, and XML is very particular about being correct, I close Expression Blend down and then reopen it after the replace/rename to check that I was successful and that my screen map still has all the flow lines drawn in.
The answer is above, in the body of the question.
I'm one of the people involved in the Test Anything Protocol (TAP) IETF group (if interested, feel free to join the mailing list). Many programming languages are starting to adopt TAP as their primary testing protocol and they want more from it than what we currently offer. As a result, we'd like to get feedback from people who have a background in xUnit, TestNG or any other testing framework/methodology.
Basically, aside from a simple pass/fail, what information do you need from a test harness? Just to give you some examples:
Filename and line number (if applicable)
Start and end time
Diagnostic output such as the difference between what you got and what you expected.
And so on ...
Most definitely all the things from your list, for each individual item:
Filename
Line number
Namespace/class/function name
Test coverage
Start time and end time
And/or total time (this would be more useful for me than the top two items)
Diagnostic output such as the difference between what you got and what you expected.
Off the top of my head, not much else, but for a group of tests I would like to know:
group name
total execution time
It must be very, very easy to write a test, and equally easy to run them. That, to me, is the single most important feature of a testing harness. If someone has to fire up a GUI or jump through a bunch of hoops to write a test, they won't use it.
An arbitrary set of tags - so I can mark a test as, for example "integration, UI, admin".
(you knew I was going to ask for this didn't you :-)
To what you said I'd add:
Method/function/class name
Coverage counting tool, with exclusions ("do not count these methods")
Results of the last N runs available
Mandate that ways to easily parse test results must exist
Any sort of diagnostic output, especially on failure, is critical. If a test fails, you don't want to always have to rerun the test under a debugger to see what happened; there should be some clues in the output.
I also like to see a before and after snapshot of critical system variables like memory or hard disk space available as those can provide great clues as well.
Finally, if you're using random seeds for any of the tests, write the seed out to the logfile so that the test can be reproduced if necessary.
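In TAP that could be as simple as a diagnostic comment next to the test point; for example (the seed value here is made up):

# random seed: 1623847
ok 12 - shuffled input round-trips correctly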
I'd like the ability to concatenate and nest TAP streams.
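Purely as an illustration of what nesting might look like (this is just a sketch using indentation, not part of the current protocol):

1..2
ok 1 - connection setup
    1..2
    ok 1 - reconnect after timeout
    ok 2 - reconnect after dropped socket
ok 2 - reconnection tests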
A unique id (uuid, md5sum) to be able to identify an individual test -- say, for use when inserting test results in a database, or identifying them in a bug tracker to make it possible for QA to rerun an individual test.
This would also make it possible to trace an individual test's behavior from build-to-build through the entire lifecycle of multiple revisions of a product. This could eventually allow larger-scale correlations between 'historic' events (new hire, product release, hardware upgrades) and the profile(s) of tests that fail as a result of such events.
I'm also thinking that TAP should be emitted through a dedicated side-channel rather than mixed in with stdout. I'm not sure whether this is within the scope of the protocol definition.
I use TAP as the output protocol for a set of simple C++ test methods (see the sketch at the end of this answer), and have seen the following shortcomings:
test steps cannot be put into groups (there's only the grouping into several test scripts; but for running all the tests in our software, I need at least one more level of grouping, so that a single test step would be identified by something like "DB connection" -> "Reconnection Test" -> "test step #3")
seeing differences between expected and actual output is useful; I either print the diff to stderr (as a comment) or actually launch a graphical diff tool
the protocol and tools must be really language-independent. For example, so far I only know of the Perl "prove" tool for running tests, which is limited to running Perl scripts
In the end, the test output must be suitable as the basis for easily generating an HTML report file which lists succeeded tests very concisely, gives detailed output for failed tests, and makes it possible to jump quickly into the IDE to the failing test line.
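For context, here is a minimal sketch of the kind of simple C++ test method / TAP emitter I mean (simplified; the helper and test names are made up):

#include <iostream>
#include <string>

// Tiny TAP emitter: prints one "ok"/"not ok" line per check.
static int test_number = 0;

void check(bool passed, const std::string& description)
{
    ++test_number;
    std::cout << (passed ? "ok " : "not ok ") << test_number
              << " - " << description << "\n";
}

int main()
{
    std::cout << "1..2\n";                      // the plan
    check(1 + 1 == 2, "addition works");
    check(2 * 2 == 4, "multiplication works");
    return 0;
}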
optional ASCII coloured output: green for good, yellow for pending, red for errors
the idea of things being pending
a summary at the end of the test report of the commands that will re-run the individual tests where
something went wrong
something in the test was pending
Extension idea for TAP:
1..4
ok 1 - yay
not ok 2 - boo
ok 3 - yay #json:{...}
ok 4 - see my json
Ability to attach a #json comment...
- can be safely ignored by existing code
- well-defined tags can be easily reserved at testanything.org
- easy to produce, parse and read complex types
- yaml is a pain
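As an illustration, the payload attached to a test line might carry structured diagnostics like this (the field names here are made up, not reserved tags):

ok 3 - yay #json:{"file":"t/demo.t","line":42,"got":1,"expected":2}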