I'm a new Python user. I've been trying to get a setup script to work -- unsuccessfully. What I have found is that, while simply importing the script returns a fatal error, I can rewrite it line by line in real time. Every time I make an error or typo, Python kicks me out of the continuing '...' edit mode and back into '>>>' mode, which, with my limited knowledge, means that I have to start all over. The problem is that one function runs to more than 100 lines of code. If I get an error in the middle, at this stage of my knowledge, I have to begin it again.
Is there some way to pick up writing the function where I left off, before the typo?
If you use Python IDLE (or any other IDE -- there are tons of them), you will have what's called a REPL (read-eval-print loop) that will make your life much easier! You can select just a small part of your code, run it, see its effects, make edits, re-run it, run the next part, and so on.
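If you'd rather not rely on an IDE, another option is to keep the long function in a file and re-import it after each edit, instead of retyping it at the prompt. Here is a rough sketch; the file name my_setup.py and the function build_config are made up for illustration:
# my_setup.py -- a hypothetical module holding the long function,
# edited in a text editor rather than typed at the >>> prompt.
def build_config():
    print("step 1")
    print("step 2")
    # ...the rest of the ~100 lines would live here...

# Then, at the interactive prompt: import it once, re-import after each edit.
import importlib
import my_setup

my_setup.build_config()       # run it

# ...fix the typo in my_setup.py, save the file, then:
importlib.reload(my_setup)    # picks up the edit without retyping anything
my_setup.build_config()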
I'm new to NiFi; assume that I don't know anything. I have it installed locally, and I'm trying to build a flow that takes some files from an SFTP server and passes them to Azure Data Lake. I use ListSFTP, FetchSFTP, and PutAzureDataLakeStorage. Eventually I'll want to put a ValidateCsv between the Fetch and the Put, but for now I have two problems.
The first is that whatever flow I create works exactly once: I run it, and if it works, it never works again, even if I shut down the server and everything else. I have to re-create the flow from scratch to do the next test. I don't know why this could be, but it really slows me down.
The second: I want to take only files that match a specific name format (TAL_dddd_dddd_dddddddd.0, where each d is a digit). What I do is run the List, and in the Fetch's Remote File property I put something like this: ./upload/NOMBRE/${filename:matches('TAL_\d{4}\d{4}\d{8}.0')}. But it does not work. I was going to test with ./upload/NOMBRE/${'TAL_\d{4}\d{4}\d{8}.0'}, but of course problem 1 occurs, and before redoing the entire flow I have come here to ask.
I would appreciate the help. Thank you very much.
How do syntax highlighting tools such as Pygments and TextMate bundles do automated testing?
Tools like this often simply resort to a large collection of text snippets representing a chosen input and the expected output. For instance, if you look at the Pygments GitHub repository, you can see they have giant lists of text files divided into an input section and a tokens section, like so:
---input---
f'{"quoted string"}'
---tokens---
'f' Literal.String.Affix
"'" Literal.String.Single
'{' Literal.String.Interpol
'"' Literal.String.Double
'quoted string' Literal.String.Double
'"' Literal.String.Double
'}' Literal.String.Interpol
"'" Literal.String.Single
'\n' Text
Since a highlighting tool reads a piece of code and then has to identify which bits of text are parts of which bits of code (is this the start of a function? is this a comment? is it a variable name?), it usually performs various processing steps that result in a list of tokens like the one above, which it can then feed into the next step (insert highlights from the first Literal.String.Interpol to the next, bold any Literal.String.Single, etc., by generating the appropriate HTML, CSS, or other markup relevant to the system). Checking that these tokens are generated properly from the input text is key.
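To make the "list of tokens feeding the next step" idea concrete, here is a rough sketch in Python of turning such a token list into HTML. The token names come from the example above, but the CSS class names and the to_html helper are made up for illustration:
import html

# A token stream like the one in the example file above, as (type, text) pairs.
tokens = [
    ("Literal.String.Affix", "f"),
    ("Literal.String.Single", "'"),
    ("Literal.String.Interpol", "{"),
    ("Literal.String.Double", '"quoted string"'),
    ("Literal.String.Interpol", "}"),
    ("Literal.String.Single", "'"),
]

# Hypothetical mapping from token types to CSS classes.
CSS_CLASSES = {
    "Literal.String.Affix": "sa",
    "Literal.String.Single": "s1",
    "Literal.String.Double": "s2",
    "Literal.String.Interpol": "si",
}

def to_html(tokens):
    parts = []
    for token_type, text in tokens:
        css = CSS_CLASSES.get(token_type, "")
        parts.append(f'<span class="{css}">{html.escape(text)}</span>')
    return "".join(parts)

print(to_html(tokens))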
Then, depending on the language the tool is built in, you might use an existing testing suite or build your own (Pygments seems to use the Python-based tool pytest). The suite essentially consists of running each of the inputs through your tool in a loop, reading the output, and comparing it to the expected values. If the output doesn't match, you can display a message showing which test failed and what the input/output/expected/error values were. If an output matches, you could simply signal it with a happy green checkmark. Then, when the test run finishes, the developer can hopefully reason out what they broke by looking over the results.
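As a rough sketch of that loop in pytest terms (tokenize and load_test_cases here are hypothetical stand-ins for the tool's own lexer and snapshot loader, not Pygments' actual API):
import pytest

def tokenize(source):
    # Placeholder lexer: split on whitespace and call everything a Name token.
    return [("Name", word) for word in source.split()]

def load_test_cases():
    # In a real suite these would be parsed from the ---input--- / ---tokens---
    # files on disk; here one case is hard-coded for illustration.
    return [("spam eggs", [("Name", "spam"), ("Name", "eggs")])]

@pytest.mark.parametrize("source,expected", load_test_cases())
def test_tokens_match_expected(source, expected):
    # pytest reports the failing input and the differing values automatically.
    assert tokenize(source) == expected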
It is often a good idea to randomize the order in which these inputs are run, so that you can be sure each step in the test doesn't have side effects that get passed along to the next test and cause it to pass or fail incorrectly. It might also be a good idea to time the complete test run. If the whole thing was taking 12 seconds yesterday but now takes two minutes, we may have broken something even if all the tests technically "pass".
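In the hand-rolled style, shuffling and timing might look something like this rough sketch (the tokenize function and the cases are again made up):
import random
import time

def tokenize(source):
    return [("Name", word) for word in source.split()]

cases = [
    ("spam eggs", [("Name", "spam"), ("Name", "eggs")]),
    ("ham", [("Name", "ham")]),
]

# Shuffling surfaces tests that only pass because of side effects of earlier tests.
random.shuffle(cases)

start = time.perf_counter()
for source, expected in cases:
    assert tokenize(source) == expected, f"unexpected tokens for {source!r}"
elapsed = time.perf_counter() - start
print(f"{len(cases)} cases ran in {elapsed:.3f}s")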
In tools like a code highlighter, you often have a good idea of what many of the inputs and outputs will look like before you can code everything up, for instance if some spec document already exists. In that case, it may be a good idea to include tests that you know won't pass right away, but mark them with some tag (perhaps some text marker within the file that says "NOT PASSING", or naming the file in a certain way), and tell your testing suite to expect those tests to fail. Then, as you fix bugs and add features, say you fix Bug X in your attempt to make test #144 pass. Now when you run the tests, the suite also alerts you that 10 other tests that should be failing are now passing. Congrats! You just saved yourself a lot of work trying to fix several separate problems that were actually caused by the same root issue.
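With pytest, one common way to tag such known-failing cases is the xfail marker: the suite reports them as XFAIL while they still fail, and as XPASS once they unexpectedly start passing. The lexer and test below are made up to illustrate the idea (test #144 is just the example number from above):
import pytest

def tokenize(source):
    # Hypothetical lexer that does not handle f-strings yet.
    return [("Name", word) for word in source.split()]

@pytest.mark.xfail(reason="f-string interpolation not implemented yet")
def test_case_144_fstring_tokens():
    assert tokenize("f'{x}'") == [
        ("Literal.String.Affix", "f"),
        ("Literal.String.Single", "'"),
        ("Literal.String.Interpol", "{"),
        ("Name", "x"),
        ("Literal.String.Interpol", "}"),
        ("Literal.String.Single", "'"),
    ]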
As the codebase is updated, a developer would run and rerun the tests to ensure that any changes they make don't break tests that were working before, and then would add new tests to the collection to verify that the new feature, fixed edge case, etc., now has a known expected output that no one can accidentally break in the future.
So, I have a program in C++ and I use Visual Studio 2010. My program is mostly procedural rather than object-oriented, though. The first part of my program does something; then the second half does something else that uses the information from the first half. The first half takes a while (~20 minutes) to run (I know this from running it in debug mode with a breakpoint right after the end of the first half).
The thing is that I am experimenting with different ideas for that second half. Now, whenever I write the code for any new idea, I have to run the whole program from scratch, and thus have to wait the 20 minutes before the new second half runs. This is very inconvenient and inefficient, since I will be doing this for a while. I also can't really write all my ideas at once and run different programs (with the same first half and different second halves corresponding to each idea) simultaneously, because I only get each new idea after I run the previous one and understand some things about the behavior of my algorithm.
So, is there any way I can start running the code right after the first part whenever I change something in the second part, instead of having to recompile and run it from scratch each time? And how, if it is possible?
Since you are using Visual Studio, you should look into Edit and Continue:
Edit and Continue is a time-saving feature that enables you to make changes to your source code while your program is in break mode. When you resume execution of the program by choosing an execution command like Continue or Step, Edit and Continue automatically applies the code changes, with some limitations. This allows you to make changes to your code during a debugging session, instead of having to stop, recompile your entire program, and restart the debugging session.
But please pay attention to the limitations (the Unsupported Scenarios section); you might have to structure your code changes to fit within what's supported.
I am trying to use gdb with the MySQL source code, which is written in C/C++. In mysql-test/t, I create a custom test case file, say example.test, and then debug it using the following command:
./mysql-test-run --gdb example
Now I want to see the flow of execution as it moves from a function in one file to another function in a different file. I am not sure how the execution flows, so I can't predefine the breakpoints. Is there any way I can see the flow of execution across multiple source files?
You can use the next command to step through the source line by line. When appropriate, you can use the step command to step "into" the function(s) being called on the current line.
A reasonable method would be to do next until you think you have only just passed the externally visible behavior you're looking for. Then start over, stopping at the line right before you saw the behavior last time, and step this time. Repeat as necessary until you find the code you're looking for. If you believe it's encountering some sort of deadlock, it's significantly easier -- just interrupt (Ctrl-C) the program when you think it's stuck, and it should stop right in the interesting spot.
In general, as you walk through the source you'll build up a set of places you think are interesting. You can note the source file and line number and/or function names as appropriate and set those breakpoints directly in the future, to avoid the tedious next/next/next business.
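For concreteness, a session might look roughly like this once you have found an interesting spot (the file name and line number are made up for illustration; break sets a breakpoint, run starts the program, next steps over a line, step steps into the call on the current line, backtrace shows how you got there across files, and continue resumes until the next breakpoint or a Ctrl-C):
(gdb) break sql/sql_parse.cc:1234
(gdb) run
(gdb) next
(gdb) step
(gdb) backtrace
(gdb) continue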
Today I ran into an error and have no clue how to fix it.
Error: App with label XYZ could not be found. Are you sure your INSTALLED_APPS setting is correct?
XYZ stands for the app name that I am trying to reset. This error shows up every time I try to reset it (manage.py reset XYZ). Showing all the SQL code works, though.
Even manage.py validate shows no error.
I have already commented out every single line of code in models.py that I touched in the last three months (function by function, model by model), and even with no models left I still get this error.
At http://code.djangoproject.com/ticket/10706 I found a bug report about this error. I also applied one of the patches to locate the error; it raises an exception so you get a traceback, but even then there is no sign of which of my files the error occurred in.
I don't want to paste my code right now, because the file I edited the most is nearly 1000 lines long. If any of you have had the same error, please tell me where I can look for the problem. In that case I can post the important parts of the source; otherwise it would be too much at once.
Thank you for helping!!!
I had a similar problem, and I only got it working after creating an empty models.py file.
I was running Django 1.3.
Try cleaning up all your build artifacts: build files, temporary files, and so on. Also, ./manage.py test XYZ will show you a stack trace. After that, try running Python with the -m pdb option and stepping through the code to see where it fails and why.
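For example, something along these lines should work with an old-style manage.py (a sketch; XYZ is your app label, and c, bt, and l are standard pdb commands):
python -m pdb manage.py reset XYZ
# at the (Pdb) prompt:
#   c    continue until the exception is raised (pdb then enters post-mortem mode)
#   bt   print the traceback to see which file and line actually failed
#   l    list the source around that spot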
You don't specify which server you're using. With Apache you'll almost certainly need a restart for things to take effect. If you're using the development server, try restarting that. If this doesn't work, you may need to give us some more details.
I'd also check your paths, as you may have edited one file but be using a different one.
Plus check what's still in your database, as some of your previous versions may be interfering.
Finally, as a last resort, I'd try a clean install (on another Django instance) and see if that goes cleanly. If it does, then I'd know I'd got a conflict; if not, then the problem's in the code.