Is it possible to remove gdb aliases without restarting gdb? - gdb

Suppose I define a new gdb command which includes an alias.
import gdb
import string

class PrettyPrintString(gdb.Command):
    "Command to print strings with a mix of ascii and hex."

    def __init__(self):
        super(PrettyPrintString, self).__init__("ascii-print",
                                                gdb.COMMAND_DATA,
                                                gdb.COMPLETE_EXPRESSION,
                                                True)
        gdb.execute("alias -a pp = ascii-print", True)
Now, I'd like to make a small change to the script and source it again in the same gdb session. Unfortunately, when I try to source again, I get the following error.
gdb.error: Alias already exists: pp
How can I delete the original alias and source the updated script?
Note that the alias documentation does not appear to say anything about deleting aliases, and I tried unalias and delete but neither had the desired effect.

You can use define to create a user command instead of an alias. For example, I have
define w
where
end
in my .gdbinit. And re-defining works, as opposed to re-aliasing.
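Applied to the question's pp shorthand, a minimal sketch (assuming ascii-print takes a single argument; commands created with define receive their arguments as $arg0, $arg1, and so on):

define pp
ascii-print $arg0
end

Re-sourcing a file containing this define replaces the previous definition instead of raising "Alias already exists" (gdb may ask for confirmation unless confirm is off).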

Related

Dynamically loading module and dynamically calling function in that module; Python 2.7

I am trying to write Python 2.7 code that will
Dynamically load a list / array of modules from a config file at startup
Call functions from those modules. Those functions are also designated in a config file (maybe the same config file, maybe a different one).
The idea is my code will have no idea until startup which modules to load. And the portion that calls the functions will have no idea which functions to call and which modules those functions belong to until runtime.
I'm not succeeding. A simple example of my situation is this:
The following is abc.py, a module that should be dynamically loaded (in my actual application I would have several such modules designated in a list / array in a config file):
def abc_fcn():
    print("Hello World!")

def another_fcn():
    print("BlahBlah")
The following is the .py code which should load abc.py (my actual code would need to import the entire list / array of modules from the config file). Both this .py file and abc.py are in the same folder / directory. Please note comments next to each statement.
module_to_import = "abc" #<- Will normally come from config file
fcn_to_call = "abc.abc_fcn" #<- Will normally come from config file
__import__(module_to_import) #<- No error
print(help(module_to_import)) #<- Works as expected
eval(fcn_to_call)() #<- NameError: name 'abc' is not defined
When I change the second line to the following...
fcn_to_call = "abc_fcn"
...the NameError changes to "name 'abc_fcn' is not defined".
What am I doing wrong? Thanks in advance for the help!
__import__ only returns the module specified; it does not add it to the global namespace. So to accomplish what you want, save the result in a variable, then dynamically retrieve the function you want with getattr. That could look like:
fcn_to_call = 'abc_fcn'
mod = __import__(module_to_import)
func = getattr(mod, fcn_to_call)
func()
On a side note, abc is the name of the Abstract Base Classes module in the Python standard library, although I know you were probably just using it as an example.
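For what it's worth, the same lookup can be written with importlib (available since Python 2.7), which is generally preferred over calling __import__ directly. A minimal sketch:

import importlib

module_to_import = "abc"  # would normally come from the config file
fcn_to_call = "abc_fcn"   # would normally come from the config file

mod = importlib.import_module(module_to_import)  # returns the module object
func = getattr(mod, fcn_to_call)                 # look up the function by name
func()                                           # prints "Hello World!"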
You should assign the return value of __import__ to a variable, abc, so that you can actually use it as a module.
abc = __import__(module_to_import)

lldb: conditional breakpoint on a most derived type

A typical debugging pattern:
class Button : public MyBaseViewClass
{
...
};
....
void MyBaseViewClass::Resized()
{
//<---- here I want to stop in case MyBaseViewClass is really a Button, but not a ScrollBar, Checkbox or something else. I.e. I want a breakpoint condition on a dynamic (most derived) type
}
Trivial approaches like a breakpoint condition on strstr(typeid(*this).name(), "Button") don't work, because for typeid the lldb console reports:
(lldb) p typeid(*this)
error: you need to include <typeinfo> before using the 'typeid' operator
error: 1 errors parsing expression
and of course adding the #include in the console before making the call doesn't help.
You can do this in Python pretty easily. Set the breakpoint - say it is breakpoint 1 - then do:
(lldb) break command add -s python 1
Enter your Python command(s). Type 'DONE' to end.
def function(frame, bp_loc, internal_dict):
    """frame: the lldb.SBFrame for the location at which you stopped
    bp_loc: an lldb.SBBreakpointLocation for the breakpoint location information
    internal_dict: an LLDB support object not to be used"""
    this_value = frame.FindVariable("this", lldb.eDynamicDontRunTarget)
    this_type = this_value.GetType().GetPointeeType().GetName()
    if this_type == "YourClassNameHere":
        return True
    return False
DONE
The only tricky bit here is that when calling FindVariable I passed lldb.eDynamicDontRunTarget, which told lldb to fetch the "dynamic" type of the variable, as opposed to the static type. As an aside, I could also have used lldb.eDynamicRunTarget, but I happen to know lldb doesn't have to run the target to get C++ dynamic types.
This way of solving the problem is nice in that you don't have to have used RTTI for it to work (though we can only get the dynamic type of classes that have some virtual method, since we use the vtable to do this magic). It will also be faster than a method that requires running code in the debuggee, as your expression would have to do.
BTW, if you like this trick, you can also put the breakpoint code into a python function in some python file (just copy the def above), then use:
(lldb) command script import my_functions.py
(lldb) breakpoint command add -F my_functions.function
so you don't have to keep retyping it.
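For example, my_functions.py could be as small as this (a sketch; "Button" stands in for whatever class you want to stop on):

import lldb

def function(frame, bp_loc, internal_dict):
    # Fetch "this" with its dynamic (most derived) type, then compare
    # the pointee's type name against the class we care about.
    this_value = frame.FindVariable("this", lldb.eDynamicDontRunTarget)
    this_type = this_value.GetType().GetPointeeType().GetName()
    return this_type == "Button"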

Org-mode library of babel: can't #'CALL what I define

I want to use the library of babel of org-mode to define a new Clojure function that would be accessible to any org-mode document.
What I did is to define that new function in a named codeblock like this:
#+NAME: foo
#+BEGIN_SRC clojure
(defn foofn
[]
(println "foo test"))
#+END_SRC
Then I saved that into my library of babel using C-c C-v i, then I selected the org file to save in the library, and everything looked fine.
Then in another org file I wanted to call that block such that it becomes defined in that other context. So I used the following syntax:
#+CALL: foo
However when I execute that org file I am getting the following error:
Reference `nil' not found in this buffer
which tells me that it can't find that named block.
Any idea what I am doing wrong? Also once it works, is there a way to add new parameters to that code block when called using #+CALL:?
Finally, where is my library of babel supposed to be located? (And how do I know whether it got properly added or not?)
I am obviously missing some core information that I can't find in the worg documentation.
Try:
#+CALL: foo()
Also, check the value of the variable org-babel-library-of-babel to make sure that the C-c C-v i worked properly.
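As for passing parameters: arguments go inside the parentheses of the #+CALL line, provided the named block declares them as :var header arguments. A minimal sketch (names are illustrative):

#+NAME: foo
#+BEGIN_SRC clojure :var msg="foo test"
(println msg)
#+END_SRC

#+CALL: foo(msg="called with an argument")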

How to configure Eclipse/CDT/C++ formatter to not break line between a function returned type and the function name [duplicate]

I ran into a problem with the Eclipse formatter. It won't format my code correctly when declaring methods within a class declaration. It puts a new line after the method's return type.
I already exported the style XML file and examined the settings in it, but none of the settings have any apparent connection to this problem, and the settings editor in Eclipse didn't show the same problem happening in its sample code for method declarations.
Here is an example bit of code for what I want to have happen:
class MyClass
{
public:
MyClass();
void myMethod();
};
However, this is what I get:
class MyClass
{
public:
MyClass();
void
myMethod();
};
Again, in the styles editor, the code doesn't have this problem and looks just how I want it to, but in the actual code, the story is different.
I'm using version 3.8.0. Any help is appreciated.
Edit: I deleted those source files that were formatted incorrectly (after formatting the code several times to no avail) and replaced them with "identical" files with the same methods, same structure, etc. I formatted the code this time and it worked. This is probably a bug, but I'm leaving it up just in case anyone else encounters a similar problem or has a solution to avoiding this problem in the first place.
I hand-edited two files under the main Eclipse projects directory:
.metadata\.plugins\org.eclipse.core.runtime\.settings
The two files:
file 1: org.eclipse.cdt.core.prefs, change this line from "insert" to "do not insert"
org.eclipse.cdt.core.formatter.insert_new_line_before_identifier_in_function_declaration=do not insert
file 2: org.eclipse.cdt.ui.prefs,
scan this file for "insert_new_line_before_identifier_in_function_declaration" and make a similar change from insert to do not insert next to it; it should be obvious.
Note: I have seen this problem on Indigo and Juno; the fix described above was done in Juno.
If you have a custom formatter config, export it first (settings>C/C++ General>Formatter>Edit>Export). Then change the following line to "do not insert". Save the XML.
<setting id="org.eclipse.cdt.core.formatter.insert_new_line_before_identifier_in_function_declaration" value="do not insert"/>
Delete the current config and import the one you changed.
There's a specific preference for this in the formatter options starting from CDT 9.8, included in Eclipse 2019-06.

Setting up SCons to Autolint

I'm using google's cpplint.py to verify source code in my project meets the standards set forth in the Google C++ Style Guide. We use SCons to build so I'd like to automate the process by having SCons first read in all of our .h and .cc files and then run cpplint.py on them, only building a file if it passes. The issues are as follows:
In SCons how do I pre-hook the build process? No file should be compiled until it passes linting.
cpplint doesn't return an exit code. How do I run a command in SCons and check whether the result matches a regular expression? I.e., how do I get the text being output?
The project is large; whatever the solution to #1 and #2, it should run concurrently when the -j option is passed to SCons.
I need a whitelist that allows some files to skip the lint check.
One way to do this is to monkey patch the object emitter function, which turns C++ code into linkable object files. There are 2 such emitter functions; one for static objects and one for shared objects. Here is an example that you can copy paste into a SConstruct:
import sys
import SCons.Defaults
import SCons.Builder

OriginalShared = SCons.Defaults.SharedObjectEmitter
OriginalStatic = SCons.Defaults.StaticObjectEmitter

def DoLint(env, source):
    for s in source:
        env.Lint(s.srcnode().path + ".lint", s)

def SharedObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalShared(target, source, env)

def StaticObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalStatic(target, source, env)

SCons.Defaults.SharedObjectEmitter = SharedObjectEmitter
SCons.Defaults.StaticObjectEmitter = StaticObjectEmitter

linter = SCons.Builder.Builder(
    action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', 'date > $TARGET'],
    suffix='.lint',
    src_suffix='.cpp')

# actual build
env = Environment()
env.Append(BUILDERS={'Lint': linter})
env["PYTHON"] = sys.executable
env["LINT"] = "cpplint.py"
env["LINT_OPTIONS"] = ["--filter=-whitespace,+whitespace/tab", "--verbose=3"]
env.Program("test", Glob("*.cpp"))
There's nothing too tricky about it, really. You'd set LINT to the path to your cpplint.py copy and set appropriate LINT_OPTIONS for your project. The only warty bit is creating a TARGET file when the check passes by using the command-line date program; if you want to be cross-platform, that'd have to change.
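One possible cross-platform replacement (a sketch, not part of the original setup) is a Python function action instead of the date command:

def touch_stamp(target, source, env):
    # Create/overwrite the .lint stamp file; returning 0 tells SCons
    # the action succeeded.
    with open(str(target[0]), "w") as f:
        f.write("lint passed\n")
    return 0

linter = SCons.Builder.Builder(
    action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', touch_stamp],
    suffix='.lint',
    src_suffix='.cpp')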
Adding a whitelist is now just regular Python code, something like:
whitelist = """"
src/legacy_code.cpp
src/by_the_PHB.cpp
"""".split()
def DoLint(env, source):
for s in source:
src = s.srcnode().path
if src not in whitelist:
env.Lint( + ".lint", s)
It seems cpplint.py does output the correct error status. When there are errors it returns 1, otherwise it returns 0. So there's no extra work to do there. If the lint check fails, it will fail the build.
This solution works with -j, but the C++ files may still compile even when linting fails, as there are no implicit dependencies between the lint fake output and the object-file target. You can add an explicit env.Depends in there to force the object target to depend on the ".lint" output. As is, it's probably enough, since the build itself will fail (scons gives a non-zero return code) if there are any remaining lint issues even after all the C++ compiles. For completeness, the depends code would be something like this in the DoLint function:
def DoLint(env, source, target):
    for i in range(len(source)):
        s = source[i]
        out = env.Lint(s.srcnode().path + ".lint", s)
        env.Depends(target[i], out)
AddPreAction seems to be what you are looking for, from the manpage:
AddPreAction(target, action)
env.AddPreAction(target, action)
Arranges for the specified action to be performed before the specified target is built.
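A minimal sketch of that approach (illustrative names; $SOURCE expands to the source of the target being built):

env = Environment()
objects = [env.Object(src) for src in Glob("*.cpp")]
for obj in objects:
    # Run cpplint on each object's source before it is compiled.
    env.AddPreAction(obj, "python cpplint.py $SOURCE")
env.Program("test", objects)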
Also see http://benno.id.au/blog/2006/08/27/filtergensplint for an example.
See my github for a pair of scons scripts complete with an example source tree. It uses Google's cpplint.py.
https://github.com/xyzisinus/scons-tidbits