How can I continue in Fabric if a command does not finish or succeed? - fabric

Sometimes this code prompts for a password; when that happens I would like to just skip to the next line of code after 10 seconds.
run_in(node, 'sudo init 0')
I regularly have an issue where I want to continue on errors.
Ansible has ignore_errors: True. How can I do this in Fabric?

Is the warn_only flag what you need? If set to True, Fabric turns command failures into warnings and continues instead of aborting. You can apply it in a settings block around only the lines whose errors you want to ignore. See the Fabric docs for more details.
Example:
from fabric.api import settings

with settings(warn_only=True):
    # your code here
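A minimal sketch applying this to the command from the question (run_in appears to be the asker's own helper, so plain Fabric sudo is used here; the .failed check on the result is standard Fabric 1.x):
from fabric.api import settings, sudo

# failures inside this block become warnings instead of aborting the whole run
with settings(warn_only=True):
    result = sudo('init 0')
    if result.failed:
        print('sudo init 0 failed, continuing with the next step')
In a real fabfile this would live inside a task function.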

Related

inject code onto command line input using python27

I have written a simple C program with a buffer overflow. It is basically a game to guess a 4-digit number, but it starts by asking players to enter their name, and this is where the buffer overflow happens. I have written an exploit to inject shellcode at the "Please enter your name" prompt. When I run it without the program attached to Immunity Debugger it works fine, but when I attach the exe to Immunity Debugger the Python script does nothing, since the process it spawns is not the one running under the debugger. So basically nothing happens when I execute the code. The Python code is below:
import sys, struct, os
import subprocess
import time
from subprocess import Popen, PIPE
location = r'C:\Users\ZEIT8042\Desktop\Guessthenumber\guess.exe'
p = Popen([location], stdin=PIPE, stdout=PIPE, stderr=PIPE)
time.sleep(15)  # tried this to make the program stall for 15 seconds so that it can be attached to Immunity Debugger
junk = 'A' * 40
o, e = p.communicate(input=junk)
print(o)
What I am trying to do is check whether the program is running, and if it is, inject the shellcode when the exe asks for the name. Any help would be appreciated.
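For the interactive part, one possible approach (a rough sketch, not a tested exploit: the prompt text and the 40-byte payload come from the question, and reading stdout one byte at a time until the prompt appears assumes guess.exe flushes its output) is to print the PID of the spawned process so Immunity Debugger can be attached to that exact process, and only send the payload once the prompt has been seen:
import time
from subprocess import Popen, PIPE

location = r'C:\Users\ZEIT8042\Desktop\Guessthenumber\guess.exe'
p = Popen([location], stdin=PIPE, stdout=PIPE, stderr=PIPE)

print('attach Immunity Debugger to PID %d now' % p.pid)
time.sleep(15)                      # time to attach the debugger to this PID

banner = ''
while 'Please enter your name' not in banner:
    banner += p.stdout.read(1)      # read until the name prompt shows up

junk = 'A' * 40                     # overflow payload from the question
p.stdin.write(junk + '\n')
p.stdin.flush()
print(p.stdout.read())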

gdb run program in a loop until a breakpoint is reached then display stacktrace

I am trying to debug a very sporadic issue in my application. If run ~1000 times, my application reliably hits a certain line it shouldn't, and I would like to view the stack at that point.
I tried using a gdb script cmd.gdb for this:
set logging overwrite on
set pagination off
set $n = 1000
break file.c:496
while $n-- > 0
ignore 1 9
condition 1 global_var == 10
run
end
How should I modify this script in order to print the stack when the breakpoint is reached?
I tried adding this after "run":
if $_siginfo
bt
loop_break
end
but it doesn't seem to work.
Actually, I have a GitHub repo with a Python GDB extension which does exactly what you have described, plus some more functionality.
You can just clone the repo:
git clone https://github.com/Viaceslavus/gdb-debug-until.git
and feed the python script to GDB with the following command inside GDB:
source <python script path>
Then, for your example, you would run the following command:
debug-until file.c:496 --args="" --var-eq="global_var:10" -r=1000
Some remarks:
file.c:496 is the starting breakpoint
the --args parameter contains the arguments for your program
--var-eq is a debugging event, where global_var is a variable name and 10 is its value
finally, the -r option specifies the number of times the program will be run
So altogether this command will run your program 1000 times and notify you immediately when global_var becomes equal to 10.
Any additional information about the project can be found in the README file of the repository: https://github.com/Viaceslavus/gdb-debug-until.git
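If you would rather not pull in an extension, a rough equivalent of the above using GDB's built-in Python API could look like this (a sketch only: the breakpoint location, condition and run count are taken from the question, and checking for live threads is one way to tell a breakpoint stop from a normal exit):
# save as e.g. debug_loop.py (name is arbitrary) and load it inside GDB with: source debug_loop.py
import gdb

gdb.execute("set pagination off")

bp = gdb.Breakpoint("file.c:496")
bp.condition = "global_var == 10"

for attempt in range(1000):
    gdb.execute("run")
    if gdb.selected_inferior().threads():   # still alive, so we stopped at the breakpoint
        gdb.execute("bt")                   # print the stack trace and stop looping
        break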

Debuggers not acting properly on Jupyter notebooks

I'm trying to debug some code in a Jupyter notebook. I've tried 3-4 different methods, and they all suffer from the same problem:
--Return--
None
> <ipython-input-22-04c6f5c205d1>(3)<module>()
1 import IPython.core.debugger as dbg
2 dber = dbg.Tracer()
----> 3 dber()
4 tst = huh.plot(ret_params=True)
5 type(tst)
ipdb> n
> y:\miniconda\lib\site-packages\ipython\core\interactiveshell.py(2884)run_code()
2882 finally:
2883 # Reset our crash handler in place
-> 2884 sys.excepthook = old_excepthook
2885 except SystemExit as e:
2886 if result is not None:
As you can see, the n command, which from what I understood of the pdb documentation should execute the next line, instead drops me into IPython's own interactiveshell.py (I'm assuming ipdb is just pdb adapted to work on IPython, especially since I can't find any command documentation that refers specifically to ipdb and not pdb).
s also has the same problem. This is actually what I want to do - step into the plot call (from what I understand, this is what s is supposed to do), but what I get is exactly the same as what I get from n. I also just tried r and I get the same problem.
Every example I've seen just uses Tracer()() or IPython.core.debugger.PDB().set_trace() to set a breakpoint in the line that follows the command, but both cause the same problems (and, I assume, are actually the exact same thing).
I also tried %debug (MultipleInstanceError) and %%debug (Doesn't show the code in the line being executed - just says what line, using s doesn't step into the function, just runs the line).
Edit: turns out, according to a blog post from April of this year, plain pdb should also work. It does allow me to interactively debug the notebook, but it only prints the current line being debugged (probably not a bug), and it has the same problem as IPython's set_trace() and Tracer()()
On a plain IPython console, IPython's set_trace (the only one I've tested there) works just fine.
I encountered the same problem when debugging in Jupyter Notebook. What works for me, however, is calling set_trace() inside a function. Why that is has been explained elsewhere, though I don't really understand why others don't encounter this problem. Anyway, if you need a pragmatic solution and you want to debug a self-written function, try this:
from IPython.core.debugger import set_trace

def thisfunction(x):
    set_trace()  # start debugging when calling the function
    x += 2
    return x

thisfunction(5)  # ipdb console opens and I can use 'n'
Now I can use 'n' and the debugging process runs the next line without problems. If I use the following code, however, I run into your above mentioned problem.
from IPython.core.debugger import set_trace

def thisfunction(x):
    x += 2
    return x

set_trace()  # start debugging before calling the function.
# Calling 's' in the ipdb console to step inside "thisfunction" produces an error
thisfunction(5)
Hope this helps until somebody solves the problem completely.

Is there a way to see the salt state converted to the actual command that is being run?

I have a state like
django.syncdb:
  module.run:
    - settings_module: mvod.dev_settings
    - bin_env: /home/vagrant/virtualenv/
    - migrate: True
    - require:
      - pip: mvod
      - mysql_grants: mvod_user_grants
      - file: /tmp/mvod.log
The docs aren't very specific about what this does exactly, though it does seem to do what I expect, namely run the command django-admin.py syncdb --settings=mvod.dev_settings --migrate from inside the directory /home/vagrant/virtualenv.
It actually fails to do this, since the /home/vagrant/virtualenv/ path needs to be set to /home/vagrant/virtualenv/bin/django-admin.py.
However, I ran this in an environment where Django wasn't installed, so I'd expect it to fail. The state nevertheless returned Result: True, but the output was "Is a directory".
I eventually figured out that I have to replace the line bin_env: /home/vagrant/virtualenv/ with bin_env: /home/vagrant/virtualenv/bin/django-admin.py, since that's what I was trying to call.
Bottom line: I would have figured it out much sooner had I had a way of turning the state into the exact command being executed.
So is there a way to do this quickly?
You can run the minion in the foreground with salt-minion --log-level=debug and then execute the state. The debug output will show you what commands Salt executes on the system based on your state file.

GDB python script for bounded instruction tracing

I'm trying to write a GDB script to do instruction tracing in a bounded manner (i.e. between a start address and a stop address). Perhaps I'm failing at Google, but I can't seem to find an existing tool for this.
Here is my stab at it:
python
def start_logging():
    gdb.execute("set logging on")
    gdb.execute("while $eip != 0xBA10012E9")
    gdb.execute("x/1i $eip")
    gdb.execute("stepi")
    gdb.execute(" end")
    gdb.execute("set logging off")

gdb.execute("set pagination off")
gdb.execute("break *0xBA19912CF")
gdb.execute("command 1 $(start_logging())")
gdb.execute("continue")
In my mind this should set up a breakpoint then set the command to run when it hits. When the breakpoint hits it should single step through the code until the end address is hit and then it will turn off logging.
When I run this with GDB, the application breaks at the correct point, but no commands are run.
What am I doing wrong? Sorry if this is the wrong way to go about it; please let me know. I'm new to GDB scripting.
I see a few odd things in here.
First, it looks like you are trying to split multi-line gdb commands across multiple calls to gdb.execute. I don't believe this will work. Certainly it isn't intended to work.
Second, there's no reason to try to do a "while" loop via gdb.execute. It's better to just do it directly in Python.
Third, I think the "command" line seems pretty wrong as well. I don't really get what it is trying to do, I guess call start_logging when the breakpoint is hit? And then continue? Well, it won't work as written.
What I would suggest is something like:
gdb.execute('break ...')
gdb.execute('run')
while gdb.parse_and_eval('$eip') != 0x...:
    gdb.execute('stepi')
If you really want logging, either do the 'set logging' business or just instruct gdb.execute to return a string and log it from Python.
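Putting that suggestion together, a rough sketch (the start and stop addresses come from the question; writing each instruction to trace.log via to_string=True is the second option mentioned above, and the file name is arbitrary):
import gdb

START_ADDR = 0xBA19912CF   # where tracing should begin (the breakpoint from the question)
STOP_ADDR = 0xBA10012E9    # where tracing should stop (the loop bound from the question)

gdb.execute("set pagination off")
gdb.execute("break *0x%x" % START_ADDR)
gdb.execute("run")

with open("trace.log", "w") as log:
    while int(gdb.parse_and_eval("$eip")) != STOP_ADDR:
        log.write(gdb.execute("x/1i $eip", to_string=True))  # log the current instruction
        gdb.execute("stepi")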