I have two custom GDB commands:
define foo
    set $i = 10
end

define bar
    set $i = 100
    foo
    print $i        <=== prints "10"
end
So $i is not scoped to each custom command, even though that is what I want. Apart from using different names, are there any tricks out there?
There is no concept of scope for convenience variables. I don't know of a trick that would work in the general case, either.
The gdb CLI is rather limited. If you have more complicated programming needs, I suggest using the Python scripting support.
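For example, here is a minimal sketch of what that could look like with the Python API (the command names pyfoo and pybar are made up); because the counters are ordinary Python locals, each command gets its own:

# sketch.py -- load inside gdb with: source sketch.py
import gdb

class PyFoo(gdb.Command):
    """Hypothetical Python replacement for the 'foo' user command."""
    def __init__(self):
        super(PyFoo, self).__init__("pyfoo", gdb.COMMAND_USER)
    def invoke(self, arg, from_tty):
        i = 10                              # local to this command invocation
        gdb.write("pyfoo: i = %d\n" % i)

class PyBar(gdb.Command):
    """Hypothetical Python replacement for the 'bar' user command."""
    def __init__(self):
        super(PyBar, self).__init__("pybar", gdb.COMMAND_USER)
    def invoke(self, arg, from_tty):
        i = 100
        gdb.execute("pyfoo")                # cannot touch this i
        gdb.write("pybar: i = %d\n" % i)    # still prints 100

PyFoo()
PyBar()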
I am writing a configure.ac script for GNU Autotools. In my code I have some if test statements where I want to set flags based on the compiler name. My original test looked like this:
if test "x$NETCDF_FC" = xifort; then
but sometimes the compiler name is more complicated (e.g., mpifort, mpiifort, a prepended path, etc.), and so I want to check whether the string ifort is contained anywhere within the variable $NETCDF_FC.
As far as I can understand, to set up a comparison using a wildcard or regex, I cannot use test but instead need to use the double brackets [[ ]]. But when configure.ac is parsed by autoconf to create configure, square brackets are treated like quotes and so one level of them is stripped from the output. The only solution I could get to work is to use triple brackets in my configure.ac, like this:
if [[[ $NETCDF_FC =~ ifort ]]]; then
Am I doing this correctly? Would this be considered best practices for configure.ac or is there another way?
Use a case statement. Either directly as shell code:
case "$NETCDF_FC" in
*ifort*)
do_whatever
;;
*)
do_something_else
;;
esac
or as m4sh code:
AS_CASE([$NETCDF_FC],
[*ifort*], [do_whatever],
[do_something_else])
I would not want to rely on a shell capable of interpreting [[ ]] or [[[ ]]] being present at configure runtime (you need to escape those a bit with [] to have the double or triple brackets make it into configure).
If you need a character class within a case pattern (e.g. *[a-z]ifort*), I would advise checking the generated configure file to see which case patterns actually end up being used, and adding [] quotes around the pattern in the source configure.ac until the output is right.
Note that explicit case statements often contain a # ( shell comment at the end of the line directly before a ) pattern, to keep editors from becoming confused about unmatched opening/closing parentheses.
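For illustration, a small configure.ac fragment along those lines might look like this (the EXTRA_FCFLAGS variable and the ifort flag are only placeholders, not something your project necessarily needs):

dnl Hypothetical fragment: add a flag whenever the Fortran compiler
dnl name contains "ifort" anywhere.
AS_CASE([$NETCDF_FC],
        [*ifort*], [EXTRA_FCFLAGS="-assume byterecl"],
        [EXTRA_FCFLAGS=""])
AC_SUBST([EXTRA_FCFLAGS])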
I would like to create a csh alias that performs one operation if invoked without arguments and a second operation if invoked with a single argument. Does anyone know how to do this? (Attempting to refer to an argument that wasn't passed triggers an error).
I know this is a bit late but I just ran into needing something similar and hope it might still be relevant to somebody.
You can set the arguments as an array and query based on the size of the array:
alias testing 'set args_=(\!*); if ($#args_ > 0) echo "this command has $#args_ arguments"'
Aliases in tcsh are limited; for more advanced things, I've found that the best way is to source a (t)csh script, like so:
alias my-cmd 'source ~/.tcsh/my-cmd.tcsh'
And ~/.tcsh/my-cmd.tcsh would contain something like:
if ( $1 != '' ) then
    echo "we have an argument: $1"
else
    echo "we don't have an argument"
endif
Example output:
% my-cmd
we don't have an argument
% my-cmd hello
we have an argument: hello
Now, it may also be possible to do this with just an alias, but this will be much more maintainable & cleaner in the long run, IMHO.
(I've assumed tcsh here since almost all, or perhaps even all, C shells are tcsh these days.)
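A variant of that sourced script which branches on the argument count instead of comparing $1 against an empty string (just a sketch, using tcsh's $#argv, and assuming the interactive shell's own argv is empty when the alias is run without arguments):

# hypothetical ~/.tcsh/my-cmd.tcsh variant
if ( $#argv > 0 ) then
    echo "we have $#argv argument(s), the first is: $1"
else
    echo "we don't have an argument"
endif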
Easy to do - sorry I'm late to the party.
alias iftest 'if (\!:0 != \!:$) echo "Last arg="\!:$;if (\!:0 == \!:$) echo "No args given."'
This merely checks whether the 0th argument (i.e. 'iftest' itself) and the last argument are the same, and if they are, assumes there is no argument. This is, of course, not necessarily true, but hopefully works in practice.
We know that eval can do evil things, like executing arbitrary code.
I need a way in Bash to identify where an environment variable is referenced within a string, and ultimately replace it with its actual value.
This is a very simple example of something much more complex.
File "x.dat" contains:
$MYDIR/file.txt
Environment
export MYDIR=/tmp/somefolder
Script "x.sh"
...
fileToProcess=$(cat x.dat)
realFileToProcess=$(eval echo $fileToProcess)
echo $realFileToProcess
...
Keep in mind that environment variable references in a string can also take forms like the following (their expansions are sketched after the list):
${MYDIR}_txt
$MYDIR-txt
${MYDIR:0:3}:txt
${MYDIR:5}.txt
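(For reference, with MYDIR=/tmp/somefolder as above, those forms would expand as follows:)

export MYDIR=/tmp/somefolder
echo "${MYDIR}_txt"        # /tmp/somefolder_txt
echo "$MYDIR-txt"          # /tmp/somefolder-txt
echo "${MYDIR:0:3}:txt"    # /tm:txt
echo "${MYDIR:5}.txt"      # somefolder.txt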
Not an answer yet, but some remarks about the question.
It seems what you need is not variable expansion but token replacement in a template; depending on the use case, printf may be sufficient.
Variable expansion also depends on context; for example, the following are not expanded:
# single quotes
echo '${MYDIR}'
# ANSI-C quotes
echo $'${MYDIR}'
# heredoc with end marker enclosed between single quotes
cat << 'END'
${MYDIR}
END
It should also be noted that a variable expansion may execute arbitrary code:
echo ${X[`echo hi >&2`]}
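If only plain $VAR / ${VAR}-style references have to be expanded (not the substring forms listed in the question), envsubst from GNU gettext is a common eval-free alternative; a minimal sketch, assuming it is installed:

# Sketch: expand environment variable references without eval.
export MYDIR=/tmp/somefolder
fileToProcess=$(cat x.dat)
realFileToProcess=$(printf '%s\n' "$fileToProcess" | envsubst)
echo "$realFileToProcess"    # -> /tmp/somefolder/file.txt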
I'm trying to create a kind of a polyglot script. It's not a true polyglot because it actually requires multiple languages to perform, although it can be "bootstrapped" by either Shell or Batch. I've got this part down no problem.
The part I'm having trouble with is a bit of embedded Powershell code, which needs to be able to load the current file into memory and extract a certain section that is written in yet another language, store it in a variable, and finally pass it into an interpreter. I have an XML-like tagging system that I'm using to mark sections of the file in a way that will hopefully not conflict with any of the other languages. The markers look like this:
lang_a_code
# <{LANGB}>
... code in language B ...
... code in language B ...
... code in language B ...
# <{/LANGB}>
lang_c_code
The #'s are comment markers, but the comment markers can be different things depending on the language of the section.
The problem I have is that I can't seem to find a way to isolate just that section of the file. I can load the entire file into memory, but I can't get the stuff between the tags out. Here is my current code:
@ECHO OFF
SETLOCAL EnableDelayedExpansion
powershell -ExecutionPolicy unrestricted -Command ^
$re = '(?m)^<{LANGB}^>(.*)^<{/LANGB}^>';^
$lang_b_code = ([IO.File]::ReadAllText(^'%0^') -replace $re,'$1');^
echo "${re}";^
echo "Contents: ${lang_b_code}";
Everything I've tried so far results in the entire file being output in the Contents rather than just the code between the markers. I've tried different methods of escaping the symbols used in the markers, but it always results in the same thing.
NOTE: The use of the ^ is required because the top-level interpreter is Batch, which hangs up on the angle brackets and other random things.
Since there is just one block, you can use the regex
$re = '(?s)^<{LANGB}^>(.*)^^.*^<{/LANGB}^>';^
but with the -match operator, and then access the text using the $matches[1] variable that is set as a result of -match.
So, after the regex declaration, use
[IO.File]::ReadAllText(^'%0^') -match $re;^
echo $matches[1];
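Stripped of the Batch ^ escaping, the PowerShell logic being described would look roughly like this on its own (the file name is illustrative):

# Hypothetical stand-alone version of the extraction logic.
$re = '(?s)<{LANGB}>(.*?)<{/LANGB}>'
$text = [IO.File]::ReadAllText('polyglot.cmd')
if ($text -match $re) {
    # $matches[1] holds everything between the opening and closing markers
    Write-Output $matches[1]
}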
I've made a translator in Perl for a message board migration. All I do is apply regexes and print the result; I write stdout to a file and off we go. But the problem is that my program stops working after 18 MB have been written!
I've made a translate.pl (https://gist.github.com/914450)
and launch it with this line:
$ perl translate.pl mydump.sql > mydump-bbcode.sql
Really sorry for the quality of the code, but I never use Perl... I tried sed for the same job but didn't manage to apply the regexes from the original script.
[EDIT]
I reworked the code and sanitized some regexes (see gist.github.com/914450), but I'm still stuck. When I split the big dump into 15 MB files and launched translate.pl seven processes at a time to use all the cores, the scripts still stopped at varying sizes. A "tail" of the output doesn't show any particularly complex message or URL at the point where it stops...
Thanks guys! I'll let you know if I finally manage it.
yikes - start with the basics:
use strict;
use warnings;
...at the top of your script. It will complain about not properly declaring your lexicals, so go ahead and do that. I don't see anything obvious that would be truncating your file, but perhaps one or more of your regexes is pathological. Also, the undefs at the end are not needed.
For what you are doing, you might consider just using sed.
You say the "script stops". It keeps running but produces no more output? Or actually stops running? If it stops running, what does:
perl translate.pl mydump.sql > mydump-bbcode.sql
echo $?
show? And if you add a print STDERR "done!\n"; after your loop, does that show up?
Perl can certainly handle files much larger than 18 MB. I know because I routinely run files of 5 GB through Perl.
I think that your problem is in while($html=<FILE>).
Whenever $html is set to an empty line, the while will evaluate as false and exit the loop.
You need to use something like while( defined( $html = <FILE> ) )
Edit:
Hmm. I had always thought you needed the defined, but in my testing just now it didn't exit on blank lines or 0. Must be more of that special Perl magic that mostly works the way you intend -- except when it doesn't.
Indeed, if you restructure the while loop enough, you can fool Perl into working the way I always thought it worked. (And it might have, in Perl 4 or in earlier versions of Perl 5.)
This will fail:
$x = <>;
chomp $x;
while ( $x ) {
    print $x;
    $x = <>;
    chomp $x;
}
There could be any number of things going on:
Try adding $| = 1; to the top of your script. This will make all output unbuffered.
One of your regexes is going crazy and is deleting strings when you're not expecting it.
You've run out of disk space.
There's nothing really wrong with your script (other than you're missing use strict; use warnings; and you're not using the three-argument form of open()) that would cause it to stop working after some magic number of bytes.
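For reference, a minimal skeleton with use strict, use warnings, and the three-argument open() mentioned above (the file handling is a sketch, not the poster's actual script):

use strict;
use warnings;

# Sketch: three-argument open with an explicit error message.
open( my $in, '<', $ARGV[0] ) or die "Cannot open $ARGV[0]: $!";
while ( my $line = <$in> ) {
    # ... apply the translation regexes here ...
    print $line;
}
close $in;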
Hello guys, and thank you so much for your help and ideas!
After trying to split and parallelize the jobs, I ended up cutting my program into 3 programs, translate1.pl, translate2.pl and translate3.pl... the job is done, and it's fast with 8 active cores!
Then my launcher.sh starts the 3 scripts successively for each split file. Done with 2 loops, and here we go :)
Regards, Yoann