In gdb, I would like to print just the command output and NOT the command number. For example, I get
$24 = {timeStamp = 15329666, id = 1, version = 1, checksum = 15329411}
instead I want it to output only
{timeStamp = 15329666, id = 1, version = 1, checksum = 15329411}
The value $24 is not a command number; it is the value history number of that particular output.
Use the output command if you don't want the value history.
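For example (a sketch of a gdb session; packet stands in for whatever expression you are printing):
(gdb) print packet
$24 = {timeStamp = 15329666, id = 1, version = 1, checksum = 15329411}
(gdb) output packet
{timeStamp = 15329666, id = 1, version = 1, checksum = 15329411}
Note that output also omits the trailing newline and does not store the value in the value history.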
I have a config file that looks like this:
...
[env.staging]
name = "something"
...
[env.production]
name = "something"
...
I'm trying to replace the value of name for a specific environment using a regex with sed in bash, but when I try to find the line by its section header followed by a newline, it doesn't work:
sed -i -e '/\[env\.production\]\nname =/s/=.*/= \"something_else\"/' config.toml
The following command works, but of course it changes the name variable in both environments, which is not what I want.
sed -i -e '/name =/s/=.*/= \"something_else\"/' config.toml
Any ideas on how to achieve the correct result? (All my files are using LF line endings)
sed works line by line by default. You cannot match across multiple lines unless you use features that bring multiple lines into the pattern space.
$ sed '/\[env\.production]/ {n; s/=.*/= "something_else"/}' config.toml
...
[env.staging]
name = "something"
...
[env.production]
name = "something_else"
...
The n command replaces the pattern space with the next line; use N when you need to process both lines together.
{} is used to group commands.
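For instance, an equivalent that uses N to pull the next line into the pattern space and edit both together (a sketch, assuming GNU sed and the same sample config):
$ sed '/\[env\.production\]/{N; s/name = ".*"/name = "something_else"/}' config.toml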
I would also suggest using a TOML-aware tool like dasel instead of sed for such cases.
I've created some additional sample data for demonstration purposes:
$ cat config.toml
...
[env.staging]
id = 1
name = "something"
name2 = "something"
date = 2021/08/03
...
[env.production]
id = 2
name = "something"
name2 = "something"
date = 2022/01/23
...
[env.test]
id = 3
name = "something"
name2 = "something"
date = 2021/11/15
...
One sed idea: use a range to find the desired section and then apply the change within that section:
$ sed '/\[env.production\]/,/^name = ".*$/ s/^name = ".*$/name = "something_else"/' config.toml
...
[env.staging]
id = 1
name = "something"
name2 = "something"
date = 2021/08/03
...
[env.production]
id = 2
name = "something_else"
name2 = "something"
date = 2022/01/23
...
[env.test]
id = 3
name = "something"
name2 = "something"
date = 2021/11/15
...
NOTES:
we don't know what the rest of the config sections look like, so I went a bit overboard and matched the entire line (^name = ".*$)
once OP is satisfied with the result, the -i option can be added to perform the in-place update of config.toml
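Another option, assuming every section begins with a bracketed header: end the range at the next section header rather than at the name line, so the substitution stays confined to the production block no matter what else it contains (untested beyond the sample above; -i can be added later):
$ sed '/\[env\.production\]/,/^\[/ s/^name = .*/name = "something_else"/' config.toml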
The following code is intended to open a text file, search it for matches from a list of strings, and then output how many it finds. For some reason, it always "finds" 0.
validcards=array("NVIDIA GRID K140Q","AMD FirePro S7150","VMware SVGA 3D")
textFile = fso.opentextfile("_cards.txt",1,0,1).readall
set fso=nothing
set query = new regexp
with query
    .global=true
    .multiline=true
    .ignorecase=true
    .pattern="^.*?" & join(validcards,".*?") & ".*?$"
end with
counter = 0
set results = query.execute(textFile)
for each result in results
    stdout.WriteLine escape(result)
    counter = counter + 1
next
When I output counter it is always zero. What am I missing? Here is what the text file looks like:
Name
VMware SVGA 3D
The text file is generated using wmic path win32_VideoController get name > _cards.txt
UPDATE
In desperation, I just printed out the file after it's loaded. It looks like this:
■N a m e
V M w a r e S V G A 3 D
I was able to fix this by changing the OpenTextFile line to textFile = fso.opentextfile("_cards.txt",1,0,-1).readall. However, the regex still isn't working.
I changed the pattern to the following and now it seems to be working fine:
.pattern="^.*(" & join(validcards,"|") & ").*$"
I'm working with a very large text file (58 GB) that I'm attempting to split into smaller chunks. The problem is that the smaller chunks appear to be hex. I'm also having my script print each line to stdout, and there it looks like normal strings to me. Is this known behavior? I've never encountered an issue where Python keeps spitting things out in hex before. Even odder, when I tried Ubuntu's split from the command line, it also generated everything in hex.
Code snippet below:
working_dir = '/SECRET/'
output_dir = path.join(working_dir, 'output')
test_file = 'SECRET.txt'
report_file = 'SECRET_REPORT.txt'
output_chunks = 100000000
output_base = 'SECRET'
input = open(test_file, 'r')
report_output = open(report_file, 'w')
count = 0
at_line = 0
output_f = None
for line in input:
    if count % output_chunks == 0:
        if output_f:
            report_output.write('[{}] wrote {} lines to {}. Total count is {}'.format(
                datetime.now(), output_chunks, str(output_base + str(at_line) + '.txt'), count))
            output_f.close()
        output_f = open('{}{}.txt'.format(output_base, str(at_line)), 'wb')
        at_line += 1
    output_f.write(line.encode('ascii', 'ignore'))
    print line.encode('ascii', 'ignore')
    count += 1
Here's what was going on:
Each line started with a NUL character. When I opened parts of the file with head or in PyCharm's terminal it showed up fine, but Sublime Text picked up on that NUL character and rendered the results as hex. I had to strip '\x00' from each line of the output, and then it started looking the way I expected.
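A minimal sketch of that cleanup step (Python 2 to match the snippet above; the output file name is just a placeholder):
# Strip embedded NUL characters before writing, so editors no longer
# fall back to a hex view of the output chunks.
with open('SECRET.txt', 'r') as src, open('SECRET_clean.txt', 'wb') as dst:
    for line in src:
        dst.write(line.replace('\x00', '').encode('ascii', 'ignore'))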
My problem is very simple, but I cannot figure out how to solve it.
I have a list of about one million sequences, and each of them needs to be aligned to a sequencing adapter. I'm thinking of doing the alignment in Python using the pairwise2 tools from Biopython. I would like to use this tool because I need to collect all the alignment scores, do some math, and select sequences based on the result. If I run the code below it works, but it is slow because only one alignment runs at a time.
def align_call(record, adapter):
    score = pairwise2.align.localms(record.seq, adapter.seq, 1, -3, -1, -0.99, one_alignment_only=1, score_only=1)
    print record.id + " " + record.seq + " " + adapter.id + " " + str(score)
    #results.append(line)
    return

if __name__ == '__main__':
    fastaSeq = argv[1]
    threads = argv[2]
    fastaAdapt = argv[3]

    listSeq = []
    adpt = list(SeqIO.parse(fastaAdapt, "fasta"))
    for record in SeqIO.parse(fastaSeq, "fasta"):
        align_call(record, adpt[0])
Therefore, I was thinking of changing the code to use multithreading or multiprocessing to speed things up by sending n parallel jobs based on the number of threads the computer has. So I came up with something like this:
results = []

def align_call(record, adapter):
    score = pairwise2.align.localms(record.seq, adapter.seq, 1, -3, -1, -0.99, one_alignment_only=1, score_only=1)
    line = record.id + " " + record.seq + " " + adapter.id + " " + str(score)
    results.append(line)
    return results

if __name__ == '__main__':
    fastaSeq = argv[1]
    threads = argv[2]
    fastaAdapt = argv[3]

    listSeq = []
    adpt = list(SeqIO.parse(fastaAdapt, "fasta"))
    for record in SeqIO.parse(fastaSeq, "fasta"):
        pool = Pool(processes=1)
        result = pool.apply_async(align_call, args=(record, adpt[0]))
        print result.get()
The script works, but I cannot control how many sequences are sent at a time, and when there are a lot of them I run out of cores and memory.
Any ideas on how I can do this? Suggestions?
I tried implementing a Queue, but it did not work.
Thanks
Luigi
You might want to look into vsearch (or usearch). It is pretty fast and supports multithreading:
https://github.com/torognes/vsearch
vsearch --usearch_global big.fasta -db adapter_seq.fasta --uc outputfile.txt --id 0.8 --iddef 0 --threads 6
--id is the minimum identity with the target sequence that you will accept (80 % in this case)
--iddef 0 selects the identity definition (0 = identity relative to the shortest sequence, 2 = ignore terminal gaps, ...)
You can then read in this outputfile.txt and grab the alignment scores/matches/gaps/alignment length/query name for each sequence.
With the desired query names collected, you can use them to pull the relevant sequences from the original fasta file.
If you just want the sequences that were greater than x % match to the adapter, you can use --matched, instead of --uc, which will give you a fasta file of the sequences that matched above the --id threshold.
Alternatively, --samout will give you a sam file, with the seq name, alignment length, cigar code for the alignment as well as the query sequence.
https://github.com/torognes/vsearch/releases/download/v2.7.1/vsearch_manual.pdf has the full set of output options and commands
I think the pool should be created before the SeqIO loop, and you need to use a lock or a callback to make sure the output comes out in the right order.
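A minimal sketch of that restructuring (untested; it keeps the question's localms parameters, has align_call return its line instead of appending to a module-level list, since globals are not shared between worker processes, and uses pool.imap so results come back in input order):
from multiprocessing import Pool
from sys import argv
from Bio import SeqIO, pairwise2

def align_call(record, adapter):
    score = pairwise2.align.localms(record.seq, adapter.seq, 1, -3, -1, -0.99,
                                    one_alignment_only=1, score_only=1)
    return "%s %s %s %s" % (record.id, record.seq, adapter.id, score)

def align_star(args):
    # pool.imap passes a single argument, so unpack the (record, adapter) tuple here
    return align_call(*args)

if __name__ == '__main__':
    fastaSeq, threads, fastaAdapt = argv[1], int(argv[2]), argv[3]
    adpt = list(SeqIO.parse(fastaAdapt, "fasta"))

    pool = Pool(processes=threads)  # created once, before the loop
    pairs = ((record, adpt[0]) for record in SeqIO.parse(fastaSeq, "fasta"))

    # imap yields results in submission order; chunksize reduces inter-process overhead
    for line in pool.imap(align_star, pairs, chunksize=1000):
        print line

    pool.close()
    pool.join()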
I am new to Python programming and have the following question: how do I display numbers from -n to n, where n is entered by the user?
For example, if the user enters 2, it should display -2, -1, 0, 1, 2.
Do this:
def f():
    n = input("Value n = ?")
    for i in range(-n, n + 1):
        print i
and then call your function
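For example, calling it in a Python 2 session:
>>> f()
Value n = ?2
-2
-1
0
1
2
This relies on Python 2's input(), which evaluates what the user types, so entering 2 yields the integer 2; in Python 3 you would need int(input(...)).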