Regex syntax error with Sed command in Ubuntu 9.04 - regex

I have a sed command like this:
radius_clientsfile=clients.conf
iface_netsize="/64"
wireless_prefix=fd04:bd3:80e8:3::
sed -i "/client $wireless_prefix\\$iface_netsize/ {n s/\(\W*secret\W*=\W\).*/\1$key/}" $radius_clientsfile
clients.conf has content like this:
client fd04:bd3:80e8:3::/64 {
secret = 00000000000000000000000000000001
}
The command aims to replace the value of secret with the key in the clients.conf file. For example, if the key is 00000000000000000000000000000002, the content of clients.conf should change as follows:
client fd04:bd3:80e8:3::/64 {
secret = 00000000000000000000000000000002
}
This script works on OpenWrt Attitude Adjustment r35400 for armv5tejl.
However, it does not work on Ubuntu 9.04, where it fails with this error:
sed: -e expression #1, char 36: extra characters after command
Could anyone help me with this situation?

I think you need to add a ; between the n command and the s command, like this:
sed -i "/client $wireless_prefix\\$iface_netsize/ {n; s/\(\W*secret\W*=\W\).*/\1$key/}" $radius_clientsfile
This works in my Cygwin environment.

You need to separate the commands in the command block with a semicolon, so add a ; after the n command to separate it from the s command that follows.
Like this:
{n;s/\(\W*secret\W*=\W\).*/\1$key/}
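GNU sed also accepts a newline as a command separator, so an equivalent form is (a sketch with the question's variables inlined; the key value is just an example):
key=00000000000000000000000000000002
sed -i "/client fd04:bd3:80e8:3::\/64/ {
n
s/\(\W*secret\W*=\W\).*/\1$key/
}" clients.conf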

Related

Regex in Windows Batch to automate Docker run

I am trying to automate the process of sending my temporary Amazon AWS keys as environment variables to a Docker image using Windows. I have a file, credentials.txt, that contains my AWS credentials (the 3 ids are always the same, but the string values change regularly). I am using the Windows command prompt.
Input:
credentials.txt (includes 2 empty lines at the end):
[default]
aws_access_key_id = STR/+ing1
aws_secret_access_key = STR/+ing2
aws_session_token = STR/+ing3
Desired output:
I need to issue the following command in order to run a Docker image (substituting the strings with the actual strings):
docker run -e AWS_ACCESS_KEY_ID=STR/+ing1 -e AWS_SECRET_ACCESS_KEY=STR/+ing2 -e AWS_SESSION_TOKEN=STR/+ing3 my-aws-container
My idea is to try to use regex on credentials.txt to convert it to:
SET aws_access_key_id=STR/+ing1
SET aws_secret_access_key=STR/+ing2
SET aws_session_token=STR/+ing3
And then run:
docker run -e AWS_ACCESS_KEY_ID=%aws_access_key_id% -e AWS_SECRET_ACCESS_KEY=%aws_secret_access_key% -e AWS_SESSION_TOKEN=%aws_session_token% my-aws-container
Does anyone have any advice on how to achieve this?
You can parse your credentials.txt with a for /f loop to set the variables (effectively removing the spaces):
for /f "tokens=1,3" %%a in ('type credentials.txt ^| find "="') do set "%%a=%%b"
and then run the last code line from your question:
docker run -e AWS_ACCESS_KEY_ID=%aws_access_key_id% -e AWS_SECRET_ACCESS_KEY=%aws_secret_access_key% -e AWS_SESSION_TOKEN=%aws_session_token% my-aws-container
Note: the values should not contain spaces or commas.
I've had a go in Python that seems to work. Someone else may have a better answer.
I create the python file:
docker_run.py
import re
import os

myfile = 'C:/fullpath/credentials'
with open(myfile, 'r') as f:
    mystr = f.read()
# grab the value after each '=' up to the end of its line
vals = re.findall(r'=[\s]*([^\n]+)', mystr)
keys = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_SESSION_TOKEN']
# build the ' -e KEY=value' arguments in credential-file order
environment_vars = ''.join([' -e ' + k + '=' + v for k, v in zip(keys, vals)])
cmd = 'docker run' + environment_vars + ' my-aws-container'
os.system(cmd)
Then from command prompt I run:
python docker_run.py
This succeeds in running Docker.
(Note: I tried using exec() on the final line rather than os.system(), but got "SyntaxError: invalid syntax"; exec() executes Python source, not shell commands.)

How to write unix regular expression to select for specific files in a cp for-loop

I've got a directory with a bunch of files. Instead of describing the filenames and extensions, I'll just show you what is in the directory:
P01_1.atag P03_3.tgt P05_6.src P08_3.atag P10_5.tgt
P01_1.src P03_4.atag P05_6.tgt P08_3.src P10_6.atag
P01_1.tgt P03_4.src P06_1.atag P08_3.tgt P10_6.src
P01_2.atag P03_4.tgt P06_1.src P08_4.atag P10_6.tgt
P01_2.src P03_5.atag P06_1.tgt P08_4.src P11_1.atag
P01_2.tgt P03_5.src P06_2.atag P08_4.tgt P11_1.src
P01_3.atag P03_5.tgt P06_2.src P08_5.atag P11_1.tgt
P01_3.src P03_6.atag P06_2.tgt P08_5.src P11_2.atag
P01_3.tgt P03_6.src P06_3.atag P08_5.tgt P11_2.src
P01_4.atag P03_6.tgt P06_3.src P08_6.atag P11_2.tgt
P01_4.src P04_1.atag P06_3.tgt P08_6.src P11_3.atag
P01_4.tgt P04_1.src P06_4.atag P08_6.tgt P11_3.src
P01_5.atag P04_1.tgt P06_4.src P09_1.atag P11_3.tgt
P01_5.src P04_2.atag P06_4.tgt P09_1.src P11_4.atag
P01_5.tgt P04_2.src P06_5.atag P09_1.tgt P11_4.src
P01_6.atag P04_2.tgt P06_5.src P09_2.atag P11_4.tgt
P01_6.src P04_3.atag P06_5.tgt P09_2.src P11_5.atag
P01_6.tgt P04_3.src P06_6.atag P09_2.tgt P11_5.src
P02_1.atag P04_3.tgt P06_6.src P09_3.atag P11_5.tgt
P02_1.src P04_4.atag P06_6.tgt P09_3.src P11_6.atag
P02_1.tgt P04_4.src P07_1.atag P09_3.tgt P11_6.src
P02_2.atag P04_4.tgt P07_1.src P09_4.atag P11_6.tgt
P02_2.src P04_5.atag P07_1.tgt P09_4.src P12_1.atag
P02_2.tgt P04_5.src P07_2.atag P09_4.tgt P12_1.src
P02_3.atag P04_5.tgt P07_2.src P09_5.atag P12_1.tgt
P02_3.src P04_6.atag P07_2.tgt P09_5.src P12_2.atag
P02_3.tgt P04_6.src P07_3.atag P09_5.tgt P12_2.src
P02_4.atag P04_6.tgt P07_3.src P09_6.atag P12_2.tgt
P02_4.src P05_1.atag P07_3.tgt P09_6.src P12_3.atag
P02_4.tgt P05_1.src P07_4.atag P09_6.tgt P12_3.src
P02_5.atag P05_1.tgt P07_4.src P10_1.atag P12_3.tgt
P02_5.src P05_2.atag P07_4.tgt P10_1.src P12_4.atag
P02_5.tgt P05_2.src P07_5.atag P10_1.tgt P12_4.src
P02_6.atag P05_2.tgt P07_5.src P10_2.atag P12_4.tgt
P02_6.src P05_3.atag P07_5.tgt P10_2.src P12_5.atag
P02_6.tgt P05_3.src P07_6.atag P10_2.tgt P12_5.src
P03_1.atag P05_3.tgt P07_6.src P10_3.atag P12_5.tgt
P03_1.src P05_4.atag P07_6.tgt P10_3.src P12_6.atag
P03_1.tgt P05_4.src P08_1.atag P10_3.tgt P12_6.src
P03_2.atag P05_4.tgt P08_1.src P10_4.atag P12_6.tgt
P03_2.src P05_5.atag P08_1.tgt P10_4.src
P03_2.tgt P05_5.src P08_2.atag P10_4.tgt
P03_3.atag P05_5.tgt P08_2.src P10_5.atag
P03_3.src P05_6.atag P08_2.tgt P10_5.src
I have a file that is just outside of this directory that I need to copy to all of the files that end with "_1.src" inside the directory.
I'm working with unix in the Terminal app, so I tried writing this for loop, but it rejected my regular expression:
for .*1.src in ./
> do
> cp ../1.src
> done
I've only written regular expressions in Python before and have minimal experience, but I was under the impression that .* would match any combination of characters. However, I got the following error message:
-bash: `.*1.src': not a valid identifier
I then tried the same for loop with the following regular expression:
^[a-zA-Z0-9_]*1.src$
But I got the same error message:
-bash: `^[a-zA-Z0-9_]*1.src$': not a valid identifier
I tried the same regular expression with and without quotation marks, but it always gives the same 'not a valid identifier' error message.
Tested on Bash 4.4.12, the following is possible:
$ for i in ./*_1.src; do echo "$i" ; done
This will echo every file ending with _1.src to the screen; moving (or copying) them works the same way.
$ mkdir tmp
$ for i in ./*_1.src; do mv "$i" tmp/.; done
I've tested with the following data:
$ touch P{1,2}{0,1,2}_{0..6}.{src,tgt,atag}
$ ls
P10_0.atag P10_5.src P11_3.tgt P12_2.atag P20_0.src P20_5.tgt P21_4.atag P22_2.src
P10_0.src P10_5.tgt P11_4.atag P12_2.src P20_0.tgt P20_6.atag P21_4.src P22_2.tgt
P10_0.tgt P10_6.atag P11_4.src P12_2.tgt P20_1.atag P20_6.src P21_4.tgt P22_3.atag
P10_1.atag P10_6.src P11_4.tgt P12_3.atag P20_1.src P20_6.tgt P21_5.atag P22_3.src
P10_1.src P10_6.tgt P11_5.atag P12_3.src P20_1.tgt P21_0.atag P21_5.src P22_3.tgt
P10_1.tgt P11_0.atag P11_5.src P12_3.tgt P20_2.atag P21_0.src P21_5.tgt P22_4.atag
P10_2.atag P11_0.src P11_5.tgt P12_4.atag P20_2.src P21_0.tgt P21_6.atag P22_4.src
P10_2.src P11_0.tgt P11_6.atag P12_4.src P20_2.tgt P21_1.atag P21_6.src P22_4.tgt
P10_2.tgt P11_1.atag P11_6.src P12_4.tgt P20_3.atag P21_1.src P21_6.tgt P22_5.atag
P10_3.atag P11_1.src P11_6.tgt P12_5.atag P20_3.src P21_1.tgt P22_0.atag P22_5.src
P10_3.src P11_1.tgt P12_0.atag P12_5.src P20_3.tgt P21_2.atag P22_0.src P22_5.tgt
P10_3.tgt P11_2.atag P12_0.src P12_5.tgt P20_4.atag P21_2.src P22_0.tgt P22_6.atag
P10_4.atag P11_2.src P12_0.tgt P12_6.atag P20_4.src P21_2.tgt P22_1.atag P22_6.src
P10_4.src P11_2.tgt P12_1.atag P12_6.src P20_4.tgt P21_3.atag P22_1.src P22_6.tgt
P10_4.tgt P11_3.atag P12_1.src P12_6.tgt P20_5.atag P21_3.src P22_1.tgt P10_5.atag
P11_3.src P12_1.tgt P20_0.atag P20_5.src P21_3.tgt P22_2.atag
Apparently, my previous answer didn't work. But this seems to:
$ for x in `echo ./P[01][012]_1.src`; do echo "$x"; done
./P01_1.src
./P02_1.src
So, when you run this echo alone, this pattern gets expanded into many names:
$ echo ./P[01][012]_1.src # note that the 'regex' is not enclosed in quotes
./P01_1.src ./P02_1.src
And then you can iterate over these names in a loop.
BTW, as noted in the comments, you don't even need that echo, so you can plug the pattern right into the loop:
for x in ./P[01][012]_1.src; do echo "$x"; done
Please correct me if your goal is something other than
"overwrite many existing files sharing a common suffix with the contents of a single file"
find /path/to/dest_dir -type f -name "*_1.src" | xargs -n1 cp /path/to/source_file
Note that without the -maxdepth 1 option, find will recurse through your destination directory.
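For example, the non-recursive variant would be (paths are the same placeholders as above):
find /path/to/dest_dir -maxdepth 1 -type f -name "*_1.src" | xargs -n1 cp /path/to/source_file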
Thanks to everyone; this is what ended up working:
for x in `echo ./P[0-9]*_1.src`
> do
> cp ../1.src "$x"
> done
This loop allowed me to copy the contents of the one file to all of the files in the subdirectory that ended with "_1.src".
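As noted above, the echo in backticks is unnecessary, so a minimal equivalent would be:
for x in ./P[0-9]*_1.src; do cp ../1.src "$x"; done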

Sign data with SoftHSM

I want to sign (or encrypt) some data using SoftHSMv2 and pkcs11-tool.
So far, I generated an RSA keypair with:
pkcs11-tool --module=/usr/local/lib/softhsm/libsofthsm2.so -l --pin mypin -k --key-type rsa:1024 --id 01 --label mykey
But when I try to sign something:
echo "some text" > input.txt
pkcs11-tool --module=/usr/local/lib/softhsm/libsofthsm2.so -l --pin mypin -s -i input.txt
I get the following error message:
Using slot 0 with a present token (0x0)
error: Sign mechanism not supported
Aborting.
From what I understood reading the documentation, I should be able to sign with SoftHSM, but maybe I'm wrong?
Is there an option to add during compilation, or something else that would fix my problem? If not, are there other SSM products on the market?
You seem to have forgotten the -m (--mechanism) option. Read more here.
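For example (a sketch; pick a mechanism your token actually supports, which pkcs11-tool --module=/usr/local/lib/softhsm/libsofthsm2.so -M will list, and you may also need --id 01 to select the key generated above):
pkcs11-tool --module=/usr/local/lib/softhsm/libsofthsm2.so -l --pin mypin -s -m SHA256-RSA-PKCS --id 01 -i input.txt -o input.sig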

regular expression to extract data from html page

I want to extract all anchor tags from HTML pages. I am using this on Linux.
lynx --source http://www.imdb.com | egrep "<a[^>]*>"
but that is not working as expected, since the result contains unwanted matches like:
<a class="amazon-affiliate-site-name" href="http://www.fabric.com">Fabric</a><br>
I want just
<a href >...</a>
Any good way?
If you have a -P option in your grep so that it accepts PCRE patterns, you should be able to use better regexes. Sometimes a minimal quantifier like *? helps. Also, you’re getting the whole input line, not just the match itself; if you have a -o option to grep, it will list only the part that matches.
grep -Po '<a[^<>]*>'
If your grep doesn’t have those options, try
perl -00 -nle 'print $1 while /(<a[^<>]*>)/gi'
That version also crosses line boundaries.
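For instance, a tag broken across two lines still matches, because -00 reads paragraph-sized records and [^<>] happily matches a newline:
$ printf '<a\nhref="http://example.com">x</a>\n' | perl -00 -nle 'print $1 while /(<a[^<>]*>)/gi'
<a
href="http://example.com">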
To do a real parse of HTML requires regexes substantially more complex than you are apt to want to enter on the command line. Here’s one example, and here’s another. Those may not convince you to try a non-regex approach, but they should at least show you how much harder it is in the general case than in specific ones.
This answer shows why all things are possible, but not all are expedient.
Why can't you use options like --dump?
lynx --dump --listonly http://www.imdb.com
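A quick filter then leaves just the URLs (a sketch; --listonly prints a numbered References list, so the URL is the second field):
lynx --dump --listonly http://www.imdb.com | awk '/http/ {print $2}' | sort -u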
Try grep -Eo:
$ echo '<a class="amazon-affiliate-site-name" href="http://www.fabric.com">Fabric</a><br>' | grep -Eo '<a[^>]*>'
<a class="amazon-affiliate-site-name" href="http://www.fabric.com">
But please read the answer that MAK linked to.
Here are some examples of why you should not use regex to parse HTML.
To extract the values of the 'href' attribute of anchor tags, run:
$ python -c'import sys, lxml.html as h
> root = h.parse(sys.argv[1]).getroot()
> root.make_links_absolute(base_url=sys.argv[1])
> print "\n".join(root.xpath("//a/@href"))' http://imdb.com | sort -u
Install lxml module if needed: $ sudo apt-get install python-lxml.
Output
http://askville.amazon.com
http://idfilm.blogspot.com/2011/02/another-class.html
http://imdb.com
http://imdb.com/
http://imdb.com/a2z
http://imdb.com/a2z/
http://imdb.com/advertising/
http://imdb.com/boards/
http://imdb.com/chart/
http://imdb.com/chart/top
http://imdb.com/czone/
http://imdb.com/features/hdgallery
http://imdb.com/features/oscars/2011/
http://imdb.com/features/sundance/2011/
http://imdb.com/features/video/
http://imdb.com/features/video/browse/
http://imdb.com/features/video/trailers/
http://imdb.com/features/video/tv/
http://imdb.com/features/yearinreview/2010/
http://imdb.com/genre
http://imdb.com/help/
http://imdb.com/helpdesk/contact
http://imdb.com/help/show_article?conditions
http://imdb.com/help/show_article?rssavailable
http://imdb.com/jobs
http://imdb.com/lists
http://imdb.com/media/index/rg2392693248
http://imdb.com/media/rm3467688448/rg2392693248
http://imdb.com/media/rm3484465664/rg2392693248
http://imdb.com/media/rm3719346688/rg2392693248
http://imdb.com/mymovies/list
http://imdb.com/name/nm0000207/
http://imdb.com/name/nm0000234/
http://imdb.com/name/nm0000631/
http://imdb.com/name/nm0000982/
http://imdb.com/name/nm0001392/
http://imdb.com/name/nm0004716/
http://imdb.com/name/nm0531546/
http://imdb.com/name/nm0626362/
http://imdb.com/name/nm0742146/
http://imdb.com/name/nm0817980/
http://imdb.com/name/nm2059117/
http://imdb.com/news/
http://imdb.com/news/celebrity
http://imdb.com/news/movie
http://imdb.com/news/ni7650335/
http://imdb.com/news/ni7653135/
http://imdb.com/news/ni7654375/
http://imdb.com/news/ni7654598/
http://imdb.com/news/ni7654810/
http://imdb.com/news/ni7655320/
http://imdb.com/news/ni7656816/
http://imdb.com/news/ni7660987/
http://imdb.com/news/ni7662397/
http://imdb.com/news/ni7665028/
http://imdb.com/news/ni7668639/
http://imdb.com/news/ni7669396/
http://imdb.com/news/ni7676733/
http://imdb.com/news/ni7677253/
http://imdb.com/news/ni7677366/
http://imdb.com/news/ni7677639/
http://imdb.com/news/ni7677944/
http://imdb.com/news/ni7678014/
http://imdb.com/news/ni7678103/
http://imdb.com/news/ni7678225/
http://imdb.com/news/ns0000003/
http://imdb.com/news/ns0000018/
http://imdb.com/news/ns0000023/
http://imdb.com/news/ns0000031/
http://imdb.com/news/ns0000128/
http://imdb.com/news/ns0000136/
http://imdb.com/news/ns0000141/
http://imdb.com/news/ns0000195/
http://imdb.com/news/ns0000236/
http://imdb.com/news/ns0000344/
http://imdb.com/news/ns0000345/
http://imdb.com/news/ns0004913/
http://imdb.com/news/top
http://imdb.com/news/tv
http://imdb.com/nowplaying/
http://imdb.com/photo_galleries/new_photos/2010/
http://imdb.com/poll
http://imdb.com/privacy
http://imdb.com/register/login
http://imdb.com/register/?why=footer
http://imdb.com/register/?why=mymovies_footer
http://imdb.com/register/?why=personalize
http://imdb.com/rg/NAV_TWITTER/NAV_EXTRA/http://www.twitter.com/imdb
http://imdb.com/ri/TRAILERS_HPPIRATESVID/TOP_BUCKET/102785/video/imdb/vi161323033/
http://imdb.com/search
http://imdb.com/search/
http://imdb.com/search/name?birth_monthday=02-12
http://imdb.com/search/title?sort=num_votes,desc&title_type=feature&my_ratings=exclude
http://imdb.com/sections/dvd/
http://imdb.com/sections/horror/
http://imdb.com/sections/indie/
http://imdb.com/sections/tv/
http://imdb.com/showtimes/
http://imdb.com/tiger_redirect?FT_LIC&licensing/
http://imdb.com/title/tt0078748/
http://imdb.com/title/tt0279600/
http://imdb.com/title/tt0377981/
http://imdb.com/title/tt0881320/
http://imdb.com/title/tt0990407/
http://imdb.com/title/tt1034389/
http://imdb.com/title/tt1265990/
http://imdb.com/title/tt1401152/
http://imdb.com/title/tt1411238/
http://imdb.com/title/tt1411238/trivia
http://imdb.com/title/tt1446714/
http://imdb.com/title/tt1452628/
http://imdb.com/title/tt1464174/
http://imdb.com/title/tt1464540/
http://imdb.com/title/tt1477837/
http://imdb.com/title/tt1502404/
http://imdb.com/title/tt1504320/
http://imdb.com/title/tt1563069/
http://imdb.com/title/tt1564367/
http://imdb.com/title/tt1702443/
http://imdb.com/tvgrid/
http://m.imdb.com
http://pro.imdb.com/r/IMDbTabNB/
http://resume.imdb.com
http://resume.imdb.com/
https://secure.imdb.com/register/subscribe?c=a394d4442664f6f6475627
http://twitter.com/imdb
http://wireless.amazon.com
http://www.3news.co.nz/The-Hobbit-media-conference--full-video/tabid/312/articleID/198020/Default.aspx
http://www.amazon.com/exec/obidos/redirect-home/internetmoviedat
http://www.audible.com
http://www.boxofficemojo.com
http://www.dpreview.com
http://www.endless.com
http://www.fabric.com
http://www.imdb.com/board/bd0000089/threads/
http://www.imdb.com/licensing/
http://www.imdb.com/media/rm1037220352/rg261921280
http://www.imdb.com/media/rm2695346688/tt1449283
http://www.imdb.com/media/rm3987585536/tt1092026
http://www.imdb.com/name/nm0000092/
http://www.imdb.com/photo_galleries/new_photos/2010/index
http://www.imdb.com/search/title?sort=num_votes,desc&title_type=tv_series&my_ratings=exclude
http://www.imdb.com/sections/indie/
http://www.imdb.com/title/tt0079470/
http://www.imdb.com/title/tt0079470/quotes?qt0471997
http://www.imdb.com/title/tt1542852/
http://www.imdb.com/title/tt1606392/
http://www.imdb.de
http://www.imdb.es
http://www.imdb.fr
http://www.imdb.it
http://www.imdb.pt
http://www.movieline.com/2011/02/watch-jon-hamm-talk-butthole-surfers-paul-rudd-impersonate-jay-leno-at-book-reading-1.php
http://www.movingimagesource.us/articles/un-tv-20110210
http://www.npr.org/blogs/monkeysee/2011/02/10/133629395/james-franco-recites-byron-to-the-worlds-luckiest-middle-school-journalist
http://www.nytimes.com/2011/02/06/books/review/Brubach-t.html
http://www.shopbop.com/welcome
http://www.smallparts.com
http://www.twinpeaks20.com/details/
http://www.twitter.com/imdb
http://www.vanityfair.com/hollywood/features/2011/03/lauren-bacall-201103
http://www.warehousedeals.com
http://www.withoutabox.com
http://www.zappos.com
To extract the values of the 'href' attribute of anchor tags, you may also use xmlstarlet after converting HTML to XHTML using HTML Tidy (Mac OS X version released on 25 March 2009):
curl -s www.imdb.com |
tidy -q -c -wrap 0 -numeric -asxml -utf8 --merge-divs yes --merge-spans yes 2>/dev/null |
xmlstarlet sel -N x="http://www.w3.org/1999/xhtml" -t -m "//x:a/@href" -v '.' -n |
grep '^[[:space:]]*http://' | sort -u | nl
On Mac OS X you may also use the command line tool linkscraper:
linkscraper http://www.imdb.com
see: http://codesnippets.joyent.com/posts/show/10772

getting aliases from config by regular expression (badly separated)

I tried to get some aliases from a specific config file in a short bash script. The config looks like:
[other config]
[alias]
alias: command -o option
alias2: command -l 2
alias3: command -o option
[other config]
How would you get these aliases out separately? I would prefer output like this:
alias: command -o option
alias2: command -l 2
alias3: command -o option
I already tried some hacky approaches, like grabbing line numbers and so on...
Any ideas? That would be great!
You can do this using sed:
sed -n -e '/\[alias\]/,/\[.*\]/s/:/:/p'
This will print all lines between [alias] and the next line containing [ and ] that have a colon on them.
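Run against the sample config from the question (saved here as a file named config; the name is arbitrary), it prints exactly the alias lines:
$ sed -n -e '/\[alias\]/,/\[.*\]/s/:/:/p' config
alias: command -o option
alias2: command -l 2
alias3: command -o option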
Perl has a Config::GitLike module that may come in handy for parsing that git config file.
Looking at the documentation, you want to do:
#!perl -w
use strict;
use 5.010;
use Config::GitLike;
my $c = Config::GitLike->new(confname => 'config');
$c->load;
my $alias = $c->get( key => 'alias.alias' );
my $alias2 = $c->get( key => 'alias.alias2' );
my $alias3 = $c->get( key => 'alias.alias3' );
say "alias: $alias";
say "alias2: $alias2";
say "alias3: $alias3";