I have text files that contain 2 numbers separated by a '+' sign. I'm trying to figure out how to replace them with their currency equivalents.
Example Strings:
20+2 would be converted to $0.20+$0.02 USD
1379+121 would be $13.79+$1.21 USD
400+20 would be $4.00+$0.20 USD
and so on.
I have tried a few approaches but they do not work or provide odd results.
I tried to do it here by attempting to find all the patterns I think would come up:
.\Replace-FileString.ps1 "100+10" '$1.00+$0.10' $path1\*.txt -Overwrite
.\Replace-FileString.ps1 "1000+100" '$10.00+$1.00' $path1\*.txt -Overwrite
.\Replace-FileString.ps1 "300+30" '$3.00+$0.30' $path1\*.txt -Overwrite
.\Replace-FileString.ps1 "400+20" '$4.00+$0.20' $path1\*.txt -Overwrite
or this, which just doesn't work:
Select-String -Path .\*txt -Pattern '[0-9][0-9]?[0-9]?[0-9]?[0-9]?\+[0-9][0-9]?[0-9]?[0-9]?[0-9]?' | ForEach-Object {$_ -replace ", ", $"} {$_ -replace "+", "+$"}
I tried to do it here by attempting to find all the patterns I think would come up
Don't try this - we're humans, and we won't think of all edge cases, and even if we did, the amount of code we'd need to write (or generate) would be ridiculous.
We need a more general solution here, and regex might indeed be helpful with this.
The pattern you describe could be expressed as three distinct parts:
1 or more consecutive digits
1 plus sign (+)
1 or more consecutive digits
With this in mind, let's start putting together the regex pattern to use:
\b\d+\+\d+\b
or, written out with explanations:
\b # a word boundary
\d+ # 1 or more digits
\+ # 1 literal plus sign
\d+ # 1 or more digits
\b # a word boundary
Now, in order to transform an absolute value of cents into dollars, we'll need to capture the digits on either side of the +, so let's add capture groups:
\b(\d+)\+(\d+)\b
Now, in order to do anything interesting with the captured groups, we can utilize the Regex.Replace() method - it can take a scriptblock as its substitution argument:
$InputString = '1000+10'
$RegexPattern = '\b(\d+)\+(\d+)\b'
$Substitution = {
    param($Match)
    $Results = foreach($Amount in $Match.Groups[1,2].Value){
        $Dollars = [Math]::Floor(($Amount / 100))
        $Cents = $Amount % 100
        '${0:0}.{1:00}' -f $Dollars,$Cents
    }
    return $Results -join '+'
}
In the scriptblock above, we expect the two capture groups ($Match.Groups[1,2]), calculate the amount of dollars and cents, and then finally use the -f string format operator to make sure that the cents value is always two digits wide.
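To illustrate just the format string (a quick sketch with made-up dollar and cent values):
'${0:0}.{1:00}' -f 13, 5
# $13.05
'${0:0}.{1:00}' -f 0, 2
# $0.02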
To do the substitution, invoke the Replace() method:
[regex]::Replace($InputString,$RegexPattern,$Substitution)
And there you go!
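As a quick check against the sample strings from the question (a sketch reusing the $RegexPattern and $Substitution defined above):
'20+2', '1379+121', '400+20' | ForEach-Object {
    [regex]::Replace($_, $RegexPattern, $Substitution)
}
# $0.20+$0.02
# $13.79+$1.21
# $4.00+$0.20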
Applying it to a bunch of files is as easy as:
$RegexPattern = '\b(\d+)\+(\d+)\b'
$Substitution = {
    param($Match)
    $Results = foreach($Amount in $Match.Groups[1,2].Value){
        $Dollars = [Math]::Floor(($Amount / 100))
        $Cents = $Amount % 100
        '${0:0}.{1:00}' -f $Dollars,$Cents
    }
    return $Results -join '+'
}

foreach($file in Get-ChildItem $path *.txt){
    $Lines = Get-Content $file.FullName
    $Lines | ForEach-Object {
        [regex]::Replace($_, $RegexPattern, $Substitution)
    } | Set-Content $file.FullName
}
This regular expression works too:
\b\d{3,4}(?=\+)|\d{2,3}(?=\")
https://regex101.com/
Do you want something like this output?
$20+$2 would be converted to $0.20+$0.02 USD
$1379+$121 would be $13.79+$1.21 USD
$400+$20 would be $4.00+$0.20 USD
Then you may try this command in PowerShell.
(gc test.txt) -replace '\b(\d+)\+(\d+)\b','$$$1+$$$2' | sc test.txt
gc, sc : aliases for the Get-Content and Set-Content commands, respectively
\b(\d+)\+(\d+)\b : match the target string (numbers+numbers) and capturing numbers to $1, $2 in order
$$ : $ must be escaped to indicate a literal $ dollar character (what you want to place in front of the numbers)
$1, $2 : back-reference to the captured value
test.txt : contains your sample text
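For example, against one of the sample values (a quick check):
'20+2' -replace '\b(\d+)\+(\d+)\b','$$$1+$$$2'
# $20+$2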
Of course, this is applicable to multiple files as follows:
gci '*.txt' -recurse | foreach-object{ (gc $_) -replace '\b(\d+)\+(\d+)\b','$$$1+$$$2' | sc $_ }
gci : alias for the Get-ChildItem command. By default, it returns the list of items in the present directory. If you want to change the directory, you must use the -path and -include options.
-recurse option : enables searching of sub-directories
Edited
If you want to capture the values, divide them, and replace the old values with new ones as follows
$0.2+$0.02 would be converted to $0.20+$0.02 USD
$13.79+$1.21 would be $13.79+$1.21 USD
$4+$0.2 would be $4.00+$0.20 USD
then, you may try this.
gci *.txt -recurse | % {(gc $_) | % { $_ -match "\b(\d+)\+(\d+)\b" > $null; $num1=[int]$matches[1]/100; $num2=[int]$matches[2]/100; $dol='$$'; $_ -replace "\b(\d+)\+(\d+)\b","$dol$num1+$dol$num2"}|sc $_}
This command searches files in the present directory and sub-directories. If you don't want to search sub-directories, remove the -recurse option. And if you want another path, use the -path and -include options as follows.
gci -path "your_path" -include *.txt | % {(gc $_) ...
Other solutions seem excessively complicated, first turning the string to values and then back to strings. Looking at the examples, it is just chopping up a string and re-assembling it while ensuring that the different parts (dollars and cents) have the correct lengths:
('20+2','1379+121','400+20') -replace
'(\d+)\+(\d+)','00$1+00$2' -replace
'0*(\d+)(\d\d)\+0*(\d+)(\d\d)','$$$1.$2+$$$3.$4 USD'
$0.20+$0.02 USD
$13.79+$1.21 USD
$4.00+$0.20 USD
Explanation:
Substitute all the + separated cent values with 0 padded values so there is a minimum of three digits, i.e. at least one digit in the dollars and exactly 2 for the cents.
Collect the individual dollars and cents for each value into distinct capture groups while simultaneously discarding any extraneous leading zeroes.
Re-substitute the (just padded) strings with the appropriately formatted versions.
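To see the first, padding substitution on its own (a sketch applying only the first -replace):
('20+2','1379+121','400+20') -replace '(\d+)\+(\d+)','00$1+00$2'
# 0020+002
# 001379+00121
# 00400+0020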
It is interesting to note how the second substitution relies on the greedy nature of *. The 0* will match just as many leading zeroes as will still leave enough for the remainder of the pattern.
You can put the word boundary anchor (\b) at one or both ends of the patterns if parts of a line have digits separated by + directly adjacent to other text and you want them NOT to be processed; otherwise it is unnecessary.
Note: the example above shows an array of String as input and produces an array of String (each element displayed on a separate line). When -Replace is applied to an array, it enumerates the array, applies the replace to each element and collects each (possibly replaced) element into a result array. The output of Get-Content is an array of String (enumerated by PowerShell when fed to a pipeline). Similarly, the 'input' to Set-Content is an array of String (possibly converted from a general Object[] and/or collected from pipeline input). Thus, to convert a file just use:
(gc somefile) -replace ... -replace ... | sc newfile
# or even
sc newfile ((gc somefile) -replace ... -replace ...)
# Set-Content [-Path] String[] [-Value] Object[]
In the above, newfile and somefile can be the same due to a nice feature of Set-Content whereby it does not even open/create its output file(s) until it has something to write. Thus,
@() | sc existingfile
does not destroy existingfile. Note, however, that
sc existingfile @()
does destroy existingfile. This is because the first example sends nothing to Set-Content while the second example gives Set-Content something (an empty array). Since the output from Get-Content is collected into an (anonymous) array before -Replace is applied, there is no conflict between Get-Content and Set-Content over accessing the same file. The functionally equivalent version
gc somefile | foreach { $_ -replace ... -replace ... } | sc newfile
does not work if newfile is somefile since Set-Content receives each (possibly substituted) line from Get-Content before the next one is read meaning Set-Content can't open the file because Get-Content still has it open.
This is a separate answer because it doesn't explain how to achieve the desired result (already did that) but explains why the listed attempts do not work (an educational motive).
If you're using Replace-FileString.ps1 from GitHub then not only are the examples not a general solution, it won't work as listed above because Replace-FileString.ps1 uses the Replace method of a [regex] object so "400+20" matches "40" then 1 or more "0" then "20". Similarly for other attempts. Note, no "+" is matched in the patterns so all fail (unless you have lines like "40020+125" which matches on the 40020). Just as well, the replacement includes the capture group specifier "$0" (as part of '$1.00+$0.10') and other specifiers. There are no capture groups specified in the pattern so all the group specifiers would be taken literally, except "$0" being the entire match (if found). Thus, "40020+125" would be replaced by substituting '$4.00+$0.20' giving "$4.00+40020.20" ($4='$4' and $0='40020'). Probably, no matches are found. Result -> files not changed. (Phew!)
As for the Select-String attempt, Select-String would probably have matched the required data since the pattern matched up to 5 digits on either side of a +. This would send the matching lines (ignoring the rest, if any) into the ForEach-Object as [Microsoft.PowerShell.Commands.MatchInfo] objects (not strings). (Aside: this is a common mistake by a lot of PowerShell, um, novices. They assume that what they see on the screen is the same as what is churning about inside PowerShell. This is far from the truth and probably leads to most of the confusion amongst new users. PowerShell processes entire objects and typically displays only a summary of the most useful bits.) Anyway, I am unsure what the ForEach-Object is trying to achieve, not least due to the apparent typo. There is at least one " missing in the first script block and possibly a comma also. The best I can interpret it is
{ $_ -replace ", ",", $" }
i.e. change every ", " into ", $". This assumes that the strings to be substituted are all preceded by ", ". Note: lone $ is not an error because it cannot be interpreted as a variable substitution (no following name or {) or capture reference (no following group specifier [0-9`+'_&]). The next script block is clearer, change every "+" into "+$". Unfortunately, again, the first string is interpreted as a regular expression and, unlike the lone $, a lone + here is an error. It needs to be escaped with \. However, even with these errors corrected, there are two big problems:
The default output from Select-String is a collection of [MatchInfo] objects which when (implicitly) converted to String for use as the LHS of -replace include the file name and line number, thereby corrupting the lines from the file. To use just the line itself, specify $_.Line.
A completely incorrect usage of the scriptblock parameters to ForEach-Object. While it would seem that the intent was to perform two replace operations, placing them in individual scriptblocks is an error. Even if it worked, it would output 2 separate partial replacements instead of one completed replacement since $_ is not updated between the two expressions. ($_ is writable!)
ForEach-Object has 3 basic scriptblock groups, 1 -Begin block, 1 -End block and all the rest collectively as the -Process blocks. (The -Parallel block is not relevant here.) The documentation mentions a group called -RemainingScripts but this is actually just an implementation construct to allow the -Process scriptblocks to be specified as individual parameters rather than collected into an array (similar to parameter arrays in C# and VB). I suspect this was done so that users could simply drop the parameter names (-Begin, -Process and -End) and treat the scriptblocks as if they were positional parameters even though, strictly speaking, only -Process is positional and expects an array of scriptblocks (i.e. separated by commas). The introduction of -RemainingScripts in PS3.0 (with attribute ValueFromRemainingArguments so it behaves like a parameter array) was probably done to tidy up what might have been a nasty kludge to get the user friendly behaviour prior to PS3.0. Or maybe it was just formalising what was already going on.
Anyway, back on topic. By specifying multiple scriptblocks, the first is treated as -Begin and, if there are more than 2, the last is treated as -End. Thus, for two scriptblocks, the first is -Begin and the other is -Process. Therefore, even if the first scriptblock were syntactically correct, it would only run once and then still do nothing since $_ is not assigned (=$null) in -Begin. The correct way would be to place both replacements, joined into a single expression, in one scriptblock:
{ $_.Line -replace ", ",", $" -replace "\+","+$" }
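Put back into the original pipeline, that would look roughly like this (a sketch only):
Select-String -Path .\*.txt -Pattern '[0-9]+\+[0-9]+' |
    ForEach-Object { $_.Line -replace ", ",", $" -replace "\+","+$" }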
Of course, this is just describing how to get it to "work". It is not the correct solution to the problem in the original post (see other answer).
Related
I have a command that outputs text in the following format:
misc1=poiuyt
var1=qwerty
var2=asdfgh
var3=zxcvbn
misc2=lkjhgf
etc. I need to get the values for var1, var2, and var3 into variables in a perl script.
If I were writing a shell script, I'd do this:
OUTPUT=$(command | grep '^var-')
VAR1=$(echo "${OUTPUT}" | sed -ne 's/^var1=\(.*\)$/\1/p')
VAR2=$(echo "${OUTPUT}" | sed -ne 's/^var2=\(.*\)$/\1/p')
VAR3=$(echo "${OUTPUT}" | sed -ne 's/^var3=\(.*\)$/\1/p')
That populates OUTPUT with the basic content that I want (so I don't have to run the original command multiple times), and then I can pull out each value using sed, so VAR1 is 'qwerty', etc.
I've worked with perl in the past, but I'm pretty rusty. Here's the best I've been able to come up with:
my $output = `command | grep '^var'`;
(my $var1 = $output) =~ s/\bvar1=(.*)\b/$1/m;
print $var1
This correctly matches and references the value for var1, but it also returns the unmatched lines, so $var1 equals this:
qwerty
var2=asdfgh
var3=zxcvbn
With sed I'm able to tell it to print only the modified lines. Is there a way to do something similar in perl? I can't find the equivalent of sed's p modifier in perl.
Conversely, is there a better way to extract those substrings from each line? I'm sure I could match each line and split the contents or something like that, but I was trying to stick with regex since that's how I'd typically solve this outside of perl.
Appreciate any guidance. I'm sure I'm missing something relatively simple.
One way
my @values = map { /\bvar(?:1|2|3)\s*=\s*(.*)/ ? $1 : () } qx(command);
The qx operator ("backticks") returns a list of all lines of output when used in list context, here imposed by map. (In a scalar context it returns all output in a string, possibly multiline.) Then map extracts wanted values: the ternary operator in it returns the capture, or an empty list when there is no match (so filtering out such lines). Please adjust the regex as suitable.
Or one can break this up, taking all output, then filtering needed lines, then parsing them. That allows for more nuanced, staged processing. And then there are libraries for managing external commands that make more involved work much nicer.
A comment on the Perl attempt shown in the question
Since the backtick output is assigned to a scalar it is in scalar context and thus returns all output in one string, here multiline. Then the following regex, which replaces var1=(.*) with $1, leaves the next two lines since . does not match a newline, so .* stops at the first newline character.
So you'd need to amend that regex to match all the rest so to replace it all with the capture $1. But then for other variables the pattern would have to be different. Or, could replace the input string with all three var-values, but then you'd have a string with those three values in it.
So altogether: using the substitution here (s///) isn't suitable -- just use matching, m//.
Since in list context the match operator also returns all matches another way is
my @values = qx(command) =~ /\bvar(?:1|2|3)\s*=\s*(.*)/g;
Now being bound to a regex, qx is in scalar context and so it returns a (here multiline) string, which is then matched by regex. With /g modifier the pattern keeps being matched through that string, capturing all wanted values (and nothing else). The fact that . doesn't match a newline so .* stops at the first newline character is now useful.
Again, please adjust the regex as suitable to your real problem.
Another need came up, to capture both the actual names of variables and their values. Then add capturing parens around names, and assign to a hash
my %val = map { /\b(var(?:1|2|3))\s*=\s*(.*)/ ? ($1, $2) : () } qx(command);
or
my %val = qx(command) =~ /\b(var(?:1|2|3))\s*=\s*(.*)/g;
Now the map for each line of output from command returns a pair of var-name + value, and a list of such pairs can be assigned to a hash. The same goes with subsequent matches (under /g) in the second case.
In scalar context, s/// and s///g return the number of substitutions made (false if there were none). So you can use
print $s if $s =~ s///;
I have a folder with a bunch of filenames I'd like to change. I'd like to insert a string after the 7th character.
Current filename:
123456_9999999999999_DocName.pdf
234567_9999999999999_DocName.pdf
345678_9999999999999_DocName.pdf
456789_9999999999999_DocName.pdf
Desired outcome:
123456_DocType_9999999999999_DocName.pdf
234567_DocType_9999999999999_DocName.pdf
345678_DocType_9999999999999_DocName.pdf
456789_DocType_9999999999999_DocName.pdf
I've tried the following I've pieced together, but it only renames one file and causes an error.
$location = 'C:\Users\username\Desktop\TEST'
Get-ChildItem -Path $location -Recurse |
Rename-Item -NewName{$_.name -replace '(.*?)_(.*)', '_DocType_'}
How can I insert this string to multiple filenames after the 7th character please?
Thanks for your help.
Note: The commands below focus just on the -replace operation; simply substituting the RHS of one of these operations for the RHS of the -replace operation in the code in the question is enough - although adding the -File switch is advisable to ensure that no accidental attempt to rename directories is made.[1]
To insert literally after the 7th character:
PS> '123456_9999999999999_DocName.pdf' -replace '(?<=^.{7})', 'DocType_'
123456_DocType_9999999999999_DocName.pdf
Note: As filimonic's helpful answer shows, you don't need a regex in this simple case; using the [string] type's .Insert() method is both simpler and faster.
More flexibly, to insert after the first _ in the name:
PS> '123456_9999999999999_DocName.pdf' -replace '(?<=^[^_]+_)', 'DocType_'
123456_DocType_9999999999999_DocName.pdf
Note:
(?<=...) is a positive look-behind assertion that matches the position of the subexpression represented by ... here - from the beginning (^), either the first 7 characters (.{7}), or a nonempty run (+) of characters other than _ ([^_]) followed by a _.
The replacement string ('DocType_') is therefore inserted into (a copy of) the original string at that position.
The look-behind approach obviates the need to refer to part of what was captured by the regex in the replacement operand, such as the use of $& in Wiktor's answer.
Note: While this approach is convenient and works well here, it has its pitfalls, because an anchored-to-the-start (^) subexpression inside a look-behind assertion doesn't always behave the same as outside of one - see this post.
See this answer for an overview of PowerShell's -replace operator.
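Substituted into the code from the question, with the -File switch mentioned above (a sketch; same $location as in the question):
$location = 'C:\Users\username\Desktop\TEST'
Get-ChildItem -Path $location -Recurse -File |
    Rename-Item -NewName { $_.Name -replace '(?<=^.{7})', 'DocType_' }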
[1] If the -replace operation turns out to be a no-op (if the regex doesn't match, the input string is returned as-is), trying to rename a directory to its existing name will actually generate an error, unlike with files - see GitHub issue #14903.
'123456_9999999999999_DocName.pdf'.Insert(7,'DocType_')
No need for any regular expressions. Works about twice as fast. Self-descriptive.
You can use
$location = 'C:\Users\username\Desktop\TEST'
Get-ChildItem -Path $location -Recurse |
Rename-Item -NewName{$_.name -replace '^.{6}', '$&_DocType'}
See the regex demo.
Details:
^ - start of a string
.{6} - any 6 chars other than LF char.
$& in the replacement pattern is the reference to the whole match.
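A quick check of that replacement against one of the sample names:
'123456_9999999999999_DocName.pdf' -replace '^.{6}', '$&_DocType'
# 123456_DocType_9999999999999_DocName.pdf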
All answers submitted worked for me.
The final code I ended up using is:
Get-ChildItem -Path $location -Recurse | Rename-Item -NewName{$_.name.Insert(7,'DocType_')}
For me regex is a bit confusing and this code will make it easier for me and others on my team who may not use regex to support the code moving forward. I upvoted all answers as they work. Thanks to all who took time to help me and others who may have the same question.
I have been trying to extract values from multiple lines inside a .txt file with PowerShell: from the lines where a certain field is equal to 40, I want to get the hostname field (index 6 when the line is split on commas).
The code I have so far:
$file = Get-Content 'c:\temp\file.txt'
$Array = @()
foreach ($line in $file)
{
    $Array += $line.split(",")[6]
}
$Array
$Array | sc "c:\temp\export2.txt"
Txt file (there may be duplicate lines, such as hostname01):
4626898,0,3,0,POL,INCR,hostname01,xx,1549429809,0000000507,1549430316,xxx,0,40,1,xxxx,51870834,5040,100
4626898,0,3,0,POL,INCR,hostname02,xx,1549429809,0000000507,1549430316,xxx,0,15,1,xxxx,51870834,5040,100
4626898,0,3,0,POL,INCR,hostname03 developer host,xx,1549429809,0000000507,1549430316,xxx,0,40,1,xxxx,51870834,5040,100
4626898,0,3,0,POL,INCR,hostname01,xx,1549429809,0000000507,1549430316,xxx,0,40,1,xxxx,51870834,5040,100
This is what I want :
hostname01
hostname02
hostname03 developer host
This is not a fast solution, but a convenient and flexible one:
Since your text file is effectively a CSV file, you can use Import-Csv.
Since your data is missing a header row (column names), we can supply one to Import-Csv via its -Header parameter.
Since you're interested in columns number 7 (hostnames) and 14 (the number whose value should be 40), we need to supply column names (of our choice) for columns 1 through 14.
Import-Csv conveniently converts the CSV rows to (custom) objects, whose properties you can query with Where-Object and selectively extract with Select-Object; adding -Unique suppresses duplicate values.
To put it all together:
Import-Csv c:\temp\file.txt -Header (1..14) |
Where-Object 14 -eq 40 |
Select-Object -ExpandProperty 7 -Unique
For convenience we've named the columns 1, 2, ... using a range expression (1..14), but you're free to use descriptive names.
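For instance, giving descriptive (made-up) names to the two columns that matter here, with placeholder names for the rest (a sketch):
$columns = 'Col1','Col2','Col3','Col4','Policy','Type','HostName',
           'Col8','Col9','Col10','Col11','Col12','Col13','Status'
Import-Csv c:\temp\file.txt -Header $columns |
    Where-Object Status -eq 40 |
    Select-Object -ExpandProperty HostName -Unique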
Assuming that c:\temp\file.txt contains your sample data, the above yields:
hostname01
hostname03 developer host
To output to a file, pipe the above to Set-Content, as in your question:
... | Set-Content c:\temp\export2.txt
If the desired field is always the seventh in the line (index 6), it is easier to split each line and fetch that member:
Get-Content 'c:\temp\file.txt' | Foreach-Object {($_ -split ',')[6]} | Select-Object -Unique
You could use a non-capturing group to look through the string for the correct format and reference the hostname element with the 1st capture group $1:
(?:\d+,\d,\d,\d,[A-Z]+,[A-Z]+,)([a-zA-Z 0-9]+)
Demo here
(?: ) - Specifies a non-capture group (meaning it's not referenced via $1 or $2 like you normally would with a capture group).
\d+, - (I won't repeat all of these, but) looks for one or more digits followed by a literal ,.
[A-Z]+, - Finds an all capital letter string, followed by a literal , (this occurs twice).
([a-zA-Z 0-9]+) - The capture group you're looking for, $1, that will capture all characters a-z, A-Z, spaces, and digits up until a character not in this set (in this case, a comma). Giving you the text you're looking for.
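In PowerShell, that regex could be applied roughly like this (a sketch relying on the automatic $Matches variable):
Get-Content 'c:\temp\file.txt' | ForEach-Object {
    if ($_ -match '(?:\d+,\d,\d,\d,[A-Z]+,[A-Z]+,)([a-zA-Z 0-9]+)') { $Matches[1] }
} | Select-Object -Unique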
The below should work for what you are trying to do:
Get-Content 'c:\temp\file.txt' | % {
    $_.Split(',')[6]
} | select -Unique
I'm trying to split some text using PowerShell, and I'm doing a little experimenting with regex, and I would like to know exactly what the "|" character does in a PowerShell regex. For example, I have the following line of code:
"[02]: ./media/active-directory-dotnet-how-to-use-access-control/acs-01.png" | select-string '\[\d+\]:' | foreach-object {($_ -split '\[|\]')}
Running this line of code gives me the following output:
-blank line-
02
: ./media/active-directory-dotnet-how-to-use-access-control/acs-01.png
If I run the code without the "|" in the -split statement as such:
"[02]: ./media/active-directory-dotnet-how-to-use-access-control/acs-01.png" | select-string '\[\d+\]:' | foreach-object {($_ -split '\[\]')}
I get the following output without the [] being stripped (essentially it's just displaying the select-string output):
[02]: ./media/active-directory-dotnet-how-to-use-access-control/acs-01.png
If I modify the code and run it like this:
"[02]: ./media/active-directory-dotnet-how-to-use-access-control/acs-01.png" | select-string '\[\d+\]:' | foreach-object {($_ -split '\[|')}
In the output, the [ is stripped from the beginning but the output has a carriage return after each character (I did not include the full output for space purposes).
0
2
]
:
.
/
m
e
The Pipe character, "|", separates alternatives in regex.
You can see all the metacharacters defined here:
http://regexlib.com/CheatSheet.aspx?AspxAutoDetectCookieSupport=1
The answers already explain what the | is for but I would like to explain what is happening with each example that you have above.
-split '\[|\]': You are trying to match either [ or ], which is why you get 3 results. The first is a blank line, which is the empty string before the first [.
-split '\[\]': Since you are omitting the | symbol in this example, you are requesting to split on the character sequence [], which does not appear in your string. This is contrasted by the string method $_.Split('\[\]'), which would split on each individual character in that argument (\, [ and ]). This is by design.
-split '\[|': Here you are running into a caveat of not specifying the right hand operand for the | operator. To quote the help from Regex101 when this regex is specified:
(null, matches any position)
Warning: An empty alternative effectively truncates the regex at this
point because it will always find a zero-width match
Which is why the last example splits between every character. Also, I don't think any of this is PowerShell-only; this behavior should be seen in other regex engines as well.
Walter Mitty is correct, | is for alternation.
You can also use [Regex]::Escape("string") in Powershell and it will return a string that has all the special characters escaped. So you can use that on any strings you want to match literally (or to determine if a specific character does or can have special meaning in a regex).
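For example (a quick illustration with a made-up string):
[Regex]::Escape('[02]: a+b|c')
# \[02]:\ a\+b\|c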
I have JPEG files in a folder named sequentially: table1.jpg, table2.jpg, table3.jpg, ..., table11.jpg
I need to do some processing on them, in sequential order, but a Get-ChildItem will return this (even with Sort):
table1.jpg
table10.jpg
table11.jpg
table2.jpg
table3.jpg
...
I would like to add a 0 to get them in the correct order:
table01.jpg
table02.jpg
table03.jpg
...
table11.jpg
I tried using RegEx with this command to preview the new names:
Get-ChildItem | % { $_.Name -replace "(.*[^0-9])([0-9]{1}\.jpg)",'$10$2' }
The idea is to get the first part of the name without any digit in first region, and one digit plus the extension in the second region. This will exclude files with two digits. And then I can replace by: first region + 0 + second region.
My problem is that the "0" is read as part of the backreference, $10, instead of $1 followed by a literal "0"; I don't know how to express it.
What would be the correct syntax? I tried "$10$2", "$1\0$2", "$1`0$2", '$1"0"$2'
You can use named groups:
$filename -replace "(?<first>.*[^0-9])(?<second>[0-9]{1}\.jpg)", '${first}0${second}'
The correct syntax for indicating a numbered backreference without ambiguity is ${1}.
Here's a corrected version of your statement:
Get-ChildItem | % { $_.Name -replace "(.*\D)(\d\.jpg)",'${1}0$2' }
Note: I also used \D, which is the built-in character class for [^0-9], and \d in place of [0-9].
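A quick preview of the corrected replacement against a few sample names (two-digit names are left untouched, as intended):
'table1.jpg','table10.jpg','table2.jpg' -replace '(.*\D)(\d\.jpg)','${1}0$2'
# table01.jpg
# table10.jpg
# table02.jpg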
'"$1"0$2'
This should do it.