I am pulling a string from a text file that looks like:
C:\Users\users\Documents\Firefox\tools\Install.ps1:37: Url = "https://somewebsite.com"
I need to somehow remove everything except the URL, so it should look like:
https://somewebsite.com
Here is what I have tried:
$Urlselect = Select-String -Path "$zipPath\tools\chocolateyInstall.ps1" -Pattern "url","Url" -List # Selects URL download path
$Urlselect = $Urlselect -replace ".*" ","" -replace ""*.","" # remove everything but the download link
but this didn't seem to do anything. I am thinking that it's going to have to do with regex, but I am not sure how to put it. Any help is appreciated. Thanks
I suggest using the switch statement with the -Regex and -File options:
$url = switch -regex -file "$zipPath\tools\chocolateyInstall.ps1" {
' Url = "(.*?)"' { $Matches[1]; break }
}
-file makes switch loop over all lines of the specified file.
-regex interprets the branch conditionals as regular expressions, and the automatic $Matches variable can be used in the associated script block ({ ... }) to access the results of the match, notably, what the 1st (and only) capture group in the regex ((...)) captured - the URL of interest.
break stops processing once the 1st match is found. (To continue matching, use continue).
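To see what $Matches holds for a single matching line, you can test the same regex with the -match operator at the prompt (the sample line is abbreviated from the question):
PS> 'C:\...\Install.ps1:37: Url = "https://somewebsite.com"' -match ' Url = "(.*?)"'
True
PS> $Matches[1]
https://somewebsite.com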
If you do want to use Select-String:
$url = Select-String -List ' Url = "(.*?)"' "$zipPath\tools\chocolateyInstall.ps1" |
ForEach-Object { $_.Matches.Groups[1].Value }
Note that the switch solution will perform much better.
As for what you tried:
Select-String -Path "$zipPath\tools\chocolateyInstall.ps1" -Pattern "url","Url"
Select-String is case-insensitive by default, so there's no need to specify case variations of the same string. (Conversely, you must use the -CaseSensitive switch to force case-sensitive matching).
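For instance, the first of the following calls finds a match and the second doesn't, solely because of -CaseSensitive (the input string is made up):
PS> 'URL = "https://somewebsite.com"' | Select-String -Pattern 'url'
URL = "https://somewebsite.com"
PS> 'URL = "https://somewebsite.com"' | Select-String -Pattern 'url' -CaseSensitive
# no output - 'url' does not match 'URL' case-sensitively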
Also note that Select-String doesn't output the matching line directly, as a string, but as a match-information object; to get the matching line, access the .Line property[1].
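For example, using the path from the question (-List limits the result to the first match; the output shown assumes the file contains a line like the one at the top):
PS> (Select-String -Path "$zipPath\tools\chocolateyInstall.ps1" -Pattern 'Url' -List).Line
Url = "https://somewebsite.com"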
$Urlselect -replace ".*" ","" -replace ""*.",""
".*" " and ""*." result in syntax errors, because you forgot to escape the _embedded " as `".
Alternatively, use '...' (single-quoted literal strings), which allows you to embed " as-is and is generally preferable for regexes and replacement operands, because there's no confusion over what parts PowerShell may interpret up front (string expansion).
Even with the escaping problem solved, however, your -replace operations wouldn't have worked, because .*" matches greedily and therefore up to the last "; here's a corrected solution with non-greedy matching, and with the replacement operand omitted (which makes it default to the empty string):
PS> 'C:\...ps1:37: Url = "https://somewebsite.com"' -replace '^.*?"' -replace '"$'
https://somewebsite.com
^.*?" non-greedily replaces everything up to the first ".
"$ replaces a " at the end of the string.
However, you can do it with a single -replace operation, using the same regex as with the switch solution at the top:
PS> 'C:\...ps1:37: Url = "https://somewebsite.com"' -replace '^.*?"(.*?)"', '$1'
https://somewebsite.com
$1 in the replacement operand refers to what the 1st capture group ((...)) captured, i.e. the bare URL; for more information, see this answer.
[1] Note that there's a green-lit feature suggestion - not yet implemented as of PowerShell Core 6.2.0 - to allow Select-String to emit strings directly, using the proposed -Raw switch - see https://github.com/PowerShell/PowerShell/issues/7713
I have a folder full of text documents in .adoc format that have some text in them. The text is following: link:lalala.html[lalala]. I want to replace this text with xref:lalala.adoc[lalala]. So, basically, just replace link: with xref:, .html with .adoc, leave all the rest unchanged.
But the problem is that lalala can be anything from a word to ../topics/halva.html.
I definitely know that I need to use regex patterns, I previously used similar script. A replace directive wrapped in an object:
Get-ChildItem -Path *.adoc -file -recurse | ForEach-Object {
$lines = Get-Content -Path $PSItem.FullName -Encoding UTF8 -Raw
$patterns = @{
'(\[\.dfn \.term])#(.*?)#' = '$1_$2_' ;
}
$option = [System.Text.RegularExpressions.RegexOptions]::Singleline
foreach($k in $patterns.Keys){
$pat = [regex]::new($k, $option)
$lines = $pat.Replace($lines, $patterns.$k)
}
$lines | Set-Content -Path $PSItem.FullName -Encoding UTF8 -Force
}
Looks like I need a different script since the new task cannot be added as just another object. I could've just replaced each part separately, using two objects: replace link: with xref:, then replace .html with .adoc.
But this can interfere with other links that end with .html and don't start with link:. In the text, absolute links usually don't have link: in the beginning. They always start with http:// or https://. And they still may or may not end with .html. So the best idea is to take the whole string link:lalala.html[lalala] and try to replace it with xref:lalala.adoc[lalala].
I need the help of someone who knows regex and PowerShell; this would really save me.
As a pattern, you might use
\blink:(.+?)\.html(?=\[[^][]*])
\blink: Match link:
(.+?) Capture 1+ chars, as few as possible, in group 1
\.html match .html
(?=\[[^][]*]) Assert an opening square bracket, any chars other than square brackets, and a closing square bracket directly to the right
In the replacement use group 1 using $1
xref:$1.adoc
Example
$Strings = @("link:lalala.html[lalala]", "link:../topics/halva.html[../topics/halva.html]")
$Strings -replace "\blink:(.+?)\.html(?=\[[^][]*])",'xref:$1.adoc'
Output
xref:lalala.adoc[lalala]
xref:../topics/halva.adoc[../topics/halva.html]
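If you want to apply this replacement in place across your .adoc files, here is a minimal sketch that reuses the Get-ChildItem loop from the question (paths and UTF-8 encoding are carried over from that script; adjust as needed):
Get-ChildItem -Path *.adoc -File -Recurse | ForEach-Object {
    # Read the whole file as one string, rewrite link:...html[...] to xref:...adoc[...], write it back
    $text = Get-Content -Path $PSItem.FullName -Encoding UTF8 -Raw
    $text = $text -replace '\blink:(.+?)\.html(?=\[[^][]*])', 'xref:$1.adoc'
    $text | Set-Content -Path $PSItem.FullName -Encoding UTF8 -Force
}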
I have a list of files that contain either of the two strings:
"stuff" or ";stuff"
I'm trying to write a PowerShell Script that will return only the files that contain "stuff". The script below currently returns all the files because obviously "stuff" is a substring of ";stuff"
For the life of me, I cannot figure out how to match only the files that contain "stuff" without a preceding ;
Get-Content "C:\temp\list\list.txt" |
Where-Object { Select-String -Quiet -Pattern "stuff" -SimpleMatch $_ }
Note: C:\temp\list\list.txt contains a list of file paths that are each passed to Select-String.
Thanks for the help.
You cannot perform the desired matching with literal substring searches (-SimpleMatch).
Instead, use a regex with a negative look-behind assertion ((?<!..)) to rule out stuff substrings preceded by a ; char.: (?<!;)stuff
Applied to your command:
Get-Content "C:\temp\list\list.txt" |
Where-Object { Select-String -Quiet -Pattern '(?<!;)stuff' -LiteralPath $_ }
Regex pitfalls:
It is tempting to use [^;]stuff instead, using a negated (^) character set ([...]) (see this answer); however, this will not work as expected if stuff appears at the very start of a line, because a character set - whether negated or not - only matches an actual character, not the start-of-the-line position.
It is then tempting to apply ? to the negated character set (for an optional match - 0 or 1 occurrence): [^;]?stuff. However, that would match a string containing ;stuff again, given that stuff is technically preceded by a "0-repeat occurrence" of the negated character set; thus, ';stuff' -match '[^;]?stuff' yields $true.
Only a look-behind assertion works properly in this case - see regular-expressions.info.
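A quick check at the prompt shows the look-behind doing what's needed (sample strings are made up):
PS> 'contains stuff' -match '(?<!;)stuff'
True
PS> 'contains ;stuff' -match '(?<!;)stuff'
False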
To complement @mklement0's answer, I suggest an alternative approach to make your code easier to read and understand:
#requires -Version 4
@(Get-Content -Path 'C:\Temp\list\list.txt').
ForEach([IO.FileInfo]).
Where({ $PSItem | Select-String -Pattern '(?<!;)stuff' -Quiet })
This will turn your strings into objects (System.IO.FileInfo) and utilizes the array methods ForEach and Where for brevity/conciseness. Further, this allows you to pipe the paths as objects into Select-String, which makes the command easier to follow (I find long lists of parameters difficult to read).
The example code posted won't actually run, as it will look at each line as the -Path value.
What you need is to get the content, select the string you're after, then filter the results with Where-Object
Get-Content "C:\temp\list\list.txt" | Select-String -Pattern "stuff" | Where-Object {$_ -notmatch ";stuff"}
You could create a more complex regex if needed, but depends on what your result data from your files looks like
I am trying to create a PowerShell regex statement to remove the top five lines of this output from a git diff file that has already been modified with PowerShell regex.
[1mdiff --git a/uk1.adoc b/uk2.adoc</span>+++
[1mindex b5d3bf7..90299b8 100644</span>+++
[1m--- a/uk1.adoc</span>+++
[1m+++ b/uk2.adoc</span>+++
[36m@@ -1,9 +1,9 @@</span>+++
= Heading
Body text
Image shown because binary code doesn't show in the text
The following statement matches the text, so that the '= Heading' line ends up at the top of the page if I replace the match with nothing.
^[^=]*.[+][\n]
But in PowerShell, it isn't matching the text.
Get-Content "result2.adoc" | % { $_ -Replace '^[^=]*.[+][\n]', '' } | Out-File "result3.adoc";
Any ideas about why it doesn't work in PowerShell?
My overall goal is to create a diff file of two versions of an AsciiDoc file and then replace the ANSI color codes with HTML/CSS code to display the resulting AsciiDoc file with green/red track changes.
The simplest - and fastest - approach is to read the input file as a single, multiline string with Get-Content -Raw and let the regex passed to -replace operate across multiple lines:
(Get-Content -Raw result2.adoc) -replace '(?s)^.+?\n(?==)' |
Set-Content result3.adoc
(?s) activates in-line option s which makes . match newline (\n) characters too.
^.+?\n(?==) matches from the start of the string (^) one or more characters, including newlines (.+), non-greedily (?),
until a newline (\n) followed by a = is found.
(?=...) is a look-ahead assertion, which matches = without consuming it, i.e., without considering it part of the substring that matched.
Since no replacement operand is passed to -replace, the entire match is replaced with the implied empty string, i.e., what was matched is effectively removed.
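To see the look-ahead in isolation, here is a quick check with a made-up input string; only the text before the = is removed, because the = itself is not consumed:
PS> "line1`nline2`n= Heading" -replace '(?s)^.+?\n(?==)'
= Heading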
As for what you tried:
The -replace operator passes its LHS through if no match is found, so you cannot use it to filter out non-matching lines.
Even if you match an undesired line in full and replace it with '' (the empty string), it will show up as an empty line in the output when sent to Set-Content or Out-File (>).
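A small demonstration of that second point (sample strings are made up): the first string is matched in full and replaced with the empty string, but it is still emitted as an (empty) output object:
PS> ('to-remove', '= Heading' | ForEach-Object { $_ -replace '^to-remove$' }).Count
2
Writing those two objects to a file with Set-Content or Out-File therefore produces a blank first line rather than removing the line.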
As for your specific regex, ^[^=]*.[+][\n] (whether or not the first ^ is followed by an ESC (0x1b) char.):
[\n] (just \n would suffice) tries to match a newline char. after a literal + ([+]), yet lines read individually with Get-Content (without -Raw) by definition are stripped of their trailing newline, so the \n will never match; instead, use $ to match the end of a line.
Instead of % (the built-in alias for the ForEach-Object cmdlet) you could have used ? (the built-in alias for the Where-Object cmdlet) to perform the desired filtering:
Get-Content result2.adoc | ? { $_ -notmatch '^\e\[' }
$_ -notmatch '^\e\[' returns $True only for lines that don't start (^) with an ESC character (\e, whose code point is 0x1b) followed by a literal (\) [, thereby effectively filtering out the lines before the = Heading line.
However, the multi-line -replace command at the top is a more direct and faster expression of your intent.
Here is the code I ended up with after help from @mklement0. This PowerShell script creates MS Word-style track changes for two versions of an AsciiDoc file. It creates the diff file, uses regex to replace the ANSI codes with HTML/CSS tags, removes the diff header (thank you!), uses AsciiDoctor to create an HTML file and then PrinceXML to create a PDF file of the output that I can send to document reviewers.
git diff --color-words file1.adoc file2.adoc > result.adoc;
Get-Content "result.adoc" | % {
$_ -Replace '(=+ ?)([A-Za-z\s]+)(\[m)', '$1$2' `
-Replace '\[32m', '+++<span style="color: #00cd00;">' `
-Replace '\[31m', '+++<span style="color: #cd0000; text-decoration: line-through;">' `
-Replace '\[m', '</span>+++' } | Out-File -encoding utf8 "result2.adoc" ;
(Get-Content -Raw result2.adoc) -replace '(?s)^.+?\n(?==)', '' | Out-File -encoding utf8 "result3.adoc" ;
asciidoctor result3.adoc -o result3.html;
prince result3.html --javascript -o result3.pdf;
Read-Host -Prompt "Press Enter to exit"
Here's a screenshot of the result using some text from Wikipedia:
I'm trying to get the first block of release notes...
(See sample content in the code)
Whenever I use something simple it works; it only breaks when I try to search across multiple lines (\n). I'm using (Get-Content $changelog | Out-String) because that gives back one string instead of an array of lines.
$changelog = 'C:\Source\VSTS\AcmeLab\AcmeLab Core\changelog.md'
$regex = '([Vv][0-9]+\.[0-9]+\.[0-9]+\n)(^-.*$\n)+'
(Get-Content $changelog | Out-String) | Select-String -Pattern $regex -AllMatches
<#
SAMPLE:
------
v1.0.23
- Adds an IContainer API.
- Bugfixes.
v1.0.22
- Hotfix: Language operators.
v1.0.21
- Support duplicate query parameters.
v1.0.20
- Splitting up the ICommand interface.
- Fixing the referrer header empty field value.
#>
The result I need is:
v1.0.23
- Adds an IContainer API.
- Bugfixes.
Update:
Using inline options:
$changelog = 'C:\Source\VSTS\AcmeLab\AcmeLab Core\changelog.md'
$regex = '(?smi)([Vv][0-9]+\.[0-9]+\.[0-9]+\n)(^-.*$\n)+'
Get-Content -Path $changelog -Raw | Select-String -Pattern $regex -AllMatches
I also get nothing (no matter whether I use \n or \r\n).
Unless you're stuck with PowerShell v2, it's simpler and more efficient to use Get-Content -Raw to read an entire file as a single string; besides, Out-String adds an extra newline to the string.[1]
Since you're only looking for the first match, you can use the -match operator - no need for Select-String's -AllMatches switch.
Note: While you could use Select-String without it, it is more efficient to use the -match operator, given that you've read the entire file into memory already.
Regex matching is case-insensitive by default in PowerShell, consistent with PowerShell's overall case-insensitivity.
Thus, the following returns the first block, if any:
if ((Get-Content -Raw $changelog) -match '(?m)^v\d+\.\d+\.\d+.*(\r?\n-\s?.*)+') {
# Match found - output it.
$Matches[0]
}
(?m) turns on inline regex option m (multi-line), which causes anchors ^ and $ to match the beginning and end of individual lines rather than the overall string's.
\r?\n matches both CRLF and LF-only newlines.
You could make the regex slightly more efficient by making the (...) subexpression non-capturing, given that you're not interested in what it captured: (?:...).
Note that -match itself returns a Boolean (with a scalar LHS), but information about the match is recorded in the automatic $Matches hashtable variable, whose 0 entry contains the overall match.
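Applied to the sample from the question (a here-string stands in for the file content here), $Matches[0] ends up containing just the first block:
$sample = @'
v1.0.23
- Adds an IContainer API.
- Bugfixes.
v1.0.22
- Hotfix: Language operators.
'@
if ($sample -match '(?m)^v\d+\.\d+\.\d+.*(\r?\n-\s?.*)+') {
    $Matches[0]   # v1.0.23 plus its two "- ..." lines
}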
As for what you tried:
'([Vv][0-9]+\.[0-9]+\.[0-9]+\n)(^-.*$\n)+'
doesn't work, because by default $ only matches at the very end of the input string, at the end of the last line (though possibly before a final newline).
To make $ match the end of each line, you'd have to turn on the multi-line regex option (which you did in your 2nd attempt).
As a result, nothing matches.
'(?smi)([Vv][0-9]+\.[0-9]+\.[0-9]+\n)(^-.*$\n)+'
doesn't work as intended, because by using option s (single-line) you've made . match newlines too, so that a greedy subexpression such as .* will match the remainder of the string, across lines.
As a result, everything from the first block on matches.
[1] This problematic behavior is discussed in GitHub issue #14444.
My company has millions of old reports in PDF form. They are typically named in the format: 2018-09-18 - ReportName.pdf
The organization we need to submit these to is now requiring that we name the files in this format: Report Name - 2018-09.pdf
I need to move the first 7 characters of the file name to the end. I'm thinking there is probably an easy piece of code to perform this task, but I cannot figure it out. Can anyone help me?
Thanks!
Caveat:
As jazzdelightsme points out, the desired renaming operation can result in name collisions, given that you're removing the day component from your dates; e.g., 2018-09-18 - ReportName.pdf and 2018-09-19 - ReportName.pdf would result in the same filename, ReportName - 2018-09.pdf.
Either way, I'm assuming that the renaming operation is performed on copies of the original files. Alternatively, you can create copies with new names elsewhere with Copy-Item while enumerating the originals, but the advantage of Rename-Item is that it will report an error in case of a name collision.
Get-ChildItem -Filter *.pdf | Rename-Item -NewName {
$_.Name -replace '^(\d{4}-\d{2})-\d{2} - (.*?)\.pdf$', '$2 - $1.pdf'
} -WhatIf
-WhatIf previews the renaming operation; remove it to perform actual renaming.
Add -Recurse to the Get-ChildItem call to process an entire directory subtree.
The use of -Filter is optional, but it speeds up processing.
A script block ({ ... }) is passed to Rename-Item's -NewName parameter, which enables dynamic renaming of each input file ($_) received from Get-ChildItem using a string-transformation (replacement) expression.
The -replace operator uses a regex (regular expression) as its first operand to perform string replacements based on patterns; here, the regex breaks down as follows:
^(\d{4}-\d{2}) matches something like 2018-09 at the start (^) of the name and - by virtue of being enclosed in (...) - captures that match in a so-called capture group, which can be referenced in the replacement string by its index, namely $1, because it is the first capture group.
(.*?) captures the rest of the filename excluding the extension in capture group $2.
The ? after .* makes the sub-expression non-greedy, meaning that it will give subsequent sub-expressions a chance to match too, as opposed to trying to match as many characters as possible (which is the default behavior, termed greedy).
\.pdf$ matches the filename extension (.pdf) at the end ($) - note that case doesn't matter. . is escaped as \., because it is meant to be matched literally here (without escaping, . matches any single character other than a newline).
$2 - $1.pdf is the replacement string, which arranges what the capture groups captured in the desired form.
Note that any file whose name doesn't match the regex is quietly left alone, because the -replace operator passes the input string through if there is no match, and Rename-Item does nothing if the new name is the same as the old one.
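For reference, here is what the -replace expression does to a single name taken from the question (note that it cannot re-insert the space inside "Report Name"; the name part is carried over as-is):
PS> '2018-09-18 - ReportName.pdf' -replace '^(\d{4}-\d{2})-\d{2} - (.*?)\.pdf$', '$2 - $1.pdf'
ReportName - 2018-09.pdf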
Get-ChildItem with some RegEx and Rename-Item can do it:
Get-ChildItem -Path "C:\temp" | foreach {
$newName = $_.Name -replace '(^.{7}).*?-\s(.*?)\.(.*$)','$2 - $1.$3'
$_ | Rename-Item -NewName $newName
}
The RegEx
'(^.{7}).*?-\s(.*?)\.(.*$)' / $2 - $1.$3
(^.{7}) matches the first 7 characters
.*?-\s matches any characters up to (and including) the first dash that is followed by a whitespace character
(.*?)\. matches anything until the first found dot ( . )
(.*$) matches the file extension in this case
$2 - $1.$3 puts it all together in the changed order
This won't properly work if there are filenames with multiple dots ( . ) in it.
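Applied to a name from the question, the pattern and replacement produce:
PS> '2018-09-18 - ReportName.pdf' -replace '(^.{7}).*?-\s(.*?)\.(.*$)', '$2 - $1.$3'
ReportName - 2018-09.pdf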
This should work (added some test data):
$test = '2018-09-18 - ReportName.pdf','2018-09-18 - Other name.pdf','other pattern.pdf','2018-09-18 - double.extension.pdf'
$test | % {
$match = [Regex]::Match($_, '(?<Date>\d{4}-\d\d)-\d\d - (?<Name>.+)\.pdf')
if ($match.Success) {
"$($match.Groups['Name'].Value) - $($match.Groups['Date'].Value).pdf"
} else {
$_
}
}
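For the test data above, the script outputs the transformed names; the name that doesn't match the pattern is passed through unchanged:
ReportName - 2018-09.pdf
Other name - 2018-09.pdf
other pattern.pdf
double.extension - 2018-09.pdf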
Something like this -
Get-ChildItem -path $path | Rename-Item -NewName {$_.BaseName.Split(' - ')[-1] + ' - ' + $_.BaseName.SubString(0,7) + $_.Extension} -WhatIf
Explanation -
Split will segregate the name of the file based on the separator passed to it, and [-1] tells PowerShell to select the last of the segregated values.
SubString(0,7) will select 7 characters starting from the first character of the BaseName of the file.
Remove -WhatIf to apply the rename.