I am trying to make a script that takes an XML file, looks for a matching condition, and, if it finds one, adds a new line of asterisks after it; once it has gone through the whole file it strips out all the XML tags and leaves the data in a plain text file.
The script has been tested on a small input XML file and works fine, but when I pass it a large XML file it takes forever (I'm not actually sure how long, as I ran it for over an hour with no result and then just stopped it).
I'm guessing I must be performing the work in an extremely inefficient manner; hoping you guys can help me make it fast and efficient.
Here is the script below:
# Takes input XML File, cleans up XML elements, outputs plain text file
$FileName = "C:\Users\someguy\Desktop\input.xml"
$Pattern = "ProcessSpecifier = ""true"""
$FileOriginal = Get-Content $FileName
[String[]] $FileModified = @()
Foreach ($Line in $FileOriginal)
{
    $FileModified += $Line
    if ($Line -match $Pattern)
    {
        # Add a line after the matched pattern
        $FileModified += "*************isActive=true*****************"
    }
}
$FileModified -replace "<[^>]+>", "" | Out-File C:\Users\someguy\Desktop\Output.txt
Let's go with a lookbehind and a bunch of regex to speed things up here. Also, I'm not going to store the whole thing in memory; I'm just going to pass it down the pipeline, which should help. I remove whitespace from the beginning and end of each line, and filter out blank lines, but you can remove that bit if you want.
# Takes input XML File, cleans up XML elements, outputs plain text file
$FileName = "C:\Users\someguy\Desktop\input.xml"
$Pattern = '(?<=^.*ProcessSpecifier = "true".*$)'
(Get-Content $FileName) -replace $Pattern, "`n*************isActive=true*****************" -replace '<[^>]+?>' -replace '^\s*|\s+$' | ?{$_} | Set-Content C:\Users\someguy\Desktop\Output.txt
So, the main thing here is that I use a lookbehind to find your pattern text, and then append a newline and the asterisk line to that line, so that the line
<SomeTag>ProcessSpecifier = "true"</SomeTag>
becomes:
<SomeTag>ProcessSpecifier = "true"</SomeTag>`n*************isActive=true*****************
Inside double quotes, a backtick ` followed by n creates a newline, so the '*************isActive=true*****************' ends up on its own line immediately following your search-pattern line. Past that I remove the XML tags, and then any leading or trailing whitespace from each line.
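As a quick aside, a minimal illustration of that escape sequence (not from the original script):
# "`n" only expands to a newline inside double-quoted strings
"before`nafter"    # prints "before" and "after" on two lines
'before`nafter'    # single quotes: the backtick-n is kept literally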
After the regex replacements I pass the result to a Where statement that removes blank lines, and then pass the remaining lines to Set-Content, which I've seen better performance from than Out-File.
Variation of TheMadTechnician's answer:
# Takes input XML File, cleans up XML elements, outputs plain text file
$FileName = "C:\Users\someguy\Desktop\input.xml"
$Pattern = '(?<=^.*ProcessSpecifier = "true".*$)'
Set-Content -Path C:\Users\someguy\Desktop\Output.txt -Value (((Get-Content $FileName) -replace $Pattern, "`n*************isActive=true*****************" -replace '<[^>]+?>' -replace '^\s*|\s+$').Where{$_})
I actually try to avoid the pipeline; it is rather slow, afaik. Of course you will run into problems with memory consumption if the files are very large.
The ().Where construct doesn't work on all PowerShell versions (version 4+, iirc).
This is a guess, I am not sure whether this is actually faster than TheMadTechnician's. I'd be curious about the result :)
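For what it's worth, on those older versions the same thing can be done by swapping the .Where() call for a Where-Object filter; a sketch, otherwise identical to the code above:
# Same approach, but with Where-Object instead of .Where() (works on PowerShell 3.0 as well)
$FileName = "C:\Users\someguy\Desktop\input.xml"
$Pattern = '(?<=^.*ProcessSpecifier = "true".*$)'
Set-Content -Path C:\Users\someguy\Desktop\Output.txt -Value (
    (Get-Content $FileName) -replace $Pattern, "`n*************isActive=true*****************" -replace '<[^>]+?>' -replace '^\s*|\s+$' |
        Where-Object { $_ }
)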
Related
I'm looking for a way to strip all comments from a file. There are various ways to do comments, but I'm only interested in the simple # form of comments. The reason is that I only use <# #> for in-function .SYNOPSIS blocks, which are functional code as opposed to just comments, so I want to keep those.
EDIT: I have updated this question using the helpful answers below.
So there are only a couple of scenarios that I need:
a) whole-line comments with # at the start of the line (or possibly with whitespace before it); i.e. a regex of ^\s*# seems to work.
b) lines with some code at the start and then a comment at the end of the line.
I want to avoid stripping lines that have e.g. Write-Host "#####" but I think this is covered in the code that I have.
I was able to remove end-of-line comments with a split, as I couldn't work out how to do it with regex; does anyone know a way to achieve that with regex?
The split was not ideal, as a <# on a line would be removed by the -split, but I've fixed that by splitting on " #". This is not perfect but might be good enough; maybe a more reliable regex-based way exists?
When I run the below against my 7,000-line script, it works(!) and strips a huge amount of comments, BUT the output file almost doubles in size(!?), from about 400 KB to about 700 KB. Does anyone understand why that happens and how to prevent it (is it something to do with BOMs or Unicode? Out-File seems to really balloon the file size!)
$x = Get-Content ".\myscript.ps1" # $x is an array, not a string
$out = ".\myscript.ps1"
$x = $x -split "[\r\n]+" # Remove all consecutive line-breaks, in any format '-split "\r?\n|\r"' would just do line by line
$x = $x | ? { $_ -notmatch "^\s*$" } # Remove empty lines
$x = $x | ? { $_ -notmatch "^\s*#" } # Remove all lines starting with ; including with whitespace before
$x = $x | % { ($_ -split " #")[0] } # Remove end of line comments
$x = ($x -replace $regex).Trim() # Remove whitespace only at start and end of line
$x | Out-File $out
# $x | more
Honestly, the best approach to identify and process all comments is to use PowerShell's language parser or one of the Ast classes. I apologize that I don't know which Ast contains comments, so this is an uglier way that will filter out block and line comments.
$code = Get-Content file.txt -Raw
$comments = [System.Management.Automation.PSParser]::Tokenize($code,[ref]$null) |
Where Type -eq 'Comment' | Select -Expand Content
$regex = ( $comments |% { [regex]::Escape($_) } ) -join '|'
# Output to remove all empty lines
$code -replace $regex -split '\r?\n' -notmatch '^\s*$'
# Output that Removes only Beginning and Ending Blank Lines
($code -replace $regex).Trim()
Do the inverse of your example: Only emit lines that do NOT match:
## Output to console
Get-Content .\file.ps1 | Where-Object { $_ -notmatch '#' }
## Output to file
Get-Content .\file.ps1 | Where-Object { $_ -notmatch '#' } | Out-file .\newfile.ps1 -Append
Based on @AdminOfThings' helpful answer, using the same PSParser tokenizer approach but avoiding any regular expressions:
$Code = $Code.ToString() # Prepare any ScriptBlock for the substring method
$Tokens = [System.Management.Automation.PSParser]::Tokenize($Code, [ref]$null)
-Join $Tokens.Where{ $_.Type -ne 'Comment' }.ForEach{ $Code.Substring($_.Start, $_.Length) }
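For instance (my addition, not part of the original answer), $Code could simply be read from a file first; since it is then already a string, the ToString() call above is a harmless no-op. The file names here are just placeholders:
# Hypothetical usage: strip comments from myscript.ps1 and save the result to a new file
$Code = Get-Content .\myscript.ps1 -Raw
$Tokens = [System.Management.Automation.PSParser]::Tokenize($Code, [ref]$null)
$Stripped = -Join $Tokens.Where{ $_.Type -ne 'Comment' }.ForEach{ $Code.Substring($_.Start, $_.Length) }
Set-Content -Path .\myscript-stripped.ps1 -Value $Stripped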
As for the incidental problem of the size of the output file being roughly double that of the input file:
As AdminOfThings points out, Out-File in Windows PowerShell defaults to UTF-16LE ("Unicode") encoding, where characters are represented by (at least) two bytes, whereas ANSI encoding, as used by Set-Content in Windows PowerShell by default, encodes all (supported) characters in a single byte. Similarly, UTF-8-encoded files use only one byte for characters in the ASCII range (note that PowerShell (Core) 7+ now consistently defaults to (BOM-less) UTF-8). Use the -Encoding parameter as needed.
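For example, with $x being the array from the script in the question, either of these keeps the output at roughly the input's size in Windows PowerShell (both write UTF-8 with a BOM there; the output path is just illustrative):
# Write as UTF-8 instead of Out-File's UTF-16LE ("Unicode") default
$x | Out-File .\myscript-stripped.ps1 -Encoding utf8
# ...or with Set-Content:
$x | Set-Content .\myscript-stripped.ps1 -Encoding utf8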
A regex-based solution to your problem is never fully robust, even if you try to limit the comment removal to single-line comments.
For full robustness, you must indeed use PowerShell's language parser, as noted in the other answers.
However, care must be taken when reconstructing the original source code with the comments removed:
AdminOfThings's answer risks removing too much, given the subsequent global regex-based processing with -replace: while the scenario may be unlikely, if a comment is repeated inside a string, it would mistakenly be removed from there too.
iRon's answer risks syntax errors by joining the tokens without spaces, so that . .\foo.ps1 would turn into ..\foo.ps1, for instance. Blindly putting a space between tokens is not an option either, because property-access syntax would break (e.g. $host.Name would turn into $host . Name, and whitespace between a value and the . operator isn't allowed).
The following solution avoids these problems, while trying to preserve the formatting of the original code as much as possible, but this has limitations, because intra-line whitespace isn't reported by the parser:
This means that you can't tell whether whitespace between tokens on a given line is made up of tabs, spaces, or a mix of both. The solution below replaces any tab characters with 2 spaces before processing; adjust as needed.
To somewhat compensate for the removal of comments occupying their own line(s), more than 2 consecutive blank or empty lines are folded into a single empty one. It is possible to remove blank/empty lines altogether, but that could hurt readability.
# Tokenize the file content.
# Note that tabs, if any, are replaced by 2 spaces first; adjust as needed.
$tokens = $null
$null = [System.Management.Automation.Language.Parser]::ParseInput(
    ((Get-Content -Raw .\myscript.ps1) -replace '\t', '  '),
    [ref] $tokens,
    [ref] $null
)

# Loop over all tokens while omitting comments, and rebuild the source code
# without them, trying to preserve the original formatting as much as possible.
$sb = [System.Text.StringBuilder]::new()
$prevExtent = $null; $numConsecNewlines = 0
$tokens.
    Where({ $_.Kind -ne 'Comment' }).
    ForEach({
        $startColumn = if ($_.Extent.StartLineNumber -eq $prevExtent.StartLineNumber) { $prevExtent.EndColumnNumber } else { 1 }
        if ($_.Kind -eq 'NewLine') {
            # Fold multiple blank or empty lines into a single empty one.
            if (++$numConsecNewlines -ge 3) { return }
        } else {
            $numConsecNewlines = 0
            $null = $sb.Append(' ' * ($_.Extent.StartColumnNumber - $startColumn))
        }
        $null = $sb.Append($_.Text)
        $prevExtent = $_.Extent
    })

# Output the result.
# Pipe to Set-Content as needed.
$sb.ToString()
I have a .bas file that I am looking to update. The script requires some manual configuration, and I don't want my team to need to reconfigure it every time they run it. What I would like to do is have a tag like this:
<BEGINREPLACEMENT>
'MsgBox ("Loaded")
ReDim Preserve STIGArray(i - 1)
ReDim Preserve SVID(i - 1)
STIGArray = RemoveDupes(STIGArray)
SVID = RemoveDupes(SVID)
<ENDREPLACEMENT>
I am kind of familiar with PowerShell, so what I was trying to do is create an update file and replace what is in between the tags with the update. What I was trying is:
$temp = Get-Content C:\Temp\file.bas
$update = Get-Content C:\Temp\update
$regex = "<BEGINREPLACEMENT>(.*?)<ENDREPLACEMENT>"
$temp -replace $regex, $update
$temp | Out-File C:\Temp\file.bas
The issue is that it isn't replacing the block of text. I can get it to replace either tag on its own, but I can't get it to pull in everything in between.
Does anyone have any thoughts as to how I can do this?
You need to make sure you read the whole files in, newlines included, which is possible with the -Raw option passed to Get-Content.
Then, . does not match a newline char by default, hence you need to use a (?s) inline DOTALL (or "singleline") option.
Also, if your dynamic content contains something like $2 you may get an exception since this is a backreference to Group 2 that is missing from your pattern. You need to process the replacement string by doubling each $ in it.
$temp = Get-Content C:\Temp\file.bas -Raw
$update = Get-Content C:\Temp\update -Raw
$regex = "(?s)<BEGINREPLACEMENT>.*?<ENDREPLACEMENT>"
$temp -replace $regex, $update.Replace('$', '$$')
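Note that -replace returns the modified text rather than changing $temp in place, so to persist the change you still need to capture the result and write it back, for example:
# Assign the replacement result and write it back out (Set-Content is just one option)
$temp = $temp -replace $regex, $update.Replace('$', '$$')
$temp | Set-Content C:\Temp\file.bas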
I have a question which I'm pretty much stuck on.
I have a file called xml_data.txt and another file called entry.txt.
I want to replace everything between <core:topics> and </core:topics>.
I have written the below script
$test = Get-Content -Path ./xml_data.txt
$newtest = Get-Content -Path ./entry.txt
$pattern = "<core:topics>(.*?)</core:topics>"
$result0 = [regex]::match($test, $pattern).Groups[1].Value
$result1 = [regex]::match($newtest, $pattern).Groups[1].Value
$test -replace $result0, $result1
When I run the script it outputs onto the console, but it doesn't look like it made any change.
Can someone please help me out?
Note: Typo error fixed
There are three main issues here:
You read the file line by line, but the blocks of text are multiline strings
Your regex does not match newlines as . does not match a newline by default
Also, the captured literal text must be regex-escaped when it is used as a pattern, and when replacing with a dynamic replacement pattern you must always dollar-escape the $ symbols in it. Or use simple string .Replace.
So, you need to
Read the whole file into a single variable, $test = Get-Content -Path ./xml_data.txt -Raw
Use the $pattern = "(?s)<core:topics>(.*?)</core:topics>" regex (it can be enhanced in case it works too slow by unrolling it to <core:topics>([^<]*(?:<(?!</?core:topics>).*)*)</core:topics>)
Use $test -replace [regex]::Escape($result0), $result1.Replace('$', '$$') to "protect" $ chars in the replacement, or $test.Replace($result0, $result1).
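Putting those three fixes together, a sketch of the corrected script might look like this (same file names as in the question; untested against your actual data):
# Read both files as single multiline strings
$test = Get-Content -Path ./xml_data.txt -Raw
$newtest = Get-Content -Path ./entry.txt -Raw
# (?s) lets . match newlines, so the whole block between the tags is captured
$pattern = "(?s)<core:topics>(.*?)</core:topics>"
$result0 = [regex]::Match($test, $pattern).Groups[1].Value
$result1 = [regex]::Match($newtest, $pattern).Groups[1].Value
# Escape the captured text when using it as a pattern, and dollar-escape the replacement
$test -replace [regex]::Escape($result0), $result1.Replace('$', '$$')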
I'm new to PowerShell, and there seem to be a few differences in the way regexes are handled. I'm currently iterating through a large number of txt files and want the start of each one of them (which is a URL), up to the | character.
The start of every file is a URL ending in a slash. This was my umpteenth attempt, with no luck:
$FirstUrl = '.*/\|$'
It's pushed through a ForEach-Object loop, from which every other piece of information I'm trying to grab is coming out as expected:
Foreach-Object {
$FileContent = Get-Content $_.FullName
$Pos = Select-String -InputObject $FileContent -Pattern $FirstURL
Any tips on how to phrase the regex in $FirstURL? I'm generally 'ok' at regex and have googled my face off trying to find the proper documentation for PowerShell.
If each file has the URL at the start, followed by a pipe, then you do not need a regex at all here. You can simply split on the pipe:
$FileContent = Get-Content $_.FullName
$FileContent.Split('|')[0]
Split breaks the string into an array; the first element is at index 0, so you can take the URL from there.
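If you do still want a regex (that was the original question), a lazy capture up to the first pipe is one option; just a sketch, assuming it runs inside the same ForEach-Object loop as your other extractions:
# Capture everything from the start of the content up to (but not including) the first '|'
$FirstUrl = '^(.*?)\|'
$FileContent = Get-Content $_.FullName -Raw
if ($FileContent -match $FirstUrl) {
    $Matches[1]   # the URL, including its trailing slash
}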
Hope it helps.
I have a programming background, but I am fairly new to both PowerShell scripting and regex. Regex has always eluded me, and my prior projects have never 'forced' me to learn it.
With that in mind, I have a file with a line of text that I need to replace. I cannot depend on knowing where the line exists, whether it has whitespace in front of it, or what the ACTUAL text being replaced IS. I DO KNOW what will come before and after the text being replaced.
AGAIN, I will not KNOW the value of "Replace This Text". I will only know what comes before it (<find-this-text>) and what follows it (</find-this-text>). Edited OP to clarify. Thanks!
LINE OF TEXT I NEED TO REPLACE
<find-this-text>Replace This Text</find-this-text>
POTENTIAL CODE
(gc $file) | % { $_ -replace "", "" } | sc $file
Get the content of the file; enclosing this in parentheses ensures the file is first read and then closed, so it doesn't throw an error when trying to save back to the same file.
Iterate through each line and issue the replace statement. THIS IS WHERE I COULD USE HELP.
Save the file using Set-Content. My understanding is that this method is preferable because it takes encoding into consideration, like UTF8.
XML is not a line-oriented format (nodes may span several lines, just as a single line may contain several nodes), so it shouldn't be edited as if it were. Use a proper XML parser instead.
$xmlfile = 'C:\path\to\your.xml'
[xml]$xml = Get-Content $xmlfile
$node = $xml.SelectSingleNode('//find-this-text')
$node.'#text' = 'replacement text'
For saving the XML in "UTF-8 without BOM" format you can call the Save() method with a StreamWriter doing The Right Thing™:
$UTF8withoutBOM = New-Object Text.UTF8Encoding($false)
$writer = New-Object IO.StreamWriter ($xmlfile, $false, $UTF8withoutBOM)
$xml.Save($writer)
$writer.Close()
The .* in the regular expression would be considered "greedy" and dangerous by many. If the line that contains this tag and its data contains nothing else, then there really isn't any significant risk, as I understand it.
$file = "c:\temp\sms.txt"
$OpenTag = "<find-this-text>"
$CloseTag = "</find-this-text>"
$NewText = $OpenTag + "New text" + $CloseTag
(Get-Content $file) | Foreach-Object {$_ -replace "$OpenTag.*$CloseTag", $NewText} | Set-Content $file
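If more than one such tag pair could ever appear on the same line, a lazy quantifier is the safer variant; this is just a small tweak to the line above, not part of the original answer:
# .*? stops at the first closing tag on the line instead of the last one
(Get-Content $file) | Foreach-Object {$_ -replace "$OpenTag.*?$CloseTag", $NewText} | Set-Content $file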