Move-Item Counter - regex

I have amended this to a simpler explanation below and removed my previous version.
Good afternoon,
I was wondering if anyone could advise.
Each day we have a log file which bears the current date, e.g. 22012013.txt, 23012013.txt, etc.
I have a Move-Item cmdlet in my script. I would like to record how many files have been moved to 3 specific folders each day, writing the counters to the dated text log mentioned above.
Pretend this is my folder structure
folder1
folder2
folder3
As an example, here is how my Move-Item would work:
my Move-Item moves file1.txt to folder1
file2 to folder3
file3 to folder1
file4 to folder3
file5 to folder2
file6 to folder1
In the log file, I would like to see:
Items moved to Folder1 = 3
Items moved to Folder2 = 1
Items moved to Folder3 = 2
And that is it, as the next day that day's file moves will be recorded in the new log file for that day. I would like the counter to increment for each Move-Item, if this is possible.
Hope this makes sense
Regards
Barrie

Here is an example implementation of a function which moves a file and then updates a log.
No doubt it could be more concise, but it gets the job done and is reasonably readable.
The first argument is the file to be moved, the second is the name of the destination folder (it must not contain whitespace).
Basically, after moving the file to the specified folder, the last line of the log file is grabbed and checked to see if it contains today's date. If it does, the line is split on whitespace and the resulting array is iterated to find the folder name. If found, the next item of the array, which will be the number of moves made to that folder, is incremented by one. If not found, the folder name is appended to the line. The amended line then replaces the last line in the file. If the last line of the log file does not contain today's date a line is appended to the file with today's date and the folder name etc.
@() is used to ensure that the enclosed expression returns an array - it makes it easier to add content to the file as proper lines.
function Log-Move() {
    $logfile = 'O:\AutoScan\log.txt'
    $destination = 'O:\AutoScan\'
    $folder = $args[1]
    Move-Item -ErrorAction Stop $args[0] ( $destination + $folder )
    $content = @( Get-Content -Path $logfile )
    $line = $content[-1]
    $date = Get-Date -Format 'd'
    if ( $line ) {
        if ( $line.Contains( $date ) ) {
            $items = $line.Split()
            $count = $items.Count
            for ( $i = 0; $i -lt $count; $i++ ) {
                if ( $items[$i] -eq $folder ) {
                    $items[$i + 1] = 1 + $items[$i + 1]
                    break
                }
            }
            if ( $i -eq $count ) {
                $items += @( $folder, '1' )
            }
            $line = $items -join ' '
            if ( $content.Length -gt 1 ) {
                $content = @( $content[0..$($content.Length - 2)] ) + $line
            } else {
                $content = @( $line )
            }
        } else {
            $content += $date + ' ' + $folder + ' 1'
        }
    } else {
        $content = @( $date + ' ' + $folder + ' 1' )
    }
    $content | Set-Content -Path $logfile
}
Example usage
Log-Move $newLongFilename Multiples
# log.txt:
# 22/01/2013 Multiples 1
Log-Move $anotherfile Multiples
Log-Move $anotherfile Autosorting
# 22/01/2013 Multiples 2 Autosorting 1
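The question also mentions that the daily log is named after the current date (e.g. 22012013.txt), whereas the function above writes to a fixed log.txt. If you want the dated file instead, a minimal sketch (assuming the same O:\AutoScan folder and a ddMMyyyy file name) would be to build $logfile from today's date at the top of the function:
# sketch, untested: derive the log path from today's date, e.g. 22012013.txt
$logfile = Join-Path 'O:\AutoScan' ((Get-Date -Format 'ddMMyyyy') + '.txt')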

Related

I want to split a string from : to \n in Powershell script

I am using a config file that contains some information as shown below.
User1:xyz@gmail.com
User1_Role:Admin
NAME:sdfdsfu4343-234324-ffsdf-34324d-dsfhdjhfd943
ID:xyz@abc-demo-test-abc-mssql
Password:rewrfsdv34354*fds*vdfg435434
I want to split each value from ':' to newline in my Powershell script.
I am using -split '[: *\n]'; it matches perfectly as long as there is no '*' in the value. If there is an '*', it only fetches up to that point. For example, for Password, it matches only rewrfsdv34354. Here is my code:
$i = 0
foreach ($keyOrValue in $Contents -split '[: *\n]') {
    if ($i++ % 2 -eq 0) {
        $varName = $keyOrValue
    }
    else {
        Set-Variable $varName $keyOrValue
    }
}
I need to match all the chars after : to \n. Please share your ideas.
It's probably best to perform two separate splits here; that makes it easier to work out where the code is going wrong, although the $i % 2 -eq 0 part is a neat way to pick up keys and values.
I would go for this:
# Split the Contents variable by newline first
foreach ($line in $Contents -split '[\n]') {
    # Now split each line by colon
    $keyOrValue = $line -split ':'
    # Then set the variables based on the parts of the colon-split
    Set-Variable $keyOrValue[0] $keyOrValue[1]
}
You could also convert to a hashmap and go from there, e.g.:
$h = @{}
gc config.txt | % { $key, $value = $_ -split ' *: *'; $h[$key] = $value }
Or with ConvertFrom-StringData:
$h = (gc -raw config.txt) -replace ':','=' | ConvertFrom-StringData
Now you have convenient access to keys and values, e.g.:
$h
Output:
Name       Value
----       -----
Password   rewrfsdv34354*fds*vdfg435434
User1      xyz@gmail.com
ID         xyz@abc-demo-test-abc-mssql
NAME       sdfdsfu4343-234324-ffsdf-34324d-dsfhdjhfd943
User1_Role Admin
Or only keys:
$h.keys
Output:
Password
User1
ID
NAME
User1_Role
Or only values:
$h.values
Output:
rewrfsdv34354*fds*vdfg435434
xyz@gmail.com
xyz@abc-demo-test-abc-mssql
sdfdsfu4343-234324-ffsdf-34324d-dsfhdjhfd943
Admin
Or specific values:
$h['user1'] + ", " + $h['user1_role']
Output:
xyz@gmail.com, Admin
etc.
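If you still prefer individual variables, as in your original Set-Variable loop, you can create them from the hashtable too - a small sketch, assuming $h was built as shown above:
# create one variable per key, e.g. $User1, $Password, $User1_Role ...
foreach ($key in $h.Keys) {
    Set-Variable -Name $key -Value $h[$key]
}
$User1_Role   # Admin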

Powershell script using RegEx to look for a pattern in one .txt and find line in a second .txt

I have a real "headsmasher" on my plate.
I have this piece of script:
$lines = Select-String -List -Path $sourceFile -Pattern $pattern -Context 20
foreach ($id in $lines) {
    if (Select-String -Quiet -LiteralPath export.txt -Pattern "$($Matches[1]).+$($id.Pattern)") {
    }
    else {
        Select-String -Path $sourceFile -Pattern $pattern -Context 20 >> $duplicateTransactionsFile
    }
}
but it is not working for me as I wanted it to.
I have two .txt files: "$sourcefile = source.txt" and "export.txt"
The source.txt looks something like this:
Some text here ***********
------------------------------------------------
F I N A L C O U N T 1 9 , 9 9
**************
** [0000123456]
ID Number:0000123456
Complete!
****************!
***********
Some other text here*******
------------------------------------------------
F I N A L C O U N T 9 , 9 9
**********
** [0000789000]
ID Number:0000789000
Complete!
******************!
************
The export.txt is like this:
0000123456 19,99
0000555555 ,89
0000666666 3,05
0000777777 31,19
0000789000 9,99
What I am trying to do is look into source.txt and search for the number that I enter (spaced out in my case)
e.g. "9,99", but only that. As you can see, another number in the source.txt is "19,99" and it also contains "9,99", but I do not want that to be matched.
Once I find the number, I look for the next line in the source.txt that contains the text "ID Number:" and get the digits right after the ":". Once I have those digits, I want to look into the export.txt and see if they are there, and whether that line has "9,99" next to them, but exactly "9,99" and nothing else like "19,99", "29,99", and so on.
Then the rest is easy:
if (*true*) {
do this
}
else {
do that
}
Could you guys give me some love here and help a brother out?
I very much appreciate any help or hint you could share.
Best of wishes!
You could approach this like below:
# read the export.txt file and convert to a Hashtable for fast lookup
$export = ((Get-Content -Path 'D:\Test\export.txt').Trim() -replace '\s+', '=') -join "`r`n" | ConvertFrom-StringData
# read the source file and split into multiline data blocks
$source = ((Get-Content -Path 'D:\Test\source.txt' -Raw) -split '-{2,}').Trim() | Where-Object { $_ -match '(?sm)^\s?F I N A L C O U N T' }
# make sure the number given is spaced-out
$search = (((Read-Host "Search for Final Count number") -replace '\s' -split '') -join ' ').Trim()
Write-Host "Looking for a matching item using Final Count '$search'"
# see if we can find a data block that matches the $search
$blocks = $source | Where-Object { $_ -match "(?sm)^F I N A L C O U N T\s+$search\s?$" }
if (!$blocks) {
    Write-Host "No item in source.txt could be found with Final Count '$search'" -ForegroundColor Red
}
else {
    # loop over the data block(s) and pick the one that matches the search count
    $blocks | ForEach-Object {
        # parse out the ID
        $id = $_ -replace '(?sm).*ID Number:(\d+).*', '$1'
        # check if the $export Hashtable contains a key with that ID number
        if ($export.Contains($id)) {
            # check if that item has a value of $search without the spaces
            if ($export[$id] -eq ($search -replace '\s')) {
                # found it; do something
                Write-Host "Found a match in the export.txt" -ForegroundColor Green
            }
            else {
                # found ID with different FinalCount
                Write-Host "An item with ID '$id' was found, but with different Final Count ($($export[$id]))" -ForegroundColor Red
            }
        }
        else {
            # ID not found
            Write-Host "No item with ID '$id' could be found in the export.txt" -ForegroundColor Red
        }
    }
}
If, as per your comment, you would like the code to loop over the Final Count numbers found in the source.txt file instead of having a user type in a number to search for, you can shorten the above code to:
# read the export.txt file and convert to a Hashtable for fast lookup
$export = ((Get-Content -Path 'D:\Test\export.txt').Trim() -replace '\s+', '=') -join "`r`n" | ConvertFrom-StringData
# read the source file and split into multiline data blocks
$blocks = ((Get-Content -Path 'D:\Test\source.txt' -Raw) -split '-{2,}').Trim() |
Where-Object { $_ -match '(?sm)^\s?F I N A L C O U N T' }
if (!$blocks) {
    Write-Host "No Final Count data blocks could be found in source.txt" -ForegroundColor Red
}
else {
    # loop over the data block(s)
    $blocks | ForEach-Object {
        # parse out the FINAL COUNT number to look for in the export.txt
        $search = ([regex]'(?sm)^F I N A L C O U N T\s+([\d,\s]+)$').Match($_).Groups[1].Value
        # remove the spaces, surrounding '0' and trailing comma (if any)
        $search = ($search -replace '\s').Trim('0').TrimEnd(',')
        Write-Host "Looking for a matching item using Final Count '$search'"
        # parse out the ID
        $id = $_ -replace '(?sm).*ID Number:(\d+).*', '$1'
        # check if the $export Hashtable contains a key with that ID number
        if ($export.Contains($id)) {
            # check if that item has a value of $search without the spaces
            if ($export[$id] -eq $search) {
                # found it; do something
                Write-Host "Found a match in the export.txt for ID '$id'" -ForegroundColor Green
            }
            else {
                # found ID with different FinalCount
                Write-Host "An item with ID '$id' was found, but with different Final Count ($($export[$id]))" -ForegroundColor Red
            }
        }
        else {
            # ID not found
            Write-Host "No item with ID '$id' could be found in the export.txt" -ForegroundColor Red
        }
    }
}
There are surely multiple valid ways to accomplish this. Here is my approach:
(See comments for explanations. Let me know if you have any questions)
param (
    # You can provide this when calling the script using "-Search 9,99"
    # If not provided, PowerShell will prompt to enter the value
    [Parameter(Mandatory)]
    $Search,
    $Source = "source.txt",
    $Export = "export.txt"
)
# insert spaces
$pattern = $Search.ToCharArray() -join " "
# Search for the value in the source file.
$found = $false
switch -Regex -File $Source {
    # This regex looks for something that is not a number,
    # followed by only whitespace, and then your (spaced) search value.
    # This makes sure "19,99" is not matched with "9,99".
    # You could use a more elaborate regex here, but for your example,
    # this one should work fine.
    "\D\s+$pattern" {
        $found = $true
    }
    "ID Number:(\d+)" {
        # Get the ID number from the match.
        $id = $Matches[1]
        # If the search value was found
        # (that means, this ID number comes right after the search value)
        # we can stop looking.
        if ($found) {
            break
        }
    }
}
# quick check if the value was actually found
if (-not $found) {
    throw "Value $Search not found in $Source."
}
# Search for the id in the export file.
switch -Regex -File $Export {
    "$id\s+(\S+)" {
        # Get the amount value from the match
        $value = $Matches[1]
        # If the value matches your search...
        if ($value -eq $search) {
            # do this
        }
        else {
            # otherwise do that
        }
        break
    }
}
Note: You could additionally convert the values to decimal to account for different text representations when searching and comparing.
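For example, a small sketch of such a conversion, assuming the amounts use a comma as the decimal separator (as in 9,99):
# sketch: parse "9,99" and "09,99" to the same decimal value before comparing
$culture = [System.Globalization.CultureInfo]::GetCultureInfo('de-DE')  # any culture that uses ',' as decimal separator
$a = [decimal]::Parse('9,99', $culture)
$b = [decimal]::Parse('09,99', $culture)
$a -eq $b   # True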

Skip Header Row in a High Performance Powershell Regex Script Block

I received some amazing help from Stack Overflow ... however ... it was so amazing I need a little more help to get closer to the finish line. I'm parsing multiple enormous 4GB files 2X per month. I need to be able to skip the header, count the total lines, the matched lines, and the unmatched lines. I'm sure this is super-simple for a PowerShell superstar, but at my newbie PS level my skills are not yet strong. Perhaps a little help from you would save the week. :)
Data Sample:
ID FIRST_NAME LAST_NAME COLUMN_NM_TOO_LON5THCOLUMN
10000000001MINNIE MOUSE COLUMN VALUE LONGSTARTS
10000000002MICKLE ROONEY MOUSE COLUMN VALUE LONGSTARTS
Code Block (based on this answer):
#$match_regex matches each fixed length field by length; the () specifies that each matched field be stored in a capture group:
[regex]$match_regex = '^(.{10})(.{50})(.{50})(.{50})(.{50})(.{3})(.{8})(.{4})(.{50})(.{2})(.{30})(.{6})(.{3})(.{4})(.{25})(.{2})(.{10})(.{3})(.{8})(.{4})(.{50})(.{2})(.{30})(.{6})(.{3})(.{2})(.{25})(.{2})(.{10})(.{3})(.{10})(.{10})(.{10})(.{2})(.{10})(.{50})(.{50})(.{50})(.{50})(.{8})(.{4})(.{50})(.{2})(.{30})(.{6})(.{3})(.{2})(.{25})(.{2})(.{10})(.{3})(.{4})(.{2})(.{4})(.{10})(.{38})(.{38})(.{15})(.{1})(.{10})(.{2})(.{10})(.{10})(.{10})(.{10})(.{38})(.{38})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})(.{10})$'
Measure-Command {
    & {
        switch -File $infile -Regex {
            $match_regex {
                # Join what all the capture groups matched with a tab char.
                $Matches[1..($Matches.Count-1)].Trim() -join "`t"
            }
        }
    } | Out-File $outFile
}
You only need to keep track of two counts - matched and unmatched lines - and a Boolean to indicate whether you've skipped the first line:
$first = $false
$matched = 0
$unmatched = 0
. {
    switch -File $infile -Regex {
        $match_regex {
            if ($first) {
                # Join what all the capture groups matched with a tab char.
                $Matches[1..($Matches.Count-1)].Trim() -join "`t"
                $matched++
            }
            $first = $true
        }
        default {
            $unmatched++
            # you can remove this, if the pattern always matches the header
            $first = $true
        }
    }
} | Out-File $outFile
$total = $matched + $unmatched
Using System.IO.StreamReader reduced the processing time to about 20% of what it had been. This was absolutely needed for my requirement.
I added logic and counters without sacrificing much on performance. The field counter and row by row comparison is particularly helpful in finding bad records.
This is a copy/paste of actual code but I shortened some things, made some things slightly pseudo code, so you may have to play with it to get things working just so for yourself.
Function Get-Regx-Data-Format() {
    Param ([String] $filename)
    if ($filename -eq 'FILE NAME') {
        [regex]$match_regex = '^(.{10})(.{10})(.{10})(.{30})(.{30})(.{30})(.{4})(.{1})'
    }
    return $match_regex
}
Foreach ($file in $cutoff_files) {
    $starttime_for_file = (Get-Date)
    $source_file = $file + '_' + $proc_yyyymm + $source_file_suffix
    $source_path = $source_dir + $source_file
    $parse_file = $file + '_' + $proc_yyyymm + '_load' + $parse_target_suffix
    $parse_file_path = $parse_target_dir + $parse_file
    $error_file = $file + '_err_' + $proc_yyyymm + $error_target_suffix
    $error_file_path = $error_target_dir + $error_file
    [regex]$match_data_regex = Get-Regx-Data-Format $file
    Remove-Item -path "$parse_file_path" -Force -ErrorAction SilentlyContinue
    Remove-Item -path "$error_file_path" -Force -ErrorAction SilentlyContinue
    [long]$matched_cnt = 0
    [long]$unmatched_cnt = 0
    [long]$loop_counter = 0
    [boolean]$has_header_row = $true
    [int]$field_cnt = 0
    [int]$previous_field_cnt = 0
    [int]$array_length = 0
    $parse_minutes = Measure-Command {
        try {
            $stream_log = [System.IO.StreamReader]::new($source_path)
            $stream_in = [System.IO.StreamReader]::new($source_path)
            $stream_out = [System.IO.StreamWriter]::new($parse_file_path)
            $stream_err = [System.IO.StreamWriter]::new($error_file_path)
            while ($line = $stream_in.ReadLine()) {
                if ($line -match $match_data_regex) {
                    #if matched and it's the header, parse and write to the beg of output file
                    if (($loop_counter -eq 0) -and $has_header_row) {
                        $stream_out.WriteLine(($Matches[1..($array_length)].Trim() -join "`t"))
                    } else {
                        $previous_field_cnt = $field_cnt
                        #add year month to line start, trim and join every captured field w/tabs
                        $stream_out.WriteLine("$proc_yyyymm`t" + `
                            ($Matches[1..($array_length)].Trim() -join "`t"))
                        $matched_cnt++
                        $field_cnt = $Matches.Count
                        if (($previous_field_cnt -ne $field_cnt) -and $loop_counter -gt 1) {
                            write-host "`nError on line $($loop_counter + 1). `
The field count does not match the previous correctly `
formatted (non-error) row."
                        }
                    }
                } else {
                    if (($loop_counter -eq 0) -and $has_header_row) {
                        #if the header, write to the beginning of the output file
                        $stream_out.WriteLine($line)
                    } else {
                        $stream_err.WriteLine($line)
                        $unmatched_cnt++
                    }
                }
                $loop_counter++
            }
        } finally {
            $stream_in.Dispose()
            $stream_out.Dispose()
            $stream_err.Dispose()
            $stream_log.Dispose()
        }
    } | Select-Object -Property TotalMinutes
    write-host "`n$file_list_idx. File $file parsing results....`nMatched Count =
$matched_cnt UnMatched Count = $unmatched_cnt Parse Minutes = $parse_minutes`n"
    $file_list_idx++
    $endtime_for_file = (Get-Date)
    write-host "`nEnded processing file at $endtime_for_file"
    $TimeDiff_for_file = (New-TimeSpan $starttime_for_file $endtime_for_file)
    $Hrs_for_file = $TimeDiff_for_file.Hours
    $Mins_for_file = $TimeDiff_for_file.Minutes
    $Secs_for_file = $TimeDiff_for_file.Seconds
    write-host "`nElapsed Time for file $file processing:
$Hrs_for_file`:$Mins_for_file`:$Secs_for_file"
}
$endtime = (Get-Date -format "HH:mm:ss")
$TimeDiff = (New-TimeSpan $starttime $endtime)
$Hrs = $TimeDiff.Hours
$Mins = $TimeDiff.Minutes
$Secs = $TimeDiff.Seconds
write-host "`nTotal Elapsed Time: $Hrs`:$Mins`:$Secs"

Loop through a text file and Extract a set of 100 IP's from a text file and output to separate text files

I have a text file that contains around 900 IPs. I need to create batches of 100 IPs from that file and output them into new files. That would create around 9 text files.
Our API only allows POSTing 100 IPs at a time.
Could you please help me out here?
Below is the format of the text file
10.86.50.55,10.190.206.20,10.190.49.31,10.190.50.117,10.86.50.57,10.190.49.216,10.190.50.120,10.190.200.27,10.86.50.58,10.86.50.94,10.190.38.181,10.190.50.119,10.86.50.53,10.190.50.167,10.190.49.30,10.190.49.89,10.190.50.115,10.86.50.54,10.86.50.56,10.86.50.59,10.190.50.210,10.190.49.20,10.190.50.172,10.190.49.21,10.86.49.18,10.190.50.173,10.86.49.49,10.190.50.171,10.190.50.174,10.86.49.63,10.190.50.175,10.13.12.200,10.190.49.27,10.190.49.19,10.86.49.29,10.13.12.201,10.86.49.28,10.190.49.62,10.86.50.147,10.86.49.24,10.86.50.146,10.190.50.182,10.190.50.25,10.190.38.252,10.190.50.57,10.190.50.54,10.86.50.78,10.190.50.23,10.190.49.8,10.86.50.80,10.190.50.53,10.190.49.229,10.190.50.58,10.190.50.130,10.190.50.22,10.86.52.22,10.19.68.61,10.41.43.130,10.190.50.56,10.190.50.123,10.190.49.55,10.190.49.66,10.190.49.68,10.190.50.86,10.86.49.113,10.86.49.114,10.86.49.101,10.190.50.150,10.190.49.184,10.190.50.152,10.190.50.151,10.86.49.43,10.190.192.25,10.190.192.23,10.190.49.115,10.86.49.44,10.190.38.149,10.190.38.151,10.190.38.150,10.190.38.152,10.190.38.145,10.190.38.141,10.190.38.148,10.190.38.142,10.190.38.144,10.190.38.147,10.190.38.143,10.190.38.146,10.190.192.26,10.190.38.251,10.190.49.105,10.190.49.110,10.190.49.137,10.190.49.242,10.190.50.221,10.86.50.72,10.86.49.16,10.86.49.15,10.190.49.112,10.86.49.32,10.86.49.11,10.190.49.150,10.190.49.159,10.190.49.206,10.86.52.28,10.190.49.151,10.190.49.207,10.86.49.19,10.190.38.103,10.190.38.101,10.190.38.116,10.190.38.120,10.190.38.102,10.190.38.123,10.190.38.140,10.190.198.50,10.190.38.109,10.190.38.108,10.190.38.111,10.190.38.112,10.190.38.113,10.190.38.114,10.190.49.152,10.190.50.43,10.86.49.23,10.86.49.205,10.86.49.220,10.190.50.230,10.190.192.238,10.190.192.237,10.190.192.239,10.190.50.7,10.190.50.10,10.86.50.86,10.190.38.125,10.190.38.127,10.190.38.126,10.190.50.227,10.190.50.149,10.86.49.59,10.190.49.158,10.190.49.157,10.190.44.11,10.190.38.124,10.190.50.153,10.190.49.40,10.190.192.235,10.190.192.236,10.190.50.241,10.190.50.240,10.86.46.8,10.190.38.234,10.190.38.233,10.86.50.163,10.86.50.180,10.86.50.164,10.190.49.245,10.190.49.244,10.190.192.244,10.190.38.130,10.86.49.142,10.86.49.102,10.86.49.141,10.86.49.67,10.190.50.206,10.190.192.243,10.190.192.241
I tried looking online to come up with a bit of working code, but I can't really think what would work best in this situation.
$IP = 'H:\IP.txt'
$re = '\d*.\d*.\d*.\d*,'
Select-String -Path $IP -Pattern $re -AllMatches |
Select-Object -Expand Matches |
ForEach-Object {
$Out = 'C:\path\to\out.txt' -f | Set-Content $clientlog
}
This will do what you are after
$bulkIP = (get-content H:\IP.txt) -split ','
$i = 0
# Created loop
Do {
    # Completed an action every 100 counts (including 0)
    If (0 -eq $i % 100) {
        # If the array is a valid entry. Removing this will usually end up creating an empty junk file called -1 or something
        If ($bulkIP[$i]) {
            # outputs 100 lines into a folder with the starting index as the name.
            # Eg. The first 1-100, the file would be called 1.txt. 501-600 would be called 501.txt etc
            $bulkIP[$($i)..$($i+99)] | Out-File "C:\path\to\$($bulkip.IndexOf($bulkip[$($i)+1])).txt"
        }
    }
    $i++
} While ($i -le 1000)
What this does ...
calculates the number of batches
calculates the start & end index of each batch
creates a range from the above
creates a PSCustomObject to hold each batch
creates an array slice from the range
sends that out to the collection $Result
shows what is in the collection & in the 1st batch from that collection
here's the code ...
# fake reading in a raw text file
# in real life, use Get-Content -Raw
$InStuff = @'
10.86.50.55,10.190.206.20,10.190.49.31,10.190.50.117,10.86.50.57,10.190.49.216,10.190.50.120,10.190.200.27,10.86.50.58,10.86.50.94,10.190.38.181,10.190.50.119,10.86.50.53,10.190.50.167,10.190.49.30,10.190.49.89,10.190.50.115,10.86.50.54,10.86.50.56,10.86.50.59,10.190.50.210,10.190.49.20,10.190.50.172,10.190.49.21,10.86.49.18,10.190.50.173,10.86.49.49,10.190.50.171,10.190.50.174,10.86.49.63,10.190.50.175,10.13.12.200,10.190.49.27,10.190.49.19,10.86.49.29,10.13.12.201,10.86.49.28,10.190.49.62,10.86.50.147,10.86.49.24,10.86.50.146,10.190.50.182,10.190.50.25,10.190.38.252,10.190.50.57,10.190.50.54,10.86.50.78,10.190.50.23,10.190.49.8,10.86.50.80,10.190.50.53,10.190.49.229,10.190.50.58,10.190.50.130,10.190.50.22,10.86.52.22,10.19.68.61,10.41.43.130,10.190.50.56,10.190.50.123,10.190.49.55,10.190.49.66,10.190.49.68,10.190.50.86,10.86.49.113,10.86.49.114,10.86.49.101,10.190.50.150,10.190.49.184,10.190.50.152,10.190.50.151,10.86.49.43,10.190.192.25,10.190.192.23,10.190.49.115,10.86.49.44,10.190.38.149,10.190.38.151,10.190.38.150,10.190.38.152,10.190.38.145,10.190.38.141,10.190.38.148,10.190.38.142,10.190.38.144,10.190.38.147,10.190.38.143,10.190.38.146,10.190.192.26,10.190.38.251,10.190.49.105,10.190.49.110,10.190.49.137,10.190.49.242,10.190.50.221,10.86.50.72,10.86.49.16,10.86.49.15,10.190.49.112,10.86.49.32,10.86.49.11,10.190.49.150,10.190.49.159,10.190.49.206,10.86.52.28,10.190.49.151,10.190.49.207,10.86.49.19,10.190.38.103,10.190.38.101,10.190.38.116,10.190.38.120,10.190.38.102,10.190.38.123,10.190.38.140,10.190.198.50,10.190.38.109,10.190.38.108,10.190.38.111,10.190.38.112,10.190.38.113,10.190.38.114,10.190.49.152,10.190.50.43,10.86.49.23,10.86.49.205,10.86.49.220,10.190.50.230,10.190.192.238,10.190.192.237,10.190.192.239,10.190.50.7,10.190.50.10,10.86.50.86,10.190.38.125,10.190.38.127,10.190.38.126,10.190.50.227,10.190.50.149,10.86.49.59,10.190.49.158,10.190.49.157,10.190.44.11,10.190.38.124,10.190.50.153,10.190.49.40,10.190.192.235,10.190.192.236,10.190.50.241,10.190.50.240,10.86.46.8,10.190.38.234,10.190.38.233,10.86.50.163,10.86.50.180,10.86.50.164,10.190.49.245,10.190.49.244,10.190.192.244,10.190.38.130,10.86.49.142,10.86.49.102,10.86.49.141,10.86.49.67,10.190.50.206,10.190.192.243,10.190.192.241
'@
$SplitInStuff = $InStuff.Split(',')
$BatchSize = 25
$BatchCount = [math]::Truncate($SplitInStuff.Count / $BatchSize) + 1
$Start = $End = 0
$Result = foreach ($BC_Item in 1..$BatchCount)
{
    $Start = $End
    if ($BC_Item -eq 1)
    {
        $End = $Start + $BatchSize - 1
    }
    else
    {
        $End = $Start + $BatchSize
    }
    $Range = $Start..$End
    [PSCustomObject]@{
        IP_List = $SplitInStuff[$Range]
    }
}
$Result
'=' * 20
$Result[0]
'=' * 20
$Result[0].IP_List.Count
'=' * 20
$Result[0].IP_List
screen output ...
IP_List
-------
{10.86.50.55, 10.190.206.20, 10.190.49.31, 10.190.50.117...}
{10.86.49.18, 10.190.50.173, 10.86.49.49, 10.190.50.171...}
{10.86.50.80, 10.190.50.53, 10.190.49.229, 10.190.50.58...}
{10.190.49.115, 10.86.49.44, 10.190.38.149, 10.190.38.151...}
{10.86.49.32, 10.86.49.11, 10.190.49.150, 10.190.49.159...}
{10.86.49.23, 10.86.49.205, 10.86.49.220, 10.190.50.230...}
{10.190.50.240, 10.86.46.8, 10.190.38.234, 10.190.38.233...}
====================
{10.86.50.55, 10.190.206.20, 10.190.49.31, 10.190.50.117...}
====================
25
====================
10.86.50.55
10.190.206.20
10.190.49.31
10.190.50.117
10.86.50.57
10.190.49.216
10.190.50.120
10.190.200.27
10.86.50.58
10.86.50.94
10.190.38.181
10.190.50.119
10.86.50.53
10.190.50.167
10.190.49.30
10.190.49.89
10.190.50.115
10.86.50.54
10.86.50.56
10.86.50.59
10.190.50.210
10.190.49.20
10.190.50.172
10.190.49.21
10.86.49.18
try this
$cpt=0
$Rang=1
#remove old file
Get-ChildItem "H:\FileIP_*.txt" -file | Remove-Item -Force
(Get-Content "H:\IP.txt") -split ',' | %{
if (!($cpt++ % 100)) {$FileResult="H:\FileIP_{0:D3}.txt" -f $Rang++} # build filename if cpt divisile by 100
$_ | Out-File $FileResult -Append
}

powershell: Append text before specific line instead of after

I'm looking for a way to add text before a line.
To be more specific, before a line and a blank space.
Right now the script adds my text after the line [companyusers].
But I'd like to add the line before [CompanytoEXT] and before the blank space above [CompanytoEXT].
Does anybody know how to do this?
Visual representation of what I'd want to do: https://imgur.com/a/lgH5i
My current script:
$FileName = "C:\temptest\testimport - Copy.txt"
$Pattern = "[[\]]Companyusers"
$FileOriginal = Get-Content $FileName
[String[]] $FileModified = @()
Foreach ($Line in $FileOriginal)
{
    $FileModified += $Line
    if ($Line -match $pattern)
    {
        #Add Lines after the selected pattern
        $FileModified += "NEWEMAILADDRESS"
    }
}
Set-Content $fileName $FileModified
Thanks for any advice!
Even if you're just pointing me where to look for answers it will be very much appreciated.
This might be easier using an ArrayList, that way you can insert new data easily at a specific point:
$FileName = "C:\temptest\testimport - Copy.txt"
$Pattern = "[[\]]Companyusers"
[System.Collections.ArrayList]$file = Get-Content $FileName
$insert = @()
for ($i = 0; $i -lt $file.count; $i++) {
    if ($file[$i] -match $pattern) {
        $insert += $i - 1 #Record the position of the line before this one
    }
}
#Now loop the recorded array positions and insert the new text
$insert | Sort-Object -Descending | ForEach-Object { $file.insert($_,"NEWEMAILADDRESS") }
Set-Content $FileName $file
First open the file into an ArrayList, then loop over it. Each time you encounter the pattern, you can add the previous position into a separate array, $insert. Once the loop is done, you can then loop the positions in the $insert array and use them to add the text into the ArrayList.
You need a little state machine here. Make a note when you have found the correct section, but do not insert the line yet. Insert only at the next empty line (or at the end of the file, if the section is the last in the file).
Haven't tested, but should look like this:
$FileName = "C:\temptest\testimport - Copy.txt"
$Pattern = "[[\]]Companyusers"
$FileOriginal = Get-Content $FileName
[String[]] $FileModified = @()
$inCompanyUsersSection = $false
Foreach ($Line in $FileOriginal)
{
    if ($Line -match $pattern)
    {
        $inCompanyUsersSection = $true
    }
    if ($inCompanyUsersSection -and $line.Trim() -eq "")
    {
        $FileModified += "NEWEMAILADDRESS"
        $inCompanyUsersSection = $false
    }
    $FileModified += $Line
}
# Border case: CompanyUsers might be the last section in the file
if ($inCompanyUsersSection)
{
    $FileModified += "NEWEMAILADDRESS"
}
Set-Content $fileName $FileModified
Edit: If you don't want to use the "insert at the next empty line" approach, because your section may include empty lines, you can also trigger the insert at the beginning of the next section ($line.StartsWith("[")). However, that would complicate things, because now you have to look two lines ahead, which means you have to buffer one line before writing it out. Doable but ugly.
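For completeness, here is a rough, untested sketch of that variant - it buffers one line so the new text can go in before the blank line that precedes the next section header (it reuses $FileOriginal and $Pattern from the code above):
[String[]] $FileModified = @()
$buffer = $null
$inCompanyUsersSection = $false
Foreach ($Line in $FileOriginal)
{
    if ($inCompanyUsersSection -and $Line.StartsWith("["))
    {
        # next section header reached: emit the new line before the buffered blank line
        $FileModified += "NEWEMAILADDRESS"
        $inCompanyUsersSection = $false
    }
    if ($null -ne $buffer) { $FileModified += $buffer }
    $buffer = $Line
    if ($Line -match $Pattern) { $inCompanyUsersSection = $true }
}
if ($null -ne $buffer) { $FileModified += $buffer }
# Border case: CompanyUsers might be the last section in the file
if ($inCompanyUsersSection) { $FileModified += "NEWEMAILADDRESS" }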