So, I have this RegEx that captures a specific string I need (thanks to Shawn Mehan):
>?url:\'\/watch\/(video[\w-\/]*)
It works great, but now I need to modify my criteria: I need to capture ONLY the first URL after EACH instance of videos:[{title:. For example, in the sample below the first URL I'd want captured is the /watch/video/716424/worlds-first-sky-pool-soon-to-appear-in-south-london one.
How might I approach this? I have a VBScript that will dump each URL to a text file, so I just need help selecting the correct URLs from the blob below. I'm thinking something like "if this string is found, do this, then loop". Setting the regex Global property to False should only grab the first instance each round, right? A basic example would help.
I believe I have all of the pieces I need, but I'm not quite sure how to put them together. I'm expecting the code below to loop through and find the index of each instance of "videos:[{title:", then use the regex (Global set to False) to grab the first URL after it, write the found URL to my text file, and loop until all are found. It isn't working...
(larger portion of html_dump: http://pastebin.com/6i5gmeTB)
Set objWshShell = WScript.CreateObject("WScript.Shell")
Set fso = CreateObject("Scripting.FileSystemObject")

Set objRegExp = New RegExp
objRegExp.Global = False
objRegExp.Pattern = ">?url:\'\/watch\/(video[\w-\/]*)"

filename = fso.GetParentFolderName(WScript.ScriptFullName) & "\html_dump.txt"    'Text file containing the html
filename1 = fso.GetParentFolderName(WScript.ScriptFullName) & "\URLsClean.txt"   'Clean URLs get written here

Set urldump = fso.OpenTextFile(filename, 1, True)
Set WriteURL = fso.OpenTextFile(filename1, 2, True)    'Open once, before the loop, so earlier URLs aren't overwritten

Do Until urldump.AtEndOfStream
    strLine = urldump.ReadLine()
    strSearch = InStr(strLine, "videos:[{title:")      'Position of "videos:[{title:" so we can grab the first URL after it
    If strSearch > 0 Then
        Set myMatches = objRegExp.Execute(Mid(strLine, strSearch))   'Match the URL pattern in the text after the marker
        If myMatches.Count > 0 Then
            strCleanURL = myMatches(0).Value
            WriteURL.WriteLine strCleanURL              'Write the clean URL to the text file
        End If
    End If
Loop

WriteURL.Close
urldump.Close
var streams = [ {streamID:138, cards:[{cardId: 59643,cardTypeId: 48,clickCount: 84221,occurredOn: '2015-08-17T15:30:17.000-07:00',expiredOn: '',header: 'Latest News Headlines', subHeader: 'Here are some of the latest headlines from around the world.', link: '/watch/playlist/544/Latest-News-Headlines', earn: 3, playlistRevisionID: 3427, image: 'http%3A%2F%2Fpthumbnails.5min.com%2F10380591%2F519029502_3_o.jpg', imageParamPrefix: '?', size: 13, durationMin: 15, durationTime: '14:34',pos:0,trkId:'2gs55j6u0nz8', true,videos:[{title:'World\'s First Sky Pool Soon To Appear In South London',thumbnail:'http%3A%2F%2Fpthumbnails.5min.com%2F10380509%2F519025436_c_140_105.jpg',durationTime:'0:39',url:'/watch/video/716424/worlds-first-sky-pool-soon-to-appear-in-south-london',rating:'4.2857'},{title:'Treasure Hunters Find $4.5 Million in Spanish Coins',thumbnail:'http%3A%2F%2Fpthumbnails.5min.com%2F10380462%2F519023092_3.jpg',durationTime:'0:54',url:'/watch/video/715927/treasure-hunters-find-4-5-million-in-spanish-coins',rating:'4.25'},{title:'Former President Jimmy Carter Says Cancer Has Spread to Brain',thumbnail:'http%3A%2F%2Fpthumbnails.5min.com%2F10380499%2F519024920_c_140_105.jpg',durationTime:'1:59',url:'/watch/video/716363/former-president-jimmy-carter-says-cancer-has-spread-to-brain',rating:'2.8889'},{title:'Josh Duggar Had Multiple Accounts on AshleyMadison.Com',thumbnail:'http%3A%2F%2Fpthumbnails.5min.com%2F10380505%2F519025222_c_140_105.jpg',durationTime:'1:30',
Assuming that your input comes from a file and is in valid JSON format, you could do something like this in PowerShell:
$jsonfile = 'C:\path\to\input.txt'
$json = Get-Content $jsonfile -Raw | ConvertFrom-Json
$json.streams.cards | ForEach-Object { $_.videos[0].url }
The above is assuming that streams is the topmost key in your JSON data.
Note that the code requires at least PowerShell v3.
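If the dump isn't strict JSON (the sample line in the question uses unquoted keys and single quotes, so ConvertFrom-Json may choke on it), a regex-only VBScript sketch is another option. This isn't from the thread, just an illustration of the "find the marker, grab the first URL after it" idea: it folds the videos:[{title: marker into the question's URL pattern and, with Global set to True, each match's capture group is the first URL after one marker (assuming the marker and its first URL sit on the same line, as in the sample):
Set fso = CreateObject("Scripting.FileSystemObject")
Set re = New RegExp
re.Global = True                    'one match per marker instead of stopping at the first
re.Pattern = "videos:\[\{title:.*?url:\'\/watch\/(video[\w-\/]*)"

html = fso.OpenTextFile("html_dump.txt", 1).ReadAll    'read the whole dump at once
Set out = fso.OpenTextFile("URLsClean.txt", 2, True)

For Each m In re.Execute(html)
    out.WriteLine m.SubMatches(0)   'capture group = the first URL after this marker
Next
out.Close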
I am trying to extract and preprocess log data for a use case.
For instance, the log consists of problem numbers with information for each ID underneath. Each element starts with:
#!#!#identification_number###96245#!#!#change_log###
action
action1
change
#!#!#attribute###value_change
#!#!#attribute1###status_change
#!#!#attribute2###<None>
#!#!#attribute3###status_change_fail
#!#!#attribute4###value_change
#!#!#attribute5###status_change
#!#!#identification_number###96246#!#!#change_log###
action
change
change1
action1
#!#!#attribute###value_change
#!#!#attribute1###status_change_fail
#!#!#attribute2###value_change
#!#!#attribute3###status_change
#!#!#attribute4###value_change
#!#!#attribute5###status_change
I extracted the identification numbers and saved them as a .csv file:
import re

with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    change_log = f.read()   # findall needs a string, not the list that readlines() returns

number = re.findall('#!#!#identification_number###(.+?)#!#!#change_log###', change_log)
Now what I am trying to achieve is that, for every ID in the .csv file, I can append the corresponding log content, which looks like:
action
change
#!#!#attribute###
Since I am rather new to Python and only started working with regex a few days ago, I was hoping for some help.
Each log for an ID starts with "#!#!#identification_number###" and ends with "#!#!#attribute5### <entry>".
I have tried the following code, but the result is empty:
In:
x = re.findall("\[^#!#!#identification_number###((.|\n)*)#!#!#attribute5###((.|\n)*)$]", str(change_log))
In:
print(x)
Out:
[]
Try this:
pattern='entification_number###(.+?)#!#!#change_log###(.*?)#!#!#id'
re.findall(pattern, string+'#!#!#id', re.DOTALL)
The DOTALL flag makes the dot match newlines too, so the second capturing group should contain the log.
If you want to get the attributes for each identification number, you can parse the log (obtained from the search above) for each ID with the following:
pattern='#!#!#attribute(.*?)###(.*?)#!#'
re.findall(pattern, string_for_each_log_match+'#!#', re.DOTALL)
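Put together, a minimal sketch (the file path is the one from the question; the variable names are illustrative):
import re

# Read the whole log as one string so DOTALL matching can span lines.
with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    text = f.read()

id_pattern = 'entification_number###(.+?)#!#!#change_log###(.*?)#!#!#id'
attr_pattern = '#!#!#attribute(.*?)###(.*?)#!#'

# One (id, log) pair per identification number, then the attributes inside each log.
for id_no, log in re.findall(id_pattern, text + '#!#!#id', re.DOTALL):
    attrs = re.findall(attr_pattern, log + '#!#', re.DOTALL)
    print(id_no, attrs)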
If you put each ID into the regex using str.format() when you search, you can grab the lines that contain the correct change log.
import re

with open(r'path\to\csv.csv', 'r') as f:
    ids = [line.strip() for line in f]    # strip newlines so each id drops cleanly into the pattern

with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    change_log = f.readlines()

matches = {}
for id_no in ids:
    for i in range(len(change_log)):
        reg = '#!#!#identification_number###({})#!#!#change_log###'.format(id_no)
        if re.search(reg, change_log[i]):
            matches[id_no] = i
            break
This will create a dictionary with the structure {id_no:line_no,...}.
So once you have all of the lines that tell you where each log starts, you can grab the lines you want that come after these lines.
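For example, a minimal continuation of the snippet above that collects those lines (stopping at the next identification-number header; the logs name is illustrative):
logs = {}
for id_no, start in matches.items():
    body = []
    for line in change_log[start + 1:]:
        if line.startswith('#!#!#identification_number'):
            break           # the next ID's log begins here
        body.append(line)
    logs[id_no] = body      # {id_no: [lines belonging to that id], ...}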
I have a system that I can output a spreadsheet from. I then take this outputted spreadsheet and import it into MS Access. There, I run some basic update queries before merging the final result into a SharePoint 2013 Linked List.
The spreadsheet I output has an unfortunate Long Text field which has some comments in it, which are vital. On the system that hosts the spreadsheet, these comments are nicely formatted. When the spreadsheet is output, though, the field turns into a long, very unpretty string like so:
09:00 on 01/03/2017, Firstname Surname. :- Have responded to request for more information. 15:12 on 15/02/2017, Firstname Surname. :- Need more information to progress request. 17:09 on 09/02/2017, Firstname Surname. :- Have placed request.
What I would like to do is run a query (either in MS Access or MS Excel) which can scan this field, detect occurrences of "##:## on ##/##/####, Firstname Surname. :-" and then automatically insert a line break before them, so this text is more neatly formatted. It would obviously skip the first occurrence of this format, as otherwise it would enter a new line at the start of the field. Ideal end result would be:
09:00 on 01/03/2017, Firstname Surname. :- Have responded to request for more information.
15:12 on 15/02/2017, Firstname Surname. :- Need more information to progress request.
17:09 on 09/02/2017, Firstname Surname. :- Have placed request.
To be honest, I haven't tried much myself so far, as I really don't know where to start. I don't know if this can be done without regular expressions, or within a simple query versus VBA code.
I did start building a regular expression, like so:
[0-9]{2}:[0-9]{2}\s[o][n]\s[0-9]{2}\/[0-9]{2}\/[0-9]{4}\,\s
But this looks a little ridiculous and I'm fairly certain I'm going about it in a very unnecessary way. From what I can see in the text, detecting each occurrence of "##:## on ##/##/####" should be enough; inserting a line break before each one after the first will suffice.
You have your RegExp pattern; now you need a function that prefixes each found item with your extra delimiter.
Look at the function below: it takes your long string, finds each date stamp using your pattern, and prepends your delimiter.
Ideally, I would run each line twice and add delimiters after each column, so you end up with a string like
datestamp;firstname lastname;comment
You can then use arr = VBA.Split(text, ";") to get your data into an array and use it as
dateStamp = arr(0)
name = arr(1)
comment = arr(2)
(see the split sketch after the usage example below).
Public Function FN_REGEX_REPLACE(iText As String, iPattern As String, iDelimiter As String) As String
    Dim objRegex As Object
    Dim allmatches As Variant
    Dim I As Long
    On Error GoTo FN_REGEX_REPLACE_Error

    Set objRegex = CreateObject("vbscript.regexp")
    With objRegex
        .Multiline = True
        .Global = True
        .IgnoreCase = True
        .Pattern = iPattern
        If .Test(iText) Then
            Set allmatches = .Execute(iText)
            If allmatches.Count > 0 Then
                For I = 1 To allmatches.Count - 1   'starting at 1 skips the first match; I = 0 would prefix the first one too
                    iText = VBA.Replace(iText, allmatches.Item(I), iDelimiter & allmatches.Item(I))
                Next I
            End If
        End If
    End With

    FN_REGEX_REPLACE = Trim(iText)
    Set objRegex = Nothing
    On Error GoTo 0
    Exit Function

FN_REGEX_REPLACE_Error:
    MsgBox Err.Description
End Function
Use the above function as:
mPattern = "[0-9]{2}:[0-9]{2}\s[o][n]\s[0-9]{2}\/[0-9]{2}\/[0-9]{4}\,"
replacedText = FN_REGEX_REPLACE(originalText, mPattern, vbNewLine)
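Following the split idea above, a small sketch of getting the individual entries back out (assuming ";" is passed as iDelimiter instead of a newline; purely illustrative):
Dim entries() As String
Dim n As Long
entries = VBA.Split(FN_REGEX_REPLACE(originalText, mPattern, ";"), ";")
For n = LBound(entries) To UBound(entries)
    Debug.Print Trim(entries(n))   'each element: "hh:mm on dd/mm/yyyy, Firstname Surname. :- comment"
Next n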
Excel uses LF for linebreaks, Access uses CRLF.
So it should suffice to run a simple replacement query:
UPDATE myTable
SET LongTextField = Replace([LongTextField], Chr(10), Chr(13) & Chr(10))
WHERE <...>
You need to make sure that this runs only once on newly imported records, not repeatedly on all records.
I'm hoping I can get some help from a programmer.
What I want to do is translate a Word report generated by a piece of software, so I turned to macros. I already have a Word file containing the original words/phrases and the translated ones.
I 'stole' the code to translate from some forum online, which works great with normal text. My problem is that the text of the report I want to translate is within various "text boxes" and "tables".
I was able to manually remove the tables but keep the text. This totally ruined the formatting, but I can deal with that later.
Unfortunately, I cannot do the same with text boxes. There is no 'delete, but keep the text' function for text boxes.
I can send you the macro code, the original report automatically generated by the software and the file to get all translated words from.
I really appreciate your time.
Ok. This is code that translates normal text.
Sub Translate()
    Dim oChanges As Document, oDoc As Document
    Dim oTable As Table
    Dim oRng As Range
    Dim rFindText As Range, rReplacement As Range
    Dim i As Long
    Dim sFname As String

    'Change the path in the line below to reflect the path of the table document
    sFname = "C:\Users\user\Desktop\Dictionary.doc"
    Set oDoc = ActiveDocument
    Set oChanges = Documents.Open(FileName:=sFname, Visible:=False)
    Set oTable = oChanges.Tables(1)

    For i = 1 To oTable.Rows.Count
        Set oRng = oDoc.Range
        Set rFindText = oTable.Cell(i, 1).Range
        rFindText.End = rFindText.End - 1
        Set rReplacement = oTable.Cell(i, 2).Range
        rReplacement.End = rReplacement.End - 1
        With oRng.Find
            .ClearFormatting
            .Replacement.ClearFormatting
            Do While .Execute(findText:=rFindText, _
                              MatchWholeWord:=True, _
                              MatchWildcards:=False, _
                              Forward:=True, _
                              Wrap:=wdFindContinue) = True
                oRng.Text = rReplacement
            Loop
        End With
    Next i

    oChanges.Close wdDoNotSaveChanges
End Sub
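The thread never got to the text-box problem, but one way to attack it (rather than deleting the text boxes) is to run the same table-driven replacement inside each shape's text frame. The following is only a sketch under the same assumptions as Translate() above (dictionary table in C:\Users\user\Desktop\Dictionary.doc, find text in column 1, replacement in column 2); it is not tested against the actual report:
Sub TranslateTextBoxes()
    Dim oChanges As Document, oDoc As Document
    Dim oTable As Table
    Dim oRng As Range
    Dim rFindText As Range, rReplacement As Range
    Dim oShape As Shape
    Dim i As Long

    Set oDoc = ActiveDocument
    Set oChanges = Documents.Open(FileName:="C:\Users\user\Desktop\Dictionary.doc", Visible:=False)
    Set oTable = oChanges.Tables(1)

    For Each oShape In oDoc.Shapes
        If oShape.TextFrame.HasText Then
            For i = 1 To oTable.Rows.Count
                Set rFindText = oTable.Cell(i, 1).Range
                rFindText.End = rFindText.End - 1
                Set rReplacement = oTable.Cell(i, 2).Range
                rReplacement.End = rReplacement.End - 1
                Set oRng = oShape.TextFrame.TextRange
                With oRng.Find
                    .ClearFormatting
                    .Replacement.ClearFormatting
                    Do While .Execute(findText:=rFindText, _
                                      MatchWholeWord:=True, _
                                      MatchWildcards:=False, _
                                      Forward:=True, _
                                      Wrap:=wdFindStop) = True
                        oRng.Text = rReplacement
                    Loop
                End With
            Next i
        End If
    Next oShape

    oChanges.Close wdDoNotSaveChanges
End Sub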
I'm guessing you'd need to see the format of the document that is being translated, which contains all the tables and text boxes. But it is too large and I'm not sure if I can send it as an attachment here somehow. (Sorry, it's my first time on this forum.) Any advice?
Thanks a lot
JD
I am using a labeling software and I don't want any text inside of parentheses to display on the labels. Here is what I have so far
Function RemovePara(TextToBeEdited)
    Set myRegEx = New RegExp
    myRegEx.IgnoreCase = True
    myRegEx.Global = True
    myRegEx.Pattern = "\(([a-z]+?)\)(.+)"
    RemovePara = myRegEx.Replace(TextToBeEdited, "")   'Replace returns a String, so no Set, and use the function's parameter
End Function
Now I'm pretty new to this, and when I try to save this code in the labeling software it says "The script did not read the "Value" property, which means the current specified data source was ignored. This may not be what you intended". I put the field name of the text I want edited where "TextToBeEdited" is. What am I missing here?
Lookaround assertions are one way to target just the text between the parentheses, but VBScript's RegExp engine only supports lookahead, not lookbehind, so a pattern such as (?<=\()[^()]*(?=\)) will throw an error here. Matching the whole parenthesized group and replacing it with nothing avoids the need for lookbehind:
myRegEx.Pattern = "\([^()]*\)"
RemovePara = myRegEx.Replace(TextToBeEdited, "")
I have an Asp.Net web application to manage certain tables in the database. I'm using Grid to insert, update the Database. In addition to this, the requirement is that, user should be able to insert into database from Excel(by uploading the Excel, sort of like Import from Excel into Database).
So I'm reusing the insertion code (which I used for Insert in the Grid) for each row in the Excel file.
I have regular expression validators for certain fields in the Grid in ASP.NET, as follows:
Id: can be a combination of numbers, letters and underscores. Regex is: "^[a-zA-Z0-9_]{1,50}$"
Formula: can have arithmetic operators and a dot. Regex is: "^[ A-Za-z0-9%._(/*+)-]*$"
Sort Order: must be a number with some maximum size. Regex is: "^[0-9]{1,5}$"
Weight: real number with a maximum size. Regex is: "^[0-9]+(?:\.\d{1,2})?$"
Domain UserName: username with domain name. Regex is: "^[a-zA-Z\\._]{1,200}$"
I wanted to have these validators in the Excel cells too. I've searched whether Excel allows regular expressions and found that it has to be done through VBA or a third-party tool. I don't know VB.NET and don't want to use an external tool.
I don't know much about Excel either. Is there any way to do the validations? If so, is there a formula format for setting up a regex?
Can anyone suggest how to do this? Thanks in advance.
You can use the Regex engine that comes with VBScript:
Dim User_ID As String
User_ID = InputBox("Enter User ID:")

With CreateObject("VBScript.RegExp")
    .Global = True
    .Pattern = "^[\w]{1,50}$"
    If .Test(User_ID) Then '// Check the pattern matches the User_ID string
        Range("B" & Rows.Count).End(xlUp).Offset(1, 0).Value = User_ID
    Else
        MsgBox "Invalid ID, please try again!"
    End If
End With
I got the answer. I wrote a Worksheet_Change event with If/Else:
Private Sub Worksheet_Change(ByVal Target As Range)
    If Target.Row = 1 Then Exit Sub                '// Skip the header row itself
    Application.EnableEvents = False               '// Disable events to prevent an infinite loop
    If Cells(1, Target.Column).Value = "Attribute_Id" Then
        Target.Value = AttributeId(Target.Value)
    ElseIf Cells(1, Target.Column).Value = "Attribute_Name" Then
        Target.Value = AttributeName(Target.Value)
    End If
    Application.EnableEvents = True                '// Turn events back on
End Sub
And these are the functions:
Function AttributeId(Attribute_Id As String) As String
    With CreateObject("vbscript.regexp")
        .Global = True
        .Pattern = "^[a-zA-Z0-9_]{1,50}$"
        .IgnoreCase = True
        If Not .Test(Attribute_Id) Then
            MsgBox ("Invalid Attribute ID, please try again!")
            Exit Function
        End If
    End With
    AttributeId = Attribute_Id
End Function
And
Function AttributeName(Attribute_Name As String) As String
    If Attribute_Name = "" Then MsgBox ("Attribute Name is a Mandatory field!")
    AttributeName = Attribute_Name
End Function
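Not part of the original answer, but if the other columns from the question need the same treatment, a more general sketch would be a single validator that takes the pattern as an argument (the patterns are the ones listed in the question; the function name is illustrative):
Function ValidateField(FieldValue As String, RegexPattern As String, FieldName As String) As String
    With CreateObject("VBScript.RegExp")
        .Global = True
        .IgnoreCase = True
        .Pattern = RegexPattern
        If Not .Test(FieldValue) Then
            MsgBox "Invalid " & FieldName & ", please try again!"
            Exit Function          'returns "" so the invalid entry is cleared
        End If
    End With
    ValidateField = FieldValue
End Function

'Example use inside Worksheet_Change, e.g. for the Sort Order column:
'    Target.Value = ValidateField(Target.Value, "^[0-9]{1,5}$", "Sort Order")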
No need to bind the functions to the cells.
-- Thank you #S O for the help..