I would like to list devices and put their prices next to them.
My goal is to check different sites every week and notice trends.
This is a hobby project, I know there are sites that already do this.
For instance:
Device | URL Site 1 | Price Site 1 | URL Site 2 | Price Site 2
Device a | http://... | €40,00 | http://... | €45,00
Device b | http://... | €28,00 | http://... | €30,50
Doing this manually (checking every week) is a lot of work, so I thought a macro in Excel would help. The thing is, I would like to put the data in a single cell, and Excel only recognises tables. My idea: view the source code, read the price, and export the price to a specific cell.
I think this is all possible within Excel, but I can't quite figure out how to read the price (or other given data) and how to put it into one specific cell. Can I specify coordinates in the source code, or is there a more effective approach?
First of all, you have to find out how the website works. For the page you asked about, I did the following:
Opened the http://www.mediamarkt.de page in Chrome.
Typed BOSCH WTW 85230 in the search box; a suggestion list appeared.
Pressed F12 to open the developer tools and clicked the Network tab.
Each time I typed, a new request appeared.
Clicked the request to examine the general info.
You can see that it uses the GET method and some parameters, including the URL-encoded product name.
Clicked the Response tab to examine the data returned from the server.
You can see it is regular JSON; the full content is as follows:
{"suggestions":[{"attributes":{"energyefficiencyclass":"A++","modelnumber":"2004975","availabilityindicator":"10","customerrating":"0.00000","ImageUrl":"http://pics.redblue.de/artikelid/DE/2004975/CHECK","collection":"shop","id":"MediaDEdece2358813","currentprice":"444.00","availabilitytext":"Lieferung in 11-12 Werktagen"},"hitCount":0,"image":"http://pics.redblue.de/artikelid/DE/2004975/CHECK","name":"BOSCH WTW 85230 Kondensationstrockner mit Warmepumpentechnologie (8 kg, A++)","priority":9775,"searchParams":"/Search.ff?query=BOSCH+WTW+85230+Kondensationstrockner+mit+W%C3%A4rmepumpentechnologie+%288+kg%2C+A+%2B+%2B+%29\u0026channel=mmdede","type":"productName"}]}
Here you can find the "currentprice":"444.00" property with the price.
I simplified the request by throwing out some optional parameters; it turned out that the same JSON response can be retrieved via the URL http://www.mediamarkt.de/FACT-Finder/Suggest.ff?channel=mmdede&query=BOSCH+WTW+85230
That data was enough to build some code, assuming that the first column is intended for the product names:
Option Explicit

Sub TestMediaMarkt()

    Dim oRange As Range
    Dim aResult() As String
    Dim i As Long
    Dim sURL As String
    Dim sRespText As String

    ' set source range with product names from column A
    Set oRange = ThisWorkbook.Worksheets(1).Range("A1:A3")
    ' create one column array the same size
    ReDim aResult(1 To oRange.Rows.Count, 1 To 1)
    ' loop rows one by one, make XHR for each product
    For i = 1 To oRange.Rows.Count
        ' build up URL
        sURL = "http://www.mediamarkt.de/FACT-Finder/Suggest.ff?channel=mmdede&query=" & EncodeUriComponent(oRange.Cells(i, 1).Value)
        ' retrieve HTML content
        With CreateObject("MSXML2.XMLHTTP")
            .Open "GET", sURL, False
            .Send
            sRespText = .responseText
        End With
        ' regular expression for price property
        With CreateObject("VBScript.RegExp")
            .Global = True
            .MultiLine = True
            .IgnoreCase = True
            .Pattern = """currentprice""\:""([\d.]+)""" ' capture digits after 'currentprice' in submatch
            With .Execute(sRespText)
                If .Count = 0 Then ' no matches, something went wrong
                    aResult(i, 1) = "N/A"
                Else ' store the price to the array from the submatch
                    aResult(i, 1) = .Item(0).Submatches(0)
                End If
            End With
        End With
    Next
    ' output the resulting array to column B
    Output Sheets(1).Range("B1"), aResult

End Sub

Function EncodeUriComponent(strText)

    Static objHtmlfile As Object

    If objHtmlfile Is Nothing Then
        Set objHtmlfile = CreateObject("htmlfile")
        objHtmlfile.parentWindow.execScript "function encode(s) {return encodeURIComponent(s)}", "jscript"
    End If
    EncodeUriComponent = objHtmlfile.parentWindow.encode(strText)

End Function

Sub Output(oDstRng As Range, aCells As Variant)

    With oDstRng
        .Parent.Select
        With .Resize( _
                UBound(aCells, 1) - LBound(aCells, 1) + 1, _
                UBound(aCells, 2) - LBound(aCells, 2) + 1 _
            )
            .NumberFormat = "#"
            .Value = aCells
            .Columns.AutoFit
        End With
    End With

End Sub
I filled the worksheet with some product names, launched the sub, and got the prices in column B.
This is just an example of how to retrieve data from a website via XHR and parse the response with RegExp; I hope it helps.
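Since your target table has one price column per site, the same pattern extends naturally: one request and one regex per site, and one result column per site. Below is a rough sketch only; the second site's URL and price pattern are placeholders left as a comment, because every shop exposes a different endpoint and JSON layout, which you would have to inspect the same way as shown above. It reuses the EncodeUriComponent and Output routines from the code above.
Sub TestTwoSites()

    Dim oRange As Range
    Dim aResult() As String
    Dim i As Long

    Set oRange = ThisWorkbook.Worksheets(1).Range("A1:A3")
    ' two result columns, one per site
    ReDim aResult(1 To oRange.Rows.Count, 1 To 2)

    For i = 1 To oRange.Rows.Count
        ' site 1: MediaMarkt suggest endpoint, price taken from "currentprice"
        aResult(i, 1) = GetPrice( _
            "http://www.mediamarkt.de/FACT-Finder/Suggest.ff?channel=mmdede&query=" & _
            EncodeUriComponent(oRange.Cells(i, 1).Value), _
            """currentprice""\:""([\d.]+)""")
        ' site 2 (placeholder): aResult(i, 2) = GetPrice(SITE2_URL & EncodeUriComponent(oRange.Cells(i, 1).Value), SITE2_PRICE_PATTERN)
    Next

    ' write both columns starting at B1
    Output Sheets(1).Range("B1"), aResult

End Sub

Function GetPrice(sURL As String, sPattern As String) As String

    Dim sRespText As String

    ' retrieve the response text
    With CreateObject("MSXML2.XMLHTTP")
        .Open "GET", sURL, False
        .Send
        sRespText = .responseText
    End With
    ' pull the price out of the response with the given pattern
    With CreateObject("VBScript.RegExp")
        .Global = True
        .Pattern = sPattern
        With .Execute(sRespText)
            If .Count = 0 Then
                GetPrice = "N/A"
            Else
                GetPrice = .Item(0).Submatches(0)
            End If
        End With
    End With

End Function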
Here is what I need to do (for clarity):
Take a PDF file (link at the bottom).
Then parse only the information under each header into a DataGridView.
I couldn't think of a way to do this (seeing as there is no native way to handle PDFs).
So my only thought was to convert it to a text document, then (somehow) take the text from the text document and put it into the DataGridView.
So, using iTextSharp, I first convert the PDF to a text file, which keeps most of its formatting (see link below).
This is the source for that:
Dim mPDF As String = "C:\Users\Innovators World Wid\Documents\test.pdf"
Dim mTXT As String = "C:\Users\Innovators World Wid\Documents\test.txt"
Dim mPDFreader As New iTextSharp.text.pdf.PdfReader(mPDF)
Dim mPageCount As Integer = mPDFreader.NumberOfPages()
Dim parser As PdfReaderContentParser = New PdfReaderContentParser(mPDFreader)
'Create the text file.
Dim fs As FileStream = File.Create(mTXT)
Dim strategy As iTextSharp.text.pdf.parser.SimpleTextExtractionStrategy
For i As Integer = 1 To mPageCount
    strategy = parser.ProcessContent(i, New iTextSharp.text.pdf.parser.SimpleTextExtractionStrategy())
    Dim info As Byte() = New UTF8Encoding(True).GetBytes(strategy.GetResultantText())
    fs.Write(info, 0, info.Length)
Next
fs.Close()
However, I only need the "lines" of information, so everything should look like this:
63 FMPC0847535411 OD119523523152105000 Aug 28, 2020 02:18 PM EXPRESS
64 FMPP0532201112 OD119523544975573000 Aug 28, 2020 02:18 PM EXPRESS
65 FMPP0532243104 OD119523557412412000 Aug 28, 2020 02:18 PM EXPRESS
66 FMPC0847516962 OD119523576945605000 Aug 28, 2020 02:18 PM EXPRESS
67 FMPC0847520947 OD119523760191783000 Aug 28, 2020 02:19 PM EXPRESS
In order to do that, I needed to use RegEx to remove everything I didn't want.
Here is the RegEx I used:
(\d{2}\s.{14}\s.{20}\s.{3}\s\d{1,2},\s\d{4}\s\d{2}:\d{2}\s.{2}\sEXPRESS,*\s*R*e*p*l*a*c*e*m*e*n*t*\s*o*r*d*e*r*)
Here is the code I used.
Private Sub Fixtext()
    Dim regex As Regex = New Regex("\d{2}\s.{14}\s.{20}\s.{3}\s\d{1,2},\s\d{4}\s\d{2}:\d{2}\s.{2}\sEXPRESS,*\s*R*e*p*l*a*c*e*m*e*n*t*\s*o*r*d*e*r*")
    Using reader As StreamReader = New StreamReader("C:\Users\Innovators World Wid\Documents\test.txt")
        While (True)
            Dim line As String = reader.ReadLine()
            If line = Nothing Then
                Return
            End If
            Dim match As Match = regex.Match(line)
            If match.Success Then
                Dim value As String = match.Groups(1).Value
                Console.WriteLine(line)
            End If
        End While
    End Using
End Sub
The results are "close" but not exactly the way I need it. In some cases they are "crammed" together and there are still parts left behind. An example would be
90 FMPC0847531898 OD119522758218348000 Aug 28, 2020 03:20 PM EXPRESS
491 FMPP0532220915 OD119522825195489000 Aug 28, 2020 03:21 PM EXPRESS
Tracking Id Forms Required Order Id RTS done on Notes492 FMPP0532194482 OD119522868525176000 Aug 28, 2020 03:21 PM EXPRESS
493 FMPP0532195684 OD119522871090000000 Aug 28, 2020 03:21 PM EXPRESS494 FMPP0532224318 OD119522895172342000 Aug 28, 2020 03:21 PM EXPRESS
The format I actually need is (again) a format I can use to import the data later into a DataGridView, so each line needs to be:
[number][ID][ID2][Date][Notes]
[number][ID][ID2][Date][Notes]
[number][ID][ID2][Date][Notes]
[number][ID][ID2][Date][Notes]
using this "Concept" This is an example of what I need (though i know this doesn't work, but something along these lines that will work)
Dim regex As Regex = New Regex("\d{2}\s.{14}\s.{20}\s.{3}\s\d{1,2},\s\d{4}\s\d{2}:\d{2}\s.{2}\sEXPRESS,*\s*R*e*p*l*a*c*e*m*e*n*t*\s*o*r*d*e*r*")
Using reader As StreamReader = New StreamReader("C:\Users\Innovators World Wid\Documents\test.txt")
    While (True)
        Dim line As String = reader.ReadLine()
        If line = Nothing Then
            Return
        End If
        Dim match As Match = regex.Match(line)
        If match.Success Then
            Dim value As String = match.Groups(1).Value
            Dim s As String = value
            s = s.Replace(" Tracking Id Forms Required Order Id RTS done on Notes", Nothing)
            s = s.Replace("EXPRESS ", "EXPRESS")
            s = s.Replace("EXPRESS", "EXPRESS" & vbCrLf)
            Console.WriteLine(line)
        End If
    End While
End Using
Here is a "brief" explanation with files included.
Copy of the original PDF (This is the PDF being converted to .txt using itext)
I am only doing this because I can't think of a way (outside of paying for a 3rd party tool to convert a pdf to XLS)
https://drive.google.com/file/d/1iHMM_G4UBUlKaa44-Wb00F_9ZdG-vYpM/view?usp=sharing
using the above "itext method" I mentioned this is the outputted converted file
https://drive.google.com/file/d/10dgJDFW5XlhsB0_0QAWQvtimsDoMllx-/view?usp=sharing
I then use the above Regex (mentioned above) to parse out what I don't need.
however it isn't working.
So my questions are (for clarity):
Is this the only or best method to do what I need (convert the PDF to text, remove what I don't need, then put that information into a DataGridView), or is there another, cleaner, better method?
(If not 1) How can I make this work? Is something wrong with my RegEx or my logic? Am I missing something better/cleaner that someone can help me see?
(If 2 and not 1) What is the best way to take the results and place them in the proper DataGridView columns?
Final statement: It doesn't have to be this method. I will take any method that lets me do what I need done, the cleaner the better; however, I have to avoid 3rd-party libraries that are free with limitations as well as paid 3rd-party libraries, which leaves me with few options (i.e. PDFBox, iText, iTextSharp). And this has to take me from a PDF (like the sample above) to that table information in a DataGridView or even a ListView.
I will take any help and I am more than appreciative. Also, I re-asked this question because a mod closed my original question, stating it wasn't clear what I needed. I tried in both cases to make the question as thorough as possible, but I hope this one is clearer so it doesn't get closed abruptly.
I cheated a bit by correcting the text file. It goes a little wonky at page breaks and misses starting a new line; perhaps you can correct that with iTextSharp or a hard-to-maintain regex.
I made a class to hold the data. The property names become the column headers in the DataGridView.
I read all the lines in the text file into an array. I checked the first character of each line to see if it was a digit, then split the line into another array on the space character. Next, I created a new Tracking object, fleshing out all its properties with the parameterized constructor.
Finally, I checked whether the line contained a comma and added that bit of text to the Notes property. The completed object is added to the list.
After the loop, the list is bound to the grid.
Public Class Tracking
    Public Property Number As Integer
    Public Property ID As String
    Public Property ID2 As String
    Public Property TrackDate As Date
    Public Property Notes As String

    Public Sub New(TNumber As Integer, TID As String, TID2 As String, TDate As DateTime, TNotes As String)
        Number = TNumber
        ID = TID
        ID2 = TID2
        TrackDate = TDate
        Notes = TNotes
    End Sub
End Class
Private Sub OPCode()
    Dim lst As New List(Of Tracking)
    Dim lines = File.ReadAllLines("C:\Users\maryo\Desktop\test.txt")
    For Each line In lines
        If Char.IsDigit(line(0)) Then
            Dim parts = line.Split(" "c)
            Dim T As New Tracking(CInt(parts(0)), parts(1), parts(2), Date.ParseExact($"{parts(3)} {parts(4)} {parts(5)} {parts(6)} {parts(7)}", "MMM d, yyyy hh:mm tt", CultureInfo.CurrentCulture), parts(8))
            If line.Contains(",") Then
                T.Notes &= line.Substring(line.IndexOf(","))
            End If
            lst.Add(T)
        End If
    Next
    DataGridView1.DataSource = lst
End Sub
EDIT
To pinpoint the error let's try...
Private Sub OPCode()
    Dim lst As New List(Of Tracking)
    Dim lines = File.ReadAllLines("C:\Users\maryo\Desktop\test.txt")
    For Each line In lines
        If Char.IsDigit(line(0)) Then
            Dim parts = line.Split(" "c)
            If parts.Length < 9 Then
                Debug.Print(line)
                MessageBox.Show($"We have a line that does not include all fields.")
                Exit Sub
            End If
            Dim T As New Tracking(CInt(parts(0)), parts(1), parts(2), Date.ParseExact($"{parts(3)} {parts(4)} {parts(5)} {parts(6)} {parts(7)}", "MMM d, yyyy hh:mm tt", CultureInfo.CurrentCulture), parts(8))
            If line.Contains(",") Then
                T.Notes &= line.Substring(line.IndexOf(","))
            End If
            lst.Add(T)
        End If
    Next
    DataGridView1.DataSource = lst
End Sub
Try this regex and see if it works according to your requirement:
\b[0-9].*(FMPC|OD).*(EXPRESS|Replacement\sOrder)\b
My question: how do I create a dictionary from a list by assigning dictionary keys based on a regex pattern match ('^--L-[0-9]{8}'), and assigning the values from all the lines between each key?
Example excerpt from the raw file:
SQL> --L-93752133
SQL> --SELECT table_name, tablespace_name from dba_tables where upper(table_name) like &tablename_from_developer;
SQL>
SQL> --L-52852243
SQL>
SQL> SELECT log_mode FROM v$database;
LOG_MODE
------------
NOARCHIVELOG
SQL>
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3
Current log sequence 5
SQL>
SQL> --L-42127143
SQL>
SQL> SELECT t.name "TSName", e.encryptionalg "Algorithm", d.file_name "File Name"
2 FROM v$tablespace t
3 , v$encrypted_tablespaces e
4 , dba_data_files d
5 WHERE t.ts# = e.ts#
6 AND t.name = d.tablespace_name;
no rows selected
Some additional detail: The raw file can be large (at least 80K+ lines, but often much larger) and I need to preserve the original spacing so the output is still easy to read. Here's how I'm reading the file in and removing "SQL>" from the beginning of each line:
with open(rawFile, 'r') as inFile:
    content = inFile.read()
rawList = content.splitlines()
for line in rawList:
    cleanLine = re.sub('^SQL> ', '', line)
Finding the dictionary keys I'm looking for is easy:
pattern = re.compile(r'^--L-[0-9]{8}')
if pattern.search(cleanLine) is not None:
    itemID = pattern.search(cleanLine)
    print(itemID.group(0))
But how do I assign all lines between each key as the value belonging to the most recent key preceding them? I've been playing around with new lists, tuples, and dictionaries but everything I do is returning garbage. The goal is to have the data and keys linked to each other so that I can return them as needed later in my script.
I spent a while searching for a similar question, but in most other cases the source file was already in a dictionary-like format so creating the new dictionary was a less complicated problem. Maybe a dictionary or tuple isn't the right answer, but any help would be appreciated! Thanks!
In general, you should question why you would read the entire file, split the lines into a list, and then iterate over the list. This is a Python anti-pattern.
For line oriented text files, just do:
with open(fn) as f:
for line in f:
# process a line
It sounds, however, like you have multi-line, block-oriented patterns. If so, with smaller files, read the entire file into a single string and use a regex on that. Then you would use group 1 and group 2 as the key and value in your dict:
pat = re.compile(pattern, flags)
with open(file_name) as f:
    di = {m.group(1): m.group(2) for m in pat.finditer(f.read())}
With a larger file, use a mmap:
import re, mmap

pat = re.compile(pattern, flags)
with open(file_name, 'r+') as f:
    mm = mmap.mmap(f.fileno(), 0)
    for i, m in enumerate(pat.finditer(mm)):
        # process each block accordingly...
As for the regex, I am a little unclear on what you are trying to capture or not. I think this regex is what you want:
^SQL> (--L-[0-9]{8})(.*?)(?=SQL> --L-[0-9]{8}|\Z)
In either case, running that regex with the example string yields:
>>> pat=re.compile(r'^SQL> (--L-[0-9]{8})\s*(.*?)\s*(?=SQL> --L-[0-9]{8}|\Z)', re.S | re.M)
>>> with open(file_name) as f:
... di={m.group(1):m.group(2) for m in pat.finditer(f.read())}
...
>>> di
{'--L-52852243': 'SQL> \nSQL> SELECT log_mode FROM v;\n\n LOG_MODE\n ------------\n NOARCHIVELOG\n\nSQL> \nSQL> archive log list\n Database log mode No Archive Mode\n Automatic archival Disabled\n Archive destination USE_DB_RECOVERY_FILE_DEST\n Oldest online log sequence 3\n Current log sequence 5\nSQL>',
'--L-93752133': 'SQL> --SELECT table_name, tablespace_name from dba_tables where upper(table_name) like &tablename_from_developer;\nSQL>',
'--L-42127143': 'SQL> \nSQL> SELECT t.name TSName, e.encryptionalg Algorithm, d.file_name File Name\n 2 FROM v t\n 3 , v e\n 4 , dba_data_files d\n 5 WHERE t.ts# = e.ts#\n 6 AND t.name = d.tablespace_name;\n\n no rows selected'}
Something like this?
import re

with open(rawFile, 'r') as inFile:
    content = inFile.read()
rawList = content.splitlines()

keyed_dict = {}
in_between_lines = ""
last_key = None
pattern = re.compile(r'^--L-[0-9]{8}')

for line in rawList:
    cleanLine = re.sub('^SQL> ', '', line)
    itemID = pattern.search(cleanLine)
    if itemID is not None:
        # store the lines collected so far under the previous key
        if last_key:
            keyed_dict[last_key] = in_between_lines
        last_key = itemID.group(0)
        in_between_lines = ""
    else:
        # keep the newline so the original spacing is preserved
        in_between_lines += cleanLine + "\n"

# flush the block belonging to the last key
if last_key:
    keyed_dict[last_key] = in_between_lines
I have data in Excel like the following (one row here = one cell in Excel):
07 July 2015 12:02 – 14 July 2015 17:02
12 August 2015 22:02 – 01 September 2015 11:02
I want to write a macro that will delete all the time info (e.g. "12:02") within a user's selection (multiple cells), so it looks like this:
07 July 2015 – 14 July 2015
12 August 2015 – 01 September 2015
When all "times" where similar ("00:00") this macro worked perfectly:
Sub delete_time()
    Selection.Replace What:="00:00", Replacement:="", LookAt:=xlPart, _
        SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
        ReplaceFormat:=False
End Sub
But then the time info stopped being uniform, so I decided to use RegEx. The problem is that I can't find a proper way to do this in VBA. I tried this macro:
Sub delete_time()
    Dim RegEx As Object
    Set RegEx = CreateObject("VBScript.RegExp")
    On Error Resume Next
    RegEx.Global = True
    RegEx.Pattern = "\d\d\:\d\d"
    ActiveDocument.Range = _
        RegEx.Replace(ActiveDocument.Range, "")
End Sub
But it didn't work. I also tried the "[0-9]{2}:[0-9]{2}" and "[0-9][0-9]:[0-9][0-9]" patterns, but nothing changed. So the problem must be my misunderstanding of VBA (I'm new to it).
Can anyone help?
The problem is with your selection.
ActiveDocument.Range = _
RegEx.Replace(ActiveDocument.Range, "")
ActiveDocument doesn't exist in the Excel namespace. We have ActiveWorkbook or ThisWorkbook, but what you need here is the Selection.
Use a For Each loop to iterate over all the cells in the current selection, like this:
Dim myCell As Range
For Each myCell In Selection.Cells
    myCell.Value = RegEx.Replace(myCell.Value, "")
Next
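Putting that loop together with the RegExp setup from your own macro, the whole sub might look like this (a sketch; the optional leading \s in the pattern also removes the space in front of each time, so the result matches the "07 July 2015 – 14 July 2015" layout you want):
Sub delete_time()
    Dim RegEx As Object
    Dim myCell As Range

    Set RegEx = CreateObject("VBScript.RegExp")
    RegEx.Global = True
    RegEx.Pattern = "\s?\d\d:\d\d"          ' matches times such as " 12:02"

    ' replace inside every cell of the current selection
    For Each myCell In Selection.Cells
        If Not IsEmpty(myCell.Value) Then
            myCell.Value = RegEx.Replace(myCell.Value, "")
        End If
    Next myCell
End Sub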
A faster approach would be to combine your RegExp with a variant array:
'Press Alt + F11 to open the Visual Basic Editor (VBE)
'From the Menu, choose Insert-Module.
'Paste the code into the right-hand code window.
'Press Alt + F11 to close the VBE
'In Xl2003 Goto Tools … Macro … Macros and double-click KillDate
Sub KillDate()
    Dim rng1 As Range
    Dim rngArea As Range
    Dim lngRow As Long
    Dim lngCol As Long
    Dim lngCalc As Long
    Dim objReg As Object
    Dim X()

    On Error Resume Next
    Set rng1 = Application.InputBox("Select range for the replacement", "User select", Selection.Address, , , , , 8)
    If rng1 Is Nothing Then Exit Sub
    On Error GoTo 0

    'See Patrick Matthews excellent article on using Regular Expressions with VBA
    Set objReg = CreateObject("vbscript.regexp")
    objReg.Pattern = "\d\d\:\d\d"
    objReg.Global = True

    'Speed up the code by turning off screenupdating and setting calculation to manual
    'Disable any code events that may occur when writing to cells
    With Application
        lngCalc = .Calculation
        .ScreenUpdating = False
        .Calculation = xlCalculationManual
        .EnableEvents = False
    End With

    'Test each area in the user selected range
    'Non contiguous range areas are common when using SpecialCells to define specific cell types to work on
    For Each rngArea In rng1.Areas
        'The most common outcome is used for the True outcome to optimise code speed
        If rngArea.Cells.Count > 1 Then
            'If there is more than one cell then set the variant array to the dimensions of the range area
            'Using Value2 provides a useful speed improvement over Value. On my testing it was 2% on blank cells, up to 10% on non-blanks
            X = rngArea.Value2
            For lngRow = 1 To rngArea.Rows.Count
                For lngCol = 1 To rngArea.Columns.Count
                    'replace the hh:mm time strings
                    X(lngRow, lngCol) = objReg.Replace(X(lngRow, lngCol), vbNullString)
                Next lngCol
            Next lngRow
            'Dump the updated array, with the times removed, back over the initial range
            rngArea.Value2 = X
        Else
            'caters for a single cell range area. No variant array required
            rngArea.Value = objReg.Replace(rngArea.Value, vbNullString)
        End If
    Next rngArea

    'cleanup the Application settings
    With Application
        .ScreenUpdating = True
        .Calculation = lngCalc
        .EnableEvents = True
    End With

    Set objReg = Nothing
End Sub
The easiest approach to me seems to be to use the LEFT and RIGHT functions to extract the two separate timestamps, and then to convert those timestamps to dates using the TEXT function. It is probably easiest in Excel directly, but if you want to go down the VBA route, an example solution is below:
' Taking a random date range from cell A1
DateRange = Range("A1")
' Extracting the first timestamp (everything before the " – " separator)
FirstTimeStamp = Left(DateRange, Application.Find(" – ", DateRange) - 1)
' Converting to the required date format
FirstDate = Application.Text(FirstTimeStamp, "dd-mmm-yyyy")
' Extracting the last timestamp (everything after the " – " separator)
LastTimeStamp = Right(DateRange, Len(DateRange) - Application.Find(" – ", DateRange) - 2)
LastDate = Application.Text(LastTimeStamp, "dd-mmm-yyyy")
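To apply the same idea to a whole user selection and keep the "07 July 2015 – 14 July 2015" layout from the question, a sketch along these lines should work (it assumes every selected cell contains the " – " separator and English month names, so CDate can parse the two parts):
Sub StripTimesFromSelection()
    Dim c As Range
    Dim p As Long

    For Each c In Selection.Cells
        p = InStr(c.Value, " – ")           ' position of the separator between the two timestamps
        If p > 0 Then
            c.Value = Format(CDate(Left(c.Value, p - 1)), "dd mmmm yyyy") & _
                      " – " & Format(CDate(Mid(c.Value, p + 3)), "dd mmmm yyyy")
        End If
    Next c
End Sub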
Function ReplaceRegEx(str As String, pattern As String, newChar As String) As String ' searches for a regular expression and replaces each match with the given string
    Dim regEx As Object, found As Object, counter As Integer, F As Object

    Set regEx = CreateObject("VBscript.RegExp")
    regEx.Global = True
    regEx.ignorecase = False
    regEx.pattern = pattern

    Set found = regEx.Execute(str)
    counter = found.Count

    If counter <> 0 Then
        For Each F In found
            str = Replace(str, F, newChar)
        Next F
    End If

    ReplaceRegEx = str
End Function
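For the time-stripping task in this question, the function above could be used like this (a small demo; the \s in the pattern also removes the space in front of each time):
Sub DemoReplaceRegEx()
    ' strip the hh:mm times from a sample string
    Debug.Print ReplaceRegEx("07 July 2015 12:02 – 14 July 2015 17:02", "\s\d{2}:\d{2}", "")
    ' prints: 07 July 2015 – 14 July 2015
End Sub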
In a VBA module in Excel 2007, is it possible to call a web service? If so, any code snippets? How would I add the web reference?
Yes You Can!
I worked on a project that did that (see comment). Unfortunately no code samples from that one, but googling revealed these:
How you can integrate data from several Web services using Excel and VBA
STEP BY STEP: Consuming Web Services through VBA (Excel or Word)
VBA: Consume Soap Web Services
Here's an overview from MS:
Consuming Web Services in Excel 2007
For an updated answer see this SO question:
calling web service using VBA code in excel 2010
Both threads should be merged though.
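In the meantime, for a quick start without any add-ins, the core of the code-based answers below boils down to a late-bound XMLHTTP request (a sketch; the URL is only a placeholder endpoint):
Sub CallWebServiceSketch()
    Dim sResponse As String

    ' late-bound request; replace the placeholder URL with your service endpoint
    With CreateObject("MSXML2.XMLHTTP")
        .Open "GET", "http://example.com/service?param=value", False
        .Send
        sResponse = .responseText
    End With

    Debug.Print sResponse   ' parse the returned XML/JSON as needed
End Sub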
In Microsoft Office Excel 2007, try installing the "Web Service Reference Tool" plugin. Use the WSDL to add the web services, and then use the following code in a module to fetch the necessary data from the web service.
Sub Demo()
    Dim XDoc As MSXML2.DOMDocument
    Dim xEmpDetails As MSXML2.IXMLDOMNode
    Dim xParent As MSXML2.IXMLDOMNode
    Dim xChild As MSXML2.IXMLDOMNode
    Dim query As String
    Dim Col, Row As Integer
    Dim objWS As New clsws_GlobalWeather

    Set XDoc = New MSXML2.DOMDocument
    XDoc.async = False
    XDoc.validateOnParse = False

    query = objWS.wsm_GetCitiesByCountry("india")
    If Not XDoc.LoadXML(query) Then 'strXML is the string with XML'
        Err.Raise XDoc.parseError.ErrorCode, , XDoc.parseError.reason
    End If
    XDoc.LoadXML (query)

    Set xEmpDetails = XDoc.DocumentElement
    Set xParent = xEmpDetails.FirstChild

    Worksheets("Sheet3").Cells(1, 1).Value = "Country"
    Worksheets("Sheet3").Cells(1, 1).Interior.Color = RGB(65, 105, 225)
    Worksheets("Sheet3").Cells(1, 2).Value = "City"
    Worksheets("Sheet3").Cells(1, 2).Interior.Color = RGB(65, 105, 225)

    Row = 2
    Col = 1
    For Each xParent In xEmpDetails.ChildNodes
        For Each xChild In xParent.ChildNodes
            Worksheets("Sheet3").Cells(Row, Col).Value = xChild.Text
            Col = Col + 1
        Next xChild
        Row = Row + 1
        Col = 1
    Next xParent
End Sub
Excel 2013: read data from a web service and bash the JSON until you can get what you want out of it (given that the JSON will always be in the same format).
This code should just work without the need for any plugins.
You will need your own free API key from the currency converter website though.
I used it to load the USD to GBP value into a cell on my sheet.
Option Explicit

Sub Test_LateBinding()
    Dim objRequest As Object
    Dim strUrl As String
    Dim blnAsync As Boolean
    Dim strResponse As String

    Set objRequest = CreateObject("MSXML2.XMLHTTP")
    strUrl = "https://free.currconv.com/api/v7/convert?q=USD_GBP&compact=ultra&apiKey=[MY_API_KEY]"
    blnAsync = True

    With objRequest
        .Open "GET", strUrl, blnAsync
        .SetRequestHeader "Content-Type", "application/json"
        .Send
        While objRequest.readyState <> 4
            DoEvents
        Wend
        strResponse = .responsetext
    End With

    Dim responseArray() As String
    responseArray = Split(strResponse, ":", -1)

    Dim value As String
    value = responseArray(1)

    Dim valueArray() As String
    valueArray = Split(value, "}", -1)

    Dim finalValue As String
    finalValue = valueArray(0)

    Sheet2.Cells(22, "C") = finalValue
End Sub
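If you would rather not depend on the positions of ":" and "}", a VBScript.RegExp (as used earlier in this document for the price scraping) works here as well. A sketch, assuming the compact response looks like {"USD_GBP":0.76}, which is exactly what the Split logic above relies on:
Function ExtractUsdGbp(sJson As String) As String
    ' pull the numeric value out of a compact response such as {"USD_GBP":0.76}
    With CreateObject("VBScript.RegExp")
        .Pattern = """USD_GBP""\s*:\s*([\d.]+)"
        With .Execute(sJson)
            If .Count > 0 Then
                ExtractUsdGbp = .Item(0).Submatches(0)
            Else
                ExtractUsdGbp = "N/A"
            End If
        End With
    End With
End Function
You would then write ExtractUsdGbp(strResponse) to the cell instead of finalValue.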
I'm trying to create a data-scraping file for a class, and the data I have to scrape requires that I use while loops to get the right data into separate arrays -- i.e. for states, SAT averages, etc.
However, once I set up the while loops, the regex that cleared the majority of the HTML tags from the data broke, and I am getting an error that reads:
AttributeError: 'NoneType' object has no attribute 'groups'
My Code is:
import re, util
from BeautifulSoup import BeautifulStoneSoup
# create a comma-delineated file
delim = ", "
#base url for sat data
base = "http://www.usatoday.com/news/education/2007-08-28-sat-table_N.htm"
#get webpage object for site
soup = util.mysoupopen(base)
#get column headings
colCols = soup.findAll("td", {"class":"vaTextBold"})
#get data
dataCols = soup.findAll("td", {"class":"vaText"})
#append data to cols
for i in range(len(dataCols)):
    colCols.append(dataCols[i])
#open a csv file to write the data to
fob=open("sat.csv", 'a')
#initiate the 5 arrays
states = []
participate = []
math = []
read = []
write = []
#split into 5 lists for each row
for i in range(len(colCols)):
    if i%5 == 0:
        states.append(colCols[i])
i=1
while i<=250:
    participate.append(colCols[i])
    i = i+5
i=2
while i<=250:
    math.append(colCols[i])
    i = i+5
i=3
while i<=250:
    read.append(colCols[i])
    i = i+5
i=4
while i<=250:
    write.append(colCols[i])
    i = i+5
#write data to the file
for i in range(len(states)):
    states = str(states[i])
    participate = str(participate[i])
    math = str(math[i])
    read = str(read[i])
    write = str(write[i])
    #regex to remove html from data scraped
    #remove <td> tags
    line = re.search(">(.*)<", states).groups()[0] + delim + re.search(">(.*)<", participate).groups()[0] + delim + re.search(">(.*)<", math).groups()[0] + delim + re.search(">(.*)<", read).groups()[0] + delim + re.search(">(.*)<", write).groups()[0]
    #append data point to the file
    fob.write(line)
Any ideas as to why this error suddenly appeared? The regex was working fine until I tried to split the data into different lists. I have already tried printing the various strings inside the final for loop to see if any of them were None for the first i value (0), but they were all the strings they were supposed to be.
Any help would be greatly appreciated!
It looks like the regex search is failing on (one of) the strings, so it returns None instead of a MatchObject.
Try the following instead of the very long #remove <td> tags line:
import sys  # needed for sys.exit() below

out_list = []
for item in (states, participate, math, read, write):
    try:
        out_list.append(re.search(">(.*)<", item).groups()[0])
    except AttributeError:
        print "Regex match failed on", item
        sys.exit()
line = delim.join(out_list)
That way, you can find out where your regex is failing.
Also, I suggest you use .group(1) instead of .groups()[0]. The former is more explicit.