Is it possible to extract and compare the contents of a website using iMacros? I would like to extract data, compare it against some pre-defined information, and then modify only certain fields rather than resetting every field on the page to a default. Is that possible?
Yes it is.
You can use a macro to extract information and save it in a JavaScript variable. For example, say you scraped text from two websites and want an alert saying "foo bar found" if both contain the keyword "foo bar":
// String.search() returns the index of the first match, or -1 if the pattern is not found.
var test1 = website_text1.search(/foo bar/gi);
var test2 = website_text2.search(/foo bar/gi);

if ((test1 >= 0) && (test2 >= 0)) {
    alert("foo bar found");
} else {
    alert("foo bar is not on both websites");
}
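As for where website_text1 and website_text2 come from: with the iMacros scripting interface you can run a macro with iimPlay() and read back whatever it extracted with iimGetLastExtract(). Here is a rough, untested sketch assuming that interface; the URL and the TAG command are only placeholders for whatever you actually extract.

// Rough sketch (untested): run an extraction macro, then read back the extract.
// The URL and the TAG command are placeholders for your own extraction logic.
var macro1 = "CODE:";
macro1 += "URL GOTO=http://example.com/page1" + "\n";
macro1 += "TAG POS=1 TYPE=DIV ATTR=ID:content EXTRACT=TXT" + "\n";
iimPlay(macro1);
var website_text1 = iimGetLastExtract();
// Repeat with the second site's URL to fill website_text2.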
Writing the full code for what you described would require more information.
Why not extract the data into a CSV file, then use the CSV as a database to compare whatever variables you need? You can go further by creating a macro to automate the CSV comparison. Search the web for iMacros CSV handling for examples.
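If you go the CSV route, the comparison itself can live in a small script. Below is a minimal sketch in JavaScript (Node.js), assuming both extracts were saved as one-value-per-line CSV files; the file names are just placeholders.

// Minimal sketch (untested): compare two one-column CSV extracts.
var fs = require('fs');

function readValues(file) {
    return fs.readFileSync(file, 'utf8')
        .split('\n')
        .map(function (line) { return line.trim(); })
        .filter(function (line) { return line.length > 0; });
}

var listA = readValues('extract_site1.csv');   // placeholder file names
var listB = readValues('extract_site2.csv');

// Values that appear in both extracts.
var common = listA.filter(function (value) { return listB.indexOf(value) >= 0; });
console.log('Values present in both files:', common);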
I am not sure what exactly you need.
iMacros cannot perform such tasks by itself; you need scripting, either basic JavaScript or whatever else you know how to use (Java, PHP, etc.).
Anyway, I believe Selenium http://docs.seleniumhq.org/ may be closer to what you need. You should give it a try.
Good morning all,
I'm looking for a way in Google Data Fusion to make the name of a source file stored on GCS dynamic. The files to be processed are named according to their value date, for example: 2020-12-10_data.csv
My need is to set the filename dynamically so that the pipeline uses the correct file every day (something like: ${ new Date().getFullYear()... }_data.csv).
I managed to use runtime arguments by specifying the date as a string (2020-12-10), but not with a function.
More generally, is there any documentation on how to enter dynamic parameters with ready-made or custom "functions"? (I couldn't find any.)
Thanks in advance for your help.
There is a ready-made workaround: you can try the "BigQuery Execute" plugin.
Steps:
Put the query below in the SQL field:
select cast(current_date as string) ||'_data.csv' as filename
--for output '2020-12-15_data.csv'
Set "Row As Arguments" to 'true'.
Now use the above argument via ${filename} wherever you need it.
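For example, if the GCS source's path property is set to something like gs://your-bucket/${filename} (the bucket name here is just a placeholder), each run will substitute the date-based file name produced by the query above.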
I've put together a simple search form, with a search box and a couple of filters as dropdowns. Everything works as you'd expect, except that I want the behavior to be that when the user leaves everything completely blank (no search query, no filters) they simply get everything returned (paginated of course).
I'm currently achieving this by detecting this special case and querying my local database, but there are some advantages to doing it 100% with CloudSearch. Is there a way to build a request that simply returns a paginated list of every document? In other words, is there a CloudSearch equivalent to "SELECT id FROM x LIMIT n?"
Thanks in advance!
Joe
See the Search API.
?q=matchall&q.parser=structured will match all the documents.
The easiest way would be to use a not operator, so for example:
?q=dog|-dog
would return all documents that either contain 'dog' or do not contain 'dog', i.e. every document. You would need to intercept the special case, as you already are, and just substitute the query/not-query combination, and you should get everything back.
For anyone looking for an answer using boto3:
import boto3

CLOUD_SEARCH_CLIENT = boto3.client(
    'cloudsearchdomain',
    aws_access_key_id='',
    aws_secret_access_key='',
    region_name='',
    endpoint_url="https://search-your-endpoint-url.amazonaws.com"
)

response = CLOUD_SEARCH_CLIENT.search(
    query="matchall",
    queryParser='structured'
)

print(response)
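If you also need the paginated behaviour from the question, the same search() call accepts size and start parameters (and cursor for deep paging), so you can page through the matchall results without any additional query logic.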
I am writing an API which exposes parts of our database to a client. Part of this API requires certain HTTP response codes to be sent for particular conditions. This is generally easy with simple checks, but I cannot see how to catch (for example) 'InvalidDateTimeException' errors where an invalid date is submitted to SQL.
I have tried dumping the ERROR and cfcatch variables, but while they generate huge stack traces I cannot see any field that is easily parsable to check the specific type of error (short of doing a text search on the error message or stack trace).
I could also do a pre-check with regex such as
(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})
but this could still let invalid dates through (for example, 2013-02-31 matches the pattern). ColdFusion also provides some date validation, but I have read that it is particularly bad. This also wouldn't help other scenarios that don't deal with dates.
So in brief: what is the best way to react to a particular error such as 'InvalidDateTimeException' in ColdFusion?
[Edit]
Some clarifications from the comments: we are using MySQL 5 and cfqueryparam. We use the 'euro' date format here in Australia, but it would be much preferred if the API user presented ISO format dates (yyyy-mm-dd) to avoid confusion.
Well... my advice is to catch the error before it gets to SQL. You didn't specify your DBMS (SQL Server, MySQL, etc.), so I'll focus on ColdFusion solutions. I hope one of these suggestions points you in the right direction.
Options:
The article that you linked to concerning ColdFusion date validation mentions the isValid function as the recommended solution. Consider using that with the USDATE validation type, as suggested.
If you are using CFCs or at least cffunctions for your API methods, then you have cfargument type="date" at your disposal to assist with ensuring the dates are valid (although my feeling is that would have the same lenient behavior as isDate)
Inside your cfquery tag, you should be using cfqueryparam for all of the parameters you pass, especially those that come directly from the user (whether a form post or an API call). You should use cfqueryparam with cfsqltype="cf_sql_date".
Using any of the methods above (or all of them), wrap your ColdFusion code in a try/catch construct; you can scope the catch to type="database" so that only errors coming back from the driver are handled there, which gives you a much easier error to deal with.
Depending on your DBMS, you might have access to try/catch constructs there too.
UPDATED:
After reading your comment about the international conversion issues, I have two approaches that I'd choose between:
Keep in mind that I haven't tested any code or anything ....
First, maybe the international functions can help you.
http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=functions_in-k_37.html
Use SetLocale to set the locale to English (Australian), then use LSParseDateTime to read in the yyyy-mm-dd format, and then use DateFormat to write it to MySQL in mm/dd/yyyy or whatever date format it expects. I don't have much experience with those LS functions, though.
Second option: use the regex you provided to make sure that the input has the right structure, then use createDate to create a date in US format using the parsed month, day, and year elements, and validate the US date using isValid.
Here's a blindly coded attempt at the second option. Remember, I haven't tested this code. I'm heavily using the list function listGetAt to split the inputted datetime into separate date and time strings and then using listGetAt to parse out the individual date parts.
<cfscript>
    isosampledate = "2013-06-05 14:07:33";
    passesValidation = false;
    expectedDatePattern = "(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})";
    try {
        if (refind(expectedDatePattern, isosampledate)) {
            datePortion = listGetAt(isosampledate, 1, " ");
            timePortion = listGetAt(isosampledate, 2, " ");
            yearPart    = listGetAt(datePortion, 1, "-");
            monthPart   = listGetAt(datePortion, 2, "-");
            dayPart     = listGetAt(datePortion, 3, "-");
            hoursPart   = listGetAt(timePortion, 1, ":");
            minutesPart = listGetAt(timePortion, 2, ":");
            secondsPart = listGetAt(timePortion, 3, ":");
            thisUSDate = createDateTime(yearPart, monthPart, dayPart, hoursPart, minutesPart, secondsPart);
            if (isValid("usdate", thisUSDate)) {
                passesValidation = true;
                sqlDate = CreateODBCDateTime(thisUSDate);
            }
        }
    } catch (any e) {
        passesValidation = false;
    }
</cfscript>
I'm pretty sure that if the inputted value was not a valid date then at least one of those date functions would throw an exception which would get picked up by the catch block.
Hope this helps. I'm off to bed.
I'm trying to create an advanced segment (include) using a regex (or any other filter mechanism; 'contains' with just the substring isn't working either) that matches on the value of a custom variable.
It ought to be straightforward, but it's driving me insane. I currently have this regex:
.*CLAS_LIBRARIES.*
which rightly matches a custom variable value of:
HOME/CLASMAIN/CLAS_LIBRARIES/
but when I apply the segment and then browse the custom variable values in the report, it contains values like:
HOME/
/museumcollections/
HOME/MAPS/
Tried wrapping it like this:
.*(CLAS_LIBRARIES).*
(.*)(CLAS_LIBRARIES)(.*)
to no avail.
What the hell is going on, and am I an idiot?
What's the scope of your custom variable? Can multiple sessions have different values?
Advanced segments return all data for any session that matches your query (e.g. if you create a segment for a specific page, GA will return data for all user activity in sessions that included that specific page as part of their navigation), which is likely why you are seeing other custom variable values in the report.
I'm trying to parse the wishlist column in OpenCart's wishlist_to_store table to retrieve the product IDs. At first glance I thought it was being stored as JSON, but that's obviously not the case. Is there a library or method I can use to parse it out?
Example wishlist:
a:2:{i:0;s:5:"16419";i:1;s:5:"16415";}
The function you are looking for is unserialize(). The data is stored using PHP's native serialize() format, which plays a role similar to JSON but is PHP-specific. You can use it as follows:
$a = unserialize('a:2:{i:0;s:5:"16419";i:1;s:5:"16415";}');
var_dump($a);
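For what it's worth, the a:2:{...} syntax is just PHP's serialize() notation: a:2 means an array with two entries, i:0 and i:1 are the integer keys, and s:5:"16419" is a five-character string. So after unserialize() you get a plain PHP array of product IDs (as strings) that you can loop over.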