Maxmind GeoLite2-City is_anonymous_proxy and is_satellite_provider always 0 - geoip

I've downloaded the Maxmind GeoLite2-City.csv file from https://dev.maxmind.com/geoip/geoip2/geolite2/
I see that the is_anonymous_proxy and is_satellite_provider columns are always 0.
I am trying to figure out why these columns are always 0, and whether I would get a 0 or a different value if I looked up an IP address that belongs to an anonymous proxy.
Is there an API (Java or another language) for querying GeoLite2-City so I can check whether it gives reliable results?

The is_anonymous_proxy and is_satellite_provider columns are not always 0. Try the following command to find examples:
grep ",1," GeoLite2-City-Blocks-IPv4.csv

Parse Days in Status field from Jira Cloud for Google Sheets

I am using the Jira Cloud for Sheets add-on to get the Days in Status field from Jira. According to this post, it seems to have the following syntax:
<STATUS_ID>_*:*_<NUMBER_OF_TIMES_ISSUE_WAS_IN_THIS_STATUS>_*:*_<SECONDS>_*|
Here is an example:
10060_*:*_1_*:*_1121033406_*|*_3_*:*_1_*:*_7409_*|*_10000_*:*_1_*:*_270003163_*|*_10088_*:*_1_*:*_2595005_*|*_10087_*:*_1_*:*_1126144_*|*_10001_*:*_1_*:*_0
I am trying to extract, for example, how many times an issue was in the In QA status and how long it stayed in a given status. I am trying to parse this pattern to obtain that information and return it using an ARRAYFORMULA. The Days in Status field is only populated once the issue is completed (is in the Done status); otherwise no information is provided. If the issue is in the Done status but never transitioned through a given status, that status will not appear in the Days in Status string.
I am trying to use REGEXEXTRACT function to match a pattern for example:
=REGEXEXTRACT(C2, "(10060)_\*:\*_\d+_\*:\*_\d+_\*|")
and it returns an empty value, where I expect 10060. It caught my attention that when I use the REGEXMATCH function it returns TRUE:
=REGEXMATCH(C2, "(10060)_\*:\*_\d+_\*:\*_\d+_\*|")
so the syntax is not clear to me. Google points to the following documentation as its regular-expression reference. The issue seems to be with the vertical bar |: per that documentation it is a special character that should be represented as \v, but that doesn't work either, and REGEXMATCH then returns FALSE. I have been trying to find an online regex tester that implements the Google Sheets syntax (RE2); I found ReGo, but I don't know whether it is a valid one.
I was also trying to use the SPLIT function like this:
=query(SPLIT(C2, "_*:*_"), "SELECT Col1")
but it seems to be a more complicated approach for getting all the values I need from the Days in Status field string, even though it separates all the values in the pattern nicely. In this case I am getting the first Status ID. The number of columns returned by SPLIT will vary, because it depends on how many statuses the issue went through on its way to the Done status.
It seems to be a complex task given all the issues I have encountered, but maybe some of you have dealt with this before and can suggest some ideas. It requires properly parsing the information and then extracting it into specific columns with an ARRAYFORMULA function, where applicable, for a given status from the Status column.
Here is a Google Sheets sample with the input information. I would like to populate the Times In QA (column C) and Duration in QA (column D; the value is provided in seconds and I would need days, but that is a minor task) columns for the In QA status; the same would then apply to the rest of the statuses. I added a Settings tab for mapping the Status ID to my Status names, so I would need a lookup function to match the Status column in the Jira Issues tab. I would like a solution without helper columns; maybe it will require a script.
https://docs.google.com/spreadsheets/d/1ys6oiel1aJkQR9nfxWJsmEyd7XiNkVB-omcNL0ohckY/edit?usp=sharing
try:
=INDEX(IFERROR(1/(1/QUERY(1*IFNA(REGEXEXTRACT(C2:C, "10087.{5}(\d+).{5}(\d+)")),
"select Col1,Col2/86400 label Col2/86400''"))))
After REGEXEXTRACT, the rows that cannot be matched output an #N/A error, so we wrap the result in IFNA to remove those errors. Then we multiply by 1 to convert everything into numbers (regex always outputs plain text). Then we use QUERY to convert the second column from seconds into days in one go. At this point every row has some value, so to get rid of the zeros on the rows we don't need (like rows 2, 3, 5, 8, 9, etc.) while keeping the output numeric, we use the IFERROR(1/(1/ wrapping. And finally, we use INDEX or ARRAYFORMULA to process our array.
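If the same formula is needed for other statuses, the hardcoded 10087 could be replaced with a reference to the looked-up Status ID from the Settings tab, for example (the Settings!B2 reference is only an assumption about where that ID lives):
=INDEX(IFERROR(1/(1/QUERY(1*IFNA(REGEXEXTRACT(C2:C, Settings!B2&".{5}(\d+).{5}(\d+)")),
"select Col1,Col2/86400 label Col2/86400''"))))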

Error in calling crypto prices in Google Sheets

This command:
=VALUE(REGEXEXTRACT(IMPORTDATA("https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD"), "{.+:(.+)}"))
used to work just fine for ETH and BTC, but it is giving me this error now:
Error
Function REGEXEXTRACT parameter 2 value "{.+:(.+)}" does not match text of Function REGEXEXTRACT parameter 1 value "{"Response":"Error"".
What's the reason?
Consider using =IMPORTDATA("https://cryptoprices.cc/BTC/")
There's no parsing required, no limitations, no authentication.
Try it again tomorrow, perhaps; you have probably reached the daily limit of free API calls.

How to call a Sequence function from SQL Server in Informatica

I have a port 'Number_1' in an Expression transformation in Informatica, and I connected it to a target SQL table. I need to generate a number for this port every time I run the mapping, starting from 1 up to 999; once it reaches 999, the value of Number_1 should reset to 1. I'm aware there is a Sequence Generator transformation, but I need to call a sequence from SQL Server. How can I achieve this?
Create a stored procedure in SQL Server, then use a Stored Procedure transformation to call it from Informatica.
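A rough sketch of what the SQL Server side could look like (the sequence and procedure names are made up for illustration):

CREATE SEQUENCE dbo.seq_number_1 AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 999
    CYCLE;
GO

-- Wrapper the Stored Procedure transformation can call; returns the next
-- value (1..999, then wrapping back to 1) through an output parameter.
CREATE PROCEDURE dbo.usp_next_number_1
    @next_value INT OUTPUT
AS
BEGIN
    SELECT @next_value = NEXT VALUE FOR dbo.seq_number_1;
END
GO

The MAXVALUE/CYCLE options handle the reset to 1 after 999 on the database side.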
You may call it using a Stored Procedure transformation. You may also use a stored procedure as a SQL override on the Source Qualifier, however...
I hope you know what you're doing, as this is in general a very bad idea. Each call requires communication between the Integration Service and the database, and this will cause huge delays. It's therefore much better to use Informatica's Sequence Generator or, perhaps even better if all you need is an integer port with round robin, a simple expression variable.
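For comparison, a rough sketch of the expression-variable approach mentioned above (port names are illustrative): add an Integer variable port and an output port to the Expression transformation along these lines:

v_counter (variable port, default 0) = IIF(v_counter >= 999, 1, v_counter + 1)
Number_1  (output port)              = v_counter

Variable ports keep their value from row to row, so this counts 1 to 999 and wraps, with no database round trip per row. Note that it restarts at 1 at the start of each session run unless you persist the value yourself.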
While what maciejg says makes a lot of sense performance-wise, I've known a fair few people who were more comfortable using the native database sequencer than the built-in one (even some Informatica specialists).
The thing with Informatica sequencers is how much flexibility they give; when they are set up wrong, this can lead to unexpected numbers being picked.
One example I have is a sequencer used to create unique keys in a table: if you persist the value between sessions, it works fine until you select the incorrect option while reimporting the mapping.
If you instead look up your previous ending value from a config file / table and add the value produced by the sequencer to it, then one day, when someone by mistake sets the sequencer to persist values between runs, you'll suddenly skip that many numbers in the sequence each time the session is restarted. Native database sequencers are very basic, which makes them very predictable and foolproof.

Key mismatch on column with value 0

First of all, thank you for taking the time to read my question.
Secondly, I would like some help fixing a relational error between two entities in LoopBack.
The error I'm facing is the following:
Key mismatch: OpsItinerary.itinerary_status_ind: 0, GenIndicatorValue.ind_value: 0
For some reason, LoopBack cannot create the relation because the source and target values are 0; it seems the identity value is expected to be greater than that. I have some other relations to this target entity with values like 1 or 2, and the API performs exactly as I expect.
Is there any workaround to get the expected results, given that the relation works fine when tested with other values (greater than 0)?
Thanks in advance.
Sounds like a bug. Please file an issue at https://github.com/strongloop/loopback/issues

How do you fetch or insert rows in batches using ODBC? ( in C or C++ )

I am trying to understand which ODBC functions to call, and how to call them, in order to fetch rows in batches or insert rows in batches (inserts that use bind variables, not just an array of insert statements).
I can fetch one row at a time by calling these functions in order:
SQLBindParameter
SQLExecute
SQLFetch
Also, if doing inserts/updates, I can do one row at a time by calling these functions:
SQLBindParameter
SQLExecute
What I don't know is what I need to change in these calls in order to:
1) Fetch rows in batches, e.g. 150 rows per batch
2) Insert several rows per SQLExecute call, e.g. 150 rows per call
Short, contained examples (not necessarily compilable, since ODBC programs tend to be long, so ignore setup/initialization and error checking) demonstrating how this is done would be helpful, or a pointer to comprehensible open source code that does this sort of thing.
The following article tells you how to send rows of parameters in one go:
http://www.easysoft.com/products/data_access/odbc_odbc_bridge/performance_white_paper.html#3_1_2
Basically, you need to search for SQLSetStmtAttr and SQL_ATTR_PARAMSET_SIZE.
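A rough sketch of the insert case in C, assuming an already-connected handle hdbc and a table t(id INT, name VARCHAR(32)); setup and error checking are omitted, as the question allows:

#include <sql.h>
#include <sqlext.h>

#define BATCH 150

void insert_batch(SQLHDBC hdbc)
{
    SQLHSTMT     hstmt;
    SQLINTEGER   ids[BATCH];
    SQLCHAR      names[BATCH][33];
    SQLLEN       id_ind[BATCH], name_ind[BATCH];
    SQLUSMALLINT status[BATCH];
    SQLULEN      processed;

    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    /* Tell the driver the bound buffers hold BATCH rows of parameters. */
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_BIND_TYPE, SQL_PARAM_BIND_BY_COLUMN, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE, (SQLPOINTER)(SQLULEN)BATCH, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_STATUS_PTR, status, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &processed, 0);

    /* Bind whole arrays instead of single variables. */
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, ids, 0, id_ind);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     32, 0, names, sizeof(names[0]), name_ind);

    /* ... fill ids[], names[] and set each indicator (e.g. SQL_NTS) for BATCH rows ... */

    SQLPrepare(hstmt, (SQLCHAR *)"INSERT INTO t (id, name) VALUES (?, ?)", SQL_NTS);
    SQLExecute(hstmt);   /* one round trip sends all BATCH parameter rows */
}

The SQL_ATTR_PARAMS_PROCESSED_PTR count and the per-row status array tell you how many of the 150 rows the driver actually consumed and which of them failed.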
To fetch multiple rows in one go see http://www.easysoft.com/developer/languages/c/odbc-tutorial-fetching-results.html
Search for SQL_ATTR_ROW_ARRAY_SIZE.
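And the matching sketch for block fetching, under the same assumptions (connected hdbc, table t, no error checking):

#include <sql.h>
#include <sqlext.h>

#define ROWS 150

void fetch_batch(SQLHDBC hdbc)
{
    SQLHSTMT     hstmt;
    SQLINTEGER   ids[ROWS];
    SQLCHAR      names[ROWS][33];
    SQLLEN       id_ind[ROWS], name_ind[ROWS];
    SQLUSMALLINT row_status[ROWS];
    SQLULEN      rows_fetched;

    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    /* Ask the driver to return up to ROWS rows per SQLFetch call. */
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_BIND_TYPE, SQL_BIND_BY_COLUMN, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)(SQLULEN)ROWS, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_STATUS_PTR, row_status, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROWS_FETCHED_PTR, &rows_fetched, 0);

    /* One array per column; each SQLFetch fills up to ROWS entries. */
    SQLBindCol(hstmt, 1, SQL_C_SLONG, ids, 0, id_ind);
    SQLBindCol(hstmt, 2, SQL_C_CHAR, names, sizeof(names[0]), name_ind);

    SQLExecDirect(hstmt, (SQLCHAR *)"SELECT id, name FROM t", SQL_NTS);

    while (SQL_SUCCEEDED(SQLFetch(hstmt))) {
        for (SQLULEN i = 0; i < rows_fetched; i++) {
            /* ids[i] / names[i] hold the i-th row of this block */
        }
    }
}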
There are two self-contained examples of how to perform array binding at this site:
http://msdn.microsoft.com/en-us/library/ms709287(v=vs.85).aspx
Also, the DB2 client ships with several code examples, some of which show how to do array binding for both inserts and selects.
Here is an excellent article from IBM Developerworks that might answer some of your questions about the ODBC architecture:
ODBC programming using Apache Derby (available via the Wayback Machine)
One of the main "tricks" for optimizing network traffic with an ODBC connection is how you define your cursor:
http://technet.microsoft.com/en-us/library/ms131453.aspx
'Hope that helps .. PSM