Contradiction between GeoLite2 database and GeoIP2 Precision web service

I am using the free GeoLite2 city-blocks and city-location databases and found that the following IP addresses all have the same locid of 223. However, if I enter these addresses into the GeoIP2 Precision web service, it gives me four different locations. Shouldn't one locid correspond to exactly one location? I am really confused.
12.234.227.170 (WI, US by GeoIP2)
69.174.58.60 (IL, US by GeoIP2)
71.216.182.245 (Bremerton, Washington by GeoIP2)
74.44.255.2 (Lakeville MN by GeoIP2)

GeoLite2 and GeoIP2 Precision results will differ in many cases. GeoLite2 provides access to free, less accurate data than is available in the GeoIP2 Precision Web Services. (Note: I work for MaxMind, the company that creates GeoLite2 and GeoIP2 products.)
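If you want to compare the two yourself, below is a minimal sketch that looks up the four addresses from the question in a local GeoLite2 City binary database using the geoip2/geoip2 Composer package; the .mmdb path is an assumption, so point it at wherever you keep the database:
<?php
// Sketch: look up the addresses from the question in a local
// GeoLite2-City.mmdb file with the official GeoIP2 PHP reader.
require 'vendor/autoload.php';

use GeoIp2\Database\Reader;

// Path is an assumption; adjust it to your GeoLite2 City download.
$reader = new Reader('/usr/local/share/GeoIP/GeoLite2-City.mmdb');

$ips = ['12.234.227.170', '69.174.58.60', '71.216.182.245', '74.44.255.2'];

foreach ($ips as $ip) {
    $record = $reader->city($ip);
    printf(
        "%s => %s, %s, %s\n",
        $ip,
        $record->city->name ?? 'unknown city',
        $record->mostSpecificSubdivision->isoCode ?? '??',
        $record->country->isoCode ?? '??'
    );
}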


Is there a way to store two months of history in my Ripple testnet?

I have a Ripple testnet node and a mainnet node. Their history storage configurations are identical. However, on my mainnet node the complete_ledgers result returns a first ledger whose closing time was at the beginning of October, while on the testnet the first available ledger is from mid-November. The only other significant difference between the two configuration files is the node size (small for testnet and medium for mainnet, as advised). The configuration for the history block follows.
Is there any difference in the configuration of the mainnet and testnet that I am not accounting for? Do you have any suggestions for solving this?
[node_db]
type=NuDB
path=/data/nudb
advisory_delete=0
# How many ledgers do we want to keep (history)?
# Integer value that defines the number of ledgers
# between online deletion events
online_delete=700000
[ledger_history]
# How many ledgers do we want to keep (history)?
# Integer value (ledger count)
# or (if you have lots of TB SSD storage): 'full'
350000
I have seen in the documentation that ledger_history should never be greater than online_delete, so I tried setting ledger_history to the same value as online_delete, but that did not work.
It seems the testnet's and mainnet's chains do not progress in the same way, or the configurations affect the chains differently, so I increased the ledger_history and online_delete values for the testnet and I am now storing as much history as needed.
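For illustration, the two stanzas relate like this after raising the values (the numbers below are placeholders; the key constraint is that ledger_history must not exceed online_delete):
[node_db]
type=NuDB
path=/data/nudb
advisory_delete=0
# Keep more ledgers between online deletion events
online_delete=2000000
[ledger_history]
# Must be less than or equal to online_delete above
2000000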
The number of ledgers your rippled node can sync depends largely on the peers it is connected to.
Your node can only download as much history as its connected peers can provide. So if you want to download 700k ledgers but your peers only hold 100k ledgers, your maximum downloadable history is 100k ledgers.
Probably because of the config change, and therefore the restart of rippled, you got connected to different peers which hold more history, and your node was able to download as much as it requested.
You can check your peers with the command:
rippled peers
and see what history they can provide in the complete_ledgers property.
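If you prefer to check this from a script rather than the command line, below is a minimal sketch using rippled's JSON-RPC admin interface; it assumes the default admin port 5005 on localhost and PHP with the curl extension:
<?php
// Sketch: call the admin-only "peers" method on a local rippled node
// and print the ledger range each connected peer can provide.
$request = json_encode(['method' => 'peers', 'params' => [new stdClass()]]);

$ch = curl_init('http://127.0.0.1:5005/');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $request,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($response['result']['peers'] ?? [] as $peer) {
    printf(
        "%s complete_ledgers=%s\n",
        $peer['address'] ?? 'unknown peer',
        $peer['complete_ledgers'] ?? 'none reported'
    );
}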
Best,
Daniel

How to fetch GCP Compute Engine price details by applying parameters - OS, network, disk, memory?

I have downloaded the GCP price list from GCP pricing list JSON.
I also tried the GCP interactive pricing calculator from GCP pricing calculator.
The pricing JSON linked above gives output in the following format:
"CP-COMPUTEENGINE-VMIMAGE-F1-MICRO": {
"us": 0.0076,
"us-central1": 0.0076,
"us-east1": 0.0076,
"us-east4": 0.0086,
"us-west1": 0.0076,
"europe": 0.0086,
"europe-west1": 0.0086,
"europe-west2": 0.0096,
"europe-west3": 0.0096,
"europe-west4": 0.0084,
"northamerica-northeast1": 0.0084,
"asia": 0.0090,
"asia-east": 0.0090,
"asia-northeast": 0.0092,
"asia-southeast": 0.0092,
"australia-southeast1": 0.0106,
"australia": 0.0106,
"southamerica-east1": 0.0118,
"asia-south1": 0.0091,
"cores": "shared",
"memory": "0.6",
"gceu": "Shared CPU, not guaranteed",
"maxNumberOfPd": 16,
"maxPdSize": 64,
"ssd": [0]
},
In the above format, there is no information about the operating system, network, disk, zone, etc.
As far as I know, the Compute Engine price may vary if I select a different operating system (Windows, RHEL, etc.), network, disk, and so on.
Is there any API available for fetching/calculating the Compute Engine price based on parameters (operating system, network, disk, zone, etc.) by applying different permutations, like:
http://cloud.google.com/compute?os=rhel
http://cloud.google.com/compute?os=rhel&memory=13gb&zone=east
I work for Google Cloud Support.
At the moment there is no such API. I have opened a feature request for this, and you can follow its progress in the public issue tracker: https://issuetracker.google.com/issues/73713821
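As a workaround until such an API exists, the downloaded price list can be parsed directly. Below is a minimal sketch that prints the per-region hourly price for the f1-micro entry shown above; the URL is the commonly used price-list endpoint and the top-level gcp_price_list key is assumed from that file's layout:
<?php
// Sketch: read the public GCP price-list JSON and print the hourly
// price of the f1-micro machine type for each region in the entry.
$url  = 'https://cloudpricingcalculator.appspot.com/static/data/pricelist.json';
$data = json_decode(file_get_contents($url), true);

$sku = $data['gcp_price_list']['CP-COMPUTEENGINE-VMIMAGE-F1-MICRO'] ?? [];

// Descriptive fields that are not per-region prices.
$nonPriceKeys = ['cores', 'memory', 'gceu', 'maxNumberOfPd', 'maxPdSize', 'ssd'];

foreach ($sku as $region => $price) {
    if (in_array($region, $nonPriceKeys, true) || !is_numeric($price)) {
        continue;
    }
    printf("%-25s $%.4f per hour\n", $region, $price);
}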

Intra-Organization Consensus in Hyperledger Fabric

I need to execute a particular transaction with the consent of 3 peers from different departments inside a particular organization. How would I go about this with Hyperledger Fabric?
Quoting the documentation for configuring an MSP (membership service provider):
Defining one MSP to represent each division. This would involve specifying for each division, a set of certificates for root CAs, intermediate CAs, and admin Certs, such that there is no overlapping certification path across MSPs. This would mean that, for example, a different intermediate CA per subdivision is employed. Here the disadvantage is the management of more than one MSPs instead of one, but this circumvents the issue present in the previous approach. One could also define one MSP for each division by leveraging an OU extension of the MSP configuration.
After you have configured your MSP accordingly, you would then craft an endorsement policy for the channel that stipulates that transactions need to be endorsed by the three departments:
For example:
AND('Org1.member', 'Org2.member', 'Org3.member')
where Org1, Org2 and Org3 are the identifiers for the departments.
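As a concrete example of where such a policy is supplied, with the Fabric 1.x peer CLI the endorsement policy is passed via -P when instantiating the chaincode. The MSP IDs, channel, and chaincode names below are placeholders for whatever your departments' MSPs are actually called:
peer chaincode instantiate \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n mycc \
  -v 1.0 \
  -c '{"Args":["init"]}' \
  -P "AND('Dept1MSP.member','Dept2MSP.member','Dept3MSP.member')"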

Clarification on Sitecore link database synchronization in multi-server environments

The Sitecore Guide states this:
To ensure that Sitecore automatically updates the link database in the CD environment:
* The CD and CM instances must use the same name to refer to the publishing target database across the environments (typically Web).
One of the following conditions should be met:
** The Core database should be shared or replicated between the CM and CD instances.
** The Link database data should be configured to be stored in a database which is shared between CM and CD publishing target database (typically Web).
Two things aren't clear to me:
The line with the first *: I assume this means that if I have two web DBs, one being "web" and the other being "web2", then the CM needs to use those names, CD1 needs to use "web", and CD2 needs to use "web2", yes?
The last line with **: by "shared", does this mean that CD1 and CD2 would need to use the same web database, or does it just mean that as long as CM, CD1, and CD2 are set to use their respective web DBs to store the Link DB, the Link DB will be updated on publish? In which database should the CM be configured to store its link DB? It has two webs (web1, web2).
Here are details of our environment for context:
Our CM environment is 1 web server and 1 DB server. Our CD environment is two load balanced web servers, each with their own DB. So, two publishing targets for the CM to point to.
This is a good question. Typically you may have multiple web DBs for things such as pre production preview, e.g. "webpreview" as opposed to a public "web" DB. If you have two separate web DBs, "web1" and "web2" and two separate CDs use them respectively, then it seems you must have two separate publishing targets, web1 and web2. In the typical case (where "typical" maybe just means simple), there's a single web DB shared by 1-n CDs. So in your case CD1 and CD2 would both read from the same single web DB. Based on this context:
It means that whatever connection string 'name' token you use on the CM for the "web" DB, you need to use the same token on CD1 and CD2. So it could be "web" or "webpublic" or similar, but it must be consistent across all 3 instances (CM, CD1, CD2).
Yes, CD1 and CD2 would share the same exact web DB as I indicated above. And thus you would set the link database to use that shared "web" (or "webpublic"...) DB.
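For reference, a sketch of the kind of configuration change this implies: pointing the LinkDatabase entry at the shared web connection string instead of core. The exact type attribute varies by Sitecore version, so treat this as illustrative:
<LinkDatabase type="Sitecore.Data.SqlServer.SqlServerLinkDatabase, Sitecore.Kernel">
  <!-- Store link data in the shared publishing target database instead of Core -->
  <param connectionStringName="web" />
</LinkDatabase>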

Using GeoIP only for US locations not 100% accurate

I have a module built to use the GeoIP database. I'm using it to get the location of the user visiting the website so we can show the current weather for their city (USA only), pulled from YQL weather. I found https://stackoverflow.com/questions/12365078/best-way-to-get-location-weather-details-accurately, which says this is not going to be very accurate without letting the user enter their location.
So far it has successfully displayed the correct location about 90% of the time; sometimes we have to renew the DHCP lease to get the correct location, but my client still cannot get the right city in their office. I don't really know much about their network connection, but I wonder if it is being forwarded, etc.
Here is how I am obtaining the user's IP in the module:
function getIpAddresses() {
    $ipAddresses = array();
    if (isset($_SERVER["HTTP_X_FORWARDED_FOR"])) {
        // Request came through a proxy: record the proxy address and the
        // forwarded client address separately.
        $ipAddresses['proxy'] = isset($_SERVER["HTTP_CLIENT_IP"]) ? $_SERVER["HTTP_CLIENT_IP"] : $_SERVER["REMOTE_ADDR"];
        $ipAddresses['user']  = $_SERVER["HTTP_X_FORWARDED_FOR"];
    } else {
        // Direct connection: REMOTE_ADDR (or HTTP_CLIENT_IP if present) is the user.
        $ipAddresses['user'] = isset($_SERVER["HTTP_CLIENT_IP"]) ? $_SERVER["HTTP_CLIENT_IP"] : $_SERVER["REMOTE_ADDR"];
    }
    return $ipAddresses;
}
Is there a better way to get the IP or improve the accuracy without asking the user to enter their location?
Thanks!
You have a couple of options:
Look for a better GeoIP data provider. There are several free data providers; in my experience they're pretty good at the country level, but as you get more granular you need to look at commercial solutions.
Record which IP they're at. Even if they're renewing their IP, they're probably staying within a particular subnet, so hard-code that subnet to the correct location (see the sketch after this list).
Incorporate a JavaScript location API and send that data to the server. If the client computer is a laptop, it may provide better location information than simple GeoIP.
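A minimal sketch of that subnet override, mapping a known office range to a fixed location before falling back to GeoIP; the 203.0.113.0/24 range and the returned city are placeholders for your client's actual network and office:
<?php
// Sketch: override the GeoIP result for a known office subnet.
// 203.0.113.0/24 is a documentation range used here as a placeholder.
function locationForIp($ip) {
    $officeSubnet = ip2long('203.0.113.0');
    $mask         = ~((1 << (32 - 24)) - 1); // /24 netmask

    if ((ip2long($ip) & $mask) === ($officeSubnet & $mask)) {
        // Hard-coded location for the client's office (placeholder values).
        return array('city' => 'Client Office City', 'region' => 'XX');
    }

    // Otherwise fall back to the normal GeoIP lookup (not shown here).
    return null;
}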