Custom labels on range aggregation in Kibana

I would like to customize the labels for a range aggregation in Kibana 5.6.6, with something like:
Very short
Short
Medium
Long
Too long
Way too long
With this kind of range:
Thanks in advance.
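For what it's worth, the underlying Elasticsearch range aggregation does support a per-range "key" that acts as a custom label (whether Kibana 5.6's visualization UI exposes this is another matter). Here's a minimal sketch of such an aggregation body in Python; the field name and bucket boundaries are hypothetical, since the original ranges were not included in the post:

    # Hypothetical field ("duration") and boundaries; only the "key" labels
    # are taken from the question. Each key becomes the bucket's label.
    body = {
        "size": 0,
        "aggs": {
            "duration_labels": {
                "range": {
                    "field": "duration",
                    "keyed": True,
                    "ranges": [
                        {"key": "Very short", "to": 10},
                        {"key": "Short", "from": 10, "to": 30},
                        {"key": "Medium", "from": 30, "to": 60},
                        {"key": "Long", "from": 60, "to": 120},
                        {"key": "Too long", "from": 120, "to": 300},
                        {"key": "Way too long", "from": 300},
                    ],
                }
            }
        },
    }
    # e.g. es.search(index="my-index", body=body) with the official client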

Related

How to write a correct D3 string for a custom Metric in Apache Superset

I just created a simple table in Apache Superset which contains different columns, and I have two columns which show:
amount of bytes
rows count
But Superset formats them the same way:
the bytes column shows values like 461G
the row count column also shows values like 1.8G
This may confuse users, and I want to show the row count like 1.8B.
The manual says that I can create my own metric with the needed format using D3 syntax - https://github.com/d3/d3-format/blob/master/README.md#format - but I can't understand how to write it correctly.
Can you show me an example of a D3 string to change 1.8G to 1.8B or 1.800.000.000?
In your case you can just type ".01s" into the D3 entry in your metric setup. This will give you 1.8B for 1,800,000,000 and 1.0B for 1,000,000,000.

What is the algorithm for generating the code on a 100 USD banknote?

I am designing the primary key for storing products. I have looked around for some insight into how to design the ID, as using auto-increment is too boring. Does anyone know what the code 'KB46279860I' on the banknote below means?
100 USD picture
I think that code is not just an auto-increment but uses some algorithm, like a check digit, etc.
Could anyone give me some hints? Thanks!!
If you're not planning on showing the user your ID, then auto-increment could save you processing time, as it is handled directly by your database.
If you are planning on showing the user an ID without exposing the one in the database, you could consider using Hashids, a GUID, or generating your own unique random value with a check digit. You can use the Luhn or Damm algorithm for the check digit; a sketch of Luhn follows.
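To make the check digit idea concrete, here's a minimal Python sketch of computing a Luhn check digit (the function name is mine, not from any particular library):

    def luhn_check_digit(payload: str) -> int:
        """Compute the Luhn check digit to append to a numeric payload."""
        total = 0
        # Walk the digits right to left. Once the check digit is appended,
        # these even positions (0-based, from the right) are the ones that
        # get doubled under Luhn; digits over 9 have 9 subtracted.
        for i, ch in enumerate(reversed(payload)):
            d = int(ch)
            if i % 2 == 0:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return (10 - total % 10) % 10

    print(luhn_check_digit("7992739871"))  # 3, so the full ID is 79927398713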

Get Lowest Offers for Entire Inventory on Amazon

We are just getting started with MWS. We'd like to use the lowest offers on each product to help calculate our price. There is a GetLowestOfferListingsForSKU operation, but it only returns data for a single SKU, and its throttle limit means it would take us several days to get the data for our whole inventory.
Does anybody know a way to get that data for multiple products in a single request?
You can fetch data on up to 20 SKUs using GetLowestOfferListingsForSKU by adding a SellerSKUList.SellerSKU.n parameter for each product (where n is a number from 1 to 20). The request looks something like this:
https://mws.amazonservices.com/Products/2011-10-01
?AWSAccessKeyId=AKIAJGUVGFGHNKE2NVUA
&Action=GetLowestOfferListingsForSKU
&SellerId=A2NK2PX936TF53
&SignatureVersion=2
&Timestamp=2012-02-07T01%3A22%3A39Z
&Version=2011-10-01
&Signature=MhSREjubAxTGSldGGWROxk4qvi3sawX1inVGF%2FepJOI%3D
&SignatureMethod=HmacSHA256
&MarketplaceId=ATVPDKIKX0DER
&SellerSKUList.SellerSKU.1=SKU1
&SellerSKUList.SellerSKU.2=SKU2
&SellerSKUList.SellerSKU.3=SKU3
Here's some relevant documentation which explains this: http://docs.developer.amazonservices.com/en_US/products/Products_ProcessingBulkOperationRequests.html
You might also find the MWS scratchpad helpful for testing:
https://mws.amazonservices.com/scratchpad/index.html
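To make the batching concrete, here's a rough Python sketch of how the list parameters can be built programmatically. Note that it deliberately omits the signing step (SignatureMethod/Signature), so it is not a complete MWS request on its own; the seller ID is a placeholder:

    # Build the SellerSKUList parameters for a batched
    # GetLowestOfferListingsForSKU request (up to 20 SKUs per call).
    skus = ["SKU1", "SKU2", "SKU3"]

    params = {
        "Action": "GetLowestOfferListingsForSKU",
        "SellerId": "YOUR_SELLER_ID",      # placeholder
        "MarketplaceId": "ATVPDKIKX0DER",
        "Version": "2011-10-01",
    }
    for n, sku in enumerate(skus, start=1):
        params[f"SellerSKUList.SellerSKU.{n}"] = sku
    # A real call would now add the timestamp, compute an HMAC-SHA256
    # signature over the sorted parameters, and POST to the endpoint.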

How big can Django primary keys get?

Is there a maximum value for how high the pk values for a model can get? For example, for something like an activity feed model, the pks can get really large.
E.g., "Kyle liked this post", "Alex commented on this post", etc. As you can see, an activity feed object is created for every action. Is there some sort of maximum threshold that will be reached? I've done some research but haven't found a concise answer. If there is a limit, how can one overcome it?
I currently use PostgreSQL as my database.
Django's auto-incrementing IDs are AutoFields, a subclass of IntegerField. On PostgreSQL the primary key is generated as an integer column, which is a signed 32-bit type with a maximum value of 2,147,483,647, a little over 2 billion.
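If you expect to blow past that, one way to overcome the limit (assuming Django 1.10 or newer) is to declare the primary key as a 64-bit BigAutoField; a minimal sketch with a hypothetical activity model:

    from django.db import models

    class Activity(models.Model):
        # BigAutoField maps to PostgreSQL bigint: signed 64-bit,
        # max 9223372036854775807 (2**63 - 1)
        id = models.BigAutoField(primary_key=True)
        verb = models.CharField(max_length=100)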
PostgreSQL can also store primary keys as 64-bit integers (bigint), which gives you up to 2^63 - 1 different positive values. Even with a 32-bit signed integer, that leaves us with over 2.1 billion possibilities. Unless you are Twitter or Facebook, you should never be bothered by this kind of limit.

Smart data extraction algorithm from websites

I'm building a deal aggregator, so I need a crawler that will extract data from some sites: price, discount, image, coordinates, and of course the name of the deal.
Do you know of any tutorials, ebooks, or anything else that will help me? For the image, coordinates, and discount I have a solution and a pattern (see the regex sketch after this list):
image: the biggest image is always the main image of the deal
discount: the discount is always a number between 50 and 99 and always has a "%" symbol
coordinates: coordinates are always decimal numbers, so I get them with a regex
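Here's a minimal Python sketch of the discount and coordinate patterns described above; the sample text is made up for illustration:

    import re

    text = "Save 75% on a spa day! Location: 45.815399, 15.966568"  # made-up sample

    # discount: a number between 50 and 99 followed by a "%" symbol
    discount = re.search(r"\b([5-9]\d)\s*%", text)

    # coordinates: a pair of decimal numbers separated by a comma
    coords = re.search(r"(-?\d{1,3}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)", text)

    print(discount.group(1))  # 75
    print(coords.groups())    # ('45.815399', '15.966568')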
How do I get the following items?
Name of deal?
Price?
Do you know of any data extraction algorithms that can be helpful?
I'd suggest you use an XPath-based scraper, for example Web-Harvest.
Or, if you want to analyze raw text, I'd suggest using a state-machine parser to recognize the templated parts of the text.
Look at this topic: Are there APIs for text analysis/mining in Java?
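Web-Harvest is a Java tool, but the XPath approach itself is language-agnostic. Here's a minimal sketch of the same idea with Python's lxml; the markup and class names are hypothetical stand-ins for a real deal page:

    from lxml import html

    # hypothetical markup standing in for a fetched deal page
    raw_html = """
    <html><body>
      <h1 class="deal-title">Spa day for two</h1>
      <span class="deal-price">49.99</span>
    </body></html>
    """

    page = html.fromstring(raw_html)
    name = page.xpath("//h1[@class='deal-title']/text()")    # ['Spa day for two']
    price = page.xpath("//span[@class='deal-price']/text()") # ['49.99']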