Is there a REST or Node.js library API that provides global network metadata for Bitcoin and/or Ethereum?
The metadata I'm looking for is:
Average wait time for a transaction confirmation on the network
Average fee cost per transaction on the network
I know I could crawl/parse one of the many sites that provide this data, but that's not ideal. Hence I'm looking for a dedicated API to obtain this information.
Eventually I found out about services like CryptoCompare that provide this data.
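Once you pick a provider, the integration ends up being a single HTTP call. Here's a rough Node.js/TypeScript sketch of the shape of it - the endpoint URL and response field names below are placeholders, not an actual CryptoCompare route, so check the provider's docs for the real path and fields:

```typescript
// Hypothetical sketch: the endpoint path and response fields are placeholders,
// not a documented route of any specific provider.
interface NetworkStats {
  averageConfirmationMinutes: number; // assumed field name
  averageFeeUsd: number;              // assumed field name
}

async function fetchNetworkStats(coin: "BTC" | "ETH"): Promise<NetworkStats> {
  // Replace with the real provider URL (and API key) once chosen.
  const res = await fetch(`https://api.example-provider.com/v1/network-stats?coin=${coin}`);
  if (!res.ok) {
    throw new Error(`Stats request failed: ${res.status}`);
  }
  return (await res.json()) as NetworkStats;
}

fetchNetworkStats("BTC").then((stats) =>
  console.log(`BTC: ~${stats.averageConfirmationMinutes} min/confirmation, ~$${stats.averageFeeUsd}/tx`)
);
```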
I have a Books API project, and the GCP console shows "No data is available for the selected time frame" for the last 30 days. This message appears on both the "Metrics" and "Quotas" pages. See screenshots below.
Clearly there is data, which I can see via my app analytics reports.
Any suggestions on how to fix it?
UPDATE 1:
Following are some points that were missing from the original post:
The Google Books API is used by an iOS app, which is available on the App Store and widely used across many iOS devices (iPhones and iPads) in many countries.
There are thousands of iOS devices running my app, so the Google Books API calls are invoked from thousands of endpoints with different locations and different IPs. All endpoints use the same API_KEY.
The Google Books API calls are performed successfully from the iOS devices and there is no API issue (I can clearly see that using an analytics tool).
The only issue I have is with the GCP console not showing the number of API calls (and other metrics) associated with my API_KEY. As you can see in the previous screenshots, I get "No data is available for the selected time frame" everywhere.
This is a regression: until recently I could successfully view the actual API usage data, and I haven't changed anything in this period.
When going to GCP > IAM & Admin > Quotas, you can clearly see that the app indeed consumes API calls (see screenshot below).
Any suggestion as to why the GCP console says that no data is available, while data is indeed available?
As per the documentation [1], Google Books respects copyright, contract, and other legal restrictions associated with the end user's location. As a result, some users might not be able to access book content from certain countries. For example, certain books are "previewable" only in the United States; such preview links are omitted for users in other countries. Therefore, the API results are restricted based on your server or client application's IP address.
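For example, here is a minimal sketch (shown in Node.js/TypeScript for brevity; the API_KEY and country value are placeholders) of passing an explicit country code to the volumes endpoint, as described in the user-location documentation [1]:

```typescript
// Sketch: querying the Books API volumes endpoint with an explicit country code.
// API_KEY and the country value are placeholders.
const API_KEY = "YOUR_API_KEY";

async function searchVolumes(query: string, country: string) {
  const url =
    "https://www.googleapis.com/books/v1/volumes" +
    `?q=${encodeURIComponent(query)}&country=${country}&key=${API_KEY}`;
  const res = await fetch(url);
  if (!res.ok) {
    // A 403 here can indicate a location-based restriction.
    throw new Error(`Books API returned ${res.status}`);
  }
  return res.json();
}

searchVolumes("flowers", "US").then((data) => console.log(data.totalItems));
```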
In addition, link [2] may be helpful, as it seems similar to the issue you are facing. The documentation in [3] and [4] also provides more information about using the Books API with the Google Cloud Platform.
[1] https://developers.google.com/books/docs/v1/using#UserLocation
[2] Google books api always returns nothing
[3] https://developers.google.com/books/docs/v1/using
[4] https://developers.google.com/books/docs/v1/getting_started
Consider the following micro services for an online store project:
Users Service keeps account data about the store's users (including first name, last name, email address, etc.)
Purchase Service keeps track of details about user's purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of the purchasing user, purchased item title, and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to get back data which the Purchase Service does not hold - for example: a user's full name.
The problem gets worse when trying to do more complicated things like search purchases by purchasing user name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service which consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given ids. That means that on every page load in the Purchase Service, I'll have to make a call to the Users Service in order to get the right user names. Not ideal, but I can live with it.
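To be concrete, I imagine that call looking roughly like this (the /users?ids= endpoint and the field names are made up, just to illustrate one batched lookup per page rather than one call per purchase):

```typescript
// Illustration only: endpoint path and shapes are hypothetical.
interface Purchase { id: string; userId: string; itemTitle: string; price: number; }
interface User { id: string; firstName: string; lastName: string; }

async function loadPurchasePage(purchases: Purchase[]) {
  // One batched call per page instead of one call per purchase.
  const ids = [...new Set(purchases.map((p) => p.userId))].join(",");
  const res = await fetch(`http://users-service/users?ids=${ids}`);
  const users: User[] = await res.json();
  const byId = new Map(users.map((u) => [u.id, u]));

  return purchases.map((p) => ({
    id: p.id,
    fullName: `${byId.get(p.userId)?.firstName ?? "?"} ${byId.get(p.userId)?.lastName ?? ""}`.trim(),
    itemTitle: p.itemTitle,
    price: p.price,
  }));
}
```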
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names in the Users Service, and then returns all user details which match the criteria. At the Purchase Service end, map the relevant ids back to the right names and show them on the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is sort of a code smell? Would love to hear other solutions.
This seems to be a very common and central question when moving into microservices. I wish there was a good answer for that :-)
About the suggested pattern already mentioned here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you have data duplication, and you usually need some kind of event bus to share data across services.
There's another option, which is sort of a take on the first: making the search itself a separate service.
So in your example, you have the Users service for managing users. The Purchases service manages purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
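To make that concrete, here is a rough sketch of such a Search Service (the event names and shapes are invented purely for illustration): it listens to both streams and keeps a denormalized, searchable row per purchase.

```typescript
// Hypothetical event shapes; in practice these arrive via your event bus.
interface UserCreated { type: "UserCreated"; userId: string; fullName: string; }
interface PurchaseCreated { type: "PurchaseCreated"; purchaseId: string; userId: string; itemTitle: string; price: number; }
type DomainEvent = UserCreated | PurchaseCreated;

// Denormalized "search view" row owned by the Search Service.
interface PurchaseSearchRow { purchaseId: string; userFullName: string; itemTitle: string; price: number; }

const userNames = new Map<string, string>();              // userId -> full name
const searchView = new Map<string, PurchaseSearchRow>();  // purchaseId -> row

function handle(event: DomainEvent): void {
  if (event.type === "UserCreated") {
    userNames.set(event.userId, event.fullName);
  } else {
    searchView.set(event.purchaseId, {
      purchaseId: event.purchaseId,
      userFullName: userNames.get(event.userId) ?? "unknown",
      itemTitle: event.itemTitle,
      price: event.price,
    });
  }
}

// The search endpoint then queries only this local view.
function searchByUserName(term: string): PurchaseSearchRow[] {
  const t = term.toLowerCase();
  return [...searchView.values()].filter((r) => r.userFullName.toLowerCase().includes(t));
}
```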
It's totally fine to keep such data in different databases; this is called Polyglot Persistence. Yes, you would want to keep user data and purchase data separately and use a message queue for sync. Millions of users seems fine to me; that's a scalability issue, not a design issue ;-)
In the case of search - you probably want to search by more than just username, right? So, if you use a message queue to update data between services, you can also easily route this data to Elasticsearch, for example. And from Elasticsearch's perspective, it doesn't really matter which field you index - username or product title.
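As a small sketch of that route (this assumes the official @elastic/elasticsearch client with its v8-style API, and a made-up message shape coming off the queue):

```typescript
import { Client } from "@elastic/elasticsearch"; // assumes the official client, v8-style API

const es = new Client({ node: "http://localhost:9200" });

// Message shape is hypothetical; in practice it arrives on your queue already
// enriched with the user's name by whichever service owns that data.
interface PurchaseMessage { purchaseId: string; userFullName: string; itemTitle: string; price: number; }

// Queue consumer: every purchase message becomes a searchable document.
async function onPurchaseMessage(msg: PurchaseMessage): Promise<void> {
  await es.index({ index: "purchases", id: msg.purchaseId, document: msg });
}

// The Purchase Service's search box then hits Elasticsearch directly;
// indexing a username works the same as indexing a product title.
async function searchPurchases(term: string) {
  const result = await es.search<PurchaseMessage>({
    index: "purchases",
    query: { match: { userFullName: term } },
  });
  return result.hits.hits.map((h) => h._source);
}
```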
I usually use both approaches. Sometimes I have another service sitting on top of x other services that combines the data. I don't really like this approach because it causes dependencies and coupling between services. So in general, in my last projects we tried to stick to polyglot persistence.
Also consider that if you need x sub HTTP requests to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything that is possible through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of that module to others. In contrast, the modules in a manufacturing process have access to change data without possessing and controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what they need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files, or database tables. My architecture shows how to accomplish this in memory via reference properties.
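A tiny sketch of the reference-properties idea (the class names here are mine, purely for illustration): two modules hold a reference to the same in-memory store instead of passing copies of the data between them.

```typescript
// Illustration only: two modules share one in-memory store by reference.
class ProductStore {
  readonly products = new Map<string, { title: string; price: number }>();
}

class CatalogModule {
  constructor(private readonly store: ProductStore) {}
  add(id: string, title: string, price: number) {
    this.store.products.set(id, { title, price });
  }
}

class PricingModule {
  constructor(private readonly store: ProductStore) {}
  discount(id: string, pct: number) {
    const p = this.store.products.get(id);
    if (p) p.price *= 1 - pct; // works on the shared data in place
  }
}

const store = new ProductStore();
new CatalogModule(store).add("sku-1", "Widget", 100);
new PricingModule(store).discount("sku-1", 0.1);
```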
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", you need to be aware that this creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product-users.
I have a real time web analytics problem to address, and I'm wondering if some of the WSO2 products might be an appropriate solution.
An ecommerce web site shows pages of products to a browser user, and the web site vendor wants to collect details of what products were viewed in a list, what products were selected from the list for more info, what products were put into the basket, and what products were actually purchased - all in real time. I can use web page tagging to generate logging events for the four states (i.e. in list, view detail, in basket, purchased). The web site vendor wants to see results summarized by product and by rolling time band (e.g. last hour, last 6 hours, last 24 hours, last 72 hours) for the four product states.
As a complete WSO2 newbie, I'm hoping somebody can help with some pointers on how to address this. I've been reading about the BAM module to capture events. Is that a good place to start? Also, can anybody suggest a good in-memory data store to hold the event data aggregated by event type and rolling time period?
TIA
Yes, BAM is more of a batch processing, monitoring, and complex event processing engine; using it you can capture data, process it, and then present it. From an architectural point of view, the product states that are changed by the browser user will be captured by the web server and published to the BAM server.
A good point to start is learning about data publishing. Once you define the data to be published [in BAM this is known as a stream definition], you can write a Hive script to process it and present it. You can pump all data to BAM and then use a Hive script to process and store it in the manner you want. Later you can retrieve and present it.
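As for the rolling time bands mentioned in the question, the aggregation itself is simple enough to sketch independently of WSO2. This is generic TypeScript, not a BAM or Hive artifact, just to show the shape of an in-memory rolling-window count per product and state:

```typescript
// Generic sketch of rolling time-band counts per product and state;
// not tied to any WSO2 component.
type ProductState = "in_list" | "view_detail" | "in_basket" | "purchased";
interface PageEvent { productId: string; state: ProductState; timestamp: number; } // ms epoch

const events: PageEvent[] = []; // in practice, prune events older than the largest band

function record(e: PageEvent): void {
  events.push(e);
}

// Count events per product for one state within a rolling window (e.g. last 6 hours).
function countByProduct(state: ProductState, windowMs: number, now = Date.now()): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.state === state && now - e.timestamp <= windowMs) {
      counts.set(e.productId, (counts.get(e.productId) ?? 0) + 1);
    }
  }
  return counts;
}

// Example: purchases per product in the last hour.
const HOUR = 60 * 60 * 1000;
console.log(countByProduct("purchased", HOUR));
```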
I am trying to surface data from an external SQL database in Sitecore. Ideally the data will be represented as items in the Sitecore content tree.
The official Sitecore documentation for "Integrating External Data Sources" is for Sitecore version 5. Does anyone know where I could find, or could anyone share, a simple example of how to surface external data in Sitecore 6.5? All the information I have found on it seems to be out of date.
Any help appreciated.
You need to implement a custom data provider. Here is a good tutorial: http://www.techphoria414.com/Blog/2011/January/Black-Art-of-Sitecore-Data-Providers
Also have a look at this SDN article. It is a bit out of date, but it still has relevant points. http://sdn.sitecore.net/Developer/Integrating%20External%20Data%20Sources.aspx
Check out the YouTube data shared source data provider.