How to get the raw data from a particular document's schedule instance? - business-objects-sdk

I am now able to get the data from a document using the RESTful Web Services SDK
as per
How to obtain the report data from BO with the use of RESTful Web Services?
What I need now is to
get the data not from the current version of the document, but from a schedule that was executed some time ago,
i.e. with older data than the current data.
Any hints?

I learnt that it is possible to get the IDs of the schedules for a particular document.
It is then possible to get an export of a schedule's content by requesting the export as usual, but providing the schedule ID instead of the document ID.
This way I obtained the schedule/instance export.
I also noticed a limitation: paging does NOT seem to work when the export is requested with a schedule's ID instead of a document's ID.
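A rough sketch of that workaround in Python, assuming a BI 4.x server exposing the raylight REST endpoints; the URLs, response shapes, and the text/csv Accept header are assumptions to verify against your own installation:

import requests

BASE = "http://boserver:6405/biprws"

# Log on and grab the X-SAP-LogonToken for subsequent calls.
logon = requests.post(
    f"{BASE}/logon/long",
    json={"userName": "Administrator", "password": "...", "auth": "secEnterprise"},
    headers={"Accept": "application/json", "Content-Type": "application/json"},
)
token = logon.headers["X-SAP-LogonToken"]
headers = {"X-SAP-LogonToken": token, "Accept": "application/json"}

DOC_ID = "7831"  # placeholder Webi document ID

# Step 1: list the schedule instances of the document.
schedules = requests.get(
    f"{BASE}/raylight/v1/documents/{DOC_ID}/schedules", headers=headers
).json()
instance_id = schedules["schedules"]["schedule"][0]["id"]  # assumed response shape

# Step 2: request the export as usual, but pass the schedule/instance ID
# where the document ID would normally go. (As noted above, paging did
# not appear to work in this mode.)
export = requests.get(
    f"{BASE}/raylight/v1/documents/{instance_id}/reports/1",
    headers={"X-SAP-LogonToken": token, "Accept": "text/csv"},
)
print(export.text)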

Related

Cloud Data Fusion - Input HTTP Post Body from BQ rows

I am a new Cloud Data Fusion user and have run into a problem I can't find a solution for.
I have a table in BQ with ~150 rows of latitude and longitude points. For each row, I want to pass the lat and lng into an HTTP POST request to get a result from the TravelTime API. Ultimately I want a table with all my original rows plus a column holding the response for each one.
Where I am stuck is that so far I have only been able to hard-code the body of the POST request into the HTTP Source plugin and successfully write the response to a file in GCS. However, I expect the rows will change over time, so I would like to dynamically generate and pass in the POST request body from my BQ data.
Is this possible with Data Fusion? Is this an advisable approach? Or is there a better way?
As @Albert Shau and @user3750486 agreed in the comments:
There is no out-of-the-box way to pass data from BQ rows dynamically in a POST HTTP request.
A possible workaround is to have an HTTP transform plugin that sits in the middle of the pipeline and can be configured to make calls based on the input data. Then you would have a BQ source, followed by that plugin, followed by the GCS sink. I think your best bet would be to write a custom transform.
This can be done by following the link that @Albert Shau provided, or with custom code using GCP's Cloud Functions as the OP did.
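For reference, a minimal sketch of the Cloud Functions route in Python, assuming a BQ table with lat/lng columns; the TravelTime endpoint, auth headers, and payload shape are placeholders to verify against the real API contract:

import requests
from google.cloud import bigquery

def enrich_points(request):
    """HTTP-triggered Cloud Function: one TravelTime call per BQ row."""
    client = bigquery.Client()
    rows = client.query(
        "SELECT lat, lng FROM `my_project.my_dataset.points`"  # placeholder table
    ).result()

    out = []
    for row in rows:
        # Build the POST body per row instead of hard-coding it in the plugin.
        body = {"lat": row.lat, "lng": row.lng}  # hypothetical payload shape
        resp = requests.post(
            "https://api.traveltimeapp.com/v4/time-map",  # verify endpoint/auth
            json=body,
            headers={"X-Application-Id": "app-id", "X-Api-Key": "api-key"},
        )
        out.append({"lat": row.lat, "lng": row.lng, "response": resp.text})

    # Land the enriched rows in BigQuery (or write to GCS, as in the pipeline).
    client.insert_rows_json("my_project.my_dataset.points_enriched", out)
    return "done"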
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.

Feeding real-time data into AWS Personalize

I want to feed real-time data into AWS Personalize to build a recommendation engine. I've read online resources, and in those guides the training user-interaction data, user data, and item data are provided up front while creating the recommendation engine.
However, I have an app in which I will gather data, and I want to feed that real-time data into AWS Personalize. I want to know if building the recommendation engine is possible without providing any data at first, and then streaming real-time data from my app later with the PutEvents, PutItems, and PutUsers APIs from the AWS SDK? I'm quite new to this, so I'm quite confused by this initial step.
I want to know if building the recommendation engine is possible without providing any data at first, and then streaming real-time data from my app later with the PutEvents, PutItems, and PutUsers APIs from the AWS SDK?
Yes, it is possible. You just need to adjust the sequence of creating resources.
Interaction data is required for all Personalize recipes before a recommender can be created that provides recommendations. However, if you don't have interaction data (or enough data; see quotas and limits) to start with, you can create a dataset group and an interactions dataset, feed interactions to the dataset using the PutEvents API (see the recording events page), and then create a domain recommender or custom solution once enough data has been ingested.
The minimum amount of interaction data (and potentially item metadata) required before you can train a model/recommender depends on the recipe that you select. Generally speaking, you will need 1,000 interactions across 25 distinct users where each of those users has 2+ interactions. The domain recommenders also require specific event types. Check the docs linked above. The quality and relevance of recommendations will improve as you collect more data and retrain.
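As a sketch of that sequence, here is a hedged example of streaming one interaction with the PutEvents API via boto3; the tracking ID comes from the event tracker you create in the dataset group, and all IDs below are placeholders:

from datetime import datetime

import boto3

personalize_events = boto3.client("personalize-events")

# Stream one interaction into the (initially empty) interactions dataset.
# trackingId comes from CreateEventTracker; the rest are placeholders.
personalize_events.put_events(
    trackingId="your-event-tracker-id",
    userId="user-123",
    sessionId="session-456",
    eventList=[
        {
            "eventType": "watch",  # domain recommenders require specific types
            "itemId": "item-789",
            "sentAt": datetime.now(),
        }
    ],
)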

Do I need to update item csv in AWS personalize?

I'm trying to use AWS Personalize and following their documentation.
I've uploaded the dataset files (interactions, users, items) to S3, then created a solution and a campaign.
I've also implemented the PutEvents API using Java.
The GetRecommendations API call works well.
At this point I'm wondering whether I need to update the dataset files, especially the item CSV.
In general you're done at this point for very basic recommendations.
Since you are using the PutEvents call, all of the real-time events are added to the Interactions dataset this way. Interactions created by manual import and by PutEvents calls are kept separate from each other. You can actually see them in the Personalize Datasets web console.
Still, you might want to update the dataset files using the dataset import job feature, but it's going to replace your existing dataset. In general I would recommend using it only when:
You've just created a fresh/bigger/better dump of your database with interactions.
You've found that your previous Interactions dataset was invalid.
The schema of the dataset changed (you're pretty much forced to do it then).
The Users or Items dataset changed/improved; it's actually a good idea to refresh these often so Personalize can produce better recommendations. Keep in mind that this also requires retraining the solution, so the new items/users will be included during recommendation generation.
So for interactions you usually don't want to update the dataset. For the other datasets it might even be a good idea to create an automatic import mechanism (see the sketch below).
Keep in mind that the Items and Users datasets are only used by Personalize recipes that support metadata; otherwise they are simply ignored.
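For that automatic import mechanism, a minimal boto3 sketch; the ARNs, role, and S3 path are placeholders, and remember the import replaces the existing dataset:

import boto3

personalize = boto3.client("personalize")

# Kick off a bulk import that replaces the current Items dataset.
response = personalize.create_dataset_import_job(
    jobName="items-refresh-2021-06-01",
    datasetArn="arn:aws:personalize:us-east-1:123456789012:dataset/my-group/ITEMS",
    dataSource={"dataLocation": "s3://my-bucket/exports/items.csv"},
    roleArn="arn:aws:iam::123456789012:role/PersonalizeS3AccessRole",
)
print(response["datasetImportJobArn"])  # poll until ACTIVE, then retrain the solution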

Pulling Instagram data into Google Big Query

I am new to development, so I am sorry if this is a really basic question. I am trying to access some of the data available from Instagram's API as documented here: https://developers.facebook.com/docs/instagram-api/insights.
I would like some kind of data repository to pull the data into, so I am looking at Google BigQuery to see if I can pull in the data. (The ultimate destination is Power BI, so I can publish online.)
Looking at the Facebook request code below - is it possible to put this into Google BigQuery to return the data?
I am replacing the 'instagram-business-user-id' with an ID I have generated already - but it feels like perhaps it needs more markup to let BigQuery know what language it is in.
Any help would be much appreciated.
GET graph.facebook.com/{instagram-business-user-id}/insights
?metric=impressions,reach,profile_views
&period=day
Looking at the Facebook request code - is it possible to put this into Google BigQuery to return the data?
Yes, it's absolutely possible using the BigQuery API or the BigQuery CLI.
You can use this pseudo-workflow as an example (using the BigQuery API), sketched in code below:
Create a table in BigQuery with the desired schema; for this you have 2 options:
Save the result in 1 column with the full JSON. This means that in the SELECT you need, you use JSON_EXTRACT to fetch specific data.
Process the JSON in your code and save it in specific columns to simplify the SELECT statement.
Call Instagram's API.
Call the BigQuery API or BigQuery CLI to insert the data (this link shows one way to do it).
Call the BigQuery API or BigQuery CLI to fetch the data (this link shows one way to do it).
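To make the workflow concrete, here is a rough Python sketch of option 1 (full JSON in one column); the user ID, token, and table are placeholders, and the Graph API call mirrors the request quoted in the question:

import requests
from google.cloud import bigquery

IG_USER_ID = "17841400000000000"  # placeholder instagram-business-user-id
ACCESS_TOKEN = "..."              # placeholder token

# Call Instagram's API (the GET request from the question).
resp = requests.get(
    f"https://graph.facebook.com/{IG_USER_ID}/insights",
    params={
        "metric": "impressions,reach,profile_views",
        "period": "day",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()

# Insert the full JSON into a single-column table (option 1), then use
# JSON_EXTRACT in your SELECTs to pull out specific fields.
client = bigquery.Client()
errors = client.insert_rows_json(
    "my_project.instagram.insights_raw",  # placeholder table with one STRING column
    [{"payload": resp.text}],
)
print(errors or "inserted")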

How to display all products without using ListMatchingProducts in the Amazon MWS Products API?

I have integrated the MWS API for my store. The issue is that I was not able to get a list of all the products which I have submitted via feeds, together with the products already available in my Amazon store account.
I have tried all of the MWS APIs; none of them returns all products.
The ListMatchingProducts API requires a query parameter, but a plain product listing should not require one.
So which API should be used to list all products, and how?
In order to retrieve all of your products without input parameters, you can use the Reports API to request an inventory report, an active listings report, or any of the report types here: http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html#ReportTypeCategories__ListingsReports
You can call the Reports API just like the Products API, but there are extra steps involved. You first request the report using the RequestReport operation, which returns a ReportRequestId; once the report has been generated, GetReportRequestList gives you back a GeneratedReportId. Take that ID and call the GetReport operation, and you'll get back the report once it's available. If you need more than a raw report and have to work with the data in some other way, you can just write a routine in whatever language you're using to parse out the data in memory.
Have you seen the client libraries? They do most of the work already; just plug in your keys. https://developer.amazonservices.com/gp/mws/api.html/188-4747010-1589520?ie=UTF8&group=bde&section=reports&version=latest
Basically there is no specific API to list the products available in your store. But you can get your products using the Reports API (ReportType enumeration):
http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html#ReportTypeCategories__ListingsReports
There are multiple steps involved in working with reports. Here are the steps to get the product listing:
1 - RequestReport
2 - GetReportRequestList (includes the GeneratedReportId when done)
3 - GetReport
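A rough sketch of those three steps in Python, assuming the third-party python-amazon-mws package; the report type, credentials, and polling interval are placeholders:

import time

from mws import Reports  # pip install mws (python-amazon-mws)

reports = Reports(
    access_key="AKIA...",   # placeholder credentials
    secret_key="...",
    account_id="SELLER_ID",
    region="UK",
)

# 1 - RequestReport
request = reports.request_report(report_type="_GET_MERCHANT_LISTINGS_ALL_DATA_")
request_id = request.parsed.ReportRequestInfo.ReportRequestId

# 2 - GetReportRequestList: poll until processing is done
while True:
    status = reports.get_report_request_list(requestids=[request_id])
    info = status.parsed.ReportRequestInfo
    if info.ReportProcessingStatus == "_DONE_":
        report_id = info.GeneratedReportId
        break
    time.sleep(30)

# 3 - GetReport
report = reports.get_report(report_id=report_id)
print(report.original)  # flat-file listing of all products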
Have you tried Scratchpad? It's for the UK marketplace:
https://mws.amazonservices.co.uk/scratchpad/index.html