Pushing all product and product change data from Spree to a remote API

I have a unique requirement for which I couldn't find anything in the documentation (or I may have missed it).
I am building an extension to provide a custom search implementation for Spree Commerce, and I am done with the extension.
The problem is getting the data from Spree. I need all products, plus any modified products, to be sent to the remote API.
I could write a tool to get the data using the APIs/webhooks, but is there a way to push these from the Spree extension itself? Pulling the entire product inventory every time doesn't look like a good solution either. Can someone please guide me here?
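In the meantime, the best I can think of is an incremental pull. Here is a minimal sketch in Python, assuming a Spree 2.x-style REST API with token auth and ransack-style filters; the store URL, token, remote search endpoint, and the exact filter parameter name are placeholders that may differ between Spree versions:

    import requests
    from datetime import datetime, timezone

    SPREE_URL = "https://example-store.com/api/products"  # placeholder store URL
    API_TOKEN = "your-spree-api-token"                    # placeholder token

    def fetch_products_updated_since(since: datetime):
        """Pull only products modified after `since`, page by page,
        instead of re-pulling the whole catalogue on every run."""
        page = 1
        while True:
            resp = requests.get(
                SPREE_URL,
                headers={"X-Spree-Token": API_TOKEN},
                params={
                    # ransack-style filter; the name may vary by Spree version
                    "q[updated_at_gt]": since.isoformat(),
                    "page": page,
                },
            )
            resp.raise_for_status()
            data = resp.json()
            yield from data.get("products", [])
            if page >= data.get("pages", 1):
                break
            page += 1

    # push each changed product to the remote search API (placeholder URL)
    for product in fetch_products_updated_since(datetime(2020, 1, 1, tzinfo=timezone.utc)):
        requests.post("https://search.example.com/index", json=product)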

Related

Microsoft Cloud for Sustainability on MS Dynamics - is there any demo data available/accessible, and extended multilingual support?

I am trying to wrap my head around Microsoft Cloud for Sustainability. Apparently it's a solution based on Microsoft Dynamics. I need more back-end access to that solution, because as it stands I'm either lacking permissions (or extra paid access to Microsoft resources) or missing a chunk of documentation, because I'm unable to:
1. Change the default language across the board - I can switch MS Dynamics to any language I want, but that works for the shell only. Anything that's CfS-specific is in English. Do I remove the demo data and import my own scopes and data? Since the only things available are the database, a cube for BI analytics, and JSON files describing the CfS structure in general (in CDM), do I really have to create it from scratch? This brings me to the second question:
2. Access the entry-level data that's already in the demo version - I need to see what's in the database CfS is using, or be able to modify it. Is there any way to get to it via Business Central, if at all possible?
3. Since I will be preparing several presentations for potential customers, I need a way to quickly create a dataset based on the initial and very basic information provided by each customer. How can I do that with a trial user?
I work for a company that's a Microsoft Certified Partner, so logically the resources I need should be available to me, but the links in the documentation are either dead (and some are, as they redirect to general info) or require some special access level (or are broken, with an error message that's really not helpful at all).
Is there somewhere else I can go? The documentation page offers little of what I need...
P.S. I think I should tag this question with CfS specific tags, but not enough rep...

Pentaho / Salesforce: How to integrate SF-Enterprise-Web-Services-API V48.0 into PDI 9.0 that only supports v47.0

Actually I am working with PDI 8.2; however, I am able to upgrade to 9.0.
The main issue is that a customer wants to pull data from Salesforce, which works well so far. But he is using the Enterprise Web Services API in version 48.0, while the latest Pentaho supports v47.0 only.
I strongly assume that reading via v48.0 won't work with PDI, so I would have to build a workaround. Could anyone point me to a feasible solution? To be honest, I don't even know whether the Enterprise or the Partner API is relevant for Pentaho. I have my own SF account, so I can experiment with the APIs.
Is the "Web Services lookup" the right step for the workaround?
Any answer would be appreciated! Thanks in advance!
Oh man, what a crazy question, all over the place.
I strongly assume that reading via v48.0 won't work
You'd have to try it, but it should work. Salesforce has 3 releases a year, and that's when they upgrade API versions. We're in Spring '20 now, so it's v48. That doesn't mean anything below is deprecated; you should have no problems calling with any API version >= 20. From what I remember, their master service agreement states that a released API version will stay up for at least 3 years. Well, v20 is 9 years old and still going strong...
Check for example https://baa.my.salesforce.com/services/data/ (if your client has "My Domain" enabled, you can use that instead of some unknown company's domain); you should see a list similar to this: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_versions.htm (no login required; that would be a chicken-and-egg situation, since you need to choose the API version when making the login call).
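Listing the supported versions is a single unauthenticated GET. A quick sketch in Python (the hostname is a placeholder for your client's instance or My Domain):

    import requests

    # unauthenticated endpoint listing every API version the org supports
    resp = requests.get("https://yourinstance.my.salesforce.com/services/data/")
    for v in resp.json():
        # each entry carries "label", "url" and "version", e.g. "48.0"
        print(v["version"], v["label"])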
So... what does your integration do? I assume it reads or writes to SF tables (objects), pretty basic stuff. In that sense the 47 vs 48 difference won't matter much. You should still see Accounts, Contacts, custom objects... You just won't see tables introduced specifically in v48. Unless you must see something mentioned in the Spring '20 release notes, I wouldn't worry too much.
If your client wrote a specific class (service) to present you with data and it's written in v.48 it might not be visible when you login as v.47. But then they can just downgrade the version and all should be well. Such custom services are rarely usable by generic ETL tools anyway so it'd be a concern only if you do custom coding.
whether the Enterprise or the Partner API is relevant for Pentaho
Sounds like your ETL tool uses the SOAP API. Salesforce offers 2 versions of the WSDL file with service definitions.
"Partner" is generic, all SF orgs in the world produce same WSDL file. It doesn't contain any concrete info about tables, columns, custom services written on top of vanilla salesforce. But it does contain info how to call login() or run a "describe" that gives you all tables your user can see, what are their names, what are columns, data types... So you learn stuff at runtime. "Partner" is great when you're building a generic reusable app that can connect to any SF or you want to be dynamic (some backup tool that learns columns every day and can handle changes without problems. Or there's a "connection wizard" where you specify which tables, which columns, what mapping... new field comes in - just rerun the wizard).
"Enterprise" will be specific to this particular customer. It contains everything "Partner" has but will also have description of current state of database tables etc. So you don't have to call "describe", you already have everything on the plate. You can use this to "consume" the WSDL file, generate your Java/PHP/C# classes out of it and interact with them in your program like any other normal object instead of crafting XML messages.
The downside is that if a new field or a new table is added, you won't know unless your program calls "describe". You'd need to generate a fresh WSDL, consume it again, and recompile your program...
Picking the right one really depends on what you need to do. The ETL tools I've met are generally fine with "Partner".
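If you want to see the runtime-describe idea without touching SOAP at all, here is a rough sketch using the simple_salesforce Python package. It goes through the REST API rather than the Partner WSDL, but it demonstrates the same "learn the schema at runtime" principle; the credentials are placeholders:

    from simple_salesforce import Salesforce

    # placeholder credentials; the security token comes from SF user settings
    sf = Salesforce(username="user@example.com",
                    password="secret",
                    security_token="token")

    # global describe: every object (table) this user can see
    for sobject in sf.describe()["sobjects"]:
        print(sobject["name"], "-", sobject["label"])

    # per-object describe: columns and their data types
    for field in sf.Account.describe()["fields"]:
        print(field["name"], field["type"])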
Is the "Web Services lookup" the right step
No idea; I've used Informatica, Azure Data Factory, Jitterbit, Talend... but I have no idea about this Pentaho thing. Just try it. If you pull data straight from SF tables without invoking any custom code (you can think of SF custom services like pulling data from stored procedures), the API version shouldn't matter that much. If you go below 41.0, I believe you won't see the Individual object, for example, but I doubt you need to be on the cutting edge that much.

Can I create an algorithm using Amazon MWS API?

I am working with my team to prep a project for a potential client. We've researched the Amazon MWS API, and we're trying to develop an algorithm using data scraped from this API.
Just want to make sure we understand the research correctly:
Is it possible to scrape data from Amazon.com like the plugins RevSeller or HowMany do? And can we then add that data to a database for use in an algorithm that determines whether or not an Amazon reseller should invest in reselling a product?
Thanks!
I am doing a similar project. I don't know the specifics of RevSeller or HowMany, but another very popular plugin is Amzpecty. If you use a tool like Fiddler, you can see the HTTP traffic and figure out what it does. They basically scrape the ASINs and offer listing IDs on the current page you are looking at and, one by one, call the Amazon Product Advertising API, which is not the same thing as MWS. Out of the data returned, they produce a nice overlay that tells you all kinds of important stuff.
Instead of a browser plugin, I'm just writing an app that makes HTTP calls to the PA API based on a list of ASINs, and then I can run the results through my own algorithms. Hope that gives you a starting point.
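A minimal Python sketch of that loop, with the actual PA API call stubbed out as a hypothetical fetch_offer_summary(), since the request signing depends on which SDK you pick; the ASINs and the decision rule are placeholders too:

    def fetch_offer_summary(asin: str) -> dict:
        """Hypothetical stand-in for a signed Product Advertising API call
        (e.g. GetItems via one of the official PA API SDKs)."""
        # canned placeholder data so the sketch runs end to end
        return {"sales_rank": 12_345, "lowest_price": 19.99}

    def worth_reselling(offer: dict) -> bool:
        # toy decision rule; replace with your own algorithm
        return offer["sales_rank"] < 50_000 and offer["lowest_price"] > 15.0

    asins = ["B000000000", "B000000001"]  # placeholder ASIN list
    for asin in asins:
        offer = fetch_offer_summary(asin)
        print(asin, "buy" if worth_reselling(offer) else "skip")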

Override context language based on visitor's GEOIP in Sitecore?

I am using Sitecore 7.5 with the MongoDB analytics database, and I need to override the context language based on the visitor's GeoIP.
But whenever my code runs in the httpRequestBegin pipeline, Sitecore.Analytics.Tracker.Current is null.
Can anyone please help? I really need to find a solution for this.
The tracker doesn't get built until the last processor of httpRequestBegin (ExecuteRequest).
Take a look at my blog post showing the sequence of events http://sitecoreskills.blogspot.co.uk/2015/02/a-sitecore-8-request-from-beginning-to.html
As you can see, the CreateTracker pipeline is where the action happens. So your work either needs to occur after ExecuteRequest in httpRequestBegin, or, if possible, in CreateTracker.
However, you should know that the Geo IP lookup doesn't necessarily happen immediately. The information might not show up until after the request has completed.
Another approach is to not use the MaxMind lookup that happens as part of DMS. Instead, you could download the MaxMind database and do the lookup yourself during the httpRequestBegin pipeline. Since you only need to identify the country, you can use the free version. That way, you don't need to involve the Tracker class at all.
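With the free GeoLite2 country database, the lookup itself is only a few lines. Here it is sketched in Python with MaxMind's geoip2 package; in a Sitecore processor you would do the same thing with their .NET reader, and the database path and IP are placeholders:

    import geoip2.database

    # path to the free GeoLite2 database downloaded from MaxMind
    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    def country_for_ip(ip: str) -> str:
        """Return the ISO country code for a visitor's IP address."""
        return reader.country(ip).country.iso_code

    print(country_for_ip("8.8.8.8"))  # example public IP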

Can Datomic simplify querying data contained in dynamically accessed HTML documents?

I need to write an API which would provide access to data being served as HTML documents from a web server, and I need my users to be able to perform queries over the data.
Say on a web site there is a page which lists items and their owners. Then there is an additional set of profile pages for owners, which for each owner provide information about their reputation. An example query I may need to answer is "Give me the IDs and owners of all items submitted in 2013 whose owners have a reputation of at least 10".
Given a query to answer, I need to be able to screen-scrape only the parts of the web site needed to answer the query at hand, and ideally cache the obtained information for future use with new queries.
I have no problem writing the screen-scraping part, but I am struggling with designing the storage/query/cache part. Is there something about Clojure/Datomic that makes it an especially suitable technology choice for this kind of data processing? I have been pointed in this direction before.
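For concreteness, here is that example query modelled over plain in-memory data in Python, purely as a hypothetical sketch of the semantics I would want a Datalog query to express declaratively (the records are made up):

    # hypothetical scraped records: items and owner reputations
    items = [
        {"id": 1, "owner": "alice", "submitted": 2013},
        {"id": 2, "owner": "bob",   "submitted": 2012},
        {"id": 3, "owner": "carol", "submitted": 2013},
    ]
    reputation = {"alice": 25, "bob": 3, "carol": 7}

    # "IDs and owners of all items submitted in 2013
    #  whose owners have a reputation of at least 10"
    result = [(i["id"], i["owner"])
              for i in items
              if i["submitted"] == 2013 and reputation[i["owner"]] >= 10]

    print(result)  # [(1, 'alice')]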
It seems a nice challenge, but I'm not sure about a few things: a) would you like to expose a Datalog query box to your users, and so make them learn Datalog-like syntax? b) what exact kind of results do you wish to cache: raw DB responses, HTML-formatted text, JSON?
Anyway, I suggest you install and play a little with the Datomic console to get a grasp of it if you haven't before, as it seems to me the closest thing to what you want to achieve at the moment: https://www.youtube.com/watch?v=jyuBnl0XQ6s http://blog.datomic.com/2013/10/datomic-console.html
For the API, I suggest you use http://clojure-liberator.github.io/liberator/ as it provides sane defaults for implementing REST services and lets you focus on your app's behaviour.