The Amazon MWS Reports API has an Acknowledged parameter available for returning all outstanding orders that have not been previously acknowledged by the merchant.
So far I haven't been able to find an equivalent parameter that I can use with the MWS Orders API, so I have to work my way through the past 30 days' worth of orders using the ListOrdersByNextToken call(s), which come with a fairly severe (and apparently incorrectly documented) call threshold and refresh limit. I'm really hoping I've missed something in the documentation and/or the schemas about acknowledged vs. unacknowledged orders with the Orders API. If so, I'd certainly appreciate it if anyone could point me in the right direction.
Right now I'm leaning towards trying the LastUpdatedAfter parameter instead of the CreatedAfter parameter, but I'm not absolutely certain the former will always retrieve all new and unshipped orders from the Amazon Marketplace.
It's possible to acknowledge individual orders by sending Amazon a _POST_ORDER_ACKNOWLEDGEMENT_DATA_ XML feed or a _POST_FLAT_FILE_ORDER_ACKNOWLEDGEMENT_DATA_ tab-delimited feed; however, there's no way (as far as I can tell) to pull only unacknowledged orders using the Orders API.
You can schedule order reports using the Reports API and then pull unacknowledged order reports as you would with other kinds of reports. I think this is probably the closest feature to what you are looking for.
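If you do end up polling the Orders API instead, one workaround is to track acknowledged order IDs yourself. Below is a minimal sketch of that idea; the order dictionaries stand in for parsed ListOrders/ListOrdersByNextToken results, and the local set stands in for whatever store you'd actually persist acknowledged IDs to.

```python
# Hypothetical sketch: keep a local record of acknowledged AmazonOrderIds so
# that repeated Orders API polls only surface orders not yet processed.

def filter_unacknowledged(orders, acknowledged_ids):
    """Return only the orders whose AmazonOrderId has not been acknowledged."""
    return [o for o in orders if o["AmazonOrderId"] not in acknowledged_ids]

def acknowledge(orders, acknowledged_ids):
    """Record order IDs after submitting the acknowledgement feed."""
    acknowledged_ids.update(o["AmazonOrderId"] for o in orders)

acked = set()
batch1 = [{"AmazonOrderId": "111-1"}, {"AmazonOrderId": "111-2"}]
new = filter_unacknowledged(batch1, acked)   # both orders are new
acknowledge(new, acked)

batch2 = [{"AmazonOrderId": "111-2"}, {"AmazonOrderId": "111-3"}]
new2 = filter_unacknowledged(batch2, acked)  # only 111-3 is new
```

This keeps the dedup logic on your side, which also helps if LastUpdatedAfter returns orders you've already seen.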
I am currently working on a distributed crawling service. When making this, I have a few issues that need to be addressed.
First, let's explain how the crawler works and the problems that need to be solved.
The crawler needs to save all posts on each and every bulletin board on a particular site.
To do this, it automatically discovers crawling targets and publishes several messages to pub/sub. The message is:
{
  "boardName": "test",
  "targetDate": "2020-01-05"
}
When the corresponding message is published, a Cloud Run function is triggered, and the data corresponding to the given JSON is crawled.
However, if the same message is published twice, the same data is crawled and duplicates occur. How can I ignore a message when a duplicate comes in?
Also, are there pub/sub or other good features I can refer to for a stable implementation of a distributed crawler?
Because Pub/Sub is designed, by default, to deliver messages at least once, it's better to have idempotent processing. (Exactly-once delivery is coming.)
Anyway, your issue is very similar: the same message delivered twice, or two different messages with the same content, will cause the same problem. There is no magic feature in Pub/Sub for that. You need an external tool, like a database, to store the information you have already received.
Firestore/Datastore is a good and serverless place for that. If you need low latency, Memorystore and its in-memory database is the fastest.
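A minimal sketch of that idempotent handler: key the dedup store by a hash of the message *content* (not the Pub/Sub message ID, since republished duplicates get fresh IDs). The in-memory set here stands in for Firestore or Memorystore.

```python
import hashlib
import json

processed = set()  # stand-in for Firestore/Memorystore keyed by content hash

def dedup_key(message: dict) -> str:
    """Stable hash of the message content, independent of key order."""
    canonical = json.dumps(message, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def handle(message: dict) -> bool:
    """Return True if the message was crawled, False if skipped as a duplicate."""
    key = dedup_key(message)
    if key in processed:
        return False          # already crawled: just ack and drop
    processed.add(key)
    # ... crawl boardName / targetDate here ...
    return True

msg = {"boardName": "test", "targetDate": "2020-01-05"}
first = handle(msg)    # True: first delivery is processed
second = handle(msg)   # False: duplicate is ignored
```

With Firestore you could make the check-and-set atomic by using the hash as a document ID and a create-if-absent write, so two concurrent deliveries can't both pass the check.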
So the question has more to do with which services I should be using to get efficient performance.
Context and goal:
What I'm trying to do exactly is use a Tag Manager custom HTML tag so that, after each Universal Analytics tag (event or pageview) fires, it sends my own EC2 server an HTTP request with a payload similar to what is sent to Google Analytics.
What I've thought about, planned, and researched so far:
At this moment I have two big options.
Use AWS Kinesis, which seems like a great idea, but the problem is that it only drops the information into one Redshift table, and I would like to have at least 4 or 5 so I can differentiate pageviews from events, etc. My solution to this would be to split each request server-side into a separate stream.
The other option is to use Spark + Kafka. (Here is a detailed explanation.)
I know at some point this means I'm building a parallel Google Analytics, with everything that implies. I still need to decide which information I should send (I'm referring to which parameters, for example the source and medium), how to format it correctly, and how to process it correctly.
Questions and debate points:
Which option is more efficient and easiest to set up?
Should I send this information directly from the server of the page/app, or send it from the user side by making it do requests as I explained before?
Has anyone done something like this in the past? Any personal recommendations?
You'd definitely benefit from Google Analytics' customTask feature instead of custom HTML. More on this from Simo Ahava. Also, Google BigQuery is quite a popular destination for streaming hit data, since it allows many 'on the fly' computations such as sessionization, and there are many ready-to-use cases for BQ.
Is there an MWS or AWS API call that I can make using a product ASIN that will tell me whether or not the information being returned from the Amazon servers is coming from the main product listing or from one of the "additional sellers" that are piggybacking off of the main product listing?
What I'm trying to do is programmatically determine if the MerchantId I'm using in the GetMatchingProductForId() call is the same MerchantId that originally created the product listing on Amazon. If they aren't the same it means (in theory, anyway) that I can work with a much smaller subset of the data, and post just the information that's required for the "Condition" and "Condition Note" values in a Marketplace Offering.
And yes, this question is directly related to How to get Seller Name from Amazon in ItemSearch using amazon API, but the API call and parameters in that answer have been deprecated by Amazon. The request literally returns <MerchantId>Deprecated</MerchantId> in the response, so I can't compare the Merchant ID value I'm using to make the call against the <MerchantId> node returned in the response.
After a lot of (very tedious) research and experimentation I've settled on a series of three MWS API calls to determine whether a product is an Amazon Marketplace Listing or an Amazon Marketplace Offering.
GetCompetitivePricingForSKU -- This call returns an XML CompetitivePrice node containing a belongsToRequester attribute set to "true" or "false". The caveat (and the reason I'm using three different MWS API calls) is that this call fails miserably for merchants that have predetermined shipping charges for their products.
GetMyPriceForSKU -- This call will return an error if the merchant doesn't "own" the Marketplace Listing. Purely anecdotal and empirical, though.
GetMyPriceForASIN -- The least reliable call of these three. Sometimes it will return an XML MerchantSKU node... and sometimes it won't.
Since Amazon doesn't provide any definitive answer (or documentation) for this issue, please take all of this advice with a large grain of salt. Run your own use-cases and see which one(s) work for you.
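The three-call fallback above can be sketched roughly as follows. This is only an illustration of the control flow, not a real MWS client: `api` is a hypothetical wrapper whose methods return parsed responses or raise on the error cases described above.

```python
# Hedged sketch of the three-call fallback. `api` is an assumed wrapper
# around the MWS Products calls; each method returns a parsed dict or raises.

def owns_listing(api, sku, asin):
    """Best-effort check whether our merchant owns the Marketplace Listing."""
    try:
        # 1) belongsToRequester attribute from GetCompetitivePricingForSKU
        price = api.get_competitive_pricing_for_sku(sku)
        return price["belongsToRequester"] == "true"
    except Exception:
        pass  # e.g. fails for merchants with predetermined shipping charges
    try:
        # 2) GetMyPriceForSKU errors out if we don't own the listing (anecdotal)
        api.get_my_price_for_sku(sku)
        return True
    except Exception:
        pass
    try:
        # 3) GetMyPriceForASIN: presence of a MerchantSKU node (least reliable)
        return "MerchantSKU" in api.get_my_price_for_asin(asin)
    except Exception:
        return False
```

Treat the result as a heuristic, per the caveats above: each layer only runs when the previous one couldn't give an answer.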
The GetProductForID call is intended to give you details about a product. Mainly it is used to get the ASIN from a UPC code, along with other product details such as bullets, images, size, etc.
If you are trying to see whether your offer for the same product is competitive, you can use GetLowestOfferListingForASIN (to get the ASIN, use GetProductForID if you have some sort of ID like a UPC, or ListMatchingProducts to do a text search). That way you will know what kind of offer you need to place to try to get the buy box.
If you are looking for more details on all the unique offers listed for a product (new, used, etc.), then you need to use the Subscriptions API (it's pretty new). This can get pretty complicated.
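The competitiveness check itself is simple once you have the lowest offer listings back. A tiny illustration (function name and the flat list of landed prices are my own simplification of the response):

```python
def is_competitive(my_price, lowest_offers):
    """True if my_price is at or below the cheapest listed offer.

    lowest_offers: landed prices extracted from the lowest-offer-listings
    response (simplified here to a plain list of floats).
    """
    return bool(lowest_offers) and my_price <= min(lowest_offers)

is_competitive(19.99, [21.50, 20.00, 24.99])  # True: undercuts the cheapest
is_competitive(22.00, [21.50, 20.00])         # False: above the cheapest
```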
I am sure this question may seem a bit lacking, but I literally do not know where to begin. I want to develop a solution that will allow me to manage ALL of my Amazon and Rakuten/Buy.com inventory from my own website.
My main concern is keeping the inventory in sync, so the process would be as follows:
1. Fetch orders sold today
   a. Subtract the respective quantities
2. Fetch Rakuten orders sold
   a. Subtract the respective quantities
3. Update internal DB of products
   a. Send out updated feeds to Amazon and Rakuten.
Again, I apologize if this question seems a bit lacking, but I am having trouble understanding exactly how to implement this; any tips would be appreciated.
For the Amazon part look at https://developer.amazonservices.com/
For Rakuten, I think you will be able to do what you want via FTP access; I'm still researching this. If I find more, I'll update with a better answer.
In order to process orders, you'll need to be registered with Rakuten to get an authorisation token. For the API docs etc., try sending an email to support#rakuten.co.uk.
Incidentally, to send out updated feeds, you'll need to use the inventory API to update stock quantities (given that you'll be selling the same items on Amazon etc.).
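The core of the sync loop from the question is straightforward bookkeeping; the marketplace calls are the hard part. A minimal sketch, where the order lists stand in for the parsed results of the Amazon and Rakuten order fetches:

```python
# Sketch of the sync workflow described in the question. The order lists
# are stand-ins for whatever the Amazon and Rakuten APIs actually return.

def sync_inventory(stock, amazon_orders, rakuten_orders):
    """stock: {sku: quantity}. Each order list holds (sku, qty_sold) tuples.

    Subtracts sold quantities from both channels, clamping at zero, and
    returns the updated stock dict to feed back to both marketplaces.
    """
    for sku, qty in amazon_orders + rakuten_orders:
        stock[sku] = max(0, stock.get(sku, 0) - qty)
    return stock

stock = {"SKU-1": 10, "SKU-2": 5}
updated = sync_inventory(stock, [("SKU-1", 3)], [("SKU-1", 2), ("SKU-2", 1)])
# SKU-1: 10 - 3 - 2 = 5; SKU-2: 5 - 1 = 4
```

In practice you'd also want to record which orders have already been counted (an acknowledged-order set, as discussed earlier) so a re-run doesn't subtract the same sale twice.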
I have a .NET application that takes a list of names/email addresses and finds their matches on Facebook using the Graph API. During testing, my list had 900 names. I was checking Facebook matches for each name in a loop, and the process completed. But afterwards, when I opened my Facebook page, it gave me a message that my account had been suspended due to suspicious activity.
What am I doing wrong here? Doesn't Facebook allow a large number of search requests to their server? And 900 doesn't seem to be a big number either.
Per the platform policies (https://developers.facebook.com/policy/), this may be a suspected breach of their "Principles" section.
See Policies I.5
If you exceed, or plan to exceed, any of the following thresholds please contact us by creating a confidential bug report with the "threshold policy" tag as you may be subject to additional terms: (>5M MAU) or (>100M API calls per day) or (>50M impressions per day).
Also IV.5
Facebook messaging (i.e., email sent to an @facebook.com address) is designed for communication between users, and not a channel for applications to communicate directly with users.
Then the biggie: V. Enforcement. No surprise, it's both automated and also monitored by humans, so they may be seeing 900+ requests coming from your app.
What I'd recommend doing:
Store what you can client side (in a cache or data store) so you make fewer calls to the API.
Put logging on your API calls so you, the developer, can see exactly what is happening. You might be surprised at what you find there.
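Both recommendations can be combined in a thin wrapper around the lookup: cache results so repeated names never hit the API, and log every call so you can see your real request volume. A sketch (the `fetch` callable stands in for the actual Graph API request):

```python
import logging

# Log every lookup so the real API call volume is visible during testing.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("graph-api")

cache = {}  # name -> previously fetched result

def search_user(name, fetch):
    """Look up a name, hitting the API (via `fetch`) only on a cache miss."""
    if name in cache:
        log.info("cache hit: %s", name)
        return cache[name]
    log.info("API call: %s", name)
    result = fetch(name)
    cache[name] = result
    return result

# Demo with a fake fetcher that records how often the "API" is really hit.
calls = []
def fake_fetch(name):
    calls.append(name)
    return {"name": name}

search_user("Alice", fake_fetch)
search_user("Alice", fake_fetch)  # served from cache; no second API call
```

For a 900-name batch, adding a delay between cache misses would also keep the request rate well under anything that looks like scraping.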