How to remove images from Amazon MWS catalog - amazon-web-services

I have two Amazon accounts, US and UK (different merchantIds), and under the UK account I have DE, ES, IT, and FR accounts with the same merchantId as the UK one.
The problem here is that last week, because of an issue in our backend, images got duplicated, and when I published the catalog, all the images got duplicated in the Amazon accounts as well. I have spoken to the catalog team and they said they cannot do anything unless I provide them the ASINs, and only 7 ASINs per support case. I have almost 9,000 ASINs. I need to remove the duplicated images from my catalog and I don't know what to do.
I have published the image feed for 10 ASINs with operation type 'Delete' to delete all the images for those products, and the response says successful, BUT the images are still there; I don't know why. I was reading somewhere that images are linked between accounts, so if I delete the images from my UK account, I still need to do the same for the US account. I did that as well, with a successful response, BUT the images are still not being deleted from Amazon.
Can anyone tell me how to remove all images, duplicated or not? Then I can upload the images again in the proper order.
I appreciate your help and time.
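For reference, this is roughly how I build the delete feed that I submit with feed type _POST_PRODUCT_IMAGE_DATA_ via SubmitFeed (a sketch: the merchant ID and SKUs below are placeholders, and I repeat the Delete message for every image slot):

```python
# Sketch of the ProductImage delete feed submitted via MWS SubmitFeed
# (feed type _POST_PRODUCT_IMAGE_DATA_). Merchant ID and SKUs are placeholders.
SKUS = ["SKU-0001", "SKU-0002"]                           # placeholder SKUs
IMAGE_TYPES = ["Main"] + [f"PT{i}" for i in range(1, 9)]  # Main + PT1..PT8 slots

messages = []
message_id = 1
for sku in SKUS:
    for image_type in IMAGE_TYPES:
        messages.append(f"""
  <Message>
    <MessageID>{message_id}</MessageID>
    <OperationType>Delete</OperationType>
    <ProductImage>
      <SKU>{sku}</SKU>
      <ImageType>{image_type}</ImageType>
    </ProductImage>
  </Message>""")
        message_id += 1

feed = f"""<?xml version="1.0" encoding="UTF-8"?>
<AmazonEnvelope>
  <Header>
    <DocumentVersion>1.01</DocumentVersion>
    <MerchantIdentifier>MY_MERCHANT_ID</MerchantIdentifier>
  </Header>
  <MessageType>ProductImage</MessageType>{''.join(messages)}
</AmazonEnvelope>"""

print(feed)
```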

If you wish to delete single images, you have two options:
Delete them manually in your inventory.
Create a file with the full list of ASINs, forward this list in a new case to Amazon Seller Support, and ask them to delete the images on these ASINs. As this request will delete every image on these ASINs, you'll have to re-upload the images you want to keep.
Info: option 2 is goodwill only, and I cannot confirm that every Seller Support associate will do it.
Thanks
Raz

Related

Tagging photos in S3 for sorting versus storing in the database

So as a photo company we present photos to customers in a gallery view. Right now we show all the photos from a game. This is done by getting a list of the objects and generating a presigned URL for each to display them on our site, and it works very well.
The photos belong to an "event", and each photo object is stored in an easy-to-maintain/search folder structure. Every photo has a unique name.
But we are going to build a sorting system so customers can filter their view. Our staff would upload the images to S3 first, and then the images would be sorted. As of right now, we do not store any reference to the image in the database until the customer adds it to their cart.
So I believe we have three ways we can tag these images:
Store a reference to the image in our database along with its tags.
Apply metadata to the S3 object.
Apply tags to the S3 object.
My issue with the database method is that we shoot hundreds of thousands of images a month, and I feel that would overly bloat the database. Maybe we create a separate DB just for this (DynamoDB, etc.)?
Metadata could work, and for the most part an image will only be tagged or untagged about once on average. But I understand that every time the metadata is changed, a new copy of the object is created. We don't use versioning, so there would still only exist one copy, but there would be a cost associated with duplicating the image, right? The pro would be that the metadata comes down with the GET object response, so a second request wouldn't be needed. But is it available in the headers when using a presigned URL?
Tags, on the other hand, can be set and unset as needed at minimal additional cost, although getting the objects and their tags would be two separate calls. On the other hand, I could get the list of objects by tag and therefore only generate presigned URLs for the objects that need to be displayed rather than all of them. That could be helpful.
That is at least how I understand it. If I am wrong, please tell me.
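For concreteness, here is a minimal sketch of the tagging option (option 3) with boto3; the bucket name, key, and tag values are made up:

```python
# Minimal sketch of tagging S3 objects and reading tags back with boto3.
# Bucket name, key, and tag values are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "photos-example"            # placeholder bucket
KEY = "2024/event-123/IMG_0001.jpg"  # placeholder object key

# Set (replace) the tag set on an object -- no object copy is made,
# unlike changing metadata, which requires copying the object to itself.
s3.put_object_tagging(
    Bucket=BUCKET,
    Key=KEY,
    Tagging={"TagSet": [
        {"Key": "sorted", "Value": "true"},
        {"Key": "category", "Value": "action"},
    ]},
)

# Reading tags is a separate call from GetObject.
tags = s3.get_object_tagging(Bucket=BUCKET, Key=KEY)["TagSet"]
print(tags)
```

One caveat worth noting: the plain ListObjects API does not filter by tag, so listing objects by tag typically means maintaining an index yourself or using something like S3 Inventory.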

GCP image storage strategy

I am working on a project where a photographer uploads a set of high-resolution pictures (20 to 40 pictures). I am going to store each picture twice: one original and one with a watermark. On the platform, only the watermarked pictures will be displayed. The user will be able to buy pictures, and the selected ones will be sent by email (the original pictures).
bucket-name: photoshoot-fr
main-folder(s): YYYY-MM-DD-HH_MODEL-NAME, example: 2020-01-03_Melania-Doul
I am not sure whether I should have two different folders inside the previous folder, original and protected. Both folders would contain the exact same pictures with the same IDs, but one would store the original pictures and the other the protected (watermarked) ones. Is there a better bucket design solution?
N.B.: it's a personal project, but there are multiple photographers, and each photographer is going to upload 2-3 sets of photos every day. Each main folder is going to be deleted after 2 months.
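For concreteness, a sketch of what uploads would look like with this two-prefix layout, using the google-cloud-storage client (local file paths and the photo ID are placeholders):

```python
# Sketch of uploading one picture twice under the proposed layout:
# <shoot-folder>/original/<id>.jpg and <shoot-folder>/protected/<id>.jpg.
# Local file paths and the photo ID are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("photoshoot-fr")

shoot = "2020-01-03_Melania-Doul"  # main folder: date + model name
photo_id = "0001"

bucket.blob(f"{shoot}/original/{photo_id}.jpg").upload_from_filename("original.jpg")
bucket.blob(f"{shoot}/protected/{photo_id}.jpg").upload_from_filename("watermarked.jpg")
```

One advantage of keeping both copies under the same shoot folder is that a single age-based lifecycle rule on the bucket can handle the 2-month cleanup for both prefixes at once.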

Can I raise the point limit on a geospatial map in QuickSight?

I have a CSV with approximately 108,000 rows, each of which is a unique long/lat combination. The file is uploaded into S3 and visualised in QuickSight.
My problem is that QuickSight only shows the first 10,000 points. The points it shows are in the correct place and the map works perfectly; it's just missing 90%+ of the points I wish to show. I don't know if it makes a difference, but I am using an admin-enabled role for both S3 and QuickSight, as this is a dev environment.
Is there a way to increase this limit so that I can show all of my data points?
I have looked in the visualisation settings (the drop-down in the viz) and explored the tab on the left as much as I can. I am quite new to AWS, so this may be a really easy one.
Thanks in advance!
You could consider combining lat/lng points that are near each other, based on some rule you come up with when preparing your data, for example by snapping them to a grid as sketched below.
There appear to be limits on how many rows and columns you can serve to QuickSight:
https://docs.aws.amazon.com/quicksight/latest/user/data-source-limits.html
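A rough sketch of that grid-snapping idea with pandas; the file name, column names, and cell size are assumptions about your data:

```python
# Sketch: reduce ~108k points by snapping lat/lng to a coarse grid and
# keeping one aggregated row per cell. Column names and grid size are
# assumptions about your CSV.
import pandas as pd

df = pd.read_csv("points.csv")  # placeholder file with lat/lng columns
grid = 0.01                     # cell size in degrees; tune to your data

df["lat_bin"] = (df["lat"] / grid).round() * grid
df["lng_bin"] = (df["lng"] / grid).round() * grid

# One row per grid cell, with a count you can size the points by.
binned = (df.groupby(["lat_bin", "lng_bin"])
            .size()
            .reset_index(name="count"))

binned.to_csv("points_binned.csv", index=False)
```

Pick the grid size so the number of cells lands under the 10,000-point cap while still showing the spatial pattern you care about.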

Power BI REST API AddRows

I am working on a realtime dashboard, and I'd like to use the Power BI REST API.
My question is how updating rows works. I have 1,300 records to load once, and then I need to update 2 columns of each row every 20 seconds.
The only REST call I see is AddRows, and it's not clear how it handles updating rows, if it does at all.
You have two patterns you can choose from:
You can send data in batches: upload the 1,300 rows, then call DELETE on the rows, then call upload with the next payload of rows (sketched below).
Here's the DELETE method you need to call. We're adopting REST standards for our APIs, so the 'methods' are the REST verbs :). https://msdn.microsoft.com/en-us/library/mt238041.aspx
Alternatively, you can incrementally update the data: add a 'timestamp' column to your data set, then in your query (like in Q&A) ask for "show data for the last 20 seconds". If you do this, set the FIFO retention policy when you create the data set so you don't run out of space.
In either case, double-check that the number of rows you're pushing fits within the limits we spell out. https://msdn.microsoft.com/en-US/library/dn950053.aspx
HTH,
-Lukasz
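A minimal sketch of the batch pattern with Python requests against the push-dataset row endpoints; the dataset ID, table name, token, and row shape are placeholders:

```python
# Sketch of the batch pattern: clear the push dataset table, then push
# the fresh rows. DATASET_ID, TABLE, TOKEN, and the row shape are
# placeholders you'd obtain from your workspace and auth flow.
import requests

TOKEN = "eyJ..."                # placeholder Azure AD access token
DATASET_ID = "your-dataset-id"  # placeholder
TABLE = "Measurements"          # placeholder table name
BASE = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/tables/{TABLE}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def replace_rows(rows):
    # Delete all rows currently in the table...
    requests.delete(f"{BASE}/rows", headers=HEADERS).raise_for_status()
    # ...then push the updated batch (1,300 rows fits well under the limits).
    requests.post(f"{BASE}/rows", headers=HEADERS,
                  json={"rows": rows}).raise_for_status()

replace_rows([{"id": 1, "value": 42.0}])  # placeholder rows, every 20 seconds
```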
I was searching for something in the Power BI docs that could help me create a report with the REST APIs. I couldn't find exactly that, but I made a workaround.
First, I created a push dataset schema in Power BI with the help of the Post Dataset REST API (a sketch of this call follows the steps below):
https://learn.microsoft.com/en-us/rest/api/power-bi/push-datasets/datasets-post-dataset-in-group
Then I pushed rows/records/data into my dataset with Post Rows:
https://learn.microsoft.com/en-us/rest/api/power-bi/push-datasets/datasets-post-rows-in-group
Then I went to the Power BI service and created a visual report manually there.
After this, I embedded that report in my React app.
Finally, my report was live.
Now, whenever I wanted to update my report in real time, I called the Delete Rows API to delete the existing rows/records from my dataset:
https://learn.microsoft.com/en-us/rest/api/power-bi/push-datasets/datasets-delete-rows-in-group
and then called the Post Rows API again with the new, updated data (repeating step 2).
Then I refreshed my website page, and now I see the updated visual report on my website.
It took me a lot of time, so I know the feeling if you are struggling with the Power BI REST API; it's not straightforward. Feel free to ask anything below, I'll be happy to help.
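A minimal sketch of the first step, creating the push dataset schema in a workspace; the workspace ID, token, dataset name, and columns are placeholders:

```python
# Sketch of creating a push dataset in a workspace (step 1 above).
# GROUP_ID, TOKEN, and the table schema are placeholders.
import requests

TOKEN = "eyJ..."                 # placeholder Azure AD access token
GROUP_ID = "your-workspace-id"   # placeholder workspace (group) ID
URL = f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets"

schema = {
    "name": "RealtimeDemo",
    "defaultMode": "Push",
    "tables": [{
        "name": "Readings",
        "columns": [
            {"name": "id", "dataType": "Int64"},
            {"name": "value", "dataType": "Double"},
            {"name": "ts", "dataType": "DateTime"},
        ],
    }],
}

resp = requests.post(URL, headers={"Authorization": f"Bearer {TOKEN}"}, json=schema)
resp.raise_for_status()
print(resp.json()["id"])  # dataset ID used by the rows APIs in steps 2 and 4
```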

Knowing when a photo is deleted with Flickr API

In my application, I have to run a periodic job where I need data about all the photos in my client's Flickr account. Currently, I make several calls to flickr.photos.search to retrieve metadata about all the photos each time.
What I'm asking is: is there a way to be informed by Flickr when a photo is modified or deleted, so that I don't need to retrieve metadata for every photo, but can instead store it once and only download what has changed since the last time the job ran?
Thanks in advance for your help.
There is no such notification available from the Flickr API to your code.
What you can do instead (recommended only if the volume of photo-metadata changes is high):
Set up a cron job that scans through the photos and records which photo IDs have been deleted, so the result can be used later (see the sketch below).
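A minimal sketch of such a job with Python requests, diffing the current photo IDs against the set stored on the previous run; the API key, user ID, and state file are placeholders, and error handling is omitted:

```python
# Sketch of a periodic job that detects deleted photos by diffing the
# current ID list against the previously stored one. API_KEY, USER_ID,
# and STATE_FILE are placeholders; error handling is omitted.
import json
import requests

API_KEY = "your-api-key"    # placeholder
USER_ID = "12345678@N00"    # placeholder NSID
STATE_FILE = "known_ids.json"

def fetch_all_ids():
    ids, page, pages = set(), 1, 1
    while page <= pages:
        resp = requests.get("https://api.flickr.com/services/rest/", params={
            "method": "flickr.photos.search",
            "api_key": API_KEY,
            "user_id": USER_ID,
            "per_page": 500,
            "page": page,
            "format": "json",
            "nojsoncallback": 1,
        }).json()
        pages = resp["photos"]["pages"]
        ids.update(p["id"] for p in resp["photos"]["photo"])
        page += 1
    return ids

try:
    with open(STATE_FILE) as f:
        known = set(json.load(f))
except FileNotFoundError:
    known = set()

current = fetch_all_ids()
deleted = known - current   # IDs that disappeared since the last run
added = current - known     # IDs uploaded since the last run

with open(STATE_FILE, "w") as f:
    json.dump(sorted(current), f)

print(f"deleted: {sorted(deleted)}, added: {sorted(added)}")
```

For modifications (as opposed to deletions), flickr.photos.recentlyUpdated with a min_date argument can narrow the scan to photos changed since the last run.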