I need to create a form which, on submission, filters search results based on certain keywords.
The keywords I'm working with are year, make, and model (it's a car parts site). With Amazon Webstore's built-in search function, make and model only work as search parameters because most of the products happen to have the make and model in the product name, which isn't the most efficient approach, to say the least.
I need to be able to query the product database directly, but that's just not how Amazon Webstore works, I assume.
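For illustration, this is roughly the kind of filtered lookup I have in mind if I had direct database access (a hypothetical sketch; the table and column names are made up, and Amazon Webstore doesn't expose anything like this):

```python
import sqlite3

def search_parts(db_path, year, make, model):
    """Return parts matching the year/make/model filters (hypothetical schema)."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            """
            SELECT sku, title, price
            FROM parts
            WHERE fitment_year = ? AND make = ? AND model = ?
            ORDER BY title
            """,
            (year, make, model),
        )
        return cur.fetchall()
    finally:
        conn.close()

# e.g. search_parts("parts.db", 2012, "Honda", "Civic")
```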
Does anybody have an example of this that I can look at? Is there anyone here who develops custom templates in Amazon Webstore whom I could ask questions?
Start at the Amazon Webstore Forum, which has dedicated staff, documentation, helpful folks, and more solutions.
During the pandemic, around March 2020, Google started allowing business owners to tag their restaurants with the dining options they offer (tags such as dine-in, takeout, and delivery, shown on the business listing).
I was wondering if the Places API (or any other Google API) has the ability to return these dining types. I've checked the docs for the Places API and it seems to only be capable of returning the business's business_status field, which only takes the values OPERATIONAL, CLOSED_TEMPORARILY, and CLOSED_PERMANENTLY, and not the fields I am looking for.
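For context, this is roughly the Place Details request I have been testing; business_status comes back, but nothing about dining options (the place ID and API key below are placeholders):

```python
import requests

API_KEY = "YOUR_API_KEY"                   # placeholder
PLACE_ID = "ChIJN1t_tDeuEmsRUsoyG83frY4"   # placeholder place_id

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/details/json",
    params={
        "place_id": PLACE_ID,
        "fields": "name,business_status",  # no documented field for dining options
        "key": API_KEY,
    },
)
print(resp.json().get("result"))
# roughly: {'name': '...', 'business_status': 'OPERATIONAL'} and nothing about dine-in/takeout/delivery
```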
Would the only other way to obtain these tags be by web scraping a search result?
I just spoke with Google support and they said no. There is also no plan in the future to add these data points to the JSON.
They gave me some workaround ideas but nothing realistic. Let me know if you find a workaround! I am trying to gather this data as well.
I need to write an API that provides access to data being served as HTML documents from a web server. I need my users to be able to perform queries over the data.
Say on a web site there is a page which lists items and their owners. Then there is an additional set of profile pages for owners which, for each owner, provides information about their reputation. An example query I may need to answer is "Give me the IDs and owners of all items submitted in 2013 whose owners have a reputation of at least 10".
Given a query to answer, I need to be able to screen scrape only the parts of the web site needed for answering the query at hand, and ideally cache the obtained information for future use with new queries.
I have no problem writing the screen scraping part, but I am struggling with designing the storage/query/cache part. Is there something about Clojure/Datomic that makes it an especially suitable technology choice for this kind of data processing? I have been pointed in this direction before.
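To make the target concrete, here is a rough sketch of the scrape-then-cache-then-query shape I'm after, with SQLite standing in for whatever store I end up choosing (Datomic or otherwise); the page markup and column names here are invented:

```python
import sqlite3
import requests
from bs4 import BeautifulSoup  # assuming the pages are simple enough for BeautifulSoup

conn = sqlite3.connect("cache.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS items  (item_id TEXT PRIMARY KEY, owner TEXT, year INTEGER);
CREATE TABLE IF NOT EXISTS owners (owner   TEXT PRIMARY KEY, reputation INTEGER);
""")

def scrape_items(conn, listing_url):
    """Scrape the item listing page and cache (item_id, owner, year) rows.
    A similar scrape of the profile pages would fill the owners table."""
    soup = BeautifulSoup(requests.get(listing_url).text, "html.parser")
    for row in soup.select("table.items tr"):  # hypothetical markup
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if len(cells) == 3:
            conn.execute("INSERT OR REPLACE INTO items VALUES (?, ?, ?)", cells)
    conn.commit()

def items_from_2013_with_reputable_owners(conn, min_rep=10):
    """The example query: items submitted in 2013 whose owners have reputation >= min_rep."""
    return conn.execute(
        """
        SELECT i.item_id, i.owner
        FROM items i JOIN owners o ON o.owner = i.owner
        WHERE i.year = 2013 AND o.reputation >= ?
        """,
        (min_rep,),
    ).fetchall()
```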
It seems like a nice challenge, but I'm not sure about a few things: a) would you like to expose a Datalog query box to your users, and so make them learn a Datalog-like syntax? b) what exact kind of results do you wish to cache: raw DB responses, HTML-formatted text, JSON?
Anyway, I suggest you install and play a little bit with the Datomic Console to get a feel for it if you haven't before, as it seems to me the closest thing to what you want to achieve at the moment: https://www.youtube.com/watch?v=jyuBnl0XQ6s and http://blog.datomic.com/2013/10/datomic-console.html
For the API I suggest you use Liberator (http://clojure-liberator.github.io/liberator/), as it provides sane defaults for implementing REST services and lets you focus on your app's behaviour.
I am sure this question may seem a bit lacking, but I literally do not know where to begin. I want to develop a solution that will allow me to manage ALL of my Amazon and Rakuten/Buy.com inventory from my own website.
My main concern is keeping the inventory in sync, so the process would be as follows (a rough sketch of this loop follows the list):
1. Fetch Amazon orders sold today
  a. Subtract the respective quantities
2. Fetch Rakuten orders sold today
  a. Subtract the respective quantities
3. Update the internal DB of products
  a. Send out updated feeds to Amazon and Rakuten.
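Something like this sketch is what I'm picturing; the fetch_* and send_* callables and the order/line shapes are placeholders for whatever the Amazon and Rakuten integrations end up looking like:

```python
def sync_inventory(db, fetch_amazon_orders, fetch_rakuten_orders,
                   send_amazon_feed, send_rakuten_feed):
    """One pass of the daily sync loop described above.
    db is a simple {sku: quantity} mapping; all callables are placeholders."""
    changed_skus = set()

    # Steps 1 and 2: fetch today's orders from each channel and subtract quantities.
    for order in list(fetch_amazon_orders()) + list(fetch_rakuten_orders()):
        for line in order["lines"]:
            db[line["sku"]] = db.get(line["sku"], 0) - line["qty"]
            changed_skus.add(line["sku"])

    # Step 3: the internal DB is now updated; push the new quantities back out.
    updates = {sku: max(db[sku], 0) for sku in changed_skus}
    send_amazon_feed(updates)
    send_rakuten_feed(updates)
    return updates
```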
Again, I apologize if this question may seem a bit lacking, but I am having trouble understanding how exactly to implement this; any tips would be appreciated.
For the Amazon part look at https://developer.amazonservices.com/
For Rakuten, I think you will be able to do what you want via the FTP access; I'm still researching this. If I find out more I'll respond with a better answer.
In order to process orders, you'll need to be registered with Rakuten in order to get an authorisation token. For the API docs etc., try sending an email to support@rakuten.co.uk.
Incidentally, to send out updated feeds, you'll need to use the inventory API in order to update stock quantities (given that you'll be selling the same items on Amazon, etc.).
I am new to CiviCRM and need to create a fundraising page. I am using CiviCRM with Drupal 7. As per my limited knowledge of CiviCRM, individual users can create their Personal Campaign Pages to support different events. But is it possible for users to create an entirely independent fundraising page that allows them to fundraise for a particular cause and collect donations? Something like the "START YOUR CAMPAIGN" tab on the http://my.charitywater.org website. If yes, how is that done?
Any Help Much Appreciated!
Thanks in Advance!
One solution is to create a "normal" contribution page for the cause you want to collect money for, and enable Personal Campaign Pages on it.
You can make this "root" contribution page as generic and abstract as you want and let your supporters set up their fundraising pages however they want.
Not really. You probably want them to be able to create a full contribution page for that. If you are offering this to front-end users, you probably want to make a custom interface for it using Drupal and automatically create the contribution page in the background using the API. Other people have done this type of thing in the past, but I am not sure if any of that work is public. I suggest you ask on the CiviCRM forums.
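As a rough illustration of the "create it in the background" idea, your custom interface could call CiviCRM's APIv3 REST endpoint to create the contribution page. This is only a sketch: the rest.php path, the site/user keys, and the exact field names vary by CiviCRM version, so verify them against the API explorer on your own install:

```python
import json
import requests

# Path varies by install; this is the common Drupal 7 layout.
CIVI_REST = "https://example.org/sites/all/modules/civicrm/extern/rest.php"

def create_contribution_page(title, financial_type_id=1):
    """Create a contribution page via CiviCRM's APIv3 REST interface (sketch; fields may differ)."""
    params = {
        "entity": "ContributionPage",
        "action": "create",
        "api_key": "USER_API_KEY",   # per-user API key
        "key": "SITE_KEY",           # CiviCRM site key
        "json": json.dumps({
            "title": title,
            "financial_type_id": financial_type_id,
            "is_active": 1,
        }),
    }
    resp = requests.post(CIVI_REST, data=params)
    return resp.json()
```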
I'm preparing my graduation project in computer science. I made this website and it's running perfectly, but my supervisor asked me to apply data mining to the website.
But I don't understand what I should do.
The website is a social network: each user has a profile and a blog, plus access to some e-books that require registration to download. The website also contains a music server with songs that a registered user can download or add as favorites on their profile page, and it carries ads (I used the OpenX script). These are most of the website services where I could perform data mining; the website is www.sy-stu.com.
I need ideas, and also: what is the best way to present this in the interview?
You can ask your professor what his intention was in asking for data mining. Data mining algorithms can do various tasks; you first need to define what you want to accomplish, and then find suitable algorithms and check the technical possibilities.
Some ideas that came to my mind about using data mining in your project (a small sketch of the first one follows the list):
you can use data mining to find which songs (e-books, etc.) a user might favorite, based on other people's favorite songs (find similarities; association rules would probably be a good algorithm for this).
you can use some clustering algorithms to group users based on some parameters and suggest that they connect with other people from the same group (if you have a feature like this).
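For the first idea, even a very simple co-occurrence count over the favorites data gets you "users who liked this song also liked..." style suggestions (a simplified stand-in for full association-rule mining). This sketch assumes you can pull (user_id, song_id) favorite pairs out of your database:

```python
from collections import defaultdict
from itertools import combinations

def co_favorited_counts(favorites):
    """favorites: iterable of (user_id, song_id) pairs.
    Returns {song_id: {other_song_id: count}} of how often two songs are favorited together."""
    songs_by_user = defaultdict(set)
    for user_id, song_id in favorites:
        songs_by_user[user_id].add(song_id)

    counts = defaultdict(lambda: defaultdict(int))
    for songs in songs_by_user.values():
        for a, b in combinations(sorted(songs), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def suggest(counts, song_id, n=5):
    """Top-n songs most often favorited together with song_id."""
    related = counts.get(song_id, {})
    return sorted(related, key=related.get, reverse=True)[:n]
```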
Good luck!:)
Firstly, ask for clarification from your supervisor. Don't say 'What do you mean?', but ask 'Are you expecting something like this?' because it shows that you've at least thought about it.
If you can't think of anything, or your supervisor is vague, perform some simple data retrieval and analysis, e.g.:
the most active members
the most / least popular songs and books
the number of ads clicked, etc.
the most popular website features
Just elementary analysis should suffice - you aren't doing a statistics degree. Work out the most songs downloaded in a day or per user, the average songs per user, how many users visit each day and how many sign up and never visit.
The purpose is to demonstrate that your website is logging all activity, so that when you are asked 'how many books did the 20 most active users download in June?' you will be able to work out the answer.
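As a concrete illustration, if downloads were logged in a table with (user_id, item_type, downloaded_at) columns (a hypothetical schema, not anything your site necessarily has today), that exact question becomes a single query:

```python
import sqlite3

def june_books_by_top_users(db_path, top_n=20):
    """Count June book downloads by the top_n most active users (hypothetical 'downloads' table)."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """
            WITH top_users AS (
                SELECT user_id
                FROM downloads
                GROUP BY user_id
                ORDER BY COUNT(*) DESC
                LIMIT ?
            )
            SELECT COUNT(*)
            FROM downloads
            WHERE item_type = 'book'
              AND strftime('%m', downloaded_at) = '06'
              AND user_id IN (SELECT user_id FROM top_users)
            """,
            (top_n,),
        ).fetchone()[0]
    finally:
        conn.close()
```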
The alternative is a website that just runs and you don't have any knowledge of how your users are behaving and what they are doing, which means you aren't able to focus on things that they find important.
I don't know exactly what kind of data you are trying to mine, but have you checked out Google Analytics? It is very easy to set up: once you register, all you need to do is include the provided JavaScript in your web pages. Google Analytics will give you plenty of statistics about access to your site and its visits. Is that what you need? The data produced is very easy to read as well, and will be suitable for you to present, I reckon.