Does reddit use Amazon Cloud Search? [closed] - amazon-web-services

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I read in the reddit wiki that reddit moved to IndexTank, but when I reviewed the run.py file I found keys like Cloud_Search_Api_key, so I guessed it is using Amazon CloudSearch. If this is true, what values should be changed in run.py to make CloudSearch work? And what is subreddit_cloud_api_key?
Thanks

I'm pretty sure Reddit uses CloudSearch. Their GitHub FAQ claims they use IndexTank, but IndexTank was shut down in April 2012. If you search on Reddit and highlight the "δ" symbol, it will show text like "δ converted query to cloudsearch syntax: (and (field text 'search') (field text 'terms'))".
I'm not too familiar with Python or AWS, but it looks like CLOUDSEARCH_SEARCH_API and the other similar variables are URLs that Amazon calls endpoints.
The variable names in reddit/r2/run.ini contain SEARCH and DOC, mirroring Amazon's documentation. Also, cloudsearch.py opens an HTTP connection using that variable:
search_api = g.CLOUDSEARCH_SEARCH_API
# ...
connection = httplib.HTTPConnection(search_api, 80)
So you would probably set CLOUDSEARCH_SEARCH_API to the URL of your CloudSearch endpoint.
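To make that concrete, here is a minimal sketch of building a request path for CloudSearch's 2011-02-01 search API. The endpoint hostname below is made up, and the `bq` structured query mirrors the "converted query to cloudsearch syntax" output quoted above:

```python
from urllib.parse import urlencode

# Hypothetical endpoint name; real ones look like
# search-<domain>-<id>.<region>.cloudsearch.amazonaws.com
SEARCH_ENDPOINT = "search-reddit-example.us-east-1.cloudsearch.amazonaws.com"

def build_search_path(boolean_query, size=10):
    """Build the request path for CloudSearch's 2011-02-01 search API.

    `bq` carries the structured (boolean) query syntax shown in the
    "converted query to cloudsearch syntax" debug output.
    """
    params = urlencode({"bq": boolean_query, "size": size})
    return "/2011-02-01/search?" + params

path = build_search_path("(and (field text 'search') (field text 'terms'))")
# An actual request would then be something like:
#   conn = http.client.HTTPConnection(SEARCH_ENDPOINT, 80)
#   conn.request("GET", path)
```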
EDIT: Kemitche has answered this on Reddit. Unlike me, he knows what he's talking about, so take a look.

Related

I was working through a BigQuery tutorial, refreshed a page and cannot get it back [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I was doing one of those self-paced tutorials that were offered when I first logged in, which guided me through loading data into a storage bucket, copying it into BigQuery from there, and so on. I refreshed the page and it went away, and I have not been able to find a way to get it back. Any idea how to get back to a tutorial I was working through?
You can find the quickstart guides for BigQuery in the official GCP documentation.
There you can also find all the how-to guides related to BigQuery. For loading data from Cloud Storage into BigQuery in particular, check the section called "Loading data into BigQuery" (link).

How to use one website with several Amazon S3 buckets as path suffixes?

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 days ago.
Could you please advise: I would like to host multiple Amazon S3 buckets under one site in the following way:
Access to bucket1 via http://example.com/bucket1 (bucket2, and so on).
What Amazon currently allows is access with links like http://s3-ap-northeast-1.amazonaws.com/bucket1, but this is not allowed in my case (production server). Note that the site's DNS is also hosted at Amazon Route 53.
Thanks in advance

Can an Adobe EdgeAnimate composition be used on Amazon S3

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 13 hours ago.
Despite many attempts, I can't seem to make any Adobe Edge Animate composition work on Amazon S3. I appreciate that a website hosted on S3 must be static, but looking through the code exported by Edge Animate, no server-side aid is needed. It's just HTML, JS, and CSS files, all of which I use in websites that work without problems on S3.
All files are of course publicly accessible and the bucket is configured as a website.
Perhaps a CORS config or an included XML file may fix it, but I've no experience with either (same goes for PHP, SQL, etc., hence the need to run off S3).
I've posed the same question on AWS and Adobe forums, to no avail, so here goes a final attempt. Any suggestions will be gratefully received.
Thanks in advance.
Forum Links -
Amazon - https://forums.aws.amazon.com/thread.jspa?threadID=143338
Adobe - http://forums.adobe.com/thread/1373062
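If a CORS configuration does turn out to be relevant, S3 accepts an XML document like the following on the bucket. This is a generic, permissive sketch, not a confirmed fix for Edge Animate:

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

It is applied under the bucket's permissions (the "Edit CORS Configuration" option in the S3 console of that era).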

API for a Google Keep app? [duplicate]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don't allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 5 years ago.
Is there an API for Google Keep? I want to make a Windows 8 app for Google Keep, so that it synchronizes with your phone.
I looked into the Drive SDK, because Google Keep is an extension of Google Drive, but I couldn't find it.
UPDATE: yes, Google released a public REST API for Keep. Here's the public documentation.
No, there's not, and developers still don't know why Google doesn't pay attention to this request!
As you can see at this link, it's one of the most popular issues on Google Code, with many stars, but still no response from Google. You can also add stars to this issue; maybe Google will hear that!
I have been waiting to see if Google would open a Keep API. When I discovered Google Tasks, and saw that it had an Android app, web app, and API, I converted over to Tasks. This may not directly answer your question, but it is my solution to the Keep API problem.
Tasks doesn't have a reminder alarm exactly like Keep. I can live without that if I also connect with the Calendar API.
https://developers.google.com/google-apps/tasks/
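To make the Tasks-instead-of-Keep route concrete, here is a minimal sketch of preparing a call to the Tasks API's tasklists.list endpoint with plain urllib. The access token is a placeholder; a real app would first obtain one via OAuth 2.0 with the Tasks scope:

```python
import urllib.request

# Placeholder token; a real app obtains one via OAuth 2.0 with the
# https://www.googleapis.com/auth/tasks scope.
ACCESS_TOKEN = "ya29.EXAMPLE"

# tasklists.list endpoint of the Google Tasks API (v1)
url = "https://www.googleapis.com/tasks/v1/users/@me/lists"
req = urllib.request.Request(
    url, headers={"Authorization": "Bearer " + ACCESS_TOKEN}
)

# Sending it would return a JSON listing of the user's task lists:
#   urllib.request.urlopen(req)
```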

Collecting data from website without API [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I am looking to build a web app to improve the user experience of booking railway tickets in India. The official API is impossible to get due to the hefty charge to procure it, yet I have seen many apps that provide details of the trains etc. through their apps.
My question is: how are they scraping data from the website? In general, how can I legally get the data shown to users on any website (I don't want payment and other features that are impossible without the API)? How do people scrape such data? Any tools/methods?
Bear with me if the question is naive; I'm pretty new to this stuff.
They can get the train schedule information using any one of several programming languages, though it is most likely done with ordinary PHP and any good webserver host. For example, all Indian train schedules can be found on the indianrail.gov website.
Sending a specially built URL to ..
http://www.indianrail.gov.in/cgi_bin/inet_trnnum_cgi.cgi?lccp_trnname=1123
using the POST method of sending form data should give you all the details for train number 1123. After that it becomes just a simple task of tidying up the results for storage in a database.
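A minimal Python sketch of that approach follows. The endpoint and the lccp_trnname parameter are taken from the answer above; the actual fetch is left commented out, since the site checks request headers:

```python
from urllib.parse import urlencode

# CGI endpoint from the answer above; lccp_trnname is the form field
# carrying the train number.
TRAIN_CGI = "http://www.indianrail.gov.in/cgi_bin/inet_trnnum_cgi.cgi"

def build_train_request(train_no):
    """Return (url, form_body) for a POST asking for one train's details."""
    body = urlencode({"lccp_trnname": str(train_no)})
    return TRAIN_CGI, body

url, body = build_train_request(1123)
# A real fetch would also need to set User-Agent and Referer headers,
# since the site checks both (see the update below):
#   req = urllib.request.Request(url, data=body.encode(), headers={
#       "User-Agent": "Mozilla/5.0",
#       "Referer": "http://www.indianrail.gov.in/",
#   })
#   html = urllib.request.urlopen(req).read()
```

From there, tidying the returned HTML into database rows is ordinary scraping work (a parser such as BeautifulSoup helps).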
Update: it's a well-armoured site; it checks both the User-Agent and the Referer of inbound requests.
Addendum: the indianrail.gov site is changing to http://www.trainenquiry.com/ ; I will have to take another look.