Is there a way to retrieve all targetable cities in the Ads API?

The autocomplete API allows us to retrieve lists of all countries, regions, and locales by leaving out the query string and setting the result limit to a large number, but this feature isn't available at the city level.
Is there a way that we can retrieve a full list of all targetable cities and their IDs? If not, can we cache the autocomplete data for cities to build up such a list?
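For reference, the kind of autocomplete call I'm describing looks roughly like the sketch below (in Java). The endpoint path and the type/limit parameter names are from memory and should be treated as assumptions to verify against the current docs:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TargetingAutocomplete {
        public static void main(String[] args) throws Exception {
            String accessToken = "YOUR_ACCESS_TOKEN";
            // Type-ahead search with the query string left out and a large limit,
            // as described above; works for countries/regions/locales but not cities.
            String url = "https://graph.facebook.com/search"
                    + "?type=adcountry"
                    + "&limit=1000"
                    + "&access_token=" + accessToken;
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON page of targeting entries
                }
            }
        }
    }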

That functionality is probably not supported because of the massive amount of data that would be returned by fetching every city in the world, even with paging. Limiting the response by country (e.g. with country_list=["ca"]) and then fetching all of that country's cities doesn't sound too far-fetched, but that isn't implemented either.
To me, it sounds like you have two options.
Create a bug report using our bug tool to request a wishlist feature (this doesn't guarantee anything, but at least we can track it if we choose to implement it, and it can serve as a way to gauge interest in the feature)
IANAL, but part 2 of section 2 of the FB Platform Policies states
You may cache data you receive through use of the Facebook API in order to improve your application’s user experience, but you should try to keep the data up to date. This permission does not give you any rights to such data.
That sounds like you can cache the autocomplete data, since it improves the UX of your app; however, just remember that you do not have any rights to the data. I would be cautious about this, as it would really suck to work hard on building all the caching functionality only to have FB say that it's not allowed. I would advise consulting with some experts before pursuing this path.
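If you do go down the caching route, here is a minimal, purely illustrative sketch of the kind of expiring in-memory cache you could wrap around the autocomplete responses so the cached data stays reasonably fresh, per the policy above:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal TTL cache for autocomplete responses, keyed by the query URL.
    public class AutocompleteCache {
        private static class Entry {
            final String json;
            final long fetchedAt;
            Entry(String json, long fetchedAt) {
                this.json = json;
                this.fetchedAt = fetchedAt;
            }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        private final long ttlMillis;

        public AutocompleteCache(long ttlMillis) {
            this.ttlMillis = ttlMillis;
        }

        // Returns the cached JSON if it is still fresh, otherwise null so the
        // caller can re-fetch from the API and call put() again.
        public String get(String queryUrl) {
            Entry e = cache.get(queryUrl);
            if (e == null || System.currentTimeMillis() - e.fetchedAt > ttlMillis) {
                return null;
            }
            return e.json;
        }

        public void put(String queryUrl, String json) {
            cache.put(queryUrl, new Entry(json, System.currentTimeMillis()));
        }
    }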

Related

read entire fb ad account structure

v2.5 (using Java to implement an HTTP/JSON REST layer)
What is the most efficient way to read the entire structure of an FB ad account, by which I mean all of its campaigns, ad sets, ads and ad creatives?
Here is one way, which as a relative newcomer I assume is not the most efficient. It is essentially a breadth-first crawl:
read the account data, eg., act_123?fields=id,name,owner...
read the campaign data, eg., act_123/campaigns?fields=id,name...
for each campaign, read the adSet data
for each adSet, read the ad data
for each ad, read the ad creative data
I'm thinking that those of you who've been at this a while and need to do something similar have figured out the best strategy for doing this in a time-efficient manner while staying on good terms with the FB servers servicing the API calls (avoiding rate limits, too many calls in too short a time, etc.).
Even if the entire account structure must be crawled, perhaps going depth-first is better than breadth-first; in other words, for each campaign, request the campaign data (using a nested fields param) to fetch the adSet data, the ad data, etc.?
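For illustration, something like the following (untested); the adsets/ads/creative edge and field names are my assumptions about the field-expansion syntax, so they would need to be checked against the API version in use:

    import java.net.URLEncoder;

    public class NestedFieldsSketch {
        public static void main(String[] args) throws Exception {
            String accessToken = "YOUR_ACCESS_TOKEN";
            // Field expansion: campaigns with their ad sets, ads and creatives in one request.
            String fields = "name,adsets{name,ads{name,creative{id,name}}}";
            String url = "https://graph.facebook.com/v2.5/act_123/campaigns"
                    + "?fields=" + URLEncoder.encode(fields, "UTF-8")
                    + "&access_token=" + accessToken;
            System.out.println(url);
            // Issue the GET with the existing HTTP/JSON layer and follow the
            // paging cursors that appear at each nested level of the response.
        }
    }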
Any tips, advice or guidance would be most appreciated.
Thanks
Regarding staying on good terms with the FB servers: the cost of your API calls is not constant but depends on the complexity of each call, so making many small calls should not be more expensive, in that sense, than retrieving everything in one big call.
I also think that the way you are doing it is the way it is supposed to be done. You could, for example, get all your creatives directly at the ad account level, but as far as I know you cannot fetch the entire hierarchy in one go.
There is also a Java SDK released by Facebook recently (still in beta); you might find further information there.
I'm actually looking for an answer to this issue as well.

Facebook style like system in modx cms (php)

I'm trying to build a simple like system in MODX (which uses PHP snippets of code). I just need a button that logged-in users can press to add a 'like' to a resource.
Would it be best to update a custom table or a TV? My thought is that if it is a template variable, I can use getResources to sort by the number of likes.
Any thoughts on the best way to approach or build this would help. My PHP knowledge is limited.
It depends on how you are going to use it afterwards and whether you are storing more data than just a 'like' count. TVs are expensive on resources [even more so if you are going to whip through the entire resource set with getResources], so if you are going to do a lot of processing after the fact I would either look at a custom table ~or~ explore using property sets on your pages [I think it should be pretty easy to write a plugin that updates a page property].
I'd definitely go for a custom table.
While you could simply increment a numeric TV to count the number of likes, you will end up in a situation where anyone can keep on liking a resource without limit; while you didn't specify the exact concept, that can hardly be desired. With a custom table you could add a relational alias to the user ID that liked the resource, add a timestamp so you know when it happened, and let your imagination run wild on the additional features that are now open to you.
While not a hard requirement for custom tables, you will probably want to take the time to learn xPDO, the database abstraction layer MODX is built on. There's a great tutorial in the RTFM that walks you through it.

What does Facebook know about you with the like box

We were having a beer talk and have something to clear up.
Is the following conclusion correct:
When I put a Facebook like-button box on my page, Facebook knows every time I'm on that page, even if I'm not logged in.
It's basically the same as Google Analytics.
If this is correct, it should be possible to sandbox the like button until someone actually uses it. Then Facebook only gets information when the user actively confirms that.
cheers endo
No, they can't directly track you if you are not logged in and you view an external "like" button. They can, however, set a tracking cookie that identifies you when you sign in, which would allow them to match the tracking data in the current session to you.
One of Facebook's primary revenue streams comes from the analysis and sale of market trend information. They can analyse the likes and comment keywords of certain user clusters (e.g. middle-aged American females, teenagers in college, etc) and use these to produce statistics about market patterns and trends. They can also use keyword analysis to tell a company how many people are talking about something, e.g. "how many people have mentioned my latest blockbuster film?"
You could simply move the image and JavaScript code away from the Facebook servers and host them locally to prevent Facebook from tracking your users.
In pre-emption of the "FACEBOOK = EVIL" arguments:
In the end, though, is it really a big issue? Some people see Facebook as this massive life-infringing uncaring supercorporation, but in reality they're just making a buck through completely anonymous statistics. No human being (or sentient robot) views your preferences, browser tracking data, or personal information. Everything is anonymised and turned into a bunch of numbers relating to a group. Sure, they could screw everyone over and be evil, but why bother when you already make that much money legitimately?

Sitecore return "Popular searches" while using Lucene Search?

I have a request to return a list of the most popular search terms used when searching a Sitecore site.
I have no idea how to implement this sort of function using Sitecore or whether Sitecore already has this kind of functionality. I can't find any documentation detailing this.
I am currently using search based on the LuceneSearch module (http://trac.sitecore.net/LuceneSearch), but altered to bind to a ListView for easy pagination.
At the moment I am probably just going to build a standalone function/class to update an XML file or something unless someone is able to point me in the correct direction...?
I would frankly use OMS for that; this is what it is designed to do. There is no need for a separate database: just register the search events with OMS via the API. There is an out-of-the-box Search report. It may require some tweaking, but this seems to be the most out-of-the-box solution.
Take a look here for more details.
I don't know of any standard functionality in Sitecore that would help you achieve this, so you will probably have to approach this from ground up - unless someone else in here is able to point to a package deal somewhere :-)
Solving this, really breaks down into two tasks
1) Collecting search term information. Whenever a user enters a search term in the search box that I assume you have, normalise it and store it in a SQL table (essentially a [term] [count] type table). Update the counter for terms you already store (a rough sketch follows below, after step 2).
By normalising, I mean lowercasing it and so on; possibly breaking each search term (word) down and storing the words one by one, if that is what your solution calls for (probably not the route I would go).
2) Retrieving information from the table in real time, based on what the user is typing in the search box. I'm assuming you want some sort of "Amazon-like" autocompletion, also found on almost all major search engines nowadays. I normally implement these in a web service that then gets called by Ajax, jQuery or whatever rich client implementation you prefer.
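A minimal sketch of the step 1 logic, shown in Java with JDBC just to illustrate; the SearchTerms/Term/Hits names are made up, and the MERGE statement assumes a SQL Server style database:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class SearchTermLogger {
        // Lowercase and trim the raw term, then upsert it into a [term][count] table.
        public static void logSearch(Connection conn, String rawTerm) throws Exception {
            String term = rawTerm == null ? "" : rawTerm.trim().toLowerCase();
            if (term.isEmpty()) {
                return;
            }
            String sql =
                "MERGE INTO SearchTerms AS t "
                + "USING (SELECT ? AS Term) AS s ON t.Term = s.Term "
                + "WHEN MATCHED THEN UPDATE SET t.Hits = t.Hits + 1 "
                + "WHEN NOT MATCHED THEN INSERT (Term, Hits) VALUES (s.Term, 1);";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, term);
                ps.executeUpdate();
            }
        }
    }

The "popular searches" list is then just a matter of selecting the top rows ordered by the count column in descending order.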
As for updating an XML file, I think locking issues and performance would kill that solution; though it could perhaps be made to work on a very small scale.
Sorry that I can't be more specific in my response, but your question is very open-ended.
Very interesting question. One thing you could do is have another database to store these search queries. An insert into this DB would not be very difficult and would get around the issue of locking on an XML file. Maybe insert the search query into a DB table; then, to get the top results, just pull the top x rows ordered by the count field. As Mark Cassidy said before, maybe normalize the data before inserting it.
You could isolate this work on your search layout (or sublayout) so it runs on a specific part of the site, not on every page.
Sitecore has an out-of-the-box "site search" report in the Executive Insight Dashboard; this will give you an indication of which search terms are driving the most visits and, of course, engagement value.
You just need to configure it by registering a page event on the search page and passing the query, otherwise Sitecore wouldn't know which form field constitutes a search. See this post, which explains it in more detail. For more information you can download the analytics configuration reference document from SDN: http://sdn.sitecore.net/upload/sitecore6/65/engagement_analytics_configuration_reference_sc65-usletter.pdf
And don't forget that, for performance, Sitecore caches the reports at various levels, so during development it may be handy to know how to force a cache update. I talk about this in the following blog post:
http://andytsitecore.blogspot.co.uk/2013/10/sitecore-dms-and-analytics.html

Case studies or examples of high-throughput services with highly dynamic data

I'm looking for some architecture ideas on a problem at work that I may have to solve.
The problem:
1) Our enterprise LDAP has become a "contact master" filled with years of stale data and unused, unmaintained attributes.
2) Management has decided that LDAP will no longer serve as the company phone book; it is for authorization purposes only.
3) The company has contact-type data about people in hundreds of different sources. We need to scrub all the junk out of LDAP and give the other applications a central repo in which to store all this data about a person.
The ideal goal:
1) Have a single source to store all the various attributes about a person.
2) The company probably has info on 500k people (read: 500k rows).
3) I estimate there could be 500 to 1000 optional attributes on these people (read: 500+ columns).
4) Data would primarily be set/get via XML over JMS (this infrastructure is already in place).
5) Individual groups within the company could "own" columns. Only they would be allowed to write to their columns, and they would be responsible for keeping the data clean.
6) A single-record lookup should return in sub-second time.
7) The system should support 1 million requests per hour at peak.
8) The primary goal is to serve real-time data to the enterprise; reporting is a secondary goal.
9) We are a Java, Oracle, Teradata shop; your typical big IT shop.
My thoughts:
1) Originally I thought LDAP might work, but it doesn't scale when new columns are added.
2) My next thought was some kind of NoSQL solution, but from what I have read, I don't think I can get the performance I need, and it's still relatively new. I'm not sure I can get my manager to sign off on something like that for such a critical project.
3) I think there will be a metadata component to the solution that tracks who owns the columns, what each column represents, and the original source system.
Thanks for reading, and thanks in advance for any thoughts.
SQL
With Teradata-grade tools an SQL-based solution may be feasible. I came across an article on database design a while ago that discussed "anchor modeling".
Basically, the idea is to create a single, dumb, synthetic primary-key table, while all real or meta data lives in other tables (subsets) and is attached by way of a foreign key + join.
I see the benefit of this design as twofold. First, you can more easily compartmentalize data storage, whether for organizational or performance reasons. Second, you only create additional rows for records that have data in any given subset, so you use less space and indexing and searching are faster.
Subsets might be based on maintainer or some other criteria. XML set/get would be per subset/record (rather than per global record). All subsets for a given record can be composited and cached. Additional subsets can be created for metadata, search indexes, etc., and these can be queried independently.
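To make that layout concrete, here is a minimal sketch of the anchor-plus-subset idea as DDL executed over JDBC; the table and column names (person_anchor, person_hr_contact, and so on) are invented purely for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AnchorModelSketch {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//host:1521/svc", "user", "pass");
                 Statement st = conn.createStatement()) {
                // The "dumb" anchor: nothing but a synthetic key per person.
                st.execute("CREATE TABLE person_anchor (person_id NUMBER PRIMARY KEY)");
                // One subset table per owning group; rows exist only for people
                // that actually have data in that subset.
                st.execute("CREATE TABLE person_hr_contact ("
                        + " person_id NUMBER REFERENCES person_anchor(person_id),"
                        + " desk_phone VARCHAR2(32),"
                        + " office_location VARCHAR2(128))");
                st.execute("CREATE TABLE person_payroll ("
                        + " person_id NUMBER REFERENCES person_anchor(person_id),"
                        + " cost_center VARCHAR2(16))");
                // A full person record is the anchor joined to whichever subsets exist for it.
            }
        }
    }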
NoSQL
NoSQL seems similar to LDAP (in theory, at least), but the benefits of a good NoSQL tool would include greater abstraction of metadata, versioning, and organization. In fact, from what I've read it seems that NoSQL datastores are designed to address some of the issues you've raised with respect to scaling and loosely structured data. There's a good question on SO regarding datastores.
Production NoSQL
Off-hand, there are a handful of large companies using NoSQL in massively scaled environments, such as Google with Bigtable. It seems like the perfect tool for:
6) A single-record lookup should return in sub-second time.
7) The system should support 1 million requests per hour at peak.
Bigtable is only available (to my knowledge) through AppEngine. Other, similar technologies are listed here.
Other Thoughts
The bigger picture view looks more or less the same regardless of the technology you decide to use. E.g. compartmentalize storage, composite views, cache views, stick metadata somewhere so you can find things.
The performance characteristics you're targeting are going to require some kind of caching and/or optimization based on real-world usage patterns. Regardless of the solution you choose, you probably can't resolve that in the design phase.
A couple thoughts:
1) our enterprise LDAP has become a "contact master" filled with years of stale data and unused and unmaintained attributes.
This isn't really a technological problem. You will have this problem with a new system as well, LDAP or not.
"LDAP ... doesn't scale"
There are lots of huge LDAP systems out there. LDAP is surely a dark art, but I'd be willing to bet that it scales better than any SQL equivalent in this situation. Not to mention that LDAP is a standard for this kind of info, and as such it is accessible from zillions of different kinds of systems.
Maybe what you're looking for is a new LDAP system that's easier to manage / has better admin tools?
You may want to look into Len Silverston's Party Model. Here's a link to his book: http://www.amazon.com/Data-Model-Resource-Book-Vol/dp/0471380237.
I have no experience building something on that scale, though I think that thinking of it as 500k rows x 500 - 1000 columns sounds a bit ridiculous.