How to access Facebook Insights data for consecutive months? - facebook-graph-api

I'm new to Facebook app development, and I hope I can get an answer here.
Is it possible to retrieve Facebook Insights data for consecutive months?
I tried since=2010-01-01 to end_time=2010-01-31 with period=month, but I got:
The specified date range cannot exceed 3024000 seconds!!
How can I retrieve ranges like 2010-02-01 to 2010-02-28 and 2010-03-01 to 2010-03-31?
I have tried lots of examples but couldn't succeed. How can I solve this problem?

What has worked for me is very similar to what you are doing, the difference being that I use UNIX timestamps for SINCE and UNTIL.
Example:
https://graph.facebook.com/212686148747689/insights/
page_impressions_by_city_unique/week/?
access_token=QWERTYUI&since=1315699200&until=1320796800
(That's all supposed to be on one line, but it's easier to read it this way, at least for me.)
With this approach, you want to be careful and make sure that the difference between SINCE and UNTIL is not bigger than 90 days. Otherwise, you'll get an error, like so:
(#604) The specified date range cannot exceed 7776000 seconds
Finally, if you don't have a way of generating the UNIX timestamp automatically, go to a web site like:
http://www.epochconverter.com/
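If you'd rather script it, here is a minimal Python sketch (my own addition, not part of the original approach) that generates the month-boundary timestamps and builds one URL per month. The page ID, metric, and token are the placeholder values from the example above:

```python
from datetime import datetime, timezone

def month_starts(year, month, count):
    """UNIX timestamps (UTC) for the first day of count+1 consecutive months."""
    stamps = []
    for _ in range(count + 1):
        stamps.append(int(datetime(year, month, 1, tzinfo=timezone.utc).timestamp()))
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return stamps

# Page ID, metric, and token copied from the example URL above.
BASE = ("https://graph.facebook.com/212686148747689/insights/"
        "page_impressions_by_city_unique/week/")
stamps = month_starts(2010, 1, 3)               # Jan, Feb, Mar 2010
for since, until in zip(stamps, stamps[1:]):    # one request per month
    print(f"{BASE}?access_token=QWERTYUI&since={since}&until={until}")
```

Each range covers a single month, so it stays comfortably under the 90-day ceiling.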
If anyone else has some better insights, please share. I hope this helps.


iCloud GameKit 40 requests/second limit questions

I'll be straightforward:
What counts as a "request"? In some posts I've read that a request is a "fetch", but others say that one save/update operation might result in several "requests".
What does it mean when the system "throttles" your requests? I've heard that if you reach the limit of 40 requests/second, the system "throttles" your requests. What exactly does this mean, and which criteria does it use? I'm guessing that if you hit a peak of, let's say, 80 requests/second for whatever reason but afterwards go back to your average of 20/second, the system won't penalize you? If so, which criteria does it use?
If for any reason you need more requests per second, do you simply have to pay the penalty, or is there a way around it?
Thanks a lot in advance.
Okay, I think I found the crux of the matter.
The scarce documentation says you have 40 requests/second per database. So if you are using a private database for each user, that means each user has 40 requests/second.
This changes if you are using the public database, which everyone has access to, of course.
Someone please correct me if I'm wrong.
Thanks
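For what it's worth, "throttling" is easier to picture with a concrete example. Apple doesn't document its mechanism, so the following token-bucket limiter is only a generic client-side illustration of the idea (brief peaks are absorbed, sustained overload is slowed down), not Apple's implementation:

```python
import time

class RateLimiter:
    """Generic token-bucket limiter: allow `rate` requests per second,
    absorbing short bursts up to `burst` tokens. Only an illustration
    of throttling in general, not how iCloud actually behaves."""

    def __init__(self, rate=40.0, burst=40.0):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill tokens for the time that has passed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Over the limit: the caller is delayed ("throttled") until a
            # token is available, instead of being rejected outright.
            time.sleep((1 - self.tokens) / self.rate)
            self.last = time.monotonic()
            self.tokens = 1
        self.tokens -= 1

limiter = RateLimiter(rate=40)
for i in range(100):
    limiter.acquire()   # brief peaks are absorbed; sustained load is slowed
    # send_request(i)   # hypothetical request function
```

Under a scheme like this, a short spike above 40/second just gets spread out over a little more time; only sustained overload produces noticeable delays.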

How to link multiple ports from an Expression to multiple groups of a Union

I've added an image to explain myself better.
I have 300-something ports in an Expression transformation. I have created the equivalent number of groups in a Union. I want each port of this Expression to go to a port/field of the Union, in a one-to-one relationship. It seems PowerCenter is not able to do this with autolink, or at least I'm unable to find the proper way to do it. How could I work around this issue? I've been told that in a few days it will likely be more than 700 ports, and the amount of time it takes to do by hand is quite insane. Thanks in advance.
I'm surprised it validates... a Union is for homogeneous sources, but you seem to be trying to pivot your data (in which case I'd suggest using another transformation, i.e. a Normalizer, and Informatica will start behaving as expected).
Possible solution: make a bunch of connections, save, and export the file as XML; go to the lines where the connections are defined, and replace that section with as many rows as you need.
What I did specifically was to take the original rows, change the names as appropriate with the help of Notepad++ and Excel, and then go back to the original file and replace all of it. Check everything three times, and import the file back into PowerCenter.
I say possible solution because it's messy and dirty, but even though it may lead to mistakes, I feel the number of them is vastly smaller, and you have versioning on your side, so just save before exporting. If someone with more experience could share their thoughts on this, it would be a great opportunity to learn; I'm just leaving this here in case the question goes unanswered.
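To take some of the manual work out of the Notepad++/Excel step, a short script can generate the repeated connector rows instead. Below is a sketch; the CONNECTOR element and its attributes follow the usual layout of a PowerCenter mapping export, but the instance and port names are hypothetical, so compare the output against one hand-made connection in your own exported XML before trusting it:

```python
# Sketch: emit one <CONNECTOR> line per port, mirroring a hand-made sample.
# Attribute names follow the usual PowerCenter mapping export; verify them
# against a connection you created by hand in your own XML file.
EXPR, UNION = "EXP_MyExpression", "UN_MyUnion"   # hypothetical instance names

def connector(port):
    return (f'<CONNECTOR FROMFIELD="{port}" TOFIELD="{port}" '
            f'FROMINSTANCE="{EXPR}" FROMINSTANCETYPE="Expression" '
            f'TOINSTANCE="{UNION}" TOINSTANCETYPE="Union Transformation"/>')

ports = [f"PORT_{i}" for i in range(1, 301)]     # your real port names here
print("\n".join(connector(p) for p in ports))
# Paste the output over the connector block in the exported XML, then re-import.
```

Going from 300 to 700 ports then only means changing the port list, not redoing the edit by hand.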

Cycling through URLs to download csv files

I have a list of URLs which access a webservice that downloads .csv files. I want to be able to cycle through them using a date field which appears in a specific format in the URL itself, thereby downloading the data day by day. The access seems fairly slow, as even a manual URL entry can take a couple of minutes to complete, and I suspect the issue is at the webservice's end.
The URL is in the format:
http://web.service.com/ws/XYZ/data/?key=mysecretkeyf&field1=X&start=YYYY-MM-DD 00:00&end=YYYY-MM-DD 00:00&field=Y&format=csv
So the way I envision it (and I am keen to take advice) is to use variables for the start year, month, and day fields, cycling on to the next URL as the previous .csv is downloaded, with the code ending when the current date is reached.
Any ideas most welcome.
The coding is really straightforward, which makes me wonder whether you are looking to code this yourself or asking if there is a service out there that would help do this. If you are coding, I'd choose a language that works well for you. @Vivek mentioned Python, which is what I would choose as well.
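If you do go the Python route, here's a minimal sketch of the loop using only the standard library. The key and field values are placeholders copied from the URL format above, so adjust them to your service:

```python
import urllib.parse
import urllib.request
from datetime import date, timedelta

BASE = "http://web.service.com/ws/XYZ/data/"
day = date(2012, 1, 1)                     # pick your real start date
while day < date.today():                  # stop when the current date is reached
    nxt = day + timedelta(days=1)
    params = {
        "key": "mysecretkey",              # placeholder, as in the URL above
        "field1": "X",
        "start": f"{day:%Y-%m-%d} 00:00",
        "end": f"{nxt:%Y-%m-%d} 00:00",
        "field": "Y",
        "format": "csv",
    }
    url = BASE + "?" + urllib.parse.urlencode(params)
    # The service is slow, so a generous timeout avoids hanging forever.
    with urllib.request.urlopen(url, timeout=300) as resp:
        with open(f"data_{day:%Y%m%d}.csv", "wb") as out:
            out.write(resp.read())
    day = nxt
```

Since each download is slow and independent, this sequential loop is simple and resumable: restart it from the last date that succeeded if it dies part way through.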
If you do not want to go the coding route, you could check out DownThemAll. I have used this utility for batch downloading where you have to increment numbers in parts of the URL. Check it out; it may be a good non-programming solution: DownThemAll

Moving and renaming a huge number of text files based on content and size

*Update July 4*
I ended up doing the following (a rough sketch of the approach follows below):
Sort on date.
Check if the last sentence is the same.
If yes: if the current file is bigger, it becomes the new message to keep; if smaller, remove it. If no more files with the same last sentence can be found, choose this one and move it to another folder.
If no: move on. Loop again until all files with a certain date have been checked.
Thanks all for the help!!
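For anyone landing here later, here's a rough Python sketch of that approach. The folder names are hypothetical, and it keys each conversation on the date prefix plus the last line of the file, per the structure described in the question below:

```python
import os
import shutil

SRC, DEST = "mail", "mail_deduped"          # hypothetical folder names
os.makedirs(DEST, exist_ok=True)

def last_line(path):
    """The final non-empty line; files ending in the same line belong
    to the same conversation, given how the replies are quoted."""
    with open(path, errors="replace") as f:
        lines = [l.strip() for l in f if l.strip()]
    return lines[-1] if lines else ""

best = {}                                   # (date_prefix, last_line) -> path
for name in sorted(os.listdir(SRC)):        # names start with YYYYMMDD
    path = os.path.join(SRC, name)
    key = (name[:8], last_line(path))
    # Keep only the largest file per conversation: it contains the others.
    if key not in best or os.path.getsize(path) > os.path.getsize(best[key]):
        best[key] = path

for path in best.values():
    shutil.copy(path, DEST)                 # copy rather than move, to be safe
```

Copying the keepers (instead of deleting the rest) means a bad run costs nothing; delete the originals only after spot-checking the output folder.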
I'm busy with a big project where I have a huge number of emails that I have to filter, imported from gmail through thunderbird. There is a big problem though.
Because gmail uses conversations but Thunderbird doesn't format them as such, what I have is a text file for each email that also contains the complete previous conversation, and so a whole new text file for each reply. To clarify, an example of a conversation:
Me:Hi, how are you?
You, replying: Good!
Me: Great!
In gmail this looks exactly as above, but for me these are now 3 files:
file 1:
Me, sent at 11:41:
Hi, how are you?
file 2:
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
file 3:
Me, sent at 11:43:
Great!
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
As you can understand, this is no problem with 3 files: I just throw away files 1 and 2 and only use file 3. That's precisely what I want to do. But considering there are around 30k files in total, I would very much like to automate that.
Unfortunately it is not possible to do this completely by file name, though it can be done partially. The files are named after their date, for instance 20110102 for Jan 2, 2011. However, as there are multiple email conversations on a given day, I would lose a lot if I just sorted by date and only kept the largest file.
I hope the problem is clear and you can help me with this.
I work on Mac OS X 10.7. I've tried using AppleScript, but either my script is not good or AppleScript can't handle the number of files.
Maybe you have a recommendation for software or a script of some kind? I'm open to anything and not unfamiliar with programming.
Thanks in advance!
As your task is basically just text processing, any language you're familiar with, including AppleScript, PHP, bash, or C, should be able to do the job. I think perhaps @inTide's breaking the problem down into discrete steps is what you need to do, building one portion at a time in the language of your choice.
Pick a language that you're familiar with, start writing the code for the first step, and make sure it's working as you expect; then expand, adding a small bit of new functionality at each point and making sure that functionality works before moving on. Without an example of the code you've written or a better description of how AppleScript is failing for you, additional advice is difficult.

Is there a way to build an easy related posts app in django

It has been my nightmare for the last 4 weeks:
I can't come up with a solution for a "related posts" app in Django/Python that takes the user's input and comes up with a related post that closely matches the original input. I've tried using LIKE statements, but it seems they are not sensitive enough, and I also need typos to be taken into consideration.
Is there a library that could save me from all my pain and suffering?
Well, I suppose there are a few different ways to normalize the user input to produce desirable results (although I'm not sure to what extent libraries exist for them). One of the easiest ways to get related posts would be to compare the tags present on the posts (granted your posts have tags). If you wanted to go another route, I would take the following steps: remove stop words from the subject, run some kind of stemmer on the remainder, and finally treat the remaining words as "tags" to compare with other posts. For the sake of efficiency, it would probably be a good idea to run these steps as a batch process on all of your current posts and store the resulting "tags". As for typos, I'm sure a multitude of spelling-corrector libraries exist (I found this one after a few seconds with Google).
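To make those steps concrete, here's a rough sketch using NLTK for the stop-word and stemming passes (one option among many; the function and data layout are hypothetical), with a simple Jaccard overlap standing in for the comparison:

```python
# Rough sketch of the steps above. Requires: pip install nltk, plus
# nltk.download('stopwords') once. All names here are illustrative.
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOP = set(stopwords.words("english"))
stem = PorterStemmer().stem

def to_tags(text):
    """Stop-word removal + stemming, producing a set of 'tags'."""
    return {stem(w) for w in text.lower().split() if w.isalpha() and w not in STOP}

def similarity(a, b):
    """Jaccard overlap between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def related_posts(query, posts, limit=5):
    # `posts` is an iterable of (title, precomputed_tags) pairs; compute
    # and store the tags per post in a batch job, as suggested above.
    q = to_tags(query)
    return sorted(posts, key=lambda p: similarity(q, p[1]), reverse=True)[:limit]
```

Stemming catches inflections ("posting" vs "posts") but not arbitrary typos; for those, run the user's words through a spelling corrector before `to_tags`.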