I am currently trying to import my app's Swagger definition so I can create an API Gateway instance.
Unfortunately, I'm getting the errors below, even though the Swagger tooling considers the definition perfectly valid.
Your API was not imported due to errors in the Swagger file.
Unable to create model for 200 response to method 'GET /api/v1/courses': Validation Result: warnings : [], errors : [Invalid content type specified: */*]
Unsupported model type 'MapProperty' in 200 response to method 'GET /api/v1/courses/all'. Ignoring.
Here is my swagger definition:
{
"swagger": "2.0",
"info": {
"description": "Api Documentation",
"version": "1.0",
"title": "Api Documentation",
"termsOfService": "urn:tos",
"contact": {},
"license": {
"name": "Apache 2.0",
"url": "http://www.apache.org/licenses/LICENSE-2.0"
}
},
"host": "********.appspot.com",
"basePath": "/",
"tags": [{
"name": "course-controller",
"description": "Course Controller"
}],
"paths": {
"/api/v1/courses": {
"get": {
"tags": ["course-controller"],
"summary": "getCourses",
"operationId": "getCoursesUsingGET",
"produces": ["*/*"],
"parameters": [{
"name": "code",
"in": "query",
"description": "code",
"required": false,
"type": "string"
}],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/Course"
}
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
},
"/api/v1/courses/all": {
"get": {
"tags": ["course-controller"],
"summary": "getAllCourses",
"operationId": "getAllCoursesUsingGET",
"produces": ["*/*"],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object",
"additionalProperties": {
"type": "object"
}
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
}
},
"definitions": {
"Course": {
"type": "object",
"properties": {
"code": {
"type": "string"
},
"credits": {
"type": "integer",
"format": "int32"
},
"id": {
"type": "integer",
"format": "int32"
},
"lastUpdated": {
"type": "string"
},
"name": {
"type": "string"
},
"prerequisites": {
"type": "string"
},
"restrictions": {
"type": "string"
},
"seats": {
"$ref": "#/definitions/Seats"
},
"waitlist": {
"$ref": "#/definitions/Seats"
}
},
"title": "Course"
},
"Seats": {
"type": "object",
"properties": {
"actual": {
"type": "integer",
"format": "int32"
},
"capacity": {
"type": "integer",
"format": "int32"
},
"remaining": {
"type": "integer",
"format": "int32"
}
},
"title": "Seats"
}
}
}
Is there any reason you can see why this Swagger definition breaks in API Gateway?
AWS API Gateway has some limitations in its Swagger/OpenAPI support. For example, it does not support additionalProperties in models; that keyword is used in the 200 response schema of the /api/v1/courses/all endpoint in your API, which is what the 'MapProperty' warning refers to.
You can click the "Import and ignore warnings" button to skip them and proceed with the import.
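If you would rather import cleanly, a small patch may be enough, assuming the API only needs to serve application/json: the 'Invalid content type specified: */*' error points at the wildcard "produces": ["*/*"] entries, and the 'MapProperty' warning at the additionalProperties map. Here is a sketch of the /api/v1/courses/all path with a concrete content type and a plain object schema in place of the map (this loosens the response model, so swap in a named definition if you need a stricter one):
"/api/v1/courses/all": {
"get": {
"tags": ["course-controller"],
"summary": "getAllCourses",
"operationId": "getAllCoursesUsingGET",
"produces": ["application/json"],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object"
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
}
The same "produces": ["application/json"] change would apply to the GET /api/v1/courses operation named in the first error.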
I'm working on an OpenAPI 3 schema.
I would like to reference a data model from components.schemas inside the response content and have some required nested properties inside that data model. However, the required validation does not seem to be applied. I'm testing this in Postman with a mock server.
Here is my schema:
{
"openapi": "3.0.0",
"info": {
"version": "1.0.0",
"title": "Usage stats API"
},
"servers": [
{
"url": "http://some-middleware-endpoint.com"
}
],
"paths": {
"/publishers/{publisherId}/files/{fileId}": {
"get": {
"summary": "Get single file for publisher",
"parameters": [
{
"name": "publisherId",
"in": "path",
"description": "ID of the publisher",
"required": true,
"schema": {
"type": "integer",
"format": "int64"
}
},
{
"name": "fileId",
"in": "path",
"description": "ID of the file",
"required": true,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
"responses": {
"200": {
"description": "File for publisher",
"headers": {
"Content-Type": {
"description": "application/json"
}
},
"content": {
"application/json": {
"schema": {
"type": "object",
"required": [
"meta"
],
"properties": {
"meta": {
"type": "object",
"required": ["page"],
"properties": {
"$ref": "#/components/schemas/Pagination"
}
}
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"Pagination": {
"properties": {
"page": {
"required": ["current-page", "per-page", "from", "to", "total", "last-page"],
"type": "object",
"properties": {
"current-page": {
"type": "integer"
},
"per-page": {
"type": "integer"
},
"from": {
"type": "integer"
},
"to": {
"type": "integer"
},
"total": {
"type": "integer"
},
"last-page": {
"type": "integer"
}
}
}
}
}
}
}
}
This response passes validation:
{
"meta": {
"page": {}
}
}
None of the attributes I've marked as required ("required": ["current-page", "per-page", "from", "to", "total", "last-page"]) are present, yet the response still passes.
Basically, I would like page and all its nested properties to be required.
I guess I'm doing something wrong in defining the properties. Any help is appreciated!
Oh well, it turns out my issue was that I needed to pull the $ref up one level.
The following seems to work inside responses.content:
"meta": {
"type": "object",
"required": [
"page"
],
"$ref": "#/components/schemas/Pagination"
}
instead of
"meta": {
"type": "object",
"required": ["page"],
"properties": {
"$ref": "#/components/schemas/Pagination"
}
}
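As a side note, OpenAPI 3.0 validators often ignore keywords placed next to a $ref, so stricter tooling may drop the type and required entries above. If the mock ever stops enforcing the rule, composing the reference with allOf is a common alternative; a sketch reusing the same Pagination component:
"meta": {
"allOf": [
{ "$ref": "#/components/schemas/Pagination" },
{ "type": "object", "required": ["page"] }
]
}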
I used to copy data from one DynamoDB table to another using a pipeline.json. It works when the source table has provisioned capacity, and it doesn't matter whether the destination is provisioned or on-demand. However, I want both of my tables set to on-demand capacity, and when I use the same template it doesn't work. Is there any way to do that, or is it still under development?
Here is my original functioning script:
{
"objects": [
{
"startAt": "FIRST_ACTIVATION_DATE_TIME",
"name": "DailySchedule",
"id": "DailySchedule",
"period": "1 day",
"type": "Schedule",
"occurrences": "1"
},
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"schedule": {
"ref": "DailySchedule"
},
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"dependsOn": {
"ref": "TableBackupActivity"
},
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-emr-#{myDDBDestinationRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-emr-#{myDDBSourceRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And here is the stack trace from the Data Pipeline run (the DynamoDbExport step) when the source DynamoDB table is set to on-demand capacity:
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.run(DynamoDbExport.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.main(DynamoDbExport.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The following JSON file worked for the export (DynamoDB to S3). The key change from the original template is the EMR cluster: it uses releaseLabel emr-5.23.0 with the 4.11.0 emr-dynamodb-tools jar instead of amiVersion 3.8.0 with the old emr-ddb-2.1.0 jar:
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"releaseLabel": "emr-5.23.0",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-dpl-#{myDDBSourceRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And the following worked for the import (S3 to DynamoDB):
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"releaseLabel": "emr-5.23.0",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-dpl-#{myDDBDestinationRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForLoad"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
Also, the subnetId fields in both pipeline definitions are optional, but it is always a good idea to set them.
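For completeness, here is roughly how one of these definitions can be registered and run from the AWS CLI; the pipeline name, file name, pipeline ID, and parameter values below are placeholders, so substitute your own:
aws datapipeline create-pipeline --name ddb-copy --unique-id ddb-copy-1
aws datapipeline put-pipeline-definition --pipeline-id df-XXXXXXXXXXXX --pipeline-definition file://pipeline.json --parameter-values myS3LogsPath=s3://my-bucket/logs myTempS3Folder=s3://my-bucket/tmp myDDBSourceTableName=SourceTable myDDBSourceRegion=us-west-2 myDDBReadThroughputRatio=1
aws datapipeline activate-pipeline --pipeline-id df-XXXXXXXXXXXX
put-pipeline-definition reports validation errors before anything runs, which makes it a quick way to sanity-check a template like the ones above.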
I suspect I'm making a newbie mistake.
I have an Elasticsearch index (lswl) that is accepting data from Logstash and Winlogbeat, with fields indexed as not_analyzed, but I can't seem to retrieve the data.
When I run the following query
POST /lswl-2016.08.15/_search?pretty
{
"query": {
"match_all": {}
}
}
I get the following results:
"hits": {
"total": 9,
"max_score": 1,
"hits": [
{
"_index": "lswl-2016.08.15",
"_type": "wineventlog",
"_id": "AVaLgghl49PiM_pqlihQ",
"_score": 1
}
I know there's data in there because queries like this return a subset of values.
POST /lswl-*/_search?pretty
{
"query": {
"term": { "host": "BTRDAPTST02"}
}
}
I suspect that the problem is in the template I created for the lswl index, but for the life of me I can't figure out what I did incorrectly. The template is below for reference.
"template": "lswl*",
"settings":{
"number_of_shards": 1
},
"mappings": {
"wineventlog":{
"_source": {
"enabled": false
},
"properties": {
"#timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"#version": {
"type": "string",
"index": "not_analyzed"
},
"category": {
"type": "string"
},
"computer_name": {
"type": "string",
"index": "not_analyzed"
},
"count": {
"type": "long"
},
"event_id": {
"type": "long"
},
"host": {
"type": "string",
"index": "not_analyzed"
},
"level": {
"type": "string",
"index": "not_analyzed"
},
"log_name": {
"type": "string",
"index": "not_analyzed"
},
"message": {
"type": "string",
"fields": {
"original": {
"type": "string",
"index": "not_analyzed"
}
}
},
"record_number": {
"type": "string",
"index": "not_analyzed"
},
"source_name": {
"type": "string",
"index": "not_analyzed"
},
"tags": {
"type": "string",
"index": "not_analyzed"
},
"type": {
"type": "string",
"index": "not_analyzed"
},
"user": {
"properties": {
"domain": {
"type": "string",
"index": "not_analyzed"
},
"identifier": {
"type": "string",
"index": "not_analyzed"
},
"name": {
"type": "string",
"index": "not_analyzed"
},
"type": {
"type": "string",
"index": "not_analyzed"
}
Just remove the following part from your template, or set enabled to true:
"_source": {
"enabled": false
},
And then delete the index and re-index your data. You'll be able to see the data afterwards.
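A minimal sketch of the cleanup, assuming the template was saved under the name lswl and the events can be re-shipped from Winlogbeat/Logstash (adjust the template and index names to your setup):
DELETE /_template/lswl
DELETE /lswl-2016.08.15
Then re-create the template with "_source": { "enabled": true }, or simply leave the _source block out, since storing _source is the default. Once new events come in, the search hits will include their fields again.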
Can anyone help me? When I make a request using this link:
https://graph.facebook.com/search?&limit=2&until=now&q=hi&type=post&access_token=AAAAAAITEghMBAIScEzst3h5VpV8pKZA7di2ZC3czxr5
Note: I am not including a valid token here, for security reasons.
I always get results from 27 Nov 2012, even though I'm trying to search for the latest posts. Am I missing something? Please help.
Thanks in advance.
The result I get:
{
"data": [
{
"id": "730221920_10151138508071921",
"from": {
"name": "Pinky Saowichit",
"id": "730221920"
},
"message": "\u0e2d\u0e32\u0e17\u0e34\u0e15\u0e22\u0e4c\u0e19\u0e35\u0e49\u0e2d\u0e32\u0e08\u0e44\u0e21\u0e48\u0e44\u0e14\u0e49\u0e2d\u0e2d\u0e31\u0e19\u0e1a\u0e48\u0e2d\u0e22\u0e46 \u0e19\u0e30\u0e04\u0e30 \u0e41\u0e1f\u0e19\u0e04\u0e25\u0e31\u0e1a \u0e43\u0e2b\u0e49\u0e23\u0e39\u0e49\u0e27\u0e48\u0e48\u0e32\u0e04\u0e34\u0e14\u0e16\u0e36\u0e07\u0e19\u0e30\u0e04\u0e30 \u0e41\u0e15\u0e48\u0e15\u0e2d\u0e19\u0e19\u0e35\u0e49\u0e40\u0e23\u0e32\u0e01\u0e33\u0e25\u0e31\u0e07\u0e04\u0e34\u0e14\u0e16\u0e36\u0e07\u0e04\u0e38\u0e13 \u0e04\u0e32\u0e19\u0e18\u0e35\u0e41\u0e25\u0e30\u0e40\u0e17\u0e40\u0e23\u0e0b\u0e48\u0e32\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e21\u0e32\u0e01\u0e46 \u0e2b\u0e38 \u0e2b\u0e38",
"privacy": {
"value": ""
},
"type": "status",
"status_type": "mobile_status_update",
"created_time": "2012-11-27T03:12:32+0000",
"updated_time": "2012-11-27T07:44:54+0000",
"likes": {
"data": [
{
"name": "Weniefredo Laud",
"id": "100004751890183"
},
{
"name": "\u0e1e\u0e23\u0e15 \u0e1c\u0e25\u0e14\u0e35",
"id": "100001056963372"
},
{
"name": "\u0e08\u0e23\u0e34\u0e0d\u0e32 \u0e25\u0e2d\u0e27",
"id": "100000046347271"
},
{
"name": "Noolek Tikamborn",
"id": "100003048464258"
}
],
"count": 6
}
},
{
"id": "100003971930240_283727615063091",
"from": {
"name": "Mariana Salcedo",
"id": "100003971930240"
},
"story": "Mariana Salcedo shared a link.",
"picture": "http://external.ak.fbcdn.net/safe_image.php?d=AQASxQms6c0xFkdO&w=90&h=90&url=http\u00253A\u00252F\u00252Fask.fm\u00252Fimages\u00252F50x50.gif",
"link": "http://ask.fm/marianasalcedo/answer/15735435637",
"name": "CRUSH hi",
"caption": "jajajajaja que enfadosa erees!",
"icon": "http://static.ak.fbcdn.net/rsrc.php/v2/yN/x/aS8ecmYRys0.gif",
"privacy": {
"value": ""
},
"type": "link",
"status_type": "shared_story",
"application": {
"name": "Ask.fm",
"id": "129215213762342"
},
"created_time": "2012-11-27T03:11:38+0000",
"updated_time": "2012-11-27T03:11:38+0000"
}
],
"paging": {
"previous": "https://graph.facebook.com/search?q=hi&limit=2&type=post&access_token=AAAAAAITEghMBAIScEzst3h5VpV8pKZA7di2ZC3czxr5ANnvKPPc1wqYJJZAANd9PI0NKLX5Xk5ReRX3T01iim8BoJp6r16itU8vPxRP&since=1353985952&__previous=1",
"next": "https://graph.facebook.com/search?q=hi&limit=2&type=post&access_token=AAAAAAITEghMBAIScEzst3h5VpV8pKZA7di2ZC3czxr5ANnvKPPc1wqYJJZAANd9PI0NKLX5Xk5ReRX3T01iim8Bo&until=1353985897"
}
}