batch update link_urls in the creative - facebook-graph-api

We are trying to attach some parameters to existing link_urls for creatives.
For example:
http://www.adchemy.com -> http://www.adchemy.com?param_ad_id=23423424242 for tracking
purposes, where the id is the adgroup id for that creative.
We'll maintain a one-to-one relation between creative and adgroup.
To update an existing creative we have to create a new one and associate it with the ad.
We need to update around 5000 creatives.
Is there a way to update them in a batch rather than making 10000 requests
(5000 to create the new creatives and 5000 to update the corresponding ads)?

The Facebook API does provide batch requests: https://developers.facebook.com/docs/reference/api/batch/ (I'm not sure about the exact limit, though the docs say 20 requests per batch). You can create adgroups and include the creative element as JSON or an object, posting the information to the adgroups URL documented in the Ads API:
https://developers.facebook.com/docs/reference/ads-api/adgroup/
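
For illustration, here is a minimal Python sketch of pushing the creative creation through the batch endpoint. The access token, account id, and creative fields are placeholders, and the chunk size of 20 follows the limit mentioned above:

import json
from urllib.parse import urlencode
import requests

ACCESS_TOKEN = "..."            # assumption: a valid ads access token
ACCOUNT_ID = "act_0000000000"   # placeholder ad account id

# Hypothetical (adgroup_id, link_url) pairs built from your own data;
# adgroup_id would be used in the second batch that updates each ad.
creatives = [
    ("23423424242", "http://www.adchemy.com?param_ad_id=23423424242"),
]

BATCH_SIZE = 20  # sub-requests per batch call, per the docs above

for i in range(0, len(creatives), BATCH_SIZE):
    batch = [
        {
            "method": "POST",
            "relative_url": ACCOUNT_ID + "/adcreatives",
            "body": urlencode({"link_url": link_url}),  # plus your other creative fields
        }
        for adgroup_id, link_url in creatives[i:i + BATCH_SIZE]
    ]
    resp = requests.post(
        "https://graph.facebook.com",
        data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
    )
    print(resp.json())  # one result per sub-request, in order

Each sub-request is answered independently, so 5000 creations plus 5000 ad updates become a few hundred batch calls instead of 10000 individual ones.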

Get all items in DynamoDB with API Gateway's Mapping Template

Is there a simple way to retrieve all items from a DynamoDB table using a mapping template in an API Gateway endpoint? I usually use a Lambda to process the data before returning it, but this is such a simple task that a Lambda seems like overkill.
I have a table that contains data with the following format:
roleAttributeName   roleHierarchyLevel   roleIsActive   roleName
"admin"             99                   true           "Admin"
"director"          90                   true           "Director"
"areaManager"       80                   false          "Area Manager"
I'm happy just getting the data; the representation doesn't matter, since I can transform it further down in my code.
I've been looking around, but all tutorials explain how to get specific bits of data through queries and params like roles/{roleAttributeName}; I just want to hit roles/ and get all items.
All you need to do is:
create a resource (without curly braces, since we don't need a particular item),
create a GET method,
use Scan instead of Query as the Action while configuring the integration request.
Now run a test; you should get the response.
To try it out in Postman, deploy the API first, then use the provided link followed by your resource name.
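
If it helps, the integration request mapping template for the Scan action can be as small as this (assuming your table is called roles; adjust the name to your own):

{
    "TableName": "roles"
}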
API Gateway allows you to proxy DynamoDB as a service. Here you have an interesting tutorial on how to do it (you can ignore the part related to the index to make it work).
To retrieve all the items from a table, you can use Scan as the action in API Gateway. Keep in mind that DynamoDB limits result sets to 1MB for both Scan and Query actions.
You can also cap the result set yourself, before the automatic 1MB cutoff applies, by using the Limit parameter.
AWS DynamoDB Scan Reference
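
To apply that Limit parameter, the mapping template just gains one field (page size is an assumption); paging past the cutoff then means feeding the LastEvaluatedKey from one response back as ExclusiveStartKey in the next request:

{
    "TableName": "roles",
    "Limit": 50
}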

Should I store failed login attempts in AWS Cognito or Dynamo DB?

I have a requirement to build a basic "3 failed login attempts and your account gets locked" functionality. The project uses AWS Cognito for authentication, and the Cognito PreAuth and PostAuth triggers, which run a Lambda function, look like they will help here.
So the basic flow is to increment a counter in the PreAuth Lambda, check it and block login there, or reset the counter in the PostAuth Lambda (so successful logins don't end up locking the user out). Essentially it boils down to:
PreAuth Lambda:
    if failed-login-count > LIMIT:
        block login
    else:
        increment failed-login-count

PostAuth Lambda:
    reset failed-login-count to zero
Now at the moment I am using a dedicated DynamoDB table to store the failed-login-count for a given user. This seems to work fine for now.
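
For reference, here is a rough sketch of that DynamoDB-backed counter as a pair of Cognito trigger handlers in Python (the table name failed-logins, its username key, and LIMIT are illustrative, not from any official example):

import boto3

LIMIT = 3
table = boto3.resource("dynamodb").Table("failed-logins")  # hypothetical table

def pre_auth_handler(event, context):
    username = event["userName"]
    item = table.get_item(Key={"username": username}).get("Item") or {}
    if item.get("count", 0) > LIMIT:
        # Raising makes Cognito refuse the sign-in attempt
        raise Exception("Account locked after too many failed attempts")
    # Count this attempt up front; PostAuth resets it if the login succeeds.
    table.update_item(
        Key={"username": username},
        UpdateExpression="ADD #c :one",
        ExpressionAttributeNames={"#c": "count"},  # COUNT is a reserved word
        ExpressionAttributeValues={":one": 1},
    )
    return event

def post_auth_handler(event, context):
    # Successful login: clear the counter so users don't lock themselves out
    table.put_item(Item={"username": event["userName"], "count": 0})
    return event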
Then I figured it'd be neater to use a custom attribute in Cognito (using CognitoIdentityServiceProvider.adminUpdateUserAttributes) so I could throw away the DynamoDB table.
However reading https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-dg.pdf the section titled "Configuring User Pool Attributes" states:
Attributes are pieces of information that help you identify individual users, such as name, email, and phone number. Not all information about your users should be stored in attributes. For example, user data that changes frequently, such as usage statistics or game scores, should be kept in a separate data store, such as Amazon Cognito Sync or Amazon DynamoDB.
Given that the counter will change on every single login attempt, the docs would seem to indicate I shouldn't do this...
But can anyone tell me why? Or if there would be some negative consequence of doing so?
As far as I can see, Cognito billing is purely based on storage (i.e. number of users), and not operations, whereas Dynamo charges for read/write/storage.
Could it simply be AWS not wanting people to abuse Cognito as a storage mechanism? Or am I being daft?
We are dealing with a similar problem, and the main reason we decided to store the extra attributes in a DB is that Cognito has quotas on all actions: "AdminUpdateUserAttributes" is limited to 25 requests per second.
More information here:
https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
So if you have a pool with 100k users or more, this can create a bottleneck if you want to update a Cognito user record on every login.
Cognito UserAttributes are meant to store information about the users. This information can then be read from the client using the AWS Cognito SDK, or just by decoding the idToken on the client side. Every custom attribute you add will be visible on the client side.
Custom attributes have other downsides:
You only have 25 values to set.
They cannot be removed or changed once added to the user pool.
I have personally used custom attributes, and the interface for manipulating them is not great. But that is just a personal opinion.
If you want to store this information without depending on DynamoDB, you can use Amazon Cognito Sync. Besides the service itself, it offers a client with great features that you can incorporate into your app.
AWS DynamoDB appears to be your best option; it is commonly used for such use cases. Some of the benefits of using it:
You can store a separate record for each login attempt, with as much info as you want, such as IP address, location, user agent, etc. You can also add a datetime so the PreAuth Lambda can query by time range, for example failed attempts within the last 30 minutes (see the sketch after this list).
You don't need to clean the table up yourself, because you can set a TTL on each DynamoDB record so that it is deleted automatically after the specified time.
You can also archive items in S3.
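
A rough sketch of that record-per-attempt layout (table and attribute names are made up, the key schema is an assumption, and TTL must be enabled on the expiresAt attribute for the auto-delete to work):

import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("login-attempts")  # hypothetical table

def record_failed_attempt(username, ip, user_agent):
    now = int(time.time())
    table.put_item(Item={
        "username": username,           # partition key
        "attemptedAt": now,             # sort key, enables time-range queries
        "ip": ip,
        "userAgent": user_agent,
        "expiresAt": now + 24 * 3600,   # TTL attribute: removed after a day
    })

def failed_attempts_in_last_30_minutes(username):
    cutoff = int(time.time()) - 30 * 60
    resp = table.query(
        KeyConditionExpression=Key("username").eq(username)
                               & Key("attemptedAt").gte(cutoff)
    )
    return resp["Count"]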

Alfresco batch loading service for nodes metadata

In our project we are using Alfresco 4.2.c as the content store. We need a web service for loading node metadata. The required properties to include in the result are createdOn and modifiedOn. We have the ids of the nodes whose dates should be retrieved. Is there any way of getting these properties for multiple nodes in one request, rather than one by one?
I have already tried POST /alfresco/service/api/bulkmetadata, but none of the properties that I need are included in the result.
I also tried to create a search request, but it returns only nodeRef ids.
I am aware that there are services to get this information one node at a time, but I don't want that, because I need the information for over 40K nodes.
Try enumerating the properties you need in a CMIS query (cmis:creationDate and cmis:lastModificationDate correspond to createdOn and modifiedOn):
http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser/?
cmisselector=query
&maxItems=10
&skipCount=0
&succinct=true
&q=
select cmis:objectId, cmis:createdBy, cmis:creationDate, cmis:lastModificationDate
from cmis:document
where
cmis:objectId in (
'8dac37fa-1cd4-4226-85a3-03e8fdb64e16',
'395417a9-4bd1-4dd1-b33c-c4e555abccae'
)
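
Since you have over 40K node ids, you will want to chunk them. Here is a hedged Python sketch of driving the same browser-binding query over HTTP (the endpoint, credentials, and chunk size are assumptions):

import requests

CMIS_URL = ("http://localhost:8080/alfresco/api/-default-/public"
            "/cmis/versions/1.1/browser/")
AUTH = ("admin", "admin")  # assumption: basic auth credentials
CHUNK = 500                # keep each IN (...) list a manageable size

def fetch_dates(node_ids):
    results = []
    for i in range(0, len(node_ids), CHUNK):
        ids = ", ".join("'%s'" % n for n in node_ids[i:i + CHUNK])
        q = ("select cmis:objectId, cmis:creationDate, cmis:lastModificationDate "
             "from cmis:document where cmis:objectId in (%s)" % ids)
        resp = requests.get(CMIS_URL, auth=AUTH, params={
            "cmisselector": "query",
            "succinct": "true",
            "q": q,
        })
        resp.raise_for_status()
        # with succinct=true each row carries a flat succinctProperties map
        results.extend(row["succinctProperties"]
                       for row in resp.json()["results"])
    return results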

Selecting Categories using API in Amazon MWS

I am developing an integration between a desktop application and Amazon MWS, and need to be able to offer users a choice of categories to put the product they are listing into. My problem is that I can't find any way of programmatically getting the current categories from MWS using the API.
Additionally, once I have a category reference to use, I will need a way to pull in and add the category-specific XML child of ProductData (e.g. Home, Jewelry, Computers, etc.), but they don't seem to be linked in any well-defined way. For example, I can't say "if the chosen category is reference nnnnn, ask them to populate the Computers-specific ProductData", unless I write something myself to map them.
Has anyone else come across these problems and found a workable solution?
Any help appreciated...
I am currently exploring the option of limiting users to only selling products already listed on Amazon, but still can't figure out how to pull in the correct category-specific XML.
There are various product look-ups, but they all seem to work from either my SKU (which will not yet be there) or Amazon's ASIN (which I don't yet know).
You can use the Amazon Product Advertising API for this.
You have to create an account on the Amazon affiliate programme, and get security credentials from it as well.
After that, go to the BrowseNode tree page, download the root categories list, and save it to a file or database. From there you get each category name and its BrowseNodeId.
Then call the BrowseNode API to get the child categories for a parent category.
Please follow this link:
http://docs.aws.amazon.com/AWSECommerceService/latest/DG/ProgrammingGuide.html
Code for calling the BrowseNode API:
// Sign the request with the affiliate credentials (SignedRequestHelper comes
// from Amazon's Product Advertising API sample code)
SignedRequestHelper helper =
    new SignedRequestHelper(appConfig["AWSAccessKey"], appConfig["AWSSecretKey"], appConfig["endpoint"]);
string url = helper.Sign("http://ecs.amazonaws.com/onca/xml?Service=AWSECommerceService&Operation=BrowseNodeLookup&BrowseNodeId=" + value + "&AssociateTag=beginners00-00&Version=2011-08-01");

HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;

// Get the response and read the BrowseNode XML it contains
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string xml = reader.ReadToEnd();
    // parse the child categories out of the XML here
}
You will also need to download the HMAC SignatureGenerator class.

A way to bypass the per-ip limit retrieving profile picture?

My app downloads all of the user's friends' pictures.
All the requests are of this kind:
https://graph.facebook.com/<friend id>/picture?type=small
But after a certain limit is reached, instead of the picture I get:
{"error":{"message":"(#4) Application request limit reached","type":"OAuthException"}}
Actually, the only way I have found to prevent this is to change the server IP (manually).
Isn't there a better way?
For the record:
The limit is related to the Graph API only, and the graph.facebook.com/<user>/picture URL is a Graph API call that returns a redirect.
So, to avoid the daily limit, simply fetch all the image URLs via FQL, like:
SELECT uid, pic_small, pic_big, pic, pic_square FROM user WHERE uid = me() OR uid IN (SELECT uid2 FROM friend WHERE uid1 = me())
These are the direct URLs to the images, for example:
http://profile.ak.fbcdn.net/hprofile-ak-snc4/275716_1546085527_622218197_q.jpg
Don't store them, since they change continuously.
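
For completeness, a small Python sketch of running that FQL query over HTTP (the access token is a placeholder; the fql endpoint was part of the legacy Graph API):

import requests

ACCESS_TOKEN = "..."  # a valid user access token

FQL = ("SELECT uid, pic_small, pic_big, pic, pic_square FROM user "
       "WHERE uid = me() OR uid IN "
       "(SELECT uid2 FROM friend WHERE uid1 = me())")

resp = requests.get("https://graph.facebook.com/fql",
                    params={"q": FQL, "access_token": ACCESS_TOKEN})
for row in resp.json().get("data", []):
    # pic_* are direct CDN URLs: fetch them now, don't persist them
    print(row["uid"], row["pic_square"])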
If it's needed for an online app, it's better not to download those images at all but to use the online versions. There are a couple of reasons for doing so:
Users change their pictures (some frequently); do you need an updated version?
Facebook's servers are probably faster than yours, and friends' pictures are probably already cached in your user's browser.
Update:
Since the limit you are hitting is the Graph API call limit, not image retrieval, another solution that comes to mind is using the user's friends connection in the Graph API and specifying picture in the fields argument, e.g. https://graph.facebook.com/me/friends?fields=picture. This will return direct URLs for the friends' pictures, so you can make only one call to get all the info needed to download the images for every user...
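
A hedged sketch of that single-call approach, following the paging links the Graph API returns (the token is a placeholder, and the shape of the picture field varies between API versions):

import requests

url = "https://graph.facebook.com/me/friends"
params = {"fields": "picture", "access_token": "..."}  # placeholder token

while url:
    page = requests.get(url, params=params).json()
    for friend in page.get("data", []):
        # older API versions return a plain URL string for picture,
        # newer ones return {"data": {"url": ...}}
        print(friend["id"], friend["picture"])
    url = page.get("paging", {}).get("next")  # absolute URL of the next page
    params = {}  # "next" already embeds the query string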