Alfresco batch loading service for node metadata - web services

In our project we are using Alfresco 4.2-c as a content store. We need a web service for loading node metadata. The required properties that should be included in the result are createdOn and modifiedOn. We have the ids of the nodes whose dates should be retrieved. Is there any way of getting these properties for multiple nodes in one request, rather than one by one?
I already tried POST /alfresco/service/api/bulkmetadata, but none of the properties that I need are included in the result.
I also tried creating a search request, but it returns only nodeRef ids.
I am aware that there are services to get this information one node at a time, but I don't want that, because I need the information for over 40K nodes.

Try enumerating the properties with a CMIS query via the browser binding. The createdOn and modifiedOn dates correspond to the CMIS properties cmis:creationDate and cmis:lastModificationDate:
http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser/?
cmisselector=query
&maxItems=10
&skipCount=0
&succinct=true
&q=
select cmis:objectId, cmis:creationDate, cmis:lastModificationDate
from cmis:document
where
cmis:objectId in (
'8dac37fa-1cd4-4226-85a3-03e8fdb64e16',
'395417a9-4bd1-4dd1-b33c-c4e555abccae'
)
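Here is a minimal sketch of issuing the same query from Python with the requests library; the host, credentials, and node ids are placeholders for your installation:
import requests

CMIS_URL = "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser"
query = """
select cmis:objectId, cmis:creationDate, cmis:lastModificationDate
from cmis:document
where cmis:objectId in ('8dac37fa-1cd4-4226-85a3-03e8fdb64e16',
                        '395417a9-4bd1-4dd1-b33c-c4e555abccae')
"""
response = requests.get(
    CMIS_URL,
    params={
        "cmisselector": "query",
        "q": query,
        "succinct": "true",
        "maxItems": "500",
        "skipCount": "0",
    },
    auth=("admin", "admin"),  # assumption: basic auth with an admin user
)
response.raise_for_status()
for row in response.json()["results"]:
    print(row["succinctProperties"])
With 40K nodes you will hit URL and query length limits, so batch the object ids into IN clauses of a few hundred ids each and page through the results with maxItems/skipCount.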

Related

AWS AppSync searchItems type returns data while table is empty

I deleted all the items in the DataTemplate table, but when I query them again with the searchDataTemplates endpoint (in the app or in AppSync) it returns the old data, while listDataTemplates returns nothing, which is correct. I needed to repopulate the data in the table.
When I updated items individually it worked just fine, but when I deleted all the items from the console (around 700 items) the search endpoint stopped working. Just the search.
UPDATE:
I repopulated the data hoping it would reset, but now listDataTemplates shows the new data while the search still shows the old data. Is there some cache that needs to be reset?
SECOND UPDATE:
I removed the table, and the AppSync functions are gone; however, when I recreated the table (with no data), testing the function still returns the old data. I'm guessing the OpenSearch side hasn't been updated?
If you are using AppSync with the Amplify CLI, #searchable will automatically create the following:
An OpenSearch Domain
A Lambda function that is attached to the DynamoDB stream and pushes the changes (create/update/delete) over to your OpenSearch domain.
The problem you're facing is most likely that this Lambda function failed to push the changes from the DynamoDB stream to OpenSearch. A quick suggestion is to check the created Lambda function first.
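For instance, here is a quick sketch for scanning the streaming function's logs for errors with boto3; the log group name is an assumption, so look up the real function name in the Lambda console first:
import boto3

logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/aws/lambda/OpenSearchStreamingLambdaFunction",  # placeholder name
    filterPattern="ERROR",
    limit=50,
)
for event in resp["events"]:
    print(event["message"])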
Reference: #searchable
This issue can only happen if caching is enabled in your application.
I am not sure what infrastructure you are using, so I will go ahead with some educated guesses. Please feel free to correct me if I have overstepped.
From your description of the question, you have AppSync as the API layer and DynamoDB as the primary database.
If these are the only two resources you have, please check the AppSync cache configuration:
Open the AppSync console
From the left panel select APIs -> your API -> Caching
Validate that Caching behavior is set to None
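If you prefer to check this programmatically, here is a sketch with boto3; the API id is a placeholder, which you can find with list_graphql_apis:
import boto3

appsync = boto3.client("appsync")
try:
    cache = appsync.get_api_cache(apiId="your-api-id")  # placeholder id
    print(cache["apiCache"]["apiCachingBehavior"])
except appsync.exceptions.NotFoundException:
    print("No cache configured for this API")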
If you have AWS OpenSearch enabled for search queries (I could be wrong; I am picking this up from a previous comment), then validate the cluster configuration:
Open the AWS OpenSearch Service console
From the left panel select Domains and click on the OpenSearch domain that you are using
Scroll to the bottom right and look for Advanced cluster settings; ensure the attribute Fielddata cache allocation is set to 0
If Fielddata cache allocation is not 0, update the cluster configuration and modify the advanced cluster settings to set the Fielddata cache allocation field to 0.
Wait for a few minutes (I would suggest 5 minutes) and then retry your use case.
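If stale results persist, you can also clear the fielddata cache directly through the cluster's REST API; this sketch assumes basic auth and a reachable domain endpoint, both placeholders:
import requests

DOMAIN = "https://your-opensearch-domain.us-east-1.es.amazonaws.com"  # placeholder
resp = requests.post(
    f"{DOMAIN}/_cache/clear",
    params={"fielddata": "true"},
    auth=("master-user", "master-password"),  # or SigV4-signed requests
)
print(resp.json())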
I hope this helps resolve your issue.

AWS Amplify filter for #searchable annotation

Currently I am using a DynamoDB instance for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data, such as posts, users, and comments, in the same table. Now I want to make flexible queries on my data. I found out that I could use the #searchable annotation to create an Elasticsearch instance for a table annotated with #model.
In my GraphQL schema I only have one #model, since I only have one table. My problem is that I don't want to make everything in the table searchable, since that would most likely be very expensive. There is some data that does not have to be added to the Elasticsearch instance (for example, comment-related data). How can I handle this? Do I really have to split my schema into multiple tables to be able to manage the #searchable annotation? Couldn't I decide whether a row should be stored in Elasticsearch based on the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the #searchable directive to one of your models.
The Lambda code cannot be edited and, currently, there is no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is by creating a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point is to:
Add #searchable to one of your models in schema.graphql
Run amplify api gql-compile
Copy the generated template from the build folder, \amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
Remove the #searchable directive from the model added in step 1
Start editing the template you copied in step 3
Add a Lambda and use it in the template as the resolver for the DynamoDB stream (a filtering handler is sketched below)
Using this approach will give you total control of the resources related to the Elasticsearch service, but it will also require you to do everything on your own.
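Here is a minimal sketch of such a filtering handler; the key name (PK) and the prefixes are assumptions about a single-table design like yours:
def lambda_handler(event, context):
    searchable = []
    for record in event.get("Records", []):
        keys = record.get("dynamodb", {}).get("Keys", {})
        pk = keys.get("PK", {}).get("S", "")
        # Forward posts and users; skip comment rows entirely.
        if pk.startswith("POST#") or pk.startswith("USER#"):
            searchable.append(record)
    # ...index the `searchable` records into Elasticsearch here, the way the
    # stock streaming function does for every record.
    return {"forwarded": len(searchable)}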
Or, just go with creating a table for each model.
Hope it helps
It is now possible to override the generated streaming function code as well.
Thanks to AWS Support for the information provided. I left a comment on the related GitHub issue as well: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need to do is:
Run amplify override api
Edit the corresponding override.ts
Change the function code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
// In override.ts: point the generated streaming function at custom code.
resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
  zipFile: `
# python streaming function customized code goes here
`
};
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code - Properties - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

How to limit nodes retrievable via the Services module in Drupal?

I've put together a simple web service on my Drupal website using the appropriate module (Services). Now, I can check the "node" endpoint and its "retrieve" operation inside the module for it to expose the content of my website's nodes, but I don't really want to include EVERY node in it. I'd rather expose only a couple of selected contents.
To explain myself better: I need to expose the content of my 'disclaimer' nodes to a mobile app, but activating the node-retrieve endpoint means all nodes are exposed, and I don't really want that.
So, is there a way to limit which nodes are exposed via the Services module's node-retrieve endpoint?
You can use the services_entity module and the pagesize query parameter to return a specific number of nodes.
Example: return 15 nodes of type page:
/api/node?pagesize=15&parameters[type]=page
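For example, fetching those nodes from Python; the host and the response field names are placeholders that depend on your site's configuration:
import requests

resp = requests.get(
    "https://example.com/api/node",
    params={"pagesize": 15, "parameters[type]": "page"},
)
resp.raise_for_status()
for node in resp.json():
    print(node.get("nid"), node.get("title"))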

Selecting Categories using API in Amazon MWS

I am developing an integration between a desktop application and Amazon MWS and need to be able to offer users a choice of categories in which to put the product they are listing. My problem is that I can't find any way of programmatically getting the current categories from MWS using the API.
Additionally, once I have a category reference to use, I will need a way to pull in and add the category-specific XML child of ProductData (e.g. Home, Jewelry, Computers, etc.), but they don't seem to be linked in any well-defined way. For example, I can't say "if the chosen category is reference nnnnn, ask them to populate the Computers-specific ProductData", unless I write something myself to map them.
Has anyone else come across these problems and found a workable solution?
Any help appreciated...
I am currently exploring the option of limiting users to only selling products already listed on Amazon, but I still can't figure out how to pull in the correct category-specific XML.
There are various product look-ups, but they all seem to work from either my SKU (which will not yet be there) or Amazon's ASIN (which I don't yet know).
You can use the Amazon Product Advertising API for this.
You have to create an account on the Amazon affiliate programme and obtain security credentials from it.
After that, go to the BrowseNode tree page, download the root categories list, and save it to a file or database. From there you get each category name and its browseNodeId.
Then call the BrowseNode API to get the child categories for a parent category.
Please follow this link:
http://docs.aws.amazon.com/AWSECommerceService/latest/DG/ProgrammingGuide.html
Code for calling the BrowseNode API:
// SignedRequestHelper comes with Amazon's API samples and signs the request.
SignedRequestHelper helper =
    new SignedRequestHelper(appConfig["AWSAccessKey"], appConfig["AWSSecretKey"], appConfig["endpoint"]);
string url = helper.Sign("http://ecs.amazonaws.com/onca/xml?Service=AWSECommerceService&Operation=BrowseNodeLookup&BrowseNodeId=" + value + "&AssociateTag=beginners00-00&Version=2011-08-01");
HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
// Get the response and read the BrowseNode XML out of it.
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string xml = reader.ReadToEnd();
    // Parse the child category names and BrowseNodeIds from `xml` here.
}
You will also need to download the HMAC-based signature generator class (SignedRequestHelper).

batch update link_urls in the creative

We are trying to attach some parameters to the existing link_urls of our creatives.
For example:
http://www.adchemy.com -> http://www.adchemy.com?param_ad_id=23423424242
for tracking purposes, where the id is the adgroup id for that creative.
We'll maintain a one-to-one relation between creative and adgroup.
To update an existing creative we have to create a new one and associate it with the ad.
We need to update around 5000 creatives.
Is there a way to update them in a batch rather than making 10000 requests
(5000 for the new creatives and 5000 to update the corresponding ads)?
The Facebook API provides batch updates: https://developers.facebook.com/docs/reference/api/batch/ (not sure about the limit, though it says 20). You can create adgroups and insert the creative element in the form of JSON or an object, and post the information to the adgroups URL documented in the Ads API:
https://developers.facebook.com/docs/reference/ads-api/adgroup/
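A sketch of what a batched call might look like from Python; the account id, creative fields, and token are placeholders, and the exact field names depend on the Ads API version you target:
import json
import requests

batch = [
    {
        "method": "POST",
        "relative_url": "act_123456789/adcreatives",  # placeholder account id
        "body": "title=My+ad&link_url=http%3A%2F%2Fwww.adchemy.com%3Fparam_ad_id%3D23423424242",
    },
    # ...more entries, up to the per-batch limit...
]
resp = requests.post(
    "https://graph.facebook.com/",
    data={"access_token": "YOUR_ACCESS_TOKEN", "batch": json.dumps(batch)},
)
print(resp.json())
Each entry in the batch is executed as an independent request, so creative creation and the corresponding ad updates can be packed into far fewer HTTP calls than 10000.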