Not able to delete multiple campaigns using Postman from Eloqua - postman

I have been trying to delete multiple campaigns from Eloqua at a time using Postman, but I am not able to do so. I don't see any reference to this in the documentation either: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAB/index.html#Developers/RESTAPI/REST-API.htm%3FTocPath%3D%2520Application%2520API%7C_____0.
Please let me know if deleting multiple campaigns is possible.

It is not possible.
The link you provided is marked as outdated and redirects to: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAC/rest-endpoints.html
Have a look at all the DELETE methods over there, and you will see that there is no provision for sending more than one id at a time.
Edit: You say you are using Postman. It is possible to perform repetitive tasks (like deleting multiple campaigns) with different parameters each time by running a Collection with Postman's Collection Runner.
Edit 2:
Create an environment,
type your URL with the id as a variable, e.g. xyz.com/delete/{{id}},
and supply all the id values as a JSON or CSV data file in the Collection Runner. Postman's documentation provides a sample JSON; you would simply have to put your ids inside an array, e.g.:
[
  {"id": 1},
  {"id": 2},
  {"id": 3}
]
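If you would rather script the loop outside Postman, here is a minimal Python sketch of the same idea. It assumes Eloqua basic authentication and the single-campaign DELETE endpoint /api/REST/2.0/assets/campaign/{id} from the endpoint reference linked above; the base URL, credentials, and ids are placeholders to adapt.

import requests

# Assumption: base URL of your Eloqua pod and the single-campaign DELETE
# endpoint from the REST endpoint reference linked above.
BASE_URL = "https://secure.p01.eloqua.com/api/REST/2.0/assets/campaign"
AUTH = ("SiteName\\username", "password")  # Eloqua basic auth is "site\\user"

campaign_ids = [1, 2, 3]  # the same ids you would feed to the Collection Runner

for campaign_id in campaign_ids:
    # One DELETE request per campaign; the API has no bulk-delete call.
    response = requests.delete(f"{BASE_URL}/{campaign_id}", auth=AUTH)
    print(campaign_id, response.status_code)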


I have a GCP storage bucket with 1000+ images in it. What's the easiest way to get a text file that lists the URLs of all objects in the bucket?

I know that this API https://storage.googleapis.com/storage/v1/b/<BUCKET_NAME>/o? can be used to retrieve JSON data for 1000 objects at a time, and we can parse the output in code to pick out just the names and generate URLs of the required form. But is there a simpler way to generate a text file listing the URLs of all objects in a bucket?
edit: adding more details
I have configured a Google load balancer (with CDN, if that matters) with IP address <LB_IP> in front of this bucket. So ideally I would want to be able to generate a list of URLs like
http://<LB_IP>/image1.jpg
http://<LB_IP>/image2.jpg
...
In general you can just run gsutil ls gs://my_bucket > your_list.txt on Linux to get all your objects in a text list.
If this is not what you are looking for, please edit your question with more specific details.
gsutil doesn't have a command to print URLs for objects in a bucket; however, it can list objects, as Chris32 mentioned.
In addition, according to this Stack Overflow post, you could pipe that listing through sed to replace the gs://BUCKET_NAME/ prefix and generate URLs of the required form.
For publicly visible objects, public links are predictable, as they match the following:
https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
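If the sed approach feels brittle, a short Python sketch using the google-cloud-storage client can produce the same list; BUCKET_NAME and <LB_IP> below are the placeholders from the question, and default application credentials are assumed.

from google.cloud import storage

# Assumptions: default application credentials and a bucket served through
# the load balancer at <LB_IP>, as described in the question.
BUCKET_NAME = "my_bucket"
LB_HOST = "http://<LB_IP>"

client = storage.Client()
with open("your_list.txt", "w") as out:
    # list_blobs pages through the bucket for you, so the 1000-object page
    # size of the raw JSON API is not a concern here.
    for blob in client.list_blobs(BUCKET_NAME):
        out.write(f"{LB_HOST}/{blob.name}\n")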

AWS Amplify filter for @searchable annotation

Currently I am using a DynamoDB table for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data (posts, users, comments, etc.) in the same table. Now I want to make flexible queries on my data. I found out that I could use the @searchable annotation to create an Elasticsearch instance for a table that is annotated with @model.
In my GraphQL schema I only have one @model, since I only have one table. My problem is that I don't want to make everything in the table searchable, since that would most likely be very expensive. Some data (for example, comment-related data) does not need to be added to the Elasticsearch instance. How can I handle this? Do I really have to split my schema into multiple tables to be able to manage the @searchable annotation? Couldn't I decide whether a row should be stored in Elasticsearch based on the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the @searchable directive to one of your models.
The Lambda code cannot be edited and, currently, there is no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is by creating a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point is to:
Add @searchable to one of your models in schema.graphql
Run amplify api gql-compile
Copy the generated template from the build folder, \amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
Remove the @searchable directive from the model added in step 1
Start editing your new template copied in step 3
Add a Lambda and use it in the template as the resolver for the DynamoDB stream (a filtering sketch follows below)
Using this approach will give you total control over the resources related to the Elasticsearch service, but it will also require you to do it all on your own.
Or just create a table for each model.
Hope it helps
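As a rough illustration of the filtering idea from the question, such a custom streaming Lambda could inspect the partition key of each DynamoDB stream record and skip the rows you don't want indexed. This is only a sketch: the pk attribute name, the SEARCHABLE_PREFIXES values, and the index/delete helpers are assumptions, not part of the code Amplify generates.

# Hypothetical DynamoDB stream handler that filters before indexing.
SEARCHABLE_PREFIXES = ("POST#", "USER#")  # assumption: pk values like "POST#123"

def index_document(doc_id, image):
    # Placeholder: here you would upsert the document into Elasticsearch.
    print(f"index {doc_id}")

def delete_document(doc_id):
    # Placeholder: here you would remove the document from the index.
    print(f"delete {doc_id}")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        pk = record["dynamodb"]["Keys"]["pk"]["S"]  # assumption: string key "pk"

        # Skip rows (e.g. comment-related data) that should not be searchable.
        if not pk.startswith(SEARCHABLE_PREFIXES):
            continue

        if record["eventName"] in ("INSERT", "MODIFY"):
            index_document(pk, record["dynamodb"]["NewImage"])
        elif record["eventName"] == "REMOVE":
            delete_document(pk)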
It is now possible to override the generated streaming function code as well.
Thanks to AWS Support for the information; I also left a message on the related GitHub issue: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need to do is run
amplify override api
then edit the generated override.ts
and change the code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
  zipFile: `
# python streaming function customized code goes here
`
};
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code - Properties: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

boto3 list_services() with order

I wrote AWS auto-deployment code with the boto3 library.
In my code, I get the list of all services and use it.
I need to get the latest service, but I don't think there is an order option.
(https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.list_services)
Sometimes the first element is the latest service,
but sometimes an old service is in the first position.
Is there any option or way to get the latest service?
Thanks.
The list_services method does not return details of individual services. It simply lists the services, and returns you a list of identifiers (ARNs) for those services.
To get more details of a given service, you can use describe_services. This allows you to get details of up to 10 services at a time.
So, take the list of service identifiers that you get back from list_services, and pass it to describe_services (with at most 10 service identifiers). Something like this (untested):
import boto3

client = boto3.client('ecs')

# List the service ARNs in the cluster (identifiers only, no details).
list_response = client.list_services(
    cluster='xyz',
    launchType='EC2'
)

# Fetch details for up to 10 of those services in one call.
desc_response = client.describe_services(
    cluster='xyz',
    services=list_response['serviceArns']
)
Note that you will have to do pagination using maxResults / nextToken if there are a lot of results.
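To pick out the latest service specifically, one option is to sort the described services by their createdAt timestamp. This is a sketch of that interpretation of "latest" (newest creation time), continuing from desc_response above:

# Assumption: "latest" means the service with the newest createdAt timestamp.
services = desc_response['services']
latest_service = max(services, key=lambda svc: svc['createdAt'])
print(latest_service['serviceName'], latest_service['createdAt'])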

Get Feed from event of page

I need to access all events from my page (using manage_pages). Then, for certain events (IDs in an array), I need to access the feed and get the pictures.
I already constructed this link:
https://graph.facebook.com/[pageid]/events?fields=feed&access_token=[access_token_from_page]
which works fine. But I was wondering if I could somehow specify the IDs of the events in the URL, and maybe also say that I just want photos?
I'm looking for a way to minimize the response.
Thanks!
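For reference, the constructed call above can also be scripted; [pageid] and [access_token_from_page] are the placeholders from the question, left for you to fill in.

import requests

# Placeholders from the question: fill in your page id and page access token.
PAGE_ID = "[pageid]"
PAGE_ACCESS_TOKEN = "[access_token_from_page]"

response = requests.get(
    f"https://graph.facebook.com/{PAGE_ID}/events",
    params={"fields": "feed", "access_token": PAGE_ACCESS_TOKEN},
)
events = response.json()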

Autonomy - Force reindexing without losing data

I need to add a new parameter to my Autonomy HTTP fetch configuration.
ImportFieldOp2=Expand
ImportFieldOpApplyTo2=AUTHOR
ImportFieldOpParam2=;;AUTHOR_M
I stopped the HTTPFetch service and, after modifying the config, started it again.
The problem is that the change is only applied to new documents.
The old documents don't have the new parameter.
If I remove all the indexed documents it works, but this is a production environment and I can't do that.
How can I force re-indexation of the old documents without losing data?
Thank you.
Look at the Content engine parameter KillDuplicates.
KillDuplicates=DREREFERENCE should do what you want.
A full re-crawl would be required to update the existing documents with the new ones.
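As a rough sketch of where that setting lives (the [Server] section name is an assumption based on typical IDOL Content configurations; check your own content configuration file):

[Server]
// Replace documents that share the same DREREFERENCE instead of duplicating
// them, so a full re-crawl updates existing documents in place.
KillDuplicates=DREREFERENCE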