Vtiger 7 workflow to create record in Quote Module

I want to set up a workflow in Vtiger 7 so that when the Quote stage is set to "Superceded" the quote is automatically duplicated and the newly created quote is opened ready for editing.
The simplest way to do this seems to be the "Create Record" action in Workflows. But for some reason the "Create Record" action applies to every other module apart from Quotes: the Quotes module simply does not exist in the dropdown list.
Can someone please advise how to change the code so the "Create Record" dropdown list includes the Quotes module? Alternatively, how would I go about creating a custom function to achieve the same?

You can duplicate a Quotes record by writing a custom function. The workflow already gives you the id of the Quote, so you just need to pass that id to the function and perform the duplicate operation there.
You will find many references on Google for executing a custom function from a workflow, for example the vtiger wiki and the vtiger discussion forums.
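For illustration, here is a rough, untested PHP sketch of such a custom function (the file and function names are hypothetical, the record-model API varies between vtiger releases, and Quote line items may need extra handling):

// modules/Quotes/DuplicateQuote.php (hypothetical location)
function duplicateQuote($entity) {
    // Workflows pass a webservice id like '13x1234'; keep only the numeric record id
    list($wsId, $recordId) = vtws_getIdComponents($entity->getId());
    $recordModel = Vtiger_Record_Model::getInstanceById($recordId, 'Quotes');
    // Clearing id and mode makes save() create a new record instead of updating
    $recordModel->set('id', '');
    $recordModel->set('mode', '');
    $recordModel->save();
}

To make the function selectable as a workflow task, register it once, e.g. from a one-off script run inside vtiger ($adb is vtiger's global database object):

$emm = new VTEntityMethodManager($adb);
$emm->addEntityMethod('Quotes', 'Duplicate Quote', 'modules/Quotes/DuplicateQuote.php', 'duplicateQuote');

Note that opening the newly created quote for editing is client-side navigation, which a server-side workflow cannot trigger on its own.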


How to add issue ID from branch name to merge request description template?

By default, GitLab adds the issue ID from the branch name to the merge request description; see Merge requests to close issues:
To create a merge request to close an issue when it’s merged, you can either:
Add a note in the MR description.
In the issue, select Create a merge request. Then, you can either:
Create a new branch and a draft merge request in one action. The branch is named issuenumber-title by default, but you can choose any name, and GitLab verifies that it’s not already in use. The merge request inherits the milestone and labels of the issue, and is set to automatically close the issue when it is merged.
Create a new branch only, with its name starting with the issue number.
But I want to use a custom merge request description template, see Create a merge request template:
Create a merge request template
Similarly to issue templates, create a new Markdown (.md) file inside the .gitlab/merge_request_templates/ directory in your repository. Commit and push to your default branch.
Research
GitLab Flavored Markdown doesn't contain any markup for the issue ID from the branch name.
Markdown Style Guide for about.GitLab.com doesn't contain any markup for the issue ID from the branch name.
GitLab quick actions don't contain any action for the issue ID from the branch name.
Question
How to add issue ID from branch name to the merge request description template?
I was looking at a similar problem, and I see that Push Options can be a solution for this:
https://docs.gitlab.com/ee/user/project/push_options.html
For example:
merge_request.title="<title>": set the title of the merge request.
merge_request.description="<description>": set the description of the merge request.
It is true that this must be set up by the developer on the client side, but with git hooks and shell scripts, the content can be anything!
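As a minimal client-side sketch (assuming branches are named like 123-some-feature, the remote is origin, and the GitLab server is 11.10 or later, when merge request push options were introduced):

# derive the issue ID from the current branch name and create an MR with it
branch=$(git rev-parse --abbrev-ref HEAD)     # e.g. 123-some-feature
issue=${branch%%-*}                           # leading issue number, e.g. 123
git push -u origin "$branch" \
  -o merge_request.create \
  -o merge_request.description="Closes #${issue}"

The description string could just as well be built from your template file under .gitlab/merge_request_templates/ with the issue ID substituted in.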

AWS Amplify filter for #searchable annotation

Currently I am using a DynamoDB instance for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data (posts, users, comments, etc.) in the same table. Now I want to make flexible queries for my data. I found out that I could use the #searchable annotation to create an Elasticsearch instance for a table annotated with #model.
In my GraphQL schema I only have one #model, since I only have one table. My problem now is that I don't want to make everything in the table searchable, since that would most likely be very expensive. Some data (for example comment-related data) doesn't have to be added to the Elasticsearch instance. How could I handle that? Do I really have to split my schema into multiple tables to be able to manage the #searchable annotation? Couldn't I decide whether a row should be sent to Elasticsearch based on the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the #searchable directive to one of your models.
The Lambda code cannot be edited, and currently there is no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is by creating a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point is to:
Add #searchable to one of your models in schema.graphql
Run amplify api gql-compile
Copy the generated template from the build folder, amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
Remove the #searchable directive from the model added in step 1
Start editing your new template copied in step 3
Add a Lambda and use it in the template as the resolver for the DynamoDB stream (see the sketch just below)
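As an illustration, a stripped-down, hypothetical fragment of such a stack template (the resource names, role parameter, and imported stream ARN are placeholders, and the real generated SearchableStack.json is much larger):

{
  "Resources": {
    "CustomStreamingLambda": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "index.handler",
        "Runtime": "python3.8",
        "Role": { "Ref": "CustomStreamingLambdaRoleArn" },
        "Code": { "S3Bucket": "my-deployment-bucket", "S3Key": "streaming_lambda.zip" }
      }
    },
    "TableStreamMapping": {
      "Type": "AWS::Lambda::EventSourceMapping",
      "Properties": {
        "EventSourceArn": { "Fn::ImportValue": "MyModelTableStreamArn" },
        "FunctionName": { "Fn::GetAtt": ["CustomStreamingLambda", "Arn"] },
        "StartingPosition": "LATEST"
      }
    }
  }
}

Inside that Lambda you can then look at each stream record (its partition key, for instance) and decide whether to index it into Elasticsearch or silently drop it.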
Using this approach will give you total control over the resources related to the Elasticsearch service, but it will also require you to manage them all on your own.
Or just go with creating a table for each model.
Hope it helps
It is now possible to override the generated streaming function code as well.
Thanks to AWS Support for the information provided; I left a message on the related GitHub issue as well: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need to do is:
Run amplify override api
Edit the generated override.ts
Change the function code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
import { AmplifyApiGraphQlResourceStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyApiGraphQlResourceStackTemplate) {
  // Rename the generated streaming function and point it at a custom handler
  resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
  resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
  // Replace the generated function code with your own inline source
  resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
    zipFile: `
# python streaming function customized code goes here
`,
  };
}
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code - Properties - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

Two different interfaces for AWS Tag Editor?

It seems that there are two different web UIs for AWS Tag Editor (you need an AWS account to try them):
https://resources.console.aws.amazon.com/r/tags
I got this link from the AWS documentation
https://eu-west-1.console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=eu-west-1
In the Management Console, if you select Resource Groups > Tag Editor at the top of the console page, it will take you to this page
The two web UIs behave differently:
The former is global, but the latter is region-specific (it will put you into a region even if you don't put the region parameter in the URL)
The former allows you to search for Not tagged in the filter, but the latter does not
The UIs are slightly different
Is one of the UIs a newer version?
Update (2019-05-14)
(Please also see the answer below explaining that the two links were the new and old UIs that AWS offered at a certain point in time.) By now the first link is gone; if you visit it, you will get a 404 Not Found error from AWS.
I am part of the team building the new Tag Editor. Yes, you are correct: Classic Tag Editor is deprecated and will soon be shut down entirely. We are working on full feature parity between the two editors, so you will very soon be able to do everything in the new one that you can in the old one.
To add some more context on your different items below:
1) Both the old and the new Tag Editor use the same underlying tagging infrastructure, so this should never happen. Maybe there is a browser issue involved here? Feel free to open a support issue so we can look deeper into it if this continues to be the case.
2) Yes, the new one also includes Lambda, and will very soon add more resource types. The same, by the way, goes for regions: the old Tag Editor does not support all regions, for example eu-north-1 or eu-west-3.
3) No, Route53 Hosted Zones are supported in both editors. Route53 resources only exist in the us-east-1 region, so maybe you used the Tag Editor in another region?
4) Both Editors show the same data. The old editor merged what you used as Name Tag and the ID in the same field - in the new one, you see only the ID in the column ID, and the Name Tag is displayed in the column Tag: Name.
Searching across regions is something the new editor will soon support too, and the same applies to the filter you mention. For showing resources without a specific tag, there is a workaround you can already use: click the settings icon in the top right of the table, and enable the tag you are interested in as a column. You can then sort this column so that all untagged resources show up on top.
If you have any other ideas or requests for the Tag Editor, please let us know. The fastest and most reliable way is to just use the 'Feedback' Button in the console in the bottom left.
Cheers,
Florian
Hi, I am providing my own answer here (thanks to my colleague Kannan for the insight).
#1 above is what AWS calls Classic Tag Editor. If you click the question mark in the web UI (upper right corner), you will be taken to a page that says:
This documentation is for classic Tag Editor, which has been
deprecated
So #2 is the version that AWS wants us to use.
Below I will call #1 Old and #2 New.
I compared the example outputs from our environment (about 50 resources). The two outputs differ in these respects:
New seems to retain past resources for a longer time. For example, if an EC2 instance has been terminated, it may take longer to disappear from the listing of New.
New seems to include resources for DynamoDB but Old does not
Old seems to include resources for Route 53 Hosted Zones but New does not.
Both New and Old show Security Groups, but the ID strings are rendered slightly differently.
New renders an ID as sg-xxxxxxxxxxxxxxxxxxxxxx
Old renders an ID as someName (sg-xxxxxxxxxxxxxxxxx)

Sharepoint 2013 workflow not firing when document checked in but works if checked out

We have a document library that has both Document Sets and Documents. We also have a workflow that is manually started by the user on any item in this library. The problem we are having is that the workflow doesn't start if the document is checked in; if the document is checked out, it works fine. The workflow also runs fine on a Document Set.
Looking into the log files, I see the following messages:
Skip lookup field SortBehavior as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Skip lookup field CheckedOutUserId as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Skip lookup field SyncClientId as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
The target list of field Taxonomy Catch All Column, TaxCatchAll, does not exists in the current web or the current user does not have permissioin to see it. Skip it. 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Immediately below these lines, I see the following message:
The file "http://sharepointurl.com/abc/TestWf/select_element.pdf" is not checked out. You must first check out this document before making changes......
The workflow is very simple and only logs a test message. I am not sure why SharePoint is trying to check out the document, but I have a feeling it has something to do with the above messages.
Does anyone have any idea why this is happening?
Thanks
We were able to fix the issue after getting some support on the Microsoft TechNet forum.
Assuming the workflow is a SharePoint Designer workflow: open SharePoint Designer and connect to your site. Click Workflows in the left navigation, then click your workflow to open the workflow information page. In the "Settings" area of the right pane, uncheck "Automatically update the workflow status to the current stage name". This fixes the problem.

Possible to add a new column to an Amazon SimpleDB domain with a default value?

You can dynamically put a new attribute on a single record in a domain, but that attribute remains null for all other records. Is there an "update * set newattribute='defaultval'" style statement I can execute that will add the new attribute to all the other records? I have a lot of records and would prefer not to loop over them all and do it programmatically.
I don't think there is any such option. We had a similar problem and had to use a hack: we added Attribute_Name_Default as a separate attribute. We then wrote a wrapper for the AWS SimpleDB client which checks the default attribute for each attribute and assigns its value to the original attribute before returning to the calling code. Using dependency injection, we did not have to change any code. If dependency injection is not an option, just check out the AWS client from GitHub, make the change, and use that jar as a dependency.
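For illustration, a minimal Java sketch of that wrapper idea (the class name and suffix convention are hypothetical, and it assumes the defaults are stored alongside the data as <name>_Default attributes on the same item):

import com.amazonaws.services.simpledb.AmazonSimpleDB;
import com.amazonaws.services.simpledb.model.Attribute;
import com.amazonaws.services.simpledb.model.GetAttributesRequest;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DefaultingSimpleDbClient {
    private static final String DEFAULT_SUFFIX = "_Default";
    private final AmazonSimpleDB sdb;

    public DefaultingSimpleDbClient(AmazonSimpleDB sdb) {
        this.sdb = sdb;
    }

    // Returns the item's attributes, substituting <name>_Default values for missing attributes.
    public List<Attribute> getAttributes(String domain, String itemName) {
        List<Attribute> raw = sdb.getAttributes(new GetAttributesRequest(domain, itemName)).getAttributes();
        Map<String, String> values = new HashMap<>();
        Map<String, String> defaults = new HashMap<>();
        for (Attribute a : raw) {
            if (a.getName().endsWith(DEFAULT_SUFFIX)) {
                String base = a.getName().substring(0, a.getName().length() - DEFAULT_SUFFIX.length());
                defaults.put(base, a.getValue());
            } else {
                values.put(a.getName(), a.getValue());
            }
        }
        // Backfill: any attribute present only as a default gets the default value
        for (Map.Entry<String, String> d : defaults.entrySet()) {
            values.putIfAbsent(d.getKey(), d.getValue());
        }
        List<Attribute> result = new ArrayList<>();
        for (Map.Entry<String, String> e : values.entrySet()) {
            result.add(new Attribute(e.getKey(), e.getValue()));
        }
        return result;
    }
}

With an interface-based design like this, the wrapper can be swapped in for the plain client via dependency injection, exactly as described above, so calling code never sees the default attributes.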