How to auto-extend a Maximo contract at its due date? - customization

I'm trying to find out whether there is a standard way to auto-extend a Maximo contract at its due date. In most Maximo contract applications there are fields to set up auto-extension, but there is no cron task to handle this. Do I really have to create a custom cron task and script for this purpose? Please share your experience.

You might want to consider implementing an Escalation which checks for contracts with a due date occurring within a defined time period based on the current date (e.g. due date occurs within the next 5 days -> duedate between getdate() and getdate() + 5).
Typically we have used this type of logic in conjunction with a communication template to send an email to the appropriate party, who then manually reviews and updates the contract. But you could also add an action to the escalation that increments the due date by an appropriate amount, and send out the associated email to notify the relevant parties of the update.
Escalations are configured in the Escalations application, the Actions application (target actions) and the Communication Templates application (email template).
:)
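
If you do go the automated route, the escalation action could point at an automation script. A rough Jython sketch of such an action script is below; the attribute names ENDDATE and EXTENSIONPERIOD are assumptions for illustration only, so check them against your actual contract object before using anything like this:

    from java.util import Calendar

    # 'mbo' is the implicit variable Maximo passes to an action launch point script;
    # it refers to the contract record selected by the escalation's condition.
    # ENDDATE / EXTENSIONPERIOD are assumed attribute names - verify them first.
    cal = Calendar.getInstance()
    cal.setTime(mbo.getDate("ENDDATE"))
    cal.add(Calendar.DAY_OF_MONTH, mbo.getInt("EXTENSIONPERIOD"))
    mbo.setValue("ENDDATE", cal.getTime())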

Related

What is the best practice to write an API for an action that affects multiple tables?

Consider the example use case below.
You need to invite a Company as your connection. The sub-actions that need to happen in this situation are:
A Company needs to be created by adding an entry to the Company table.
A User account needs to be created for the staff member to log in, by creating an entry in the User table.
A Staff object is created to ensure that the User has access to the Company, by creating an entry in the Staff table.
The invited company is related to the invitee company, so a relation similar to a friendship is created to connect the two companies, by creating an entry in the Connection table.
An Invitation object is created to store the information about who invited whom onto the system, along with other information like invitation time, invite message, etc. For this, an entry is created in the Invitation table.
An email needs to be sent to the user so they can accept the invitation and join by setting a password.
As you can see, entries are to be made in 5 Tables.
Is it a good practice to do all this in a single API call?
If not, what are the other options?
How do I maintain data integrity if it is to be split into multiple APIs?
If the actions need to be atomic, then it's definitely best to do this in a single API call. Otherwise, you run the risk of someone not completing all the tasks required and leaving the resources in a potentially conflicting state.
That said, you're not updating a single resource, so this isn't a good fit for a single RESTful resource creation call (e.g., POST /companyInvitations) -- as all these other things being created and stitched together might lead to quite a bit of confusion.
If the action you're doing is "inviting a Company", then one option is to use Google's "custom method" syntax (POST /resources/1234:action) as defined in AIP-136. In this case, you might do POST /companies/1234:invite which says "I want to invite Company #1234 to be my connection".
Under the hood, this might atomically upsert (create if resources don't already exist) all the right things that you've listed out.
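For illustration only, here is a rough Python sketch of what that custom method might do under the hood; the table and column names are invented for the example, and sqlite3 simply stands in for whatever database and data layer you actually use:

    import sqlite3

    # Rough sketch only: table and column names (companies, users, staff,
    # connections, invitations) are invented for the example.
    def invite_company(db, inviter_id, company_name, staff_email):
        """Atomically create everything the ':invite' custom method implies."""
        with db:  # sqlite3 wraps this block in a single transaction
            cur = db.execute("INSERT INTO companies (name) VALUES (?)",
                             (company_name,))
            company_id = cur.lastrowid
            cur = db.execute("INSERT INTO users (email) VALUES (?)",
                             (staff_email,))
            user_id = cur.lastrowid
            db.execute("INSERT INTO staff (user_id, company_id) VALUES (?, ?)",
                       (user_id, company_id))
            db.execute("INSERT INTO connections (company_a, company_b) VALUES (?, ?)",
                       (inviter_id, company_id))
            db.execute("INSERT INTO invitations (inviter_id, invitee_id) VALUES (?, ?)",
                       (inviter_id, company_id))
        # If any insert fails, the 'with db' block rolls everything back.
        # The invitation email would be sent (or queued) here, after the commit.
        return company_id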
Something to consider when approaching an API call where multiple things happen is how long those downstream actions take. Leaving the API call blocked while things are processing in the background isn't the best idea in the world.
You could consider (depending on your use case) taking in the API request, immediately responding with a success status, and dropping the request onto an internal queue for processing. When your background service picks up the request it can update whatever needs to be updated and manage the transactions appropriately. This also caters for horizontal scaling scenarios where lots of "worker" services can be deployed to process the requests.
As part of this you could consider adding another "status" endpoint where requests can be made to find out how things are going. To avoid lots of polling status requests you could also take in callback details as part of the original API call, which then get called when the background processing is complete. Or you could do both!
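A minimal sketch of that pattern, assuming Flask and an in-memory queue purely for illustration (a real deployment would use a broker such as SQS or RabbitMQ and a persistent job store):

    import queue
    import threading
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    jobs = {}                   # job_id -> status; use a persistent store in production
    work_queue = queue.Queue()  # stands in for SQS / RabbitMQ / etc.

    def worker():
        while True:
            job_id, payload = work_queue.get()
            # ... create the company, user, staff, connection and invitation here ...
            jobs[job_id] = "done"
            work_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()

    @app.route("/company-invitations", methods=["POST"])
    def create_invitation():
        job_id = str(uuid.uuid4())
        jobs[job_id] = "pending"
        work_queue.put((job_id, request.get_json()))
        # 202 Accepted is the conventional "queued for processing" response
        return jsonify(job_id=job_id), 202, {"Location": "/company-invitations/" + job_id}

    @app.route("/company-invitations/<job_id>", methods=["GET"])
    def invitation_status(job_id):
        return jsonify(status=jobs.get(job_id, "unknown"))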

Authorize.Net Query on recurring billing

We have been trying to integrate the Authorize.Net payment gateway in one of our clients' projects, based on an ASP.NET Web API. We have a few queries that came up while implementing recurring billing scenarios.
Query 1
We checked the APIs for creating, getting, and updating a subscription. However, once we have created a subscription, is there any way we can update the amount in the subscription?
Let's take an example.
We created a subscription for our user for a $50 amount on 1st Jan 2021 with a 30-day interval.
And on 15th Jan 2021, our user wishes to purchase 1 more license, which will cost him $10 more.
So can we increase the amount billed each cycle by updating the subscription?
We checked the Update Subscription API, and it only seems to allow updating the credit card info, so is there any way to update the amount?
Query 2
Is there any way to implement auto-renewal, so that a user can turn auto-renewal on/off for recurring billing when they wish?
Query 3
If there is a way to switch off auto-renewal of recurring billing, is there any link that we can generate and send to them, through which they can pay their next due amount?
Query 1: You cannot update a subscription amount. If the amount needs to change you either need to cancel the current subscription and create a new one for the new amount (being sure to prorate credit from the previous subscription payment) or use CIM to manage your subscription service which allows you to charge against their card at your discretion but requires you to also manage the subscription yourself.
Query 2: Not through Authorize.Net. If you want a subscription to start or end you need to explicitly do so through their API.
Query 3: Not through Authorize.Net. That is application logic and, once again, something you would be responsible for managing.
I'm assuming you are using or are aware of the API provided for Authorize.net here: https://github.com/AuthorizeNet/sdk-dotnet/tree/master/Authorize.NET/Api/Controllers
Query 1: As of now, there is a way to update the amount for a given subscription. You can use the ARBSubscriptionType class. There is an amount property there you can set. Then you can create an ARBUpdateSubscriptionRequest, passing in the ARBSubscriptionType object and the subscription ID.
Note: You might have to handle pro-rating.
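As a rough illustration (shown here with the official Python SDK's equivalents of those classes, since the same request shape applies across the SDKs; treat the credential and ID values as placeholders):

    from decimal import Decimal

    from authorizenet import apicontractsv1
    from authorizenet.apicontrollers import ARBUpdateSubscriptionController

    # Placeholders - use your own credentials and the real subscription ID.
    merchant_auth = apicontractsv1.merchantAuthenticationType()
    merchant_auth.name = "YOUR_API_LOGIN_ID"
    merchant_auth.transactionKey = "YOUR_TRANSACTION_KEY"

    # Only the fields you set are updated; here we change just the amount.
    subscription = apicontractsv1.ARBSubscriptionType()
    subscription.amount = Decimal("60.00")  # e.g. the $50 plan plus the extra $10 licence

    update_request = apicontractsv1.ARBUpdateSubscriptionRequest()
    update_request.merchantAuthentication = merchant_auth
    update_request.subscriptionId = "100748"
    update_request.subscription = subscription

    controller = ARBUpdateSubscriptionController(update_request)
    controller.execute()
    response = controller.getresponse()

    if response.messages.resultCode == apicontractsv1.messageTypeEnum.Ok:
        print("Subscription amount updated")
    else:
        print("Update failed: %s" % response.messages.message[0]["text"].text)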
Query 2: There isn't a built-in renewal feature in Authorize.Net as far as I know. It seems like you could potentially update totalOccurrences by some amount to act as a "renewal", although technically it's an extension of the subscription. The method by which you check when to update, either a modulo operation or a date check, is up to you. You can use the paymentScheduleType class to update totalOccurrences, passing it along in an ARBUpdateSubscriptionRequest.
Query 3: Authorize.Net does not have any in-house link generation.

Checking subscription status in real time

Consider a sample chat application where users purchase monthly/annual subscriptions (subscriptions like Amazon Prime, etc.).
As soon as the subscription expires, the user should not be able to send messages in the app.
Users can end their subscription before the original subscription end date.
One solution in my mind (frontend): cache the end date in the app and, before every "send message" operation, compare the end date with the current date.
But the problem is that if the user ends the subscription early, they will still be able to send messages.
How can I push the updated subscription end date into the cache?
Another solution (backend): I have a table in the backend storing subscription details like subscription_id, user_id, subscription_enddate. So before any "send message" operation, query the subscription table, compare the dates, and then continue or cancel further operations.
Q1. Should I go with the backend solution, or can you please share some improvements to the frontend method or any best practice for this scenario?
Q2. Also, is storing subscription details in a separate table a best practice, or is there a better design instead?
PS: The sample chat app is based on AWS Amplify DataStore.
Let me try to break down the answer and give my opinion. I would also like to mention that solutions to such problems are determined by scale and various trade-offs.
Q1-
If sending messages has an adverse effect, you should never rely on the frontend solution alone, as it is easy to bypass. You can use a mixture to ensure that the load on the backend is not very high.
Adding a frontend cache for the subscription will let you filter most of the messages on the frontend, provided the cache is not tampered with.
Adding a service in front of the queue that validates whether the user's subscription has expired adds one more layer of security: if the subscription is valid it pushes the message to the queue, otherwise it returns an error. This way a bad actor cannot misuse the system either.
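A tiny sketch of that backend check, where get_subscription and message_queue are hypothetical stand-ins for your subscription lookup (the table described above) and your queue client:

    from datetime import datetime, timezone

    # 'get_subscription' and 'message_queue' are hypothetical stand-ins for your
    # subscription table lookup and your queue/broker client.
    def send_message(user_id, message, get_subscription, message_queue):
        """Validate the subscription server-side before enqueueing the message."""
        sub = get_subscription(user_id)
        if sub is None or sub["subscription_enddate"] <= datetime.now(timezone.utc):
            raise PermissionError("Subscription expired or not found")
        message_queue.put({"user_id": user_id, "body": message})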
Q2-
Depending on the use-cases and load, you can have a separate table or a separate micro-service for the subscription itself.
When to have a separate micro-service?
When the subscription data is required from multiple applications in your system and needs to have its own scalability independent of others, it can be beneficial to have a separate micro-service.
When to have a separate table?
In other cases, where you feel adding a service would be overkill, you can keep the data in a separate table/DB, which gives you the flexibility to change the subscription model and even extract it into its own service easily in the future.

Real Time Google Analytics API - Identify user session

I'm retrieving event data using the Real Time Google Analytics API, so as to trigger responses each time conditions are met while the user navigates.
This is my actual query on the Google Analytics Real Time API (which works perfectly!):
    return service.data().realtime().get(
        ids='ga:' + profile_id,
        metrics='rt:totalEvents',
        dimensions='rt:eventAction,rt:eventLabel,rt:eventCategory',
        max_results='25').execute()
I'd like to show results grouped by each particular session or user, so as to trigger a message to that particular user if some conditions are met.
Is that possible? And if so, how do I apply these criteria to the query?
"Trigger a message to a particular user" would imply that you either have personally identifiable data stored in GA, which would violate Googles TOS, or that you map an anonymous ID (clientid or UserID or similar) to a key stored in an external database (which might be legally murky, depending on your legislation). Since I don't want to throw away the answer I have written before reading your question to the end :-) I am going to assume the latter.
So, is that possible? No, not really. By default GA does not identify neither an identifier for the user (client id or user id) nor for the session (a session identifier is present only in the BigQuery export schema).
The realtime API has a very limited set of dimensions (mostly I think because data aggregation does not happen in realtime), so you can't even use custom dimensions. Your only chance would be to overwrite one of the standard fields, i.e. campaign information.
Of course this destroys the original data in the field. So you should use an extra view for the API query, send a custom dimension with the user identifier along, and then use an advanced filter to copy the custom dimension value to a standard field (while you original data is safe in your other data views). This is a bit hackish, though.
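For illustration, assuming you have set up such a filtered view where the user key has been copied into the campaign field, the query from the question might simply request that field as an additional dimension (rt:campaign is one of the documented real-time traffic-source dimensions):

    # Sketch only: assumes an advanced filter in a dedicated view has copied your
    # anonymous user-id custom dimension into the campaign field.
    return service.data().realtime().get(
        ids='ga:' + profile_id,
        metrics='rt:totalEvents',
        dimensions='rt:eventAction,rt:eventLabel,rt:eventCategory,rt:campaign',
        max_results='25').execute()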
Also the realtime API only displays the current hit per user, so you cannot group by user in the query in any case - you'd need to download and store the data in an external database and do your aggregation there.

Is creating different objects with one POST call recommended with a REST API?

I'm working on implementing a RESTful API for my web application, which is written in PHP. I'm facing a problem deciding whether it is recommended to create multiple different types of objects with a single POST call. My scenario is as follows.
The addEmployee API service function allows clients to create an employee inside my application by passing the data as POST parameters.
There are two dependencies for Employee in my system, Job Title and Employment Status, and those are separately saved objects within the system. So the client has to pass the Job Title name and the Employment Status name along with the addEmployee POST call.
When a client calls the addEmployee method, it internally checks whether the given Job Title and Employment Status are already in the system, and if so it only adds a reference to those existing objects within the Employee object.
If the given Job Title or Employment Status is not in the system, the addEmployee method will first save the Job Title and Employment Status objects in the system and then add references to them in the Employee object.
There are separate API functions, addJobTitle and addEmploymentStatus, which can be used by clients if they need to add more Job Titles and Employment Statuses to the system.
In the above workflow I'm not sure whether the 4th step is correct, because the internal saving operation is not visible to the client and it reduces the visibility of the API. But usability-wise it's good, because the client can add an Employee with at most one web service call.
I could replace the 4th step as follows to improve the visibility.
If the given Job Title or Employment Status is not in the system, the addEmployee method returns an error saying they are not available, and along with that response it provides URIs to the addJobTitle and addEmploymentStatus functions, allowing clients to use those URIs to save the Job Titles and Employment Statuses first. After saving the Job Title and Employment Status objects, the client can call the addEmployee method again to add the employee with the given Job Title and Employment Status.
The 2nd approach will improve the visibility of the API, but performance-wise and usability-wise it will not be very effective, because the client may have to call the API up to 3 times to add an Employee to the system.
Please advise me on which approach I should follow to resolve this issue.
I think the 4th step you are attempting is valid without any changes, and it is also the recommended approach.
If you consider Job Title and Employment Status, both of them are related to the Employee.
The best approach is not to expose methods in your API to add a Job Title or an Employment Status. If you do, clients can keep on creating those; for example, one can create the Job Title "Software Engineer" and another can create "SW Engineer". Before you know it you have hundreds of Job Titles. The same applies to Employment Status.
Listing methods alone for Job Title and Employment Status may suffice, with backend provisioning of those values (SQL, manual insert, or admin-only insert).
Finally, as you mentioned, you can reduce the multiple calls and reduce bandwidth, which is crucial if the API is to be used by, for example, mobile apps over wireless networks. A minimal sketch of the step-4 logic is shown below.
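For illustration, a rough sketch of that step-4 logic (look up the Job Title / Employment Status by name, create it if missing, then save the Employee) inside one transaction; the table and column names are invented for the example, and Python with sqlite3 merely stands in for the PHP data layer:

    import sqlite3

    # Hedged sketch of step 4; table and column names are invented for the example.
    def add_employee(db, name, job_title, employment_status):
        with db:  # one transaction: either everything is saved or nothing is
            job_id = _get_or_create(db, "job_titles", job_title)
            status_id = _get_or_create(db, "employment_statuses", employment_status)
            db.execute(
                "INSERT INTO employees (name, job_title_id, employment_status_id) "
                "VALUES (?, ?, ?)", (name, job_id, status_id))

    def _get_or_create(db, table, value):
        # 'table' is only ever one of our two known table names, never user input
        row = db.execute("SELECT id FROM %s WHERE name = ?" % table, (value,)).fetchone()
        if row:
            return row[0]
        return db.execute("INSERT INTO %s (name) VALUES (?)" % table, (value,)).lastrowid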