How to add software update subscription - C++

We want to add auto-update or update notification to our products (C++).
Updates should be subscription-based:
a user buys a subscription for 1 year of updates
when the subscription expires, no more updates are available.
Can someone suggest software or a provider for implementing such a service?
I found a few examples of auto-update, but they are all unlimited in time.
This service must be limited on a per-user basis and allow extensions.

What you would need, in terms of ingredients, would be:
a method to download the updates - I would suggest HTTP(S) for that
a method to encode the license, including what kind of updates you're entitled to and for how long. Ideally, this would be opaque to the user but easily verifiable on both ends (so an erroneous entry can be reported to the user without having to contact the server)
an easy way to know whether updates are available, and perhaps when to check again
What I would suggest would be to define a simple XML over HTTP service using an embeddable HTTP client, such as (shameless plug) Arachnida, with a simple API - something like:
class UpdateAgent
{
    /* boilerplate */
public:
    /* Set the key to use. Throws an InvalidKey exception if the key is not
     * valid. Validity is checked locally - no HTTP queries are used. The key
     * may have been invalidated on the server without notification at this
     * point. */
    void setKey(const std::string &key);
    // Get the key currently set
    std::string getKey() const;
    /* Using a synchronous HTTPS query, check with the server whether updates
     * are available for the current key. Throws on error: one of the
     * QueryError subclasses if there has been a query error, or InvalidKey if
     * the key is either not set or is not valid (i.e. invalidated
     * server-side). */
    bool isUpdateAvailable() const;
    /* etc. */
};
The key itself would, as seen above, be a string that, through its encoding, contains some kind of information about its validity - e.g. some kind of CRC to know whether the entered string is valid. The rest of the key - including its expiration date - could be managed server-side, although expiration information could also be encoded in the key itself (but that would mean changing the key if the user extends the license).
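As a rough illustration (Python here just to sketch the idea - the payload layout and the Base32 encoding are arbitrary choices of mine, not a prescribed format), a key with a built-in checksum might look like this; the point is only that the client can verify the checksum locally before ever talking to the server:

import base64
import struct
import zlib

def make_key(customer_id, product_id):
    # Payload layout is made up for the example: 4-byte customer id + 2-byte product id
    payload = struct.pack(">IH", customer_id, product_id)
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return base64.b32encode(payload + struct.pack(">I", crc)).decode("ascii").rstrip("=")

def is_key_well_formed(key):
    # Local check only - the key may still have been revoked server-side
    try:
        raw = base64.b32decode(key + "=" * (-len(key) % 8))
    except Exception:
        return False
    if len(raw) < 5:
        return False
    payload, crc = raw[:-4], struct.unpack(">I", raw[-4:])[0]
    return (zlib.crc32(payload) & 0xFFFFFFFF) == crc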
As for the server-side, when presented with a key and a request for an update, the server would
check the validity of the key
check whether any updates are available for the software the key is for (information that may or may not be part of the key itself, depending on whether you want to manage it in a database or want it to be part of the license key)
copy or hardlink the file into a place it can be downloaded, with a unique and hard-to-guess name
provide the URL for download to the client - e.g. in an XML stream returned for the HTTP request
start a time-out to remove the file if it hasn't been downloaded within N seconds/minutes/hours
remove the file once it has been downloaded by the client
If a download fails, it can be restarted or asked for again. If you want to charge for individual downloads, you'd need the client to confirm a successful download - or report an error on failure - so you don't count individual downloads twice.
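As a rough sketch (Python/Flask purely for illustration; key_is_valid and latest_update_for are placeholder helpers standing in for your key database and release store), the server side of that flow might look like:

import os
import secrets
import threading
from flask import Flask, Response, request

app = Flask(__name__)
DOWNLOAD_DIR = "/var/www/downloads"  # served over HTTPS by the web server
DOWNLOAD_TTL = 3600                  # seconds before an unclaimed file is removed

def key_is_valid(key):               # placeholder: CRC check + key database lookup
    return bool(key)

def latest_update_for(key):          # placeholder: path to the newest release, or None
    return None

@app.route("/check-update")
def check_update():
    key = request.args.get("key", "")
    if not key_is_valid(key):
        return Response("<error>invalid key</error>", status=403, mimetype="application/xml")
    update_file = latest_update_for(key)
    if update_file is None:
        return Response("<update available='false'/>", mimetype="application/xml")
    # Hard-link the release under a hard-to-guess name and schedule its removal
    token = secrets.token_urlsafe(32)
    link = os.path.join(DOWNLOAD_DIR, token)
    os.link(update_file, link)
    threading.Timer(DOWNLOAD_TTL,
                    lambda: os.path.exists(link) and os.remove(link)).start()
    xml = "<update available='true' url='https://updates.example.com/%s'/>" % token
    return Response(xml, mimetype="application/xml")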
Of course, all this is off the top of my head - there might be some details I haven't thought of here. Each of the ingredients is pretty easy to come by. An open source version of Arachnida is available on SourceForge, and I have some code to encode license keys if you need it (I used it for another of my products), but I'm sure you can write that yourself if you don't want to use mine.
A few things you might want to think of are secure authentication of your clients - so they don't share license keys - and securing your HTTP connection so you don't end up publishing your updates to the world, etc. Neither the server nor the client need be very complicated to implement, as most of the building blocks already exist.
HTH
rlc


Amazon Connect Stop Call Recording

Is it possible to stop call recordings in Amazon Connect so the customer and agent can discuss sensitive material without being recorded?
I am aware of the "Set recording behavior" blocks, but they don't seem to work on a call that has already been started with an agent with call recording enabled. Transferring to another contact flow with the recording type set to none doesn't seem to make a difference, and the call carries on being recorded.
I am aware of the sample workflow "Sample secure input with agent" as outlined in this AWS article: https://aws.amazon.com/premiumsupport/knowledge-center/disable-recording-amazon-connect. This does work; however, it relies on the customer entering payment details whilst the agent is on hold - preventing the agent and customer from having a sensitive conversation.
It seems the only way to stop recording once it has been enabled is to put the agent on hold?
I don't know whether you have solved your issue yet, but Amazon has updated the Amazon Connect API to allow you to suspend the recording.
Boto3 implementation
response = client.suspend_contact_recording(
    InstanceId='string',
    ContactId='string',
    InitialContactId='string'
)
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/connect.html#Connect.Client.suspend_contact_recording
They also allow you to start, stop, and pause recording.
We have just started to review this for a POC: turn recording off by default for a group of queues and allow agents to start, stop, and pause recording as needed.
You can also read this Amazon blog post, which should help you fully implement the solution.
https://aws.amazon.com/blogs/contact-center/pausing-and-resuming-call-recordings-with-a-new-api-in-amazon-connect/#:~:text=is%20not%20recorded.-,End%20the%20call.,you%20start%20and%20stop%20it.
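As a rough sketch (the instance and contact IDs are placeholders), suspending recording around the sensitive part of the call and then resuming it would look something like this with boto3:

import boto3

connect = boto3.client("connect")

def pause_recording(instance_id, contact_id, initial_contact_id):
    # Recording stops here; anything said now is not captured
    connect.suspend_contact_recording(
        InstanceId=instance_id,
        ContactId=contact_id,
        InitialContactId=initial_contact_id,
    )

def resume_recording(instance_id, contact_id, initial_contact_id):
    # Recording picks up again for the rest of the call
    connect.resume_contact_recording(
        InstanceId=instance_id,
        ContactId=contact_id,
        InitialContactId=initial_contact_id,
    )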
After speaking with architects at AWS, it appears the desired and designed-for solution is to have the customer automatically enter sensitive information with the agent on hold and call recording turned off, in order to remain PCI compliant.
If that is not an option there are workarounds possible that go against the way Amazon Connect has been designed. In order to turn off call recording once it has been enabled on a call, a new contact ID must be established. To do this you would need to transfer the user to your external phone number again or transfer to a queue and disable call recording in that new flow.
This brings in extra issues around how to get the customer back to the original agent once the sensitive information has been discussed. It also means you would potentially have 3+ contact IDs for the same transaction, with call recording spread across them.

How can I have my Filter before a javax.websocket Upgrade

I want to write my own "ServletContainerInitializer" that adds my local filter to the ServletContext. I also want to manage the ordering of ServletContainerInitializer invocation so that my local filter gets registered and hit by the request before the WebSocket upgrade filter.
I want to know how to initialize my local ServletContainerInitializer.
First, ServletContainerInitializers are not ordered; that feature is not part of the Servlet spec. You can't accomplish that part of your question. (Maybe in a future version of the Servlet spec.)
Next, filtering on WebSocket Upgrade requests is highly discouraged, and a cause for a large number of problems in WebSocket. You have to be very careful to not do any of the following.
Do not access anything on the Response object
Do not wrap the Request or Response objects
Do not access the Request input streams or readers
Do not access the Response output streams or writers
Do not add headers
Do not change headers
Do not access request HttpSession
Do not access request user principal
Do not access request authentication / authorization methods
Do not access request parts (multipart/form-data)
Do not access request parameters
Do not access ServletContext
Do not access request.getScheme or isSecure
Do not remove things from the request (attributes, headers, parameters, etc)
In short, the only safe things you can do are
request.getAttribute(String name)
request.getContextPath()
request.getCookies()
request.getHeader(String name)
request.getIntHeader(String name)
request.getLocalName()
request.getLocalPort()
request.getPathInfo()
request.getPathTranslated()
request.getQueryString()
request.getRemoteAddr()
request.getRemotePort()
request.getRequestURI()
request.getRequestURL()
request.getServerName()
request.getServerPort()
All other accesses on the request or response objects will change the state of the request and prevent an upgrade.
The fact that Jetty has a WebSocketUpgradeFilter is just our choice of implementation for the JSR-356 (aka javax.websocket) spec. It is added by a server-side ServletContainerInitializer and is forced to be first, always.
In practice you should work with the expectation that upgrades occur before the Servlet processing (and this includes filters), as this is how the spec is written. There are open bugs against the spec about how interactions with filters and whatnot should be treated, but those are currently unanswered and loosely scheduled for a future version of the javax.websocket spec.
Future versions of Jetty will likely change from using a filter to using something internal that cooperates at the path mapping level, merging the logic from the Servlet spec and the WebSocket spec into a single new set of rules.
Since this question comes up often, I've ticked the community wiki flag.
The number one reason this gets asked is because there is some authentication or authorization logic built into a filter on your project.
If this is the case, you have 2 options.
Refactor the authentication and/or authorization logic out into a standalone class, unassociated with your filter.
Build a new Filter and a new ServerEndpointConfig.Configurator that use this now-common logic to accomplish the end results you need. Note that you do not have access to the entire HttpServletRequest object during a potential WebSocket upgrade; you only have access to the HandshakeRequest object's contents (you can see the restrictions now).
Use the Servlet spec and container properly and implement/configure security at the container level, which will always execute before WebSocket, servlets, or filters - thus dropping your security-based Filters entirely.

How to send mass mail in Django and get status for every message?

I'm creating a web app for handling various surveys. An admin can create his own survey and ask users to fill it in. Users are defined by target groups assigned to the survey (so only users in the survey's target group can fill in the survey).
One of the methods to define a target group is a "Token target group". An admin can decide to generate e.g. 25 tokens. After that, the survey can be accessed by anyone who uses a special link (containing the token, of course).
So now to the main question:
Every token might have an e-mail address associated with it. How can I safely send e-mails containing the access link for the survey? I might need to send a few thousand e-mails (max. 10,000 I believe). This is an extreme example, and such huge mailings would be needed only occasionally.
But I would also like to be able to keep track of each e-mail message's status (was it sent, or was there an error?). I would also like to make sure that the SMTP server doesn't block this mailing. It would also be nice if the application remained responsive :) (the task should run in the background).
What is the best way to handle that problem?
As far as I'm concerned, the standard Django mailing feature won't be much help here. People report that setting up a connection and looping through messages calling send() on them takes forever. It wouldn't run "in the background", so I believe this could have a negative impact on the application's responsiveness, right?
I read about django-mailer, but as far as I understood the docs, it doesn't allow keeping track of the message status. Or does it?
What are my other options?
Not sure about the rest, but regardless, for backgrounding the task (no matter how you eventually do it) you'll want to look at Celery.
The key here is to reuse the connection and not open a new one for each email. Here is the documentation on the subject.
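A rough sketch of this, combined with a Celery task for backgrounding (the SurveyToken model and its fields are invented for the example; they are not part of Django or django-mailer):

from celery import shared_task
from django.core.mail import EmailMessage, get_connection

from surveys.models import SurveyToken  # hypothetical model with `email`, `key`, `status` fields

@shared_task
def send_survey_invitations(token_ids):
    connection = get_connection()  # one SMTP connection for the whole batch
    connection.open()
    try:
        for token in SurveyToken.objects.filter(pk__in=token_ids):
            message = EmailMessage(
                subject="Please fill in our survey",
                body="Your personal link: https://example.com/survey/?token=%s" % token.key,
                to=[token.email],
                connection=connection,
            )
            try:
                token.status = "sent" if message.send() else "failed"
            except Exception as exc:
                token.status = "error: %s" % exc
            token.save()
    finally:
        connection.close()

This gives you a per-message status you can report back to the admin, while the web process stays responsive because the whole loop runs in a worker.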

Architecture for robust payment processing

Imagine 3 system components:
1. External ecommerce web service to process credit card transactions
2. Local Database to store processing results
3. Local UI (or win service) to perform payment processing of the customer order document
The external web service is obviously not transactional, so how to guarantee:
1. results are eventually persisted to the database when received from the web service, even if the database is not accessible at that moment (network issue, DB timeout)
2. clients are prevented from processing the customer order while a payment initiated by another client has not yet been successfully persisted to the database (and is waiting in some kind of recovery queue)
The aim is to do the processing with non-transactional system components and guarantee that the transaction won't be repeated by another process in case of failure.
(Please look at it in the context of post-sale payment processing, where multiple operators might attempt manual payment processing; not a web checkout application.)
Ask the payment processor whether they can detect duplicate transactions based on an order ID you supply. Then if you are unable to store the response due to a database failure, you can safely resubmit the request without fear of double-charging (at least one PSP I've used returned the same response/auth code in this scenario, along with a flag to say that this was a duplicate).
Alternatively, just set a flag on your order immediately before attempting payment, and don't attempt payment if the flag was already set. If an error then occurs during payment, you can investigate and fix the data at your leisure.
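In code, the flag idea is essentially an atomic claim on the order before you talk to the gateway. A rough Python sketch (the orders table layout and the charge() callable are placeholders, not any real payment API):

import sqlite3

def start_payment(conn, order_id, charge):
    # Atomically claim the order: the UPDATE only succeeds if nobody else has
    # started a payment, so two operators can't both charge the same order.
    claimed = conn.execute(
        "UPDATE orders SET payment_state = 'pending' "
        "WHERE id = ? AND payment_state = 'new'",
        (order_id,),
    ).rowcount
    conn.commit()
    if not claimed:
        raise RuntimeError("order %s is already being processed" % order_id)

    # Pass our own order id as the merchant reference so the gateway can spot
    # duplicates if the request ever has to be resubmitted after a DB failure.
    result = charge(order_id)

    try:
        conn.execute(
            "UPDATE orders SET payment_state = ?, auth_code = ? WHERE id = ?",
            ("paid" if result["approved"] else "declined", result.get("auth_code"), order_id),
        )
        conn.commit()
    except sqlite3.Error:
        # Database unavailable: persist the gateway result somewhere durable
        # (disk, queue) for later recovery; do NOT retry the charge itself.
        raise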
I'd be reluctant to go down the route of trying to automatically cancel the order and resubmitting, as this just gets confusing (e.g. what if cancelling fails - should you retry or not?). Best to keep the logic simple so when something goes wrong you know exactly where you stand.
In any system like this, you need robust error handling and error reporting. This is doubly true when it comes to dealing with payments, where you absolutely do not want to accidentally take someone's money and not deliver the goods.
Because you're outsourcing your payment handling to a 3rd party, you're ultimately very reliant on the gateway having robust error handling and reporting systems.
In general then, you hand off control to the payment gateway and start a task that waits for a response from the gateway, which is either 'payment accepted' or 'payment declined'. When you get that response you move onto the next step in your process and everything is good.
When you don't get a response at all (time out), or the response is invalid, then how you proceed very much depends on the payment gateway:
If the gateway supports it, send a 'cancel payment' style request. If the payment cancels successfully then you probably want to send the user to a 'sorry, please try again' style page.
If the gateway doesn't support canceling, or you have no communications to the gateway then you will need to manually (in person, such as telephone) contact the 3rd party to discover what went wrong and how to proceed. To aid this you need to dump as much detail as you have to error logs, such as date/time, customer id, transaction value, product ids etc.
Once you're back on your site (and the payment is accepted) then you're much more in control of errors, but in brief, if you can't complete the order, then you should either dump the details to disk (such as a CSV file for manual handling) or contact the gateway to cancel the payment.
It's also worth having a system in place to track errors as they occur, and if an excessive number occur then consider what should happen. If it's a high-traffic site, for example, you may want to temporarily prevent further customers from placing orders whilst the issue is investigated.
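A small sketch of the timeout-and-log-everything part (Python with the requests library against a made-up gateway URL and payload; not any particular PSP's API):

import logging
import requests

log = logging.getLogger("payments")

def submit_payment(gateway_url, payload):
    try:
        reply = requests.post(gateway_url, json=payload, timeout=30)
        reply.raise_for_status()
        return reply.json()  # e.g. {"status": "accepted"} or {"status": "declined"}
    except (requests.RequestException, ValueError) as exc:
        # Timed out, couldn't connect, HTTP error, or unparseable response:
        # dump enough context for someone to sort it out manually with the gateway.
        log.error(
            "payment gateway call failed: customer=%s order=%s amount=%s error=%s",
            payload.get("customer_id"), payload.get("order_id"), payload.get("amount"), exc,
        )
        raise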
Distributed messaging.
When your payment gateway returns, submit a message to a durable queue that guarantees a handler will eventually get it and process it. The handler would update the database. Should a failure occur at that point, the handler can leave the message in the queue, repost it to the queue, or post an alternate message.
Should something occur later that invalidates the transaction, another message could be queued to "undo" the change.
There's a fair amount of buzz lately about eventual consistency and distributed messaging. NServiceBus is the new component hotness. I suggest looking into this; I know we are.
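A rough sketch of that pattern using RabbitMQ via pika (the queue name and handler are made up; NServiceBus or any other durable messaging stack would play the same role):

import json
import pika

PARAMS = pika.ConnectionParameters("localhost")
QUEUE = "payment-results"

def publish_payment_result(result):
    # Publish the gateway response persistently so it survives a broker restart.
    conn = pika.BlockingConnection(PARAMS)
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=json.dumps(result),
        properties=pika.BasicProperties(delivery_mode=2),
    )
    conn.close()

def run_handler(persist_to_db):
    # The handler only acks once the database update succeeds; if the DB is
    # down the message is requeued and retried later.
    conn = pika.BlockingConnection(PARAMS)
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)

    def on_message(ch, method, properties, body):
        try:
            persist_to_db(json.loads(body))
            ch.basic_ack(delivery_tag=method.delivery_tag)
        except Exception:
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()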

API Design: How should distinct classes of errors be handled from an asynchronous XMLHTTP call?

I have a legacy VB6 application that needs to make asynchronous calls to a web service. The web service provides a search method that allows end-users to query a central database and view the results from within the application. I'm using MSXML2.XMLHTTP to make the requests, and have written a SearchWebService class that encapsulates the web service call and the code to handle the response asynchronously.
Currently, the SearchWebService raises one of two events to the caller: SearchCompleted and SearchFailed. If the call completes successfully, a SearchCompleted event is raised that contains the search results in a parameter to the event. A SearchFailed event is raised when any type of failure is detected, which can be anything from an improperly-formatted URL (this is possible because the URL is user-configurable), to low-level network errors such as "Host not found", to HTTP errors such as internal server errors. It returns an error message string to the end-user (which is extracted from the web service response body, if present, or from the HTTP status code text if the response has no body, or translated from the network error code if a network error occurs).
Because of various security requirements, the calling application does not access the web service directly, but instead accesses it through a proxy web server running at the customer site, which in turn accesses the actual web service through a VPN. However, the SearchWebService doesn't know that the calling application is accessing the web service through a proxy: it's just given a URL and told to make the request. The existence of the proxy is an application-level requirement.
The problem is that from an end-user perspective, it's important that the calling application be able to distinguish between low-level network errors versus HTTP errors from the web service, and to distinguish proxy errors from remote web server errors. For example, the application needs to know if a request failed because the proxy server is down, or because the remote web service that the proxy is accessing is down. An application-specific message needs to be presented to the end-user in each case, such as "Search web service proxy server appears to be down. The proxy server may need to be restarted" versus "The proxy is currently running but the remote web server appears to be unavailable. Please contact (name of person in charge of the remote web server)." I could handle this directly in the SearchWebService class, but it seems wrong to generate these application-specific error messages from such a generic class (and the class might be used in environments that don't require a proxy, where the error messages would no longer make sense).
This distinction is important for troubleshooting: a proxy server problem can usually be resolved by the customer, but a remote web server error has to be handled by a third party.
I was thinking one way to handle this would be to have the SearchWebService class detect different types of errors and raise different events in each case. For example, instead of a single SearchFailed event, I could have a NetworkError event for low-level network errors (which would indicate a problem accessing the proxy server), a ConfigurationError event for invalid properties on the SearchWebService class (such as passing an improperly-formatted URL), and a ServiceError for errors that occur on the remote web server (implying that the proxy is working properly but the remote server returned an error).
Now that I think about it, there is also an additional error scenario: it could be possible that the proxy server is running properly, but the remote web server is down, or the proxy server has been misconfigured.
Is the approach of using multiple error events to classify different classes of error a reasonable solution to this problem? For the last scenario (the proxy is running but the remote server cannot be reached), I'm guessing I may have to set up the proxy to return a specific HTTP error code so that client can detect this situation (i.e. something more specific than a 500 response).
Originally I kept the single SearchFailed event and simply added an additional errorCode parameter to the event, but that got messy quickly, especially in cases where there wasn't a logical error code to use (such as if VB6 raises a "real" error, i.e. if the XMLHTTP class isn't registered).
I think that some ideas I've used with Java exceptions may apply here.
Having a large number of different Exceptions gets pretty messy, yet we need to give enough detail to the user so we don't want to lose information.
Hence I have a small number of specific Exceptions, which I guess would correspond to your Events:
InvalidRequestEvent: Used when the user specifies bad information
TransientErrorEvent: used when there are infrastructure issues and a retry might work.
I tend to work in environments where we have clusters of servers, so if a user request hits a dying server and he resubmits, he'll probably get a good one; hence from his perspective a simple retry often works. However, sometimes the error is with a service such as the network or the database, in which case the user needs diagnostic information to report to the helpdesk. Hence we need to decide on the extra information to put into the exception. This is (if I understand you correctly) your question.
In the case of InvalidRequestException we would be giving some information about the problems with the input. It could be along the lines of "Mismatched parentheses" or "Unknown column CUTSOMER in table ORDER". In the case of TransientErrorException it could be "Proxy server is down".
Now, depending upon your exact requirements, you may not actually choose to put that text in the Exception, but rather an error number which the presentation layer converts to a locale-specific string (English, French ...).
So either Exception might contain something like this (sorry for the Java syntax, but I hope the idea is clear):
class BaseException {
    String ErrorText;       // the error text itself
    // OR, if you want to allow for internationalization:
    int ErrorCode;          // my application-specific code, corresponds to text held by the UI
    String[] params;        // specific parameters to be substituted in the error text
                            // (CUTSOMER and ORDER in my example above)
    int SystemErrorCode;    // if you have an underlying error code it goes here
    String SystemErrorText; // any further diagnostics you might need to give to
                            // the user so that they can report the problem to the
                            // help desk
    // OR, instead of the text (this is something I've seen done):
    int SystemErrorTag;     // a unique id for this particular error occurrence.
                            // The server systems will label their messages in the
                            // server logs with it. Users just tell the help desk this
                            // number; they don't need to read detailed server error text.
}