Cancelling an upload task - C++

I've done some reading regarding the Azure SDK and in order to cancel a task you seemingly need to pass in a cancellation_token.
My upload code is very simple:
azure::storage::cloud_block_blob blockBlob = container.get_block_blob_reference(fileLeaf.wstring());
auto task = blockBlob.upload_from_file_async(fullFilePath);
However, some files I upload are potentially very large, and I would like to be able to cancel this operation. I'll also likely use continuations, and would need all of those cancelled too, if that's possible.
The problem I'm having is I can't see any way of attaching a cancellation_token to the task.
Any pointers?

There is sample code using the PPL library; I referred to it and adapted the code to cancel a task using the PPLX library from the C++ REST SDK, which is used by the Azure Storage SDK for C++. Please try the code below.
#include <pplxtasks.h>

/* Declare a cancellation_token_source and get the cancellation_token,
 * please see http://microsoft.github.io/cpprestsdk/classpplx_1_1cancellation__token__source.html
 */
pplx::cancellation_token_source cts;
auto token = cts.get_token();

// Pass the cancellation_token to the task via the then method, please see https://msdn.microsoft.com/en-us/library/jj987787.aspx
task.then([]{}, token).wait();

// Cancel the task
cts.cancel();
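Putting it together, here is a minimal end-to-end sketch (reusing blockBlob and fullFilePath from the question; note that the token cancels the continuation chain, and whether the in-flight transfer itself is torn down depends on the SDK version):

#include <pplxtasks.h>

pplx::cancellation_token_source cts;
auto token = cts.get_token();

// Chain a continuation and hand it the token; the continuation is
// abandoned if the token is cancelled before it runs.
auto task = blockBlob.upload_from_file_async(fullFilePath)
    .then([]()
    {
        // Runs only if the upload completed and the token was not cancelled.
    }, token);

// Later, e.g. from a "Cancel" button handler on another thread:
cts.cancel();

try
{
    task.get();  // throws pplx::task_canceled if the chain was cancelled
}
catch (const pplx::task_canceled&)
{
    // Clean up after the cancelled upload.
}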
Hope it helps.

AWS Amplify federated google login work properly on browser but dont work on Android

The issue is that when I run federated authentication via the Amplify Auth method in the browser, it works fine, but when I try to run it on my mobile device it throws the error "No user found" when I use Auth.currentSession(), while the same call works in the browser.
I tried to search for this type of issue, but I only found results related to ionic-cordova-google-plugin, not to the AWS Amplify federated login issue.
Updating the question after it was closed for too little debugging information, without anyone asking for more information.
These are issues raised on GitHub that match my problem:
Issue No. 5351 on amplify-js, which is still in the open state:
https://github.com/aws-amplify/amplify-js/issues/5351
Another issue, 3537, which is also still open.
These two issues have the same scenario as mine. I hope this is enough debugging information; if more is required, ask in a comment instead of closing without notification; that is bullying a beginner, not helping.
I fixed the above problem by referring to a comment, or rather a workaround fix.
This link will take you directly to that comment: link to comment.
First read that comment, as it will give you an overall idea of what exactly the issue is, instead of jumping directly to the solution.
Once you have read the comment, you may still be a little unclear about the implementation, as the author uses Capacitor and not everyone is using Capacitor.
In my implementation I ignored this part, as I am not using Capacitor:
App.addListener('appUrlOpen')
Now lets go to main step where we are fixing this issue, I am using deep links to redirect to my application
this.platform.ready().then(() => {
  this.deeplinks
    .route({
      "/success.html": "success",
      "/logout.html": "logout",
    })
    .subscribe(
      (match: any) => {
        const fragment = JSON.stringify(match).split('"fragment":"')[1];
        // This link can be any link, based on your requirement.
        // What I am doing is passing along all the data I get in the fragment.
        // The fragment consists of id_token, state, code and response type.
        // These need to be passed to Ionic in order for Amplify to run its magic.
        document.location.href = `http://192.168.1.162:8100/#${fragment}`;
      },
      (nomatch) => {
        console.log("Got a deeplink that didn't match", nomatch);
      }
    );
});
I got this idea from the issue in which the developer mentioned sending code and state along with the application's deep-link URL.

Should I have concern about datastoreRpcErrors?

When I run Dataflow jobs that write to Google Cloud Datastore, sometimes I see the metrics show that I had one or two datastoreRpcErrors:
Since these datastore writes usually contain a batch of keys, I am wondering in the situation of RpcError, if some retry will happen automatically. If not, what would be a good way to handle these cases?
tl;dr: By default, datastoreRpcErrors will be retried automatically, up to 5 times.
I dug into the code of datastoreio in the Beam Python SDK. It looks like the final entity mutations are flushed in batches via DatastoreWriteFn().
# Flush the current batch of mutations to Cloud Datastore.
_, latency_ms = helper.write_mutations(
    self._datastore, self._project, self._mutations,
    self._throttler, self._update_rpc_stats,
    throttle_delay=_Mutate._WRITE_BATCH_TARGET_LATENCY_MS/1000)
The RPCError is caught by this block of code in write_mutations in the helper; there is a decorator @retry.with_exponential_backoff on the commit method, the default number of retries is set to 5, and retry_on_rpc_error defines the concrete RPCError and SocketError reasons that trigger a retry.
for mutation in mutations:
  commit_request.mutations.add().CopyFrom(mutation)

@retry.with_exponential_backoff(num_retries=5,
                                retry_filter=retry_on_rpc_error)
def commit(request):
  # Client-side throttling.
  while throttler.throttle_request(time.time()*1000):
    ...

  try:
    response = datastore.commit(request)
    ...
  except (RPCError, SocketError):
    if rpc_stats_callback:
      rpc_stats_callback(errors=1)
    raise
  ...
I think you should first of all determine which kind of error occurred, in order to see what your options are.
In the official Datastore documentation there is a list of all the possible errors and their error codes. Fortunately, they come with recommended actions for each.
My advice is to implement their recommendations and look for alternatives if they are not effective for you.

C++ Poco ODBC Transactions - AutoCommit mode

I am currently attempting to use transactions in my C++ app, but I have a problem with ODBC's auto-commit mode.
I am using the POCO libraries to create a connection to a PostgreSQL database on the same machine. Currently, I can send data to this database as single statements, but I cannot get my head around how to use Poco's transaction facilities to send this data more quickly.
As I have several thousand records to insert, continuing to use single insert statements is extremely slow and impractical, so I am trying to use Poco's transactions to speed this up (by a fair bit).
The error I am encountering is theoretically a simple one; Poco is throwing the following:
'Invalid access: Session is in auto commit mode.'
I understand that, as a result, I should somehow set "auto commit" to false, as auto-commit only allows me to commit data to the database line by line rather than as a single transaction.
The problem is how to set this.
Currently, I have a session created from Session.h that looks a lot like this:
session = new Poco::Data::Session(
    "ODBC",
    connection_data.str()
);
Where connection_data is a simple stringstream with the login information, password, database, server and "Driver={PostgreSQL ANSI};" to tell ODBC to use PostgreSQL's driver.
I have tried just setting the "autocommit" feature to false through the session's setFeature/setProperty methods; this, of course, was to no avail (it was more of a last-ditch attempt at that point):
session->setFeature("AUTOCOMMIT", false);
Looking around, I saw a possible alternative: creating an ODBC SessionImpl directly from ODBC/session/SessionImpl.h instead of using the generic method above, and then creating a new Session object from it.
The benefit of this is that ODBC's SessionImpl references auto-commit mode in the header, which suggests it can handle this:
void autoCommit(const std::string&, bool val);
/// Sets autocommit property for the session.
However, having not used SessionImpl before, I cannot guarantee this will work, or that I can get it to work with the limited documentation available.
I am using C++03 (not 11) with Visual Studio 2015, Poco 1.7.5, and Boost (where needed).
Would anyone know the correct way of setting this feature (above), or an alternative method of achieving this?
edit: Looking at the source of poco, at:
https://github.com/pocoproject/poco/blob/develop/Data/ODBC/src/SessionImpl.cpp#L153
The property seems to be named autoCommit, and looking at
https://github.com/pocoproject/poco/blob/develop/Data/include/Poco/Data/AbstractSessionImpl.h#L120
the case of the property names seems to matter. So, does it help if you use session->setFeature("autoCommit", false);?
Can't you just call session->begin(); and session->commit(); on the corresponding Session object?
What is returned by session->canTransact()?
According to the doc begin() will start a new transaction, the doc does not mention any property that needs to be set before or after.
See: https://pocoproject.org/docs/Poco.Data.Session.html
I also faced a similar issue.
First of all, before begin() you need:
m_ses.setFeature("autoCommit", false);
m_ses.begin();
The second issue is that the "autoCommit" feature stays false for all subsequent sessions, so for the next session don't forget to call:
session.setFeature("autoCommit", true);

Can Amazon Simple Workflow (SWF) be made to work with jRuby?

For uninteresting reasons, I have to use jRuby on a particular project where we also want to use Amazon Simple Workflow (SWF). I don't have a choice in the jRuby department, so please don't say "use MRI".
The first problem I ran into is that jRuby doesn't support forking, and SWF activity workers love to fork. After hacking through the SWF Ruby libraries, I was able to figure out how to attach a logger and how to prevent forking, which was tremendously helpful:
AWS::Flow::ActivityWorker.new(
  swf.client, domain, "my_tasklist", MyActivities
) do |options|
  options.logger = Logger.new("logs/swf_logger.log")
  options.use_forking = false
end
This prevented forking, but now I'm hitting more exceptions deep in the SWF source code having to do with Fibers and the context not existing:
Error in the poller, exception:
AWS::Flow::Core::NoContextException: AWS::Flow::Core::NoContextException stacktrace:
"aws-flow-2.4.0/lib/aws/flow/implementation.rb:38:in 'task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:292:in 'respond_activity_task_failed'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:204:in 'respond_activity_task_failed_with_retry'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:335:in 'process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:388:in 'poll_and_process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:447:in 'run_once'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:419:in 'start'",
"org/jruby/RubyKernel.java:1501:in `loop'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:417:in 'start'",
"/Users/trcull/dev/etl/flow/etl_runner.rb:28:in 'start_workers'"
This is the SWF code at that line:
# @param [Future] future
#   Unused; defaults to **nil**.
#
# @param block
#   The block of code to be executed when the task is run.
#
# @raise [NoContextException]
#   If the current fiber does not respond to `Fiber.__context__`.
#
# @return [Future]
#   The tasks result, which is a {Future}.
#
def task(future = nil, &block)
  fiber = ::Fiber.current
  raise NoContextException unless fiber.respond_to? :__context__
  context = fiber.__context__
  t = Task.new(nil, &block)
  task_context = TaskContext.new(:parent => context.get_closest_containing_scope, :task => t)
  context << t
  t.result
end
I fear this is another flavor of the same forking problem and also fear that I'm facing a long road of slogging through SWF source code and working around problems until I finally hit a wall I can't work around.
So, my question is: has anyone actually gotten jRuby and SWF to work together? If so, is there a list of steps and workarounds somewhere I can be pointed to? Googling for "SWF and jRuby" hasn't turned up anything so far, and I'm already a day and a half into this task.
I think the issue might be that aws-flow-ruby doesn't support Ruby 2.0. I found this PDF dated Jan 22, 2015.
1.2.1 Tested Ruby Runtimes
The AWS Flow Framework for Ruby has been tested with the official Ruby 1.9 runtime, also known as YARV. Other versions of the Ruby runtime may work, but are unsupported.
I have a partial answer to my own question. The answer to "Can SWF be made to work on jRuby" is "Yes...ish."
I was, indeed, able to get a workflow working end-to-end (and even make calls to a database via JDBC, the original reason I had to do this). So, that's the "yes" part of the answer. Yes, SWF can be made to work on jRuby.
Here's the "ish" part of the answer.
The stack trace I posted above is the result of SWF trying to raise an ActivityTaskFailedException due to a problem in some of my activity code. That part is my fault. What's not my fault is that the superclass of ActivityTaskFailedException has this code in it:
def initialize(reason = "Something went wrong in Flow",
               details = "But this indicates that it got corrupted getting out")
  super(reason)
  @reason = reason
  @details = details
  details = details.message if details.is_a? Exception
  self.set_backtrace(details)
end
When your activity throws an exception, the "details" variable you see above is filled with a String. MRI is perfectly happy to take a String as an argument to set_backtrace(), but jRuby is not, and jRuby throws an exception saying that "details" must be an Array of Strings. This exception blows through all the nice error catching logic of the SWF library and into this code that's trying to do incompatible things with the Fiber library. That code then throws a follow-on exception and kills the activity worker thread entirely.
So, you can run SWF on jRuby as long as your activity and workflow code never, ever throws exceptions because otherwise those exceptions will kill your worker threads (which is not the intended behavior of SWF workers). What they are designed to do instead is communicate the exception back to SWF in a nice, trackable, recoverable fashion. But, the SWF code that does the communicating back to SWF has, itself, code that's incompatible with jRuby.
To get past this problem, I monkey-patched AWS::Flow::FlowException like so:
def initialize(reason = "Something went wrong in Flow",
               details = "But this indicates that it got corrupted getting out")
  super(reason)
  @reason = reason
  @details = details
  details = details.message if details.is_a? Exception
  details = [details] if details.is_a? String
  self.set_backtrace(details)
end
Hope that helps someone in the same situation as me.
I'm using JFlow, which lets you start SWF flow activity workers with JRuby.

Making use of libcurl progress data

In Xcode I set up a stack of curl_easy_setopt() calls for uploading a file to a server/API, and (after a lot of trial and error) it all works like a charm. After going through several other Q&As I also managed to set up an easy CURLOPT_PROGRESSFUNCTION that looks like this:
static int my_progress_func(void *p,
                            double t,
                            double d,
                            double ultotal,
                            double ulnow)
{
    printf("(%g %%)\n", ulnow*100.0/ultotal);
    // globalProgressNumber = ulnow*100.0/ultotal;
    return 0;
}
As the upload progresses, "0%.. 16%.. 58%.. 100%" is output to the console; splendid.
What I'm not able to do is actually USE this data (globalProgressNumber), e.g. for my NSProgressIndicator; curl kind of hijacks my app and doesn't allow any other input/output until the transfer is complete.
I tried updating IBOutlets from my_progress_func (e.g. [_myLabel setStringValue:globalProgressNumber];), but the static C function doesn't allow that.
Neither is [self] available, so posting to NSNotificationCenter isn't possible:
[[NSNotificationCenter defaultCenter]
    postNotificationName:@"Progressing"
                  object:self];
My curl function runs from my main class/window (NSPanel).
Any good advice on how to achieve a realtime, updating element in my .xib?
CURL [...] doesn't allow any other input/output until the progress is complete.
Are you calling curl_easy_perform from the main thread? If so, you should not do that, since this function is synchronous, i.e. it blocks until the transfer finishes.
In other words, long-running tasks (e.g. with network I/O) must run on a separate thread, while UI code (e.g. updating the text of a label) must run on the main thread.
how to achieve a realtime/updating element
You should definitely take care to perform the curl transfer in a separate thread.
This can easily be achieved by wrapping the transfer in an NSOperation with a custom protocol to notify a delegate (e.g. your view controller) of the progress; see the sketch after this list:
- push your operation onto an NSOperationQueue,
- the operation queue will take care of detaching the transfer and running it on another thread,
- on the operation side, you should still use the progress function, and set the operation object as the opaque object via the CURLOPT_PROGRESSDATA curl option. By doing so, each time the progress function is called you can retrieve the operation object by casting the void *clientp opaque pointer. Then notify the delegate of the current progress on the main thread (e.g. with performSelectorOnMainThread) to make sure you can perform UI updates, such as refreshing your NSProgressIndicator.
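To make that last point concrete, here is a minimal sketch of the CURLOPT_PROGRESSDATA plumbing (UploadContext, on_progress, and configure_progress are made-up names for illustration, standing in for the operation object and its delegate callback):

#include <curl/curl.h>

// Hypothetical context handed to the callback; in the NSOperation approach
// this would be the operation object itself.
struct UploadContext
{
    void (*on_progress)(double percent, void *ui);  // marshal to the main thread here
    void *ui;                                       // e.g. your controller
};

static int my_progress_func(void *clientp,
                            double dltotal, double dlnow,
                            double ultotal, double ulnow)
{
    UploadContext *ctx = static_cast<UploadContext *>(clientp);
    if (ctx && ctx->on_progress && ultotal > 0.0)
        ctx->on_progress(ulnow * 100.0 / ultotal, ctx->ui);
    return 0;  // returning non-zero aborts the transfer
}

void configure_progress(CURL *curl, UploadContext *ctx)
{
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);  // enable the callback
    curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, my_progress_func);
    curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, ctx);
    // Then run curl_easy_perform(curl) on a background thread/queue and,
    // inside on_progress, hop back to the main thread before touching the UI.
}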
As an alternative to an NSOperation, you can also use Grand Central Dispatch (GCD) and blocks. If possible, I greatly recommend working with BBHTTP, which is a libcurl client for iOS 5+ and OSX 10.7+ (BBHTTP uses GCD and blocks).
FYI: here's an example from BBHTTP that illustrates how to easily perform a file upload with an upload progress block; it's written for iOS but should be directly reusable for OS X.