Azure WebJob doesn't fail properly when async - azure-webjobs

I am starting to use WebJobs to handle some background tasks and I am having issues regarding error handling and retrying. It appears that if any of my functions are async, then the function always reports success in the dashboard even though I throw an exception within the function.
Consider the following simple example:
public static void AlwaysFail([QueueTrigger("alwaysfail")] string message)
{
    throw new Exception("Induced Error");
}
The code above behaves as I expect: the message is popped off the queue, the exception is thrown, a failure is reported in the WebJobs dashboard, and the message is requeued. This occurs 5 times, and then the message is stored in the poison queue.
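For reference, a minimal sketch of a function that drains that poison queue, assuming the SDK's default "<queuename>-poison" naming (the function name is illustrative):

// Picks up messages that exhausted their retries on the "alwaysfail" queue.
public static void AlwaysFailPoison([QueueTrigger("alwaysfail-poison")] string message, TextWriter log)
{
    log.WriteLine("Poison message: " + message);
}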
But if I write a similar function and make it async, like below:
public async static void AlwaysFail([QueueTrigger("alwaysfail")] string message)
{
    throw new Exception("Induced Error");
}
The results are not as expected. The message is popped off the queue and the exception is thrown, but the dashboard reports Success and nothing is retried. Furthermore, although the dashboard reports Success, the exception crashes the process with an unexpected exit code; the job then waits 60 seconds and restarts, as shown in the following log:
[11/11/2014 06:13:35 > 0dc4f9: INFO] Executing: 'Functions.AlwaysFail' because New queue message detected on 'alwaysfail'.
[11/11/2014 06:13:36 > 0dc4f9: ERR ]
[11/11/2014 06:13:36 > 0dc4f9: ERR ] Unhandled Exception: System.InvalidOperationException: Induced Error
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at Plmtc.BackgroundWorker.Functions.d__2.MoveNext()
[11/11/2014 06:13:36 > 0dc4f9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Runtime.CompilerServices.AsyncMethodBuilderCore.b__5(Object state)
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading.ThreadPoolWorkQueue.Dispatch()
[11/11/2014 06:13:36 > 0dc4f9: ERR ] at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
[11/11/2014 06:13:36 > 0dc4f9: SYS ERR ] Job failed due to exit code -532462766
[11/11/2014 06:13:36 > 0dc4f9: SYS INFO] Process went down, waiting for 60 seconds
[11/11/2014 06:13:36 > 0dc4f9: SYS INFO] Status changed to PendingRestart
The only way I can get an async function to fail in the dashboard is to write one that takes a POCO class as a parameter but put only a plain string in the message. In those cases the failure occurs during message/parameter binding, before any of the function's code is reached.
Anyone out there successfully using async functions to respond to queue triggers without these issues?

Because the method is async void, the WebJobs SDK has no Task to await, so it never observes the exception; the exception is instead rethrown on the thread pool and crashes the process, which matches the log above. Change the method to return a Task:
public async static Task AlwaysFail([QueueTrigger("alwaysfail")] string message)
{
    throw new Exception("Induced Error");
}
See this post for more details.
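For a handler that does real asynchronous work the same rule applies; a minimal sketch (the "work" queue name and the Task.Delay are illustrative only):

public static async Task ProcessAsync([QueueTrigger("work")] string message)
{
    // Because the method returns a Task, the SDK can await it and observe any exception,
    // so failures show up in the dashboard and the message is retried.
    await Task.Delay(TimeSpan.FromSeconds(1)); // stand-in for real async work
    if (string.IsNullOrEmpty(message))
        throw new InvalidOperationException("Message was empty");
}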

Related

ValueError('Cannot invoke RPC: Channel closed!') when multiple processes are launched together

I get this error when I launch, from zero, more than 4 processes at the same time:
{
    "insertId": "61a4a4920009771002b74809",
    "jsonPayload": {
        "asctime": "2021-11-29 09:59:46,620",
        "message": "Exception in callback <bound method ResumableBidiRpc._on_call_done of <google.api_core.bidi.ResumableBidiRpc object at 0x3eb1636b2cd0>>: ValueError('Cannot invoke RPC: Channel closed!')",
        "funcName": "handle_event",
        "lineno": 183,
        "filename": "_channel.py"
    }
}
The error seems to happen at step 9 or 10 of the pub/sub flow (a schema diagram accompanied the original question).
The actual code is:
future = publisher.publish(
    topic_path,
    encoded_message,
    msg_attribute=message_key
)
future.add_done_callback(
    callback=lambda f: logging.info(...)
)

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    PROJECT_ID,
    "..."
)
streaming_pull_future = subscriber.subscribe(
    subscription_path,
    callback=aggregator_callback_handler.handle_message
)
aggregator_callback_handler.callback = streaming_pull_future

wait_result(
    timeout=300,
    pratica=...,
    f_check_res_condition=lambda: aggregator_callback_handler.response is not None
)

streaming_pull_future.cancel()
subscriber.close()
The module aggregator_callback_handler handles .nack and .ack.
The error is returned for a few seconds, then the VMs on which the services are hosted scale and the error stops. The same happens if, instead of launching the processes all together, I scale them manually, launching them one by one with some sleep in between.
I've already checked the timeouts and moved the subscriber outside of the context manager, but those solutions don't work.
Any idea on how to handle this?

I can't open my PBIX file, ExecuteXmla failed with result

I tried updating to the latest version of Power BI, but it doesn't work for me. I worked hours upon hours on a file and now it won't open. Please help.
Please find the attached error below for reference:
Feedback Type:
Frown (Error)
Error Message:
ExecuteXmla failed with result
Stack Trace:
Microsoft.PowerBI.Client.Windows.AnalysisServices.XmlaExecutionException
at Microsoft.PowerBI.Client.Windows.AnalysisServices.AnalysisServicesService.ExecuteXmla(String xmla)
at Microsoft.PowerBI.Client.Windows.AnalysisServices.AnalysisServicesService.<>c__DisplayClass45_0.<ImageLoadDatabaseFromPbix>b__0()
at Microsoft.PowerBI.Client.Windows.AnalysisServices.AnalysisServicesService.OnErrorClarify(Action action, String clarification)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.PowerBI.Client.Windows.IExceptionHandlerExtensions.<>c__DisplayClass3_0.<HandleExceptionsWithNestedTasks>b__0()
at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
Stack Trace Message:
ExecuteXmla failed with result
Invocation Stack Trace:
at Microsoft.Mashup.Host.Document.ExceptionExtensions.GetCurrentInvocationStackTrace()
at Microsoft.Mashup.Client.UI.Shared.StackTraceInfo..ctor(String exceptionStackTrace, String invocationStackTrace, String exceptionMessage)
at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.GetStackTraceInfo(Exception e)
at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.ReportException(IWindowHandle activeWindow, IUIHost uiHost, FeedbackPackageInfo feedbackPackageInfo, Exception e, Boolean useGDICapture)
at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.<>c__DisplayClass15_0.<HandleException>b__0()
at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.HandleException(Exception e)
at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
at Microsoft.PowerBI.Client.Program.Main(String[] args)
PowerBINonFatalError:
{"AppName":"PBIDesktop","AppVersion":"2.97.861.0","ModuleName":"Microsoft.PowerBI.Client.Windows.dll","Component":"Microsoft.PowerBI.Client.Windows.AnalysisServices.AnalysisServicesService","Error":"Microsoft.PowerBI.Client.Windows.AnalysisServices.XmlaExecutionException - PFE_M_ENGINE_INTERNAL,PFE_METADATA_LOAD_FAILED","MethodDef":"ExecuteXmla - PFE_M_ENGINE_INTERNAL,PFE_METADATA_LOAD_FAILED","ErrorOffset":"112"}
InnerException0.Stack Trace Message:
M Engine error: 'Microsoft.Data.Mashup; The type initializer for 'Microsoft.Data.Mashup.MashupConnection' threw an exception.'.
An error occurred when loading the 'dcc69860-2164-43a9-80d1-c6551e32a271', from the file, '\\?\C:\Users\Deepak.Kumar\Microsoft\Power BI Desktop Store App\AnalysisServicesWorkspaces\AnalysisServicesWorkspace_4807a1a9-2f4e-4478-90c8-71d863f24970\Data\dcc69860-2164-43a9-80d1-c6551e32a271.0.db.xml'.
InnerException0.Stack Trace:
InnerException0.Invocation Stack Trace:
at Microsoft.Mashup.Host.Document.ExceptionExtensions.GetCurrentInvocationStackTrace()
at Microsoft.Mashup.Client.UI.Shared.FeedbackErrorInfo.GetFeedbackValuesFromException(Exception e, String prefix)
at Microsoft.Mashup.Client.UI.Shared.FeedbackErrorInfo.GetFeedbackValuesFromInnerExceptions(Exception e, Int32 depth)
at Microsoft.Mashup.Client.UI.Shared.FeedbackErrorInfo.CreateAdditionalErrorInfo(Exception e)
at Microsoft.Mashup.Client.UI.Shared.FeedbackErrorInfo..ctor(String message, Exception exception, Nullable`1 stackTraceInfo, String messageDetail)
at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.ReportException(IWindowHandle activeWindow, IUIHost uiHost, FeedbackPackageInfo feedbackPackageInfo, Exception e, Boolean useGDICapture)
at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.<>c__DisplayClass15_0.<HandleException>b__0()
at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.HandleException(Exception e)
at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
at Microsoft.PowerBI.Client.Program.Main(String[] args)
PowerBINonFatalError_ErrorDescription:
PFE_M_ENGINE_INTERNAL,PFE_METADATA_LOAD_FAILED
PowerBINonFatalError_MethodDefDescription:
PFE_M_ENGINE_INTERNAL,PFE_METADATA_LOAD_FAILED
PowerBIUserFeedbackServices_IsReported:
True
Thanks in advance. Please suggest some way to open my PBIX file.

Error: ffmpeg exited with code 1 on AWS Lambda

I am using the fluent-ffmpeg Node.js package to run ffmpeg for audio conversion on AWS Lambda. I am using this FFmpeg layer for Lambda.
Here is my code
const bitrate64 = ffmpeg("file.mp3").audioBitrate('64k');

bitrate64.outputOptions([
    '-preset slow',
    '-g 48',
    "-map", "0:0",
    '-hls_time 6',
    '-master_pl_name master.m3u8',
    '-hls_segment_filename 64k/fileSequence%d.ts'
  ])
  .output('./64k/prog_index.m3u8')
  .on('progress', function(progress) {
    console.log('Processing 64k bitrate: ' + progress.percent + '% done')
  })
  .on('end', function(err, stdout, stderr) {
    console.log('Finished processing 64k bitrate!')
  })
  .run()
After running it via AWS Lambda, I get the following error message:
ERROR Uncaught Exception
{
    "errorType": "Error",
    "errorMessage": "ffmpeg exited with code 1: Conversion failed!\n",
    "stack": [
        "Error: ffmpeg exited with code 1: Conversion failed!",
        "",
        "    at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/processor.js:182:22)",
        "    at ChildProcess.emit (events.js:198:13)",
        "    at ChildProcess.EventEmitter.emit (domain.js:448:20)",
        "    at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)"
    ]
}
I don't get any more info, so I am not sure what's going on. Can anyone tell me what's wrong here and how I can enable more detailed logs?
I added an on('error') callback to get a detailed error and found that there is a permissions issue on Lambda:
.on('error', function(err, stdout, stderr) {
  if (err) {
    console.log(err.message);
    console.log("stdout:\n" + stdout);
    console.log("stderr:\n" + stderr);
    reject("Error");
  }
})
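A common cause of permission errors on Lambda is that the filesystem is read-only except for /tmp, so writing HLS output to a relative path such as ./64k can fail. A minimal sketch that writes everything under /tmp (paths are illustrative, and this may not be the only issue):

const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

// Lambda only allows writes under /tmp, so create the output directory there.
fs.mkdirSync('/tmp/64k', { recursive: true });

ffmpeg('/tmp/file.mp3')
  .audioBitrate('64k')
  .outputOptions([
    '-hls_time 6',
    '-hls_segment_filename /tmp/64k/fileSequence%d.ts'
  ])
  .output('/tmp/64k/prog_index.m3u8')
  .on('error', (err, stdout, stderr) => {
    console.log(err.message);
    console.log('stderr:\n' + stderr);
  })
  .on('end', () => console.log('Finished processing 64k bitrate!'))
  .run();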

How to make Armeria exit on an `Address already in use` error?

How do I make sure that my program exits when Armeria fails to start because of an Address already in use error?
I have the following code:
import com.linecorp.armeria.common.HttpRequest;
import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.server.AbstractHttpService;
import com.linecorp.armeria.server.Server;
import com.linecorp.armeria.server.ServerBuilder;
import com.linecorp.armeria.server.ServiceRequestContext;

import java.util.concurrent.CompletableFuture;

public class TestMain {
    public static void main(String[] args) {
        ServerBuilder sb = Server.builder();
        sb.http(8080);
        sb.service("/greet/{name}", new AbstractHttpService() {
            @Override
            protected HttpResponse doGet(ServiceRequestContext ctx, HttpRequest req) throws Exception {
                String name = ctx.pathParam("name");
                return HttpResponse.of("Hello, %s!", name);
            }
        });

        Server server = sb.build();
        CompletableFuture<Void> future = server.start();
        future.join();
    }
}
When I run it once everything is fine.
But when I run it the second time I get an Address already in use error, which is of course expected, but the program doesn't terminate on its own. This may be how it is supposed to be, but how do I make sure that it terminates upon errors during initialization?
$ gradle run
> Task :run
14:36:04.811 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
14:36:04.815 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
14:36:04.816 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:36:04.818 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
...
14:36:05.064 [globalEventExecutor-3-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
Exception in thread "main" java.util.concurrent.CompletionException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:250)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
<==========---> 80% EXECUTING [6s]
> :run
^C
It's a bug in Armeria, where Netty event loop threads are not terminated when a Server fails to start up. Here's the fix PR, which should be part of the next release (0.97.0): https://github.com/line/armeria/pull/2288
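Until you are on a release that includes that fix, a minimal workaround sketch is to catch the startup failure and force the JVM down, since the leaked event loop threads are what keep the process alive (System.exit is heavy-handed but reliable here):

Server server = sb.build();
try {
    server.start().join();
} catch (Exception e) {
    // Startup failed (e.g. "Address already in use"); without the fix above,
    // the remaining Netty threads would keep the JVM running, so exit explicitly.
    e.printStackTrace();
    System.exit(1);
}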

Blob trigger is not working in Azure

I am trying to write a small WebJob in Azure to queue the name of a blob file when a blob is added to a container. My job host starts, but it does not react when a blob is added, whether manually or programmatically. I am not sure how to debug it. I did some searching and followed the suggestion to wait for 10 minutes; in fact I waited for a few hours, but no luck. Following is the log:
[06/01/2015 18:19:11 > 1bb5b4: SYS INFO] Status changed to Starting
[06/01/2015 18:19:12 > 1bb5b4: SYS INFO] Run script 'WebJob1.exe' with script host - 'WindowsScriptHost'
[06/01/2015 18:19:12 > 1bb5b4: SYS INFO] Status changed to Running
[06/01/2015 18:19:13 > 1bb5b4: INFO] This is with blob
[06/01/2015 18:19:14 > 1bb5b4: INFO] Found the following functions:
[06/01/2015 18:19:14 > 1bb5b4: INFO] WebJob1.Functions.ProcessQueueMessage
[06/01/2015 18:19:14 > 1bb5b4: INFO] Job host started
Following is my code:
public static void QueuePhoneForNewblob(
    [BlobTrigger("myshop/{name}")] string input,
    string name,
    [Queue("myshop")] out string message)
{
    message = name;
    Console.WriteLine("hello world");
}

public static void ProcessQueueMessage([QueueTrigger("myshop")] string logMessage, TextWriter logger)
{
    logger.WriteLine(logMessage);
}
OK, got it. I was about to fall back to a worker role and was getting ready with some dirty code; anyway, I moved my trigger functions to the Functions.cs file. They had previously been in Program.cs.
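For reference, a minimal sketch of the layout that worked, assuming the typical WebJobs SDK 1.x host in Program.cs with the trigger functions as public static methods in Functions.cs:

// Program.cs
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The JobHost indexes the public static trigger functions (now in Functions.cs)
        // and blocks while listening for blob and queue events.
        var host = new JobHost();
        host.RunAndBlock();
    }
}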