Blob trigger is not working in azure - azure-webjobs

I am trying to write a small WebJob in Azure that queues the name of a blob file when a blob is added to a container. My JobHost starts, but it does not react when a blob is added manually or programmatically. I am not sure how to debug it. I did some searching and followed the suggestion to wait for 10 minutes; in fact I waited a few hours, but no luck. Following is the log:
[06/01/2015 18:19:11 > 1bb5b4: SYS INFO] Status changed to Starting
[06/01/2015 18:19:12 > 1bb5b4: SYS INFO] Run script 'WebJob1.exe' with script host - 'WindowsScriptHost'
[06/01/2015 18:19:12 > 1bb5b4: SYS INFO] Status changed to Running
[06/01/2015 18:19:13 > 1bb5b4: INFO] This is with blob
[06/01/2015 18:19:14 > 1bb5b4: INFO] Found the following functions:
[06/01/2015 18:19:14 > 1bb5b4: INFO] WebJob1.Functions.ProcessQueueMessage
[06/01/2015 18:19:14 > 1bb5b4: INFO] Job host started
Following is my code:
public static void QueuePhoneForNewblob(
    [BlobTrigger("myshop/{name}")] string input,
    string name,
    [Queue("myshop")] out string message)
{
    message = name;
    Console.WriteLine("hello world");
}

public static void ProcessQueueMessage([QueueTrigger("myshop")] string logMessage, TextWriter logger)
{
    logger.WriteLine(logMessage);
}

OK, got it. I was about to fall back to a worker role and was getting ready with some dirty code; anyway, the fix was to move my trigger functions into the Functions.cs file. They were previously in Program.cs, and note that the startup log above only lists WebJob1.Functions.ProcessQueueMessage: the blob-trigger function was never indexed.

Related

ocserv could not execute script for the incoming connection

connect-script = /app/connect.sh
disconnect-script = /app/disconnect.sh
I have the above configuration in my ocserv.conf in the Docker container, but ocserv fails to execute /app/connect.sh when there is a connection. I can't find the real cause from the following log; has anyone had the same issue?
ocserv[26]: main[test]:xxx.xxx.179.135:57352 user of group 'Route' authenticated (using cookie)
ocserv[29]: main[test]:xxx.xxx.179.135:57352 executing script up /app/connect.sh
ocserv[29]: main[test]:xxx.xxx.179.135:57352 main-user.c:379: Could not execute script /app/connect.sh
ocserv[26]: main[test]:xxx.xxx.179.135:57352 connect-script exit status: 1
ocserv[26]: main[test]:xxx.xxx.179.135:57352 failed authentication attempt for user 'test'
The content of /app/connect.sh:
#!/bin/bash
echo "$(date) [info] User ${USERNAME} Connected - Server: ${IP_REAL_LOCAL} VPN IP: ${IP_REMOTE} Remote IP: ${IP_REAL} Device:${DEVICE}"
Well, I figured it out myself: the Docker container I created doesn't have bash, and one solution is to substitute #!/bin/bash with #!/bin/sh.
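A quick way to confirm this kind of problem (a sketch; "myocserv" is a hypothetical container name, substitute your own) is to ask the container which interpreters its image actually ships:

```shell
# "myocserv" is a hypothetical container name; substitute your own.
# command -v prints the interpreter's path if it exists, else nothing.
docker exec myocserv sh -c 'command -v bash || echo "bash: not found"'
docker exec myocserv sh -c 'command -v sh'
```

If the first command prints "bash: not found" while the second prints a path, the #!/bin/sh shebang is the right fix for that image.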

How to make Armeria exit on an `Address already in use` error?

How do I make sure that my program exits when Armeria fails to start because of an Address already in use error?
I have the following code:
import com.linecorp.armeria.common.HttpRequest;
import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.server.AbstractHttpService;
import com.linecorp.armeria.server.Server;
import com.linecorp.armeria.server.ServerBuilder;
import com.linecorp.armeria.server.ServiceRequestContext;
import java.util.concurrent.CompletableFuture;
public class TestMain {
    public static void main(String[] args) {
        ServerBuilder sb = Server.builder();
        sb.http(8080);
        sb.service("/greet/{name}", new AbstractHttpService() {
            @Override
            protected HttpResponse doGet(ServiceRequestContext ctx, HttpRequest req) throws Exception {
                String name = ctx.pathParam("name");
                return HttpResponse.of("Hello, %s!", name);
            }
        });
        Server server = sb.build();
        CompletableFuture<Void> future = server.start();
        future.join();
    }
}
When I run it once everything is fine.
But when I run it the second time I get an Address already in use error, which is of course expected, but the program doesn't terminate on its own. This may be how it is supposed to be, but how do I make sure that it terminates upon errors during initialization?
$ gradle run
> Task :run
14:36:04.811 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
14:36:04.815 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
14:36:04.816 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:36:04.818 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
...
14:36:05.064 [globalEventExecutor-3-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
Exception in thread "main" java.util.concurrent.CompletionException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:250)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
<==========---> 80% EXECUTING [6s]
> :run
^C
It's a bug in Armeria, where Netty event loop threads are not terminated when a Server fails to start up. Here's the fix PR, which should be part of the next release (0.97.0): https://github.com/line/armeria/pull/2288
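Until that release lands, one workaround is to catch the CompletionException that future.join() rethrows and force the JVM down yourself, since the leftover non-daemon event-loop threads are what keep the process alive. A minimal sketch of the pattern (using a plain CompletableFuture to stand in for server.start(), which completes exceptionally on a bind failure the same way):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class StartupFailureDemo {
    public static void main(String[] args) {
        // Stand-in for server.start(); a real Armeria startup future
        // completes exceptionally when bind(..) fails.
        CompletableFuture<Void> future = new CompletableFuture<>();
        future.completeExceptionally(
                new IllegalStateException("bind(..) failed: Address already in use"));
        try {
            future.join();
        } catch (CompletionException e) {
            // join() wraps the startup failure in a CompletionException.
            // System.exit() tears down any lingering non-daemon threads.
            System.err.println("Server failed to start: " + e.getCause().getMessage());
            System.exit(1);
        }
    }
}
```

In the original main, this means wrapping the final future.join() in the same try/catch and calling System.exit(1) from the catch block.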

Can't execute AWS Lambda function built with Micronaut and Graal: Error decoding JSON stream

I built a native Java AWS Lambda function using Graal and Micronaut, as explained here.
After deploying it to AWS Lambda (custom runtime), I can't successfully execute it.
The error that AWS shows is:
{
"errorType": "Runtime.ExitError",
"errorMessage": "RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1"
}
The AWS log output is:
START RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Version: $LATEST
01:13:08.015 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [ec2, cloud, function]
Error executing function (Use -x for more information): Error decoding JSON stream for type [request]: No content to map due to end-of-input
at [Source: (BufferedInputStream); line: 1, column: 0]
END RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2
REPORT RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Duration: 698.31 ms Billed Duration: 700 ms Memory Size: 512 MB Max Memory Used: 54 MB
RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1
Runtime.ExitError
But when I test it locally using
echo '{"value":"testing"}' | ./server
I got
01:35:56.675 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [function]
{"value":"New value: testing"}
The function code is:
@FunctionBean("user-data-function")
public class UserDataFunction implements Function<UserDataRequest, UserData> {

    private static final Logger LOG = LoggerFactory.getLogger(UserDataFunction.class);

    private final UserDataService userDataService;

    public UserDataFunction(UserDataService userDataService) {
        this.userDataService = userDataService;
    }

    @Override
    public UserData apply(UserDataRequest request) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Request: {}", request.getValue());
        }
        return userDataService.get(request.getValue());
    }
}
And the UserDataService is:
@Singleton
public class UserDataService {
    public UserData get(String value) {
        UserData userData = new UserData();
        userData.setValue("New value: " + value);
        return userData;
    }
}
To test it on AWS console, I configured the following test event:
{ "value": "aws lambda test" }
PS.: I uploaded to AWS Lambda a zip file that contains the "server" and the "bootstrap" file to allow the "custom runtime" as explained before.
What am I doing wrong?
Thanks in advance.
Tiago Peixoto.
EDIT: added the lambda test event used on AWS console.
Ok, I figured it out. I just changed the bootstrap file from this
#!/bin/sh
set -euo pipefail
./server
to this
#!/bin/sh
set -euo pipefail
# Processing
while true
do
HEADERS="$(mktemp)"
# Get an event
EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
# Execute the handler function from the script
RESPONSE=$(echo "$EVENT_DATA" | ./server)
# Send the response
curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
as explained here

Gearman client failed in webpage,but succeed in command line interface?

The client.php example from http://gearman.org/getting-started/ successfully communicates with worker.php when run from the command line with "php client.php", but when invoked from a web browser it fails to communicate with worker.php. Does anyone know why, and how to configure gearmand or work around this?
OS:CentOS 6.7
Gearmand version:1.1.8.
Gearmand started with "gearmand -l stderr --verbose DEBUG"
When a client communicates using the "gearman -f work < /somedir/somefile" command, the information is returned as predicted, and the terminal displays the following:
DEBUG 2015-10-30 11:56:01.371309 [ 1 ] Received GEARMAN_GRAB_JOB_ALL ::58ca:3fa1:77f:0%4234047483:2705334353 -> libgearman-server/thread.cc:310
DEBUG 2015-10-30 11:56:01.371317 [ 1 ] ::58ca:3fa1:77f:0%4234047483:41704 Watching POLLIN -> libgearman-server/gearmand_thread.cc:151
DEBUG 2015-10-30 11:56:01.371334 [ proc ] ::58ca:3fa1:77f:0%4234047483:41704 packet command GEARMAN_CAN_DO -> libgearman-server/server.cc:111
DEBUG 2015-10-30 11:56:01.371344 [ proc ] Registering function: work -> libgearman-server/server.cc:522
DEBUG 2015-10-30 11:56:01.371352 [ proc ] ::58ca:3fa1:77f:0%4234047483:41704 packet command GEARMAN_GRAB_JOB_ALL -> libgearman-server/server.cc:111
DEBUG 2015-10-30 11:56:01.371371 [ 1 ] Received RUN wakeup event -> libgearman-server/gearmand_thread.cc:610
but when the web browser navigates to "http://localhost/client.php", no information is shown in the browser, and the terminal displays nothing either.
The information in nginx's error.log is as follows:
2015/10/30 04:59:10 [error] 2756#0: *2 FastCGI sent in stderr: "PHP message: PHP Warning: GearmanClient::doNormal(): send_packet(GEARMAN_COULD_NOT_CONNECT) Failed to send server-options packet -> libgearman/connection.cc:485 in /usr/share/nginx/html/client.php on line 4" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /client.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost"
[root@localhost html]# cat client.php
<?php
$client= new GearmanClient();
$client->addServer("127.0.0.1",4730);
print $client->doNormal("reverse", "Hello World!");
?>
[root@localhost html]# cat worker.php
<?php
$worker= new GearmanWorker();
$worker->addServer("127.0.0.1",4730);
$worker->addFunction("reverse", "my_reverse_function");
while ($worker->work());
function my_reverse_function($job)
{
return strrev($job->workload());
}
?>
Maybe the problem is that the web page has limits or permissions on socket operations?
I think the configuration with the --http-port option may not yet be mature and stable, so my preferred solution is for PHP web pages, acting as clients, to submit jobs directly to gearmand, to be processed by a compiled C++ worker program. The C++ worker program should serve many requests without being invoked, run, and exited once per request, to save time.
Is this solution possible?
Please help me.
Thanks a lot!
With guidance from tom, Wali Usmani and Clint, the cause was finally narrowed down to a permission problem in SELinux.
Details can be found at https://groups.google.com/forum/#!topic/gearman/_dW8SRWAonw.
Many thanks to tom, Wali Usmani and Clint.
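For anyone hitting the same wall: under SELinux's default targeted policy, httpd-class processes (which cover nginx and PHP-FPM) are not allowed to open outbound network connections, which would refuse the doNormal() connection to gearmand on port 4730. A sketch of the usual remedy, assuming the standard httpd boolean is what the audit log points at:

```shell
# Confirm SELinux is actually enforcing.
getenforce

# Allow httpd-class processes (nginx / PHP-FPM) to make outbound
# network connections, e.g. to gearmand on 127.0.0.1:4730.
# -P persists the setting across reboots.
setsebool -P httpd_can_network_connect 1
```

If the denial persists, /var/log/audit/audit.log will show which specific permission SELinux blocked.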

Running Python Script using STAF (Software Testing Automation Framework)

I want to run a Python script (test.py) using STAF with the command below, but I am getting Return Code 1:
H:\>STAF 192.168.252.81 process START SHELL COMMAND "python /opt/test/test.py" PARAMS "3344" wait returnstdout
Response
--------
{
    Return Code: 1
    Key        : <None>
    Files      : [
        {
            Return Code: 0
            Data       :
        }
    ]
}
Check that the remote machine is on the trust list, then try it without "PARAMS", or hardcode the value inside your Python script.
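For example (an untested sketch reusing the host and path from the question), the argument can be embedded directly in the command string instead of being passed via PARAMS:

```shell
# Pass "3344" as part of the shell command itself, bypassing PARAMS handling.
STAF 192.168.252.81 process START SHELL COMMAND "python /opt/test/test.py 3344" wait returnstdout
```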