I created an Azure Function with a Service Bus trigger and deployed it through the Azure portal. Using Service Bus Explorer, I send messages to the queue and the deployed function processes them fine. After stopping the function in Azure, I send messages from the explorer to the code running locally in Visual Studio for debugging, but the trigger doesn't fire.
The Service Bus connection string is set in local.settings.json and the queue name is correct. It throws the error below in the Azure CLI after running the function from Visual Studio.
ERROR:
MessageReceiver error (Action=Receive, ClientId=MessageReq.queue, Endpoint=sss-bbbb.servicebus.windows.net)xxxxx.queue, EntityPath=sss.ccccc
Microsoft.Azure.ServiceBus: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
ErrorCode: TimedOut. System.Private.CoreLib: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "ServiceBusConnectionString": "Endpoint=//",
    "RequestTimeout": "600000"
  }
}
I have had a similar issue with corporate firewalls blocking Service Bus traffic.
I appended TransportType=AmqpWebSockets to the end of my connection string and it worked.
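For reference, the resulting local.settings.json looked roughly like this (the namespace, key name, and key below are placeholders, not real credentials):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ServiceBusConnectionString": "Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>;TransportType=AmqpWebSockets"
  }
}

With TransportType=AmqpWebSockets the client tunnels AMQP over WebSockets on port 443, which is usually open even when the outbound AMQP ports 5671/5672 are blocked by a firewall.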
I've seen this happen before when local firewall policies were blocking the AMQP connections that Functions use. If that were the case, a console app using the Service Bus .NET Core SDK would likely fail as well. That said, could you try on a different network? The error reads like a network connectivity issue.
I use Google IoT Core in my project as an MQTT broker to connect IoT embedded devices based on an Atmel MCU to Google Cloud Platform.
In the platform log, I see many "MQTT DISCONNECT" errors.
jsonPayload: {
disconnectType: "SERVER"
eventType: "DISCONNECT"
protocol: "MQTT"
resourceName: "projects/xxxxxxx/locations/europe-west1/registries/xxxxxxxx/devices/1234567890"
serviceName: "cloudiot.googleapis.com"
status: {
code: 6
description: "ALREADY_EXISTS"
message: "SERVER: The connection was closed because there is another active connection with the same device ID."
}
}
labels: {
device_id: "d1234567890"
}
logName: "projects/xxxxxxxxx/logs/cloudiot.googleapis.com%2Fdevice_activity"
The error is generated when the device boots up and connects to the MQTT server.
Despite this error, the connection is successful, as is the subscription to topics and message publishing.
I understand that the previous connection was not closed gracefully, but that is unavoidable given the nature of the embedded device: it is meant to be always connected and is eventually turned off by cutting the power supply, so it cannot send a disconnect message to the server beforehand.
The device ID is always the same at every reconnection, but unique per device; I use the chip serial number, as in some of Google's examples.
My question is whether there is a solution to this error. It can be ignored during the development phase, but it would be unwanted behavior in the production environment.
I suspect you want this particular error excluded for "hygiene" reasons: when you eyeball the logs, you are on the alert for error-type messages, and ones of this nature would be distracting.
Fortunately, Stackdriver Logging provides an exclusion capability. You can provide sets of filters that cause Stackdriver to discard log entries that you explicitly choose not to keep. This is described in detail here:
https://cloud.google.com/logging/docs/exclusions
I found the illustration on that page particularly helpful.
What you will likely want to do is formulate a query that matches exactly this type of message. I haven't tested it, but something loosely like:
logName = 'cloudiot.googleapis.com%2Fdevice_activity'
jsonPayload.eventType = 'DISCONNECT'
jsonPayload.status.description= 'ALREADY_EXISTS'
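Note that an exact logName match needs the full resource path (or use the ':' substring operator instead of '='), so the final filter would look more like the following, keeping the project ID placeholder from your log; clauses on separate lines are implicitly ANDed:

logName = "projects/xxxxxxxxx/logs/cloudiot.googleapis.com%2Fdevice_activity"
jsonPayload.eventType = "DISCONNECT"
jsonPayload.status.description = "ALREADY_EXISTS"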
Once you have a filter expression that matches just the messages to be excluded, you can then use that filter as an exclusion filter.
The MQTT connection to the device bridge has a few special properties that can cause disconnects. For device connections, you are limited to one connection per device (or per device ID, in the case of a gateway). It looks to me like you're trying to connect the same device twice and that is causing the disconnect.
It's possible that you have a client that tries to open two connections, or that you are connecting a second client. If you connect the same device twice, the device will be disconnected. Maybe your client is set up to open multiple channels, or your application logic is reconnecting without disconnecting first.
There are various other reasons for disconnects, for example publishing with an incorrect QoS or to an invalid topic, but that doesn't appear to be the case here given that publishing works on subsequent connections.
I am using DataStax's C++ driver version 2.8.0 for Apache Cassandra inside a Kubernetes application. Cassandra is deployed as a 3-node cluster via this Helm chart.
The chart leverages Kubernetes' headless services to make the Cassandra endpoints available, so there is an entry in the Kubernetes DNS for those endpoints.
I have a C++ app running in a Kubernetes pod that interacts with Cassandra and connects using that DNS entry to resolve the endpoints. The application holds a single connection (session) object to Cassandra, following the driver usage guidelines. The connection is initialized at the beginning of the program, and a failure to initialize the connection, or to execute a query later on, will actually fail the program.
Everything is working fine, but Cassandra nodes/pods may eventually go down for some reason. When that happens, they're respawned but get assigned a different IP. The C++ driver seems to pick up the new endpoints from the DNS without any additional code. However, in such a situation the connection is not closed on the client side, and it looks like the previous endpoints remain in the connection pool at some level. This leads to a series of log events similar to the following:
1531920921.161 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connection timeout
and
1531920921.894 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connect error 'host is unreachable'
These pop up every [reconnect timeout]. The more IP reassignments, the more log messages, which, as you can guess, can add up to a pretty large number for long-lived applications.
Is there some feature of the driver's API that allows dealing with this? Or, more generally, a good/recommended way to handle it client side? One option, external to the driver, could be to reset the connection within the client code, but (although I may be missing something) I fail to see a way to "catch" such events: they only show up in the logs.
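The only hook I have spotted so far is the driver's global log callback, though intercepting log text feels like a workaround rather than an intended API. A rough sketch of what I mean, assuming cass_log_set_callback / CassLogMessage from cassandra.h, a hypothetical handle_reconnect_warning helper, and a placeholder contact point for the chart's headless service:

#include <cassandra.h>
#include <cstdio>
#include <cstring>

// Hypothetical hook: what the client code would do when the driver reports
// that it cannot reconnect to a (stale) host, e.g. bump a metric or schedule
// a session reset.
static void handle_reconnect_warning(const char* text) {
  std::fprintf(stderr, "driver reconnect issue: %s\n", text);
}

// Log callback registered with the driver; it runs on a driver thread.
static void on_driver_log(const CassLogMessage* message, void* /*data*/) {
  if (message->severity == CASS_LOG_WARN &&
      std::strstr(message->message, "unable to reconnect to host") != NULL) {
    handle_reconnect_warning(message->message);
  }
}

int main() {
  // Logging must be configured before any other driver call.
  cass_log_set_level(CASS_LOG_WARN);
  cass_log_set_callback(on_driver_log, NULL);

  CassCluster* cluster = cass_cluster_new();
  cass_cluster_set_contact_points(cluster, "cassandra.default.svc.cluster.local");
  // ... create the session and run queries as usual ...
  cass_cluster_free(cluster);
  return 0;
}

It works, but it is string matching against log output, which is exactly the kind of thing I was hoping to avoid.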
I am working with the C++ Kafka client librdkafka. Looking at the examples https://github.com/edenhill/librdkafka/blob/master/src-cpp/rdkafkacpp.h and https://github.com/edenhill/librdkafka/blob/master/examples/rdkafka_example.cpp, it seems there is no explicit step for connecting to the broker. How do I reconnect after connection errors? How do I check the connection status?
librdkafka abstracts all broker connectivity away from the application; it will attempt to always keep a connection to each known broker (either learnt from metadata.broker.list or from the broker list returned by the initial bootstrap brokers).
Upon a connection error, librdkafka will attempt to connect again, forever.
If none of the brokers can be connected to, the ALL_BROKERS_DOWN event will be triggered, but there is currently no corresponding event for when brokers come back online.
The application doesn't need to worry, though, since librdkafka takes care of all reconnects and message retransmissions in the background, and it will keep trying to get messages produced until either message.timeout.ms or message.send.max.retries is exceeded.
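If you do want to be notified of the ALL_BROKERS_DOWN condition, you can register an event callback on the configuration. A minimal sketch using the C++ API (names as in rdkafkacpp.h; the broker addresses below are placeholders):

#include <iostream>
#include <string>
#include <librdkafka/rdkafkacpp.h>

// Event callback: librdkafka reports errors, including ALL_BROKERS_DOWN, here.
class ConnectivityEventCb : public RdKafka::EventCb {
 public:
  void event_cb(RdKafka::Event &event) {
    if (event.type() == RdKafka::Event::EVENT_ERROR) {
      if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN)
        std::cerr << "All brokers are down: " << event.str() << std::endl;
      else
        std::cerr << "Kafka error: " << RdKafka::err2str(event.err()) << std::endl;
    }
  }
};

int main() {
  std::string errstr;
  ConnectivityEventCb event_cb;

  RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
  conf->set("metadata.broker.list", "broker1:9092,broker2:9092", errstr);
  conf->set("event_cb", &event_cb, errstr);

  // The client keeps reconnecting in the background; the callback only observes.
  RdKafka::Producer *producer = RdKafka::Producer::create(conf, errstr);
  if (!producer) {
    std::cerr << "Failed to create producer: " << errstr << std::endl;
    return 1;
  }
  // ... produce messages as usual; call producer->poll(0) regularly to serve callbacks ...
  delete producer;
  delete conf;
  return 0;
}

Note that the callback is informational only: you don't need to (and can't) trigger a reconnect yourself, librdkafka handles that.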
There's more information on this in the introduction guide:
https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md
I have a question about BizTalk and what happens when certain conditions around web service ports are met.
Basically we have two applications: a main application (let's call it 'MainApplication'), containing the orchestration, and a web service application (let's call it 'MainApplicationWS'), where we expose a web service (created with BizTalk's web service tool) to take messages from wherever.
We have a testing tool which replays messages to the MainApplicationWS to simulate messages coming through from various external systems.
I have noticed that if we partial-stop the MainApplicationWS application and send messages through to the web service listed as a receive location, nothing happens (obviously!) (also, the web service is still running, even though it has been delisted as a receive location). However, if I start up the MainApplicationWS again and bounce the host instances, the messages are picked up from somewhere and played through to the orchestration and on to our application.
I'm just a bit puzzled as to where these messages are stored while the MainApplicationWS is partially stopped. Is the web service somehow hanging on to them? Or does it still post them through to the BizTalk message box?
any clarification would be greatly appreciated :)
cheers,
adam
In short, I can't reproduce your behaviour in BizTalk 2009. The closest thing to 'queueing' messages is when the orchestration is stopped but remains enlisted, in which case messages are suspended as resumable.
In long: I'm not quite sure what you mean by 'delisted as a receive location'. In BizTalk 2009:
Receive Locations can be enabled or disabled
Orchestrations can be stopped, and unenlisted
A Partial Stop on your BTS application disables receive ports and stops orchestrations (but doesn't unenlist them)
A full stop stops and unenlists orchestrations
The following is observed behaviour on BizTalk 2009 for a simple orchestration with a WCF Request/Response port, which receives a message and maps the response back onto the same port.
The port is Direct Bound (MessageBox).
If the Isolated Host App Pool is disabled in IIS
A synchronous error is returned to the client - Standard IIS Error (503 Service Unavailable etc)
BizTalk receives no messages at all
If the BizTalk receive Location is disabled
WSDL: A synchronous error is returned to the client - The Messaging Engine failed to register the adapter for "WCF-BasicHttp" for the receive location "xyz.svc". Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases.
Service call: The requested service, xyz.svc, could not be activated. See the server's diagnostic trace logs for more information.
If the Orchestration is stopped, but not unenlisted
The received message is Suspended, resumable. The client times out (no response is issued).
If the orch is started and the message resumed, the message is then processed. The client will only get a successful reply if the orch start and the suspended message resume are done before the client's configured WS / WCF timeout.
If the Orchestration is unenlisted
The received message is Suspended, not resumable.
The client receives an error - The server was unable to process the request due to an internal error.
With the WCF CustomBinding it is also possible to listen directly on the relevant BizTalk receive host, i.e. there is no need for IIS at all to listen for BasicHttp or WSHttp. We generally still use the wizard-generated .svc in IIS solely for hosting and publishing the WSDL, and then create a new WCF-Custom receive location directly in BizTalk and point the client at that.
Hope this helps?
I have an XML web service running on my Windows 2003 server. I have a windows service running on the same machine. I want to call the XML web service from the windows service.
This works fine on my development machine, which is running Windows XP. However, when I try to do this on my Windows Server 2003 box, it times out and throws an exception. My windows service catches the exception and writes it to the eventlog. This is how the error shows up in the event log :
The description for Event ID ( 0 ) in Source ( myWS ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event:
Exception in MyWS service : [System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond [MY IP ADDRESS HERE]
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.InternalConnect(EndPoint remoteEP)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.GetRequestStream()
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at TesterClient.Test() in C:\Projects\Odyl\OdylUtilities\MyWS\TesterClient.cs:line 30
at MyWS.MyWSWindowsService.DoMyWSService() in C:\Projects\Odyl\OdylUtilities\MyWS\MyWSWindowsService.cs:line 81
at MyWS.MyWSWindowsService.StartService() in C:\Projects\Odyl\OdylUtilities\MyWS\MyWSWindowsService.cs:line 45].
As you can see, it has some problem with writing to the eventlog (all that AUXSOURCE stuff), but the main problem is that it's getting a timeout error from the web service.
Here's the truly odd thing, though -- from my dev box (running XP), I can call the web service that's running on my Windows 2003 Server box. This is really perplexing -- obviously the web service is working fine, but for some reason, Windows Server 2003 will not let you call a web service from the same machine!
Can anybody please give me a hint as to what's going on?
This sounds like a networking issue related to your Win2K3 server. The network configuration is preventing it from accessing the service url at the given location.
Easiest way to troubleshoot this: RDP to the Win2K3 Server desktop and open up a web browser to connect to the web service. I expect you will get a timeout.
Easiest way to remedy this: if there is a URL to the service that works locally, i.e. localhost, try that from your service. Otherwise, be prepared to get your network administrator involved to understand the DNS settings necessary to support the URL path you want to use for your service.