From the documentation at https://nodemailer.com/smtp/pooled/, it looks like Nodemailer does not use a connection pool by default, so transporter.close() is not required unless pooling is enabled.
Without pooling, a new connection is opened every time sendMail() is called. Can someone familiar with the Nodemailer code confirm whether the SMTP transport connection is closed automatically after every sendMail() call when pooling is not used?
I use Google IoT Core in my project as an MQTT broker to connect IoT embedded devices based on an Atmel MCU to Google Cloud Platform.
In the platform log, I see many "MQTT DISCONNECT" errors:
jsonPayload: {
  disconnectType: "SERVER"
  eventType: "DISCONNECT"
  protocol: "MQTT"
  resourceName: "projects/xxxxxxx/locations/europe-west1/registries/xxxxxxxx/devices/1234567890"
  serviceName: "cloudiot.googleapis.com"
  status: {
    code: 6
    description: "ALREADY_EXISTS"
    message: "SERVER: The connection was closed because there is another active connection with the same device ID."
  }
}
labels: {
  device_id: "d1234567890"
}
logName: "projects/xxxxxxxxx/logs/cloudiot.googleapis.com%2Fdevice_activity"
The error is generated when the device boots up and connects to the MQTT server.
Despite this error, the connection is successful, as is the subscription to topics and message publishing.
I understand that the previous connection was not closed gracefully, but that is impossible given the nature of the embedded device, which is meant to be always connected and is eventually turned off by cutting the power supply (so it cannot send a disconnect message to the server beforehand).
The device ID is always the same at every reconnection, but unique per device; I use the chip serial number, as in some of Google's examples.
My question is whether there is a solution to this error; it can be ignored in the development phase, but it would be unwanted behavior in a production environment.
I am thinking that you want this particular error excluded for "hygiene" reasons: when you eyeball logs, you are on the alert for error-type messages, and ones of this nature would be considered distracting.
Fortunately, Stackdriver logging provides an exclusion capability. You can provide sets of filters that cause Stackdriver to discard log entries that you explicitly choose not to keep. This is described in detail here:
https://cloud.google.com/logging/docs/exclusions
I found the illustration on that page particularly helpful.
What you will likely want to do is formulate a query that matches exactly this type of message. I haven't tested it, but something loosely like:
logName = "projects/xxxxxxxxx/logs/cloudiot.googleapis.com%2Fdevice_activity"
jsonPayload.eventType = "DISCONNECT"
jsonPayload.status.description = "ALREADY_EXISTS"
Once you have a filter expression that matches just the messages to be excluded, you can then use that filter as an exclusion filter.
The MQTT connection to the device bridge has a few special properties that can cause disconnects. For device connections, you are limited to one connection per device (or per device ID, in the case of a gateway). It looks to me like you're trying to connect the same device twice, and that is causing the disconnect.
It's possible that you have a client that tries to open two connections, or that you are connecting a second client. If you connect the same device twice, the existing connection will be disconnected. Maybe your client is set up to open multiple channels, or your application logic is reconnecting without disconnecting.
There are various other reasons for disconnects, for example publishing with an incorrect QoS or to an invalid topic, but that doesn't appear to be the case here, given that publishing works on the subsequent connections.
I am building a REST service which internally calls other services and we use org.apache.cxf.jaxrs.client.WebClient to do this.
I want to use HTTP connection pooling to improve the performance but the documentation isn't very clear about how to do this or if this is even possible. Has anyone here done this?
The only other option I can think of is to re-use clients, but I'd rather not get into the whole set of thread-safety and synchronization issues that comes with that approach.
By default, CXF uses a transport based on the JDK's HttpURLConnection object to perform HTTP requests.
Connection pooling is performed there: persistent (keep-alive) connections reuse the underlying socket connection for multiple HTTP requests.
Set these system properties (their default values are shown):
http.keepalive=true
http.maxConnections=5
Increase the value of http.maxConnections to raise the maximum number of idle connections that will be kept alive simultaneously, per destination.
This post explains in some detail how it works:
Java HttpURLConnection and pooling
When you need many requests executed simultaneously, CXF can also use the asynchronous Apache HttpAsyncClient. See the details here:
http://cxf.apache.org/docs/asynchronous-client-http-transport.html
I'm working on embedding Mongoose into an application, and I need the app to be able to connect to a server when it starts up. How can I do this? On GitHub, I see examples for receiving connections, but none on how to initiate a connection to another server. Any ideas?
Mongoose is a web server. It is designed to accept incoming connections, not make outgoing ones.
If you want to make outgoing connections, the way forward will depend on what you are connecting to and what protocol(s) it may use.
If you want to make outgoing HTTP or HTTPS connections, you could use libcurl.
If you need some other protocol, you may be able to find an appropriate library, or you can use operating-system socket APIs to make your own connection and implement whatever protocol is required on top of that (there are examples of this for Linux).
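If libcurl fits, here is a minimal, untested sketch of an outgoing HTTP GET; the URL is a placeholder and error handling is kept to the bare minimum:

#include <curl/curl.h>
#include <cstdio>

int main() {
    // Initialize libcurl once per process.
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL* curl = curl_easy_init();
    if (curl) {
        // Placeholder URL: replace with the server you want the app to reach at startup.
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        // Perform the blocking request; with the default write callback the body goes to stdout.
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK) {
            std::fprintf(stderr, "curl_easy_perform() failed: %s\n",
                         curl_easy_strerror(res));
        }
        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}

If you need the response data in memory rather than on stdout, install a write callback with CURLOPT_WRITEFUNCTION.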
Is this even possible?
I know, I can make a one-way asynchronous communication, but I want it to be two-way.
In other words, I'm asking about the request/response pattern, but non-blocking, as described here (the 3rd option).
Related to Asynchronous, acknowledged, point-to-point connection using gSoap - I'd like to make the (n)acks async, too
You need a way to associate requests with replies. In normal RPC, they are associated by timing: the reply follows the request before another request can occur.
A common solution is to send a key along with the request; the reply references the same key. If you do this, two-way non-blocking RPC becomes a special case of two one-way non-blocking RPC connections. The key is commonly called something like a request-id or nonce.
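As a rough, library-agnostic sketch of that bookkeeping (all names here are made up, nothing gSoap-specific): the requesting side keeps a map from request key to a pending result, and the receive path completes the matching entry when a reply carrying the same key arrives.

#include <cstdint>
#include <future>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical reply payload.
struct Reply { std::string body; };

class Correlator {
public:
    // Called when sending a request: allocate a key and a future for the eventual reply.
    std::pair<uint64_t, std::future<Reply>> registerRequest() {
        std::lock_guard<std::mutex> lock(mutex_);
        uint64_t id = nextId_++;
        auto& promise = pending_[id];
        return {id, promise.get_future()};
    }

    // Called from the receive path when a reply carrying `id` arrives.
    void onReply(uint64_t id, Reply reply) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = pending_.find(id);
        if (it != pending_.end()) {
            it->second.set_value(std::move(reply));
            pending_.erase(it);
        }
    }

private:
    std::mutex mutex_;
    uint64_t nextId_ = 1;
    std::unordered_map<uint64_t, std::promise<Reply>> pending_;
};

The sender attaches the id to the outgoing request; whichever thread reads incoming replies calls onReply(), and the original caller waits on (or polls) the future without blocking other requests.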
I think that is not possible with basic usage;
the only built-in way to make it two-way is via the response, i.e. the results of the call.
But you might want to use a little trick:
1] Create another server ("server2") on the client end and call that server2 from the server.
2] If that is not something you can do over the Internet because of NAT / firewalls etc., re-architect your API so that the client calls the server again based on the server's first response.
You can have a client and a server on both ends. For example, you can have a client and a server on system 1 and on system 2 (I refer to the sender as the client and the receiver as the server). You send an async message from the sys1 client to the sys2 server; on receiving the message from sys1, you send the async response from the sys2 client back to the sys1 server. This is how you can make async two-way communication.
I guess you would need to run the blocking invocation in a separate thread, as described here: https://developer.nokia.com/Community/Wiki/Using_gsoap_for_web_services#Multithreading_for_non-blocking_calls
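A minimal sketch of that idea using only the standard library; blockingSoapCall() is a hypothetical stand-in for whatever synchronous call your gSoap-generated proxy exposes:

#include <future>
#include <iostream>

// Stand-in for the blocking, gSoap-generated proxy call (hypothetical name).
int blockingSoapCall() {
    // ... performs the synchronous request/response exchange ...
    return 42;
}

int main() {
    // Run the blocking invocation on its own thread so the caller is not blocked.
    std::future<int> pending = std::async(std::launch::async, blockingSoapCall);

    // ... do other work here while the call is in flight ...

    // Collect the result (or let a stored exception propagate) when it is needed.
    std::cout << "result: " << pending.get() << "\n";
    return 0;
}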
I'm using ActiveMQ CPP 5.2.3 if it matters.
I have a JMS producer that connects to a JMS network of brokers using the failover transport.
When I call connection->start(), it hangs (see AMQ-2114).
If I skip connection->start() and call connection->createSession(), then that call blocks too.
The requirement is that my application keeps trying forever to connect to the broker(s).
Any suggestions/workarounds?
NOTE:
This is not a duplicate of the question linked here, since I'm talking about C++, and solutions such as an embedded broker or Spring are not available in C++.
This is normal when the connection is waiting for the transport to connect to the broker. The start method must send the client's ID info to the broker before any other operation, so if no connection is present it must block. You can set options on the failover transport, such as startupMaxReconnectAttempts, to control how long it will try to connect before reporting a failure. See the URI configuration page:
http://activemq.apache.org/cms/configuring.html
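For illustration only, a sketch of how such an option could be passed on the broker URI with ActiveMQ-CPP; the broker address and the attempt count are placeholders, and initializeLibrary()/shutdownLibrary() assume an ActiveMQ-CPP 3.x release:

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        // Limit the initial connect to 10 attempts instead of blocking forever.
        activemq::core::ActiveMQConnectionFactory factory(
            "failover:(tcp://broker-host:61616)?startupMaxReconnectAttempts=10");

        std::unique_ptr<cms::Connection> connection(factory.createConnection());
        // Should fail with an exception, rather than hang, once the startup
        // attempts are exhausted (exact behavior may vary by version).
        connection->start();

        // ... createSession(), producers, message sending, etc. ...

        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}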