Gmail sends EHLO..QUIT immediately to custom SMTP/MTA server - c++

I'm trying to write a simple receiving mail server (MTA) in C++ on Linux. I've gotten far enough that when I try to send mail to it from my Gmail account, a Google server connects, but then it quits right after. I have no idea what I'm missing. The current communication looks like:
S: 220 mx.domain.com ESMTP<CR><LF>
C: EHLO mail.google.com<CR><LF>QUIT<CR><LF>
S: 250 mx.domain.com at your service<CR><LF>221 Bye<CR><LF>
I'm very confused by the fact that the Google mail server is sending both EHLO and QUIT in the same request. And of course it never sends the actual mail. Any ideas as to why it quits?

In my case it was because the server sent its responses to the client padded with '\0' bytes. After capping each reply to the correct response length, everything works as intended.
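For anyone hitting the same thing, here is a minimal sketch of the fix described above (in Python for brevity, although the question is about a C++ server; the helper name is made up): build each SMTP reply as exactly the bytes you intend to transmit, terminated by CRLF, and never send a fixed-size buffer padded with trailing '\0' bytes.

def send_reply(conn, code, text):
    # e.g. send_reply(conn, 250, "mx.domain.com at your service")
    reply = f"{code} {text}\r\n".encode("ascii")
    conn.sendall(reply)   # exactly len(reply) bytes -- no NUL padding after the CRLF

As the answer above notes, stray bytes after the terminating CRLF are enough to make a strict client such as Gmail's give up and send QUIT.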

Related

What should I send after STARTTLS?

I am using smtp.gmail.com and port 587. After a successful connection, I send EHLO and receive the following:
250-smtp.gmail.com at your service, [62.16.4.123]
250-SIZE 35882577
250-8BITMIME
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-CHUNKING
250 SMTPUTF8
I send STARTTLS, and after that I don't know what to send to the server in order to log in and send email.
If I send something like AUTH LOGIN, or a base64-encoded login and password, the connection is broken.
Can someone explain what my client should send to successfully finish the STARTTLS negotiation?
Or, should I start over with a new SSL connection?
After you send an (unencrypted) STARTTLS command, if the server returns any reply other than 220, handle it as a failure and move on with other SMTP commands as needed (though at that point the only one that really makes sense is QUIT).
If the server returns 220 to STARTTLS, you then need to perform the actual TLS handshake over the existing TCP connection, starting with a TLS ClientHello. Whatever TLS library you are using with your socket should be able to handle this for you; do not implement it from scratch.
If the TLS handshake succeeds, you can then send further SMTP commands (through the TLS-encrypted channel), starting with a new EHLO (the server's capabilities may, and likely will, change, most notably the available AUTH schemes), followed by AUTH, MAIL FROM, RCPT TO, DATA/BDAT, etc., and finally QUIT, as needed.
If the TLS handshake fails, the TCP connection is left in an unknown state, so further SMTP communication is not possible. All you can do at that point is close the TCP connection and start over.
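For reference, here is a minimal sketch of that sequence using Python's standard smtplib, which performs the TLS handshake for you when you call starttls(); the credentials and addresses below are placeholders:

import smtplib
import ssl

context = ssl.create_default_context()
with smtplib.SMTP("smtp.gmail.com", 587, timeout=30) as smtp:
    smtp.ehlo()                      # plaintext EHLO
    smtp.starttls(context=context)   # sends STARTTLS, expects 220, then runs the TLS handshake
    smtp.ehlo()                      # EHLO again over TLS; the capability list (e.g. AUTH) changes
    smtp.login("user@example.com", "app-password")   # placeholder credentials
    smtp.sendmail("user@example.com", ["rcpt@example.com"],
                  "Subject: test\r\n\r\nHello over STARTTLS\r\n")
    # QUIT is sent automatically when the with-block exits

If starttls() gets anything other than a 220 reply it raises an exception instead of continuing, which matches the "treat it as a failure" rule above.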

How can I send HTTP broadcast message with tornado?

I have a tornado HTTP server.
How can I implement broadcast messages with the Tornado server?
Is there a function for that, or do I just have to loop over all clients and send each one a normal HTTP message?
I think if I send normal HTTP messages, the server has to wait for each response.
That doesn't seem like the concept of broadcasting.
Otherwise, do I need some third-party option for broadcasting?
Please give me any suggestions for implementing broadcast messages.
Short answer: you might be interested in WebSockets. Tornado has built-in support for them (tornado.websocket).
Longer answer: I assume you're referring to broadcast from the server to all the clients.
Unfortunately that's not conceptually doable in HTTP/1.1 because of the way it is designed. The client asks something of the server, and the server responds, independently of all the other clients.
Furthermore, while there is no request going on between a client and a server, that relationship can be said to not exist at all. So if you were to broadcast, you'd be missing out on clients not currently communicating with the server.
Granted, things are not quite that simple. Many clients keep a long-lived TCP connection when talking to the server and pipeline HTTP requests over it. Also, a single request is not atomic, and the response is sent in packets. People implemented server push/long polling with this approach before WebSockets or HTTP/2, but there are better ways to go about it now.
There is no built-in idea of a broadcast message in Tornado. The websocket chat demo included with Tornado demonstrates how to loop over the list of clients sending a message to each:
@classmethod
def send_updates(cls, chat):
    logging.info("sending message to %d waiters", len(cls.waiters))
    for waiter in cls.waiters:
        try:
            waiter.write_message(chat)
        except Exception:
            logging.error("Error sending message", exc_info=True)
See https://github.com/tornadoweb/tornado/blob/master/demos/websocket/chatdemo.py
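For completeness, this is roughly how that demo keeps track of the clients it broadcasts to (an abridged reconstruction, not the full file; see the link above for the real thing):

import logging
import tornado.websocket

class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    waiters = set()   # every currently connected WebSocket client

    def open(self):
        ChatSocketHandler.waiters.add(self)

    def on_close(self):
        ChatSocketHandler.waiters.remove(self)

    def on_message(self, message):
        # "broadcast": push the incoming message to every waiter
        ChatSocketHandler.send_updates(message)

The send_updates classmethod shown earlier completes this class.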

How to determine if an email is successfully delivered instantly

An app is designed to be installed on a user's computer; hence, an email function in the app depends very much on the user's ISP. Port 25 may be open or it may be blocked.
When I use standard code for a mail UA, including port 25, it seems to deliver the email; however, for users whose ISP blocks port 25, email does not go through. I'd like a reliable way to determine instantly whether port 25 fails to deliver email, and then try another port. In other words, I'd like to leverage two ports: if port X fails, automatically switch to port Y. Doable?
By the way, the web-server-side scripting language I'm using is Adobe ColdFusion's sibling, Railo, and the specific tag is CFMAIL. As mentioned above, wrapping CFTRY around CFMAIL does not help for this purpose.
Thanks.
There are multiple factors in determining if your message gets delivered.
1) When you send your message with CFMAIL, you can either specify a mail server in the cfmail tag or use the server default. When the tag is executed, ColdFusion/Railo will try to access that server. If the server is unavailable or blocked, your message will go to the ColdFusion/Railo undeliverable folder. The only way to check that is to write a script that monitors the undeliverable folder and its contents.
2) If ColdFusion/Railo successfully connects to the SMTP server and hands off the email, any delivery notifications come from the SMTP server rather than from ColdFusion/Railo. The bounce message is sent by the SMTP server to the failto="" address, or to the from="" address if failto is not specified. That address gets notified of "mailbox does not exist", "relay not allowed", "user over their mailbox limit", etc.
If you need to monitor those bounces, you could create a separate email account for the failto="" address and use CFPOP to monitor that account for bounces.
Also, if you use a company like SendGrid for your outgoing SMTP server, they provide an API to monitor bounces, opt-outs, spam complaints, etc.
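To illustrate the bounce-monitoring idea outside of Railo, here is a concept sketch in Python (CFPOP would play this role in a Railo app; the host, account, and bounce heuristic are placeholders):

import poplib
from email.parser import BytesParser

# Poll the dedicated failto= mailbox and flag messages that look like delivery reports.
pop = poplib.POP3_SSL("pop.example.com")   # placeholder host
pop.user("bounces@example.com")            # the failto= account
pop.pass_("password")                      # placeholder credentials

count, _ = pop.stat()
for i in range(1, count + 1):
    raw = b"\r\n".join(pop.retr(i)[1])
    msg = BytesParser().parsebytes(raw)
    # Rough heuristic: real bounce handling parses the multipart/report
    # (delivery-status) parts for the exact failure reason.
    if msg.get_content_type() == "multipart/report":
        print("probable bounce:", msg["Subject"])
pop.quit()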

Facebook xmpp chat message

My app used to be able to send Facebook chat messages via the Facebook XMPP chat API.
As pointed out in this question, the expected message format is
<message from="-sender_ID@chat.facebook.com" to="-receiver_ID@chat.facebook.com">
<body>message body</body>
</message>
About two weeks ago, the Facebook XMPP server suddenly started rejecting messages, returning
<stream:error>
<invalid-from xmlns="urn:ietf:params:xml:ns:xmpp-streams"/>
</stream:error>
The invalid-from seems to indicate that the format of the sender ID has changed.
One change I noticed: during the various handshakes to establish the xmpp connection, Facebook now returns a Jabber ID in the following format:
<jid>-0@chat.facebook.com/fb_xmpp_script_<somehexstring></jid>
Using this jid as the sender ID doesn't work either though.
Has anyone else encountered this issue and figured out the new format?
Try not putting a from address on your message. The server should add that for you.
The received message is simply an indicator of the user's chat state, as defined in XEP-0085, and has no direct relationship to the message you sent. That doesn't mean the first didn't trigger the second: whatever library you are using may have sent the chat state along with your message. This type of message is commonly used in chat clients to indicate that someone you are chatting with is typing a message.

returning 204 No Content through WSO2 API Manager

Are there any known issues surrounding returning 204 No Content through API Manager?
Looking at the timing of the packets on the wire from API Manager, it looks like for other requests API Manager waits for the client to acknowledge receipt of the packets making up the response and then closes the connection, whereas for a 204 response it closes the connection immediately, before the client has acknowledged receipt of anything.
No, there is no such known issue.
What version are you using, and what configuration have you done?
It does make sense, though: when API Manager returns a 204, there is no content expected from the back-end service, hence the connection is closed immediately.