Gmail API sends emails but some are never received - python-2.7

I recently tried my hand at the new Gmail API, and all seems to work fine except for one thing. My issue is as follows:
I am working on a receptionist project that may need to generate more than one email in less than a minute during busy hours. So, just for testing purposes, I run the following code, which works fine:
if __name__ == '__main__':
    service = setup()  # Helper function that does the basic credential check. Works fine!
    print('service: ' + str(service))
    for counter in range(1, 10):
        print('Sending message ' + str(counter))
        message = create_message(<SENDER_EMAIL_ID>, <RECEIVER_EMAIL_ID>, "Email Number: " + str(counter), "Sample text")
        response = send_message(service, 'me', message)
        print(response)
The setup() function is as follows:
def setup():
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('gmail', 'v1', http=http)
    return service
Now, when I run the code, say, three times in a row within a minute, it runs fine and I am able to see all 27 emails in the Sent folder of SENDER_EMAIL_ID using a web browser. So the Gmail API is sending every message whenever a request is made. However, only some of these emails are received at RECEIVER_EMAIL_ID; the rest are simply dropped.
However, if I run the program with, say, a 2-5 minute delay between runs, then all the mails are received.
I have no idea why this is.
Any help would be really appreciated. :)

To expound more on #ken-y-n's response in the comments section, the Gmail API has usage limits. Specifically, for this product the quotas are about:
1 Billion quota units / day
250 quota units / user / second
You may have encountered the rateLimitExceeded error during your tests.
Since you're sending emails through a loop, each call to send costs you about 100 quota units (plus other costs depending on the methods you're calling). This is why some emails appear to be dropped. You can counter this by implementing exponential backoff on the messages that fail to send.
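For example, a minimal retry wrapper with exponential backoff might look like the sketch below (it assumes the google-api-python-client's HttpError and the send helpers from the question; the retry limits and names are illustrative):

import random
import time
from googleapiclient.errors import HttpError

def send_with_backoff(service, user_id, message, max_retries=5):
    # Retry users().messages().send() with exponential backoff on rate-limit errors.
    for attempt in range(max_retries):
        try:
            return service.users().messages().send(userId=user_id, body=message).execute()
        except HttpError as error:
            # 403/429 responses typically carry rateLimitExceeded / userRateLimitExceeded
            if error.resp.status in (403, 429, 500, 503):
                wait = (2 ** attempt) + random.random()
                print('Rate limited, retrying in %.1f seconds' % wait)
                time.sleep(wait)
            else:
                raise
    raise RuntimeError('Giving up after %d retries' % max_retries)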
Another alternative, instead of sending each message in a loop, is to use batch requests, which group your API calls together to reduce the number of HTTP connections your app makes.
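A rough sketch of a batched send with the Python client (each call inside the batch still costs its own quota units, so batching mainly saves HTTP overhead; sender and receiver are placeholders):

def handle_send(request_id, response, exception):
    # Called once per message in the batch
    if exception is not None:
        print('Message %s failed: %s' % (request_id, exception))
    else:
        print('Message %s sent, id: %s' % (request_id, response['id']))

batch = service.new_batch_http_request(callback=handle_send)
for counter in range(1, 10):
    message = create_message(sender, receiver, 'Email Number: %d' % counter, 'Sample text')
    batch.add(service.users().messages().send(userId='me', body=message))
batch.execute()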

Related

Cognito passwordless solution sends code multiple times

I have successfully implemented these instructions from AWS (https://aws.amazon.com/de/blogs/mobile/implementing-passwordless-email-authentication-with-amazon-cognito/), but as soon as I execute the signIn function via aws-amplify, it often takes up to 7 seconds and I receive 3 emails with different codes.
The reason for this is that the createAuthChallenge event executes the respective Lambda function 3 times, each of which generates and sends a code. This only happens if I have not logged in/registered for a certain time (~10 minutes). I thought that this might be because the function is cold, so I tried to keep it warm by setting "Provisioned Concurrency" for the Lambda functions
CreateAuthChallenge
VerifyAuthChallenge
DefineAuthChallenge
PreSignup
PostAuthentication
to 1, and additionally(!) I tried to warm up the functions by invoking them every 5 minutes via CloudWatch.
I don't know what else I should do.
Thx!
We had followed a different post to setup our custom auth flow, but had the same issue with 3 codes being sent out.
In that post, the CreateAuthChallenge Lambda starts with
exports.handler = async (event) => {
    const crypto = require('crypto')
    const aws = require('aws-sdk')
    ...
}
We were able to stop the 3 verification codes from being sent by moving those require calls outside of the handler method.
const crypto = require('crypto')
const aws = require('aws-sdk')
exports.handler = async (event) => {
    ...
}
My guess is that loading the entire aws-sdk inside the handler was the cause of the slowness. Because the Lambda took longer than the Cognito system allows, it was invoked multiple times; each invocation eventually completed, producing the extra verification codes.
I did not see the same issue in the code from the link you posted, but it would be worth reviewing your specific code and checking whether it is pulling in a package that should be loaded differently.
You get 5 seconds for the Lambda to complete; otherwise it is retried. Cold starts plus the blocking call to send the email via SES eat up those 5 seconds. You can make the SES call asynchronous by instead writing the code, email address, timestamp, and other necessary details to the log with a fixed prefix such as SEND_EMAIL. Since this is sensitive data, encode it in a format like JSON, encrypt it, and base64-encode it before writing it to the log. Then attach a CloudWatch Logs subscription filter to the Lambda's log group that routes the lines containing SEND_EMAIL to another Lambda, which decrypts and decodes the details and sends the actual email via SES. This allows the email send to take longer than 5 seconds and works around the timeout.
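As a rough illustration of the producing side only, here is a sketch in Python (the Lambdas in the question are Node.js, so treat this purely as pseudocode for the pattern; the KMS key alias and field names are placeholders):

import base64
import json
import boto3

kms = boto3.client('kms')

def queue_email(code, email, timestamp):
    # Instead of calling SES synchronously, write an encrypted SEND_EMAIL marker line.
    payload = json.dumps({'code': code, 'email': email, 'ts': timestamp})
    encrypted = kms.encrypt(KeyId='alias/email-queue',  # placeholder key alias
                            Plaintext=payload.encode('utf-8'))['CiphertextBlob']
    # A CloudWatch Logs subscription filter on SEND_EMAIL routes this line
    # to a second Lambda that decrypts it and calls SES.
    print('SEND_EMAIL ' + base64.b64encode(encrypted).decode('utf-8'))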

Postmates - webhook: determining the actual pickup_complete and delivered_complete

So I am looking into the Postmates API and I have been able to create a delivery. That was great. I also set up a webhook URL with ngrok to test the responses from Postmates, but I am totally stumped as to how to determine when the pickup was actually completed and when the dropoff/delivery was actually completed.
I saved all of the responses in a database, and each time I did the test delivery, I received exactly 70 calls from the webhook endpoint. Each time, 47 of them were regarding 'kind': 'event.delivery_status'. Here are the stats:
THIS IS ALL IN TEST MODE WITH THE SANDBOX...
11 of those are 'status':'pickup_complete'
14 of those are 'status':'pickup'
11 of those are 'status':'dropoff'
11 of those are 'status':'delivered'
all of the webhook responses for status=delivered have a 'data.courier_imminent':false value.
I went to the webpage for the 'data.tracking_url' and, when the page showed that the delivery was complete, I immediately checked the database to see how many records I had saved; I was only at 32 total records. This means the webhook continued sending me updates after the delivery was supposedly complete.
Lastly, these statuses do not arrive in order; they are totally random. In fact, the 6th-to-last record received was a pickup_complete status.
The real question:
how will I know when the pickup is actually complete, the delivery is actually complete, etc.?
You'll receive a webhook of type event.delivery_status. One of the fields within the body of the payload will be {status: "delivered"}. This has been accurate so far. Postmates doesn't return a delivered_at timestamp, but you could create your own timestamp and store it along with the delivery for reporting.
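A minimal sketch of a webhook handler that applies this, assuming a Flask endpoint and the payload fields described above (mark_pickup_complete and mark_delivered are hypothetical persistence helpers):

from datetime import datetime
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/postmates/webhook', methods=['POST'])
def postmates_webhook():
    event = request.get_json()
    if event.get('kind') == 'event.delivery_status':
        status = event.get('status')
        delivery_id = event.get('delivery_id')
        if status == 'pickup_complete':
            mark_pickup_complete(delivery_id)
        elif status == 'delivered':
            # Postmates does not send delivered_at, so record our own timestamp
            mark_delivered(delivery_id, datetime.utcnow())
    return jsonify({'ok': True})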
As for the number of webhooks, Postmates has a delivery robot (called robo) that moves as if it was a real postmate. You'll receive a lot of webhooks of type event.courier_update with the updated location.

Alexa sent multiple request to AWS Lambda

I'm building an Alexa skill that sends a request to my web server;
the web server then does some processing and uploads a file to Amazon S3.
While the web server is processing, I make the skill keep polling Amazon S3 for the file every 10 seconds until it gets it. The response is based on the file content.
But unfortunately, the web server processing takes more than 1 minute. That means the skill must wait more than 1 minute before it can get the file and respond.
For now, I used a progressive response with async/await in my code, and the skill did keep waiting for the file on S3.
But I found that the skill automatically sends a second request to Lambda after 50 seconds. That means that, for the same skill, I had two Lambda functions running at the same time.
The execution result is: after the first progressive response, 50 seconds later I hear another response, also made by a progressive response, which belongs to the second request.
And nothing else happens until the end.
I know it is bad to make the skill wait this long, but I still want to figure out a workable approach if the skill really needs to wait this long.
There are some points I want to figure out.
Is there any way to prevent the skill from sending the second request to Lambda?
Is there another way I can try to accomplish the goal?
Thanks
Eventually, I found that the second invocation of the Lambda is not from Alexa; it is from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to handle this kind of situation in your Lambda code. One thing you can use is that both invocations carry the same request id. You can tell whether this is the first execution by checking your storage for a request id that you stored during the first execution.
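A sketch of that idea in Python, assuming a DynamoDB table (here called processed_requests, a placeholder name) is used as the storage:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('processed_requests')  # placeholder table name

def handler(event, context):
    try:
        # Atomically record the request id; this fails if it was already stored
        table.put_item(
            Item={'request_id': context.aws_request_id},
            ConditionExpression='attribute_not_exists(request_id)')
    except ClientError as err:
        if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
            # Second (retried) invocation: skip the work that was already done
            return {'deduplicated': True}
        raise
    # First invocation: do the actual work here
    ...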
Besides, I also found that once the Alexa skill waits for more than 1 minute, it crashes and returns the error by speaking it (tested with an Amazon Echo). And there is nothing different in the AWS Lambda log compared to a normal execution. That means the log looks fine, but the actual execution result is not.
Hope this can help someone who is also struggling with this problem.

Google App Engine - http request/response

I have a Java web app hosted on Google App Engine (GAE). The User clicks on a button and he gets a data table with 100 rows. At the bottom of the page, there is a "Make Web service calls" button. Clicking on that, the application will take one row at a time and make a third party web-service call using the URLConnection class. That part is working fine.
However, since there is a 60 second limit to the HttpRequest/Response cycle, all the 100 transactions don't go through as the timeout happens around row 50 or so.
How do I create a loop and send the Web service calls without the User having to click on the 'Make Webservice calls' more than once?
Is there a way to stop the loop before 60 seconds and then start again without committing the HttpResponse? (I don't want to use asynchronous Google backend).
Also, does GAE support file upload (to get the 100 rows from a file instead of a database)?
Thank you.
Adding some code as per the comments:
URL url = new URL(urlString);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setConnectTimeout(35000);
connection.setRequestProperty("Accept-Language", "en-US,en;q=0.5");
connection.setRequestProperty("Authorization", encodedCredentials);

// Send post request
DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
wr.writeBytes(submitRequest);
It all depends on what happens with the results of these calls.
If results are not returned to a UI, there is no need to block it. You can use Tasks API to create 100 tasks and return a response to a user. This will take a few seconds at most. The additional benefit is that you can make up to 10 calls in parallel by using tasks.
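As a sketch of that pattern on the Python 2.7 runtime (the question is Java, but the idea is the same with the Java task queue API; the /process-row worker URL and row IDs are placeholders):

from google.appengine.api import taskqueue

def enqueue_webservice_calls(rows):
    # One task per row; each task makes the third-party call in its own
    # request, outside the 60-second user-facing request limit.
    for row in rows:
        taskqueue.add(url='/process-row',
                      params={'row_id': row.id},
                      queue_name='default')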
If results have to be returned to a user, you can still use up to 10 threads to process as many requests in parallel as possible. Hopefully, this will bring your time under 1 minute, but you cannot guarantee it, since you depend on responses from third-party resources which may be unavailable at the moment. You will have to implement your own retry mechanism.
Also note that users are not accustomed to waiting several minutes for a website to respond. You may want to consider a different approach, where the user is notified after the last request has been processed, without blocking your client code.
And yes, you can load data from files on App Engine.
Try using asynchronous urlfetch calls:
import java.net.URL;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.Future;
import com.google.appengine.api.urlfetch.*;

URLFetchService urlFetchService = URLFetchServiceFactory.getURLFetchService();
List<Future<HTTPResponse>> futures = new LinkedList<>();

// Start all the requests asynchronously
for (URL url : urls) {
    HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST);
    request.setPayload(...); // payload elided in the original
    futures.add(urlFetchService.fetchAsync(request));
}

// Collect all the results
for (Future<HTTPResponse> future : futures) {
    HTTPResponse response = future.get();
    // Do something with the response
}

Sending 1000+ emails in Django

Here is my setup right now:
connection = mail.get_connection()
maillist = []
# my real setup is a little more complex for-loop, but basically I add all recipients to a list.
for person in object_list:
    mail_subject = "Mail subject here"
    mail_body = "Mail body text...bla bla"
    email_sender = "me@example.com"
    maillist.append((mail_subject, mail_body, email_sender, [person.email]))
# send_mass_mail wants a tuple, so we convert the list
mailtuple = tuple(maillist)
mail.send_mass_mail(mailtuple, fail_silently=False, connection=connection)
However, the for loop iterates over 1000+ objects/persons, and when I try this method I'm able to send 101 emails before it stops. No errors anywhere (as far as I can see).
A fellow developer mentioned that maybe the POST size was too big? Any ideas from the SO-community?
Your SMTP server probably has some send limits. For example, I believe Gmail limits outgoing mail to 100 recipients.
As Micah suggested, there is a good chance you are hitting server limits.
Generally, when dealing with mass mail, it is always a good idea to throttle the sending. Sending 50 mails every 5 seconds for 300 seconds beats sending 3,000 mails at once for many practical reasons, including SMTP server limitations.
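For illustration, a throttled version of the send from the question could look roughly like this (the chunk size and delay are arbitrary; mailtuple is the tuple built above):

import time
from django.core import mail

def send_throttled(mailtuple, chunk_size=50, delay_seconds=5):
    # Send messages in small chunks with a pause in between to respect SMTP limits.
    connection = mail.get_connection()
    for start in range(0, len(mailtuple), chunk_size):
        chunk = mailtuple[start:start + chunk_size]
        mail.send_mass_mail(chunk, fail_silently=False, connection=connection)
        time.sleep(delay_seconds)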
Since you mentioned a POST limit - do you send out the emails in a view? I'm wondering how you handle canceled requests in your setup.
I'm using a management command to send out 1000+ newsletters. But instead of send_mass_mail, I use the normal send method in a loop. It takes about 5 minutes (I don't have an exact figure at the moment) to send out the mails, and I haven't run into any server limits yet.
My plan is to switch to celery to handle sending through a web interface. Perhaps you want to have a look at it in case you haven't already.
http://celeryproject.org/
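If you do try Celery, a minimal sketch of offloading each send to a task might look like this (it assumes a configured Celery app; the task and argument names are placeholders):

from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_newsletter(subject, body, sender, recipient):
    # Each email is sent by a worker, outside the request/response cycle
    send_mail(subject, body, sender, [recipient], fail_silently=False)

# In the view or management command:
# for person in object_list:
#     send_newsletter.delay("Mail subject here", "Mail body text", "me@example.com", person.email)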