Unable to retrieve Messages from AWS SQS queue using Boto - amazon-web-services

My Python code looks like this:
import json
import boto.sqs
import boto
from boto.sqs.connection import SQSConnection
from boto.sqs.message import Message
from boto.sqs.message import RawMessage
sqs = boto.connect_sqs(aws_access_key_id='XXXXXXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXXXXXXXXX')
q = sqs.create_queue("Nishantqueue")  # Already present
q.set_message_class(RawMessage)
results = q.get_messages()
ret = "Got %s result(s) this time.\n\n" % len(results)
for result in results:
    msg = json.loads(result.get_body())
    ret += "Message: %s\n" % msg['message']
ret += "\n... done."
print ret
My SQS queue contains at least 5 to 6 messages, but when I execute this I get the output below on every run; the code isn't able to pull the messages from the queue:
Got 0 result(s) this time.
...done.
I am sure I am missing something in the loop, but I couldn't find what.

Your code is retrieving messages from an Amazon SQS queue, but it doesn't seem to be deleting them. This means that messages will be invisible for a period of time (specified by the visibility_timeout parameter), after which they will reappear. The expectation is that if a message is not deleted within this time, then it has failed to be processed and should reappear on the queue to try again.
Here's some code that pulls a message from a queue, then deletes it after processing. Note the visibility_timeout specified when a message is retrieved. It is using read() to simply return one message:
#!/usr/bin/python27
import boto, boto.sqs
from boto.sqs.message import Message

# Connect to the queue
q_conn = boto.sqs.connect_to_region("ap-southeast-2")
q = q_conn.get_queue('queue-name')

# Get a message
m = q.read(visibility_timeout=15)
if m is None:
    print "No message!"
else:
    print m.get_body()
    q.delete_message(m)
It's possible that your messages were invisible ("in-flight") when you tried to retrieve them.
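If you want to drain several messages in one call and delete each one only after it is processed, a minimal sketch using the same boto 2 queue object as above (the num_messages and visibility_timeout values are just placeholders):
# Fetch up to 10 messages at once; each stays invisible for 15 seconds.
results = q.get_messages(num_messages=10, visibility_timeout=15)
print "Got %s result(s) this time." % len(results)
for result in results:
    body = json.loads(result.get_body())
    print "Message: %s" % body['message']
    # Delete only after successful processing, otherwise the message
    # reappears once the visibility timeout expires.
    q.delete_message(result)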

Related

AWS Error "Calling the invoke API action failed with this message: Rate Exceeded" when I use s3.get_paginator('list_objects_v2')

A third-party application uploads around 10,000 objects to my bucket+prefix in a day. My requirement is to fetch all objects which were uploaded to my bucket+prefix in the last 24 hours.
There are so many files in my bucket+prefix.
So I assume that when I call
response = s3_paginator.paginate(Bucket=bucket,Prefix='inside-bucket-level-1/', PaginationConfig={"PageSize": 1000})
then maybe it makes multiple calls to the S3 API, and maybe that's why it is showing the Rate Exceeded error.
Below is my Python Lambda Function.
import json
import boto3
import time
from datetime import datetime, timedelta

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    from_date = datetime.today() - timedelta(days=1)
    string_from_date = from_date.strftime("%Y-%m-%d, %H:%M:%S")
    print("Date :", string_from_date)
    s3_paginator = s3.get_paginator('list_objects_v2')
    list_of_buckets = ['kush-dragon-data']
    bucket_wise_list = {}
    for bucket in list_of_buckets:
        response = s3_paginator.paginate(Bucket=bucket, Prefix='inside-bucket-level-1/', PaginationConfig={"PageSize": 1000})
        filtered_iterator = response.search(
            "Contents[?to_string(LastModified)>='\"" + string_from_date + "\"'].Key")
        keylist = []
        for key_data in filtered_iterator:
            if "/" in key_data:
                splitted_array = key_data.split("/")
                if len(splitted_array) > 1:
                    if splitted_array[-1]:
                        keylist.append(splitted_array[-1])
            else:
                keylist.append(key_data)
        bucket_wise_list.update({bucket: keylist})
    print("Total Number Of Object = ", bucket_wise_list)
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps(bucket_wise_list)
    }
When we execute the above Lambda function, it shows the error below.
"Calling the invoke API action failed with this message: Rate Exceeded."
Can anyone help resolve this error and achieve my requirement?
This is probably due to your account restrictions; you should add a retry with a few seconds between retries, or increase the page size.
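For example, a crude manual-retry sketch (the function name and backoff values here are just assumptions, and it simply retries the whole pagination on any ClientError):
import time
from botocore.exceptions import ClientError

def paginate_with_retry(paginator, max_attempts=5, **kwargs):
    # Retry the whole pagination a few times, backing off between attempts.
    for attempt in range(max_attempts):
        try:
            return list(paginator.paginate(**kwargs))
        except ClientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff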
This is most likely due to you reaching your quota limit for AWS S3 API calls. The "bigger hammer" solution is to request a quota increase, but if you don't want to do that, there is another way using botocore.Config built-in retries, for example:
import json
import time
from datetime import datetime, timedelta
from boto3 import client
from botocore.config import Config

config = Config(
    retries={
        'max_attempts': 10,
        'mode': 'standard'
    }
)

def lambda_handler(event, context):
    s3 = client('s3', config=config)
    ### ALL OF YOUR CURRENT PYTHON CODE EXACTLY THE WAY IT IS ###
This config will use exponentially increasing sleep timer for a maximum number of retries. From the docs:
Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
There is also an adaptive mode which is still experimental. For more info, see the docs on botocore.Config retries
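If you want to try it, switching to adaptive mode is just a change to the mode string (a sketch, same idea as the config above):
from botocore.config import Config

# Adaptive mode adds client-side rate limiting on top of retries (still experimental).
adaptive_config = Config(
    retries={
        'max_attempts': 10,
        'mode': 'adaptive'
    }
)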
Another (much less robust, IMO) option would be to write your own paginator with a sleep programmed in, though you'd probably just want to use the built-in backoff in 99.99% of cases (even if you do have to write your own paginator). This code is untested and isn't asynchronous, so the sleep is in addition to the wait time for a page response; to make the "sleep time" exactly sleep_secs, you'd need concurrent.futures or asyncio (the AWS built-in paginators mostly use concurrent.futures):
from boto3 import client
from typing import Generator
from time import sleep

def get_pages(bucket: str, prefix: str, page_size: int, sleep_secs: float) -> Generator:
    s3 = client('s3')
    # First page
    page: dict = s3.list_objects_v2(
        Bucket=bucket,
        MaxKeys=page_size,
        Prefix=prefix
    )
    next_token: str = page.get('NextContinuationToken')
    yield page
    # Follow continuation tokens, sleeping between requests
    while next_token:
        sleep(sleep_secs)
        page = s3.list_objects_v2(
            Bucket=bucket,
            MaxKeys=page_size,
            Prefix=prefix,
            ContinuationToken=next_token
        )
        next_token = page.get('NextContinuationToken')
        yield page
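A usage sketch for this generator (the bucket, prefix and timing values are placeholders):
# Collect all keys, pausing half a second between page requests.
all_keys = []
for page in get_pages('kush-dragon-data', 'inside-bucket-level-1/', 1000, 0.5):
    for obj in page.get('Contents', []):
        all_keys.append(obj['Key'])
print(len(all_keys))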

Why does boto3.client("ecs").describe_tasks(...) not always have a stopCode?

I have code which is similar to this (heavily stripped down, of course):
import boto3

client = boto3.client("ecs")
response = client.describe_tasks(cluster="some cluster arn",
                                 tasks=["some task arn"])
task = response["tasks"][0]
if task["lastStatus"] == "STOPPED":
    if task["stopCode"] == "EssentialContainerExited":
        pass
This failed because of a KeyError on the last line. Reading the docs and some more docs, I assumed that stopCode would always exist when lastStatus is STOPPED.
Why did that break?
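Until that's understood, a defensive workaround sketch (just illustrative; it treats stopCode as optional rather than assuming it is always present):
if task["lastStatus"] == "STOPPED":
    stop_code = task.get("stopCode")  # may be missing on some stopped tasks
    if stop_code == "EssentialContainerExited":
        pass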

How to decode celery message in SQS

Some of my Celery tasks in SQS are pending forever; I want to read those messages (tasks) before deleting them.
Going to the SQS console, I am able to see the encoded message. I tried decoding it with
value = base64.b64decode(value.encode('utf-8')).decode('utf-8')
This gives me a dict dump with the keys
['body', 'headers', 'content-type', 'properties', 'content-encoding']
In this dict, 'body' also looks encoded.
I tried to decode it with the same
value = base64.b64decode(value.encode('utf-8')).decode('utf-8')
but it gives an error saying
UnicodeDecodeError: 'utf8' codec can't decode byte 0x87 in position 1: invalid start byte
Am I missing something?
How do I decode these messages? Is there any way to decode them?
It seems that "Celery" uses "pickle.dump" to turn the payload of the task into bytes, and then encode to base64. Doing the reverse operation we get the payload again.
import base64
import json
import pickle
import boto3

queue_name = 'your-queue-name'
sqsr = boto3.resource('sqs')
queue = sqsr.get_queue_by_name(QueueName=queue_name)
for message in queue.receive_messages(MaxNumberOfMessages=10):
    print(f'{message.message_id} >>> {message.receipt_handle}'
          f' >>> {message.body} >>> {message.message_attributes}')
    # The SQS body is base64-encoded JSON; its 'body' field is base64-encoded pickle data.
    body_dict = json.loads(base64.b64decode(message.body))
    celery_payload = pickle.loads(base64.b64decode(body_dict.get('body')))
    print(celery_payload)

AWS SQS: Sending dynamic message using boto

I have a working Python/boto script which posts a message to my AWS SQS queue. The message body, however, is hardcoded into the script.
I created a file called ~/file which contains two values:
$ cat ~/file
Username 'encrypted_password_string'
I would like my boto script (see below) to send a message to my AWS SQS queue that contains these two values.
Can anyone please advise how to modify my script below so the message body sent to SQS contains the contents of ~/file? Please also take note of the special characters that exist within an encrypted password string.
Example:
~/file
username d5MopV/EsfSKk8BExCyLHFwNfBrOTzQ1
#!/usr/bin/env python
conf = {
    "sqs-access-key": "xxxx",
    "sqs-secret-key": "xxxx",
    "sqs-queue-name": "UserPassChange",
    "sqs-region": "xxxx",
    "sqs-path": "sqssend"
}

import boto.sqs
conn = boto.sqs.connect_to_region(
    conf.get('sqs-region'),
    aws_access_key_id=conf.get('sqs-access-key'),
    aws_secret_access_key=conf.get('sqs-secret-key')
)
q = conn.create_queue(conf.get('sqs-queue-name'))

from boto.sqs.message import RawMessage
m = RawMessage()
m.set_body('hardcoded message')
retval = q.write(m)
print 'added message, got retval: %s' % retval
One way to get it working: in the script I added
import commands
import json
then added
USERNAME = commands.getoutput("echo $(who am i | awk '{print $1}')")
PASS = commands.getoutput("cat /tmp/.s")
and then added these values to my message body:
MSG = RawMessage()
MSG.set_body(json.dumps({'pass': PASS, 'user': USERNAME}))
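Alternatively, if you want to read the two values straight from ~/file rather than shelling out, a minimal sketch with the same boto objects (assuming the file holds the username and the encrypted password separated by whitespace, as in the example above):
import json
import os

# Read "username encrypted_password_string" from ~/file; the password's special
# characters survive because the value never passes through a shell.
with open(os.path.expanduser('~/file')) as f:
    username, password = f.read().split(None, 1)

MSG = RawMessage()
MSG.set_body(json.dumps({'user': username, 'pass': password.strip()}))
q.write(MSG)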
The following example shows how to use Boto3 to send a file to a receiver.
test_sqs.py
import boto3
from moto import mock_sqs

@mock_sqs
def test_sqs():
    sqs = boto3.resource('sqs', 'us-east-1')
    queue = sqs.create_queue(QueueName='votes')
    queue.send_message(MessageBody=open('beer.txt').read())
    messages = queue.receive_messages()
    assert len(messages) == 1
    assert messages[0].body == 'tasty\n'

Wait until a Jenkins build is complete

I am using Python 2.7 and Jenkins.
I am writing some code in Python that will perform a check-in and wait/poll for the Jenkins job to be complete. I would like some thoughts on how I can achieve it.
Python function to create a check-in in Perforce -> this can be easily done as P4 has a CLI.
Python code to detect when a build got triggered -> I have the changelist and the job number. How do I poll the Jenkins API for the build log to check if it has the appropriate changelists? The output of this step is a build URL which is carrying out the job.
How do I wait till the Jenkins job is complete?
Can I use snippets from the Jenkins Rest API or from Python Jenkins module?
If you need to know if the job is finished, the buildNumber and buildTimestamp are not enough.
This is the gist of how I find out if a job is complete. I have it in Ruby but not Python, so perhaps someone could update this into real code (a Python sketch follows below):
lastBuild = get jenkins/job/myJob/lastBuild/buildNumber
get jenkins/job/myJob/lastBuild/build?token=gogogo
currentBuild = get jenkins/job/myJob/lastBuild/buildNumber
while currentBuild == lastBuild
    sleep 1
    currentBuild = get jenkins/job/myJob/lastBuild/buildNumber
buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
while buildInfo["freeStyleBuild/building"] == true
    buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
    sleep 1
i.e. I found I needed to (A) wait until the build starts (new build number) and (B) wait until the building finishes (building is false).
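A rough Python translation of that logic, as a sketch against the Jenkins JSON API using requests (the URL, job name and token are placeholders, and authentication is omitted):
import time
import requests

base = 'https://jenkins.example.com/job/myJob'

def get_json(url):
    return requests.get(url).json()

last_build = get_json(base + '/lastBuild/api/json')['number']
requests.post(base + '/build?token=gogogo')  # trigger the build

# A) wait until a new build number appears
current_build = last_build
while current_build == last_build:
    time.sleep(1)
    current_build = get_json(base + '/lastBuild/api/json')['number']

# B) wait until that build stops reporting "building"
build_info = get_json('%s/%d/api/json?depth=0' % (base, current_build))
while build_info['building']:
    time.sleep(1)
    build_info = get_json('%s/%d/api/json?depth=0' % (base, current_build))

print(build_info['result'])  # e.g. SUCCESS or FAILURE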
You can query the last build timestamp to determine if the build finished. Compare it to what it was just before you triggered the build, and see when it changes. To get the timestamp, add /lastBuild/buildTimestamp to your job URL
As a matter of fact, in your Jenkins, add /lastBuild/api/ to any job and you will see a lot of API information. It even has a Python API, but I am not familiar with that, so I can't help you further.
However, if you were using XML, you can add lastBuild/api/xml?depth=0 and, inside the XML, you can see the <changeSet> object with the list of revisions/commit messages that triggered the build.
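A small sketch of the timestamp comparison described above, using requests (the URL is a placeholder and authentication is omitted):
import requests

url = 'https://jenkins.example.com/job/myJob/lastBuild/buildTimestamp'
before = requests.get(url).text  # timestamp of the last build, before triggering
# ... trigger the build here ...
after = requests.get(url).text
if after != before:
    print('lastBuild has changed since the build was triggered')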
Simple solution using invoke and block_until_complete methods (tested with Python 3.7)
import jenkinsapi
from jenkinsapi.jenkins import Jenkins
...
server = Jenkins(jenkinsUrl, username=jenkinsUser,
                 password=jenkinsToken, ssl_verify=sslVerifyFlag)
job = server.create_job(jobName, None)
queue = job.invoke()
queue.block_until_complete()
Inspired by a test method in pycontribs.
This snippet starts build job and wait until job is done.
It is easy to start the job, but we need some kind of logic to know when the job is done. First we need to wait for the job ID to be applied, and then we can query the job for details:
import time
from jenkinsapi import jenkins

server = jenkins.Jenkins(jenkinsurl, username=username, password='******')
job = server.get_job(j_name)
prev_id = job.get_last_buildnumber()
server.build_job(j_name)
# Wait for a new build number to appear
while True:
    print('Waiting for build to start...')
    if prev_id != job.get_last_buildnumber():
        break
    time.sleep(3)
print('Running...')
last_build = job.get_last_build()
while last_build.is_running():
    time.sleep(1)
print(str(last_build.get_status()))
Don't know if this was available at the time of the question, but jenkinsapi module's Job.invoke() and/or Jenkins.build_job() return a QueueItem object, which can block_until_building(), or block_until_complete()
jobq = server.build_job(job_name, job_params)
jobq.block_until_building()
print("Job %s (%s) is building." % (jobq.get_job_name(), jobq.get_build_number()))
jobq.block_until_complete(5) # check every 5s instead of the default 15
print("Job complete, %s" % jobq.get_build().get_status())
Was going through the same problem and this worked for me, using python3 and python-jenkins.
while "".join([d['color'] for d in j.get_jobs() if d['name'] == "job_name"]) == 'blue_anime':
print('Job is Running')
time.sleep(1)
print('Job Over!!')
Working Github Script: Link
This is working for me
#!/usr/bin/env python
import jenkins
import time

server = jenkins.Jenkins('https://jenkinsurl/', username='xxxxx', password='xxxxxx')
j_name = 'test'
server.build_job(j_name, {'testparam1': 'test', 'testparam2': 'test'})
while True:
    print('Running....')
    if server.get_job_info(j_name)['lastCompletedBuild']['number'] == server.get_job_info(j_name)['lastBuild']['number']:
        print('Last ID %s, Current ID %s' % (server.get_job_info(j_name)['lastCompletedBuild']['number'], server.get_job_info(j_name)['lastBuild']['number']))
        break
    time.sleep(3)
print('Stop....')
console_output = server.get_build_console_output(j_name, server.get_job_info(j_name)['lastBuild']['number'])
print(console_output)
The main issue is that build_job doesn't return the number of the job; it returns the number of a queue item (which only lasts about 5 minutes). So the trick is:
build_job
get the queue number,
with the queue number get the job_number
now we know the name of the job and the job number
get_job_info and loop the jobs till we find one with our job number
check the status
So I made a function for it with a time_out:
import time
from datetime import datetime, timedelta
import jenkins

def launch_job(jenkins_connection, job_name, parameters={}, wait=False, interval=30, time_out=7200):
    """
    Create a Jenkins job and wait for the job to finish.
    :param jenkins_connection: Jenkins server connection (jenkins.Jenkins object)
    :param job_name: the name of the job we want to build and watch (str)
    :param parameters: the parameters of the job to build (dict)
    :param wait: whether to wait for the job to finish (bool)
    :param interval: how often to poll, in seconds (int)
    :param time_out: break the loop after X seconds (int)
    :return: build job number (int)
    """
    # we launch the job; build_job returns a queue_id
    job_id = jenkins_connection.build_job(job_name, parameters)
    # from the queue_id we get the job number that was created
    queue_job = jenkins_connection.get_queue_item(job_id, depth=0)
    build_number = queue_job["executable"]["number"]
    print(f"job_name: {job_name} build_number: {build_number}")
    if wait is True:
        now = datetime.now()
        later = now + timedelta(seconds=time_out)
        while True:
            # we check current time vs the timeout (later)
            if datetime.now() > later:
                raise ValueError(f"Job: {job_name}:{build_number} has been running for more than {time_out} seconds; we "
                                 f"stop monitoring the job, you can check it in Jenkins")
            b = jenkins_connection.get_job_info(job_name, depth=1, fetch_all_builds=False)
            for i in b["builds"]:
                loop_id = i["id"]
                if int(loop_id) == build_number:
                    result = i["result"]
                    print(f"result: {result}")  # in the json it looks like null until the build finishes
                    if result is not None:
                        return i
            time.sleep(interval)
    return build_number
After we ask Jenkins to build the job, we get the queue number, get the job number from it, then loop over the job info and check the status until it changes from None to something else.
If it works, it will return the dictionary with the information of that job. (I hope the jenkins library could implement something like this.)
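A usage sketch for this helper (the URL, credentials, job name and parameters are placeholders):
import jenkins

# Hypothetical connection details for illustration only.
server = jenkins.Jenkins('https://jenkins.example.com/', username='user', password='api-token')
build = launch_job(server, 'my-job', parameters={'BRANCH': 'main'}, wait=True, interval=30, time_out=3600)
print(build['result'])  # e.g. SUCCESS, FAILURE or ABORTED when wait=True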