Get a notification when an AWS Device Farm run finishes

How can I get notified when a Device Farm run is finished?
Is it possible to get the report into an S3 bucket, so it can be used as a source trigger in CodePipeline?

How can I get notified when a Device Farm run is finished?
One way to do that would be to have a small program that continuously calls get-run and checks the status. At the time of writing there are no waiters in boto3 (assuming that's what you're using) for Device Farm:
https://github.com/boto/botocore/tree/develop/botocore/data/devicefarm/2015-06-23
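A minimal polling sketch with boto3 might look like this; the run ARN and the sleep interval are placeholders, and it assumes your credentials are already configured:

```python
import time
import boto3

# Placeholder for the ARN returned by schedule_run.
RUN_ARN = "arn:aws:devicefarm:us-west-2:123456789012:run:EXAMPLE-GUID"

# Device Farm lives in us-west-2.
client = boto3.client("devicefarm", region_name="us-west-2")

while True:
    run = client.get_run(arn=RUN_ARN)["run"]
    # Status moves through PENDING / RUNNING / ... until COMPLETED.
    if run["status"] == "COMPLETED":
        print("Run finished with result:", run["result"])
        break
    time.sleep(60)  # poll once a minute instead of hammering the API
```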
Is it possible to get the report into an S3 bucket?
Device Farm's artifacts are already in S3; however, they live in the Device Farm account rather than the account the run was scheduled from. We can see they're already in S3 because the create-upload command returns an S3 presigned URL.
So it can be used as a source trigger in CodePipeline?
That would be cool, but it isn't something the service does on our behalf at the moment. You would need to write a script that checks whether the run is finished, pulls the artifacts, and re-uploads them to another S3 bucket.
Here are the links to the boto3 APIs you'd need (a rough sketch follows the list):
get_run
list_artifacts
upload files example
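A rough sketch of that re-upload script, assuming the run has already completed and using a placeholder run ARN and destination bucket:

```python
import os
import urllib.request
import boto3

RUN_ARN = "arn:aws:devicefarm:us-west-2:123456789012:run:EXAMPLE-GUID"  # placeholder
DEST_BUCKET = "my-artifacts-bucket"                                     # placeholder

devicefarm = boto3.client("devicefarm", region_name="us-west-2")
s3 = boto3.client("s3")

# Artifacts are grouped into three categories.
for artifact_type in ("FILE", "LOG", "SCREENSHOT"):
    kwargs = {"arn": RUN_ARN, "type": artifact_type}
    while True:
        resp = devicefarm.list_artifacts(**kwargs)
        for artifact in resp["artifacts"]:
            # Each artifact exposes a presigned URL into Device Farm's own bucket.
            filename = f"{artifact['name']}.{artifact['extension']}"
            local_path = os.path.join("/tmp", filename)
            urllib.request.urlretrieve(artifact["url"], local_path)
            # Re-upload into our own bucket so CodePipeline (or anything else) can use it.
            # Note: artifact names can collide across jobs; add a prefix per job if needed.
            s3.upload_file(local_path, DEST_BUCKET, f"device-farm/{filename}")
        token = resp.get("nextToken")
        if not token:
            break
        kwargs["nextToken"] = token
```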

How to keep Lambda from triggering multiple times?

Tech stack: Salesforce data -> AWS AppFlow -> S3 -> Databricks job
Hello! I have an AppFlow flow that grabs Salesforce data and uploads it to S3 as a folder containing multiple Parquet files. I have a Lambda that listens to the prefix where this folder is dropped. That Lambda then triggers a Databricks job, which is an ingestion process I have created.
My main issue is that when these files are uploaded to S3, the Lambda is triggered once per file. I was curious how I can have the Lambda run just once.
Amazon AppFlow publishes a flow notification (see "Flow notifications" in the Amazon AppFlow documentation) when a flow is complete:
Amazon AppFlow is integrated with Amazon CloudWatch Events to publish events related to the status of a flow. The following flow events are published to your default event bus.
AppFlow End Flow Run Report: This event is published when a flow run is complete.
You could trigger the Lambda function when this event is published. That way, it is only triggered when the flow is complete.
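As a sketch, the EventBridge rule and target could be created like this with boto3; the rule name and Lambda ARN are placeholders, and the detail-type string is the one AppFlow documents for flow-completion events:

```python
import json
import boto3

events = boto3.client("events")

# Placeholder names/ARNs - substitute your own.
RULE_NAME = "appflow-flow-complete"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:start-databricks-job"

# Match the "AppFlow End Flow Run Report" event on the default event bus.
event_pattern = {
    "source": ["aws.appflow"],
    "detail-type": ["AppFlow End Flow Run Report"],
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(event_pattern))
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "databricks-trigger-lambda", "Arn": LAMBDA_ARN}],
)
# The Lambda also needs a resource-based permission (lambda add-permission)
# allowing events.amazonaws.com to invoke it from this rule.
```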
I hope I've understood your issue correctly, but it sounds like your Lambda is working as configured: if it is set up to run every time a file is dropped into the S3 bucket, the S3 trigger will invoke it on every upload.
If you want to reduce how often your Lambda runs, set up an EventBridge cron schedule that pings the Lambda at defined intervals; the Lambda can then check the bucket for new files and send them all to your Databricks job in bulk rather than individually.
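A minimal sketch of that scheduled handler, assuming a placeholder bucket/prefix and a hypothetical start_databricks_job helper standing in for the Databricks Jobs API call:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/prefix that AppFlow writes into.
BUCKET = "my-appflow-bucket"
PREFIX = "salesforce/latest/"

def handler(event, context):
    # Gather every Parquet file under the prefix in one pass,
    # instead of reacting to each uploaded object individually.
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".parquet"):
                keys.append(obj["Key"])

    if keys:
        start_databricks_job(keys)  # hypothetical helper, see below

def start_databricks_job(keys):
    # Placeholder: call the Databricks Jobs REST API / SDK here.
    pass
```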

Workaround for handling CPU-intensive tasks on AWS EC2?

I have created a Django application (running on AWS EC2) that converts media files from one format to another, but the process consumes a lot of CPU, for which I have to pay charges to AWS.
I am trying to find a workaround where my local PC (Ubuntu) handles the CPU-intensive work and the final result is uploaded to an S3 bucket that I can share with the user.
Proposed solution: when the user uploads a media file (HTML upload form), it goes to the S3 bucket, and at the same time the S3 file link is sent over a socket connection to my Ubuntu machine, which downloads the file, processes it, and uploads the result back to the S3 bucket.
Could anyone please suggest a better solution, as this does not seem efficient?
Please note: I have a decent internet connection and a computer that can handle the backend well, but I am not in a position to pay the CPU throttling charges to AWS.
The best solution for this is to create a separate Lambda function for this task. Trigger the Lambda whenever someone uploads a file to S3; the Lambda processes the file and stores the result back in S3.
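A minimal sketch of such an S3-triggered handler, with a hypothetical convert_media helper standing in for the actual conversion (in practice that usually means bundling ffmpeg or similar as a Lambda layer) and a placeholder output bucket:

```python
import os
import boto3

s3 = boto3.client("s3")

OUTPUT_BUCKET = "my-converted-media"  # placeholder output bucket

def handler(event, context):
    # Invoked by the S3 "ObjectCreated" notification for each uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        src = os.path.join("/tmp", os.path.basename(key))
        dst = src + ".converted.mp4"

        s3.download_file(bucket, key, src)
        convert_media(src, dst)  # hypothetical helper, e.g. an ffmpeg call
        s3.upload_file(dst, OUTPUT_BUCKET, os.path.basename(dst))

def convert_media(src, dst):
    # Placeholder for the actual conversion (ffmpeg via a Lambda layer, etc.).
    raise NotImplementedError
```

Keep in mind Lambda's execution time and /tmp storage limits when deciding whether this fits your media sizes.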

Edit image file in S3 bucket using AWS Lambda

Some images are already uploaded to an AWS S3 bucket, and there are a lot of them. I want to edit and replace those images, and I want to do it on the AWS side; here I want to use AWS Lambda.
I can already do the job from my local PC, but it takes a very long time, so I want to do it on the server side.
Is it possible?
Unfortunately, directly editing a file in S3 is not supported (check out the thread). To work around this, you need to download the file to a server/local machine, edit it, and re-upload it to the S3 bucket. You can also enable versioning so the previous copies are retained.
For Node.js you can use Jimp
For Java: ImageIO
For Python: Pillow
Or you can use any technology to edit the image and then upload it again using the AWS SDK.
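For example, with Python (Pillow + boto3), a download-edit-reupload sketch might look like this; the bucket name and the resize operation are only illustrative:

```python
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # placeholder

def edit_image(key):
    # Download the object into memory.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(body))

    # Illustrative edit: shrink to fit within 800x800.
    img.thumbnail((800, 800))

    # Re-upload, overwriting the original key (versioning keeps the old copy).
    buf = io.BytesIO()
    img.save(buf, format=img.format or "JPEG")
    s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())
```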
For the Lambda function itself you can use the Serverless Framework - https://serverless.com/
I made some YouTube videos a while back on how to get started with AWS Lambda and Serverless:
https://www.youtube.com/watch?v=uXZCNnzSMkI
You can trigger a Lambda using the AWS SDK.
Write a Lambda to process a single image and deploy it.
Then, locally, use the AWS SDK to list the images in the bucket and invoke the Lambda asynchronously for each file using invoke. I would also record somewhere which files have been processed so you can resume if something fails.
Note that the default limit for Lambda is 1,000 concurrent executions, so to avoid hitting the limit you can send messages to an SQS queue (which then triggers the Lambda) or simply retry when invoke throws an error.
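A sketch of the local driver script, assuming a hypothetical Lambda named process-image that accepts a {"bucket", "key"} event, and a placeholder bucket name:

```python
import json
import boto3

BUCKET = "my-image-bucket"       # placeholder
FUNCTION_NAME = "process-image"  # hypothetical Lambda that edits one image

s3 = boto3.client("s3")
lmb = boto3.client("lambda")

processed = set()  # in practice persist this (file/DynamoDB) so you can resume

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key in processed:
            continue
        # InvocationType="Event" means asynchronous: we don't wait for each image.
        lmb.invoke(
            FunctionName=FUNCTION_NAME,
            InvocationType="Event",
            Payload=json.dumps({"bucket": BUCKET, "key": key}).encode(),
        )
        processed.add(key)
```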

Upload web application log file into S3

I have a requirement: we have a web application.
From that application, we download the logs by clicking the Download button (manually).
After downloading, we upload the logs into S3 using the AWS CLI and then process the data.
Can we automate this?
Please help me automate this if possible.
Thanks in advance.
You can create a Lambda function, have it assume a role that can reach the EC2 instance, collect the logs, and move them into an S3 bucket (even with a timestamp). You can also schedule it using CloudWatch Events if you want the log backup at a specific time.
You can also use Ansible or Jenkins for this task. In Jenkins you can create a job (there is even an S3 plugin available) and simply run the Jenkins job, which will copy the logs to your S3 buckets.
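As a rough sketch of the scheduled-Lambda idea, assuming the application exposes the log at a hypothetical LOG_URL and the bucket name is a placeholder:

```python
import datetime
import urllib.request
import boto3

s3 = boto3.client("s3")

LOG_URL = "https://myapp.example.com/logs/download"  # hypothetical log endpoint
BUCKET = "my-log-archive"                            # placeholder bucket

def handler(event, context):
    # Triggered on a schedule by an EventBridge (CloudWatch Events) rule.
    local_path = "/tmp/app.log"
    urllib.request.urlretrieve(LOG_URL, local_path)

    # Key the upload by timestamp so each backup is kept separately.
    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d-%H%M%S")
    s3.upload_file(local_path, BUCKET, f"web-app-logs/{stamp}.log")
```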

AWS EMR: process logs immediately as they land in S3

My current implementation involves Device Farm and EMR. Device Farm produces logs and saves them in S3, and I want EMR to immediately pick them up and process them (the ultimate goal is to put the produced structured info into DynamoDB).
What's the best approach? Is it possible to do that without integrating yet another component that checks whether there are new logs in S3?
You can use event notifications on your S3 bucket: create an event for whenever a new object (log file) is created, and have it invoke a Lambda or an SNS notification (which in turn kicks off EMR).
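A minimal sketch of that Lambda, assuming a long-running EMR cluster (CLUSTER_ID is a placeholder) and a Spark script of your own to do the actual processing:

```python
import boto3

emr = boto3.client("emr")

CLUSTER_ID = "j-XXXXXXXXXXXXX"  # placeholder: a long-running EMR cluster

def handler(event, context):
    # Invoked by the S3 "ObjectCreated" event for each new log file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Submit a step to the existing cluster for this log file.
        emr.add_job_flow_steps(
            JobFlowId=CLUSTER_ID,
            Steps=[{
                "Name": f"process {key}",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "s3://my-code-bucket/process_logs.py",  # placeholder script
                        f"s3://{bucket}/{key}",
                    ],
                },
            }],
        )
```

The script behind the step would then do the parsing and write the structured results to DynamoDB.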