AWS: Automating queries in Redshift

I want to automate a Redshift insert query to run every day.
We use an AWS environment. I was told that using Lambda is not the right approach. What is the best ETL process for automating a query in Redshift?

For automating SQL on Redshift you have at least three options:
Simple - cron
Use an EC2 instance and set up a cron job on it to run your SQL code:
psql -U youruser -p 5439 -h hostname_of_redshift -f your_sql_file
Feature-rich - Airflow (Recommended)
If you have a complex schedule to run, it is worth investing time in learning and using Apache Airflow. This also needs to run on a server (EC2) but offers a lot of functionality.
https://airflow.apache.org/
AWS serverless - AWS Data Pipeline (NOT Recommended)
https://aws.amazon.com/datapipeline/
CloudWatch -> Lambda -> EC2 method described below by John Rotenstein
This is a good method when you want to stay AWS-centric; it will be cheaper than keeping a dedicated EC2 instance running.

One option:
Use Amazon CloudWatch Events on a schedule to trigger an AWS Lambda function
The Lambda function launches an EC2 instance with a User Data script. Configure Shutdown Behavior as Terminate.
The EC2 instance executes the User Data script
When the script is complete, it should call sudo shutdown -h now to shut down and terminate the instance
The EC2 instance will only be billed per-second; a sketch of the Lambda side of this pattern follows.
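This is only a rough sketch, assuming a Python Lambda with boto3; the AMI ID, instance type, Redshift endpoint, and file path are placeholders, and the CloudWatch Events schedule itself is configured separately:

import boto3

def lambda_handler(event, context):
    # User Data: run the SQL, then shut the box down; because the instance is
    # launched with InstanceInitiatedShutdownBehavior="terminate", the shutdown
    # also terminates it. Credentials for psql (e.g. PGPASSWORD or .pgpass) are
    # assumed to be handled by the AMI or the script itself.
    user_data = """#!/bin/bash
psql -h your-cluster.redshift.amazonaws.com -p 5439 -U youruser -d yourdb -f /home/ec2-user/your_sql_file.sql
sudo shutdown -h now
"""
    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # hypothetical AMI with psql preinstalled
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
        InstanceInitiatedShutdownBehavior="terminate",
    )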

Redshift now supports scheduled queries natively: https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-schedule-query.html
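If you would rather drive this from code than from the query editor UI, a rough alternative is boto3 with the Redshift Data API, hooked to a CloudWatch Events/EventBridge schedule; the cluster identifier, database, user, and SQL below are placeholders:

import boto3

client = boto3.client("redshift-data")

# placeholders: cluster identifier, database, user, and the SQL to schedule
response = client.execute_statement(
    ClusterIdentifier="my-cluster",
    Database="mydb",
    DbUser="youruser",
    Sql="INSERT INTO target_table SELECT * FROM staging_table;",
)
print(response["Id"])  # statement id; poll it with describe_statement if needed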

You can use boto3 and psycopg2 to run the queries by creating a Python script
and scheduling it in cron to run daily, for example:
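A minimal sketch of such a script, assuming psycopg2 is installed and the connection details are placeholders you would replace (or, better, load from environment variables or Secrets Manager):

import psycopg2

conn = psycopg2.connect(
    host="your-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="mydb",
    user="youruser",
    password="yourpassword",
)
conn.autocommit = True
with conn.cursor() as cur:
    # the daily insert
    cur.execute("INSERT INTO target_table SELECT * FROM staging_table;")
conn.close()

A crontab entry along the lines of 0 2 * * * /usr/bin/python3 /home/ec2-user/daily_insert.py would then run it at 02:00 every day (the path and script name are, again, placeholders).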
You can also convert your queries into Spark jobs and schedule them to run in AWS Glue daily. If you find that difficult, you can look into Spark SQL and give it a shot. If you go with Spark SQL, keep an eye on memory usage, as Spark SQL is fairly memory-intensive.
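For the Glue route, a minimal sketch of what such a job script could look like (PySpark with Spark SQL; the database, table, and S3 path are placeholders, and the awsglue boilerplate mirrors what Glue auto-generates):

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# read a table defined in the Glue Data Catalog (placeholder database/table names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="mydatabase", table_name="staging_table"
)

# do the transformation with Spark SQL
source.toDF().createOrReplaceTempView("staging")
result = spark.sql("SELECT id, SUM(amount) AS amount FROM staging GROUP BY id")

# write the result to S3 as Parquet (placeholder bucket/prefix)
result.write.mode("overwrite").parquet("s3://your-bucket/daily-output/")

job.commit()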

Related

What AWS service to use for running batch jobs

I need to run Java code that talks to a MySQL database in AWS and does some ETL on a nightly schedule. Which AWS service can I use for this?
I would recommend looking at the following:
AWS Glue ETL
AWS Batch (a sketch of submitting a Batch job on a schedule follows this list)
AWS ECS / Fargate scheduled tasks
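For example, if the Java job is containerized and registered as an AWS Batch job definition (the queue and definition names below are hypothetical), a scheduled Lambda can submit the nightly run with boto3:

import boto3

def lambda_handler(event, context):
    batch = boto3.client("batch")
    batch.submit_job(
        jobName="nightly-mysql-etl",
        jobQueue="etl-job-queue",          # hypothetical job queue
        jobDefinition="mysql-etl-job",     # hypothetical job definition
    )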

Python pipeline on AWS Cloud

I have a few Python scripts that need to be executed in sequence on the AWS cloud, so what are the best and simplest options? These scripts are proof-of-concept quality, so a bit messy, but they need to run overnight. Most of the scripts finish within 10 minutes, but a couple of them can take up to 1 hour running on a single core.
We do not have any servers like Jenkins, Airflow, etc.; we are planning to use existing AWS services.
Please let me know. Thanks.
1) EC2 instance (manually controlled)
Upload your scripts to an S3 bucket
Use the default VPC and launch an EC2 instance
Use an SSM remote session to log in
Run the AWS CLI (aws s3 sync) to download your scripts from S3
Run them manually
Stop the instance when done.
To keep it clean, make a shell script (or a master .py file) to do the work; a sketch of such a master script follows. If you want it to stop charging you money afterwards, add a command to stop the instance when it completes.
Least amount of work.
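Such a master script could look roughly like this, assuming Python 3 with boto3 on the instance, hypothetical script names, and IMDSv1 reachable for the instance ID lookup (the instance role also needs ec2:StopInstances permission):

import subprocess
import urllib.request

import boto3

SCRIPTS = ["step1.py", "step2.py", "step3.py"]  # hypothetical script names, in order

# run the scripts one after another, stopping at the first failure
for script in SCRIPTS:
    subprocess.run(["python3", script], check=True)

# look up this instance's ID from the metadata service and stop the instance
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()
boto3.client("ec2").stop_instances(InstanceIds=[instance_id])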
2) If you want to run the scripts daily
- Script out the work above (including modifying the Auto Scaling group at the end so it scales back down)
- Create an EC2 Auto Scaling group and launch it on a cron schedule (a sketch of the scheduled scaling calls follows).
It will start up, do the work, and then shut down and stop charging you.
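One way to set that up with boto3 (the Auto Scaling group name and cron expressions below are hypothetical; Recurrence uses cron syntax in UTC):

import boto3

autoscaling = boto3.client("autoscaling")

# scale up to one instance at 01:00 UTC every day
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-pipeline-asg",
    ScheduledActionName="scale-up-nightly",
    Recurrence="0 1 * * *",
    DesiredCapacity=1,
)

# scale back to zero a few hours later (or have the work script do this when it finishes)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-pipeline-asg",
    ScheduledActionName="scale-down-nightly",
    Recurrence="0 4 * * *",
    DesiredCapacity=0,
)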
3) Lambda
Pretty much like option 2, but AWS will do most of the work for you.
Either put all your scripts into one Lambda, or put each script into its own Lambda and have a master Lambda that synchronously invokes each one in the order you want (a sketch follows).
Have a CloudWatch Events rule trigger it daily to do the work.
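Such a master Lambda could look roughly like this, assuming each script has been packaged as its own Lambda function (the names are hypothetical); note that Lambda's 15-minute execution limit means the hour-long scripts mentioned in the question would not fit this option:

import boto3

lambda_client = boto3.client("lambda")

STEPS = ["pipeline-step1", "pipeline-step2", "pipeline-step3"]  # hypothetical function names, in order

def lambda_handler(event, context):
    for function_name in STEPS:
        # synchronous invoke: waits for each step to finish before starting the next
        response = lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="RequestResponse",
        )
        if response.get("FunctionError"):
            raise RuntimeError(f"{function_name} failed")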
I would say that if you are in POC mode, option 1 is the best choice. It is likely closest to how you currently run these scripts. This is what @jarmod already recommended.
You didn't mention which AWS resources your Python scripts need to access, or what the scripts are for, so it is difficult to propose a solution.
However, a good option is to use AWS Batch.

Migrate AWS Glue Job to EC2

I'm currently using some Glue jobs for minimal transformations, sending data from S3/Athena tables to Redshift. We don't process a lot of data, so Glue is expensive, slow, and difficult to tune for this volume.
I couldn't find how to get started on EC2 with the code migration, credentials, and dependencies.
Maybe I can call a Lambda to process it on my EC2 instance? Can I run Spark on one node and then scale to a cluster in the future? Should I migrate the Glue job to Python (not PySpark)?
I found that EMR would also be expensive for this volume; ideally I would start with the minimum.
I don't need the full solution, just a pointer in the right direction so I can start trying this.
Thank you!
Here are a few suggestions for your requirement:
Serverless options like Glue and Lambda are more suitable here than a persistent EMR cluster or EC2 instance.
AWS Lambda - You can consider using Lambda with Python modules if your data volume is small and the transformations are minimal (a sketch follows this list).
AWS Glue with Python (not Spark) - A Glue Python shell job is also a cost-effective option.
AWS EC2 - Going the EC2 route is a legacy approach for this, and costly.
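That Lambda route could look roughly like this, assuming the data already sits in S3 in a Redshift-loadable format and using the Redshift Data API so no database driver has to be bundled; the cluster, table, bucket, and IAM role names are placeholders, and any transformation step is omitted here, this only shows the S3-to-Redshift load:

import boto3

def lambda_handler(event, context):
    client = boto3.client("redshift-data")
    # COPY straight from S3 into Redshift (placeholder table, bucket, and role ARN)
    copy_sql = (
        "COPY target_table "
        "FROM 's3://your-bucket/processed/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
        "FORMAT AS PARQUET;"
    )
    client.execute_statement(
        ClusterIdentifier="my-cluster",
        Database="mydb",
        DbUser="youruser",
        Sql=copy_sql,
    )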

How to run glue script from Glue Dev Endpoint

I have a Glue script (test.py) written, say, in an editor. I connected to a Glue dev endpoint and copied the script to the endpoint, or I can store it in an S3 bucket. Basically the Glue dev endpoint is an EMR cluster, so how can I run the script from the dev endpoint terminal? Can I use spark-submit to run it?
I know we can run it from the Glue console, but I am more interested in whether I can run it from the Glue dev endpoint terminal.
You don't need a notebook; you can ssh to the dev endpoint and run it with the gluepython interpreter (not plain python).
e.g.
radix@localhost:~$ DEV_ENDPOINT=glue@ec2-w-x-y-z.compute-1.amazonaws.com
radix@localhost:~$ scp myscript.py $DEV_ENDPOINT:/home/glue/myscript.py
radix@localhost:~$ ssh -i {private-key} $DEV_ENDPOINT
...
[glue@ip-w-x-y-z ~]$ gluepython myscript.py
You can also run the script directly without getting an interactive shell with ssh (of course, after uploading the script with scp or whatever):
radix@localhost:~$ ssh -i {private-key} $DEV_ENDPOINT gluepython myscript.py
If this is a script that uses the Job class (as the auto-generated Python scripts do), you may need to pass --JOB_NAME and --TempDir parameters.
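For reference, this is roughly the boilerplate in the auto-generated scripts that consumes those parameters; anything listed in getResolvedOptions has to be supplied on the command line, e.g. gluepython myscript.py --JOB_NAME test_job --TempDir s3://your-bucket/tmp/ (the values are placeholders):

import sys
from awsglue.utils import getResolvedOptions

# raises a usage error if --JOB_NAME is missing from the command line
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
print(args["JOB_NAME"])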
For development/testing purposes, you can set up a Zeppelin notebook locally and establish an SSH connection using the AWS Glue endpoint URL, so you have access to the Data Catalog, crawlers, etc., as well as the S3 bucket where your data resides.
After all the testing is completed, you can bundle your code and upload it to an S3 bucket. Then create a job pointing to the ETL script in the S3 bucket, so that the job can be run and scheduled.
Please refer here and to the guide on setting up Zeppelin on Windows for help setting up a local environment. You can use the dev endpoint provided by Glue, but you may incur additional costs for it (EC2 instance charges).
Once you have set up the Zeppelin notebook, you can copy the script (test.py) into the notebook and run it from Zeppelin.
According to AWS Glue FAQ:
Q: When should I use AWS Glue vs. Amazon EMR?
AWS Glue works on top of the Apache Spark environment to provide a
scale-out execution environment for your data transformation jobs. AWS
Glue infers, evolves, and monitors your ETL jobs to greatly simplify
the process of creating and maintaining jobs. Amazon EMR provides you
with direct access to your Hadoop environment, affording you
lower-level access and greater flexibility in using tools beyond
Spark.
Do you have any specific requirement to run the Glue script on an EMR cluster? In my opinion, EMR gives more flexibility: you can use any third-party Python libraries and run directly on an EMR Spark cluster.
Regards

Automate AWS instance start and stop

I'm running an instance in AWS and it runs non-stop every day. It's an Ubuntu EC2 instance running Apache, the Mirth Connect tool, and a LAMP stack. I want to run this instance only during a particular time window each day. I would prefer not to use any additional AWS services such as CloudWatch. Is there a way to achieve this?
The main purpose is to use Mirth Connect to fetch data from a MySQL database.
There are 3 solutions.
AWS Data Pipeline - You can schedule the instance start/stop just like cron. It will cost you one hour of a t1.micro instance for every start/stop.
AWS Lambda - Define a Lambda function that gets triggered at a predefined time. Your Lambda function can start/stop instances (a sketch follows this list). Your cost will be minimal or $0.
Write a shell script and run it as a cron job or on demand. The script will use AWS CLI commands to start and stop the instance.
I used Data Pipeline for a long time before moving to Lambda. Data Pipeline is trivial to set up: just paste the AWS CLI commands to stop and start the instances. Lambda is more involved.
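That Lambda function can be as small as this (Python with boto3; the instance ID is hypothetical, and the scheduled CloudWatch Events rule is assumed to pass a constant input like {"action": "start"} or {"action": "stop"}):

import boto3

INSTANCE_IDS = ["i-0123456789abcdef0"]  # hypothetical instance ID

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    # the scheduled rule supplies the desired action in the event payload
    if event.get("action") == "start":
        ec2.start_instances(InstanceIds=INSTANCE_IDS)
    else:
        ec2.stop_instances(InstanceIds=INSTANCE_IDS)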
I guess for that you'll need another machine that is on 24x7, on which you can write a cron job in Python using boto (or any other language, like Bash).
I don't see how you can start an instance that is in the stopped state without using another machine.
Or you could have a simple Raspberry Pi at home that does the on/off work for you using the AWS CLI or simple Python. How about that? ;)