What are the advantages/disadvantages of using a 'plain' Hadoop cluster (e.g. Hortonworks) with components like HDFS, Hive, Oozie... versus managed AWS services like S3/Athena/Lambda?
My scenario's data flow:
Source data comes from IoT sensors for analytics, and sometimes I need to query by deviceid & datetime with Hive/Athena ... (all query conditions are covered by partitions)
The disadvantages of installing Hadoop yourself with any cloud provider are obviously cost and some ongoing maintenance.
For example, when HDFS disks get full, you add more volumes. You need to upgrade and patch software yourself. You're charged every machine hour, for every machine, and turning off just the namenode of the cluster will render it unusable for a period of time; if you have no business use-case for running the cluster overnight, you're wasting money.
The corresponding advantages of storing data in the cloud:
While slower than HDFS, object storage in S3 is significantly cheaper and scalable.
Triggering actions via Lambda or another scheduler can actually happen faster than Oozie launching a YARN job. Your code isn't tied to Hadoop either, so your functions can be smaller, although you may be limited in language options. If you combine Lambda or other filesystem triggers with container schedulers like Kubernetes, you open up lots of options.
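For example, a minimal S3-triggered Lambda handler looks roughly like this (a sketch; the bucket notification is configured separately, and the downstream action is a placeholder for whatever you kick off next):

```python
# Sketch of an S3 event handler; what you trigger downstream (a Glue job,
# an Athena query, a Kubernetes job, ...) is up to you.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object landed: s3://{bucket}/{key}")
        # start the next stage of your pipeline here
```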
Querying your data any time you want with tools like AWS Glue and Athena decouples you from maintaining a Hive metastore and a compatible query engine, whether that's Hive, Presto, Impala, Drill, etc. Anyone with AWS access can run an Athena query without needing to know the address of your HiveServer or how to connect to it appropriately (for example, you would have to secure it and make it highly available yourself).
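As a hedged sketch of that flow with boto3 (database, table, and the results bucket below are placeholders):

```python
import time
import boto3

athena = boto3.client("athena")

# Submit the query; Athena runs it server-side against the partitioned data in S3.
qid = athena.start_query_execution(
    QueryString=("SELECT * FROM sensor_events "
                 "WHERE deviceid = 'dev-42' AND dt = '2019-05-01'"),
    QueryExecutionContext={"Database": "iot"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```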
Related
I'm a software engineer transitioning toward machine learning engineering, but need some assistance.
I'm currently using AWS Lambda and Step Functions to run query and preprocessing jobs for my ML pipeline, but am constrained by Lambda's 15-minute runtime limit.
We're a strictly AWS shop, so I'm kind of stuck with SageMaker and other AWS tools for the time being. Later on we'll consider experimenting with something like Kubeflow if it looks advantageous enough.
My current process
I have my data scientists write python scripts (in a git repo) for the query and preprocessing steps of a model, and deploy them (via Terraform) as Lambda functions, then use Step Functions to sequence the ML Pipeline steps as a DAG (query -> preprocess -> train -> deploy)
The Query lambda pulls data from our data warehouse (Redshift), and writes the unprocessed dataset to S3
The Preprocessing lambda loads the unprocessed dataset from S3, manipulates it as needed, and writes it as training & validation datasets to a different S3 location
The Train and Deploy tasks use the SageMaker python api to train and deploy the models as SageMaker Endpoints
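For the train/deploy steps, the SageMaker SDK calls boil down to roughly the following (a sketch; the training image, role, and S3 paths are placeholders):

```python
from sagemaker.estimator import Estimator

# Generic Estimator; in practice you'd use a framework estimator (XGBoost,
# SKLearn, ...) or your own training image.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/sagemaker-execution",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-ml-bucket/models/",
)

# The "train"/"validation" channels point at the datasets written by the
# preprocessing step.
estimator.fit({
    "train": "s3://my-ml-bucket/processed/train/",
    "validation": "s3://my-ml-bucket/processed/validation/",
})

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-model-endpoint",
)
```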
Do I need to be using Glue and SageMaker Processing jobs? From what I can tell, Glue seems more targeted at full ETL pipelines than at simply writing datasets to S3, and SageMaker Processing jobs seem a bit more complex to deploy to than Lambda.
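For reference, the SageMaker Processing route I'm weighing would look roughly like this (a sketch; the role, script name, and S3 paths are placeholders), which would sidestep Lambda's 15-minute cap:

```python
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

processor = SKLearnProcessor(
    framework_version="0.23-1",
    role="arn:aws:iam::123456789012:role/sagemaker-execution",
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# preprocess.py is the same kind of script the data scientists already write;
# it reads from /opt/ml/processing/input and writes to /opt/ml/processing/output.
processor.run(
    code="preprocess.py",
    inputs=[ProcessingInput(source="s3://my-ml-bucket/unprocessed/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://my-ml-bucket/processed/")],
)
```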
There is a solution for long-running actions in Redshift that just came out - the Redshift Data API. https://aws.amazon.com/about-aws/whats-new/2020/09/announcing-data-api-for-amazon-redshift/
This allows Lambdas in a Step Function to issue a set of SQL statements to Redshift and poll to see when the SQL is done. Now the runtime of your Lambda is only as long as it needs to launch the SQL.
As for the processing steps - I'd recommend doing as much of the processing as possible inside Redshift before unloading the data to S3 (I hope you are not pulling lots of data through a SELECT statement). This will be much faster than processing in Lambda and can benefit from the Data API as well. There will likely be some processing steps that you cannot do in Redshift, and Lambda is a good option for those. One additional benefit of UNLOAD is that you can set the output file size; this way you can launch one Lambda per output file and end up with many shorter-running Lambdas.
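A hedged sketch of that pattern as a pair of short-lived Lambdas (cluster name, secret, role, and bucket are placeholders): one submits the SQL via the Data API, the other checks whether it has finished, with Step Functions looping on the second.

```python
import boto3

rsd = boto3.client("redshift-data")

# UNLOAD keeps the heavy lifting inside Redshift and caps the output file size,
# so each output file can later be handed to its own short-running Lambda.
UNLOAD_SQL = """
UNLOAD ('SELECT device_id, ts, value FROM events WHERE ts >= DATEADD(day, -1, GETDATE())')
TO 's3://my-ml-bucket/unprocessed/run/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
FORMAT AS PARQUET
MAXFILESIZE 64 MB
"""


def submit_handler(event, context):
    """Lambda 1: fire the SQL and return immediately with the statement id."""
    resp = rsd.execute_statement(
        ClusterIdentifier="my-cluster",
        Database="analytics",
        SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",
        Sql=UNLOAD_SQL,
    )
    return {"StatementId": resp["Id"]}


def check_handler(event, context):
    """Lambda 2: Step Functions calls this in a Wait/Choice loop until done."""
    status = rsd.describe_statement(Id=event["StatementId"])["Status"]
    return {"StatementId": event["StatementId"], "Status": status}
```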
You could attempt to break up the work and have many Lambdas running in series, but processing large amounts of data at once is not a strength of Lambda. Whether you can do this depends on the data processing you are doing.
You could use Glue for this, but it is likely complete overkill: it's a whole new service to learn, and since it is an EMR wrapper it can get costly. To be honest, Glue is not my favorite AWS service, as it only does the most basic things easily and anything even slightly complex becomes a battle. So if this is a tool you know and like, go for it.
If I had to perform ETL on a huge dataset (say 1 TB) stored in S3 as CSV files, both an AWS Glue ETL job and AWS EMR steps could be used. How, then, is AWS Glue different from AWS EMR, and which is the better solution in this case?
Most of the differences are already listed, so I'll focus more on the use-case specifics.
When to choose AWS Glue
Data size is huge but structured, i.e. it is in a table structure and of a known format (CSV, Parquet, ORC, JSON).
Lineage is required; if you need the data lineage graph while developing your ETL job, prefer developing the ETL using Glue native libraries.
The developers don't need to tweak performance parameters like the number of executors, per-executor memory and so on.
You don't want the overhead of managing a large cluster and want to pay only for what you use.
When to use EMR
Data is huge but semi-structured or unstructured, so you can't take any benefit from the Glue catalog.
You only care about the outputs; lineage is not required.
You need to define more memory per executor depending on the type of your job and its requirements.
You can manage the cluster easily, or you have many jobs that can run concurrently on the cluster, saving you money.
In the case of structured data, use EMR when you want more Hadoop capabilities like Hive or Presto for further analytics.
So it depends on what your use case is. Both are great services.
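For illustration, a transient EMR cluster for a single Spark job can be launched with boto3 roughly like this (a sketch; the release label, instance types, roles, and script path are placeholders). It terminates itself when the step finishes, which is where the cost saving comes from:

```python
import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="nightly-etl",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Tear the cluster down as soon as the last step completes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://my-bucket/jobs/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```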
Glue allows you to submit ETL scripts directly in PySpark/Python/Scala, without the need for managing an EMR cluster. All setup/tear-down of infrastructure is managed.
There are also a few other managed components like Crawlers, Glue Data Catalog, etc which make it easier to work on your data.
You could use either for your use case; Glue would be faster, however you may not have the flexibility you get with EMR.
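A typical Glue job script, for comparison, is just PySpark plus the Glue wrappers (a sketch; the database, table, and output path are placeholders):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the CSV data registered in the Glue Data Catalog (e.g. by a crawler).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="events_csv")

# ... transformations on dyf / dyf.toDF() go here ...

# Write the result back to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/events/"},
    format="parquet",
)
job.commit()
```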
Glue uses EMR under the hood. This is evident when you ssh into the driver of your Glue dev-endpoint.
Now, since Glue is a managed Spark environment (or, say, a managed EMR environment), it comes with reduced flexibility. The type of workers you can choose is limited. The set of language libraries you can use in your Spark code is limited. Glue did not support packages like pandas and numpy until recently. Apps like Presto can't be integrated with Glue, although Athena is a good alternative to a separate Presto installation.
The main issue however is that Glue jobs have a cold start time from anywhere between 1 minute to 15 minutes.
EMR is a good choice for exploratory data analysis but for a production environment with CI/CD, Glue seems to be the better choice.
EDIT - Glue jobs no longer have a cold start wait time
From the AWS Glue FAQ:
AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs. AWS Glue infers, evolves, and monitors your ETL jobs to greatly simplify the process of creating and maintaining jobs.
Amazon EMR provides you with direct access to your Hadoop environment, affording you lower-level access and greater flexibility in using tools beyond Spark.
Source: https://aws.amazon.com/glue/faqs/
AWS Glue is an ETL service from AWS. AWS Glue will generate ETL code in Scala or Python to extract data from the source, transform the data to match the target schema, and load it into the target.
AWS EMR is a service for processing large amounts of data; it is a supporting big data platform. It supports Hadoop, Spark, Flink, Presto, Hive, etc. You can spin up EC2 instances with the software listed above and build a similar ecosystem.
In your case, you want to process 1 TB of data. If you want to do computations on the same data, you can use EMR, and if you want to run analytics on the transformed data, use Glue.
The following is something that I compiled after working on analytics projects (though a lot of it depends on the use case) - but generally speaking:
| Criteria | Glue | EMR |
| --- | --- | --- |
| Costs | Comparatively costlier | Much cheaper (due to Spot Instance functionality; there have been cases of savings of up to 50% compared to Glue costs - even more depending on the use case) |
| Orchestration | Inbuilt (Glue Workflows & Triggers) | Through CloudWatch triggers & Step Functions |
| Infra work required | No infra setup - just select the worker type; however, roles & permissions are needed | Identify the type of node needed & set up autoscaling rules, etc. |
| Cluster resiliency & robustness | Highly resilient (AWS managed) | If Spot Instances are used, interruptions might occur with a 2-minute notification (though the system recovers automatically - e.g. job times might elongate) |
| Skill sets needed | PySpark & intermediate AWS knowledge | DevOps to set up & manage EMR, intermediate knowledge of orchestration via CloudWatch & Step Functions, PySpark |
| Applicable use cases | Attractive option when: 1. you are not worried about costs but need highly resilient infra; 2. batch setups wherein the job completes in a fixed time; 3. short real-time streaming jobs which need to run for, say, hours during a day | 1. Volatile clusters, mostly used for batch processing (day-minus scenarios), making it a cost-effective solution for batch jobs; 2. attractive option for 24/7 Spark Streaming programs; 3. you need a Hadoop ecosystem & related tools (like HDFS, Hive, Hue, Impala, etc.); 4. you need to run Flink programs, etc.; 5. you need control over infra & its tuning parameters |
Also, going back to the OP's use case of 1 TB of data processing: if it's one-time processing, Glue should suffice; if it's a once-daily batch, EMR and Glue will both be good (depending on how the job is tuned, Glue can be an attractive option); if the job runs multiple times daily, then EMR is the better option (considering the balance of performance and cost).
We're building a Lambda architecture on the AWS stack. A lack of DevOps knowledge forces us to prefer AWS managed solutions over custom deployments.
Our workflow:
[Batch layer]
Kinesis Firehose -> S3 -Glue-> EMR (Spark) -Glue-> S3 views -------+
                                                                   |===> Serving layer (ECS) => Users
Kinesis -> EMR (Spark Streaming) -> DynamoDB/ElastiCache views ----+
[Speed layer]
We are already using 3 datastores: ElastiCache, DynamoDB and S3 (queried with Athena). The batch layer produces from 500,000 up to 6,000,000 rows each hour. Only the last hour's results should be queried by the serving layer, with low-latency random reads.
None of our databases fits both the batch-insert and random-read requirements. DynamoDB doesn't fit batch-insert - it's too expensive because of the throughput required for batch inserts. Athena is MPP and moreover has a limit of 20 concurrent queries. ElastiCache is used by the streaming layer; I'm not sure it's a good idea to perform batch inserts there as well.
Should we introduce a fourth storage solution or stay with the existing ones?
Considered options:
Persist batch output to DynamoDB and ElastiCache (the part of the data that is updated rarely and can be compressed/aggregated goes to DynamoDB; frequently updated data, ~8 GB/day, goes to ElastiCache).
Introduce another database (HBase on EMR over S3 / Amazon Redshift?) as a solution.
Use S3 Select over Parquet to overcome Athena's concurrent query limit. That would also reduce query latency. But does S3 Select have any concurrent query limits? I can't find any related info. (A minimal S3 Select sketch follows below.)
The first option is bad because of batch inserts into the ElastiCache used by the streaming layer. Also, does it follow the Lambda architecture to keep batch and speed layer views in the same data stores?
The second solution is bad because of the fourth database storage, isn't it?
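The S3 Select sketch referenced in option 3 above (bucket, key, and columns are placeholders). Note that each call scans only a single object, so concurrency is just however many such requests you issue in parallel:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-views-bucket",
    Key="batch-views/dt=2019-01-01/part-00000.parquet",
    ExpressionType="SQL",
    Expression="SELECT s.device_id, s.value FROM s3object s WHERE s.device_id = 'dev-42'",
    InputSerialization={"Parquet": {}},
    OutputSerialization={"JSON": {}},
)

# The response is an event stream; 'Records' events carry the matching rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```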
In this case you might want to use something like HBase or Druid; not only can they handle batch inserts and very low-latency random reads, they could even replace the DynamoDB/ElastiCache component of your solution, since you can write directly to them from the incoming stream (to a different table).
Druid is probably superior for this, but as per your requirements, you'll want HBase, as it is available on EMR with the Amazon Hadoop distribution, whereas Druid doesn't come in a managed offering.
I need to ETL data into my Cloud SQL instance. This data comes from API calls. Currently, I'm running custom Java ETL code in Kubernetes with CronJobs that makes requests to collect this data and load it into Cloud SQL. The problem comes with managing the ETL code and monitoring the ETL jobs. The current solution may not scale well when more ETL processes are incorporated. In this context, I need to use an ETL tool.
My Cloud SQL instance contains two types of tables: common transactional tables and tables that contain data that comes from the API. The second type is mostly read-only from an "operational database" perspective, and a huge part of those tables is bulk updated every hour (in batch) to discard the old data and refresh the values.
Considering this context, I noticed that Cloud Dataflow is the ETL tool provided by GCP. However, it seems that this tool is more suitable for big data applications that need to do complex transformations and ingest data in multiple formats. Also, in Dataflow, the data is processed in parallel and worker nodes are scaled as needed. Since Dataflow is a distributed system, the ETL process would probably carry some overhead in allocating resources just to do a simple bulk load. In addition, I noticed that Dataflow doesn't have a particular sink for Cloud SQL. This probably means that Dataflow isn't the correct tool for simple bulk load operations into a Cloud SQL database.
For my current needs, I only need to do simple transformations and bulk load the data. However, in the future, we might want to handle other sources of data (PNG, JSON, CSV files) and sinks (Cloud Storage and maybe BigQuery). Also, in the future, we might want to ingest streaming data and store it in Cloud SQL. In this sense, the underlying Apache Beam model is really interesting, since it offers a unified model for batch and streaming.
Giving all this context, I can see two approaches:
1) Use an ETL tool like Talend in the Cloud to help monitoring ETL jobs and maintenance.
2) Use Cloud Dataflow, since we may need streaming capabilities and integration with all kinds of sources and sinks.
The problem with the first approach is that I may end up using Cloud Dataflow anyway when future requirements arrive, and that would be bad for my project in terms of infrastructure costs, since I would be paying for two tools.
The problem with the second approach is that Dataflow doesn't seem to be suitable for simply bulk loading operations in a Cloud SQL Database.
Is there something I am getting wrong here? Can someone enlighten me?
You can use Cloud Dataflow just for loading operations. Here is a tutorial on how to perform ETL operations with Dataflow. It uses BigQuery but you can adapt it to connect to your Cloud SQL or other JDBC sources.
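A minimal Beam (Python SDK) sketch of the "simple transform + bulk load" pattern, assuming a MySQL-flavoured Cloud SQL instance; the bucket, connection details, and table are placeholders, and the write happens in a DoFn with a plain database driver since there is no dedicated Cloud SQL sink:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class WriteToCloudSQL(beam.DoFn):
    """Buffers rows and bulk-inserts them into a (placeholder) Cloud SQL table."""

    def setup(self):
        import pymysql  # assumes a MySQL-compatible Cloud SQL instance
        self.conn = pymysql.connect(host="10.0.0.5", user="etl",
                                    password="change-me", database="appdb")

    def start_bundle(self):
        self.rows = []

    def process(self, row):
        self.rows.append(row)
        if len(self.rows) >= 500:
            self._flush()

    def finish_bundle(self):
        self._flush()

    def _flush(self):
        if not self.rows:
            return
        with self.conn.cursor() as cur:
            # REPLACE implements the hourly "discard old / refresh" behaviour.
            cur.executemany(
                "REPLACE INTO api_metrics (metric_id, ts, value) VALUES (%s, %s, %s)",
                self.rows)
        self.conn.commit()
        self.rows = []

    def teardown(self):
        self.conn.close()


def parse_line(line):
    metric_id, ts, value = line.split(",")
    return (metric_id, ts, float(value))


if __name__ == "__main__":
    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://my-bucket/api-dump/*.csv")
         | "Parse" >> beam.Map(parse_line)
         | "Load" >> beam.ParDo(WriteToCloudSQL()))
```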
More examples can be found on the official Google Cloud Platform github page for Dataflow analysis of user generated content.
You can also have a look at this GCP ETL architecture example that automates the tasks of extracting data from operational databases.
For simpler ETL operations, Dataprep is an easy tool to use and provides flow scheduling as well.
Does anyone know how fast the copy speed is from Amazon S3 to Redshift?
I only want to use Redshift for about an hour a day, to run updates on Tableau reports. The queries being run are always on the same database, but I need to run them each night to take into account new data that's come in that day.
I don't want to keep a cluster going 24x7 just to be used for one hour a day, but the only way that I can see of doing this is to import the entire database each night into Redshift (I don't think you can suspend or pause a cluster). I have no idea what the copy speed is, so I have no idea if it's going to be relatively quick to copy a 10 GB file into Redshift every night.
Assuming it's feasible, my thinking is to push the incremental changes from the SQL Server database into S3. Using CloudFormation, I automate the provisioning of a Redshift cluster at 1am for 1 hour, import the database from S3, and schedule Tableau to run its queries in that window and get its results. I keep an eye on how long the queries take, and if I need longer than an hour I just amend the CloudFormation.
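For concreteness, the nightly import would boil down to a COPY along these lines (a rough sketch; the bucket, role, table, and connection details are placeholders):

```python
import psycopg2  # any Postgres-compatible client works against Redshift

COPY_SQL = """
COPY staging.daily_delta
FROM 's3://my-export-bucket/sqlserver-delta/2015-06-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
FORMAT AS CSV
GZIP
"""

conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="reports", user="loader", password="change-me")

# COPY is parallelised across the cluster, so splitting the dump into
# multiple gzipped files speeds up the load.
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)
conn.close()
```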
In this way I hope to keep a really 'lean' Tableau server by outsourcing all the ETL to Redshift, and buying only what I consume on Redshift.
Please feel free to critique my solution, or outright blow it out of the water. Otherwise, if the consensus is that importing is relatively quick, it gives me a thumbs up that I'm headed in the right direction with this solution.
Thanks for any assistance!
Redshift loads from S3 are very quick; however, Redshift clusters do not come up / tear down quickly at all. In the above example most of your time (and money) would be spent waiting for the cluster to come up, existing data to load, refreshed data to unload and the cluster to tear down again.
In my opinion it would be better to use another approach for your overnight processing. I would suggest either:
For a couple of TB, InfiniDB on a largish EC2 instance with the database stored on an EBS volume.
For many TBs, Amazon EMR with the data stored on S3. If you don't want to get into Hadoop too much you can use Xplenty/Syncsort Ironcluster/etc. to orchestrate the Hadoop element.
While this question was written three years ago and it wasn't available at that time, a suitable solution to this now would be to use Amazon Athena, which allows on-demand SQL querying of data held in S3. This works on a pay-per-query model, and is intended for ad-hoc and "quick" workloads like this.
Behind the scenes, Athena uses Presto and Elastic MapReduce, but the only required knowledge for a developer/analyst in practice is SQL.
Tableau also now has a built-in Athena connector (as of 10.3).
More on Athena here: https://aws.amazon.com/athena/
You can pre-sort the data you are keeping on S3. It will make VACUUM much faster.
This is the classic problem with Redshift... If you are open to a different route, Microsoft recently announced a new service called SQL Data Warehouse (it uses the PDW engine); I think they want to compete directly with Redshift. The most interesting aspect is the familiar SQL Server query language and toolset (including stored procedure support). They have also decoupled storage and compute, so you can have 1 GB of storage but 10 compute nodes for intensive queries, and vice versa. They claim that compute nodes start in a few seconds and that you don't have to take the cluster offline when you resize it. The cloud data warehouse battle is getting hot :)