Can Google Data Fusion do the same data cleaning as Dataprep? - google-cloud-platform

I want to run a machine learning model on some data. Before training the model I need to process this data, so I have been reading about ways to do it.
One option is to create a Dataflow pipeline to upload it to BigQuery or Google Cloud Storage, and then create a data pipeline with Google Dataprep to clean it.
The other way I read about is Data Fusion, which can create data pipelines more easily. But here is my doubt: is Data Fusion only for creating a pipeline like Dataflow, so that I would still have to use Dataprep to clean the data, or can Data Fusion itself clean the data and prepare it to feed into my machine learning model?
If Data Fusion can clean the data like Dataprep, when should I use Dataprep?

Data Fusion and Dataprep can perform the same things; however, their execution is different.
Data Fusion creates a Spark pipeline and runs it on a Dataproc cluster.
Dataprep creates a Beam pipeline and runs it on Dataflow.
IMO, Data Fusion is more designed for data ingestion from one source to another, with few transformations.
Dataprep is more designed for data preparation (as its name suggests): data cleaning, new column creation, column splitting. Dataprep also provides insights into the data to help you build your recipes.
In addition, Beam is part of TensorFlow Extended, so your data engineering pipeline will be more consistent if you use a tool compliant with Beam.
That's why I would recommend Dataprep instead of Data Fusion.

Related

Ingest RDBMS data to BigQuery

We have on-prem sources like SQL Server and Oracle. Data from them has to be ingested periodically, in batch mode, into BigQuery. What should the architecture be? Which GCP-native services can be used for this? Can Dataflow or Dataproc be used?
PS: Our organization hasn't licensed any third-party ETL tool so far. The preference is for Google-native services. Data Fusion is very expensive.
There are two approaches you can take with Apache Beam.
Periodically run a Beam/Dataflow batch job against your database. You could use Beam's JdbcIO connector to read the data. After that you can transform your data using Beam transforms (PTransforms) and write to the destination using a Beam sink. In this approach, you are responsible for handling duplicate data (for example, by providing different SQL queries across executions). A rough sketch of this approach follows below.
Use a Beam/Dataflow pipeline that can read change streams from a database. The simplest approach here might be to use one of the available Dataflow templates. For example, see here. You can also develop your own pipeline using Beam's DebeziumIO connector.
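As a rough sketch of the first approach, a periodic batch Beam job in Python could read from an on-prem SQL Server over JDBC and append into a BigQuery staging table. The connection string, credentials, table names, and schema below are placeholders rather than anything from the question:

```python
# Hedged sketch: batch ingest from an on-prem SQL Server into BigQuery using
# Beam's cross-language JdbcIO connector. All names and credentials are placeholders.
import apache_beam as beam
from apache_beam.io.jdbc import ReadFromJdbc
from apache_beam.options.pipeline_options import PipelineOptions


def to_bq_row(record):
    # ReadFromJdbc yields schema'd rows whose fields match the source columns.
    return {'id': record.id, 'name': record.name, 'updated_at': str(record.updated_at)}


options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project',                 # placeholder
    region='us-central1',
    temp_location='gs://my-bucket/tmp',   # placeholder
)

with beam.Pipeline(options=options) as p:
    (p
     | 'ReadFromSqlServer' >> ReadFromJdbc(
         table_name='dbo.customers',      # placeholder table
         driver_class_name='com.microsoft.sqlserver.jdbc.SQLServerDriver',
         jdbc_url='jdbc:sqlserver://onprem-host:1433;databaseName=sales',
         username='etl_user',
         password='secret')               # use Secret Manager in practice
     | 'ToBigQueryRows' >> beam.Map(to_bq_row)
     | 'WriteToBigQuery' >> beam.io.WriteToBigQuery(
         'my-project:staging.customers',
         schema='id:INTEGER,name:STRING,updated_at:STRING',
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
         create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
```

To avoid re-ingesting the same rows on every run, you could pass a query argument to ReadFromJdbc that filters on an updated-at column instead of reading the whole table; scheduling can be handled by Cloud Scheduler or Cloud Composer.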

Best way to ingest data to BigQuery

I have heterogeneous sources like flat files residing on-prem, JSON on SharePoint, APIs which serve data, and so on. Which is the best ETL tool to bring data into the BigQuery environment?
I'm a kindergarten student in GCP :)
Thanks in advance
There are many solutions to achieve this. It depends on several factors, some of which are:
frequency of data ingestion
whether or not the data needs to be manipulated before being written into BigQuery (your files may not be formatted correctly)
whether this is going to be done manually or automated
size of the data being written
If you are just looking for an ETL tool you can find many. If you plan to scale this to many pipelines you might want to look at a more advanced tool like Airflow, but if you just have a few one-off processes you could set up a Cloud Function within GCP to accomplish this. You can schedule it (via cron), invoke it through an HTTP endpoint, or trigger it via Pub/Sub. You can see an example of how this is done here.
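As an illustration of that last option, a minimal Cloud Function might look like the sketch below: it is triggered when a file lands in a Cloud Storage bucket (or it could be invoked on a schedule) and loads the file straight into BigQuery. The project, dataset, and table names are made up:

```python
# Hypothetical sketch of a background Cloud Function that loads a newly
# uploaded GCS file into BigQuery. Bucket, dataset, and table names are placeholders.
from google.cloud import bigquery


def load_to_bigquery(event, context):
    """Triggered by a file landing in a bucket (google.storage.object.finalize)."""
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,  # or NEWLINE_DELIMITED_JSON
        autodetect=True,                          # let BigQuery infer the schema
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    load_job = client.load_table_from_uri(
        uri, 'my-project.raw_zone.events', job_config=job_config)  # placeholder table
    load_job.result()  # wait for the load job and surface any errors
```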
After several tries at datalake/datawarehouse design and architecture, I can recommend only one thing: ingest your data into BigQuery as soon as possible, no matter the format or transformation.
Then, in BigQuery, run queries to format, clean, aggregate, and add value to your data. It's not ETL, it's ELT: you start by loading your data and then you transform it.
It's quicker, cheaper, simpler, and based only on SQL.
It works only if you use ONLY BigQuery as the destination.
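To make the ELT idea concrete, here is a hedged sketch using the BigQuery Python client. It assumes the raw data has already been loaded as-is into a staging table; all formatting, cleaning, and aggregation then happen in a single SQL statement inside BigQuery. The project, dataset, table, and column names are invented for the example:

```python
# Illustrative ELT step: the "T" runs entirely inside BigQuery as SQL.
# All table and column names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

transform_sql = """
CREATE OR REPLACE TABLE `my-project.analytics.daily_sales` AS
SELECT
  DATE(order_ts)            AS order_date,
  LOWER(TRIM(country_code)) AS country_code,   -- cleaning
  SUM(amount)               AS total_amount    -- aggregation
FROM `my-project.raw_zone.orders`              -- raw data loaded as-is
WHERE amount IS NOT NULL
GROUP BY order_date, country_code
"""

client.query(transform_sql).result()  # run the transformation and wait for it
```

The same statement could also be run as a BigQuery scheduled query, with no client code at all.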
If you are starting from scratch and have no legacy tools to carry with you, the following GCP managed products target your use case:
Cloud Data Fusion, "a fully managed, code-free data integration service that helps users efficiently build and manage ETL/ELT data pipelines"
Cloud Composer, "a fully managed data workflow orchestration service that empowers you to author, schedule, and monitor pipelines"
Dataflow, "a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing"
(Without considering a myriad of data integration tools and fully customized solutions using Cloud Run, Scheduler, Workflows, VMs, etc.)
Choosing one depends on your technical skills, real-time processing needs, and budget. As mentioned by Guillaume Blaquiere, if BigQuery is your only destination, you should try to leverage BigQuery's processing power for your data transformations.

Google Cloud Dataflow - is it possible to define a pipeline that reads data from BigQuery and writes to an on-premise database?

My organization plans to store a set of data in BigQuery and would like to periodically extract some of that data and bring it back to an on-premise database. In reviewing what I've found online about Dataflow, the most common examples involve moving data in the other direction - from an on-premise database into the cloud. Is it possible to use Dataflow to bring data back out of the cloud to our systems? If not, are there other tools that are better suited to this task?
Abstractly, yes. If you've got a set of sources and sinks and you want to move data between them with some set of transformations, then Beam/Dataflow should be perfectly suitable for the task. It sounds like you're discussing a batch-based periodic workflow rather than a continuous streaming workflow.
In terms of implementation effort, there's more questions to consider. Does an appropriate Beam connector exist for your intended on-premise database? You can see the built-in connectors here: https://beam.apache.org/documentation/io/built-in/ (note the per-language SDK toggle at top of page)
Do you need custom transformations? Are you combining data from systems other than just BigQuery? Either implies to me that you're on the right track with Beam.
On the other hand, if your extract process is relatively straightforward (e.g. just run a query once a week and extract it), you may find there are simpler solutions, particularly if you're not moving much data and your database can ingest data in one of the BigQuery export formats.
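If you do go the Beam/Dataflow route, a minimal sketch of such a batch pipeline might look like the following: it reads the result of a BigQuery query and writes the rows to an on-premise PostgreSQL database through the cross-language JdbcIO connector. The query, JDBC URL, credentials, and table names are all placeholders, and the on-prem database has to be reachable from the Dataflow workers (e.g. over VPN or Interconnect):

```python
# Hedged sketch: BigQuery -> on-prem PostgreSQL with Beam's JdbcIO.
# All connection details and names are placeholders.
import typing

import apache_beam as beam
from apache_beam import coders
from apache_beam.io.jdbc import WriteToJdbc
from apache_beam.options.pipeline_options import PipelineOptions


class OrderRow(typing.NamedTuple):
    order_id: int
    customer: str
    amount: float


# WriteToJdbc needs a schema'd PCollection, so register a RowCoder for the type.
coders.registry.register_coder(OrderRow, coders.RowCoder)

options = PipelineOptions(runner='DataflowRunner', project='my-project',
                          region='us-central1', temp_location='gs://my-bucket/tmp')

with beam.Pipeline(options=options) as p:
    (p
     | 'ReadFromBigQuery' >> beam.io.ReadFromBigQuery(
         query='SELECT order_id, customer, amount '
               'FROM `my-project.sales.orders` WHERE order_date = CURRENT_DATE()',
         use_standard_sql=True)
     | 'ToRows' >> beam.Map(
         lambda r: OrderRow(r['order_id'], r['customer'], r['amount'])
       ).with_output_types(OrderRow)
     | 'WriteToOnPrem' >> WriteToJdbc(
         table_name='public.orders_export',
         driver_class_name='org.postgresql.Driver',
         jdbc_url='jdbc:postgresql://onprem-host:5432/warehouse',
         username='dataflow_writer',
         password='secret'))            # use Secret Manager in practice
```

For the simpler case mentioned above, a plain BigQuery export (e.g. to CSV or Avro in Cloud Storage) followed by your database's own bulk-load tooling may be all you need.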

Preprocessing on data stored in BigQuery

I've just started to use GCP and I have some doubts regarding the right use of some of its tools. In particular, I'm trying to ingest data from Google Analytics into BigQuery. Would it be possible to use Dataprep on data stored in BigQuery? Almost every example I've seen uses Dataprep to visualize data stored in Google Storage, but nothing refers to BigQuery.
Any help would be really appreciated.
You can totally use Dataprep to process data stored in BigQuery. It gives you a great way to visualize how your dataset looks, and interactively define transformations.
Now, do you really want to use Dataprep for this? The transformations will be more expensive and slower, as they will run on Dataflow - which is usually more expensive and slower than doing everything within BigQuery (as the question refers to data that's already in BigQuery).
On the other hand, the interactive environment can help you quickly define what you want and run the created recipe periodically.
See more about this in Lak's "How to schedule a BigQuery ETL job with Dataprep": https://medium.com/google-cloud/how-to-schedule-a-bigquery-etl-job-with-dataprep-b1c314883ab9
According to the documentation on Dataprep, you can import BigQuery datasets.
But it might be easier to just open Dataprep and check the import options there.

Google Cloud Dataprep - ETL capabilities

I know that Dataprep isn't out yet, but I'm very curious to know whether it will be possible to perform ETL transformations using Dataprep.
Is it going to be a replacement for Dataflow?
Thanks
Dataprep is basically a UI which spins up a Dataflow job, so expect similar ETL capabilities and performance. As with every UI, it is likely that actually writing your Dataflow pipeline in code will give you more control; on the other hand, Dataprep will make it more accessible.
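To illustrate that trade-off, here is a rough hand-coded equivalent of a very simple Dataprep recipe (parse a CSV, drop rows with a missing email, split a name column), written directly as a Beam pipeline. The bucket, columns, and destination table are made-up examples:

```python
# Hypothetical sketch: a small "Dataprep-style" cleaning step written as Beam code.
# File path, columns, and destination table are placeholders.
import csv

import apache_beam as beam


def parse_and_clean(line):
    # Parse one CSV line; skip malformed rows and rows with no email,
    # split the full name into first/last, and normalize the email.
    row = next(csv.reader([line]))
    if len(row) < 3 or not row[2]:
        return []                       # drop invalid rows
    user_id, full_name, email = row[0], row[1], row[2]
    first, _, last = full_name.partition(' ')
    return [{'user_id': int(user_id), 'first_name': first,
             'last_name': last, 'email': email.lower()}]


with beam.Pipeline() as p:              # add Dataflow pipeline options to run on GCP
    (p
     | 'ReadCsv' >> beam.io.ReadFromText('gs://my-bucket/users.csv',
                                         skip_header_lines=1)
     | 'Clean' >> beam.FlatMap(parse_and_clean)
     | 'WriteToBigQuery' >> beam.io.WriteToBigQuery(
         'my-project:clean_zone.users',
         schema='user_id:INTEGER,first_name:STRING,last_name:STRING,email:STRING',
         write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
```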
To get more information have a look at the product page and perhaps some videos from Next.