I can't find limit information about Cloud Data Fusion.
Does anyone know how many data pipelines I can create with Cloud Data Fusion by default? (link/source needed)
You can create as many pipelines as you want, as long as you are not hitting the quotas of the resources used in the pipeline. For example, if your pipeline uses BigQuery, Compute Engine, etc. and one of these hits a quota, then you will not be able to create a new pipeline. See Data Fusion Quotas and limits for reference.
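For reference, a quick way to see how close you are to the Compute Engine quotas backing your pipelines is to read the project's quota list through the Compute API. A minimal sketch, assuming the google-api-python-client library and a placeholder project ID:

```python
# Minimal sketch: print current Compute Engine quota usage for a project.
# "my-project" is a placeholder; credentials are taken from the environment
# (e.g. GOOGLE_APPLICATION_CREDENTIALS or gcloud application-default login).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project = compute.projects().get(project="my-project").execute()

for quota in project.get("quotas", []):
    print(f'{quota["metric"]}: {quota["usage"]}/{quota["limit"]}')
```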
I have a CSV that is used to update entries in a SQL database. The file size is at most 50 KB and the update frequency is twice a week. I also have a requirement to do some automated sanity testing.
What should I use: Dataflow or Cloud Functions?
For this use case (two small files per week), a Cloud Function will be the best option.
Other options would be to create a Dataflow batch pipeline and trigger it with a Cloud Function, or to create a Dataflow streaming pipeline, but both options will be more expensive.
Here is some documentation about how to connect to Cloud SQL from a Cloud Function.
And here is some documentation about triggering a Cloud Function from Cloud Storage.
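To make that concrete, here is a minimal sketch of such a function. It assumes a Python 3 background function triggered by a Cloud Storage finalize event, a PostgreSQL Cloud SQL instance reached over the /cloudsql unix socket with the pg8000 driver, and a hypothetical entries(id, value) table; adjust names and schema to your setup.

```python
# Minimal sketch: Cloud Function triggered when a CSV lands in a bucket,
# upserting each row into a Cloud SQL (PostgreSQL) table. Environment
# variables and the "entries" table are placeholders.
import csv
import io
import os

import sqlalchemy
from google.cloud import storage

# Connect through the Cloud SQL unix socket mounted into the function.
DB_URL = sqlalchemy.engine.url.URL.create(
    drivername="postgresql+pg8000",
    username=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
    database=os.environ["DB_NAME"],
    query={
        "unix_sock": f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}/.s.PGSQL.5432"
    },
)
engine = sqlalchemy.create_engine(DB_URL, pool_size=1)


def update_entries(event, context):
    """Background function: read the uploaded CSV and upsert each row."""
    blob = storage.Client().bucket(event["bucket"]).blob(event["name"])
    rows = csv.DictReader(io.StringIO(blob.download_as_text()))
    with engine.begin() as conn:
        for row in rows:
            conn.execute(
                sqlalchemy.text(
                    "INSERT INTO entries (id, value) VALUES (:id, :value) "
                    "ON CONFLICT (id) DO UPDATE SET value = :value"
                ),
                {"id": row["id"], "value": row["value"]},
            )
```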
See ya.
If there are no aggregations, and each input "element" does not interact with the others (that is, the pipeline works 1:1), then you can use either Cloud Functions or Dataflow.
But if you need to do aggregations, filtering, or any complex calculation that involves more than one element, you will not be able to implement that with Cloud Functions. You need to use Dataflow in that situation.
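To illustrate the difference, here is a minimal Apache Beam sketch of an aggregation (a per-key sum) that has to see every element sharing a key, which is exactly what a per-event Cloud Function cannot do. The bucket path and field positions are made up:

```python
# Minimal sketch (Apache Beam Python SDK): the per-key sum must combine many
# input elements, so it fits Dataflow rather than a 1:1 Cloud Function.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "ReadCsv" >> beam.io.ReadFromText("gs://my-bucket/input.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeyByUser" >> beam.Map(lambda fields: (fields[0], float(fields[1])))
        | "SumPerUser" >> beam.CombinePerKey(sum)  # aggregation across elements
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output")
    )
```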
We are building a customer-facing app. For this app, data is captured by IoT devices owned by a third party and transferred to us from their server via API calls. We store this data in our AWS DocumentDB cluster. The user app is connected to this cluster and has real-time data feed requirements. Note: the data is time-series data.
The thing is, for long-term data storage and for creating analytics dashboards to be shared with stakeholders, our data governance folks are asking us to replicate/copy the data daily from the AWS DocumentDB cluster to their Google Cloud Platform -> BigQuery. Then we can run queries directly on BigQuery to perform analysis and send data to maybe Explorer or Tableau to create dashboards.
I couldn't find any straightforward solutions for this. Any ideas, comments or suggestions are welcome. How do I achieve or plan the above replication? And how do I make sure the data is copied efficiently, in terms of memory and pricing? Also, I don't want to disturb the performance of AWS DocumentDB, since it supports our user-facing app.
This solution would need some custom implementation. You can utilize Change Streams and process the data changes at intervals to send to BigQuery, so there is a data replication mechanism in place for you to run analytics. One of the documented use cases of Change Streams is analytics with Redshift, so BigQuery should serve a similar purpose.
Using Change Streams with Amazon DocumentDB:
https://docs.aws.amazon.com/documentdb/latest/developerguide/change_streams.html
This document also contains sample Python code for consuming change stream events.
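As a rough illustration of the approach (not the AWS sample itself), a small consumer could tail the change stream with pymongo and push the changes into BigQuery. The connection string, dataset/table and field mapping below are placeholders:

```python
# Rough sketch: tail a DocumentDB change stream and stream changes to BigQuery.
from pymongo import MongoClient
from google.cloud import bigquery

DOCDB_URI = "mongodb://user:pass@my-docdb-cluster:27017/?tls=true&replicaSet=rs0"
BQ_TABLE = "my-project.iot_analytics.device_readings"

collection = MongoClient(DOCDB_URI)["iot"]["readings"]
bq = bigquery.Client()

# The resume token can be persisted so the job picks up where it left off.
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        doc = change.get("fullDocument") or {}
        row = {
            "device_id": doc.get("deviceId"),
            "reading": doc.get("value"),
            "event_time": str(doc.get("timestamp")),
            "operation": change["operationType"],
        }
        errors = bq.insert_rows_json(BQ_TABLE, [row])
        if errors:
            print(f"BigQuery insert errors: {errors}")
```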
I'm using GCP Stackdriver custom metrics and created a few dashboard graphs to show the traffic on the system. The problem is that the graph system keeps the data only for a few weeks, not forever.
From the Stackdriver documentation:
See Quotas and limits for limits on the number of custom metrics and the number of active time series, and for the data retention period. If you wish to keep your metric data beyond the retention period, you must manually copy the data to another location, such as Cloud Storage or BigQuery.
Let's say we decide to work with Cloud Storage as the container to store the data for the long term.
Questions:
How does this "manual data copy" is working? Just write the same data into two places (Google storage and Stackdrive)?
How the stackdrive is connecting the storage and generating graph of it?
You can use Stackdriver's Logs Export feature to export your logs into any of three sinks: Google Cloud Storage, BigQuery, or a Pub/Sub topic. Here are the instructions on how to export Stackdriver logs. You are not writing logs to two places in real time; you are exporting logs based on the filters you set.
One thing to keep in mind is that you will not be able to use Stackdriver graphs or alerting tools with the exported logs.
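If you go the sink route, creating the export sink can also be scripted. A rough sketch with the google-cloud-logging Python client, where the sink name, filter and bucket are placeholders (after creation, the bucket must grant write access to the sink's service account):

```python
# Rough sketch: create a log export sink that writes matching entries to
# a Cloud Storage bucket. Names and filter are placeholders.
from google.cloud import logging

client = logging.Client()
sink = client.sink(
    "my-long-term-sink",
    filter_='resource.type="gce_instance" AND severity>=INFO',
    destination="storage.googleapis.com/my-long-term-logs-bucket",
)
if not sink.exists():
    sink.create()
    print(f"Created sink {sink.name}")
```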
In addition, if you export logs into BigQuery, you can plug in a Data Studio graph to see your metrics.
You can also do this with the Cloud Storage export, but it's less immediate and less handy.
I suggest this guide on creating a pipeline to export metrics to BigQuery for long-term storage and analytics:
https://cloud.google.com/solutions/stackdriver-monitoring-metric-export
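As a rough sketch of the idea behind that guide, a scheduled job could pull recent points for a custom metric with the Cloud Monitoring API and append them to a BigQuery table. The project ID, metric type and table name are placeholders, and the snippet assumes the google-cloud-monitoring and google-cloud-bigquery clients:

```python
# Rough sketch: export the last hour of a custom metric to BigQuery.
import time

from google.cloud import bigquery, monitoring_v3

PROJECT_ID = "my-project"
BQ_TABLE = "my-project.metrics_archive.custom_metric_points"

monitoring = monitoring_v3.MetricServiceClient()
bq = bigquery.Client()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

series = monitoring.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "custom.googleapis.com/my_traffic_metric"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

rows = [
    {
        "metric_type": ts.metric.type,
        "point_time": point.interval.end_time.isoformat(),
        "value": point.value.double_value,
    }
    for ts in series
    for point in ts.points
]
if rows:
    errors = bq.insert_rows_json(BQ_TABLE, rows)
    print(errors or f"Exported {len(rows)} points")
```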
I need to ETL data into my Cloud SQL instance. This data comes from API calls. Currently, I'm running custom Java ETL code in Kubernetes with CronJobs that makes requests to collect this data and loads it into Cloud SQL. The problem comes with managing the ETL code and monitoring the ETL jobs. The current solution may not scale well when more ETL processes are incorporated. In this context, I need to use an ETL tool.
My Cloud SQL instance contains two types of tables: common transactional tables and tables that contain data coming from the API. The second type is mostly read-only from an "operational database perspective", and a large part of those tables is bulk updated every hour (in batch) to discard the old data and refresh the values.
Considering this context, I noticed that Cloud Dataflow is the ETL tool provided by GCP. However, it seems that this tool is more suitable for big data applications that need to do complex transformations and ingest data in multiple formats. Also, in Dataflow, the data is processed in parallel and worker nodes are scaled as needed. Since Dataflow is a distributed system, the ETL process might have an overhead when allocating resources to do a simple bulk load. In addition to that, I noticed that Dataflow doesn't have a particular sink for Cloud SQL. This probably means that Dataflow isn't the correct tool for simple bulk load operations into a Cloud SQL database.
For my current needs, I only need to do simple transformations and bulk load the data. However, in the future, we might want to handle other sources of data (PNG, JSON, CSV files) and sinks (Cloud Storage and maybe BigQuery). Also, in the future, we might want to ingest streaming data and store it in Cloud SQL. In this sense, the underlying Apache Beam model is really interesting, since it offers a unified model for batch and streaming.
Given all this context, I can see two approaches:
1) Use an ETL tool like Talend in the Cloud to help with monitoring ETL jobs and maintenance.
2) Use Cloud Dataflow, since we may need streaming capabilities and integration with all kinds of sources and sinks.
The problem with the first approach is that I may end up using Cloud Dataflow anyway when future requirements arrive, and that would be bad for my project in terms of infrastructure costs, since I would be paying for two tools.
The problem with the second approach is that Dataflow doesn't seem to be suitable for simple bulk loading operations into a Cloud SQL database.
Is there something I am getting wrong here? Can someone enlighten me?
You can use Cloud Dataflow just for loading operations. Here is a tutorial on how to perform ETL operations with Dataflow. It uses BigQuery but you can adapt it to connect to your Cloud SQL or other JDBC sources.
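For illustration, here is a hedged sketch of writing rows into a Cloud SQL PostgreSQL instance with Beam's cross-language JdbcIO transform. The JDBC URL, table, credentials and CSV layout are placeholders, and the runner needs Java available for the expansion service:

```python
# Hedged sketch: bulk-insert parsed CSV rows into Cloud SQL over JDBC.
import typing

import apache_beam as beam
from apache_beam import coders
from apache_beam.io.jdbc import WriteToJdbc

ApiRow = typing.NamedTuple("ApiRow", [("id", int), ("value", str)])
coders.registry.register_coder(ApiRow, coders.RowCoder)


def parse_line(line):
    fields = line.split(",")
    return ApiRow(int(fields[0]), fields[1])


with beam.Pipeline() as p:
    (
        p
        | "ReadCsv" >> beam.io.ReadFromText("gs://my-bucket/api_dump.csv")
        | "Parse" >> beam.Map(parse_line).with_output_types(ApiRow)
        | "WriteToCloudSQL" >> WriteToJdbc(
            table_name="api_data",
            driver_class_name="org.postgresql.Driver",
            jdbc_url="jdbc:postgresql://10.0.0.5:5432/mydb",
            username="etl_user",
            password="etl_password",
        )
    )
```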
More examples can be found on the official Google Cloud Platform GitHub page for Dataflow analysis of user-generated content.
You can also have a look at this GCP ETL architecture example that automates the tasks of extracting data from operational databases.
For simpler ETL operations, Dataprep is an easy tool to use and provides flow scheduling as well.
I have an app deployed in 5 regions.
The latency between the regions varies from 150 ms to 300 ms.
Currently, we use the method outlined in this article (usage tracking part):
http://highscalability.com/blog/2018/4/2/how-ipdata-serves-25m-api-calls-from-10-infinitely-scalable.html
But we export logs from Stackdriver to Cloud Pub/Sub. Then we use Cloud Dataflow to count the number of requests consumed per API key and update it in a MongoDB Atlas database, which is geo-replicated in 5 regions.
In our app, we only read usage info from the nearest Mongo replica for low latency. The app never updates any usage data directly in Mongo, as that might incur a latency cost since the data has to be updated on the primary, which may be in another region.
Updating the API key usage counter directly from the app in Mongo doesn't seem feasible because we have traffic coming in at 10,000 RPS, and due to the latency between regions, I think it would run into other issues. This is just a hunch; so far I've not tested it. I came to this conclusion based on my reading of https://www.mongodb.com/blog/post/active-active-application-architectures-with-mongodb
One problem is that we end up paying for Cloud Pub/Sub and Dataflow. Are there strategies to avoid this?
I researched on Google but didn't find how other multi-region apps keep track of usage per API key in real time. I am not surprised; from my understanding, most apps operate in a single region for simplicity, and until now it was not feasible to deploy an app in multiple regions without significant overhead.
If you want real time, then the best option is to go with Dataflow. You could change the way data arrives at Dataflow, for example using Stackdriver → Cloud Storage → Dataflow: instead of going through Pub/Sub you would go through Cloud Storage, so it's more a choice of convenience and of comparing each product's cost for your use case. Here's an example of how it could be done with Cloud Storage.
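For example, a batch variant of that pipeline could look roughly like the sketch below, reading exported JSON log files from Cloud Storage instead of Pub/Sub. The bucket path, log field names and the Mongo URI are placeholders:

```python
# Rough sketch: count requests per API key from exported log files and
# upsert the totals into MongoDB (the _id field drives the upsert).
import json

import apache_beam as beam
from apache_beam.io.mongodbio import WriteToMongoDB

with beam.Pipeline() as p:
    (
        p
        | "ReadLogs" >> beam.io.ReadFromText("gs://my-log-bucket/exported-logs/*.json")
        | "Parse" >> beam.Map(json.loads)
        | "KeyByApiKey" >> beam.Map(lambda log: (log["jsonPayload"]["api_key"], 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "ToDocument" >> beam.Map(lambda kv: {"_id": kv[0], "request_count": kv[1]})
        | "UpsertUsage" >> WriteToMongoDB(
            uri="mongodb+srv://user:pass@cluster.mongodb.net",
            db="usage",
            coll="api_key_counts",
        )
    )
```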