Delays when streaming data from the Google Search Console API to BigQuery - google-cloud-platform

So I've been trying to stream data from the Google Search Console API to BigQuery in real time.
The data are retrieved from the GSC API and streamed to the BigQuery streaming buffer. However, I experience high latency (up to 2 hours or more) before the streaming buffer is flushed, so the data stays in the streaming buffer but is not in the table.
The data are also not visible in the preview, and the table size shows 0 B with 0 rows (actually, after waiting for more than a day, I still see 0 B even though there are more than 0 rows).
Another issue is that, some time after the data is stored in the table (table size and number of rows are correct), it simply disappears from the table and reappears in the streaming buffer (I only saw this once). -> This was explained by the second paragraph of shollyman's answer.
What I want is to have the data in the table in real time. According to the documentation this seems possible, but it doesn't work in my case (the 2 hours of delay stated above).
Here's the code responsible for that part:
import uuid
from googleapiclient.discovery import build

# projectid, dataset_id, tableid and the GSC API `response` are defined earlier
bigquery_service = build('bigquery', 'v2', cache_discovery=False)

for row in response['rows']:
    keys = ','.join(row['keys'])
    # Stream one row at a time to BigQuery via tabledata.insertAll
    row_to_stream = {'keys': keys, 'f1': row['f1'], 'f2': row['f2'],
                     'ctr': row['ctr'], 'position': row['position']}
    insert_all_data = {
        "kind": "bigquery#tableDataInsertAllRequest",
        "skipInvalidRows": True,
        "ignoreUnknownValues": True,
        'rows': [{
            'insertId': str(uuid.uuid4()),
            'json': row_to_stream,
        }]
    }
    bigquery_service.tabledata().insertAll(
        projectId=projectid,
        datasetId=dataset_id,
        tableId=tableid,
        body=insert_all_data).execute(num_retries=5)
I've seen questions on here that seem very similar to mine, but I haven't really found an answer. I therefore have two questions.
1. What could cause this issue?
Also, I'm new to GCP and I've seen other options (at least they seemed like options to me) for real-time streaming of data to BigQuery (e.g., using Pub/Sub, and a few projects around real-time Twitter data analysis).
2. How do you pick the best option for a particular task?

By default, the BigQuery web UI doesn't automatically refresh the state of a table. There is a Refresh button when you click into the details of a table, which should show you the updated size information for both managed storage and the streaming buffer (displayed below the main table details). Rows in the buffer are available to queries, but the preview button may not show results until some data is extracted from the streaming buffer to managed storage.
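If you want to check programmatically how much data is still sitting in the streaming buffer (rather than relying on the UI), the tables.get response includes a streamingBuffer section with estimated counts. A minimal sketch in the same googleapiclient style as the question above (the project/dataset/table IDs are placeholders):

from googleapiclient.discovery import build

bq = build('bigquery', 'v2', cache_discovery=False)
table = bq.tables().get(projectId='my-project',      # placeholder IDs
                        datasetId='my_dataset',
                        tableId='my_table').execute()

# numRows reflects managed storage; streamingBuffer reports what hasn't been extracted yet.
print('Rows in managed storage:', table.get('numRows'))
print('Estimated rows in streaming buffer:',
      table.get('streamingBuffer', {}).get('estimatedRows', '0'))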
I suspect the case where you observed data disappearing from managed storage and appearing back in the streaming buffer may have been a case where the table was deleted and recreated with the same name, or was truncated in some fashion and streaming restarted. Data doesn't transition from managed storage back to the buffer.
Deciding what technology to use for streaming depends on your needs. Pub/Sub is a great choice when you have multiple consumers of the information (multiple pub/sub subscribers consuming the same stream of messages independently), or you need to apply additional transformations of the data between the producer and consumer. To get the data from pub/sub to BigQuery, you'll still need a subscriber to write the messages into BigQuery, as the two have no direct integration.
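To illustrate that last point, the subscriber that bridges Pub/Sub to BigQuery can be fairly small. A rough sketch using the google-cloud-pubsub and google-cloud-bigquery client libraries (the project, subscription and table names are placeholders, and each message is assumed to carry one JSON-encoded row):

import json
from google.cloud import bigquery, pubsub_v1

bq = bigquery.Client()
subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path('my-project', 'gsc-rows-sub')  # placeholders
table_id = 'my-project.my_dataset.my_table'                                # placeholder

def callback(message):
    # Each Pub/Sub message is assumed to carry one JSON-encoded row.
    row = json.loads(message.data.decode('utf-8'))
    errors = bq.insert_rows_json(table_id, [row])
    if errors:
        message.nack()   # let Pub/Sub redeliver the message
    else:
        message.ack()

# Blocks until the streaming pull future is cancelled or fails.
subscriber.subscribe(subscription, callback=callback).result()

Google also provides a pre-built Dataflow template (Pub/Sub to BigQuery) that performs this bridging if you'd rather not run your own subscriber.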

Related

GCP Dataflow running streaming inserts into BigQuery: GC Thrashing

I am using Apache Beam 2.13.0 with the GCP Dataflow runner.
I have a problem with streaming ingest to BigQuery from a batch pipeline:
PCollection<BigQueryInsertError> stageOneErrors =
    destinationTableSelected
        .apply("Write BQ Attempt 1",
            BigQueryIO.<KV<TableDestination, TableRow>>write()
                .withMethod(STREAMING_INSERTS)
                .to(new KVTableDestination())
                .withFormatFunction(new KVTableRow())
                .withExtendedErrorInfo()
                .withFailedInsertRetryPolicy(InsertRetryPolicy.neverRetry())
                .withCreateDisposition(CreateDisposition.CREATE_NEVER)
                .withWriteDisposition(WriteDisposition.WRITE_APPEND))
        .getFailedInsertsWithErr();
The error:
Shutting down JVM after 8 consecutive periods of measured GC thrashing.
Memory is used/total/max = 15914/18766/18766 MB,
GC last/max = 99.17/99.17 %, #pushbacks=0, gc thrashing=true.
Heap dump not written.
The same code works correctly in streaming mode (if the explicit method setting is omitted).
The code works on reasonably small datasets (fewer than 2 million records) but fails on 2.5 million or more.
On the surface it appears to be a similar problem to the one described here: Shutting down JVM after 8 consecutive periods of measured GC thrashing
Creating a separate question to add additional details.
Is there anything I could do to fix this? It looks like the issue is within the BigQueryIO component itself - the GroupByKey it uses fails.
The problem with transforms that contain a GroupByKey is that they wait until all the data for the current window has been received before grouping.
In Streaming mode, this is normally fine as the incoming elements are windowed into separate windows, so the GroupByKey only operates on a small(ish) chunk of data.
In Batch mode, however, the current window is the Global Window, meaning that GroupByKey will wait for the entire input dataset to be read and received before the grouping starts to be performed. If the input dataset is large, then your worker will run out of memory, which explains what you are seeing here.
This brings up the question: why are you using BigQuery streaming inserts when processing batch data? Streaming inserts are relatively expensive (compared to bulk loads, which are free!) and have smaller quotas/limits than bulk import: even if you work around the issues you are seeing, there may be more issues yet to be discovered in BigQuery itself.
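To make the bulk alternative concrete, here is a rough sketch of a batch pipeline that uses file loads instead of streaming inserts. It uses the Beam Python SDK purely for illustration (the Java SDK offers the same switch via BigQueryIO.Write.Method.FILE_LOADS); the table, bucket and input rows are placeholders:

import apache_beam as beam

rows = [{'id': 1, 'value': 'a'}, {'id': 2, 'value': 'b'}]  # placeholder input

with beam.Pipeline() as p:
    (p
     | 'CreateRows' >> beam.Create(rows)
     | 'WriteToBQ' >> beam.io.WriteToBigQuery(
           'my-project:my_dataset.my_table',                   # placeholder table
           method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
           custom_gcs_temp_location='gs://my-bucket/tmp',      # file loads stage data via GCS
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))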
After extensive discussions with support and the developers, it has been communicated that using BigQuery streaming ingestion from a batch pipeline is discouraged and currently (as of 2.13.0) not supported.

How to handle limitation of Dynamodb BatchWriteItem

Just wondering what's the best way to handle the fact that DynamoDB can only write batches of at most 25 items.
I have 3 Lambdas (there are more, but I am simplifying so we don't get sidetracked):
GetNItemsFromExternalSourceLambda
SaveAllToDynamoDBLambda
AnalyzeDynamoDBLambda
Here is what happens:
GetNItemsFromExternalSourceLambda can potentially fetch 250 items in one REST call it makes to an external API.
It then invokes SaveAllToDynamoDBLambda and passes a) all these items and b) paging info, e.g. {pageNum: 1, pageSize: 250, numPages: 5}, in the payload.
SaveAllToDynamoDBLambda needs to save all items to a DynamoDB table and then, based on the paging info, will either a) re-invoke GetNItemsFromExternalSourceLambda (to fetch the next page of data) or b) invoke AnalyzeDynamoDBLambda.
These steps can obviously loop many times until we have got all the data from the external source, before finally proceeding to the last step.
The final AnalyzeDynamoDBLambda then processes all the data that was fetched and saved to the DB.
So my problem lies in the fact that SaveAllToDynamoDBLambda can only write 25 items per batch, which means I would have to tell GetNItemsFromExternalSourceLambda to only fetch 25 items at a time from the external source, which is not ideal (being able to fetch 250 at a time would be a lot better).
One could extend the timeout of SaveAllToDynamoDBLambda so that it could do multiple batch writes inside one invocation, but I don't like that approach.
I could also zip up the 250 items and save them to S3 in one upload, which could trigger a stream event, but I would have the same issue on the other side of that solution.
Just wondering what's a better approach, while still being able to invoke AnalyzeDynamoDBLambda only after all the info from all the REST calls has been saved to DynamoDB.
Basically the problem is that you need a way of subdividing the large batch (250 items in this case) into batches of 25 or fewer.
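If the writes stay inside SaveAllToDynamoDBLambda, that subdivision is something boto3 can do for you: batch_writer() flushes puts in chunks of at most 25 and resends unprocessed items automatically. A minimal sketch (the table name is a placeholder):

import boto3

table = boto3.resource('dynamodb').Table('my-items-table')  # placeholder table name

def save_all(items):
    # batch_writer() buffers the puts, sends BatchWriteItem calls of up to
    # 25 items, and retries any unprocessed items for you.
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)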
A very simple solution would be to use a Kinesis stream in the middle. Kinesis can take up to 500 records per PutRecords call. You can then use GetRecords with a Limit of 25 and put the records into DynamoDB with a single BatchWriteItem call.
Make sure you look at the size limits as well before deciding if this solution will work for you.
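If you go the Kinesis route, the consumer can simply be a Lambda whose Kinesis event source mapping uses a batch size of 25, so each invocation lines up with one DynamoDB batch. A rough sketch (the record format and table name are assumptions):

import base64
import json
import boto3

table = boto3.resource('dynamodb').Table('my-items-table')  # placeholder

def handler(event, context):
    # With the event source mapping's batch size set to 25, each invocation
    # receives at most 25 Kinesis records: one BatchWriteItem's worth.
    with table.batch_writer() as batch:
        for record in event['Records']:
            item = json.loads(base64.b64decode(record['kinesis']['data']))
            batch.put_item(Item=item)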

Dataflow doesn't update GroupByKey's "Output collections" field

Dataflow does not update GroupByKey's Output Collections field even though I can see the output data in Cloud Storage (the next transform in the pipeline writes the output to GCS). None of the transforms after GroupByKey show input/output collections either. I have also tried to implement data-driven triggering using AfterPane.elementCountAtLeast(3000), but even then the Output Collections field does not get updated after 3000 elements have been input.
The problem is that the Estimated Size keeps increasing, and I am afraid that it will eventually lead to more workers, costing me more money. I have been waiting for more than an hour but it still doesn't get updated. I have an unbounded PCollection, and I have set the windowing and triggering as shown below:
input
  .apply(
    Window
      .into[String](FixedWindows.of(Duration.standardMinutes(windowSize)))
      .withAllowedLateness(Duration.standardMinutes(windowSize))
      .discardingFiredPanes()
      .triggering(AfterPane.elementCountAtLeast(3000)))
What might be the issue?

What is the most efficient way to store time series in Riak with heavy reads

My current approach:
I have one domain class - Application
Each application in my system is stored in "applications" bucket under APPLICATION_KEY key
Apart from the application metadata stored in this bucket, each application has its own bucket called "time_metrics/APPLICATION_KEY" where I store the time series in the form:
KEY - timestamp / VALUE - some attributes
My concern is the efficiency of queries made over a specific time window for a given application. Currently, to get the time series from a specific time window and eventually apply some reductions, I have to run a MapReduce over the whole "time_metric/APPLICATION_KEY" bucket, which, from what I have found, is not the recommended use case for Riak MapReduce.
My question: what would be the best DB structure for this kind of system, and how can I query it efficiently?
Adding onto #macintux's answer.
Basho has had a few customers that have used Riak for time series metrics.
Boundary has a nice tech talk about how they use Riak with their network monitoring software. They roll up data into different chunks of time (1m, 5m, 15m) for analysis.
They also have a series of blog posts about lessons learned while implementing this system.
Kivra also has a good slide deck about how they use time series data with Riak.
You could roll up your data into some sort of arbitrary time length, then read the range you need by issuing regular K/V gets, and then reconstruct the larger picture / reduce in your application.
If you have spare computing power and you know in advance what keys you need, you certainly can use Riak's MapReduce, but often retrieving the keys and running your processing on the client will be as fast (and won't strain your cluster).
Some general ideas:
Roll up your data into larger blocks
If you're concerned about losing data if your client crashes while buffering it, you can always store the data as it arrives
Similar idea: store the data as it arrives, then retrieve it and roll it up at certain intervals
You can automatically expire data once you're confident it is being reliably stored in larger blocks, using either the Bitcask or Memory backends
Memory backend is quite useful (RAM permitting) for any data that only needs to be stored for a limited period of time
Related: don't be afraid to store multiple copies of your data to make reading/reporting easier later
Multiple chunks of time (5- and 15-minute blocks, for example)
Multiple report formats
Having said all that, if you're doing straight key/value requests (it's ideal to always be able to compute the keys you need, rather than doing indexing or searching), Riak can support very heavy traffic loads, so I wouldn't recommend spending too much time creating alternative storage mechanisms unless you know you're going to face latency problems.
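As a concrete illustration of the roll-up-then-plain-K/V-gets approach, here is a rough sketch using the Python Riak client. The 5-minute block size, the bucket name and the assumption that each block stores a list of samples are all illustrative choices, not recommendations:

from datetime import timedelta
import riak

client = riak.RiakClient()
metrics = client.bucket('time_metrics/APPLICATION_KEY')  # placeholder bucket

BLOCK = timedelta(minutes=5)

def block_key(ts):
    # Keys are the start of each 5-minute block, so every key in a time
    # window can be computed up front instead of discovered via MapReduce.
    floored = ts - timedelta(minutes=ts.minute % 5,
                             seconds=ts.second,
                             microseconds=ts.microsecond)
    return floored.strftime('%Y-%m-%dT%H:%M')

def read_window(start, end):
    samples = []
    ts = start
    while ts < end:
        obj = metrics.get(block_key(ts))
        if obj.exists:
            samples.extend(obj.data)   # each block holds a list of samples
        ts += BLOCK
    return samples                      # aggregate/reduce client-side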

Complex Event Processor 2.1.0 input streams

I'm new to CEP 2.1, and my question is about the time frame for which the CEP holds on to an input stream.
Let's say that you regularly send data to some input stream, say "HELLOSTREAM".
For how long does the CEP save the inputs to the stream? What is the maximum time, etc.?
Let's say I send data every day for 365 days: will I get back all the data on day 366, or will it truncate the data at some point (e.g., hold only the last 100 days), no matter what time window I set in the query?
Is there a limit?
CEP is a real-time processing server. It is used to find pre-defined patterns in real time and for real-time monitoring. It keeps the data in memory and processes the events, but you can persist the data to Cassandra for distributed processing.
Here, data is kept in memory based on the window size that you define; it depends on the window type you are using and the time or length given to that window. If you are not using any window, it will not keep any data in memory.
If you want to store data for 365 days or 100 days, then it is not a real-time use case. For that you have to use an offline processing server like BAM.
To add to #Mohanadarshan's answer: if what you want is to extract and store some values from a fast event stream over a long period, a better CEP-based approach will be to use persistent event tables (which will be included in the upcoming CEP version 3.0.0, to be released soon). This way you'll be able to do real-time processing against some extracted and persisted data. But as #Mohanadarshan has mentioned, if your requirement is batch processing (and if you do not need to detect anything in real time), WSO2 BAM will be a better option.
Using sliding windows over a very long period to store large amounts of data is not a good idea, as the windows are kept in memory and you'll also lose data if the server goes down.