NotImplementedError in Google Cloud Dataflow and PubSub - google-cloud-platform

I encountered an error while testing Dataflow locally on my computer. I intended to use Dataflow's streaming service and got a "NotImplementedError". The detail of the error is like this:
I thought it might be caused by some package versions. The following is the list of dependencies in my setup.py file:
'google-api-core==1.4.1',
'google-auth==1.5.1',
'google-cloud-core==0.28.1',
'google-cloud-storage==1.10.0',
'google-resumable-media==0.3.1',
'googleapis-common-protos==1.5.3',
'librosa==0.6.2',
'wave==0.0.2',
'scipy==1.1.0',
'google-api-python-client==1.7.4',
'oauth2client==4.1.2',
'resampy==0.2.1',
'keen==0.5.1',
'google-cloud-bigquery==1.5.0',
'apache-beam[gcp]==2.5.0',
'google-cloud-dataflow==2.5.0',
'six==1.10.0',
'google-cloud-logging==1.7.0'
Could anyone help me solve this problem?

Related

Amplify pull --sandboxId <UUID> results in "Failed to pull sandbox app"

I'm following this tutorial for setting up AWS Amplify and can't seem to generate the data models from Amplify Studio with the "Amplify pull --sandboxId <UUID>" command with my ID. This results in the "Failed to pull sandbox app" error message. I have no clue what the reason for this is, if anyone has any insight I would greatly appreciate it.
I ran "amplify env pull" with no issues as well as just "amplify pull". No idea what could be causing this new issue, I can't find the solution anywhere.
Figured out the issue: it happens if you run "amplify init" before running this command.

Logstash Google Pubsub Input Plugin fails to load file and pull messages

I'm getting this error when trying to run a Logstash pipeline with a configuration that uses google_pubsub, on a Docker container running in my production environment:
2021-09-16 19:13:25 FATAL runner:135 - The given configuration is invalid. Reason: Unable to configure plugins: (PluginLoadingError) Couldn't find any input plugin named 'google_pubsub'. Are you sure this is correct? Trying to load the google_pubsub input plugin resulted in this error: Problems loading the requested plugin named google_pubsub of type input. Error: RuntimeError
you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command
no such file to load -- com/google/cloud/google-cloud-pubsub/1.37.1/google-cloud-pubsub-1.37.1 (LoadError)
2021-09-16 19:13:25 ERROR Logstash:96 - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This seems to happen randomly when re-installing the plugin. I thought it was a proxy issue, but I have the Google domain enabled in the whitelist; it might be the wrong one, or I might be missing something. Still, that doesn't explain the random failures.
Also, when I run the pipeline on my machine I get GCP events, but when I run it on a VM, no Pub/Sub messages are pulled. Could a firewall rule be blocking them?
The error message suggests there is a problem loading the 'google_pubsub' input plugin. This error generally occurs when the Pub/Sub input plugin is not installed properly. Ensure that you are installing the Logstash plugin for Pub/Sub correctly.
For example, installing the Logstash plugin for Pub/Sub on a VM:
sudo -u root sudo -u logstash bin/logstash-plugin install logstash-input-google_pubsub
For a detailed demo refer to this community tutorial.

PGPy won't go on GCP Dataflow pipeline

I'm trying to use the PGPy library in a custom GCP Dataflow pipeline implemented with Apache Beam.
Everything works with DirectRunner, but when I deploy the job and execute it on DataflowRunner I get an error on PGPy usage:
ModuleNotFoundError: No module named 'pgpy'
I think I'm missing something with DataflowRunner.
Thank you
To manage pipeline dependencies, please refer to:
https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
My personal preference is to go straight to using setup.py, as it lets you deal with multiple file dependencies, which tends to be needed once the pipeline gets more complex.
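For example, a minimal setup.py sketch for bundling pgpy with the job (the package name and version here are placeholders):

```python
# setup.py -- minimal sketch; name and version are placeholders
import setuptools

setuptools.setup(
    name='my-dataflow-pipeline',
    version='0.0.1',
    install_requires=['pgpy'],  # the dependency missing on the workers
    packages=setuptools.find_packages(),
)
```

Then launch the pipeline with --setup_file=./setup.py so that DataflowRunner installs the dependency on each worker.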

gcloud cloud functions deployment failure code 13 message=Failure in the execution environment

Deploying a Cloud Function with gcloud failed with the message below:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=13,
message=Failure in the execution environment
I couldn't find much information about the error in the Cloud Function logs.
Running the deploy with --verbose debug traces the functions called in the Cloud SDK directory and ends by displaying the error below:
FunctionsError: OperationError: code=13, message=Failure in the
execution environment ERROR: (gcloud.beta.functions.deploy)
OperationError: code=13, message=Failure in the execution environment
Per this Google Public Issue Tracker, the error is due to a very large package.json file hitting an internal restriction. Possible workarounds:
1- Install your dependencies locally (through 'npm install') and deploy with the --include-ignored-files flag.
2- Reduce your package.json to fewer than 4000 characters.
This is an ongoing issue and you can follow the discussion on this thread for related updates.
The status of Firebase can be found at:
https://status.firebase.google.com/
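As a quick sanity check for the second workaround, the character count of package.json can be compared against the reported limit. A small sketch (the ~4000-character figure is the one reported in the issue tracker, not a documented constant):

```python
from pathlib import Path

# Limit reported in the issue tracker thread; not an official documented value
PACKAGE_JSON_CHAR_LIMIT = 4000

def package_json_chars(path='package.json'):
    """Return the character count of a package.json file."""
    return len(Path(path).read_text(encoding='utf-8'))

def exceeds_limit(path='package.json'):
    """True if the file is at or above the reported ~4000-character limit."""
    return package_json_chars(path) >= PACKAGE_JSON_CHAR_LIMIT
```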
Just sharing our experience here, with the hope it helps someone in the future.
In our case we got a similar error:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=13, message=Error setting up the execution environment for your function. Please try deploying again after a few minutes.
This was caused by an import of package.json in the code to read out the version, i.e.:
import { version } from '../package.json';
Transpilation and local invocation of the generated JS code worked as expected with the above line in our code base. After we removed the import, we were able to deploy the function again.
Some of the GCP errors are broad.
The solution for me was that my go.mod file had go 1.14, while GCP only supported go 1.11 and go 1.13.
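That fix amounts to lowering the go directive in go.mod (the module path here is hypothetical):

```
module example.com/my-function

go 1.13  // was: go 1.14, which Cloud Functions did not support at the time
```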
In my case it was a Python environment, and the culprit was the dependency yarl==1.5.1.
As there were no logs, I couldn't tell exactly why yarl was causing the breakage, but downgrading to yarl==1.3.0 fixed the issue for me.

AWS code deploy error before install

I am getting the following error while deploying using the CodeDeploy agent:
/opt/codedeploy-agent/deployment-root/6d3f114b-72a9-4d1a-9d65-1227b6839916/d-FWIG1AI1M/deployment-archive/appspec.yml
The problem is that the appspec.yml gets created in a folder inside the current deployment ID.
Please advise as to what is wrong.
Thanks
Do you have the error log? The place where the appspec file is used looks fine to me, according to http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html