GCP Deployment Manager error - google-cloud-platform

When I try to use the project creation template from GitHub, even after changing the appropriate values in config.yaml, I get the following error.
location: /deployments/projectcreation000/manifests/manifest-1534790908361
message: 'Manifest expansion encountered the following errors: Error compiling Python code: No module named apis Resource: project.py Resource: config'
You can find the repo here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/project_creation
Please help, as I need this for a production workflow. I have tried "sudo pip install apis" in Cloud Shell, but the error persists even after the apis module installs successfully.

You either need to fix the import or move the file so that apis.py will be found.

The apis module in this context refers to a local helper file in the samples repo (apis.py), not a pip package. Ensure you keep all the files in the same relative paths to each other when deploying these samples.
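As a sanity check, a minimal sketch (the deployment name is illustrative, taken from the error above) is to deploy from the sample directory itself, so that config.yaml, project.py and apis.py sit side by side:
cd deploymentmanager-samples/examples/v2/project_creation
ls   # config.yaml, project.py and apis.py should all be listed here
gcloud deployment-manager deployments create projectcreation000 --config config.yaml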

Related

Can you deploy a Gen2 cloud function from below the top level of a Cloud Source repository?

It appears that you cannot deploy a Gen2 cloud function using gcloud from a Cloud Source repo unless it is at the top level.
Here's a sample redacted deploy command for a gen 1 Python function that works:
gcloud beta functions deploy funcname --source https://source.developers.google.com/projects/projectname/repos/reponame/moveable-aliases/main/paths/pathname --runtime python310 --trigger-http --project=projectname
If you add the --gen2 flag, it fails because it can't find main.py. The error is:
OperationError: code=3, message=Build failed with status: FAILURE and message: missing main.py and GOOGLE_FUNCTION_SOURCE not specified. Either create the function in main.py or specify GOOGLE_FUNCTION_SOURCE to point to the file that contains the function.
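For reference, the failing invocation (a hypothetical sketch using the same redacted names as above) is just the working command plus the flag:
gcloud beta functions deploy funcname --gen2 --source https://source.developers.google.com/projects/projectname/repos/reponame/moveable-aliases/main/paths/pathname --runtime python310 --trigger-http --project=projectname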
If you add main.py to the root of the repo and run the same command, it finds main.py, which indicates to me that it isn't honoring the paths.
There is an additional problem, which doesn't matter unless the first one is fixed: if pathname is below the top level (folder/subfolder), gcloud treats that as a syntax error when the --gen2 flag is set, but not without it.
Is there any way around this? It is very inconvenient.
Answering as community wiki, as per the comments above:
There is a bug raised for this in the issue tracker, which is still open; further progress can be tracked there.
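While the bug is open, the error text above hints at one workaround that may be worth trying: pointing the buildpack at the nested file through a build-time environment variable. An untested sketch (the GOOGLE_FUNCTION_SOURCE value is hypothetical):
gcloud beta functions deploy funcname --gen2 --source https://source.developers.google.com/projects/projectname/repos/reponame/moveable-aliases/main/paths/pathname --set-build-env-vars GOOGLE_FUNCTION_SOURCE=folder/subfolder/main.py --runtime python310 --trigger-http --project=projectname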

Logstash Google Pubsub Input Plugin fails to load file and pull messages

I'm getting this error when trying to run a Logstash pipeline with a configuration that uses google_pubsub, on a Docker container running in my production env:
2021-09-16 19:13:25 FATAL runner:135 - The given configuration is invalid. Reason: Unable to configure plugins: (PluginLoadingError) Couldn't find any input plugin named 'google_pubsub'. Are you sure this is correct? Trying to load the google_pubsub input plugin resulted in this error: Problems loading the requested plugin named google_pubsub of type input. Error: RuntimeError
you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command
no such file to load -- com/google/cloud/google-cloud-pubsub/1.37.1/google-cloud-pubsub-1.37.1 (LoadError)
2021-09-16 19:13:25 ERROR Logstash:96 - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This seems to happen randomly when re-installing the plugin. I thought it was a proxy issue, but I have the Google domain enabled in the whitelist; it might be the wrong one, or I might be missing something. Still, that doesn't explain the random failures.
Also, when I run the pipeline on my machine I get GCP events, but when I run it on a VM no Pub/Sub messages are pulled. Could a firewall rule be blocking them?
The error message suggests there is a problem loading the 'google_pubsub' input plugin. This error generally occurs when the Pub/Sub input plugin is not installed properly. Make sure that you install the Logstash plugin for Pub/Sub correctly.
For example, installing Logstash Plugin for Pub/Sub in a VM :
sudo -u root sudo -u logstash bin/logstash-plugin install logstash-input-google_pubsub
For a detailed demo refer to this community tutorial.
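To confirm the plugin actually registered after installation, a quick check from the Logstash home directory is:
bin/logstash-plugin list --verbose | grep google_pubsub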

Calling vision (from google.cloud) Results in 'crash' in Log

I have been working on a GCP project involving OCR. I attempted to follow the tutorial here, but the first function crashes when I try to upload a file to the trigger bucket. Moreover, Cloud Shell will not allow me to set the env variable GCP_PROJECT; it returns
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Bad Request], message=[The request has errors
Problems:
environment_variables:
environment variable name GCP_PROJECT is reserved by the system: it cannot be set by users
]
Any suggestions?
I am not 100% sure, but after some work my guess is that the service account's .json key file used by the function had to be exported as GOOGLE_APPLICATION_CREDENTIALS in the gcloud SDK when deploying the function.
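A minimal sketch of what that looks like before deploying (the key path is hypothetical):
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"
gcloud functions deploy ...   # then deploy as usual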
I got the same error when following this tutorial.
Changing
--set-env-vars "^:^GCP_PROJECT=my_proj:TRANSLATE_TOPIC
to
--project my_proj --set-env-vars "^:^TRANSLATE_TOPIC
fixed it.
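For context, ^:^ is gcloud's alternate-delimiter syntax: it tells gcloud to split the environment variables on ':' instead of the default ','. A hypothetical reconstruction of the corrected deploy command (function name, bucket and topic values are placeholders):
gcloud functions deploy ocr-extract --project my_proj --runtime python39 --trigger-bucket my-trigger-bucket --set-env-vars "^:^TRANSLATE_TOPIC=my-translate-topic:RESULT_TOPIC=my-result-topic"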
I have the same error.
I just followed their own tutorial, and there is nothing special or customized.
Actually, I got errors at every step of the tutorial and searched the internet to fix them.
However, I am stuck on this one.

Installing Scrapy in Apache Airflow causes INVALID_ARGUMENT

I'm trying to install Scrapy from PyPI using the command below.
gcloud composer environments update $(AIRFLOW_ENVIRONMENT_NAME) \
--update-pypi-packages-from-file requirements.txt \
--location $(AIRFLOW_LOCATION)
requirements.txt looks like this:
google-api-python-client==1.7.*
google-cloud-datastore==1.7.*
Scrapy==2.0.0
After running the gcloud command it fails with INVALID_ARGUMENT, although the same requirements install successfully in my local environment.
gcloud composer environments update xxxx \
--update-pypi-packages-from-file requirements.txt \
--location asia-northeast1
ERROR: (gcloud.composer.environments.update) INVALID_ARGUMENT: Found 1 problem:
1) Error validating key Scrapy. PyPi dependency name is not formatted properly. It must be lowercase and follow the format of 'identifier' specified in PEP-508.
Is there any way to install it?
As the previous answer stated, the error that you are receiving is quite clear: it's caused by the wrong formatting of the dependency. It should be scrapy==2.0.0 instead of Scrapy==2.0.0 inside requirements.txt.
I would like to add that to avoid the installation error when you fix the formatting, you should add one more dependency to your list and that is attrs==19.2.0. I was able to install your requirements to my environment by specifying the following list:
google-api-python-client==1.7.*
google-cloud-datastore==1.7.*
scrapy==2.0.0
attrs==19.2.0
Even after you adjust the package name in the requirements.txt file according to the PEP 508 prerequisites, formatting the package name in lowercase (scrapy==2.0.0), the issue will most probably remain the same and the update process will get stuck with the error:
Failed to install PyPI packages
Generally, this kind of error appears when the PyPI package has some external dependencies or is sensitive to system-level libraries that GCP Composer doesn't support.
In this case the vendor recommends two approaches: either use KubernetesPodOperator to build your own custom image and run it in a dedicated Kubernetes Pod (see the sketch below), or deploy the PyPI package as a local Python library, uploading the shared object libraries for the PyPI dependency to the Airflow /plugins directory; find more info here.
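A minimal, hypothetical sketch of the KubernetesPodOperator route (the image, namespace and spider name are placeholders, and the import path varies by Airflow version):
from airflow import DAG
from airflow.utils.dates import days_ago
# Import path for Composer / Airflow 1.10.x; newer Airflow versions ship this
# operator in the cncf.kubernetes provider package instead.
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

with DAG(dag_id="scrapy_pod_example", schedule_interval=None, start_date=days_ago(1)) as dag:
    run_spider = KubernetesPodOperator(
        task_id="run-spider",
        name="run-spider",
        namespace="default",
        # Hypothetical custom image with Scrapy and its system-level deps baked in
        image="gcr.io/my-project/scrapy-runner:latest",
        cmds=["scrapy"],
        arguments=["crawl", "my_spider"],
    )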

appcfg.py request_logs certificate verify failed (_ssl.c:661)

We've been using appcfg.py request_logs to download GAE logs; every once in a while it throws the error:
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
But after a few tries it works out; sometimes it also works after updating gcloud via gcloud components update. We thought it might be some kind of network throttling issue and didn't give it enough thought. Lately, though, we're trying to figure out what is causing this.
The full command we use is:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append --no_cookies
It seems the error is related to the httplib2 library, but since it is bundled with the appcfg.py calls, we're not sure we should tamper with anything inside it.
Versions:
Python 2.7.13
Google Cloud SDK 196.0.0
app-engine-python 1.9.67
This has become more persistent now; I haven't been able to download logs for a few days, no matter how many times I try.
Looking at the download logs command, I tried the same command again but without the --no_cookies flag to see what would happen.
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append
I got the error:
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u'e~testapp').
--- end server output ---
That led me to the answer provided at https://stackoverflow.com/a/34694577/1394228 by @ninjahoahong. This worked for me and the logs downloaded on the first try, in case someone faces the same issue.
There's also this Google Group post which I didn't try but seems like it does the same thing.
Not sure if removing the file ~/.appcfg_oauth2_tokens would have other effects; yet to find out.
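For reference, the fix from the linked answer boils down to deleting the cached OAuth token and re-authenticating; a minimal sketch using the same redacted command as above:
rm ~/.appcfg_oauth2_tokens
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append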
Update:
I also found out that the httplib2 bundled at /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2 was version 0.7.5. I upgraded it to 0.11.3 using a target-location (directory) upgrade command:
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
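To confirm the bundled copy was actually upgraded, a quick check (hypothetical, forcing Python to look at that directory first) is:
python2 -c "import sys; sys.path.insert(0, '/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2'); import httplib2; print(httplib2.__version__)"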