Cloud Functions: How to upload a folder that contains images?

I want to deploy a bot using Cloud Functions on Google Cloud Platform. My code is written in Python, but I want to include a folder with a lot of images that my bot will tweet on Twitter. I want to do this so that my bot runs 24/7, and use Cloud Scheduler to schedule my tweets.
Is it possible to upload the image folder there somehow?
I tried putting all of my files in a .zip file and uploading that, but the build fails. The code alone works fine. I have used the Google developer platform for some time, but I have never tried this.
Here are the errors from Google Cloud Platform:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 3,
      "message": "Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
    },
    "authenticationInfo": {
      "principalEmail": "robertmaximus383@gmail.com"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.CreateFunction",
    "resourceName": "projects/twitter-post-bot/locations/us-central1/functions/twitter-post"
  },
  "insertId": "-63htw4d2bfe",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "project_id": "twitter-post-bot",
      "region": "us-central1",
      "function_name": "twitter-post"
    }
  },
  "timestamp": "2022-01-31T08:06:24.986674Z",
  "severity": "ERROR",
  "logName": "projects/twitter-post-bot/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/dHdpdHRlci1wb3N0LWJvdC91cy1jZW50cmFsMS90d2l0dGVyLXBvc3QvZEt5N24yX1NjTE0",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2022-01-31T08:06:25.259596536Z"
}
The second log entry:
{
  "textPayload": "Function cannot be initialized. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging\n",
  "insertId": "000000-5838e3b2-5d8d-4187-86ca-97327d016436",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "region": "us-central1",
      "function_name": "twitter-post",
      "project_id": "twitter-post-bot"
    }
  },
  "timestamp": "2022-01-31T08:06:24.930142467Z",
  "severity": "ERROR",
  "labels": {
    "execution_id": ""
  },
  "logName": "projects/twitter-post-bot/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
  "receiveTimestamp": "2022-01-31T08:06:25.532983237Z"
}
The third log entry:
{
  "textPayload": "Traceback (most recent call last):\n File \"/layers/google.python.pip/pip/bin/functions-framework\", line 8, in <module>\n sys.exit(_cli())\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py\", line 829, in __call__\n return self.main(*args, **kwargs)\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py\", line 782, in main\n rv = self.invoke(ctx)\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py\", line 1066, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py\", line 610, in invoke\n return callback(*args, **kwargs)\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/_cli.py\", line 37, in _cli\n app = create_app(target, source, signature_type)\n File \"/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py\", line 288, in create_app\n spec.loader.exec_module(source_module)\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n File \"/workspace/main.py\", line 2, in <module>\n from twitter import *\nModuleNotFoundError: No module named 'twitter'",
  "insertId": "000000-438da477-ef7c-4c81-98a2-21bdccc7ca12",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "project_id": "twitter-post-bot",
      "function_name": "twitter-post",
      "region": "us-central1"
    }
  },
  "timestamp": "2022-01-31T08:06:24.238Z",
  "severity": "ERROR",
  "labels": {
    "execution_id": ""
  },
  "logName": "projects/twitter-post-bot/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
  "receiveTimestamp": "2022-01-31T08:06:25.532983237Z"
}

When you deploy your code to Cloud Functions, the code is taken, compiled, and packaged into a container (with Buildpack), then deployed on Cloud Functions.
In your case, the code is not compiled, only packaged.
Your source code is kept in the container, but in a specific directory. I wrote an article with a sample in Golang, where you can find where the static files are stored: /workspace/src/
That will fix your issue, but I think you also have a design issue. Static and unstructured data should not live in your Cloud Functions; Cloud Storage is the perfect place for that, and it's a better, more scalable solution.
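As a minimal sketch of the Cloud Storage approach (not the original bot code): the bucket name and environment variable names below are hypothetical, and tweepy is assumed for the Twitter calls. Note that every import, including tweepy, must be listed in requirements.txt, which is what the ModuleNotFoundError in the logs above points at.

import io
import os
import random

import tweepy
from google.cloud import storage

def tweet_random_image(request):
    """HTTP Cloud Function: tweet a random image stored in a bucket."""
    client = storage.Client()
    blobs = list(client.list_blobs("twitter-bot-images"))  # hypothetical bucket
    blob = random.choice(blobs)
    image = io.BytesIO(blob.download_as_bytes())

    # Hypothetical environment variable names for the Twitter credentials.
    auth = tweepy.OAuth1UserHandler(
        os.environ["API_KEY"], os.environ["API_SECRET"],
        os.environ["ACCESS_TOKEN"], os.environ["ACCESS_SECRET"],
    )
    api = tweepy.API(auth)
    media = api.media_upload(filename=blob.name, file=image)
    api.update_status(status="New image!", media_ids=[media.media_id])
    return "OK"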

Related

Firebase functions deploy: The service has encountered an error during container import

We're trying to deploy Firebase functions and we continuously get this error: The service has encountered an error during container import. Please try again later.
Here's a video overview: https://www.loom.com/share/9afb2facb5e3461ebef74e7e802a2761
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 14,
      "message": "The service has encountered an error during container import. Please try again later"
    },
    "authenticationInfo": {},
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/voypost-matching-prod/locations/europe-west3/functions/createFromJobPubSub"
  },
  "insertId": "-wq3kwnb2c",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "project_id": "voypost-matching-prod",
      "function_name": "createFromJobPubSub",
      "region": "europe-west3"
    }
  },
  "timestamp": "2023-01-07T16:39:01.688761Z",
  "severity": "ERROR",
  "logName": "projects/voypost-matching-prod/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/dm95cG9zdC1tYXRjaGluZy1wcm9kL2V1cm9wZS13ZXN0My9jcmVhdGVGcm9tSm9iUHViU3ViL0pOaGkxYW9zMWxj",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2023-01-07T16:39:02.021123649Z"
}
As per this doc, it may be due to non-UTF-8 characters, but there are none (checked with grep -axv '.*' ./lib/**/*.js, per https://stackoverflow.com/a/41741313).
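For reference, the same UTF-8 check can be scripted in Python (a sketch over the same ./lib/**/*.js files):

from pathlib import Path

# Report any .js file under ./lib that is not valid UTF-8.
for path in Path("lib").rglob("*.js"):
    try:
        path.read_text(encoding="utf-8")
    except UnicodeDecodeError as err:
        print(f"{path}: {err}")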
It failed 3 times in a row and continues failing, and every time it fails on the same functions.
There is always only one deployment ongoing - we don't run multiple firebase functions deploy commands at the same time.
The original discussion is on their GitHub, but we were referred here.

All my Cloud Functions say "Function is active, but last deploy failed"

I'm facing this issue with my Google Cloud Functions: from the very first function I deployed to the ones I'm trying to update today, they all say the same thing in their status.
"Function is active, but the last deploy failed"
What could this be?
Here's the log visible for the function update in the Logs Explorer.
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {},
    "authenticationInfo": {
      "principalEmail": "start@pyme.team"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/pyme-webapp/locations/us-central1/functions/applicationSubmitted"
  },
  "insertId": "d1k3hyd3jfe",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "region": "us-central1",
      "function_name": "applicationSubmitted",
      "project_id": "pyme-webapp"
    }
  },
  "timestamp": "2022-02-02T20:23:05.726462Z",
  "severity": "NOTICE",
  "logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvYXBwbGljYXRpb25TdWJtaXR0ZWQvaWdGS2o4bXpjbDA",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2022-02-02T20:23:06.263576440Z"
}
Similarly, all I see in the function's own log is:
[Image: the function log itself]
The exact error that I am seeing and am concerned about is this: [Image: function error with an orange hazard icon on update]
Attaching another, even more detailed update log as well.
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "start@pyme.team"
    },
    "requestMetadata": {
      "callerIp": "80.83.136.68",
      "callerSuppliedUserAgent": "FirebaseCLI/10.0.1,gzip(gfe),gzip(gfe)",
      "requestAttributes": {
        "time": "2022-02-02T20:21:00.491300Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "authorizationInfo": [
      {
        "resource": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
        "permission": "cloudfunctions.functions.update",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
    "request": {
      "updateMask": "name,sourceUploadUrl,entryPoint,runtime,labels,httpsTrigger,availableMemoryMb,environmentVariables,sourceToken",
      "function": {
        "runtime": "nodejs16",
        "availableMemoryMb": 512,
        "entryPoint": "workContracts",
        "name": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
        "sourceUploadUrl": "https://storage.googleapis.com/gcf-upload-us-central1-d393f99f-6b88-4b68-8202-d75b734aa7a1/64b2646f-35b6-4919-8e89-c662fc29f01f.zip?GoogleAccessId=service-748321615979@gcf-admin-robot.iam.gserviceaccount.com&Expires=1643835053&Signature=McjqD9mmo%2F1wLbvO6SklkHi%2B34nQEwcpz7cLOLNAF4RwG8bpHh8RThxFJwnGZo1F92iQnquRQyGYbJFuihP%2FUGrgW7cG6GmhVq2gkugDywngZXT9d7UTBG0wgKF29XcbZkwV3IX7oKKiUwf6Q6mzCOOoCrjc5LBxqJo9WvWDZynv8R75nVZTZ5IhekMdqAw%2BRvIBvooXa%2BuA3Sezhh%2Bz2BR1XtIyS21CY%2FkoPDaKPwvftr3%2Fjcyuzb2V39%2BSajQg3t0U7Gt6oSch9qUhl6gnknr6wphFGmC7t7h9l0LUbjHUDuaMNNoB1LXxI30CRNkRupf9XBKTKpKMf%2F0nAAMltA%3D%3D",
        "httpsTrigger": {},
        "labels": {
          "deployment-tool": "cli-firebase"
        }
      },
      "@type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest"
    },
    "resourceLocation": {
      "currentLocations": [
        "us-central1"
      ]
    }
  },
  "insertId": "1g6c2gwd46lm",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "region": "us-central1",
      "function_name": "workContracts",
      "project_id": "pyme-webapp"
    }
  },
  "timestamp": "2022-02-02T20:21:00.307699Z",
  "severity": "NOTICE",
  "logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvd29ya0NvbnRyYWN0cy96bHlTLUtwbzI2VQ",
    "producer": "cloudfunctions.googleapis.com",
    "first": true
  },
  "receiveTimestamp": "2022-02-02T20:21:00.985842395Z"
}
If this isn't the right log to look at, just let me know what to find; I'd appreciate the help.
So it turns out that this morning I logged in, checked, and everything is fine. I still have no logs stating the exact cause of the error, but the same functions, the same code, and the exact same deployment methods have worked, and the functions seem to be working fine.
This is concerning, as separate Cloud Functions should never change together on deployments.
A Cloud Function that takes a POST method and sends data to SendGrid, for example, has nothing to do with a Cloud Function triggered by updates to the Firestore database, and if both have been deployed since the 5th of January and never touched again (in terms of edits), they should not be showing the same deployment error message across the board.
My temporary solution is to delete the function and then deploy. It seems it cannot be deployed while in use. I'm sorry I couldn't provide a better solution; I will edit this as soon as possible.
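A minimal sketch of that delete-then-redeploy workaround, assuming the google-cloud-functions Python client library (the delete can equally be done from the console or with gcloud; the function name is taken from the logs above):

from google.cloud import functions_v1

client = functions_v1.CloudFunctionsServiceClient()
name = "projects/pyme-webapp/locations/us-central1/functions/workContracts"

# Delete the stuck function and wait for the long-running operation,
# then redeploy from source (e.g. firebase deploy --only functions:workContracts).
client.delete_function(name=name).result()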

GCP logging console doesn't display some GKE log messages

I have a K8S cluster in GCP (version 1.20.8-gke.900 from the regular update channel).
All cluster pods write logs to STDOUT or STDERR from their Docker containers.
I found that some log messages never appear in the GCP Logging console.
For example:
{
  "severity": "INFO",
  "timestamp": "2021-08-18T09:38:34.016614425Z",
  "caller": "dbscan/dbscan.go:82",
  "message": "Query",
  "method": "GET",
  "uri": "/api/v1/test",
  "path": "/api/v1/test",
  "correlation-id": "2021435824-1629279514010580579-448",
  "rowCount": 4,
  "pid": 679135,
  "sql": "SELECT id FROM test WHERE name = ANY($1::varchar[])",
  "args": [
    [
      "aaa",
      "bbb",
      "ccc",
      "ddd"
    ]
  ],
  "time": 3282419,
  "logging.googleapis.com/labels": {},
  "logging.googleapis.com/sourceLocation": {
    "file": "/go/pkg/mod/github.com/georgysavva/scany@v0.2.4/dbscan/dbscan.go",
    "line": "82",
    "function": "github.com/georgysavva/scany/dbscan.processRows"
  }
}
However, I can see the above message from the cluster via the kubectl utility.
Also, I can see the log message below in GCP Logging:
{
  "severity": "INFO",
  "timestamp": "2021-08-18T09:41:48.695923055Z",
  "caller": "puddle@v1.1.1/pool.go:470",
  "message": "Dialing PostgreSQL server",
  "host": "0.0.0.0",
  "logging.googleapis.com/labels": {},
  "logging.googleapis.com/sourceLocation": {
    "file": "/go/pkg/mod/github.com/jackc/puddle@v1.1.1/pool.go",
    "line": "470",
    "function": "github.com/jackc/puddle.(*Pool).constructResourceValue"
  }
}
I can't understand what prevents this message from being displayed in the GCP Logging console...
I tried reproducing this, and the JSON snippet logged regularly once I removed the "time" field (not "timestamp"). I suspect the mismatched value format in the "time" field ("time": 3282419) is preventing these logs from being ingested into Cloud Logging. Refer to Time-related fields for more information.
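A minimal sketch of the workaround (in Python for illustration; the original service is Go): rename the conflicting "time" field before the entry is written, since Cloud Logging treats "time" as a special timestamp-like field.

import json
import sys

def log_structured(payload: dict) -> None:
    # An integer "time" value such as 3282419 appears to make Cloud Logging
    # drop the entry, so move it to a differently named field (hypothetical).
    if "time" in payload:
        payload["duration_ns"] = payload.pop("time")
    json.dump(payload, sys.stdout)
    sys.stdout.write("\n")

log_structured({"severity": "INFO", "message": "Query", "time": 3282419})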

In Azure DevOps, is it possible to enumerate children pipeline build artifacts recursively with API?

In Azure DevOps, I want to get a recursive list of artifact elements from a pipeline build. It would be nice if I didn't have to download the whole artifact root object. Does anyone know how to do this with the current API?
The portal already supports this feature in the pipeline artifacts view: you can open and browse child artifacts, with the ability to download. The API, however, does not seem to support this use case.
Current API
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/Artifacts/List?view=azure-devops-rest-6.0#buildartifact
I was able to find a request for the feature, but I'm not sure if it will be implemented soon.
https://developercommunity.visualstudio.com/idea/1300697/api-list-artifacts-enumerate-recursively-same-as-w.html
Has anyone else been able to work around this?
This is not documented, but you can use the same API call that the Azure DevOps portal makes. It would be
POST https://dev.azure.com/{org}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview
Minimal JSON payload:
{
  "contributionIds": [
    "ms.vss-build-web.run-artifacts-data-provider"
  ],
  "dataProviderContext": {
    "properties": {
      "artifactId": 111, //obtain this from https://dev.azure.com/{org}/{proj}/_apis/build/builds/####/artifacts
      "buildId": 1234,
      "sourcePage": {
        "routeValues": {
          "project": "[ADOProjectNameHere]"
        }
      }
    }
  }
}
In my case it was:
https://dev.azure.com/thecodemanual/_apis/Contribution/HierarchyQuery/project/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2?api-version=5.0-preview.1
With a payload similar to this one:
{
  "contributionIds": [
    "ms.vss-build-web.run-artifacts-data-provider"
  ],
  "dataProviderContext": {
    "properties": {
      "artifactId": 1158,
      "buildId": 7875,
      "sourcePage": {
        "url": "https://dev.azure.com/thecodemanual/DevOps%20Manual/_build/results?buildId=7875&view=artifacts&pathAsName=false&type=publishedArtifacts",
        "routeId": "ms.vss-build-web.ci-results-hub-route",
        "routeValues": {
          "project": "DevOps Manual",
          "viewname": "build-results",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "be1a2b52-5ed1-4713-8508-ed226307f634 (thecodemanual)"
        }
      }
    }
  }
}
You would get a response like this:
{
  "dataProviderSharedData": {},
  "dataProviders": {
    "ms.vss-web.component-data": {},
    "ms.vss-web.shared-data": null,
    "ms.vss-build-web.run-artifacts-data-provider": {
      "buildId": 7875,
      "buildNumber": "20201114.2",
      "definitionId": 72,
      "definitionName": "kmadof.hadar",
      "items": [
        {
          "artifactId": 1158,
          "name": "/hadar.zip",
          "sourcePath": "/hadar.zip",
          "size": 1330975,
          "type": "file",
          "items": null
        },
        {
          "artifactId": 1158,
          "name": "/scripts",
          "sourcePath": "/scripts",
          "size": 843,
          "type": "directory",
          "items": [
            {
              "artifactId": 1158,
              "name": "/scripts/check-hadar-settings.ps1",
              "sourcePath": "/scripts/check-hadar-settings.ps1",
              "size": 336,
              "type": "file",
              "items": null
            },
            {
              "artifactId": 1158,
              "name": "/scripts/check-webapp-settings.ps1",
              "sourcePath": "/scripts/check-webapp-settings.ps1",
              "size": 507,
              "type": "file",
              "items": null
            }
          ]
        }
      ]
    }
  }
}
You need to use a fully scoped Personal Access Token (PAT) to authorize your request.
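A sketch of the call with Python's requests library, reusing the payload above (organization, project, artifact and build IDs, and the PAT are placeholders):

import requests

ORG, PROJECT, PAT = "your-org", "YourProject", "your-fully-scoped-pat"
url = f"https://dev.azure.com/{ORG}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview"
payload = {
    "contributionIds": ["ms.vss-build-web.run-artifacts-data-provider"],
    "dataProviderContext": {
        "properties": {
            "artifactId": 1158,  # from .../_apis/build/builds/{id}/artifacts
            "buildId": 7875,
            "sourcePage": {"routeValues": {"project": PROJECT}},
        }
    },
}
resp = requests.post(url, json=payload, auth=("", PAT))  # PAT as basic-auth password
resp.raise_for_status()
provider = resp.json()["dataProviders"]["ms.vss-build-web.run-artifacts-data-provider"]
for item in provider["items"]:
    print(item["type"], item["sourcePath"])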
You can try the steps below:
Execute the "Artifacts - Get Artifact" endpoint of the Artifacts API. In the response body you can see the value of "downloadUrl", like this:
https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip
This URL is used to download (GET) the whole artifact as a ZIP file. You can also download a specific sub-folder or file in the artifact.
To download a specific sub-folder, execute the following endpoint.
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath={/path/to/the/folder}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath=/ef-tools
This will download the folder "ef-tools" and its contents as a ZIP file from your artifact "drop".
To download a specific file in the artifact, execute the following endpoint.
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath={/path/to/the/file}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath=/ef-tools/migrate.exe
This will download the file "ef-tools/migrate.exe" from your artifact "drop".
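Tying both steps together, a Python sketch (names and IDs are placeholders): fetch downloadUrl from the documented "Artifacts - Get Artifact" endpoint, then switch the format parameter and add subPath to pull a single file.

import requests

ORG, PROJECT, PAT = "your-org", "YourProject", "your-pat"
BUILD_ID, ARTIFACT = 7875, "drop"

meta = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}"
    f"/artifacts?artifactName={ARTIFACT}&api-version=6.0",
    auth=("", PAT),
).json()
download_url = meta["resource"]["downloadUrl"]  # ends with ...content?format=zip

# Ask for a single file instead of the whole ZIP.
file_url = download_url.replace("format=zip", "format=file") + "&subPath=/ef-tools/migrate.exe"
resp = requests.get(file_url, auth=("", PAT))
resp.raise_for_status()
with open("migrate.exe", "wb") as fh:
    fh.write(resp.content)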

BigQuery: Load Data into EU Dataset from GCS

In the past I have successfully loaded data into US-hosted BigQuery datasets from CSV data in US-hosted GCS buckets. We have since decided to move our BigQuery data to the EU, and I created a new dataset with that region selected. I have successfully populated those of our tables small enough to be uploaded from my machine at home, but two tables are far too large for this, so I would like to load them from files in GCS. I have tried doing this from both a US-hosted GCS bucket and an EU-hosted GCS bucket (thinking that bq load might not like crossing regions), but the load fails every time. Below is the error detail I'm getting from the bq command line (500, Internal Error). Does anyone know why this might be happening?
{
  "configuration": {
    "load": {
      "destinationTable": {
        "datasetId": "######",
        "projectId": "######",
        "tableId": "test"
      },
      "schema": {
        "fields": [
          {
            "name": "test_col",
            "type": "INTEGER"
          }
        ]
      },
      "sourceFormat": "CSV",
      "sourceUris": [
        "gs://######/test.csv"
      ]
    }
  },
  "etag": "######",
  "id": "######",
  "jobReference": {
    "jobId": "######",
    "projectId": "######"
  },
  "kind": "bigquery#job",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/######",
  "statistics": {
    "creationTime": "1445336673213",
    "endTime": "1445336674738",
    "startTime": "1445336674738"
  },
  "status": {
    "errorResult": {
      "message": "An internal error occurred and the request could not be completed.",
      "reason": "internalError"
    },
    "errors": [
      {
        "message": "An internal error occurred and the request could not be completed.",
        "reason": "internalError"
      }
    ],
    "state": "DONE"
  },
  "user_email": "######"
}
After searching through other related questions on Stack Overflow, I eventually realised that I had set my GCS bucket region to europe-west1 and not the multi-region EU location. Things are now working as expected.
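A minimal sketch of the fix, assuming the google-cloud-storage and google-cloud-bigquery client libraries and placeholder names: confirm the bucket's location matches the dataset's (multi-region EU), then run the load job.

from google.cloud import bigquery, storage

bucket = storage.Client().get_bucket("my-eu-bucket")  # hypothetical bucket
print(bucket.location)  # should be "EU" to match the dataset

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    schema=[bigquery.SchemaField("test_col", "INTEGER")],
)
load_job = client.load_table_from_uri(
    "gs://my-eu-bucket/test.csv",
    "my-project.my_eu_dataset.test",  # dataset created in the EU multi-region
    job_config=job_config,
)
load_job.result()  # waits for completion; raises on failure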