Suddenly I am not getting any logs except deployment logs for Google Cloud Functions
Until now it worked fine, but after updating the function I haven't seen any logs. I did some research, deleted the Cloud Functions log file as well as the function itself, and created a new function. Even then I am not able to see any logs related to the project except audit logs (i.e. whenever the function gets updated).
Any clues what's wrong? I can't work out what the exact problem is. Any help is appreciated.
I have checked the Issue Tracker (issuetracker.google.com/issues/155215191) and found that work is still being done to address this scenario.
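In the meantime, one way to confirm whether any logs are reaching Cloud Logging at all is to query them directly from the CLI (the function name and region below are placeholders):

$ gcloud functions logs read my-function --region=us-central1 --limit=50
# Or query Cloud Logging directly, bypassing the Functions console:
$ gcloud logging read 'resource.type="cloud_function" resource.labels.function_name="my-function"' --limit=50

If these return entries while the console shows none, the problem is in the console view rather than in log ingestion.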
[The issue is resolved. Stack Overflow does not allow deleting the question; the issue was a mismatch in the schema.]
I have a piece of code that uploads data from Cloud Storage into BigQuery. It runs correctly locally, but when I moved the code to Cloud Functions it started failing. Could you help me figure out how to fix it?
The logs: "Function cannot be initialized. Error: function terminated…"
Check Cloud Logging for more info on the error; this link on Cloud Functions troubleshooting is worth reading:
Function cannot be initialized. Error: function terminated.
Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
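Since the root cause here turned out to be a schema mismatch, a quick sanity check is to compare the destination table's schema against the first rows of the source file (dataset, table, bucket, and object names are placeholders):

$ bq show --schema --format=prettyjson my_dataset.my_table
$ gsutil cat gs://my-bucket/my-file.csv | head -n 3

Any column that appears in one but not the other, or that has a different type, will make the load job fail.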
I'm seeing this Cloud Build error when I try to deploy a Cloud Function:
"Step #2 - "analyzer": [31;1mERROR: [0mfailed to initialize cache: failed to create image cache: accessing cache image "us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest": failed to get OS from config file for image 'us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest'"
I'm able to build and emulate the cloud function locally, but I can't deploy it due to this error. I was able to deploy just fine until now. I've looked everywhere and I can't find any discussion about this. Anyone know what's going on here?
UPDATE: I deployed a new function 3 days ago and now I can't seem to deploy an update to it. I get the same error. I'm fairly sure this is happening due to the lifecycle rule I set up to ensure I don't keep storing images of functions: Firebase storage artifacts is huge and keeps increasing. This rule is important to keep around because I don't want to pay for unnecessary storage, but it seems like it might be the source of our problem here. Can someone from Google look into this?
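For reference, a lifecycle rule like the one described can be expressed as a JSON policy and applied with gsutil; the bucket name and the one-day age threshold here are assumptions, not the asker's actual configuration:

# lifecycle.json: delete objects older than one day.
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1}}]}

$ gsutil lifecycle set lifecycle.json gs://us.artifacts.MY_PROJECT.appspot.com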
I got the same error, even for code that deployed successfully before.
A workaround is to delete the Docker images for the failing Firebase functions inside Container Registry and redeploy the functions. (The images will be re-created upon deploying.)
The error still occurs sporadically, so I suspect this may be a bug introduced in Firebase's deployment process. Thankfully for now, the workaround above resolves the issue every time the error comes up.
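A sketch of this workaround from the command line, reusing the registry path from the error message above (MY_PROJECT and SOME_KEY stand in for real values):

# List the build images for the failing function, then delete the stale cache image.
$ gcloud container images list --repository=us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY
$ gcloud container images delete us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest --force-delete-tags
# Redeploy; the images are re-created on deploy.
$ firebase deploy --only functions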
I also encountered the same problem and solved it by deleting the images in the Container Registry of the Firebase project.
I wrote a script at the time, and I'll put it here. The usage is as follows; please use it if you like.
Install the Google Cloud SDK.
Download the Script
Edit CONTAINER_REGISTRY to your registry name. For example: CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1
Grant execute permission. - $ chmod +x script.sh
Execute it. - $ ./script.sh
Deploy your functions.
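The script itself isn't reproduced above, so here is a minimal sketch of what such a cleanup script might look like, assuming the goal is simply to delete every image under the registry path (the nested-repository layout is an assumption based on how Cloud Functions stores its build images):

#!/bin/bash
# Delete all Cloud Functions build images under a Container Registry path.
CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1

for repo in $(gcloud container images list --repository="$CONTAINER_REGISTRY" --format="value(name)"); do
  # The per-function path is itself an image and may also hold a nested cache image.
  for image in "$repo" $(gcloud container images list --repository="$repo" --format="value(name)"); do
    # Delete every manifest (and its tags) for this image.
    for digest in $(gcloud container images list-tags "$image" --format="get(digest)"); do
      gcloud container images delete "${image}@${digest}" --force-delete-tags --quiet
    done
  done
done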
I've been having the same problem for the last few days and have been in contact with support. I had the same log. In my case it wasn't connected to the artifacts, because the artifacts rebuild themselves automatically on deploy (but read below about a subtle case related to the artifacts and how to fix it); deleting the functions and redeploying solved it for me.
Artifacts auto cleanup
Note that if the artifacts bucket is empty, then the problem is somewhere else.
But if it's not empty, what you can do to resolve any possible problems related to the artifacts auto cleanup is to manually delete the whole "containers" folder in the artifacts bucket, which should solve it. Then just redeploy.
Make sure not to delete the artifacts bucket itself!
Doug from Firebase confirmed, in the question you're referring to, that removing the artifacts content is safe.
So, here is how to delete it:
Go to the Google Cloud Console, select your project -> Storage -> Browser: https://console.cloud.google.com/storage/browser
Select the "artifacts" bucket
Choose "containers" and delete it
If the problem was here, it should work fine after that.
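The same deletion can be done from the command line with gsutil; the bucket name below follows the usual Firebase artifacts naming and is an assumption:

# -m parallelizes the deletion, which helps since the folder can hold many objects.
$ gsutil -m rm -r gs://us.artifacts.MY_PROJECT.appspot.com/containers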
This happens because the deletion rule you refer to in your question checks the "last updated" timestamp of each file, while a redeploy updates only some of the files. So the next day the rule deletes some of the files while leaving the others, which leaves the bucket in an inconsistent state. That's why you remove everything manually.
I created a GCP Cloud Function on the Go 1.13 runtime. All resources are under the same project.
It reads from a Pub/Sub topic A, does a transformation on the message, and writes the result to a different topic B.
I had this working fine in the test project, but I can't seem to reproduce that in our production environment.
I bound the function to a service account that is granted the Pub/Sub Publisher and Viewer roles.
But I seem to keep on getting this error:
rpc error: code = PermissionDenied desc = User not authorized to perform this action.
To summarize/clarify: reading from topic A gives no problems, but writing to topic B makes the function crash.
What am I missing?
This turned out to be user error. I'm sorry for wasting everyone's time, and I appreciate all the feedback. I was pointing to the wrong project, and, go figure, I didn't have permissions there.
Thank you all for the help.
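For anyone hitting the same symptom: it's worth double-checking which project the destination topic actually lives in, and whether the function's service account holds the publisher role on it (project and topic names are placeholders):

$ gcloud pubsub topics describe topic-B --project=my-prod-project
$ gcloud pubsub topics get-iam-policy topic-B --project=my-prod-project
# The policy output should bind roles/pubsub.publisher to the function's service account.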
Deploying a new service into Google Cloud Run fails with the message:
Failed to move user code into storage, please verify the pod configuration and try it again.
What does this mean, and how can one go about debugging it?
Just for the sake of giving this question an answer: all the credit should go to AhmetB and his insight about this being a known issue to Google, in which a missing or invalid entrypoint causes the issue to surface.
I have found a Public Issue Tracker here, through which the issue has also been forwarded to Google. Google will deliver further information on that PIT.
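Given that the known issue involves a missing or invalid entrypoint, a quick local sanity check before deploying is to inspect and run the image (the image name and port are placeholders):

$ docker inspect --format='{{.Config.Entrypoint}} {{.Config.Cmd}}' gcr.io/MY_PROJECT/my-service
# Cloud Run expects the container to listen on the port given in $PORT.
$ docker run --rm -e PORT=8080 -p 8080:8080 gcr.io/MY_PROJECT/my-service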
I created some Lambda@Edge functions but I'm unable to set up the logs for them. When trying to access the logs I see this error message:
There was an error loading Log Streams. Please try again by refreshing this page.
I have gone through everything I could find on Google, and as far as I can see my permissions are set up fine. I've created a custom role for them like this.
The role contains the following permissions:
I can't figure out what else could cause this error. It has been around 2 hours since I set up the functions and permissions.
For anyone experiencing the same problem: there is a weird quirk to Lambda@Edge.
The logs are stored in the AWS region closest to the user who triggered the function.
So even if you've deployed your functions in us-east-1, switch the console region to the one closest to you.
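If you'd rather not click through every region, you can search for the log groups across all regions from the CLI. Note that Lambda@Edge log groups are named after the function's home region even though they live in whichever edge region served the request (the function name is a placeholder):

for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws logs describe-log-groups --region "$region" \
    --log-group-name-prefix '/aws/lambda/us-east-1.my-function' \
    --query 'logGroups[].logGroupName' --output text
done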