TL;DR
My test-repo is missing from the All repositories list inside the Cloud Source Repositories panel, but I can still access it. Why?
Also asked here on Google Cloud Community
Details
I made test-repo.
I was able to git clone it using the Cloud SDK, make changes, and even push.
When I went back to GCP Console > Cloud Source Repositories, the repository did not show up; instead I was shown the welcome page.
I was still able to git pull.
I was able to access test-repo by going to my Cloud Function which was using the source code from test-repo.
When I made a new temporary repo called test2, it showed up under the All repositories tab.
test-repo is still missing, but it shows up under Recently Viewed and I can access it there.
What is happening here???
Edit 1:
Now the list is showing only test-repo.
Since you are able to reach your repository at every step of the way using the Cloud SDK, and even through the UI at some point, I would say this is likely a UI issue in the GCP console.
That being said, I recommend opening an issue in Google's Issue Tracker so that Google Cloud's engineering team can be made aware of the issue and work towards fixing it.
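In the meantime, you can double-check from the Cloud SDK that the repository still exists server-side, independent of the console UI. A quick sanity check, assuming gcloud is authenticated against the right project (YOUR_PROJECT_ID is a placeholder):

# List all Cloud Source Repositories in the project; test-repo should
# appear here even while the console UI hides it.
$ gcloud source repos list --project=YOUR_PROJECT_ID

# Optionally re-clone it to confirm the Git data is intact.
$ gcloud source repos clone test-repo --project=YOUR_PROJECT_ID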
Related
I created a repository (without linking it to Git) for Google Dataform, plus a workspace.
I initialized a first setup and pushed those first files.
Where can I see the repo and all the commits I do in there?
I looked in Cloud Storage, Artifact Registry, and Cloud Source Repositories, but can't find it.
Dataform does not have all the functionality of Git, so without linking to GitHub you won't be able to see the repository there. For your requirement, you can create Dataform repositories (essentially Git repositories containing Dataform code) and create code workspaces attached to those repositories. You edit code in those workspaces and push the results to the relevant Git repository. You can also compile the repository/workspace Dataform code into directed acyclic graphs (DAGs) of executable SQL and execute the compiled DAGs against BigQuery.
The repositories are listed here; please see the official documentation for more context.
You may also explore the possibility of connecting to a remote repository, as the Dataform repository doesn't meet your requirement for viewing commits.
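If you want to verify programmatically that the repository exists (it lives inside the Dataform service itself, not in Cloud Storage or Cloud Source Repositories), something along these lines should work. This is only a sketch: it assumes the Dataform v1beta1 REST API and a repository in the us-central1 region, so adjust the project ID and region to your setup.

# List Dataform repositories in a given project and region (v1beta1 API).
$ curl -s \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://dataform.googleapis.com/v1beta1/projects/YOUR_PROJECT_ID/locations/us-central1/repositories"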
I'm seeing this Cloud Build error when I try to deploy a Cloud Function:
"Step #2 - "analyzer": [31;1mERROR: [0mfailed to initialize cache: failed to create image cache: accessing cache image "us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest": failed to get OS from config file for image 'us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest'"
I'm able to build and emulate the cloud function locally, but I can't deploy it due to this error. I was able to deploy just fine until now. I've looked everywhere and I can't find any discussion about this. Anyone know what's going on here?
UPDATE: I deployed a new function 3 days ago and now I can't seem to deploy an update to it. I get the same error. I'm fairly sure this is happening due to the lifecycle rule I set up to ensure I don't keep storing images of functions: Firebase storage artifacts is huge and keeps increasing. This rule is important to keep around because I don't want to pay for unnecessary storage, but it seems like it might be the source of our problem here. Can someone from Google look into this?
I got the same error, even for code that deployed successfully before.
A workaround is to delete the Docker images for the failing Firebase functions inside Container Registry and then redeploy the functions. (The images will be re-created upon deploying.)
The error still occurs sporadically, so I suspect this may be a bug introduced in Firebase's deployment process. Thankfully for now, the workaround above resolves the issue every time the error comes up.
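If you prefer the command line over the Container Registry UI, the same cleanup can be done with gcloud. A sketch using the placeholders from the error message above (MY_PROJECT and SOME_KEY stand in for your real values):

# Inspect the images Cloud Functions keeps under the gcf path.
$ gcloud container images list --repository=us.gcr.io/MY_PROJECT/gcf/us-central1

# Delete the broken cache image named in the error; it is recreated
# automatically on the next deploy.
$ gcloud container images delete \
    us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest \
    --force-delete-tags --quiet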
I also encountered the same problem, and solved it by deleting the images in the Container Registry of the Firebase project.
I made a script for this at the time, and I'll put it here. The usage is as follows; please use it if you like.
Install the Google Cloud SDK.
Download the script.
Edit CONTAINER_REGISTRY to your registry name. For example: CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1
Grant execute permission. - $ chmod +x script.sh
Execute it. - $ sh script.sh
Deploy your functions.
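In case the script itself isn't accessible, here is a minimal sketch of what such a cleanup script can look like (not the original script; it assumes the Cloud SDK is installed and CONTAINER_REGISTRY is set as in step 3):

#!/bin/bash
# Rough sketch: delete every image under the given Cloud Functions
# registry path. The images are rebuilt on the next deploy.
set -euo pipefail

# Adjust to your registry, as in step 3 above.
CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1

for image in $(gcloud container images list --repository="$CONTAINER_REGISTRY" --format='value(name)'); do
  for digest in $(gcloud container images list-tags "$image" --format='get(digest)'); do
    gcloud container images delete "${image}@${digest}" --force-delete-tags --quiet
  done
done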
I've been having the same problem for the last few days and am in contact with support. I had the same log, and in my case it wasn't connected to the artifacts, because the artifacts rebuild themselves automatically on deploy (read below about a subtle case related to the artifacts and how to fix it). Deleting the functions and redeploying solved it for me.
Artifacts auto cleanup
Note that if the artifacts bucket is empty, then the problem is somewhere else.
But if it's not empty, then to rule out any problems related to the artifacts auto cleanup, manually delete the whole "containers" folder in the artifacts bucket, which should solve it. Then just redeploy.
Make sure not to delete the artifacts bucket itself!
Doug from Firebase confirmed in the question you're referring to that removing the artifacts content is safe.
So, here is how to delete it:
Go to the Google Cloud console, select your project -> Storage -> Browser: https://console.cloud.google.com/storage/browser
Select the "artifacts" bucket
Choose "containers" and delete it
If the problem was here, it should work fine after that.
This happens because the deletion rule you refer to in your question checks the "last updated" timestamp of each file, while on redeploy only some of the files are updated. So the next day the rule deletes some files while leaving others, which leaves the bucket in an inconsistent state. That's why you remove everything manually.
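If you prefer to do the same cleanup from the command line, something like this should work. It's a sketch that assumes the default artifacts bucket name (us.artifacts.YOUR_PROJECT_ID.appspot.com); double-check the actual bucket name in the Storage browser first, and again, delete only the containers folder, not the bucket itself.

# Remove only the "containers" folder inside the artifacts bucket
# (assumes the default bucket naming). It is recreated on the next deploy.
$ gsutil -m rm -r "gs://us.artifacts.YOUR_PROJECT_ID.appspot.com/containers"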
I'm trying to use AWS Explorer in PyCharm to download and edit an existing Lambda function on my AWS account, but I'm unable to find out how to do that. I've read through all the documentation available on the wiki and followed a bunch of tutorials on deploying new Lambda functions, but I can't find out how to download and edit existing functions. I can download the Lambda's code using the console, but I'm not sure how to make it editable in my PyCharm project, and this seems like a workaround anyway. Is there a way to do this within the AWS Explorer tool?
No, currently (Oct 2019) you can't download a Lambda function's source and edit it locally. If you know the name of the S3 object where the code is stored, you could pull that file down, make changes, re-zip it, re-upload it back to S3, and force the Lambda to cold-start (change the memory slider) so it picks up the new code, but this is extremely brittle.
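For the download half, the AWS CLI can fetch the current deployment package directly. A sketch using a placeholder function name (my-function), nothing PyCharm-specific:

# Download the current deployment package (Code.Location is a presigned S3 URL).
$ CODE_URL=$(aws lambda get-function --function-name my-function \
    --query 'Code.Location' --output text)
$ curl -o my-function.zip "$CODE_URL"

# After editing locally, re-zip and push the code back.
$ aws lambda update-function-code --function-name my-function \
    --zip-file fileb://my-function.zip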
Have you tried Cloud9? I find it the best way to work on Lambdas, especially if you are working as a team. The problem with Cloud9, though, is that it doesn't seem to be actively developed, and there is a lot of manual work to update SAM and the dev tools in there. Anyhow, I still recommend Cloud9.
I have defined a simple configuration in Google Cloud Build that mirrors a GitHub repository and triggers when I push to master. However, for some time now, the build is no longer triggered when I push. And when I trigger the build manually, an old commit is built.
Deleting and recreating the trigger didn't help.
How can I fix this?
As far as I can tell, this is a bug on Google's side, but here's the workaround I used to fix it.
First, delete your Cloud Build trigger.
Then, navigate to Google Cloud Source Repositories. You should be able to find the repo that is mirrored from GitHub. Click on the settings icon next to the repo and then click on "Disconnect this repository".
Now, recreate the trigger from scratch.
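If you'd rather script the first step, the trigger can also be listed and deleted with gcloud. A sketch (depending on your SDK version these commands may live under gcloud beta builds triggers instead; TRIGGER_NAME is a placeholder):

# Find the name or ID of the broken trigger.
$ gcloud builds triggers list

# Delete it before disconnecting and re-mirroring the repository.
$ gcloud builds triggers delete TRIGGER_NAME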
I went here and started on the first task, which is to create a registry. I later closed my browser, and when I go back to that page I just get the homepage again; if I start the wizard, it acts as if I've never done it before and forces me to create a new repository.
How do I get back to the repository I created initially, and how can I continue with this wizard to the next steps using that repo? Or do I lose the repo entirely until I get through all the steps in this wizard? Where did my repo go? It says it exists, but where? How do I get back to that repo in the AWS console?
Isn't it here? https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories
(I couldn't comment instead of posting an answer due to insufficient reputation, sorry for that.)
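You can also confirm from the AWS CLI that the repository exists, independent of the console wizard (a quick check; adjust the region if you created the registry elsewhere):

# List all ECR repositories in the region; the one created by the
# wizard should show up here with its repository URI.
$ aws ecr describe-repositories --region us-east-1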