google cloud aspnetcore default builder yaml missing - google-cloud-platform

Starting Friday afternoon last week, I'm suddenly unable to deploy to GCP for my project and I receive the following error:
...
Building and pushing image for service [myService]
ERROR: (gcloud.beta.app.deploy) Could not read [<googlecloudsdk.api_lib.s
storage_util.ObjectReference object at 0x045FD130>]: HttpError accessing
//www.googleapis.com/storage/v1/b/runtime-builders/o/gs%3A%2F%2Fruntime-b
%2Faspnetcore-default-builder-20170524113403.yaml?alt=media>: response: <
s': '404', 'content-length': '9', 'expires': 'Mon, 05 Jun 2017 14:33:42 G
ary': 'Origin, X-Origin', 'server': 'UploadServer', 'x-guploader-uploadid
B2UpOw2hMicKUV6j5FRap9x4UKxxZsb04j9JxWA_kc27S_AIPf0QZQ40H6OZgZcLJxCnnx5m4
8x3JV3p9kvZZy-A', 'cache-control': 'private, max-age=0', 'date': 'Mon, 05
17 14:33:42 GMT', 'alt-svc': 'quic=":443"; ma=2592000; v="38,37,36,35"',
t-type': 'text/html; charset=UTF-8'}>, content <Not Found>. Please retry.
I tried again this morning and even updated my gcloud components to version 157. I continue to see this error.
Item of note: the 20170524113403 value in that YAML filename is, I think, a match for the first successful deploy of my project to .NET on App Engine Flexible. I have since deleted that version using the Google Cloud Explorer, after publishing a more recent version early Friday morning. My publish worked Friday morning; now it doesn't. I don't see any logs that help me understand why that file is even needed, and an Agent Ransack search of my entire drive doesn't reveal where that filename is coming from, so I can't point it at a more recent version.
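One quick check (my own suggestion, not something the error output tells you to do) is to ask Cloud Storage directly whether the builder file the deploy keeps referencing still exists. gsutil stat prints the object's metadata when the object is there and fails when it is not, which at least tells you whether the 404 is about the file itself or about the mangled path:
# Does the builder config the deploy references still exist?
gsutil stat gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml
# If the bucket allows listing, see which aspnetcore builder configs are currently published (findstr on Windows, grep elsewhere)
gsutil ls gs://runtime-builders/ | findstr aspnetcore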
I'm doing this both through the Google Cloud Tools integration in Visual Studio 2017 (Publish to Google Cloud...) and by running the command lines:
dotnet restore
dotnet publish -c Release
copy app.yaml to the destination location
gcloud beta app deploy .\app.yaml (run in the destination location)
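Put together, the sequence looks roughly like this from a PowerShell prompt; the publish output path below is just a placeholder for whatever the destination location is on your machine:
dotnet restore
dotnet publish -c Release
# copy app.yaml into the publish output (the "destination location" above); path is a placeholder
copy app.yaml .\bin\Release\netcoreapp1.0\publish\
cd .\bin\Release\netcoreapp1.0\publish\
gcloud beta app deploy .\app.yaml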

Not sure if this was "fixed" by Google or not, but four days later the problem went away. For some additional information: I was able to locate the logs on my machine from the publish phase and saw something interesting.
When it was working:
2017-05-25 09:36:48,821 DEBUG root Calculated builder definition using legacy version [gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
Later when it stopped working:
2017-06-02 15:25:15,312 DEBUG root Resolved runtime [aspnetcore] as build configuration [gs://runtime-builders/gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
Notice the duplicated "gs://runtime-builders/gs://runtime-builders/..." prefix,
which disappeared this morning; no, I didn't change a thing besides waiting until today.
2017-06-07 08:11:37,042 INFO root Using runtime builder [gs://runtime-builders/aspnetcore-default-builder-20170524113403.yaml]
You'll see that the doubled "gs://runtime-builders/gs://runtime-builders/" prefix is GONE.
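For anyone trying to find the same log lines: gcloud writes a log file per invocation, and gcloud info reports where those logs live, so there's no need to search the whole drive:
# Print SDK installation details, including the logs directory and the path of the last log file
gcloud info
# Dump the contents of the most recent gcloud log (where DEBUG lines like the ones above end up)
gcloud info --show-log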

Related

New bundled expo-cli start is returning a manifest that cannot be parsed by expo-dev-launcher

We are currently going through the upgrade to expo SDK 47, following these instructions:
https://blog.expo.dev/expo-sdk-47-a0f6f5c038af
Part of the upgrade involves the recommended switch to using the bundled expo-cli, rather than the global instance.
When we make this change and run npx expo start --dev-client, our Android app throws an error when selecting the dev server, with the message:
Value ----------------------------224819657222108214122421 of type java.lang.String cannot be converted to JSONObject
When I hit the dev server URL manually (e.g. http://192.168.0.146:8081/?platform=android) to obtain the manifest, I can see it does indeed start with the above value, since the response contains some multipart metadata, e.g.
----------------------------224819657222108214122421
Content-Disposition: form-data; name="manifest"
Content-Type: application/json
{"id":"f5c3b7be-999c-436e-b9c4-b453cf873af0","createdAt":"2023-01-12T10:13:41.499Z","runtimeVersion":"1.e","launchAsset":{"key":"bundle","contentType":"application/javascript","url":"http://10.20.0.174:8081/index.bundle?platform=android&dev=true&hot=false"},"asset
...
When I switch back to using the globally installed CLI as per the instructions in the upgrade blog, changing the start command from npx expo start to expo-cli start, the same test returns the manifest without the supplementary data, and the app loads successfully.
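For what it's worth, the manual check described above can be scripted. This is only a simplified version of the request the dev launcher makes, and the IP and port are just what our setup used; with the bundled CLI the body comes back wrapped as a multipart form (the string in the error is the boundary line), while the older global CLI returns the bare JSON manifest:
# Ask the dev server for the Android manifest, roughly what expo-dev-launcher requests
curl -i "http://192.168.0.146:8081/?platform=android"
# With the bundled CLI the body starts with a boundary such as
# ----------------------------224819657222108214122421
# followed by a Content-Disposition: form-data; name="manifest" part that wraps the JSON.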

"gcloud auth print-access-token" to get refresh token runs slow on my mac os-x

For our project, we use Google Cloud Container Registry (gcr.io) to push all our container images.
Our build system then pulls the base images from the container registry.
To pull a container image from the registry we use the OAuth2 access-token mechanism, and the build script runs the "gcloud auth print-access-token" command to obtain the access token.
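For context, the token is consumed by the registry login step; the documented gcr.io pattern is to pass it as the password for the special oauth2accesstoken user. The project ID, image name, and tag below are placeholders:
# Authenticate Docker against Container Registry using the short-lived access token
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
# After this the build system can pull its base images
docker pull gcr.io/<project-id>/<base-image>:<tag>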
Following is a sample run of gcloud --verbosity=debug auth print-access-token:
$ date;gcloud --verbosity=debug auth print-access-token;date
Fri Jul 17 10:23:57 PDT 2020
DEBUG: Running [gcloud.auth.print-access-token] with arguments: [--verbosity: "debug"]
< -- Get stuck here for 2 minutes -- >
DEBUG: Making request: POST https://oauth2.googleapis.com/token
INFO: Display format: "value(token)"
<Output Token Here>
Fri Jul 17 10:25:58 PDT 2020
Output from gcloud config list:
[core]
account = <email address>
disable_usage_reporting = False
log_http_redact_token = False
project = <project-id>
Your active configuration is: [default]
After looking at the code for the Google Cloud SDK, I found that it tries to make an HTTP call to http://metadata.google.internal every 10 minutes, and it gets stuck waiting on those calls, since that hostname only resolves from inside Google Compute Engine instances.
Questions:
Is it expected that the gcloud tool makes calls to Google-internal hostnames when I run the utility from my MacBook? (I am new to GCP, so I am happy to share more information about my config if needed.)
Is there a way to avoid these calls for gcloud auth print-access-token?
If there is no way to avoid the calls (even though they fail from my Mac), is there a way to reduce the timeout for them, or to stop making them every 10 minutes?
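To confirm where the two minutes go, one thing worth timing (purely a diagnostic suggestion, not something from the SDK docs) is the lookup of the metadata hostname itself. On a machine outside GCE it should fail almost immediately; if it hangs instead, the delay is in DNS resolution rather than in gcloud:
# Time the DNS lookup of the GCE metadata hostname from the Mac
time nslookup metadata.google.internal
# Time a direct request with a short cap, mirroring what the SDK's probe has to wait for
time curl -s -m 5 http://metadata.google.internal/computeMetadata/v1/ -H "Metadata-Flavor: Google"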

Cloud Datalab fails to launch

I keep getting the following error when trying to launch Cloud Datalab. I tried deleting all the listed VMs in the project, but it still does not work.
Oct 27 10:50:31 datalab-deploy-main-20151027-10-41-31 startupscript: 10:50 AM Checking if updated app version is serving.
Oct 27 10:50:31 datalab-deploy-main-20151027-10-41-31 startupscript: 10:50 AM Not enough VMs ready (0/1 ready, 1 still deploying). Version: datalab:main.388142264345574525
Oct 27 10:50:31 datalab-deploy-main-20151027-10-41-31 startupscript: ERROR: Not enough VMs ready (0/1 ready, 1 still deploying). Version: datalab:main.388142264345574525
Oct 27 10:50:31 datalab-deploy-main-20151027-10-41-31 startupscript: 10:50 AM Rolling back the update. This can sometimes take a while since a VM version is being rolled back.
Oct 27 10:50:32 datalab-deploy-main-20151027-10-41-31 startupscript: Could not start serving the given version.
Oct 27 10:50:32 datalab-deploy-main-20151027-10-41-31 startupscript: ERROR: (gcloud.preview.app.deploy) Command failed with error code [1]
Oct 27 10:50:32 datalab-deploy-main-20151027-10-41-31 startupscript: Step deploy datalab module failed.
Is this the only Managed VM deployed in your project? If so, it may be worth cleaning up the Managed VM storage buckets and trying again: go to the Developers Console, Storage, Browse, and find the two buckets whose names start with "vm-config" and "vm-containers". Delete those buckets and try deploying Datalab again.
Sometimes there are permission issues with these two buckets when they are created. When that happens, the Managed VM deployment fails because it cannot pull images from Google Cloud Storage.
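If you prefer the command line over the console, the same cleanup can be done with gsutil. The exact bucket names vary per project, so list them first; the suffixes below are placeholders:
# Find the Managed VM buckets in the project
gsutil ls | grep -E "vm-config|vm-containers"
# Remove them, substituting the bucket names printed above
gsutil -m rm -r gs://vm-config-<suffix>/
gsutil -m rm -r gs://vm-containers-<suffix>/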
It could be caused by a permissions issue and might have been fixed recently. Could you give it another try? If it still fails with the same error, try the following:
Go to the Developers Console, Permissions, and under "Service Accounts" find the account [project-id]@appspot.gserviceaccount.com.
Copy the account ID somewhere since we'll use it later. Then remove the account from the list, make sure it disappears from the list, and add it back with "Can edit" permission.
Wait for a few minutes and try deploying Datalab again.
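If the console steps are awkward, the same grant can be applied from a current gcloud command line ("Can edit" corresponds to the project Editor role); the project ID below is a placeholder:
# Re-grant the App Engine default service account edit access on the project
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:<project-id>@appspot.gserviceaccount.com" \
  --role="roles/editor"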
Let me know how it goes!

Google Dataflow StarterPipeline fails to execute in the cloud

I am new to Google Cloud Platform, so I might be asking simple questions.
I was testing the StarterPipeline and WordCount examples using the Cloud Dataflow API, and although both work locally, both fail if I try to run the pipelines on the Cloud Dataflow service.
I've verified that all required APIs are enabled and I am successfully authenticated.
There are no messages in the log files; the only thing I see is that the request stages class files on Cloud Storage and then reports "Job finished with status FAILED" before the worker pool ever starts (log below).
Any thoughts and suggestions would be greatly appreciated!
Thanks, Vladimir
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 45 files. Enable logging at DEBUG level to see which files will be staged.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.Pipeline applyInternal
WARNING: Transform AnonymousParDo2 does not have a stable unique name. This will prevent updating of pipelines.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
Sep 15, 2015 4:13:09 PM com.google.cloud.dataflow.sdk.util.PackageUtil stageClasspathElements
INFO: Uploading 45 files from PipelineOptions.filesToStage to staging location to prepare for execution.
Sep 15, 2015 4:13:19 PM com.google.cloud.dataflow.sdk.util.PackageUtil stageClasspathElements
INFO: Uploading PipelineOptions.filesToStage complete: 0 files newly uploaded, 45 files cached
Dataflow SDK version: 1.0.0
Sep 15, 2015 4:13:20 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: To access the Dataflow monitoring console, please navigate to https://console.developers.google.com/project/XXXXXXXXXXXXXXXXXXX/dataflow/job/2015-09-15_07_13_20-12403932015881940310
Submitted job: 2015-09-15_07_13_20-12403932015881940310
Sep 15, 2015 4:13:20 PM com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner run
INFO: To cancel the job using the 'gcloud' tool, run:
> gcloud alpha dataflow jobs --project=XXXXXXXXXXXXXXXXXXX cancel 2015-09-15_07_13_20-12403932015881940310
Sep 15, 2015 4:13:27 PM com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner run
INFO: Job finished with status FAILED
Exception in thread "main" com.google.cloud.dataflow.sdk.runners.DataflowJobExecutionException: Job 2015-09-15_07_13_20-12403932015881940310 failed with status FAILED
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:155)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:56)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:176)
at com.google.cloud.dataflow.starter.StarterPipeline.main(StarterPipeline.java:68)

Upgrade to Sharepoint 2010 - Project Tracking & IT Team Workspaces not recognized

I have a WSS 3.0 SP3 server that has a few sites using the Project Tracking Workspace & IT Team Workspace site templates. When I upgrade the content databases, I get errors saying:
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: Found 2 web(s) using missing web template 75817 (lcid: 1033) in ContentDatabase WSS_Content_team.site.com.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: The site definitions with Id 75817 is referenced in the database [WSS_Content_team.site.com], but is not installed on the current farm. The missing site definition may cause upgrade to fail. Please install any solution which contains the site definition and restart upgrade if necessary.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: Found 120 web(s) using missing web template 75820 (lcid: 1033) in ContentDatabase WSS_Content_team.site.com.
[powershell] [SPContentDatabaseSequence] [ERROR] [4/7/2014 2:43:47 PM]: The site definitions with Id 75820 is referenced in the database [WSS_Content_team.site.com], but is not installed on the current farm. The missing site definition may cause upgrade to fail. Please install any solution which contains the site definition and restart upgrade if necessary.
Things I've tried:
I have downloaded the Fab 40 site templates, extracted the Project Tracking Workspace & IT Team Workspace templates, and globally deployed the solution in the farm.
In the folder C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\1033\XML I can see the two files, WEBTEMPProjSing.xml and WEBTEMPITTeam.xml, which contain the template IDs 75820 and 75817.
I've Downloaded the Templates from TechSolutions:
Project Tracking Workspace
IT Team Workspace
Installed them, globally deployed them, still the same errors.
If I query the farm for web templates, they do not show up. The only time I can get them to appear in Get-SPWebTemplate is when I deploy the Tech Solutions packages to a specific web, and even then the template ID is 1, not 75820 or 75817 (the query I'm using is sketched below).
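For reference, this is roughly the check I'm running from the SharePoint 2010 Management Shell, looking for the two template IDs from the upgrade errors:
# List any installed web templates whose ID matches the ones the content database expects
Get-SPWebTemplate | Where-Object { $_.ID -eq 75817 -or $_.ID -eq 75820 } | Format-Table Name, ID, Title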
If there were not 120 sites using the Project Tracking Workspace, I would just scrap the subsites and recreate them, but that's quite a bit of work for 120 sites.
To make this even worse, I will then be upgrading these to 2013.
Any Suggestions?
With some additional Google searching, I believe I found the issue.
With the help of this blog post I was able to get the two templates I needed installed. Here is the procedure I followed:
Start a SharePoint Management Shell in Administrator mode.
Install the ApplicationTemplateCore Solution file
Add-SPSolution -LiteralPath C:\Fab40\ApplicationTemplateCore.wsp
Wait 5 Minutes, then deploy the solution
stsadm -o deploysolution -name ApplicationTemplateCore.wsp -allowgacdeployment -immediate
Wait 5 Minutes, then Copy App Bin Content
stsadm -o copyappbincontent
Reset the IIS Server
iisreset
Install the other Solutions that you need with the same commands.
Add-SPSolution -LiteralPath C:\Fab40\ProjectTrackingWorkspace.wsp
Wait 5 Minutes, then deploy the solution
stsadm -o deploysolution -name ProjectTrackingWorkspace.wsp -allowgacdeployment -immediate
Wait 5 Minutes, then Copy App Bin Content
stsadm -o copyappbincontent
Once you are done adding the solutions, reboot the server. I had used the same commands the first several times, but the Fab 40 solutions would never be recognized or installed. It must have something to do with the wait time, the IIS reset, and the reboot; that was the only combination that worked for me.
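Put together as a single script, the procedure above looks roughly like this (the waits are the pauses described above, and the .wsp paths are simply wherever you extracted the Fab 40 packages on your server):
# Run from an elevated SharePoint 2010 Management Shell
Add-SPSolution -LiteralPath C:\Fab40\ApplicationTemplateCore.wsp
Start-Sleep -Seconds 300    # wait ~5 minutes before deploying
stsadm -o deploysolution -name ApplicationTemplateCore.wsp -allowgacdeployment -immediate
Start-Sleep -Seconds 300    # wait ~5 minutes before copying app bin content
stsadm -o copyappbincontent
iisreset
Add-SPSolution -LiteralPath C:\Fab40\ProjectTrackingWorkspace.wsp
Start-Sleep -Seconds 300
stsadm -o deploysolution -name ProjectTrackingWorkspace.wsp -allowgacdeployment -immediate
Start-Sleep -Seconds 300
stsadm -o copyappbincontent
# Repeat the same block for the IT Team Workspace package, then reboot the server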