Update an existing deployment using the Deployment Manager update API

I use the Java APIs for CRUD operations on the Google Cloud Deployment Manager API. I can create, preview, and delete deployments without a problem.
But when I try to update an existing deployment that is in preview mode, the API returns the error below.
Deployment in preview must not have a target with UPDATE
The same inputs work fine for create and preview, so I'm confident the inputs are OK.
I looked for others who have reported this issue.
Here is one such report, but it has no solution.
Does anyone know if there is a GitHub repo for Google Cloud Deployment Manager where we can report this issue?

As mentioned in the answer here, this is a known issue, and you can still use the workaround suggested there.
I have created an issue tracker entry for this error message, so you can add your comments there and follow it for updates.
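A minimal sketch of what that workaround usually amounts to, based on the documented behaviour of previewed deployments: before a new target can be sent, the pending preview has to be either committed (an update with no target applies it) or cancelled via cancelPreview. The sketch below uses the googleapis Node client in TypeScript rather than the asker's Java client, but the underlying REST calls (deployments.get, deployments.update, deployments.cancelPreview) are the same; the project and deployment names are placeholders.

// Sketch only: resolve a deployment that is stuck in preview before updating it.
// Assumes the googleapis package and Application Default Credentials.
import { google } from 'googleapis';

async function resolvePreview(project: string, deploymentName: string): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const dm = google.deploymentmanager({ version: 'v2', auth });

  // Fetch the current deployment; its fingerprint must accompany
  // update and cancelPreview calls.
  const { data: current } = await dm.deployments.get({
    project,
    deployment: deploymentName,
  });

  // Option A: commit the previewed changes. An update without a target
  // simply applies whatever is currently in preview.
  await dm.deployments.update({
    project,
    deployment: deploymentName,
    requestBody: { fingerprint: current.fingerprint },
  });

  // Option B (instead of A): discard the preview, then issue a normal
  // update that carries the new target configuration.
  // await dm.deployments.cancelPreview({
  //   project,
  //   deployment: deploymentName,
  //   requestBody: { fingerprint: current.fingerprint },
  // });
}

Either way, once the deployment is no longer in preview, an update call that includes a target should stop returning the error above.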

Related

Using NextJS On Demand Revalidation on AWS Amplify

We have built a Next.js website that runs on AWS Amplify. We currently use getStaticProps to render the pages, and we generate them using getStaticPaths.
We would like to use on-demand revalidation to refresh the data on command, for example when we update the database.
Our local environment works perfectly: the data stays the same until we change it and revalidate using our secret API endpoint. When we deploy to AWS Amplify, however, the revalidation doesn't work.
We looked into the logs and didn't see any errors, and the permissions for SQS are valid. We even removed the branch and redeployed it, yet nothing worked.
I have tried searching for the same problem online but didn't find any solution. Did anyone here stumble upon the same issue?
Thank you!
It seems this isn't supported yet and is in the works by the Amplify team.
This was answered in an issue I opened on Amplify's GitHub:
https://github.com/aws-amplify/amplify-hosting/issues/3116
right now, Amplify Hosting does not support on-demand ISR. Supporting it is on our roadmap and we will update our documentation to make this clear.
I'm using Vercel while they work on that.
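For reference, the "secret API endpoint" pattern the question describes is normally a Pages Router API route that checks a shared secret and calls res.revalidate() (Next.js 12.2+). A minimal sketch; the route path, environment variable name, and revalidated path are placeholders:

// pages/api/revalidate.ts (placeholder path) -- minimal on-demand ISR endpoint.
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Reject callers that don't present the shared secret.
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    // Regenerate the static page immediately instead of waiting for a timed revalidate.
    await res.revalidate('/some/path');
    return res.json({ revalidated: true });
  } catch (err) {
    // On failure, Next.js keeps serving the last successfully generated page.
    return res.status(500).send('Error revalidating');
  }
}

This works locally, as the question notes; the point of the answer above is that Amplify Hosting simply did not execute this revalidation path at the time.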

Google Cloud Build Step Logs Not Viewable in Console

I am not able to view Google Cloud Build logs in the console. For each step that I click on, I cannot see the associated logs in the Build Log window on the right (see picture). This occurs with both the Build Summary and each detail step. The only way to view these logs is to click View Raw, but that is only a workaround.
Another issue is that each build step status (Success/Failure) is only populated at the end of the entire build process, as opposed to updating after each step.
Is anybody else experiencing this, or does anyone have suggestions to remedy this issue? My browser is Google Chrome Version 93.0.4577.82 (Official Build) (x86_64).
Experience shows that there can be adverse interactions between Chrome plugins and a variety of websites that have rich content or streaming (such as Google's Console). If something seems odd, try creating a new Chrome profile or running in incognito mode and see if that resolves the issue. If it does, you can incrementally add (or remove) plugins until you find the one that is causing problems. If you do find the culprit plugin, consider posting what you found as a comment for others.
As per the documentation, if you are storing your build logs in Cloud Logging, you won't be able to see them on the Cloud Build page; instead, you will see them on the Logging page (i.e. Operations Logging).
To view the build logs on the Cloud Build page in the Cloud Console: if your build logs are in the Google-created Cloud Storage bucket, grant the Project Viewer role on the project; if your build logs are in a user-specified Cloud Storage bucket, grant the Storage Object Viewer role. For more information, see the documentation.
Your second point is expected behaviour; please look here.
Adding to Kolban's answer above: one of the Chrome extensions that interferes is Imagus. Uninstalling or disabling it should fix the problem.
Another Chrome extension that seems to cause the problem is "Dark Mode". My version is 0.4.2 on Chrome 96.0.4664.110; disabling it and refreshing the Build Detail page restored the build log listing.

AWS CodePipeline recognizes my new GitHub commit fine - but how?

I am currently experimenting with AWS CodePipeline for the first time and have so far set up the Source and Build steps with a demo project.
I have connected the Source step to a GitHub account (a system account we use) with admin access to all repos. As the documentation states, the OAuth scopes admin:repo_hook and repo are required for this; they are granted and the connection is fine.
As the title of this question already states: the integration works just fine. When I push a new commit on master to GitHub, the pipeline starts working and runs through smoothly.
My question, however, is: how? As the docs state here:
To integrate with GitHub, AWS CodePipeline uses OAuth tokens
However, when looking at my GitHub settings, I would have expected to find the application listed as an "OAuth application" directly on the repository or under the organization's "OAuth applications", but neither is the case!
Thus, I am wondering how CodePipeline recognizes my new commit. Is it polling the SCM, or is this some other sort of magic? I did not find any webhooks either.
Thank you in advance!
AWS CodePipeline is connected to GitHub via the new "Integrations" concept: https://github.com/integrations/aws-codepipeline
This concept was announced here: https://developer.github.com/changes/2016-09-14-Integrations-Early-Access/
GitHub Integrations authenticate using JSON Web Tokens and private/public keys, so I'm not sure whether AWS is technically correct in describing that as "OAuth". Details here: https://developer.github.com/early-access/integrations/authentication/#as-an-integration
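To illustrate the JWT part mentioned above: an integration (today a "GitHub App") authenticates as itself by signing a short-lived RS256 JWT with its private key and sending it as a bearer token. A rough sketch using the jsonwebtoken package; the app ID and key path are placeholders:

// Sketch of integration ("GitHub App") authentication: sign a short-lived JWT
// with the integration's private key. App ID and key path are placeholders.
import { readFileSync } from 'fs';
import jwt from 'jsonwebtoken';

const APP_ID = '12345';
const privateKey = readFileSync('/path/to/private-key.pem', 'utf8');

const now = Math.floor(Date.now() / 1000);
const token = jwt.sign(
  {
    iat: now,            // issued at
    exp: now + 9 * 60,   // GitHub caps the JWT lifetime at 10 minutes
    iss: APP_ID,         // the integration identifies itself by its ID
  },
  privateKey,
  { algorithm: 'RS256' },
);

// Sent as `Authorization: Bearer <token>` when calling the GitHub API,
// e.g. to mint installation access tokens for individual repositories.
console.log(token);

So it is key-pair based rather than an OAuth token grant, which is why nothing shows up under the repository's or organization's "OAuth applications" settings.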

Change deployment reason for Azure WebServices

I have been working on moving our continuous integration (CI) to Azure using Azure's built-in CI. After each build, it shows the last commit message as the deployment reason. I would like to put the current version of my app in there.
Does anybody know if there is a way I can override or extend the information shown in the Deployment details view after a build for Azure WebServices?
I haven't seen anywhere to add this, so I guess UserVoice is the best way to get their attention. I'd certainly vote it up.

Goals dialog not populated

Sitecore 6.6 Update 4
I've got an instance of Sitecore that is having an issue with goals. After creating (and publishing) goals, I try to assign a goal to a specific content item. When I click the 'Analyze\Attributes\Goals' button in the ribbon, the dialog is presented, but no goals are populated in the box.
I've looked at my error logs and don't see any errors. I've watched via Fiddler and don't see anything. I've used Chrome's developer tools and see no errors.
I have another instance of Sitecore running on the same server and it has no issues populating the goals dialog box.
Any ideas?
Thanks!
Likely your goals have not been deployed to your Analytics dataset. Try pointing SQL Management Studio at your Analytics database and issuing the following:
SELECT *
FROM [Sitecore_analytics].[dbo].[PageEventDefinitions]
Make sure that the goals you are registering are actually present there. There should be a GUID in PageEventDefinitionId that matches the Sitecore item ID of your goal.
Okay, thanks to Mark (+1) for pointing me in the right direction for solving this. This has to do with automating analytics deployment on CD servers.
Looking at section 6.2.1 of the ECM Administrator and Developers Guide, you can see that there are two tasks:
Adding the Auto Publish action
Updating the Web.config with a workflow provider for the default definition database
The goals were associated with the "Analytics Workflow", but they weren't going into a draft state after being created, and they weren't being properly deployed when saved.
After ensuring that the steps from the ECM dev guide were followed in the client's CM/CD environments, everything started working again.
Note: this may not be something you would normally see with a default install. I had begun implementing the ECM auto publish by editing the web.config files but had not completed the process of adding the "auto publish" action. Once I ensured that all items were correct, the process worked as expected.