I have looked through the documentation. I need to be able to pull the Minimum Capacity, Maximum Capacity, and Scaling Policies that are defined in my AppStream fleets.
In this module
https://www.powershellgallery.com/packages/AWS.Tools.AppStream/4.1.217
and the documentation here
https://docs.aws.amazon.com/powershell/latest/reference/index.html
you can see there are some functions, but none of them return the fleet configuration information that I need to pull. I've already tried digging into Get-APSFleetList and looking at the ComputeCapacityStatus model, but that only contains information about the current capacity (available sessions, running instances, etc.). It looks like describe-fleet-capacity in the CLI would return some max/min capacity information, but there doesn't seem to be a mapping of that function over to the PowerShell SDK. I haven't found anything about the scaling policies either.
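For reference, here is roughly what I've been looking at. This is a minimal sketch using the Python SDK (boto3) equivalent of Get-APSFleetList, assuming credentials and region are already configured; it only surfaces the current ComputeCapacityStatus numbers, not the minimum/maximum capacity or the scaling policies:

```python
# Minimal sketch (boto3): the Python equivalent of Get-APSFleetList.
# It only exposes current capacity counts, not Min/Max capacity or scaling policies.
import boto3

appstream = boto3.client("appstream")  # assumes credentials/region are configured

for fleet in appstream.describe_fleets()["Fleets"]:
    status = fleet.get("ComputeCapacityStatus", {})
    print(
        fleet["Name"],
        "Desired:", status.get("Desired"),
        "Running:", status.get("Running"),
        "InUse:", status.get("InUse"),
        "Available:", status.get("Available"),
    )
```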
Any suggestions on how to resolve this are much appreciated!
What is the most efficient way to update all assets labels per project?
I can list all project resources and their labels with gcloud asset search-all-resources --project=SomeProject. The command also returns the labels for those assets.
Is there something like gcloud asset update-labels?
I'm unfamiliar with the service, but APIs Explorer (Google's definitive service documentation) shows a single list method.
I suspect that you will need to iterate over all your resource types and update instances of them using whichever update (PATCH) method (if one exists) permits label changes for that resource type.
This seems like a reasonable request, and you may wish to submit a feature request using Google's issue tracker.
gcloud does not seem to have an update-labels command.
You could try the Cloud Resource Manager API. For example, call the REST or Python API: https://cloud.google.com/resource-manager/docs/creating-managing-labels#update-labels
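As a rough sketch of what that could look like with the Google API Python client: the project ID and labels below are placeholders, it assumes the v3 projects.patch method described on that page, and it only covers project-level labels, not every asset type:

```python
# Sketch: update a project's labels via the Cloud Resource Manager API (v3).
# Assumes Application Default Credentials are set up; project ID and labels are placeholders.
from googleapiclient import discovery
import google.auth

credentials, _ = google.auth.default()
crm = discovery.build("cloudresourcemanager", "v3", credentials=credentials)

project_name = "projects/SomeProject"  # placeholder; use your project ID or number as accepted by the API
body = {"labels": {"env": "prod", "team": "platform"}}  # example labels

# updateMask="labels" restricts the patch to the labels field.
crm.projects().patch(name=project_name, updateMask="labels", body=body).execute()
```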
We've started to use Google Cloud Platform's Artifact Registry, where pricing is per GB per month.
But how can I see how much storage is being used and by what?
It also looks like all pushed images are saved forever by default. So for each build, the repository will only grow and grow? How do I (automatically?) delete old builds, keeping only the most recent one (or N, or tagged images) available?
It seems disingenuous to price us per GB but not provide any means to investigate how much storage is being used or to prune it, so I'm hoping we've missed something.
Edited to add: we have CI/CD pipelines creating between 20 and 50 new images a day. Having to delete them manually is not maintainable in the long run.
Edited to add: Essentially I'm looking for sethvargo/gcr-cleaner: Delete untagged image refs in Google Container Registry, as a service but for Artifact Registry instead of the Container Registry, which it will replace. Or the shell-script gist (also GCR-only) that inspired gcr-cleaner.
GCR Cleaner does support purging from Artifact Registry. I verified this myself and updated the documentation to reflect so. I don't plan on changing the tool's name since it's pretty well-recognized, but it will work with GCR and AR.
I hope somebody can come up with a better answer, but I came across [FR] Show Image size information in Artifact Registry GUI [156322291] in Google's Issue Tracker. So this is a known issue.
And gcr-cleaner has this issue: Support for Artifact Registry · Issue #9 · sethvargo/gcr-cleaner - that is closed because it went stale.
It's looking like Artifact Registry is not yet mature enough for prime time, and that I'm better off using Container Registry for the time being. A shame, though.
The Artifact Registry service should introduce support for retention policies in Q4 2022. Until then, some GCP customers will be able to sign up for this feature as early previewers. You can find more information at Configure cleanup policies for repositories.
Another way around this would be to use Cloud Scheduler or Cloud Run jobs to do the cleanup work.
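As a rough illustration of that approach, here is a sketch of a small script such a job could run. The repository path is a placeholder, and the JSON field names (uri, tags, updateTime) are assumptions you should verify against your own gcloud output:

```python
# Sketch of a cleanup job: delete untagged images older than a cutoff.
# Assumptions: gcloud is installed and authenticated; the JSON fields
# "uri", "tags" and "updateTime" match your `gcloud artifacts docker images list` output.
import json
import subprocess
from datetime import datetime, timedelta, timezone

REPO = "europe-west1-docker.pkg.dev/my-project/my-repo/my-image"  # placeholder path
KEEP_DAYS = 14

def run(*args):
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

images = json.loads(run(
    "gcloud", "artifacts", "docker", "images", "list", REPO,
    "--include-tags", "--format=json",
))

cutoff = datetime.now(timezone.utc) - timedelta(days=KEEP_DAYS)

for image in images:
    # Parse only the date/time prefix to avoid fractional-second quirks.
    updated = datetime.fromisoformat(image["updateTime"][:19]).replace(tzinfo=timezone.utc)
    if not image.get("tags") and updated < cutoff:
        # Untagged and older than the cutoff: delete this digest.
        run("gcloud", "artifacts", "docker", "images", "delete", image["uri"], "--quiet")
```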
Recently, GCP Error Reporting has started showing this error: "Deployment limit reached. You are only seeing data for the most recent deployments."
Unfortunately, I cannot find any documentation stating which deployments this message refers to, how high the limit is, or whether there is a way to raise it.
Can anyone shed light on this? Maybe referencing hidden documentation?
This deployment limit is documented here. It mentions a limit of "10,000 deployment-error group pairs, where a deployment is a combination of service and version".
So this isn't about a specific service reaching a deployment limit, but about the sum of all the service-version combinations (App Engine, Cloud Functions, Cloud Run, ...) that you've ever deployed and enabled error reporting on.
This also means that the oldest service-version pairs are most likely not serving anymore.
Inside an AWS instance, I open a browser and hit http://169.254.169.254. I get what looks like a listing of files and folders, including entries that look like dates, and there is also a "latest" entry. I want to know: is there any specific meaning to these?
From Instance Metadata and User Data - Amazon Elastic Compute Cloud:
The earlier versions are available to you in case you have scripts that rely on the structure and information present in a previous version.
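For example, you can query the same endpoint from a script instead of the browser. A minimal sketch using IMDSv2 (the token-based flow documented for the metadata service; on instances that still allow IMDSv1 the token step can be skipped):

```python
# Sketch: query the EC2 Instance Metadata Service directly.
import urllib.request

BASE = "http://169.254.169.254"

# Get a short-lived IMDSv2 session token.
token_req = urllib.request.Request(
    f"{BASE}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
)
token = urllib.request.urlopen(token_req).read().decode()

def imds(path):
    req = urllib.request.Request(f"{BASE}{path}",
                                 headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req).read().decode()

# The root listing is the set of dated API versions plus "latest".
print(imds("/"))
# "latest" (or any dated version) exposes the same tree of metadata categories.
print(imds("/latest/meta-data/instance-id"))
```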
I'm new to AWS. With my free tier account I'm trying to build my Node.js project with AWS CodeBuild, but I get this error:
Build failed to start The build failed to start. The following error occured: Cannot have more than 0 builds in queue for the account
I followed the simple AWS tutorial, leaving all the default settings so that AWS created all the services, images, etc. for me.
I also stored the source code in an AWS CodeCommit repository.
Could anybody help me?
In my case, there was a security vulnerability in my account, and AWS automatically raised a support ticket and suspended all resources that were linked to it. I had to fix it, and then, after a chat with AWS support, they resumed my service.
I've seen a lot of answers around the web suggesting to call support, which is a great idea, but I was actually able to get around this on my own.
As the root user I went in and put in a current credit card; the one that was on file had expired. I then deleted my CodeBuild project and created a new one. Now my builds work! It makes sense that AWS just needed a valid payment method before it allowed me to use premium services.
My solution may not work for you, but I sure hope it does!
My error was "Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1" when I tried to increase the concurrent build limit under the checkbox "Restrict number of concurrent builds this project can start" in the CodeBuild project configuration. I resolved it by writing to support to increase the limit. They raised it to 20 and it now works as expected. They did this even though I'm on the Basic support plan on AWS, if anyone's wondering.
My solution was to add a new service role name and set the concurrent build limit to 1. This worked.
I think your issue is resolved by now. Anyway, I faced the same issue. In my case I had a CodeBuild project connected to a GitHub repository, and I had hard-coded an AWS Access Key and Secret in the buildspec.yml file. AWS identified this as unauthorized credential exposure, so they added security restrictions to the resources and opened a support case. In such a case, look for the emails from AWS in which they explain the reason for this behavior and the steps to get it corrected.
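As an illustration of the safer pattern, anything you run from buildspec.yml can rely on the project's service role instead of embedded keys. A minimal sketch, assuming boto3 is available in the build image:

```python
# Sketch: inside a CodeBuild build you normally don't need access keys at all.
# The project's service role is picked up automatically by the default credential chain.
import boto3

sts = boto3.client("sts")                # no hard-coded keys anywhere
print(sts.get_caller_identity()["Arn"])  # shows the identity the build is running as
```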