The documentation says:
$ cf logs APP_NAME --recent
displays all the lines in the Loggregator buffer.
How big is this buffer? Can I change it myself?
Disclaimer: I don't work for Swisscom so I have no idea how their platform is configured. That said, I believe I can still answer your questions.
How big is this buffer?
I believe the value you're looking for is doppler.maxRetainedLogMessages, which defaults to 100.
https://github.com/cloudfoundry/loggregator/wiki/Loggregator-Component-Properties
That said, I would make the case that the number of lines buffered shouldn't matter. The recent logs feature is purely a convenience and it's not meant for log storage. If you find yourself needing to know exactly how much it will buffer, you're probably using it the wrong way and should consider a different option.
A couple of common cases where the --recent buffer might not be large enough, and their solutions:
Scenario 1: You're troubleshooting (perhaps a failed app start or stage) and cf logs --recent doesn't show you enough information.
Run cf logs in a second terminal and repeat your action. This streams the entire log output as it happens, so nothing is lost to the buffer, and you can save it locally by redirecting to a file with cf logs <app> > some-file.log.
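For example (my-app and the file name are placeholders for whatever you actually use):

$ cf logs my-app > my-app.log     # terminal 1: stream everything to a local file
$ cf restart my-app               # terminal 2: repeat the failing action, e.g. a restart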
Scenario 2: You're running a production app.
You will want reliable log storage for a sustained and predictable amount of time. In this case, you'll want to set up a log drain so that your logs are sent somewhere outside of the platform for long-term storage.
https://docs.cloudfoundry.org/devguide/services/log-management.html#user-provided
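As a rough sketch with the cf CLI (the drain name, app name, and syslog endpoint below are placeholders; your log provider will give you the real URL):

$ cf create-user-provided-service my-drain -l syslog://logs.example.com:514
$ cf bind-service my-app my-drain
$ cf restage my-app    # restage (or restart) so the new binding takes effect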
If you've got another scenario, feel free to add a comment. There is probably a different/better way to handle it.
Can I change it myself?
The setting I mentioned above is configured by your platform operator.
Hope that helps!
Thanks to Daniel Mikusa for the insights and helpful explanation. I can confirm that Swisscom Application Cloud uses the default value.
Quote from our deployment manifest:
loggregator:
  (...)
  maxRetainedLogMessages: 100
  outgoing_dropsonde_port: 8081
Related
I've got a job that's failing, and I think the problem is that I've misunderstood the layout of the directory structure for the running job.
How can I see what's actually on disk so I can diagnose what's happening?
Can I do it from the GoCD UI, or am I going to have to connect to the agent box and look at things that way?
In Jenkins, I'd just use the "workspace" link to eyeball the layout.
Currently, I'm adding directory-listing commands to the jobs themselves, then inspecting the output in the logs.
A comprehensive way to inspect the structure is to define an "artifact" of *; this declares the entire pipeline's working directory as an artifact, which you can then inspect in the UI (see the sketch below).
This is probably a very bad plan, though, because it uses up a lot of disk space and creating the artifact takes a long time, which slows down your pipeline considerably.
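If you do want to try it and you're using GoCD's YAML config plugin, the job-level declaration might look roughly like this (pipeline, stage, and job names are placeholders; check the plugin docs for the exact schema your version expects):

pipelines:
  my-pipeline:
    stages:
      - my-stage:
          jobs:
            my-job:
              tasks:
                - exec:
                    command: make
              artifacts:
                - build:
                    source: "**/*"              # capture the whole working directory
                    destination: workspace-snapshot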
We've started to use Google Cloud Platform's Artifact Registry, where pricing is per GB per month.
But how can I see how much storage is being used and by what?
It also looks like all pushed images are kept forever by default, so with each build the repository will only grow and grow. How do I (automatically) delete old builds, keeping only the most recent one (or the most recent N, or only tagged images)?
It seems disingenuous to price us per GB but not provide any means to investigate how much storage is being used or to prune it, so I'm hoping we've missed something.
Edited to add: We have CI/CD pipelines creating between 20 and 50 new images a day. Having to delete them manually is not maintainable in the long run.
Edited to add: Essentially I'm looking for sethvargo/gcr-cleaner: Delete untagged image refs in Google Container Registry, as a service but for Artifact Registry instead of the Container Registry, which it will replace. Or the shell-script gist (also GCR-only) that inspired gcr-cleaner.
GCR Cleaner does support purging from Artifact Registry. I verified this myself and updated the documentation to reflect that. I don't plan on changing the tool's name, since it's pretty well recognized, but it works with both GCR and AR.
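For reference, a rough invocation of the CLI flavor of the tool might look like the following; treat the image path and flags as assumptions and check the gcr-cleaner README for the current ones, since they change between releases:

$ docker run -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli \
    -repo europe-docker.pkg.dev/MY_PROJECT/MY_REPO/MY_IMAGE \
    -grace 720h    # keep anything pushed in the last 30 days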
I hope somebody can come up with a better answer, but I came across [FR] Show Image size information in Artifact Registry GUI [156322291] in Google's Issue Tracker, so this is a known issue.
And gcr-cleaner has this issue: Support for Artifact Registry · Issue #9 · sethvargo/gcr-cleaner, which was closed because it went stale.
It's looking like Artifact Registry is not yet mature enough for prime time, and that I'm better off using Container Registry for the time being. A shame, though.
The Artifact Registry service should introduce support for retention policies in Q4'22. Until then, some GCP customers will be able to sign up for this feature as early previewers. You can find more information at Configure cleanup policies for repositories.
Another way would be to use Cloud Scheduler or Cloud Run jobs to do the cleanup yourself.
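A minimal sketch of what such a job could run, assuming the standard gcloud commands, with REGION/PROJECT/REPO/IMAGE/DIGEST as placeholders (deciding which digests to delete, e.g. untagged or older than N days, is up to your script):

$ gcloud artifacts docker images list \
    REGION-docker.pkg.dev/PROJECT/REPO --include-tags
$ gcloud artifacts docker images delete \
    REGION-docker.pkg.dev/PROJECT/REPO/IMAGE@sha256:DIGEST --delete-tags --quiet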
I'm a newbie on AWS. With my free tier account I'm trying to build my Node.js project with AWS CodeBuild, but I get this error:
Build failed to start: The build failed to start. The following error occurred: Cannot have more than 0 builds in queue for the account
I followed the simple AWS tutorial, leaving all default settings and letting AWS create the service, image, etc. for me.
I also stored the source code in an AWS CodeCommit repository.
Could anybody help me?
In my case, there was a security vulnerability in my account, and AWS automatically raised a support ticket and suspended all resources that were linked to it. I had to fix it, and then AWS support resumed my service over chat.
I've seen a lot of answers around the web suggesting to call support, which is a great idea, but I was actually able to get around this on my own.
As the root user, I went in and put in a current credit card; the one that was there had expired. I then deleted my CodeBuild project and created a new one. Now my builds work! It makes sense that AWS just needed a valid payment method before it allowed me to use premium services.
My solution may not work for you, but I sure hope it does!
My error was Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1, which appeared when I tried to increase the concurrent build limit under the checkbox "Restrict number of concurrent builds this project can start" in the CodeBuild project configuration. I resolved it by writing to support to increase the limit. They raised it to 20 and it now works as expected. They increased it even though I'm on the Basic plan on AWS, if anyone's wondering.
My solution was to add a new service role name and set the concurrent build limit to 1. This worked.
I think your issue is resolved by now; anyway, I faced the same problem. In my case I had a CodeBuild project connecting to a GitHub repository, and I had hard-coded an AWS access key and secret in the buildspec.yml file. AWS identified this as an unauthorized login, so they added security restrictions to the resources and opened a support case. In such a case, look for the emails from AWS in which they explain the reason for this behavior and the steps to get it corrected.
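If the build really does need credentials, a safer pattern is to keep them out of the repository and pull them from SSM Parameter Store at build time. A rough buildspec.yml sketch, where the parameter name and commands are placeholders:

version: 0.2
env:
  parameter-store:
    MY_API_TOKEN: /my-app/api-token   # secret lives in SSM, not in the repo
phases:
  build:
    commands:
      - npm ci
      - npm run build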
I got a deletion notice for my Google Cloud Shell home directory. Does that mean that my data will also be deleted?
This is documented here:
If you do not access Cloud Shell for 120 days, we will delete your home disk. You will receive an email notification before we do so and simply starting a session will prevent its removal.
This only applies to the home directory of your Cloud Shell instance (you may want to copy anything you need to keep to Cloud Storage anyway; a sketch is below). Any other Google services you use will be unaffected.
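A minimal backup sketch with gsutil (the bucket name and directory are placeholders):

$ gsutil mb gs://my-cloud-shell-backup            # one-time: create the bucket
$ gsutil -m cp -r ~/my-work gs://my-cloud-shell-backup/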
Considering I'm paying for their services, this is extremely annoying. I lost a lot of important documents and feel like taking my business somewhere else.
I realize CloudFoundry is still in beta, and I'll admit to being moderately ignorant when it comes to this level of cloud computing, but here's my question: I create an app, everything works, I upload it to CF. Now what? I want to launch my app in the wild, and I don't want users to see a CF URL.
Here are some pieces I do know, but I'm not getting the entire picture.
I know I can map a URL to an app, so presumably that's just some DNS routing happening. But other than that, is it safe at this point to bet the farm on CF and, for example, launch a startup using it? At what point am I going to realize I need to move to something like Rackspace (or whatever), and is it possible to take my CF VM and just move it?
Overall, I just don't fully understand what we're getting with CF other than a quick way to deploy a demo application.
At this point, if you need a custom domain, you need to configure an external proxy and from there route the traffic to your CF.com URL. This is a good example.
But the advantage of CloudFoundry is that it is entirely open source. You can always move your app to a compatible service provider, for example AppFog, with not much more than a simple push (sketched below).
You could even deploy your own CF instance/server on Rackspace.
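With today's cf CLI, the move is roughly a matter of re-targeting and pushing again (the API endpoint and app name are placeholders; the vmc client of that era had equivalent target/login/push commands):

$ cf api https://api.my-cf.example.com    # point the CLI at the new provider or your own CF
$ cf login
$ cf push my-app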
It appears that there is still no support for external domain mapping on Cloud Foundry. Here is another example that uses a Python reverse proxy running on Google AppEngine. This works well. http://programming.mvergel.com/2011/11/cloud-foundry-and-custom-domain.html
Right now, CloudFoundry.com doesn't offer domain mapping. You might expect that it will do so in a future fully-supported paid version, but as you note, right now it is still in beta.
For what it's worth, I am running a startup B2B product on CloudFoundry.
I have deployed the open source version on our own infrastructure, though; I keep a close watch on changes and even review other people's commits.
That's a significant investment in terms of learning and time, but in my opinion it's worth it.