AWS ECR enhanced scanning - Continuous scanning option: when is the registry scanned?

Really basic question, but I can't seem to find it in the documentation. When continuous scanning is turned on, at what frequency and at what time will the registry be scanned? I turned it on a couple of hours ago but it hasn't scanned yet.

Continuous scanning is executed when AWS receives new vulnerability data. I couldn't pinpoint when the initial scan happened; in my case it took approximately half a day.

I had the same issue and came across this question. I'll post the relevant quote from the docs:
https://docs.aws.amazon.com/AmazonECR/latest/userguide//image-scanning.html?icmpid=docs_ecr_hp-registry-private
As new vulnerabilities appear, the scan results are updated and Amazon Inspector emits an event to EventBridge to notify you.
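Those EventBridge notifications can be caught with a rule on the Inspector event source. A minimal sketch with the AWS CLI, assuming the standard aws.inspector2 source and "Inspector2 Finding" detail type (the rule name is just an example):
aws events put-rule --name ecr-scan-findings \
  --event-pattern '{"source":["aws.inspector2"],"detail-type":["Inspector2 Finding"]}'
You would then attach a target (an SNS topic, a Lambda function, etc.) with aws events put-targets so the findings actually reach you.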

Did you turn on the "Scan on push" option? Go to the Edit option and enable it; the repository will then be scanned automatically after each push.
If you go for a manual scan, it's generally triggered immediately.
Please give it a try.
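For what it's worth, the scanning configuration can also be set from the CLI rather than the console. A rough sketch that enables enhanced scanning with continuous scan on all repositories (the wildcard filter is just an example):
aws ecr put-registry-scanning-configuration \
  --scan-type ENHANCED \
  --rules '[{"scanFrequency":"CONTINUOUS_SCAN","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'
Running aws ecr get-registry-scanning-configuration afterwards shows what is currently active, which is handy while you are waiting for the first scan to happen.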

Related

Can I configure Google DataFlow to keep nodes up when I drain a pipeline

I am deploying a pipeline to Google Cloud DataFlow using Apache Beam. When I want to deploy a change to the pipeline, I drain the running pipeline and redeploy it. I would like to make this faster. It appears from the logs that on each deploy DataFlow builds up new worker nodes from scratch: I see Linux boot messages going by.
Is it possible to drain the pipeline without tearing down the worker nodes so the next deployment can reuse them?
rewriting Inigo's answer here:
Answering the original question: no, there's no way to do that. Updating should be the way to go. I was not aware it was marked as experimental (probably we should change that), but the update approach has not changed in the last 3 years I have been using Dataflow. About the special cases where update doesn't work: supposing your feature existed, the workers would still need the new code, so there's not really much to save, and update should work in most other cases.
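For anyone looking for the update path mentioned above: you update a running job by re-launching the pipeline with the Dataflow runner's --update option and the same job name. A rough sketch for a Beam Python pipeline (project, region and names are placeholders):
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --region us-central1 \
  --job_name my-pipeline \
  --update
If transform names changed between versions, you may also need --transform_name_mapping for the update to be accepted.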

Why does AWS Glue say "Max concurrent runs exceeded", when there are no jobs running?

I have an AWS Glue job, with max concurrent runs set to 1. The job is currently not running. But when I try to run it, I keep getting the error: "Max concurrent runs exceeded".
Deleting and re-creating the job does not help. Also, other jobs in the same account run fine, so it cannot be a problem with account-wide service quotas.
Why am I getting this error?
I raised this issue with AWS support, and they confirmed that it is a known bug:
I would like to inform you that this is a known bug, where an internal distributed counter that keeps track of job concurrency goes into a stale state due to an edge case, causing this error. Our internal Service team has to manually reset the counter to fix this issue. Service team has already added the bug fix in their product roadmap and will be working on it. Unfortunately I may not be able to comment on the ETA on the deployment, as we don’t have any visibility on product teams road map and fix release timeline.
The suggested workarounds are:
Increase the max concurrency to 2 or higher
Re-create the job with a different name
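Before trying those, it's worth confirming from the CLI that nothing is actually running and checking the current concurrency setting, so you know you're hitting the stale-counter bug rather than a genuinely running job (the job name is a placeholder):
aws glue get-job --job-name my-job --query 'Job.ExecutionProperty'
aws glue get-job-runs --job-name my-job --query "JobRuns[?JobRunState=='RUNNING']"
If the second command returns an empty list and you still get the error, it's the counter issue described above.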
The Glue container takes some time to start, and likewise, when your job ends, the container takes some time to shut down. If you try to execute a new job in between, and the default concurrency is 1, you will get this error.
How to resolve:
Go to your Glue job --> under the "Job details" tab you can find "Maximum concurrency"; the default value is 1, so change it to 3 or more as per your need.
I tried changing "Maximum concurrency" to 2 and then ran it.
It worked, but running it again caused the same issue. However, I looked into my S3 bucket and it had dumped the data, so it did run once.
I'm still looking for a stable solution, but this may work.
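Since the error often clears once the previous run's container has fully shut down, a crude retry loop can paper over it in the meantime (the job name is a placeholder):
until aws glue start-job-run --job-name my-job; do
  echo "Run rejected, waiting before retrying..."
  sleep 60
done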

Setting build priority in yaml or UI

Is there a way to set up a build's priority in a yaml based pipeline? There seem to be references to build priority in the Azure DevOps API, but nothing on how to do this via yaml. I thought there might be some docs in the Triggers section, but no.
We need this because we have some fast-building NuGet packages, but these get starved by slow-build pipelines, making turnaround time for packages painful.
The closest thing I could come up with to work around this is using agent demands in the yaml
demands:
- Agent.ComputerName -equals XYZ
to separate build pipelines onto specific agents, but this is a bit of a hack and doesn't use agents efficiently.
A way to set this in UI would be acceptable, but I couldn't seem to find anything.
Recently Azure DevOps introduced the ability to manually specify that a build/release runs next.
This manifests as a "Run next" button.
So while you can't say "this pipeline always takes priority" yet, you can manually force a specific run to the front of the queue.
If you need a specific pipeline to always take priority, then you likely want to set up a separate agent pool just for those pipelines, or use demands as Leo Liu mentioned.
Setting build priority in yaml or UI
I'm afraid this feature is not yet supported in Azure DevOps at this moment.
There is a popular user voice request about it; you can upvote it and check the feedback on that ticket.
Currently, as a workaround, just like you did, set the demands in the build definitions to force building with specific agents.
Hope this helps.

cloud-builds pub/sub topic appears to be unlisted or inaccessible

I'm attempting to create an integration between Bitbucket Repo and Google Cloud Build to automatically build and test upon pushes to certain branches and report status back (for that lovely green tick mark). I've got the first part working, but the second part (reporting back) has thrown up a bit of a stumbling block.
Per https://cloud.google.com/cloud-build/docs/send-build-notifications, Cloud Build is supposed to automatically publish update messages to a Pub/Sub topic entitled "cloud-builds". However, trying to find it (both through the web interface and via the gcloud command line tool) has turned up nothing. Copious amounts of web searching have turned up https://github.com/GoogleCloudPlatform/google-cloud-visualstudio/issues/556, which seems to suggest that the topic referenced in that doc is now being filtered out of results; however, that issue seems to be specific to the Visual Studio tools and not GCP as a whole. Moreover, https://cloud.google.com/cloud-build/docs/configure-third-party-notifications suggests that it's still accessible, but perhaps only to Cloud Functions? And maybe only manually via the command line, since the web interface for Cloud Functions also does not display this phantom "cloud-builds" topic?
Any guidance as to where I can go from here? Near as I can tell, the two possibilities are that something is utterly borked in my GCP project and the Pub/Sub topic is either not visible just for me or has somehow been deleted, or I'm right and this topic just isn't accessible anymore.
I was stuck with the same issue; after a while I created the cloud-builds topic manually and created a cloud function subscribed to that topic.
Build details are pushed to the topic as expected after that, and my cloud function gets triggered by new events.
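In case it helps someone else, this is roughly what that looks like with the gcloud CLI; the function name, runtime and entry point are just examples:
gcloud pubsub topics create cloud-builds
gcloud functions deploy build-notifier \
  --runtime python39 \
  --trigger-topic cloud-builds \
  --entry-point notify
The function then receives a Pub/Sub message containing the build resource each time a build changes state.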
You can check the existence of the cloud-builds topic another way, outside the UI: download the gcloud command line tool and, after running gcloud init, run gcloud pubsub topics list to list all topics for the configured project. If the topic projects/{your project}/topics/cloud-builds is not listed, I would suggest filing a bug with the Cloud Build team here.
Creating the cloud-builds topic manually won't work, since it's a special topic that Google manages.
In this case, you have to go to the APIs & Services console and disable the Cloud Build API, then enable it again; the cloud-builds topic will be created for you (see "Enable and disable Cloud Build API").
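From the command line that's roughly the following, against whichever project gcloud is configured for:
gcloud services disable cloudbuild.googleapis.com
gcloud services enable cloudbuild.googleapis.com
Note that disabling may require --force if other resources depend on the API, so do this with care on a project that already has triggers set up.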

How big is the Loggregator buffer on Swisscom Application Cloud

The documentation says:
$ cf logs APP_NAME --recent
displays all the lines in the Loggregator buffer.
How big is this buffer? Can I change it myself?
Disclaimer: I don't work for Swisscom so I have no idea how their platform is configured. That said, I believe I can still answer your questions.
How big is this buffer?
I believe the value you're looking for is doppler.maxRetainedLogMessages which defaults to 100.
https://github.com/cloudfoundry/loggregator/wiki/Loggregator-Component-Properties
That said, I would make the case that the number of lines buffered shouldn't matter. The recent logs feature is purely a convenience and it's not meant for log storage. If you need to know specifically how much it will buffer, you're probably using it wrong and should consider a different option.
A couple of common cases where the --recent buffer might not be large enough, and solutions for them:
Scenario 1: You're troubleshooting (perhaps a failed app start or stage) and cf logs --recent doesn't show you enough information.
Run cf logs in a second terminal and repeat your action. This will show you the entire log output saved locally on your machine. You can also redirect to a file with cf logs <app> > some-file.log.
Scenario 2: You're running a production app.
You will want reliable log storage for a sustained and predictable amount of time. In this case, you'll want to set up a log drain so that your logs are sent somewhere outside of the platform for long-term storage.
https://docs.cloudfoundry.org/devguide/services/log-management.html#user-provided
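As a rough sketch of that, with a placeholder syslog endpoint, a user-provided log drain looks like this:
cf create-user-provided-service my-log-drain -l syslog-tls://logs.example.com:6514
cf bind-service my-app my-log-drain
cf restage my-app
After the restage, application logs are streamed to the drain endpoint in addition to the Loggregator buffer.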
If you've got another scenario, feel free to add a comment. There is probably a different/better way to handle it.
Can I change it myself?
The setting I mentioned above is configured by your platform operator.
Hope that helps!
Thanks to Daniel Mikusa for the insights and helpful explanation. I can confirm that Swisscom Application Cloud uses the default value.
Quote from our deployment manifest:
loggregator:
  (...)
  maxRetainedLogMessages: 100
  outgoing_dropsonde_port: 8081