I need to enable Veracode for my application, and I wanted to know: can we run Veracode on every pull/merge request, scanning only the modified files? Or is scanning the entire codebase once a day sufficient?
There are 20 developers on my team, and we sometimes get multiple merge requests at once, so is there any issue with submitting multiple scans to Veracode at the same time?
Thanks
Related
Really basic question, but I can't seem to find it in the documentation. When continuous scanning is turned on, at what frequency and at what time will the registry be scanned? I turned it on a couple of hours ago but it hasn't scanned yet.
Continuous scanning is executed when new vulnerability data is received by AWS. I couldn't pinpoint exactly when the initial scan happened; it took approximately half a day.
I had the same issue and came across this question. I'll post the relevant quote from the docs:
https://docs.aws.amazon.com/AmazonECR/latest/userguide//image-scanning.html?icmpid=docs_ecr_hp-registry-private
As new vulnerabilities appear, the scan results are updated and Amazon Inspector emits an event to EventBridge to notify you.
Did you turn on the "Scan on push" option? Go to the Edit option and enable it; it will automatically scan your repo after each push.
If you go for a manual scan, it generally triggers immediately.
Please give it a try.
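For basic scanning, both actions can also be done from code. Here is a minimal boto3 sketch; the repository name and tag are placeholders, and note that enhanced (Inspector) scanning is configured at the registry level instead:

```python
# Minimal sketch: enable "scan on push" and trigger a manual scan.
# "my-repo" and "latest" are placeholders for your own repository/tag.
import boto3

ecr = boto3.client("ecr")

# Equivalent of ticking "Scan on push" under the repository's Edit option.
ecr.put_image_scanning_configuration(
    repositoryName="my-repo",
    imageScanningConfiguration={"scanOnPush": True},
)

# Manual scan of one image; this generally starts right away.
ecr.start_image_scan(repositoryName="my-repo", imageId={"imageTag": "latest"})

# Check the result.
findings = ecr.describe_image_scan_findings(
    repositoryName="my-repo", imageId={"imageTag": "latest"}
)
print(findings["imageScanStatus"]["status"])
```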
I have many (around 30) Shiny apps deployed on Shiny Server (open source version). From time to time, when I (or one of my colleagues) update one of the packages, some of the apps stop working. I wonder if, instead of checking all the apps manually every time we change anything, there is some way to perform the checks automatically? An ideal solution would be a script run daily (hourly?) that checks whether each app can be loaded and, if not, sends an email. I am not talking about small errors here, more about an app not loading at all due to, for example, some function missing. Any suggestions on how this can be achieved?
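One rough approach is a script run from cron that requests each app's URL and emails when one fails to load. A sketch in Python; the URLs, SMTP details, and the error-page text it looks for are assumptions to adapt to your setup:

```python
# Sketch: poll each Shiny app's URL and email a list of broken apps.
# Run this from cron (daily/hourly); all hosts/addresses are placeholders.
import smtplib
from email.message import EmailMessage
import requests

APPS = [
    "http://myserver:3838/app1/",
    "http://myserver:3838/app2/",
]

def loads_ok(url):
    try:
        r = requests.get(url, timeout=30)
        # Shiny Server serves an error page when an app fails to start;
        # a non-200 status or the error text is treated as "not loading".
        return r.status_code == 200 and "An error has occurred" not in r.text
    except requests.RequestException:
        return False

broken = [url for url in APPS if not loads_ok(url)]
if broken:
    msg = EmailMessage()
    msg["Subject"] = "%d Shiny app(s) not loading" % len(broken)
    msg["From"] = "monitor@example.com"
    msg["To"] = "team@example.com"
    msg.set_content("\n".join(broken))
    with smtplib.SMTP("smtp.example.com") as s:
        s.send_message(msg)
```

Note that a 200 response only shows that some HTML was served; checking the body for Shiny Server's error text, as above, is what catches the "app won't load at all" cases.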
Are there any best practices on virus scanning all files being uploaded to the Sitecore media library (and ultimately stored in Sitecore's DB)?
I searched all over the web, but there is too much noise caused by the word "virus", since many people seem to have performance issues on servers that have antivirus software installed.
I don't know if it is an established best practice, but I would probably add a processor to the uiUpload pipeline that uses an API or command-line process from a commercial antivirus product. Other than the fact that it is in a pipeline processor, it shouldn't really be much different from how you would do it in any other ASP.NET application. Performance will definitely be a concern, but you could create a dialog with a pseudo progress bar to give some feedback to the user.
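The processor wiring itself is Sitecore-specific C#, but the shell-out to a scanner is the simple part. As an illustration only (Python here just for brevity, and assuming ClamAV's clamscan is installed), the core check might look like:

```python
# Illustrative sketch: invoke a command-line scanner and reject the
# upload on a positive result. clamscan exits 0 for clean, 1 for
# infected, and 2 on error (ClamAV's documented convention).
import subprocess

def is_clean(path):
    result = subprocess.run(
        ["clamscan", "--no-summary", path],
        capture_output=True, text=True,
    )
    if result.returncode == 2:
        raise RuntimeError("scanner error: " + result.stderr)
    return result.returncode == 0

if not is_clean("/tmp/upload.bin"):  # path would come from the pipeline args
    print("Upload rejected: file appears to be infected")
```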
Take a look at this post by Mike Reynolds. It may help you out:
http://sitecorejunkie.com/2013/11/09/perform-a-virus-scan-on-files-uploaded-into-sitecore/
I am not aware of any published best practices, but if you are able to add a step to the upload process, you might want to take a look at Metascan, which provides API-level integration with multiple antivirus engines. Using this, you could build a workflow that scans uploaded files before they hit your Sitecore media library, with rules based on the results from the antivirus engines in your Metascan deployment. There's also a hosted version at metascan-online(dot)com.
Disclaimer: I am an employee of OPSWAT, which produces Metascan, but it appears to be a potential solution to your issue.
In one of our recent projects, we were faced with a requirement to scan incoming files for viruses. The problem in this project was that the files were made publicly available on the website after being uploaded.
The way we solved it was by integrating https://www.virustotal.com/. It's a free online virus scanner with a public API, and you can send files over SSL.
We implemented the solution by adding newly uploaded files to a Sitecore workflow. The workflow handles the scanning of files and moves each file to the final stage of the workflow if it isn't infected. If a file is infected, it is deleted.
A scheduler runs every 5 minutes to check for new incoming files in the workflow.
This also means that files aren't available straight away, since the scheduler has to check each file first, but you should be able to trigger the scan immediately after upload by adding your custom code to the upload pipeline.
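For reference, the VirusTotal public API flow itself is language-agnostic HTTP. A minimal sketch (using the v2 endpoints; the API key and file path are placeholders):

```python
# Sketch of the VirusTotal v2 public API flow: submit a file, then fetch
# the report later (e.g., from the 5-minute scheduler).
import requests

API_KEY = "your-virustotal-api-key"  # placeholder

def submit(path):
    with open(path, "rb") as f:
        r = requests.post(
            "https://www.virustotal.com/vtapi/v2/file/scan",
            params={"apikey": API_KEY},
            files={"file": f},
        )
    return r.json()["resource"]  # handle used to look up the report

def is_clean(resource):
    r = requests.get(
        "https://www.virustotal.com/vtapi/v2/file/report",
        params={"apikey": API_KEY, "resource": resource},
    )
    data = r.json()
    if data["response_code"] != 1:
        return None  # not analyzed yet; try again on the next run
    return data["positives"] == 0  # no engine flagged the file
```

The public API is rate-limited, which is another reason the scheduled, workflow-based approach fits better than scanning inline during the upload request.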
Does anyone know how to increase the request timeout in Drupal? I'm trying to download large files via the web services module, but my token keeps expiring because the request takes so long. I know there is a setting in Drupal for this, but I just can't find it.
UPDATE
So I found out how to increase the request time (/admin/build/services/settings), but that didn't work. I'm still getting "The request timed out" on files around 10 MB. Does anyone have any ideas? Also, I'm using ASIHTTPRequest and drupal-ios-sdk and downloading the files to an iPad.
It turns out the default timeOutSeconds property on the ASIHTTPRequest was too small (10 seconds). When I increased it, my large files downloaded fine.
I have a job that I want to run every time a commit is made to a repository. I want to avoid pulling this code down; I only want the notification build trigger. So, is there either a way to not pull down certain repositories in your SCM upon a build, or a way to poll things that aren't in the SCM for a build?
You could use a post-commit hook to trigger your Hudson job.
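If the job has "Trigger builds remotely" enabled, the hook only needs to hit the job's build URL. A sketch of an SVN post-commit hook (hostname, job name, and token are placeholders):

```python
#!/usr/bin/env python
# Sketch of hooks/post-commit: ping Hudson's remote build trigger URL.
# Requires "Trigger builds remotely" to be enabled on the job.
import sys
import urllib.request

repos, rev = sys.argv[1], sys.argv[2]  # arguments svn passes to the hook

url = "http://hudson.example.com/job/my-job/build?token=MY_TOKEN"
urllib.request.urlopen(url)
print("Triggered build for %s r%s" % (repos, rev))
```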
Since you want to avoid changing SVN, you have to write a job that gets executed every so often (maybe every 5 minutes). This job runs an svn command, using a Windows batch or shell script task, to get the current revision for the branch in question. You can set the status of the job to unstable if there is a change. Don't use failure, because then you can't distinguish between a real failure and a repository change. I think there is a plugin that sets the job status depending on the contents of your output.
You can then use the email-ext plugin to send an email every time the revision changes. You can get the revision number from the last (or better, the last successful or unstable) job. You can archive a file containing the revision number on the job, or you can set the job's description to the revision using the description setter plugin. Have a look at Hudson's remote API for ideas on how to get the information from the previous job. A sketch of the script step follows below.
Since you run your job very often during the day, don't forget to delete old job runs. I would keep at least two days' worth of history, just in case your SVN server is down for 24 hours.
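Here is what that script step could look like; the repository URL and state-file name are placeholders, and the exit code is the signal a status-setting plugin would react to:

```python
# Sketch of the polling job's script step: compare the branch's current
# revision to the one recorded by the previous run.
import subprocess

REPO = "http://svn.example.com/repo/branches/mybranch"  # placeholder
STATE = "last_revision.txt"  # archive this file as a build artifact

# "svn info <url>" prints a "Last Changed Rev:" line we can parse.
out = subprocess.run(
    ["svn", "info", REPO],
    capture_output=True, text=True, check=True,
).stdout
current = next(
    line.split(":", 1)[1].strip()
    for line in out.splitlines()
    if line.startswith("Last Changed Rev")
)

try:
    with open(STATE) as f:
        previous = f.read().strip()
except FileNotFoundError:
    previous = None

with open(STATE, "w") as f:
    f.write(current)

# Exit non-zero on a change so a status-setting plugin (or email-ext)
# can react; on its own, a non-zero exit marks the build as failed,
# which is exactly why the answer suggests mapping it to "unstable".
raise SystemExit(1 if current != previous else 0)
```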