We're trying to adopt Trivy as the scanner solution in our pipelines, and the table visualization is awesome.
However, the end of the output includes information that is not so interesting to us, such as secrets and SSH keys (see image).
Is there a way to suppress that output, so the end result of the scan is just the tables separating vulnerable language packages and OS packages?
Thanks in advance. =]
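One possible approach, sketched below as a Python step for a pipeline: restrict the scan to vulnerabilities only, which drops the secret/SSH-key findings from the report. The flag names depend on your Trivy version and the image name is a placeholder, so treat this as a sketch rather than a definitive answer.

```python
import subprocess

# Hedged sketch: limit Trivy to the vulnerability scanner so the secret/SSH-key
# findings are not printed. Newer Trivy releases use --scanners; older ones use
# --security-checks vuln instead. The image name is a placeholder.
subprocess.run(
    [
        "trivy", "image",
        "--scanners", "vuln",         # skip the secret scanner entirely
        "--vuln-type", "os,library",  # keep both the OS and language package tables
        "myregistry/myimage:latest",
    ],
    check=True,
)
```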
I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project I've been working on. Right now I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to get food-detection capabilities. Unfortunately, the dataset comes with only the raw image files organized in sub-folders, plus a .h5 file (I'm not sure if I can feed that file type directly into SageMaker?). From what I've gathered, neither of these is a suitable way to feed the dataset into SageMaker, and I was wondering if anyone could point me in the right direction on how to prepare the dataset properly for SageMaker, i.e., convert it to a .rec file or something else. Apologies if the scope of this question is very broad; I am still a beginner at all of this, I'm simply stuck and don't know how to proceed, so any help you might be able to provide would be fantastic. Thanks!
If you want to use the built-in algorithm for image classification, you can use either the Image format or the RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have images organized in folders.
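As an illustration, here is a minimal Python sketch that builds a .lst-style manifest (index, label, relative path, tab-separated) from a tree where each sub-folder is one class. The paths and output file name are placeholders; check the exact manifest format the algorithm expects against the docs linked above.

```python
import os

# Hedged sketch: build a .lst-style manifest (index<TAB>label<TAB>relative_path)
# from a tree where each sub-folder is one class, e.g. food-101/images/<class>/.
# "food-101/images" and "train.lst" are placeholder paths.
root = "food-101/images"
classes = sorted(d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d)))

with open("train.lst", "w") as lst:
    index = 0
    for label, cls in enumerate(classes):
        for fname in sorted(os.listdir(os.path.join(root, cls))):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                lst.write(f"{index}\t{label}\t{cls}/{fname}\n")
                index += 1
```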
RecordIO requires that you build the files with the 'im2rec' tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
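A hedged sketch of driving im2rec from Python; the script path, the "train" prefix, and the options are assumptions, so check the MXNet page above for the authoritative usage:

```python
import subprocess

# Hedged sketch: pack the images listed by the "train" prefix (train.lst) into
# a RecordIO file with MXNet's im2rec.py. The script path, prefix, and options
# are assumptions; see the MXNet docs for your version.
subprocess.run(
    ["python", "im2rec.py", "train", "food-101/images",
     "--resize", "256", "--quality", "95", "--num-thread", "4"],
    check=True,
)
```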
Once your dataset is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
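For reference, a minimal sketch in the SageMaker Python SDK (v2-style API) of launching the built-in image classification algorithm on RecordIO inputs; the role ARN, bucket, instance type, and epoch count are placeholders:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Hedged sketch: train the built-in image classification algorithm on
# RecordIO data. The role ARN, bucket, and instance type are placeholders.
session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder role

estimator = Estimator(
    image_uri=image_uris.retrieve("image-classification", session.boto_region_name),
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    num_classes=101,             # Food-101 has 101 classes
    num_training_samples=75750,  # standard Food-101 train split (750 per class)
    epochs=10,
)
estimator.fit({
    "train": TrainingInput("s3://my-bucket/food101/train.rec",
                           content_type="application/x-recordio"),
    "validation": TrainingInput("s3://my-bucket/food101/val.rec",
                                content_type="application/x-recordio"),
})
```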
While writing records to a flat file using an Informatica ETL job, Greek characters come out as boxes, although we can see the original characters in the database. At the session level we are using UTF-8 encoding. We have a multi-language application and need to process Chinese, Russian, Greek, Polish, Japanese, etc. characters. Please suggest a fix.
Try changing your code page encoding. I also faced this kind of issue: we were using ANSI encoding, so we created a separate Integration Service with a different encoding, and the file ran successfully.
There is an easier option. In the session properties, select the target flat file, then click Set File Properties. There you can change the code page and choose UTF-8. By default it is ANSI, which is why you are facing this issue.
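To illustrate why the code page matters, here is a small Python sketch of the symptom (the Greek sample string is arbitrary): a single-byte code page that lacks Greek letters replaces them, and the replacements then render as boxes, while UTF-8 round-trips them intact.

```python
# Illustration of the symptom: a single-byte code page that lacks Greek letters
# replaces them (rendered as boxes or question marks), while UTF-8 keeps them.
text = "Ελλάδα"
print(text.encode("cp1252", errors="replace"))  # b'??????'
print(text.encode("utf-8").decode("utf-8"))     # 'Ελλάδα'
```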
My DynamoDB table is quite large and I don't particularly want to dump the whole thing. There is one column that I want to test on, so I would like a dump of all of its values that I could have locally to code/test with. However, I am not finding anything that lets me do this.
I found RazorSQL, and it semi-worked: it let me pull down just one column of information from the table, but it clearly didn't pull down all the data.
I also found a Data Pipeline Template on AWS but from what I can tell this will dump the entire table. I am relatively new to AWS so it's possible I'm not understanding something about pipelines properly.
I'm okay with writing to S3, because I can pull the data down from there, but anything that gets it onto my local machine is fine by me.
Thanks for the help!
UPDATE: This tutorial looks promising, but I want to achieve the same effect non-interactively.
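One non-interactive approach, assuming Python and boto3 are available (the table and attribute names below are placeholders): Scan the table with a ProjectionExpression so only the one attribute comes back, paginate, and write the values to a local file. Note that a Scan still reads every item and consumes the corresponding read capacity; it just returns less data.

```python
import boto3

# Hedged sketch: pull a single attribute out of a DynamoDB table and save the
# values locally. "MyTable" and "my_column" are placeholders. If the attribute
# name is a DynamoDB reserved word, you would also need ExpressionAttributeNames.
table = boto3.resource("dynamodb").Table("MyTable")

values = []
kwargs = {"ProjectionExpression": "my_column"}
while True:
    page = table.scan(**kwargs)
    values.extend(item.get("my_column") for item in page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]  # next page

with open("my_column_dump.txt", "w") as f:
    for value in values:
        f.write(f"{value}\n")
```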
I am working on a project using MapReduce and HBase. We are using Cloudera's CDH3 distribution, which has hbase-0.89.20100924+28 bundled into it. I would like to use Cascading, as we have some processing that requires multiple MapReduce jobs, but I have been looking through the different forks of the HBase adaptors for Cascading on GitHub and can't seem to find one for our version of HBase. Could someone point me in the correct direction?
We are using https://github.com/ryanobjc/cascading.hbase with CDH3u1. If you have not tried that one with CDH3, give it a try.
That looks like a timestamped version coming from source control. Can't you just extract it from the tarball/jar file?
Can someone suggest the best way to overcome this situation? I am using Kettle 4.1.0 Community Edition. When I preview the data in Spoon for a transformation's Table Output step, clicking the preview data option writes the data directly to the database, even though I have not run the transformation. How can I avoid this?
Regards,
Kiran Kumar G.
That's just how it works. Perhaps "preview" is a bad name for it.
There are a couple of ways around it. Preview the step before the Table Output, and disable the hop so no data reaches Table Output. If the Table Output step is collecting several inputs, add a "dummy" step, make that do the collecting, and preview that instead.
Or change your DB connection to a local database (via properties, JNDI, or even a different connection on the step); then you won't care if the data is written.