The link provided for the NSL-KDD dataset is http://nsl.cs.unb.ca/NSL-KDD/
However, I am not able to access the website.
I need the data for my dissertation: I am trying to use it to train a neural network for an intrusion detection system.
Is there any other way to get it?
It seems to be available via the Wayback Machine:
https://web.archive.org/web/20150205070216/http://nsl.cs.unb.ca/NSL-KDD/
The ARFF and text files containing the dataset are available via the links on that archived page.
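If you want to script the download, a minimal sketch using Python's requests library is below. The file name KDDTrain+.txt is an assumption; check the archived index page for the exact links before running.

import requests

# Base URL of the archived NSL-KDD page (from the answer above)
base = ('https://web.archive.org/web/20150205070216/'
        'http://nsl.cs.unb.ca/NSL-KDD/')
# Assumed file name; verify against the links on the archived page
filename = 'KDDTrain+.txt'

resp = requests.get(base + filename, timeout=60)
resp.raise_for_status()  # fail loudly if the archive returns an error
with open(filename, 'wb') as f:
    f.write(resp.content)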
I am new to ImageNet and would like to download full-sized images from one of its subsets/synsets; however, I have found it incredibly difficult to work out which subsets are available and where to find the ID code I need in order to download one.
All previous answers (from only 7 months ago) contain links which are now invalid. Some seem to imply there is some sort of algorithm for constructing an ID, as it is linked to WordNet?
Essentially I would like a dataset of plastic, plastic waste, or ideally marine debris. Any help on how to get the relevant ImageNet ID, or suggestions for other datasets, would be much appreciated!
I used this repo to achieve what you're looking for. Follow these steps:
Create an account on the ImageNet website.
Once you get permission, download the list of WordNet IDs for your task.
Once you have the .txt file containing the WordNet IDs, you are all set to run main.py.
You can adjust the number of images per class as needed.
By default, ImageNet images are automatically resized to 224x224. To remove that resizing, or to implement other types of preprocessing, simply modify the code at line #40.
Source: refer to this Medium article for more details.
You can find all 1,000 ImageNet classes here.
EDIT:
The above method no longer works as of March 2021. As per this update:
The new website is simpler; we removed tangential or outdated functions to focus on the core use case—enabling users to download the data, including the full ImageNet dataset and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
Given this change, to parse and search ImageNet you may now have to use NLTK's WordNet interface, e.g. as sketched below.
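Here is a minimal sketch of looking up candidate synsets with NLTK and deriving the ImageNet-style WordNet ID (a "wnid" is the letter n followed by the zero-padded 8-digit WordNet synset offset). The query word 'debris' is just an illustration; substitute your own keyword.

import nltk
nltk.download('wordnet')  # one-time download of the WordNet corpus
from nltk.corpus import wordnet as wn

# Look up noun synsets for a keyword and print their ImageNet-style IDs
for synset in wn.synsets('debris', pos=wn.NOUN):
    wnid = 'n{:08d}'.format(synset.offset())
    print(wnid, synset.name(), '-', synset.definition())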
More recently, the organizers hosted a Kaggle challenge based on the original dataset with additional labels for object detection. To download the dataset you need to register a Kaggle account and join this challenge. Please note that by doing so, you agree to abide by the competition rules.
Please be aware that this file is very large (168 GB) and the download will take anywhere from minutes to days depending on your network connection.
Install the Kaggle CLI and set up your credentials as per this guide:
pip install kaggle
Then run these:
kaggle competitions download -c imagenet-object-localization-challenge
unzip imagenet-object-localization-challenge.zip -d <YOUR_FOLDER>
Additionally, to understand the ImageNet hierarchy, refer to this.
I am trying to collect SNMP data from printers for later analysis, using a prediction algorithm to foretell faults in printers before they actually occur. I seek advice on how best to collect the data and prepare it in a dataset format such as .csv, so I can feed it into my classifier.
Would really appreciate any help rendered.
Cheers!
My approach might not be the most efficient one, but it is something to start with and improve later.
What I would do in your case is the following:
1) Create a Python script that polls every printer you need to poll, using PySNMP.
2) I am not sure where you want to collect your data from, but you can import csv in your poller script and write a CSV file if that is what you want. Or, if you want the data inserted into an SQL database such as MySQL, you can push it there from the script as well. A minimal polling sketch follows below.
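Here is a minimal sketch of such a poller, assuming PySNMP 4.x (the hlapi module) and SNMPv2c with the 'public' community string; the printer IPs and the page-count OID are illustrative, so check your printers' MIBs for the values you actually need.

import csv
import time
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

PRINTERS = ['192.0.2.10', '192.0.2.11']          # hypothetical printer IPs
OIDS = {
    'sysUpTime': '1.3.6.1.2.1.1.3.0',            # MIB-II uptime
    'pageCount': '1.3.6.1.2.1.43.10.2.1.4.1.1',  # Printer-MIB prtMarkerLifeCount (verify in your MIB)
}

def poll(ip, oid):
    # Fetch a single OID from one printer; return its value, or None on error
    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),      # SNMPv2c
        UdpTransportTarget((ip, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(oid))))
    if errorIndication or errorStatus:
        return None
    return str(varBinds[0][1])

# Append one row per printer per run
with open('printer_metrics.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for ip in PRINTERS:
        writer.writerow([time.time(), ip] + [poll(ip, oid) for oid in OIDS.values()])

Run the script periodically (e.g. via cron) so the CSV accumulates a time series per printer that you can feed to your classifier.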
Hope this helps:)
I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project that I've been working on. Right now I'm trying to feed the Food-101 dataset into the image classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to have food detection capabilities. Unfortunately, the dataset comes with only the raw image files organized in subfolders, as well as an .h5 file (I'm not sure if I can feed this file type directly into SageMaker?). From what I've gathered, neither of these is a suitable way to feed this dataset into SageMaker, and I was wondering if anyone could point me in the right direction on how to prepare the dataset properly for SageMaker, i.e. convert it to a .rec file or something else. Apologies if the scope of this question is very broad; I am still a beginner at all of this, I'm simply stuck and do not know how to proceed, so any help you might be able to provide would be fantastic. Thanks!
If you want to use the built-in algorithm for image classification, you can use either Image format or RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have the images organized in folders; see the sketch just below.
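For instance, here is a minimal sketch that builds one common form of such a list, an MXNet-style .lst file (tab-separated: index, numeric class label, relative image path), from a Food-101-like layout where each subfolder is one class; the paths are illustrative.

import os

root = 'food-101/images'  # hypothetical dataset root: one subfolder per class
classes = sorted(os.listdir(root))

with open('train.lst', 'w') as f:
    index = 0
    for label, cls in enumerate(classes):
        for name in sorted(os.listdir(os.path.join(root, cls))):
            if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                # .lst format: index <TAB> label <TAB> relative path
                f.write(f'{index}\t{label}\t{cls}/{name}\n')
                index += 1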
RecordIO requires that you build the files with the 'im2rec' tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
Once your dataset is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
My ultimate goal is to have map data offline (because I will customize it myself) and display it in an Android app. I got osmdroid working to load maps online, and I was trying to figure out how to download and display offline maps. I downloaded MOBAC (Mobile Atlas Creator) and exported the data to SQLite format, and when I had a look at it I realized that the tiles are saved in image format (PNG).
What I would like to do is import the data to the phone to later use it in algorithms such as a search engine or a routing algorithm, so I need the "nodes" and "ways" (as I get them from the original OSM XML), import them to the phone, and visualize them so this data is later available to the algorithms I want to develop. Basically, what MAPS.ME does. I think it wouldn't be difficult to convert the XML into SQLite, since a simple script could do it, but then how can I generate the tiles from this custom SQLite database? Or is there a way to download the data in a form more appropriate for what I'm planning to do?
Thanks.
Rendering tiles in an app from raw OpenStreetMap data would be computation-heavy and inefficient. I would suggest using the image tiles you exported for the visual representation.
In addition to the tiles, you should export the data set you will need in the application for the desired functionality. You will not need all of the OpenStreetMap data, so you should identify what you need and build a custom export (there are tools and libraries for processing and filtering OpenStreetMap data; I have used pyosmium for some filtering and processing, but there are others; a pyosmium sketch follows below). For example, you can build a custom database with the POIs you want to search for.
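As an illustration, here is a minimal pyosmium sketch that extracts named amenity POIs from an .osm.pbf extract into a CSV; the input file name and the tag filter are assumptions to adapt to your use case.

import csv
import osmium

class POIHandler(osmium.SimpleHandler):
    def __init__(self, writer):
        super().__init__()
        self.writer = writer

    def node(self, n):
        # Keep only nodes tagged as a named amenity (adjust the filter)
        if 'amenity' in n.tags and 'name' in n.tags:
            self.writer.writerow([n.id, n.location.lat, n.location.lon,
                                  n.tags['amenity'], n.tags['name']])

with open('pois.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'lat', 'lon', 'amenity', 'name'])
    POIHandler(writer).apply_file('my-region.osm.pbf')  # hypothetical extract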
Routing is another chapter. You can implement it yourself, but it is a very complex task. There is a Java library called GraphHopper which can do the data extraction (from OpenStreetMap) and offline routing for you. It has an online API too, but it is possible to make it work completely offline (I did this for one application). Try to look at the source code, because then you can see how complex a topic routing is. Final note: the data exported from GraphHopper contains information about some POIs along routes. It may be possible to search for some things via its Java API, but I haven't investigated this yet.
Can anybody please let me know whether it is possible to export MicroStrategy grid data in text format to an FTP server (the required access will be provided)? If not directly, can we use some kind of Java coding/web services to achieve this? I don't want the full process, I just want to understand whether this can be achieved or not.
Thanks in advance!
You can retrieve report results (and even build a new report from scratch) via the SDK, and from there you can process the data to your liking, i.e. transform it and upload it to an FTP server.
Possibly easier would be to create a file subscription and store the file to a specific directory, where you automatically pick it up and deliver it to your FTP server; a sketch of that pick-up step follows below.
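For the delivery step, a minimal sketch using Python's ftplib is below, assuming the subscription already drops the exported text file into a local directory; the host, credentials, and paths are placeholders.

import os
from ftplib import FTP

EXPORT_DIR = '/var/mstr/exports'   # hypothetical subscription target folder
FTP_HOST = 'ftp.example.com'       # placeholder FTP server

with FTP(FTP_HOST) as ftp:
    ftp.login(user='report_bot', passwd='secret')   # placeholder credentials
    for name in os.listdir(EXPORT_DIR):
        if name.endswith('.txt'):
            path = os.path.join(EXPORT_DIR, name)
            with open(path, 'rb') as fh:
                ftp.storbinary(f'STOR {name}', fh)  # upload in binary mode
            os.remove(path)  # optional: remove after successful delivery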
There might be other solutions as well, but "Yes" is the answer to the yes/no part of your question.