Visualize IP data on a map - geoip

I have been using a honeypot and collecting some IPs of attackers. Now I am using geolite2/python to get the location of the attackers.
My question is, are there any tools to visualize the locations of the attackers on a world map?
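For context, the lookup step mentioned above can be as small as the following sketch, assuming the geoip2 Python package and a local GeoLite2-City.mmdb database (the database path and the sample IPs are placeholders):

# Minimal GeoLite2 lookup sketch (assumes: pip install geoip2 and a local
# GeoLite2-City.mmdb file; the path and IPs below are placeholders).
import geoip2.database
import geoip2.errors

attacker_ips = ["198.51.100.23", "203.0.113.7"]  # e.g. collected by the honeypot

points = []
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    for ip in attacker_ips:
        try:
            resp = reader.city(ip)
            points.append((ip, resp.country.iso_code,
                           resp.location.latitude, resp.location.longitude))
        except geoip2.errors.AddressNotFoundError:
            continue  # private or unknown IPs have no location

for ip, country, lat, lon in points:
    print(ip, country, lat, lon)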

You can do this via IPInfo's free Map IPs tool, which creates an interactive map of IP addresses. For example, check this map of AWS IP Ranges.
You can either copy/paste the IPs or use the tool via cURL. It will process up to 500K IPs.
$ cat ipList | curl -XPOST --data-binary @- "ipinfo.io/map?cli=1"
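If you prefer to script it instead of using cURL, a rough Python equivalent of the command above (assuming the endpoint behaves as shown, and that ipList is a newline-separated file of IPs) would be:

# Rough Python equivalent of the cURL example above; the endpoint behaviour is
# assumed from that example, and ipList is a newline-separated file of IPs.
import requests

with open("ipList", "rb") as f:
    resp = requests.post("https://ipinfo.io/map?cli=1", data=f)

print(resp.text)  # the response should contain the URL of the generated map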
Disclaimer: I work at IPInfo.

You can use Google's Data Visualization tools, specifically the geo chart tools, found here:
https://developers.google.com/chart/interactive/docs/gallery/geochart
You can prepare the data in JSON format and load it into JavaScript, then send Google the data and it will generate a map for you.
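As a rough illustration of the data-preparation step, you could aggregate the geolocated IPs per country and dump rows in the [['Country', 'Attacks'], ...] shape used in the geochart examples (the country names and counts below are placeholders):

# Sketch: aggregate attacker locations per country and write JSON rows in the
# shape the geochart examples expect (the sample data here is made up).
import json
from collections import Counter

# e.g. country names resolved earlier with geolite2/python
attacker_countries = ["China", "Russia", "United States", "China"]

counts = Counter(attacker_countries)
rows = [["Country", "Attacks"]] + [[country, n] for country, n in counts.items()]

with open("attacks.json", "w") as f:
    json.dump(rows, f)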

The GeoIP data can be visualized with Splunk (http://www.splunk.com/) and its Google Maps add-on (https://apps.splunk.com/app/368/). This add-on already includes a GeoIP component, so an external GeoIP program is not needed.

Related

I have a GCP storage bucket with 1000+ images in it. What's the easiest way to get a text file that lists all the URLs of the objects in the bucket?

I know that the API https://storage.googleapis.com/storage/v1/b/<BUCKET_NAME>/o? can be used to retrieve JSON data for 1000 objects at a time, and we can parse the output in code to pick out just the names and generate URLs of the required form. But is there a simpler way to generate a text file listing the URLs in a bucket?
edit: adding more details
I have configured a Google load balancer (with CDN, if that matters) with IP address <LB_IP> in front of this bucket. So ideally I would want to be able to generate a list of URLs like
http://<LB_IP>/image1.jpg
http://<LB_IP>/image2.jpg
...
In general, you can just run gsutil ls gs://my_bucket > your_list.txt on Linux to get all your objects in a text list.
If this is not what you are looking for please edit your question with more specific details.
gsutil doesn't have a command to print URLs for the objects in a bucket; however, it can list objects, as @Chris32 mentioned.
In addition, according to this Stack Overflow post, you could pipe the listing through sed to rewrite the object names into URLs of the desired form.
For publicly visible objects, public links are predictable, as they match the following:
https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME
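If you would rather do it from code than with gsutil and sed, a small sketch using the google-cloud-storage Python client can write the list directly (the bucket name and LB_IP below are placeholders for your own values):

# Sketch: list every object in the bucket and write one URL per line
# (BUCKET_NAME and LB_IP are placeholders; requires google-cloud-storage).
from google.cloud import storage

BUCKET_NAME = "my_bucket"
LB_IP = "<LB_IP>"  # the load balancer / CDN address in front of the bucket

client = storage.Client()
with open("url_list.txt", "w") as out:
    for blob in client.list_blobs(BUCKET_NAME):
        out.write(f"http://{LB_IP}/{blob.name}\n")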

Deletion Operation in AWS DocumentDB

I have a question about deleting data in AWS DocumentDB.
I am using PuTTY to connect to an EC2 instance, and from there I use the mongo shell to connect to my DocumentDB cluster.
I checked the AWS DocumentDB documentation but couldn't find how to delete a single document, or all the data in one collection. For example:
rs0:PRIMARY> show databases
gg_events_document_db 0.000GB
rs0:PRIMARY> use gg_events_document_db
switched to db gg_events_document_db
rs0:PRIMARY> db.data_collection.find({"Type": "15"})
{"Type" : "15", "Humidity" : "14.3%"}
Now I have found the data and I want to delete it. What query do I need to run?
Or what if I want to delete all the data in the collection? How can I do that without deleting the collection itself?
These are probably very basic questions, but I couldn't find a query like this on my own.
I would be very happy if someone experienced with AWS DocumentDB could help me or share some resources.
Thanks a lot 🙌
Amazon DocumentDB is compatible with the MongoDB 3.6 and 4.0 APIs, so the same APIs can be used here. With respect to:
Or what if I want to delete all the data in the collection? How can I do that without deleting the collection itself?
You can use:
db.data_collection.drop()
To delete a single document matching a filter, you would use the deleteOne() method.
For example, for your case that would be:
db.data_collection.deleteOne({"Type": "15"})
To delete all documents matching a filter, use deleteMany(); passing an empty filter removes every document while keeping the collection itself.
There's also the remove() method, but it is deprecated.
The drop() method deletes the entire collection.
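If you ever run these operations from Python instead of the mongo shell, the equivalent pymongo calls look like the sketch below (the connection string is a placeholder; DocumentDB typically also requires TLS with the RDS CA bundle and retryWrites=false):

# Sketch of the same operations via pymongo; the connection string is a
# placeholder and must be adapted to your cluster endpoint and credentials.
from pymongo import MongoClient

client = MongoClient("mongodb://<user>:<password>@<cluster-endpoint>:27017/"
                     "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false")
db = client["gg_events_document_db"]

db.data_collection.delete_one({"Type": "15"})   # delete one matching document
db.data_collection.delete_many({"Type": "15"})  # delete all matching documents
db.data_collection.delete_many({})              # empty the collection, keep it
db.data_collection.drop()                       # drop the whole collection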

CloudFront Top Referrers Report - ALL referrer URLs

In AWS I can find under:
Cloudfront >> Reports & Analytics >> Top Referrers (CloudFront Top Referrers Report)
There I get the top 25 items. How can I get ALL of them?
I have turned on logging in my bucket, but it seems that the referrer is not part of the log file. Any idea how Amazon collects its top 25, and how I can get the whole list the same way?
Thanks for your help, in advance.
Amazon's built in analytics are, as you've noticed, rather basic. The data you're looking for all lives in the logfiles that you can set cloudfront up to export (in the cs(Referer) field). If you know what you're looking for, you can set up a little pipeline to download logs, pull out the numbers you care about and generate reports.
Amazon also makes it easy[1] to set up Athena or Redshift to look directly at Cloudfront or S3 logfiles in their target bucket. After a one-time setup, you could query them directly for the numbers you need.
There are also paid services built to fill in the holes in Amazon's default reports. S3stat (https://www.s3stat.com/), for example, will give you a Top 200 Referrer list in its reports, with the ability to export complete lists.
[1] "easy", using Amazon's definition of the word, meaning really really hard.

Filter AWS api results by the lack of a field

I am using Boto3 for a project, one part of which involves looking up unassociated Elastic IP addresses. The filter API is usually very expressive, but I can't figure out how to use it for this use case, which doesn't seem all that unusual.
How can I query for an EIP without any associations?
For example, the following doesn't work:
boto3.resource("ec2").vpc_addresses.filter(Filters=[{"Name": "association-id", "Values": []}])
[addr['PublicIp'] for addr in boto3.client("ec2").describe_addresses()['Addresses'] if 'AssociationId' not in addr]
This will:
- Get all addresses
- Find the addresses with no association
- Print the PublicIp
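An alternative sketch using the resource interface from the question, checking each address's association_id attribute instead of a server-side filter:

# Sketch: iterate the Elastic IPs via the resource interface and keep the ones
# whose association_id is unset (i.e. not attached to anything).
import boto3

ec2 = boto3.resource("ec2")
unassociated = [addr.public_ip for addr in ec2.vpc_addresses.all()
                if addr.association_id is None]
print(unassociated)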

How to access the public data from Amazon S3

I am new to Analytics and Amazon. I found a data set that is public on AWS S3. I downloaded the s3fox tool but was unable to use it. What are the other means to download this data? I don't want to use an EC2 instance or Hadoop. I simply want to download these text files and run them in R.
I want to download following file:
s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/textData-00112
Regards
Baba
You can access it using the following url:
http://aws-publicdatasets.s3.amazonaws.com/common-crawl/parse-output/segment/1341690169105/textData-00112
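If you just want the file on disk without any S3 tooling, a plain HTTP download of that public URL works; for example, with Python's standard library:

# Sketch: plain HTTP download of the public object (no S3 tools or credentials needed).
import urllib.request

url = ("http://aws-publicdatasets.s3.amazonaws.com/"
       "common-crawl/parse-output/segment/1341690169105/textData-00112")
urllib.request.urlretrieve(url, "textData-00112")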
You can download the file using the link mentioned by imiperalix and run the line below to load the data as a table.
textdata = read.table("{path}textData-00112");