How to test if a URL is a phishing URL on the command line using Google Safe Browsing? - safe-browsing

In Google Safe Browsing, there are two ways to test whether a URL is a phishing URL: lookup-based and hash-based.
In this question, I focus on the hash-based approach, which is better for privacy and is the one used by browsers such as Firefox.
For this, the browser downloads a hash database, goog-phish-shavar, which is saved as ~/.cache/mozilla/firefox/<profile_folder>/safebrowsing/goog-phish-shavar.sbstore.
Now, I want to test a URL on the command line as follows:
test-safebrowsing-url goog-phish-shavar.sbstore http://example-phishing.com
How to do this?

The files that you are looking at are Firefox-specific, so you'll need something like sbdbdump to extract the hash prefixes from them:
cd ~/.cache/mozilla/firefox/<profile_folder>/safebrowsing/
~/sbdbdump/dump.py -v --name goog-phish-shavar . > ~/goog-phish-shavar.hashes
and then you'll have to convert a URL to its possible hashes following the hashing rules. regexp-lookup.py can help with that.
Finally, you'll have to check all of the URL hashes against the list of prefixes. If you find any matches, you need to make a request for the full hashes that start with that prefix.
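To make that flow concrete, here is a minimal Python sketch of the hash-based check. It skips the full URL canonicalization rules from the spec, and find_prefix_match() and its prefixes argument are placeholders for the data dumped from goog-phish-shavar; remember that a prefix hit is only a candidate until the full hashes confirm it.
import hashlib
from urllib.parse import urlparse

PREFIX_LEN = 4  # goog-phish-shavar stores 4-byte SHA-256 prefixes

def url_expressions(url):
    # Host-suffix / path-prefix combinations that Safe Browsing hashes
    # (simplified; the spec also requires full canonicalization first).
    parsed = urlparse(url)
    host = parsed.hostname
    path = parsed.path or '/'
    parts = host.split('.')
    # The exact host plus up to four trailing-component suffixes.
    hosts = [host] + ['.'.join(parts[i:])
                      for i in range(max(len(parts) - 5, 1), len(parts) - 1)]
    # '/', each directory level, and the exact path (plus query, if any).
    segments = path.split('/')
    paths = ['/'.join(segments[:i + 1]) + '/' for i in range(len(segments) - 1)]
    paths.append(path)
    if parsed.query:
        paths.append(path + '?' + parsed.query)
    for h in dict.fromkeys(hosts):
        for p in dict.fromkeys(paths):  # dedupe while keeping order
            yield h + p

def find_prefix_match(url, prefixes):
    # prefixes: a set of PREFIX_LEN-byte values extracted from the .sbstore
    for expression in url_expressions(url):
        digest = hashlib.sha256(expression.encode()).digest()
        if digest[:PREFIX_LEN] in prefixes:
            return expression  # candidate only: now request the full hashes
    return None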

For Google Safe Browsing v3, there is https://github.com/Stefan-Code/gglsbl3.
For Google Safe Browsing v4, there is https://github.com/afilipovich/gglsbl.
Both support hash-based lookups from the command line.
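For example, with the v4 gglsbl a check looks roughly like this (based on its README; YOUR_API_KEY is a placeholder for a Google API key, and the first cache update downloads the whole prefix database, so it takes a while):
from gglsbl import SafeBrowsingList

sbl = SafeBrowsingList('YOUR_API_KEY')  # placeholder API key
sbl.update_hash_prefix_cache()          # sync the local hash-prefix database
threats = sbl.lookup_url('http://example-phishing.com/')
print(threats if threats else 'Not listed')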

Related

Postman - Can't inspect Find & Replace results

I am searching for requests in a collection which contain a specific string, using Find & Replace. After entering a value, I am able to see the results. I would like to inspect the requests/responses that are in the search results. Specifically, I need to see their headers and, for results that are responses, their entire response bodies.
There is an "Open in builder" button by each search result that unfortunately does absolutely nothing, and to further complicate things, the URL of the request is truncated in the search results. Because of this, I cannot even find the request manually in the collection and inspect it.
Does anyone have a solution to this problem?

Replacing part of ${url}s from a sitemap in JMeter

I have a JMeter test plan that goes to a site's sitemap.xml page, retrieves each URL on that page with an XPath Extractor, then passes ${url} to an HTTP Request sampler within a ForEach Controller to send the results for each page to a file. This works great, except I just realized that the links on this sitemap.xml page are hardcoded. This is a problem when I want to test https://staging-website.com, but all of the links in sitemap.xml point to www.website.com pages. It seems like there must be a way to replace 'www.website.com' in each ${url} with 'staging-website.com' with regex or something, but I haven't been able to figure out how. Any suggestions would be greatly appreciated.
Add a BeanShell PreProcessor to manipulate the url:
String sUrl = vars.get("url");
// Replace only the host; the original scheme (http/https) is kept intact.
String sNewUrl = sUrl.replace("www.website.com", "staging-website.com");
log.info("sNewUrl: " + sNewUrl);
vars.put("url", sNewUrl);
You can also correlate the sitemap.xml with a Regular Expression Extractor anchored after www.website.com, so that you extract only the path portion of each link instead of the full host name. Shouldn't you have that already, since the HTTPSampler only lets you enter the path and not the host name?
You can use the __strReplace() function, available via the JMeter Plugins project, like:
${__strReplace(${url},www.website.com,staging-website.com,)}
The easiest way to install JMeter custom functions (as well as any other plugins) is via the JMeter Plugins Manager.
I was able to replace the host within the string by putting
${__javaScript('${url}'.replace('www.website'\,'staging.website'))}
in the Path input of the second HTTP Request sampler. The answers provided by Selva and Dimitri were more elegant, so if I have time in the future to come back to this I will give them another try. I really appreciate the help!

Is adding a robots.txt to my Django application the way to get listed by Google?

I have a website (Django) on a Linux server, but Google isn't finding the site at all. I know that I don't have a robots.txt file on the server. Can someone tell me how to create one, what to write inside, and where to place it? That would be a great help!
robots.txt is not how Google finds your site. I think you must register your site with Google and also add a sitemap.xml.
Webmaster Tools - Crawl URL ->
https://www.google.com/webmasters/tools/submit-url?continue=/addurl&pli=1
Also see this for robots.txt:
Three ways to add a robots.txt to your Django project | fredericiana
-> http://fredericiana.com/2010/06/09/three-ways-to-add-a-robots-txt-to-your-django-project/
What is robots.txt?
It is great when search engines frequently visit your site and index your content, but often there are cases when indexing parts of your online content is not what you want. For instance, if you have two versions of a page (one for viewing in the browser and one for printing), you'd rather have the printing version excluded from crawling; otherwise you risk a duplicate-content penalty. Also, if you have sensitive data on your site that you do not want the world to see, you will prefer that search engines do not index it (although in this case the only sure way to keep sensitive data from being indexed is to keep it offline on a separate machine). Additionally, if you want to save some bandwidth by excluding images, stylesheets and JavaScript from indexing, you need a way to tell spiders to keep away from these items.
One way to tell search engines which files and folders on your website to avoid is with the Robots metatag. But since not all search engines read metatags, the Robots metatag can simply go unnoticed. A better way to inform search engines about your wishes is to use a robots.txt file.
from What is Robots.txt -> http://www.webconfs.com/what-is-robots-txt-article-12.php
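For illustration, a minimal robots.txt might look like this (the Disallow paths and the sitemap URL are placeholders; an empty Disallow: would mean "crawl everything"):
User-agent: *
Disallow: /print/
Disallow: /admin/

Sitemap: http://www.example.com/sitemap.xml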
robots.txt files are used to tell search engines which content should or should not be indexed. A robots.txt file is in no way required for a search engine to index your site.
There are a number of things to note about being indexed by search engines.
There is no guarantee you will ever be indexed
Indexing takes time: a month, two months, six months.
To get indexed quicker, try sharing a link to your site through blog comments etc. to increase the chances of it being found.
Submit your site through the http://google.com/webmasters site; this will also give you hints and tips to make your site better, as well as crawling stats.
Put robots.txt in the same directory as views.py, and add this view:
from django.http import HttpResponse
import os.path

def robots(request):
    # Read robots.txt from the directory this views.py lives in
    BASE = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(BASE, 'robots.txt')) as f:
        content = f.read()
    return HttpResponse(content, content_type='text/plain')
and in urls.py:
(r'^robots\.txt$', 'aktel.views.robots'),
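The article linked above describes lighter-weight options too; for instance, in recent Django versions you can serve robots.txt straight from urls.py with the generic TemplateView, with no custom view at all (a sketch, assuming robots.txt sits in one of your template directories):
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    # Serve templates/robots.txt as plain text at /robots.txt
    path('robots.txt', TemplateView.as_view(template_name='robots.txt',
                                            content_type='text/plain')),
]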

What does this URL mean?

http://localhost/students/index.cfm/register?action=studentreg
I did not understand the use of 'register' after index.cfm. Can anyone please help me understand what it could mean? There is an index.cfm file in the students folder. Could register be a folder name?
They might be using special commands within their .htaccess files to modify the URL to point to something else.
Things like pointing home.html -> index.php?p=home
ColdFusion will execute index.cfm. It is up to the script to decide what to do with the /register that comes after.
This trick is used to build SEO-friendly URLs. For example, at http://www.ohnuts.com/buy.cfm/bulk-nuts-seeds/almonds/roasted-salted, buy.cfm uses /bulk-nuts-seeds/almonds/roasted-salted to determine which page to show.
What's nice about this is that it avoids custom 404 error handlers and URL rewrites. This makes it easier for your application to directly manage the URLs used.
I don't know if it works on all platforms, as I've only used it on IIS.
You want to look into the cgi.PATH_INFO variable; it is populated automatically by the CF server when such a URL format is used.
A better real-life example would look something like this.
I have a URL which I want to make prettier:
http://mybikesite/index.cfm?category=bicycles&manufacturer=cannondale&model=trail-sl-4
I can rewrite it this way:
http://mybikesite/index.cfm/category/bicycles/manufacturer/cannondale/model/trail-sl-4
Our cgi.PATH_INFO value is: /category/bicycles/manufacturer/cannondale/model/trail-sl-4
We can parse it using list functions to get the same data that the original URL gives us automatically.
The second part of your URL is a plain GET variable; it is pushed into the URL scope as usual.
Both formats can be mixed; GET vars may be used for paging or any other secondary stuff.
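To picture the parsing step, here is the same key/value idea sketched in Python for illustration (in CFML you would apply list functions such as ListGetAt() to cgi.PATH_INFO with '/' as the delimiter):
path_info = '/category/bicycles/manufacturer/cannondale/model/trail-sl-4'
parts = path_info.strip('/').split('/')
# Pair up alternating keys and values from the path segments
params = dict(zip(parts[0::2], parts[1::2]))
# params == {'category': 'bicycles', 'manufacturer': 'cannondale',
#            'model': 'trail-sl-4'}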
index.cfm is using either a CFIF IsDefined("register") or a CFIF cgi.PATH_INFO CONTAINS statement to execute a function or perform a logic step.

Cleansing string / input in Coldfusion 9

I have been working with ColdFusion 9 lately (my background is primarily PHP) and I am scratching my head trying to figure out how to clean/sanitize user-submitted strings.
I want to make them HTML-safe and eliminate any JavaScript or SQL injection; the usual.
I am hoping I've overlooked some kind of function that already comes with CF9.
Can someone point me in the proper direction?
Well, for SQL injection, you want to use CFQUERYPARAM.
As for sanitizing the input for XSS and the like, you can use the ScriptProtect attribute in CFAPPLICATION, though I've heard that doesn't work flawlessly. You could look at Portcullis or similar 3rd-party CFCs for better script protection if you prefer.
This is an addition to Kyle's suggestions, not an alternative answer, but the comments panel is a bit rubbish for links.
Take a look at the ColdFusion string functions. You've got HTMLCodeFormat, HTMLEditFormat, JSStringFormat and URLEncodedFormat, all of which can help you work with content posted from a form.
You can also try to use the regex functions to remove HTML tags, but it's never a precise science. This ColdFusion-based regex/HTML question should help there a bit.
You can also try to protect yourself from bots and known spammers using something like cfformprotect, which integrates Project Honeypot and Akismet protection amongst other tools into your forms.
You've got several options:
"Global Script Protection" Administrator setting, which applies a regular expression against post and get (i.e. FORM and URL) variables to strip out <script/>, <img/> and several other tags
Use isValid() to validate variables' data types (see my in-depth answer on this one).
<cfqueryparam/>, which serves to create SQL bind parameters and validate the datatype passed to it.
That noted, if you are really trying to sanitize HTML, use Java, which ColdFusion can access natively. In particular, use the OWASP AntiSamy Project, which takes an HTML fragment and whitelists what values can be part of it. This is the same approach that sites like SO and slashdot.org use to protect submissions, and it is a more secure way to accept markup content.
Sanitization of strings, in ColdFusion as in just about any language, is very important and depends on what you want to do with the string. Most mitigations are for:
saving content to a database (e.g. <cfqueryparam ...>)
using content on the next page (e.g. putting a URL parameter in a link, or showing a URL parameter as text)
saving files and using uploaded filenames and content
There is always a risk in allowing basically everything in the first step and then trying to sanitize malicious code "away" by deleting or replacing characters (the blacklist approach).
The better solution, whenever possible, is to validate input with reFind()/reReplace() against regular expressions that explicitly allow only the characters needed for the scenario at hand. Typical use cases are inputs for numbers, lists, email addresses, URLs, names, ZIP codes, cities, etc.
For example, if you want to ask for an email address, you could use
<cfif reFindNoCase("^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.(?:[A-Z]{2,})$", stringtosanitize)>...ok, clean...<cfelse>...not ok...</cfif>
(or your own regex).
For HTML or CSS input I would also recommend the OWASP Java HTML Sanitizer Project.