I wish to block access to any file named "modules.php" on any website.
I have written a rule in ModSecurity but I'm not sure whether it is working.
Here is the rule:
SecRule REQUEST_LINE "@rx modules\.php" \
"phase:2,block,severity:2,msg:'Blocking access to modules.php files.'"
Is that correct?
It will work, but use REQUEST_FILENAME instead of REQUEST_LINE for better performance.
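As a sketch, the suggested variant might look like the following (assuming ModSecurity v2 syntax; the rule id is arbitrary, though v2 requires one):

```apache
# Block any request whose target filename contains "modules.php".
# The id is arbitrary; the dot is escaped so it matches literally.
SecRule REQUEST_FILENAME "@rx modules\.php" \
    "id:100001,phase:2,block,severity:2,msg:'Blocking access to modules.php files.'"
```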
So I have an S3 bucket with this structure:
ready_data/{auto_generated_subfolder_name}/some_data.json
The thing is, I want to recursively listen for any data that is put after the ready_data/ directory.
I have tried to set the prefix to ready_data/ and ready_data/*, but this only seems to capture events when a file is added directly to the ready_data directory. The ML algorithm might create a nested structure like ready_data/{some_dynamically_named_subfolder}/{some_somefolder}/data.json and I want to be able to know about the data.json object being created in a path where ready_data is the top-level subfolder.
ready_data/ is correct.
The "prefix" is a left-anchored substring. In pseudocode, the test is determining whether left(object_key, length(rule_prefix)) == rule_prefix so wildcards aren't needed and aren't interpreted. (Doesn"t throw an error, but won't match.)
Be sure you create the rule matching s3:ObjectCreated:* because there are multiple ways to create objects in S3 -- not just Put. Selecting only one of the APIs is a common mistake.
I am writing code that needs to respond to HTTP redirects using the Location header. Mostly it works fine; however, in some cases the header simply says to redirect to some resource, presumably on the same host, for example:
Location: main.html
In other cases it will simply provide a new domain, for example
Location: abc.example.com
Now, how can I tell whether the redirection needs the existing host domain prefixed to the given partial URL without having to check all possible top-level domains in the suffix of the string? The only thing I can think to do is simply try one resultant URL and, if it fails, attempt the other one.
Has anyone got a clever solution or come across this problem before?
According to section 14.30 of RFC 2616 the Location: header must specify an absoluteURI. RFC 2616 borrows (see section 3.2.1) the specification of the absoluteURI from RFC 2396, and section 3 of RFC 2396 makes it clear that an absoluteURI is, well, an absolute URI.
Any other kind of URI, like the sample responses you're getting, violates RFC 2616, and those responses are invalid under that specification. (Note that RFC 7231, which obsoletes RFC 2616, relaxes this: Location may be a relative reference, resolved against the effective request URI — but a bare hostname like yours is still ambiguous.)
First, it does not appear that you're getting valid Location headers. Location headers must contain either an absolute URL (one that starts with a scheme, e.g. "http:") or a relative reference (one that starts with "//hostname", or with "/" for a path). See https://en.wikipedia.org/wiki/HTTP_location and https://www.rfc-editor.org/rfc/rfc3986#section-4.2.
That being said, if you're stuck with a server sending you broken Location headers, you could do some heuristics to try to guess whether it starts with a hostname or not. Obvious ones include whether it ends with common file format extensions (e.g. .html, .txt, .pdf) or common TLD suffixes (e.g. .com, .org, .net). This isn't fool-proof, as with the explosion of TLDs, there's likely to be overlap with file extensions, and in theory, a file can end in anything (as, for instance, some end with .com). But it would probably get you 98% of the way there - and then you could just try both and see which gives you an answer.
There's always the possibility that both give you an answer, in which case it's just a Hard Problem(tm). That's why the specification is what it is - to avoid this kind of ambiguity.
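A rough version of that heuristic can be sketched in Python. The TLD and extension sets below are illustrative assumptions, and for simplicity relative values are resolved against the site root rather than the current path:

```python
from urllib.parse import urlparse

COMMON_TLDS = {"com", "org", "net", "edu", "gov", "io"}   # illustrative, not exhaustive
FILE_EXTS = {"html", "htm", "txt", "pdf", "php", "json"}  # illustrative, not exhaustive

def resolve_broken_location(current_url, location):
    """Heuristically resolve a non-compliant Location header value."""
    if urlparse(location).scheme:
        return location  # already absolute, e.g. "http://..."
    # Look at the suffix of the first path-free segment.
    first_segment = location.split("/", 1)[0]
    suffix = first_segment.rsplit(".", 1)[-1].lower()
    if suffix in COMMON_TLDS and suffix not in FILE_EXTS:
        return "http://" + location  # looks like a bare hostname
    # Otherwise treat it as a path, resolved against the site root.
    base = urlparse(current_url)
    return f"{base.scheme}://{base.netloc}/{location.lstrip('/')}"
```

As the answer notes, this is guesswork, not a fix; the fallback of trying both candidates still applies when the heuristic is wrong.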
The title pretty much says it all. The scenario is a user uploads a file, but they could be hitting 1 of 6 servers depending on the current load at the time. We have run into a situation where users are trying to upload files with special characters in their names. We can write a function to sanitize the file name, but then we have to check that the new sanitized file name doesn't exist. My thought was to just rename the file using createUUID(). I believe the createUUID() function uses the server name as part of the algorithm, if I remember correctly, so if anything the uniqueness should be 6-fold due to the 6 servers. Am I correct in this thinking?
If I remember correctly, CF uses timestamp+clock+servername.
Did you consider sanitizing the uploaded filename and just appending the UUID? That appears foolproof to me.
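A sketch of that sanitize-plus-UUID approach in Python (the sanitizing regex is an assumption about what characters you want to keep; ColdFusion's createUUID() would play the role of uuid4 here):

```python
import re
import uuid
from pathlib import Path

def safe_upload_name(original):
    """Sanitize the user-supplied filename, then append a UUID so the
    result is unique across servers even if two uploads share a name."""
    stem = Path(original).stem
    ext = Path(original).suffix.lower()
    clean = re.sub(r"[^A-Za-z0-9_-]", "_", stem) or "file"
    return f"{clean}_{uuid.uuid4().hex}{ext}"
```

Because the UUID guarantees uniqueness, there is no need to check whether the sanitized name already exists on any of the servers.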
I have a website running on an old Apache server with SSI enabled. My host wants to move to a new server which has SSI disabled for security reasons.
I have a whole lot of pages with Google-friendly URLs which just contain one line:
<!--#include virtual="Url_Including_Search_String"-->
What is the best alternative to the SSI to keep my google friendly search strings returning the specified search result?
I can achieve most of the results with rewrite rules in the .htaccess file; however, some search strings have a space in the keyword but the URL doesn't, and I can't handle that with a rewrite rule.
e.g. www.somedomain.com.au/SYDNEY.htm would have
<!--#include virtual="/search.php?keyword=SYDNEY&Submit=SEARCH"-->
However, the issue is that
www.somedomain.com.au/POTTSPOINT.htm would have
<!--#include virtual="/search.php?keyword=POTTS+POINT&Submit=SEARCH"-->
A rewrite rule cannot detect where a space should be in a suburb name, so I'm hoping there is an alternative to <!--#include virtual=
I have looked at RewriteMap but don't think I can access the file I would need to put this in.
I would use mod_rewrite to redirect any calls to non-existent files to your search page.
For example:
http://example.com/SYDNEY redirects to
http://example.com/search.php?q=SYDNEY
(assuming there is not actually a /SYDNEY/ file at your server root.)
Then get rid of all those individual redirect pages.
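A minimal .htaccess sketch of that idea (assuming mod_rewrite is enabled; the pattern and the keyword/Submit parameter names mirror the include examples above):

```apache
RewriteEngine On
# Only rewrite when the request doesn't match a real file or directory.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# /SYDNEY.htm -> /search.php?keyword=SYDNEY&Submit=SEARCH
RewriteRule ^([A-Za-z]+)\.htm$ /search.php?keyword=$1&Submit=SEARCH [L,QSA]
```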
As for the spaces, I'd modify my actual Search page to recognize (for example) "POTTSPOINT" and figure out that the space should be inserted. Basically compare the search term against a database of substitutions.
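That lookup could be sketched as follows (the table entries are hypothetical; in practice the substitutions would come from a database of suburb names):

```python
# Hypothetical substitution table mapping collapsed keywords to their
# spaced forms; in practice this would be loaded from a database.
SUBSTITUTIONS = {
    "POTTSPOINT": "POTTS POINT",
    "SURRYHILLS": "SURRY HILLS",
}

def expand_keyword(raw):
    """Return the spaced form if known, otherwise the keyword unchanged."""
    return SUBSTITUTIONS.get(raw.upper(), raw)
```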
I'm working with Coldfusion (because I have to) and we use iPlanet 7 (because we have to), and I would like to pass clean URL's instead of the query-param junk (for numerous reasons). My problem is I don't have access to the overall obj.conf file, and was wondering if there were .htaccess equivalents I could pass on the fly per directory. Currently I am using Application.cfc to force the server to look at index.cfm in root before loading the requested page, but this requires a .cfm file is passed, so it just 404's out if the user provides /path/to/file but no extension. Ultimately, I would like to allow the user to pass domain.com/path/to/file but serve domain.com/index.cfm?q1=path&q2=to&q3=file. Any ideas?
You can use mod_dir with the DirectoryIndex directive to set which page is served for /directory/ requests.
http://httpd.apache.org/docs/2.2/mod/mod_dir.html
I'm not sure what exists for iPlanet; I haven't had to work with it before. But it would be possible to use a URL like index.cfm/path/to/file and pull the extra path information from the cgi.path_info variable. Not exactly what you're looking for, but cleaner than query-params.
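The cgi.path_info idea maps each extra path segment to a numbered parameter. A sketch of that mapping in Python (the q1/q2/q3 names follow the question's example):

```python
def params_from_path_info(path_info):
    """Turn '/path/to/file' into {'q1': 'path', 'q2': 'to', 'q3': 'file'},
    mirroring index.cfm?q1=path&q2=to&q3=file."""
    segments = [s for s in path_info.strip("/").split("/") if s]
    return {f"q{i + 1}": seg for i, seg in enumerate(segments)}
```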