I'm setting up unit testing for the first time on my Zend Framework app (and it's actually the first time I'm doing unit testing at all).
The problem I'm running into at the moment is that I use a view helper to include my head scripts and links:
class Zend_View_Helper_HeadIncludes extends Zend_View_Helper_Abstract
{
    public function headIncludes($type, $folder)
    {
        // In production, serve the minified assets instead of the full ones.
        if ($folder == "full" && APPLICATION_ENV == "production") {
            $folder = "min";
        }

        $handler = opendir(getenv("DOCUMENT_ROOT") . "/" . $type . "/" . $folder);
        while (($file = readdir($handler)) !== false) {
            if ($file != "." && $file != "..") {
                if ($type == "js") {
                    $this->view->headScript()->appendFile('/js/' . $folder . '/' . $file);
                } else if ($type == "css") {
                    $this->view->headLink()->appendStylesheet('/css/' . $folder . '/' . $file);
                }
            }
        }
        closedir($handler);
    }
}
This is included in each view script. When I try to run a test it fails because opendir() tries to find e.g. "/css/full" relative to the document root, which doesn't seem to have the same value under the tests as it does for the application. What's the best way to resolve this? I could add a conditional to do something different when APPLICATION_ENV is "testing", but I'm not sure whether that would run contrary to what setting up testing is supposed to achieve.
The environment variable 'DOCUMENT_ROOT' is only going to be set by your web server. You may want to use 'APPLICATION_PATH' as a reference instead, as it's more reliable across virtual hosts as well as command-line usage.
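For example, here is a minimal sketch of that change inside the helper, assuming the standard ZF project layout where public/ sits next to the application/ directory that APPLICATION_PATH points to (adjust the relative path to match your own layout):
// Hypothetical adjustment: prefer APPLICATION_PATH, which is defined in
// index.php and therefore also available on the command line, and fall back
// to DOCUMENT_ROOT only when running under the web server.
$baseDir = defined('APPLICATION_PATH')
    ? realpath(APPLICATION_PATH . '/../public')   // assumes public/ is a sibling of application/
    : getenv('DOCUMENT_ROOT');

$handler = opendir($baseDir . '/' . $type . '/' . $folder);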
This is for a project. I need to redirect certain incoming URL requests to files hosted on the same webserver. I cannot use .htaccess and I cannot rely on any plugins.
I managed to do this with a plugin, but I am having a hard time extracting the necessary code in order to write my own plugin that hardcodes what I need.
- WordPress Multisite
- All software up to date
- "Redirection" WordPress plugin (successfully)
- Writing custom WP functions to do this (semi-successfully)
I found some code in the Pantheon documentation:
$blog_id = get_current_blog_id();

// You can easily put a list of many 301 url redirects in this format
// Trailing slashes matter here, so /old-url1 is different from /old-url1/
$redirect_targets = array(
    '/test/test.xml'          => '/files/' . $blog_id . '/test.xml',
    '/regex/wildcard.xml(.*)' => '/files/' . $blog_id . '/regex.xml',
);

if ( ( isset( $redirect_targets[ $_SERVER['REQUEST_URI'] ] ) ) && ( php_sapi_name() != "cli" ) ) {
    echo 'https://' . $_SERVER['HTTP_HOST'] . $redirect_targets[ $_SERVER['REQUEST_URI'] ];
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $redirect_targets[ $_SERVER['REQUEST_URI'] ]);
    if (extension_loaded('newrelic')) {
        newrelic_name_transaction("redirect");
    }
    exit();
}
https://example.com/test/test.xml successfully
redirects to
/files/5/test.xml
However, some incoming requests contain a query string, e.g.
https://example.com/regex/wildcard.xml?somequerystring
Obviously, the redirect from
https://example.com/regex/wildcard.xml
to
/files/5/regex.xml
works fine.
However, as soon as there is a query string involved, the redirect does not work. Given that I need to do this with PHP, how can I achieve a wildcard redirect from either /regex/* or /regex/wildcard.xml* to /files/5/regex.xml?
Any help would be greatly appreciated.
Thanks,
Daniel
If you prefer to retain your code logic and just make the code work, then try this:
$blog_id = get_current_blog_id();

$redirect_targets = array(
    '/wordpress\/test\/test\.xml/i' => 'files/' . $blog_id . '/test.xml',
    '/([^\/]+)\/([^\/]+)\.xml.*/i'  => 'files/' . $blog_id . '/$1.xml',
);

// Get the request URI without GET attributes.
$request_uri = get_request_uri();

// Loop through the redirect rules.
foreach ($redirect_targets as $pattern => $redirect) {
    // If a rule matches, build the new redirect URL.
    if ( preg_match( $pattern, $request_uri ) ) {
        $new_request_uri = preg_replace( $pattern, $redirect, $request_uri );
        $new_url = 'https://' . $_SERVER['HTTP_HOST'] . $new_request_uri;
        header( 'HTTP/1.0 301 Moved Permanently' );
        header( 'Location: ' . $new_url );
        if ( extension_loaded( 'newrelic' ) ) {
            newrelic_name_transaction( "redirect" );
        }
        exit();
    }
}

// Returns the REQUEST URI without 'get' arguments,
// e.g. for example.com/test/test.php?some=arg it returns /test/test.php
function get_request_uri () {
    return strtok( $_SERVER['REQUEST_URI'], '?' );
}
You can modify the redirect rules as you want; they work as normal regex patterns.
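To illustrate why this survives a query string (using the example URI from the question), the second rule behaves roughly like this; the $blog_id value here is just an example:
// Rough trace of the second rule against a URI with a query string.
$blog_id     = 5; // example value
$request_uri = strtok('/regex/wildcard.xml?somequerystring', '?'); // "/regex/wildcard.xml"
$pattern     = '/([^\/]+)\/([^\/]+)\.xml.*/i';
$redirect    = 'files/' . $blog_id . '/$1.xml';

echo preg_replace($pattern, $redirect, $request_uri); // "/files/5/regex.xml"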
I'm trying to add:
require get_template_directory() . '/mytheme/folder/file.php';
To specific category in WooCommerce, I tried already:
function get_file() {
    if ( has_term( 'categoryName', 'product_cat' ) ) {
        require get_template_directory() . '/mytheme/folder/file.php';
    }
}
add_action( 'woocommerce_before_main_content', 'get_file' );
But it doesn't work ;/
How can I add this file for only one category?
Or for multiple categories by using else if?
Regards, Gabrielle
I'm surprised you aren't getting a PHP error for the file not existing. But check out the get_template_directory() function. It gives you the path to the current theme folder. Therefore I think your path is probably wrong since your theme folder is being repeated... something akin to SOME_PATH_STUFF/wp-content/themes/mytheme/mytheme/folder/file.php
Try the following instead:
function get_file() {
    if ( has_term( 'categoryName', 'product_cat' ) ) {
        require get_template_directory() . '/folder/file.php';
    }
}
add_action( 'woocommerce_before_main_content', 'get_file' );
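For the multiple-category part of the question: has_term() also accepts an array of term names or slugs, so something along these lines should work (the category names below are placeholders):
function get_file() {
    // Load the file when the product belongs to any of these categories.
    if ( has_term( array( 'category-one', 'category-two' ), 'product_cat' ) ) {
        require get_template_directory() . '/folder/file.php';
    }
}
add_action( 'woocommerce_before_main_content', 'get_file' );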
Given the following CGI script using Perl with taint mode enabled, I have not been able to get past the error below.
tail /etc/httpd/logs/error_log
/usr/local/share/perl5/Net/DNS/Dig.pm line 906 (#1)
(F) You tried to do something that the tainting mechanism didn't like.
The tainting mechanism is turned on when you're running setuid or
setgid, or when you specify -T to turn it on explicitly. The
tainting mechanism labels all data that's derived directly or indirectly
from the user, who is considered to be unworthy of your trust. If any
such data is used in a "dangerous" operation, you get this error. See
perlsec for more information.
[Mon Jan 6 16:24:21 2014] dig.cgi: Insecure dependency in eval while running with -T switch at /usr/local/share/perl5/Net/DNS/Dig.pm line 906.
Code:
#!/usr/bin/perl -wT

use warnings;
use strict;
use IO::Socket::INET;
use Net::DNS::Dig;
use CGI;

$ENV{"PATH"} = ""; # Latest attempted fix

my $q      = CGI->new;
my $domain = $q->param('domain');

if ( $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/ ) {
    $domain = "$1\.$2";
}
else {
    warn("TAINTED DATA SENT BY $ENV{'REMOTE_ADDR'}: $domain: $!");
    $domain = ""; # successful match did not occur
}

my $dig = new Net::DNS::Dig(
    Timeout   => 15,    # default
    Class     => 'IN',  # default
    PeerAddr  => $domain,
    PeerPort  => 53,    # default
    Proto     => 'UDP', # default
    Recursion => 1,     # default
);

my @result = $dig->for( $domain, 'NS' )->to_text->rdata();
@result = sort @result;
print @result;
I normally use Data::Validate::Domain to do checking for a “valid” domain name, but could not deploy it in a way in which the tainted variable error would not occur.
I read that in order to untaint a variable you have to pass it through a regex with capture groups and then join the capture groups to sanitize it. So I deployed $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/. As shown here it is not the best regex for the purpose of untainting a domain name and covering all possible domains but it meets my needs. Unfortunately my script is still producing tainted failures and I can not figure out how.
Regexp::Common does not provide a domain regex, and modules don't seem to help with untainting the variable, so I am at a loss now.
How to get this thing to pass taint checking?
$domain is not tainted
I verified that your $domain is not tainted. This is the only variable you use that could be tainted, in my opinion.
perl -T <(cat <<'EOF'
use Scalar::Util qw(tainted);

sub p_t($) {
    if (tainted $_[0]) {
        print "Tainted\n";
    } else {
        print "Not tainted\n";
    }
}

my $domain = shift;
p_t($domain);

if ($domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/) {
    $domain = "$1\.$2";
} else {
    warn("$domain\n");
    $domain = "";
}

p_t($domain);
EOF
) abc.def
It prints
Tainted
Not tainted
What Net::DNS::Dig does
See Net::DNS::Dig line 906. It is the beginning of the to_text method.
sub to_text {
    my $self = shift;
    my $d = Data::Dumper->new([$self], ['tobj']);
    $d->Purity(1)->Deepcopy(1)->Indent(1);
    my $tobj;
    eval $d->Dump;    # line 906
    …
From the definition of new I know that $self is just a hashref containing the values from the new parameters plus several others filled in by the constructor. The evaled code produced by $d->Dump sets $tobj to a deep copy of $self (Deepcopy(1)), with correctly set self-references (Purity(1)) and basic pretty-printing (Indent(1)).
Where is the problem, how to debug
From what I found out about &Net::DNS::Dig::to_text, it is clear that the problem is at least one tainted item inside $self. So you have a straightforward way to debug your problem further: after constructing the $dig object in your script, check which of its items is tainted. You can dump the whole structure to stdout using print Data::Dumper::Dumper($dig);, which is roughly the same as the evaled code, and check suspicious items using &Scalar::Util::tainted.
I have no idea how far this is from making Net::DNS::Dig work in taint mode. I do not use it, I was just curious and wanted to find out, where the problem is. As you managed to solve your problem otherwise, I leave it at this stage, allowing others to continue debugging the issue.
As resolution to this question, should anyone come across it in the future: it was indeed the module I was using that caused the taint checks to fail, teaching me an important lesson about trusting modules in a CGI environment. I switched to Net::DNS as I figured it would not encounter this issue, and sure enough it does not. My code is provided below for reference in case anyone wants to accomplish the same thing I set out to do, which is: locate the nameservers defined for a domain within its own zone file.
#!/usr/bin/perl -wT

use warnings;
use strict;
use IO::Socket::INET;
use Net::DNS;
use CGI;

$ENV{"PATH"} = ""; # Latest attempted fix

my $q      = CGI->new;
my $domain = $q->param('domain');
my @result;

if ( $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/ ) {
    $domain = "$1\.$2";
}
else {
    warn("TAINTED DATA SENT BY $ENV{'REMOTE_ADDR'}: $domain: $!");
    $domain = ""; # successful match did not occur
}

# Resolve the domain and ask its own nameserver for the NS records.
my $ip  = inet_ntoa(inet_aton($domain));
my $res = Net::DNS::Resolver->new(
    nameservers => [($ip)],
);

my $query = $res->query($domain, "NS");

if ($query) {
    foreach my $rr (grep { $_->type eq 'NS' } $query->answer) {
        push(@result, $rr->nsdname);
    }
}
else {
    warn "query failed: ", $res->errorstring, "\n";
}

@result = sort @result;
print @result;
Thanks for the comments assisting me in this matter, and to SO for teaching me more than any other resource I have come across.
I am unable to implement pagination with Facebook OpenGraph. I have exhausted every option I have found.
My hope is to query for 500 listens repeatedly until there are none left. However, I am only able to receive a response from my first query. Below is my current code, but I have also tried setting the parameters to different amounts rather than letting the fields from ['paging']['next'] dictate them.
$q_param = array();
$q_param['limit'] = 500;
$music_data  = array();
$next_exists = true;

while ($next_exists) {
    $music      = $facebook->api('/me/music.listens', 'GET', $q_param);
    $music_data = array_merge($music_data, $music['data']);

    if ($music["paging"]["next"] == null || $music["paging"]["next"] == "") {
        $next_exists = false;
    } else {
        // Carry the paging parameters from the "next" URL into the next request.
        $url = $music["paging"]["next"];
        parse_str(parse_url($url, PHP_URL_QUERY), $array);
        foreach ($array as $key => $value) {
            $q_param[$key] = $value;
        }
    }
}
a - Can you please share what you get after the first call?
b - Also, if possible, can you share the whole file?
I think your script is timing out. Try adding the following at the top of your file:
set_time_limit(0);
Can you check the Apache log files?
sudo tail -f /var/log/apache2/error.log
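Putting the suggestions together, here is a rough sketch of the loop with the timeout removed and an arbitrary safety cap added (the cap of 100 pages is just an assumption so the loop can never run forever; everything else reuses the question's $facebook object and parameters):
set_time_limit(0); // prevent PHP's max_execution_time from killing the pagination loop

$music_data = array();
$q_param    = array('limit' => 500);
$max_pages  = 100; // arbitrary safety cap, not part of the Graph API

for ($page = 0; $page < $max_pages; $page++) {
    $music = $facebook->api('/me/music.listens', 'GET', $q_param);
    if (empty($music['data'])) {
        break; // nothing more to fetch
    }
    $music_data = array_merge($music_data, $music['data']);
    if (empty($music['paging']['next'])) {
        break; // the API reports no further page
    }
    // Reuse the query parameters from the "next" URL for the following request.
    parse_str(parse_url($music['paging']['next'], PHP_URL_QUERY), $q_param);
}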
Using an arbitrary set of URLs (e.g. http://api.longurl.org/v2/services), what is the best way to turn this list into a regex?
Is this an appropriate regex?
(((easyuri|eepurl|eweri)\.com)|((migre|mke|myloc)\.me)|etc...)
Can you do multiple levels of optional patterns like that?
I see different ways to accomplish this:
- Use XPath and try to select a node given the current URL.
- Parse the XML into a dictionary and test whether your current URL exists as a key.
- Store the domains from the XML in a database, index the URL field, and query your current URL.
- If performance is not an issue: match the current URL against the entire XML file as text.
Perhaps there are more ideas.
Building a regex from the XML does not seem to me a good idea, since all the other solutions appear far easier to develop.
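For instance, here is a rough sketch of the second idea (a dictionary lookup instead of a regex), assuming the service list has already been reduced to a plain array of shortener domains, as the OP does further down:
// Sample data standing in for the parsed longurl.org service list.
$urlShorteners = array('0rz.tw', '1link.in', 'migre.me');
$lookup = array_flip($urlShorteners); // domain => index, for O(1) membership tests

$host = parse_url('http://migre.me/abc123', PHP_URL_HOST);
if (isset($lookup[$host])) {
    echo "$host is a known URL shortener\n";
}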
OP'S ANSWER:
Well it turns out that this does work:
/((?:easyuri|eepurl|eweri)\.com)|((?:migre|mke|myloc)\.me)/
Run against this:
easyuri.com eepurl.comer eweri.us migre.me mke.memo myloc.em
You get this:
[0] => Array
(
[0] => easyuri.com
[1] => eepurl.com
[2] => migre.me
[3] => mke.me
)
But the easiest way would just be something like this:
/0rz\.tw|1link\.in|1url\.com|2\.gp|2big\.at|etc\.\.\./
Regex helps you complicate things more than is possible with other methods. ;P
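A flat pattern like that can also be generated in one line from the same list; preg_quote() takes care of escaping the dots (a sketch, assuming the $urlShorteners array described just below):
// Build a single flat alternation from the list of shortener domains.
$flatRegex = '/' . implode('|', array_map('preg_quote', $urlShorteners)) . '/';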
Here's the PHP I eventually used to create the regex:
Assumes that you have cURL'd http://api.longurl.org/v2/services and converted the XML to an array called $urlShorteners, like: $urlShorteners = array('0rz.tw', '1link.in', 'etc...');
// Group the domains by TLD, then build one alternation per TLD.
$urls = array();
foreach ($urlShorteners as $url) {
    $urls[] = array_reverse(explode('.', $url));
}

$tldKeys = array();
foreach ($urls as $url) {
    $tldKeys[array_shift($url)][] = $url;
}

$optionPattern = array();
foreach ($tldKeys as $tld => $doms) {
    if ($tld != '') {
        $subPattern = array();
        foreach ($doms as $subDomain) {
            $subPattern[] = implode("\.", array_reverse($subDomain));
        }
        if (count($subPattern) > 1) {
            $optionPattern[] = "((?:" . implode("|", $subPattern) . ")\." . $tld . ")";
        } else {
            $optionPattern[] = "(" . $subPattern[0] . "\." . $tld . ")";
        }
    }
}

$regex = '/' . implode('|', $optionPattern) . '/';
echo $regex . "\n";
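For a quick sanity check of the generated pattern, something like this works (the sample hosts are just illustrative):
// Exercise the generated regex against a few sample hosts.
foreach (array('0rz.tw', '1link.in', 'example.com') as $host) {
    echo $host, preg_match($regex, $host) ? " matches\n" : " does not match\n";
}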