So I decided to put my few R functions into a package, and I'm reading/learning Writing R Extensions.
R CMD check obviously complains about a number of things I'm not doing right.
After enough googling, I'm firing off a few questions here; this one is about testing style. I am using RUnit, and I like having tests as close as possible to the code being tested: this way I won't forget about the tests, and I can use the tests as part of the technical documentation.
for example:
fillInTheBlanks <- function(S) {
  ## NA values in S are replaced with observed values:
  ## accepts a vector possibly holding NA values and returns a vector
  ## where all observed values are carried forward and the first is
  ## carried backward. cf. na.locf from the zoo package.
  L <- !is.na(S)
  c(S[L][1], S[L])[1 + cumsum(L)]
}
test.fillInTheBlanks <- function() {
  checkEquals(fillInTheBlanks(c(1, NA, NA, 2, 3, NA, 4)), c(1, 1, 1, 2, 3, 3, 4))
  checkEquals(fillInTheBlanks(c(1, 2, 3, 4)), c(1, 2, 3, 4))
  checkEquals(fillInTheBlanks(c(NA, NA, 2, 3, NA, 4)), c(2, 2, 2, 3, 3, 4))
}
but R CMD check issues NOTE lines, like this one:
test.fillInTheBlanks: no visible global function definition for
‘checkEquals’
and it complains about me not documenting the test functions.
I don't really want to add documentation for the test functions, and I would definitely prefer not to add a dependency on the RUnit package.
How do you think I should look at this issue?
Where are you putting your unit tests? You may not want to put them in the R directory. A more standard approach is to put them under inst/unitTests. Have a look at this R-wiki page regarding the configuration.
Alternatively, you can specify which objects are exported in your NAMESPACE, and by extension, which functions do and do not need to be documented.
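For example, a NAMESPACE that exports only the user-facing function keeps the test helper internal (a sketch based on the code above):

export(fillInTheBlanks)
## test.fillInTheBlanks is not exported, so R CMD check
## will not demand documentation for it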
Beyond that, ideally your tests should run when R CMD check is called; that's part of the design. In that case, you should create a test script in a separate tests directory that calls your tests. You will need to load the RUnit package in that script (but you don't need to make it a dependency of your package).
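A minimal sketch of such a runner script, assuming the tests live under inst/unitTests and the package is called pkg (both placeholders):

## tests/doRUnit.R -- executed by R CMD check
if (require("RUnit", quietly = TRUE)) {
  library(pkg)  # the package under test (placeholder name)
  suite <- defineTestSuite("pkg unit tests",
                           dirs = system.file("unitTests", package = "pkg"),
                           testFileRegexp = "^runit.+\\.[rR]$")
  result <- runTestSuite(suite)
  printTextProtocol(result)
  errs <- getErrors(result)
  if (errs$nFail > 0 || errs$nErr > 0) stop("RUnit tests failed")
}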
Edit 1:
Regarding your failure because it can't find the checkEquals function: I would change your function to be like this:
test.fillInTheBlanks <- function() {
  require(RUnit)
  checkEquals(fillInTheBlanks(c(1, NA, NA, 2, 3, NA, 4)), c(1, 1, 1, 2, 3, 3, 4))
  checkEquals(fillInTheBlanks(c(1, 2, 3, 4)), c(1, 2, 3, 4))
  checkEquals(fillInTheBlanks(c(NA, NA, 2, 3, NA, 4)), c(2, 2, 2, 3, 3, 4))
}
That way the package is loaded when the function is called, or the user is informed that the package is required.
Edit 2:
From "Writing R Extensions":
Note that all user-level objects in a package should be documented; if a package pkg contains user-level objects which are for “internal” use only, it should provide a file pkg-internal.Rd which documents all such objects, and clearly states that these are not meant to be called by the user. See e.g. the sources for package grid in the R distribution for an example. Note that packages which use internal objects extensively should hide those objects in a name space, when they do not need to be documented (see Package name spaces).
You can use the pkg-internal.Rd file as one option, but if you intend to have many hidden objects, this is usually handled by the declarations in the NAMESPACE.
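If you do go the pkg-internal.Rd route, a minimal sketch might look like this (with pkg a placeholder for your package name):

\name{pkg-internal}
\title{Internal pkg objects}
\alias{test.fillInTheBlanks}
\description{
  Internal functions, not meant to be called by the user.
}
\keyword{internal}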
Did you load the RUnit package?
Your best bet is probably to look at a package containing existing code using RUnit.
Sorry that this question is a bit vague, but that's the problem: I've been pretty much shooting in the dark, unable to find ANY information on this specific area.
I'm hosting AUAudioUnits on OSX. That all works fine: I can find them, load them, instantiate them and use them. No problems.
But it seems there are more options buried somewhere, and I've no idea where to even look to configure this.
So... the problem: I have a specific AU (Superior Drummer 3, in case it's relevant) that comes in two flavours, stereo and 16-channel. It seems that both come from the same component (I assume via AUAudioUnit.registerSubclass), and it has a configurationDictionary that seems to contain the information for the 16-channel version (dump below):
configs: ["SupportedChannelLayoutTags": {
Output = (
6619138,
6684674,
6750210,
6946818
);
}, "HasCustomView": 1, "BusCountWritable": <__NSArrayI 0x600000cd3210>(
0,
0,
0
)
, "ChannelConfigurations": <__NSSingleObjectArrayI 0x600000006e00>(
<__NSArrayI 0x600000290620>(
0,
2
)
)
, "InitialOutputs": <__NSArrayI 0x600003025ef0>(
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
)
]
But I have absolutely no idea where to go from here and am utterly stumped. How is one supposed to extract that information from the dictionary (given that I have no type information, and bearing in mind I've no interest in hacking it: I want to know the proper flags etc. to do this), and how is one supposed to reconfigure the subclass with that info?
Any help at all would be so appreciated I cannot tell you. Even just a pointer in the right direction.
Cheers
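One pointer, for what it's worth: the dictionary shown above does decode with ordinary casts. A sketch, assuming the dump comes from AVAudioUnitComponent.configurationDictionary (whether these are the "proper flags" is exactly the open question):

import AVFoundation

// Sketch only; `component` is the AVAudioUnitComponent for the plug-in.
let config = component.configurationDictionary  // bridges to [String: Any]

if let layouts = config["SupportedChannelLayoutTags"] as? [String: Any],
   let tags = layouts["Output"] as? [UInt32] {
    // AudioChannelLayoutTag packs the channel count into the low 16 bits,
    // e.g. 6619138 == kAudioChannelLayoutTag_Stereo == (101 << 16) | 2.
    for tag in tags {
        print("layout tag \(tag >> 16), channels \(tag & 0xFFFF)")
    }
}
if let initialOutputs = config["InitialOutputs"] as? [Int] {
    print("initial output channels per bus:", initialOutputs)  // sixteen 2s above
}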
I have a network model defined using the slim library in Python TensorFlow.
Now I want to port it to C++. However, it seems to me that tensorflow.contrib.slim is a Python-only library. How could I use it in my C++ software?
Basically I have something like:
self.conv1 = slim.convolution2d(
    inputs=self.imageIn, num_outputs=32,
    kernel_size=[8, 8], stride=[4, 4], padding='VALID',
    biases_initializer=None, scope=myScope + '_conv1')
How could I implement this in C++ TensorFlow?
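From what I can tell, the usual route is to keep defining the graph in Python, export it (e.g. as a frozen GraphDef), and only load and run it from C++; a sketch of what I mean, with the file name and tensor names made up:

#include <memory>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

int main() {
    // Load a GraphDef exported from the Python/slim side.
    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                            "frozen_model.pb", &graph_def));

    // Create a session and run the graph; node names are placeholders.
    std::unique_ptr<tensorflow::Session> session(
        tensorflow::NewSession(tensorflow::SessionOptions()));
    TF_CHECK_OK(session->Create(graph_def));
    // std::vector<tensorflow::Tensor> outputs;
    // TF_CHECK_OK(session->Run({{"input:0", input_tensor}},
    //                          {"myScope_conv1/Conv2D:0"}, {}, &outputs));
    return 0;
}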
At the moment I'm trying to replace the slim implementation with plain TensorFlow, and then move to C++. However, replacing
self.conv1 = slim.convolution2d(
    inputs=self.imageIn, num_outputs=32,
    kernel_size=[8, 8], stride=[1, 4, 4, 1], padding='VALID',
    biases_initializer=None, weights_initializer=_initializer, scope=myScope + '_conv1')
with plain tf
with tf.variable_scope(myScope + '_conv1'):
    weights = tf.get_variable("weights", [8, 8, 3, 32],
                              initializer=_initializer, dtype=tf.float32)
    self.conv1 = tf.nn.conv2d(self.imageIn, weights, [1, 4, 4, 1], padding='VALID')
in a working model produces a mess: nothing works anymore. What am I forgetting?
Thanks a lot
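One difference worth noting: slim.convolution2d applies an activation by default (activation_fn=tf.nn.relu) and expects a two-element stride such as [4, 4], whereas tf.nn.conv2d applies no activation and takes the four-element [1, 4, 4, 1] form. So a closer plain-TF sketch (not the original code, just a guess at the missing piece) would be:

import tensorflow as tf

with tf.variable_scope(myScope + '_conv1'):
    weights = tf.get_variable("weights", [8, 8, 3, 32],
                              initializer=_initializer, dtype=tf.float32)
    conv = tf.nn.conv2d(self.imageIn, weights, [1, 4, 4, 1], padding='VALID')
    self.conv1 = tf.nn.relu(conv)  # slim's default activation_fn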
I'm trying to compile a huge, world-renowned numerical weather prediction code - written mostly in Fortran 90 - that uses cpp extensively, and successfully, with PGI, Intel and gfortran. Now, I've inherited a version where experts have added several hundred cases of variadic macros. They use Intel and fpp, which is presumably a little more Fortran-centric, and can get it all to work. I need to use gfortran, and have not been able to get cpp to work on this code with its new additions.
A gross simplification of the problem is as follows -
Code to preprocess:
PRINT *, "Hello" // "Don"
#define adderv(...) (myadd(__VA_ARGS__))
sumv = adderv(1, 2, 3, 4, 5)
Using cpp without the -traditional option will handle the variadic macro, but not the Fortran concatenation:
$ cpp -P t.F90
PRINT *, "Hello"
sumv = (myadd(1, 2, 3, 4, 5))
On the other hand, using the -traditional flag handles the concatenation, but not the variadic macro:
$ cpp -P -traditional t.F90
t.F90:2:0: error: syntax error in macro parameter list
#define adderv(...) (myadd(__VA_ARGS__))
^
PRINT *, "Hello" // "Don"
sumv = adderv(1, 2, 3, 4, 5)
I'm really struggling to find a way to facilitate the processing of both.
I've started playing with gpp, and I feel like I'm getting close, but the reality is I might still be a long way from a solution. It doesn't accept the ..., and it doesn't expand __VA_ARGS__. Of course, the following isn't really a variadic macro any more...
PRINT *, "Hello" // "Don"
#define adderv() (myadd(__VA_ARGS__))
sumv = adderv(1, 2, 3, 4, 5)
$ gpp t.F90
PRINT *, "Hello" // "Don"
sumv = (myadd(__VA_ARGS__))
I've scoured the web to no avail, and the best possibility I've seen so far, which strikes me as possibly ugly and painful, is to split all my Fortran concatenation operators across separate lines, i.e.
PRINT *, "Hello" // "Don"
becomes
PRINT *, "Hello" /&
& / "Don"
The innards of cpp and gpp are a bit intimidating to me, but if anybody sees the potential for success and might point me in the right direction, I'd be very appreciative. Restructuring this huge code really isn't an option, though an automated strategy (such as splitting those concat operators into separate lines) might be, if I'm desperate enough.
Additional information: roygvib suggested I try adding the -C flag. We had been suppressing it because it seemed to introduce many C comments into the Fortran code. Well, I went ahead and tried this, and I think I'm closer:
$ cat t.F90
PRINT *, "Hello" // "Don"
#define adderv(...) (myadd(__VA_ARGS__))
sumv = adderv(1, 2, 3, 4, 5)
When I invoke cpp with the -P and -C flags, it naturally passes the // (Fortran concat operator) through, but it also generates some C-commented copyright text:
$ /lib/cpp -P -C t.F90
/* Copyright (C) 1991-2014 Free Software Foundation, Inc.
This file is part of the GNU C Library.
.
.
.
/* wchar_t uses ISO/IEC 10646 (2nd ed., published 2011-03-15) / Unicode 6.0. */
/* We do not support C11 <threads.h>. */
PRINT *, "Hello" // "Don"
sumv = (myadd(1, 2, 3, 4, 5))
A little bit of research ( Remove the comments generated by cpp ) suggests that this addition of the copyright text may be a relatively new "feature" of cpp.
I can't see any simple way to suppress this, so I'm thinking I may need to build a wrapper script (e.g. mycpp) that calls cpp as above, filters out any C-style comments, then passes that to the next stage.
It's not optimal, and I'm a little leery because this whole package also has C code in it. Theoretically, though, I think that the worst thing that would happen would be failure to generate comments in preprocessed C code.
If anybody has knowledge as to how I might simply suppress the generation of that copyright message, I might be in business.
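For what it's worth, a minimal sketch of the wrapper I have in mind, reusing the sed filter that appears in the answer further below (only tried against the toy example):

#!/bin/sh
# mycpp: run cpp with -C so the Fortran // survives, then strip the
# C-style comments that newer cpp versions insert
/lib/cpp -P -C "$@" | sed '/\/\*.*\*\// d; /\/\*/,/\*\// d'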
At least in the context of the simple example described below, I resolved the problem by installing an older cpp. Other research had confirmed that version 4.8 was inserting additional C comments into preprocessed Fortran code, which obviously isn't a good thing. The solution was simple: use cpp-4.7.
Installation (on Ubuntu 16.04) was more straightforward than I had anticipated. A simple
sudo apt-get install cpp-4.7
put the necessary executable in /usr/bin/cpp-4.7, and that preprocesses the following example the way I want:
$ /usr/bin/cpp-4.7 -C -P t.F90
PRINT *, "Hello" // "Don"
sumv = (myadd(1, 2, 3, 4, 5))
Like DonMorton, I am trying to use cpp -P with Fortran files because __VA_ARGS__ and other things are used. Without the -C option the // operators are removed, and with -C extra comments are added.
So I removed these extra lines using the ideas of another answer:
cpp -P -C t.F90 | sed '/\/\*.*\*\// d; /\/\*/,/\*\// d'
And, as expected, I get:
PRINT *, "Hello" // "Don"
sumv = (myadd(1, 2, 3, 4, 5))
But there is still a problem: you cannot use // (C++-style comments) in macro arguments, because // something is replaced by /* something */.
I'm going to modify the DRAW (Deep Recurrent Attentive Writer) code that another person shared here to handle variable-length sequences using the tf.scan function. So I need to change the for loop in the original code into a structure suitable for the scan function. Below is the relevant part of the original code:
...
for t in range(T):
    c_prev = tf.zeros((batch_size, img_size)) if t == 0 else cs[t-1]
    x_hat = x - tf.sigmoid(c_prev)  # error image
    r = read(x, x_hat, h_dec_prev)
    h_enc, enc_state = encode(enc_state, tf.concat(1, [r, h_dec_prev]))
    z, mus[t], logsigmas[t], sigmas[t] = sampleQ(h_enc)
    h_dec, dec_state = decode(dec_state, z)
    cs[t] = c_prev + write(h_dec)  # store results
    h_dec_prev = h_dec
    DO_SHARE = True  # from now on, share variables
...
In order to use tf.scan, I need to pass several previous states (c_prev, h_dec_prev, ...). However, as far as I know, tf.scan only passes one tensor (is that right?) through the loop, as in the example here:
elems = np.array([1, 2, 3, 4, 5, 6])
sum = tf.scan(lambda a, x: a + x, elems)
It seems there should be only one a, and it should be a tensor. In that case, the only way I can imagine is to flatten the different state tensors and concatenate them. But I'm worried that this will mess up the code and slow it down a lot, especially when the state sizes are all different. Is there an efficient (and fast) way to handle this kind of problem?
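For what it's worth, tf.scan does accept a nested structure (e.g. a tuple) for the initializer and the accumulator, so the states shouldn't need to be flattened. A minimal sketch with made-up shapes:

import tensorflow as tf

def step(prev, x):
    c_prev, h_prev = prev        # unpack the carried states
    h = tf.tanh(x + h_prev)      # stand-in for the real recurrence
    c = c_prev + h               # stand-in for the canvas update
    return (c, h)                # must match the structure of the initializer

xs = tf.random_normal([10, 4])                # T = 10 steps, state size 4
init = (tf.zeros([4]), tf.zeros([4]))         # (c_0, h_0)
cs, hs = tf.scan(step, xs, initializer=init)  # each has shape [T, 4]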
I'm studying from Bjarne Stroustrup's Programming: Principles and Practice Using C++ (Second Edition). At the moment I'm stuck on the vectors chapter because of this error message in the Terminal:
fourth19.cpp:15:23: error: non-aggregate type 'std::vector<int>' cannot be
initialized with an initializer list
std::vector <int> v = {5, 7, 9, 4, 6, 8}; //vector of 6 ints
My/his code looks like this:
std::vector <int> v = {5, 7, 9, 4, 6, 8}; //vector of 6 ints
std::cout<<v[0];
I didn't find anything that explains how to fix this with Xcode 7+.
So if you have Xcode 7+, please write down what to change and where to change it.
The default compiler flag for new Xcode projects is -std=gnu++11.
To check this:
1: Select your project in the Project Navigator (left-hand side of the window; Option-1 shows it if hidden). It's the top item in the tree.
2: To the left of the search field, ensure that 'All' is selected rather than 'Basic'
3: Search for 'C++ Language Dialect' in the settings view.
4: It'll be in the section 'Apple LLVM 7.1 Language - C++'
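For reference, here is a minimal complete program around that snippet; it compiles once the dialect is C++11 or later (the Terminal equivalent is clang++ -std=gnu++11 fourth19.cpp):

// fourth19.cpp (minimal complete example; file name taken from the error above)
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {5, 7, 9, 4, 6, 8};  // list-initialization needs C++11
    std::cout << v[0] << '\n';                // prints 5
}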