How to use vectors in C++ using Xcode 7.3?

I'm studying from Bjarne Stroustrup's Programming Principles and Practice Using C++ (Second Ed.). At the moment I'm stuck on the vectors chapter, because of this error message in the Terminal:
fourth19.cpp:15:23: error: non-aggregate type 'std::vector<int>' cannot be
initialized with an initializer list
std::vector <int> v = {5, 7, 9, 4, 6, 8}; //vector of 6 ints
My/his code looks like this:
std::vector <int> v = {5, 7, 9, 4, 6, 8}; //vector of 6 ints
std::cout<<v[0];
I didn't find anything that explains how to do this with Xcode 7+.
So if you have Xcode 7+, please tell me what to change and where to change it.

The default compiler flag for new Xcode projects is -std=gnu++11, which already supports initializer lists.
To check this:
1: Select your project in the Project Navigator (left-hand side of the window; Option-1 shows it if hidden). It's the top item in the tree.
2: To the left of the search field, ensure that 'All' is selected rather than 'Basic'.
3: Search for 'C++ Language Dialect' in the settings view.
4: It'll be in the section 'Apple LLVM 7.1 Language - C++'. Make sure it is set to GNU++11 (or C++11) or later.
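If the project setting is already GNU++11 but the error still appears, note that the message in the question came from compiling in the Terminal, where the dialect flag has to be passed explicitly. A minimal example (the file name fourth19.cpp is taken from the error message):
// fourth19.cpp -- list-initializing a std::vector requires C++11 or later
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {5, 7, 9, 4, 6, 8}; // vector of 6 ints
    std::cout << v[0] << '\n';               // prints 5
    return 0;
}
Built from the Terminal with the dialect spelled out, this compiles cleanly:
clang++ -std=c++11 fourth19.cpp -o fourth19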

AUAudioUnit hosting

Sorry that this question is a bit vague, but that's the problem: I've been pretty much shooting in the dark because I've been unable to find ANY information on this specific area.
I'm hosting AUAudioUnits on OS X. That all works fine. I can find them, load them, instantiate them and use them. No problems.
But it seems there are more options buried somewhere and I've no idea where to even look to configure this.
So... the problem: I have a specific AU (Superior Drummer 3 in case it's relevant) that comes in 2 flavours: stereo and 16 channel. It seems that both come from the same component (I assume via AUAudioUnit.registerSubclass) and it has a configurationDictionary that seems to contain the information for the 16 channel version (configurationDictionary dump here):
configs: ["SupportedChannelLayoutTags": {
Output = (
6619138,
6684674,
6750210,
6946818
);
}, "HasCustomView": 1, "BusCountWritable": <__NSArrayI 0x600000cd3210>(
0,
0,
0
)
, "ChannelConfigurations": <__NSSingleObjectArrayI 0x600000006e00>(
<__NSArrayI 0x600000290620>(
0,
2
)
)
, "InitialOutputs": <__NSArrayI 0x600003025ef0>(
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
)
]
But I have absolutely no idea where to go from here and am utterly stumped. How is one supposed to extract that information from the dictionary, given that I have no idea of the type information? (Bear in mind I've no interest in hacking it; I want to know the proper flags etc. to do this.) And how is one supposed to reconfigure the subclass with that info?
Any help at all would be so appreciated I cannot tell you. Even just a pointer in the right direction.
Cheers
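Not an answer for the configurationDictionary itself, but as a possible pointer in the right direction: the older C Audio Unit API exposes a unit's supported channel configurations through kAudioUnitProperty_SupportedNumChannels, which reports the same kind of input/output channel pairs. A rough sketch only; the component description below is generic, and filling in the actual subtype/manufacturer codes for a specific plug-in is left open:
#include <AudioToolbox/AudioToolbox.h>
#include <cstdio>
#include <vector>

int main()
{
    // Find some music-device component; a real host would match the exact
    // subtype/manufacturer of the plug-in it has already located.
    AudioComponentDescription desc = {};
    desc.componentType = kAudioUnitType_MusicDevice;
    AudioComponent comp = AudioComponentFindNext(nullptr, &desc);
    if (comp == nullptr) return 1;

    AudioUnit unit = nullptr;
    if (AudioComponentInstanceNew(comp, &unit) != noErr) return 1;

    // Ask how big the AUChannelInfo list is, then fetch it.
    UInt32 size = 0;
    Boolean writable = false;
    if (AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_SupportedNumChannels,
                                 kAudioUnitScope_Global, 0, &size, &writable) == noErr
        && size > 0)
    {
        std::vector<AUChannelInfo> infos(size / sizeof(AUChannelInfo));
        if (AudioUnitGetProperty(unit, kAudioUnitProperty_SupportedNumChannels,
                                 kAudioUnitScope_Global, 0, infos.data(), &size) == noErr)
        {
            for (const AUChannelInfo& info : infos)
                std::printf("inputs: %d, outputs: %d\n",
                            (int)info.inChannels, (int)info.outChannels);
        }
    }

    AudioComponentInstanceDispose(unit);
    return 0;
}
Compile with -framework AudioToolbox. Whether the v3 AUAudioUnit/configurationDictionary route maps onto this one-to-one is exactly the open question, so treat this only as a starting point.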

Importing slim model in tensorflow c++

I have a network model defined using the slim library in Python TensorFlow.
Now I want to port it to C++. However, it seems to me that tensorflow.contrib.slim is a Python-only library. How could I use it in my C++ software?
Basically I have something like:
self.conv1 = slim.convolution2d(
    inputs=self.imageIn, num_outputs=32,
    kernel_size=[8, 8], stride=[4, 4], padding='VALID',
    biases_initializer=None, scope=myScope + '_conv1')
How could I implement this in C++ TensorFlow?
At the moment I'm trying to replace the slim implementation with plain TensorFlow first, and then go to C++. However, replacing
self.conv1 = slim.convolution2d(
    inputs=self.imageIn, num_outputs=32,
    kernel_size=[8, 8], stride=[1, 4, 4, 1], padding='VALID',
    biases_initializer=None, weights_initializer=_initializer,
    scope=myScope + '_conv1')
with plain tf
with tf.variable_scope(myScope + '_conv1'):
    weights = tf.get_variable("weights", [8, 8, 3, 32],
                              initializer=_initializer, dtype=tf.float32)
    self.conv1 = tf.nn.conv2d(self.imageIn, weights, [1, 4, 4, 1], padding='VALID')
in a working model produces a mess: nothing works anymore. What am I forgetting?
Thanks a lot
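slim itself is Python-only, so there is no direct slim call on the C++ side; what exists is the low-level graph-construction API under tensorflow::ops. Purely as a sketch of what the C++ port of that one layer might look like (names and shapes are carried over from the Python snippet; weight initialization and the ClientSession/run plumbing are deliberately left out):
#include <iostream>
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"

int main()
{
    using namespace tensorflow;
    using namespace tensorflow::ops;

    Scope root = Scope::NewRootScope();

    // Input placeholder (NHWC), standing in for self.imageIn.
    auto image_in = Placeholder(root.WithOpName("imageIn"), DT_FLOAT);

    // Filter variable: 8x8 kernel, 3 input channels, 32 output channels.
    // (A real model also needs an Assign op to initialize it, mirroring
    // weights_initializer in the Python code.)
    auto weights = Variable(root.WithOpName("conv1_weights"), {8, 8, 3, 32}, DT_FLOAT);

    // The convolution itself: stride 4 in height and width, VALID padding, no bias.
    auto conv1 = Conv2D(root.WithOpName("conv1"), image_in, weights,
                        {1, 4, 4, 1}, "VALID");

    // Graph-construction errors show up on the scope's status.
    if (!root.ok()) {
        std::cerr << root.status().ToString() << std::endl;
        return 1;
    }
    std::cout << "built op: " << conv1.node()->name() << std::endl;
    return 0;
}
This only builds the graph; running it still needs a ClientSession, a feed for the placeholder, and an initialization step for the variable.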

Why does "small" give an error about "char"?

I'm trying to compile an open-source project with both VS2010 and VS2012, in x86 and x86_64, on a Windows platform running Qt 5.4.
A file named unit.h contains this part:
[...]
// DO NOT change noscale's value. Lots of assumptions are made based on this
// value, both in the code and (more importantly) in the database.
enum unitScale
{
    noScale = -1,
    extrasmall = 0,
    small = 1, // Line that causes errors.
    medium = 2,
    large = 3,
    extralarge = 4,
    huge = 5,
    without = 1000
};
[...]
Generates
error C2062: type 'char' unexpected
error C3805: 'type': unexpected token, expected either '}' or a ','
I've tried every trick I know to solve it. I removed every use of the "small" enum in the code and I still get the error. But after having removed all the uses, if I rename "small" to "smallo", everything is fine. That seems to indicate a name collision, but a file search gives me no references in the whole project. It's not any keyword I know of.
Got any ideas?
EDIT: Thanks to very helpful comments here is an even stranger version that works. Could somebody explain?
#ifdef small // Same with just straight "#if"
#pragma message("yes")
#endif
#ifndef small
#pragma message("no") // Always prints no.
#endif
#undef small
enum unitScale
{
    noScale = -1,
    extrasmall = 0,
    small = 1,
    medium = 2,
    large = 3,
    extralarge = 4,
    huge = 5,
    without = 1000
};
EDIT 2: The pragma directive was showing yes but only in files that had previously loaded the windows.h header, and it was lost in the compiler output in a sea of no.
Thanks everyone! What a quest.
small is defined as a macro in rpcndr.h (#define small char). It is used as a data type for MIDL.
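One workaround, sketched below, is to get rid of the macro before the enum is parsed in any translation unit that has pulled in windows.h (renaming the enumerator, as you already discovered, is the other option):
// rpcndr.h (dragged in via windows.h) contains "#define small char" for MIDL.
#include <windows.h>

#undef small   // from here on, 'small' is an ordinary identifier again

enum unitScale
{
    noScale    = -1,
    extrasmall = 0,
    small      = 1,   // no longer expands to 'char'
    medium     = 2,
    large      = 3,
    extralarge = 4,
    huge       = 5,
    without    = 1000
};
If other code in the same translation unit still needs the MIDL definition, MSVC's #pragma push_macro("small") / #pragma pop_macro("small") can save and restore it around this header, though any later use of the enumerator small in that file would then hit the macro again.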

Compiling arrays stored in external text files (C++ compiled using command line g++)

I am a novice C++ programmer, so please forgive me if this is a naive question. I have files containing large arrays holding tens of thousands of strings that I have previously used in JavaScript applications. Is there some way to include these in C++ source code so that the arrays are compiled along with the code?
At present, the files are formatted as functions that return (JavaScript) literal arrays, like this:
// javascript array stored in .js text file
function returnMyArray()
{
    return ["string1", "string2", "string3", ... "stringBigNumber"];
} // eof returnMyArray()
I 'include' the external file with the usual JavaScript script & src tags and assign the array with something like:
myArray = returnMyArray();
I want to achieve the equivalent in C++, i.e. assign an array stored in a file to an array in my C++ source code so that the data is available for execution when compiled.
I suppose in theory I could copy and paste (suitably formatted) arrays from files into my C++ source code, but they are too large for this to be practical.
I can easily re-write the files to whatever format would be easiest for C++ to access the data - either in C++ array syntax or one string per line to be read into an array.
In a similar vein, is there an easy way to include files containing custom function libraries when compiling with g++ in the terminal? (My web searches show plenty of ways for various IDE applications, but I am writing source in vim and compiling with g++ on the command line.)
I am sorry if this is trivial and I have missed it, but I am stumped!
Thank you.
Here's how I'd structure this:
file: data.array
/* C++ style comments are ok in this file and will be ignored;
 * both single and multiline comments will work */
// the data in the array is a comma separated list, lines can be any length
1, 2, 3, 4,
5, 6, 7, 8,
9, 10, 11, 12,
// more comma separated data
9996, 9997, 9998, 9999
file: class.h
extern int myArray[]; // you should fill in the size if you can
// more stuff here
file: class.cpp
// if you have an editor that highlights syntax and errors, it may not like this;
// however, #include is handled before compiling and performs a blind substitution,
// so this is perfectly legal and should compile.
// Visual C++ 2010 highlights this as an error, but the project builds fine.
int myArray[] = {
#include "data.array"
};
// other definitions of stuff in class.h
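For completeness, here is a sketch of a main.cpp that uses the array, plus the g++ command line. The same pattern answers the second question: your own 'function library' files are just extra .cpp files listed on the g++ command line (or compiled separately and linked).
// file: main.cpp
#include <iostream>
#include "class.h"

int main()
{
    // class.h only declares myArray without a size, so sizeof won't work here;
    // printing the first element is enough as a sanity check.
    std::cout << myArray[0] << '\n';
    return 0;
}
From the terminal:
g++ -o program main.cpp class.cpp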

Distributing an R package containing unit tests

So I decided I would put my few R functions into a package, and I'm reading/learning Writing R Extensions.
R CMD check obviously complains about a number of things I'm not doing right.
After enough googling, I'm firing off a few questions here; this one is about testing style: I am using RUnit and I like having tests as close as possible to the code being tested. This way I won't forget about the tests, and I use the tests as part of the technical documentation.
for example:
fillInTheBlanks <- function(S) {
    ## NA in S are replaced with observed values
    ## accepts a vector possibly holding NA values and returns a vector
    ## where all observed values are carried forward and the first is
    ## carried backward. cfr na.locf from zoo library.
    L <- !is.na(S)
    c(S[L][1], S[L])[1 + cumsum(L)]
}
test.fillInTheBlanks <- function() {
    checkEquals(fillInTheBlanks(c(1, NA, NA, 2, 3, NA, 4)), c(1, 1, 1, 2, 3, 3, 4))
    checkEquals(fillInTheBlanks(c(1, 2, 3, 4)), c(1, 2, 3, 4))
    checkEquals(fillInTheBlanks(c(NA, NA, 2, 3, NA, 4)), c(2, 2, 2, 3, 3, 4))
}
but R CMD check issues NOTE lines, like this one:
test.fillInTheBlanks: no visible global function definition for
‘checkEquals’
and it complains about me not documenting the test functions.
I don't really want to add documentation for the test functions, and I definitely would prefer not having to add a dependency on the RUnit package.
How do you think I should look at this issue?
Where are you putting your unit tests? You may not want to put them in the R directory. A more standard approach is to put them under inst/unitTests. Have a look at this R-wiki page regarding the configuration.
Alternatively, you can specify what files will be exported in your NAMESPACE, and by extension, what functions should and should not be documented.
Beyond that, ideally you should have your tests run when R CMD check is called; that's part of the design. In which case, you should create a test script that calls your tests in a separate tests directory. And you will need to load the RUnit package in that script (but you don't need to make it a dependency of your package).
Edit 1:
Regarding your failure because it can't find the checkEquals function: I would change your function to be like this:
test.fillInTheBlanks <- function() {
    require(RUnit)
    checkEquals(fillInTheBlanks(c(1, NA, NA, 2, 3, NA, 4)), c(1, 1, 1, 2, 3, 3, 4))
    checkEquals(fillInTheBlanks(c(1, 2, 3, 4)), c(1, 2, 3, 4))
    checkEquals(fillInTheBlanks(c(NA, NA, 2, 3, NA, 4)), c(2, 2, 2, 3, 3, 4))
}
That way the package is loaded when the function is called or it will inform the user that the package is required.
Edit 2:
From "Writing R Extensions":
Note that all user-level objects in a package should be documented; if a package pkg contains user-level objects which are for “internal” use only, it should provide a file pkg-internal.Rd which documents all such objects, and clearly states that these are not meant to be called by the user. See e.g. the sources for package grid in the R distribution for an example. Note that packages which use internal objects extensively should hide those objects in a name space, when they do not need to be documented (see Package name spaces).
You can use the pkg-internal.Rd file as one option, but if you intend on having many hidden objects, this is usually handled in the declarations in the NAMESPACE.
Did you load the RUnit package?
Your best bet is probably to look at a package containing existing code using RUnit.