Could not find value of VARIABLE from The Puppet Template - templates

I'm having trouble getting a variable from a Puppet template.
The es_deploy.pp class:
class elasticsearch::es_deploy inherits elasticsearch {
  $cluster_name = 'cluster'
  notify { "Cluster_Name Value: $cluster_name": }
  $keys_cluster = keys($elasticsearch)
  deploy_on_host { $keys_cluster: es => $elasticsearch; }

  define deploy_on_host ($es) {
    $keys_node = keys($es[$title])
    deploy_instances { $keys_node: node_info => $es[$title], es_hosts => $es['node_list']; }

    define deploy_instances ($node_info, $es_hosts) {
      file { "/etc/elasticsearch/elasticsearch.yml":
        ensure  => file,
        mode    => '0644',
        owner   => root,
        group   => root,
        content => template("elasticsearch/elasticsearch.erb");
      }
      $network_host = $node_info['ip_address']
      notify { "Network_Host Value: $network_host": }
    }
  }
}
The template elasticsearch.erb:
cluster.name: <%= scope.lookupvar("elasticsearch::es_deploy::cluster_name") -%>
network.host: <%= @network_host %>
I don't know why I'm not able to get values from the es_deploy class directly. As a workaround I used scope.lookupvar() to get cluster_name, but the same approach isn't working for network_host. The template is included from the define block where I set network_host, so it should be accessible, but it isn't. The notify resources show both values correctly.
Puppet shows an error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to parse template elasticsearch/elasticsearch.erb:
Filepath: /usr/lib/ruby/vendor_ruby/puppet/parser/templatewrapper.rb
Line: 82
Detail: Could not find value for 'network_host' at /etc/puppet/environments/testing/modules/elasticsearch/templates/elasticsearch.erb:74
at /etc/puppet/environments/testing/modules/elasticsearch/manifests/es_deploy.pp:123 on node es1
I'd appreciate any help. Thanks!

Your template is unable to directly access the variables of class elasticsearch::es_deploy because it is not being invoked in that class's scope. Instead, it is being invoked in the scope of defined type elasticsearch::es_deploy::deploy_instances, which is unrelated to the scope of elasticsearch::es_deploy, naming and lexical nesting notwithstanding.
The Puppet Language Reference contains a section on scoping rules, which explains this. Since Puppet 3.0, all variable references are (supposed to be) looked up according to the static scoping rules, though there was at one time a bug in that regard with respect to references from templates. Relevant provisions from the reference include (emphasis in the original):
Code inside a class definition or defined type exists in a local scope.
Variables and defaults declared in a local scope are only available in that scope and its children.
[...]
[Version 3] of Puppet uses static scope for variables
[...]
In static scope, parent scopes are only assigned by class inheritance (using the inherits keyword). Any derived class receives the contents of its base class in addition to the contents of node and top scope.
All other local scopes have no parents — they only receive their own contents, and the contents of node scope (if applicable) and top scope.
If you want the template to be able to retrieve data via the expression @cluster_name when invoked from a defined-type instance, that needs to correspond to a local variable of that type. You could achieve that by passing it as a parameter, or just by making a local copy of the class's variable:
$cluster_name = $elasticsearch::es_deploy::cluster_name
My suggestion, however, would be to continue having the template look up the variable in the appropriate scope if that scope indeed can be viewed as a canonical source for the information.
I should say also that nesting class or defined type definitions inside class bodies has widely been considered poor form since well before the release of Puppet 3. Even in Puppet 2 and earlier, with their exclusive reliance on dynamic scope, lexically nesting definitions produced scope confusion. The Puppet 3 (and 4) docs specifically note that the practice is not deprecated in that version, but warn that it is a candidate for future deprecation. Also, they explicitly say:
Defined resource types can (and should) be stored in modules. Puppet
is automatically aware of any defined types in a valid module and can
autoload them by name.
Definitions should be stored in the manifests/ directory of a module
with one definition per file, and each filename should reflect the
name of its defined type.
I should be clear that in context, it is evident that the docs are distinguishing "should" from "must" in those comments.


Pybind11 Class Definition

What's the difference between the following class definitions in pybind11?
(1)
py::class_<Pet> pet(m, "Pet");
(2)
py::class_<Pet>(m, "Pet")
What's the use of pet in pet(m, "Pet")? I found this definition on page 42 section "5.8 Enumerations and internal types" of the documentation, which can be found here.
The first creates a named variable that you can refer to later within the same scope (as is done in the example that you reference); the second creates an (unnamed) temporary that you can only use by chaining the function calls that set more properties on the same statement. If the variable does not escape the local scope, the only difference is syntax. Otherwise, by naming it, you could for example pass it along to one or more helper functions (e.g. to factor out the definitions of common properties).
What is important to understand is that all Python classes, functions, etc. are run-time constructs: some Python API code needs to be called to create them, for example when the module is loaded. An object of py::class_ calls the APIs to create a Python class and to register some type info for internal pybind11 use (e.g. for casting later on). That is, it is only a recipe for creating the requested Python class, not that class itself. Once the Python class is created and its type info stored, the recipe object is no longer needed and can be safely destroyed (e.g. by letting it go out of scope).

What is the java "package private" equivalent in C++?

What is the Java "package private" equivalent in C++?
Java's package-private feature (where only classes within the same package have visibility) is useful when providing APIs.
Is there a similar feature in C++ (other than declaring other classes as friends)?
To elaborate: e.g. assume A.h and B.h are in the same package (i.e. an API lib).
File: A.h
class A
{
public:
    void doA();
private:
    int m_valueA;
};
File: B.h
class B
{
public:
    void doB();
private:
    int m_valueB;
};
What I want is:
public visibility: ONLY A::doA() and B::doB()
within the package (i.e. the API lib): A should be able to access B::m_valueB, and B should be able to access A::m_valueA,
WITHOUT making each class a friend of the other.
C++ has no equivalent to the Java concept of a "package". A package in Java is an arbitrary collection of code, defined only by being collected together into a bundle.
As such, "package private" kind of makes a mockery of the concept of encapsulation. Yes, the scope of access is "limited" to some degree, but it remains largely unbounded. So it's probably for the best that this isn't a natural feature of the language.
While C++ does not offer the concept of a "package", there are ways to allow specific arbitrary bundles of code to call functions which other arbitrary bundles of code cannot. This requires the use of a "key type" idiom.
A "key type" is a (usually empty) type whose main feature is that only certain code can create objects of that type. Any function taking such a type as a parameter therefore can only be called by code that is able to create the key type. The type therefore "unlocks" the function; hence the name.
The traditional use for this is to allow forwarding of private access through emplace and similar perfect-forwarding constructs in C++. The key type's default constructor is made private, and the only code that can create it consists of explicit friends of the key type. But since the type is publicly copyable, any forwarding function can copy it to the destination.
In your case, you want the key type to only be constructible from code in certain files. To do this, you simply have a header that provides a definition of the key type, typically as a simple empty class. In the public headers of your "package", any functions that you want to be "package private" will take a package_private as a const& parameter.
However, your public headers for the "package" do not include the definition of package_private, only a forward declaration of it. This means that code that only has access to the public headers is unable to create objects of that type. It can see the type name, but it cannot do anything with it.
So it might look like this:
// Internal header, included by all code in the "package":
namespace library {
    struct package_private {};
    inline constexpr package_private priv{}; // makes it easier to call these functions
}

// Public header for the library:
namespace library {
    struct package_private; // forward declaration only
    // Must take `const&` to avoid needing the definition of `package_private`:
    void package_private_function(package_private const&, ...);
}

// To call the package-private function inside the library:
library::package_private_function(library::priv, ...);

// This is a compile error for any code that only has the public header:
library::package_private priv{};
library::package_private_function(priv, ...);
C++ being C++, users can always cheat:
alignas(max_align_t) char data[sizeof(max_align_t)];
library::package_private &key = *reinterpret_cast<library::package_private*>(&data);
instance.pack_priv_function(key, ...);
This isn't even undefined behavior in C++20, so long as package_private is within the given alignment and size of data and it is an implicit lifetime type. You could do things that force package_private to not be these things, but that only renders this code UB. It will still compile and almost certainly still work; after all, the function isn't ever accessing this object.
The traditional way to hint to users that some type in a header is internal and shouldn't be used by external code is to stick it in a detail namespace.
C++20 modules provides a way to prevent a category of ways to subvert this. If we consider a module to be a "package", all you have to do is not export the package_private type. It can still be listed as parameters of functions that do get exported (and they no longer need to be const&). But the package_private type itself isn't exported.
Code within the module can use the name; you can put the definition into an implementation partition that gets imported by any in-module files that need this access. But code outside of the module which imports it can't use that name, so they can't even do the casting trick shown above. There are metaprogramming techniques that can inspect a function's signature without knowing its types, but those are really hard and are spoiled by overloading.
Then again, Java reflection can break any encapsulation, so it's not like "package private" is fool-proof.
C++ doesn't have packages as in Java. It does have namespaces, but a namespace is just that: a namespace. So it is a different beast.
A partial emulation could, in some situations, be inner classes (classes within other classes), since inner classes are considered members.
Besides that, there are header files and implementation (.cpp) files; in that sense you have units or modules that control what is actually visible (not just private but completely hidden, in particular if put into an anonymous namespace). This concept covers both a single .h/.cpp file pair and entire projects/libs/DLLs, which are more like a full package (and can choose which parts of the API they expose through what gets shown in their respective header files).
You might be interested in the PIMPL idiom in C++, which, as @darune said, is not equivalent but close in semantics. Typically, you'll do this:
In YourPublicClass.hpp
class MyPublicClass
{
public:
    struct Stuff; // <= This is where the magic happens: this stuff
                  //    is unknown/private to whoever includes this header
    // Public interface
    void doSomething();
    void manipulatePrivateStuff(Stuff * stuff);
    MyPublicClass();
    ~MyPublicClass();
private:
    Stuff * _member;
};
In YourPublicClass.cpp
#include <iostream>
#include "YourPublicClass.hpp"
struct MyPublicClass::Stuff
{
    // Public members that are only accessible from this compilation unit
    // but private to the rest of the code, like package-private
    int a;
    void explodeInTenSeconds() { if (!a--) std::cout << "Boom!" << std::endl; }
    Stuff(int delay = 10) : a(delay) {}
};
void MyPublicClass::doSomething() { _member->explodeInTenSeconds(); }
void MyPublicClass::manipulatePrivateStuff(Stuff * stuff) { stuff->a = 10; }
MyPublicClass::MyPublicClass() : _member(new Stuff(10)) {}
MyPublicClass::~MyPublicClass() { delete _member; }
If you need another class to access the "package private" Stuff, you'll need to move the MyPublicClass::Stuff definition to its own header and include that header in that class's implementation file (.cpp). This header shouldn't be included outside of your "package"; it's not public. It isn't required just to pass the pointer around: the compiler is perfectly fine with only knowing it's a pointer to an unspecified structure.
C++ doesn't have packages.
The consequence is that the requested behavior "different access for other code inside my package from code outside my package" isn't even meaningful. There is no "code inside my package" because there is no "my package".
Elaborating one step further: the C++ private access modifier meets the specification of Java's package private. It is inaccessible (private) to code outside the package, just like Java package private. And it is trivially accessible to code inside the same package, because there is no such code.
Obviously that's not useful for establishing collaboration. But it's what you get when you ask questions about C++ that are only meaningful in some other language. Another aspect of your question that is Java-centric and harmful for thinking about C++ is the assumption that all code is organized into classes. In C++ that isn't so: there are free functions, and functions associated with a class (by ADL) that are not class members.
First, it is important to understand what package-private is: it means that other members of the same package have access to the item. A package in Java is an arbitrary collection of code, defined only by being collected into a bundle.
C++ has no equivalent of Java's package-private feature.

Why dirty injection is necessary even for code within template's scope?

Please consider the following:
import options

template tpl[T](a: untyped): Option[T] =
  var b {.inject.}: T = 4
  a
  none(int)

discard tpl[int]:
  echo b
This builds and runs and results in output:
4
But, if you remove the {.inject.} pragma, you get:
...template/generic instantiation from here
Error: undeclared identifier: 'b'
I don't think we can consider the block of code echo b foreign to the "insides" of the template, since it's only used, expanded, within the template, and it's passed as an argument, not used outside.
Am I forced to use something dirty, dirtying my global scope, to make this work?
The way this works makes sense. You want to be explicit about what is available inside the code specified by the user.
You can keep your global scope hygienic by using block:
import options

template tpl[T](a: untyped): Option[T] =
  block:
    var b {.inject.}: T = 4
    a
    none(int)

discard tpl[int]:
  echo b
The reasoning is that the injected names are part of the template's public interface. You don't want the implementation details of the template to leak inside the user code and possibly create naming clashes there, so all variables created inside the template are hidden from the user code by default.
Using inject like this is not considered dirty, it's the right thing to do here.

Collecting classes definitions on the fly while parsing

Currently, I'm working on a simple compiler project.
Suppose having the following grammar:
file_input : file_item*
           ;
file_item : class_def
          | variable_decl
          ;
class_def : 'class' NAME scope
          ;
variable_decl : 'dim' NAME 'as' NAME
              ;
Now, while building the symbol table, if a variable is declared before the class definition we get a semantic error, because the required class is not found in the symbol table.
Simply put, we need to let the compiler wait until the class name is defined, so that declaring a variable of type foo and defining class foo later won't disturb the compiler.
Any suggestions on how to achieve that?
Thanks for your time.
You'll require a multi-pass approach:
First walk over the AST once to build the table mapping class names to class definitions, without doing anything else that would require performing lookups on the table. Then walk it a second time with the table already built, and you'll be able to look up any class you want when encountering a variable declaration.
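A minimal sketch of that two-pass idea, with hypothetical AST node shapes invented for illustration (the question doesn't specify its AST representation):

```typescript
// Hypothetical AST node shapes for the grammar above.
type ClassDef = { kind: "class"; name: string };
type VarDecl = { kind: "var"; name: string; typeName: string };
type FileItem = ClassDef | VarDecl;

function analyze(items: FileItem[]): string[] {
  const errors: string[] = [];

  // Pass 1: collect every class name; perform no lookups yet.
  const classes = new Set<string>();
  for (const item of items) {
    if (item.kind === "class") classes.add(item.name);
  }

  // Pass 2: every class is now known, so lookups are safe.
  for (const item of items) {
    if (item.kind === "var" && !classes.has(item.typeName)) {
      errors.push(`unknown type '${item.typeName}' for variable '${item.name}'`);
    }
  }
  return errors;
}

// A variable of type 'foo' declared before class 'foo' is now fine:
const errors = analyze([
  { kind: "var", name: "x", typeName: "foo" },
  { kind: "class", name: "foo" },
]);
console.log(errors); // []
```

Because the first pass never looks anything up, declaration order in the source no longer matters.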
One approach could be that when class foo is used in a variable declaration and it doesn't yet exist, create the class foo immediately, but add a flag (something like "undefined") to the class definition. When the class is actually defined later on, update the class definition in the symbol table and remove the "undefined" flag.
At the end of the compile, look through the symbol table for any classes that are still flagged as "undefined" and report the error then. It might be useful to record the line number of the first use of the class for error-reporting purposes.
This will work for now, but later on when you want to check for correct member access within a class, it will be tricky to do without the full class definition. You could do a similar thing where you defer the parsing of the member access until you have the definition, but overall I think it would be harder than just multi-pass as sepp2k suggested.
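A sketch of that flag-based approach; the function names (useType, defineClass, reportUnresolved) are hypothetical, invented for illustration:

```typescript
// Symbol-table entry carrying the "undefined" (forward-reference) flag.
interface ClassEntry {
  name: string;
  undefinedFlag: boolean;  // true while the class is used but not yet defined
  firstUseLine?: number;   // recorded for error reporting
}

const table = new Map<string, ClassEntry>();

// Called when a variable declaration references a class name.
function useType(name: string, line: number): void {
  if (!table.has(name)) {
    // Forward reference: insert a placeholder flagged as undefined.
    table.set(name, { name, undefinedFlag: true, firstUseLine: line });
  }
}

// Called when the class definition is actually parsed.
function defineClass(name: string): void {
  const entry = table.get(name);
  if (entry) {
    entry.undefinedFlag = false; // resolve the forward reference
  } else {
    table.set(name, { name, undefinedFlag: false });
  }
}

// At the end of the compile, report classes that were never defined.
function reportUnresolved(): string[] {
  return Array.from(table.values())
    .filter(e => e.undefinedFlag)
    .map(e => `class '${e.name}' used at line ${e.firstUseLine} but never defined`);
}

// Example: 'foo' is used before its definition, 'bar' is never defined.
useType("foo", 3);
useType("bar", 7);
defineClass("foo");
console.log(reportUnresolved()); // reports only 'bar'
```

The placeholder entry lets compilation proceed past the forward reference, while the flag guarantees genuinely missing classes are still caught at the end.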

How does require in node.js deal with globals?

I just found out that if I require a module and store it as a global, I can overwrite methods and properties in the module as shown below:
global.passwordhelper_mock = require("helpers/password")
sinon.stub(passwordhelper_mock, "checkPassword").returns true
If I then require another module which in itself utilizes the above stubbed method, my stubbed version will be used.
How does the require function in node.js take notice of these globals? Why does it only work when I overwrite/stub a module that has been saved as a global?
Thanks
How does the require function in node.js take notice of these globals?
Somewhere inside the module there must be a call like module.exports.someObject = function(x) {...} in order for someObject to become available where the module is required.
Why does it only work when I overwrite/stub a module that has been saved as a global?
Not sure I follow here. If the object were hidden, you couldn't overwrite it. You can overwrite any object available to you, either a global object (e.g. console) or a property of any object available to you at runtime (e.g. console.log).
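A detail worth adding: Node caches modules by resolved filename, so every require of the same path returns the same exports object, which is why stubbing a method on it is visible to every other module that requires it. A simplified sketch of that caching behavior (fakeRequire and its factory argument are hypothetical stand-ins for Node's loader, not real APIs):

```typescript
// Simplified model of Node's module cache (hypothetical, for illustration).
type Exports = Record<string, unknown>;
const moduleCache = new Map<string, Exports>();

// Stand-in for require(): the factory runs only on the first load;
// afterwards every caller receives the SAME cached exports object.
function fakeRequire(id: string, factory: () => Exports): Exports {
  let exports = moduleCache.get(id);
  if (!exports) {
    exports = factory();
    moduleCache.set(id, exports);
  }
  return exports;
}

// First "require" of the password helper:
const passwordHelper = fakeRequire("helpers/password", () => ({
  checkPassword: (_pw: string) => false,
}));

// Stub the method on the shared exports object (what sinon.stub does):
passwordHelper.checkPassword = (_pw: string) => true;

// A second "require" elsewhere returns the same cached object and sees the stub:
const helperAgain = fakeRequire("helpers/password", () => ({
  checkPassword: (_pw: string) => false,
}));
const check = helperAgain.checkPassword as (pw: string) => boolean;
console.log(check("anything")); // true: the stub is visible to all requirers
```

So the effect doesn't depend on storing the module as a global; any two requires of the same module share one exports object.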