How do I manage to test whether a template file exists? I want the webserver to load default.conf.erb only if there is no other appropriate config template. I thought it would work like this, but without success:
template "#{node[:nginx][:dir]}/sites-available/#{host_name}.loc.conf" do
  variables attribs
  if File.exists?("/templates/default/#{host_name}.conf.erb")
    source "#{host_name}.conf.erb"
  else
    source "default.conf.erb"
  end
  notifies :reload, "service[nginx]"
end
Such a concept (under the label of File Specificity) is already implemented in Chef.
Just place your file in templates/host-#{host_name}/ instead of templates/default/.
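For example, assuming a hypothetical node FQDN of foo.example.com, the recipe can always ask for source "default.conf.erb" and Chef's file specificity lookup will prefer the host-specific copy over the fallback:

```
templates/
  host-foo.example.com/
    default.conf.erb   # used only on the node whose FQDN is foo.example.com
  default/
    default.conf.erb   # fallback for every other node
```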
I want to avoid "code duplication" in my supervisor config file.
In short, supervisor has to launch a script with two parameters that are very similar:
command=my_script --param1=/path/to/bar.txt --param2=/path/to/foo.txt
I want to avoid the duplication of path/to.
I tried
[program:my_program]
PATHTO="/path/to"
command=my_script --param1=%(ENV_PATHTO)sbar.txt --param2=%(ENV_PATHTO)sfoo.txt
I've tried some variations with environment=PATHTO="/path/to", ${PATHTO}, etc.
Nothing seems to work.
Questions:
How do I define variables in a supervisor config file?
Is there a concept of a "template" config file?
According to the file format section in the documentation:
http://supervisord.org/configuration.html?#file-format
environment variables that are present in the environment at the time that supervisord is started can be used in the configuration file using the Python string expression syntax %(ENV_X)s.
Variables set via environment=PATHTO="/path/to" will be inherited by child processes, but this does not change the environment of supervisord itself.
http://supervisord.org/configuration.html?#supervisord-section-values
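Putting those two statements together, a minimal sketch (my_script and the paths are placeholders from the question): the variable has to exist in supervisord's own environment before it starts; it cannot be defined inside the config file itself.

```ini
; export PATHTO=/path/to   <-- must happen in the shell or init script
; that starts supervisord, before supervisord launches
[program:my_program]
command=my_script --param1=%(ENV_PATHTO)s/bar.txt --param2=%(ENV_PATHTO)s/foo.txt
```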
It seems trivial, but I've searched far and wide.
I'm using this resource to make v8 run with ES Modules and I'm trying to implement my own search/load algorithm. Thus far, I've managed to make a simple system which loads a file from a known location, however I'd like to implement external modules. This means that the known location is actually unknown throughout the application. Take the following directory tree as an example:
~/
- index.js
import 'module1_index'; // This is successfully resolved to /libs/module1/module1_index.js
/libs/module1/
- module1_index.js
export * from './lib.js' // This import fails because it is looking for ./lib.js in ~/source
- lib.js
export /* literally anything */
The above example begins by executing the index.js file from ~. When module1_index.js is executed, lib.js is looked for from ~ and consequently fails. In order to address this, the files must be looked for relative to the file being executed at the moment, however I have not found a means to do this.
First Attempt
I'm given the opportunity to look for the file in the callResolve method (main.cpp:280):
v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context, v8::Local<v8::String> specifier, v8::Local<v8::Module> referrer)
or in loadModule (main.cpp:197)
v8::MaybeLocal<v8::Module> loadModule(char code[], char name[], v8::Local<v8::Context> cx)
however, as mentioned, I have found no function by which to extract the ScriptOrigin from the module. I should mention that when files are successfully resolved, the ScriptOrigin is initialized with the exact path to the file, and is reliable.
Second Attempt
I set up a stack, which keeps track of the current file being executed. Every import which is made is pushed onto the stack. Once the file has finished executing, it is popped. This also did not work, as there was no way to reliably determine once the file had finished executing.
It seems that the loadModule function does just that: loads. It does not execute, so I cannot pop after the module has loaded, as the imports are not fully resolved. The checkModule/execModule functions are only invoked on dynamic imports, making them useless to determining the completion of a static import.
I'm at a loss. I'm not familiar with v8 enough to know where to look, although I have dug through some NodeJS source code looking for an implementation, to no avail.
Any pointers are greatly appreciated.
Thanks.
Jake.
I don't know much about module resolution, but looking at V8's sources, I can see an example mapping a v8::Module to a std::string absolute_path, which sounds like what you're looking for. I'm not copying the whole code here, because the way it uses custom metadata is a bit involved; the short story is that it keeps a std::unordered_map to keep data about each module's source on the side. (I wonder if it would be possible to use Module::ScriptId() as that map's key, for simplification.)
Code search finds a bunch more example uses of InstantiateModule, mostly in tests. Tests often serve as useful examples/documentation :-)
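For what it's worth, the bookkeeping pattern from d8 can be sketched standalone. Everything below is an analogue rather than real V8 API: a plain int stands in for whatever stable key you derive from the module (Module::ScriptId(), say), and the map plays the role of d8's per-module metadata table.

```cpp
#include <string>
#include <unordered_map>

// Side table holding metadata about each compiled module; the real code
// would key this by something stable like v8::Module::ScriptId().
struct ModuleInfo {
  std::string absolute_path;  // origin recorded at compile time
};

static std::unordered_map<int, ModuleInfo> g_module_registry;

// Call where loadModule compiles the module: remember where it came from.
void RegisterModule(int script_id, const std::string& absolute_path) {
  g_module_registry[script_id] = ModuleInfo{absolute_path};
}

// Call from the resolve callback: a relative specifier is resolved against
// the directory of the *referrer*, not the process working directory.
std::string ResolveSpecifier(int referrer_script_id,
                             const std::string& specifier) {
  const std::string& base = g_module_registry.at(referrer_script_id).absolute_path;
  if (specifier.rfind("./", 0) == 0) {  // relative import
    std::string dir = base.substr(0, base.find_last_of('/') + 1);
    return dir + specifier.substr(2);
  }
  return specifier;  // bare specifier: fall through to the normal search
}
```

With module1_index.js registered as /libs/module1/module1_index.js, resolving "./lib.js" from it yields /libs/module1/lib.js, matching the tree in the question.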
My application uses log4j but OkHttpClient uses java util logging. So apart from log4j.properties, I created a logging.properties file with the following contents:
handlers=java.util.logging.FileHandler
.level=FINE
okhttp3.internal.http2.level=FINE
java.util.logging.FileHandler.pattern = logs/%hjava%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
I then added this to jvm params used for starting the application -Djava.util.logging.config.file="file://${BASE_DIR}/logging.properties"
But I don't see any new folders being created as indicated by the Filehandler. Any one know why?
But I don't see any new folders being created as indicated by the Filehandler. Any one know why?
The FileHandler will not create any new folders. A directory must be created before the FileHandler will create a file.
The system property requires a path to a file located on the filesystem. It will not expand system properties or environment variables given in the dollar-sign syntax.
You can use a relative path based on the working directory, or an absolute path to logging.properties. The logging properties cannot be packaged inside an archive.
If you want to work around this limitation, create a custom config class and use the java.util.logging.config.class property in conjunction with the java.util.logging.config.file property. Write a class that reads the file://${BASE_DIR}/logging.properties value and performs the needed transformation into a path to a file, then updates the configuration. On JDK 9 or newer you can use LogManager.updateConfiguration; on older versions you need readConfiguration, plus code to work around limitations of the LogManager.
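A sketch of such a config class, assuming the only transformations needed are stripping the file:// scheme and expanding ${VAR} references (the class name and fallback behavior are made up, not part of any API):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.LogManager;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical class for -Djava.util.logging.config.class=LoggingConfig.
// The LogManager instantiates it via its public no-arg constructor.
public class LoggingConfig {
    public LoggingConfig() throws Exception {
        String raw = System.getProperty("java.util.logging.config.file",
                                        "logging.properties");
        try (InputStream in = Files.newInputStream(Paths.get(resolve(raw)))) {
            LogManager.getLogManager().readConfiguration(in);
        }
    }

    // Strip a leading file:// scheme and expand ${NAME} from the
    // environment, falling back to system properties.
    static String resolve(String raw) {
        String s = raw.startsWith("file://")
                 ? raw.substring("file://".length()) : raw;
        Matcher m = Pattern.compile("\\$\\{([^}]+)\\}").matcher(s);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = System.getenv(m.group(1));
            if (value == null) value = System.getProperty(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

You would then start the JVM with -Djava.util.logging.config.class=LoggingConfig alongside the existing -Djava.util.logging.config.file value.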
I'm trying to deploy a process definition from a file using the following code
DeploymentBuilder deploymentBuilder = repositoryService.createDeployment().name(definitionName);
deploymentBuilder.addInputStream(definitionName, definitionFileInputStream);
String deploymentId = deploymentBuilder.deploy().getId();
System.out.println(deploymentId);
The above code runs successfully and the new deploymentId is printed out.
Later, I tried to list the deployed process definitions using the following code
List<ProcessDefinition> definitions = repositoryService.createProcessDefinitionQuery().list();
System.out.println(definitions.size());
The above code runs successfully but the output is always 0.
I've done some investigations and found that in the ACT_GE_BYTEARRAY table an entry with the corresponding deploymentId exists and the BYTES_ column contains that contents of the definitions file.
I have also found that there is no corresponding entry found in ACT_RE_PROCDEF table.
Is there something missing? From the API and the examples I found, it seems the above code should suffice, or is there a missing step?
Thanks for your help
It turned out that the issue was related to definitionName (thanks thorben!), as it has to end with either .bpmn20.xml or .bpmn.
After further testing, the suffix is required for the definitionName in the following line of the code
deploymentBuilder.addInputStream(definitionName, definitionFileInputStream);
Leaving the following definitionName without the suffix is fine
repositoryService.createDeployment().name(definitionName);
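That naming rule is easy to capture in a small hypothetical helper; the suffix list below is just what was observed above, the real engine may recognize additional extensions:

```java
// Hypothetical helper: the engine only parses a deployment resource as a
// BPMN process when the resource name ends with a recognized suffix.
class BpmnResourceNames {
    static final String[] BPMN_SUFFIXES = {".bpmn20.xml", ".bpmn"};

    static boolean isBpmnResource(String resourceName) {
        for (String suffix : BPMN_SUFFIXES) {
            if (resourceName.endsWith(suffix)) {
                return true;
            }
        }
        return false;
    }
}
```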
It seems that you forgot the isExecutable flag on your deployed process definitions. Please check whether your process model contains an isExecutable flag. If you use the Camunda Modeler, simply set this option in the properties panel of the process.
If you call #deploy() with non-executable definitions, a deployment is created, but the process definitions are not deployed since they are not executable.
In the latest version of the Camunda platform (7.7), a new method called #deployWithResult() was added to the DeploymentBuilder. This method returns the deployed process definitions, so it is easy to check whether process definitions were deployed.
I am using the cookbook github.com opscode-cookbooks/openldap.
I wrote a wrapper cookbook "lab_openldap" that includes the "openldap::server" recipe.
The server.rb recipe uses the following clause to upload the PEM file from the cookbook's files/ssl/*.pem to the location node['openldap']['ssl_cert'] on the server.
if node['openldap']['tls_enabled'] && node['openldap']['manage_ssl']
  cookbook_file node['openldap']['ssl_cert'] do
    source "ssl/#{node['openldap']['server']}.pem"
    mode 00644
    owner "root"
    group "root"
  end
end
The recipe tries to read the PEM from the "openldap" cookbook's files/ssl/#{node['openldap']['server']}.pem location.
I have my PEM file at the same files/ssl/#{node['openldap']['server']}.pem location in the wrapper "lab_openldap" cookbook.
Is it possible to modify the "lab_openldap::server.rb" recipe to load PEM from a wrapper cookbook?
Notes:
I am aware of https://github.com/bryanwb/chef-rewind but it does not seem to manage this situation.
Update
The provided answer using r.resource is correct.
Actually, the issue in this particular code is the "source" attribute, which according to http://docs.opscode.com/resource_cookbook_file.html refers to the location of a file in the /files directory of a cookbook located in the chef-repo.
r = resources("cookbook_file[#{node['openldap']['ssl_cert']}]")
r.cookbook('lab_openldap')
cookbook_file node['openldap']['ssl_cert'] do
  source "ssl/#{node['openldap']['server']}.pem"
  mode 00644
  owner "root"
  group "root"
end
You can do this now in chef directly:
include_recipe "openldap::server"
edit_resource(:cookbook_file, node['openldap']['ssl_cert']) do
  cookbook cookbook_name
end
Note that to avoid needing this workaround in the first place, library cookbooks like openldap should be written as custom resources rather than as recipes. They should then expose properties allowing their templates to be overridden, using the pattern in this answer:
https://stackoverflow.com/a/63570830/506908
Of course it is! You just need to set the cookbook attribute on the resource when you wrap it. By default, it's "the current cookbook", but you can change it:
r = resources("cookbook_file[#{node['openldap']['ssl_cert']}]")
r.cookbook('my_wrapper_cookbook')
If you look at Bryan's Chef Rewind, you'll see it does the same thing.