How to implement my product resource into a Pods structure? - ember.js

Reading http://www.ember-cli.com/#pod-structure
Let's say I have a product resource, which currently has the following directory structure:
app/controllers/products/base.js
app/controllers/products/edit.js
app/controllers/products/new.js
app/controllers/products/index.js
With pods, is all the logic in these files put into a single file, app/products/controller.js?
At the same time, my routes and templates for these resources currently look like:
app/routes/products/base.js
app/routes/products/edit.js
app/routes/products/new.js
app/routes/products/index.js
app/templates/products/-form.hbs
app/templates/products/edit.hbs
app/templates/products/index.hbs
app/templates/products/new.hbs
app/templates/products/show.hbs
How should this be converted to Pods?

You can use ember generate --pod --dry-run to help with that:
$ ember g -p -d route products/base
version: 0.1.6
The option '--dryRun' is not supported by the generate command. Run `ember generate --help` for a list of supported options.
installing
You specified the dry-run flag, so no changes will be written.
create app/products/base/route.js
create app/products/base/template.hbs
installing
You specified the dry-run flag, so no changes will be written.
create tests/unit/products/base/route-test.js
$
(I don't know why it complains yet still honours the option; it might be a bug.)
So you'd end up with a structure like:
app/products/base/route.js
app/products/edit/route.js
etc.
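Laid out in full, the files from the question would map to something like the sketch below. (This is an illustration, not verified output; in particular, where the -form partial lands in a pods layout can depend on your ember-cli/resolver version.)

```
app/products/base/route.js
app/products/base/controller.js
app/products/edit/route.js
app/products/edit/controller.js
app/products/edit/template.hbs
app/products/new/route.js
app/products/new/controller.js
app/products/new/template.hbs
app/products/index/route.js
app/products/index/controller.js
app/products/index/template.hbs
app/products/show/template.hbs
app/products/-form/template.hbs
```

Each route's controller, route, and template live together in one pod directory instead of being split across app/controllers, app/routes, and app/templates.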

Related

How to add multiple bases in /etc/default/jetty

I am trying to add multiple jetty bases to jetty server. Documentation says it is possible without providing the information on how to do it.
So the /etc/default/jetty file with a single Jetty base, which works, is:
JETTY_HOME=/opt/jetty
JETTY_BASE=/opt/jetty/my_base
JETTY_USER=jetty
JETTY_HOST=jetty
JETTY_ARGS=jetty.http.port=8989
JETTY_LOGS=/opt/jetty/logs/
I have tried combinations like:
JETTY_HOME=/opt/jetty
JETTY_BASE=/opt/jetty/my_base
JETTY_BASE=/var/www/domain1
JETTY_BASE=/var/www/domain2
...
I have also tried comma-separated values, which does not work:
JETTY_HOME=/opt/jetty
JETTY_BASE=/opt/jetty/my_base, /var/www/domain1, /var/www/domain2
....
Has anyone managed to get this working? Can you let me know how to achieve this? Thanks.
Create more /etc/default/<name> entries.
Say you want 3 bases.
Let's call them tau, upsilon, and phi.
You'll create ...
/etc/default/tau
/etc/default/upsilon
/etc/default/phi
each of which is a standalone instance of Jetty, with their own ${jetty.base} directory and a common ${jetty.home} directory.
The jetty.sh you (likely) copied into place for your service entry also needs the same name.
So after cp jetty.sh /path/to/service.d/tau.sh, the tau.sh service script is what causes /etc/default/tau to be used/searched for.
(Note: depending on your chosen shell, a symlink will probably work too)
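As a sketch, the per-instance /etc/default/<name> entries described above could be generated with a small loop. The paths here are illustrative (it writes under ./etc-default and ./jetty-base so it can be dry-run as an unprivileged user; on a real host you would target /etc/default and your actual base directories), and the port numbering is just one way to keep the instances from colliding.

```shell
# Illustrative sketch: one /etc/default/<name> entry per Jetty instance,
# each with its own jetty.base and a distinct HTTP port, sharing jetty.home.
# Writes under ./etc-default and ./jetty-base for a safe dry run; on a real
# host you'd use /etc/default and real base directories instead.
DEFAULT_DIR=./etc-default
BASE_ROOT=./jetty-base
JETTY_HOME=/opt/jetty
port=8989
for name in tau upsilon phi; do
    mkdir -p "$DEFAULT_DIR" "$BASE_ROOT/$name"
    cat > "$DEFAULT_DIR/$name" <<EOF
JETTY_HOME=$JETTY_HOME
JETTY_BASE=$BASE_ROOT/$name
JETTY_USER=jetty
JETTY_ARGS=jetty.http.port=$port
JETTY_LOGS=$JETTY_HOME/logs/$name/
EOF
    port=$((port + 1))
done
```

Each generated file mirrors the single-base /etc/default/jetty from the question, just with a per-instance JETTY_BASE and port.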

How can I run a custom command at build time in meson?

In my projects I adopt a semantic versioning scheme following the standard described by semver, obtaining something like product_v1.2.3-alpha-dirty.elf.
I work with embedded systems, and with make I usually generate a version_autogen.h file at compile time that contains both the version number, e.g. 1.4.3.1, and the current git repository state, e.g. --dirty, --clean and so on, using shell commands.
I'm starting to use Meson, and it is very easy and flexible, but custom commands like
run_command('command', 'arg1', 'arg2', 'arg3')
are available only at configure time while I need them at compile time to retrieve information like git status and similar.
How can I do that?
After a deeper research I found that custom_target() (as suggested by nielsdg) can do my job. I did something like this:
# versioning
version_autogen_h = custom_target(
    'version_autogen.h',
    output : 'version_autogen.h',
    input : 'version_creator.sh',
    command : ['@INPUT@', '0', '0', '1', 'alpha.1', '@OUTPUT@'],
)
where version_creator.sh is my bash script that retrieves git info and creates the file version_autogen.h given the version numbers passed as the command arguments. The custom target is created at compile time, so my script is executed at compile time as well, exactly when I want it to be.
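The script itself is not shown in the answer; purely as an illustration, a minimal version_creator.sh consistent with the description (argument order: major, minor, patch, pre-release tag, output path, matching the command array in the custom_target above) might look like this. The header guard, macro name, and fallback value are assumptions, and defaults are provided so the sketch can run standalone:

```shell
#!/bin/sh
# Hypothetical version_creator.sh sketch.
# Args: major minor patch pre-release-tag output-path
# (the order used in the custom_target command above).
MAJOR=${1:-0}; MINOR=${2:-0}; PATCH=${3:-1}; PRERELEASE=${4:-alpha.1}
OUT=${5:-version_autogen.h}
# Append git state; --dirty marks uncommitted changes. Fall back to
# "unknown" when running outside a git repository.
GIT_DESC=$(git describe --always --dirty 2>/dev/null || echo unknown)
cat > "$OUT" <<EOF
#ifndef VERSION_AUTOGEN_H
#define VERSION_AUTOGEN_H
#define VERSION_STRING "v$MAJOR.$MINOR.$PATCH-$PRERELEASE-$GIT_DESC"
#endif
EOF
```

Because the script runs as the custom target's command, the git information is picked up at build time rather than at configure time.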
I also discovered that Meson has generators to do something similar, but in that case they transform an input file into one or more output files, so they didn't fit my case, where I didn't need a file as input, just version numbers.
Meson has a specialized command for this job: vcs_tag.
This command detects revision control commit information at build time
and places it in the specified output file. This file is guaranteed to
be up to date on every build. Keywords are similar to custom_target.
So it'd look a bit shorter, with the possibility of avoiding the generation script, and have just
git_version_h = vcs_tag(input : 'version.h.in',
output : 'version.h')
where version.h.in is a file you should provide containing the @VCS_TAG@ string that will be replaced, e.g.
#define MYPROJ_VERSION "@VCS_TAG@"
Of course, you can name the file and header according to your project style, and possibly add other definitions too. It is also possible to use another replace string and your own command line to generate the version, e.g.
vcs_tag(command: [
'git', '--git-dir', meson.build_root(),
'describe', '--tags', '--long',
'--match', '?.*.*', '--always'
],
...
)
which I found and adapted from here

adding a custom plugin to google-fluentd

I need to capture file descriptors for a given process. This is similar to what collectd's processes plugin does, but I need to get this into fluentd, google-fluentd specifically.
I've added my plugin under the /etc/google-fluentd/plugin directory with no luck; it is not getting registered. I've even moved it under /opt/google-fluentd/embedded/lib/ruby/gems/2.6.0/gems/fluentd-1.7.4/lib/fluent/plugin, still with no luck. Out of desperation I also tried renaming in_tail.rb to in_tail2.rb, and the tail plugin was gone.
2020-08-14 18:28:16 -0700 [error]: fluent/log.rb:362:error: config error file="/etc/google-fluentd/google-fluentd.conf" error_class=Fluent::ConfigError error="Unknown input plugin 'tail'. Run 'gem search -rd fluent-plugin' to find plugins"
This tells me that there is some other place where the plugin must be mentioned. Is it too naive to think that I can just write a single-file plugin under /etc/google-fluentd/plugin?
After a few hours of going up and down the call stack in fluentd, trying to figure out which plugins fluentd loads and why, here is what I figured out.
@type has to match the registration call and the filename!
i.e. I had used
@type fc_count
my filename was
/etc/google-fluentd/in_fd.rb
with
Fluent::Plugin.register_input('fd_count', self)
Even with the type and registration matching, fluentd couldn't match the file path to plugin/in_fd.rb as it loads the configuration. Basically, if you don't use a plugin it won't load it, and the way it determines that is by going through the config. This is why, when I renamed an existing input plugin's file, it was no longer found.
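Concretely, the three names have to line up. The sketch below writes a minimal, consistent layout to a scratch directory (on a real host these files would live under /etc/google-fluentd/); the fd_count name and the FdCountInput class are illustrative choices, not taken from a working install:

```shell
# Illustration of the naming convention for a custom fluentd input plugin.
# Written to a scratch directory; on a real google-fluentd host these
# would live under /etc/google-fluentd/.
mkdir -p scratch/plugin
# 1. Filename: in_<type>.rb under the plugin directory
cat > scratch/plugin/in_fd_count.rb <<'EOF'
require 'fluent/plugin/input'

module Fluent
  module Plugin
    class FdCountInput < Input
      # 2. Registration name matches the <type> part of the filename
      Fluent::Plugin.register_input('fd_count', self)
    end
  end
end
EOF
# 3. The config's @type uses the same name, which is what triggers loading
cat > scratch/google-fluentd.conf <<'EOF'
<source>
  @type fd_count
</source>
EOF
```

With all three spellings identical (in_fd_count.rb, register_input('fd_count', ...), @type fd_count), fluentd can resolve the @type in the config to the file in the plugin directory.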

How to pass command line arguments into the implementation js file in gauge project?

I am using gauge-js for running Puppeteer scripts, and I am trying to pass custom arguments from the command line.
While running gauge run spec to run the test cases, I want to pass a custom argument like gauge run spec --username=test and read that value inside my implementation files.
You cannot pass custom arguments to Gauge. However, you can use environment variables to pass any additional information you need in your implementation files.
For example, you can run gauge as
(on mac/*nix)
username=test gauge run spec
or
(on windows)
set username=test
gauge run specs
and use the environment variable in your implementation file using process.env.username.
You can additionally set the variable in the .properties files in the env folder. These get picked up as environment variables as well.

Cloud ML Feature methods

The pre-processing page in the Cloud ML how-to guide (https://cloud.google.com/ml/docs/how-tos/preprocessing-data) says that you should see the SDK reference documentation for details about each type of feature and their methods.
Can anyone point me to this documentation, or a list of feature types and their methods? I'm trying to set up a discrete target but keep getting "data type int64 expected type: float" errors whenever I set my target to .discrete() rather than .continuous().
You need to download the SDK reference documentation:
1. Navigate to the directory where you want to install the docs in the command line. If you used ~/google-cloud-ml to download the samples as recommended in the setup guide, that's a good place.
2. Copy the documentation archive to your chosen directory using gsutil:
gsutil cp gs://cloud-ml/sdk/cloudml-docs.latest.tar.gz .
3. Unpack the archive:
tar -xf cloudml-docs.latest.tar.gz
This creates a docs directory inside the directory that you chose. The documentation is essentially a local website: open docs/index.html in your browser to open it at its root. You can find the transform references in there.
(This information is now in the setup guide as well. It's the final step under LOCAL: MAC/LINUX)
On the type-related errors, let's assume for a bit that your feature set is specified somewhat along the following lines:
feature_set = {
    'target': features.target('category').discrete()
}
When a discrete target is specified like above, the data-type of the target feature is an int64 due to one of the following:
No vocab for target data-column (i.e. 'category') was generated during the analysis of your data, i.e. the metadata (in the generated metadata.yaml) has an empty list for the target data-column's vocab.
A vocab for 'category' was indeed generated, and the data-type of the very first item (or key) of this vocab was an int.
Under these circumstances, if a float is encountered, the transformation to the target feature's data-type will fail.
Casting the entire data column ('category' in this case) to float instead should help with this.