How to change the experiment file path generated when running Ray's run_experiments()?

I'm using the following spec on my code to generate experiments:
experiment_spec = {
    "test_experiment": {
        "run": "PPO",
        "env": "MultiTradingEnv-v1",
        "stop": {
            "timesteps_total": 1e6
        },
        "checkpoint_freq": 100,
        "checkpoint_at_end": True,
        "local_dir": '~/Documents/experiment/',
        "config": {
            "lr_schedule": grid_search(LEARNING_RATE_SCHEDULE),
            "num_workers": 3,
            'observation_filter': 'MeanStdFilter',
            'vf_share_layers': True,
            "env_config": {
            },
        }
    }
}
ray.init()
run_experiments(experiments=experiment_spec)
Note that I use grid_search to try various learning rates. The problem is that "lr_schedule" is defined as:
LEARNING_RATE_SCHEDULE = [
    [
        [0, 7e-5],  # [timestep, lr]
        [1e6, 7e-6],
    ],
    [
        [0, 6e-5],
        [1e6, 6e-6],
    ]
]
So when the experiment checkpoint is generated, its path name contains a lot of [ characters, making the path unreadable to the interpreter. Like this:
~/Documents/experiment/PPO_MultiTradingEnv-v1_0_lr_schedule=[[0, 7e-05], [3500000.0, 7e-06]]_2019-08-14_20-10-100qrtxrjm/checkpoint_40
The logical solution would be to rename it manually, but I discovered that its name is referenced in other files such as experiment_state.json, so the best solution is to set a custom experiment path and name.
I didn't find anything about this in the documentation.
This is my project, if it helps.
Can someone help?
Thanks in advance!

You can set custom trial names - https://ray.readthedocs.io/en/latest/tune-usage.html#custom-trial-names. Let me know if that works for you.
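Beyond the docs link, here is a minimal sketch of the idea, assuming a Ray version whose tune.run / Experiment accepts a trial_name_creator callable (the exact plumbing varies between Ray releases, so check the docs for yours):

```python
# Sketch: generate short, filesystem-safe trial directory names instead of
# the default name that embeds every grid-searched config value.
def trial_name_creator(trial):
    # trainable_name and trial_id are attributes of Tune's Trial object;
    # together they are unique and contain no brackets or spaces.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)
```

You would then pass trial_name_creator=trial_name_creator to tune.run (some older releases required wrapping it in tune.function(...)), and the generated directory name no longer embeds the bracketed lr_schedule.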


nvim coc-eslint .eslintrc with prettier/prettier sets double quotes instead of single quotes

I need to apologize in advance because I am totally confused at the moment. I've been wrangling with my .eslintrc.json (at the end of my post) for several hours now.
All I want is to set single quotes. To my understanding, single quotes are part of the default settings of "eslint:recommended". But when I execute Prettier, double quotes are being set.
Next thing I tried was setting single quotes in rules for "prettier/prettier". That's not working either. Prettier is still setting double quotes.
The last of my options was setting single quotes directly in rules as "quotes": ["error", "single"].
Strangely enough though, double quotes are being shown as linting errors while editing.
I am running out of options.
Maybe someone can help me.
Here's my .eslintrc.json:
{
  "env": {
    "browser": true,
    "commonjs": true,
    "es2021": true,
    "node": true
  },
  "extends": ["eslint:recommended", "prettier"],
  "plugins": ["prettier", "@babel", "vue"],
  "parserOptions": {
    "ecmaVersion": 2022,
    "parser": "@babel/eslint-parser",
    "sourceType": "module"
  },
  "rules": {
    "no-console": "off",
    "indent": ["error", 2],
    "linebreak-style": ["error", "unix"],
    "quotes": ["error", "single"],
    "semi": ["error", "always"],
    "prettier/prettier": [
      "error",
      {
        "singleQuote": true,
        "onlyUseLocalVersion": false
      }
    ]
  }
}
Finally I tried to set
{
  "prettier.singleQuote": true
}
in coc-settings.json (:CocConfig), and now it works.
That shouldn't be necessary if singleQuote is already set in .eslintrc, so I consider setting singleQuote in coc-settings.json more of a workaround than a real solution.
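As an aside, a common way to avoid such conflicts (assuming eslint-plugin-prettier and eslint-config-prettier are installed) is to extend "plugin:prettier/recommended", which enables the plugin, turns off the ESLint formatting rules that conflict with Prettier, and reports Prettier differences as lint errors:

```json
{
  "extends": ["eslint:recommended", "plugin:prettier/recommended"],
  "rules": {
    "prettier/prettier": ["error", { "singleQuote": true }]
  }
}
```

With this setup, the quote style is controlled in one place (the prettier/prettier rule or a .prettierrc), so the editor integration and the CLI agree.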

Regex Statement in VSCode snippet for removing file extension

I'd like to create a VS Code snippet for importing CSS into a React component. If I'm using the snippet in "MyComponent.tsx", then I'd like the snippet to import the associated CSS file for the component:
import "./MyComponent.css";
The component and its CSS will always be located in the same directory.
I thought that the following snippet would be able to do this:
//typescriptreact.json
"import componet css": {
"prefix": "icss2",
"body": [
"import \"./${1:$TM_FILENAME/^(.+)(\.[^ .]+)?$/}.css\";"
],
"description": ""
},
But this results in:
import "./MyComponent.tsx/^(.+)([^ .]+)?$/.css";
What's the correct way to do this?
You can use
"import componet css": {
"prefix": "icss2",
"body": [
"import \"./${TM_FILENAME_BASE/^(.*)\\..*/$1/}.css\";"
],
"description": ""
}
The ${TM_FILENAME_BASE} variable holds the file name without the path, and the ^(.*)\\..* regex captures everything up to the last remaining . while matching the extension; only the captured part remains, because the $1 replacement pattern refers to the Group 1 value.
"import component css": {
"prefix": "icss2",
"body": [
"import \"./${TM_FILENAME_BASE}.css\";"
],
"description": ""
}
TM_FILENAME_BASE: "The filename of the current document without its extensions" (from the snippet variables documentation).
So there is no need to remove the .tsx extension via a transform - it is already done for you.
The more interesting question is what if you have a file like
myComponent.next.tsx // what should the final result be?
${TM_FILENAME_BASE} will only take off the final .tsx, resulting in import "./myComponent.next.css";
@Wiktor's transform results in import "./myComponent.css";
Which is correct in your case? Is something like myComponent.next.tsx a possible case for you? If not just use ${TM_FILENAME_BASE} with no need for a transform.
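The difference between the two approaches can be simulated with an ordinary regex engine (the file names here are just examples):

```python
import re

# ${TM_FILENAME_BASE} already strips the final extension:
# myComponent.next.tsx -> myComponent.next
base = "myComponent.next"

# The ^(.*)\..* transform then greedily captures everything up to the last
# remaining dot and drops the rest, leaving only "myComponent".
print(re.sub(r"^(.*)\..*", r"\1", base))  # myComponent

# A single-extension file is left untouched by the transform,
# because no dot remains for the regex to match.
print(re.sub(r"^(.*)\..*", r"\1", "MyComponent"))  # MyComponent
```

So the extra transform only matters for multi-dot file names; otherwise ${TM_FILENAME_BASE} alone is enough.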

Understanding the GN build system in Fuchsia OS, what is `build_api_module`?

GN stands for Generate Ninja. It generates Ninja files, which build things. The main file is BUILD.gn at the root of the Fuchsia source tree.
It contains a lot of build_api_module calls:
build_api_module("images") {
  testonly = true
  data_keys = [ "images" ]
  deps = [
    # XXX(46415): as the build is specialized by board (bootfs_only)
    # for bringup, it is not possible for this to be complete. As this
    # is used in the formation of the build API with infrastructure,
    # and infrastructure assumes that the board configuration modulates
    # the definition of `zircon-a` between bringup/non-bringup, we can
    # not in fact have a complete description. See the associated
    # conditional at this group also.
    "build/images",

    # This has the images referred to by $qemu_kernel_label entries.
    "//build/zircon/zbi_tests",
  ]
}
However, it's unclear to me what this does exactly. Looking at its definition in build/config/build_api_module.gn, for example:
template("build_api_module") {
  if (current_toolchain == default_toolchain) {
    generated_file(target_name) {
      outputs = [ "$root_build_dir/$target_name.json" ]
      forward_variables_from(invoker,
                             [
                               "contents",
                               "data_keys",
                               "deps",
                               "metadata",
                               "testonly",
                               "visibility",
                               "walk_keys",
                               "rebase",
                             ])
      output_conversion = "json"
      metadata = {
        build_api_modules = [ target_name ]
        if (defined(invoker.metadata)) {
          forward_variables_from(invoker.metadata, "*", [ "build_api_modules" ])
        }
      }
    }
  } else {
    not_needed([ "target_name" ])
    not_needed(invoker, "*")
  }
}
it looks like it simply generates a file.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
The build_api_module() targets generate JSON files that describe something about the current build system configuration. These files are typically consumed by other tools (in some cases dependencies to other build rules) that need to know about the current build.
One example is the tests target, which generates the tests.json file. This file is used by fx test to determine which tests are available and to map the test name you provide to the component URL to invoke.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
It doesn't. These targets are descriptive of the current build configuration, they are not prescriptive of what artifacts the build generates. In this specific case, the images.json file is typically used by tools like FEMU and ffx to determine what system images to use on a target device.
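To make the "descriptive, not prescriptive" point concrete, here is a sketch of how a consumer tool might use such a generated file. The sample below is hypothetical, not the actual images.json schema:

```python
import json

# Hypothetical sample of what a build_api_module target could emit: a JSON
# list describing build artifacts, collected via the data_keys metadata walk.
sample = json.loads("""
[
  {"name": "zircon-a",    "type": "zbi",    "path": "fuchsia.zbi"},
  {"name": "qemu-kernel", "type": "kernel", "path": "multiboot.bin"}
]
""")

# A tool such as an emulator wrapper would read $root_build_dir/images.json
# and select the artifacts it needs, rather than building anything itself.
kernels = [entry["path"] for entry in sample if entry["type"] == "kernel"]
print(kernels)  # ['multiboot.bin']
```

The building itself happens because the deps of the generated_file target pull the image-producing targets into the build graph; the JSON file only describes the results.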

Find pattern with regex in Sublime text 2.02

I would like to create a new syntax rule in Sublime to search for a string pattern so that the pattern is highlighted. The pattern I am looking for is IPC or TST, so I was making use of the following Sublime syntax rule:
{ "name": "a3",
  "scopeName": "source.a3",
  "fileTypes": ["a3"],
  "patterns": [
    { "name": "IPC",
      "match": "\\b\\w(IPC|TST)\\w\\b "
    }
  ],
  "uuid": "c76f733d-879c-4c1d-a1a2-101dfaa11ed8"
}
But for some reason or another, it doesn't work at all.
Could someone point me in the right direction?
Thanks in advance
After looking around and testing a lot, I have found the issue: apparently, apart from identifying the pattern, I also have to assign the colour. To do that I have to make use of "captures", as follows:
{ "name": "IPC colour",
  "match": "\\b(IPC|TST)\\b",
  "captures": {
    "1": { "name": "meta.preprocessor.diagnostic" }
  }
},
Where "name": "meta.preprocessor.diagnostic" indicates the colour scope assigned to the found pattern.
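Why the original pattern never matched is easy to reproduce with an ordinary regex engine: \b\w(IPC|TST)\w\b followed by a space demands an extra word character on each side of the keyword plus a trailing space, so a standalone IPC or TST does not qualify:

```python
import re

line = "the IPC handler and the TST suite"

# Original pattern: requires an extra word character before and after the
# keyword, plus a literal trailing space -- a standalone "IPC" cannot match.
print(re.search(r"\b\w(IPC|TST)\w\b ", line))  # None

# Corrected pattern from the answer: matches the whole words only.
print(re.findall(r"\b(IPC|TST)\b", line))  # ['IPC', 'TST']
```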
regards!

Problems combining (union) Multipolygons in geodjango

I'm using GeoDjango and PostGIS (1.x).
What is the best way to combine (union) a list of multipolygons?
In what I assume is a rather inefficient approach, I'm looping through like this:
combined = multipolygon
for item in items:
    combined = combined.union(item.geom)  # geom is a multipolygon
Usually this works fine, but I often get the error Error encountered checking Geometry returned from GEOS C function "GEOSUnion_r".
Here is the GeoJSON version of the item the error is thrown on, if it helps:
{ "type": "MultiPolygon", "coordinates":
[ [ [ [ -80.077576, 26.572225 ],
[ -80.037729, 26.571180 ],
[ -80.080279, 26.273744 ],
[ -80.147464, 26.310066 ],
[ -80.152851, 26.455851 ],
[ -80.138560, 26.538013 ],
[ -80.077576, 26.572225 ]
] ] ]
}
Does anyone have any ideas? The end goal is to find all the locations (in another table) which fall within this list of n polygons (using coordinates__within=combined_area).
Also, the polygons show up fine on the maps in the geodjango admin.
You can always use the Union aggregate method. That should be a bit more efficient because everything is computed at the database level, which means you don't have to loop over things in Python.
from django.contrib.gis.db.models import Union

combined_area = FooModel.objects.filter(...).aggregate(area=Union('geom'))['area']
final = BarModel.objects.filter(coordinates__within=combined_area)
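As for the GEOSUnion_r error in the looping approach: it is commonly caused by invalid geometries (e.g. self-intersecting rings). A frequent workaround is to repair them with buffer(0) before unioning. The fragment below is a sketch against GeoDjango's GEOSGeometry API (valid and buffer() are real attributes); it assumes an existing Django project and is not runnable standalone:

```python
# Sketch: repair invalid geometries with buffer(0) before unioning.
combined = multipolygon if multipolygon.valid else multipolygon.buffer(0)
for item in items:
    geom = item.geom if item.geom.valid else item.geom.buffer(0)
    combined = combined.union(geom)
```

The database-level Union aggregate above is still preferable when it works; this repair step mainly helps diagnose which input geometry is invalid.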