I am interested in using the tune library for reinforcement learning and I would like to use the in-built tensorboard capability. However, the metric that I am using to tune my hyperparameters is based on a time-consuming evaluation procedure that should be run infrequently.
According to the documentation, it looks like the _train method returns a dictionary that is used both for logging and for tuning hyperparameters. Is it possible to perform logging more frequently within the _train method? Alternatively, could I return the values that I wish to log from _train but omit the expensive-to-compute metric from the dictionary some of the time?
One option is to use your own logging mechanism in the Trainable. You can log to the trial-specific directory (Trainable.logdir). If this conflicts with the built-in TensorBoard logging, you can remove that by setting tune.run(loggers=None).
Another option is, as you mentioned, to omit the expensive-to-compute metric from the dictionary some of the time. If you run into issues with that, you can also return None as the value for the metrics that you don't plan to compute in a particular iteration.
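For illustration, here is a minimal sketch of both options using the class-based Trainable API; the class name, the metric names, the 10-iteration evaluation interval, and the custom CSV log written to self.logdir are all made up for the example:

import csv
import os
import time

from ray import tune


class MyTrainable(tune.Trainable):
    def _setup(self, config):
        self.step_count = 0
        # Option 1: keep a fine-grained custom log in the trial directory.
        self.log_path = os.path.join(self.logdir, "custom_log.csv")

    def _train(self):
        self.step_count += 1
        train_loss = 1.0 / self.step_count  # placeholder for a cheap training step

        # Log as often as you like, independently of what gets returned to Tune.
        with open(self.log_path, "a", newline="") as f:
            csv.writer(f).writerow([time.time(), self.step_count, train_loss])

        result = {"train_loss": train_loss}
        # Option 2: only compute the expensive metric every 10th iteration;
        # on other iterations, omit the key (or return None for it).
        if self.step_count % 10 == 0:
            result["expensive_metric"] = self._expensive_eval()
        return result

    def _expensive_eval(self):
        time.sleep(1)  # stands in for the slow evaluation procedure
        return 0.42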
Hope that helps!
Historically, in Oracle I've used the fixed_date parameter to change the system date so I could run a series of reports that tie together and verify those links are still correct.
Now that we've moved to Amazon RDS, that capability is not available.
What are my options?
I've considered changing all calls to 'system_date' to use a custom function that simulates this. (Ugh, this is hundreds of packages, but it is possible.)
Are there better options for using fixed_date?
It seems like the only option you have is to create a custom function and replace all the calls to system_date:
CREATE OR REPLACE FUNCTION fml.system_date
  RETURN DATE
AS
BEGIN
  RETURN TO_DATE('03-04-2021', 'DD-MM-YYYY');
END;
I'm not sure I would take this approach, but you could also investigate "stored outlines" if there are not too many queries involved, and have them call the alternate function/package instead. The fixed_date call will still fail, but maybe it can serve as a workaround. The outline could then be used only for the reports user, for example.
I am not sure why Amazon doesn't support something like this yet...
I'm using a library in Lambda where a "state file" is persisted.
This is what it looks like in code:
def initialize
  @config = '/tmp/dogscaler.yaml'
  @state = self.load
end
If you need to look at the whole logic:
https://github.com/cvent/dogscaler/blob/master/lib/dogscaler/state.rb#L5
My issue is that this won't work in Lambda (it being serverless). I'm trying to find a solution where I don't have to change the logic of how the file is read and modified.
Can this be achieved with S3?
Would something like this pseudo code work?
read s3://path/to/file
write s3://path/to/file
Are there better solutions than S3?
Additional Context
The file is needed for cooldown-period logic. Every time the application runs, it checks a timestamp in that file to decide whether or not to change an element. The file is less than 1 KB.
Based on the updated information you could store the data in a number of places.
S3 would be perfectly fine, but might be overkill if this is all you're using it for.
The same can be said of DynamoDB.
Parameter Store is a solid option for your use case. Bear in mind that if you are calling it often, you may need to increase your TPS limit; it doesn't sound like that will be an issue for you. Also keep in mind that there is no protection here against multiple instances of your Lambda function writing to the parameter at the "same time": the last write wins. If you need to protect against that, DynamoDB is probably the best option.
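For illustration, here is a rough sketch of the Parameter Store approach in Python with boto3 (the library in the question is Ruby, but the AWS SDK for Ruby exposes the same SSM operations); the parameter name and the cooldown value are made up:

import time

import boto3

ssm = boto3.client("ssm")
PARAM_NAME = "/dogscaler/last-run-timestamp"  # hypothetical parameter name
COOLDOWN_SECONDS = 300                        # hypothetical cooldown period


def cooled_down():
    # Return True if the cooldown period has elapsed since the last run.
    try:
        value = ssm.get_parameter(Name=PARAM_NAME)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        return True  # first run, no state recorded yet
    return time.time() - float(value) > COOLDOWN_SECONDS


def record_run():
    # Persist the current timestamp as the new state.
    ssm.put_parameter(
        Name=PARAM_NAME,
        Value=str(time.time()),
        Type="String",
        Overwrite=True,
    )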
I completed this tutorial on distributed TensorFlow experiments within ML Engine and I am looking to define my own custom tier instead of the STANDARD_1 tier that they use in their config.yaml file. If using the tf.estimator.Estimator API, are any additional code changes needed to create a custom tier of any size? For example, the article suggests: "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." This suggests the config.yaml file below would be possible:
trainingInput:
  scaleTier: CUSTOM
  masterType: complex_model_m
  workerType: complex_model_m
  parameterServerType: complex_model_m
  workerCount: 10
  parameterServerCount: 4
Are any code changes needed to the mnist tutorial to be able to use this custom configuration? Would this distribute the X number of batches across the 10 workers as the tutorial suggests is possible? I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package (task.py), even though they are also using the Estimator API. So, is there any additional configuration needed? My current understanding is that if you use the Estimator API (even within your own defined model), there should not need to be any additional changes.
Does any of this change if the config.yaml specifies using GPUs? This article suggests for the Estimator API "No code changes are necessary as long as your ClusterSpec is configured properly. If a cluster is a mixture of CPUs and GPUs, map the ps job name to the CPUs and the worker job name to the GPUs." However, since the config.yaml is specifically identifying the machine type for parameter servers and workers, I am expecting that within ML-Engine the ClusterSpec will be configured properly based on the config.yaml file. However, I am not able to find any ml-engine documentation that confirms no changes are needed to take advantage of GPUs.
Last, within ML Engine I am wondering whether there are any ways to assess the effect of different configurations. The line "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." suggests that the benefit of additional workers would be roughly linear, but I don't have any intuition about how to determine whether more parameter servers are needed. What would one be able to check (either within the cloud dashboards or TensorBoard) to determine whether there is a sufficient number of parameter servers?
are any additional code changes needed to create a custom tier of any size?
No; no changes are needed to the MNIST sample to get it to work with a different number or type of workers. To use a tf.estimator.Estimator on Cloud ML Engine, you must have your program invoke learn_runner.run, as exemplified in the samples. When you do so, the framework reads the TF_CONFIG environment variable and populates a RunConfig object with the relevant information, such as the ClusterSpec. It will automatically do the right thing on parameter server nodes, and it will use the provided Estimator to start training and evaluation.
Most of the magic happens because tf.estimator.Estimator automatically uses a device setter that distributes ops correctly. That device setter uses the cluster information from the RunConfig object whose constructor, by default, uses TF_CONFIG to do its magic (e.g. here). You can see where the device setter is being used here.
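As a rough illustration (not taken from the ML Engine docs), this is approximately what happens on each node: the service sets TF_CONFIG based on your config.yaml, and a freshly constructed RunConfig picks it up. The cluster addresses below are made up:

import json
import os

# On Cloud ML Engine this variable is set for you from config.yaml;
# the addresses here are made up purely for illustration.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "master": ["master-0:2222"],
        "worker": ["worker-0:2222", "worker-1:2222"],
        "ps": ["ps-0:2222"],
    },
    "task": {"type": "worker", "index": 0},
})

import tensorflow as tf

# RunConfig's constructor reads TF_CONFIG, so the Estimator learns the
# cluster layout without any code changes on your side.
config = tf.estimator.RunConfig()
print(config.cluster_spec.as_dict())
print(config.task_type, config.task_id)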
This all means that you can just change your config.yaml by adding/removing workers and/or changing their types and things should generally just work.
For sample code using a custom model_fn, see the census/customestimator example.
That said, please note that as you add workers, you are increasing your effective batch size (this is true regardless of whether or not you are using tf.estimator). That is, if your batch_size was 50 and you were using 10 workers, that means each worker is processing batches of size 50, for an effective batch size of 10*50=500. Then if you increase the number of workers to 20, your effective batch size becomes 20*50=1000. You may find that you may need to decrease your learning rate accordingly (linear seems to generally work well; ref).
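A quick back-of-the-envelope illustration of that effect (the per-worker batch size of 50 is just the number used above):

per_worker_batch = 50  # batch size processed by each worker, as in the example above

for workers in (1, 10, 20):
    effective_batch = per_worker_batch * workers
    print("%2d workers -> effective batch size %d" % (workers, effective_batch))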
I poked around some of the other ML Engine samples and found that
reddit_tft uses distributed training, but they appear to have defined
their own runconfig.cluster_spec within their trainer package:
task.py, even though they are also using the Estimator API. So, is there
any additional configuration needed?
No additional configuration needed. The reddit_tft sample does instantiate its own RunConfig, however, the constructor of RunConfig grabs any properties not explicitly set during instantiation by using TF_CONFIG. And it does so only as a convenience to figure out how many Parameter Servers and workers there are.
Does any of this change if the config.yaml specifies using GPUs?
You should not need to change anything to use tf.estimator.Estimator with GPUs, other than possibly needing to manually assign ops to the GPU (but that is not specific to Cloud ML Engine); see this article for more info. I will look into clarifying the documentation.
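If you ever do need to pin ops manually, here is a minimal TF 1.x-style sketch (nothing Cloud ML Engine specific, and with Estimator this is usually unnecessary):

import tensorflow as tf

# Pin part of the graph to the first GPU; allow_soft_placement falls back to
# the CPU if no GPU is available, and log_device_placement shows where ops ran.
with tf.device("/device:GPU:0"):
    a = tf.random_normal([1000, 1000])
    b = tf.matmul(a, a)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(b)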
I am in the process of selecting a logging system for our software development. We are using Boost extensively, so the obvious option is Boost.Log v2, but before I select it for my team, I have some questions that I could not find answers to in the documentation:
1- Can I remove the effect of it completely from the generated code? For example, assume that I have this code and I need it to be this way for debugging:
#include <boost/log/trivial.hpp>

int doSomething(int i) { return i * 2; }  // placeholder for the real work

int main()
{
    for (int i = 0; i < 100; i++)
    {
        int j = doSomething(i);
        BOOST_LOG_TRIVIAL(trace) << "i=" << i << " j=" << j;
    }
}
Is there any way to remove the effect of the logging system in the above code so that I am not losing any performance as a result of using it?
2- Can I add a section to the logging at the same time that I am adding a severity? My code has several sections and we work on one section at a time. I want to be able to set the logging to log data only for a specific section and not for the whole application, which may have several sections and possibly hundreds of logging entries that would need to be filtered based on the part I am currently working on.
3- Is it possible to send different log records to different sinks, so that, for example, some logging goes to the console and some goes to a file?
Can I remove the effect of it completely from the generated code?
If you mean removing any use of Boost.Log at compilation stage (e.g. by a preprocessor switch) then no, Boost.Log does not provide that. You will have to implement your own support for that, including conditional compilation of Boost.Log initialization and your own logging macros that expand to nothing when logging is disabled at compile time.
If you mean just disabling logs completely without removing the compile-time dependency, then you can use core::set_logging_enabled or filters for that. There will still be a small performance cost for checking the condition for every log record, but no log records will be produced.
Can I add a section to the logging at the same time that I am adding a severity?
Yes, you can use channels for that. You can apply filters to the channel name to select which messages to keep and which to suppress. Here is a related answer.
Is it possible to send different log records to different sinks, so that, for example, some logging goes to the console and some goes to a file?
Yes, again, this can be achieved with channels and filters. See the SO answer linked above, which describes that.
I have just started with Spark and was struggling with the concept of tasks.
Can anyone please help me understand when an action (say reduce) does not run in the driver program?
From the Spark tutorial:
"Aggregate the elements of the dataset using a function func (which
takes two arguments and returns one). The function should be
commutative and associative so that it can be computed correctly in
parallel. "
I'm currently experimenting with an application which reads a directory of 'n' files and counts the number of words.
From the web UI, the number of tasks is equal to the number of files, and all the reduce functions are taking place on the driver node.
Can you please describe a scenario where the reduce function won't execute on the driver? Does a task always include "transformation + action", or only "transformation"?
All the actions are performed on the cluster and results of the actions may end up on the driver (depending on the action).
Generally speaking, the Spark code you write around your business logic is not the program that actually runs; rather, Spark uses it to create a plan that will execute your code on the cluster. The plan groups into one task all the operations that can be done on a partition without the need to shuffle data around. Every time Spark needs the data arranged differently (e.g. after sorting), it will create a new task and a shuffle between the first and the latter tasks.
I'll take a stab at this, although I may be missing part of the question. A task is indeed always transformation(s) plus an action. The transformations are lazy and do not submit anything on their own, hence the need for an action. You can always call .toDebugString on your RDD to see where each job splits; each level of indentation is a new stage. I think the reduce function showing on the driver is a bit of a misnomer, as it will first run in parallel and then merge the results. So, I would expect that the task does indeed run on the workers as far as it can.
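For a concrete picture, here is a small PySpark sketch (the input path is made up): the map-side work and the per-partition part of the reduction run in tasks on the executors, and only the small merged result comes back to the driver, while toDebugString shows the extra stage introduced by reduceByKey.

from pyspark import SparkContext

sc = SparkContext(appName="word-count-sketch")

# Hypothetical input directory with 'n' text files -> roughly n read tasks.
lines = sc.textFile("/path/to/files/*")

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))  # shuffle -> new stage

# Each indentation level in the printed lineage marks a stage boundary.
print(counts.toDebugString())

# reduce() is an action: each executor reduces its own partitions in parallel,
# and only the per-partition results are merged on the driver.
total_words = counts.map(lambda kv: kv[1]).reduce(lambda a, b: a + b)
print(total_words)

sc.stop()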