I'm using the following command to create a job:
wmic job call create "C:\Windows\system32\defrag.exe",0,127,FALSE,TRUE,"********000000.000000-500"
But I keep getting an error:
Invalid format.
Hint: <paramlist> = <param> [, <paramlist>].
I've seen similar syntax online, so I'm a little confused about why it isn't working on my system. I've also tried it from an elevated (administrator) prompt to test further.
I have noticed that the built-in help for this method seems to differ from the MSDN description.
Help:
Call [ In/Out ]Params&type Status
==== ===================== ======
Create [IN ]Command(STRING) (null)
[IN ]DaysOfMonth(UINT32)
[IN ]DaysOfWeek(UINT32)
[IN ]InteractWithDesktop(BOOLEAN)
[IN ]RunRepeatedly(BOOLEAN)
[IN ]StartTime(DATETIME)
[OUT]JobId(UINT32)
MSDN Link:
https://msdn.microsoft.com/en-us/library/aa389389(v=vs.85).aspx
I'm trying to avoid using PowerShell (Get-WmiObject). Thanks, all!
You need to specify each property by name as well:
wmic job call create Command="C:\Windows\system32\defrag.exe",DaysOfMonth=0,DaysOfWeek=127,InteractWithDesktop=FALSE,RunRepeatedly=TRUE,StartTime="********000000.000000-500"
Executing (Win32_ScheduledJob)->Create()
Method execution successful.
Out Parameters:
instance of __PARAMETERS
{
JobId = 1;
ReturnValue = 0;
};
Also DaysOfMonth=0 and DaysOfWeek=127 are incorrect values according to MSDN.
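For reference, MSDN documents DaysOfWeek as a bitmask (Monday = 1, Tuesday = 2, Wednesday = 4, Thursday = 8, Friday = 16, Saturday = 32, Sunday = 64) and DaysOfMonth as a bitmask of the days 1-31, so set only the bits for the schedule you actually want. For example, a Mondays-only repeating job would look something like this (an untested sketch that reuses the command and start time from the question):
wmic job call create Command="C:\Windows\system32\defrag.exe",DaysOfMonth=0,DaysOfWeek=1,InteractWithDesktop=FALSE,RunRepeatedly=TRUE,StartTime="********000000.000000-500"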
We built a Dialogflow agent using Google Cloud Functions as the webhook, and it worked properly until yesterday evening. Around that time I exported the agent and reimported it later on, and it still worked for a while.
What stopped working is that agent.context.get('...') (and also agent.getContext('...')) now returns undefined, even though the context is set according to the UI and the raw API response.
As an example, I have an intent with a required slot shop and webhook slot filling enabled.
When I test the agent, the intent named info is matched correctly, and the context info_dialog_params_store also seems to be there.
Here is part of the output context according to the raw API response:
"outputContexts": [
{
"name": "projects/MYAGENTNAME/agent/sessions/0b753e8e-b377-587b-3db6-3c8dc898879b/contexts/info_dialog_params_store",
"lifespanCount": 1,
"parameters": {
"store": "",
"store.original": "",
"kpi": "counts",
"date_or_period": "",
"kpi.original": "trafico",
"date_or_period.original": ""
}
}
In the webhook I mapped the intent correctly to a js function:
let intentMap = new Map();
intentMap.set('info', info);
agent.handleRequest(intentMap);
And the first line of the info function looks like:
function info(agent) {
    store_context = agent.context.get('info_dialog_params_store');
}
Which returns
TypeError: Cannot read property 'get' of undefined
at info (/user_code/index.js:207:36)
at WebhookClient.handleRequest (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:303:44)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/user_code/index.js:382:9)
at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:57:9)
at /var/tmp/worker/worker.js:762:7
at /var/tmp/worker/worker.js:745:11
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
I am quite sure that I did not change anything that could affect the proper functioning of the agent, apart from some refactoring.
I also tried with beta features both activated and deactivated, as I had read there can be issues with environments, but that did not change anything.
Does anyone know in which direction I could investigate further?
I had the same issue; I resolved it by updating dialogflow-fulfillment in package.json:
from "dialogflow-fulfillment": "^0.5.0"
to "dialogflow-fulfillment": "^0.6.0"
I solved the problem by turning off "Beta features"
Actually I could fix it by the following 'magic' steps:
Copied my original function to a text file
Copied and pasted the original example code into the GUI fulfillment code editor (Code on GitHub)
Deployed the function
Created a minimal example for my info function:
function info(agent) {
    store_context = agent.context.get('info_dialog_params_store');
}
Tested it, and it worked
Copied back my original code
Everything was fine again
I installed Tronbox and want to deploy a smart contract. Before that, though, I want to create a transaction, for which I have a private key and an address. So I installed tron-api-cli and followed the instructions at https://www.npmjs.com/package/tron-api-cli, but I can't figure out how to create a transaction from the command line. Can somebody help?
Even though the tron-api-cli installation completes, the tron-api-cli command gives an error:
tron-api-cli: command not found
The package is somewhat misnamed. It's not a Command Line Interface (CLI); it's a client library that you use inside a JavaScript application.
To create a transaction in JS, you use the TransactionFactory. For example, see the sendTRX method of the AccountCLI class:
sendTRX(toAddress, amount, node) {
    pKeyRequired(this.pkey)
    let tx = TransactionFactory.createTx(TronProtocol.Transaction.Contract.ContractType.TRANSFERCONTRACT, { owner: this.address, to: toAddress, amount })
    return this.blockCli.addRef(tx).then((txWithRef) => {
        let transactionString = this.sign(txWithRef, this.pkey)
        return axios.post(`${this.endpoint}${API_TRON_BROADCAST}`, { payload: transactionString, node }).then((res) => { return res.data })
    })
}
How do I retrieve parameters from an AWS Batch job request? Suppose I have a job submitter app that sends a job request with the following code (in C#):
SubmitJobRequest submitJobRequest = new SubmitJobRequest()
{
    JobName = "MyJobName",
    JobQueue = "MyJobQueue",
    JobDefinition = "MyJobDefinition:1",
    Parameters = new Dictionary<string, string>() { { "Foo", "Bar" } },
};
SubmitJobResponse submitJobResponse = AWSBatchClient.SubmitJob(submitJobRequest);
What I want to be able to do now is retrieve what's in the Parameters field of submitJobRequest from inside the Docker app that gets launched. How do I do that? It's not passed in as program args; I've tested that (the only args I see are the ones statically defined for 'Command' in my job definition). I know that I can set environment variables via container overrides and then retrieve them via Environment.GetEnvironmentVariable (in C#), but I don't know how to get the parameters. Thanks.
Here is an example using YAML CloudFormation (you can use JSON with the same properties). You can declare the parameters using a Ref in the command section. I am using user_name, but you can add more. Note that there may be a 30 KB limit on the payload.
ContainerProperties:
  Command:
    - "python"
    - "Your command here"
    - "--user_name"
    - "Ref::user_name"
Now you can submit your job to the queue like this. I am using Python and the boto3 client:
# Submit the job
job1 = client.submit_job(
    jobName=jobName,
    jobQueue=jobQueue,
    jobDefinition=jobDefinition,
    parameters={
        'user_name': user_name
    }
)
To retrieve the parameters inside the job, use this (I am using argparse):
parser = argparse.ArgumentParser(description='AWS Driver Batch Job Runner')
parser.add_argument('--user_name', dest='user_name', required=True)
args = parser.parse_args()
print(args.user_name)
Found the answer I was looking for. I just had to add a ref to the parameter in the Command of the job definition. In the question's example, I would have needed to specify Ref::Foo in the Command of the job definition, and then "Bar" would have been passed as a program arg to my container app.
To expand on my example: in my specific case, my program uses the CommandLineParser package for parsing its arguments. Suppose one of the command-line options is called Foo. If I were running the program from a command line, I'd set a value for Foo with something like "--Foo Bar". To do the same for my Batch job, in my job definition, for Command, I would specify "--Foo Ref::Foo" (without quotes). Then, for the Parameters field in my SubmitJobRequest object, I would set Foo exactly as in my original example, and my batch program would see "Bar" for the Foo command-line option (just as if it had been run with "--Foo Bar"). Hope that helps.
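For anyone doing this with boto3 instead of C#, here is a rough sketch of the same idea; the job definition name, queue, and the Foo parameter come from the examples above, while the image and the dotnet command line are just placeholders for whatever your container actually runs:

import boto3

client = boto3.client('batch')

# Register a job definition whose command references the Foo parameter.
# At run time, AWS Batch substitutes "Ref::Foo" with the value supplied
# in the submit_job call's parameters.
client.register_job_definition(
    jobDefinitionName='MyJobDefinition',
    type='container',
    containerProperties={
        'image': '<your-image>',
        'vcpus': 1,
        'memory': 512,
        'command': ['dotnet', 'MyApp.dll', '--Foo', 'Ref::Foo'],
    },
)

# Submit a job; "Bar" replaces Ref::Foo, so the app sees "--Foo Bar".
client.submit_job(
    jobName='MyJobName',
    jobQueue='MyJobQueue',
    jobDefinition='MyJobDefinition',
    parameters={'Foo': 'Bar'},
)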
I'm new to Python. I wanted to create a simple "Hello World" program in notebook.
For that, I created a file named dataAna.ipynb in the C:\Python27 directory.
Then I executed jupyter notebook in my command prompt. When I open the file 'dataAna.ipynb' on my localhost, it shows the following error:
Unreadable Notebook: C:\Python27\dataAna.ipynb NotJSONError("Notebook does not appear to be JSON: u''...",)
This is about the most basic question I've seen on Stack Overflow. I'll help you out, but maybe search for a getting-started guide next time...
I was able to reproduce your "problem" by creating an empty file and renaming it to *.ipynb. I also created a notebook file the intended way, via the "New" button.
The manually created file was empty, but the file created via the "New" button was not. It contained the following content:
{
"cells": [],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 2
}
That seems to be the minimal content for an empty notebook file. I won't memorize it, though, because I prefer to use the "New" button.
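If you do want to create the file from a script rather than through the browser, here is a minimal sketch that writes exactly that skeleton (the path is simply the one from the question):

import json

# Minimal structure that Jupyter accepts as a valid, empty notebook.
empty_notebook = {
    "cells": [],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 2,
}

with open(r"C:\Python27\dataAna.ipynb", "w") as f:
    json.dump(empty_notebook, f, indent=1)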
type "jupyter notebook" in terminal and see screenshot above
I would like to increase the deploy timeout in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in a ticket reporting a similar issue, the OpsWorks team recommends trying to speed up the call itself first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: OpsWorks' shellout helper is what gets called during the deploy, and it in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass in the initializer!
Now you can use an OpsWorks custom cookbook to override the original method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, so you only need to add and edit a single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout (Mixlib::ShellOut's default is 600 seconds).
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      # Options passed explicitly by callers still override the default.
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice that the only additions are DEFAULT_OPTIONS and the merge of those options into the Mixlib::ShellOut.new call.
An improvement on this approach would be to drive the timeout from a Chef attribute that you could in turn set via custom JSON in the OpsWorks interface. That would mean passing the timeout option at the original OpsWorks::ShellOut.shellout call site, not in the method definition, but it depends on how the shellout method actually gets called...