I am trying to execute a command in the Windows command prompt to deploy a job, but every time it throws the same error:
INFO Initializing Job Deployment environment.
ERROR -metarepository is required.
ERROR The process will be terminated.
I don't understand why it is not picking up the argument, because I am passing every required argument to the command.
DeployJobs -host "xxxxxx.com" -user "sasdemo" -password "xxxxx" -port 8561 –deploytype deploy –objects "/Shared Data/Marketing/Test_Jen" –sourcedir "E:\sasconfig\Lev1\SASApp\SASEnvironment\SASCode\Jobs\Sourcecodes" –deploymentdir "E:\sasconfig\Lev1\SASApp\SASEnvironment\SASCode\Jobs" –metarepository "Foundation" -metaserverid "A5392QFF.AT000002" -servermachine "xxxxxxxx.com" –serverport 8591 –serverusername "sasdemo" –serverpassword "xxxxxxx" –batchserver "SASApp – SAS DATA Step Batch Server" –folder "/Shared Data/Marketing"
I am executing the above from the Windows command prompt on the metadata server, after navigating to E:\sashome\SASDataIntegrationStudioServerJARs\4.8.
I tried to follow the complete note from here but still no luck.
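For readability, here is the same invocation laid out one option per line, as it might appear in a Windows batch script. This is only a restatement of the command above; note that every option prefix below is a plain ASCII hyphen, whereas the pasted command mixes in en-dash characters, which a command-line parser would typically not treat as option markers (the batch server name also typically uses a plain hyphen, but it must match the name registered in metadata exactly):
DeployJobs -host "xxxxxx.com" -user "sasdemo" -password "xxxxx" -port 8561 ^
  -deploytype deploy ^
  -objects "/Shared Data/Marketing/Test_Jen" ^
  -sourcedir "E:\sasconfig\Lev1\SASApp\SASEnvironment\SASCode\Jobs\Sourcecodes" ^
  -deploymentdir "E:\sasconfig\Lev1\SASApp\SASEnvironment\SASCode\Jobs" ^
  -metarepository "Foundation" ^
  -metaserverid "A5392QFF.AT000002" ^
  -servermachine "xxxxxxxx.com" ^
  -serverport 8591 ^
  -serverusername "sasdemo" ^
  -serverpassword "xxxxxxx" ^
  -batchserver "SASApp - SAS DATA Step Batch Server" ^
  -folder "/Shared Data/Marketing"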
Related
Hi, I am trying to deploy a Node application from Cloud9 to Elastic Beanstalk, but I keep getting the error below.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Open the Elastic Beanstalk console, go to both Applications and Environments, and delete them. Then run the following in your terminal:
eb init            # follow the prompts
eb create --single  # follow the prompts
That should fix the error, which is caused by application versions that are stuck in a failed state. If you want to check those, run:
aws elasticbeanstalk describe-application-versions
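To narrow that output down to the versions that failed, a query along these lines may help (the application name here is a placeholder, and the exact casing of the Status values should be checked against the raw output on your account):
aws elasticbeanstalk describe-application-versions --application-name my-app --query "ApplicationVersions[?Status=='FAILED'].[VersionLabel,Status]" --output table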
I was searching for this answer after watching a YouTube tutorial on how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change; that file is what causes the error.
A failure in the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve error output.
In my case I deploy using this command:
eb deploy -l "XXX" -p
It can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that output I can't figure out what is wrong, but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
It returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.
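For comparison, a minimal well-formed .ebextensions file looks something like the following (the option namespace and value are only placeholders; a stray tab or mis-indented line is enough to make the file invalid YAML and trigger the error above):
option_settings:
  aws:elasticbeanstalk:application:environment:
    MY_ENV_VAR: "some-value"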
I have many EC2 instances that retain Celery jobs for processing. To efficiently start the overall task of completing the queue, I have tested AWS-RunBashScript in AWS' SSM with a BASH script that calls a Python script. For example, for a single instance this begins with sh start_celery.sh.
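For reference, issuing that run from the CLI might look something like this (a sketch only; the document name shown is the standard AWS-RunShellScript document, and the instance ID and working directory are placeholders):
aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-0123456789abcdef0" --parameters 'commands=["cd /home/ec2-user/dh2o-py && sh start_celery.sh"]'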
When I run the command in SSM, this is the output (compare it with the other output below):
/home/ec2-user/dh2o-py/venv/local/lib/python2.7/dist-packages/celery/utils/imports.py:167:
UserWarning: Cannot load celery.commands extension u'flower.command:FlowerCommand':
ImportError('No module named compat',)
namespace, class_name, exc))
/home/ec2-user/dh2o-py/tasks/task_harness.py:49: YAMLLoadWarning: calling yaml.load() without
Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
task_configs = yaml.load(conf)
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
failed to run commands: exit status 1
Note that only warnings are thrown. When I SSH into the same instance and run the same command (i.e. sh start_celery.sh), the same output results BUT the process runs:
I have verified that the process does NOT run when doing this via SSM, and I have no idea why. As a workaround, I tried running the sh start_celery.sh command via bootstrapping in the user data for each EC2 instance, but that failed too.
So why does SSM fail to actually run the process that succeeds when I SSH to each instance and run the identical command? The details below relate to the machine and Python configuration:
I am hoping someone here has come across this issue and has an answer for me.
I have set up a project in Device Farm and have written automation tests in Appium using JS.
When I create a run manually using the console the runs succeed without any issues and my tests get executed.
However, when I try to schedule a run using the CLI with the following command, it fails with an error:
aws devicefarm schedule-run --project-arn projectArn --app-arn appArn --device-pool-arn dpARN --name myTestRun --test type=APPIUM_NODE,testPackageArn="testPkgArn"
Error : An error occurred (ArgumentException) when calling the ScheduleRun operation: Standard Test environment is not supported for testType: APPIUM_NODE
CLI versions: aws-cli/1.17.0 Python/3.8.1 Darwin/19.2.0 botocore/1.14.0
That is currently expected for the standard environment. The command will need to use the custom environment, which the CLI can do by setting the testSpecArn value.
This ARN is an upload in Device Farm consisting of a .yaml file that defines how the tests are executed.
This process is discussed here
https://docs.aws.amazon.com/devicefarm/latest/developerguide/how-to-create-test-run.html#how-to-create-test-run-cli-step6
The error in this case is caused by the fact that the APPIUM_NODE test type can only be used with the custom environment currently.
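In practice that means uploading a test spec first and then referencing its ARN in the run, roughly like this (a sketch; the upload type, file name, and ARN placeholders are assumptions based on the APPIUM_NODE test type):
aws devicefarm create-upload --project-arn projectArn --name default_spec.yml --type APPIUM_NODE_TEST_SPEC
aws devicefarm schedule-run --project-arn projectArn --app-arn appArn --device-pool-arn dpARN --name myTestRun --test type=APPIUM_NODE,testPackageArn="testPkgArn",testSpecArn="testSpecArn"
Note that create-upload only returns a pre-signed URL; the .yml file still has to be uploaded to that URL before its ARN can be used in schedule-run.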
I am a newbie to AutoSys. I am trying to get the job execution information into a CSV file on a daily basis. For this, I am writing an AutoSys job that I can schedule to run daily. Below is a snippet of the code:
insert_job: job_run_time job_type: CMD
box_name: box_job_run_time
command: autorep -J box_job1 -r -1
But this gives the error below:
'autorep' is not recognized as an internal or external command,
operable program or batch file.
Please help with the solution
Shanky, before you can run the autorep command directly in the terminal, the client first has to be configured to read the AutoSys database that stores the run details.
Mostly it comes down to setting a few environment variables and the instance name.
Please check with the scheduling team on how to configure the AutoSys client on the server.
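If the job runs on a Windows agent, one quick sanity check is whether the client binaries are on the PATH of the account the job runs under (the install path below is only a placeholder):
where autorep
rem if it is not found, add the AutoSys client bin directory to the PATH, e.g.:
set PATH=%PATH%;C:\path\to\autosys\client\bin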
Thanks Piyush and Mansi for your reply!
Mansi, where can this command autorep -J box_job1 -r -1 >> Output.csv be configured so that it can be scheduled to run daily?
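One possible shape for that, extending the snippet above (the machine name and output path are placeholders, and the AutoSys client environment still has to be configured on that machine first):
insert_job: job_run_time   job_type: CMD
box_name: box_job_run_time
machine: reporting_host
command: autorep -J box_job1 -r -1 >> E:\reports\Output.csv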
I am trying to run a command (a JAR file execution) on a remote machine using the 'Execute Command' keyword of SSHLibrary, but control returns even before the command execution has completed. Is there a way to wait until the command has finished?
Below is the keyword I have written:
Run The Job
    [Arguments]    ${machine_ip}    ${username}    ${password}    ${file_location}    ${KE_ID}
    Open Connection    ${machine_ip}    timeout=60
    Login    ${username}    ${password}
    ${run_jar_file}=    Set Variable    java -jar -Dspring.profiles.active=dev ${file_location} Ids=${KE_ID}
    ${output}=    Execute Command    ${run_jar_file}
    Log    ${output}
    Sleep    30
    Close Connection
Use Read and Write instead of "Execute Command", so that you can specify a timeout for command execution.
refer: http://robotframework.org/SSHLibrary/latest/SSHLibrary.html#Write
You are explicitly asking for the command to be run in the background (by virtue of adding & as the last character in the command to be run), so the ssh library has no way of knowing when the program you're running exits. If you want to wait for it to finish, don't run it in the background.
In other words, remove the trailing & from the command you are running.
If anyone is still struggling with this one, I have discovered a solution:
Open Connection    ${SSH_HOST}    timeout=10s
Login    login    pass
Write    your_command
Set Client Configuration    prompt=$
${output}=    Read Until Prompt
Should End With    ${output}    ~ $