When I try to pass build.ID through to a shell-local post-processor, the evaluated string in the post-processor is ERR_ID_NOT_IMPLEMENTED_BY_BUILDER. I am using vsphere-iso.
The docs mention:
Here is the list of available build variables:
ID: Represents the VM being provisioned. For example, in Amazon it is the instance ID; in DigitalOcean, it is the Droplet ID; in VMware, it is the VM name.
So I assumed it was supported with vsphere-iso?
Basically, I am trying to pass the evaluated VM/template name through to a PowerShell shell-local post-processor.
Here is the post-processor config:
post-processor "shell-local" {
  environment_vars = [
    "VCENTER_USER=${var.vsphere_username}",
    "VCENTER_PASSWORD=${var.vsphere_password}",
    "VCENTER_SERVER=${var.vsphere_endpoint}",
    "TEMPLATE_NAME=${build.ID}",
    "TEMPLATE_UUID=${local.build_uuid}",
  ]
  env_var_format  = "$env:%s=\"%s\"; "
  execute_command = ["${var.common_post_processor_cli}.exe", "{{.Vars}} {{.Script}}"]
  script          = "scripts/windows/cleanup.ps1"
}
Here is the post-processor script:
param(
    [string]
    $TemplateName = $env:TEMPLATE_NAME
)
Write-Host $TemplateName
Here is the result logged to the console:
==> vsphere-iso.windows-server-standard-dexp (shell-local): Running local shell script: scripts/windows/cleanup.ps1
vsphere-iso.windows-server-standard-dexp (shell-local): ERR_ID_NOT_IMPLEMENTED_BY_BUILDER
I've deployed a TensorFlow multi-label classification model using a SageMaker endpoint as follows:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type="ml.m5.2xlarge", endpoint_name='testing-2')
It gets deployed and works fine when I invoke it from the SageMaker Jupyter notebook instance:
sample = ['this movie was extremely good']
output=predictor.predict(sample)
output:
{'predictions': [[0.00370046496,
                  4.32942124e-06,
                  0.00080883503,
                  9.25126587e-05,
                  0.00023958087,
                  0.000130862]]}
However, I am unable to send a request to the deployed endpoint from other notebooks or SageMaker Studio, and I'm unsure of the request format.
I've tried several variations of the input format and still failed. The error message is below:
Request:
{
  "body": {
    "text": "Testing model's prediction on this text"
  },
  "contentType": "application/json",
  "endpointName": "testing-2",
  "customURL": "",
  "customHeaders": [
    {
      "Key": "sm_endpoint_name",
      "Value": "testing-2"
    }
  ]
}
Error:
Error invoking endpoint: Received client error (400) from primary with message "{ "error": "Failed to process element:
0 key: text of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: text" }".
See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/testing-2
in account 793433463428 for more information.
Is there any way to find out exactly what request format the model expects?
Earlier I had the same model on my local system and the way I tested it was using this curl request:
curl -s -H 'Content-Type: application/json' -d '{"text": "what ugly posts"}' http://localhost:7070/sentiment
And it worked fine without any issues.
I've tried different formats, replacing the "text" key inside the body with other names like "input" and "body", and even omitting the key entirely.
Based on your description above, I assume you are deploying the TensorFlow model using the SageMaker TensorFlow container.
If you want to view what your model expects as input, you can use the saved_model_cli. Given a saved model directory laid out like this:
1
├── keras_metadata.pb
├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00001
    └── variables.index
!saved_model_cli show --all --dir 1
After you have confirmed the input name above you can invoke the endpoint as follows:
import json
import boto3

client = boto3.client('runtime.sagemaker')

data = {"instances": ['this movie was extremely good']}

response = client.invoke_endpoint(EndpointName=<EndpointName>,  # e.g. 'testing-2'
                                  ContentType='application/json',
                                  Body=json.dumps(data))
response_body = response['Body']
print(response_body.read())
The same payload can then also be used in Studio when invoking the endpoint.
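For example, assuming the default serving signature shown by saved_model_cli, the body to paste into the Studio request form would be the same JSON document, with contentType set to application/json:
{"instances": ["this movie was extremely good"]}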
I have a docker container running in Fargate that emits JSON logs to the console using log4j-layout-template.
The logs emitted look like this:
{"#timestamp":"2022-03-22T09:08:16.838Z","ecs.version":"1.2.0","log.level":"INFO","message":"Server version name: Apache Tomcat/8.5.76","process.thread.name":"main","log.logger":"org.apache.catalina.startup.VersionLoggerListener"}
{"#timestamp":"2022-03-22T09:08:16.838Z","ecs.version":"1.2.0","log.level":"INFO","message":"Server built: Feb 23 2022 17:59:11 UTC","process.thread.name":"main","log.logger":"org.apache.catalina.startup.VersionLoggerListener"}
I configure my CDK with the following:
var def = ingestGatewayTaskDefinition.addContainer(
    id + "Container",
    ContainerDefinitionOptions
        .builder()
        .image(fromEcrRepository(ecrRepository))
        .memoryLimitMiB(memory)
        .cpu(cpu)
        .environment(environment)
        .secrets(secrets)
        .logging(
            LogDriver.awsLogs(
                AwsLogDriverProps
                    .builder()
                    .logGroup(
                        LogGroup.Builder
                            .create(this, props.getServiceName())
                            .logGroupName("dev/" + props.getServiceName())
                            .retention(RetentionDays.ONE_DAY)
                            .build()
                    )
                    .streamPrefix("dev/" + props.getServiceName())
                    //.datetimeFormat("%Y-%m-%dT%H:%M:%SZ") //??
                    .build()
            )
        )
        .build()
);
But in CloudWatch the message portion is the raw JSON; it is not parsed into fields, even though those fields should be discoverable.
How do I parse these fields?
This is what it ends up looking like: the whole JSON document sits unparsed in the log event.
What I am looking for in CloudWatch is this:
@timestamp               | ecs.version | log.level | message                 | log.logger
2022-03-22T09:08:16.838Z | 1.2.0       | INFO      | Server version name:... | org.apache...
2022-03-22T09:08:16.838Z | 1.2.0       | INFO      | Server built:...        | org.apache...
There's nothing wrong with the parsing; your events are being parsed correctly.
The following query should work correctly:
fields @timestamp, @message
| filter log.level = "INFO"
| sort @timestamp desc
The Log Stream UI does not show the inferred nested structure, but it's still available for querying.
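If you want to run the same query programmatically, here is a minimal boto3 sketch; the log group name is an assumption taken from the CDK snippet above ("dev/" + props.getServiceName()), so substitute your own:

import time
import boto3

logs = boto3.client("logs")

# Start a Logs Insights query against the log group created by the CDK stack.
# "dev/my-service" is a placeholder for your actual log group name.
query = logs.start_query(
    logGroupName="dev/my-service",
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString='fields @timestamp, @message | filter log.level = "INFO" | sort @timestamp desc',
)

# Poll until the query finishes, then print each matching event.
while True:
    result = logs.get_query_results(queryId=query["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})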
Why doesn't the UserData I set with the Amazon Lightsail client take effect?
var shuju = new CreateInstancesRequest()
{
    BlueprintId = "centos_7_1901_01",
    BundleId = "micro_2_0",
    AvailabilityZone = "ap-northeast-1d",
    InstanceNames = new System.Collections.Generic.List<string>() { "test" },
    UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
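For reference, a minimal boto3 sketch of the same call with a corrected script (the script body is adapted from the question; note the #!/bin/bash interpreter line added at the top, and that user data already runs as root, so sudo is unnecessary):

import boto3

client = boto3.client("lightsail")

# The first line of a Linux user data script must be an interpreter
# directive such as #!/bin/bash, or it will not be executed at launch.
user_data = """#!/bin/bash
echo 'root:test123456-' | chpasswd
sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config
reboot
"""

response = client.create_instances(
    instanceNames=["test"],
    availabilityZone="ap-northeast-1d",
    blueprintId="centos_7_1901_01",
    bundleId="micro_2_0",
    userData=user_data,
)
print(response["operations"])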
I'm using pyVmomi to deploy a VM from a template on vSphere. This works OK: the new VM gets the name I passed as a parameter, but I also want the DNS name / hostname to be the same as the VM name.
Is there a way to set the hostname when doing the actual clone?
If not, how can I do it after the new VM has been created?
Here is part of the code I'm using:
# RelocateSpec
relospec = vim.vm.RelocateSpec()
relospec.datastore = datastore
relospec.pool = resource_pool
# ConfigSpec
configSpec = vim.vm.ConfigSpec()
configSpec.annotation = "This is the annotation for this VM"
# CloneSpec
clonespec = vim.vm.CloneSpec()
clonespec.location = relospec
clonespec.powerOn = power_on
clonespec.config = configSpec
print ("cloning VM...")
task = template.Clone(folder=destfolder, name=vm_name, spec=clonespec)
wait_for_task(task)
I think you need a clonespec.customization (vim.vm.customization.Specification). You should be able to specify the hostname there.
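Something like this minimal, untested sketch (it assumes a Linux guest — a Windows guest would use vim.vm.customization.Sysprep — and the domain value is a placeholder):

from pyVmomi import vim

# Identity: sets the guest's hostname (and DNS domain) during customization.
ident = vim.vm.customization.LinuxPrep()
ident.hostName = vim.vm.customization.FixedName(name=vm_name)  # reuse the VM name
ident.domain = "example.com"  # placeholder: your DNS domain

# Keep DHCP on the template's NIC; adjust if your template uses static IPs.
adapter = vim.vm.customization.AdapterMapping()
adapter.adapter = vim.vm.customization.IPSettings(
    ip=vim.vm.customization.DhcpIpGenerator())

customspec = vim.vm.customization.Specification(
    identity=ident,
    globalIPSettings=vim.vm.customization.GlobalIPSettings(),
    nicSettingMap=[adapter])

clonespec.customization = customspec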
Oh, as far as I know VMware Tools must be installed for guest OS customization.
Hope that helps.
I want to run a Python script (test.py) using STAF with the command below, but I am getting Return Code 1:
H:\>STAF 192.168.252.81 process START SHELL COMMAND "python /opt/test/test.py" PARAMS "3344" wait returnstdout
Response
--------
{
  Return Code: 1
  Key        : <None>
  Files      : [
    {
      Return Code: 0
      Data       :
    }
  ]
}
Check that your machine is on the remote machine's STAF trust list, then try it without PARAMS, or hardcode the value inside your Python script (a sketch of the latter follows).
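For the hardcoding option, a minimal, hypothetical test.py that falls back to a default when no argument is passed:

import sys

# Use the argument passed via PARAMS when present; otherwise fall back
# to the hardcoded value so the script also works without PARAMS.
value = sys.argv[1] if len(sys.argv) > 1 else "3344"
print("received value: %s" % value)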