I wanted to create a Windows VM using the gcloud command line.
I tried the "Equivalent Command Line" syntax, but it failed.
After some trial and error, I concluded that the --create-disk parameters needed to be repeated, one flag per parameter (see the script below).
gcloud compute instances create ifworker-0 \
--project=ceng-test \
--zone=us-east4-c \
--machine-type=n2-standard-2 \
--network-interface=nic-type=VIRTIO_NET \
--network-tier=PREMIUM \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--service-account=the-service-account \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=ifworker-net-0 \
--create-disk=mode=rw \
--create-disk=size=40GB \
--create-disk=type=projects/ceng-test/zones/us-central1-a/diskTypes/pd-balanced \
--create-disk=boot=yes \
--create-disk=auto-delete=yes \
--create-disk=image=projects/windows-cloud/global/images/windows-server-2022-dc-core-v20220513 \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
However, even then the script fails with the error reproduced below.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- Invalid value for field 'resource.disks[0]': '{
"type": "PERSISTENT",
"mode": "READ_WRITE",
"boot": true,
"initializeParams": { },
"autoDele...'.
Boot disk must have a source specified.
Need some guidance here. Thanks for your attention and time.
Looking at your command, the boot and image properties need to be in the same --create-disk flag.
It should look like this.
--create-disk=boot=yes,image=projects/windows-cloud/global/images/windows-server-2022-dc-core-v20220513
Per GCP's documentation, the image property belongs in the same --create-disk=[PROPERTY=VALUE,…] list as the other disk properties, specifying the image from which the boot disk is initialized.
Below is the command that worked on my end:
gcloud compute instances create ifworker-0 \
--project=<project_name> \
--zone=us-east4-c \
--machine-type=n2-standard-2 \
--network-interface=nic-type=VIRTIO_NET \
--network-tier=PREMIUM \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--service-account=the-service-account \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=ifworker-net-0 \
--create-disk=mode=rw \
--create-disk=size=40GB \
--create-disk=type=projects/ceng-test/zones/us-central1-a/diskTypes/pd-balanced \
--create-disk=boot=yes,image=projects/windows-cloud/global/images/windows-server-2022-dc-core-v20220513 \
--create-disk=auto-delete=yes \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
Note:
Replace <project_name> and the service account details with your own.
For "gcloud compute instances create" there should be exactly one --create-disk flag per disk; each additional --create-disk flag defines an additional disk.
Since we want only one disk, all of its properties go in a single flag, delimited by "," (a two-disk variant is sketched after the example below).
The correct example follows.
gcloud compute instances create ifworker-0 \
--project=<project_name> \
--zone=us-east4-c \
--machine-type=e2-micro \
--network-interface=nic-type=VIRTIO_NET \
--network-tier=PREMIUM \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=ifworker-net-0 \
--create-disk=mode=rw,size=40GB,type=projects/<project_name>/zones/us-central1-a/diskTypes/pd-balanced,boot=yes,auto-delete=yes,image=projects/windows-cloud/global/images/windows-server-2022-dc-core-v20220513 \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
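As noted above, each --create-disk flag defines its own disk, so repeating the flag is how you would attach a second disk. A minimal two-disk sketch (the data disk's device-name and size are made up for illustration; the other flags are omitted):
gcloud compute instances create ifworker-0 \
--project=<project_name> \
--zone=us-east4-c \
--machine-type=e2-micro \
--create-disk=boot=yes,auto-delete=yes,size=40GB,type=pd-balanced,image=projects/windows-cloud/global/images/windows-server-2022-dc-core-v20220513 \
--create-disk=device-name=data-0,size=100GB,type=pd-balanced,auto-delete=yes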
I am trying to create a table from the CLI with a few GSIs. However, it returns an error:
Parameter validation failed: Missing required parameter in
GlobalSecondaryIndexes[5]: "IndexName" Missing required parameter in
GlobalSecondaryIndexes[5]: "KeySchema" Missing required parameter in
GlobalSecondaryIndexes[5]: "Projection"
This is what I have:
aws dynamodb create-table \
--region=eu-west-2 \
--endpoint-url http://localhost:8000 \
--table-name PBA2 \
--attribute-definitions \
AttributeName=PK,AttributeType=S \
AttributeName=SK,AttributeType=S \
AttributeName=GSI1PK,AttributeType=S \
AttributeName=GSI1SK,AttributeType=S \
AttributeName=GSI2PK,AttributeType=S \
AttributeName=GSI2SK,AttributeType=S \
AttributeName=GSI3PK,AttributeType=S \
AttributeName=GSI3SK,AttributeType=S \
AttributeName=GSI4PK,AttributeType=S \
AttributeName=GSI4SK,AttributeType=S \
AttributeName=GSI5PK,AttributeType=S \
AttributeName=GSI5SK,AttributeType=S \
--key-schema \
AttributeName=PK,KeyType=HASH \
AttributeName=SK,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10 \
--global-secondary-indexes \
'IndexName=GSI1,KeySchema=[{AttributeName=GSI1PK,KeyType=HASH},{AttributeName=GSI1SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
'IndexName=GSI2,KeySchema=[{AttributeName=GSI2PK,KeyType=HASH},{AttributeName=GSI2SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
'IndexName=GSI3,KeySchema=[{AttributeName=GSI3PK,KeyType=HASH},{AttributeName=GSI3SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
'IndexName=GSI4,KeySchema=[{AttributeName=GSI4PK,KeyType=HASH},{AttributeName=GSI4SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
'IndexName=GSI5,KeySchema=[{AttributeName=GSI5PK,KeyType=HASH},{AttributeName=GSI5SK,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}"
I tried to follow this answer but as you see, I am not getting the result I want. How can I add multiple GSIs?
You have accidentally supplied ProvisionedThroughput=... as a 6th GSI.
Instead, it should be an (optional) attribute on each of the first 5 GSIs. Something like this (quote as needed):
IndexName=GSI1,KeySchema=[{AttributeName=GSI1PK,KeyType=HASH},{AttributeName=GSI1SK,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=10,WriteCapacityUnits=10} \
IndexName=GSI2,KeySchema=[{AttributeName=GSI2PK,KeyType=HASH},{AttributeName=GSI2SK,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=10,WriteCapacityUnits=10} \
....
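If the shorthand quoting gets awkward, the same definitions can also be passed as JSON from a file (the name gsi.json below is just for illustration; only GSI1 is shown, so repeat the object for GSI2 through GSI5).
Contents of gsi.json:
[
  {
    "IndexName": "GSI1",
    "KeySchema": [
      {"AttributeName": "GSI1PK", "KeyType": "HASH"},
      {"AttributeName": "GSI1SK", "KeyType": "RANGE"}
    ],
    "Projection": {"ProjectionType": "ALL"},
    "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 10}
  }
]
Then point the flag at the file instead of the shorthand list:
--global-secondary-indexes file://gsi.json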
I finally figured it out:
aws dynamodb create-table \
--region=eu-west-2 \
--endpoint-url http://localhost:8000 \
--table-name PBA2 \
--attribute-definitions \
AttributeName=PK,AttributeType=S \
AttributeName=SK,AttributeType=S \
AttributeName=GSI1PK,AttributeType=S \
AttributeName=GSI1SK,AttributeType=S \
AttributeName=GSI2PK,AttributeType=S \
AttributeName=GSI2SK,AttributeType=S \
AttributeName=GSI3PK,AttributeType=S \
AttributeName=GSI3SK,AttributeType=S \
AttributeName=GSI4PK,AttributeType=S \
AttributeName=GSI4SK,AttributeType=S \
AttributeName=GSI5PK,AttributeType=S \
AttributeName=GSI5SK,AttributeType=S \
--key-schema \
AttributeName=PK,KeyType=HASH \
AttributeName=SK,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10 \
--global-secondary-indexes \
IndexName=GSI1,KeySchema=["{AttributeName=GSI1PK,KeyType=HASH}","{AttributeName=GSI1SK,KeyType=RANGE}"],Projection="{ProjectionType=ALL}",ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}" \
IndexName=GSI2,KeySchema=["{AttributeName=GSI2PK,KeyType=HASH}","{AttributeName=GSI2SK,KeyType=RANGE}"],Projection="{ProjectionType=ALL}",ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}" \
IndexName=GSI3,KeySchema=["{AttributeName=GSI3PK,KeyType=HASH}","{AttributeName=GSI3SK,KeyType=RANGE}"],Projection="{ProjectionType=ALL}",ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}" \
IndexName=GSI4,KeySchema=["{AttributeName=GSI4PK,KeyType=HASH}","{AttributeName=GSI4SK,KeyType=RANGE}"],Projection="{ProjectionType=ALL}",ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}" \
IndexName=GSI5,KeySchema=["{AttributeName=GSI5PK,KeyType=HASH}","{AttributeName=GSI5SK,KeyType=RANGE}"],Projection="{ProjectionType=ALL}",ProvisionedThroughput="{ReadCapacityUnits=10,WriteCapacityUnits=10}"
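To confirm the indexes were actually created against the local endpoint, you can list them with describe-table:
aws dynamodb describe-table \
--table-name PBA2 \
--endpoint-url http://localhost:8000 \
--query 'Table.GlobalSecondaryIndexes[].IndexName'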
I have installed Tesseract version 4.0 on Ubuntu.
I am able to perform all the usual Tesseract actions from the CLI, such as simple OCR text generation.
I want to train the LSTM.
I read this article and tried to run the following command directly in the terminal after installing Tesseract from a source build.
mkdir -p ~/tesstutorial/engoutput
training/lstmtraining --debug_interval 100 \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--net_spec '[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c111]' \
--model_output ~/tesstutorial/engoutput/base --learning_rate 20e-4 \
--train_listfile ~/tesstutorial/engtrain/eng.training_files.txt \
--eval_listfile ~/tesstutorial/engeval/eng.training_files.txt \
--max_iterations 5000 &>~/tesstutorial/engoutput/basetrain.log
It did create the engoutput directory, and the current path pointed to the src directory of Tesseract.
I get the following error:
bash: training/lstmtraining: No such file or directory
Fixed with the following.
Create the training data first:
cd ~/tesseract-ocr/src
training/tesstrain.sh \
--fonts_dir /usr/share/fonts/ \
--lang eng \
--linedata_only \
--noextract_font_properties \
--exposures "0" \
--langdata_dir /home/shan/langdata_lstm \
--output_dir /home/shan/tesstutorial/engtrain \
--tessdata_dir /home/shan/tesseract-ocr/tessdata \
--fontlist "Arial"
sudo chmod -R 777 /home/shan/tesstutorial/engtrain
Then the LSTM model:
sudo chmod -R 777 /home/shan/tesstutorial/
cd ~/tesseract-ocr/src/
training/lstmtraining --stop_training \
--continue_from ~/tesstutorial/engoutput/base_checkpoint \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--model_output ~/tesstutorial/engoutput/eng.traineddata
sudo chmod -R 777 ~/tesstutorial
cd ~/tesseract-ocr/src/
training/lstmtraining --debug_interval 100 \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--net_spec '[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c111]' \
--model_output ~/tesstutorial/engoutput/base --learning_rate 20e-4 \
--train_listfile ~/tesstutorial/engtrain/eng.training_files.txt \
--max_iterations 5000 &>~/tesstutorial/engoutput/basetrain.log
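If training/lstmtraining is still reported as missing, the training tools most likely have not been built at all. A minimal sketch, assuming an in-tree autotools build rooted at ~/tesseract-ocr as used above:
# build and install the training tools alongside the main binaries
cd ~/tesseract-ocr
make training
sudo make training-install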
Running the dataeng-machine-learning codelab, on step 9. 4. Feature Engineering.
The notebook step for running a training job is:
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${REPO}/courses/machine_learning/feateng/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=1.0 \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--output_dir=$OUTDIR \
--num_epochs=100
That works great no matter how many times I run it.
However if I run:
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${REPO}/courses/machine_learning/feateng/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=1.0 \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--output_dir=$OUTDIR \
--num_epochs=100 \
--verbosity DEBUG
The job fails after about 40 seconds with this in the logs:
The replica master 0 exited with a non-zero status of 2. Termination reason: Error.
I've found this usage here:
https://cloud.google.com/ml-engine/docs/how-tos/getting-started-training-prediction#cloud-train-single
So I guess it's OK to use.
What am I doing wrong?
Note that every argument after the "-- \" line is passed through to the TensorFlow code and therefore depends on the individual sample.
In this case, the "--verbosity" flag isn't supported by the sample you are running. Looking at the samples repo, it looks like the only sample with that flag is the census estimator sample.
The taxifare example currently hardcodes verbosity to INFO, and its code doesn't parse the --verbosity flag.
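To see which flags a sample's trainer actually accepts before submitting, it can help to check its argument parser and, after a failure, the job logs. A rough sketch, assuming the argument parsing lives in the trainer's task.py:
# list the argparse flags the taxifare trainer defines
grep -n "add_argument" ${REPO}/courses/machine_learning/feateng/taxifare/trainer/task.py
# stream the failed job's logs to see the underlying error (JOBNAME from the submit cell)
gcloud ml-engine jobs stream-logs $JOBNAME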
I added a TexturePacker script to export a sprite sheet, and it's working. I would like to know how to set 'Pre Multiply Alpha' and 'NPot any size' while exporting the sheet through the Xcode script.
Here is my current code:
TP="/usr/local/bin/TexturePacker"
${TP} --smart-update \
--format cocos2d \
--padding 2 \
--main-extension "-ipadhd" \
--autosd-variant 0.5:-ipad \
--autosd-variant 0.5:-hd \
--autosd-variant 0.25: \
--opt RGBA8888 \
--data iOS/Resources/Game_SpriteSheet/CBirdSpriteSheet_1-ipadhd.plist \
--sheet iOS/Resources/Game_SpriteSheet/CBirdSpriteSheet_1-ipadhd.pvr.ccz \
SpriteSheet/Sprite_Sheet_1/*.png
The screenshot is from the external TexturePacker GUI; I want the same settings in the script.
Have you tried adding the --premultiply-alpha and --size-constraints <value> options to the command? [1]
TP="/usr/local/bin/TexturePacker"
${TP} --smart-update \
--format cocos2d \
--padding 2 \
--main-extension "-ipadhd" \
--autosd-variant 0.5:-ipad \
--autosd-variant 0.5:-hd \
--autosd-variant 0.25: \
--opt RGBA8888 \
--premultiply-alpha \
--size-constraints NPOT \
--data iOS/Resources/Game_SpriteSheet/CBirdSpriteSheet_1-ipadhd.plist \
--sheet iOS/Resources/Game_SpriteSheet/CBirdSpriteSheet_1-ipadhd.pvr.ccz \
SpriteSheet/Sprite_Sheet_1/*.png
[1] http://www.codeandweb.com/texturepacker/documentation
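If you are not sure whether your installed TexturePacker version supports these options, the command-line help lists the available flags (exact names can vary between versions):
${TP} --help | grep -i -E "premultiply|size-constraints"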