I am trying to deploy my first function to Google Cloud using the terminal in VS Code.
I followed the tutorial and downloaded the git repo.
I try to deploy like this:
gcloud functions deploy nodejs-http-function \
--gen2 \
--runtime=nodejs16 \
--region=europe-west1 \
--source=projects \
--entry-point=helloGET \
--trigger-http \
--allow-unauthenticated
But it fails, saying there was an unexpected error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed with status: FAILURE and message: An unexpected error occurred. Refer to build logs:
2022-08-30T10:37:20.251175889Z Copying gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918...
2022-08-30T10:37:20.315626228Z / [0 files][ 0.0 B/ 22.0 B]
/ [1 files][ 22.0 B/ 22.0 B]
2022-08-30T10:37:20.315708737Z Operation completed over 1 objects/22.0 B.
2022-08-30T10:37:21.477875307Z Archive: /tmp/source-archive.zip
2022-08-30T10:37:21.477882569Z warning [/tmp/source-archive.zip]: zipfile is empty
2022-08-30T10:37:21.988690356Z Fetching storage object: gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918
2022-08-30T10:37:23.934777658Z Copying gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918...
2022-08-30T10:37:23.997108157Z / [0 files][ 0.0 B/ 22.0 B]
/ [1 files][ 22.0 B/ 22.0 B]
2022-08-30T10:37:23.997183532Z Operation completed over 1 objects/22.0 B.
2022-08-30T10:37:25.147810126Z warning [/tmp/source-archive.zip]: zipfile is empty
2022-08-30T10:37:25.147817595Z Archive: /tmp/source-archive.zip
2022-08-30T10:37:25.157272064Z ERROR
2022-08-30T10:37:25.157303927Z ERROR: error fetching storage source: generic::unknown: retry budget exhausted (3 attempts): fetching gcs source: unpacking source from gcs: source fetch container exited with non-zero status: 1
I don't really know what to make of this.
EDIT:
My path to the index.js file is here:
/Users/juliustolksdorf/Projects/Interior_Circle/webhook/nodejs-docs-samples-main/functions/helloworld
and I am running the command from the VS Code terminal.
Of course, I first ran cd /Users/juliustolksdorf/Projects/Interior_Circle/webhook/nodejs-docs-samples-main/functions/helloworld to get to the directory where my index.js file lies, and then ran the deploy command, but it still failed.
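For what it's worth, the "zipfile is empty" warnings in the build log (the uploaded archive is only 22 B) suggest that --source=projects points at a directory containing no function code. A minimal sketch of the deploy, assuming index.js and package.json live in the helloworld directory above, so --source can simply be the current directory:

cd /Users/juliustolksdorf/Projects/Interior_Circle/webhook/nodejs-docs-samples-main/functions/helloworld
gcloud functions deploy nodejs-http-function \
  --gen2 \
  --runtime=nodejs16 \
  --region=europe-west1 \
  --source=. \
  --entry-point=helloGET \
  --trigger-http \
  --allow-unauthenticated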
Related
I am using Windows 10 and PowerShell to build and submit a transaction. The command submitted is below:
cscli transaction simple-payment build --out-file tx.txsigned `
  --signing-key addr_xsk1fzw9r482t0ekua7rcqewg3k8ju5d9run4juuehm2p24jtuzz4dg4wpeulnqhualvtx9lyy7u0h9pdjvmyhxdhzsyy49szs6y8c9zwfp0eqyrqyl290e6dr0q3fvngmsjn4aask9jjr6q34juh25hczw3euust0dw --network testnet `
  --from addr_test1vq5zuhh9685fup86syuzmu3e6eengzv8t46mfqxg086cvqqc5zr4t `
  --to addr_test1vpuuxlat45yxvtsk44y4pmwk854z4v9k879yfe99q3g3aagqqzar3 --ada 420 --message "thx for lunch"
Thank you for the help.
I have to run a Spark job (I am new to Spark) and am getting the following error:
[2022-02-16 14:47:45,415] {{bash.py:135}} INFO - Tmp dir root location: /tmp
[2022-02-16 14:47:45,416] {{bash.py:158}} INFO - Running command: spark-submit --class org.xyz.practice.driver.PractitionerDriver s3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar
[2022-02-16 14:47:45,422] {{bash.py:169}} INFO - Output:
[2022-02-16 14:47:45,423] {{bash.py:173}} INFO - bash: spark-submit: command not found
[2022-02-16 14:47:45,423] {{bash.py:177}} INFO - Command exited with return code 127
[2022-02-16 14:47:45,437] {{taskinstance.py:1482}} ERROR - Task failed with exception
What has to be done?
import logging
import pyspark
from airflow.operators.bash import BashOperator  # Airflow 2.x import path

def run_spark(**kwargs):
    sc = pyspark.SparkContext()
    df = sc.textFile('s3://demoairflowpawan/people.txt')
    logging.info('Number of lines in people.txt = {0}'.format(df.count()))
    sc.stop()

spark_task = BashOperator(
    task_id='spark_java',
    bash_command='spark-submit --class {{ params.class }} {{ params.jar }}',
    params={'class': 'org.xyz.practice.driver.PractitionerDriver',
            'jar': 's3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar'},
    dag=dag
)
The question is: why do you expect spark-submit to be there?
If you created the default Airflow pods, then they come with the Airflow code only.
You can check here an example for Spark and Airflow - https://medium.com/codex/executing-spark-jobs-with-apache-airflow-3596717bbbe3 - and they state specifically that "Spark binaries must be added and mapped".
So you need to figure out how to get the Spark binaries onto the existing Airflow pod (a sketch below).
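One possible sketch, assuming you control the image the Airflow workers run on: the pyspark pip package ships a spark-submit wrapper, which is often the quickest way to get the binary onto the PATH (versions here are illustrative, pin them to match your cluster):

# Inside the Airflow worker image or pod:
pip install pyspark      # ships bin/spark-submit alongside the Python package
spark-submit --version   # verify it resolves on $PATH (requires a Java runtime on the image)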
Alternatively - you can create another k8s job which will do the spark-submit, and have your DAG activate this job.
Sorry for the high-level answer...
Elastic Beanstalk is adding and removing instances one after the other. Googling around points to checking the "State transition message", which comes up as "Client.UserInitiatedShutdown: User initiated shutdown". https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-internal states some possible reasons, but none of them apply. No one has touched any settings. Any ideas?
UPDATE: Did a bit more digging and found out that app deployment is failing. Relevant log errors are below:
eb-engine.log
2021/08/05 15:46:29.272215 [INFO] Executing instruction: PreBuildEbExtension
2021/08/05 15:46:29.272220 [INFO] Starting executing the config set Infra-EmbeddedPreBuild.
2021/08/05 15:46:29.272235 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPreBuild
2021/08/05 15:50:44.538818 [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:50:44.540438 [INFO] Executing cleanup logic
2021/08/05 15:50:44.581445 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1628178644,"severity":"ERROR"}]}]}
2021/08/05 15:50:44.620394 [INFO] Platform Engine finished execution on command: app-deploy
2021/08/05 15:51:22.196186 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:51:22.196215 [INFO] Executing cleanup logic
eb-cfn-init.log
[2021-08-05T15:42:44.199Z] Completed executing cfn_init.
[2021-08-05T15:42:44.226Z] finished _OnInstanceReboot
+ RESULT=1
+ [[ 1 -ne 0 ]]
+ sleep_delay
+ (( 2 < 3600 ))
+ echo Sleeping 2
Sleeping 2
+ sleep 2
+ SLEEP_TIME=4
+ true
+ curl https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922/lib/UserDataScript.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4627 100 4627 0 0 24098 0 --:--:-- --:--:-- --:--:-- 24098
+ RESULT=0
+ [[ 0 -ne 0 ]]
+ SLEEP_TIME=2
+ /bin/bash /tmp/ebbootstrap.sh 'https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513' arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f 65c52bb7-0376-4d43-b304-b64890a34c1c https://elasticbeanstalk-health.us-east-1.amazonaws.com '' https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922 us-east-1
[2021-08-05T15:46:07.683Z] Started EB Bootstrapping Script.
[2021-08-05T15:46:07.739Z] Received parameters:
TARBALLS =
EB_GEMS =
SIGNAL_URL = https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513
STACK_ID = arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f
REGION = us-east-1
GUID =
HEALTHD_GROUP_ID = 65c52bb7-0376-4d43-b304-b64890a34c1c
HEALTHD_ENDPOINT = https://elasticbeanstalk-health.us-east-1.amazonaws.com
PROXY_SERVER =
HEALTHD_PROXY_LOG_LOCATION =
PLATFORM_ASSETS_URL = https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922
Is this some corrupted AMI?
Turned out that there was a config script in the .ebextensions directory that was misbehaving.
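A sketch of how the culprit can be tracked down, following the pointer in eb-engine.log above (the cfn-init logs name the .ebextensions config set and command that failed; the second log file is an assumption based on Amazon Linux 2 platforms):

eb ssh                           # EB CLI, or connect via the EC2 console
less /var/log/cfn-init.log       # the log referenced by eb-engine.log
less /var/log/cfn-init-cmd.log   # per-command output from .ebextensions, if present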
I have created an EMR cluster through the AWS CLI:
aws emr create-cluster --applications Name=Hive Name=HBase Name=Hue Name=Hadoop Name=ZooKeeper \
  --tags Name="EMR-Atlas" --release-label emr-5.16.0 \
  --ec2-attributes SubnetId=subnet-xxxxx,KeyName=atlas-emr-dif \
  --use-default-roles --ebs-root-volume-size 100 \
  --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.xlarge InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.xlarge \
  --log-uri s3://xxx/logs/new-log \
  --steps Name="Run Remote Script",Jar=command-runner.jar,Args=[bash,-c,"curl https://s3.amazonaws.com/aws-bigdata-blog/artifacts/aws-blog-emr-atlas/apache-atlas-emr.sh -o /tmp/script.sh; chmod +x /tmp/script.sh; /tmp/script.sh"]
Then I established an SSH tunnel for Hue:
ssh -L 8888:localhost:8888 -i key.pem hadoop@<EMR Master IP Address>
I have created a Hive table through Hue:
CREATE external TABLE us_disease
(
YearStart int,
StratificationCategory2 string,
GeoLocation string,
ResponseID string,
LocationID int,
TopicID string
)
row format delimited
fields terminated by ','
LOCATION 's3://XXXX/data/USHealthcare/'
TBLPROPERTIES ("skip.header.line.count"="1");
I am able to fetch records with a SELECT statement through Hue.
But if I try to execute the SELECT statement through an HQL file, it fails.
I tried it in the following way:
My HQL is a plain SELECT statement,
select * from us_disease limit 10;
and I have stored the same in S3 as hive.hql.
I executed the HQL through a step in the EMR cluster.
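For reference, a Hive step like this is typically submitted with aws emr add-steps; a sketch, with the cluster ID as a placeholder:

aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps Type=HIVE,Name="Run hive.hql",ActionOnFailure=CONTINUE,Args=[-f,s3://dif-test/data-governance/hql/hive.hql]

The step failed with the log below.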
Log:
INFO redirectError to /mnt/var/log/hadoop/steps/s-xxxxxxxx/stderr
INFO Working dir /mnt/var/lib/hadoop/steps/s-xxxxxxxx
INFO ProcessRunner started child process 30597 :
hadoop 30597 5505 0 11:40 ? 00:00:00 bash /usr/lib/hadoop/bin/hadoop jar /var/lib/aws/emr/step-runner/hadoop-jars/command-runner.jar hive-script --run-hive-script --args -f s3://dif-test/data-governance/hql/hive.hql
2021-03-30T11:40:36.318Z INFO HadoopJarStepRunner.Runner: startRun() called for s-xxxxxxxx Child Pid: 30597
INFO Synchronously wait child process to complete : hadoop jar /var/lib/aws/emr/step-runner/hadoop-...
INFO waitProcessCompletion ended with exit code 127 : hadoop jar /var/lib/aws/emr/step-runner/hadoop-...
INFO total process run time: 2 seconds
2021-03-30T11:40:36.437Z INFO Step created jobs:
2021-03-30T11:40:36.438Z WARN Step failed with exitCode 127 and took 2 seconds
stderr:
/usr/lib/hadoop/bin/hadoop: line 169: /etc/alternatives/jre/bin/java: No such file or directory
Any help appreciated. Thank you.
The issue got fixed after I updated the EMR version: previously I was using emr-5.16.0, and I changed to emr-5.32.0.
Modified code:
aws emr create-cluster --applications Name=Hive Name=HBase Name=Hue Name=Hadoop Name=ZooKeeper \
  --tags Name="EMR-Atlas" --release-label emr-5.32.0 \
  --ec2-attributes SubnetId=subnet-xxxx,KeyName=atlas-emr-dif \
  --use-default-roles --ebs-root-volume-size 100 \
  --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m5.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m5.xlarge \
  --log-uri s3://xxx/xxx/new-log \
  --steps Name="Run Remote Script",Jar=command-runner.jar,Args=[bash,-c,"curl https://s3.amazonaws.com/aws-bigdata-blog/artifacts/aws-blog-emr-atlas/apache-atlas-emr.sh -o /tmp/script.sh; chmod +x /tmp/script.sh; /tmp/script.sh"]
Hi, I am a newbie to TensorFlow. My aim is to convert a .pb file to .tflite from a pretrained model, for my own understanding. I have downloaded the mobilenet_v1_1.0_224 model. Below is the structure of the model:
mobilenet_v1_1.0_224.ckpt.data-00000-of-00001 - 66312kb
mobilenet_v1_1.0_224.ckpt.index - 20kb
mobilenet_v1_1.0_224.ckpt.meta - 3308kb
mobilenet_v1_1.0_224.tflite - 16505kb
mobilenet_v1_1.0_224_eval.pbtxt - 520kb
mobilenet_v1_1.0_224_frozen.pb - 16685kb
I know the model already has a .tflite file, but I am trying to convert it myself for my understanding.
My First Step: Creating the frozen graph file
import tensorflow as tf
from tensorflow.python.framework import graph_util

imported_meta = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
graph_ = tf.get_default_graph()

with tf.Session() as sess:
    imported_meta.restore(sess, base_dir + model_folder_name + checkpoint)
    graph_def = sess.graph.as_graph_def()
    # Freeze: fold the checkpoint variables into constants, keeping only the prediction output
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['MobilenetV1/Predictions/Reshape_1'])
    with tf.gfile.GFile(base_dir + model_folder_name + './my_frozen.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
I have successfully created my_frozen.pb (16,590 kB). But the original file size is 16,685 kB, as is clearly visible in the folder structure above. So this is my first question: why is the file size different? Am I following some wrong path?
My Second Step: Creating the tflite file using a bazel command
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/path_to_folder/my_frozen.pb --output_file=/path_to_folder/model.tflite --inference_type=FLOAT --input_shape=1,224,224,3 --input_array=input --output_array=MobilenetV1/Predictions/Reshape_1
This command gives me model.tflite - 0 kb.
Traceback for the bazel command:
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.369s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/frozengraph.pb' '--output_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/float_model.tflite' '--inference_type=FLOAT' '--input_shape=1,224,224,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2018-04-12 16:36:16.190375: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: FIFOQueueV2
2018-04-12 16:36:16.190707: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: QueueDequeueManyV2
2018-04-12 16:36:16.202293: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211322: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211756: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Python Version - 2.7.6
Tensorflow Version - 1.5.0
Thanks in advance :)
The error Check failed: mean_shape.dims() == multiplier_shape.dims() was an issue with the resolution of batch norm and has been resolved in:
https://github.com/tensorflow/tensorflow/commit/460a8b6a5df176412c0d261d91eccdc32e9d39f1#diff-49ed2a40acc30ff6d11b7b326fbe56bc
In my case the error occurred using tensorflow v1.7.
The solution was to use tensorflow v1.15 (nightly):
toco --graph_def_file=/path_to_folder/my_frozen.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_file=/path_to_folder/my_output_model.tflite \
  --input_shapes=1,224,224,3 \
  --input_arrays=input \
  --output_format=TFLITE \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --inference_type=FLOAT
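For reference, the same conversion can also be run through tflite_convert, which ships with the TensorFlow 1.x pip package; a sketch, assuming TF 1.15 and the same paths as above:

tflite_convert \
  --graph_def_file=/path_to_folder/my_frozen.pb \
  --output_file=/path_to_folder/my_output_model.tflite \
  --input_arrays=input \
  --input_shapes=1,224,224,3 \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --inference_type=FLOAT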