YOLOv4 weights to .pb conversion - computer-vision

I trained a model using YOLOv4-tiny and I am not able to convert the weights file to .pb. I am using TensorFlow version 2.3.0.
I tried some GitHub links, but they only work if the model was trained on TensorFlow 1.x; they are not compatible with 2.x.
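For context, the .pb file that TensorFlow 2.x produces is part of the SavedModel format, so once the YOLOv4-tiny network has been rebuilt in TF 2.x (for example via one of the community Darknet ports) and the weights loaded into it, the export itself is one call. A minimal sketch with a hypothetical placeholder network, since the Darknet weight loading is port-specific:
import tensorflow as tf  # 2.3.0, as in the question

# Placeholder standing in for the rebuilt YOLOv4-tiny network; the
# Darknet-weight loading itself is port-specific and not shown here.
inputs = tf.keras.Input(shape=(416, 416, 3))
outputs = tf.keras.layers.Conv2D(255, 1)(inputs)
model = tf.keras.Model(inputs, outputs)

# Saving to a path without an extension writes the SavedModel format:
# saved_model.pb plus a variables/ directory under that path.
model.save("yolov4_tiny_saved_model")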

Related

PyTorch model deployment in AI Platform

I'm deploying a PyTorch model on Google Cloud AI Platform and I'm getting the following error:
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to have error, please contact Cloud ML.
Configuration:
setup.py
from setuptools import setup
REQUIRED_PACKAGES = ['torch']
setup(
    name="iris-custom-model",
    version="0.1",
    scripts=["model.py"],
    install_requires=REQUIRED_PACKAGES
)
Model version creation
MODEL_VERSION='v1'
RUNTIME_VERSION='1.15'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud beta ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--python-version=3.7 \
--runtime-version={RUNTIME_VERSION} \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI} \
--prediction-class={MODEL_CLASS}
You need to use PyTorch packages compiled to be compatible with Cloud AI Platform (package information is in the gs://cloud-ai-pytorch bucket):
This bucket contains compiled packages for PyTorch that are compatible with Cloud AI Platform prediction. The files are mirrored from the official builds at https://download.pytorch.org/whl/cpu/torch_stable.html
From documentation
In order to deploy a PyTorch model on Cloud AI Platform Online Predictions, you must add one of these packages to the packageUris field on the version you deploy. Pick the package matching your Python and PyTorch version. The package names follow this template:
Package name = torch-{TORCH_VERSION_NUMBER}-{PYTHON_VERSION}-linux_x86_64.whl, where PYTHON_VERSION = cp35-cp35m for Python 3 with runtime versions < 1.15, and cp37-cp37m for Python 3 with runtime versions >= 1.15.
For example, if I were to deploy a PyTorch model based on PyTorch
1.1.0 and Python 3, my gcloud command would look like:
gcloud beta ai-platform versions create {VERSION_NAME} --model {MODEL_NAME}
...
--package-uris=gs://{MY_PACKAGE_BUCKET}/my_package-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.1.0-cp35-cp35m-linux_x86_64.whl
In summary:
1) Remove torch from your install_requires dependencies in setup.py (a trimmed setup.py is sketched after the command below).
2) Include the compiled torch package in --package-uris when creating your model version:
!gcloud beta ai-platform versions create {VERSION_NAME} --model {MODEL_NAME} \
--origin=gs://{BUCKET}/{MODEL_DIR}/ \
--python-version=3.7 \
--runtime-version={RUNTIME_VERSION} \
--package-uris=gs://{BUCKET}/{PACKAGES_DIR}/text_classification-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl \
--prediction-class=model_prediction.CustomModelPrediction
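For step 1, the setup.py from the question would then shrink to this (a minimal sketch):
from setuptools import setup

# torch is deliberately absent from install_requires; the compiled
# wheel is supplied through --package-uris instead.
setup(
    name="iris-custom-model",
    version="0.1",
    scripts=["model.py"]
)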

What to define as entry point when initializing a PyTorch estimator with a custom Docker image for training on AWS SageMaker?

So I created a Docker image for training. In the Dockerfile I have an entrypoint defined such that when docker run is executed, it starts running my Python code.
To use this on AWS SageMaker, my understanding is that I need to create a PyTorch estimator in a Jupyter notebook in SageMaker. I tried something like this:
import sagemaker
from sagemaker.pytorch import PyTorch

sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()

estimator = PyTorch(entry_point='train.py',
                    role=role,
                    framework_version='1.3.1',
                    image_name='xxx.ecr.eu-west-1.amazonaws.com/xxx:latest',
                    train_instance_count=1,
                    train_instance_type='ml.p3.xlarge',
                    hyperparameters={})
estimator.fit({})
In the documentation I found that as the image name I can specify the link to my Docker image on AWS ECR. When I try to execute this it keeps complaining:
[Errno 2] No such file or directory: 'train.py'
It complains immediately, so surely I am doing something completely wrong. I would expect my Docker image to run first, and only then could it find out that the entry point does not exist.
But besides this, why do I need to specify an entry point at all? Should it not be clear that the entry to my training is simply docker run?
For better understanding, the entry-point Python file in my Docker image looks like this:
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=5)
    parser.add_argument('--batch_size', type=int, default=16)
    parser.add_argument('--learning_rate', type=float, default=0.0001)
    # Data and output directories
    parser.add_argument('--output_data_dir', type=str, default=os.environ['OUTPUT_DATA_DIR'])
    parser.add_argument('--train_data_path', type=str, default=os.environ['CHANNEL_TRAIN'])
    parser.add_argument('--valid_data_path', type=str, default=os.environ['CHANNEL_VALID'])
    args = parser.parse_args()
    # Start training
    ...
Later I would like to specify the hyperparameters and data channels, but for now I simply do not understand what to put as the entry point. The documentation says the entry point is required and should be a local or absolute path to the entry-point script...
If you really want to use a completely separate, self-built Docker image, you should create an Amazon SageMaker algorithm (which is one of the options in the SageMaker menu). There you have to specify a link to your Docker image on Amazon ECR as well as the input parameters, data channels, etc. When choosing this option, you should not use the PyTorch estimator but the Algorithm estimator. This way you indeed don't have to specify an entry point, because it simply runs the Docker image when training and the default entry point can be defined in your Dockerfile.
The PyTorch estimator is for when you have your own model code but want to run it inside an off-the-shelf SageMaker PyTorch Docker image; this is why you have to specify, for example, the PyTorch framework version. In this case the entry-point file should by default be placed next to where your Jupyter notebook is stored (just upload the file by clicking the upload button). The PyTorch estimator inherits all options from the Framework estimator, whose documentation describes where to place the entry point and model code, for example via source_dir.
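For illustration, the fully-custom-image route would look roughly like this with the generic Estimator class (a sketch using the SageMaker Python SDK v1 parameter names that the question's code also uses; the ECR URI is the placeholder from the question):
import sagemaker
from sagemaker.estimator import Estimator

role = sagemaker.get_execution_role()

# The generic Estimator runs the image as-is, so the ENTRYPOINT baked
# into the Dockerfile is used and no entry_point argument is needed.
estimator = Estimator(image_name='xxx.ecr.eu-west-1.amazonaws.com/xxx:latest',
                      role=role,
                      train_instance_count=1,
                      train_instance_type='ml.p3.xlarge',
                      hyperparameters={})
estimator.fit({})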

Google Speech API using PHP: "Invalid audio channel count" with myfile.FLAC

When I use the default audio32KHz.flac to test, it works. But when I try my own FLAC file (32 kHz), it does not:
Fatal error: Uncaught Google\ApiCore\ApiException:
{
"message": "Invalid audio channel count",
"code": 3,
"status": "INVALID_ARGUMENT",
"details": []
}
thrown in C:\xampp\htdocs\speech\speech-19\vendor\google\gax\src\ApiException.php on line 139
How can I convert my FLAC file to mono FLAC? Thank you!
1. To convert your audio file to mono:
You can use the sox tool (easy to install and use):
sudo apt-get install -y sox
Then convert your file to mono:
sox yourfile.flac output.flac channels 1
2. To use the API with multi-channel audio files:
a) Add these two options to your config. I don't know PHP, but I believe you would write it like this:
->setAudioChannelCount(2)
->setEnableSeparateRecognitionPerChannel(true)
Reference: Transcribing audio with multiple channels
b) Use the gcloud alpha command:
gcloud alpha ml speech recognize yourfile.flac --language-code='en-US' --audio-channel-count=2 --separate-channel-recognition
Reference: gcloud alpha ml speech recognize
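For comparison, here is the same pair of options in the Python client (a sketch assuming a recent google-cloud-speech release; the field names come from the RecognitionConfig proto, which the PHP setters above map to):
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=32000,
    language_code="en-US",
    # Declare the channel count and ask for per-channel transcription.
    audio_channel_count=2,
    enable_separate_recognition_per_channel=True,
)

with open("yourfile.flac", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)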

Error when creating model version on GCP AI Platform

I am trying to create a model version and link it to my exported TensorFlow model. However, it gives me the following error: health probe timeout: generic::unavailable: The fetch failed with status 3 and reason: UNREACHABLE_5xx Check the url is available and that authentication parameters are specified correctly.
I have made my SavedModel directory public and have attached service-xxxxxxxxxxxx@cloud-ml.google.com.iam.gserviceaccount.com to my bucket with Storage Legacy Bucket Reader. My service account service-xxxxxxxxxxxx@cloud-ml.google.com.iam.gserviceaccount.com has the roles ML Engine Admin and Storage Admin. The bucket and ML Engine are part of the same project and region us-central1. I am initialising the model version with the following config:
Python version: 2.7
Framework: TensorFlow
Framework version: 1.12.3
Runtime version: 1.12
Machine type: n1-highmem-2
Accelerator: Nvidia Tesla K-80
Accelerator count: 1
Note: I used Python 2.7 for training and runtime version 1.12.
Can you verify the SavedModel is valid by using the CLI?
To check that serving tag-sets are available in your SavedModel, use the SavedModel CLI:
saved_model_cli show --dir <your model directory>
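The same check can be done from Python (a sketch using the TF 1.x API, matching the runtime version 1.12 from the question; replace the directory placeholder with your export path):
import tensorflow as tf

# Loading with the "serve" tag fails loudly if that tag-set is missing;
# the returned MetaGraphDef lists the exported serving signatures.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(sess, ["serve"], "<your model directory>")
    print(list(meta_graph.signature_def.keys()))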

Can't create Deep Learning VM using Tensorflow 2.0 framework

I'm trying to create a Deep Learning Virtual Machine on Google Cloud Platform that uses TensorFlow 2.0, but when I instantiate it I get the following error:
deep-learning-training-vm: {"ResourceType":"compute.v1.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf-2-0-cu100-experimental-20190909'. The referenced image resource cannot be found.","reason":"invalid"}],"message":"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf-2-0-cu100-experimental-20190909'. The referenced image resource cannot be found.","statusMessage":"Bad Request","requestPath":"https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-west1-b/instances","httpMethod":"POST"}}
I don't quite understand the error, but I believe GCP is not able to find the right image for my virtual machine, i.e. the image that has this version of TensorFlow in it (maybe because of the TF 2.0 release?).
Has someone faced this problem before? Is there a way to create a DL VM using TensorFlow 2.0?
It seemed to be a transient issue, since it is available now.
In addition, you can create your DL VM via gcloud. Here's an example of the command:
gcloud compute instances create INSTANCE_NAME \
--zone=ZONE \
--image-family=tf2-latest-cu100 \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-v100,count=1" \
--metadata="install-nvidia-driver=True,proxy-mode=project_editors" \
--machine-type=n2-highmem-8
There's more information on how to do this in the Deep Learning VM documentation.
Also, if you are looking to create a VM with TensorFlow and Jupyter, you can try using AI Platform Notebooks.
When you create a new notebook, you can select TensorFlow 2.0 and further customize it to select the accelerator, machine type, etc.