# Serving the model with TF Serving

To run the ModelServer, execute the following command:

```sh
$ tensorflow_model_server --model_name=mnist --model_base_path=/home/armando/models/mnist
```

The server starts serving the model on port 8500:

```
I tensorflow_serving/model_servers/main.cc:147] Building single TensorFlow model file config: model_name: mnist model_base_path: /home/armando/models/mnist
I tensorflow_serving/model_servers/server_core.cc:441] Adding/updating models.
I tensorflow_serving/model_servers/server_core.cc:492] (Re-)adding model: mnist
I tensorflow_serving/core/basic_manager.cc:705] Successfully reserved resources to load servable {name: mnist version: 1}
I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: mnist version: 1}
I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: mnist version: 1}
I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /home/armando/models/mnist/1
I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: /home/armando/models/mnist/1
I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 29853 microseconds.
I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: mnist version: 1}
E1121 ev_epoll1_linux.c:1051] grpc epoll fd: 3
I tensorflow_serving/model_servers/main.cc:288] Running ModelServer at 0.0.0.0:8500 ...
```

To test the server by calling the model to classify images, follow along with the notebook `ch-11c_TF_Serving_MNIST`.

The first two cells of the notebook provide the test-client functions from the official TensorFlow example in the Serving repository. We modified the example to send the `'input'` and receive the `'output'` in the function signature for calling the ModelServer.

Call the test-client function in the third cell of the notebook with the following code:

```py
error_rate = do_inference(hostport='0.0.0.0:8500',
                          work_dir='/home/armando/datasets/mnist',
                          concurrency=1,
                          num_tests=100)
print('\nInference error rate: %s%%' % (error_rate * 100))
```

We get an error rate of almost 7%! (You may get a different value):

```
Extracting /home/armando/datasets/mnist/train-images-idx3-ubyte.gz
Extracting /home/armando/datasets/mnist/train-labels-idx1-ubyte.gz
Extracting /home/armando/datasets/mnist/t10k-images-idx3-ubyte.gz
Extracting /home/armando/datasets/mnist/t10k-labels-idx1-ubyte.gz
..................................................
..................................................
Inference error rate: 7.0%
```
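For reference, the following is a minimal, synchronous sketch of the kind of gRPC `Predict` call that `do_inference` issues for each test image. It assumes the `tensorflow-serving-api` Python package is installed; `predict_digit` is a hypothetical helper, not part of the notebook, and the notebook's own client additionally runs requests concurrently and aggregates the error rate:

```py
# A simplified sketch of one Predict call against the ModelServer above.
# Assumes: pip install tensorflow-serving-api (predict_digit is hypothetical).
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc


def predict_digit(image, hostport='0.0.0.0:8500'):
    """Send one flattened 28x28 MNIST image (a float32 numpy array)
    to the ModelServer and return the predicted digit."""
    channel = grpc.insecure_channel(hostport)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    # 'mnist' matches the --model_name flag passed to tensorflow_model_server.
    request.model_spec.name = 'mnist'
    # Depending on how the model was exported, you may also need to set
    # request.model_spec.signature_name explicitly.
    # 'input' is the input tensor key in the signature described above.
    request.inputs['input'].CopyFrom(
        tf.make_tensor_proto(image, shape=[1, image.size]))

    result = stub.Predict(request, 10.0)  # 10-second deadline
    # 'output' holds the class scores in the signature described above.
    scores = np.array(result.outputs['output'].float_val)
    return int(np.argmax(scores))
```

Comparing the returned digit against the test label for a batch of images, as `do_inference` does over `num_tests` images, yields the error rate printed above.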