diff --git a/Demo/iOS/AICamera/README_cn.md b/Demo/iOS/AICamera/README.cn.md
similarity index 93%
rename from Demo/iOS/AICamera/README_cn.md
rename to Demo/iOS/AICamera/README.cn.md
index 036d0b6..8168ed0 100644
--- a/Demo/iOS/AICamera/README_cn.md
+++ b/Demo/iOS/AICamera/README.cn.md
@@ -51,9 +51,9 @@ App模型需要开启摄像头权限。启动App后,点击屏幕,将会出
## 快速安装
-如果你想快速地体验PDCamera,可通过扫描以下二维码进行安装。成功识别二维码之后,会自动跳转到安装页面,点击“Install PDCamera”链接,App会自动下载并安装到你的iOS设备上。
+如果你想快速地体验PDCamera,可通过扫描以下二维码进行安装。成功识别二维码之后,会自动跳转到安装页面,点击**“Install PDCamera”**链接,App会自动下载并安装到你的iOS设备上。
-成功安装App后,你还需要安装如下步骤进一步设置:设置 → 通用 → 设备管理 → Baidu USA llc → 信任“Baidu USA llc”。
+成功安装App后,你还需要按照如下步骤设置你的iOS设备:**设置 → 通用 → 设备管理 → Baidu USA llc → 信任“Baidu USA llc”**。
@@ -71,6 +71,8 @@ Github上面只维护了该Demo相关的源码文件和项目配置。用户可
VGG模型的识别精度高,但由于模型较大(104.3MB),需要占用较高的内存(\~800MB),并且识别速度慢(每帧~1.5秒),因此对设备的计算能力要求较高(iPhone6s以上),默认没有添加到项目中。用户也可自行下载[vgg\_ssd\_net.paddle](http://cloud.dlnel.org/filepub/?uuid=1116a5f3-7762-44b5-82bb-9954159cb5d4),添加到项目中,体验其高精度识别效果。
+这里,我们使用的是**合并的模型**(merged model)。如何从配置文件(例如`config.py`)和训练得到的参数文件(例如`params_pass_0.tar.gz`)生成**合并的模型**文件,请参考[如何合并模型](../../../deployment/model/merge_config_parameters/README.cn.md)。
+
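+下面给出一个简单的示意(假设本机已安装PaddlePaddle的Python包)。`deployment/model/merge_config_parameters`目录下的`merge_model.py`以`Mobilenet`为例,合并你自己的配置文件和参数文件时,可参照该脚本修改相应的路径:
+
+```bash
+# 示意:使用repo中提供的脚本生成合并的模型文件
+cd Mobile/deployment/model/merge_config_parameters
+python merge_model.py   # 输出合并的模型文件(*.paddle)
+```
+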
### 准备PaddlePaddle库
用户可按照[iOS平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_cn.md),拉取[Paddle](https://github.com/PaddlePaddle/Paddle)最新的源码,编译适用于iOS平台的PaddlePaddle库。在执行`make install`之后,PaddlePaddle库将会安装在`CMAKE_INSTALL_PREFIX`所指定的目录下。该目录包含如下子目录:
diff --git a/Demo/iOS/AICamera/README.md b/Demo/iOS/AICamera/README.md
index 4764b50..8184d1b 100644
--- a/Demo/iOS/AICamera/README.md
+++ b/Demo/iOS/AICamera/README.md
@@ -1,65 +1,60 @@
# PDCamera iOS Demo with SSD Model
+- [Overview](#overview)
+ - [Pre-trained Models](#pre-trained-models)
+ - [Demo Screenshot](#demo-screenshot)
+- [Fast Installation through QR Code](#fast-installation-through-qr-code)
+- [Build from Source Code](#build-from-source-code)
+ - [Prepare Models](#prepare-models)
+ - [Prepare PaddlePaddle Inference Library](#prepare-paddlepaddle-inference-library)
+ - [Directory Tree](#directory-tree)
+- [Integrate Paddle C Library to iOS Project](#integrate-paddle-c-library-to-ios-project)
+
+## Overview
+
This iOS demo shows PaddlePaddle running SSD (Single Shot MultiBox Detector) object detection on iOS devices locally and offline. It loads a pretrained model with PaddlePaddle, uses the camera to capture images, and calls PaddlePaddle's inference capability to show detected objects to users.
You can look at the SSD model architecture [here](https://github.com/PaddlePaddle/models/tree/develop/ssd) and a Linux demo [here](https://github.com/PaddlePaddle/Mobile/tree/develop/Demo/linux).
+### Pre-trained Models
-## Download and run the app
+The `pascal_mobilenet_300_66` and `vgg_ssd_net` models can classify 20 object categories.
+The `face_mobilenet_160_91` model detects only human faces.
-To simply run the demo with iPhone/iPad, scan the QR code below, click "Install PDCamera" in the link and the app will be downloaded in the background.
-After installed, go to Settings -> General -> Device Management -> Baidu USA llc -> Trust "Baidu USA llc"
+| Model | Dimensions | Accuracy | Size |
+| ------------------------ |:----------:| --------:|------:|
+| [pascal\_mobilenet\_300\_66.paddle](http://cloud.dlnel.org/filepub/?uuid=39c325d9-b468-4940-ba47-d50c8ec5fd5b) | 300 x 300 | 66% | 23.2MB |
+| [vgg\_ssd\_net.paddle](http://cloud.dlnel.org/filepub/?uuid=1116a5f3-7762-44b5-82bb-9954159cb5d4) | 300 x 300 | 71% | 104.3MB |
+| [face\_mobilenet\_160\_91.paddle](http://cloud.dlnel.org/filepub/?uuid=038c1dbf-08b3-42a9-b2dc-efccd63859fb) | 160 x 160 | 91% | 18.4MB |
+### Demo Screenshot
-### QR code link
+Simply tap on the screen to toggle settings.
-
-
-### Demo screenshot
+- Models: Select Pascal MobileNet 300 or Face MobileNet 160. The app will exit and must be relaunched.
+- Camera: Toggle the front/back camera. The app will exit and must be relaunched.
+- Accuracy Threshold: Adjust the threshold to filter more or fewer objects based on probability.
+- Time Refresh Rate: Adjust how frequently the bounding boxes are refreshed.
-
+
+
+
+
+Figure 1
+
Detected objects are highlighted with bounding boxes, each labeled with the classified object and its probability.
+## Fast Installation through QR Code
-## Classifications
-`pascal_mobilenet_300_66` and `vgg_ssd_net` models can only classify following 20 objects:
-
-- aeroplane
-- bicycle
-- background
-- boat
-- bottle
-- bus
-- car
-- cat
-- chair
-- cow
-- diningtable
-- dog
-- horse
-- motorbike
-- person
-- pottedplant
-- sheep
-- sofa
-- train
-- tvmonitor
-
-`face_mobilenet_160_91` can only classify human's face
-
-
-## Settings
-
-Simply tap on the screen to toggle settings
-
-- Models: Select Pascal MobileNet 300 or Face MobileNet 160, App will exit, need to launch to restart.
-- Camera: Toggle Front/Back Camera. App will exit, need to launch to restart.
-- Accuracy Threshold: Adjust threshold to filter more/less objects based on probability
-- Time Refresh Rate: Adjust the time to refresh bounding box more/less frequently
+To quickly run the demo on an iPhone/iPad, scan the QR code below and click "Install PDCamera" on the linked page; the app will be downloaded in the background.
+After installation, go to Settings -> General -> Device Management -> Baidu USA llc -> Trust "Baidu USA llc".
+
+
+
-## Development or modify
+## Build from Source Code
Use the latest Xcode for development. This demo requires a camera for object detection, so you must use a physical device (iPhone or iPad) for development and testing. Simulators will not work because they cannot access the camera.
@@ -67,23 +62,7 @@ For developers, feel free to use this as a reference to start a new project. Thi
Swift cannot call the C API directly. To make the Swift client work, create an Objective-C bridging header and an Objective-C++ wrapper (.mm files) to access the Paddle APIs.
-
-## Integrate Paddle C Library to iOS
-
--Follow this guide [Build PaddlePaddle for iOS](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_cn.md) to generate paddle libs(include, lib, third_party).
--Create a folder paddle-ios and add to project root. Put the 3 paddle libs folder under paddle-ios.
-- Add the `include` directory to **Header Search Paths**
-
-
-- Add the `Accelerate.framework` or `veclib.framework` to your project, if your PaddlePaddle is built with `IOS_USE_VECLIB_FOR_BLAS=ON`
-- Add the libraries of paddle, `libpaddle_capi_layers.a` and `libpaddle_capi_engine.a`, and all the third party libraries to your project
-
-
-- Set `-force_load` for `libpaddle_capi_layers.a`
-
-
-
-## Download Models
+### Prepare Models
Our models are too large to upload to GitHub. Create a `models` folder and add it to the project root. Download [face_mobilenet_160_91.paddle](http://cloud.dlnel.org/filepub/?uuid=038c1dbf-08b3-42a9-b2dc-efccd63859fb) and [pascal_mobilenet_300_66.paddle](http://cloud.dlnel.org/filepub/?uuid=39c325d9-b468-4940-ba47-d50c8ec5fd5b) into the `models` folder.
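+
+If you prefer the command line, the following is a minimal sketch; the URLs are the same as the links above, and the `models` folder name matches the directory tree shown later.
+
+```bash
+# Sketch: download the two default models into the models/ folder.
+cd Mobile/Demo/iOS/AICamera
+mkdir -p models
+wget "http://cloud.dlnel.org/filepub/?uuid=038c1dbf-08b3-42a9-b2dc-efccd63859fb" -O models/face_mobilenet_160_91.paddle
+wget "http://cloud.dlnel.org/filepub/?uuid=39c325d9-b468-4940-ba47-d50c8ec5fd5b" -O models/pascal_mobilenet_300_66.paddle
+```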
@@ -92,11 +71,53 @@ Note: Only runs on iPhone6s or above (iPhone 6 or below will crash due to memory
If you want to try it out, download [vgg_ssd_net.paddle](http://cloud.dlnel.org/filepub/?uuid=1116a5f3-7762-44b5-82bb-9954159cb5d4), then go to the
Xcode target -> Build Phases -> Copy Bundle Resources and click '+' to add vgg_ssd_net.paddle.
+### Prepare PaddlePaddle Inference Library
+
+Follow this guide, [Build PaddlePaddle for iOS](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_en.md), to generate the Paddle libraries (include, lib, third_party).
+Create a folder named `paddle-ios` and add it to the project root. Put the three Paddle library folders under `paddle-ios`.
+
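+A minimal command-line sketch of assembling `paddle-ios/` is shown below; `$PADDLE_INSTALL_DIR` is a placeholder for the `CMAKE_INSTALL_PREFIX` you used when running `make install` (see the guide above for the actual build options).
+
+```bash
+# Sketch: copy the installed PaddlePaddle inference library into the demo project.
+# PADDLE_INSTALL_DIR is assumed to be the CMAKE_INSTALL_PREFIX used for `make install`.
+cd Mobile/Demo/iOS/AICamera
+mkdir -p paddle-ios
+cp -r "$PADDLE_INSTALL_DIR/include"     paddle-ios/
+cp -r "$PADDLE_INSTALL_DIR/lib"         paddle-ios/
+cp -r "$PADDLE_INSTALL_DIR/third_party" paddle-ios/
+```
+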
+### Directory Tree
+
+```
+$ git clone https://github.com/PaddlePaddle/Mobile.git
+$ cd Mobile/Demo/iOS/AICamera
+$ tree
+.
+├── AICamera # source code
+├── PDCamera.xcodeproj
+├── README.md
+├── README.cn.md
+├── assets
+├── models # models
+│ ├── face_mobilenet_160_91.paddle
+│ ├── pascal_mobilenet_300_66.paddle
+│ └── vgg_ssd_net.paddle
+└── paddle-ios # PaddlePaddle inference library
+ ├── include
+ ├── lib
+ │ ├── libpaddle_capi_engine.a
+ │ ├── libpaddle_capi_layers.a
+ │ └── libpaddle_capi_whole.a
+ └── third_party
+```
+
+## Integrate Paddle C Library to iOS Project
-## Accuracy
+- Add the `include` directory to **Header Search Paths**
+
+
+
+
+
+- Add `Accelerate.framework` or `veclib.framework` to your project if your PaddlePaddle library is built with `IOS_USE_VECLIB_FOR_BLAS=ON`
+- Add the Paddle libraries, `libpaddle_capi_layers.a` and `libpaddle_capi_engine.a`, and all the third-party libraries to your project
+
+
+
+
+
+- Set `-force_load` for `libpaddle_capi_layers.a`
-| Model | Dimensions | Accuracy |
-| ------------------------ |:----------:| --------:|
-| face_mobilenet_160_91 | 160x160 | 91% |
-| pascal_mobilenet_300_66 | 300x300 | 66% |
-| vgg_ssd_net | 300x300 | 71% |
+
+
+
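+As an optional sanity check before building, you can confirm that the static libraries contain your target device's architecture (for example arm64); the paths below follow the directory tree above.
+
+```bash
+# Sketch: list the architectures contained in the Paddle static libraries.
+xcrun lipo -info paddle-ios/lib/libpaddle_capi_layers.a
+xcrun lipo -info paddle-ios/lib/libpaddle_capi_engine.a
+# The output should include the architecture of your target device, e.g. arm64.
+```
+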
diff --git a/README.cn.md b/README.cn.md
new file mode 100644
index 0000000..ba0d6a1
--- /dev/null
+++ b/README.cn.md
@@ -0,0 +1,47 @@
+# 移动PaddlePaddle
+
+[](https://travis-ci.org/PaddlePaddle/Mobile)
+[](http://www.paddlepaddle.org/docs/develop/mobile/README.html)
+[](https://github.com/PaddlePaddle/Mobile/wiki)
+[](LICENSE)
+
+PaddlePaddle支持在移动设备上,使用训练好的模型进行离线推断。这里,我们主要介绍如何在移动设备上部署PaddlePaddle推断库,以及移动设备上可以使用到的一些优化方法。
+
+## 构建PaddlePaddle库
+PaddlePaddle可以通过原生编译、交叉编译的方式,构建多种移动平台上的推断库。
+
+- [Android平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_cn.md)
+- [iOS平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_cn.md)
+- [Raspberry Pi3平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_raspberry_cn.md)
+- NVIDIA DRIVE PX2平台,采用原生编译的方式,可直接依照[PaddlePaddle源码编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/build_from_source_cn.rst)进行编译
+
+## 使用示例
+
+- [命令行示例程序](./benchmark/tool/C/README.cn.md)
+- [iOS示例应用:PDCamera](./Demo/iOS/AICamera/README.cn.md)
+
+## 部署优化方法
+移动端对接入库的大小通常都有要求,在编译PaddlePaddle库时,用户可以通过设置一些编译选项来进行优化。
+
+- [如何构建最小的PaddlePaddle推断库](./deployment/library/build_for_minimum_size.md)
+
+训练得到的模型,可在不降低或者轻微降低模型推断精度的前提下,进行一些变换,优化移动设备上的内存使用和执行效率。
+
+- [合并网络中的BN层](./deployment/model/merge_batch_normalization/README.md)
+- [压缩模型大小的rounding方法](./deployment/model/rounding/README.md)
+- [如何合并模型](./deployment/model/merge_config_parameters/README.cn.md)
+- INT8量化方法
+
+## 模型压缩
+基于PaddlePaddle框架,可以使用模型压缩训练进一步裁剪模型的大小。
+
+- [Pruning稀疏化方法](./model_compression/pruning/README.md)
+
+## 性能数据
+我们列出一些移动设备上的性能测试数据,给用户参考和对比。
+
+- [Mobilenet模型性能数据](./benchmark/README.md)
+- ENet模型性能数据
+- [DepthwiseConvolution优化效果](https://github.com/hedaoyuan/Function/blob/master/src/conv/README.md)
+
+本教程由[PaddlePaddle](https://github.com/PaddlePaddle/Paddle)创作,采用[Apache-2.0 license](LICENSE)许可协议进行许可。
diff --git a/README.md b/README.md
index 5ebe531..f6a7b42 100644
--- a/README.md
+++ b/README.md
@@ -1,33 +1,40 @@
# Mobile
[](https://travis-ci.org/PaddlePaddle/Mobile)
+[](http://www.paddlepaddle.org/docs/develop/mobile/README.html)
+[](https://github.com/PaddlePaddle/Mobile/wiki)
[](LICENSE)
This repository mainly describes how to deploy PaddlePaddle to mobile devices, as well as some deployment optimization methods and benchmarks.
-## How to build PaddlePaddle for mobile
-- Build PaddlePaddle for Android [[Chinese](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_cn.md)] [[English](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_en.md)]
-- Build PaddlePaddle for IOS [[Chinese](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_cn.md)] [[English](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_en.md)]
-- Build PaddlePaddle for Raspberry Pi3 [[Chinese](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_raspberry_cn.md)] [[English](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_raspberry_en.md)]
-- Build PaddlePaddle for PX2
-- [How to build PaddlePaddle mobile inference library with minimum size.](./deployment/library/build_for_minimum_size.md)
+## Build PaddlePaddle
+- [Build PaddlePaddle for Android](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_en.md)
+- [Build PaddlePaddle for iOS](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_ios_en.md)
+- [Build PaddlePaddle for Raspberry Pi3](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_raspberry_en.md)
+- Build PaddlePaddle for NVIDIA DRIVE PX2
## Demo
- [A command-line inference demo.](./benchmark/tool/C/README.md)
-- [iOS demo of AICamera](./Demo/iOS/AICamera/README.md)
+- [iOS demo of PDCamera](./Demo/iOS/AICamera/README.md)
## Deployment optimization methods
-- [Merge batch normalization before deploying the model to the mobile.](./deployment/model/merge_batch_normalization/README.md)
-- [Compress the model before deploying the model to the mobile.](./deployment/model/rounding/README.md)
-- [Merge model config and parameter files into one file.](./deployment/model/merge_config_parameters/README.md)
-- How to deploy int8 model in mobile inference with PaddlePaddle.
+Optimization for the library:
+
+- [How to build PaddlePaddle mobile inference library with minimum size.](./deployment/library/build_for_minimum_size.md)
+
+Optimization for models:
+
+- [Merge batch normalization layers](./deployment/model/merge_batch_normalization/README.md)
+- [Compress the model based on rounding](./deployment/model/rounding/README.md)
+- [Merge model's config and parameters](./deployment/model/merge_config_parameters/README.md)
+- How to deploy int8 model in mobile inference with PaddlePaddle
## Model compression
-- [How to use pruning to train smaller model](./model_compression/pruning/)
+- [How to use pruning to train smaller model](./model_compression/pruning/README.md)
## PaddlePaddle mobile benchmark
- [Benchmark of Mobilenet](./benchmark/README.md)
- Benchmark of ENet
-- [Benchmark of DepthwiseConvolution in PaddlePaddle](https://github.com/hedaoyuan/Function/blob/master/src/conv/README.md)
+- [Benchmark of DepthwiseConvolution](https://github.com/hedaoyuan/Function/blob/master/src/conv/README.md)
This tutorial is contributed by [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) and licensed under the [Apache-2.0 license](LICENSE).
diff --git a/benchmark/tool/C/README_cn.md b/benchmark/tool/C/README.cn.md
similarity index 81%
rename from benchmark/tool/C/README_cn.md
rename to benchmark/tool/C/README.cn.md
index bd4264b..37cf3f9 100644
--- a/benchmark/tool/C/README_cn.md
+++ b/benchmark/tool/C/README.cn.md
@@ -9,13 +9,15 @@
- **Step 1,编译Android平台上适用的PaddlePaddle库。**
- 用户需要按照[Android平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_cn.md),编译Android平台上适用的PaddlePaddle库。在执行`make install`之后,PaddlePaddle库将会安装在`CMAKE_INSTALL_PREFIX`所指定的目录下。该目录包含如下几个子目录:
+ 用户可以按照[Android平台编译指南](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_cn.md),拉取PaddlePaddle最新代码,编译Android平台上适用的PaddlePaddle库。在执行`make install`之后,PaddlePaddle库将会安装在`CMAKE_INSTALL_PREFIX`所指定的目录下。该目录包含如下几个子目录:
- `include`,其中包含使用PaddlePaddle所需要引入的头文件,通常代码中加入`#include <paddle/capi.h>`即可。
- `lib`,其中包含了PaddlePaddle对应架构的库文件。其中包括:
- 动态库,`libpaddle_capi_shared.so`。
- 静态库,`libpaddle_capi_layers.a`和`libpaddle_capi_engine.a`。
- `third_party`,PaddlePaddle所依赖的第三方库。
+ 你也可以从[wiki](https://github.com/PaddlePaddle/Mobile/wiki)下载编译好的版本。
+
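+ 下面给出一个示意性的编译流程,其中的CMake选项名称参考上述编译指南,仅作示意,具体请以编译指南为准:
+
+ ```bash
+ # 示意:交叉编译Android平台上适用的PaddlePaddle库(选项请以编译指南为准)
+ git clone https://github.com/PaddlePaddle/Paddle.git
+ cd Paddle && mkdir build_android && cd build_android
+ cmake .. -DCMAKE_SYSTEM_NAME=Android \
+          -DANDROID_STANDALONE_TOOLCHAIN=your/standalone/toolchain \
+          -DANDROID_ABI=armeabi-v7a \
+          -DWITH_C_API=ON \
+          -DWITH_SWIG_PY=OFF \
+          -DCMAKE_INSTALL_PREFIX=$PWD/install
+ make -j4 && make install
+ # 此后 install/ 目录下即包含 include、lib、third_party 三个子目录
+ ```
+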
- **Step 2,编译示例程序。**
示例程序项目使用CMake管理,可按照以下步骤,编译Android设备上运行的可执行程序。
@@ -58,8 +60,8 @@
- **Step 3,准备模型。**
- Android设备上推荐使用`merged model`。以Mobilenet为例,要生成`merged model`,首先你需要准备以下文件:
- - 模型配置文件[mobilenet.py](https://github.com/PaddlePaddle/Mobile/blob/develop/models/mobilenet.py),它是使用PaddlePaddle的v2 api编写的`Mobilenet`模型的网络结构。用户可在[models](https://github.com/PaddlePaddle/Mobile/tree/develop/models)获取更多PaddlePaddle常用的网络配置,该repo下面同时提供了使用PaddlePaddle训练模型的方法。
+ Android设备上推荐使用**合并的模型(merged model)**。以Mobilenet为例,要生成**合并的模型**文件,首先你需要准备以下文件:
+ - 模型配置文件[mobilenet.py](https://github.com/PaddlePaddle/Mobile/tree/develop/models/standard_network/mobilenet.py),它是使用PaddlePaddle的v2 api编写的`Mobilenet`模型的网络结构。当前repo的[models](https://github.com/PaddlePaddle/Mobile/tree/develop/models)目录下维护了一些移动端常用的PaddlePaddle网络配置。同时,用户可在[models](https://github.com/PaddlePaddle/models)repo下面找到更多PaddlePaddle常用的网络配置,该repo下面同时提供了使用PaddlePaddle训练模型的方法。
- 模型参数文件。使用PaddlePaddle v2 api训练得到的参数将会存储成`.tar.gz`文件。比如,我们提供了一个使用[flowers102](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/)数据集训练`Mobilenet`分类模型的参数文件[mobilenet\_flowers102.tar.gz](http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e)。你也可以使用以下命令下载该参数文件:
```bash
@@ -71,7 +73,7 @@
在准备好模型配置文件(.py)和参数文件(.tar.gz),并且所在机器已经成功安装了PaddlePaddle的Python包之后,我们可以通过执行以下脚本生成需要的`merged model`。
```bash
- $ cd Mobile/tools/merge_config_parameters
+ $ cd Mobile/deployment/model/merge_config_parameters
$ python merge_model.py
```
@@ -81,7 +83,7 @@
wget -c http://cloud.dlnel.org/filepub/?uuid=d3b95cf9-4dc3-476f-bdc7-98ac410c4f71 -O mobilenet_flowers102.paddle
```
- 更多有关于生成`merged model`的详情,请参考[merge\_config\_parameters](https://github.com/PaddlePaddle/Mobile/tree/develop/tools/merge_config_parameters)。
+ 更多有关于生成`merged model`的详情,请参考[merge\_config\_parameters](https://github.com/PaddlePaddle/Mobile/tree/develop/deployment/model/merge_config_parameters/README.cn.md)。
- **Step 4,在Android设备上测试。**
diff --git a/deployment/model/merge_config_parameters/README.cn.md b/deployment/model/merge_config_parameters/README.cn.md
new file mode 100644
index 0000000..9c86236
--- /dev/null
+++ b/deployment/model/merge_config_parameters/README.cn.md
@@ -0,0 +1,56 @@
+# 如何合并模型
+
+由PaddlePaddle训练得到的模型,通常包含两个部分:模型配置文件和参数文件。PaddlePaddle提供工具,将配置文件和参数文件合并成一个文件,即这里所说的**合并的模型**文件,方便在移动端上的离线推断应用中使用。
+
+针对PaddlePaddle的v1和v2 api,我们分别提供了两套合并模型的工具,下面将分别介绍。
+
+## merge\_v2\_model
+
+这个工具适用于使用v2 api训练的模型。`merge_v2_model`是PaddlePaddle提供的一个python函数,使用该工具前,你需要先安装PaddlePaddle的python包。我们以移动端上常用的`Mobilenet`为例,来介绍这个工具的使用。
+
+- **Step 1,准备工作。**
+ - 准备**模型配置文件:** 用于推断任务的模型配置文件,必须只包含`inference`网络,即不能包含训练网络中需要的`label`、`loss`以及`evaluator`层。我们使用的基于`Mobilenet`的图像分类任务配置文件见[mobilenet.py](../../../models/standard_network/mobilenet.py)。
+
+ - 准备**参数文件:** 使用PaddlePaddle v2 api训练得到的参数将会存储成`.tar.gz`文件,可直接用于合并模型。我们提供一个使用[flowers102](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/)数据集训练`Mobilenet`分类模型的参数文件[mobilenet_flowers102.tar.gz](http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e)。用户可点击参数文件名字通过浏览器下载,或者使用以下命令下载:
+
+ ```bash
+ wget -c http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e -O mobilenet_flowers102.tar.gz
+ ```
+
+- **Step 2,合并模型。**
+
+ 运行python脚本[merge_model.py](./merge_model.py),即可得到合并的模型`mobilenet_flowers102.paddle`。
+
+ ```bash
+ $ cat merge_model.py
+ import paddle.v2 as paddle
+ from paddle.utils.merge_model import merge_v2_model
+
+ # import network configuration
+ from mobilenet import mobile_net
+
+ if __name__ == "__main__":
+ image_size = 224
+ num_classes = 102
+ net = mobile_net(3 * image_size * image_size, num_classes, 1.0)
+ param_file = './mobilenet_flowers102.tar.gz'
+ output_file = './mobilenet_flowers102.paddle'
+ merge_v2_model(net, param_file, output_file)
+ ```
+
+## paddle\_merge\_model
+
+这个工具适用于使用v1 api训练的模型。`paddle_merge_model`是PaddlePaddle提供的一个可执行文件。假设PaddlePaddle的安装目录位于`PADDLE_ROOT`,该工具的使用方法如下:
+
+```bash
+$PADDLE_ROOT/opt/paddle/bin/paddle_merge_model \
+ --model_dir="pass-00000" \
+ --config_file="config.py" \
+ --model_file="output.paddle"
+```
+
+该工具需要三个参数:
+
+- `--model_dir`,参数文件所在目录。
+- `--config_file`,`inference`网络配置文件的路径。
+- `--model_file`,生成的**合并的模型**文件的路径。
diff --git a/deployment/model/merge_config_parameters/README.md b/deployment/model/merge_config_parameters/README.md
index 4edd515..fa43a7a 100644
--- a/deployment/model/merge_config_parameters/README.md
+++ b/deployment/model/merge_config_parameters/README.md
@@ -10,8 +10,8 @@ This applies to all PaddlePaddle v2 models, we show a demo of mobilenet.
### Step 1: Preparations
-**Model Config :** [Mobilenet model config](../../models/mobilenet.py).
-**Model Parameters:** [Mobilenet model param pretrained on flower102 download](https://pan.baidu.com/s/1geHkrw3)
+**Model Config :** [Mobilenet model config](../../../models/standard_network/mobilenet.py).
+**Model Parameters:** [Mobilenet model param pretrained on flower102 download](http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e).
### Step 2: Merge