
Fluid benchmark & book validation #6208

@dzhwinter

Description


In the 0.11.0 release, we will publish the book chapters written with Fluid. There are a few tasks that need to be done first.

Task list 1: compare results with the Paddle V2 books

We need to validate that the Fluid book chapters converge to approximately the same results as the corresponding V2 book chapters; a rough sketch of one such comparison follows the task list below.

Note that we have three different implementations of understand_sentiment; only the LSTM one is tested in this chapter.

  • book.06 understand_sentiment lstm CPU loss validation @ranqiu92

  • book.06 understand_sentiment lstm GPU loss validation @ranqiu92

  • book.07 label semantic roles CPU loss validation @chengduoZH
    We do not have a GPU implementation of label semantic roles.

  • book.08 machine translation CPU loss validation @jacquesqiao @ChunweiYan

  • book.08 machine translation GPU loss validation @jacquesqiao @ChunweiYan
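
As a rough illustration of the comparison, the sketch below assumes the console output of a Fluid run and of the corresponding V2 baseline run have been saved to log files, and that each script prints a line containing the word "loss" every pass. The log file names and the 5% tolerance are placeholders, not an agreed threshold.

```bash
# Extract the last reported loss from each (hypothetical) log file.
fluid_loss=$(grep -ioE 'loss[^0-9]*[0-9]+\.[0-9]+' fluid_cpu.log | tail -n 1 | grep -oE '[0-9]+\.[0-9]+')
v2_loss=$(grep -ioE 'loss[^0-9]*[0-9]+\.[0-9]+' v2_cpu.log | tail -n 1 | grep -oE '[0-9]+\.[0-9]+')
echo "fluid final loss: $fluid_loss, v2 final loss: $v2_loss"

# Flag the chapter if the final losses differ by more than ~5% (placeholder tolerance).
python -c "import sys; f, v = $fluid_loss, $v2_loss; sys.exit(abs(f - v) > 0.05 * abs(v))" \
    || echo "convergence gap -- please file an issue"
```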

Task list 2: how to do these tasks

We have benchmark scripts and a Docker image, so these tasks should go quickly; please report a bug for any issue you find (operator implementation, convergence result).
Because we are still fine-tuning performance, if you find an order-of-magnitude gap in performance, please file an issue without hesitation.

The scripts live under the following directory; please find the script matching your chapter:
https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/fluid/tests/book
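
For reference, a minimal way to locate and run one chapter's script from a Paddle checkout is sketched below; the exact script file name is a guess and should be checked against the directory above.

```bash
# From the root of a Paddle checkout on the develop branch.
cd python/paddle/v2/fluid/tests/book
ls test_*.py    # find the script that matches your chapter

# CPU run of the sentiment chapter (the script name is a guess -- verify it exists);
# save the output so the per-pass loss can be compared against the V2 baseline.
python test_understand_sentiment_lstm.py 2>&1 | tee fluid_cpu.log
```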

Old (V2) book Docker image:
paddlepaddle/book:latest-gpu

New (Fluid) book Docker image:
dzhwinter/benchmark:latest
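
For reference, hypothetical commands for using the two images are sketched below. The mount point and the script path inside the container are assumptions (adjust them to your setup), and nvidia-docker is assumed for GPU runs; plain docker works for CPU-only runs.

```bash
# Pull both images.
docker pull paddlepaddle/book:latest-gpu   # old V2 book, for the baseline numbers
docker pull dzhwinter/benchmark:latest     # new Fluid benchmark environment

# Run a Fluid chapter inside the benchmark image, mounting a local Paddle checkout
# (the mount point and script name are assumptions).
nvidia-docker run --rm -v $PWD/Paddle:/Paddle dzhwinter/benchmark:latest \
    python /Paddle/python/paddle/v2/fluid/tests/book/test_understand_sentiment_lstm.py

# Open a shell in the old book image to reproduce the V2 baseline for the same chapter.
nvidia-docker run --rm -it paddlepaddle/book:latest-gpu /bin/bash
```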
