Tutorial 1: Corerain Rainman V3 User Guide



    Using PC/Host

    Docker Installation in PC/Host

    The development environment is based on the docker image provided by Corerain on Linux (Ubuntu). Corerain provides two training containers: a GPU-based version and a CPU-based version. To support CNN training in a GPU environment, we assume that the host has a local NVIDIA GPU with the correct drivers installed. To run the Corerain docker image with GPU support, you must install the docker-ce and nvidia-docker packages on the host. If you are training in a CPU environment, you only need the docker-ce package.

    The GPU-based version:

    • Install the NVIDIA graphics driver.
    • Install docker-ce and nvidia-docker2.
    • Get Corerain's image file for the NVIDIA GPU development environment.

    The CPU-based version:

    • Install docker-ce.
    • Get the CPU-based docker_cpu_1.2 image file of Corerain.

    Docker-ce installation link: Docker-ce official installation guide

    Nvidia-docker2 installation link: Nvidia-docker official installation guide
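Choosing between the two images can also be scripted. Below is a minimal sketch (the `pick_corerain_image` helper name is an assumption; the image names are the ones used later in this guide) that maps an explicit gpu/cpu choice to the image and the extra docker flag it needs:

```shell
#!/bin/sh
# Hypothetical helper: map the chosen environment (gpu or cpu) to the
# Corerain image and the extra docker flag it needs.
pick_corerain_image() {
    case "$1" in
        gpu) echo "--runtime=nvidia brucvv/plumber" ;;
        cpu) echo "brucvv/plumber:cpu_1.2" ;;
        *)   echo "usage: pick_corerain_image gpu|cpu" >&2; return 1 ;;
    esac
}

pick_corerain_image gpu   # prints: --runtime=nvidia brucvv/plumber
pick_corerain_image cpu   # prints: brucvv/plumber:cpu_1.2
```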

    1. Install Docker-ce

    First, enter the commands below in a Linux (Ubuntu) terminal to install docker-ce:

    sudo apt-get remove docker docker-engine docker.io
    sudo apt-get update
    sudo apt-get install apt-transport-https \
                         ca-certificates     \
                         curl
    curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    sudo apt-get update
    sudo apt-get install docker-ce

    2. Install container file

    Install the GPU-based version (recommended if you have a GPU-based development environment):

    sudo docker run --runtime=nvidia --name plumber -dti brucvv/plumber

    Install the CPU-based version (if you do not have a GPU-based environment):

    sudo docker run --name plumber -dti brucvv/plumber:cpu_1.2

    After the container is installed, enter it with:

    sudo docker exec -ti plumber bash

    3. Enter the container for algorithm training and compiler optimization

    After logging in to the container, the default directory is /app. Type the bash commands below to start the training and evaluation process. Please note that the second step (training) may take more than an hour, depending on GPU performance; training on a CPU may take about 3-4 hours.

    a. CNN model training

    Please refer to Tutorial 2: DNN Network Training Instruction for a step-by-step explanation. Use the following commands under the /app/detection directory for algorithm training.

    cd detection/
    script/1_run_convert.sh  ----------------------------------------------------Data conversion
    script/2_run_training.sh ----------------------------------------------------Model training
    script/3_run_test.sh --------------------------------------------------------Model testing
    script/4_gen_PostParam.sh ---------------------------------------------------Generate post-processing parameters
    script/5_run_truncate_model.sh ----------------------------------------------Export the inference model
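The five scripts above must run in order, and a failed step makes the later ones meaningless. A minimal sketch of a wrapper that stops at the first failing step (the `run_steps` function is hypothetical, not part of the Corerain scripts):

```shell
#!/bin/sh
# Hypothetical wrapper: run pipeline steps in order and abort at the
# first step that exits non-zero.
run_steps() {
    for step in "$@"; do
        echo "running $step"
        sh "$step" || { echo "step failed: $step" >&2; return 1; }
    done
    echo "all steps completed"
}

# Intended usage inside the container (not executed here):
# run_steps script/1_run_convert.sh script/2_run_training.sh \
#           script/3_run_test.sh script/4_gen_PostParam.sh \
#           script/5_run_truncate_model.sh
```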

    b. Optimization using Plumber

    Please refer to Tutorial 3: User Guide of Plumber for instruction.
    Enter the following commands under the /app directory to freeze the TensorFlow model with Plumber and generate the optimized hardware model.

    ./1_plumber_freeze.sh -------------------------------------------------------Freeze the model
    Please enter the indices of output nodes, separated by ','. If you want to select all, please enter 'all': all
    # Please enter `all` to select all nodes that the Plumber compiler can support. 
    ./2_plumber_genSG.sh --------------------------------------------------------Generate Streaming Graph
    ./3_plumber_SG_opt.sh -------------------------------------------------------Execute SG Optimization
    ./4_plumber_HDL_opt.sh ------------------------------------------------------Execute hardware optimization
    ./5_plumber_export_data.sh --------------------------------------------------Export data
    ./6_cp_board_files.sh -------------------------------------------------------Consolidate files required for RainmanV3 execution


    1. A simplified compilation flow is provided in the docker image: 1_plumber_freeze.sh --> 0_run_complete_flow.sh --> 5_plumber_export_data.sh --> 6_cp_board_files.sh

    2. The compilation result is written to the /app/board_files/ folder by default, which includes: network parameters (float_little), the FPGA configuration file (rainman9.6.rbf), the SG description file (*.pbtxt), and post-processing parameters (post_params).
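Before copying board_files to the board, it can save a round trip to verify the folder is complete. A sketch (the `check_board_files` function is hypothetical; the artifact names are those listed above):

```shell
#!/bin/sh
# Hypothetical sanity check: verify that the compiler output folder
# contains the artifacts listed above before it is copied to the board.
check_board_files() {
    dir=$1
    for f in float_little rainman9.6.rbf post_params; do
        [ -e "$dir/$f" ] || { echo "missing: $f" >&2; return 1; }
    done
    # at least one SG description file must be present
    ls "$dir"/*.pbtxt >/dev/null 2>&1 || { echo "missing: *.pbtxt" >&2; return 1; }
    echo "board_files looks complete"
}

# Intended usage inside the container:
# check_board_files /app/board_files
```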

    Now the training and compilation work on the PC/host is complete; the next step is to download the configuration files to the Rainman V3 board.

    Rainman V3 board

    1. Hardware configuration

    1. Place the Rainman v3 on a flat, non-conductive surface.
    2. Connect the Ethernet port on the Rainman Accelerator Board directly to the Ethernet port on the host PC.
    3. Power the accelerator board through the USB Type-C interface. We recommend using a 5V 2A mobile phone charger to power the Rainman V3; the USB port of a PC or laptop may not provide enough current.
    4. If the accelerator board is powered correctly, the green LED on the board (near the SD card) lights up. On board versions with a fan, the fan turns on after power-up.

    2. Network settings and board login

    On the PC, you must manually configure the IPv4 address of the network interface connected to the accelerator board; otherwise you cannot connect to the Rainman V3 board. The configuration parameters are as follows:

    Setting                    Value
    Host/PC IPv4
    Rainman default gateway

    Linux setting:

    Mac setting:

    The Rainman V3 board has a fixed default IP address. The user can log in to the board from the host over SSH using this IP address.

    • Log in to the board on Linux:
    User name: root

    SSH command is shown below:

    ssh root@

    Enter the password:

    • Login the board on Windows:

    If you need to log in to the Rainman V3 board from a Windows system, we recommend downloading MobaXterm, which supports remote script editing, image viewing, etc. Download link: https://mobaxterm.mobatek.net/


    ssh root@

    Note: Users can view the contents of the Rainman V3 board in the left column of MobaXterm, which is convenient for document editing and image viewing.

    After logging in to the accelerator card through SSH, execute the following command to check the license. A valid license is required to use the software and hardware resources of the Rainman V3 platform, and this command must be run each time the board is rebooted.


    If the license check succeeds, the checker program displays a "license valid" message, as follows:

    searching license from default path:  /var/corerain/licensehw.dat
    license valid.

    3. Board execution

    A. Import the required data from the host/PC to the Rainman V3 board (Linux host execution)

    Copy the board_files folder generated in docker (default path: /app) to the board's SD card.
    In Linux, use the command below:

    scp -r board_files root@

    Note: Windows users can use an SSH client such as MobaXterm.

    B. SSD running example: Rainman V3 board execution

     ./build/ssd_5b_runner \
     --pbtxt *SG description file path* \
     --coeff_path *network parameter path* \
     --param_path *post-processing parameter path* \
     --input_path *input image path (image before detection)* \
     --output_path *output image path (image after detection)* \
     -cls *number of categories (same as in training)*
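All of the runner's path arguments can be derived from the directory that board_files was copied to. A sketch of a helper that assembles and prints the full command for inspection (the `build_runner_cmd` name is hypothetical; the file names inside board_files follow the example configuration shown below):

```shell
#!/bin/sh
# Hypothetical helper: build the ssd_5b_runner command line from a base
# directory (the copied board_files), an input image, an output path,
# and the category count, then print it for inspection.
build_runner_cmd() {
    base=$1; input=$2; output=$3; cls=$4
    echo "./build/ssd_5b_runner" \
         "--pbtxt $base/model_hdl_sg.pbtxt" \
         "--coeff_path $base/float_little/" \
         "--param_path $base/post_params/" \
         "--input_path $input" \
         "--output_path $output" \
         "-cls $cls"
}

build_runner_cmd /root/board_files /root/imagetxt/1530697831_1.jpg ./out.jpg 2
```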

    Please refer to run_5b_cls3.sh under the test/tools folder:

    cd test/tools
    ./run_5b_cls3.sh

    If a path error is reported, check whether the configuration in the run_5b_cls3.sh file is correct. The correct configuration is shown below (it detects a single picture named 1530697831_1, saved in the imagetxt folder):

    ./build/ssd_5b_runner --pbtxt /root/board_files/model_hdl_sg.pbtxt \
                          --coeff_path /root/board_files/float_little/ \
                          --param_path /root/board_files/post_params/ \
                          --input_path /root/imagetxt/1530697831_1.jpg \
                          --output_path ./out.jpg -cls 2

    In this example, an image is fed into the CNN for inference; the detected bounding boxes are drawn on the output image, with the index of the corresponding category marked in the upper-left corner.

    Note: the number of categories = 1 + the total number of object categories (the extra category is the background). For example, if the training samples contain both pedestrian and vehicle objects, the number of categories = 3.
    The marked detection result is saved as:

    C. Browse the result
    1. Use the scp command to copy the picture from the Rainman V3 board to the host for browsing (assume the default IP address is
    scp test/tools/out.jpg *username*@
    2. Use Python to start an HTTP server and browse the files directly from the Rainman V3 board in a browser, without copying the image to the local host:
    python -m SimpleHTTPServer

    You can browse the files on the board by entering the IP address and port of the Rainman V3 board in the browser (the Python HTTP server listens on port 8000 by default).


    3. Browse with Windows MobaXterm:
      Please refer to the section Login the board on Windows to browse the files on the Rainman V3 board with MobaXterm.

    In Linux, if the scp command above cannot copy the picture from the Rainman V3 board, check that the host IP address is correct and confirm that the SSH server is installed and started.

    SSH server installation guide:

    sudo apt-get install openssh-server
    sudo service ssh start

    This completes single-image detection with the SSD neural network on the Rainman V3 board.

    Variable parameter processing

    The main function of the SSD demo is in /root/test/tools/single_img/ssd_5b_runner.cc

    1. Lines 34-39 configure the post-processing parameters.
    NUM_EXT_LAYER ---------------------------------------- SSD branch number
    prior_scaling ---------------------------------------- Default parameter, generally unchanged
    num_anchors ------------------------------------------ The number of anchors for each branch
    feat_sizes ------------------------------------------- The {height}x{width}x{anchor number} of each branch's feature map
    2. Line 80 defines the input data size of the network; the default is 256x256x3.
    3. Lines 174-184 call OpenCV's drawing functions and store the result; these steps are optional.
    4. Line 171 holds the network output, of type vector<BBox>; for the specific data format, refer to /usr/local/include/raintime_thirdparty/third_party/libssd/bbox.hh
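As a worked example of what feat_sizes implies: the total number of prior boxes the post-processing must handle is the sum over branches of {height} x {width} x {anchor number}. The branch sizes below are purely illustrative, not the actual values in ssd_5b_runner.cc:

```shell
#!/bin/sh
# Illustrative only: total prior-box count summed over SSD branches.
# Each entry is "height width anchors"; these values are made up.
total=0
for branch in "32 32 4" "16 16 6" "8 8 6"; do
    set -- $branch              # split into height, width, anchors
    total=$((total + $1 * $2 * $3))
done
echo "$total"                   # 32*32*4 + 16*16*6 + 8*8*6 = 6016
```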