TVM Cartpole

This repository is a simple TVM tutorial: running a Cartpole DQN on a Pynq-z1 board with the TVM compiler.
For more tutorials, please refer to the official TVM Tutorial site. The VTA tutorial is also important, but I won't cover VTA here.
I used the Pynq-z1 board as the target hardware.
The Cartpole DQN model is based on Building a DQN in PyTorch: Balancing Cart Pole with Deep RL.
If you want more in-depth information about TVM, I highly recommend reading [ML2's TVM blog post](link to be added), where I also wrote up some of the troubleshooting I ran into during this project.



0. Disclaimer

The final goal of this project is to make full use of the related TVM stacks, such as AutoTVM and VTA, so that the Pynq-z1 board's FPGA can be fully utilized. However, due to a lack of time, I couldn't finish that code; what remains of the AutoTVM+VTA work is the vta_cartpole.py file.
Instead, you can refer to the cpu_cartpole.py file, which only uses the CPU of the Pynq-z1 board.
I made a separate branch, vta_cartpole, and uploaded the unfinished file there so that you can freely comment on it or advise me.



1. Installation

Prerequisites

  • Environment
    • pynq-z1 board
    • MacBook Pro 2019
    • Ubuntu 18.04 with Virtualbox
    • Python 3.6
  • For pynq-z1 board setup, follow this document.
  • Install the Python packages in requirements.txt (I recommend using a virtual environment so you don't get version conflicts).
    pip install -r requirements.txt
  • TVM Installation
    • Since TVM can't be installed with pip, you have to install it from source.
    • To integrate the compiled module, you do not need to build the entire TVM stack on the target device. You only need to build the TVM compiler stack on your desktop and use it to cross-compile modules that are deployed on the target device; the device itself only needs a lightweight runtime API that can be integrated into various platforms.
    • TVM(Host side)
      • Please follow here for TVM installation.
      • You have to set up your Python path properly so that you can import tvm.
      • Check whether TVM has successfully installed through importing it:
        import tvm
    • Runtime(Device side)
      • With cross compilation and RPC, you can compile a program on your local machine and then run it on the remote device. This is useful when the remote device's resources are limited. The runtime is often less than 1 MB, which makes it suitable for devices with memory constraints. (A minimal connection check from the host side is sketched right after this list.)
      • Please follow here for runtime installation.
      • After installation, start the RPC server on the device side with:
        python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
      • If you see this message, your device successfully started the RPC server:
        INFO:root:RPCServer: bind to 0.0.0.0:9090
      • If you are running AutoTVM, you have to use the tracker-server setup instead. Refer to the vta_cartpole branch.
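Once the RPC server is running on the board, you can sanity-check the connection from the host side. This is a minimal sketch; the board's IP address is an assumption, so replace it with the one on your network:

    # Minimal connection check from the host machine (board IP is an assumption).
    from tvm import rpc

    PYNQ_HOST = "192.168.2.99"   # replace with your board's IP address
    PYNQ_PORT = 9090             # must match the port passed to rpc_server

    remote = rpc.connect(PYNQ_HOST, PYNQ_PORT)
    dev = remote.cpu(0)          # handle to the board's ARM CPU
    print("Connected, remote CPU device:", dev)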


2. File Description

 .
 ┣ 📂cartpole_model
 ┃ ┣ 📜__init__.py
 ┃ ┣ 📜cartpole-dqn.pth
 ┃ ┣ 📜dqn_agent.py
 ┃ ┣ 📜dqn_agent_demo.py
 ┃ ┣ 📜random_cartpole_agent.py
 ┃ ┗ 📜train_and_save.py
 ┣ 📜readme.md
 ┣ 📜requirements.txt
 ┣ 📜.gitignore
 ┗ 📜cpu_cartpole.py
  • cartpole_model directory
    • Includes the Cartpole DQN model
    • The code is derived from Building a DQN in PyTorch: Balancing Cart Pole with Deep RL
    • cartpole-dqn.pth:
      • A DQN model I already trained. You can train your own model with train_and_save.py, but if you don't want to, it's fine to use this file.
    • dqn_agent.py:
      • Defines DQN model
    • dqn_agent_demo.py:
      • Checks whether my DQN model works well
    • random_cartpole_agent.py:
      • An agent that takes random actions; used as a baseline to check whether the DQN model works well.
    • train_and_save.py:
      • Trains the DQN model and saves it to cartpole-dqn.pth.
  • cpu_cartpole.py
    • Runs Cartpole on the Pynq board using only the Pynq-side ARM CPU (a rough sketch of this flow is shown after this list).
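For reference, the overall flow of cpu_cartpole.py looks roughly like the sketch below: trace the PyTorch DQN, convert it to Relay, cross-compile for the board's ARM CPU, then deploy and run it over RPC. The class name, input shape, and board IP here are assumptions, not the exact contents of the script:

    # Rough sketch of the cpu_cartpole.py flow (class name, shapes, and IP are assumptions).
    import torch
    import tvm
    from tvm import relay, rpc
    from tvm.contrib import utils, graph_executor

    from cartpole_model.dqn_agent import DQN   # assumed class name

    # Load the trained weights and trace the model with a dummy CartPole state
    # (4 observation values in, 2 Q-values out).
    model = DQN()
    model.load_state_dict(torch.load("cartpole_model/cartpole-dqn.pth"))
    model.eval()
    scripted = torch.jit.trace(model, torch.randn(1, 4))

    # Convert to Relay and cross-compile for the Pynq-z1's ARM CPU.
    mod, params = relay.frontend.from_pytorch(scripted, [("state", (1, 4))])
    target = "llvm -mtriple=arm-linux-gnueabihf"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    # Ship the compiled module to the board over RPC and run one inference.
    tmp = utils.tempdir()
    lib_path = tmp.relpath("dqn.tar")
    lib.export_library(lib_path)

    remote = rpc.connect("192.168.2.99", 9090)   # board IP is an assumption
    remote.upload(lib_path)
    rlib = remote.load_module("dqn.tar")

    dev = remote.cpu(0)
    runtime = graph_executor.GraphModule(rlib["default"](dev))
    runtime.set_input("state", tvm.nd.array(torch.randn(1, 4).numpy(), dev))
    runtime.run()
    print(runtime.get_output(0))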


3. Results

  • First of all, you have to start the RPC server on the Pynq device: python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
  • Run cartpole with:
    python cpu_cartpole.py
  • Expected result:
    Cannot find config for target=llvm -keys=cpu -mtriple=arm-linux-gnueabihf, workload=('dense_nopack.x86', ('TENSOR', (1, 64), 'float32'), ('TENSOR', (2, 64), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
    Cannot find config for target=llvm -keys=cpu -mtriple=arm-linux-gnueabihf, workload=('dense_nopack.x86', ('TENSOR', (1, 4), 'float32'), ('TENSOR', (64, 4), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
    100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:14<00:00,  6.96it/s]
    Test done
    Elapsed time: 14.382231712341309 seconds
    average reward per episode :9.35
  • The warning messages (Cannot find config for target ... may bring great performance regression) occurred because we don't have a tuned config for this model. "Fallback" means the tunable parameters aren't tuned, so the defaults are used and performance won't be optimal. For more information, refer to this discussion. A rough tuning sketch follows.
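If you want to get rid of the fallback warnings, the usual path is to tune the dense workloads with AutoTVM against the RPC tracker and apply the resulting log when building. A minimal sketch, assuming mod, params, and target from the build step sketched in section 2, a device registered with the tracker under the key pynq, and a hypothetical log file name:

    # Hedged AutoTVM sketch: tune the extracted tasks over the RPC tracker and
    # apply the best configs at build time so "Cannot find config" disappears.
    # `mod`, `params`, and `target` are assumed to come from the earlier relay.build step.
    import tvm
    from tvm import autotvm, relay

    tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        # Tracker address, port, and device key are assumptions for this sketch.
        runner=autotvm.RPCRunner("pynq", host="127.0.0.1", port=9190, number=5),
    )
    for task in tasks:
        tuner = autotvm.tuner.XGBTuner(task)
        tuner.tune(
            n_trial=min(200, len(task.config_space)),
            measure_option=measure_option,
            callbacks=[autotvm.callback.log_to_file("cartpole_tuning.log")],
        )

    # Rebuild with the tuned configs applied.
    with autotvm.apply_history_best("cartpole_tuning.log"):
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target=target, params=params)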
