Reproducing Monodepth2 on Windows

I. References

monodepth2 source code

II. Environment

1. Runtime environment

OS: Windows 10
CPU: i5-4200H
GPU: GeForce GTX 850M, 2 GB
RAM: 12 GB
Python: 3.6.6
PyTorch: 0.4.1
torchvision: the version matching this PyTorch release
Anaconda virtual environment name: monodepth2-gpu

2. monodepth2-gpu.yaml

name: monodepth2-gpu
channels:
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
  - defaults
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
dependencies:
  - certifi=2021.5.30=py36haa95532_0
  - pip=21.2.2=py36haa95532_0
  - python=3.6.6=hea74fb7_0
  - setuptools=52.0.0=py36haa95532_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.36.2=pyhd3eb1b0_0
  - wincertstore=0.2=py36h7fe50ca_0
  - intel-openmp=2021.3.0=h57928b3_3372
  - jbig=2.1=h8d14728_2003
  - jpeg=9d=h8ffe710_0
  - lerc=2.2.1=h0e60522_0
  - libblas=3.9.0=10_mkl
  - libcblas=3.9.0=10_mkl
  - libdeflate=1.7=h8ffe710_5
  - liblapack=3.9.0=10_mkl
  - libpng=1.6.37=h1d00b33_2
  - libtiff=4.3.0=h0c97f57_1
  - lz4-c=1.9.3=h8ffe710_1
  - mkl=2021.3.0=hb70f87d_564
  - numpy=1.19.5=py36h4b40d73_2
  - python_abi=3.6=2_cp36m
  - tbb=2021.3.0=h2d74725_0
  - xz=5.2.5=h62dcd97_1
  - zlib=1.2.11=h62dcd97_1010
  - zstd=1.5.0=h6255e5f_0
  - opencv=3.3.1=py36h20b85fd_1
prefix: D:\360Downloads\Anaconda3-5.1.0\Anaconda3-5.1.0\envs\monodepth2-gpu
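
With the listing above saved as monodepth2-gpu.yaml, the environment can be recreated in one step (a sketch, assuming conda is on the PATH; the `prefix:` line can be edited or removed to install under a different Anaconda location):

```shell
# Create the environment from the exported yaml.
conda env create -f monodepth2-gpu.yaml

# Activate it before running any monodepth2 scripts.
conda activate monodepth2-gpu
```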

3. requirements.txt

Packages required for single-image prediction ("Prediction for a single image")

certifi==2021.5.30
cycler==0.10.0
decorator==4.4.2
imageio==2.9.0
kiwisolver==1.3.1
matplotlib==3.3.4
networkx==2.5.1
numpy==1.19.5
Pillow==8.3.1
pyparsing==2.4.7
python-dateutil==2.8.2
PyWavelets==1.1.1
scikit-image==0.17.2
scipy==1.5.4
six==1.16.0
tifffile==2020.9.3
torch==0.4.1
torchvision==0.4.1+cpu
wincertstore==0.2
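
Inside the activated environment, the pinned packages can be installed from this file (a sketch; Windows wheels for torch 0.4.1 may need to be fetched from the PyTorch download archive rather than PyPI):

```shell
# Install the pinned dependencies for single-image prediction.
pip install -r requirements.txt
```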

III. Overview

1. Directory structure

E:.
│  .gitignore
│  depth_prediction_example.ipynb
│  evaluate_depth.py
│  evaluate_pose.py
│  export_gt_depth.py
│  kitti_utils.py
│  layers.py
│  LICENSE
│  options.py
│  README.md
│  requirements-cpu.txt
│  test_simple.py
│  train.py
│  trainer.py
│  utils.py
│
├─.github
│  └─ISSUE_TEMPLATE
│          problem-training-on-kitti.md
│          training-on-custom-training-data.md
│
├─.idea
│  │  .gitignore
│  │  misc.xml
│  │  modules.xml
│  │  monodepth2.iml
│  │  vcs.xml
│  │  workspace.xml
│  │
│  └─inspectionProfiles
│          profiles_settings.xml
│
├─assets
│      copyright_notice.txt
│      teaser.gif
│      test_image.jpg
│      test_image_depth.npy
│      test_image_disp.jpeg
│      test_image_disp.npy
│
├─datasets
│  │  kitti_dataset.py
│  │  mono_dataset.py
│  │  __init__.py
│  │
│  └─__pycache__
│          kitti_dataset.cpython-36.pyc
│          mono_dataset.cpython-36.pyc
│          __init__.cpython-36.pyc
│
├─experiments
│      mono+stereo_experiments.sh
│      mono_experiments.sh
│      odom_experiments.sh
│      stereo_experiments.sh
│
├─models
│  └─mono+stereo_640x192  # manually created model folder
│          depth.pth  # model files after unzipping
│          encoder.pth
│          pose.pth
│          poses.npy
│          pose_encoder.pth
│
├─networks
│  │  depth_decoder.py
│  │  pose_cnn.py
│  │  pose_decoder.py
│  │  resnet_encoder.py
│  │  __init__.py
│  │
│  └─__pycache__
│          depth_decoder.cpython-36.pyc
│          pose_cnn.cpython-36.pyc
│          pose_decoder.cpython-36.pyc
│          resnet_encoder.cpython-36.pyc
│          __init__.cpython-36.pyc
│
├─splits
│  │  kitti_archives_to_download.txt
│  │
│  ├─benchmark
│  │      eigen_to_benchmark_ids.npy
│  │      test_files.txt
│  │      train_files.txt
│  │      val_files.txt
│  │
│  ├─eigen
│  │      test_files.txt
│  │
│  ├─eigen_benchmark
│  │      test_files.txt
│  │
│  ├─eigen_full
│  │      train_files.txt
│  │      val_files.txt
│  │
│  ├─eigen_zhou
│  │      train_files.txt
│  │      val_files.txt
│  │
│  └─odom
│          test_files_09.txt
│          test_files_10.txt
│          train_files.txt
│          val_files.txt
│
└─__pycache__
        evaluate_depth.cpython-36.pyc
        kitti_utils.cpython-36.pyc
        layers.cpython-36.pyc
        options.cpython-36.pyc
        utils.cpython-36.pyc

2. Downloading the pretrained model from Google Cloud Storage

import hashlib
import os
import urllib.request
import zipfile


def download_model_if_doesnt_exist(model_name):
    """If pretrained kitti model doesn't exist, download and unzip it
    """
    # values are tuples of (<google cloud URL>, <md5 checksum>)
    download_paths = {
        "mono_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono_640x192.zip",
             "a964b8356e08a02d009609d9e3928f7c"),
        "stereo_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/stereo_640x192.zip",
             "3dfb76bcff0786e4ec07ac00f658dd07"),
        "mono+stereo_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono%2Bstereo_640x192.zip",
             "c024d69012485ed05d7eaa9617a96b81"),
        "mono_no_pt_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono_no_pt_640x192.zip",
             "9c2f071e35027c895a4728358ffc913a"),
        "stereo_no_pt_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/stereo_no_pt_640x192.zip",
             "41ec2de112905f85541ac33a854742d1"),
        "mono+stereo_no_pt_640x192":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono%2Bstereo_no_pt_640x192.zip",
             "46c3b824f541d143a45c37df65fbab0a"),
        "mono_1024x320":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono_1024x320.zip",
             "0ab0766efdfeea89a0d9ea8ba90e1e63"),
        "stereo_1024x320":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/stereo_1024x320.zip",
             "afc2f2126d70cf3fdf26b550898b501a"),
        "mono+stereo_1024x320":
            ("https://storage.googleapis.com/niantic-lon-static/research/monodepth2/mono%2Bstereo_1024x320.zip",
             "cdc5fc9b23513c07d5b19235d9ef08f7"),
        }

    if not os.path.exists("models"):
        os.makedirs("models")

    model_path = os.path.join("models", model_name)

    def check_file_matches_md5(checksum, fpath):
        if not os.path.exists(fpath):
            return False
        with open(fpath, 'rb') as f:
            current_md5checksum = hashlib.md5(f.read()).hexdigest()
        return current_md5checksum == checksum

    # see if we have the model already downloaded...
    if not os.path.exists(os.path.join(model_path, "encoder.pth")):

        model_url, required_md5checksum = download_paths[model_name]

        if not check_file_matches_md5(required_md5checksum, model_path + ".zip"):
            print("-> Downloading pretrained model to {}".format(model_path + ".zip"))
            urllib.request.urlretrieve(model_url, model_path + ".zip")

        if not check_file_matches_md5(required_md5checksum, model_path + ".zip"):
            print("   Failed to download a file which matches the checksum - quitting")
            quit()

        print("   Unzipping model...")
        with zipfile.ZipFile(model_path + ".zip", 'r') as f:
            f.extractall(model_path)

        print("   Model unzipped to {}".format(model_path))
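
The nested MD5 helper can be exercised on its own; a minimal sketch (the temp file and its contents are illustrative, not from the repository):

```python
import hashlib
import os
import tempfile


def check_file_matches_md5(checksum, fpath):
    # Same logic as the nested helper above: hash the file and compare.
    if not os.path.exists(fpath):
        return False
    with open(fpath, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() == checksum


# Write a small file and verify that its checksum round-trips.
with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as f:
    f.write(b"monodepth2")
    path = f.name

expected = hashlib.md5(b"monodepth2").hexdigest()
print(check_file_matches_md5(expected, path))  # → True
print(check_file_matches_md5("0" * 32, path))  # → False
os.remove(path)
```

This is why a partially downloaded zip triggers a fresh download: its hash no longer matches the recorded checksum.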

IV. Potential issues

  • PyTorch version mismatch
raise NotSupportedError(base.range(), "slicing multiple dimensions at the same time isn't supported yet")
torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same time isn't supported yet
        proposals (Tensor): boxes to be encoded
    """

    # perform some unpacking to make it JIT-fusion friendly
    wx = weights[0]
    wy = weights[1]
    ww = weights[2]
    wh = weights[3]

    proposals_x1 = proposals[:, 0].unsqueeze(1)
                   ~~~~~~~~~ <--- HERE
    proposals_y1 = proposals[:, 1].unsqueeze(1)
    proposals_x2 = proposals[:, 2].unsqueeze(1)
    proposals_y2 = proposals[:, 3].unsqueeze(1)

    reference_boxes_x1 = reference_boxes[:, 0].unsqueeze(1)
    reference_boxes_y1 = reference_boxes[:, 1].unsqueeze(1)
    reference_boxes_x2 = reference_boxes[:, 2].unsqueeze(1)
    reference_boxes_y2 = reference_boxes[:, 3].unsqueeze(1)
[Solution] torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same..
https://blog.csdn.net/qq_29750461/article/details/103448297

Cause:
PyTorch version mismatch — the TorchScript frontend in this PyTorch version does not support slicing multiple dimensions at once.

Solution:
Comment out the `@torch.jit.script` decorator above the offending function, i.e. change it to `# @torch.jit.script`.
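
A minimal sketch of why the workaround helps (the function below is illustrative, not taken from torchvision): with the decorator active, old TorchScript frontends reject multi-dimension slicing such as `proposals[:, 0]` at compile time; with the decorator commented out, the function is left as plain Python and runs normally.

```python
# @torch.jit.script   # <-- commented out to avoid NotSupportedError
def first_column(proposals):
    # Plain-Python equivalent of proposals[:, 0] on a list of row-lists.
    return [row[0] for row in proposals]


print(first_column([[1, 2], [3, 4]]))  # → [1, 3]
```

The trade-off is that the function is no longer JIT-compiled, which only costs performance, not correctness.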
  • OpenMP error
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
Reference:
A direct fix for the "OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized." error
https://blog.csdn.net/Victor_X/article/details/110082033

Cause:
The libiomp5md.dll in the monodepth2 virtual environment conflicts with the libiomp5md.dll in the Anaconda base environment.

Solution:
Rename the libiomp5md.dll in the monodepth2 virtual environment.
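
Alternatively, the unsafe workaround named in the error message can be applied from Python; a sketch (the variable must be set before the first module that loads libiomp5md.dll, such as torch, MKL-backed numpy, or cv2, is imported):

```python
import os

# Intel documents this as an unsafe, unsupported workaround, so renaming
# the duplicate DLL as described above is the cleaner fix.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# import torch  # only import heavy libraries after setting the variable
print(os.environ["KMP_DUPLICATE_LIB_OK"])  # → TRUE
```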

Copyright notice: this is an original article by m0_37605642, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.