Project name: Oulu-IMEDS/KNEEL

Repository URL: https://github.com/Oulu-IMEDS/KNEEL

Language: Python 98.4%

KNEEL: Hourglass Networks for Knee Anatomical Landmark Localization

(c) Aleksei Tiulpin, University of Oulu, 2019

About

Approach

In this paper, we tackled the problem of anatomical landmark localization in knee radiographs at all stages of osteoarthritis. We combined recent advances from the landmark localization field and distilled them into a novel modification of the hourglass architecture.

To train this model, we propose to use mixup, cutout augmentation, and dropout, with no weight decay. We further propose transfer learning from low-cost annotations (knee joint centers on the whole knee radiographs). In the paper, we showed that our transfer learning technique significantly boosts performance. Furthermore, having models trained to work with both the whole radiographs and the localized knee joint areas, we were able to build a full pipeline for landmark localization.
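For illustration, below is a minimal, generic sketch of mixup applied to a batch of images and target heatmaps in PyTorch. This is not the repository's implementation; the function signature and the value of alpha are assumptions.

# Generic mixup sketch: blend each sample in a batch with another
# randomly chosen sample, using a Beta-distributed mixing coefficient.
import numpy as np
import torch

def mixup(images, targets, alpha=0.4):
    lam = float(np.random.beta(alpha, alpha))   # mixing coefficient in (0, 1)
    idx = torch.randperm(images.size(0))        # random pairing within the batch
    mixed_images = lam * images + (1 - lam) * images[idx]
    mixed_targets = lam * targets + (1 - lam) * targets[idx]
    return mixed_images, mixed_targets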

What's included

The repository includes the code for training and testing, annotations for the OAI dataset, and links to the pre-trained models.

How to install and run

Preparing the training data

Download the OAI baseline images from https://nda.nih.gov/oai/. Access to the images is free and painless: you just need to register, provide some information about yourself, and agree to the terms of data use.

We provide the script and the annotations for creating the cropped ROIs from the original DICOM images. The annotations are stored in the file annotations/bf_landmarks_1_0.3.csv. The script for creating the high-cost and low-cost datasets from the raw DICOM data is stored in scripts/data_stuff/create_train_dataset_from_oai.py.

Execute the aforementioned script as follows:

python create_train_dataset_from_oai.py --oai_xrays_path <OAI_PATH> \
                                        --annotations_path ../annotations \
                                        --to_save_high_cost_img <path the images corresponding to high-cost annotations> \
                                        --to_save_low_cost_img <path the images corresponding to low-cost annotations>

Here, <OAI_PATH> should correspond to the folder with OAI baseline images containing the file contents.csv.
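Conceptually, each training sample pairs a DICOM image with annotated landmark coordinates, from which an ROI is cropped. A rough sketch of that operation is below; the CSV column names fname, x, and y and the crop size are assumptions, the actual schema is defined by the annotations file and the script above.

# Rough sketch of cropping an ROI around an annotated point.
# The column names ("fname", "x", "y") are hypothetical; see
# annotations/bf_landmarks_1_0.3.csv for the real schema.
import pandas as pd
import pydicom

annotations = pd.read_csv('annotations/bf_landmarks_1_0.3.csv')
row = annotations.iloc[0]
img = pydicom.dcmread(row['fname']).pixel_array
half = 64  # half of the crop side in pixels (illustrative value)
x, y = int(row['x']), int(row['y'])
roi = img[y - half:y + half, x - half:x + half]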

After you have created the dataset, you can follow the script run_experiments.sh and set the --data_root parameter to be the same as <path the images corresponding to high/low-cost annotations>.

Note: you will likely see warnings like UserWarning: Incorrect value for Specific Character Set 'ISO_2022_IR_6' - assuming 'ISO 2022 IR 6'. These can be safely ignored; they are artifacts coming from the DICOM metadata.
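If these warnings clutter your logs, one option (our suggestion, not something the training scripts do) is to silence them with Python's standard warnings module:

# Optionally silence the pydicom character-set warnings noted above.
import warnings

warnings.filterwarnings(
    "ignore",
    message="Incorrect value for Specific Character Set.*",
    category=UserWarning,
)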

Reproducing the experiments from the paper

All the experiments in the paper were run with PyTorch 1.1.0 and Anaconda. To run the experiments, simply copy the contents of the folder hc_experiments into hc_experiments_todo. Set the necessary environment variables in the file run_experiments.sh and then run this script. The code is written to leverage all available GPU resources, running one experiment per card.

To facilitate reproducibility, a conda environment file is provided alongside the inference Docker files (see below).

Inference on your data

  1. Download the models: sh fetch_snapshots.sh
  2. Run the inference as follows (remember to use nvidia-docker and the gpu tag of the Docker image for CUDA support):
docker run -it --name landmark_inference --rm \
            -v <WORKDIR_LOCATION>:/workdir/ \
            -v $(pwd)/snapshots_release:/snapshots/:ro \
            -v <DATA_LOCATION>:/data/:ro --ipc=host \
            miptmloulu/kneel:cpu python -u inference_new_data.py \
            --dataset_path /data/ \
            --dataset <DATASET_NAME> \
            --workdir /workdir/ \
            --mean_std_path /snapshots/mean_std.npy \
            --lc_snapshot_path /snapshots/lext-devbox_2019_07_14_16_04_41 \
            --hc_snapshot_path /snapshots/lext-devbox_2019_07_14_19_25_40 \
            --device <DEVICE> \
            --refine True

In the command above, you need to replace:

  • <WORKDIR_LOCATION> - the folder where the results will be saved.
  • <DATA_LOCATION> - the folder where the data are located.
  • <DATASET_NAME> - the name of the folder containing DICOM images. It should be a sub-folder of <DATA_LOCATION>.
  • <DEVICE> - cuda or cpu, depending on the platform of execution and on how you built the Docker image.

Please note that your NVIDIA driver must be compatible with CUDA 10. You can also build the Docker images yourself if you want.
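A quick way to check, from any Python environment with PyTorch installed, whether a compatible GPU is visible:

# Verify that PyTorch sees a usable GPU and report its CUDA build version.
import torch

print(torch.cuda.is_available())  # True if the driver and GPU are usable
print(torch.version.cuda)         # CUDA version PyTorch was built against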

Running a flask micro-service

In addition to CLI inference, we also provide a flask micro-service allowing KNEEL to be integrated into a data-processing pipeline. We have built support for both CPU and GPU. To execute the micro-service on CPU, run the following command:

docker run -it --name landmark_inference --rm \
              -v $(pwd)/snapshots_release:/snapshots/:ro \
              -p 5000:5000 \
              --ipc=host \
              miptmloulu/kneel:cpu python -u -m kneel.inference.app \
              --lc_snapshot_path /snapshots/lext-devbox_2019_07_14_16_04_41 \
              --hc_snapshot_path /snapshots/lext-devbox_2019_07_14_19_25_40 \
              --refine True --mean_std_path /snapshots/mean_std.npy \
              --deploy True --device cpu

To do the same on GPU, run the following with nvidia-docker:

nvidia-docker run -it --name landmark_inference --rm \
            -v $(pwd)/snapshots_release:/snapshots/:ro \
            -p 5000:5000 \
            --ipc=host \
            miptmloulu/kneel:gpu python -u -m kneel.inference.app \
            --lc_snapshot_path /snapshots/lext-devbox_2019_07_14_16_04_41 \
            --hc_snapshot_path /snapshots/lext-devbox_2019_07_14_19_25_40 \
            --refine True --mean_std_path /snapshots/mean_std.npy \
            --deploy True --device cuda

Once the micro-service is deployed, it is fairly easy to get the landmarks using a Python or Node.js script: just send a POST request with a JSON body of the form {"dicom": <RAW_DICOM_IN_BASE_64>}. To encode a DICOM image in Python, read it as a binary file and then use the standard base64 library: base64.b64encode(dicom_binary).decode('ascii').
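For illustration, a minimal Python client could look like the following sketch. The route /predict and the file name knee.dcm are assumptions; check kneel.inference.app for the actual endpoint.

# Minimal client sketch for the deployed micro-service.
# NOTE: the "/predict" route is hypothetical; the real endpoint is
# defined in kneel.inference.app.
import base64
import requests

with open("knee.dcm", "rb") as f:
    dicom_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post("http://localhost:5000/predict", json={"dicom": dicom_b64})
print(resp.json())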

License

If you use the annotations from this work, you must cite the following paper (accepted to the ICCV 2019 VRMI Workshop):

@article{tiulpin2019kneel,
  title={KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks},
  author={Tiulpin, Aleksei and Melekhov, Iaroslav and Saarakkala, Simo},
  journal={arXiv preprint arXiv:1907.12237},
  year={2019}
}

The code and the pre-trained models are not available for any commercial use, including research for commercial purposes.



