Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa (CUBITAL)

This repository contains supplementary material for the conference paper "Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa" (MICAI 2023, oral session). Authors: Edwin Salcedo and Patricia Peñaloza

[Project page] [Dataset] [arXiv]

Contents

1. Overview

2. Dataset

3. Getting started

4. Citation

1. Overview

Motivation

Assessing vein condition and visibility is crucial before obtaining intravenous access in the antecubital fossa, a common site for blood draws and intravenous therapy. However, medical practitioners often struggle with patients who have less visible veins due to factors such as fluid retention, age, obesity, darker skin tones, or diabetes. Current research explores the use of near-infrared (NIR) imaging and deep learning (DL) for forearm vein segmentation, achieving high precision. However, a research gap remains in the recognition of veins, specifically in the antecubital fossa. Additionally, most studies rely on stationary computers, limiting portability for medical personnel during venipuncture procedures. To address these challenges, we propose a portable vein finder for the antecubital fossa based on the Raspberry Pi 4B.

Computer vision system

We implemented various vein semantic segmentation models in Deep_Learning_based_Segmentation.ipynb and selected the best-performing one, a U-Net model. We then enhanced it in Inference_Multi_task_U_Net.ipynb by adding an additional head to predict the coordinates of the antecubital fossa and the arm angle. The final computer vision system deployed in the vein finder is shown below:

We also include plots showing the model's layers in both horizontal and vertical alignment.
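For reference, the three outputs of the modified network (a class mask, the antecubital fossa coordinates, and an arm angle) can be combined into a single visualization. The sketch below is illustrative only: the function name, the assumption of normalized coordinates, and the color choices are ours, not the repository's actual API.

```python
import numpy as np

def render_prediction(mask, coords, angle_deg):
    """Overlay multi-task U-Net outputs on one RGB image.

    mask:      (H, W) array with 0 = background, 1 = arm, 2 = vein
    coords:    (x, y) antecubital fossa position, normalized to [0, 1]
    angle_deg: predicted arm angle in degrees
    """
    h, w = mask.shape
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    overlay[mask == 1] = (80, 80, 80)   # arm in gray
    overlay[mask == 2] = (0, 255, 0)    # veins in green

    # Mark the predicted antecubital fossa with a small red square.
    cx = int(coords[0] * (w - 1))
    cy = int(coords[1] * (h - 1))
    y0, y1 = max(cy - 2, 0), min(cy + 3, h)
    x0, x1 = max(cx - 2, 0), min(cx + 3, w)
    overlay[y0:y1, x0:x1] = (255, 0, 0)
    return overlay, angle_deg
```

In the deployed GUIs, an overlay like this is what lets the practitioner see both the segmented veins and the suggested puncture region at once.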

Hardware prototype

The device was designed using the 3D CAD software SolidWorks. It can be viewed by opening the file Ensamblaje.SLDASM. We also provide a detailed list of its components and visuals of the final 3D-printed prototype.

| Component | Specifications | CAD Design |
| --- | --- | --- |
| Power bank | Xiaomi Mi Power Bank 3 | Power bank |
| NIR camera | Raspberry Pi Camera Module 2 NoIR | Holder Picam Noir Leds Matrix |
| LCD display | Waveshare 3.5inch Touch Screen | Screen LCD Assembly |
| Processing unit | Raspberry Pi 4 Model B | Raspberry Pi 4B |
| Relay module | D.C. 5V 1-Channel Relay Module with Optocoupler | Relay |
| LED matrix | Perforated Phenolic Plate 5x7cm + 12 Infrared Ray IR 940nm Emitter LED Diode Lights | LED Matrix |
| On/off switch | ON-OFF Switch 19*13mm KCD1-101 | - |
| Case | - | Base Cover Charger |
| 9V battery holder | - | Case Holder Battery |

Prototype views: frontal, back, side, and inner.

2. Dataset

Data collection

To collect the dataset, we captured 2,016 NIR images from 1,008 young individuals with low-visibility veins. Each individual placed one arm at a time on a table, allowing us to capture an NIR image with a preliminary version of the device. The dataset, available here, comes in four versions:

  • A: final_dataset.zip → Base version with complete annotations. Three samples are shown below.
  • B: final_augmented_dataset.zip → Version A after data augmentation.
  • C: square_final_dataset512x512.zip → Version A resized to 512x512 pixels to match the input requirements of the semantic segmentation models.
  • D: square_augmented_final_dataset512x512.zip → Version B resized to 512x512 pixels.
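Dataset versions C and D simply reshape every image to the 512x512 input expected by the segmentation models. If you need to reproduce that step yourself, a minimal, library-free nearest-neighbor resize is sketched below; only the 512x512 target comes from the dataset description, the function itself is our assumption.

```python
import numpy as np

def resize_nearest(img, size=(512, 512)):
    """Nearest-neighbor resize of a (H, W) or (H, W, C) array."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]  # source row for each target row
    cols = np.arange(size[1]) * w // size[1]  # source col for each target col
    return img[rows][:, cols]
```

In practice cv2.resize or PIL is preferable for images, but nearest-neighbor is the right interpolation for the masks because it never invents new class labels.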

Below, you can see the original NIR samples, their preprocessed versions (after grayscale conversion and CLAHE), and their annotations: a grayscale mask overlay (shown with a different colormap for visualization), a dot marking the x and y coordinates of the antecubital fossa, and a floating-point number representing the arm angle. We also detail the contents of final_dataset.zip, the base version of the dataset.

Sample gallery: NIR images, preprocessed images, and annotations.
final_dataset/
├── dataset.csv            # Demographic data per sample: age, complexion, gender, observations, NIR image path, preprocessed image path, mask path, antecubital fossa coordinates, and arm angle. Each subject contributed two samples, one per arm.
├── masks/                 # Grayscale masks with pixel values 0, 1, and 2 for background, arm, and vein, respectively.
├── nir_images/            # Raw NIR images.
└── preprocessed_images/   # The same NIR images after grayscale conversion and CLAHE.
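Because the masks encode background, arm, and vein as the raw values 0, 1, and 2, they look almost black when opened directly. The sketch below decodes one; the pixel encoding is from the description above, while the helper names and the 0/127/254 display scaling are our own choices.

```python
import numpy as np

BACKGROUND, ARM, VEIN = 0, 1, 2  # pixel values used in masks/

def vein_fraction(mask):
    """Fraction of non-background (arm + vein) pixels labeled as vein."""
    arm_px = np.count_nonzero(mask != BACKGROUND)
    vein_px = np.count_nonzero(mask == VEIN)
    return vein_px / arm_px if arm_px else 0.0

def to_visible(mask):
    """Stretch the 0/1/2 labels to 0/127/254 so the mask is viewable."""
    return mask.astype(np.uint8) * 127
```

Loading a mask with PIL or OpenCV in grayscale mode and passing it through to_visible is enough to inspect the annotations by eye.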

Preliminary results

Initial results from the implementation of U-Net, SegNet, PSPNet, Pix2Pix, and DeepLabv3+ on dataset version C are presented. The results indicate that U-Net achieved the highest accuracy. As a result, we focused our further research on this method for antecubital fossa detection.

Validation data

To validate the device, we asked three certified nurses to indicate where they would perform venipuncture on 384 samples. We saved this information in image format and shared it in the validation folder inside the dataset directory, along with the documents signed by the nurses confirming their consent to share it. The annotated images can be used to compare the model's inference against the nurses' chosen venipuncture locations. Using this subset, we evaluated the base U-Net model and found an 83% agreement between the regions identified by the nurses and those identified by the U-Net vein segmentation algorithm.
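The 83% figure compares nurse-marked regions with the segmented veins. The exact agreement metric is not restated here; a common choice for comparing two binary regions is intersection over union (IoU), sketched below under that assumption:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two boolean masks (1.0 if both are empty)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, truth).sum() / union
```

The same function works whether the nurse annotation is a full region or a small marked neighborhood, as long as both inputs are rasterized to the same image size.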

3. Getting started

Initial inference samples

First, we provide a pretrained multi-task U-Net model, embedded within a complete pipeline, for performing inference on NIR images included in the folder subset/preprocessed_images. You can run the pipeline by following these steps:

# Clone the repository
git clone git@github.com:EdwinTSalcedo/CUBITAL.git cubital

# Create and activate a new conda environment
conda create -n new_env python=3.10.12
conda activate new_env

# Install the dependencies 
pip install -r requirements.txt

# Execute inference script
(new_env) python inference.py

Graphical User Interface (GUI)

Furthermore, we provide two GUIs for on-device deployment of the base and modified U-Nets. Run the following command for forearm vein segmentation:

(new_env) python edgeai/final_interface_vein_segmentation.py 

And, execute the following command for vein segmentation in the antecubital fossa. This implements the novel architecture proposed in this research.

(new_env) python edgeai/final_interface_multitask.py 

While you can execute any of these scripts with any camera connected to your device, both require an NIR camera and NIR lighting for optimal inference results.

On-device deployment

Once the repository is cloned onto a Raspberry Pi 4B, two libraries are required to launch either of the interfaces above: OpenCV and TensorFlow Lite. The following steps cover the installation and were verified on a Raspberry Pi 4B running Raspbian Buster (64-bit) with a 32 GB SD card.

Step 1: Install dependencies

  1. Update existing packages:
sudo apt-get update && sudo apt-get upgrade
  2. Install image I/O packages, for support of various image file formats:
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
  3. Set up video I/O packages, to handle different video file formats and work with video streams:
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
  4. Install the GTK development library, needed to compile the highgui module (used for displaying images and creating basic GUIs):
sudo apt-get install libgtk2.0-dev
  5. Install additional dependencies to optimize OpenCV operations:
sudo apt-get install libatlas-base-dev gfortran

Step 2: Installing pip (Package Management Tool)

If you havenโ€™t installed pip for Python 3 yet, execute the command below:

sudo apt-get install python3-pip

Step 3: Installing the NumPy Library

pip install numpy

Step 4: Accessing OpenCV in the Raspbian Repository. To locate OpenCV in the default Raspbian Buster repository, use:

apt list python*opencv*

Step 5: Installing OpenCV. Execute the following command to install OpenCV on the Raspberry Pi:

sudo apt install python3-opencv

Step 6: Verifying the OpenCV Installation. To confirm that OpenCV is installed, use:

apt show python3-opencv

Optionally, you can combine the previous commands into a single one. The following worked for us:

sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev gcc gfortran libgfortran5 libatlas3-base libatlas-base-dev libopenblas-dev libopenblas-base libblas-dev liblapack-dev cython3 libatlas-base-dev openmpi-bin libopenmpi-dev python3-dev build-essential cmake pkg-config libjpeg-dev libtiff5-dev libpng-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libfontconfig1-dev libcairo2-dev libgdk-pixbuf2.0-dev libpango1.0-dev libgtk2.0-dev libgtk-3-dev libhdf5-serial-dev libhdf5-103 libqt5gui5 libqt5webkit5 libqt5test5 python3-pyqt5

Step 7: Run the following command to install TensorFlow Lite's interpreter:

pip3 install tflite-runtime

To check if TensorFlow Lite is installed correctly, run:

python3 -c "import tflite_runtime.interpreter as tflite; print('TensorFlow Lite is installed successfully!')"

If no errors appear, the installation was successful. You can now run any of the GUIs shown above!

4. Citation

If you find CUBITAL useful in your project, please consider citing the following paper:

@inproceedings{salcedo2023,
  title={Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa},
  author={Salcedo, Edwin and Pe{\~n}aloza, Patricia},
  booktitle={Mexican International Conference on Artificial Intelligence},
  pages={297--314},
  year={2023},
  organization={Springer}
}
