High-resolution volumetric medical imaging, such as MRI and thin-slice (spiral) CT, is the state-of-the-art non-invasive method for lesion detection and malignancy analysis in many preventive screening programs. Detecting small lesions in volumetric data remains particularly difficult, however, because in 2D images subtle lesions closely resemble, and are easily confused with, many fine anatomic structures. While radiologists can mentally reconstruct the 3D structure by browsing consecutive 2D slices, most existing deep learning convolutional neural networks (CNNs) are designed for 2D images. In this research proposal, we envision an ensemble of multi-directional 2D orthographic projection views (obtained by volume rendering) of a subvolume of interest, so that a partially obscured lesion can be detected from at least some viewing directions by state-of-the-art semantic segmentation CNNs. The resulting 2D segmentation maps are then projected back to their original 3D positions to accumulate a 3D probability map of candidate lesion locations. Results from our pilot study indicate that reconstructing 3D context from an ensemble of multi-directional 2D projection views is both accurate and of clinical value.
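The projection-and-back-projection pipeline can be sketched in a few lines of NumPy. This is a simplified stand-in, not the proposal's implementation: it uses only the three axis-aligned max-intensity projections (the proposal renders many arbitrary directions), and it substitutes thresholded projections for the output of a trained segmentation CNN. The function names `orthographic_mips` and `accumulate` are illustrative, not from the source.

```python
import numpy as np

def orthographic_mips(vol):
    """Max-intensity orthographic projections along the three volume axes.
    (A simplified stand-in for multi-directional volume rendering.)"""
    return [vol.max(axis=k) for k in range(3)]

def accumulate(masks, shape):
    """Back-project 2D lesion masks into a 3D probability-accumulation map.
    Each voxel's score is the fraction of views whose mask covers it."""
    acc = np.zeros(shape, dtype=np.float32)
    for k, m in enumerate(masks):
        # Broadcasting the 2D mask along its viewing axis re-projects it
        # onto every voxel that could have produced that pixel.
        acc += np.expand_dims(m.astype(np.float32), axis=k)
    return acc / len(masks)

# Toy example: a single bright "lesion" voxel at (2, 3, 4) in an 8^3 volume.
vol = np.zeros((8, 8, 8), dtype=np.float32)
vol[2, 3, 4] = 1.0
# Pretend a 2D segmentation CNN returned these masks (here: thresholded MIPs).
masks = [p > 0.5 for p in orthographic_mips(vol)]
heat = accumulate(masks, vol.shape)
print(heat[2, 3, 4])  # the true lesion voxel scores 1.0 (flagged in all views)
```

Voxels flagged from only one or two directions receive proportionally lower scores, which is the mechanism by which the accumulation map suppresses 2D false positives that do not agree across views.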