References
[1] C. L. Srinidhi, O. Ciga, and A. L. Martel, “Deep neural network models for
computational histopathology: A survey,” Med. Image Anal., vol. 67, 2021, doi:
10.1016/j.media.2020.101813.
[2] P.-H. C. Chen et al., “An augmented reality microscope with real-time artificial
intelligence integration for cancer diagnosis,” Nat. Med., vol. 25, no. 9, pp. 1453–1457,
2019, doi: 10.1038/s41591-019-0539-7.
[3] G. Campanella et al., “Clinical-grade computational pathology using weakly supervised
deep learning on whole slide images,” Nat. Med., vol. 25, no. 8, pp. 1301–1309, 2019, doi:
10.1038/s41591-019-0508-1.
[4] M. I. Razzak, S. Naz, and A. Zaib, “Deep learning for medical image processing:
Overview, challenges and the future,” Lect. Notes Comput. Vis. Biomech., vol. 26, pp.
323–350, 2018, doi: 10.1007/978-3-319-65981-7_12.
[5] K. Sirinukunwattana et al., “Gland segmentation in colon histology images: The glas
challenge contest,” Med. Image Anal., vol. 35, pp. 489–502, 2017, doi:
10.1016/j.media.2016.08.008.
[6] B. Li, Y. Li, and K. W. Eliceiri, “Dual-stream Multiple Instance Learning Network for
Whole Slide Image Classification with Self-supervised Contrastive Learning,” Proc.
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 14313–14323, 2021, doi:
10.1109/CVPR46437.2021.01409.
[7] S. Maksoud, K. Zhao, P. Hobson, A. Jennings, and B. C. Lovell, “SOS: Selective
objective switch for rapid immunofluorescence whole slide image classification,” Proc.
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 3861–3870, 2020, doi:
10.1109/CVPR42600.2020.00392.
[8] O. Ciga, T. Xu, S. Nofech-Mozes, S. Noy, F. I. Lu, and A. L. Martel, “Overcoming the
limitations of patch-based learning to detect cancer in whole slide images,” Sci. Rep.,
vol. 11, no. 1, p. 8894, 2021, doi: 10.1038/s41598-021-88494-z.
[9] Y. Xu, Z. Jia, Y. Ai, F. Zhang, M. Lai, and E. I. C. Chang, “Deep convolutional activation
features for large scale Brain Tumor histopathology image classification and
segmentation,” in IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 2015, pp.
947–951, doi: 10.1109/ICASSP.2015.7178109.
[10] X. Wang et al., “Weakly supervised deep learning for whole slide lung cancer image
analysis,” IEEE Trans. Cybern., vol. 50, no. 9, pp. 3950–3962, 2019.
[11] N. Hashimoto et al., “Multi-scale Domain-Adversarial Multiple-instance CNN for
Cancer Subtype Classification with Unannotated Histopathological Images,” Proc. IEEE
Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 3851–3860, 2020, doi:
10.1109/CVPR42600.2020.00391.
[12] O. Maron and T. Lozano-Pérez, “A framework for multiple-instance learning,” Adv.
Neural Inf. Process. Syst., vol. 10, 1998.
[13] M. A. Carbonneau, V. Cheplygina, E. Granger, and G. Gagnon, “Multiple instance
learning: A survey of problem characteristics and applications,” Pattern Recognit., vol.
77, pp. 329–353, 2018, doi: 10.1016/j.patcog.2017.10.009.
[14] M. Ilse, J. M. Tomczak, and M. Welling, “Attention-based deep multiple instance
learning,” 35th Int. Conf. Mach. Learn. ICML 2018, vol. 5, pp. 3376–3391, 2018.
[15] Y. Zhao et al., “Predicting Lymph Node Metastasis Using Histopathological Images
Based on Multiple Instance Learning with Deep Graph Convolution,” Proc. IEEE
Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4836–4845, 2020, doi:
10.1109/CVPR42600.2020.00489.
[16] S. Takahama et al., “Multi-stage pathological image classification using semantic
segmentation,” Proc. IEEE Int. Conf. Comput. Vis., pp. 10701–10710, 2019, doi:
10.1109/ICCV.2019.01080.
[17] M. Y. Lu, D. F. K. Williamson, T. Y. Chen, R. J. Chen, M. Barbieri, and F. Mahmood,
“Data-efficient and weakly supervised computational pathology on whole-slide images,”
Nat. Biomed. Eng., vol. 5, no. 6, pp. 555–570, 2021, doi: 10.1038/s41551-020-00682-w.
[18] N. Tomita, B. Abdollahi, J. Wei, B. Ren, A. Suriawinata, and S. Hassanpour, “Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus
Tissue on Histopathological Slides,” JAMA Netw. Open, vol. 2, no. 11, pp. 1–13, 2019,
doi: 10.1001/jamanetworkopen.2019.14645.
[19] H. Zhang et al., “DTFD-MIL: Double-Tier Feature Distillation Multiple Instance
Learning for Histopathology Whole Slide Image Classification,” Proc. IEEE/CVF Conf.
Comput. Vis. Pattern Recognit., pp. 18780–18790, 2022, doi:
10.1109/CVPR52688.2022.01824.
[20] L. Qu, X. Luo, M. Wang, and Z. Song, “Bi-directional Weakly Supervised Knowledge
Distillation for Whole Slide Image Classification,” Adv. Neural Inf. Process. Syst., 2022,
[Online].
Available: http://arxiv.org/abs/2210.03664.
[21] D. Tellez et al., “Whole-slide mitosis detection in H&E breast histology using PHH3 as
a reference to train distilled stain-invariant convolutional networks,” IEEE Trans. Med.
Imaging, vol. 37, no. 9, pp. 2126–2136, 2018.
[22] H. Chen et al., “Rectified cross-entropy and upper transition loss for weakly supervised
whole slide image classifier,” in Medical Image Computing and Computer Assisted
Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October
13–17, 2019, Proceedings, Part I 22, 2019, pp. 351–359.
[23] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of
deep networks,” in International conference on machine learning, 2017, pp. 1126–1135.
[24] X. Li et al., “A comprehensive review of computer-aided whole-slide image analysis:
from datasets to feature extraction, segmentation, classification and detection
approaches,” Artif. Intell. Rev., vol. 55, no. 6, pp. 4809–4878, 2022, doi:
10.1007/s10462-021-10121-0.
[25] M. I. Fazal, M. E. Patel, J. Tye, and Y. Gupta, “The past, present and future role of
artificial intelligence in imaging,” Eur. J. Radiol., vol. 105, pp. 246–250, Aug. 2018,
doi: 10.1016/j.ejrad.2018.06.020.
[26] P. Pati et al., “Weakly Supervised Joint Whole-Slide Segmentation and Classification in
Prostate Cancer,” 2023, [Online]. Available: http://arxiv.org/abs/2301.02933.
[27] Z. Guo et al., “A Fast and Refined Cancer Regions Segmentation Framework in
Whole-slide Breast Pathological Images,” Sci. Rep., vol. 9, no. 1, pp. 1–10, 2019, doi:
10.1038/s41598-018-37492-9.
[28] J. Yan et al., “Hierarchical Attention Guided Framework for Multi-resolution
Collaborative Whole Slide Image Segmentation,” Int. Conf. Med. Image Comput.
Comput. Interv., vol. 12908, pp. 153–163, 2021, doi: 10.1007/978-3-030-87237-3_15.
[29] L. Qu, X. Luo, S. Liu, M. Wang, and Z. Song, “DGMIL: Distribution Guided Multiple
Instance Learning for Whole Slide Image Classification,” Int. Conf. Med. Image Comput.
Comput. Interv., 2022.
[30] Y. Guan et al., “Node-aligned Graph Convolutional Network for Whole-slide Image
Representation and Classification,” Proc. IEEE/CVF Conf. Comput. Vis. Pattern
Recognit., pp. 18791–18801, 2022, doi: 10.1109/CVPR52688.2022.01825.
[31] Z. Shao et al., “TransMIL: Transformer based Correlated Multiple Instance Learning for
Whole Slide Image Classification,” Adv. Neural Inf. Process. Syst., vol. 34, pp. 2136–2147,
2021.
[32] W. Huang et al., “Automatic HCC Detection Using Convolutional Network with
Multi-Magnification Input Images,” 2019 IEEE Int. Conf. Artif. Intell. Circuits Syst., pp. 194–
198, 2019.
[33] L. Duran-Lopez, J. P. Dominguez-Morales, A. F. Conde-Martin, S. Vicente-Diaz, and
A. Linares-Barranco, “PROMETEO: A CNN-Based Computer-Aided Diagnosis System
for WSI Prostate Cancer Detection,” IEEE Access, vol. 8, pp. 128613–128628, 2020,
doi: 10.1109/ACCESS.2020.3008868.
[34] M. Adnan, S. Kalra, and H. R. Tizhoosh, “Representation learning of histopathology
images using graph neural networks,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern
Recognit. Work., pp. 4254–4261, 2020, doi:
10.1109/CVPRW50498.2020.00502.
[35] N. Tomita, B. Abdollahi, J. Wei, B. Ren, A. Suriawinata, and S. Hassanpour, “Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus
Tissue on Histopathological Slides,” JAMA Netw. Open, vol. 2, no. 11, p. e1914645,
Nov. 2019, doi: 10.1001/jamanetworkopen.2019.14645.
[36] J. Ke, Y. Shen, J. D. Wright, N. Jing, X. Liang, and D. Shen, “Identifying patch-level
MSI from histological images of Colorectal Cancer by a Knowledge Distillation Model,”
Proc. - 2020 IEEE Int. Conf. Bioinforma. Biomed. BIBM 2020, pp. 1043–1046, 2020,
doi: 10.1109/BIBM49941.2020.9313141.
[37] J. Yang et al., “ReMix: A General and Efficient Framework for Multiple Instance
Learning Based Whole Slide Image Classification,” Lect. Notes Comput. Sci., vol. 13432,
pp. 35–
45, 2022, doi: 10.1007/978-3-031-16434-7_4.
[38] Y. Sharma, A. Shrivastava, L. Ehsan, C. A. Moskaluk, S. Syed, and D. E. Brown,
“Cluster-to-Conquer: A Framework for End-to-End Multi-Instance Learning for Whole
Slide Image Classification,” pp. 682–698, 2021, [Online]. Available:
http://arxiv.org/abs/2103.10626.
[39] J. Yao, X. Zhu, J. Jonnagaddala, N. Hawkins, and J. Huang, “Whole slide images based
cancer survival prediction using attention guided deep multiple instance learning
networks,” Med. Image Anal., vol. 65, 2020, doi: 10.1016/j.media.2020.101789.
[40] S. Chao and D. Belanger, “Generalizing Few-Shot Classification of Whole-Genome
Doubling Across Cancer Types,” in Proceedings of the IEEE/CVF International
Conference on Computer Vision, 2021, pp. 3382–3392.
[41] J. Gamper, B. Chan, Y. W. Tsang, D. Snead, and N. Rajpoot, “Meta-SVDD: Probabilistic
meta-learning for one-class classification in cancer histology images,” arXiv Prepr.
arXiv2003.03109, 2020.
[42] F. Fagerblom, K. Stacke, and J. Molin, “Combatting out-of-distribution errors using
model-agnostic meta-learning for digital pathology,” in Medical Imaging 2021: Digital
Pathology, 2021, vol. 11603, pp. 186–192.
[43] D. J. Ho et al., “Deep Multi-Magnification Networks for multi-class breast cancer image
segmentation,” Comput. Med. Imaging Graph., vol. 88, pp. 1–35, 2021, doi:
10.1016/j.compmedimag.2021.101866.
[44] M. Rasoolijaberi et al., “Multi-Magnification Image Search in Digital Pathology,” IEEE
J. Biomed. Heal. Informatics, vol. 26, no. 9, pp. 4611–4622, 2022, doi:
10.1109/JBHI.2022.3181531.
[45] B. E. Bejnordi et al., “Diagnostic assessment of deep learning algorithms for detection
of lymph node metastases in women with breast cancer,” Jama, vol. 318, no. 22, pp.
2199–2210, 2017.
[46] N. Farahani, A. V. Parwani, and L. Pantanowitz, “Whole slide imaging in pathology:
advantages, limitations, and emerging perspectives,” Pathol. Lab. Med. Int., vol. 7, pp.
23–33, 2015.
[47] M. D. Zarella et al., “A Practical Guide to Whole Slide Imaging: A White Paper From
the Digital Pathology Association,” Arch. Pathol. Lab. Med., vol. 143, no. 2, pp. 222–
234, Oct. 2018, doi: 10.5858/arpa.2018-0343-RA.
[48] J. D. Ianni et al., “Tailored for Real-World: A Whole Slide Image Classification System
Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology
Workload,” Sci. Rep., vol. 10, no. 1, p. 3217, 2020, doi: 10.1038/s41598-020-59985-2.
[49] X. Wang, H. Chen, C. Gan, H. Lin, and Q. Dou, “Weakly Supervised Learning for Whole
Slide Lung Cancer Image Classification,” in Proc. 1st Conf. Medical Imaging with Deep
Learning (MIDL 2018), Amsterdam, The Netherlands, 2018, pp. 1–10, [Online]. Available:
https://www.semanticscholar.org/paper/Weakly-Supervised-Learning-for-Whole-SlideLung-Wang-Chen/35d0998f2c5b53591073d36c9e2b0ddc89a496b1.
[50] Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,”
Handb. Brain Theory Neural Networks, vol. 3361, 1995.
[51] Ž. Vujović, “Classification Model Evaluation Metrics,” Int. J. Adv. Comput. Sci. Appl.,
vol. 12, no. 6, pp. 599–606, 2021, doi: 10.14569/IJACSA.2021.0120670.
[52] M. Hossin and M. N. Sulaiman, “A Review On Evaluation Metrics For Data
Classification Evaluations,” Int. J. Data Min. Knowl. Manag. Process, vol. 5, no. 2, pp.
1–11, 2015.
[53] K. Chang et al., “The Cancer Genome Atlas Pan-Cancer analysis project,” Nat. Genet.,
vol. 45, no. 10, pp. 1113–1120, 2013, doi: 10.1038/ng.2764.
[54] C. M. Bishop, Neural networks for pattern recognition. Oxford University Press, 1995.
[55] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[56] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann
machines,” in Proceedings of the 27th international conference on machine learning
(ICML-10), 2010, pp. 807–814.
[57] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural
networks,” Adv. Neural Inf. Process. Syst., vol. 30, 2017.
[58] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv Prepr.
arXiv1412.6980, 2014.
[59] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to
document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2323, 1998, doi:
10.1109/5.726791.
[60] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in
Proceedings of the IEEE conference on computer vision and pattern recognition, 2016,
pp. 770–778.
[61] X. Liu et al., “Self-supervised Learning: Generative or Contrastive,” IEEE Trans. Knowl.
Data Eng., vol. 35, no. 1, pp. 857–876, 2021, doi: 10.1109/TKDE.2021.3090866.
[62] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive
learning of visual representations,” in International conference on machine learning,
2020, pp. 1597–1607.
[63] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez, “Solving the multiple instance
problem with axis-parallel rectangles,” Artif. Intell., vol. 89, no. 1–2, pp. 31–71, 1997,
doi: 10.1016/s0004-3702(96)00034-3.
[64] Y. Wang, J. Li, and F. Metze, “Comparing the max and noisy-or pooling functions in
multiple instance learning for weakly supervised sequence learning tasks,” Proc. Annu.
Conf. Int. Speech Commun. Assoc. INTERSPEECH, pp.
1339–1343, 2018, doi: 10.21437/Interspeech.2018-990.
[65] J. Amores, “Multiple instance classification: Review, taxonomy and comparative study,”
Artif. Intell., vol. 201, pp. 81–105, 2013, doi: 10.1016/j.artint.2013.06.003.
[66] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola,
“Deep sets,” Adv. Neural Inf. Process. Syst., vol. 30, pp. 3392–3402, 2017.
[67] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d
classification and segmentation,” in Proceedings of the IEEE conference on computer
vision and pattern recognition, 2017, pp. 652–660.
[68] X. Wang, Y. Yan, P. Tang, X. Bai, and W. Liu, “Revisiting multiple instance neural
networks,” Pattern Recognit., vol. 74, pp. 15–24, 2018, doi:
10.1016/j.patcog.2017.08.026.
[69] A. Vaswani et al., “Attention Is All You Need,” Adv. Neural Inf. Process. Syst., vol. 30,
2017.
[70] J. A. Hartigan and M. A. Wong, “Algorithm AS 136: A K-Means Clustering
Algorithm,” J. R. Stat. Soc. Ser. C (Appl. Stat.), vol. 28, no. 1, pp. 100–108, 1979.
[71] K. P. Sinaga and M. S. Yang, “Unsupervised K-means clustering algorithm,” IEEE
Access, vol. 8, pp. 80716–80727, 2020, doi: 10.1109/ACCESS.2020.2988796.
[72] A. Paszke et al., “Pytorch: An imperative style, high-performance deep learning library,”
Adv. Neural Inf. Process. Syst., vol. 32, 2019.
[73] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale
hierarchical image database,” in 2009 IEEE conference on computer vision and pattern
recognition, 2009, pp. 248–255.
[74] J. Yang, H. Chen, J. Yan, X. Chen, and J. Yao, “Towards better understanding and better
generalization of few-shot classification in histology images with contrastive learning,”
arXiv Prepr. arXiv2202.09059, 2022.
[75] Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai, “Better mixing via deep
representations,” in International conference on machine learning, 2013, pp. 552–560.
[76] P. Upchurch et al., “Deep feature interpolation for image content changes,” in
Proceedings of the IEEE conference on computer vision and pattern recognition, 2017,
pp. 7064–7073.
[77] T.-H. Cheung and D.-Y. Yeung, “MODALS: Modality-agnostic automated data
augmentation in the latent space,” in Int. Conf. Learn. Represent., 2021.
[78] L. Van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res.,
vol. 9, pp. 2579–2605, 2008.
[79] S. Wu, H. Zhang, G. Valiant, and C. Ré, “On the generalization effects of linear
transformations in data augmentation,” in International Conference on Machine
Learning, 2020, pp. 10410–10420.
[80] M. Caron et al., “Emerging properties in self-supervised vision transformers,” in
Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp.
9650–9660.