Graduate Thesis 111522604: Detailed Record




Author: 潘祐家 (Prabowo Yoga Wicaksana)    Department: Computer Science and Information Engineering
Thesis Title: 基於群集的潛在增強通過元學習對病理全切片影像分類
(Cluster-based Latent Augmentation via Meta-Learning for Whole Slide Image Classification)
Full-text access: never released (永不開放)
Abstract (Chinese, translated): Whole slide image (WSI) classification involves analyzing large digital images of biological or tissue samples captured by scanning an entire slide at ultra-high resolution. The goal of WSI classification is to accurately identify and classify different regions of interest within the image, such as cancerous cells versus normal tissue, as well as specific cell types and structures.
WSI classification has numerous applications in digital pathology and medical research, including aiding the diagnosis and treatment of cancer, identifying potential drug targets, and enabling personalized treatment. Recent advances in deep learning have brought significant improvements in WSI classification performance, with state-of-the-art models achieving high levels of accuracy and robustness.
However, current WSI classification pipelines typically require a large amount of labeled data for training, which is costly and time-consuming to obtain. Moreover, simply training a model on a large dataset does not guarantee good generalization. In fact, due to overfitting to the training data and difficulty in learning robust features, the generalization ability of WSI models trained on large datasets is often limited. In this work, we propose a framework to address these issues. We combine an ensemble-classifier approach with meta-learning, enabling the model to learn from a limited number of labeled samples while still achieving excellent performance. In addition, we propose a simple cluster-based noise injection that forces the model to learn more robust and diverse features. Through comprehensive experiments, we demonstrate the effectiveness of our method using only a small number of samples on two publicly available WSI classification datasets.
Abstract (English): Whole Slide Image (WSI) classification involves analyzing large digital images of tissue
samples or other biological specimens captured by scanning the entire slide at ultra-high resolution.
The objective of WSI classification is to accurately identify and classify different regions of
interest within the image, such as cancerous or normal tissue, as well as specific cell types and
structures.
WSI classification has numerous applications in the field of digital pathology and
medical research, including aiding in the diagnosis and treatment of cancer, identifying potential
drug targets, and enabling personalized medicine. Recent advances in deep learning techniques
have led to significant improvements in WSI classification performance, with state-of-the-art
models achieving high levels of accuracy and robustness.
However, current WSI classification pipelines typically require a large amount of
labeled data for training, which can be both costly and time-consuming to obtain. Moreover,
simply training a model on a large dataset does not guarantee good generalization performance.
In fact, the generalization ability of WSI models is often limited when trained on large datasets,
due to issues such as overfitting to the training data and difficulty in learning robust features.
In this work, we propose a framework to address the aforementioned issues. We integrate
an ensemble-classifier approach with meta-learning, which enables the model to learn
from a limited number of labeled samples while still achieving remarkable performance.
Furthermore, we propose a simple cluster-based noise injection to force the model to learn
more robust and diverse features. Through comprehensive experiments, we demonstrate the
effectiveness of our approach with only a small number of samples on two publicly available
datasets for WSI classification tasks.
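This record does not spell out how the cluster-based noise injection works, so the following is only an illustrative sketch, not the thesis's actual method: cluster patch-level latent features with k-means, then perturb each feature with Gaussian noise scaled by its cluster's per-dimension spread. All function names, the scaling scheme, and hyperparameters here are assumptions.

```python
# Hypothetical sketch of cluster-based latent augmentation:
# group features by k-means, then add noise proportional to each
# cluster's standard deviation so augmented samples stay "in-cluster".
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        d = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = features[labels == c].mean(axis=0)
    return centroids, labels

def cluster_noise_augment(features, k=4, alpha=0.5, seed=0):
    """Return a noisy copy of `features`; noise for each sample is
    scaled by the std of the cluster that sample belongs to."""
    rng = np.random.default_rng(seed)
    _, labels = kmeans(features, k, seed=seed)
    aug = features.copy()
    for c in range(k):
        mask = labels == c
        if not mask.any():
            continue  # skip empty clusters
        std = features[mask].std(axis=0) + 1e-8
        aug[mask] += alpha * std * rng.standard_normal((mask.sum(), features.shape[1]))
    return aug

feats = np.random.default_rng(1).normal(size=(100, 16)).astype(np.float32)
aug = cluster_noise_augment(feats)
print(aug.shape)  # (100, 16)
```

In a WSI pipeline, `features` would be the extracted patch embeddings; augmenting in latent space like this is cheap compared to augmenting the raw gigapixel images.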
Keywords (Chinese, translated): ★ Whole slide image analysis
★ Meta-learning
★ Cluster analysis
★ Latent space augmentation
★ Ensemble classifier
Keywords (English): ★ Whole slide images
★ Meta-learning
★ Clustering
★ Latent space augmentation
★ Ensemble classifier
Table of Contents
Abstract (Chinese) ........................................................................................................................ i
Abstract ...................................................................................................................................... ii
Contents.....................................................................................................................................iii
List of Figures ............................................................................................................................ v
List of Tables............................................................................................................................ vii
Chapter 1 Introduction................................................................................................................ 1
1.1 Background....................................................................................................................... 1
1.2 Problem Formulation........................................................................................................ 2
1.3 Scope and Limitations ...................................................................................................... 3
1.4 Research Objective ........................................................................................................... 3
1.5 Research Benefits ............................................................................................................. 3
1.6 Research Contributions..................................................................................................... 3
1.7 Thesis Overview ............................................................................................................... 4
Chapter 2 Literature Review ...................................................................................................... 6
Chapter 3 Theoretical Basis ....................................................................................................... 9
3.1 Whole Slide Images.......................................................................................................... 9
3.1.1 Whole Slide Image Classification............................................................................ 10
3.1.2 Evaluation Metrics for Whole Slide Images Classification ..................................... 11
3.1.3 Dataset for Whole Slide Image Classification ......................................................... 13
3.2 Feedforward Neural Network ......................................................................................... 14
3.2.1 Activation Functions................................................................................................ 16
3.2.2 Back-propagation ..................................................................................................... 17
3.3 Convolutional Neural Networks..................................................................................... 20
3.3.1 Convolution Layer ................................................................................................... 20
3.3.2 Pooling Layer........................................................................................................... 22
3.3.3 Architecture.............................................................................................................. 23
3.3.4 Self-Supervised Contrastive Learning ..................................................................... 25
3.4 Multiple Instance Learning............................................................................................. 27
3.4.1 Problem.................................................................................................................... 28
3.4.2 Methodologies.......................................................................................................... 29
3.4.3 Aggregation Functions............................................................................................. 32
3.5 Meta-learning.................................................................................................................. 34
3.5.1 MAML Algorithm.................................................................................................... 36
3.5.2 MAML in Supervised Learning............................................................................... 38
3.6 K-means Clustering ........................................................................................................ 39
Chapter 4 Research Methodology ............................................................................................ 41
4.1 System Analysis.............................................................................................................. 41
4.2 Tools and Materials ........................................................................................................ 43
4.3 Research Procedures....................................................................................................... 44
4.4 System Design ................................................................................................................ 45
4.4.1 Data Preparation and Preprocessing ........................................................................ 46
4.4.2 Feature Extraction .................................................................................................... 47
4.4.3 Multiple Instance Learning Classification and Meta-Learning ............................... 50
4.5 Evaluation Design........................................................................................................... 54
Chapter 5 Result and Discussion.............................................................................................. 55
5.1 Preprocessing Result....................................................................................................... 55
5.2 Feature Extraction Result ............................................................................................... 55
5.3 Classification Result ....................................................................................................... 57
5.3.1 Result on Camelyon16 dataset................................................................................. 57
5.3.2 Result on TCGA dataset .......................................................................................... 58
5.3.3 Ablation Study ......................................................................................................... 60
Chapter 6 Conclusions.............................................................................................................. 63
6.1 Research Summary ......................................................................................................... 63
6.2 Limitation ....................................................................................................................... 64
6.3 Future Research .............................................................................................................. 64
Bibliography .......................................................................................................................... 65
Advisor: 王家慶 (Jia-Ching Wang)    Review Date: 2023-07-28
