Thesis 105827608: Detailed Record




Name: Daniyal (單尼爾)   Department: Department of Biomedical Sciences and Engineering
Title: A guideline to determine the training sample size when applying data mining methods in clinical decision making
Related theses
★ Comparative analysis of foot arch index parameters
★ Using EEG to study the rehabilitation outcomes of stroke patients and their persistence
★ Effects of repetitive intermittent theta-burst stimulation on EEG during hand movement
★ Amylose mediated electricity production of Staphylococcus epidermidis for inhibition of Cutibacterium acnes growth
★ A study of using a virtual reality system to evoke the event-related potential P300
★ A dynamic causal modelling study of the somatosensory event-related potential P300 evoked by virtual reality
★ Using GPUs to improve the computational performance of dynamic causal modelling of event-related potentials
★ A dynamic causal modelling study of ageing-related motor networks
★ Predicting the rehabilitation status of stroke patients with EEG
★ Effects of spatial working memory training with puzzle games on the event-related potential P3
★ Functional reorganization of the post-stroke motor network under virtual-reality-based rehabilitation
★ Predicting the recovery of stroke patients with EEG and related clinical factors
★ Motor network reorganization related to virtual reality physical parameters after stroke rehabilitation
★ Predicting rehabilitation outcomes from motor indices and designing rehabilitation guidelines
★ Studying human brain plasticity under working memory training with time-frequency analysis
★ Changes in the brain connectivity of stroke patients after rehabilitation
Files: [EndNote RIS format]   [BibTeX format]   (electronic full text available after 2024-06-30)
Abstract (Chinese) Background: Biomedicine is a field rich in heterogeneous, evolving, complex and unstructured data (i.e. the HACE theorem). Acquiring biomedical data takes time and manpower and is usually very expensive, so whole-population studies are infeasible and inference must rely on sampling. In recent years, two growing concerns have emerged in healthcare: the use of small samples for experiments, and the extraction of useful information from massive medical data (big data). Researchers have claimed that, in small-sample studies in biomedicine, overfitting causes false positives (type I errors) or false negatives (type II errors), producing exaggerated results that do not represent a true effect. On the other hand, over the past few years data volumes have become ever larger and more complex owing to the continuous generation of data from many sources, such as fMRI, DTI, PET/SPECT and M/EEG. Big data mining has become one of the most fascinating and fastest-growing areas; it enables the selection, exploration and modelling of vast amounts of medical data to support clinical decision making, prevent medication errors, and improve patient outcomes. However, big data poses many challenges, such as missing values, data heterogeneity, and the complexity of managing the data, which may affect the outcome. It is therefore essential to find appropriate processes and algorithms for big data mining so that useful information can be extracted from massive data. To date, however, there is no relevant guideline, especially regarding a trustworthy sample size that contains the information most essential for reliable results.
Purpose: The purpose of this study is to integrate artificial intelligence with the properties of statistical parameters to determine an optimal sample size. This approach overcomes biases found in current sample-size calculation methods, such as the need for expected values of statistical parameters, specified thresholds, and standardized differences between interventions (which are theoretically unclear). In addition, I examined how data variability within each sample size affects classifier performance.
Method: In this study I used two kinds of data: experimental data and simulated data. The experimental data comprise two datasets: the first consists of brain signals from 63 stroke patients (continuous data), and the other consists of 120 sleep diaries (discrete categorical data), each diary recording one person's data. To find an optimal sample size, I first divided each experimental dataset into multiple sample sizes in 10% increments of the dataset. I then fed these sample sizes into the four most widely used AI methods: SVM, decision tree, naive Bayes, and logistic regression. Ten-fold cross-validation was used to evaluate classification accuracy. I also measured the grand variance, eigenvalues, and proportion of variance among the samples of each sample size. Separately, I generated an artificial dataset from the means of the real data; the generated data mimicked the real data. I used this dataset to examine the effect of the standard deviation on classifier accuracy as the sample size increased from small to large. Finally, I plotted the classifier results of both experimental datasets on a ROC graph to find an appropriate sample size and to assess how classifier performance changes across sample sizes from small to large.
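The statistical measurements described above, parameter-wise variance, grand variance, covariance eigenvalues and their proportion of variance, can be sketched in a few lines of NumPy. This is a minimal sketch on synthetic data: the function name `variance_profile` and the toy 120 × 5 matrix are illustrative, not taken from the thesis.

```python
import numpy as np

def variance_profile(X):
    """For one sample (rows = subjects, columns = parameters), return the
    parameter-wise variances, the grand variance, the eigenvalues of the
    covariance matrix, and the proportion of variance each eigenvalue
    explains (as in PCA)."""
    param_var = X.var(axis=0, ddof=1)                 # parameter-wise variance
    grand_var = param_var.mean()                      # grand variance
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    proportion = eigvals / eigvals.sum()              # proportion of variance
    return param_var, grand_var, eigvals, proportion

# Toy stand-in for one sample: 120 subjects, 5 parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
param_var, grand_var, eigvals, proportion = variance_profile(X)
```

Computing this profile once per sample size is what allows the curves of variance, eigenvalue and proportion to be compared against classifier accuracy later.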
Results: The results show a significant effect of sample size on classifier accuracy and data variance in all datasets. The stroke and sleep data exhibited an intrinsic property in the performance of the machine learning (ML) classifiers, the data variances (parameter-wise and subject-wise), the eigenvalues, and the proportion of variance. I used this intrinsic property to design three criteria for determining an optimal sample size. According to criterion 1, a sample size is considered optimal when classifier performance reaches its intrinsic behaviour simultaneously with the data variation. In the second criterion I used performance, eigenvalue and proportion: when they show the intrinsic property simultaneously at a specific sample size, that sample size is considered an effective sample size. In addition, the ROC graph shows that classifiers perform poorly at small sample sizes but improve as the sample size increases.
Conclusion: All the results assert that sample size has a significant impact on the performance of AI methods and on data variance. Increasing the sample size yields stable results from the AI methods once the data variation fluctuates only negligibly. Moreover, the intrinsic property of sample size helps us find an optimal sample size, namely when accuracy, eigenvalue, proportion and variance become independent of further increases in the sample.
Abstract (English) Background: Biomedicine is a field rich in a variety of heterogeneous, evolving, complex and unstructured data coming from autonomous sources (i.e. the heterogeneous, autonomous, complex and evolving (HACE) theorem). Acquiring biomedical data takes time and human power and is usually very expensive. It is therefore difficult to work with whole populations, and researchers instead work with samples. In recent years, two growing concerns have emerged in healthcare: the use of small sample sizes for experiments, and the extraction of useful information from massive medical data (big data). Researchers have claimed that overfitting causes false positives (type I errors) or false negatives (type II errors) in small-sample studies in biomedicine, producing exaggerated results that do not represent a true effect. On the other hand, in the last few years data volumes have become bigger and more complicated owing to the continuous generation of data from many sources, such as functional magnetic resonance imaging (fMRI), computed tomography (CT), positron emission tomography (PET) / single-photon emission computed tomography (SPECT) and electroencephalography (EEG). Big data mining has become one of the most fascinating and fastest-growing areas, enabling the selection, exploration and modelling of vast amounts of medical data to support clinical decision making, prevent medication errors, and enhance patient outcomes. However, big data presents several challenges, such as missing values, the heterogeneous nature of the data, and the complexity of managing it, which may affect the outcome. It is therefore essential to find an appropriate process and algorithm for big data mining to extract useful information from massive data. To date, however, there is no guideline for this, especially regarding a fair sample size that contains the information most essential for reliable results.
Purpose: The goal of this study is to explore the relationship among sample size, statistical parameters and the performance of machine learning (ML) methods, so as to ascertain an optimal sample size. The study also examines the impact of the standard deviation across sample sizes by analyzing the performance of the machine learning methods.
Method: In this study, I used two kinds of data: experimental data and simulated data. The experimental data comprise two datasets: the first contains brain signals from 63 stroke patients (continuous data), and the other consists of 120 sleep diaries (discrete categorical data), each diary recording one person's data. To find an optimal sample size, I first divided each experimental dataset into multiple sample sizes by taking 10% proportions of each dataset. Then I used these sample sizes with the four most widely used machine learning methods: support vector machine (SVM), decision tree, naive Bayes, and logistic regression. Ten-fold cross-validation was used to evaluate classification accuracy. I also measured the grand variance, eigenvalues, and proportion of variance among the samples of each sample size. On the other hand, I generated an artificial dataset by taking the average of the real data; the generated data mimicked the real data. I used this dataset to examine the effect of the standard deviation on the accuracy of the classifiers when sample sizes were systematically increased from small to large. Lastly, I plotted the classifiers' results for both experimental datasets on a receiver operating characteristic (ROC) graph to find an appropriate sample size and to assess the influence of sample size, from small to large, on classifier performance.
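The sample-size sweep with ten-fold cross-validation can be sketched as follows. This is a toy sketch, not the thesis's implementation: a nearest-centroid classifier stands in for the SVM / decision tree / naive Bayes / logistic regression models, and the two-class Gaussian data stands in for the sleep and stroke datasets; all names here are illustrative.

```python
import numpy as np

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Classify each test point by the nearest class mean of the training set."""
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    pred = ((Xte[:, None, :] - cents) ** 2).sum(axis=-1).argmin(axis=1)
    return float((pred == yte).mean())

def ten_fold_cv(X, y, k=10):
    """Mean accuracy over k folds, mirroring the ten-fold cross-validation."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, k)
    return float(np.mean([
        nearest_centroid_acc(X[np.setdiff1d(idx, f)], y[np.setdiff1d(idx, f)],
                             X[f], y[f])
        for f in folds
    ]))

# Toy two-class dataset standing in for the sleep/stroke data.
rng = np.random.default_rng(1)
n = 500
y = rng.integers(0, 2, n)
X = rng.normal(loc=y[:, None], size=(n, 4))

# Sweep sample sizes in 10% increments and record accuracy per size.
accuracy_by_size = {int(n * f / 10): ten_fold_cv(X[: int(n * f / 10)],
                                                 y[: int(n * f / 10)])
                    for f in range(1, 11)}
```

Plotting `accuracy_by_size` against the per-size variance profile gives the curves on which the criteria below operate.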
Results: The results show a significant effect of sample size on classifier accuracy, data variances, eigenvalues, and proportion of variance in all datasets. The stroke and sleep datasets exhibited an intrinsic property in the performance of the ML classifiers, the data variances (parameter-wise and subject-wise), the eigenvalues, and the proportion of variance. I used this intrinsic property to design two criteria for deciding an appropriate sample size. According to criterion 1, a sample size is considered optimal when the performance of the classifiers reaches its intrinsic behaviour simultaneously with the data variation. In the second criterion, I used performance, eigenvalue and proportion to decide a suitable sample size: when these factors show the intrinsic property simultaneously at a specific sample size, that sample size is considered an effective sample size. In this study, both criteria suggested a similar optimal sample size of 250 for the sleep dataset, although the eigenvalues varied slightly more than the variance between sample sizes of 250 and 500. The variation in eigenvalues decreased after 500 samples; because of this trivial variation, criterion II suggested 500 as an effective sample size. Note that if criteria I and II recommend two different sample sizes, one should choose the sample size that first achieves the simultaneous intrinsic property, either between performance and variance or among performance, eigenvalue and proportion. Lastly, I also designed a third criterion based on the receiver operating characteristic curve. The ROC graph illustrates that classifiers perform well when the sample size is large; large sample sizes lie above the diagonal line. In contrast, small sample sizes perform worse and fall below the diagonal line. The performance of the classifiers improves as the sample size increases.
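The "simultaneous intrinsic property" used by the criteria amounts to finding the first sample size at which all monitored curves plateau together. A toy sketch of that test follows; the numbers are illustrative placeholders, not the thesis's measurements, and the threshold `tol` is an assumed parameter.

```python
# Illustrative numbers only: accuracy rises and grand variance settles
# as the sample size grows in 50-sample steps.
sizes    = [50, 100, 150, 200, 250, 300]
accuracy = [0.61, 0.70, 0.76, 0.795, 0.800, 0.802]
variance = [0.40, 0.31, 0.27, 0.262, 0.259, 0.258]

def first_stable_size(sizes, curves, tol=0.01):
    """Return the first sample size at which every curve changes by less
    than `tol` from the previous size: the simultaneous intrinsic
    property of criterion 1 (performance together with data variation)."""
    for i in range(1, len(sizes)):
        if all(abs(curve[i] - curve[i - 1]) < tol for curve in curves):
            return sizes[i]
    return None  # no plateau yet; collect more samples

optimal = first_stable_size(sizes, [accuracy, variance])  # -> 250
```

Criterion 2 is the same test with the eigenvalue and proportion-of-variance curves passed in alongside performance.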
Conclusion: All the results assert that the sample size has a dramatic impact on the performance of ML methods and on data variance. Increasing the sample size yields steady outcomes from the machine learning methods once the data variation fluctuates only negligibly. In addition, the intrinsic property of sample size helps us find an optimal sample size, namely when accuracy, eigenvalue, proportion and variance become independent of further increases in the sample.
Keywords (Chinese) ★ Big data mining
★ Artificial intelligence
★ Sample size
★ Guideline
★ Heterogeneous data
★ Criteria
Keywords (English) ★ Big data mining
★ Artificial intelligence
★ Sample size
★ Guideline
★ Heterogeneous data
★ Criteria
Contents
Abstract
摘要 (Chinese abstract)
List of figures
List of tables
Chapter 1: Literature Review
1.1 Introduction
1.1.1 Big Data
1.1.2 Big Data Types
1.1.3 Data Mining
1.1.4 Characteristics of Big Data
1.1.5 Six Characteristics of Big Data
1.1.7 Big Data Issues
1.1.8 Data Sampling
1.1.9 The Statistical Analysis
1.1.10 Ethical Issues Related to Small Sample Size
1.1.11 Role of Data Variance
1.1.12 Feature Selection
1.1.12.1 Feature Selection Methods
1.1.13 Data Mining Methods / Machine Learning Classifiers
1.1.14 Principal Component Analysis (PCA)
1.1.15 Current Sample Size Calculation Methods and their Limitations
Chapter 2: Research Objectives
2.1 Objectives and Hypothesis
Chapter 3: Materials and Methods
3.1 Materials
3.1.1 Data Preparation
3.1.2 Training, Testing and Validation Subsets
3.2 Methods
3.2.1 Variance Calculation
3.2.2 Data Mining Methods / Machine Learning Classifiers
3.2.3 Sample Size Calculation
3.2.4 Deciding an Appropriate Classifier
3.2.5 Receiver Operating Characteristic (ROC)
3.2.6 Simulated Data
3.2.7 Deciding the Sample Sizes Used to Analyze the Relation between Standard Deviation and Classifier Performance
Chapter 4: Results
4.1 Generated Simulated Dataset with Multiple Standard Deviations
4.1.1 Distribution of Data Samples under Multiple Standard Deviations in the Simulated Dataset
4.1.2 Impact of Standard Deviation on the Performance of Classifiers
4.2 Impact of Sample Size on Variance and Performance of the Classifiers in Sleep Data and Stroke Data
4.2.1 Sleep Dataset Results
4.2.2 Stroke Dataset Results
4.3 Slope Table of Performance of ML Classifiers and Variances for the Sleep and Stroke Datasets
4.4 Impact of Sample Size on Eigenvalue, Proportion and Performance of Classifiers in the Sleep and Stroke Datasets
4.5 Slope Tables to Examine the Variation among Performance, Eigenvalue, and Proportion of Variance
4.6 Baseline Correction in Simulated Data
4.7 Intrinsic Property of Standard Deviation toward the Performance of Classifiers
4.8 Wrapper Feature Selection Method
4.9 Number of Attributes for Optimal Performance of Classifiers in Different Sample Sizes
4.10 Impact of Attribute Sizes on the Performance of ML Classifiers and Variance of Data
4.11 How to Decide an Optimal Sample Size?
4.11.1 Criterion 1: Intrinsic Behaviour of Performance of Classifiers and Data Variation
4.11.2 Criterion 2: Intrinsic Behaviour of Performance of Classifiers, Eigenvalue and Proportion
4.11.3 Criterion 3: Position of Sample Size on the ROC Graph
Chapter 5: Discussion
5.1 Conventional and Artificial-Intelligence-Based Methods for Deciding the Optimal Sample Size
5.2 Criteria 1 and 2 for Deciding an Effective ML Classifier
5.3 Why I Used the Simultaneous Intrinsic Property of ML Performance and Statistical Parameters (Variance, Eigenvalue, and Proportion)
5.4 Baseline Correction
5.5 Influence of Balanced and Imbalanced Classes on the Performance of Classifiers in the ROC Graph
5.6 Limitations
5.6.1 Deciding the Optimal Sample Size in a Small Dataset
5.6.2 Performance of the Tree Classifier in Simulated Data
5.6.3 The Absolute Value of the Simulated Dataset
Chapter 6: Conclusion
Chapter 7: Future Work
Guideline: Steps for Deciding an Appropriate Sample Size
Step 1: Data Preparation
Clean Data
Normalize Data
Label Data
Apply Feature Selection Methods
Step 2: Data Sampling
Randomization
Decide the Number of Samples in Each Sample Size
Step 3: Calculate Variance / Eigenvalue and Proportion
Measure the Variance of Each Sample Size
Measure the Grand Variance of Each Sample Size
Measure the Eigenvalue and Proportion of Variance
Plot the Line Graph for Visualization
Step 4: Apply All Data to ML Classifiers
Choose ML Algorithms
Measure the Accuracy of Classifiers
Training, Testing and Cross-Validation
Measure the Accuracy of Classifiers for Each Sample Size
Step 5: Compare the Variance and Performance
Criterion 1: Intrinsic Behaviour of Performance of Classifiers and Data Variation
Criterion 2: Intrinsic Behaviour of Performance of Classifiers, Eigenvalue and Proportion
Criterion 3: Position of Sample Size on the ROC Graph
Step 6: Decide the Sample Size
Step 7: Decide Appropriate Classifiers
Flow Chart
List of Abbreviations
Bibliography
Appendix
Generated Simulated Dataset with Multiple Standard Deviations
參考文獻 [1] J. Faber and L. M. Fonseca, “How sample size influences research outcomes.,” Dental Press J. Orthod., vol. 19, no. 4, pp. 27–9, 2014.
[2] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[3] A. Oussous, F.-Z. Benjelloun, A. Ait Lahcen, and S. Belfkih, “Big Data technologies: A survey,” J. King Saud Univ. - Comput. Inf. Sci., vol. 30, no. 4, pp. 431–448, Oct. 2018.
[4] S. D. Halpern, J. H. T. Karlawish, and J. A. Berlin, “The continuing unethical conduct of underpowered clinical trials.,” JAMA, vol. 288, no. 3, pp. 358–62, Jul. 2002.
[5] K. S. Button et al., “Power failure: why small sample size undermines the reliability of neuroscience,” Nat. Rev. Neurosci., vol. 14, no. 5, pp. 365–376, May 2013.
[6] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[7] P. Buneman, “Semistructured data,” Symp. Princ. Database Syst., 1997.
[8] J. Gantz and D. Reinsel, “IDC I V I E W E x t r a c t i n g V a l u e f r o m C h a o s Sponsored by EMC Corporation,” 2011.
[9] E. Mayo-Wilson, T. Li, N. Fusco, and K. Dickersin, “Practical guidance for using multiple data sources in systematic reviews and meta-analyses (with examples from the MUDS study),” Res. Synth. Methods, vol. 9, no. 1, pp. 2–12, Mar. 2018.
[10] A. Gandomi and M. Haider, “Beyond the hype: Big data concepts, methods, and analytics,” Int. J. Inf. Manage., vol. 35, no. 2, pp. 137–144, Apr. 2015.
[11] W. M. Mason, “Statistical Analysis: Multilevel Methods,” Int. Encycl. Soc. Behav. Sci., pp. 381–386, Jan. 2015.
[12] R. Sambasivan, S. Das, and S. K. Sahu, “A Bayesian Perspective of Statistical Machine Learning for Big Data.”
[13] S. Velupillai et al., “Using clinical Natural Language Processing for healthoutcomes research: Overview and actionable suggestions for future advances,” J. Biomed. Inform., vol. 88, pp. 11–19, Dec. 2018.
[14] F. Jiang et al., “Artificial intelligence in healthcare: past, present and future,” Stroke Vasc. Neurol., vol. 2, no. 4, pp. 230–243, Dec. 2017.
[15] Daniyal, W.-J. Wang, M.-C. Su, S.-H. Lee, C.-S. Hung, and C.-C. Chen, “A guideline to determine the training sample size when applying big data mining methods in clinical decision making,” in 2018 IEEE International Conference on Applied System Invention (ICASI), 2018, pp. 678–681.
[16] T. Zhang and B. Yang, “Dimension reduction for big data,” 2018.
[17] A. Farhangfar, L. A. Kurgan, and W. Pedrycz, “A Novel Framework for Imputation of Missing Values in Databases,” IEEE Trans. Syst. Man, Cybern. - Part A Syst. Humans, vol. 37, no. 5, pp. 692–709, Sep. 2007.
[18] J. Luengo, S. García, and F. Herrera, “On the choice of the best imputation methods for missing values considering three groups of classification methods,” Knowl. Inf. Syst., vol. 32, no. 1, pp. 77–108, Jul. 2012.
[19] X. Zhu, S. Zhang, Z. Jin, Z. Zhang, and Z. Xu, “Missing Value Estimation for Mixed-Attribute Data Sets,” IEEE Trans. Knowl. Data Eng., vol. 23, no. 1, pp. 110–121, Jan. 2011.
[20] J. Tian, B. Yu, D. Yu, and S. Ma, “Clustering-based multiple imputation via gray relational analysis for missing data and its application to aerospace field.,” ScientificWorldJournal., vol. 2013, p. 720392, May 2013.
[21] B. Twala, M. Cartwright, and M. Shepperd, “Comparison of various methods for handling incomplete data in software engineering databases,” in 2005 International Symposium on Empirical Software Engineering, 2005., pp. 102–111.
[22] S. Thirukumaran and A. Sumathi, “Missing value imputation techniques depth survey and an imputation Algorithm to improve the efficiency of imputation,” in 2012 Fourth International Conference on Advanced Computing (ICoAC), 2012, pp. 1–5.
[23] F. Keller, E. Muller, and K. Bohm, “HiCS: High Contrast Subspaces for Density-Based Outlier Ranking,” in 2012 IEEE 28th International Conference on Data Engineering, 2012, pp. 1037–1048.
[24] V. J. Hodge and J. Austin, “A Survey of Outlier Detection Methodologies,” Artif. Intell. Rev., vol. 22, no. 2, pp. 85–126, Oct. 2004.[25] S. Chawla and A. Gionis, “k -means–: A unified approach to clustering and outlier detection,” in Proceedings of the 2013 SIAM International Conference on Data Mining, 2013, pp. 189–197.
[26] H. V. Nguyen, E. Müller, J. Vreeken, F. Keller, and K. Böhm, “CMI: An Information-Theoretic Contrast Measure for Enhancing Subspace Cluster and Outlier Detection,” in Proceedings of the 2013 SIAM International Conference on Data Mining, 2013, pp. 198–206.
[27] Shuo Wang and Xin Yao, “Multiclass Imbalance Problems: Analysis and Potential Solutions,” IEEE Trans. Syst. Man, Cybern. Part B, vol. 42, no. 4, pp. 1119–1130, Aug. 2012.
[28] H. Ma, L. Wang, and B. Shen, “A new fuzzy support vector machines for class imbalance learning,” in 2011 International Conference on Electrical and Control Engineering, 2011, pp. 3781–3784.
[29] N. V. Chawla, N. Japkowicz, and A. Kotcz, “Editorial,” ACM SIGKDD Explor. Newsl., vol. 6, no. 1, p. 1, Jun. 2004.
[30] C. Seiffert, T. M. Khoshgoftaar, J. Van Hulse, and A. Napolitano, “A Comparative Study of Data Sampling and Cost Sensitive Learning,” in 2008 IEEE International Conference on Data Mining Workshops, 2008, pp. 46–52.
[31] H. Chen and Z. Yan, “Security and Privacy in Big Data Lifetime: A Review,” Springer, Cham, 2016, pp. 3–15.
[32] R. Bao, Z. Chen, and M. S. Obaidat, “Challenges and techniques in Big data security and privacy: A review,” Secur. Priv., vol. 1, no. 4, p. e13, Jul. 2018.
[33] “Big Data for Sustainable Development | United Nations.” [Online]. Available: http://www.un.org/en/sections/issues-depth/big-data-sustainable-development/index.html. [Accessed: 24-Mar-2019].
[34] B. K. Nayak, “Understanding the relevance of sample size calculation.,” Indian J. Ophthalmol., vol. 58, no. 6, pp. 469–70, 2010.
[35] K. K. Dobbin, Y. Zhao, and R. M. Simon, “How Large a Training Set is Needed to Develop a Classifier for Microarray Data?,” Clin. Cancer Res., vol. 14, no. 1, pp. 108–114, Jan. 2008.
[36] R. L. Figueroa, Q. Zeng-Treitler, S. Kandula, and L. H. Ngo, “Predicting sample size required for classification performance.,” BMC Med. Inform. Decis. Mak., vol. 12, p. 8, Feb. 2012.
[37] D. Moher, C. S. Dulberg, and G. A. Wells, “Statistical power, sample size,and their reporting in randomized controlled trials.,” JAMA, vol. 272, no. 2, pp. 122–4, Jul. 1994.
[38] E. DePoy, L. N. Gitlin, E. DePoy, and L. N. Gitlin, “Statistical Analysis for Experimental-Type Designs,” Introd. to Res., pp. 282–310, Jan. 2016.
[39] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors.,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[40] D. R. (David R. Anderson and D. J. Sweeney, Statistics for business and economics. South-Western Cengage Learning, 2011.
[41] W. G. (William G. Cochran, Sampling techniques. Wiley, 1977.
[42] G. Kalton, Introduction to Survey Sampling. 2455 Teller Road, Thousand Oaks California 91320 United States of America : SAGE Publications, Inc., 1983.
[43] M. H. (Morris H. Hansen, W. N. Hurwitz, and W. G. (William G. Madow, Sample survey methods and theory. Wiley, 1953.
[44] G. D. Israel, “Sampling the Evidence of Extension Program Impact 1.”
[45] J. Cohen, Statistical power analysis for the behavioral sciences. L. Erlbaum Associates, 1988.
[46] G. D. Israel, “Determining Sample Size 1 The Level Of Precision,” 1992.
[47] “Measures of Variability: Range, Interquartile Range, Variance, and Standard Deviation - Statistics By Jim.” [Online]. Available: https://statisticsbyjim.com/basics/variability-range-interquartile-variance-standard-deviation/. [Accessed: 21-Jun-2019].
[48] C. J. Ferguson, “Is psychological research really as good as medical research? Effect size comparisons between psychology and medicine.,” Rev. Gen. Psychol., vol. 13, no. 2, pp. 130–136, Jun. 2009.
[49] R. B. Kline, Beyond significance testing : statistics reform in the behavioral sciences. American Psychological Association, 2013.
[50] V. Bewick, L. Cheek, and J. Ball, “Statistics review 11: assessing risk.,” Crit. Care, vol. 8, no. 4, pp. 287–91, Aug. 2004.
[51] NCSS, “PASS Sample Size Software Standard Deviation Estimator.”
[52] “Population and sample standard deviation review (article) | KhanAcademy.” [Online]. Available: https://www.khanacademy.org/math/statistics-probability/summarizing-quantitative-data/variance-standard-deviation-sample/a/population-and-sample-standard-deviation-review. [Accessed: 28-Mar-2019].
[53] F. Weytens, O. Luminet, L. L. Verhofstadt, and M. Mikolajczak, “An Integrative Theory-Driven Positive Emotion Regulation Intervention,” PLoS One, vol. 9, no. 4, p. e95677, Apr. 2014.
[54] J. M. Fuster, M. Bodner, and J. K. Kroger, “Cross-modal and cross-temporal association in neurons of frontal cortex,” Nature, vol. 405, no. 6784, pp. 347–351, May 2000.
[55] R. M. Hansen and A. B. Fulton, “Background adaptation in children with a history of mild retinopathy of prematurity.,” Invest. Ophthalmol. Vis. Sci., vol. 41, no. 1, pp. 320–4, Jan. 2000.
[56] T. C. A. Freeman and T. A. Fowler, “Unequal retinal and extra-retinal motion signals produce different perceived slants of moving surfaces,” Vision Res., vol. 40, no. 14, pp. 1857–1868, Jun. 2000.
[57] M. Laubach, J. Wessberg, and M. A. L. Nicolelis, “Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task,” Nature, vol. 405, no. 6786, pp. 567–571, Jun. 2000.
[58] C. Braun, R. Schweizer, T. Elbert, N. Birbaumer, and E. Taub, “Differential activation in somatosensory cortex for different discrimination tasks.,” J. Neurosci., vol. 20, no. 1, pp. 446–50, Jan. 2000.
[59] M. G. Bloj, D. Kersten, and A. C. Hurlbert, “Perception of three-dimensional shape influences colour perception through mutual illumination,” Nature, vol. 402, no. 6764, pp. 877–879, Dec. 1999.
[60] F. Bonato and J. Cataliotti, “The effects of figure/ground, perceived area, and target saliency on the luminosity threshold.,” Percept. Psychophys., vol. 62, no. 2, pp. 341–9, Feb. 2000.
[61] C.-C. Chen, S.-H. Lee, W.-J. Wang, Y.-C. Lin, and M.-C. Su, “EEG-based motor network biomarkers for identifying target patients with stroke for upper limb rehabilitation and its construct validity,” PLoS One, vol. 12, no. 6, p. e0178822, Jun. 2017.
[62] D. G. Altman, “Statistics and ethics in medical research: III How large a sample?,” Br. Med. J., vol. 281, no. 6251, pp. 1336–8, Nov. 1980.
[63] J. P. Ioannidis, A. B. Haidich, and J. Lau, “Any casualties in the clash of randomised and observational evidence?,” BMJ, vol. 322, no. 7291, pp. 879–80, Apr. 2001.
[64] H. M. Colhoun, P. M. McKeigue, and G. Davey Smith, “Problems of reporting genetic associations with complex outcomes.,” Lancet (London, England), vol. 361, no. 9360, pp. 865–72, Mar. 2003.
[65] J. P. A. Ioannidis, “Genetic associations: false or true?,” Trends Mol. Med., vol. 9, no. 4, pp. 135–8, Apr. 2003.
[66] S. Wacholder, S. Chanock, M. Garcia-Closas, L. El ghormli, and N. Rothman, “Assessing the Probability That a Positive Report is False: An Approach for Molecular Epidemiology Studies,” JNCI J. Natl. Cancer Inst., vol. 96, no. 6, pp. 434–442, Mar. 2004.
[67] G. A. Barnard, “Must clinical trials be large? The interpretation of p-values and the combination of test results,” Stat. Med., vol. 9, no. 6, pp. 601–614, Jun. 1990.
[68] P. M. Fayers and D. Machin, “Sample size: how many patients are necessary?,” Br. J. Cancer, vol. 72, no. 1, pp. 1–9, Jul. 1995.
[69] A. Kagan and L. A. Shepp, “Why the variance?,” Stat. Probab. Lett., vol. 38, no. 4, pp. 329–333, Jul. 1998.
[70] L. A. Goodman, “On the Exact Variance of Products,” J. Am. Stat. Assoc., vol. 55, no. 292, p. 708, Dec. 1960.
[71] M. J. Salganik, “Variance estimation, design effects, and sample size calculations for respondent-driven sampling.,” J. Urban Health, vol. 83, no. 6 Suppl, pp. i98-112, Nov. 2006.
[72] D. Rajnarayan and D. Wolpert, “Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method,” Oct. 2008.
[73] “Population & Sample Variance: Definition, Formula & Examples - Video & Lesson Transcript | Study.com.” [Online]. Available: https://study.com/academy/lesson/population-sample-variance-definition-formula-examples.html. [Accessed: 21-Jun-2019].
[74] R. N. Forthofer, E. S. Lee, and M. Hernandez, Biostatistics: A Guide to Design, Analysis, and Discovery.
[75] S. G. Kwak and J. H. Kim, “Central limit theorem: the cornerstone of modern statistics,” Korean J. Anesthesiol., vol. 70, no. 2, p. 144, Apr. 2017.
[76] A. L. Blum and P. Langley, “Selection of relevant features and examples in machine learning,” Artif. Intell., vol. 97, no. 1–2, pp. 245–271, Dec. 1997.
[77] A. Goltsev and V. Gritsenko, “Investigation of efficient features for image recognition by neural networks,” Neural Networks, vol. 28, pp. 15–23, Apr. 2012.
[78] A. Khotanzad and Y. H. Hong, “Rotation invariant image recognition using features selected via a systematic method,” Pattern Recognit., vol. 23, no. 10, pp. 1089–1101, Jan. 1990.
[79] T. W. Rauber, F. de Assis Boldt, and F. M. Varejao, “Heterogeneous Feature Models and Feature Selection Applied to Bearing Fault Diagnosis,” IEEE Trans. Ind. Electron., vol. 62, no. 1, pp. 637–646, Jan. 2015.
[80] D. L. Swets and J. J. Weng, “Efficient content-based image retrieval using automatic feature selection,” in Proceedings of International Symposium on Computer Vision - ISCV, pp. 85–90.
[81] F. Amiri, M. Rezaei Yousefi, C. Lucas, A. Shakery, and N. Yazdani, “Mutual information-based feature selection for intrusion detection systems,” J. Netw. Comput. Appl., vol. 34, no. 4, pp. 1184–1199, Jul. 2011.
[82] Guangrong Li, Xiaohua Hu, Xiajiong Shen, Xin Chen, and Zhoujun Li, “A novel unsupervised feature selection method for bioinformatics data sets through feature clustering,” in 2008 IEEE International Conference on Granular Computing, 2008, pp. 41–47.
[83] Qinbao Song, Jingjie Ni, and Guangtao Wang, “A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data,” IEEE Trans. Knowl. Data Eng., vol. 25, no. 1, pp. 1–14, Jan. 2013.
[84] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, “RCV1: A New Benchmark Collection for Text Categorization Research,” J. Mach. Learn. Res., vol. 5, pp. 361–397, 2004.
[85] Z. Zhao, Z. Zhao, F. Morstatter, S. Sharma, A. Anand, and H. Liu, “Advancing Feature Selection Research − ASU Feature Selection Repository.”
[86] P. Langley, “Selection of Relevant Features in Machine Learning,” 1994.
[87] P. Langley and M. A., Proceedings of the Seventeenth International Conference on Machine Learning (ICML-2000), June 29-July 2, 2000, Stanford University. Morgan Kaufmann Publishers, 2000.
[88] I. Kononenko, “Estimating attributes: Analysis and extensions of RELIEF,” Springer, Berlin, Heidelberg, 1994, pp. 171–182.
[89] L. Yu and H. Liu, “Efficient Feature Selection via Analysis of Relevance and Redundancy,” J. Mach. Learn. Res., vol. 5, no. Oct, pp. 1205–1224, 2004.
[90] D. Ienco and R. Meo, “Exploration and Reduction of the Feature Space by Hierarchical Clustering,” in Proceedings of the 2008 SIAM International Conference on Data Mining, 2008, pp. 577–587.
[91] D. M. Witten and R. Tibshirani, “A Framework for Feature Selection in Clustering,” J. Am. Stat. Assoc., vol. 105, no. 490, pp. 713–726, Jun. 2010.
[92] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene Selection for Cancer Classification using Support Vector Machines,” Mach. Learn., vol. 46, no. 1/3, pp. 389–422, 2002.
[93] K. Michalak and H. Kwaśnicka, “Correlation-based feature selection strategy in classification problems,” 2006.
[94] W. H. Hsu, “Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning,” Inf. Sci. (Ny)., vol. 163, no. 1–3, pp. 103–122, Jun. 2004.
[95] M. Dash, H. Liu, and J. Yao, “Dimensionality reduction of unsupervised data,” in Proceedings Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 532–539.
[96] N. Vandenbroucke, L. Macaire, and J.-G. Postaire, “Unsupervised color texture feature extraction and selection for soccer image segmentation,” in Proceedings 2000 International Conference on Image Processing (Cat. No.00CH37101), 2000, pp. 800–803 vol.2.
[97] M. Alibeigi, S. Hashemi, and A. Hamzeh, “Unsupervised Feature Selection Based on the Distribution of Features Attributed to Imbalanced Data Sets,” 2011.
[98] P. Mitra, C. A. Murthy, and S. K. Pal, “Unsupervised Feature Selection Using Feature Similarity,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 3, pp. 301–312, Mar. 2002.
[99] P.-Y. Zhou and K. C. C. Chan, “An unsupervised attribute clustering algorithm for unsupervised feature selection,” in 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2015, pp. 1–7.
[100] R. Agrawal et al., “Automatic subspace clustering of high dimensional data for data mining applications,” ACM SIGMOD Rec., vol. 27, no. 2, pp. 94–105, Jun. 1998.
[101] B. Mirkin, “Concept Learning and Feature Selection Based on Square-Error Clustering,” Mach. Learn., vol. 35, no. 1, pp. 25–39, 1999.
[102] J. G. Dy and C. E. Brodley, “Feature Selection for Unsupervised Learning,” J. Mach. Learn. Res., vol. 5, pp. 845–889, 2004. [Online]. Available: https://dl.acm.org/citation.cfm?id=1016787. [Accessed: 05-Apr-2019].
[103] J. Z. Huang et al., “Weighting Method for Feature Selection in K-Means,” pp. 209–226, Oct. 2007.
[104] G. Doquire and M. Verleysen, “A graph Laplacian based approach to semi-supervised feature selection for regression problems,” Neurocomputing, vol. 121, pp. 5–13, Dec. 2013.
[105] M. Yang, Y.-J. Chen, and G.-L. Ji, “Semi_Fisher Score: A semi-supervised method for feature selection,” in 2010 International Conference on Machine Learning and Cybernetics, 2010, pp. 527–532.
[106] K. Benabdeslem and M. Hindawi, “Constrained Laplacian Score for Semi-supervised Feature Selection,” Springer, Berlin, Heidelberg, 2011, pp. 204–218.
[107] B. Yegnanarayana, “Artificial neural networks for pattern recognition,” Sadhana, vol. 19, no. 2, pp. 189–238, Apr. 1994.
[108] K. Benabdeslem and M. Hindawi, “Efficient Semi-Supervised Feature Selection: Constraint, Relevance, and Redundancy,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 5, pp. 1131–1143, May 2014.
[109] Y. Wang, J. Wang, H. Liao, and H. Chen, “An efficient semi-supervised representatives feature selection algorithm based on information theory,” Pattern Recognit., vol. 61, pp. 511–523, Jan. 2017.
[110] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[111] D. T. Hau and E. W. Coiera, “Learning Qualitative Models of Dynamic Systems,” Mach. Learn., vol. 26, no. 2/3, pp. 177–211, 1997.
[112] K. M. Al-Aidaroo, A. A. Bakar, and Z. Othman, “Medical Data Classification with Naive Bayes Approach,” Inf. Technol. J., vol. 11, no. 9, pp. 1166–1174, Sep. 2012.
[113] M. Ringnér, “What is principal component analysis?,” Nat. Biotechnol., vol. 26, no. 3, pp. 303–304, Mar. 2008.
[114] T. Raykov and G. A. Marcoulides, “Population Proportion of Explained Variance in Principal Component Analysis: A Note on Its Evaluation Via a Large-Sample Approach,” Struct. Equ. Model. A Multidiscip. J., vol. 21, no. 4, pp. 588–595, Oct. 2014.
[115] P. Kadam and S. Bhalerao, “Sample size calculation.,” Int. J. Ayurveda Res., vol. 1, no. 1, pp. 55–7, Jan. 2010.
[116] P. B. Vaidya, B. S. R. Vaidya, and S. K. Vaidya, “Response to Ayurvedic therapy in the treatment of migraine without aura.,” Int. J. Ayurveda Res., vol. 1, no. 1, pp. 30–6, Jan. 2010.
[117] B. Röhrig, J.-B. du Prel, D. Wachtlin, R. Kwiecien, and M. Blettner, “Sample size calculation in clinical trials: part 13 of a series on evaluation of scientific publications.,” Dtsch. Arztebl. Int., vol. 107, no. 31–32, pp. 552–6, Aug. 2010.
[118] “Determining the sample size in a clinical trial. - Semantic Scholar.” [Online]. Available: https://www.semanticscholar.org/paper/Determining-the-sample-size-in-a-clinical-trial.-Kirby-Gebski/7faa0337887d7ab6b67a40424144168257af28fa#paper-header. [Accessed: 24-Jun-2019].
[119] P. Patra, “Sample size in clinical research, the number we need,” Int. J. Med. Sci. Public Heal., vol. 1, no. 1, pp. 5–10, Jul. 2012.
[120] Jaykaran, N. Kantharia, and P. Yadav, “Reporting of sample size and power in negative clinical trials published in Indian medical journals,” J. Pharm. Negat. Results, vol. 2, no. 2, p. 87, 2011.
[121] J. Cai and D. Zeng, “Sample Size/Power Calculation for Case-Cohort Studies,” Biometrics, vol. 60, no. 4, pp. 1015–1024, Dec. 2004.
[122] V. Kasiulevičius, “Sample size calculation in epidemiological studies,” 2006.
[123] M. Borenstein, H. Rothstein, and J. Cohen, Power and Precision: A Computer Program for Statistical Power Analysis and Confidence Intervals. Lawrence Erlbaum Associates, 1997.
[124] R. G. O’Brien, “UnifyPow: A SAS Macro for Sample-Size Analysis.”
[125] G. Welk, Physical Activity Assessments for Health-Related Research. Human Kinetics, 2002.
[126] “Sample Size Calculator - Confidence Level, Confidence Interval, Sample Size, Population Size, Relevant Population.” [Online]. Available: https://www.surveysystem.com/sscalce.htm. [Accessed: 27-Mar-2019].
[127] P. Bacchetti, C. E. McCulloch, and M. R. Segal, “Simple, Defensible Sample Sizes Based on Cost Efficiency,” Biometrics, vol. 64, no. 2, pp. 577–585, Jun. 2008.
[128] S. D. Halpern, J. H. T. Karlawish, and J. A. Berlin, “The continuing unethical conduct of underpowered clinical trials.,” JAMA, vol. 288, no. 3, pp. 358–62, Jul. 2002.
[129] H. C. Kraemer, J. Mintz, A. Noda, J. Tinklenberg, and J. A. Yesavage, “Caution Regarding the Use of Pilot Studies to Guide Power Calculations for Study Proposals,” Arch. Gen. Psychiatry, vol. 63, no. 5, p. 484, May 2006.
[130] D. G. Altman et al., “The Revised CONSORT Statement for Reporting Randomized Trials: Explanation and Elaboration,” Ann. Intern. Med., vol. 134, no. 8, p. 663, Apr. 2001.
[131] M. J. Gardner and D. G. Altman, “Confidence intervals rather than P values: estimation rather than hypothesis testing.,” BMJ, vol. 292, no. 6522, pp. 746–750, Mar. 1986.
[132] J. Ranstam, “Why the P-value culture is bad and confidence intervals a better alternative,” Osteoarthr. Cartil., vol. 20, no. 8, pp. 805–808, Aug. 2012.
[133] S. N. Goodman, “p Values, Hypothesis Tests, and Likelihood: Implications for Epidemiology of a Neglected Historical Debate,” Am. J. Epidemiol., vol. 137, no. 5, pp. 485–496, Mar. 1993.
[134] G. H. Guyatt, E. J. Mills, and D. Elbourne, “In the Era of Systematic Reviews, Does the Size of an Individual Trial Still Matter?,” PLoS Med., vol. 5, no. 1, p. e4, Jan. 2008.
[135] S. Edwards, R. Lilford, D. Braunholtz, and J. Jackson, “Why ‘underpowered’ trials are not necessarily unethical,” Lancet, vol. 350, no. 9080, pp. 804–807, Sep. 1997.
[136] S. Borra and A. Di Ciaccio, “Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods,” Comput. Stat. Data Anal., vol. 54, no. 12, pp. 2976–2989, Dec. 2010.
[137] G. Gong, “Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression,” J. Am. Stat. Assoc., vol. 81, no. 393, pp. 108–113, Mar. 1986.
[138] M. W. Browne, “Cross-Validation Methods,” J. Math. Psychol., vol. 44, no. 1, pp. 108–132, Mar. 2000.
[139] J. Brownlee, What is the Difference Between Test and Validation Datasets? 2017.
[140] S. G. Kwak and J. H. Kim, “Central limit theorem: the cornerstone of modern statistics,” Korean J. Anesthesiol., vol. 70, no. 2, p. 144, Apr. 2017.
[141] M. R. Siegle, C. L. K. Robinson, and J. Yakimishyn, “The Effect of Region, Body Size, and Sample Size on the Weight-Length Relationships of Small-Bodied Fishes Found in Eelgrass Meadows,” Northwest Sci., vol. 88, no. 2, pp. 140–154, May 2014.
[142] A. M. Verdery, T. Mouw, S. Bauldry, and P. J. Mucha, “Network Structure and Biased Variance Estimation in Respondent Driven Sampling,” PLoS One, vol. 10, no. 12, p. e0145296, Dec. 2015.
[143] P. Sulewski, “On Differently Defined Skewness,” Comput. Methods Sci. Technol., vol. 14, no. 1, pp. 39–46, 2008.
[144] H. G.-M. Kim, D. Richardson, D. Loomis, M. Van Tongeren, and I. Burstyn, “Bias in the estimation of exposure effects with individual- or group-based exposure assessment,” J. Expo. Sci. Environ. Epidemiol., vol. 21, no. 2, pp. 212–221, Mar. 2011.
[145] X. Wu et al., “Top 10 algorithms in data mining,” Knowl. Inf. Syst., vol. 14, no. 1, pp. 1–37, Jan. 2008.
[146] E. DePoy, L. N. Gitlin, E. DePoy, and L. N. Gitlin, “Statistical Analysis for Experimental-Type Designs,” Introd. to Res., pp. 282–310, Jan. 2016.
[147] D. H. Wolpert, “Ubiquity symposium: Evolutionary computation and the processes of life: what the no free lunch theorems really mean: how to improve search algorithms,” Ubiquity, vol. 2013, no. December, pp. 1–15, Dec. 2013.
[148] D. H. Wolpert, “The Lack of A Priori Distinctions Between Learning Algorithms,” Neural Comput., vol. 8, no. 7, pp. 1341–1390, Oct. 1996.
[149] E. R. DeLong, D. M. DeLong, and D. L. Clarke-Pearson, “Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach.,” Biometrics, vol. 44, no. 3, pp. 837–45, Sep. 1988.
[150] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors.,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[151] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors.,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[152] A. Kirby, V. Gebski, and A. C. Keech, “Determining the sample size in a clinical trial.,” Med. J. Aust., vol. 177, no. 5, pp. 256–7, Sep. 2002.
[153] M. Noordzij, G. Tripepi, F. W. Dekker, C. Zoccali, M. W. Tanck, and K. J. Jager, “Sample size calculations: basic principles and common pitfalls,” Nephrol. Dial. Transplant., vol. 25, no. 5, pp. 1388–1393, May 2010.
[154] X. Guo, Y. Yin, C. Dong, G. Yang, and G. Zhou, “On the Class Imbalance Problem,” in 2008 Fourth International Conference on Natural Computation, 2008, pp. 192–201.
[155] S. Visa and A. Ralescu, “Issues in mining imbalanced data sets - a review paper,” in Proc. Sixteenth Midwest Artificial Intelligence and Cognitive Science Conference, 2005, pp. 67–73.
Advisor: Chun-Chuan Chen (陳純娟)   Date of approval: 2019-7-15