References
[1] LeCun, Y., Y. Bengio, and G. Hinton, Deep learning. Nature, 2015. 521: p. 436.
[2] Zhavoronkov, A., Insilico to present at the WuXi Healthcare Forum 2019. 2019.
[3] Hoerl, A.E. and R.W. Kennard, Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 1970. 12(1): p. 55-67.
[4] Safavian, S.R. and D. Landgrebe, A survey of decision tree classifier methodology. IEEE transactions on systems, man, and cybernetics, 1991. 21(3): p. 660-674.
[5] Breiman, L., Random forests. Machine Learning, 2001. 45(1): p. 5-32.
[6] Chen, T., et al., Targeted Local Support Vector Machine for Age-Dependent Classification. Journal of the American Statistical Association, 2014. 109(507): p. 1174-1187.
[7] Haykin, S., Neural networks. Vol. 2. 1994: Prentice Hall, New York.
[8] AP, S.C., et al. An autoencoder approach to learning bilingual word representations. in Advances in Neural Information Processing Systems. 2014.
[9] Ames, B.N., M.K. Shigenaga, and T.M. Hagen, Oxidants, antioxidants, and the degenerative diseases of aging. Proceedings of the National Academy of Sciences, 1993. 90(17): p. 7915-7922.
[10] Finkel, T., M. Serrano, and M.A. Blasco, The common biology of cancer and ageing. Nature, 2007. 448(7155): p. 767.
[11] Beal, M.F., Aging, energy, and oxidative stress in neurodegenerative diseases. Annals of Neurology, 1995. 38(3): p. 357-366.
[12] Lindsay, J., et al., Risk factors for Alzheimer's disease: a prospective analysis from the Canadian Study of Health and Aging. American Journal of Epidemiology, 2002. 156(5): p. 445-453.
[13] Graves, A., A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013. IEEE.
[14] Zhavoronkov, A., et al., Artificial intelligence for aging and longevity research: Recent advances and perspectives. Ageing Res Rev, 2019. 49: p. 49-66.
[15] Jia, K., et al., An analysis of aging-related genes derived from the Genotype-Tissue Expression project (GTEx). Cell Death Discovery, 2018. 4(1): p. 91.
[16] Tan, Y., J. Wang, and J.M. Zurada, Nonlinear blind source separation using a radial basis function network. IEEE transactions on neural networks, 2001. 12(1): p. 124-134.
[17] Simonyan, K. and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[18] Warsito, W. and L. Fan, Neural network based multi-criterion optimization image reconstruction technique for imaging two- and three-phase flow systems using electrical capacitance tomography. Measurement Science and Technology, 2001. 12(12): p. 2198.
[19] Krizhevsky, A., I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. in Advances in neural information processing systems. 2012.
[20] Kingma, D.P. and M. Welling, Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[21] Mnih, A. and K. Gregor, Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[22] Goodfellow, I., et al. Generative adversarial nets. in Advances in neural information processing systems. 2014.
[23] Mayr, A., et al., Large-scale comparison of machine learning methods for drug target prediction on ChEMBL. Chemical Science, 2018. 9(24): p. 5441-5451.
[24] Caruana, R., Multitask learning. Machine Learning, 1997. 28(1): p. 41-75.
[25] Bengio, Y., A. Courville, and P. Vincent, Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013. 35(8): p. 1798-1828.
[26] Rajpurkar, P., et al., CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv preprint arXiv:1711.05225, 2017.
[27] Iandola, F., et al., Densenet: Implementing efficient convnet descriptor pyramids. arXiv preprint arXiv:1404.1869, 2014.
[28] Liu, Y., J. Zhou, and K.P. White, RNA-seq differential expression studies: more sequence or more replication? Bioinformatics, 2013. 30(3): p. 301-304.
[29] Grabherr, M.G., et al., Full-length transcriptome assembly from RNA-Seq data without a reference genome. Nature biotechnology, 2011. 29(7): p. 644.
[30] Schena, M., et al., Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science, 1995. 270(5235): p. 467-470.
[31] International Human Genome Sequencing Consortium, Initial sequencing and analysis of the human genome. Nature, 2001. 409(6822): p. 860.
[32] Meng, F., et al., Involvement of human micro-RNA in growth and response to chemotherapy in human cholangiocarcinoma cell lines. Gastroenterology, 2006. 130(7): p. 2113-2129.
[33] Wu, Y. and K. He, Group normalization. arXiv preprint arXiv:1803.08494, 2018.
[34] Ioffe, S. and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[35] Wang, Z. and A.C. Bovik, Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures. IEEE Signal Processing Magazine, 2009. 26(1): p. 98-117.
[36] Willmott, C.J. and K. Matsuura, Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 2005. 30(1): p. 79-82.
[37] Klambauer, G., et al. Self-normalizing neural networks. in Advances in Neural Information Processing Systems. 2017.
[38] Nair, V. and G.E. Hinton. Rectified linear units improve restricted boltzmann machines. in Proceedings of the 27th international conference on machine learning (ICML-10). 2010.
[39] McCulloch, W.S. and W. Pitts, A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 1943. 5(4): p. 115-133.
[40] Rosenblatt, F., The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 1958. 65(6): p. 386.
[41] Livingstone, M.S. and D.H. Hubel, Effects of sleep and arousal on the processing of visual information in the cat. Nature, 1981. 291(5816): p. 554.
[42] Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating errors. Nature, 1986. 323(6088): p. 533.
[43] De Boer, P.-T., et al., A tutorial on the cross-entropy method. Annals of operations research, 2005. 134(1): p. 19-67.
[44] Ephraim, Y. and D. Malah, Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Transactions on acoustics, speech, and signal processing, 1984. 32(6): p. 1109-1121.
[45] Chang, C.-C. and C.-J. Lin, LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011. 2(3): p. 27.
[46] Safavian, S.R. and D. Landgrebe, A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 1991. 21(3): p. 660-674.
[47] Jolliffe, I., Principal component analysis. 2011: Springer.
[48] Hinton, G.E. and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science, 2006. 313(5786): p. 504-507.
[49] Ng, A., Sparse autoencoder. CS294A Lecture notes, 2011. Available at: https://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf [Accessed 20 July 2016].
[50] Lonsdale, J., et al., The Genotype-Tissue Expression (GTEx) project. Nature Genetics, 2013. 45: p. 580.
[51] Seifuddin, F., et al., lncRNAKB: A comprehensive knowledgebase of long non-coding RNAs. bioRxiv, 2019: p. 669994.
[52] Valero, O., On Banach fixed point theorems for partial metric spaces. Applied General Topology, 2005. 6(2): p. 229-240.
[53] Mortazavi, A., et al., Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nature Methods, 2008. 5(7): p. 621.
[54] Yang, J., et al., Synchronized age-related gene expression changes across multiple tissues in human and the link to complex diseases. Scientific Reports, 2015. 5: p. 15145.
[55] Langfelder, P. and S. Horvath, WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics, 2008. 9(1): p. 559.