References
[1] D. Gerhard, Neuroscience, 5th ed., ser. The Yale Journal of Biology and Medicine. Yale University, US: YJBM, Jan. 2013, vol. 86.
[2] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[3] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
[4] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in Advances in Neural Information Processing Systems, 2014, pp. 568–576.
[5] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
[6] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” Journal of Machine Learning Research, vol. 12, pp. 2493–2537, 2011.
[7] H. Zeng, M. D. Edwards, G. Liu, and D. K. Gifford, “Convolutional neural network architectures for predicting DNA–protein binding,” Bioinformatics, vol. 32, no. 12, pp. i121–i127, 2016.
[8] M. Jermyn, J. Desroches, J. Mercier, M.-A. Tremblay, K. St-Arnaud, M.-C. Guiot, K. Petrecca, and F. Leblond, “Neural networks improve brain cancer detection with Raman spectroscopy in the presence of operating room light artifacts,” Journal of Biomedical Optics, vol. 21, no. 9, p. 094002, 2016.
[9] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[14] W. Du, Z. Wang, and D. Chen, “Optimizing of convolutional neural network accelerator,” in Green Electronics, C. Ravariu and D. Mihaiescu, Eds. Rijeka: IntechOpen, 2018, ch. 8. [Online]. Available: https://doi.org/10.5772/intechopen.75796
[15] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing FPGA-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2015, pp. 161–170.
[16] S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cuDNN: Efficient primitives for deep learning,” arXiv preprint arXiv:1410.0759, 2014.
[17] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” ACM SIGARCH Computer Architecture News, vol. 44, no. 3, pp. 367–379, 2016.
[18] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, “Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks,” IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, 2016.
[19] Y. Zorian, “Embedded memory test and repair: Infrastructure IP for SoC yield,” in Proceedings IEEE International Test Conference. IEEE, 2002, pp. 340–349.
[20] Y. Zorian and S. Shoukourian, “Embedded-memory test and repair: Infrastructure IP for SoC yield,” IEEE Design & Test of Computers, no. 3, pp. 58–66, 2003.
[21] J.-F. Li, J.-C. Yeh, R.-F. Huang, and C.-W. Wu, “A built-in self-repair design for RAMs with 2-D redundancy,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 13, no. 6, pp. 742–745, 2005.
[22] T.-W. Tseng, J.-F. Li, and C.-C. Hsu, “ReBISR: A reconfigurable built-in self-repair scheme for random access memories in SoCs,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 18, no. 6, pp. 921–932, 2009.
[23] T.-W. Tseng, Y.-J. Huang, and J.-F. Li, “DABISR: A defect-aware built-in self-repair scheme for single/multi-port RAMs in SoCs,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 29, no. 10, pp. 1628–1639, 2010.
[24] T.-W. Tseng, J.-F. Li, and C.-S. Hou, “A built-in method to repair SoC RAMs in parallel,” IEEE Design & Test of Computers, vol. 27, no. 6, pp. 46–57, 2010.
[25] C.-S. Hou, J.-F. Li, and T.-W. Tseng, “Memory built-in self-repair planning framework for RAMs in SoCs,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 30, no. 11, pp. 1731–1743, 2011.
[26] S.-K. Lu, C.-J. Tsai, and M. Hashizume, “Enhanced built-in self-repair techniques for improving fabrication yield and reliability of embedded memories,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 24, no. 8, pp. 2726–2734, 2016.
[27] A. Tanabe, T. Takeshima, H. Koike, Y. Aimoto, M. Takada, T. Ishijima, N. Kasai, H. Hada, K. Shibahara, T. Kunio, et al., “A 30-ns 64-Mb DRAM with built-in self-test and self-repair function,” IEEE Journal of Solid-State Circuits, vol. 27, no. 11, pp. 1525–1533, 1992.
[28] V. Schober, S. Paul, and O. Picot, “Memory built-in self-repair using redundant words,” in Proceedings IEEE International Test Conference, 2001, pp. 995–1001.
[29] D. Anand, B. Cowan, O. Farnsworth, P. Jakobsen, S. Oakland, M. R. Ouellette, and D. L. Wheater, “An on-chip self-repair calculation and fusing methodology,” IEEE Design & Test of Computers, vol. 20, no. 5, pp. 67–75, 2003.
[30] C.-T. Huang, C.-F. Wu, J.-F. Li, and C.-W. Wu, “Built-in redundancy analysis for memory yield improvement,” IEEE Transactions on Reliability, vol. 52, no. 4, pp. 386–399, 2003.
[31] I. Kang, W. Jeong, and S. Kang, “High-efficiency memory BISR with two serial RA stages using spare memories,” Electronics Letters, vol. 44, no. 8, pp. 515–517, 2008.
[32] X. Wang, D. Vasudevan, and H.-H. S. Lee, “Global built-in self-repair for 3D memories with redundancy sharing and parallel testing,” in Proceedings IEEE International 3D Systems Integration Conference (3DIC), 2011, pp. 1–8.
[33] M. Nicolaidis, N. Achouri, and S. Boutobza, “Optimal reconfiguration functions for column or data-bit built-in self-repair,” in Proceedings IEEE Design, Automation and Test in Europe (DATE’03), 2003, pp. 590–595.
[34] R. Zappa, C. Selva, D. Rimondi, C. Torelli, M. Crestan, G. Mastrodomenico, and L. Albani, “Micro programmable built-in self repair for SRAMs,” in Proceedings IEEE International Workshop on Memory Technology, Design and Testing, 2004, pp. 72–77.
[35] C.-L. Su, R.-F. Huang, and C.-W. Wu, “A processor-based built-in self-repair design for embedded memories,” in Proceedings IEEE Test Symposium, 2003, pp. 366–371.
[36] X. Du, S. M. Reddy, W.-T. Cheng, J. Rayhawk, and N. Mukherjee, “At-speed built-in self-repair analyzer for embedded word-oriented memories,” in Proceedings IEEE 17th International Conference on VLSI Design, 2004, pp. 895–900.
[37] P. Ohler, S. Hellebrand, and H.-J. Wunderlich, “An integrated built-in test and repair approach for memories with 2D redundancy,” in Proceedings IEEE 12th European Test Symposium (ETS’07), 2007, pp. 91–96.
[38] S.-Y. Kuo and W. K. Fuchs, “Efficient spare allocation for reconfigurable arrays,” IEEE Design & Test of Computers, vol. 4, no. 1, pp. 24–31, 1987.
[39] T.-W. Tseng, J.-F. Li, and D.-M. Chang, “A built-in redundancy-analysis scheme for RAMs with 2D redundancy using 1D local bitmap,” in Proceedings of the Design, Automation & Test in Europe Conference, vol. 1, 2006, 6 pp.
[40] K. Cho, Y.-W. Lee, S. Seo, and S. Kang, “An efficient BIRA utilizing characteristics of spare pivot faults,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 38, no. 3, pp. 551–561, 2018.
[41] T. Kawagoe, J. Ohtani, M. Niiro, T. Ooishi, M. Hamada, and H. Hidaka, “A built-in self-repair analyzer (CRESTA) for embedded DRAMs,” in Proceedings IEEE International Test Conference, 2000, pp. 567–574.
[42] T.-J. Chen, J.-F. Li, and T.-W. Tseng, “Cost-efficient built-in redundancy analysis with optimal repair rate for RAMs,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 31, no. 6, pp. 930–940, 2012.
[43] S. Nakahara, K. Higeta, M. Kohno, T. Kawamura, and K. Kakitani, “Built-in self-test for GHz embedded SRAMs using flexible pattern generator and new repair algorithm,” in Proceedings IEEE International Test Conference, 1999, pp. 301–310.
[44] D. K. Bhavsar, “An algorithm for row-column self-repair of RAMs and its implementation in the Alpha 21264,” in Proceedings IEEE International Test Conference, 1999, pp. 311–318.
[45] T.-Y. Hsieh, K.-H. Li, and Y.-H. Peng, “On efficient error-tolerability evaluation and maximization for image processing applications,” in Proceedings IEEE Technical Papers of 2014 International Symposium on VLSI Design, Automation and Test, 2014, pp. 1–4.
[46] T.-Y. Hsieh, C.-C. Ku, and C.-H. Yeh, “A yield and reliability enhancement framework for image processing applications,” in 2012 IEEE Asia Pacific Conference on Circuits and Systems, 2012, pp. 683–686.
[47] T.-Y. Hsieh, M. A. Breuer, M. Annavaram, S. K. Gupta, and K.-J. Lee, “Tolerance of performance degrading faults for effective yield improvement,” in Proceedings IEEE 2009 International Test Conference, 2009, pp. 1–10.
[48] Q. Fan, S. S. Sapatnekar, and D. J. Lilja, “Cost-quality trade-offs of approximate memory repair mechanisms for image data,” in Proceedings IEEE 2017 18th International Symposium on Quality Electronic Design (ISQED), 2017, pp. 438–444.
[49] T.-F. Hsieh, J.-F. Li, J.-S. Lai, C.-Y. Lo, D.-M. Kwai, and Y.-F. Chou, “Refresh power reduction of DRAMs in DNN systems using hybrid voting and ECC method,” in Proceedings IEEE International Test Conference in Asia (ITC-Asia), 2020.
[50] H. R. Mahdiani, S. M. Fakhraie, and C. Lucas, “Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 8, pp. 1215–1228, 2012.