Thesis 102522601: Detailed Record




Name: 賀苒 (Ran He)    Department: Computer Science and Information Engineering
Thesis Title: A Peer Assessment Mechanism Considering Social Relations in an E-learning Environment
(Peer Assessment Based on Student's Social Relations in E-learning Environment)
Related Theses
★ A Grouping Mechanism Based on Social Relations in edX Online Discussion Boards
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ An Assessment System for Smart Classrooms Built with Kinect
★ An Intelligent Metropolitan Route Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analysis of Key Motion Correlations
★ A Seam Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Based on an Open Online Social Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism Based on Skin-Color Preservation
★ A Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Construction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism for Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Musical Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Virtual Musical Instrument Performance System Based on Neural Networks
  1. Access rights for this electronic thesis: the author consents to immediate open access.
  2. The open-access electronic full text is licensed only for individual, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese) With the widespread adoption of Internet technology, e-learning has become a widely used mode of instruction, and innovative learning models such as the flipped classroom and the MOOC (Massive Open Online Course) have emerged in recent years. The MOOC is a representative e-learning model.
In such online learning models, a large number of students may enroll in the same course, which imposes an enormous grading workload on the teachers who mark student assignments. To address this problem, this study proposes a new peer assessment mechanism: it derives students' social relations from their interaction messages on the online discussion board and then assigns each student peer assignments that are suitable for them to assess, completing the peer assessment activity of the course. The proposed algorithm aims to improve the validity of peer assessment results. The study also investigates the relationship between peer assessment accuracy and student ability.
The experimental results show that the proposed algorithm can effectively reduce fairness problems in peer assessment, preventing students who are on good terms from giving each other high scores and thereby lowering assessment accuracy. The results also confirm that students with higher ability produce more accurate assessments. A correlation analysis between the peer assessment results and the teaching assistant's assessment results shows a significant correlation, which further supports the validity of the peer assessment results in this study. Finally, a post-experiment questionnaire indicates that students hold a positive attitude toward the effect of peer assessment on their learning, showing that they consider this peer assessment activity helpful to their learning achievement.
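The reviewer-assignment step described in the abstract can be illustrated with a minimal sketch. The Python fragment below is an illustrative assumption rather than the thesis's actual algorithm: it treats the number of discussion-board replies between two students as a hypothetical proxy for the strength of their social tie, and assigns each submission to the candidate reviewers with the weakest ties to its author, so that close friends are unlikely to grade each other.

    from collections import defaultdict

    def tie_strength(interactions):
        # Count discussion-board interactions (e.g. replies) between each pair of students.
        strength = defaultdict(int)
        for poster, replier in interactions:
            if poster != replier:
                strength[frozenset((poster, replier))] += 1
        return strength

    def assign_reviewers(students, strength, k=3):
        # Give every author k reviewers, preferring peers with the weakest social ties;
        # the current review load is a secondary sort key so grading work stays balanced.
        assignment, load = {}, defaultdict(int)
        for author in students:
            candidates = sorted(
                (s for s in students if s != author),
                key=lambda s: (strength[frozenset((author, s))], load[s]))
            assignment[author] = candidates[:k]
            for reviewer in assignment[author]:
                load[reviewer] += 1
        return assignment

    # Toy usage with hypothetical students and reply records (poster, replier).
    interactions = [("alice", "bob"), ("bob", "alice"), ("alice", "carol")]
    print(assign_reviewers(["alice", "bob", "carol", "dave"],
                           tie_strength(interactions), k=2))

Using review load as a secondary key is only one way to balance assignments; the thesis may distribute the work differently.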
Abstract (English) With the current popularity of network technology, e-learning has become more and more widely used. In recent years, the flipped classroom, the MOOC (Massive Open Online Course), and other innovative learning modes have emerged.
Under these conditions, a large number of students may take the same course, which gives teachers the heavy burden of grading a large number of assignments. To solve this problem, this study presents a new peer assessment approach that first considers students' social relations: students' interaction information is obtained from the online discussion board, and each student is then assigned appropriate assignments to assess. The algorithm proposed in this study is designed to enhance the validity of peer assessment results. This study also explores the relationship between peer assessment accuracy and students' abilities.
The experimental results show that the proposed algorithm can effectively reduce the fairness issue in peer assessment. They also verify that students with better ability give more accurate assessment results. A correlation analysis between the peer assessment results and the teaching assistant's assessment results shows a significant correlation, which verifies the effectiveness of peer assessment in this study. The questionnaire results show that students hold a positive attitude toward the peer assessment activity and believe that it helps their learning achievement.
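The validity check mentioned in both abstracts, comparing peer scores against the teaching assistant's scores, is commonly carried out with a Pearson correlation. A minimal sketch, assuming one averaged peer score and one TA score per submission (the numbers below are made up for illustration):

    from scipy.stats import pearsonr

    # Hypothetical aggregated scores: one averaged peer score and one TA score per submission.
    peer_scores = [82, 75, 90, 68, 88, 79]
    ta_scores = [80, 72, 93, 70, 85, 77]

    r, p_value = pearsonr(peer_scores, ta_scores)
    print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # a small p-value suggests a significant correlation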
Keywords (Chinese) ★ E-learning
★ Peer assessment
★ Online discussion board
★ Social relations
Keywords (English)
Table of Contents
Abstract (Chinese) i
Abstract ii
Acknowledgements iii
Contents iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 2
1.2 Research Purpose 3
1.3 Research Methods 4
1.4 Thesis Organization 5
Chapter 2 Related Works 6
2.1 Discussion Board Information 6
2.1.1 The Influence of Online Discussion Boards for Online Learning 6
2.1.2 The Function of Online Discussion Boards 7
2.2 Evaluation Methods 9
2.2.1 The Purpose of the Evaluation 9
2.2.2 The Meaning and Type of the Evaluation 10
2.2.3 The Method of the Evaluation 11
2.3 The Meaning of Peer Assessment 12
2.4 The Effectiveness of Peer Assessment 14
2.5 Peer Assessment Common Questions and Different Assessment Rubric 15
2.5.1 Common Questions 16
2.5.2 Assessment Rubric 16
2.6 The Current Research Status of Peer Assessment 18
Chapter 3 Proposed Method 22
3.1 System Architecture Design 22
3.2 Setting Stage 24
3.3 Experiment Stage 26
3.4 Results Calculation 31
Chapter 4 System Implementation 32
4.1 System Architecture 32
4.2 Development Environment 33
4.3 User Function Interface 34
Chapter 5 Experiment and Result Analysis 36
5.1 The First Semester Experiment 36
5.1.1 Participants 36
5.1.2 Experimental Design 37
5.1.3 Experimental Process 39
5.1.4 Data Analysis 43
5.2 The Second Semester Experiment 53
5.2.1 Participants 54
5.2.2 Experimental Design 55
5.2.3 Experimental Procedures 56
5.2.4 Data Analysis 58
5.3 Comparison of the Two Experimental Results 64
Chapter 6 Discussion & Conclusion 65
Chapter 7 Future Works 67
References 68
Appendix I: The First Semester Questionnaire 71
Appendix II: The Second Semester Questionnaire 73
Advisor: 施國琛    Date of Approval: 2015-09-24