NCU Institutional Repository: Item 987654321/83915


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83915


    Title: AI-Driven Music Transcription and Generation (以人工智慧方法驅動音樂轉錄與生成)
    Authors: Vincent Chapuis (沙文森)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Music; Deep Learning; Transcription; Generation
    Date: 2020-07-09
    Issue Date: 2020-09-02 17:41:06 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Music transcription and generation is a broad research field that has drawn many curious and hopeful researchers, who actively seek innovative techniques that surpass the current state of the art and, through such breakthroughs, accomplish tasks that were previously out of reach. To better understand the difficulties of this field, this thesis conducts an in-depth literature review of work on music transcription and generation, proposes corresponding improved algorithms for the identified difficulties, and verifies their robustness through experiments. First, the thesis adopts the WaoN algorithm to transcribe notes and develops an easy-to-use graphical user interface. Next, it describes how, given an adequate dataset, a recurrent neural network built on a convolutional neural network base can be trained with deep learning to reach the expected results. In addition, the thesis shows how the transcribed notes can serve as material for music generation with newer models such as the Transformer. All experiments take the MIDI format as the deep learning input and are implemented with the PyTorch deep learning framework. Finally, the study discusses the experimental results in depth and considers how this research could be further refined into an interactive product that offers composers a friendlier graphical interface.

    Music transcription and generation is a wide field that has been explored by many with hope and curiosity: hope to reach and surpass human skill and creativity, and curiosity to find new ways of accomplishing tasks that were either difficult or impossible for previously existing technology. In this thesis, we survey this field and review the existing techniques used to realize these tasks. We then introduce the several approaches tested during our research, either trying new methods or improving on the current state of the art. We first used an algorithmic approach based on the WaoN algorithm to transcribe music notes and developed a graphical user interface to assist with this task. We then show how a deep learning approach, a convolutional neural network coupled with a recurrent neural network, can give satisfying results when an adequate dataset is chosen, and how it can also be a great asset for generating music with cutting-edge models such as the Transformer. For all these tasks we mainly used the MIDI file format and Python frameworks such as PyTorch. We finally discuss how these techniques can help a composer create new music and refine ideas, and how future work on this subject could focus on an ergonomic user interface for production use.
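
    The abstract pairs a convolutional front end with a recurrent back end for note transcription, trained with PyTorch on MIDI-derived data. Below is a minimal sketch of that kind of CNN+RNN model, not the thesis code; the layer sizes, the 229-bin log-mel spectrogram input, and the 88-pitch piano-roll output are illustrative assumptions.

```python
# Hypothetical sketch (not the thesis code): a minimal CNN+RNN model for
# frame-wise note transcription, assuming a log-mel spectrogram input of shape
# (batch, 1, n_mels, n_frames) and 88 piano pitches as output classes.
import torch
import torch.nn as nn

class CRNNTranscriber(nn.Module):
    def __init__(self, n_mels=229, n_pitches=88, hidden=128):
        super().__init__()
        # CNN front end: extracts local time-frequency features.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                       # pool over frequency only
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # RNN back end: models temporal dependencies across frames.
        self.rnn = nn.GRU(32 * (n_mels // 4), hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_pitches)    # per-frame pitch logits

    def forward(self, spec):                            # spec: (B, 1, n_mels, T)
        feat = self.cnn(spec)                           # (B, 32, n_mels//4, T)
        b, c, f, t = feat.shape
        feat = feat.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (B, T, C*F)
        out, _ = self.rnn(feat)
        return self.head(out)                           # (B, T, 88) logits

model = CRNNTranscriber()
logits = model(torch.randn(2, 1, 229, 100))             # -> torch.Size([2, 100, 88])
```

    A model like this is typically trained with BCEWithLogitsLoss against a binary piano roll, one row per frame and one column per pitch.

    For generation, the abstract mentions Transformer-style models working from transcribed notes. A common setup, sketched below under the assumption of a 388-entry MIDI event-token vocabulary (not specified in the thesis), encodes music as token sequences and trains a causally masked Transformer to predict the next event.

```python
# Hypothetical sketch (not the thesis code): autoregressive modelling of
# MIDI-like event tokens with a small decoder-only Transformer stack.
import torch
import torch.nn as nn

class TokenTransformer(nn.Module):
    def __init__(self, vocab=388, d_model=256, n_heads=4, n_layers=4, max_len=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(max_len, d_model)        # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, tokens):                           # tokens: (B, T) int64
        t = tokens.size(1)
        pos = torch.arange(t, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        # Causal mask: position i may only attend to positions <= i.
        mask = torch.triu(torch.full((t, t), float("-inf"),
                                     device=tokens.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.out(x)                               # next-token logits (B, T, vocab)
```

    At inference time, tokens are sampled one at a time from the softmax of the last position's logits, appended to the sequence, and finally decoded back to MIDI.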
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (Size: 0 KB, Format: HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

