NCU Institutional Repository (providing theses, past exam papers, journal articles, and research projects for download): Item 987654321/94522


Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/94522


Title: 深度強化學習於適應性號誌控制之研究 (Research on Deep Reinforcement Learning for Adaptive Traffic Signal Control)
Authors: Wang, I-Fan (王亦凡)
Contributors: Department of Civil Engineering
Keywords: adaptive signal control; deep reinforcement learning; Rainbow DQN; traffic simulation
Date: 2024-08-15
Upload time: 2024-10-09 14:51:37 (UTC+8)
Publisher: National Central University
Abstract: This study aims to explore the application of deep reinforcement learning in adaptive traffic signal control. Using the microscopic traffic simulation software Vissim, we simulate the traffic conditions at intersections in Taipei City during peak hours. Considering the passenger-car-equivalent effects of different vehicle types and the two-stage left-turn design for motorcycles, we construct an adaptive traffic signal control system based on a deep reinforcement learning algorithm to improve the current traffic conditions at urban intersections during peak hours.
The framework employs the deep reinforcement learning network Rainbow DQN as the decision model for the signal control system. The model considers flow-based traffic states and phase states, with action choices focusing on phase sequence switching and green light extension as control methods. The reward objective is to minimize the total intersection pressure. The system's performance is compared against fixed-time signals as a baseline.
    The experimental design splits morning and evening peaks into three different time periods for training. Results show that deep reinforcement learning in adaptive traffic signal control effectively reduces waiting times at intersections. The model converges quickly within 100 episodes across all experimental scenarios and improves performance by 50% during peak morning hours. Furthermore, the model design can adapt to varying traffic volumes during different peak periods in urban areas, with flexible state, action, and reward designs enabling generalization to other scenarios.
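The reward described in the abstract (minimizing total intersection pressure) resembles the max-pressure criterion from the traffic-control literature. The sketch below is a minimal illustration under the assumption that a movement's pressure is its upstream queue count minus its downstream queue count; the function names and queue numbers are hypothetical, not taken from the thesis.

```python
def movement_pressure(upstream_queue: int, downstream_queue: int) -> int:
    """Pressure of one traffic movement: vehicles queued upstream
    minus vehicles queued on the receiving (downstream) link."""
    return upstream_queue - downstream_queue

def reward(movements: list[tuple[int, int]]) -> int:
    """Negative total absolute pressure over all movements.
    Maximizing this reward minimizes total intersection pressure."""
    total = sum(abs(movement_pressure(up, down)) for up, down in movements)
    return -total

# Hypothetical example: three movements as (upstream, downstream) queue counts.
moves = [(12, 4), (7, 7), (3, 9)]
print(reward(moves))  # -(8 + 0 + 6) = -14
```

A negative-pressure reward of this shape lets the agent learn to serve the movements with the largest queue imbalance first, which is consistent with the queue-length reductions the abstract reports.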
Appears in Collections: [Graduate Institute of Civil Engineering] Theses

Files in This Item:

File          Description    Size    Format    Views
index.html                   0 Kb    HTML      39      View/Open


All items in NCUIR are protected by copyright, with all rights reserved.

