dc.description.abstract | 6G is the next-generation mobile communication technology, offering higher data rates, lower latency, and greater spectral efficiency than the current 5G, and supporting more devices and a wider range of application scenarios. Among its enabling technologies, low Earth orbit (LEO) satellite communication can achieve global coverage because, unlike terrestrial infrastructure, it is not constrained by geographical location or terrain; this makes LEO satellite communication one of the key technologies for realizing the global Internet of Things (IoT).
Compared with high-orbit satellite communication, LEO satellite communication also offers lower latency and higher reliability, making it well suited for applications such as high-speed data transmission, remote operation, and real-time communication. However, LEO satellites move very fast relative to the ground, and the resulting large Doppler shifts cause rapid channel variations. Beam tracking with MIMO beamforming is therefore considered an option with high beam flexibility that effectively counteracts fast-fading channel effects; yet as the number of beams and users grows, signal interference among multiple users in the terrestrial downlink becomes increasingly severe.
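The scale of the Doppler problem can be illustrated with a back-of-the-envelope sketch (the altitude and carrier frequency below are illustrative assumptions, not values from this work):

```python
import math

# Hedged sketch: worst-case Doppler shift seen by a ground user from an LEO
# satellite. Orbit altitude and carrier frequency are assumed example values.
C = 3.0e8            # speed of light, m/s
MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth radius, m

altitude = 600e3     # assumed LEO altitude, m
f_carrier = 20e9     # assumed Ka-band downlink carrier, Hz

# Orbital speed of a circular LEO orbit: v = sqrt(mu / (R_E + h))
v = math.sqrt(MU / (R_EARTH + altitude))

# Worst case: satellite moving directly toward (or away from) the user
f_doppler_max = (v / C) * f_carrier
# v is roughly 7.6 km/s, so f_doppler_max is on the order of 500 kHz
```

A shift of hundreds of kilohertz on a Ka-band carrier, varying as the satellite sweeps overhead, is what forces the fast channel variations that beam tracking must follow.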
In this paper, we design the communication service between LEO satellites and ground users: beamforming is used to form multiple beams, and a digital precoding method is combined with it to design the beam weight coefficients, suppressing intra-beam interference among ground users in the same area. Through deep reinforcement learning, the multi-beam angle tracking strategy is dynamically adjusted based on the LEO satellite trajectory information, minimizing inter-beam interference among ground users in different areas and improving the quality of satellite communication so as to maximize the total data transmission rate. Numerical simulations show that the downlink transmission throughput of the proposed deep-reinforcement-learning-based scheme is significantly higher than that of the other three algorithms compared in this paper. | en_US |
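The precoding step can be sketched with zero-forcing, one common digital precoding choice for suppressing multi-user interference within a beam coverage area (an assumption for illustration — the abstract does not specify which precoder the thesis designs; antenna and user counts are also illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4   # ground users in one beam coverage area (assumed)
M = 8   # satellite transmit antennas (assumed)

# Downlink channel matrix H: row k is user k's channel (Rayleigh, illustrative)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so H @ W is diagonal
# and each user's beam weight vector nulls the other users' channels
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)   # unit-power beam weights

# Effective channel after precoding: off-diagonal (inter-user) terms vanish
H_eff = H @ W
interference = np.abs(H_eff - np.diag(np.diag(H_eff))).max()
```

Here `interference` is numerically negligible, which is the intra-beam interference suppression the precoder provides; the remaining inter-beam interference across different coverage areas is what the reinforcement-learning beam-angle strategy then targets.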