Speech recognition serves as a new form of computer interface. It enables voice assistants (e.g., Alexa and Siri), which help us with many services, such as obtaining daily information and setting up driving navigation systems. Speech recognition has been studied extensively since the early 1990s. However, as more and more portable embedded devices (e.g., navigation systems, language translators, etc.) appear on the market, there is a need for offline speech recognition on low-computation devices. In this research, we focus on applying an encoder-decoder neural network to a low-power device such as the Raspberry Pi. In contrast to Alexa and Siri, which require recorded voice to be transmitted to expensive servers for computation and inference, we build a speech recognition model that infers speech samples entirely locally. Our model uses a CNN as the encoder and an LSTM or GRU with an attention mechanism as the decoder. In addition, TensorFlow Lite is adopted to deploy the model on the Raspberry Pi for speech inference. The experimental results indicate that the attention mechanism improved the model's ability to recognize isolated words by about 2% to 5% in recall on the Raspberry Pi. However, inference times on the Raspberry Pi remain long due to the limited computing power of the low-power device.
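To make the decoder's attention step concrete, the following is a minimal NumPy sketch of dot-product (Luong-style) attention over the encoder's feature sequence. This is an illustrative reconstruction, not the paper's implementation; the function and variable names are assumptions, and the paper does not specify which attention variant is used.

```python
import numpy as np

def dot_product_attention(decoder_state, encoder_outputs):
    """Illustrative Luong-style dot-product attention.

    decoder_state:   shape (hidden_dim,), current decoder (LSTM/GRU) hidden state
    encoder_outputs: shape (time_steps, hidden_dim), encoder feature sequence
    Returns the context vector and the attention weights.
    """
    # Alignment scores: one dot product per encoder time step
    scores = encoder_outputs @ decoder_state            # (time_steps,)
    # Softmax over time steps (shifted by the max for numerical stability)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of encoder outputs
    context = weights @ encoder_outputs                 # (hidden_dim,)
    return context, weights
```

At each decoding step, the weights form a probability distribution over encoder time steps, and the resulting context vector is typically combined with the decoder state before predicting the next output token.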