


AI helps brain-computer interface research, New York University's breakthrough neural speech decoding technology, published in Nature sub-journal
Apr 17, 2024, 08:40 AM · Author | Chen Xupeng
Aphasia caused by defects in the nervous system can lead to serious disability, limiting people's professional and social lives.
In recent years, rapid progress in deep learning and brain-computer interface (BCI) technology has made it feasible to develop neural speech prostheses that could help people with aphasia communicate. However, decoding speech from neural signals faces challenges.
Recently, researchers from the VideoLab and Flinker Lab at New York University developed a new differentiable speech synthesizer paired with a lightweight convolutional neural network: the network encodes speech into a series of interpretable speech parameters (such as pitch, loudness, and formant frequencies), and the differentiable synthesizer resynthesizes speech from those parameters.
By mapping neural signals onto these speech parameters, the researchers built a neural signal decoding system that is highly interpretable and works with small amounts of data, while preserving the content of the original speech.
The research, titled "A neural speech decoding framework leveraging deep learning and speech synthesis", was published in the journal Nature Machine Intelligence on April 8, 2024.
Paper link: https://www.nature.com/articles/s42256-024-00824-8
Research Background
Most attempts to develop neural speech decoders rely on a special kind of data: electrocorticography (ECoG) recordings from patients undergoing epilepsy surgery. Electrodes implanted in patients with epilepsy collect cortical data during speech production; these data have high spatiotemporal resolution and have helped researchers achieve a series of remarkable results in speech decoding, advancing the brain-computer interface field.
Speech decoding of neural signals faces two major challenges.
First, the data available to train a personalized neural-to-speech decoding model is very limited in duration, usually only about ten minutes, while deep learning models often need large amounts of training data.
Second, human pronunciation is highly variable: even when the same person repeats the same word, the speaking rate, intonation, and pitch change, which complicates the representation space the model must build.
Early attempts to decode neural signals into speech relied mainly on linear models. These models usually did not require huge training datasets and were highly interpretable, but their accuracy was low.
More recent deep neural networks, especially convolutional and recurrent architectures, have advanced along two key dimensions: the intermediate latent representation of speech and the quality of the synthesized speech. For example, some studies decode cortical activity into an articulatory (mouth movement) space and then convert it into speech; the decoding performance is strong, but the reconstructed voice sounds unnatural.
On the other hand, some methods successfully reconstruct natural-sounding speech using a WaveNet vocoder, generative adversarial networks (GANs), and the like, but their accuracy is limited. Recently, in a study of a patient with an implanted device, speech waveforms that were both accurate and natural were achieved by using quantized HuBERT features as an intermediate representation space and a pretrained speech synthesizer to convert those features into speech.
However, HuBERT features do not capture speaker-specific acoustic information and can only generate a fixed, uniform speaker voice, so an additional model is needed to convert this generic voice into a specific patient's voice. Furthermore, that study, like most previous attempts, adopted a non-causal architecture, which may limit its use in practical brain-computer interface applications that require temporally causal operations.
Main model framework
To address these challenges, the researchers introduce a new decoding framework that goes from electrocorticography (ECoG) signals to speech. They build a low-dimensional latent representation, generated by a speech encoding-decoding model trained on speech signals alone (Figure 1).
The proposed framework consists of two parts: an ECoG decoder, which converts the ECoG signal into interpretable acoustic speech parameters (such as pitch, voicing, loudness, and formant frequencies), and a speech synthesizer, which converts these speech parameters into a spectrogram.
研究人員建構(gòu)了一個可微分語音合成器,這使得在訓(xùn)練ECoG解碼器的過程中,語音合成器也可以參與訓(xùn)練,共同優(yōu)化以減少頻譜圖重建的誤差。這個低維度的潛在空間具有很強(qiáng)的可解釋性,加上輕量級的預(yù)訓(xùn)練語音編碼器產(chǎn)生參考用的語音參數(shù),幫助研究者建立了一個高效的神經(jīng)語音解碼框架,克服了數(shù)據(jù)稀缺的問題。
該框架能產(chǎn)生非常接近說話者自己聲音的自然語音,並且ECoG解碼器部分可以插入不同的深度學(xué)習(xí)模型架構(gòu),也支援因果操作(causal operations)。研究人員共收集並處理了48名神經(jīng)外科病人的ECoG數(shù)據(jù),使用多種深度學(xué)習(xí)架構(gòu)(包括卷積、循環(huán)神經(jīng)網(wǎng)路和Transformer)作為ECoG解碼器。
The framework achieved high accuracy across all of these models, with the convolutional (ResNet) architecture performing best: the Pearson correlation coefficient (PCC) between the original and decoded spectrograms reached 0.806. The framework reaches high accuracy using only causal operations and a relatively low sampling density (low-density, 10 mm electrode spacing).
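The PCC reported throughout is simply the Pearson correlation between the original and decoded spectrograms. A minimal sketch (the random "spectrograms" are synthetic stand-ins for illustration):

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two flattened spectrograms."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
reference = rng.standard_normal((128, 100))                  # freq x time
decoded = reference + 0.5 * rng.standard_normal((128, 100))  # noisy reconstruction
score = pearson_cc(reference, decoded)   # 1.0 means a perfect reconstruction
```

A PCC of 0.806 thus means the decoded spectrogram tracks the original's spectro-temporal structure closely, though not perfectly.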
The researchers also demonstrated effective speech decoding from both the left and right hemispheres of the brain, extending neural speech decoding to the right hemisphere.
研究相關(guān)程式碼開源:https://github.com/flinkerlab/neural_speech_decoding
該研究的重要創(chuàng)新是提出了一個可微分的語音合成器(speech synthesizer),這使得語音的重合成任務(wù)變得非常高效,可以用很小的語音合成高保真的貼合原聲的音訊。
The differentiable speech synthesizer borrows from the principles of the human vocal production system, splitting speech into a Voice component (for modeling vowels) and an Unvoice component (for modeling consonants):
For the Voice component, harmonics are first generated from the fundamental frequency signal and then filtered by a filter composed of the F1-F6 formants to obtain the spectral features of the vowel part. For the Unvoice component, white noise is filtered by a corresponding filter to obtain its spectrum. A learnable parameter controls the mixing ratio of the two components at each moment; the mixture is then amplified by the loudness signal, and background noise is added to obtain the final speech spectrum. Based on this synthesizer, the paper designs an efficient speech re-synthesis framework and a neural-speech decoding framework.
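The source-filter idea above can be illustrated with a toy numpy sketch (this is not the paper's implementation: Gaussian resonances stand in for the real formant filters, only a couple of formants are used, and all parameter values are invented):

```python
import numpy as np

def synthesize_frame(f0, formants, bandwidths, voice_mix, loudness,
                     sr=16000, n_fft=512):
    """One spectral frame from interpretable parameters (toy sketch).

    voice_mix: 1.0 = fully voiced (harmonic source), 0.0 = fully unvoiced (noise).
    """
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Voiced source: harmonic comb at integer multiples of the pitch f0
    voiced = np.zeros_like(freqs)
    for h in np.arange(f0, sr / 2, f0):
        voiced += np.exp(-0.5 * ((freqs - h) / (0.1 * f0)) ** 2)
    # Unvoiced source: flat (white-noise) spectrum
    unvoiced = np.ones_like(freqs)
    # Formant filter: Gaussian resonances stand in for the real F1-F6 filters
    filt = np.zeros_like(freqs)
    for fc, bw in zip(formants, bandwidths):
        filt += np.exp(-0.5 * ((freqs - fc) / bw) ** 2)
    source = voice_mix * voiced + (1.0 - voice_mix) * unvoiced
    return loudness * filt * source          # final frame spectrum

# A voiced vowel-like frame and an unvoiced fricative-like frame
vowel = synthesize_frame(f0=120, formants=[500, 1500], bandwidths=[80, 120],
                         voice_mix=1.0, loudness=1.0)
fricative = synthesize_frame(f0=120, formants=[4000], bandwidths=[500],
                             voice_mix=0.0, loudness=0.5)
```

Because every step is a smooth function of the parameters, gradients of a spectrogram loss can flow back through the synthesizer, which is what makes joint training with the ECoG decoder possible.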
研究結(jié)果
Speech decoding with temporal causality
First, the researchers directly compared different model architectures (convolutional (ResNet), recurrent (LSTM), and Transformer (3D Swin)) on speech decoding performance. Notably, all of these models can perform either non-causal or causal operations in time: a causal model uses only current and past neural signals to generate speech, whereas a non-causal model also uses future neural signals. Measured by the Pearson correlation coefficient (PCC) between the original and decoded spectrograms, ResNet performed best (mean PCC of 0.806 non-causal and 0.797 causal), followed closely by the Swin model (mean PCC of 0.792 non-causal and 0.798 causal) (Figure 2a).
Evaluation with the STOI metric yielded similar findings. The researchers found that even the causal version of the ResNet model was comparable to the non-causal version, with no significant difference between the two; the causal LSTM fell short of its non-causal version, so the researchers subsequently focused on the ResNet and Swin models. Cross-validation ensured that different trials of the same word never appeared in both the training and test sets. The models also decoded words unseen during training well, largely because the model performs decoding at the phoneme (or similar) level. Further, the researchers showed the word-level performance of the causal ResNet decoder using data from two participants (low-density ECoG); the decoded spectrograms accurately preserved the spectro-temporal structure of the original speech (Figure 2c,d).
The researchers also compared the speech parameters predicted by the neural decoder with those encoded by the speech encoder (used as reference values), reporting mean PCC values across participants (N = 48) for several key speech parameters, including voice weight (used to distinguish vowels from consonants), loudness, pitch f0, the first formant f1, and the second formant f2. Accurately reconstructing these parameters, especially pitch, voice weight, and the first two formants, is essential for accurate speech decoding and for naturally reproducing the participant's voice.
研究發(fā)現(xiàn)表明,無論是非因果或因果模型,都能得到合理的解碼結(jié)果,這為未來的研究和應(yīng)用提供了積極的指引。
Speech decoding from left- and right-hemisphere neural signals, and the effect of spatial sampling density
研究者進(jìn)一步對左右大腦半球的語音解碼結(jié)果進(jìn)行了比較。多數(shù)研究集中關(guān)注主導(dǎo)語音和語言功能的左腦半球。然而,我們對於如何從右腦半球解碼語言訊息所知甚少。針對這一點(diǎn),研究者比較了參與者左右大腦半球的解碼表現(xiàn),以驗(yàn)證使用右腦半球進(jìn)行語音恢復(fù)的可能性。
Of the 48 participants in the study, 16 had ECoG signals recorded from the right hemisphere. Comparing the performance of the ResNet and Swin decoders, the researchers found that the right hemisphere also supported stable speech decoding (PCC of 0.790 for ResNet and 0.798 for Swin), only slightly below left-hemisphere decoding (Figure 3a).
這項(xiàng)發(fā)現(xiàn)同樣適用於 STOI 的評估。這意味著,對於左腦半球受損、失去語言能力的患者來說,利用右腦半球的神經(jīng)訊號恢復(fù)語言也許是可行的方案。
Next, the researchers examined how electrode sampling density affects speech decoding. Previous studies mostly used higher-density electrode grids (0.4 mm), whereas the grids typically used clinically are lower density (LD, 1 cm spacing).
Five participants used hybrid (HB) electrode grids (Figure 3b), which are primarily low-density but include additional electrodes; the remaining 43 participants all had low-density sampling. Decoding performance with hybrid (HB) sampling was similar to traditional low-density (LD) sampling, though slightly better on STOI.
The researchers also compared decoding using only the low-density electrodes against using all hybrid electrodes and found no significant difference between the two (Figure 3d). This indicates that the model can learn speech information from cortex sampled at different spatial densities, and suggests that the sampling density commonly used in the clinic may be sufficient for future brain-computer interface applications.
Contributions of different brain regions in the left and right hemispheres to speech decoding

Finally, the researchers examined how much the brain's speech-related regions contribute during speech decoding, providing an important reference for future implantation of speech restoration devices in either hemisphere. They used occlusion analysis to assess the contribution of different brain regions to speech decoding.
In short, if a region is critical to decoding, occluding the electrode signals from that region (i.e., setting the signals to zero) reduces the accuracy (PCC) of the reconstructed speech.
Using this method, the researchers measured the reduction in PCC when each region was occluded. Comparing the causal and non-causal versions of the ResNet and Swin decoders revealed that the auditory cortex contributed more in the non-causal models. This underscores the need for causal models in real-time speech decoding applications, because real-time decoding cannot exploit neural feedback signals.
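The occlusion idea can be sketched in a few lines (the linear "decoder" and the electrode grouping here are invented for illustration, not the paper's setup): zeroing a group of electrodes and re-measuring the PCC against the all-electrode output reveals how much that group contributed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_t = 16, 200
# Hypothetical linear "decoder": electrodes 0-7 matter, 8-15 barely do
W = rng.standard_normal((1, n_elec))
W[0, 8:] *= 0.1
X = rng.standard_normal((n_elec, n_t))          # simulated neural signals
y_ref = (W @ X).ravel()                         # decoded trace with all electrodes

def pcc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def occluded_pcc(region):
    """PCC after zeroing the electrodes in `region` (a slice)."""
    Xo = X.copy()
    Xo[region] = 0.0
    return pcc(y_ref, (W @ Xo).ravel())

# Occluding the important region hurts far more than occluding the weak one
low = occluded_pcc(slice(0, 8))
high = occluded_pcc(slice(8, 16))
```

The per-region PCC drop (1 minus the occluded PCC) is the contribution score mapped onto the cortex in the paper's analysis.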
Moreover, in both the right and left hemispheres, the contribution of sensorimotor cortex, especially its ventral portion, was similar, suggesting that implanting a neural prosthesis in the right hemisphere may be feasible.
結(jié)論&啟發(fā)展望
研究者開發(fā)了一個新型的可微分語音合成器,可以利用一個輕型的捲積神經(jīng)網(wǎng)路將語音編碼為一系列可解釋的語音參數(shù)(如音高,響度,共振峰頻率等)並透過可微分語音合成器重新合成語音。
透過將神經(jīng)訊號映射到這些語音參數(shù),研究者建構(gòu)了一個高度可解釋且可應(yīng)用於小數(shù)據(jù)量情形的神經(jīng)語音解碼系統(tǒng),可產(chǎn)生聽起來自然的語音。此方法在參與者間高度可重複(共48人),研究者成功展示了利用卷積和Transformer(3D Swin)架構(gòu)進(jìn)行因果解碼的有效性,均優(yōu)於循環(huán)架構(gòu)(LSTM)。
The framework handles both high and low spatial sampling densities and can process ECoG signals from either hemisphere, demonstrating strong potential for speech decoding.
大多數(shù)先前的研究沒有考慮到即時腦機(jī)介面應(yīng)用中解碼操作的時序因果性。許多非因果模型依賴聽覺感覺回饋訊號。研究者的分析顯示,非因果模型主要依賴顳上回(superior temporal gyrus)的貢獻(xiàn),而因果模型則基本上消除了這一點(diǎn)。研究者認(rèn)為,由於過度依賴回饋訊號,非因果模型在即時BCI應(yīng)用中的通用性受限。
Some approaches try to avoid feedback during training, for example by decoding speech the participant only imagines. Nevertheless, most studies still adopt non-causal models and cannot rule out feedback effects during training and inference. In addition, the recurrent neural networks widely used in the literature are usually bidirectional, leading to non-causal behavior and prediction latency, and the researchers' experiments show that unidirectionally trained recurrent networks performed worst.
Although the study did not test real-time decoding, the researchers achieved speech synthesis from neural signals with a latency under 50 milliseconds, which barely affects auditory feedback delay and would permit normal speech production.
The study also examined whether higher-density coverage improves decoding performance. The researchers found that both low-density and higher (hybrid) density grid coverage achieved high decoding performance (Figure 3c). Moreover, decoding performance using all electrodes did not differ significantly from using only the low-density electrodes (Figure 3d).
This demonstrates that, as long as perisylvian coverage is sufficient, the proposed ECoG decoder can extract speech parameters from neural signals to reconstruct speech even in low-density participants. Another notable finding is the contribution of right-hemisphere cortical structures, and the right perisylvian cortex in particular, to speech decoding. Although some previous studies have shown that the right hemisphere may contribute to decoding vowels and sentences, these results provide evidence of a robust speech representation in the right hemisphere.
The researchers also noted limitations of the current model: the decoding pipeline requires speech training data paired with ECoG recordings, which may not be available for patients with aphasia. In the future, they hope to develop model architectures that can handle non-grid data and to better exploit multi-patient, multimodal neural recording data.
First authors: Xupeng Chen, Ran Wang; corresponding author: Adeen Flinker.
Funding: National Science Foundation Grants No. IIS-1912286 and 2309057 (Y.W., A.F.), and National Institutes of Health Grants R01NS109367, R01NS115929, and R01DC018805 (A.F.).
更多關(guān)於神經(jīng)語音解碼中的因果性討論,可以參考作者們的另一篇論文《Distributed feedforward and feedback cortical processing supports human speech production 》:https ://www.pnas.org/doi/10.1073/pnas.2300255120
來源:腦機(jī)介面社群The above is the detailed content of AI helps brain-computer interface research, New York University's breakthrough neural speech decoding technology, published in Nature sub-journal. For more information, please follow other related articles on the PHP Chinese website!
