
Closing the 'Expressivity Gap': Mistral's Voxtral TTS Redefines Voice Cloning with a Hybrid Autoregressive and Flow-Matching Architecture

Voice AI has a dirty secret. Most text-to-speech systems sound fine — until they don't. They can read a sentence. They can't make it mean anything. The rhythm is flat. The emotion falls flat. The speaker sounds like themselves for two seconds, then drifts toward a generic synthetic average. That gap between audio that is merely intelligible and speech that is expressive and faithful to the speaker is what we call the 'Expressivity Gap' — and it has been the defining obstacle for every engineer trying to build production voice agents, audiobook pipelines, or multilingual customer-support systems that hold up to human scrutiny.

Mistral AI's new release, Voxtral TTS, is a direct attempt to close that gap. Mistral's first text-to-speech model, released simultaneously as open weights on Hugging Face and through the API, makes a bold architectural bet: use two fundamentally different modeling paradigms — autoregressive generation and flow matching — for the two fundamentally different problems that voice cloning involves.

The result is a model totaling roughly 4B parameters — a 3.4B decoder backbone, a 390M flow-matching acoustic transformer, and a 300M neural audio codec — that produces natural, speaker-faithful speech in 9 languages from as little as 3 seconds of reference audio, achieves a 68.4% win rate over ElevenLabs Flash v2.5 in multilingual voice-cloning evaluations judged by native speakers, and serves more than 30 concurrent users from a single NVIDIA H200 at sub-600ms latency.

The Expressivity Gap: Why One Model Can't Do It All

Think of speech as two completely different signals traveling along the same waveform. There is the semantic layer — the words, the grammar, the linguistic structure. And there is the acoustic layer — the speaker's identity, their emotional register, their prosody and rhythm.

These two layers have very different statistical properties, and forcing a single modeling approach to handle both at once imposes painful compromises. Autoregressive models excel at long-range consistency — keeping a speaker sounding like themselves across a full passage — but become slow and expensive when applied to the 36 codebook tokens that describe fine-grained acoustic texture in every frame. Flow-based models are excellent at generating rich, continuous acoustic detail, but lack the sequential memory that keeps a speaker consistent over time.

The Voxtral TTS Architecture: Two Jobs, Two Models

Voxtral TTS is built around three components that operate together in a single end-to-end pipeline.

1. The Voxtral Codec — The Audio Tokenizer

  • Architecture: a convolutional-transformer autoencoder trained from scratch with a hybrid VQ-FSQ quantization scheme.
  • How it works: takes a 24 kHz mono waveform and compresses it into 12.5 Hz frames — one frame per 80 ms of audio. Each frame becomes 37 discrete tokens: 1 semantic token (Vector Quantization with an 8,192-entry codebook) and 36 acoustic tokens (Finite Scalar Quantization with 21 levels per dimension). Total bitrate: ~2.14 kbps. The semantic token is trained against a frozen Whisper ASR model as a distillation target, so it learns text-aligned representations without requiring external forced alignment.
  • Best at: compressing reference voice prompts for downstream generation and decoding generated tokens back into a waveform.
  • Why: compared to Mimi (the codec in Moshi) at comparable bitrates, the Voxtral Codec wins on Mel distance, STFT distance, PESQ, ESTOI, ASR word error rate, and speaker similarity on the Expresso benchmark.
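The quoted ~2.14 kbps bitrate follows directly from the per-frame token budget above, counting log2(codebook size) bits per token; a quick back-of-the-envelope check (constants restated from the article):

```python
import math

FRAME_RATE_HZ = 12.5       # one frame per 80 ms of 24 kHz audio
SEMANTIC_CODEBOOK = 8_192  # one VQ semantic token per frame
ACOUSTIC_TOKENS = 36       # FSQ acoustic tokens per frame
FSQ_LEVELS = 21            # levels per FSQ dimension

semantic_bits = math.log2(SEMANTIC_CODEBOOK)             # 13 bits
acoustic_bits = ACOUSTIC_TOKENS * math.log2(FSQ_LEVELS)  # ~158.1 bits

bitrate_kbps = FRAME_RATE_HZ * (semantic_bits + acoustic_bits) / 1_000
print(f"{bitrate_kbps:.2f} kbps")  # 2.14 kbps, matching the quoted figure
```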

2. The Autoregressive Decoder Backbone — The Semantic Engine

  • Architecture: a decoder-only transformer initialized from Ministral 3B, modeling audio tokens with text tokens as context.
  • How it works: the voice reference (3–30 seconds) is encoded into audio tokens by the Voxtral Codec and placed at the start of the input sequence. The text to be spoken follows. The decoder autoregressively generates one semantic token per frame — one every 80 ms — until it emits a special End-of-Audio token. A linear head projects the decoder's hidden states onto the 8,192-entry semantic vocabulary.
  • Best at: maintaining long-range speaker consistency and adhering to the identity established by the voice prompt.
  • Why: this is the part of the system that guarantees the speaker sounds like themselves from the first word to the last. Autoregressive generation excels at exactly this kind of sequential consistency.
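The decoding loop described above can be sketched as follows. `next_semantic_token` is a hypothetical stub standing in for the real 3.4B decoder plus linear head, and the 25-frame cutoff is arbitrary; only the frame pacing, the End-of-Audio termination, and the vocabulary size come from the article.

```python
EOS = 8_192  # hypothetical ID just past the 8,192-entry semantic codebook

def next_semantic_token(prompt_tokens, text, generated):
    """Stub: emit a dummy token per 80 ms frame, then EOS after 25 frames."""
    return EOS if len(generated) >= 25 else len(generated) % 8_192

def generate_semantic_tokens(prompt_tokens, text, max_frames=1_500):
    """Autoregressive loop: one semantic token per 80 ms frame until EOS."""
    generated = []
    for _ in range(max_frames):   # 1,500 frames = 2 minutes, a safety cap
        tok = next_semantic_token(prompt_tokens, text, generated)
        if tok == EOS:            # End-of-Audio token terminates decoding
            break
        generated.append(tok)
    return generated

tokens = generate_semantic_tokens(prompt_tokens=[], text="Hello")
print(len(tokens), "frames =", len(tokens) * 0.08, "seconds")  # 25 frames = 2.0 s
```

In the real model each step also hands the decoder's hidden state to the flow-matching transformer, which fills in that frame's 36 acoustic tokens.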

3. The Flow-Matching Transformer — The Acoustic Engine

  • Architecture: a 3-layer transformer that models the acoustic tokens in continuous space using conditional flow matching with classifier-free guidance (CFG).
  • How it works: at each generation step, the hidden state from the decoder backbone is passed to the FM transformer. Starting from Gaussian noise, the transformer runs 8 function evaluations (NFE) with an Euler solver at a CFG scale of α = 1.2 to produce the 36 acoustic token values for that frame. The floating-point values are then discretized into the 21 FSQ levels before the next AR decoding step.
  • Best at: generating the fine-grained acoustic texture — speaker timbre, articulation, emotional color — that makes synthesized speech sound alive rather than robotic.
  • Why: an ablation in the research paper compares flow matching against MaskGIT and a Depth Transformer for acoustic prediction. Flow matching won on human evaluation and is also computationally cheaper: the Depth Transformer needs 36 autoregressive decoding steps per frame; the FM transformer needs only 8 NFE.
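A minimal sketch of the 8-step Euler sampler with classifier-free guidance. The `velocity` function here is a toy linear field standing in for the real FM transformer, and the target vector is illustrative; only the NFE count and the CFG scale α = 1.2 come from the article.

```python
import numpy as np

ALPHA, NFE = 1.2, 8  # CFG scale and number of function evaluations (article values)

def velocity(x, t, cond):
    """Toy conditional vector field: straight-line flow toward `cond`.
    cond=None plays the unconditional branch of classifier-free guidance.
    (A real FM transformer would also use the timestep t.)"""
    target = cond if cond is not None else np.zeros_like(x)
    return target - x

def sample_frame(cond, dim=36, seed=0):
    """Generate one 36-dim acoustic frame from Gaussian noise via Euler steps."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)   # start from Gaussian noise
    dt = 1.0 / NFE
    for step in range(NFE):
        t = step * dt
        v_cond = velocity(x, t, cond)
        v_uncond = velocity(x, t, None)
        v = v_uncond + ALPHA * (v_cond - v_uncond)  # CFG-mixed velocity
        x = x + dt * v                              # Euler update
    return x  # continuous values, then snapped to the 21 FSQ levels

target = np.linspace(-1, 1, 36)   # hypothetical "true" acoustic frame
frame = sample_frame(target)
```

Note that with α > 1 the guided velocity deliberately over-weights the conditional direction; in this toy linear field the iterate is pulled toward 1.2× the target, which illustrates why CFG sharpens conditioning at the cost of some bias.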

Post-Training: How DPO Makes the Model Less Robotic

After pre-training on paired audio and text, Voxtral TTS is post-trained with Direct Preference Optimization (DPO). Because the acoustic tokens use flow matching rather than a standard categorical head, the research team adapted a flow-based DPO objective alongside the standard DPO loss for the semantic codebook.

Winner/loser sample pairs are constructed using word error rate (WER), speaker similarity scores, audio consistency, UTMOS-v2, and LM-judge metrics. A key finding: training for more than one epoch on the synthetic DPO data makes the model sound more robotic — not less. One epoch is the sweet spot.

The payoff is measurable. German WER drops from 4.08% to 0.83%. French WER drops from 5.01% to 3.22%. UTMOS scores improve across all nine languages. The model hallucinates less, skips fewer words, and no longer trails off in volume over long utterances. One caveat: Hindi WER gets slightly worse with DPO (3.39% → 4.99%) — the research team flags this openly, and it is the only language where the word error rate moves in the wrong direction.
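For the semantic codebook, the standard DPO objective applies; a minimal sketch on per-sequence log-probabilities (β = 0.1 is a common default, not a value reported for Voxtral, and the flow-based variant used for the acoustic tokens is not shown):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (policy margin - reference margin)).
    logp_* are sequence log-probs of the winning/losing sample under the policy;
    ref_logp_* are the same under the frozen reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# If the policy prefers the winner more strongly than the reference does,
# the margin is positive and the loss drops below log(2) (its value at zero):
print(dpo_loss(-42.0, -55.0, -45.0, -50.0))  # margin = +8, loss < 0.693
```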

The Full Competitive Picture: Where Voxtral Wins

The human evaluation results deserve a fuller reading than the single headline win rate.

On zero-shot voice cloning (the model's headline capability), Voxtral TTS beats ElevenLabs Flash v2.5 68.4% of the time overall — and the gap widens when you look at speaker similarity on objective benchmarks. On SEED-TTS, Voxtral achieves a speaker similarity of 0.628 versus 0.392 for ElevenLabs v3 and 0.413 for ElevenLabs Flash v2.5.

On the flagship-voice evaluation with implicit emotion steering (the model infers emotion from the text without any tags), Voxtral TTS beats both ElevenLabs models: 55.4% over v3 and 58.3% over Flash v2.5.

Gemini 2.5 Flash TTS currently leads on explicit emotion steering (following direct text instructions such as "speak angrily"), reflecting its nature as a general-purpose instruction-following model rather than a dedicated audio engine; Voxtral TTS wins 37.1% of the time against Gemini in that setting. By contrast, Voxtral TTS prioritizes acoustic authenticity: it achieves expressiveness through a reference voice that naturally embodies the requested register.

The distinction is clear: while Gemini is the better 'actor' following a script, Voxtral TTS is the more 'authentic' voice, which makes it the better tool for applications where speaker similarity and natural human prosody are the primary requirements.

Cross-Lingual Voice Adaptation

Voxtral TTS also demonstrates zero-shot cross-lingual voice adaptation, even though it was never explicitly trained for this capability. You can provide a French voice prompt with English text, and the resulting speech is natural English rendered in the French speaker's voice. This makes the model immediately useful in speech-to-speech translation pipelines without any additional fine-tuning.

Use Cases: Where Voxtral TTS Really Shines

Use Case 1: The Multilingual Voice Agent

  • Goal: a customer-support platform handling calls in Arabic, Hindi, Spanish, and English wants one consistent brand voice, rendered in each language from a single 10-second reference clip.
  • Problem: most TTS systems perform well in English but degrade badly in lower-resource languages. Maintaining speaker identity across languages is nearly impossible without per-language fine-tuning.
  • Solution: use Voxtral TTS via the Mistral API at $0.016 per 1,000 characters. Provide the short reference clip once; the model handles all nine languages. Per-language fine-tuning required: zero.
  • Result: in blind human evaluations, Voxtral TTS achieved a 79.8% win rate over ElevenLabs Flash v2.5 in Hindi and 87.8% in Spanish. Arabic win rate: 72.9%. The expressivity gap closes most dramatically in exactly the languages where competitors struggle.
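At the published $0.016 per 1,000 characters, budgeting this workload is simple arithmetic; a toy estimator (the call volumes below are hypothetical, not from the article):

```python
PRICE_PER_1K_CHARS = 0.016  # published Mistral API price for Voxtral TTS

def tts_cost_usd(num_chars: int) -> float:
    """Cost in USD for synthesizing `num_chars` characters of text."""
    return num_chars / 1_000 * PRICE_PER_1K_CHARS

# Hypothetical sizing: ~9,000 spoken characters per call, 10,000 calls/month.
monthly = tts_cost_usd(9_000 * 10_000)
print(f"${monthly:,.2f}/month")  # $1,440.00/month
```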

Use Case 2: The Real-Time Audiobook Pipeline

  • Goal: generate faithful narrator audio at scale from manuscripts, preserving a specific narrator's voice and emotional range across hours of content.
  • Problem: long-form synthesis demands temporal consistency across thousands of utterances. Most systems start drifting in speaker identity before the end of a chapter.
  • Solution: run Voxtral TTS with vLLM-Omni on a single NVIDIA H200. The autoregressive decoder backbone maintains long-range consistency across the full generation sequence. The flow-matching transformer handles the acoustic rendering of each utterance — ensuring a joyful sentence sounds joyful, inferred from the text itself without emotion tags.
  • Result: a single H200 sustains this workload at 1,430 characters per second at concurrency 32, with a real-time factor (RTF) of 0.302 and zero audio-segment queueing. The model generates up to two minutes of audio natively.
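The quoted serving figures can be sanity-checked with a little arithmetic (the ~15 characters per second of spoken audio is our assumption, not a figure from the article):

```python
# Serving figures quoted above for a single H200.
chars_per_sec = 1_430   # aggregate synthesis throughput
concurrency = 32        # simultaneous streams
rtf = 0.302             # real-time factor: synthesis time / audio duration

per_stream = chars_per_sec / concurrency
print(f"{per_stream:.1f} chars/s per stream")   # 44.7

# RTF < 1 means each second of audio takes under a second to synthesize:
speedup = 1 / rtf
print(f"{speedup:.2f}x faster than real time")  # 3.31x

# At an assumed ~15 chars/s of spoken audio per listener, each stream
# produces audio roughly 3x faster than it is consumed (44.7 / 15 ~= 3).
```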

Use Case 3: The Zero-Shot Voice Cloning Developer

  • Goal: build a product that lets users clone any voice from a short recording and apply it to a personal voice assistant, accessibility tooling, or content creation — without requiring studio-quality audio.
  • Problem: most voice-cloning systems demand 30+ seconds of high-quality reference audio and degrade badly on in-the-wild recordings (background noise, variable microphone quality, conversational speech patterns).
  • Solution: Voxtral TTS works with voice prompts as short as 3 seconds and performs best with prompts between 3 and 25 seconds — explicitly built for real-world, not studio, audio. Serve it from the open weights on any GPU with ≥16GB VRAM using vLLM-Omni.
  • Result: in blind human evaluations spanning all 9 languages and 60 text prompts, Voxtral TTS was preferred over ElevenLabs Flash v2.5 in 68.4% of cases — notably wider than the 58.3% win rate in the curated-voice comparison. The model is stronger at cloning unseen voices than at rendering its own preset ones.

Ready to Get Started?

Mistral AI has made Voxtral TTS accessible through two routes, depending on how you want to use it:

  • For API access: available now in Mistral Studio at $0.016 per 1,000 characters, with 20 preset voices spanning American, British, and French accent options. Output is 24 kHz audio in WAV, PCM, FLAC, MP3, AAC, or Opus format. No infrastructure required.
  • For self-hosting: the open weights are available as mistralai/Voxtral-4B-TTS-2603 on Hugging Face under CC BY-NC 4.0. The model runs on a single GPU with ≥16GB VRAM via vLLM-Omni (v0.18.0+).

Check out the research paper and Mistral's blog post for full technical details on the architecture, training, and evaluation setup.


Note: thanks to the Mistral AI team for their support of this article.

