Apple Workshop on Natural Language Understanding 2024

Advances in natural language processing enable more intuitive ways of interacting with technology. For example, many of Apple's products and services, including Siri and search, use natural language understanding and generation to deliver a fluent and seamless user experience. Natural language is a rapidly moving area of machine learning research, and it includes work on large-scale data curation across many languages, novel architectures and algorithms, and new evaluation regimes, all of which involve questions of privacy and security as well as of performance and efficiency.
To discuss this rapidly evolving field, Apple hosted a research workshop on natural language understanding, bringing together Apple and members of the research community for a multi-day event focused on recent advances in large language models (LLMs).
In this post we share highlights from the workshop discussions, along with recordings of selected workshop talks.
Architectures
LLMs aim to be capable of a wide variety of tasks, and they are now in common use across many domains and applications. As a result, there is growing interest in making these models more efficient, and several talks discussed promising directions toward that goal.
Two talks described alternatives to the attention-based Transformer architectures on which most current models are founded:
- Sasha Rush of Cornell University, in “SSMs and the Foundation Model Design Space”, described state space models (SSMs), a promising and rapidly developing direction that shows competitive accuracy and scaling. SSM architectures also open up new directions in model design (for example, byte-level LLMs, and distilling LLMs into SSMs to speed up generation) as well as new kinds of foundation models.
- Yoon Kim of the Massachusetts Institute of Technology presented recurrent neural network (RNN) architectures with matrix-valued hidden states and data-dependent gating. This approach enables efficient training and higher throughput than Transformers, as described in the paper Gated Linear Attention Transformers with Hardware-Efficient Training.
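Both directions above share a common core: replacing softmax attention with a recurrence whose state has a fixed size. As a rough illustration (a toy sketch, not the authors' implementations), a linear-attention-style recurrence with a matrix-valued hidden state and data-dependent gates can be written as:

```python
import numpy as np

def gated_linear_attention(q, k, v, g):
    """Toy recurrence with a matrix-valued hidden state S.

    q, k: (T, d_k); v: (T, d_v); g: (T, d_k) gates in (0, 1).
    Each step: S <- diag(g_t) @ S + k_t v_t^T, output o_t = S^T q_t.
    The state stays d_k x d_v regardless of sequence length T, unlike
    softmax attention, whose per-step cost grows with T.
    """
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.empty((T, d_v))
    for t in range(T):
        S = g[t][:, None] * S + np.outer(k[t], v[t])  # gated state update
        out[t] = S.T @ q[t]                           # read out with query
    return out

rng = np.random.default_rng(0)
T, d_k, d_v = 6, 4, 3
q, k = rng.normal(size=(T, d_k)), rng.normal(size=(T, d_k))
v = rng.normal(size=(T, d_v))
g = 1 / (1 + np.exp(-rng.normal(size=(T, d_k))))  # sigmoid gates
print(gated_linear_attention(q, k, v, g).shape)  # (6, 3)
```

The paper's contribution is largely in making such recurrences parallelizable and hardware-efficient during training; this sequential loop only shows the fixed-size-state idea.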
Yejin Choi, currently affiliated with Stanford and NVIDIA, gave a talk on the seemingly impossible possibilities of specialized small language models, arguing that specialized, distilled models can be highly capable and that data quality matters most. The talk was given under Choi's prior affiliation with the University of Washington.
Another line of work focused on efficient inference on devices with limited memory. Apple's Mehrdad Farajtabar showed that exploiting activation sparsity, adaptive context loading, and hardware-aware design can speed up inference by roughly 4-5x on CPUs and 20-25x on GPUs, as described in the papers LLM in a Flash: Efficient Large Language Model Inference with Limited Memory and ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models.
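To see why activation sparsity reduces memory traffic, consider a toy feed-forward block in which only the weights matching nonzero ReLU activations need to be read and multiplied (a minimal sketch; the systems described in the papers are far more involved):

```python
import numpy as np

def ffn_dense(x, W1, W2):
    """Standard feed-forward block: W2 @ relu(W1 @ x)."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def ffn_sparse(x, W1, W2):
    """Same result, but touches only the active neurons.

    With ReLU, many activations are exactly zero, so only the matching
    columns of W2 matter; a flash-resident model would only need to
    load those weight columns into memory.
    """
    h = np.maximum(W1 @ x, 0.0)
    active = np.flatnonzero(h)        # indices of nonzero activations
    return W2[:, active] @ h[active]  # reduced matmul

rng = np.random.default_rng(0)
d, d_ff = 8, 64
x = rng.normal(size=d)
W1, W2 = rng.normal(size=(d_ff, d)), rng.normal(size=(d, d_ff))
assert np.allclose(ffn_dense(x, W1, W2), ffn_sparse(x, W1, W2))
print("active neurons:", np.sum(np.maximum(W1 @ x, 0) > 0), "of", d_ff)
```

With random weights roughly half the neurons are active; trained ReLU networks are often far sparser, which is what the speedups above exploit.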
Reasoning and Planning
Standard training methods enable LLMs to perform tasks such as question answering, translation, and summarization when given suitable prompts. More complex requests may need to be decomposed into smaller units that can be handled separately. Recent work on reasoning and planning focuses on improving LLMs' abilities to decompose problems and to apply techniques such as chain-of-thought across a variety of domains, including mathematics, tool use, and agents. In the talk “Evaluating and Improving Planning Capabilities of Language Models”, Apple's Navdeep presented a series of experiments on this topic, including:
- Evaluating LLM planning strategies for generating step-by-step problem solutions using the game of Twenty Questions, revealing their strategic patterns in multi-turn interactions;
- A hybrid approach in which a small LLM generates multiple plans and a large LLM executes the individual steps, successfully preserving planning capability without losing task performance;
- A latent language model that generates an entire paragraph before the individual tokens, improving coherence and contextuality. This idea is described in the paper PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model.
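The hybrid small-planner/large-executor setup in the second bullet can be sketched as a simple control loop. Here `plan_fn`, `execute_fn`, and `score_fn` are hypothetical stand-ins for the small model, the large model, and a plan-selection heuristic:

```python
# Sketch of a plan-then-execute loop: a small model proposes candidate
# plans, a large model carries out the chosen plan one step at a time.
def hybrid_solve(task, plan_fn, execute_fn, score_fn, n_plans=3):
    candidates = [plan_fn(task) for _ in range(n_plans)]  # small LLM
    best = max(candidates, key=score_fn)                  # pick a plan
    state = task
    for step in best:
        state = execute_fn(state, step)                   # large LLM per step
    return state

# Toy instantiation: the "task" is a number, steps are arithmetic ops.
plan_fn = lambda task: [("add", 3), ("mul", 2)]
execute_fn = lambda s, step: s + step[1] if step[0] == "add" else s * step[1]
score_fn = len  # trivial plan-scoring heuristic
print(hybrid_solve(1, plan_fn, execute_fn, score_fn))  # (1 + 3) * 2 = 8
```

The design point is the division of labor: the cheap model explores the plan space, while the expensive model is only invoked once per step of the selected plan.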
Another potential performance improvement was presented by William Wang of the University of California, Santa Barbara, in a talk on understanding the reasoning capabilities of language models. The talk proposed that LLM reasoning ability emerges from text-based connections between concepts (“reasoning paths”) seen in the pre-training data, and showed that the aggregation of reasoning paths is predictive of LLM reasoning performance. To learn more, see the paper Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation.
Yu Su of The Ohio State University highlighted a new trend in LLM-based planning research: the role of LLMs as agents that use external tools to extend their capabilities. As described in the paper LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error, effective tool use may require simulation capabilities so that LLMs can learn from their own mistakes.
Subbarao Kambhampati of Arizona State University struck a cautionary note, reiterating that, despite frequently overstated claims, the current generation of LLMs does not plan in any direct sense, but can nonetheless assist planning within a structured framework. This argument is laid out in the paper LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks.
Multilingual Models
In a satellite session involving researchers from the Asia-Pacific region, several promising approaches to advancing multilingual understanding in a data-efficient way were presented and discussed. A key focus was adapting predominantly English models to other languages.
Apple's Yen Yu presented the talk “Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-Resource Languages” at an ACL 2024 workshop. The talk showed how only a small amount of instruction tuning can teach an LLM to understand a previously unseen language, an approach that shows promise for supporting low-resource languages effectively and economically. To learn more, see the paper of the same title.
Naoaki Okazaki of the Tokyo Institute of Technology presented the talk “Developing Japanese LLMs through Continual Pre-training”, which follows a similar strategy of adapting existing models to Japanese. The talk covered many topics of practical importance in the multilingual setting, including strategies for augmenting the base model's vocabulary with tokens for the target languages in order to avoid out-of-vocabulary failures. Okazaki showed that continual pre-training of existing models can retain the base model's capabilities while adding Japanese language skills, knowledge, and cultural understanding. Because instruction tuning is typically the final stage of training, it can be problematic to perform this continual pre-training after instruction tuning is complete. Okazaki therefore also showed a method for efficiently reusing the instruction-following behavior of the base model in a black-box fashion, thereby avoiding the need to redo instruction tuning.
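Vocabulary augmentation of the kind described above is commonly implemented by appending target-language tokens to the embedding table and initializing each new row from existing subword embeddings, for example as their mean. The toy numpy sketch below illustrates that common heuristic; it is an assumption for illustration, not Okazaki's exact method:

```python
import numpy as np

def extend_vocab(emb, vocab, new_tokens, piece_ids):
    """Append new tokens to a toy embedding table.

    emb: (V, d) embedding matrix; vocab: token -> id dict.
    piece_ids[token] lists ids of the existing subword pieces the new
    token replaces; each new row is initialized as their mean.
    """
    rows = []
    for tok in new_tokens:
        vocab[tok] = len(vocab)
        rows.append(emb[piece_ids[tok]].mean(axis=0))
    return np.vstack([emb, np.array(rows)]), vocab

emb = np.eye(4)                        # toy 4-token embedding table
vocab = {"a": 0, "b": 1, "c": 2, "d": 3}
new_emb, vocab = extend_vocab(
    emb, vocab, ["ab"], {"ab": [0, 1]}  # "ab" <- mean of "a" and "b"
)
print(new_emb.shape, vocab["ab"])      # (5, 4) 4
```

Mean initialization keeps the new embeddings in-distribution, so continual pre-training starts from a sensible point rather than random vectors.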
Alignment
As LLMs are deployed more widely in production applications, progress on alignment to ensure reliable and safe output from these models becomes increasingly important, and several alignment challenges were examined at the workshop.
Apple's Hadas Kotek and the Technion's Hadas Orgad discussed these issues in the talk “Evaluating Safety Issues in LLMs: An Analysis of Perspectives”. They described various evaluation sets targeting gender bias, and showed that many current models exhibit bias not only in the text they generate but also in the decisions they make when judging gender-related claims. They also described how classifiers trained on Transformers' internal representations can predict the correctness of subsequent output, offering a way to reduce the amount of unreliable output, or at least to provide appropriate warnings. For more details, see the related paper on gender bias in LLMs.
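A classifier of the kind just described, predicting output correctness from internal representations, can be approximated by a simple logistic-regression probe. The sketch below uses synthetic stand-ins for the hidden states and correctness labels rather than a real model:

```python
import numpy as np

# Toy probe: logistic regression on (synthetic) hidden states that
# predicts whether the model's next output will be correct.
rng = np.random.default_rng(1)
d, n = 16, 400
H = rng.normal(size=(n, d))          # stand-in hidden states
w_true = rng.normal(size=d)
y = (H @ w_true > 0).astype(float)   # stand-in "output was correct" labels

w = np.zeros(d)
for _ in range(500):                 # plain gradient descent on the NLL
    p = 1 / (1 + np.exp(-(H @ w)))
    w -= 0.1 * H.T @ (p - y) / n

acc = ((1 / (1 + np.exp(-(H @ w))) > 0.5) == y).mean()
print(f"probe accuracy: {acc:.2f}")  # high on this separable toy data
```

In practice such probes are trained on real hidden states paired with human or automatic correctness judgments; the probe's output can then gate or flag unreliable generations.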
Accurate alignment with human interests requires data exemplifying both undesirable and desirable behavior. Apple's David Q. Sun described the design and construction of a dataset of controversial questions, as evaluated in the paper DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues. The dataset includes questions such as “Are we born good or evil?”, and the paper compares the performance of several current models on this data.
Tatsunori Hashimoto of Stanford University, in a talk on statistical approaches to trustworthy language modeling, addressed the grounding, or frequent lack of grounding, of model output in high-stakes settings (such as medicine or law). Hashimoto showed how probabilistic methods can help models calibrate their confidence levels, even when 100% certainty is unattainable.
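One standard probabilistic tool for this kind of confidence calibration is temperature scaling: fit a single scalar T on held-out data so that softmax(logits / T) matches observed outcomes. The minimal sketch below uses temperature scaling as an illustrative stand-in; the specific methods in the talk may differ:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimizing held-out negative log-likelihood."""
    def nll(T):
        p = softmax(logits / T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)

# Simulate an overconfident model: its logits are the true logits
# scaled up by 3x, while labels are drawn from the true distribution.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(500, 10))
labels = np.array([rng.choice(10, p=softmax(l[None])[0]) for l in true_logits])
T = fit_temperature(3.0 * true_logits, labels)
print(T)  # recovers a temperature near 3, undoing the overconfidence
```

Dividing logits by the fitted T leaves the model's rankings unchanged but brings its stated probabilities in line with how often it is actually right.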
Hashimoto also used classical statistical methods to detect contamination of widely used benchmarks and test sets in model training data, revealing that many public benchmarks are highly correlated with one another and quite possibly contaminated. Hashimoto encouraged the collection of a diverse range of new test sets, as well as rotation policies for the current public benchmarks, in order to counteract the incentives for test-set contamination.
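A first-pass contamination check can be as simple as measuring word n-gram overlap between a test item and the training corpus; the toy sketch below illustrates the idea, though the statistical tests described in the talk are more careful:

```python
# Toy contamination check: flag test examples whose word n-grams
# overlap heavily with the training corpus.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(test_item, corpus, n=3):
    """Fraction of the test item's n-grams that appear in the corpus."""
    test_ng = ngrams(test_item, n)
    if not test_ng:
        return 0.0
    corpus_ng = set().union(*(ngrams(doc, n) for doc in corpus))
    return len(test_ng & corpus_ng) / len(test_ng)

corpus = ["the quick brown fox jumps over the lazy dog"]
print(contamination_score("the quick brown fox jumps", corpus))       # 1.0
print(contamination_score("a completely unrelated sentence here", corpus))  # 0.0
```

Scores near 1.0 suggest the test item was likely seen verbatim during training, which is exactly the situation benchmark rotation policies aim to avoid.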
Safety
Beyond these alignment challenges, jailbreaking and prompt-injection threats pose a further challenge for current LLMs. Chaowei Xiao of the University of Wisconsin and NVIDIA showed how various defense strategies can be deployed to mitigate these safety and security risks, and how such attacks can be scaled up automatically, as described in the paper AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models.
As we continue to develop more natural and intuitive ways of interacting with personal devices, LLMs and related natural language understanding and generation technologies remain a central focus of both academia and industry, as this workshop demonstrated.
Workshop Resources
Related Work
AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models by Xiaogeng Liu, Nan Xu, Muhao Chen, and Chaowei Xiao
DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues by David Q. Sun, Hadas Kotek, Christopher Klein, and Jason D. Williams
Gated Linear Attention Transformers with Hardware-Efficient Training by Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, and Yoon Kim
Gender Bias and Stereotypes in LLMs by Hadas Kotek, Rikker Dockum, and David Q. Sun
LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks by Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Siddhant Bhambri, Lucas Saldyt, and Anil Murthy
LLM in a Flash: Efficient Large Language Model Inference with Limited Memory by Keivan Alizadeh, Iman Mirzadeh, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, and Mehrdad Farajtabar
LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error by Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, and Yu Su
PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model by Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, and Josh Susskind
ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models
Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-Resource Languages by Zhuoyuan Mao and Yen Yu
Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation by Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Wenhu Chen, and William Yang Wang
Acknowledgments
Many people contributed to this workshop, including Alex Acero, Ravi Anantha, Mehrdad Farajtabar, Stewina Liu, Stephen Pulman, David Q. Sun, Yen Yu, Yizhe Zhang, and Charlie Zhou.



