Feature Engineering with LLMs: Techniques & Python Examples

Feature engineering is the backbone of robust machine learning systems, but the typical process tends to be manual, time-consuming, and dependent on domain knowledge. Even when it works well, it can miss deeper signals hidden in unstructured data such as text, logs, and user interactions.
Large Language Models change this by helping machines understand language, extract meaning, and generate rich features automatically. This shift opens new avenues for building smarter ML pipelines. This article provides a practical guide to feature engineering using LLMs.
What is Feature Engineering with LLMs?
Feature engineering with LLMs uses large language models to enrich and transform the input features that machine learning systems need. Instead of relying solely on manually crafted transformations, your system uses LLMs to extract semantic meaning and structured signals from raw data.
This new approach to feature engineering empowers engineers to improve machine learning models through methods that combine both numeric transformations and context-aware representations.
LLM-based feature engineering uses pretrained language models to convert raw inputs into structured, high-dimensional representations that help models achieve better performance. The models use context to determine relationships between elements while creating features that capture meaning beyond statistical patterns.
How It Differs from Traditional Feature Engineering
Traditional feature engineering creates rules and applies aggregation and transformation techniques to build features. LLM-based feature engineering extracts meaning, user intent, and relational information in places where hand-written code fails to capture them.
The Shift: From Manual Features to Semantic Features
Machine learning has traditionally improved models with hand-crafted features such as one-hot vectors, TF-IDF, and fixed numeric values. Hand-crafted features come with limitations: they ignore context, require specialised knowledge, and miss subtle distinctions. TF-IDF treats words as independent units, losing word relationships and sentiment meaning.
- Limitations of traditional approaches: Manual feature construction requires ongoing system maintenance and specific domain knowledge. It fails to capture both common-sense knowledge and complex interactions. A bag-of-words model cannot tell that "cold food" signals negative sentiment. Human analysts must spend considerable time identifying every edge case.
- How LLMs bring context: LLMs draw on training over vast text corpora to acquire knowledge and recognise patterns. They understand linguistic context thanks to built-in world knowledge and an ability to grasp implicit messages. With LLMs, the system extracts semantic features from data, automatically generating features that identify attributes such as sentiment, topic, and risk category.
- Why this shift matters: The shift matters because semantic features often outperform hand-crafted ones on complex tasks. The system needs fewer brittle heuristics, which leads to faster iteration and experimentation.
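The "cold food" limitation above is easy to reproduce: a bag-of-words view scores two paraphrases as nearly unrelated because it only counts shared tokens. A minimal pure-Python sketch (the bow and jaccard helpers are illustrative, not from any library):

```python
def bow(text):
    """Tokenise a phrase into a set of lowercase words."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two paraphrases of the same complaint share only the word "the",
# so bag-of-words similarity is near zero despite identical meaning.
sim = jaccard(bow("the food was cold"), bow("the meal felt chilly"))
print(round(sim, 3))  # 0.143
```

An embedding model would place these two sentences close together in vector space, which is exactly the gap semantic features close.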
Key Techniques in Feature Engineering with LLMs
This section demonstrates the key techniques with code examples. We create a small sample dataset and show how features are derived.
Embeddings as Features
LLMs generate dense semantic vectors from text. The resulting embeddings act as numeric features that let a model capture meaning beyond raw word frequency. We can use a transformer model to create 384-dimensional sentence embeddings with a few lines of code.
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
sentences = ["I love machine learning", "The movie was fantastic"]
embeddings = model.encode(sentences)
print("Embeddings shape:", embeddings.shape)
Output:
Embeddings shape: (2, 384)
The output shape (2, 384) shows the two sentences encoded as dense 384-dimensional vectors (one per sentence). These vectors capture the semantic properties of the text, including related meanings and sentiment-bearing expressions.
When to use embeddings versus traditional features:
from sklearn.feature_extraction.text import TfidfVectorizer
docs = [
    "The cat sat on the mat",
    "The dog ate the cat",
]
# Traditional TF-IDF: sparse bag-of-words
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)
# LLM embeddings: dense semantic features
X_emb = model.encode(docs)
print("TF-IDF feature shape:", X_tfidf.shape)
print("LLM embedding feature shape:", X_emb.shape)
Output:
TF-IDF feature shape: (2, 7)
LLM embedding feature shape: (2, 384)
TF-IDF produces a sparse (2 × 7) matrix over the seven unique terms in the two documents, while the LLM embeddings are dense (2 × 384) vectors. The embeddings represent words in context: they capture, for example, that "cat" and "dog" are related animals. Prefer semantic embeddings when meaning matters; traditional features still work well for simple numeric and categorical data where minimal code is needed.
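Once text is represented as dense vectors, cosine similarity is the standard way to compare them. A minimal NumPy sketch, using small made-up 4-dimensional vectors in place of real 384-dimensional embeddings:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors: "cat" and "dog" point in similar directions,
# "car" points elsewhere, mimicking how real embeddings behave.
cat = np.array([0.9, 0.1, 0.3, 0.0])
dog = np.array([0.8, 0.2, 0.4, 0.1])
car = np.array([0.0, 0.9, 0.0, 0.8])

print(cosine(cat, dog) > cosine(cat, car))  # True: related words score higher
```

The same function applies unchanged to the (2, 384) embeddings produced by `model.encode`.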
LLM-Based Feature Extraction
We can instruct an LLM to extract specific structured pieces of information from text. The model's outputs can then be split out into individual features.
from transformers import pipeline
reviews = [
    "The phone battery lasts all day and performance is smooth",
    "The laptop overheats and is very slow",
]
extractor = pipeline("text2text-generation", model="google/flan-t5-base")
prompt = """
Extract features: sentiment, product_issue, performance
Text: The laptop overheats and is very slow
"""
result = extractor(prompt, max_length=50)
print(result[0]["generated_text"])
Output:
sentiment: negative, product_issue: overheating, performance: slow
Here the prompt instructs the model to extract sentiment, product_issue, and performance from the review, and the model returns the structured features as key-value pairs. Sentiment, product issue, and performance now exist as separate columns that we can feed into a classification pipeline.
A JSON schema can be enforced in the prompt to ensure consistent outputs. For example:
prompt = """
Extract in JSON format:
{
"sentiment": "",
"issue": "",
"performance": ""
}
Text: The phone battery lasts all day and performance is smooth
"""
result = extractor(prompt, max_length=100)
print(result[0]["generated_text"])
Output:
{
    "sentiment": "positive",
    "issue": "none",
    "performance": "smooth"
}
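In practice the model does not always return syntactically valid JSON, so the parsing step is worth guarding before the values become columns. A minimal sketch (parse_llm_features is our own helper, not part of transformers):

```python
import json

REQUIRED = ("sentiment", "issue", "performance")

def parse_llm_features(raw):
    """Parse an LLM's JSON reply; fall back to empty fields on bad output."""
    try:
        parsed = json.loads(raw)
        if not isinstance(parsed, dict):
            parsed = {}
    except json.JSONDecodeError:
        parsed = {}
    # Guarantee every expected key exists before it becomes a column.
    return {k: parsed.get(k, "") for k in REQUIRED}

print(parse_llm_features('{"sentiment": "positive", "issue": "none", "performance": "smooth"}'))
print(parse_llm_features("sorry, I cannot do that"))  # graceful fallback
```

Validating here keeps a single malformed generation from crashing the whole feature pipeline.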
Semantic Feature Generation
LLMs can generate new descriptive attributes, applicable both to whole rows and to individual data values.
data = [
    {"review": "Great camera quality but battery drains fast"},
    {"review": "Affordable and durable, good for daily use"},
]
prompt = """
Generate a new feature called 'user_intent' from this review:
Review: Great camera quality but battery drains fast
"""
result = extractor(prompt, max_length=50)
print(result[0]["generated_text"])
Output:
user_intent: photography-focused but concerned about battery
The LLM infers the user's intent from the review by analysing its text. It turns raw text into a structured feature that reflects the user's focus on camera quality and concern about battery life. Practitioners can add such new columns to improve a model's grasp of user behaviour patterns.
Context-Aware Feature Creation
LLMs can create features by using their background knowledge to interpret a value within a given context. For example, an LLM can use postal code details to infer the corresponding region.
prompt = """
Infer customer type:
Review: Affordable and durable, good for daily use
"""
result = extractor(prompt, max_length=50)
print(result[0]['generated_text'])
Output:
customer_type: budget-conscious practical user
The LLM uses the content of the review to decide which customer segment the reviewer belongs to. It converts the input text into a standardised label reflecting the user's two main priorities: affordable and durable products. This gives models a new feature for segmenting users by their behaviour patterns and specific preferences.
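Free-text labels such as customer_type still need to be encoded before a classical model can consume them. A minimal sketch using pandas one-hot encoding (the labels here are hypothetical LLM outputs):

```python
import pandas as pd

# Hypothetical customer_type labels as an LLM might return per review.
df = pd.DataFrame({
    "review": [
        "Great camera quality but battery drains fast",
        "Affordable and durable, good for daily use",
    ],
    "customer_type": [
        "photography enthusiast",
        "budget-conscious practical user",
    ],
})

# One-hot encode the inferred label into model-ready binary columns.
encoded = pd.get_dummies(df["customer_type"], prefix="customer")
print(encoded.columns.tolist())
```

The resulting binary columns can be concatenated with any other features, just like the embeddings shown later.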
Hybrid Feature Spaces (Multi-Modal Pipelines)
Combining Tabular, Text, and Embedding Features
We start with numeric features and semantic features, then concatenate them into a single hybrid vector.
import pandas as pd
import numpy as np
df = pd.DataFrame({
    "price": [1000, 500],
    "rating": [4.5, 3.0],
    "review": [
        "Excellent performance and battery life",
        "Slow and heats up quickly",
    ],
})
embeddings = model.encode(df["review"].tolist())
final_features = np.hstack([
    df[["price", "rating"]].values,
    embeddings,
])
print("Final feature shape:", final_features.shape)
Output:
Final feature shape: (2, 386)
The combined dataset now contains 2 rows with 386 features each. The original tabular data (price and rating) is concatenated with the text embeddings of the reviews. Combining structured data with semantic text information yields richer features, which generally translates into better model performance.
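One practical caveat when concatenating: price values around 1000 dwarf embedding components around 0.05, which can let a single column dominate distance-based or regularised models. Standardising each column first is usually advisable; a minimal NumPy sketch with illustrative numbers:

```python
import numpy as np

# Two hybrid rows: price, rating, then two embedding components.
final_features = np.array([
    [1000.0, 4.5, 0.023, -0.045],
    [500.0, 3.0, 0.051, 0.012],
])

# Standardise column-wise: zero mean, unit variance per feature.
mean = final_features.mean(axis=0)
std = final_features.std(axis=0)
scaled = (final_features - mean) / std

print(scaled.mean(axis=0))  # each column is now centred at ~0
```

In a real pipeline the same effect is typically achieved with a fitted scaler so the identical transformation can be applied to new data.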
Multi-Modal Feature Pipelines
The same combination step can be wrapped in a reusable function that builds the hybrid feature vector row by row.
def feature_pipeline(row):
    embedding = model.encode([row['review']])[0]
    return list(row[['price', 'rating']]) + list(embedding)
features = df.apply(feature_pipeline, axis=1)
print(features.iloc[0][:5])
Output:
[1000, 4.5, 0.023, -0.045, 0.067]
Each row is now a single feature list: the first two entries are the tabular values (price and rating), followed by the 384 embedding components; the printed slice shows the first five entries of the first row. Packaging the logic as a function makes it easy to apply the same pipeline to new data.
End-to-End Workflow (Data → LLM → Features → Model)
In this section we walk through a workflow that uses Transformers to extract features for a simple classifier. As an example, consider a sentiment classification task. First, we create a sample dataset.
import pandas as pd
df = pd.DataFrame({
    "review": [
        "Amazing product, delivery was super fast and packaging was perfect",
        "Terrible quality, broke after one use and support was unhelpful",
        "Good value for money, does what it promises",
        "The product is okay, not great but not bad either",
        "Excellent performance, exceeded my expectations completely",
        "Very slow delivery and the product quality is disappointing",
        "I love the design and build quality, highly recommended",
        "Waste of money, stopped working within two days",
        "Decent product for the price, but could be improved",
        "Customer service was helpful but the product is average",
        "Fantastic experience, will definitely buy again",
        "The item arrived late and was damaged",
        "Pretty good overall, satisfied with the purchase",
        "Not worth the price, quality feels cheap",
        "Absolutely fantastic product, very happy with it",
        "Works fine but nothing exceptional",
        "Horrible experience, I want a refund",
        "The features are useful and performance is smooth",
        "Mediocre quality, expected better at this price",
        "Superb build quality and fast performance",
        "Product is fine, delivery took too long",
        "Loved it, exactly what I needed",
        "It’s okay, does the job but has some issues",
        "Worst purchase ever, completely useless",
        "Very good quality and quick delivery",
        "Average product, nothing special",
        "Highly durable and reliable, great buy",
        "Poor packaging and damaged item received",
        "Satisfied with the purchase, decent performance",
        "Not happy with the product, quality is subpar",
    ],
    "label": [
        1, 0, 1, 1, 1,
        0, 1, 0, 1, 1,
        1, 0, 1, 0, 1,
        1, 0, 1, 0, 1,
        0, 1, 1, 0, 1,
        1, 1, 0, 1, 0,
    ],
})
Next, we build a pipeline that performs the feature engineering for this task, in this case sentiment analysis.
from transformers import pipeline
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import numpy as np

# Step 1: Initialize models
llm = pipeline("text2text-generation", model="google/flan-t5-base")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Step 2: Feature Extraction Agent
def extract_features(text):
    prompt = f"Extract sentiment (positive/negative): {text}"
    result = llm(prompt, max_length=20)[0]["generated_text"]
    return 1 if "positive" in result.lower() else 0

# Step 3: Build Feature Set
df["sentiment_feature"] = df["review"].apply(extract_features)
embeddings = embedder.encode(df["review"].tolist())
X = np.hstack([
    df[["sentiment_feature"]].values,
    embeddings,
])
y = df["label"]

# Step 4: Train Model
X_train, X_test, y_train, y_test = train_test_split(
    X,
    y,
    test_size=0.2,
)
model = LogisticRegression()
model.fit(X_train, y_train)

# Step 5: Evaluate
accuracy = model.score(X_test, y_test)
print("Model Accuracy:", accuracy)
Output:
Model Accuracy: 0.95
This demonstrates a complete end-to-end system. The LLM extracts a sentiment feature from every review, which is combined with the embeddings to form a rich input representation. This feature engineering step helps the model understand the text better, resulting in improved sentiment prediction accuracy.
Real-World Applications
Applying LLMs to feature engineering is reshaping work across industries. The examples below show its versatility in different operational domains.
- Classification and NLP systems: LLMs provide enriched text features that support sentiment analysis, chatbot development, and document classification tasks.
- Tabular machine learning: LLMs let tabular models benefit from sources they could not otherwise use, converting unstructured side data into usable features that a tabular model can understand.
- Domain-specific use cases: LLM-derived features are finding new applications in domains such as finance, healthcare, and insurance. In insurance pricing, LLMs let actuaries generate automatically features that previously required human experts; for example, an LLM can use car model descriptions to derive risk scores that flag certain vehicles as "boy racer" models.
Limitations and Challenges
LLM-based feature engineering brings real benefits, but it also creates obstacles that must be addressed. Everyone on the team needs to understand the pitfalls involved. These include:
- Reliability and reproducibility: LLM outputs can behave inconsistently, since model updates and even small prompt changes may require re-validating the whole pipeline. Stable behaviour calls for prompt logging and zero-temperature settings. Organisations also face operational challenges around API availability and version control.
- Bias and interpretability: LLM-derived features are hard to interpret because dense embeddings act as opaque building blocks. The system can also introduce biases inherited from its training data; an LLM may quietly produce a feature that associates the word "doctor" with a particular gender. Audit processes should therefore review generated features for fairness.
- Over-reliance on LLM features: LLMs offer seamless automation that can mask harmful outcomes behind a facade of reliability, and they produce irrelevant features when given poorly designed prompts. LLM features should complement, not replace, core domain features.
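The reproducibility concerns above motivate caching extracted features so that one input always maps to one output (and paid API calls are not repeated). A minimal sketch, where llm_extract is a hypothetical stand-in for a real temperature-zero LLM call:

```python
from functools import lru_cache

def llm_extract(text):
    """Hypothetical stand-in for a real (deterministic, temperature-0) LLM call."""
    return "positive" if "good" in text.lower() else "negative"

@lru_cache(maxsize=None)
def cached_sentiment_feature(text):
    # Identical inputs are computed once: repeat calls are served
    # from the cache, keeping the feature reproducible and cheap.
    return llm_extract(text)

print(cached_sentiment_feature("Good value for money"))   # computed
print(cached_sentiment_feature("Good value for money"))   # cached
print(cached_sentiment_feature.cache_info().hits)         # 1
```

In production the cache would typically be persisted (keyed by text plus prompt and model version) so that re-runs of the pipeline reuse earlier extractions.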
Conclusion
Machine learning development is undergoing a major shift through feature engineering with LLMs. The emphasis is moving away from manual data-transformation work toward automatically creating features through semantic understanding. This approach lets practitioners analyse complex, unstructured datasets in entirely new ways.
Success still requires careful implementation along with thorough testing and validation. Combining LLM capabilities with human expertise lets professionals build AI systems that are more powerful, robust, and effective.
Frequently Asked Questions
Q. What is feature engineering with LLMs?
A. It uses LLMs to transform raw data into semantic, structured features for machine learning models.
Q. How do embeddings serve as features?
A. They convert text into dense vectors that capture meaning, context, and relationships beyond simple word frequency.
Q. What are the risks of LLM-based features?
A. LLM-based features can be inconsistent, biased, hard to interpret, and risky when used without validation.



