
An AI Agent for Modern Deep Learning Experimentation

One that reads your metrics, detects anomalies, applies predefined tuning rules, restarts jobs when needed, and logs every decision, without you staring at loss curves at 2 a.m.

In this article, I'll walk through a lightweight agent built for deep learning researchers and ML engineers that can:

• Detect failures automatically
• Reason visually over performance metrics
• Apply your predefined hyperparameter strategies
• Restart jobs
• Log every action and outcome

No architecture search. No AutoML. No aggressive rewriting of your codebase.

The implementation is deliberately small: containerize your training script, add a small LangChain-based agent, define hyperparameters in YAML, and express your preferences in markdown. You probably already do 50% of this.

Drop this agent onto your manual train.py workflow and go from 0️⃣ to 💯 in a day.

The problem with your current experiment workflow

🤔 You agonize endlessly over hyperparameters.

▶️ You run train.py.

🐛 You fix a bug in train.py.

🔁 You rerun train.py.

👀 You stare at TensorBoard.

🫠 You question reality.

🔄 You repeat.

Every deep learning / machine learning engineer in the field does this. Don't be embarrassed. Original photo by MART PRODUCTION via Pexels. GIF imagined by Grok

Stop staring at your model's numbers

You are not a Jedi. No amount of staring will magically make your [validation loss | classification accuracy | perplexity | any other metric you can name] go where you want it to.

Ever babysat a model overnight, watching for vanishing/exploding gradients or NaNs in a deep transformer-based network you can't easily trace, only for the problem never to show up? It is no fun at all.

How are you supposed to solve real research problems when most of your time goes to work that has to be done, but contributes very little to actual understanding?

If 70% of your day is consumed by operational drudgery, when does the thinking happen?

Switch to agent-driven experimentation

Most deep learning engineers and researchers I work with still run experiments manually. A meaningful share of each day goes to this: scanning Weights & Biases or TensorBoard to review last night's run, comparing runs, exporting metrics, tweaking hyperparameters, logging notes, restarting jobs. Then the cycle repeats.

Dry, tedious, repetitive work.

We'll offload this repetitive work so you can shift your focus to high-value work.

The concept of AutoML, frankly, is laughable.

Your [new] agent won't make decisions about how to change your network topology or add complex features; that's your job. It will take over the repetitive glue work that eats valuable time, at a small added cost.

Agent Driven Experiments (ADEs)

Moving from manual experimentation to an agent-driven workflow is easier than it first appears. No rewriting your stack, no heavyweight systems, no technical debt.

Image by Author

At its core, an ADE requires three steps:

  1. Containerize your existing training script
    • Wrap your train.py in a Docker container. No mental-model rework. No architecture changes. Just a reproducible execution boundary.
  2. Add a lightweight agent
    • Launch a small LangChain-based script that reads metrics from your dashboard, applies your preferences, decides when and where to restart, pause, or log, and schedule it with cron or any job scheduler.
  3. Define behavior and preferences in natural language
    • Use a YAML file for configuration and hyperparameters
    • Use a Markdown document to communicate with your agent

That's the whole system. Now, let's walk through each step.

Containerize your training script

One could argue you should do this anyway. It makes restarts and scheduling far simpler, and if you later move training to a Kubernetes cluster, the disruption to your existing process is minimal.

If you already do this, skip to the next section. If not, here's some useful code to get you started.

First, let's define the project structure we'll use with Docker.

your experiment/
├── scripts/
│   ├── train.py                 # Main training script
│   └── health_server.py         # Health check server
├── requirements.txt             # Python dependencies
├── Dockerfile                   # Container definition
└── run.sh                       # Script to start training + health check

We need to make sure your train.py script can load its config file from the cloud, letting the agent edit it when needed.

I recommend using GitHub for this. Here's an example of how to read a remote config file. The agent will have a companion tool for reading and modifying this config.

import os
import requests
import yaml
from box import Box

# add this to `train.py`
GITHUB_RAW = (
    "https://raw.githubusercontent.com/"
    "{owner}/{repo}/{ref}/{path}"
)

def load_config_from_github(owner, repo, path, ref="main", token=None):
    """Fetch a YAML config from GitHub's raw endpoint and wrap it in a Box."""
    url = GITHUB_RAW.format(owner=owner, repo=repo, ref=ref, path=path)

    headers = {}
    if token:
        # needed for private repositories
        headers["Authorization"] = f"Bearer {token}"

    r = requests.get(url, headers=headers, timeout=10)
    r.raise_for_status()

    return Box(yaml.safe_load(r.text))


config = load_config_from_github(...)

# use params throughout your `train.py` script
optimizer = Adam(lr=config.lr)

We also bundle a health-check server that runs alongside the main process. This lets container orchestrators like Kubernetes, or your agent, monitor the job's status without inspecting logs.

If the container's status changes unexpectedly, it can be restarted automatically. This also keeps the agent's checks cheap, since reading and summarizing log files can cost far more tokens than simply probing container health.

# health_server.py
import time
from pathlib import Path
from fastapi import FastAPI, Response

app = FastAPI()

HEARTBEAT = Path("/tmp/heartbeat")  # train.py writes a timestamp here
STATUS = Path("/tmp/status.json")   # optional richer state
MAX_AGE = 300  # seconds

def last_heartbeat_age():
    if not HEARTBEAT.exists():
        return float("inf")
    return time.time() - float(HEARTBEAT.read_text())

@app.get("/health")
def health():
    age = last_heartbeat_age()

    # stale -> training likely hung
    if age > MAX_AGE:
        return Response("stalled", status_code=500)

    # optional: detect NaNs or failure flags written by trainer
    if STATUS.exists() and "failed" in STATUS.read_text():
        return Response("failed", status_code=500)

    return {"status": "ok", "heartbeat_age": age}

if __name__ == "__main__":
    # run.sh invokes this file directly, so start uvicorn here
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8080)
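The server above only reports what the trainer publishes, so train.py has to keep the heartbeat fresh. A minimal sketch of the trainer side, assuming the same /tmp/heartbeat and /tmp/status.json paths and a hypothetical publish_heartbeat helper called once per training step:

```python
import json
import time
from pathlib import Path

HEARTBEAT = Path("/tmp/heartbeat")
STATUS = Path("/tmp/status.json")

def publish_heartbeat(loss: float) -> None:
    """Call once per training step: record the wall-clock time so
    /health can compute staleness, and flag NaN losses as failures."""
    HEARTBEAT.write_text(str(time.time()))
    state = "failed" if loss != loss else "ok"  # loss != loss is True only for NaN
    STATUS.write_text(json.dumps({"state": state, "loss": loss}))
```

With this in place, a hung training loop stops refreshing the heartbeat, and /health flips to 500 within MAX_AGE seconds.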

A small shell script, run.sh, starts the health_server process alongside train.py:

#!/bin/bash

# Start health check server in the background
python scripts/health_server.py &
# Capture its PID if you want to terminate later
HEALTH_PID=$!
# Start the main training script
python scripts/train.py

And with that, our Dockerfile, built on an NVIDIA base image so your container can use the host accelerator with zero friction. This example is for PyTorch, but you can extend it to JAX or TensorFlow if needed.

FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04

RUN apt-get update && apt-get install -y \
    python3 python3-pip git

RUN python3 -m pip install --upgrade pip

# Install PyTorch with CUDA support
RUN pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

WORKDIR /app

COPY . /app

CMD ["sh", "run.sh"]

✅ Containerized. Simple and small.

Add a lightweight agent

There are plenty of agent frameworks to choose from. For this agent, I like LangChain.

LangChain is a framework for building LLM-driven systems that combine reasoning with action. It makes it easy to compose model calls, manage memory, and integrate external capabilities so your LLM can do more than generate text.

In LangChain, tools are explicitly defined, schema-bound functions the model can call. Each tool is a discrete capability or task (e.g., reading a file, querying an API, modifying state).
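Conceptually, a tool is just a named function plus a description the model reads when choosing an action. A framework-free sketch (this Tool class and the read_preferences instance are illustrative stand-ins, not LangChain's actual classes):

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

@dataclass
class Tool:
    """A name the model can reference, a description it reads when
    deciding which action to take, and the function to invoke."""
    name: str
    description: str
    func: Callable[[str], str]

# Example: a preferences reader bound to a file path
read_preferences = Tool(
    name="read_preferences",
    description="Read thresholds and experiment notes from preferences.md",
    func=lambda path: Path(path).read_text(),
)
```

The real framework adds argument schemas and error handling on top, but the mental model is the same.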

For our agent to work, we first need to define the tools it can use to accomplish our goal.

Tool definitions

  1. read_preferences
    • Reads user preferences and experiment notes from a markdown document
  2. check_tensorboard
    • Uses Selenium with the Chrome webdriver to capture metric screenshots
  3. analyze_metric
    • Uses multimodal LLM reasoning to understand what's happening in a screenshot
  4. check_container_health
    • Probes our containerized experiment via its health check
  5. restart_container
    • Restarts the experiment if it's unhealthy or a hyperparameter needs changing
  6. modify_config
    • Edits the remote config file and commits it to GitHub
  7. write_memory
    • Appends the chain of actions to persistent memory (markdown)

This tool set defines our agent's operational boundaries. All interaction with our experiment goes through these tools, which keeps the behavior controlled and, hopefully, predictable.

Rather than listing these tools inline, here's a GitHub gist containing all the tools defined above. You can wire it into your agent or adapt it as you see fit.

The agent

To be completely honest, the first time I tried to work through the official LangChain docs, I nearly abandoned the whole idea.

They're overly verbose and more complicated than necessary. If you're new to agents, or simply don't want to wander the labyrinth that is the LangChain documentation, read on.

A maze? Random encounters? Tiny tooltips everywhere? I'll get through by defeating this worthy foe. Imagined by Grok

Briefly, here's how LangChain agents work:

Our agent uses a prompt to decide what to do at each step.

Steps are created dynamically by filling the prompt with the current context and previous results. Each LLM call [+ optional tool invocation] is a step, and its output feeds the next, forming a chain.

Applying this loop repeatedly with reasoning, the agent can think and take the appropriate targeted action across as many steps as needed. How many steps depends on the agent's reasoning ability and how clearly the termination condition is defined.

Lang-chain. Get it? 🤗
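Stripped of the framework, the loop just described can be sketched in a dozen lines of plain Python (llm and tools here are hypothetical stand-ins: a callable returning a decision dict, and a name-to-function map):

```python
def react_loop(llm, tools, task, max_steps=15):
    """Minimal ReAct-style chain: the LLM proposes an action, we run the
    matching tool, and the observation is appended to the context that
    feeds the next step."""
    context = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = llm("\n".join(context))
        if decision.get("tool") is None:  # the model decided it is done
            return decision.get("answer")
        observation = tools[decision["tool"]](decision.get("input"))
        context.append(f"Action: {decision['tool']} -> {observation}")
    return None  # hit the step budget without terminating
```

max_steps plays the same role as AgentExecutor's max_iterations: a hard stop in case the termination condition is never reached.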

The prompt

As noted, the prompt is the repetitive glue that maintains context across every LLM and tool request. You'll see placeholders (defined below) filled in when the agent is initialized.

We use a bit of LangChain's built-in memory, injected into every tool call. Beyond that, the agent fills in the blanks, deciding both the next step and which tool to call.

For readability, the core prompt is below. You can wire it directly into the agent script or load it from the filesystem before initialization.

"You are an experiment automation agent responsible for monitoring 
and maintaining ML experiments.

Current context:
{chat_history}

Your workflow:
1. First, read preferences from preferences.md to understand thresholds and settings
2. Check TensorBoard at the specified URL and capture a screenshot
3. Analyze key metrics (validation loss, training loss, accuracy) from the screenshot
4. Check Docker container health for the training container
5. Take corrective actions based on analysis:
   - Restart unhealthy containers
   - Adjust hyperparameters according to user preferences 
     and anomalous patterns, restarting the experiment if necessary
6. Log all observations and actions to memory

Important guidelines:
- Always read preferences first to get current configuration
- Use visual analysis to understand metric trends
- Be conservative with config changes (only adjust if clearly needed)
- Write detailed memory entries for future reference
- Check container health before and after any restart
- When modifying config, use appropriate values from preferences

Available tools: {tool_names}
Tool descriptions: {tools}

Current task: {input}

Think step by step and use tools to complete the workflow.
"""

Now, in ~100ish lines, we have our agent. The agent is initialized, then we define a chain of steps. At each step, the current_task instructions are filled into our prompt, and each tool updates the shared memory instance, ConversationSummaryBufferMemory.

We'll use OpenAI for this agent; however, LangChain offers alternatives, including bringing your own. If cost is a concern, there are open-source models that can be used here.

import os
from datetime import datetime
from pathlib import Path
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationSummaryBufferMemory

# Import tools from tools.py
from tools import (
    read_preferences,
    check_tensorboard,
    analyze_metric,
    check_container_health,
    restart_container,
    modify_config,
    write_memory
)

PROMPT = open("prompt.txt").read()


class ExperimentAutomation:
    def __init__(self, openai_key=None):
        """Initialize the agent"""
        self.llm = ChatOpenAI(
            temperature=0.8,
            model="gpt-4-turbo-preview",
            api_key=openai_key or os.getenv('OPENAI_API_KEY')
        )

        # Initialize memory for conversation context
        self.memory = ConversationSummaryBufferMemory(
            llm=self.llm,
            max_token_limit=32000,
            memory_key="chat_history",
            return_messages=True
        )

    def create_agent(self):
        """Create LangChain agent with imported tools"""
        # Bare lambdas lack the name/description attributes that
        # create_react_agent needs, so wrap each function as a Tool.
        from functools import partial
        from langchain.tools import Tool

        def as_tool(name, desc, fn):
            return Tool(name=name, description=desc, func=partial(fn, memory=self.memory))

        tools = [
            as_tool("read_preferences", "Read thresholds and notes from preferences.md", read_preferences),
            as_tool("check_tensorboard", "Capture a TensorBoard metrics screenshot", check_tensorboard),
            as_tool("analyze_metric", "Visually analyze metrics in a screenshot", analyze_metric),
            as_tool("check_container_health", "Probe the container health endpoint", check_container_health),
            as_tool("restart_container", "Restart the training container", restart_container),
            as_tool("modify_config", "Edit the remote config and commit to GitHub", modify_config),
            as_tool("write_memory", "Append an entry to persistent memory", write_memory),
        ]

        # Create the prompt template
        prompt = PromptTemplate.from_template(PROMPT)

        agent = create_react_agent(
            llm=self.llm,
            tools=tools,
            prompt=prompt
        )

        # Create agent executor with memory
        return AgentExecutor(
            agent=agent,
            tools=tools,
            memory=self.memory,
            verbose=True,
            max_iterations=15,
            handle_parsing_errors=True,
            return_intermediate_steps=True
        )

    def run_automation_cycle(self):
        """Execute the full automation cycle step by step"""
        write_memory(
            entry="Automation cycle started",
            category="SYSTEM",
            memory=self.memory
        )

        try:
            agent = self.create_agent()

            # Define the workflow as individual steps
            workflow_steps = [
                "Read preferences from preferences.md to capture thresholds and settings",
                "Check TensorBoard at the specified URL and capture a screenshot",
                "Analyze validation loss, training loss, and accuracy from the screenshot",
                "Check Docker container health for the training container",
                "Restart unhealthy containers if needed",
                "Adjust hyperparameters according to preferences and restart container if necessary",
                "Write all observations and actions to memory"
            ]

            # Execute each step individually
            for step in workflow_steps:
                result = agent.invoke({"input": step})

                # Write step output to memory
                if result.get("output"):
                    memory_summary = f"Step: {step}\nOutput: {result['output']}"
                    write_memory(entry=memory_summary, category="STEP", memory=self.memory)

            write_memory(
                entry="Automation cycle completed successfully",
                category="SYSTEM",
                memory=self.memory
            )

            return result

        except Exception as e:
            error_msg = f"Automation cycle failed: {str(e)}"
            write_memory(entry=error_msg, category="ERROR", memory=self.memory)
            raise


def main():
    try:
        automation = ExperimentAutomation(openai_key=os.environ["OPENAI_API_KEY"])
        result = automation.run_automation_cycle()

        if result.get('output'):
            print(f"\nFinal Output:\n{result['output']}")

        if result.get('intermediate_steps'):
            print(f"\nSteps Executed: {len(result['intermediate_steps'])}")

        print("\n✓ Automation cycle completed successfully")

    except Exception as e:
        print(f"\n✗ Automation failed: {e}")
        write_memory(entry=f"Critical failure: {str(e)}", category="ERROR")
        import sys
        sys.exit(1)


if __name__ == "__main__":
    main()

Now that we have our agent and tools, let's discuss how we actually express our intent as researchers, the most important piece.

Define behavior and preferences in natural language

As explained, describing what we want when we start an experiment is essential to getting the right behavior from the agent.

Even though reasoning models have come a long way and carry good context, they still have no way of knowing what a healthy policy-loss curve looks like in Hierarchical Policy Optimization, or what codebook perplexity should look like in a Vector Quantized Variational Autoencoder, something I was working on last week.

For this, we bootstrap any automated reasoning with preferences.md.

Let's start with some general settings:

# Experiment Preferences

This file defines my preferences for this experiment.
The agent should always read this first before taking any action.

---

## General Settings

- experiment_name: vqvae
- container_name: vqvae-train
- tensorboard_url: 
- memory_file: memory.md
- maximum_adjustments_per_run: 4
---
## More details
You can always add more sections here. The read_preferences task will parse
and reason over each section. 

Now, let's define the metrics of interest. This is the critical piece for visual reasoning.

Inside the markdown document, define yaml blocks that the agent will parse using the read_preferences tool. Adding this light structure helps when passing preferences as arguments to other tools.

```yaml
metrics:
  - name: perplexity
    pattern: should remain high through the course of training
    restart_condition: premature collapse to zero
    hyperparameters: |
        if collapse, increase `perplexity_weight` from current value to 0.2
  - name: prediction_loss
    pattern: should decrease over the course of training
    restart_condition: increases or stalls
    hyperparameters: |
        if increases, increase the `prediction_weight` value from current to 0.4
  - name: codebook_usage
    pattern: should remain fixed at > 90%
    restart_condition: drops below 90% for many epochs
    hyperparameters: |
        decrease the `codebook_size` param from 512 to 256. 

```
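One way read_preferences might pull these blocks out of the markdown is a small regex pass. This is a sketch under the assumption that every machine-readable section uses a yaml fence; extract_yaml_blocks is a hypothetical helper:

```python
import re

FENCE = "`" * 3  # the literal triple-backtick fence marker
YAML_BLOCK = re.compile(FENCE + r"yaml\n(.*?)" + FENCE, re.DOTALL)

def extract_yaml_blocks(markdown_text: str) -> list:
    """Return the raw contents of every yaml fenced block, ready to be
    handed to yaml.safe_load (PyYAML) for structured parsing."""
    return [m.group(1) for m in YAML_BLOCK.finditer(markdown_text)]
```

The free-text sections around the fences stay available to the LLM as plain context; only the fenced parts get structured parsing.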

The key idea is that preferences.md should provide enough structured, descriptive detail that the agent can:

• Compare its analysis against your intent: if the agent sees validation loss = 0.6 but the preferences say val_loss_threshold should be 0.5, it knows what corrective action to take.

• Read limits and thresholds (YAML or key-value pairs) for metrics, parameters, and container management.

• Understand intent or goal patterns described in human-readable sections, such as "only adjust the learning rate if validation loss exceeds the threshold and accuracy is growing."
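The first bullet reduces to a simple comparison once the thresholds are parsed. A sketch with hypothetical names (corrective_action, and a prefs dict keyed by metric name):

```python
def corrective_action(observed: dict, prefs: dict):
    """Compare each observed metric against its preference threshold and
    return the action string from preferences.md, or None if all is well."""
    for name, value in observed.items():
        rule = prefs.get(name)
        if rule is not None and value > rule["threshold"]:
            return rule["action"]
    return None
```

For example, with prefs = {"val_loss": {"threshold": 0.5, "action": "lower the learning rate"}}, an observed val_loss of 0.6 triggers the action; in practice the LLM performs this comparison from the parsed preferences rather than hard-coded logic.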

Putting it all together

Now that we have a containerized experiment + agent, we need to schedule the agent. This is as simple as running the agent process from a cron job. The entry below runs our agent once an hour, a reasonable tradeoff between cost (in tokens) and responsiveness.

0 * * * * /usr/bin/python3 /path/to/agent.py >> /var/log/agent.log 2>&1

I've found this agent doesn't need the latest reasoning model, and works well with previous generations from Anthropic and OpenAI.

Wrapping up

If research time is limited, it should be spent on research, not on experiment babysitting.

Your agent should handle monitoring, restarts, and parameter tuning without constant supervision. When the drudgery disappears, what remains is the real work: forming hypotheses, designing better models, and testing the ideas that matter.

Hopefully, this agent frees you up a bit to dream up your next big idea. Enjoy.

