
Building a Complete, API-Free Machine Learning Pipeline Locally with MLE-Agent and Ollama

In this tutorial, we show how to pair MLE-Agent with Ollama to build a complete, API-free machine learning workflow. We set up a reproducible environment on Google Colab, generate a small synthetic dataset, and direct the agent to write a training script. To make the pipeline robust, we sanitize common errors in the generated code, verify that the imports are correct, and add a guaranteed fallback script. In this way, we keep the workflow smooth while running entirely offline. Check out the full codes here.

import os, re, time, textwrap, subprocess, sys
from pathlib import Path


def sh(cmd, check=True, env=None, cwd=None):
    """Run a shell command, echo its output, and raise on failure."""
    print(f"$ {cmd}")
    p = subprocess.run(cmd, shell=True, env={**os.environ, **(env or {})} if env else None,
                       cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    print(p.stdout)
    if check and p.returncode != 0: raise RuntimeError(p.stdout)
    return p.stdout

We define a small helper for running shell commands. It prints each command, captures the combined stdout and stderr, and raises an error when the command fails, so we can watch progress and spot failures in real time. Check out the full codes here.
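For example, we can exercise the helper with a couple of harmless commands (illustrative only, not part of the pipeline):

sh("python --version")           # confirm which interpreter we are running under
sh("nvidia-smi", check=False)    # check=False so a CPU-only runtime doesn't abort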

WORK=Path("/content/mle_colab_demo"); WORK.mkdir(parents=True, exist_ok=True)
PROJ=WORK/"proj"; PROJ.mkdir(exist_ok=True)
DATA=WORK/"data.csv"; MODEL=WORK/"model.joblib"; PREDS=WORK/"preds.csv"
SAFE=WORK/"train_safe.py"; RAW=WORK/"agent_train_raw.py"; FINAL=WORK/"train.py"
MODEL_NAME=os.environ.get("OLLAMA_MODEL","llama3.2:1b")


sh("pip -q install --upgrade pip")
sh("pip -q install mle-agent==0.4.* scikit-learn pandas numpy joblib")


sh("curl -fsSL  | sh")
sv = subprocess.Popen("ollama serve", shell=True)
time.sleep(4); sh(f"ollama pull {MODEL_NAME}")

We set up our Colab working directories and file paths, and install the exact Python packages we need. We then install and start Ollama locally, pull the selected model, and keep the server running so that the agent can generate code without any external API keys. Check out the full codes here.
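The fixed time.sleep(4) can be flaky on slow runtimes. Below is a minimal sketch of a readiness check that could replace it, assuming Ollama's default local endpoint http://127.0.0.1:11434 (the helper name is our own):

import urllib.request, urllib.error

def wait_for_ollama(url="http://127.0.0.1:11434", timeout=60):
    # Poll the local Ollama server until it responds or we time out.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as r:
                if r.status == 200:   # the root endpoint answers once the server is up
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(1)             # not ready yet; retry
    raise RuntimeError("Ollama server did not become ready in time")

# wait_for_ollama() could then run before f"ollama pull {MODEL_NAME}"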

import numpy as np, pandas as pd
np.random.seed(0)
n=500; X=np.random.rand(n,4); y=(X @ np.array([0.4,-0.2,0.1,0.5]) + 0.15*np.random.randn(n) > 0.55).astype(int)
pd.DataFrame(np.c_[X,y], columns=["f1","f2","f3","f4","target"]).to_csv(DATA, index=False)


env = {"OPENAI_API_KEY":"", "ANTHROPIC_API_KEY":"", "GEMINI_API_KEY":"",
      "OLLAMA_HOST":" "MLE_LLM_ENGINE":"ollama","MLE_MODEL":MODEL_NAME}
prompt=f"""Return ONE fenced python code block only.
Write train.py that reads {DATA}; 80/20 split (random_state=42, stratify);
Pipeline: SimpleImputer + StandardScaler + LogisticRegression(class_weight="balanced", max_iter=1000, random_state=42);
Print ROC-AUC & F1; print sorted coefficient magnitudes; save model to {MODEL} and preds to {PREDS};
Use only sklearn, pandas, numpy, joblib; no extra text."""
def extract(txt:str)->str|None:
    txt=re.sub(r"\x1B\[[0-?]*[ -/]*[@-~]", "", txt)   # strip ANSI escape sequences
    m=re.search(r"```(?:python)?\s*([\s\S]*?)```", txt, re.I)
    if m: return m.group(1).strip()
    if txt.strip().lower().startswith("python"): return txt.strip()[6:].strip()
    m=re.search(r"(?:^|\n)(from\s+[^\n]+|import\s+[^\n]+)([\s\S]*)", txt)
    return (m.group(1)+m.group(2)).strip() if m else None


out = sh(f'printf %s "{prompt}" | mle chat', check=False, cwd=str(PROJ), env=env)
code = extract(out)
if not code:  # fall back to querying Ollama directly if mle chat yields no code block
    out = sh(f'printf %s "{prompt}" | ollama run {MODEL_NAME}', check=False, env=env)
    code = extract(out) or out
Path(RAW).write_text(code or "", encoding="utf-8")

We generate the dataset with a small script and set the environment variables so that we can drive MLE-Agent with local Ollama. We build a strict prompt for train.py and define an extraction helper that pulls only the fenced Python code from the reply. We then query MLE-Agent (falling back to raw Ollama when needed) and persist the raw script to disk ahead of sanitization. Check out the full codes here.
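As a quick sanity check, we can run the extractor on a hypothetical model reply to confirm it strips the fences and surrounding chatter:

sample = "Here is the script:\n```python\nimport pandas as pd\nprint('ok')\n```\nDone!"
print(extract(sample))   # -> import pandas as pd\nprint('ok')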

def sanitize(src:str)->str:
    if not src: return ""
    s = src
    s = re.sub(r"\r", "", s)                               # drop carriage returns
    s = re.sub(r"^python\b", "", s.strip(), flags=re.I).strip()
    # Rewrite the sklearn import paths the model most often gets wrong.
    fixes = {
        r"from\s+sklearn\.pipeline\s+import\s+SimpleImputer": "from sklearn.impute import SimpleImputer",
        r"from\s+sklearn\.preprocessing\s+import\s+SimpleImputer": "from sklearn.impute import SimpleImputer",
        r"from\s+sklearn\.pipeline\s+import\s+StandardScaler": "from sklearn.preprocessing import StandardScaler",
        r"from\s+sklearn\.preprocessing\s+import\s+ColumnTransformer": "from sklearn.compose import ColumnTransformer",
        r"from\s+sklearn\.pipeline\s+import\s+ColumnTransformer": "from sklearn.compose import ColumnTransformer",
    }
    for pat,rep in fixes.items(): s = re.sub(pat, rep, s)
    # Prepend any imports the generated code uses but never declares.
    if "SimpleImputer" in s and "from sklearn.impute import SimpleImputer" not in s:
        s = "from sklearn.impute import SimpleImputer\n"+s
    if "StandardScaler" in s and "from sklearn.preprocessing import StandardScaler" not in s:
        s = "from sklearn.preprocessing import StandardScaler\n"+s
    if "ColumnTransformer" in s and "from sklearn.compose import ColumnTransformer" not in s:
        s = "from sklearn.compose import ColumnTransformer\n"+s
    if "train_test_split" in s and "from sklearn.model_selection import train_test_split" not in s:
        s = "from sklearn.model_selection import train_test_split\n"+s
    if "joblib" in s and "import joblib" not in s: s = "import joblib\n"+s
    return s


san = sanitize(code)


safe = textwrap.dedent(f"""
import pandas as pd, numpy as np, joblib
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.compose import ColumnTransformer


DATA=Path("{DATA}"); MODEL=Path("{MODEL}"); PREDS=Path("{PREDS}")
df=pd.read_csv(DATA); X=df.drop(columns=["target"]); y=df["target"].astype(int)
num=X.columns.tolist()
pre=ColumnTransformer([("num",Pipeline([("imp",SimpleImputer()),("sc",StandardScaler())]),num)])
clf=LogisticRegression(class_weight="balanced", max_iter=1000, random_state=42)
pipe=Pipeline([("pre",pre),("clf",clf)])
Xtr,Xte,ytr,yte=train_test_split(X,y,test_size=0.2,random_state=42,stratify=y)
pipe.fit(Xtr,ytr)
proba=pipe.predict_proba(Xte)[:,1]; pred=(proba>=0.5).astype(int)
print("ROC-AUC:",round(roc_auc_score(yte,proba),4)); print("F1:",round(f1_score(yte,pred),4))
coef=pd.Series(pipe.named_steps["clf"].coef_.ravel(), index=num).abs().sort_values(ascending=False)
print("Top coefficients by |magnitude|:\\n", coef.to_string())
joblib.dump(pipe,MODEL)
pd.DataFrame({{"y_true":yte.reset_index(drop=True),"y_prob":proba,"y_pred":pred}}).to_csv(PREDS,index=False)
print("Saved:",MODEL,PREDS)
""").strip()

We sanitize the agent-generated script by rewriting the scikit-learn imports it most often gets wrong and prepending any imports it left out. We also prepare a known-good safe training script to fall back on if the generated code is unusable. Check out the full codes here.
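To see the sanitizer at work, we can feed it a snippet containing an import path the model commonly hallucinates (hypothetical input, for illustration):

bad = "from sklearn.pipeline import SimpleImputer\nimp = SimpleImputer()"
print(sanitize(bad))   # the import is rewritten to: from sklearn.impute import SimpleImputer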

chosen = san if ("import " in san and "sklearn" in san and "read_csv" in san) else safe
Path(SAFE).write_text(safe, encoding="utf-8")
Path(FINAL).write_text(chosen, encoding="utf-8")
print("n=== Using train.py (first 800 chars) ===n", chosen[:800], "n...")


sh(f"python {FINAL}")
print("nArtifacts:", [str(p) for p in WORK.glob('*')])
print("✅ Done — outputs in", WORK)

We decide whether to use the sanitized agent code or fall back to the safe script, write both to disk, and preview the chosen train.py. We then execute it and list the artifacts it produces.

We conclude by running the sanitized or safe version of the training script, which reports ROC-AUC and F1, prints the coefficient magnitudes, and saves all artifacts. With this process, we show how to combine a local LLM with a classic ML pipeline while preserving reliability and safety. The result is a workflow that lets us control execution, avoid external API keys, and still perform a real end-to-end training run.
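Once the run completes, the saved pipeline can be reloaded for inference; a minimal sketch, assuming the paths used in this tutorial:

import joblib, pandas as pd

pipe = joblib.load("/content/mle_colab_demo/model.joblib")    # preprocessing + model in one object
new = pd.DataFrame([[0.2, 0.7, 0.1, 0.9]], columns=["f1", "f2", "f3", "f4"])
print("P(target=1):", pipe.predict_proba(new)[0, 1])          # probability for the positive class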


Check out the full codes here. Feel free to check out our GitHub page for tutorials, codes, and notebooks. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and subscribe to our newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.

