Building and Optimizing an Advanced TPOT AutoML Pipeline with Full Control Over Customization and Performance

In this tutorial, we show how to harness TPOT to evolve and optimize machine-learning pipelines. By working directly in Google Colab, we keep the setup lightweight, reproducible, and accessible. We walk through loading data, defining a custom scorer, tailoring the search space to advanced models such as XGBoost, and setting up a cross-validation strategy. As we proceed, we watch evolutionary algorithms search for the best-performing pipelines, gaining transparency along the way through Pareto fronts and checkpoints.
!pip -q install tpot==0.12.2 xgboost==2.0.3 scikit-learn==1.4.2 graphviz==0.20.3
import os, json, math, time, random, numpy as np, pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import make_scorer, f1_score, classification_report, confusion_matrix
from sklearn.pipeline import Pipeline
from tpot import TPOTClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
from xgboost import XGBClassifier
SEED = 7
random.seed(SEED); np.random.seed(SEED); os.environ["PYTHONHASHSEED"]=str(SEED)
We install the required libraries and import all the essential modules for data handling, model definition, and pipeline construction. We set random seeds to ensure our results remain reproducible every time we run the notebook.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=SEED)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)
def f1_cost_sensitive(y_true, y_pred):
    return f1_score(y_true, y_pred, average="binary", pos_label=1)
cost_f1 = make_scorer(f1_cost_sensitive, greater_is_better=True)
Here, we load the breast cancer dataset and split it into training and test sets while preserving the class balance. We standardize the features and define a custom F1-based scorer, allowing us to evaluate pipelines with an emphasis on correctly identifying positive cases.
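Before handing the search over to TPOT, it can help to establish a reference point. The following is a minimal sketch (our addition, not part of the original walkthrough) that scores a plain logistic-regression baseline with the custom scorer, reusing X_tr_s, y_tr, and cost_f1 from the cells above:
from sklearn.model_selection import cross_val_score

# Baseline F1 under the same custom scorer, so TPOT's gains are measurable
baseline = LogisticRegression(max_iter=200)
scores = cross_val_score(baseline, X_tr_s, y_tr, scoring=cost_f1, cv=5)
print(f"Baseline F1 (5-fold CV): {scores.mean():.4f} ± {scores.std():.4f}")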
tpot_config = {
    'sklearn.linear_model.LogisticRegression': {
        'C': [0.01, 0.1, 1.0, 10.0],
        'penalty': ['l2'], 'solver': ['lbfgs'], 'max_iter': [200]
    },
    'sklearn.naive_bayes.GaussianNB': {},
    'sklearn.tree.DecisionTreeClassifier': {
        'criterion': ['gini','entropy'], 'max_depth': [3,5,8,None],
        'min_samples_split':[2,5,10], 'min_samples_leaf':[1,2,4]
    },
    'sklearn.ensemble.RandomForestClassifier': {
        'n_estimators':[100,300], 'criterion':['gini','entropy'],
        'max_depth':[None,8], 'min_samples_split':[2,5], 'min_samples_leaf':[1,2]
    },
    'sklearn.ensemble.ExtraTreesClassifier': {
        'n_estimators':[200], 'criterion':['gini','entropy'],
        'max_depth':[None,8], 'min_samples_split':[2,5], 'min_samples_leaf':[1,2]
    },
    'sklearn.ensemble.GradientBoostingClassifier': {
        'n_estimators':[100,200], 'learning_rate':[0.03,0.1],
        'max_depth':[2,3], 'subsample':[0.8,1.0]
    },
    'xgboost.XGBClassifier': {
        'n_estimators':[200,400], 'max_depth':[3,5], 'learning_rate':[0.05,0.1],
        'subsample':[0.8,1.0], 'colsample_bytree':[0.8,1.0],
        'reg_lambda':[1.0,2.0], 'min_child_weight':[1,3],
        'n_jobs':[0], 'tree_method':['hist'], 'eval_metric':['logloss'],
        'gamma':[0,1]
    }
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)
Here, we define a custom TPOT configuration that combines linear models, tree-based learners, ensembles, and XGBoost, each with a carefully chosen hyperparameter grid. We also set up a stratified 5-fold cross-validation strategy, ensuring every candidate pipeline is evaluated consistently on balanced splits.
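To get a feel for how large this search space is, here is a small illustrative sketch (our addition) that counts the grid combinations each model contributes; an empty grid such as GaussianNB's counts as a single configuration. It reuses the math module imported earlier:
# Count hyperparameter combinations per model in tpot_config
for model, grid in tpot_config.items():
    combos = math.prod(len(v) for v in grid.values()) if grid else 1
    print(f"{model}: {combos} combinations")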
t0 = time.time()
tpot = TPOTClassifier(
    generations=5,
    population_size=40,
    offspring_size=40,
    scoring=cost_f1,
    cv=cv,
    subsample=0.8,
    n_jobs=-1,
    config_dict=tpot_config,
    verbosity=2,
    random_state=SEED,
    max_time_mins=10,
    early_stop=3,
    periodic_checkpoint_folder="tpot_ckpt",
    warm_start=False
)
tpot.fit(X_tr_s, y_tr)
print(f"nā±ļø First search took {time.time()-t0:.1f}s")
def pareto_table(tpot_obj, k=5):
    rows = []
    for ind, meta in tpot_obj.pareto_front_fitted_pipelines_.items():
        rows.append({
            "pipeline": ind, "cv_score": meta['internal_cv_score'],
            "size": len(str(meta['pipeline'])),
        })
    df = pd.DataFrame(rows).sort_values("cv_score", ascending=False).head(k)
    return df.reset_index(drop=True)

pareto_df = pareto_table(tpot, k=5)
print("\nTop Pareto pipelines (cv):\n", pareto_df)
def eval_pipeline(pipeline, X_te, y_te, name):
    y_hat = pipeline.predict(X_te)
    f1 = f1_score(y_te, y_hat)
    print(f"\n[{name}] F1(test) = {f1:.4f}")
    print(classification_report(y_te, y_hat, digits=3))

print("\nEvaluating top pipelines on test:")
for i, (ind, meta) in enumerate(sorted(
        tpot.pareto_front_fitted_pipelines_.items(),
        key=lambda kv: kv[1]['internal_cv_score'], reverse=True)[:3], 1):
    eval_pipeline(meta['pipeline'], X_te_s, y_te, name=f"Pareto#{i}")
We launch the evolutionary search with TPOT, capping the runtime so it stays Colab-friendly and checkpointing progress so the run can resume if interrupted. We then inspect the Pareto front of top-performing pipelines, convert it into a summary table, and rank the leaders by their internal cross-validation scores. Finally, we evaluate the best candidates on the held-out test set to confirm real-world performance with F1 scores and full classification reports.
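Since confusion_matrix is already imported above but not yet used, a short optional addition (our sketch, not part of the original flow) makes the error profile of the winning pipeline explicit:
# Inspect the error profile of TPOT's best pipeline on the test set
y_hat = tpot.fitted_pipeline_.predict(X_te_s)
print("Confusion matrix (rows = true, cols = predicted):")
print(confusion_matrix(y_te, y_hat))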
print("nš Warm-start for extra refinement...")
t1 = time.time()
tpot2 = TPOTClassifier(
    generations=3, population_size=40, offspring_size=40,
    scoring=cost_f1, cv=cv, subsample=0.8, n_jobs=-1,
    config_dict=tpot_config, verbosity=2, random_state=SEED,
    warm_start=True, periodic_checkpoint_folder="tpot_ckpt"
)
try:
    tpot2._population = tpot._population
    tpot2._pareto_front = tpot._pareto_front
except Exception:
    pass
tpot2.fit(X_tr_s, y_tr)
print(f"⏱️ Warm-start extra search took {time.time()-t1:.1f}s")
best_model = tpot2.fitted_pipeline_ if hasattr(tpot2, "fitted_pipeline_") else tpot.fitted_pipeline_
eval_pipeline(best_model, X_te_s, y_te, name="BestAfterWarmStart")
export_path = "tpot_best_pipeline.py"
(tpot2 if hasattr(tpot2, "fitted_pipeline_") else tpot).export(export_path)
print(f"nš¦ Exported best pipeline to: {export_path}")
from importlib import util as _util
spec = _util.spec_from_file_location("tpot_best", export_path)
tbest = _util.module_from_spec(spec); spec.loader.exec_module(tbest)
reloaded_clf = tbest.exported_pipeline_
pipe = Pipeline([("scaler", scaler), ("model", reloaded_clf)])
pipe.fit(X_tr, y_tr)
eval_pipeline(pipe, X_te, y_te, name="ReloadedExportedPipeline")
report = {
    "dataset": "sklearn breast_cancer",
    "train_size": int(X_tr.shape[0]), "test_size": int(X_te.shape[0]),
    "cv": "StratifiedKFold(5)",
    "scorer": "custom F1 (binary)",
    "search": {"gen_1": 5, "gen_2_warm": 3, "pop": 40, "subsample": 0.8},
    "exported_pipeline_first_120_chars": str(reloaded_clf)[:120]+"...",
}
print("nš§¾ Model Card:n", json.dumps(report, indent=2))
We continue the search with a warm start, reusing the evolved population to refine the candidates further, and select the best performer on our test set. We export the winning pipeline, reload it behind our scaler to mimic deployment, and confirm that its performance carries over. Finally, we generate a compact model card that records the dataset details, search settings, and a summary of the exported pipeline.
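For actual deployment, we would typically also persist the fitted scaler-plus-model pipeline as a binary artifact rather than rely on the exported script alone. Here is a minimal sketch using joblib (our addition, with a hypothetical file name):
import joblib

# Serialize the full Pipeline (scaler + model) fitted above as `pipe`
joblib.dump(pipe, "tpot_pipeline.joblib")
restored = joblib.load("tpot_pipeline.joblib")
print("Restored pipeline test F1:", f1_score(y_te, restored.predict(X_te)))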
In conclusion, we see how TPOT lets us move beyond trial-and-error model building and rely instead on an automated, reproducible, and interpretable search process. We evolve a strong pipeline, validate it on unseen data, and export it for reuse, ensuring the workflow is not merely an experiment but production-ready. With reproducibility, flexibility, and interpretability built in, we end up with a solid framework we can apply to other datasets and real-world problems.



