Train, Serve, and Deploy a Scikit-learn Model with FastAPI

In this article, you will learn how to train a Scikit-learn classification model, serve it with FastAPI, and deploy it to FastAPI Cloud.

Topics we will cover include:

  • How to structure a simple project and train a Scikit-learn model for inference.
  • How to build and test a FastAPI inference API locally.
  • How to deploy the API to FastAPI Cloud and prepare it for more production-ready usage.

Introduction

FastAPI has become one of the most popular ways to serve machine learning models because it is lightweight, fast, and easy to use. Many machine learning and AI applications use FastAPI to turn trained models into simple APIs that can be tested, shared, and deployed in production.

In this guide, you will learn how to train, serve, and deploy a Scikit-learn model with FastAPI. We will start by setting up a simple project, then train a model on a toy dataset, build a FastAPI inference server, test it locally, and finally deploy it to FastAPI Cloud.

1. Setting Up the Project

Start by creating a new folder for your project and setting up a simple directory structure. This will help keep your training code, application code, and saved model files organized from the beginning.

Run the following commands in your terminal:
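Since the original commands are not shown, here is a minimal sketch of the setup; the project name `fastapi-sklearn-demo` is an assumption, but the `app/` and `artifacts/` folders match the layout used later in the article:

```shell
# Create the project folder and a simple layout (project name is illustrative)
mkdir fastapi-sklearn-demo && cd fastapi-sklearn-demo
mkdir -p app artifacts
touch app/__init__.py app/main.py train.py requirements.txt
```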

After that, your project structure should look like this:
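Assuming the folders and files above, the resulting layout would be something like:

```
fastapi-sklearn-demo/
├── app/
│   ├── __init__.py
│   └── main.py
├── artifacts/
├── requirements.txt
└── train.py
```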

Next, create a requirements.txt file and add the following dependencies:
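The exact pinned versions are not shown in the original, but based on the packages the article describes (the API framework with its dev server, model training, model serialization, and numerical input handling), a minimal `requirements.txt` could look like:

```
fastapi[standard]
scikit-learn
joblib
numpy
```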

These packages will be used to build and run the API, train the Scikit-learn model, save the trained model, and handle numerical input data.

Once the file is ready, install the dependencies with:
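For example, using pip:

```shell
pip install -r requirements.txt
```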

At this point, the project skeleton is ready, and you can move on to training your first Scikit-learn model.

2. Training the Machine Learning Model

In this section, you will train a simple Scikit-learn classification model using the built-in breast cancer dataset.

The script loads the dataset, splits it into training and testing sets, trains a RandomForestClassifier, evaluates its accuracy, and saves everything needed for inference into a .joblib file inside the artifacts folder.

Create a train.py file with the following code:
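The original script is not reproduced here; the following is a minimal sketch of what it describes. File and key names (`artifacts/model.joblib`, the `model`/`feature_names`/`target_names` keys in the saved bundle) are illustrative choices, not taken from the original:

```python
# train.py — train a classifier on the breast cancer dataset and save it
from pathlib import Path

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ARTIFACTS_DIR = Path("artifacts")
MODEL_PATH = ARTIFACTS_DIR / "model.joblib"


def main() -> None:
    # Load the built-in breast cancer dataset
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=42
    )

    # Train a random forest classifier
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Evaluate on the held-out test split
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Test accuracy: {accuracy:.3f}")

    # Save everything needed for inference in a single artifact
    ARTIFACTS_DIR.mkdir(exist_ok=True)
    joblib.dump(
        {
            "model": model,
            "feature_names": list(data.feature_names),
            "target_names": list(data.target_names),
        },
        MODEL_PATH,
    )
    print(f"Saved model to {MODEL_PATH}")


if __name__ == "__main__":
    main()
```

Saving the feature and target names alongside the model means the API can validate inputs and return human-readable labels without importing the dataset again.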

Once the file is ready, run the training script from your terminal:
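For example:

```shell
python train.py
```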

You should see output similar to this:
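The exact figures depend on your environment and library versions; illustrative output, assuming the script prints the test accuracy and the save path, might look like:

```
Test accuracy: 0.96
Saved model to artifacts/model.joblib
```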

This means the model was trained successfully, evaluated on the test split, and saved for later use in the FastAPI application.

3. Building the FastAPI Server

Now that the model has been trained and saved, the next step is to build a FastAPI server that loads the saved model and serves predictions through an API.

This application loads the model once when the server starts, provides a simple health check endpoint, and exposes a /predict route that accepts feature values and returns both the predicted class and class probabilities.

Create app/main.py with the following code:

This FastAPI app does three useful things.

It loads the trained model once during startup, exposes a /health endpoint so you can quickly check whether the server is running, and provides a /predict endpoint that accepts input features and returns an inference result. This makes it easy to turn your Scikit-learn model into a reusable API that other applications or services can call.

4. Testing the Model Inference Server Locally

With the FastAPI app ready, the next step is to run it locally and test whether the prediction endpoint works as expected. FastAPI makes this easy because it automatically detects your application, starts a local development server, and provides built-in interactive API documentation that you can use directly from the browser.

Start the server with:
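Using the FastAPI CLI's development mode:

```shell
fastapi dev app/main.py
```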

Once the server starts, FastAPI will serve the API locally, usually on port 8000.

Next, open the interactive API docs in your browser:
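With the default port, the interactive docs are at:

```
http://127.0.0.1:8000/docs
```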

Inside the docs page, you can test the /predict endpoint directly. Expand the endpoint, click Try it out, paste in the input values, and execute the request.

You can also test the API from the terminal using curl:
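For example, assuming a `/predict` endpoint that takes a `features` list of the dataset's 30 numeric values (the values below are illustrative, not a real patient record):

```shell
curl -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"features": [14.0, 19.0, 92.0, 655.0, 0.096, 0.104, 0.089, 0.048,
       0.181, 0.063, 0.4, 1.2, 2.9, 40.0, 0.007, 0.025, 0.032, 0.012,
       0.02, 0.004, 16.3, 25.7, 107.0, 880.0, 0.132, 0.254, 0.273,
       0.115, 0.29, 0.084]}'
```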

The response will be returned as JSON, including the predicted class ID, the predicted label, and the probability scores for each class.

This confirms that the inference server is working locally and is ready to be deployed.

5. Deploying the API to the Cloud

Once you have finished testing the API locally, you can stop the development server by pressing CTRL + C. The next step is to deploy the application to FastAPI Cloud. FastAPI Cloud supports deployment directly from the CLI, and the standard flow is fastapi login followed by fastapi deploy.

Log in with:
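```shell
fastapi login
```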

After logging in, deploy the app with:
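```shell
fastapi deploy
```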

During the first deployment, the CLI can guide you through setup, such as selecting or creating a team and choosing whether to create a new app or link to an existing one.

FastAPI Cloud then packages and uploads your code, installs dependencies in the cloud, deploys the application, and verifies that deployment completed successfully. After the first deploy, it also creates a .fastapicloud directory in your project so later deployments are simpler.

A successful deployment will end with output similar to this:

Once the app is live, open the deployed docs page in your browser to check that the endpoints are working.

You can also test the deployed API from the terminal by replacing the local URL with your cloud URL.

Finally, you can go to the FastAPI Cloud dashboard, click your deployed app, and check the logs to monitor builds, startup behavior, and runtime issues.

What to Do Next

You now have a complete end-to-end workflow in place: a trained machine learning model, a FastAPI application for inference, local testing, and a deployment on FastAPI Cloud.

To take this further and reach a real production level, the next step is to make the API secure, tested, monitored, and able to handle real-world traffic reliably at scale.

  1. Secure the API by adding API key protection or a stronger authentication layer.
  2. Strengthen error handling so failures are clear, consistent, and easier to troubleshoot.
  3. Improve performance so the API can respond efficiently under heavier traffic.
  4. Test more deeply with unit tests, endpoint tests, and load testing.
  5. Add monitoring to track uptime, latency, errors, and overall usage.
  6. Refine deployment workflows with versioning, rollback plans, and safer releases.

That is what turns a working deployed API into one that can operate more reliably in the real world.
