Three Hyperparameter Tuning Strategies for Better Machine Learning Models

A machine learning (ML) model should not memorize the training data. Instead, it should learn the training data well enough to generalize to new, unseen data.
The default settings of an ML model may not suit the nature of the problem we are trying to solve. We need to change these settings manually to get better results. Here, the “settings” are known as hyperparameters.
What is a hyperparameter in an ML model?
The user sets a hyperparameter's value manually before the training process; the model does not learn its value from the data during training. Once set, its value remains fixed unless the user changes it.
We need to distinguish between hyperparameters and parameters.
A parameter learns its value from the data provided, and its value depends on the hyperparameter values. Parameter values are updated during training.
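To make the distinction concrete, here is a small sketch (using the built-in iris dataset as assumed example data, not from the original text): hyperparameters such as C and kernel are set by the user before training, while parameters such as the SVM's coefficients are learned during fit().

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hyperparameters: set manually by the user before training
clf = SVC(C=1.0, kernel='linear')

# Parameters: learned from the data during training
clf.fit(X, y)
print(clf.coef_.shape)   # learned weight vectors
print(clf.intercept_)    # learned intercepts
```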
Here is an example of how different hyperparameter values affect a support vector machine (SVM).
from sklearn.svm import SVC
clf_1 = SVC(kernel='linear')
clf_2 = SVC(kernel='poly', degree=3)
clf_3 = SVC(kernel='poly', degree=1)
Both the clf_1 and clf_3 models produce a linear SVM decision boundary, while the clf_2 model produces a non-linear one. In this case, the user can switch between linear and non-linear separation simply by changing the values of the 'kernel' and 'degree' hyperparameters in the SVC() class.
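As a quick check, a minimal sketch (using make_moons as assumed toy data, not from the original article) shows how the kernel choice changes the fit on data that is not linearly separable:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data with a non-linear class boundary
X, y = make_moons(n_samples=200, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

linear_clf = SVC(kernel='linear').fit(X_train, y_train)
poly_clf = SVC(kernel='poly', degree=3).fit(X_train, y_train)

print('linear:', linear_clf.score(X_test, y_test))
print('poly:  ', poly_clf.score(X_test, y_test))
```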
What is hyperparameter tuning?
Hyperparameter tuning is an iterative process of improving a model's performance by finding suitable hyperparameter values without causing overfitting.
Sometimes, as in the SVM example above, the choice of a hyperparameter value depends on the type of problem (e.g., linear or non-linear separation) we want to solve. In that case, the user can simply set 'linear' for linear separation and 'poly' for non-linear separation. A simple choice.
However, for a hyperparameter such as 'degree', the user needs to use more advanced search methods to select its value.
Before discussing search methods, we need to understand two important concepts: the hyperparameter search space and hyperparameter distributions.
Hyperparameter search space
The hyperparameter search space consists of a set of combinations of user-defined hyperparameter values. The search is limited to this space.
The search space can be n-dimensional: the number of dimensions of the search space equals the number of hyperparameters.
The search space is defined as a Python dictionary with hyperparameter names as keys and lists of values for those hyperparameters as values.
search_space = {'hyparam_1': [val_1, val_2],
                'hyparam_2': [val_1, val_2],
                'hyparam_3': ['str_val_1', 'str_val_2']}
Hyperparameter distributions
The underlying distribution of each hyperparameter is also important because it decides how values will be sampled during the search process. There are four popular types of distributions.
- Uniform distribution: all values within the search space are equally likely to be selected.
- Log-uniform distribution: a logarithmic scale is applied to uniform sampling. This is useful when the range of a hyperparameter is large.
- Normal distribution: values are distributed around a mean of 0 with a standard deviation of 1.
- Log-normal distribution: a logarithmic scale is applied to the normal distribution. This is useful when the range of a hyperparameter is large.
The choice of distribution also depends on the type of hyperparameter. A hyperparameter can take discrete or continuous values. A discrete value can be an integer or a string, while a continuous value always takes floating-point values.
from scipy.stats import randint, uniform, loguniform, norm
# Define the parameter distributions
param_distributions = {
    'hyparam_1': randint(low=50, high=75),
    'hyparam_2': uniform(loc=0.01, scale=0.19),
    'hyparam_3': loguniform(0.1, 1.0)
}
- randint(50, 75): chooses random integers between 50 and 74 (discrete uniform distribution)
- uniform(0.01, 0.19): chooses floating-point numbers between 0.01 and 0.2 (continuous uniform distribution)
- loguniform(0.1, 1.0): chooses values between 0.1 and 1.0 on a log scale (log-uniform distribution)
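These distribution objects can be sampled directly with their .rvs() method, which is what a random search does internally. A short sketch (sample sizes and seed are illustrative):

```python
from scipy.stats import randint, uniform, loguniform

int_samples = randint(low=50, high=75).rvs(size=5, random_state=42)
uni_samples = uniform(loc=0.01, scale=0.19).rvs(size=5, random_state=42)
log_samples = loguniform(0.1, 1.0).rvs(size=5, random_state=42)

print(int_samples)   # integers in [50, 74]
print(uni_samples)   # floats in [0.01, 0.2)
print(log_samples)   # floats in [0.1, 1.0], denser near 0.1
```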
Hyperparameter tuning methods
There are many different types of hyperparameter tuning methods. In this article, we will focus on three methods that fall under the exhaustive search category. In an exhaustive search, the search algorithm works through the entire search space. There are three methods in this category: manual search, grid search and random search.
Manual search
No search algorithm runs in the background. The user simply sets some values based on intuition and observes the results. If the result is not good, the user tries other values, and so on. Because the user learns from previous attempts and sets better values in future attempts, manual search falls under the informed search category.
There is no explicit definition of the hyperparameter search space in manual search. This approach can be time-consuming, but it may be useful when combined with other methods such as grid search or random search.
Manual search becomes difficult when we need to tune two or more hyperparameters at the same time.
An example of manual search is when the user simply sets 'linear' for linear separation and 'poly' for non-linear separation in the SVM model.
from sklearn.svm import SVC
linear_clf = SVC(kernel='linear')
non_linear_clf = SVC(kernel='poly')
Grid search
In grid search, the algorithm tries every combination of hyperparameter values defined in the search space. Therefore, this approach is a brute-force method. It is time-consuming and computationally expensive, especially as the number of hyperparameters increases (the curse of dimensionality).
To use this method effectively, we need a well-defined search space. Otherwise, we will spend a lot of time testing unnecessary combinations.
However, the user does not need to specify the distributions of the hyperparameters.
The search algorithm does not learn from previous attempts (iterations), so it does not try better values in future attempts. Therefore, grid search falls under the uninformed search category.
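In scikit-learn, grid search is available through GridSearchCV. A minimal sketch (the iris dataset and the specific values are illustrative, not taken from the original article):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A well-defined search space: every combination will be tried
search_space = {'kernel': ['linear', 'poly'],
                'degree': [1, 2, 3],
                'C': [0.1, 1.0, 10.0]}

grid = GridSearchCV(SVC(), param_grid=search_space, cv=5)
grid.fit(X, y)

print(grid.best_params_)
print(round(grid.best_score_, 3))
```

With 2 × 3 × 3 = 18 combinations and 5 cross-validation folds, this fits the model 90 times, which illustrates how quickly the cost of grid search grows with each added hyperparameter.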
Random search
In random search, the search algorithm tries random combinations of hyperparameter values in each iteration. As with grid search, it does not learn from previous attempts, so it does not try better values in future attempts. Therefore, random search also falls under uninformed search.
Random search performs much better than grid search when the search space is large and we have no clear idea about good hyperparameter values. It is considered fast and effective.
If we give the same small hyperparameter space to grid search and random search, we will not see a big difference between the two. We must define a large search space to gain an advantage from random search over grid search.
There are two ways to increase the size of the hyperparameter search space.
- By adding dimensions (adding new hyperparameters)
- By extending the range of each hyperparameter
It is recommended to specify the underlying distribution of each hyperparameter. If it is not defined, the algorithm will use the default, a uniform distribution, in which all combinations have an equal chance of being selected.
There are two important hyperparameters in the random search method itself!
- n_iter: the number of iterations, i.e., the size of the random sample of hyperparameter combinations to test. It takes an integer value. This is a trade-off between running time and the quality of the output. We need to define this to allow the algorithm to test only a random sample of combinations.
- random_state: we need to define this hyperparameter to get the same results across multiple function calls.
A serious drawback of random search is that it produces very different results across multiple function calls when random_state is not set.
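In scikit-learn, random search is available through RandomizedSearchCV, which accepts scipy distribution objects for each hyperparameter. A minimal sketch (the iris dataset and the specific distributions are illustrative assumptions, not from the original article):

```python
from scipy.stats import loguniform, randint
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_distributions = {'C': loguniform(0.01, 100),   # log-uniform over a large range
                       'degree': randint(1, 4),      # integers 1..3
                       'kernel': ['linear', 'poly']} # sampled uniformly

search = RandomizedSearchCV(SVC(), param_distributions,
                            n_iter=20,        # test 20 random combinations
                            cv=5,
                            random_state=42)  # reproducible across calls
search.fit(X, y)

print(search.best_params_)
```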
This is the end of today's article.
Please let me know if you have any questions or feedback.
See you in the next article. Happy learning to you!
Designed and written by:
Rukshan Pramoditha
2025-08-22



