Negative Selection Algorithm
On this page, you will find a collection of practical examples demonstrating how to use the negative selection classes implemented in our package.
The examples are organized as follows:
Data Normalization:
Shows how to normalize data for the negative selection classes. In the real-valued version, each feature is normalized to the range between 0 and 1; in the binary version, the data is encoded as bit vectors.
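The two schemes can be sketched with plain NumPy (an illustrative sketch, not the package's own API; the sample matrix and the 0.5 threshold for the bit encoding are assumptions):

```python
import numpy as np

# Hypothetical feature matrix with two real-valued features.
X = np.array([[2.0, 10.0],
              [4.0, 20.0],
              [6.0, 40.0]])

# Real-valued version: min-max normalization of each feature into [0, 1].
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)

# Binary version: threshold each normalized feature to obtain a bit vector.
X_bits = (X_norm >= 0.5).astype(np.uint8)

print(X_norm)
print(X_bits)
```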
K-fold Cross-Validation with 50 Iterations:
In this example, the data is divided into training and test sets, and model performance is evaluated by k-fold cross-validation: the training data is split into k parts, and in each iteration 10% of it is reserved for testing.
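One round of that 10-fold splitting can be sketched with NumPy alone (repeat the round for more iterations). The threshold "model" below is a hypothetical stand-in for a negative selection classifier, and the synthetic data is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = (X[:, 0] > 0.5).astype(int)  # synthetic labels for illustration

k = 10  # 10 folds: each iteration reserves 10% of the data for testing
folds = np.array_split(rng.permutation(len(X)), k)

accuracies = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # Stand-in "model": classify by a threshold fitted on the training folds.
    threshold = X[train_idx, 0].mean()
    preds = (X[test_idx, 0] > threshold).astype(int)
    accuracies.append(float((preds == y[test_idx]).mean()))

print(np.mean(accuracies))
```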
Training:
In this example, the trained model is tested with all of the available training data.
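At its core, training a real-valued negative selection model means generating detectors that do not match any self (normal) sample. A minimal sketch of that idea follows; the radius, detector count, and synthetic self data are assumptions, not the package's defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical self samples in [0, 1]^2, clustered near the center.
self_samples = rng.normal(loc=0.5, scale=0.05, size=(100, 2))

radius = 0.1       # detection radius r (assumed value)
n_detectors = 50   # number of non-self detectors to keep (assumed value)

detectors = []
while len(detectors) < n_detectors:
    candidate = rng.random(2)  # random point in [0, 1]^2
    # Keep the candidate only if it does not match any self sample.
    dists = np.linalg.norm(self_samples - candidate, axis=1)
    if dists.min() > radius:
        detectors.append(candidate)

detectors = np.array(detectors)
print(detectors.shape)
```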
The examples below demonstrate various features of the negative selection classes so that you know how to use them in your own project. Feel free to explore them and adapt them to your specific needs.
Access the notebooks, with the option to run them online via Binder:
BNSA (Binary Negative Selection Algorithm)
- Example with random samples
In the example presented in this notebook, 1000 random samples were generated, arranged in two groups, one for each class.
- Mushrooms database example
This notebook uses the mushrooms database, which contains information about edible and poisonous mushrooms.
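Random binary samples like those in the first notebook could be generated as follows (the number of bits and the per-class bit probabilities are arbitrary choices for illustration, not the notebook's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_bits = 1000, 20

# Class 0: each bit is 1 with probability 0.3; class 1: probability 0.7.
X0 = (rng.random((n_samples // 2, n_bits)) < 0.3).astype(np.uint8)
X1 = (rng.random((n_samples // 2, n_bits)) < 0.7).astype(np.uint8)

X = np.vstack([X0, X1])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))
print(X.shape)
```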
RNSA (Real-Valued Negative Selection Algorithm)
- Example with random samples
In the example presented in this notebook, 500 random samples were generated, arranged in two groups, one for each class. The non-self detectors generated can be seen below.
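Two-class random samples in the normalized [0, 1] space could be produced like this (the cluster centers and spread are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters in [0, 1]^2, 250 samples each (hypothetical centers).
X0 = rng.normal(loc=0.25, scale=0.05, size=(250, 2))
X1 = rng.normal(loc=0.75, scale=0.05, size=(250, 2))

X = np.clip(np.vstack([X0, X1]), 0.0, 1.0)  # keep samples inside [0, 1]
y = np.array(['class_a'] * 250 + ['class_b'] * 250)
print(X.shape)
```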
- iris_dataBase_example
Example applying the RNSA to the Iris dataset, which contains four-dimensional samples and three output classes (Setosa, Versicolor, and Virginica).
- geyser_dataBase_example
To classify geyser eruptions in Yellowstone National Park, this notebook uses the Old Faithful database.