**What’s the Problem?**

On the IoT Cybersecurity Team at Wat.ai, we’re trying to classify cyberattacks from a LARGE dataset with over 50 features from the Canadian Institute for Cybersecurity using various machine learning and algorithmic techniques.

Having lots of data and tons of features to analyze is good, right?

The problem is that we want a model that can **rapidly identify cyberattacks on small IoT devices**, which have limited memory, compute and power. These constraints mean we need to create models that **don’t** take gigabytes of memory just to run, let alone run fast enough to identify cyberattacks in real-time.

A model that runs through dozens of features for *every* incoming packet is probably just too slow. Neural networks, one of the most popular subsets of machine learning (ML) models, may simply not work on many IoT devices.

However, our team has identified a possible solution: **bio-inspired algorithms.** Bio-inspired algorithms are inspired by natural processes, and while they typically aren’t as efficient as purely mathematical algorithms like gradient descent, they may be better when optimizing for **highly non-linear and high-dimensional data**.

And most importantly, it’s much more likely we will be able to create **bio-inspired algorithms that can abide by the computational constraints of IoT devices.**

We are attempting to use bio-inspired algorithms to:

1. Conduct feature selection on our massive dataset, thereby reducing the features we must account for in our cyberattack classification model;
2. Create a cyberattack classification model that is deployable to real IoT devices.

In previous posts, members of our team have investigated genetic algorithms, the Grey Wolf Optimizer and Particle Swarm Optimization in order to achieve goal #1. Stay tuned for future posts about progress on goal #2 :) In this post, I’ll be exploring one more bio-inspired algorithm: the Jellyfish Search Optimizer.

**What is the Jellyfish Search Optimizer?**

Just a note before we begin: There are MANY bio-inspired algorithms out there. Every year, new algorithms are created by researchers, industry and makers. Some of the most common ones can be seen in the graph below:

My choice of exploring the Jellyfish Search Optimizer was due to many of the benefits the authors claim about the algorithm, which I will outline at the end of this post. However, if you’re interested in learning more about other bio-inspired algorithms, I encourage you to do your own exploration of the literature and try implementing some of your own.

Anyways, back to the jellyfish!

The Jellyfish Search Optimizer (JSO), designed by Jui-Sheng Chou and Dinh-Nhat Truong in 2020, **emulates the food-search behavior and movement of jellyfish** in the ocean. Jellyfish initially follow ocean currents; as time goes by, they shift to moving inside jellyfish swarms; and a time control mechanism governs the switching between these two behaviors.

The algorithm begins with jellyfish following ocean currents to find promising locations (**exploration stage**). As time passes, a jellyfish swarm is established, and each jellyfish continues to move inside the swarm, through active and passive motions, in search of a better location (**exploitation stage**). After the loop has repeated enough times, a jellyfish bloom occurs, which corresponds to the **optimum phase**.

The JSO is based on the following three principles:

1. Jellyfish either **follow the ocean current or move inside the swarm**, and a “time control mechanism” governs the switching between these types of movements.
2. Jellyfish **move in the ocean in search of food**. They are more attracted to locations where the available quantity of food is greater.
3. The quantity of food found is determined by the **location and its corresponding objective function**.
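The “time control mechanism” can be sketched in a few lines. This is my own illustration of the quantity the paper calls c(t), which starts near 1 and decays toward 0 over the iterations (the 0.5 switching threshold comes from the paper):

```python
import numpy as np

def time_control(it: int, max_it: int, rng: np.random.Generator) -> float:
    """c(t) from the JSO paper: decays toward 0 as iterations pass, so
    jellyfish follow the ocean current early on (c(t) >= 0.5) and move
    inside the swarm in later iterations (c(t) < 0.5)."""
    return abs((1 - it / max_it) * (2 * rng.random() - 1))

rng = np.random.default_rng(42)
early = time_control(0, 100, rng)   # can be anywhere in [0, 1]
late = time_control(99, 100, rng)   # at most (1 - 99/100) = 0.01
```

Because `late` is bounded by 0.01, jellyfish near the end of a run almost always move inside the swarm rather than ride the current.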

The jellyfish are essentially searching for the minimum of a defined objective function. For our use case, this objective function might be an error function, which calculates the error of our cyberattack classification model depending on which features are selected.
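To make that concrete, here is a hypothetical sketch of such an objective function (my own toy illustration, not our actual model): given a binary feature mask, it returns the misclassification rate of a simple nearest-centroid classifier trained only on the selected features.

```python
import numpy as np

def feature_selection_error(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Toy objective: misclassification rate of a nearest-centroid
    classifier using only the features where mask > 0.5."""
    selected = mask > 0.5
    if not selected.any():  # selecting no features is maximally bad
        return 1.0
    Xs = X[:, selected]
    classes = np.unique(y)
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(Xs[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float(np.mean(preds != y))

# Tiny synthetic demo: feature 0 separates the classes, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (50, 2)),
               rng.normal([5.0, 0.0], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
err_good = feature_selection_error(np.array([1.0, 0.0]), X, y)  # informative feature
err_bad = feature_selection_error(np.array([0.0, 1.0]), X, y)   # noise feature
```

Minimizing this kind of error over masks is exactly the search the jellyfish would perform.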

Below is a flowchart of the algorithm:

Check out the original paper for the mathematical details!

## Implementation

Let’s first import the necessary packages:

```
import numpy as np
```

Now we’re going to define our objective function. We *could* define multiple objective functions, as I’ve illustrated below, in case we want to easily switch between them later on:

```
def fobj(x: np.ndarray, fnumber: int):
    """
    Parameters
    ----------
    x: np.ndarray
        Current state of population
    fnumber: int
        Number identifying the chosen objective function
    """
    if fnumber == 1:
        y = -np.cos(x[0])*np.cos(x[1])*np.exp(-(x[0]-np.pi)**2 - (x[1]-np.pi)**2)
    '''
    if fnumber == 2:
        ...
    if fnumber == 3:
        ...
    etc.
    '''
    return y
```

For testing purposes, I defined our objective function for fnumber = 1 as a well-known benchmark, the Easom function. Its minimum (or “best”) value is -1, so this is the value we expect the JSO to find for us when we test the implementation.
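As a quick standalone sanity check, we can evaluate the same expression used in `fobj` at (π, π), where the minimum lies:

```python
import numpy as np

# Evaluate the fnumber = 1 objective at (pi, pi); the result should be -1
x = np.array([np.pi, np.pi])
val = -np.cos(x[0]) * np.cos(x[1]) * np.exp(-(x[0] - np.pi)**2 - (x[1] - np.pi)**2)
```

This is the target the optimizer will need to locate on its own.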

We’re now going to set the boundaries of the search space. Again, you can have many boundaries defined in `boundcondition`, one for each function defined in `fobj`.

```
def boundcondition(funnum: int):
    """
    Parameters
    ----------
    funnum: int
        Number identifying the chosen objective function
    """
    switcher = {
        # (lower bound, upper bound, number of dimensions)
        1: (-100.0, 100.0, 2),
        # 2: ...,
        # 3: ...,
        # etc.
    }
    Lb, Ub, nd = switcher.get(funnum)
    return Lb, Ub, nd
```

Now, we’re going to initialize the population of jellyfish:

```
def initialization(num_pop: int, nd: int, Ub: list, Lb: list):
    """
    Parameters
    ----------
    num_pop: int
        Size of population
    nd: int
        Number of dimensions
    Ub: list
        Upper bound for each dimension
    Lb: list
        Lower bound for each dimension
    """
    if len(Lb) == 1:
        Lb = Lb * nd
        Ub = Ub * nd
    # Generate a chaotic sequence with the logistic map (a = 4)
    x = np.random.rand(1, nd)
    a = 4
    for i in range(num_pop - 1):
        newrow = a * x[i, :] * (1 - x[i, :])
        x = np.vstack([x, newrow])
    # Scale the chaotic values from (0, 1) into the search space
    pop = np.zeros((num_pop, nd))
    for k in range(nd):
        for i in range(num_pop):
            pop[i][k] = Lb[k] + x[i][k] * (Ub[k] - Lb[k])
    return pop
```
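The population is seeded with the logistic map x → 4·x·(1 − x), a chaotic sequence commonly used to spread initial candidates over the search space. A quick standalone check (my own illustration) that the map stays confined to [0, 1] while bouncing around chaotically:

```python
# Logistic map x_{n+1} = 4 * x_n * (1 - x_n): chaotic, but confined to [0, 1]
x = 0.37  # arbitrary seed in (0, 1)
trajectory = []
for _ in range(100):
    x = 4 * x * (1 - x)
    trajectory.append(x)
```

Every iterate lands back in [0, 1], so scaling by the bounds always produces valid starting positions.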

Next, we have to define a function `simplebounds`, which will be called from the main JSO function, to take the current vector of jellyfish positions and keep it in the bounded search space.

```
def simplebounds(s, Lb: list, Ub: list):
    """
    Parameters
    ----------
    s: np.ndarray
        Position vector of a single jellyfish
    Lb: list
        Lower bound for each dimension
    Ub: list
        Upper bound for each dimension
    """
    ns_tmp = np.array(s)
    Lb, Ub = np.array(Lb), np.array(Ub)
    # Wrap components that escape the search space around the opposite bound
    I = ns_tmp < Lb
    while np.sum(I) != 0:
        ns_tmp[I] = Ub[I] + (ns_tmp[I] - Lb[I])
        I = ns_tmp < Lb
    J = ns_tmp > Ub
    while np.sum(J) != 0:
        ns_tmp[J] = Lb[J] + (ns_tmp[J] - Ub[J])
        J = ns_tmp > Ub
    return ns_tmp
```
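Here is what that wrap-around does on a concrete (made-up) example: a component at 120 with bounds [-100, 100] re-enters at -100 + 20 = -80, and one at -150 re-enters at 100 - 50 = 50.

```python
import numpy as np

Lb = np.array([-100.0, -100.0])
Ub = np.array([100.0, 100.0])
s = np.array([120.0, -150.0])  # both components out of bounds

wrapped = s.copy()
# One pass of the same wrap-around rule used in simplebounds
high = wrapped > Ub
wrapped[high] = Lb[high] + (wrapped[high] - Ub[high])
low = wrapped < Lb
wrapped[low] = Ub[low] + (wrapped[low] - Lb[low])
```

Wrapping (rather than clipping) preserves the size of the step a jellyfish took, which helps keep the population diverse near the boundaries.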

Finally, we can write our overall `jso` function. Much of the functionality of `jso` was not explained in this blog post, so if you’re curious, I highly suggest you read the original JSO paper.

```
from typing import Callable

def jso(CostFunction: Callable, fnumber: int, Lb, Ub, nd: int, para: tuple):
    """
    Parameters
    ----------
    CostFunction: Callable
        Objective function
    fnumber: int
        Number identifying the chosen objective function
    Lb: float or list
        Lower bound
    Ub: float or list
        Upper bound
    nd: int
        Number of dimensions
    para: tuple
        (maximum number of iterations, population size)
    """
    VarSize = (1, nd)  # Size of decision variables matrix
    if not isinstance(Lb, list):
        VarMin = [Lb] * nd
        VarMax = [Ub] * nd
    else:
        VarMin = Lb
        VarMax = Ub
    MaxIt = para[0]  # Maximum number of iterations
    nPop = para[1]   # Population size
    popi = initialization(nPop, nd, VarMax, VarMin)
    popCost = np.zeros(nPop)
    for i in range(nPop):
        popCost[i] = CostFunction(popi[i], fnumber)
    fbestvl = np.zeros(MaxIt)
    for it in range(MaxIt):
        Meanvl = np.mean(popi, axis=0)
        sorted_indices = np.argsort(popCost)
        BestSol = popi[sorted_indices[0]]
        BestCost = popCost[sorted_indices[0]]
        for i in range(nPop):
            # Time control mechanism: |Ar| decays toward 0 as iterations pass
            Ar = (1 - (it / MaxIt)) * (2 * np.random.rand() - 1)
            if abs(Ar) >= 0.5:
                # Exploration: follow the ocean current
                newsol = popi[i] + (np.random.rand(*VarSize)[0] * (BestSol - (3 * np.random.rand() * Meanvl)))
                newsol = simplebounds(newsol, VarMin, VarMax)
                newsolCost = CostFunction(newsol, fnumber)
                if newsolCost < popCost[i]:
                    popi[i] = newsol
                    popCost[i] = newsolCost
                    if popCost[i] < BestCost:
                        BestCost = popCost[i]
                        BestSol = popi[i]
            else:
                if np.random.rand() <= (1 - Ar):
                    # Active motion: step relative to another jellyfish in the swarm
                    j = i
                    while j == i:
                        j = np.random.randint(0, nPop)
                    Step = popi[i] - popi[j]
                    if popCost[j] < popCost[i]:
                        Step = -Step
                    newsol = popi[i] + np.random.rand(*VarSize)[0] * Step
                else:
                    # Passive motion: small random move around the current position
                    newsol = popi[i] + (0.1 * (np.array(VarMax) - np.array(VarMin)) * np.random.rand(*VarSize)[0])
                newsol = simplebounds(newsol, VarMin, VarMax)
                newsolCost = CostFunction(newsol, fnumber)
                if newsolCost < popCost[i]:
                    popi[i] = newsol
                    popCost[i] = newsolCost
                    if popCost[i] < BestCost:
                        BestCost = popCost[i]
                        BestSol = popi[i]
        fbestvl[it] = BestCost
        # Stop early once the best cost has stagnated for 100 iterations
        if (it >= 2000) and (abs(fbestvl[it] - fbestvl[it - 100]) < 1e-12):
            break
    u = BestSol
    fval = fbestvl[it]
    NumEval = it * nPop
    return u, fval, NumEval, fbestvl
```

We have now finished constructing the Jellyfish Search Optimizer! All we have to do now is set our parameters and run the functions we defined above:

```
def main():
    # Select function
    fnumber = 1
    lb, ub, dim = boundcondition(fnumber)
    # Set the parameters
    Npop = 50              # Size of population
    Max_iteration = 10000  # Max number of iterations
    para = [Max_iteration, Npop]
    # Run JS optimizer
    u, fval, NumEval, fbestvl = jso(fobj, fnumber, lb, ub, dim, para)
    # Display optimal results
    print("The best solution obtained by the JSO is:", u)
    print("The best optimal value of the obj function found by JSO is:", fval)
```

When we run `main()`, the output is:

```
The best solution obtained by the JSO is: [3.14159266 3.14159265]
The best optimal value of the obj function found by JSO is: -1.0
```

This is correct! The JSO works!

## Why use the JSO over other algorithms?

Successfully testing the JSO *once* on just *one* function is clearly no justification for the superiority of the JSO over other bio-inspired algorithms.

However, the designers of the algorithm tested the optimizer on *many* other small, average and large-scale mathematical functions, and compared the results to an array of other common bio-inspired optimizers. They found that the JSO **requires less time, converges faster, and has higher success rates** than other algorithms.

The JSO outperforms other popular algorithms, including the Firefly Algorithm, Gravitational Search Algorithm, Artificial Bee Colony, **Genetic Algorithm**, Whale Optimization Algorithm, **Differential Evolution**, and even **Particle Swarm Optimization**.

The authors suggest this is because the JSO is superior in its:

- **Capacity of Exploration:** By testing the JSO on multimodal benchmark functions, which have many local optima, it was observed that the JSO can explore the search space more effectively than most popular algorithms.
- **Capacity of Exploitation:** By testing the JSO on unimodal benchmark functions, which only have one optimum, it was observed that the JSO demonstrates excellent ability to exploit the search space, outperforming all other popular algorithms tested.

The JSO therefore has strong ability to **visit many and different regions of the search space**, but also obtain **high quality solutions within those regions**. The JSO strikes a good balance between exploration and exploitation, allowing it to both escape local optima and find global optima.

## What’s Next?

We will continue exploring the possibilities of bio-inspired algorithms, and possibly use the Jellyfish Search Optimizer for feature selection or other optimizations of our cyberattack classifier!