Why We Exist
We believe that AI developers have a responsibility to do their best for the good of society. That’s why we’re disappointed by the elementary flaws holding back progress in cybersecurity applications of AI.
We think cybersecurity is mission-critical to safe food production, healthcare, energy provision, and more. That’s why we’re training ourselves to fix this status quo.
How We Work
We prioritise some simple best practices to keep ourselves honest:
Be inclusive. We try to include members from multiple faculties, education levels, and demographics. More diversity = more ideas + better decisions.
Be honest. A flashy result of 99% accuracy isn’t good enough. We want to examine skewed datasets, spurious correlations, false alarms, and more.
Be curious. Week after week, we’re here to help each other to grow and learn.
Avoid harm. All research has the potential to do good or bad. We review and reduce the risks involved with what we work on and publish.
What We Do
We’re affiliated with Wat.ai: a student design team at the University of Waterloo creating more opportunities for undergrads to learn about AI!
Our team develops cybersecurity applications of AI. Currently, we’re benchmarking bio-inspired algorithms like artificial immune systems. We’re using them to detect attempts to hack IoT devices like smart home assistants.
Our work builds on research from the University of New Brunswick’s Canadian Institute for Cybersecurity. Specifically, we’re comparing the computational cost and detection performance of various ML algorithms on 7 types of cyberattacks in the CIC-IoT-2023 dataset.
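To give a flavour of what that benchmarking looks like, here is a minimal sketch of the measurement loop. It is not our actual pipeline: scikit-learn’s synthetic data stands in for the real CIC-IoT-2023 features, and off-the-shelf classifiers stand in for the bio-inspired algorithms under study.

```python
# Sketch: time each model's training (computational cost) and
# measure its test accuracy (a simple detection-performance proxy).
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 7 classes mirroring the 7 attack types.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=12, n_classes=7,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Placeholder models; the real comparison uses artificial immune
# systems and other algorithms.
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)                 # computational cost
    fit_seconds = time.perf_counter() - start
    accuracy = model.score(X_te, y_te)    # detection performance
    results[name] = (fit_seconds, accuracy)
    print(f"{name}: {fit_seconds:.3f}s to fit, accuracy {accuracy:.2f}")
```

Reporting both numbers side by side is the point: a model that detects slightly fewer attacks but runs on a resource-constrained IoT device can be the better choice.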