Behavior is the new evidence

We are living through a paradigm shift in how we prove who we are online. Instead of asking “What do you know?” (password, PIN, mother's maiden name) or “What do you look like?” (Face ID, fingerprint), the question has become “How do you behave?”
Generative AI and advances in malware such as Remote Access Trojans (RATs) have enabled cybercriminals to scale attacks and bypass even security measures like Face ID and MFA that were once considered bulletproof.
Analyzing behavioral biometrics is now common practice in banks, which are on the hook for cybercrime losses unless the security measures they implement meet the challenges of this new attack landscape.
Computational Motor Control Theory
When you scroll through a drop-down menu or drag a slider on your phone, your brain runs a complex feedback loop, correcting invisible errors with each unconscious millimeter and millisecond of touch.
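That feedback loop can be caricatured in a few lines of code. The sketch below simulates a one-dimensional reach toward a target with a proportional correction plus motor noise; the gain, noise level, and step count are illustrative values I chose for the example, not figures from motor control research.

```python
import random

def corrected_movement(target=100.0, gain=0.4, noise=1.5, steps=30, seed=0):
    """Simulate a closed-loop movement: at each step the 'brain' observes
    the remaining error, issues a proportional correction, and motor noise
    perturbs the result. All parameter values are illustrative."""
    rng = random.Random(seed)
    position, trace = 0.0, []
    for _ in range(steps):
        error = target - position                          # observed error
        position += gain * error + rng.gauss(0.0, noise)   # correction + noise
        trace.append(position)
    return trace

trace = corrected_movement()
print(f"final position: {trace[-1]:.1f}")  # hovers near the 100.0 target
```

The point of the toy model is that the trajectory converges on the target even though no single step is exact; the pattern of those corrections is the raw material behavioral biometrics works with.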
In its infancy, behavioral biometrics sought to distinguish human behavior from bot behavior. Researchers soon discovered that the same technology could be used to distinguish one person's behavior from another person's behavior.
Computational motor control theory, a multidisciplinary field that combines neuroscience with biomechanics and computer science, provides researchers with a framework for understanding the most discriminating aspects of human behavior.
Research shows that what we might dismiss as “noise” – these unconscious motor adjustments – is actually what makes a person's behavioral profile so hard to replicate. A 2012 study at the University of California, Berkeley called Touchalytics analyzed the scrolling patterns of 41 participants as they read text and browsed images on their smartphones, and found that after only 11 scroll strokes the behavioral models could identify a specific user in the group without error.
Digital Tells
The Berkeley study identified 30 behavioral characteristics unique to each user's scrolling habits, including stroke length, trajectory, speed, direction, curvature, stroke duration and finger position. For example, some users come to a complete stop before lifting their finger at the end of a scroll. Others lift while the finger is still moving, in what the researchers call a “ballistic” stroke.
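To make those characteristics concrete, here is a minimal sketch of how a few of them could be computed from a single stroke recorded as (time, x, y) touch samples. The feature names echo the ones listed above, but this is an illustrative subset of my own, not the study's actual 30-feature pipeline.

```python
import math

def stroke_features(samples):
    """Compute a few scroll-stroke features from a list of (t, x, y)
    touch points. Illustrative subset only, not the full Touchalytics set."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    duration = t1 - t0
    # end-to-end displacement vs. the actual path travelled
    direct = math.hypot(x1 - x0, y1 - y0)
    path = sum(math.hypot(b[1] - a[1], b[2] - a[2])
               for a, b in zip(samples, samples[1:]))
    return {
        "stroke_time": duration,
        "length": path,
        "speed": path / duration if duration else 0.0,
        "direction": math.atan2(y1 - y0, x1 - x0),
        "curvature": path / direct if direct else 1.0,  # 1.0 == perfectly straight
    }

# one upward scroll, sampled four times over 60 ms
stroke = [(0.00, 10, 300), (0.02, 12, 260), (0.04, 15, 210), (0.06, 17, 150)]
feats = stroke_features(stroke)
```

Even this crude feature vector already separates a straight, machine-like swipe (curvature exactly 1.0) from the slightly curved, variable-speed strokes a human thumb produces.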

But behavioral intelligence reaches beyond scrolling. Typing rhythms, field navigation, and even subtle changes in how a user holds their phone distinguish one user from the next.
AI Arms Race
Certain behavioral signals, taken alone, can help banks spot obvious fraud. A device lying face down during a purchase, for example, is a huge red flag. Superhuman typing speed, unnaturally direct cursor movement, or activity starting while the device is in screen-lock mode can also sound the alarm.
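A rules-based screen like the one described here is straightforward to sketch. The field names and thresholds below are hypothetical stand-ins I invented for the example; a real system would tune them per channel.

```python
def rule_flags(event):
    """Screen one session event against a few global, rule-based signals.
    Field names and thresholds are illustrative, not production values."""
    flags = []
    if event.get("screen_locked") and event.get("activity_started"):
        flags.append("activity_during_screen_lock")
    if event.get("chars_per_second", 0) > 15:       # superhuman typing speed
        flags.append("superhuman_typing")
    if event.get("cursor_path_ratio", 1.1) < 1.01:  # perfectly straight cursor path
        flags.append("robotic_cursor_movement")
    if event.get("orientation") == "face_down" and event.get("action") == "purchase":
        flags.append("purchase_on_face_down_device")
    return flags

suspicious = {"chars_per_second": 40, "cursor_path_ratio": 1.0,
              "orientation": "face_down", "action": "purchase"}
print(rule_flags(suspicious))
```

Each rule is cheap and interpretable, which is exactly why rules alone aren't enough: a careful attacker can stay under every individual threshold.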
However, behavioral biometrics systems are much more than rules-based systems. Using linear algebra and statistics, AI models can combine highly complex human-computer interaction signals into user-specific models that continuously authenticate the user, even after they've passed point-in-time gateways such as login or Face ID.
At AppGate's Center for AI Excellence – where I work as a machine learning engineer – we train user-specific behavioral models based on mobile sensor data. These models let us analyze, live, whether the movements on your device – or any device with access to your bank account – really come from you.
Our user-specific anomaly detection models, combined with global, rule-based signals, help banks protect against Account Takeover (ATO) and Device Takeover (DTO). In many cases, behavioral models provide better security than traditional biometric markers, such as fingerprints or facial recognition technology.
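The shape of such a user-specific anomaly detector can be sketched very simply: fit per-feature statistics on a user's historical sessions, then score new sessions by how far they drift from that profile. This is a stand-in of my own using plain z-scores, not AppGate's actual models, and the feature names are hypothetical.

```python
import statistics

class UserBehaviorModel:
    """Per-user anomaly detector: learn simple per-feature statistics from a
    user's past sessions, then score new sessions by their average absolute
    z-score. Illustrative sketch, not a production model."""

    def fit(self, sessions):
        keys = sessions[0].keys()
        self.mean = {k: statistics.fmean(s[k] for s in sessions) for k in keys}
        self.std = {k: statistics.pstdev([s[k] for s in sessions]) or 1.0
                    for k in keys}
        return self

    def score(self, session):
        # higher == more anomalous relative to this user's own profile
        return statistics.fmean(abs(session[k] - self.mean[k]) / self.std[k]
                                for k in self.mean)

history = [{"scroll_speed": 1.1, "stroke_time": 0.21},
           {"scroll_speed": 1.0, "stroke_time": 0.19},
           {"scroll_speed": 1.2, "stroke_time": 0.20}]
model = UserBehaviorModel().fit(history)
model.score({"scroll_speed": 1.1, "stroke_time": 0.20})  # low: matches profile
model.score({"scroll_speed": 9.0, "stroke_time": 0.02})  # high: likely not this user
```

The key property is that the model is anchored to one user's own history, so an attacker who merely behaves like "a human" still scores as an anomaly for *this* human.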
Cyber Supply Chain
Seniors are the most common victims of Account Takeover (ATO) and identity fraud. These attacks are often multi-step, multi-organization operations. They typically start with phishing URLs or social engineering (well-researched psychological manipulation over the phone), through which criminals harvest the victim's credentials and sell them to another criminal group or to large black-market platforms such as Genesis Market, a dark web marketplace that held more than 80 million credentials belonging to more than 2 million people.

These digital fingerprints are traded on the market as a commodity and often change hands several times before reaching the fraudster or bot that actually tries to log into your account. This supply chain makes it very difficult for the authorities to catch the criminal or criminals once the fraud is reported.
In a typical ATO, criminals bypass point-in-time verification (login) from a different device, usually one unknown to the bank. Standard online security measures used by most banks – device intelligence, OTPs, MFA or other device authentication – can stop many of these attacks. But alarming new methods are emerging that render them ineffective.
Emerging attack vectors
Today there is malware that can hijack online forms, inject keystrokes remotely as you type, and even reach right into your phone to capture MFA codes in what's called a Device Takeover (DTO), the dreaded cousin of the ATO. And with the rise of generative AI, the fear that cybercriminals are just getting started is coming true.
For example, a deepfake tool used in the cybercrime world called ProKYC allows threat actors to defeat two-factor authentication, facial recognition and even liveness verification tests using deepfake videos. A RAT (remote access trojan) called BingoMod, distributed via smishing (phishing SMS), poses as a legitimate antivirus application for Android phones; the permissions it obtains on the device allow a remote threat actor to silently steal sensitive information, such as credentials and SMS messages, and initiate fraudulent transactions directly from the infected phone.
Once the device is compromised, every bank authentication method is under the attacker's control. From the bank's perspective, the device fingerprint is correct, the IP address is correct, and MFA codes and authentication apps all check out. And with increasingly sophisticated social engineering, even security questions – your mother's maiden name, say – are no longer safe.
This means that the last reliable line of defense against this kind of cybercrime is the authenticity of a person's behavior.
Continuous validation, few interruptions
The growing sophistication of cyber attacks, along with the defenses built to counter them, has led to one positive outcome for online banking customers: a better user experience.
Because behavioral models can continuously authenticate users, the need to repeatedly prompt for MFA or OTP codes decreases, and a legitimate banking session runs more smoothly for customers.

The product I'm currently working on, called 360 Risk Control, brings together signals from bot detection, device intelligence, desktop behavioral biometrics and mobile behavioral biometrics into one continuous risk assessment that runs throughout the banking session, long after point-in-time authentication (e.g., login, Face ID).
If danger signals are raised, the system can step up authentication, request additional verification, or stop the operation altogether. But if the behavior matches the established user profile, the session continues seamlessly.
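That decision logic boils down to mapping a continuous risk score onto a small set of actions. A minimal sketch, with thresholds I made up for illustration (real systems tune them per bank and channel):

```python
def next_action(risk_score, low=0.3, high=0.7):
    """Map a continuous risk score in [0, 1] to a session decision.
    Thresholds are illustrative placeholders, not production values."""
    if risk_score >= high:
        return "block_operation"
    if risk_score >= low:
        return "step_up_authentication"  # e.g. request an extra verification
    return "continue_seamlessly"

print(next_action(0.1))  # low risk: the session flows on uninterrupted
print(next_action(0.5))  # mid risk: ask for one more proof of identity
print(next_action(0.9))  # high risk: stop the operation
```

Note that the happy path returns "continue_seamlessly" with no user interaction at all, which is where the improved user experience comes from.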
In this way, behavioral biometrics represents a sea change: from active authentication (users must do something) to passive (natural behavior becomes evidence), from point-in-time verification to continuous verification, and from fragmented user experiences to seamless, secure workflows.
Further reading:
“Touchalytics” –
“ProKYC” –
“BingoMod” –
FBI Internet Crime Report –



