Follow the AI Desire Paths

Walk through any city park and you'll see little dirt paths cutting through the grass. They appear between sidewalks, across lawns, and in corners planners never intended people to cross.
Urban planners call these desire paths.
They form when people choose their own routes instead of the official ones. Over time the grass wears away and the informal trail becomes visible evidence of how people actually move through space.
For decades, planners treated these paths as errors. Today many see them differently. Desire paths reveal something important: they show where the original design failed to match human behavior.
The same thing happens in modern organizations.
Employees are already using artificial intelligence to write emails, analyze data, summarize documents, and generate ideas. A marketing manager may use a language model to draft campaign copy. A financial analyst may summarize reports with an AI assistant. A product manager may test ideas with a generative tool.
Often this adoption happens quietly, without formal programs or policies.
This phenomenon has a name: Shadow AI.
The name echoes the older concept of shadow IT, where employees installed software without permission from corporate IT departments. Today the pattern is repeating itself with artificial intelligence. Employees are bringing AI tools into their daily work before organizations have established governance structures or approved platforms.
This raises an obvious concern. Sensitive company information can flow into external systems without clear visibility into how that data is processed or stored. Regulatory frameworks such as the GDPR or the EU AI Act can be inadvertently violated. Security teams lose control over how information moves through the organization.
Yet focusing only on risk misses something important.
Shadow AI often reveals where existing systems no longer match how people need to work. Like the desire paths in the park, Shadow AI emerges when workers look for faster, smarter ways to complete everyday tasks.
If this behavior were marginal, it could simply be contained. The numbers suggest otherwise.
Surveys show that nearly four out of five people using AI at work bring their own tools rather than relying on systems provided by their employer. Many interact with these tools through personal accounts instead of business platforms designed to protect sensitive data.
The consequences are starting to show. Research suggests that more than half of employees are willing to put confidential information into AI systems. Organizations with widespread Shadow AI use report higher breach costs and greater exposure to regulatory risk.
In other words, artificial intelligence is already spreading through workplaces at scale. Governance, training, and security structures are arriving later.
This gap creates real dangers. It also reveals something about how technological change occurs within organizations.
Shadow AI as an organizational signal
There is another way to interpret Shadow AI.
When employees use new tools outside formal channels, they are not only bypassing governance structures. They are also pointing out where current workflows fail them.
For many organizations, generative AI first appears at the edges of daily work. Employees reach for it to draft quick emails, summarize documents, analyze spreadsheets, prepare presentations, or test ideas. This experimentation happens quietly because the official systems available to them do not support these capabilities.
What security teams perceive as unauthorized use can also serve as a form of organizational diagnosis. Shadow AI reveals where people are trying to move faster than the systems around them allow.
Urban thinkers have long observed a similar pattern in cities. Jane Jacobs argued that cities should be designed around the way people actually move through them, not the way planners assume they will. Informal paths through parks and campuses are a map of actual behavior.
Organizations facing the rise of Shadow AI may need to adopt a similar mindset.
Instead of viewing Shadow AI as a failure of governance alone, leaders can read it as an early signal of where artificial intelligence may deliver the greatest value. Informal experiments across teams often identify workflows where automation, augmentation, or improved access to information can significantly increase productivity.
When organizations approach these patterns with curiosity rather than fear, the experiments begin to reveal something important. They highlight repetitive tasks that workers are already trying to speed up and expose processes where better tools can unlock meaningful productivity gains.
What at first appears to be chaos often points to opportunities for integration. Instead of scattered, fragmented experiments across departments, organizations can identify common needs and build controlled, scalable solutions around them.
If managed well, this shift does more than reduce risk. It equips employees with secure tools that support the way they already work, transforming artificial intelligence from something that must be constantly policed into an engine of creativity and innovation. Ignoring Shadow AI means missing these signals. It allows costly, uncoordinated experimentation to continue in the shadows while organizations overlook the insights that could guide smarter adoption.
Learning from the AI desire paths
Organizations that want to manage artificial intelligence effectively must first understand how it is being used.
Shadow AI should not be treated only as a compliance problem. It should be read as a signal of where employees are trying to move faster than the systems around them allow. The first step is visibility. Leaders need to understand which tools employees are already using and why. Employee surveys, technical audits, and open-ended interviews across departments often reveal where experimentation is happening first. Marketing, sales, finance, HR, and product teams are common starting points.
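To make "technical audit" concrete, here is one minimal sketch of what such a check could look like: a script that counts requests to well-known AI tool domains in an exported proxy log. The domain list, the file name proxy_log.csv, and the 'user'/'host' column names are illustrative assumptions, not a standard; a real audit would adapt them to the organization's own logging setup.

```python
from collections import Counter
import csv

# Illustrative list of domains associated with popular AI tools.
# A real audit would draw this from a maintained security feed.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def audit_proxy_log(path: str) -> Counter:
    """Count requests to known AI domains in a CSV proxy log.

    Assumes each row has 'user' and 'host' columns; adapt the
    field names to your proxy's actual export format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the ten heaviest user/tool combinations.
    for (user, host), count in audit_proxy_log("proxy_log.csv").most_common(10):
        print(f"{user:<20} {host:<25} {count}")
```

Even a rough tally like this shows which teams are walking which paths, which is the point: the goal is a map of actual behavior, not a list of offenders.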
Once these patterns are identified, the challenge shifts from suppression to structure. Organizations must define which tools are appropriate, establish governance policies around data sensitivity and control, and design processes that reflect how work actually happens within the organization.
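As one sketch of what a data-sensitivity control could look like in practice, the snippet below screens outgoing text for common sensitive patterns before it reaches an external AI tool. The pattern names, the regular expressions, and the blocking behavior are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Illustrative patterns for common sensitive data; a real deployment
# would rely on a vetted DLP library or service, not ad-hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize this contract and email the result to jane.doe@example.com"
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
else:
    print("Prompt cleared for the approved AI tool")
```

A guardrail like this works best when it redirects people toward an approved tool rather than simply saying no; otherwise it recreates the very friction that pushed the experimentation into the shadows.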
Culture is as important as policy. Employees should feel safe discussing how they are experimenting with artificial intelligence rather than hiding it. If people fear punishment or added bureaucracy for using new tools, the experimentation will not stop. It will simply move deeper into the shadows.
Effective governance therefore requires more than rules. It requires an environment where honest experimentation is encouraged and guided. Training, access to approved tools, and clear guidelines allow organizations to turn scattered experimentation into systematic progress.
Understanding what already exists in the shadows is often the first step to building a strong and intelligent AI strategy.
A final thought
In practice, Shadow AI is rarely the result of malice. It usually reflects gaps in organization and communication. In workplaces where employees do not feel safe sharing their experiments, where curiosity is met primarily with correction, the predictable result is silence.
People never stop trying. They just stop sharing.
If organizations want to manage AI effectively, they must start by creating environments where thoughtful experimentation can take place. Training, practical examples, and clear guidelines make responsible experimentation visible instead of hidden.
Above all, culture matters. When curiosity replaces suspicion, exploration moves out of the shadows and into the open.
The first step to mastering Shadow AI is simple: understand where people are going.
About Aleksandra Osipova
Aleksandra Osipova is the founder of Apricity Lab, where she works with leaders and organizations navigating the transition to AI-enabled systems.
She writes about artificial intelligence, systems thinking, and the future of work. More of her writing and work can be found on her LinkedIn.



