Even though we may be decades away from fully autonomous roads, more and more fleet safety technology companies are starting to use artificial intelligence (AI) in their products today. There are many ways to implement AI—for example, on the network edge (in this case, in the vehicle), in the cloud, or end-to-end from the edge to the cloud. We know it’s not a simple topic—that’s why we’re here to break it down, so you know exactly how to evaluate different AI software for driver and fleet safety.
Finding the right application is critical if your fleet is to measurably improve driver behavior and reduce losses from collisions. Here are some evaluation criteria:
- Can your AI model identify the same collision risks as your safest driver?
Challenge
Amidst growing fleets and rising driver turnover, how can safety and fleet managers train drivers to operate as safely as their best-performing driver while ride-alongs and 1:1 coaching become increasingly unscalable?
Technology requirement: multi-task convolutional neural networks (CNN) and multi-sensor data fusion
To help drivers with AI, a fleet needs AI capable of recognizing, and intervening on, the same driver behaviors a human coach can identify.
- Does your AI model work in the real world?
Challenge
Once AI can understand driver behavior, traffic elements and vehicle movement risks as a human coach would, it then needs to be trained to recognize the same behaviors across vastly varying driving ecosystems, including driver characteristics, attire, cabin sizes, lighting conditions, road conditions, traffic patterns, and more.
Technology requirement: continuous learning CNN models
Similar to the human eye and brain, AI will only be able to account for such varying conditions if it is continuously trained with new real-world driving data.
- How is your AI model deployed to most effectively impact driver behavior?
Challenge
Finally, if AI can be trusted to understand driver behavior, traffic elements, vehicle movement and critical contextual data across driving ecosystems, it only works if it can be successfully deployed in the vehicle to help drivers when it matters the most.
Technology requirement: edge-to-cloud software architecture
Thanks to advancements in IoT via edge computing and high-speed cellular networks, it is possible to deploy AI in the vehicle to alert drivers and ultimately prevent collisions once a high-risk event is detected.
Deep Learning Neural Network
The first step in creating relevant, robust AI software for driver and fleet safety is to develop a deep learning neural network, or a set of algorithms designed to recognize patterns with accuracy and speeds similar to the human brain, but in an automated, efficient manner.
To successfully recognize patterns and insights in the complex transportation ecosystem, deep learning systems require significant investment, time, high-quality video, and specialized knowledge.
Since its beginning, Nauto’s technology has incorporated neural network science that recognizes patterns using multiple transportation inputs, including:
- Driver behavior: high-risk behaviors such as distracted driving, drowsiness, fatigue, phone use, and eating regardless of lighting conditions, cabin types, and driver characteristics and attire
- Traffic elements: vehicle and traffic scenes such as lead vehicle position and speed, traffic lights, stop signs, pedestrians, and cyclists
- Vehicular movement: including harsh maneuvers such as hard acceleration, braking and cornering, and speeding
- Contextual data: continuous learning from an ever-growing set of real-world transportation data, such as driver response times, weather, traffic patterns, and collision history
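To make the four input categories concrete, here is a minimal sketch of how per-frame signals from each stream might be gathered into one structure and scanned for simple risk flags. All field names and thresholds are illustrative assumptions, not Nauto's actual schema.

```python
from dataclasses import dataclass

# Hypothetical container for the four input categories described above;
# the fields and example keys are illustrative only.
@dataclass
class FrameInputs:
    driver_behavior: dict   # e.g. {"distracted": 0.8, "phone_use": 0.1}
    traffic_elements: dict  # e.g. {"lead_vehicle_gap_m": 12.0}
    vehicle_movement: dict  # e.g. {"speed_kph": 95.0, "hard_brake": False}
    context: dict           # e.g. {"weather": "rain"}

def risk_signals(frame: FrameInputs) -> list:
    """Collect simple illustrative risk flags, one check per input stream."""
    flags = []
    if frame.driver_behavior.get("distracted", 0.0) > 0.5:
        flags.append("driver_distracted")
    if frame.traffic_elements.get("lead_vehicle_gap_m", float("inf")) < 15.0:
        flags.append("short_following_gap")
    if frame.vehicle_movement.get("speed_kph", 0.0) > 90.0:
        flags.append("speeding")
    return flags
```

In a real system these signals would come from model outputs and vehicle telemetry rather than hand-filled dictionaries; the point is that risk is assessed across all streams at once, not from any single sensor.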
Multi-Task Convolutional Neural Networks (CNN)
While many AI solutions claim to use deep learning neural networks, a more advanced evolution of AI systems leverages multi-task convolutional neural networks (CNN). Once you have created the algorithms required to recognize driver behavior and transportation patterns, you need to evolve the models created by the neural network to enable multi-task convolution. CNN is a specialized field of study within neural network science, but in simple terms, it's what enables the Nauto driver and fleet safety management software in the vehicle and in the cloud to analyze a video image and interpret what is happening inside the cabin, on the road ahead, and to the vehicle itself.
Taking our interior image sensor as an example, the CNN model is able to analyze a video image and identify different forms of driver inattention, including:
- Distracted driving and drowsiness
- Cell phone use
- Smoking
- Improper seat belt use
- And many more
The CNN model interprets the video image and, based on pattern recognition from the deep learning neural networks, deduces the level of risk.
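The core idea of a multi-task CNN, one shared convolutional feature extractor feeding several task-specific heads, can be sketched in a few lines of NumPy. This toy forward pass uses random weights and a single 3x3 filter; it is a structural illustration of the technique, not Nauto's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: the shared feature extractor."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def multi_task_forward(image, kernel, head_weights):
    """One shared convolutional feature map feeds several task heads."""
    features = np.maximum(conv2d(image, kernel), 0.0).ravel()  # ReLU
    # Each head is a linear layer + sigmoid producing one task's probability.
    return {task: 1.0 / (1.0 + np.exp(-(features @ w)))
            for task, w in head_weights.items()}

image = rng.random((8, 8))            # stand-in for an interior video frame
kernel = rng.standard_normal((3, 3))  # shared conv filter (random, untrained)
heads = {task: rng.standard_normal(36) * 0.1   # 6x6 feature map -> 36 inputs
         for task in ("distraction", "phone_use", "no_seat_belt")}
scores = multi_task_forward(image, kernel, heads)
```

The design point: because every head reads the same feature map, one pass over the frame yields scores for distraction, phone use, seat-belt use, and so on, rather than running a separate network per behavior.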
Multi-Sensor Data Fusion CNN
Once you can interpret a single video image and draw conclusions based on pattern recognition, the natural evolution is to layer the CNN model with images from multiple sensors simultaneously. The CNN model must be sophisticated enough to fuse driver behavior, traffic element, and vehicular movement images into a more comprehensive understanding of risk, deducing risk faster and more accurately than the human brain.
Today, Nauto fuses driver behavior, traffic elements, vehicle movement, and critical contextual data in the CNN model to determine the severity of Predictive Collision Alerts to help drivers avoid imminent collisions, as well as Driver Behavior Alerts for automated, real-time, in-vehicle coaching. For example, Nauto Driver Behavior Alerts for distracted driving are a set of progressive audio and verbal alerts that trigger based on the duration of distraction (driver behavior assessment) and vehicle speed (vehicular movement assessment), and provide full context to fleet management and safety leaders for Self-Guided Coaching for drivers or 1:1 Manager-Led Coaching.
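An escalation policy of the kind described, alert severity driven jointly by distraction duration and vehicle speed, can be sketched as a small decision function. The thresholds and level names below are invented for illustration; Nauto's actual tuning is not public.

```python
def alert_level(distraction_s: float, speed_kph: float) -> str:
    """Illustrative progressive-alert policy fusing two signals.

    Thresholds are made-up examples: longer distraction at higher
    speed escalates the in-cab alert.
    """
    if distraction_s < 1.0 or speed_kph < 10.0:
        return "none"            # brief glance, or vehicle nearly stopped
    if distraction_s < 3.0:
        return "chime"           # gentle audio cue
    if speed_kph < 80.0:
        return "verbal_warning"  # spoken alert
    return "urgent_verbal"       # sustained distraction at highway speed
```

Note how neither signal alone decides the outcome: three seconds of distraction produces a different alert at city speed than at highway speed, which is the essence of multi-sensor fusion at the decision layer.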
Training CNN Models
To develop highly accurate, robust pattern recognition in AI-based solutions, the CNN models must be trained with diverse, real-world data and purpose-built classifications that enable continuous learning and improvement.
Like a student learning math, CNN models are only as good as their teacher, which comes in the form of labeled training data. In order to effectively "teach" the CNN model to identify and predict high-risk events, Nauto's proprietary analytics workflow tools are purpose-built to review complex, real-world situations and identify predictors of high-risk driving events, such as:
- Diverse driver attributes and attire, such as face shapes, sunglasses, masks and hats
- Vehicle cabin size from passenger cars to class 8 trucks
- Time of day, from day to night, and other lighting conditions
- Weather conditions such as rainy or cloudy days
- Road types
- And many more
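One common way to keep a training set robust across conditions like those listed is stratified sampling: drawing an equal number of clips per condition so that rare settings (night, rain) are not drowned out by common ones. The sketch below is a generic illustration of that idea, not Nauto's pipeline; the `lighting`/`weather` keys are assumed field names.

```python
from collections import defaultdict
import random

def stratified_sample(labeled_clips, per_condition, seed=0):
    """Draw up to `per_condition` clips from each (lighting, weather)
    bucket so rare conditions stay represented in the training set."""
    rng = random.Random(seed)
    by_condition = defaultdict(list)
    for clip in labeled_clips:
        by_condition[(clip["lighting"], clip["weather"])].append(clip)
    sample = []
    for clips in by_condition.values():
        rng.shuffle(clips)               # random pick within each bucket
        sample.extend(clips[:per_condition])
    return sample
```

Without this kind of balancing, a model trained mostly on clear daytime footage can look accurate overall while failing exactly where the stakes are highest.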
To account for high variation while maintaining accuracy, Nauto's customer fleets around the world empower CNN model development with large amounts of diverse, high-quality driver behavior data. With over 650 million AI-analyzed video miles and growing, Nauto continuously trains and builds its CNN models to take into account the real-world variables found in the complex transportation ecosystem.
Edge-to-Cloud Software Implementation
To make the best use of hardware resources, AI-based driver safety solutions should be implemented across both the edge and the cloud, taking advantage of what each does best.
Closed-Loop Active Learning: in order to efficiently train highly accurate CNN models, Nauto leverages an active-learning process that mines high-value, relevant data on the edge to feed into the CNN model development cycle.
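A standard way to mine high-value samples on the edge is uncertainty sampling: keep clips where the model's confidence is ambiguous, since those teach the model the most once labeled. This is a generic sketch of the technique with made-up thresholds, not Nauto's actual selection logic.

```python
def mine_for_labeling(predictions, low=0.3, high=0.7):
    """Uncertainty sampling: keep clip IDs whose predicted risk score
    falls in an ambiguous band. Confident predictions (near 0 or 1)
    are cheap to skip; uncertain ones are worth uploading and labeling."""
    return [clip_id for clip_id, score in predictions if low <= score <= high]
```

Only the selected clips need to leave the device, which keeps bandwidth low while steadily feeding the cloud-side training cycle with the examples the current model handles worst.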
On-Device, Edge AI Processing: when it comes to driver safety and saving lives, time is critical. Collision prevention starts in the vehicle, and that’s why Nauto’s software runs on the edge, or in the vehicle.
Unlike typical solutions that solely rely on AI processing in the cloud, Nauto’s edge processing responds to detected risks immediately, enabling Predictive Collision Alerts and Driver Behavior Alerts to help prevent collisions in real-time.
A request processed in the cloud incurs roughly three times the latency of one processed on the edge.
Scalable Cloud Platform: while edge AI processing is purpose-built for a real-time collision avoidance system, Nauto's cloud-native software enables rapid iteration for model improvement and offers the advantages of high availability, scalability, and reliability.
Even before deployment of its CNN model on the edge, Nauto builds new detection types and enhancements within the cloud, where millions of data points from 650+ million AI-analyzed video miles are securely stored, meticulously processed, and optimized for driver improvement. From within the cloud, Nauto tests and refines all new and existing CNN model inputs and outputs to reach acceptable accuracy levels before deploying over the air to create an impact on driver behavior on the edge, in real-time.
Conclusion
All of this means that if your AI Driver and Fleet Safety Platform is not built on a multi-sensor data fusion, multi-task convolutional neural network foundation and optimized for an edge-to-cloud implementation, then it cannot help you predict, prevent, and reduce high-risk events in highly complex driving environments before they happen.