Technology/AI Archives – Uber Freight

Reimagining Uber Freight’s carrier pricing algorithm to drive better outcomes
https://www.uberfreight.com/blog/reimagining-uber-freights-carrier-pricing-algorithm-to-drive-better-outcomes/
December 8, 2023

By: Kenneth Chong (Sr. Applied Scientist), Joon Ro (Sr. Applied Scientist), Emir Poyraz (Sr. Engineer) and May Wu (Applied Science Manager)

An innovative spin on a classic algorithm 

Uber Freight operates a two-sided marketplace, which separately interfaces with shippers (who seek to move loads from point A to point B) and with carriers (truck drivers who move these loads).  Both sides of this marketplace are powered by dynamic pricing algorithms. Each is structured a bit differently based on the needs of the respective marketplace.

Recently, our team identified the need to change the algorithm dictating carrier pricing. The algorithm generates a rate based on a variety of factors like market conditions, the size of the load, distance to move the load, etc. Until this point, we had used an algorithm based on Markov Decision Processes (MDP) to set carrier prices (see this earlier blog post for a full description of how the MDP model works) because it is natural to our problem: we need to make pricing decisions sequentially over time. Over time, though, we realized that while the algorithm was performant, it had shortcomings that were challenging to address with a standard MDP implementation.

We overcame these by:

  1. Truncating the backward induction
  2. Relying more heavily on machine learning (ML) to approximate the value function
  3. Developing a new clustering algorithm to identify “similar” loads that may be useful in estimating cost—creatively combining optimization and ML to develop what we think is a more robust version of MDP.

MDP is a natural choice of algorithm, but implementation is challenging

To briefly illustrate the problem of carrier pricing, suppose that we have made a commitment with the shipper to haul a load from Chicago, IL to Dallas, TX. The pickup appointment begins exactly 5 days from now: in other words, the lead time T is 120 hours. Because it becomes more difficult to find carriers on shorter notice, prices tend to increase as lead time decreases. Thus, we have a sequence of decisions to make over time, which we refer to as a price trajectory:

Figure 1: Example of a price trajectory over 5-day lead time

Because we update prices hourly, this trajectory can be represented as a sequence {p_t}, t ∈ {T, T−1, …, 0}, where p_t is the price offered t hours from pickup. The goal of our algorithm is to pick a trajectory that minimizes, in expectation, the total cost to cover the load. We refer to this quantity as eCost:

eCost_T = Σ_{t=T}^{0} p_t · Pr(B_t = 1 | p_t, S_t),

where B_t = 1 if the load is booked at lead time t (and 0 otherwise), and S_t denotes (possibly time-varying) state variables other than price that can affect the booking probability. This quantity usually increases as time passes and the load remains unbooked.
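As a heavily simplified sketch, eCost can be computed as a probability-weighted sum over the trajectory. The function below is illustrative only—in production, the booking probabilities come from an ML model rather than being passed in directly:

```python
def expected_cost(prices, book_probs):
    """eCost for a price trajectory: the sum, over lead times t = T..0,
    of the offered price times the probability the load books then.

    prices[i] and book_probs[i] refer to the same lead time."""
    assert len(prices) == len(book_probs)
    return sum(p * q for p, q in zip(prices, book_probs))
```

For a two-hour trajectory with prices of $100 then $110 and a 50% per-hour booking probability, eCost is 0.5·100 + 0.5·110 = 105.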

With MDP, computing eCost entails solving the Bellman equation

V_t = min_{p_t} [ Pr(B_t = 1 | p_t, S_t) · p_t + (1 − Pr(B_t = 1 | p_t, S_t)) · V_{t−1} ],   t ∈ {0, 1, …, T}

The quantities {V_t} are directly interpretable as eCost, conditional on our pricing policy. However, a crucial input into MDP is accurate estimates of the booking probabilities Pr(B_t = 1 | p_t, S_t). Although the ML model we use for this performs well, errors can compound over time. This means we become vulnerable to the optimizer’s curse, which states that in the presence of noise, total cost estimates associated with the optimal decision are likely to be understated. This, in turn, leads to lower, overly optimistic prices that tend to decrease further at higher lead times.

In addition, the curse of dimensionality presents significant challenges in modeling more complex state transitions. For example, incorporating near-real-time (NRT) features, such as app activity, into our prices entails adding those to state variables – which quickly makes our problem computationally infeasible.

Our approach: carefully blending prediction and optimization

To improve pricing accuracy, we explored using a predictive model, rather than a standard backward induction, to estimate eCost. In particular, we perform K ≤ T stages of backward induction, and if the load remains unbooked after K hours, we substitute for the continuation value V_{T−K} a prediction from this model. In this case, the Bellman equations become:

V_t = min_{p_t} [ Pr(B_t = 1 | p_t, S_t) · p_t + (1 − Pr(B_t = 1 | p_t, S_t)) · V_{t−1} ],   t ∈ {T−K+1, …, T−1, T}

with the boundary condition V_{T−K} = eCost_{T−K}.
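A minimal sketch of the truncated induction, assuming a hypothetical `book_prob(t, p)` estimator and a finite price grid (both stand-ins for our production models):

```python
def truncated_backward_induction(T, K, price_grid, book_prob, boundary_value):
    """Run K stages of backward induction; the continuation value at
    lead time T-K is replaced by a predictive-model estimate
    (boundary_value) instead of being computed recursively."""
    V = {T - K: boundary_value}
    policy = {}
    for t in range(T - K + 1, T + 1):
        best_price, best_value = None, float("inf")
        for p in price_grid:
            q = book_prob(t, p)  # probability the load books at this price
            value = q * p + (1 - q) * V[t - 1]
            if value < best_value:
                best_price, best_value = p, value
        V[t] = best_value
        policy[t] = best_price
    return V, policy
```

With a single price of 100, a constant 50% booking probability, and a boundary value of 0, the recursion gives V_1 = 50 and V_2 = 75.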

Figure 2: Illustration of sequential decision making in MDP

Picking an appropriate value for K required some iteration, but this truncated backward induction helps to mitigate each of the issues we highlighted above.

An initial candidate for such a predictive model was an XGBoost ensemble that we previously used elsewhere in carrier pricing. The training data consisted of historical loads, their characteristics (we describe these in more detail below), the lead times at which they were booked, as well as their realized costs. 

However, the predictions generated by this model were not suitable for estimating eCost for two reasons. 

  1. There is an important distinction to be made between a load’s remaining lead time, and its available lead time. While we would include the former in the feature vector at time of serving, there are systematic patterns in the latter. That is, similar loads tend to become available at specific lead times. For example, longer hauls tend to become available with higher lead times than shorter hauls. Thus, we would have concentrations of data points around certain available lead times, making predictions at other lead times less reliable.
  2. The training data consisted solely of outcomes—the realizations of random variables, rather than their expectations—so predictions from this model could not be interpreted as an eCost either. It was not clear how to modify the training set so that predictions from the XGBoost model could be treated as expectations.

Instead of trying to predict eCost directly, we borrow principles from clustering to find a set of loads that are similar to the one to be priced, and take a weighted average of their total costs. With this approach, remaining lead times can be explicitly incorporated into eCost — we can exclude, from the set of similar loads, ones that were booked sooner than the remaining lead time of the current load. Moreover, each of the similar loads identified can be viewed as a potential outcome for the current load, which are assigned probabilities equal to the weights used in the averaging.
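A sketch of this estimator, assuming each similar load carries a similarity weight, the lead time at which it booked, and its realized total cost (the data layout here is hypothetical):

```python
def ecost_from_similar_loads(similar, remaining_lead_time):
    """Estimate eCost as a weighted average of similar loads' costs.

    similar: list of (weight, booked_lead_time, total_cost) tuples.
    Loads booked at a lead time greater than the current load's
    remaining lead time are excluded, since they are not valid
    potential outcomes for the load being priced."""
    kept = [(w, c) for w, t, c in similar if t <= remaining_lead_time]
    total_w = sum(w for w, _ in kept)
    if total_w == 0:
        return None  # no valid comparables
    return sum(w * c for w, c in kept) / total_w
```

A load booked at a 50-hour lead time is excluded when only 24 hours remain; the weights of the surviving loads act as the probabilities of their respective outcomes.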

The primary challenge here is developing a way to measure similarity between loads, which we describe with a number of attributes:

  • Locations of the pickup and delivery facilities
  • Time of day, day of week the load is scheduled to be picked up or delivered
  • Market conditions around the time the load is scheduled to be moved

There is a mixture of continuous and categorical features, and it is not at all clear how these should be weighted against each other when computing similarity.

…with an unconventional clustering algorithm

We developed a new algorithm where our aforementioned XGBoost model is used to perform clustering. Consider the simple case where our model is a single (shallow) decision tree with a lone feature: driving distance. We might obtain a tree like the following:

Figure 3: Illustration of a single decision tree for clustering

Two loads can be viewed as  “similar” if their predictions are made from the same leaf node. In the case of leaf node #6, we know that their route distances are both between 282 and 458 miles. 

This might seem like a crude way of measuring similarity, but by growing deeper decision trees and incorporating more features, we require the two loads to have comparable values on a number of dimensions to be considered “similar.”  We can further differentiate by growing additional trees. In a nutshell, we associate with each load an embedding vector,  consisting of nonzeros at locations matching with the leaf nodes the load falls into. We measure similarity between loads through the cosine similarity of their embeddings, which is directly related to the number of trees in which both have common leaf nodes.
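Because each embedding has exactly one nonzero per tree, the cosine similarity collapses to the fraction of trees in which the two loads land in the same leaf. In xgboost, per-tree leaf indices can be obtained via `Booster.predict(..., pred_leaf=True)` or the sklearn wrapper's `apply` method; the similarity itself then reduces to:

```python
def leaf_cosine_similarity(leaves_a, leaves_b):
    """Cosine similarity of two tree-ensemble leaf embeddings.

    leaves_a / leaves_b: the leaf index per tree for two loads.
    Each one-hot-per-tree embedding has norm sqrt(n_trees), so the
    cosine similarity equals shared_leaves / n_trees."""
    assert len(leaves_a) == len(leaves_b)
    shared = sum(a == b for a, b in zip(leaves_a, leaves_b))
    return shared / len(leaves_a)
```

Two loads that share leaves in 2 of 3 trees have similarity 2/3; identical leaf assignments give similarity 1.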

Smoother prices led to improved booking outcomes

Given the fairly substantial changes we made to the algorithm, we performed a rollout in two main phases.  First, we incorporated eCost into the algorithms we used in secondary booking channels, and conducted an A/B test to ensure no performance decline. Second, we ran a switchback experiment comparing it with the previous version of MDP. The net effect of these two experiments was a statistically significant reduction in the total cost of covering loads, without any negative impact on secondary metrics. We attribute this to the smoother price trajectories generated by the new algorithm, which results in loads being booked earlier in their lifecycles. This is particularly relevant, as costs tend to increase sharply once lead times fall below 24 hours. 

With the rollout of this new algorithm comes a number of secondary benefits. The earlier booking of loads, due to smoother price trajectories, not only reduces costs but is crucial in avoiding the sharp cost increases associated with shorter lead times.  Reducing the total cost to cover loads allows us to offer more competitive pricing to shippers— which, in turn, increases volume. Additionally, flatter price trajectories might lead to carriers checking loads on the app earlier in their life cycles. Beyond these business advantages, the algorithm has also contributed to reducing technical debt by unifying pricing across booking channels, as well as by removing nonessential components used in the previous implementation of MDP. 

Though we think there are areas for further improvement, this new pricing algorithm provides a blueprint for how MDP can be applied to new use cases within Uber Freight.

*Since we are also using eCost as a sort of value function approximator, the problem we now solve shares some similarities with an approximate dynamic program.

Recommendations at Uber Freight: Achieving Better Load Matching with AI
https://www.uberfreight.com/blog/better-load-matching-with-ai/
September 24, 2023

By: Ran Sun, Sr. Product Manager, Jia Wang, Sr. Machine Learning Engineer, Gowtham Suresh, Sr. Applied Scientist

Problem: How to match shippers’ loads to interested carriers

You might think of Uber Freight’s digital brokerage business as an online dating platform. Just as online dating seeks to match individuals, the brokerage aims to pair shippers’ loads with carriers willing to take them. Specifically, shippers turn to us to reach a carrier base broader than what they have access to. Not only does this maximize their potential to achieve the best possible cost and service outcomes, but both parties can also rely on our expertise and technology in managing the downstream load lifecycle.

To find that one true love load, we could ask the carrier to specify their preferences via search – are you a homebody so you need a recurring route that gets you home, or do you have a wanderlust streak and are willing to drive any route? But people can’t always articulate what they want, nor how much those wants matter.

This nuance was the impetus for launching a recommendations system at Uber Freight. We had been putting the burden on carriers to tell us what they want through search, but searching is tedious, carriers have multiple search intents, and they do not always make their preferences explicit. By enabling an automated discovery process, we can service loads more efficiently and cost-effectively. This builds trust with shippers and increases the amount of freight they tender to our platform—which benefits carriers, giving them more options to find that perfect load. More load volume draws more carriers to the platform, which increases shippers’ ability to efficiently and cost-effectively tender their freight. In sum, a recommendations system is one way to enable the brokerage flywheel, lending a helping hand to shippers and carriers by playing matchmaker.

The brokerage flywheel. Recommendations are a component of an automated, instant network and facilitate better carrier utilization.

What is load matching?

Before we delve into our solution, let’s take a moment to define load matching further. In a perfectly optimized network, we’d be able to match the exact right load to the exact right carrier across the nation. However, carrier capabilities and preferences complicate load matching, as do the requirements of the load, such as cargo type, distance to be traveled, and delivery deadline. Our recommendations system judges how strong of a fit may exist between a carrier and a load in order to serve up the best combinations.

What’s the difference between load matching and digital freight matching (DFM)?

Load matching and DFM are closely related concepts. Load matching is the broader term; it refers to the process of connecting shippers with carriers to efficiently utilize transportation capacity. Load matching can be done digitally or non-digitally (e.g. through phone calls or personal relationships). DFM refers specifically to load matching via digital methods. It enables real-time visibility into network needs and carrier capacity, reducing the need for manual intervention.

Solution: A recommendations system

We hypothesized that by replacing the default search page with a page of recommendations, we could move carriers away from a “spearfishing” search model and towards a discovery experience in which they found loads they didn’t even think to look for. Both the design of the user experience and the algorithm itself were critical to the success of this product. In this post, we’ll deep dive into the algorithm.

Our recommendations system prioritizes conversion and engagement, similar to those deployed in the e-commerce industry. There are two components: candidate generation and ranker/booster.

Step #1: Candidate generation

We initially wanted to utilize collaborative filtering to generate candidate loads to recommend. Collaborative filtering examines users with similar behavior to infer the preferences of the user in question—similar to how your favorite streaming service leverages user data to recommend content. However, employing this technique using load bookings as the unit of similarity has challenges, since load bookings are exclusive. In the streaming service context, two users can like the same movie. In our context, two users cannot book the same load. 

To circumvent this issue, we tried methods such as treating lane bookings and saved loads as positive signals. (Lane bookings are multiple loads booked on the same geographical route, e.g. San Francisco to Los Angeles.) Neither of these suffers from exclusivity. However, the viability of these approaches was nixed by the sparseness of the data.

Ultimately, we used several signs of interest from carriers to generate load recommendation candidates: 

  • Saved loads
  • Clicked on loads
  • Loads similar to prior bookings
  • Loads taking freight drivers to their billing address as a proxy for home
  • Searches executed on our partner vendor sites

Precision and Recall Comparison for Different Candidate Generation Types

The above chart represents the precision and recall by considering the top 20 ranked loads from each of the candidate generation types. Precision measures how many of the recommendations were actually booked, whereas recall checks what share of bookings were recommended.  Typically, we prioritize precision over recall when it comes to recommendations as it is a direct assessment of relevance. Our primary objective is to cultivate trust among carriers by presenting them with the most pertinent loads. Even when such loads are scarce, we lean towards displaying fewer recommendations rather than cluttering the space with less relevant ones. In our offline testing, saved loads consistently yielded the highest precision, making them a robust signal. Clicks and bookings exhibit relatively similar precision and contribute significantly to recall. We intentionally provide an additional boost to candidates from saved loads due to their exceptionally high precision performance.
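Precision and recall at k can be sketched as follows (illustrative; our actual offline evaluation aggregates these metrics across carriers):

```python
def precision_recall_at_k(recommended, booked, k=20):
    """Precision@k: share of the top-k recommendations that were booked.
    Recall@k: share of booked loads that appear in the top-k."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(booked))
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(booked) if booked else 0.0
    return precision, recall
```

If 2 of the top 20 recommended loads were among a carrier's 3 bookings, precision@20 is 0.1 and recall@20 is 2/3.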

Step #2: Ranker/Booster

We used the XGBoost algorithm to order the candidate loads generated from the sources mentioned above. The algorithm utilizes load characteristics, carrier personas established in user research and substantiated through data, and the intersection between load characteristics and carrier preferences as features. It then uses the label of whether the load was booked for training and prediction purposes. 

Below are the SHAP values of the top 3 features for a sample candidate load. The order of the features illustrates feature importance, the color indicates positive (red) or negative (blue) impact on the outcome variable, and the x-axis indicates the magnitude of impact. Here, the outcome variable is booking probability. Derived_lead_time is the most important feature in determining booking probability. Lead time for this sample indicates low booking probability. Note that feature importance ordering and their magnitude vary by load.

Feature importance using SHAP values for a sample load

We highlight three key features as follows:

  • Lead time to pickup (“derived_lead_time”): Loads in the freight marketplace can be considered akin to perishable goods, wherein timely booking is crucial. If a load remains unbooked before its scheduled pickup, then we have to reschedule the load, leading to strained relationships with shippers. Incorporating lead time as a feature in the algorithm allows us to capture the dynamic of carriers becoming more inclined to book as the pickup time approaches (because the price rises).
  • Repeat booking (“derived_is_repeated”): A lot of carriers prioritize familiarity. Perhaps familiarity means driving the same route and getting to know the best diner along the drag, or making friends with the warehouse receivers and getting unloaded earlier. To simulate this desire, we developed an exponential kernel function that quantifies the geographical similarity between a given load and the user’s previous bookings.
  • Distance (“derived_route_distance_score”): Much like how some people enjoy long journeys while others prefer short getaways, carriers also have distance inclinations. We considered a carrier’s previously booked lengths of haul to infer their preference in this respect. A preference score was then assigned to each candidate load based on its length of haul.
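The repeat-booking feature can be sketched with an exponential kernel. The plain Euclidean distance and the `scale_miles` bandwidth below are hypothetical stand-ins—the production feature uses real geographic distances and a tuned bandwidth:

```python
import math

def repeat_booking_score(load_origin, load_dest, prior_bookings, scale_miles=50.0):
    """Exponential kernel quantifying geographic similarity between a
    candidate load and a carrier's previous bookings; the best match wins.

    prior_bookings: list of (origin, destination) coordinate pairs."""
    def dist(a, b):  # Euclidean stand-in for geographic distance
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = 0.0
    for origin, dest in prior_bookings:
        gap = dist(load_origin, origin) + dist(load_dest, dest)
        best = max(best, math.exp(-gap / scale_miles))
    return best
```

A candidate on exactly the same route as a prior booking scores 1.0, and the score decays smoothly toward 0 as the endpoints drift apart.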

Results: Better load matching increases bookings

We tested our new recommendations system with a user-level A/B experiment, to positive results. Lifts occurred throughout the carrier booking funnel, most notably an increase of 12% in bookings for active users and an overall increase of 3% in bookings and 5% in clicks. 

What’s next? We’ll investigate several iterations, from improving the experience for select pockets of users to leveraging more data sources in generating load candidates. With these changes, carriers can rely on Uber Freight to help them find the best possible load given their preferences, as efficiently as possible. Now that’s a happy ending.

Interested in learning more about how our team is using AI-driven algorithms to improve logistics for shippers and carriers alike? Read about our probability of late arrival (PLA) model.

 

A look inside the AI engine powering on-time arrivals
https://www.uberfreight.com/blog/estimating-shipment-arrival-ai/
July 11, 2023

By: Mudit Gupta, Sr. Data Scientist; Mohit Gulla, Applied Scientist; and Angelo Mancini, Applied Science Manager

Will my load arrive on time?

In the logistics industry, understanding when a load is running late is fundamental to mitigating poor service outcomes. Given advance notice, Uber Freight can work with the carrier and shipper to mitigate the impact of a late arrival. However, late arrivals are frequently detected too late for any adjustments to be made, or not detected until the appointment time has come and gone.

At Uber Freight, our tracking system is designed to provide the highest quality service to shippers. By combining our in-house tracking data with our deep understanding of logistics and our machine learning expertise, we’ve developed a system that continuously refines our data on facility locations and builds on these data to surface real-time predictions of late arrivals to our operations team.

The problem and our approach to solving it

At its core, predicting whether or not a carrier will arrive on-time at a facility requires three key ingredients: (1) the location of the facility, (2) geofences around the facility that we can use to detect when a carrier has arrived at or departed from a facility, and (3) a model that can make real-time predictions of late arrival given a carrier’s location and the location of the facility. As we’ll see in the following example, if any of these components fails, the whole system falls apart.

Figure 1: Two trucks headed to the same facility (green pin) generate different types of tracking errors when the system has an inaccurate facility location (the brown circle, with 1.5 mile arrival geofence and 6 mile departure geofence).

In the case of the green truck, neither geofence is triggered, which means that as far as the system can tell, the carrier never arrived at the facility (even though they may actually have arrived on time). In the case of the brown truck, both geofences are triggered while the carrier is in transit to the actual facility, which means the system will incorrectly log the carrier as having arrived at the facility at the incorrect time, record the carrier as having spent very little time at the facility (since the carrier is really just driving by, this is known as a ‘dwell defect’), and log the carrier as having left the facility at the incorrect time. For both trucks, any late-arrival predictions made while they were in transit would not be reliable, as the model would be making predictions using the incorrect facility location.

This example motivates the approach we took to building our tracking system: refine the fundamentals (location and geofences) in order to build a high quality late arrival model. Throughout the project, we leveraged the vast amount of historical load and tracking data at our disposal.

Step 1: Laying the foundation with Project Pinpoint

Getting location data for shipping facilities sounds straightforward, right? After all, it’s the trucks that are moving, not the facilities. Unfortunately, it’s not that simple. When we reviewed our facility location data at the beginning of our project, we found that the locations we’d been getting from legacy GPS navigation companies were frequently incorrect. For a sample of 500 of the largest facilities in our network, ~40% had incorrect GPS locations, including 10% of cases where the facility’s location was off by at least 0.3 miles (most likely mapped to the center of the facility’s postal code). 0.3 miles may not seem like a sizable error, but for these facilities, our data indicated that ~24% of loads had incorrect arrival/departure times logged by the system.

We turned to our internal GPS tracking data, which we collect through the Uber Freight mobile carrier app for thousands of shipments on a daily basis. Figure 2 below illustrates our approach.

Figure 2: The facility location in the system (red circle) is incorrect; the correct location can be identified by analyzing the location of GPS pings for carriers visiting the facility.

It’s clear from this example that the system’s facility location is incorrect, and that the facility is actually located near the cluster of pings at the top right. However, doing this analysis manually for every facility simply isn’t feasible given the scale of our network. Instead, we built a machine learning model to analyze our historical GPS data and identify clusters of pings associated with facilities. To ensure the accuracy of the algorithm, we added common sense checks like distinguishing between pings received from carriers that were moving versus those that were at rest; discarding clusters associated with rest stops, fuel stops and other spurious locations; and ruling out GPS signals from dispatchers that were not actually in transit.
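A toy version of the core idea—keep only stationary pings, bin them into a coarse grid, and take the densest cell's centroid as the facility estimate. The real Pinpoint model is considerably more involved, with the additional checks described above; the speed threshold and cell size here are hypothetical:

```python
from collections import Counter

def locate_facility(pings, speed_limit_mph=2.0, cell=0.005):
    """Estimate a facility location from GPS pings.

    pings: iterable of (lat, lon, speed_mph). Stationary pings are
    grid-binned; the centroid of the densest cell is returned."""
    still = [(lat, lon) for lat, lon, s in pings if s <= speed_limit_mph]
    if not still:
        return None
    bins = Counter((round(lat / cell), round(lon / cell)) for lat, lon in still)
    target, _ = bins.most_common(1)[0]
    members = [(lat, lon) for lat, lon in still
               if (round(lat / cell), round(lon / cell)) == target]
    return (sum(p[0] for p in members) / len(members),
            sum(p[1] for p in members) / len(members))
```

Pings from a truck driving past at highway speed are filtered out before binning, so they cannot drag the estimate toward the road.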

After cleaning our historical facility location data, we now run the Pinpoint algorithm on a recurring basis to make sure that our facility locations are always up to date, and that we identify locations for new facilities joining our network as quickly as possible.

Step 2: Building better geofences with Project Lasso

As we discussed in the first example, Uber Freight’s automated tracking system (and many tracking systems in the freight industry) uses geofences to determine the time a carrier arrives at and departs from a facility. There are several challenges when trying to draw ‘good’ geofences:

  1. Geofences that are too large can lead to inaccurate arrival/departure events, while fences that are too small may miss actual arrivals and departures.
  2. There’s no single best radius for a geofence since facilities vary widely by size: some facilities are small/single buildings, while a superstore or major distribution center could have one address and ‘location’ tied to a complex of buildings and a spacious parking lot, none of which represent the actual loading dock.
  3. If the facility is shared by multiple shippers, then we may need to shift the location of the geofence to focus on the area of the facility relevant to the load being tracked.

To address these challenges, we again turned to our high-quality, in-house GPS data. In Project Lasso we developed an algorithm that analyzes hundreds of thousands of GPS signals to automatically create tailored geofences for our facilities. Whereas the standard geofence is a circle with a 1.5 mile radius, we’ve been able to produce geofences with radii as small as 0.1-0.3 miles around the facility or the loading dock within it. Figures 3 and 4 below show how we converted GPS data into geofences for a variety of facilities.
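One way to derive a tailored radius is to pick the smallest distance that covers most of the stationary pings, clamped between a floor and the 1.5-mile default. The 95% coverage level below is a hypothetical choice, not our production setting:

```python
import math

def fit_geofence_radius(ping_dists_miles, coverage=0.95, min_r=0.1, max_r=1.5):
    """Smallest radius covering `coverage` of the stationary pings'
    distances from the facility estimate, clamped to sane bounds."""
    d = sorted(ping_dists_miles)
    idx = max(0, math.ceil(coverage * len(d)) - 1)
    return min(max(d[idx], min_r), max_r)
```

A tight cluster of pings with a few far-off outliers yields a small radius (clamped up to 0.1 miles), while a sprawling distribution is capped at the 1.5-mile default.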

Figure 3: (Left) Raw GPS pings for carriers visiting the facility; (Right) Facility location and geofence automatically derived from GPS pings.

Figure 4: Two different locations and geofences identified at the same facility but connected to different shippers.

Project Lasso has helped both shippers and carriers save time by reducing disputes about arrival and departure times. We have also seen a 60% (relative) reduction in the fraction of loads exhibiting some type of obviously incorrect behavior, i.e. dwell defects in which a carrier seems to have spent < 15 minutes between arrival and departure, and transit defects in which the carrier appears to have traveled faster than 80 mph on average between two stops. These improvements give us confidence in the arrival/departure times logged in our system.
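The two defect checks described above are simple threshold rules; a sketch, with timestamps in seconds and the thresholds taken from the text:

```python
def dwell_defect(arrival_ts, departure_ts, min_dwell_min=15):
    """Flag visits where the carrier 'dwelled' under 15 minutes --
    usually a sign the geofence fired on a drive-by."""
    return (departure_ts - arrival_ts) / 60.0 < min_dwell_min

def transit_defect(miles_between_stops, hours_between_stops, max_mph=80.0):
    """Flag legs whose implied average speed exceeds 80 mph."""
    return miles_between_stops / hours_between_stops > max_mph
```

A 10-minute visit is flagged as a dwell defect; a 100-mile leg logged as taking one hour is flagged as a transit defect.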

Figure 5: Project Lasso drove a sustained 60% (relative) reduction in tracking defect rates after full launch.

Step 3: Predicting late arrivals with machine learning

After firming up the foundations of our system by correcting facility locations and building tailored geofences, we were ready to tackle our original challenge: providing high quality, real-time late arrival predictions at scale. We followed a four-step process:

  1. We used the newly-corrected facility locations and geofences to retroactively clean our historical data to make sure we used the most accurate data possible to build our model.
  2. We then curated the historical loads in our data to make sure that only those loads for which we had sufficiently high quality GPS tracking data available were included, again to help the model learn from the highest quality data.
  3. Next, we enriched the data available to the model; in addition to a carrier’s position, we included information such as the carrier’s direction of travel, speed, pacing, etc., as well as their historical on-time performance.
  4. Finally, we trained and tuned our model using our in-house data science platform, obtaining a model that could predict risk of late arrival starting at six hours in advance of pick-up.
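The enrichment step above can be sketched as a feature-assembly function. Everything here is illustrative—the Euclidean distance stands in for real road distance, and the feature names are hypothetical:

```python
import math

def pla_features(carrier_pos, facility_pos, speed_mph, hours_to_appt, on_time_rate):
    """Assemble an illustrative feature row for a late-arrival model:
    remaining distance, required average speed ('pacing'), current
    speed, and the carrier's historical on-time rate."""
    dist = math.hypot(facility_pos[0] - carrier_pos[0],
                      facility_pos[1] - carrier_pos[1])
    required = dist / hours_to_appt if hours_to_appt > 0 else float("inf")
    return {
        "remaining_miles": dist,
        "required_mph": required,
        "current_mph": speed_mph,
        # pacing > 1 means the carrier is moving faster than needed
        "pacing_ratio": speed_mph / required if required > 0 else float("inf"),
        "historical_on_time_rate": on_time_rate,
    }
```

A carrier 60 miles out, one hour before the appointment, driving 60 mph has a pacing ratio of exactly 1.0—right on the edge, and a natural input for the model to weigh against the carrier's history.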

With these predictions in hand, our operations team can focus their attention on loads likely to arrive late and take mitigating steps such as reaching out to the carrier to check in, alerting the shipper so they can extend or reschedule the appointment, or reassigning the load to a different carrier (also known as ‘bouncing’ the load). Since launching the probability of late arrival (PLA) model in production, we’ve seen a significant improvement both in our ability to flag late loads as well as detect bounced loads.

What’s next

We view the late-arrival machine learning project discussed here as the first step in an ambitious roadmap to improve service outcomes for our shippers. We are actively working on an estimated-time-to-arrival (ETA) model to accompany our PLA model. ETA modeling is ubiquitous in the consumer and retail space (we’ve all used a navigation app or seen ETAs for food deliveries), but ETA prediction for freight poses distinct challenges. For instance, freight trips often last much longer than food delivery or rideshare trips, cross through multiple geographies with different traffic and weather conditions, and are impacted by both regulations on hours-of-service and carrier behavior (e.g. stopping to sleep, refuel, etc.).

With PLA and ETA models in place, we will develop automated ‘self-healing’ workflows to further improve service for our shippers. For instance, we can use the PLA model to flag loads at risk of arriving late to pick-up, then use the ETA and PLA models to identify which carriers nearby might be able to pick up the load in time, and finally automatically ask the most promising carriers if they can step in to avoid a missed appointment. We are also leveraging the foundations developed in the late-arrival project to go beyond load-level tracking for shippers. For instance, using the improved arrival and departure times generated by Projects Pinpoint and Lasso, we have produced much more accurate estimates of how long carriers spend waiting at facilities (‘dwell time’) and have combined those estimates with our in-house carrier ratings data to quantify the impact of poorly operated shipper facilities on carrier satisfaction and, ultimately, shipper costs.
