
🗺️ Preprocessing for Routing Problems - Part 2

In the previous post, we considered preprocessing for the vehicle routing problem in which the vehicles have different starting locations. Our goal was to create potentially overlapping regions covering the entire US which we could later use for route construction. We defined each region as the set of all 5-digit zip codes in the continental US for which a given regional headquarters is the closest, or among the $n$ closest, in terms of Euclidean distance. The resulting regions gave us some flexibility in how much redundancy we allow in our coverage of the country. ...
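A minimal sketch of that assignment step in R (illustrative only: the `zips` and `hqs` data frames and their columns are assumptions, not the post’s actual code, though the post does use Euclidean distance):

```r
# For each zip centroid, find the names of its n nearest headquarters.
# zips: data frame with columns zip, lon, lat (hypothetical).
# hqs:  data frame with columns name, lon, lat (hypothetical).
nearest.hqs <- function(zips, hqs, n=1) {
  lapply(seq_len(nrow(zips)), function(i) {
    d <- sqrt((hqs$lon - zips$lon[i])^2 + (hqs$lat - zips$lat[i])^2)
    hqs$name[order(d)[1:n]]  # the n closest headquarters to this zip
  })
}
```

Setting $n = 1$ yields disjoint regions; larger $n$ adds the overlap described above.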

June 27, 2014 · Ryan O'Neil

🗺️ Preprocessing for Routing Problems - Part 1

Consider an instance of the vehicle routing problem in which we have drivers that are geographically distributed, each in a unique location. Our goal is to deliver goods or services to a set of destinations at the lowest cost. It does not matter to our customers which driver goes to which destination, so long as the deliveries are made. One can think of this problem as a collection of travelling salesman problems, with multiple salespeople in different locations and a shared set of destinations. We attempt to find the minimum-cost schedule for all salespeople that visits every destination, where each salesperson can technically go anywhere. ...
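In symbols, a sketch of the objective (not necessarily the formulation used in the post): with salespeople $k \in K$, edges $e \in E$, edge costs $c_e$, and binary variables $x_e^k$ indicating that salesperson $k$ traverses edge $e$,

$$\min \sum_{k \in K} \sum_{e \in E} c_e \, x_e^k$$

subject to every destination being visited by exactly one salesperson’s tour.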

May 28, 2014 · Ryan O'Neil

👉 Affine Scaling in R

I recently stumbled across an implementation of the affine scaling interior point method for solving linear programs that I’d coded up in R once upon a time. I’m posting it here in case anyone else finds it useful. There’s not a whole lot of thought given to efficiency or numerical stability, just a demonstration of the basic algorithm. Still, sometimes that’s exactly what one wants.

```r
solve.affine <- function(A, rc, x, tolerance=10^-7, R=0.999) {
    # Affine scaling method
    while (TRUE) {
        X_diag <- diag(x)

        # Compute (A * X_diag^2 * A^T)^-1 using Cholesky factorization.
        # This is responsible for scaling the original problem matrix.
        q <- A %*% X_diag^2 %*% t(A)
        q_inv <- chol2inv(chol(q))

        # lambda = q^-1 * A * X_diag^2 * c
        lambda <- q_inv %*% A %*% X_diag^2 %*% rc

        # c - A^T * lambda is used repeatedly
        foo <- rc - t(A) %*% lambda

        # We converge as s goes to zero
        s <- sqrt(sum((X_diag %*% foo)^2))

        # Compute new x
        x <- (x + R * X_diag^2 %*% foo / s)[,]

        # If s is within our tolerance, stop.
        if (abs(s) < tolerance) break
    }
    x
}
```

This function accepts a matrix A containing all technological coefficients of an LP, a vector rc containing its reduced costs, and an initial point x interior to the LP’s feasible region. Optional arguments include a tolerance, for detecting when the method is within an acceptable distance of the optimal point, and a value for R, which must be strictly between 0 and 1 and controls the step length. ...
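Note that the update adds $R \cdot X^2 (c - A^T \lambda) / s$ to $x$, so as written the iteration increases the objective. A minimal usage sketch under that reading (the problem data below are hypothetical, not from the original post):

```r
# Maximize x1 + 2*x2 subject to x1 + x2 <= 4, x >= 0,
# in equality form with a slack variable x3: x1 + x2 + x3 = 4.
A  <- matrix(c(1, 1, 1), nrow=1)
rc <- c(1, 2, 0)
x0 <- c(1, 1, 2)         # strictly interior: positive, with A %*% x0 = 4
solve.affine(A, rc, x0)  # approaches the optimal vertex (0, 4, 0)
```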

April 27, 2011 · Ryan O'Neil

📈 Simulating GDP Growth

I hope you saw “China’s way to the top” on the Post’s website recently. It’s a very clear presentation of their projection and is certainly worth a look. So say you’re an economist and you actually do need to produce a realistic estimate of when China’s GDP surpasses that of the USA. Can you use such an approach? Not really. The Post made several simplifying assumptions that are perfectly reasonable for an interactive graphic. However, if the goal is an analytical output from a highly random system such as GDP growth, one should not assume the inputs are fixed. (I’m not saying I have any gripe with their interactive. This post has a different purpose.) ...
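To make the distinction concrete, here is a minimal Monte Carlo sketch in R (the starting GDPs and growth-rate parameters are illustrative only, not the Post’s figures): rather than fixing each country’s growth rate, draw it from a distribution each year and look at the spread of crossover years.

```r
# Simulate many GDP paths with random annual growth rates.
# Starting GDPs (in trillions) and rate parameters are illustrative only.
set.seed(1)
crossover <- replicate(10000, {
  us <- 15; china <- 6; year <- 2011
  while (china < us) {
    us    <- us    * (1 + rnorm(1, mean=0.025, sd=0.02))
    china <- china * (1 + rnorm(1, mean=0.090, sd=0.03))
    year  <- year + 1
  }
  year
})
quantile(crossover, c(0.05, 0.5, 0.95))  # range of plausible crossover years
```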

February 23, 2011 · Ryan O'Neil

🧐 Data Fitting 2a - Very, Very Simple Linear Regression in R

Note: This post was updated to include an example data file. I thought it might be useful to follow up the last post with another one showing the same examples in R. R provides a function called lm, which is similar in spirit to NumPy’s linalg.lstsq. As you’ll see, lm’s interface is a bit more tuned to the concepts of modeling. We begin by reading in the example CSV into a data frame: ...
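A minimal sketch of that workflow (the file name and column names are hypothetical, not the post’s example data):

```r
# Read the example CSV into a data frame and fit a simple linear model.
df  <- read.csv("data.csv")
fit <- lm(y ~ x, data=df)
coef(fit)     # intercept and slope estimates
summary(fit)  # standard errors, R-squared, and other diagnostics
```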

February 16, 2011 · Ryan O'Neil