My first thought was that any minimum would be a point where the slope to all adjacent points was positive. If there were two or more points with that condition, then one would have to be a local minimum, and you could stop calculating. However, I may totally misunderstand the problem, or that may be as much of a brute force calculation as the original problem.
A non-exhaustive search points to using Fermat's theorem: http://en.wikipedia.org/wiki/Fermat%27s_theorem_%28stationary_points%29
None of this may be helpful, but I loved learning what I learned while I was looking.
Your first thought is correct (it's equivalent to the actual halting criterion of my improved algorithm), but it's not a net savings over the original brute-force algorithm to prove that the solution is unique that way.
Glad I was able to spur some learning! :D
I think the expression on my face right now must be the same one that I get from people at work when I whip out the Dictionary of Saints or The Oxford Book of anything.
Dude, I said "Math" in the post title!
The Oxford Book of Anything sounds awesome. Where can I pick up a copy?
It is sadly long out of print. I did just send for The Cambridge Book of the Beatles. I'll let you know how that goes.
Each node has 4 edges, right? If you take a regular grid, and tug on one point at a 45 degree angle so that all the edges are pointing roughly in that direction, presto, local minimum.
I... don't quite follow what you mean. You can't adjust the grid; it's a given.
I'm not looking for local minima; in fact I'd much prefer it if there weren't any. I can imagine cases where they will exist, but those will be pretty uncommon.
What I'm wondering is if there's any simple test that would guarantee that there are only global minima (or guarantee that there aren't).
If you take a regular Cartesian grid, and warp it the way I described it, that one point will be a local minimum.
Without xpaint here, hmm. Okay, imagine a regular Cartesian grid, 1-spaced. You are iterating to find the point closest to (2.5,2.5).
The node (1,1) has been moved to (1.9,1.9). Its neighbors are (1,0), (0,1), (2,1) and (1,2). If your iterative search goes through (1,0) or (0,1), the next step will be to (1.9,1.9). And that's a local minimum.
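Here's that scenario as a quick Python sketch (the 4x4 grid size and the simple flat-plane distance are my own choices, just to make the trap visible): hill climbing from (1,0) walks straight into the displaced node and stops, while brute force finds the genuinely closest node.

```python
import math

# Regular 4x4 grid, except node (1,1) has been tugged to (1.9, 1.9).
pos = {(i, j): (float(i), float(j)) for i in range(4) for j in range(4)}
pos[(1, 1)] = (1.9, 1.9)

target = (2.5, 2.5)

def dist(node):
    x, y = pos[node]
    return math.hypot(x - target[0], y - target[1])

def hill_climb(node):
    """Repeatedly step to the closest 4-neighbor; stop when none improves."""
    while True:
        i, j = node
        neighbors = [(i + di, j + dj)
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (i + di, j + dj) in pos]
        best = min(neighbors, key=dist)
        if dist(best) >= dist(node):
            return node          # no neighbor is closer: stuck here
        node = best

stuck = hill_climb((1, 0))       # ends at the displaced node (1,1), ~0.849 away
true_best = min(pos, key=dist)   # brute force: (2,2), ~0.707 away
```

So the search halts at distance ~0.849 even though a node at ~0.707 exists: a local minimum exactly as described.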
Oh, I know it's possible to have local minima. I'm checking diagonals also, so it'd have to be a little more involved than your example, but it's absolutely possible, yes. What I'm wondering is if there are any simple tests that can tell me that I do or don't have them. (Either would be useful.)
One example: if you have a corner of the grid curving around so that it nearly touches an edge, depending on where it starts, the algorithm could get hung up on the nearest edge point instead of following the grid around to the corner. That's a pretty uncommon setup, but if it's cheap to detect, it would be nice to have an automated check for it.
You mean tests that use whatever a priori knowledge you have about your mesh's composition, right?
Not necessarily. There may not be any a priori knowledge.
It would be useful to be able to say "If you know that your grid has / doesn't have the following characteristics, this function is guaranteed to work / not work correctly", but not as useful as an automated checker that would test for them.
Given Rawhide's comment, probably the best I can do is throw a warning when failure is likely, rather than be certain of success/failure, but that's still worth doing, if it's an option.
I'm a little confused. Where is this grid coming from? Is it some mathematical function or real data? Is it feasible to check every point for some local condition? If not, it sounds like you're asking for some kind of sublinear algorithm (that is, an algorithm that uses less time than it takes to look at its input). These are almost always approximations.
The grid is a given, and is real data.
The brute-force algorithm that I'm trying to improve over calculates a great-circle distance for every point. It's fast enough to be usable, but slow enough to be annoying. So a check on every point is feasible, but only if it's quite a bit simpler than that, or we'll lose all the speedup. The simplest great-circle distance calculation is 4 cosines, 1 arccos, 3 multiplies and 4 subtracts.
An approximate sublinear algorithm that I could use to throw a warning would still be useful. I'm writing the function with the option to delegate to the original slow-but-guaranteed-correct brute-force approach, so warning "you may want to use safe mode" is almost as good as warning "you definitely should use safe mode".
If the data is real and there is no mathematical reason your function is convex, I have a difficult time imagining a criterion that would assure you that your local optimum (found through hill climbing) is a global optimum without also evaluating the function at every point in the grid, which is what you are trying to avoid in the first place.
Is Euclidean distance a lower bound for great-circle distance? Is your data indexed in such a way that you can access a 2D grid of points without looking at every point? It seems to me that instead of hill climbing, you need to take advantage of the structure of your data to eliminate points as possible candidates through bounds. It seems you should be able to find an exact sublinear algorithm or, in the worst case, compute a cheap bound for each point which tells you whether you need to do the more expensive great-circle calculation.
Now it's my turn to be confused. What do you mean by "accessing a 2D grid of points"?
By Euclidean distance, do you mean treating lat & lon as x/y coordinates in the Cartesian plane? That's definitely not a lower bound. If the points are near a pole or a cyclic point, the Euclidean distance will be much greater than the great-circle distance.
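A quick way to see the wrap-around failure, as a Python sketch (the specific coordinates are made up): two points 1 degree apart across the 180° meridian look 359 "degrees" apart to the flat-plane formula, so the flat-plane number can wildly exceed the true separation and can't serve as a lower bound.

```python
import math

def sep_deg(lat1, lon1, lat2, lon2):
    """Great-circle angular separation in degrees (spherical law of cosines)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2)
         + math.cos(p1) * math.cos(p2) * math.cos(dl))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Two equatorial points straddling the 180th meridian:
naive = math.hypot(0.0 - 0.0, 179.5 - (-179.5))   # 359.0 "degrees"
true = sep_deg(0.0, 179.5, 0.0, -179.5)           # ~1.0 degree
```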
This is fun; it's like a job interview question. I was wondering if it would be quicker to compute the straight-line distance (through the Earth) than computing the great-circle route. However, it seems like it would be just as complicated, as you'd have to convert geographic coordinates into Cartesian coordinates.
However, it should be pretty simple to compute bounds, no? If your target T has coordinates (X,Y) and the closest point you've found so far is distance D, then if you go distance D due north or south from T you find bounds on the latitude of the closest point: anything above or below those bounds is farther than D from your target. You could do the same thing with longitude. So you can pass through your points and instead of computing a great circle distance you first compare with your bounds, which is a simple comparison. If a point is within your bounds, you do the long calculation and, if it is closer, update your bounds.
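A sketch of that pruning pass in Python (measuring everything as angular degrees, which is my simplification; the latitude bound works because great-circle separation is never less than the latitude difference):

```python
import math

def sep_deg(lat1, lon1, lat2, lon2):
    """Great-circle angular separation in degrees (spherical law of cosines)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2)
         + math.cos(p1) * math.cos(p2) * math.cos(dl))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def nearest(points, tlat, tlon):
    """Brute force with a cheap latitude-band prune.

    |lat - tlat| > best-so-far rules a point out with one subtract
    and one compare instead of the full trig; only survivors pay
    for the real great-circle calculation.
    """
    best, best_pt = float("inf"), None
    for lat, lon in points:
        if abs(lat - tlat) > best:     # cheap bound check
            continue
        d = sep_deg(lat, lon, tlat, tlon)
        if d < best:                   # closer point found: tighten the bound
            best, best_pt = d, (lat, lon)
    return best_pt, best
```

On a made-up list like `[(10, 10), (50, 60), (2, 3)]` with target (0, 0), the point at latitude 50 is rejected by the cheap check alone and never reaches the trig.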
As for "accessing a 2D grid of points" the question is simply: if you get new bounds, can you limit the remaining data to just that which is within your bounds? This would be the big win, because you wouldn't have to even do a comparison or any computation for points outside the bounds. If your data was handed to you as a random array of 2D points, you wouldn't be able to do this. But if you could index into them because they were given to you sorted you could speed up the process.
Also, any indexing of your data would take at least linear time. Sorting would be O(n lg n), for example. I think if your data was already sorted by latitude and longitude (in two arrays) you could do this in O(lg n) time, but recurrences were never my strong suit. Certainly it's O(lg n) in the case of a one-dimensional array of points.
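If the latitudes really were handed over pre-sorted, finding the candidate band is two binary searches, e.g. with Python's `bisect` (the array and bound here are invented for illustration):

```python
import bisect

def candidate_window(lats, tlat, bound):
    """Indices [lo, hi) of a latitude-sorted array within `bound`
    degrees of the target latitude; two O(lg n) binary searches."""
    lo = bisect.bisect_left(lats, tlat - bound)
    hi = bisect.bisect_right(lats, tlat + bound)
    return lo, hi

lats = [-60.0, -10.0, 0.0, 5.0, 12.0, 45.0, 80.0]
lo, hi = candidate_window(lats, 4.0, 10.0)   # window covers -6.0 .. 14.0
```

Here only `lats[2:5]` (latitudes 0, 5, 12) survive; everything outside the window is skipped without any per-point work.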
You are smarter than me. Yet you do not intimidate me. This is new. HUGS.
Yay! That makes me happy to hear. :)