Out-of-the-box doesn't work. Now what?





A.K.A. your shiny solver sucks.

Well. If you're reading this, you know that MINLP problems are NP-hard. NP-hardness doesn't mean we can't solve a lot of very hard problems in practice, but it does mean that we'll never have a single algorithm that automatically solves every MINLP we can come up with. A general-purpose solver like Octeract Engine is tuned to produce solid performance across a wide spectrum of problems and use cases. While this works most of the time, a solver's default tuning will sometimes be subpar for a particular problem, or simply not work at all.

If you are in that unlucky 0.1% of users, you would normally need to know the solver inside and out: exactly how its algorithms work, which of them is likely to have been triggered in your case, and which options you can play around with. But who has a few months to spare to do this right?

As of Octeract Engine v3.4, the solver comes with a "Suggestions" system. The solver collects internal telemetry, identifies issues during the solving process, and reports what the user could change on subsequent runs to improve things. Effectively, this saves people the trouble of reading tons of documentation and tweaking settings that end up making things worse, or that aren't relevant at all. In a way, the solver is intelligent - it deduces which default settings could have had the greatest impact and informs the user. It will also print a helpful link to this website, so that you can read the related documentation and figure things out.





Ummm, what?

Hold your horses, it's simple. Here's an example of how this works.


Say you try something seemingly trivial, e.g., the following:


[Image: MINLP problem]

The engine usually knows when the numerics are off, but so do many solvers. What other solvers don't do is propose a way to make your problem more numerically stable.

In this case, the Engine has identified a small crossover between the LLB and BUB (the lowest lower bound and the best upper bound). This can sometimes happen if, e.g., we expand a high-degree polynomial, where the expansion can cause chaotic accumulation of numerical error. Why don't we simply not expand by default then? Well, that's because it's often the case that one of the two forms of the problem, EXPANDED and UNEXPANDED, has much better bounding and numerical properties. The engine uses a heuristic to decide which version of the problem to keep, but it can also deduce that certain numerical errors may be due to picking one of the two formulations. Now, we know for certain that not in a million years would anyone be able to make that deduction on their own, so the solver will simply tell you directly - it recommends adjusting USE_AUTOMATIC_EXPANSION and FORCE_EXPANSION to avoid the crossover.
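
To make that Suggestion concrete, here is a minimal sketch. It assumes Pyomo as the modelling layer and 'octeract-engine' as the name the solver is registered under with Pyomo; the model itself is a made-up stand-in (the real one is in the screenshot above) with a high-degree polynomial in the objective, exactly the kind of structure where expansion can accumulate error. The option names come from the Suggestion, but the values shown are assumptions - check the option reference for the exact syntax on your version.

```python
# A made-up stand-in for the example above (assumptions: Pyomo as the
# modelling layer, 'octeract-engine' as the solver name registered with
# Pyomo). A tiny MINLP whose objective contains a high-degree polynomial,
# the kind of structure where expansion can accumulate numerical error.
from pyomo.environ import (ConcreteModel, Constraint, Integers, Objective,
                           Reals, SolverFactory, Var, minimize)

m = ConcreteModel()
m.x = Var(within=Reals, bounds=(-2, 2))
m.y = Var(within=Integers, bounds=(0, 10))

# (x + y)^8 expands into many monomials with large coefficients;
# the factored form is often kinder numerically.
m.obj = Objective(expr=(m.x + m.y) ** 8 - 3 * m.x * m.y, sense=minimize)
m.c1 = Constraint(expr=m.x ** 2 + m.y >= 1)

solver = SolverFactory('octeract-engine')
result = solver.solve(
    m,
    options={
        # Option names from the Suggestion; the values are assumptions -
        # check the option reference for the exact accepted syntax.
        'USE_AUTOMATIC_EXPANSION': False,  # decide manually instead of letting the heuristic choose
        'FORCE_EXPANSION': False,          # keep the unexpanded (factored) form
    },
    tee=True,  # echo the solver log, including any new Suggestions
)
```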



You can tweak pretty much everything important
Beyond improving numerics, the engine will guide you to tweak internal timeouts and tolerances, and even to turn entire algorithms and problem-wide reformulations on or off. All Suggestions link back to this knowledge base, so you'll be able to build up your understanding over time. And don't forget: unlike other solvers, the developers of this one are not paranoid about people knowing what the solver is doing. We win simply because we are better at solver design than other people, not because we obfuscate information. If the solver did something, it will tell you, so you'll know what to do next.




OK, so what does this mean for me?

It means that if your problem doesn't solve correctly and ultra-fast, you're not an inclined plane wrapped helically around an axis. Now, specifically in the HPC case, this also means that you can tweak settings on your desktop until you get good bounding behaviour, and then you can use those settings to run with 1,000 cores. In fact, we absolutely recommend that, since waiting in the queue for a week only to get a bad run can be pretty annoying. Don't be a n00b.
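
To make that workflow concrete, here is a rough continuation of the earlier sketch. The point is simply that the tuned options travel unchanged from the desktop run to the cluster run; 'NUM_CORES' is a placeholder name for whatever parallelism option or launcher your installation documents, and m is the model from the previous sketch.

```python
# Rough continuation of the sketch above: 'm' is the same Pyomo model, and
# 'NUM_CORES' is a placeholder name - use whatever parallelism option or
# launcher your installation actually documents.
from pyomo.environ import SolverFactory

# The options you settled on while tuning locally, reused verbatim.
tuned_options = {
    'USE_AUTOMATIC_EXPANSION': False,
    'FORCE_EXPANSION': False,
    # ...plus any other Suggestions you accepted along the way
}

solver = SolverFactory('octeract-engine')  # solver name as registered with Pyomo (assumption)

# Tuning run on the desktop:
solver.solve(m, options={**tuned_options, 'NUM_CORES': 4}, tee=True)

# HPC run with the exact same, already-validated settings:
solver.solve(m, options={**tuned_options, 'NUM_CORES': 1000}, tee=True)
```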
