# Strong Branching

## What is Strong Branching?

The idea here is to select a number of candidate variables, tentatively branch on each of them, and determine which one improves the gap between the Least Lower Bound (LLB) and the Best Upper Bound (BUB) the most. You can see that if the number of candidate variables is large, this can be very time-consuming.

In particular, it requires solving a large number of LP or MILP relaxations (two per variable that is tested, one for each child node).

## Putting this into Practice

Assuming that we are testing variable \({x\in[x_{lb},x_{ub}]}\) (Parent Node) and we branch at its midpoint, we create two new problems where the variable ranges between \({x\in[x_{lb},\frac{x_{lb}+x_{ub}}{2}]}\) and \({x\in[\frac{x_{lb}+x_{ub}}{2},x_{ub}]}\).
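The midpoint split can be sketched as follows; a minimal illustration assuming variable bounds are represented as plain `(lb, ub)` tuples (the function name is illustrative, not from the text):

```python
def split_at_midpoint(lb, ub):
    """Split a variable's domain [lb, ub] into two child domains at the midpoint."""
    mid = (lb + ub) / 2.0
    left = (lb, mid)    # child node "Left":  x in [lb, mid]
    right = (mid, ub)   # child node "Right": x in [mid, ub]
    return left, right
```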

We will call the two new Child Nodes “Left” and “Right” respectively. The solution of the relaxation in the Left Child Node yields \({x^*_{L}}\) and the one of the Right Child Node \({x^*_{R}}\). The corresponding objective function values of the relaxation are \({O(x^*_{L})}\) and \({O(x^*_{R})}\). We denote \({O(x^*)}\) as the value of the objective function of the relaxation at the Parent Node.

If both child relaxations are infeasible, both Child Nodes can be safely fathomed.

If only one of the two child relaxations is infeasible, the corresponding Child Node can be fathomed.

If both child relaxations are feasible, the objective values \({O(x^*_{L})}\) and \({O(x^*_{R})}\) are used to characterise the effect of branching on variable \(x\) on the relaxation.

The effect is characterised by the calculation of a score per variable that undergoes strong branching.

The score is calculated by:

\({s=w\,\max\left(O(x^*_{L})-O(x^*),\,O(x^*_{R})-O(x^*)\right)+(1-w)\,\min\left(O(x^*_{L})-O(x^*),\,O(x^*_{R})-O(x^*)\right)}\), where \(\max()\) and \(\min()\) return the maximum and minimum of two values, and \(w\) is a weight equal to 0.15.
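The score above translates directly into code; a short sketch, with `obj_parent`, `obj_left`, and `obj_right` standing for \(O(x^*)\), \(O(x^*_{L})\), and \(O(x^*_{R})\) (the names are illustrative):

```python
def strong_branching_score(obj_parent, obj_left, obj_right, w=0.15):
    """Weighted combination of the objective-value changes in the two child nodes."""
    delta_left = obj_left - obj_parent
    delta_right = obj_right - obj_parent
    return w * max(delta_left, delta_right) + (1 - w) * min(delta_left, delta_right)
```

Note that with \(w = 0.15\) the smaller of the two improvements dominates the score, so a variable only scores well if branching helps on *both* children.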

By repeating the above process for a number of candidate variables up to a user-specified limit, we obtain a score for each variable. We then choose the variable with the greatest score to branch on.
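Putting the pieces together, the whole selection loop can be sketched as below. This is a hedged sketch, not a definitive implementation: `solve_relaxation` is a hypothetical callback that solves the child relaxation and returns its objective value, or `None` if the child is infeasible, and scoring a variable with exactly one feasible child by that child's change alone is an assumption (the text does not define the score for that case).

```python
def select_branching_variable(candidates, bounds, obj_parent, solve_relaxation, w=0.15):
    """Score each candidate variable by tentative midpoint branching; return the best.

    candidates       - iterable of variable names to test (up to the user limit)
    bounds           - dict mapping variable name -> (lb, ub)
    obj_parent       - objective value of the parent node's relaxation
    solve_relaxation - callable(var, lb, ub) -> objective value, or None if infeasible
    """
    best_var, best_score = None, float("-inf")
    for var in candidates:
        lb, ub = bounds[var]
        mid = (lb + ub) / 2.0
        obj_left = solve_relaxation(var, lb, mid)
        obj_right = solve_relaxation(var, mid, ub)
        if obj_left is None and obj_right is None:
            continue  # both children infeasible: nothing to score for this variable
        if obj_left is None or obj_right is None:
            # one child fathomed; assumption: score by the feasible child's change
            feasible = obj_left if obj_left is not None else obj_right
            score = feasible - obj_parent
        else:
            d_l, d_r = obj_left - obj_parent, obj_right - obj_parent
            score = w * max(d_l, d_r) + (1 - w) * min(d_l, d_r)
        if score > best_score:
            best_var, best_score = var, score
    return best_var, best_score
```

In practice `solve_relaxation` would be a call into an LP/MILP solver with the tightened variable bound, which is exactly why strong branching is expensive: the loop pays for two relaxation solves per candidate.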