5.2 Advanced Features

In the previous section we described the first steps for the implementation of a linear-programming based branch-and-bound algorithm with ABACUS. Now, we present several advanced features of ABACUS. We show how various built-in strategies can be used instead of the default strategies and how new problem specific concepts can be integrated.

5.2.1 Using other Pools

By default, ABACUS provides one variable pool, one constraint pool for the constraints of the problem formulation, and another constraint pool for cutting planes. For certain applications the implementation of a different pool concept can be advantageous. Suppose that for our application we would like to provide two different pools for cutting planes instead of the default cutting plane pool. These pools have to be declared in the class MYMASTER, and we also provide two public member functions returning pointers to these pools.

  class MYMASTER : public ABA_MASTER {  
    public:  
      ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE> *myCutPool1()  
      {  
          return myCutPool1_;  
      }  
      ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE> *myCutPool2()  
      {  
        return myCutPool2_;  
      }  
    private:  
      ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE> *myCutPool1_;  
      ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE> *myCutPool2_;  
  };

Now, instead of the default cutting plane pool, we set up our two problem specific cut pools in the function initializeOptimization(). This is done by passing 0 as the last argument of the function initializePools(), which sets the size of the default cut pool to 0. The size of the variable pool is chosen arbitrarily. Then we allocate the memory for our pools. For simplicity, we assume that the size of each cut pool is 1000.

  void MYMASTER::initializeOptimization()  
  {  
    // initialize the constraints and variables  
    initializePools(constraints, variables, 3*variables.number(), 0);  
 
    myCutPool1_ = new ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE>(this, 1000);  
    myCutPool2_ = new ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE>(this, 1000);  
  }

The following redefinition of the function separate() shows how constraints can be separated from and added to our pools instead of the default cut pool. If a pointer to a pool is specified as an argument of the function constraintPoolSeparation(), then constraints are regenerated from this pool instead of the default cut pool. By specifying a constraint pool as the second argument of the function addCons() the constraints are added to this pool instead of the default cut pool. As the member master_ of the base class ABA_SUB is a pointer to an object of the class ABA_MASTER we require an explicit cast to call the member functions myCutPool1() and myCutPool2() of the class MYMASTER.

  int MYSUB::separate()  
  {  
    ABA_BUFFER<ABA_CONSTRAINT*> newCuts(master_, 100);  
    int nCuts = constraintPoolSeparation(0, ((MYMASTER*) master_)->myCutPool1());  
    if (!nCuts) {  
      nCuts = mySeparate1(newCuts);  
      if (nCuts) nCuts = addCons(newCuts, ((MYMASTER*) master_)->myCutPool1());  
    }  
    if (!nCuts) {  
      nCuts = constraintPoolSeparation(0, ((MYMASTER*) master_)->myCutPool2());  
      if (!nCuts) {  
        nCuts = mySeparate2(newCuts);  
        if (nCuts) nCuts = addCons(newCuts, ((MYMASTER*) master_)->myCutPool2());  
      }  
    }  
    return nCuts;  
  }

Using application specific variable pools can be done in an analogous way with the two functions variablePoolSeparation() and addVars().

5.2.2 Pool without Multiple Storage of Items

The pools for constraints and variables are among the data structures that consume the largest parts of the memory. Limiting the size of a pool has two side effects. First, pool separation or pricing is less powerful with a small pool. Second, the branch-and-bound tree might be processed more slowly, since subproblems cannot be initialized with the constraint and variable system of the father node.

On the other hand, it can be observed that the same constraint or variable is generated several times in the course of the optimization. This could be avoided by completely scanning the pool before separating or pricing from scratch. But if direct separation or pricing is fast, such a strategy can be less advantageous.

Therefore ABACUS provides the template class ABA_NONDUPLPOOL that avoids storing the same constraint or variable more than once in a pool. More precisely, when an item is inserted in such a pool, the inserted item is compared with the already available items. If it is already present in the pool, the inserted item is deleted and replaced by the already available item.

In order to use this pool, you have to set up your own pool as explained in Section 5.2.1, but with an ABA_NONDUPLPOOL instead of an ABA_STANDARDPOOL. For constraints or variables that are inserted into a pool of the template class ABA_NONDUPLPOOL, the virtual functions hashKey(), name(), and equal() of the base class ABA_CONVAR have to be redefined. These functions are used to compare a new item with the items already stored in the pool. For the details of these functions we refer to the reference manual.
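
The following self-contained sketch illustrates the kind of logic these three functions typically implement for a subtour elimination constraint. The class MySubtour and its members are hypothetical illustrations, not the ABACUS interface; in a real application the functions would redefine the corresponding virtual functions of ABA_CONVAR.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for a subtour elimination constraint. In a real
// application, hashKey(), name(), and equal() would redefine the
// corresponding virtual functions of ABA_CONVAR.
class MySubtour {
  public:
    explicit MySubtour(std::vector<int> nodes) : nodes_(nodes) {
        // keep the node set in canonical order so that equal node sets
        // compare equal independently of the insertion order
        std::sort(nodes_.begin(), nodes_.end());
    }

    // hashKey(): a cheap integer key; items with different keys are
    // guaranteed to be different.
    unsigned hashKey() const {
        unsigned key = 0;
        for (std::size_t i = 0; i < nodes_.size(); ++i)
            key = 31u * key + static_cast<unsigned>(nodes_[i]);
        return key;
    }

    // name(): identifies the constraint type; only items of the same
    // type need to be compared exactly.
    std::string name() const { return "MySubtour"; }

    // equal(): exact comparison, used to resolve hash collisions.
    bool equal(const MySubtour &other) const { return nodes_ == other.nodes_; }

  private:
    std::vector<int> nodes_;  // nodes defining the subtour elimination constraint
};
```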

5.2.3 Constraints and Variables

We already discussed the concept of expanding and compressing constraints and variables in Section 4.2.4. This feature can be activated for a specific constraint or variable class if the virtual dummy functions expand() and compress() are redefined. Here we give an example for constraints; it can be applied to variables analogously. The expanded and compressed formats of the subtour elimination constraints were already discussed in Section 4.2. The nodes defining the subtour elimination constraint are contained in the buffer nodes_. When the constraint is expanded, each node of the subtour elimination constraint is marked.

  void SUBTOUR::expand()  
  {  
    if(expanded()) return;  
    marked_ = new bool[graph_->nNodes() + 1];  
    int nGraph = graph_->nNodes();  
    for (int v = 1; v <= nGraph; v++)  
      marked_[v] = false;  
    int nNodes = nodes_.size();  
    for (int v = 0; v < nNodes; v++)  
      marked_[nodes_[v]] = true;  
  }

For the compression of the constraint only the allocated memory is deleted.

  void SUBTOUR::compress()  
  {  
    if (!expanded()) return;  
    delete [] marked_;  
  }

5.2.3.1 Constraints

Often, the definition of constraint specific expanded and compressed formats already provides sufficiently efficient running times for the generation of the row format, the computation of the slack of a given LP-solution, or the check whether the constraint is violated.

If nevertheless further tuning is required, then the functions genRow() and slack() can be redefined. The function

  virtual int genRow(ABA_ACTIVE<ABA_VARIABLE, ABA_CONSTRAINT> *variables,  
                     ABA_ROW &row);

stores the row format associated with the variable set variables in row and returns the number of nonzero coefficients stored in row.

The function

  virtual double slack(ABA_ACTIVE<ABA_VARIABLE, ABA_CONSTRAINT> *variables,  
                       double *x);

returns the slack of the vector x associated with the variable set variables. If the function violated() turns out to be a performance bottleneck, the function slack() should be redefined rather than violated() itself, because violated() calls slack() and slack() accounts for most of the joint running time.
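
As an illustration of such a computation, the following self-contained sketch determines the slack of a subtour elimination constraint x(E(W)) <= |W| - 1 directly from the node set W, without first generating the row format. The data layout (plain edge list, boolean membership array) is a hypothetical simplification of the expanded format discussed above.

```cpp
#include <cstddef>
#include <vector>

struct Edge { int tail, head; };

// Slack of the subtour elimination constraint x(E(W)) <= |W| - 1 for a
// given LP-solution: the right hand side |W| - 1 minus the sum of the
// LP-values of all edges with both endpoints in W. A negative slack
// means the constraint is violated.
double subtourSlack(const std::vector<Edge> &edges,
                    const std::vector<bool> &inW,  // inW[v] is true iff node v is in W
                    const std::vector<double> &x,  // LP-value of each edge
                    int sizeW)                     // |W|
{
    double lhs = 0.0;
    for (std::size_t e = 0; e < edges.size(); ++e)
        if (inW[edges[e].tail] && inW[edges[e].head])
            lhs += x[e];
    return (sizeW - 1) - lhs;
}
```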

5.2.3.2 Variables

The equivalents in the class ABA_VARIABLE of the functions genRow() and slack() of the class ABA_CONSTRAINT are the functions genColumn() and redCost(). A redefinition of these two functions for performance reasons can likewise be considered if the expansion/compression concept is not sufficient or cannot be applied.

The function

  virtual int genColumn(ABA_ACTIVE<ABA_CONSTRAINT, ABA_VARIABLE> *constraints,  
                        ABA_COLUMN &col);

stores the column format of the variable associated with the constraint set constraints in the argument col and returns the number of nonzero coefficients stored in col.

The function

  virtual double redCost(ABA_ACTIVE<ABA_CONSTRAINT, ABA_VARIABLE> *constraints,  
                         double *y);

returns the reduced cost of the variable corresponding to the dual variables y of the active constraints constraints. Just as a redefinition of the virtual member function slack() of the class ABA_CONSTRAINT can speed up the function violated() of the class ABA_CONSTRAINT, a redefinition of the function redCost() can speed up the function violated() of the class ABA_VARIABLE.
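
The quantity computed by redCost() is the objective function coefficient of the variable minus the inner product of the dual values with the variable's column. The following self-contained sketch shows this computation for a hypothetical sparse column layout; it is an illustration, not the ABACUS implementation.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sparse column of a variable: the objective function
// coefficient together with the nonzero coefficients in the active
// constraints and their indices.
struct Column {
    double obj;                 // objective function coefficient c_j
    std::vector<int> support;   // indices of constraints with nonzero coefficient
    std::vector<double> coeff;  // the corresponding nonzero coefficients a_ij
};

// Reduced cost c_j - y^T a_j, where y holds the dual values of the
// active constraints.
double reducedCost(const Column &col, const std::vector<double> &y)
{
    double yA = 0.0;
    for (std::size_t i = 0; i < col.support.size(); ++i)
        yA += y[col.support[i]] * col.coeff[i];
    return col.obj - yA;
}
```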

5.2.4 Infeasible Linear Programs

As long as we do not generate variables dynamically, a subproblem can be immediately fathomed if the LP-relaxation is infeasible. However, if not all variables are active we have to check if the addition of an inactive variable can restore the feasibility. An infeasibility can either be detected when the linear program is set up, or later by the LP-solver (see [Thi95]).

If fixed and set variables are eliminated, it might happen during the generation of the row format of a constraint in the initialization of the linear program that the constraint has an empty left hand side but can never be satisfied due to its right hand side. In this case, the function

  virtual int initMakeFeas(ABA_BUFFER<ABA_INFEASCON*> &infeasCon,  
                           ABA_BUFFER<ABA_VARIABLE*> &newVars,  
                           ABA_POOL<ABA_VARIABLE, ABA_CONSTRAINT> **pool);

is called. The default implementation always returns 1 to indicate that no variables could be added to restore feasibility. If in our application the addition of variables could possibly restore the feasibility, then this function has to be redefined in a derived class.

The buffer infeasCon stores pointers to objects describing the infeasible constraints and the kind of infeasibility. The new variables should be added to the buffer newVars, and if the variables should be added to another pool than the default variable pool, then a pointer to this pool should be assigned to *pool. If variables have been added that could restore the feasibility of all infeasible constraints, then the function should return 0, otherwise it should return 1.
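
The test at the heart of such a redefinition can be illustrated in isolation: a candidate variable can only help if its coefficient in the infeasible constraint moves the left hand side in the required direction. The following self-contained sketch is a hypothetical simplification; ABACUS represents this information in the class ABA_INFEASCON.

```cpp
// Kind of infeasibility of a constraint, in the spirit of the ABACUS
// class ABA_INFEASCON: the left hand side is either too small or too
// large to satisfy the constraint.
enum Infeasibility { TooSmall, TooLarge };

// A candidate variable with coefficient a in the infeasible constraint
// can only help to restore feasibility if increasing the variable moves
// the left hand side in the required direction.
bool canRestoreFeasibility(Infeasibility kind, double a)
{
    if (kind == TooSmall) return a > 0.0;  // the lhs must be increased
    return a < 0.0;                        // TooLarge: the lhs must be decreased
}
```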

If an infeasible linear program is detected by the LP-solver, then the function

  virtual int makeFeasible();

is called. The default implementation of this virtual dummy function does nothing except returning 1 in order to indicate that the feasibility cannot be restored. Otherwise, an iteration of the dual simplex method has to be emulated according to the algorithm outlined in [Thi95]. When the function is called, it is guaranteed that the current basis is dual feasible. Exactly one of the member variables infeasVar_ and infeasCon_ of the class ABA_SUB is nonnegative. If infeasVar_ is nonnegative, it holds the number of an infeasible variable; if infeasCon_ is nonnegative, it holds the number of an infeasible slack variable. The array bInvRow_ stores the row of the basis inverse corresponding to the infeasible variable (only basic variables can be infeasible). Then the inactive variables have to be scanned as in the function pricing(). Variables that might restore the feasibility have to be added by the function addVars(). If no such candidate is found, the subproblem can be fathomed.

5.2.5 Other Enumeration Strategies

With the parameter EnumerationStrategy in the file .abacus the enumeration strategies best-first search, breadth-first search, depth-first search, and a diving strategy can be controlled (see Section 5.2.26). Another problem specific enumeration strategy can be implemented by redefining the virtual function

  virtual int enumerationStrategy(ABA_SUB *s1, ABA_SUB *s2);

which compares the two subproblems s1 and s2 and returns 1 if the subproblem s1 should be processed before s2, returns -1 if the subproblem s2 should be processed before s1, and returns 0 if the two subproblems have the same precedence in the enumeration.

We provide again an implementation of the depth-first search strategy as an example for a reimplementation of the function enumerationStrategy().

  int MYMASTER::enumerationStrategy(ABA_SUB *s1, ABA_SUB *s2)  
  {  
    if(s1->level() > s2->level()) return  1;  
    if(s1->level() < s2->level()) return -1;  
    return 0;  
  }

In the default implementation of the depth-first search strategy we do not return 0 immediately if the two subproblems have the same level in the enumeration tree, but we call the virtual function

  int ABA_MASTER::equalSubCompare(ABA_SUB *s1, ABA_SUB *s2);

which returns 0 if neither subproblem was generated by setting a binary variable. Otherwise, the subproblem whose branching variable is set to the upper bound has higher priority, i.e., the function returns 1 if the branching variable of s1 is set to the upper bound, -1 if the branching variable of s2 is set to the upper bound, and 0 otherwise. Other strategies for resolving ties between equally good subproblems in the built-in enumeration strategies depth-first search and best-first search can be implemented by a redefinition of this virtual function. Moreover, this function can also be generalized for other enumeration strategies.
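
Stripped of the ABACUS classes, this tie-breaking logic can be sketched as follows; representing each subproblem by a single flag (whether it was generated by setting its branching variable to the upper bound) is a hypothetical simplification.

```cpp
// Tie breaker for two subproblems on the same level of the enumeration
// tree: the son in which the branching variable is set to its upper
// bound has higher priority. The flags are false for subproblems that
// were not generated by setting a binary variable. Returns 1 if the
// first subproblem should be processed first, -1 for the second one,
// and 0 if neither takes precedence.
int equalSubCompare(bool s1SetToUpperBound, bool s2SetToUpperBound)
{
    if (s1SetToUpperBound && !s2SetToUpperBound) return  1;
    if (!s1SetToUpperBound && s2SetToUpperBound) return -1;
    return 0;
}
```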

5.2.6 Selection of the Branching Variable

The default branching variable selection strategy can be changed by the redefinition of the virtual function

  int ABA_SUB::selectBranchingVariable(int &variable);

in a class derived from the class ABA_SUB. If a branching variable is found it has to be stored in the argument variable and the function should return 0. If the function fails to find a branching variable, it should return 1. Then, the subproblem is automatically fathomed.

Here we present an example where the first fractional variable is chosen as branching variable. In general, this is not a very good strategy.

  int MYSUB::selectBranchingVariable(int &variable)  
  {  
    for (int i = 0; i < nVar(); i++)  
      if (fracPart(xVal_[i]) > master_->machineEps()) {  
        variable = i;  
        return 0;  
      }  
 
    return 1;  
  }

The function fracPart(double x) returns the absolute value of the fractional part of x.
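
A self-contained illustration of such a fractional-part test follows; this is a sketch of the idea, not the actual implementation of the ABACUS function fracPart().

```cpp
#include <cmath>

// Absolute value of the fractional part of x, e.g. 0.25 for both 3.25
// and -3.25.
double fracPart(double x)
{
    return std::fabs(x - std::trunc(x));
}

// A variable value counts as fractional if its fractional part exceeds
// the machine dependent zero tolerance.
bool isFractional(double x, double machineEps)
{
    return fracPart(x) > machineEps;
}
```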

5.2.7 Using other Branching Strategies

Although branching on a variable is often an adequate strategy for branch-and-cut algorithms, it is in general useless for branch-and-price algorithms. But also for branch-and-cut algorithms, other branching strategies, e.g., branching on a constraint, can be interesting alternatives.

For the implementation of different branching strategies we have introduced the concept of branching rules in the class ABA_BRANCHRULE (see Section 4.2.7.3). The virtual function

  int ABA_SUB::generateBranchRules(ABA_BUFFER<ABA_BRANCHRULE*> &rules);

returns 0 if it can generate branching rules and stores a branching rule in the buffer rules for each subproblem that should be generated. If no branching rules can be generated, this function returns 1 and the subproblem is fathomed. The default implementation of the function generateBranchRules() generates two rules for two new subproblems by branching on a variable. These rules are represented by the classes ABA_SETBRANCHRULE for binary variables and ABA_BOUNDBRANCHRULE for integer variables, which are derived from the abstract class ABA_BRANCHRULE. Moreover, we also provide rules for branching on constraints (ABA_CONBRANCHRULE) and for branching by setting an integer variable to a fixed value (ABA_VALBRANCHRULE). Other branching rules have to be derived from the class ABA_BRANCHRULE. The default branching strategy can be replaced by the redefinition of the virtual function generateBranchRules() in a class derived from the class ABA_SUB.

5.2.7.1 Branching on a Variable

The default branching strategy of ABACUS is branching on a variable. Different branching variable selection strategies can be chosen in the parameter file (see Section 5.2.26). If a problem specific branching variable selection strategy should be implemented, it is not required to redefine the function ABA_SUB::generateBranchRules(); a redefinition of the function

  int ABA_SUB::selectBranchingVariable(int &variable)

is sufficient. If a branching variable is found it should be stored in the function argument variable and selectBranchingVariable() should return 0, otherwise it should return 1.

If no branching variable is found, the subproblem is fathomed.

5.2.7.2 Branching on a Constraint

As all constraints used in ABACUS, branching constraints also have to be inserted into a pool. The function ABA_POOL::insert() returns a pointer to the pool slot in which the constraint is stored; this pointer is required in the constructor of ABA_CONBRANCHRULE. Although the default cut pool can be used for the branching constraints, an extra pool for branching constraints is recommended: first, no redundant work is performed in the pool separation, and second, the branching constraint pool should be dynamic such that all branching constraints can be inserted. This pool for the branching constraints should be added to your derived master class. A rough estimate of the size of the branching pool is sufficient; if the pool is dynamic, it will grow automatically if required.

  class MYMASTER : public ABA_MASTER {  
    ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE> *branchingPool_;  
  };
 
  MYMASTER::MYMASTER(const char *problemName) :  
    ABA_MASTER(problemName, true, false)  
  {  
    branchingPool_ = new ABA_STANDARDPOOL<ABA_CONSTRAINT, ABA_VARIABLE>(this,  
                                                                        size,  
                                                                        true);  
  }  
 
  MYMASTER::~MYMASTER()  
  {  
    delete branchingPool_;  
  }

The constraint branching rules have to be generated in the function MYSUB::generateBranchRules(). It might be necessary to introduce a new class derived from the class ABA_CONSTRAINT for the representation of your branching constraint. For simplicity we assume here that your branching constraint is also of type MYCONSTRAINT. Each constraint is added to the branching pool.

If the generation of branching constraints fails, you might resort to the standard branching on variables.

  int MYSUB::generateBranchRules(ABA_BUFFER<ABA_BRANCHRULE*> &rules)  
  {  
    if (/* branching constraints can be found */) {  
      ABA_POOLSLOT<ABA_CONSTRAINT, ABA_VARIABLE> *poolSlot;  
 
      /* generate the branching rule for the first new subproblem */  
 
      MYCONSTRAINT *constraint1 = new MYCONSTRAINT(...);  
      poolSlot = ((MYMASTER *) master_)->branchingPool_->insert(constraint1);  
      rules.push(new ABA_CONBRANCHRULE(master_, poolSlot));  
 
      /* generate the branching rule for the second new subproblem */  
      MYCONSTRAINT *constraint2 = new MYCONSTRAINT(...);  
      poolSlot = ((MYMASTER *) master_)->branchingPool_->insert(constraint2);  
      rules.push(new ABA_CONBRANCHRULE(master_, poolSlot));  
 
      return 0;  
    }  
    else  
      return ABA_SUB::generateBranchRules(rules);  // resort to standard branching  
  }

Moreover, a branching constraint should be locally valid and not dynamic. This has to be specified when calling the constructor of the base class ABA_CONSTRAINT. Of course, the subproblem defined by the branching constraint is not available at the time the branching constraint is generated. However, any locally valid constraint requires an associated subproblem in its constructor. Therefore, the (incorrect) subproblem in which the branching constraint is generated should be used; ABACUS will adjust the associated subproblem later, in the constructor of the subproblem generated from the constraint branching rule.

When the subproblem generated by the branching constraint is activated at the beginning of its optimization, the branching constraint is not immediately added to the linear program and the active constraints; instead, it is inserted into the buffer for added constraints, similarly to the way cutting planes are added (see Section 5.2.16).

5.2.7.3 Problem Specific Branching Rules

A problem specific branching rule is introduced by deriving a new class MYBRANCHRULE from the base class ABA_BRANCHRULE. As an example we show how a branching rule for setting a variable to its lower or upper bound is implemented. This example differs slightly from the ABACUS class ABA_SETBRANCHRULE.

  class MYBRANCHRULE : public ABA_BRANCHRULE {  
    public:  
      MYBRANCHRULE(ABA_MASTER *master, int variable, ABA_FSVARSTAT::STATUS status);  
      virtual ~MYBRANCHRULE();  
      virtual int extract(ABA_SUB *sub);  
 
    private:  
      int               variable_;  // the branching variable  
      ABA_FSVARSTAT::STATUS status_;    // the status of the branching variable  
  };

The constructor initializes the branching variable and its status (ABA_FSVARSTAT::SetToLowerBound or ABA_FSVARSTAT::SetToUpperBound).

  MYBRANCHRULE::MYBRANCHRULE(ABA_MASTER *master,  
                             int variable,  
                             ABA_FSVARSTAT::STATUS status) :  
    ABA_BRANCHRULE(master),  
    variable_(variable),  
    status_(status)  
  { }  
 
  MYBRANCHRULE::~MYBRANCHRULE()  
  { }

The pure virtual function extract() of the base class ABA_BRANCHRULE has to be defined in every new branching rule. This function is called when the subproblem is activated at the beginning of its optimization. During the activation of the subproblem a copy of the final constraint and variable system of the father subproblem is made. The function extract() should modify this system according to the branching rule.

In our example we first check if setting the branching variable causes a contradiction. In this case we return 1 in order to indicate that the subproblem can be fathomed immediately. Otherwise we set the branching variable and return 0.

  int MYBRANCHRULE::extract(ABA_SUB *sub)  
  {  
    if (sub->fsVarStat(variable_)->contradiction(status_))  
      return 1;  
 
    sub->fsVarStat(variable_)->status(status_);  
    return 0;  
  }

As a second example for the design of a branching rule we show how the constraint branching rule of ABACUS is implemented. After the branching constraint has been inserted into a pool slot, the constraint branching rule can be constructed with this pool slot.

  class ABA_CONBRANCHRULE : public ABA_BRANCHRULE {  
    public:  
      ABA_CONBRANCHRULE(ABA_MASTER *master,  
                        ABA_POOLSLOT<ABA_CONSTRAINT, ABA_VARIABLE> *poolSlot);  
      virtual ~ABA_CONBRANCHRULE();  
      virtual int extract(ABA_SUB *sub);  
 
    private:  
      ABA_POOLSLOTREF<ABA_CONSTRAINT, ABA_VARIABLE> poolSlotRef_;  
  };  
 
 
  ABA_CONBRANCHRULE::ABA_CONBRANCHRULE(ABA_MASTER *master,  
                             ABA_POOLSLOT<ABA_CONSTRAINT, ABA_VARIABLE> *poolSlot) :  
    ABA_BRANCHRULE(master),  
    poolSlotRef_(poolSlot)  
  { }  
 
  ABA_CONBRANCHRULE::~ABA_CONBRANCHRULE()  
  { }

In the function extract() the branching constraint is added to the subproblem. This should always be done with the function ABA_SUB::addBranchingConstraint(). Since adding a branching constraint cannot cause a contradiction, we always return 0.

  int ABA_CONBRANCHRULE::extract(ABA_SUB *sub)  
  {  
    if (sub->addBranchingConstraint(poolSlotRef_.slot())) {  
      master_->err() << "ABA_CONBRANCHRULE::extract(): addition of branching ";  
      master_->err() << "constraint to subproblem failed." << endl;  
      exit(Fatal);  
    }  
 
    return 0;  
  }

5.2.8 Strong Branching

In order to reduce the size of the enumeration tree, it is important to select “good” branching rules. We present a framework for measuring the quality of the branching rules. First, we describe the basic idea and explain the details later.

A branching step is performed by generating a set of branching rules, each of which defines a son of the current subproblem. We call such a set of branching rules a sample. For instance, if we branch on a single binary variable, the corresponding sample consists of two branching rules: one defines the subproblem in which the branching variable is set to its upper bound, the other the subproblem in which it is set to its lower bound. Instead of generating a single branching sample, it is now possible to generate a set of branching samples and to select from this set the “best” sample for generating the sons of the subproblem. In this evaluation process a rank is computed for each branching rule of each branching sample. In the default implementation this rank is obtained by performing a limited number of iterations of the dual simplex method on the first linear program of the subproblem defined by the branching rule. For maximization problems we select the sample for which the maximal rank of its rules is minimal; for minimization problems we select the sample for which the minimal rank of its rules is maximal.

Both the computation of the ranks and the comparison of the rules can be adapted to problem specific criteria.

5.2.8.1 Default Strong Branching

Strong branching can be turned on for the built-in branching strategies that are controlled by the parameter BranchingStrategy of the configuration file. With the parameter NBranchingVariableCandidates the number of branching variables to be tested can be specified (see also Section 5.2.26).

5.2.8.2 Strong Branching with Special Branching Variable Selection

In order to use strong branching in combination with a problem specific branching variable selection strategy, it is only necessary to redefine the virtual function

  int ABA_SUB::selectBranchingVariableCandidates(ABA_BUFFER<int> &candidates)

in the problem specific subproblem class. In the buffer candidates the indices of the variables that should be tested as branching variables are collected. If at least one candidate is found, the function should return 0, otherwise 1.

ABACUS tests all candidates by (partially) solving the first linear program of all potential sons and selects the branching variable as explained above.
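
The core of such a candidate selection can be sketched in plain C++: the sketch below collects the indices of the variables whose LP-values are most fractional, i.e., closest to 0.5. All names and the chosen tolerance are illustrative; in ABACUS the candidates would be stored in the ABA_BUFFER<int> argument instead.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Collect the indices of at most maxCandidates variables whose LP-values
// are "most fractional", i.e. whose fractional parts are closest to 0.5.
// The tolerance 1e-6 for deciding integrality is an arbitrary choice.
std::vector<int> branchingCandidates(const std::vector<double> &x,
                                     int maxCandidates)
{
    std::vector<int> candidates;
    for (int i = 0; i < static_cast<int>(x.size()); ++i) {
        double f = x[i] - std::floor(x[i]);
        if (f > 1e-6 && f < 1.0 - 1e-6) candidates.push_back(i);
    }
    // sort by the distance of the fractional part to 0.5, closest first
    std::sort(candidates.begin(), candidates.end(), [&x](int a, int b) {
        double fa = x[a] - std::floor(x[a]);
        double fb = x[b] - std::floor(x[b]);
        return std::fabs(fa - 0.5) < std::fabs(fb - 0.5);
    });
    if (static_cast<int>(candidates.size()) > maxCandidates)
        candidates.resize(maxCandidates);
    return candidates;
}
```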

5.2.8.3 Ranking Branching Rules

In the default version the rank of a branching rule is computed by the function lpRankBranchingRule(). The rank can be determined differently by redefining the virtual function

  double ABA_SUB::rankBranchingRule(ABA_BRANCHRULE *branchRule)

that returns a floating point number associated with the rank of the branchRule.

5.2.8.4 Comparing Branching Samples

After a rank has been assigned to each rule of each branching sample by the function rankBranchingRule(), all branching samples are compared and the best one is selected. This comparison is performed by the virtual function

  int ABA_SUB::compareBranchingSampleRanks(ABA_ARRAY<double> &rank1,  
                                           ABA_ARRAY<double> &rank2)

that compares the ranks rank1 of all rules of one branching sample with the ranks rank2 of the rules of another branching sample. It returns 1 if the ranks stored in rank1 are better, 0 if both ranks are equal, and -1 if the ranks stored in rank2 are better.

In the default version of compareBranchingSampleRanks(), for maximization problems the array rank1 is better if its maximal entry is less than the maximal entry of rank2 (min-max criterion). For minimization problems rank1 is better if its minimal entry is greater than the minimal entry of rank2 (max-min criterion).

Problem specific orders of the ranks of branching samples can be implemented by redefining the virtual function compareBranchingSampleRanks().
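
For a maximization problem, the default min-max comparison can be sketched in isolation as follows (plain C++; the function name and the representation of the ranks as std::vector are illustrative).

```cpp
#include <algorithm>
#include <vector>

// Min-max criterion for a maximization problem: the sample whose worst
// (i.e. maximal) rank is smaller is the better one. Returns 1 if rank1
// is better, -1 if rank2 is better, and 0 if both are equally good.
// Both rank vectors must be nonempty.
int compareSampleRanksMax(const std::vector<double> &rank1,
                          const std::vector<double> &rank2)
{
    double max1 = *std::max_element(rank1.begin(), rank1.end());
    double max2 = *std::max_element(rank2.begin(), rank2.end());
    if (max1 < max2) return  1;
    if (max1 > max2) return -1;
    return 0;
}
```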

5.2.8.5 Selecting Branching Samples

If the redefinition of the function compareBranchingSampleRanks() is not adequate for a problem specific selection of the branching sample, then the virtual function

  int ABA_SUB::selectBestBranchingSample(int nSamples,  
                                         ABA_BUFFER<ABA_BRANCHRULE*> **samples)

can be redefined. The number of branching samples is given by the integer number nSamples, the array samples stores pointers to buffers storing the branching rules of the samples. The function should return the number of the best branching sample.

5.2.8.6 Strong Branching with other Branching Rules

As explained in Section 5.2.7 other branching strategies than branching on variables can be chosen by redefining the virtual function

  int ABA_SUB::generateBranchRules(ABA_BUFFER<ABA_BRANCHRULE*> &rules);

in the problem specific subproblem class. Instead of immediately generating a single branching sample and storing it in the buffer rules, it is possible to first generate a set of samples and to select the best one by calling the function

  int ABA_SUB::selectBestBranchingSample(int nSamples,  
                                         ABA_BUFFER<ABA_BRANCHRULE*> **samples).

For problem specific branching rules that are not already provided by ABACUS, but derived from the base class ABA_BRANCHRULE, it is necessary to redefine the virtual function

  void ABA_BRANCHRULE::extract(ABA_LPSUB *lp)

if the ranks of the branching rules are computed by solving the first linear program of the potential sons, as ABACUS does in its default version. Similarly to the way the function

  int ABA_BRANCHRULE::extract(ABA_SUB *sub)

(see Section 5.2.7.3) modifies the subproblem according to the branching rule, the virtual function

  void extract(ABA_LPSUB *lp)

should modify the linear programming relaxation in order to evaluate the branching rule.

In addition the virtual function

  void ABA_BRANCHRULE::unextract(ABA_LPSUB *lp)

must also be redefined. It should undo the modifications of the linear programming relaxation performed by extract(ABA_LPSUB *lp).

5.2.9 Activating and Deactivating a Subproblem

Entry points at the beginning and at the end of the subproblem optimization are provided by the functions activate() and deactivate().

5.2.10 Calling ABACUS Recursively

The separation or pricing problem in a branch-and-bound algorithm can again be a mixed integer optimization problem. In this case, it might be appropriate to solve this problem again with an application of ABACUS. For example, the pricing problem of a solver for binary cutting stock problems is under certain conditions a general mixed integer optimization problem [VBJN94]. The following example shows what this part of the function pricing() could look like for the binary cutting stock problem. First, we construct an object of the class LPFORMAT, storing the pricing problem formulated as a mixed integer optimization problem; then we initialize the solver of the pricing problem. The class MIP is derived from the class ABA_MASTER for the solution of general mixed integer optimization problems (the classes LPFORMAT and MIP are not part of the ABACUS kernel but belong to an ABACUS application that is not publicly available). After the optimization we retrieve the value of the optimal solution.

  LPFORMAT knapsackProblem(master_, nOrigVar_, 1 + nSosCons_, &optSense,  
                           origObj_, lBound, uBound, varType, constraints);  
 
  MIP *knapsackSolver = new MIP(&knapsackProblem, "CSP-Pricer");  
 
  knapsackSolver->optimize();  
 
  optKnapsackValue = knapsackSolver->primalBound();

5.2.11 Selecting the LP-Method

Before the linear programming relaxation is solved, the virtual function

  ABA_LP::METHOD ABA_SUB::chooseLpMethod(int nVarRemoved, int nConRemoved,  
                                         int nVarAdded, int nConAdded)

is called in each iteration of the cutting plane algorithm, if approximate solving is disabled (the default). If the usage of the approximate solver is enabled (by setting the parameter SolveApprox to true in the configuration file .abacus), the virtual function ABA_SUB::solveApproxNow() is called first. If this function returns true the LP method is set to ABA_LP::Approximate (if the current situation in the cutting plane algorithm does not require an exact solution, e.g. to prepare branching).

The parameters of the function ABA_SUB::chooseLpMethod refer to the number of removed and added variables and constraints. If a linear programming relaxation should be solved with a strategy different from the default strategy, then this virtual function must be redefined in the class MYSUB. According to the criteria of our new application the function chooseLpMethod() must return ABA_LP::BarrierAndCrossover, ABA_LP::BarrierNoCrossover, ABA_LP::Primal, or ABA_LP::Dual. The LP methods ABA_LP::BarrierAndCrossover and ABA_LP::BarrierNoCrossover are provided only for compatibility with older versions of ABACUS and custom solver interfaces as the current interface only supports the methods ABA_LP::Primal and ABA_LP::Dual (and ABA_LP::Approximate, see above).

5.2.12 Generating Output

We recommend using the built-in output and error streams also for problem specific output, via the member functions out() and err() of the class ABA_GLOBAL:

  master_->out() << "This is a message for the output stream." << endl;  
  master_->err() << "This is a message for the error stream." << endl;

For messages output from members of the class ABA_MASTER and its derived classes, dereferencing the pointer to the master can be omitted:

  out() << "This is a message for the output stream from a master class." << endl;  
  err() << "This is a message for the error stream from a master class." << endl;

The functions out() and err() can optionally receive an integer argument giving the amount of indentation. One unit of indentation is four blanks.
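The indentation convention can be mirrored by a small helper. The function name below is hypothetical and only illustrates the rule of four blanks per indentation unit; ABACUS applies this internally in its stream classes.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper illustrating the convention described above:
// one unit of indentation corresponds to four blanks.
std::string indentation(int level) {
    return std::string(4 * level, ' ');  // 4 blanks per level
}
```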

The amount of output can be controlled by the parameter OutputLevel in the file .abacus (see Section 5.2.26). If some output should be generated although it is turned off for the current output level at this point of the program, then it can be turned on temporarily.

  void MYSUB::myFunction()  
  {  
    if (master_->outLevel() == ABA_MASTER::LinearProgram) master_->out().on();  
    master_->out() << "This output appears only for output level ";  
    master_->out() << "'LinearProgram'." << endl;  
    if (master_->outLevel() == ABA_MASTER::LinearProgram) master_->out().off();  
  }

5.2.13 Memory Management

The complete memory management of data allocated in member functions of application specific classes has to be performed by the user, i.e., memory allocated in such a function also has to be deallocated in an application specific function. However, there are some exceptions. As soon as a constraint or a variable is added to a pool its memory management is passed to ABACUS. This also holds if the constraint or variable is added to a pool with the functions ABA_SUB::addCons() or ABA_SUB::addVars(). Constraints and variables are allocated in problem specific functions, but deallocated by the framework.

Branching rules added to a subproblem are another exception, but this is only relevant for applications that add a problem specific branching rule. If variables are fixed or set by logical implications, then objects of the class ABA_FSVARSTAT are allocated. Also for these objects the further memory management is performed by the framework.

In order to save memory, some data members of a subproblem can be accessed only while the subproblem is being optimized. These data members are listed in Table 5.1.




  Member            Description
  ----------------  -------------------------------------------------------
  tailOff_          tailing off manager
  lp_               linear programming relaxation
  addVarBuffer_     buffer for adding variables
  addConBuffer_     buffer for adding constraints
  removeVarBuffer_  buffer for removing variables
  removeConBuffer_  buffer for removing constraints
  xVal_             values of the variables in the last solved ABA_LP
  yVal_             values of the dual variables in the last solved ABA_LP

Table 5.1: Activated members of ABA_SUB.

5.2.14 Eliminating Constraints

In order to keep the number of active constraints within a moderate size, active constraints can be eliminated by setting the built-in parameter ConstraintEliminationMode to Basic or NonBinding (see Section 5.2.26). Other problem specific strategies can be implemented by redefining the virtual function

  void MYSUB::conEliminate(ABA_BUFFER<int> &remove)  
  {  
    for (int i = 0; i < nCon(); i++)  
      if (/* constraint i should be eliminated */)  
        remove.push(i);  
  }

within the subproblem of the new application.

The function conEliminate() is called within the cutting plane algorithm. Moreover, we provide an even more flexible method for the elimination of constraints by the functions removeCon() and removeCons(), which can be called from any function within the cutting plane method. The functions

  void ABA_SUB::removeCon(int i);  
  void ABA_SUB::removeCons(ABA_BUFFER<int> &remove);

remove the constraint i or the constraints stored in the buffer remove, respectively.

Constraints removed by the function conEliminate() or by an explicit call of removeCon() or removeCons() are not removed immediately from the active constraints and the linear program, but buffered; the updates are performed at the beginning of the next iteration of the cutting plane method.
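The deferred-removal pattern can be sketched as follows. The class and member names are illustrative, not ABACUS internals; the sketch only shows why buffering the indices and applying them in one sweep at the start of the next iteration is safe.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the deferred-removal pattern: removal requests only buffer
// an index; the actual update happens in one pass later. Names are
// illustrative, not ABACUS internals.
class ActiveConstraints {
  public:
    explicit ActiveConstraints(int n) : ids_(n) {
        for (int i = 0; i < n; i++) ids_[i] = i;
    }
    // analogous to ABA_SUB::removeCon(): only remembers the index
    void removeCon(int i) { removeBuffer_.push_back(i); }

    // performed at the beginning of the next cutting plane iteration:
    // erase buffered indices highest first, so earlier erasures do not
    // shift the positions of later ones
    void applyBufferedRemovals() {
        std::sort(removeBuffer_.rbegin(), removeBuffer_.rend());
        for (int i : removeBuffer_) ids_.erase(ids_.begin() + i);
        removeBuffer_.clear();
    }
    int nCon() const { return static_cast<int>(ids_.size()); }
    int id(int i) const { return ids_[i]; }
  private:
    std::vector<int> ids_;          // active constraint identifiers
    std::vector<int> removeBuffer_; // indices buffered for removal
};
```

Sorting the buffer in decreasing order before erasing is the detail that makes batched index-based removal correct.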

5.2.15 Eliminating Variables

Similarly to the constraint elimination, variables can be eliminated either by setting the parameter VariableEliminationMode to ReducedCost or by redefining the virtual function varEliminate() according to the needs of our application.

  void MYSUB::varEliminate(ABA_BUFFER<int> &remove)  
  {  
    for (int i = 0; i < nVar(); i++)  
      if (/* variable i should be eliminated */)  
        remove.push(i);  
  }

By analogy to the removal of constraints we provide functions to remove variables within any function of the cutting plane algorithm. The functions

  void ABA_SUB::removeVar(int i);  
  void ABA_SUB::removeVars(ABA_BUFFER<int> &remove);

remove the variable i or the variables stored in the buffer remove, respectively.

Like eliminated constraints, eliminated variables are buffered and the update is performed at the beginning of the next iteration of the cutting plane algorithm.

5.2.16 Adding Constraints/Variables in General

The functions separate() and pricing() provide interfaces where constraints/variables are usually generated in the cutting plane or column generation algorithm. Moreover, to provide a high flexibility we allow the addition and removal of constraints and variables within any subroutine of the cutting plane or column generation algorithm as we have already pointed out.

Note that while constraints or variables added with the functions addCons() or addVars() are usually allocated by the user, they are deleted by ABACUS. They must not be deleted by the user (see Section 5.2.13).

The sizes of the buffers that store the constraints/variables being added can be controlled by the parameters MaxConBuffered and MaxVarBuffered in the parameter file .abacus. At the start of the next iteration the best MaxConAdd constraints and the best MaxVarAdd variables are added to the subproblem. This evaluation of the buffered items is only possible if a rank has been specified for each item in the functions addCons() and addVars(), respectively.
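The selection of the best buffered items by rank can be sketched with a standalone function. This is illustrative only (ABACUS performs the evaluation internally); here a higher rank is assumed to be better.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the buffer evaluation: from the buffered items (up to
// MaxConBuffered/MaxVarBuffered many), select the maxAdd items with the
// best rank. Returns the indices of the selected items, best first.
// Assumes higher rank = better; purely illustrative.
std::vector<int> selectBest(const std::vector<double> &rank, int maxAdd) {
    std::vector<int> idx(rank.size());
    for (std::size_t i = 0; i < idx.size(); i++) idx[i] = static_cast<int>(i);
    // order the buffered items by decreasing rank
    std::sort(idx.begin(), idx.end(),
              [&rank](int a, int b) { return rank[a] > rank[b]; });
    if (static_cast<int>(idx.size()) > maxAdd) idx.resize(maxAdd);
    return idx;
}
```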

Moreover, we provide further features for the addition of cutting planes with the function addCons():

   virtual int addCons(ABA_BUFFER<ABA_CONSTRAINT*>            &constraints,  
                       ABA_POOL<ABA_CONSTRAINT, ABA_VARIABLE> *pool = 0,  
                       ABA_BUFFER<bool>                       *keepInPool = 0,  
                       ABA_BUFFER<double>                     *rank = 0);

The buffer constraints holds the constraints being added. All other arguments are optional or ignored if they are 0. If the argument pool is not 0, then the constraints are added to this pool instead of the default pool. If the flag (*keepInPool)[i] is true for the i-th added constraint, then this constraint will even be stored in the pool if it is not added to the active constraints. In order to define an order of the buffered constraints a rank has to be specified for each constraint in the function addCons().

Analogously to the addition of constraints with the function addCons(), the function

  virtual int addVars(ABA_BUFFER<ABA_VARIABLE*>              &variables,  
                      ABA_POOL<ABA_VARIABLE, ABA_CONSTRAINT> *pool = 0,  
                      ABA_BUFFER<bool>                       *keepInPool = 0,  
                      ABA_BUFFER<double>                     *rank = 0);

can be used for a flexible addition of variables to the buffer in a straightforward way.

The function pricing() handles non-liftable constraints correctly (see Section 4.2.3.12). However, if variables are generated within another part of the cutting plane algorithm and non-liftable constraints are present, then run-time errors or wrong results can be produced. If ABACUS is compiled in the safe mode (-DABACUSSAFE) this situation is recognized and the program stops with an error message. If in an application both non-liftable constraints are generated and variables are added outside the function pricing(), then the user has to remove non-liftable constraints explicitly to avoid errors.

5.2.16.1 Activation of a Subproblem

After a subproblem becomes active the virtual function activate() is called. Its default implementation in the class ABA_SUB does nothing but it can be redefined in the derived class MYSUB. In this function application specific data structures that are only required for an active subproblem can be set up, e.g., a graph associated with the subproblem:

  void MYSUB::activate()  
  { }

5.2.16.2 Deactivation of a Subproblem

The virtual function deactivate() is the counterpart of the function activate(). It is called at the end of the optimization of a subproblem and again its default implementation does nothing. In this function, e.g., memory allocations performed in the function activate() can be undone:

  void MYSUB::deactivate()  
  { }

5.2.17 Fixing and Setting Variables by Logical Implications

Variables can be fixed and set by logical implications by redefining the virtual functions

  void MYSUB::fixByLogImp(ABA_BUFFER<int> &variables,  
                          ABA_BUFFER<ABA_FSVARSTAT*> &status)  
  {}

and

  void MYSUB::setByLogImp(ABA_BUFFER<int> &variables,  
                          ABA_BUFFER<ABA_FSVARSTAT*> &status)  
  {}

The buffer variables holds the variables being fixed or set, respectively, and the buffer status holds the statuses they are fixed or set to. The following piece of code gives a fragment of an implementation of the function fixByLogImp().

  void MYSUB::fixByLogImp(ABA_BUFFER<int> &variables,  
                          ABA_BUFFER<ABA_FSVARSTAT*> &status)  
  {  
    for (int i = 0; i < nVar(); i++)  
      if (/* condition for fixing i to lower bound holds */) {  
        variables.push(i);  
        status.push(new ABA_FSVARSTAT(master_, ABA_FSVARSTAT::FixedToLowerBound));  
      }  
      else if (/* condition for fixing i to upper bound holds */) {  
        variables.push(i);  
        status.push(new ABA_FSVARSTAT(master_, ABA_FSVARSTAT::FixedToUpperBound));  
      }  
  }

Setting variables by logical implications can be implemented analogously by replacing “FixedTo” with “SetTo”.

5.2.18 Loading an Initial Basis

By default, the barrier method is used for the solution of the first linear program of the subproblem. However, a basis can also be loaded, and then the LP-method can be selected accordingly with the function chooseLpMethod() (see Section 5.2.11). The variable and slack variable statuses can be initialized in the constructor of the root node as in the following example.

  MYSUB::MYSUB(ABA_MASTER *master) :  
  ABA_SUB(master, 50.0, 0.0, 100.0)  
  {  
    ABA_LPVARSTAT::STATUS  lStat;  
    for (int i = 0; i < nVar(); i++) {  
      lStat = /* one of ABA_LPVARSTAT::AtLowerBound, ABA_LPVARSTAT::Basic,  
                 or ABA_LPVARSTAT::AtUpperBound */;  
      lpVarStat(i)->status(lStat);  
    }  
    ABA_SLACKSTAT::STATUS sStat;  
    for (int i = 0; i < nCon(); i++) {  
      sStat = /* one of  ABA_SLACKSTAT::Basic or ABA_SLACKSTAT::NonBasicZero */;  
      slackStat(i)->status(sStat);  
    }  
  }

5.2.19 Integer Objective Functions

If all objective function values of feasible solutions have integer values, then a subproblem can be fathomed earlier because its dual bound can be rounded up for a minimization problem, or down for a maximization problem, respectively. This feature can be controlled by the parameter ObjInteger of the parameter file (see Section 5.2.26).
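The earlier fathoming enabled by an integer objective can be sketched for a minimization problem. The function below is illustrative, not ABACUS code; the tolerance guards against floating point noise in the dual bound.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the fathoming test for a minimization problem with an
// integer objective function: the dual bound may be rounded up before
// it is compared with the primal bound. Illustrative only.
bool fathomByBound(double dualBound, double primalBound, bool objInteger) {
    if (objInteger)
        dualBound = std::ceil(dualBound - 1.0e-6); // round up, small tolerance
    // no feasible solution in this subtree can beat the primal bound
    return dualBound >= primalBound;
}
```

With a primal bound of 17 and an LP bound of 16.3, the subproblem can be fathomed only if the objective is known to be integer-valued.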

This feature can depend on the specific problem instance. Moreover, if variables are generated dynamically, it is even possible that this attribute depends on the currently active variable set. Therefore, we provide the function

  void ABA_MASTER::objInteger(bool switchedOn);

with which the automatic rounding of the dual bound can be turned on (if switchedOn is true) or off (if switchedOn is false).

For analyzing whether the objective function values of all feasible solutions are integer with respect to the currently active variable set of the subproblem, the function

  bool ABA_SUB::objAllInteger();

that returns true if all active variables of the subproblem are discrete and their objective function coefficients are integer, and returns false otherwise.
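The test behind objAllInteger() can be sketched over plain vectors. This free function is illustrative; the real member function inspects the active variable set of the subproblem.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the test behind ABA_SUB::objAllInteger(): all active
// variables must be discrete and all objective function coefficients
// integer. Inputs are plain vectors here for illustration.
bool allObjInteger(const std::vector<bool> &discrete,
                   const std::vector<double> &obj) {
    for (std::size_t i = 0; i < obj.size(); i++)
        if (!discrete[i] || std::floor(obj[i]) != obj[i])
            return false;
    return true;
}
```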

If the set of active variables is static, i.e., no variables are generated dynamically, then the function objAllInteger() could be called in the constructor of the root node of the enumeration tree and according to the result the flag of the master can be set:

  MYSUB::MYSUB(ABA_MASTER *master) :  
    ABA_SUB(master, 50.0, 0.0, 100.0)  
  {  
    master_->objInteger(objAllInteger());  
  }

By default, we assume that the objective function values of feasible solutions can also have noninteger values.

5.2.20 An Entry Point at the End of the Optimization

While the virtual function initializeOptimization() is called at the beginning of the optimization and can be redefined for the initialization of application specific data (e.g., the variables and constraints), the virtual function terminateOptimization() is called at the end of the optimization. Again, the default implementation does nothing and a redefined version can be used, e.g., for visualizing the best feasible solution on the screen.

5.2.21 Output of Statistics

At the end of the optimization a solution history and some general statistics about the optimization are output. Problem specific statistics can be output by redefining the virtual function output() of the class ABA_MASTER in the class MYMASTER. The default implementation of the function output() does nothing. Of course, application specific output can be also generated in the function terminateOptimization(), but then this output appears before the solution history and some other statistics. If the function output() is used, problem specific statistics are output between the general statistics and the value of the optimum solution.

5.2.22 Accessing Internal Data of the LP-Solver

The class ABA_SUB has the member function ABA_LPSUB *lp() that allows direct access to the data of the linear program solved within the subproblem. If the member functions of the class ABA_LPSUB and its base class ABA_LP are not sufficient to retrieve a specific piece of information, direct access to the data of the LP-solver is possible.

The data retrieved from the LP-solver in this direct way has to be interpreted very carefully. Since variables might be eliminated automatically, the actual linear program submitted to the LP-solver might differ from the linear programming relaxation. Only if LP-data is accessed through the member functions of the class ABA_LPSUB is the "real" linear programming relaxation obtained.

Warning: Do not modify the data of the LP-solver using the pointers to the internal data structures and the functions of the solver interface. A correct modification of the LP-data is only guaranteed by the member functions of the class ABA_SUB.

5.2.22.1 Accessing Internal Data of the LP-solver

Internal data of the solver is retrieved with the function

  OsiSolverInterface* ABA_OSIIF::osiLP();

that returns a pointer to the OsiSolverInterface object that manages the interaction with the LP-solver.

Since the linear programming relaxation of a subproblem is designed independently of the LP-solver, an explicit cast to the class ABA_LPSUBOSI is required:

  OsiSolverInterface* LpInterface = ((ABA_LPSUBOSI*) lp())->osiLP();

The class ABA_LPSUBOSI is derived from the classes ABA_LPSUB and ABA_OSIIF.

5.2.23 Problem Specific Fathoming Criteria

Sometimes structural problem specific information can be used for fathoming a subproblem. Such criteria can be implemented by redefining the virtual function ABA_SUB::exceptionFathom(). This function is called before the separation or pricing is performed. If this function returns false (as the default implementation in the base class ABA_SUB does), we continue with separation or pricing. Otherwise, if it returns true, the subproblem is fathomed.

5.2.24 Enforcing a Branching Step

ABACUS enforces a branching step if a tailing off effect is observed. Other problem specific criteria for branching instead of continuing the cutting plane or column generation algorithm can be specified by redefining the function ABA_SUB::exceptionBranch(). This criterion is checked before the separation or pricing is performed. If the function returns true, a branching step is performed. Otherwise, we continue with the separation or pricing. The default implementation of the base class ABA_SUB always returns false.

5.2.25 Advanced Tailing Off Control

ABACUS automatically controls the tailing off effect according to the parameters TailOffNLps and TailOffPercent of the configuration file .abacus. However, sometimes certain solutions of the LP-relaxations should be ignored in the tailing off control. The function ignoreInTailingOff() provides finer control: if it is called, the next LP-solution is ignored in the tailing-off control. Calling ignoreInTailingOff() can, e.g., be considered in the following situation: if only constraints that are required for the integer programming formulation of the optimization problem are added, then the next LP-value could be ignored in the tailing-off control, so that only "real" cutting planes are considered (this is only an example strategy that might not be practical in many situations, but has sometimes turned out to be efficient).

5.2.26 System Parameters

The setting of several parameters heavily influences the running time. Good candidates are the modification of the enumeration strategy with the parameter EnumerationStrategy, the control of the tailing off effect with the parameters TailOffNLps and TailOffPercent, an adaption of the skipping method for the cut generation with the parameters SkipFactor and SkipByNode, and the parameters specific to the used LP-solver.

Here we present a complete list of the parameters that can be modified for the fine tuning of the algorithm in the file .abacus. Almost all parameters can be modified with member functions of the class ABA_MASTER. Usually, these member functions have the same name as the parameter, but the first letter is a lower case letter. The parameters specific to the LP-solver can be set by redefining the virtual function ABA_MASTER::setSolverParameters(), see Section 5.2.27 for details.

Warning: The integer numbers used in the parameter files must not exceed the value of INT_MAX given in the file <limits.h>. The default values are correct for platforms representing the type int with 32 bits (usually 2147483647 on machines using two's complement).

5.2.26.1 EnumerationStrategy

This parameter controls the enumeration strategy in the branch-and-bound algorithm.

Valid settings:

BestFirst

best-first search

BreadthFirst

breadth-first search

DepthFirst

depth-first search

DiveAndBest

depth-first search until the first feasible solution is found, then best-first search

Default value: BestFirst

5.2.26.2 Guarantee

The branch-and-bound algorithm stops as soon as a primal bound and a global dual bound are known such that it can be guaranteed that the value of an optimum solution is at most Guarantee percent better than the primal bound. The value 0.0 means determination of an optimum solution. If the program terminates with a guarantee greater than 0, then the status of the master is ABA_MASTER::Guarantee instead of ABA_MASTER::Optimal.

Valid settings:

A nonnegative floating point number.

Default value: 0.0
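The stopping test implied by the Guarantee parameter can be sketched for a minimization problem. The exact formula used by ABACUS is not reproduced here; this is one common relative-gap variant, and the function name is illustrative.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a guarantee check for a minimization problem: stop when the
// dual bound proves that an optimum solution is at most `guarantee`
// percent better than the primal bound. One common relative-gap
// definition; illustrative only.
bool guaranteeReached(double primal, double dual, double guarantee) {
    if (primal == dual) return true;  // optimality proved
    return 100.0 * std::fabs(primal - dual) / std::fabs(primal) <= guarantee;
}
```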

5.2.26.3 MaxLevel

This parameter indicates the maximal level that should be reached in the enumeration tree. Instead of performing a branching operation any subproblem having level MaxLevel is fathomed. If the value of MaxLevel is 1, then no branching is done, i.e., a pure cutting plane algorithm is performed. If the maximal enumeration level is reached, the master of the optimization receives the status MaxLevel in order to indicate that the problem does not necessarily terminate with an optimum solution.

Valid settings:

A positive integer number.

Default value: 999999

5.2.26.4 MaxCpuTime

This parameter indicates the maximal CPU time that may be used by the optimization process. If the CPU time exceeds this value, then the master of the optimization receives the status MaxCpuTime in order to indicate that the problem does not necessarily terminate with an optimum solution. In this case, the real CPU time can exceed this value since we check the used CPU time only in the main loop of the cutting plane algorithm. Under the operating system UNIX a more exact check can be done with the command limit, which kills the process if the maximal CPU time is exceeded, whereas our CPU time control “softly” terminates the run, i.e., the branch-and-bound tree is cleaned, all relevant destructors are called, and the final output is generated.

Valid settings:

A string in the format h{h}:mm:ss, where the first number represents the hours, the second one the minutes, and the third one the seconds. Note, internally this string is converted to seconds. Therefore, its value must be less than INT_MAX seconds.

Default value: 99999:59:59
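The conversion of the h{h}:mm:ss format into seconds can be sketched as follows. ABACUS has its own parser; this illustrative version returns -1 on a malformed string and does not check the INT_MAX limit mentioned above.

```cpp
#include <cstdio>

// Sketch of parsing the h{h}:mm:ss time format into seconds, as the
// parameter value is stored internally. Illustrative only; returns -1
// if the string does not match the format.
long parseTimeString(const char *s) {
    int h, m, sec;
    if (std::sscanf(s, "%d:%d:%d", &h, &m, &sec) != 3) return -1;
    if (h < 0 || m < 0 || m > 59 || sec < 0 || sec > 59) return -1;
    return 3600L * h + 60L * m + sec;
}
```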

5.2.26.5 MaxCowTime

This parameter indicates the maximal elapsed time (wall clock time) that may be used by the process. If the elapsed time exceeds this value, then the master of the optimization receives the status MaxCowTime in order to indicate that the problem does not necessarily terminate with an optimum solution. In this case, the real elapsed time can exceed this value since we check the elapsed time only in the main loop of the cutting plane algorithm.

Valid settings:

A string in the format h{h}:mm:ss, where the first number represents the hours, the second one the minutes, and the third one the seconds. Note, internally this string is converted to seconds. Therefore, its value must be less than INT_MAX seconds.

Default value: 99999:59:59

5.2.26.6 ObjInteger

If this parameter is true, then we assume that all feasible solutions have integer objective function values. In this case, we can fathom a subproblem in the branch-and-bound algorithm already when the gap between the solution of the linear programming relaxation and the primal bound is less than 1.

Valid settings:

false or true

Default value: false

5.2.26.7 TailOffNLps

This parameter indicates the number of linear programs considered in the tailing off analysis (see parameter TailOffPercent).

Valid settings:

An integer number. If this number is nonpositive, then the tailing off control is turned off.

Default value: 0

5.2.26.8 TailOffPercent

This parameter indicates the minimal change (in percent) of the objective function value between the solutions of TailOffNLps successive linear programming relaxations in the subproblem optimization that is required so that the cutting plane algorithm is not stopped and a branching step is not enforced.

Valid settings:

A nonnegative floating point number.

Default value: 0.0001
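The interplay of TailOffNLps and TailOffPercent can be sketched with a small bookkeeping class. This is an illustrative model of the test, not the ABA_TAILOFF implementation: branching is suggested when the LP value changed by less than `percent` over the last `nLps` solved relaxations.

```cpp
#include <cmath>
#include <deque>

// Sketch of a tailing-off test: remember the last nLps+1 LP values and
// compare the relative change between the oldest and the newest one
// with the threshold. Illustrative only.
class TailOff {
  public:
    TailOff(int nLps, double percent) : nLps_(nLps), percent_(percent) {}
    void update(double lpValue) {
        values_.push_back(lpValue);
        if (static_cast<int>(values_.size()) > nLps_ + 1) values_.pop_front();
    }
    bool tailOff() const {
        if (static_cast<int>(values_.size()) < nLps_ + 1) return false;
        double oldVal = values_.front(), newVal = values_.back();
        if (oldVal == 0.0) return false;  // avoid division by zero
        double change = 100.0 * std::fabs(newVal - oldVal) / std::fabs(oldVal);
        return change < percent_;  // too little progress: enforce branching
    }
  private:
    int nLps_;
    double percent_;
    std::deque<double> values_;
};
```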

5.2.26.9 DelayedBranchingThreshold

This number indicates how often a subproblem should be put back into the set of open subproblems before a branching step is executed. The value 0 means that we branch immediately at the end of the first optimization if the subproblem is not fathomed. We try to leave the subproblem untouched for MinDormantRounds rounds, i.e., other subproblems are optimized if possible before we turn back to the optimization of this subproblem.

Valid settings:

A nonnegative integer number.

Default value: 0

5.2.26.10 MinDormantRounds

The minimal number of iterations we try to keep a subproblem dormant if delayed branching is applied.

Valid settings:

A positive integer number.

Default value: 1

5.2.26.11 OutputLevel

We can control the amount of output during the optimization by this parameter.

For the parameter values Subproblem and LinearProgram a seven column output is generated with the following meaning:

  #sub     total number of subproblems
  #open    current number of open subproblems
  current  the number of the currently optimized subproblem
  #iter    number of iterations in the cutting plane algorithm
  ABA_LP   value of the LP-relaxation
  dual     global dual bound
  primal   primal bound

Valid settings:

Silent

No output.

Statistics

Output of the result and some statistics at the end of the optimization.

Subproblem

Additional one-line output after the first solved ABA_LP of the root node and at the end of the optimization of each subproblem.

LinearProgram

Additional one-line output after the solution of a linear program.

Full

Detailed output in all phases of the optimization.

Default value: Full

5.2.26.12 LogLevel

We can control the amount of output written to the log file in the same way as the output to the standard output stream.

Valid settings:

See parameter OutputLevel. If the LogLevel is not Silent, two log files are created. The file with the name of the problem instance and the extension .log contains the output written to ABA_MASTER::out() (filtered according to the LogLevel), while all messages written to ABA_MASTER::err() are also written to the file with the name of the problem instance and the extension .error.log.

Default value: Silent

5.2.26.13 PrimalBoundInitMode

This parameter controls the initialization of the primal bound. The modes Optimum and OptimumOne are useful for tests.

Valid settings:

None

The primal bound is initialized with “infinity” for minimization problems and “minus infinity” for maximization problems, respectively.

Optimum

The primal bound is initialized with the value of an optimum solution, if it can be read from the file with the name of the parameter OptimumFileName.

OptimumOne

The primal bound is initialized with the value of an optimum solution plus one for minimization problems, and the value of an optimum solution minus one for maximization problems. This is only possible if the value of an optimum solution can be read from the file with the name given by the parameter OptimumFileName.

Default value: None

5.2.26.14 PricingFrequency

This parameter indicates the number of iterations between two additional pricing steps in the cutting plane phase for algorithms performing both constraint and variable generation. If this number is 0, then no additional pricing is performed.

Valid settings:

A nonnegative integer number.

Default value: 0

5.2.26.15 SkipFactor

This parameter indicates the frequency of cutting plane and variable generation in the subproblems according to the parameter SkippingMode. The value 1 means that cutting planes and variables are generated in every subproblem independently of the skipping mode.

Valid settings:

A positive integer number.

Default value: 1

5.2.26.16 SkippingMode

This parameter controls the skipping mode, i.e., if constraints or variables are generated in a subproblem.

Valid settings:

SkipByNode

Generate constraints and variables only in every SkipFactor-th processed node.

SkipByLevel

Generate constraints and variables only in every SkipFactor-th level of the branch-and-bound tree.

Default value: SkipByNode

5.2.26.17 FixSetByRedCost

Variables are fixed and set by reduced cost criteria if and only if this parameter is true. The default setting is false, as fixing or setting variables to 0 can make the pricing problem intractable in branch-and-price algorithms.

Valid settings:

false or true

Default value: false

5.2.26.18 PrintLP

If this parameter is true, then the linear program is output every iteration. This is only useful for debugging.

Valid settings:

false or true

Default value: false

5.2.26.19 MaxConAdd

This parameter determines the maximal number of constraints added to the linear programming relaxation per iteration in the cutting plane algorithm.

Valid settings:

A nonnegative integer number.

Default value: 100

5.2.26.20 MaxConBuffered

After the cutting plane generation the MaxConAdd best constraints are selected from all generated constraints that are kept in a buffer. This parameter indicates the size of this buffer.

Valid settings:

A nonnegative integer number.

Default value: 100

5.2.26.21 MaxVarAdd

This parameter determines the maximal number of variables added to the linear programming relaxation per iteration in the cutting plane algorithm.

Valid settings:

A nonnegative integer number.

Default value: 100

5.2.26.22 MaxVarBuffered

After the variable generation the MaxVarAdd best variables are selected from all generated variables that are kept in a buffer. This parameter indicates the size of this buffer.

Valid settings:

A nonnegative integer number.

Default value: 100

5.2.26.23 MaxIterations

This parameter limits the number of iterations of the cutting plane phase of a single subproblem.

Valid settings:

A nonnegative integer number or -1 if unlimited.

Default value: -1

5.2.26.24 EliminateFixedSet

Fixed and set variables are eliminated from the linear program submitted to the LP-solver if this parameter is true and the variable is eliminable. By default, a variable is eliminable if it has not been basic in the last solved linear program.

Valid settings:

false or true

Default value: false

5.2.26.25 NewRootReOptimize

If the root of the remaining branch-and-bound tree changes and this node is not the active subproblem, then we reoptimize this subproblem, if this parameter is true. The reoptimization might provide better criteria for fixing variables by reduced costs.

Valid settings:

false or true

Default value: false

5.2.26.26 OptimumFileName

This parameter indicates the name of a file storing the values of the optimum solutions. Each line of this file consists of a problem name and the value of the corresponding optimum solution. This is the only optional parameter. Having the optimum values of some instances at hand can be very useful in the testing phase.

Valid settings:

A string.

Default value: This parameter is commented out in the file .abacus.

5.2.26.27 ShowAverageCutDistance

If this parameter is true, then the average Euclidean distance of the fractional solution from the added cutting planes is output every iteration of the cutting plane phase.

Valid settings:

false or true

Default value: false

5.2.26.28 ConstraintEliminationMode

This parameter indicates how constraints are eliminated in the cutting plane algorithm.

Valid settings:

None

No constraints are eliminated.

NonBinding

The non-binding dynamic constraints are eliminated.

Basic

The dynamic constraints with basic slack variables are eliminated.

Default value: Basic

5.2.26.29 ConElimEps

The parameter indicates the tolerance for the elimination of constraints by the method NonBinding.

Valid settings:

A nonnegative floating point number.

Default value: 0.001

5.2.26.30 ConElimAge

The number of iterations an elimination criterion for a constraint must be satisfied before the constraint is eliminated from the active constraints.

Valid settings:

A nonnegative integer.

Default value: 1

5.2.26.31 VariableEliminationMode

This parameter indicates how variables are eliminated in a column generation algorithm.

Valid settings:

None

No variables are eliminated.

ReducedCost

Nonbasic dynamic variables that are neither fixed nor set and for which the absolute value of the reduced costs exceeds the value given by the parameter VarElimEps are removed.

Default value: ReducedCost

5.2.26.32 VarElimEps

This parameter indicates the tolerance for the elimination of variables by the method ReducedCost.

Valid settings:

A nonnegative floating point number.

Default value: 0.001

5.2.26.33 VarElimAge

The number of iterations an elimination criterion for a variable must be satisfied before the variable is eliminated from the active variables.

Valid settings:

A nonnegative integer.

Default value: 1

5.2.26.34 VbcLog

This parameter indicates if a log-file of the enumeration tree should be generated, which can be read by the VBC-tool [Lei95]. The VBC-tool is a utility for the visualization of the branch-and-bound tree.

Valid settings:

None

No file for the VBC-Tool is generated.

File

The output is written to a file with the name <name>.<pid>.tree. <name> is the problem name as specified in the constructor of the class ABA_MASTER and <pid> is the process id.

Pipe

The control instructions for the VBC-Tool are written to the global output stream. Each control instruction starts with a $ sign. If the standard output of an ABACUS application is piped through the VBC-Tool, lines starting with a $ sign are regarded as control instructions; all other lines are written to a text window.

Default value: None

5.2.26.35 NBranchingVariableCandidates

This number indicates how many candidates for the branching variable are tested according to the BranchingStrategy. If this number is 1, a single variable is determined (if possible) that becomes the branching variable. If this number is greater than 1, each candidate is tested and the best branching variable is selected, i.e., for each candidate the two linear programs of the potential sons are solved. The variable for which the minimum of the two objective function changes is maximal is selected as branching variable.

Valid settings:

A positive integer.

Default value: 1

5.2.26.36 DefaultLpSolver

This parameter determines the LP-solver that is applied by default for each subproblem. Please note that these are the solvers supported by the Open Solver Interface and hence by ABACUS. Nevertheless, not all of these solvers may be suitable for solving LP relaxations.

Valid settings:

Cbc

Clp

CPLEX

DyLP

FortMP

GLPK

MOSEK

OSL

SoPlex

SYMPHONY

Vol

XPRESS_MP

Default value: Clp

5.2.26.37 SolveApprox

If set to true, usage of the Volume Algorithm to solve LP relaxations is enabled. This parameter only enables usage of the approximate solver in general. Whether a specific LP relaxation is solved exactly or approximately is determined by the function ABA_MASTER::solveApproxNow(). For an example reimplementation of this function see the file tspsub.w in the example directory of the ABACUS source code.

Valid settings:

false or true

Default value: false

5.2.27 Solver Parameters

Setting parameters for specific LP-solvers is done by redefining the virtual function ABA_MASTER::setSolverParameters(OsiSolverInterface* interface, bool solverIsApprox). The parameter interface is a generic pointer to an object of type OsiSolverInterface; it has to be cast to a pointer to a specific solver interface. Via this pointer all the internals of the solver can be accessed. The parameter solverIsApprox is true if the solver for which parameters are set is the approximate one, i.e., the Volume Algorithm. To set the primal column pivot algorithm for Clp to "steepest", for example, one would do:

  bool MYMASTER::setSolverParameters(OsiSolverInterface *interface, bool solverIsApprox)  
  {  
    // cast the generic interface to the Clp-specific interface  
    OsiClpSolverInterface *clpIf = dynamic_cast<OsiClpSolverInterface*>(interface);  
    if (clpIf == 0)  
      return false;  // not the Clp solver, no parameters set  
    ClpSimplex *clpSimplex = clpIf->getModelPtr();  
    ClpPrimalColumnSteepest steepestP;  
    clpSimplex->setPrimalColumnPivotAlgorithm(steepestP);  
    return true;  
  }

For a more complex reimplementation of this function see the file tspmaster.w in the example directory of the ABACUS source code.

5.2.28 Parameter Handling

ABACUS provides a concept for the implementation of application parameter files, which is very easy to use. In these files it is both possible to overwrite the values of parameters already defined in the file .abacus and to define extra parameters for the new application.

The format for parameter files is very simple. Each line contains the name of a parameter separated by an arbitrary number of whitespaces from its value. Both parameter name and parameter value can be an arbitrary character string. A line may have at most 1024 characters. Empty lines are allowed. All lines starting with a ‘#’ are considered as comments.

The following lines give an example for the parameter file .myparameters.

  #  
  # First, we overwrite two parameters from the file .abacus.  
  #  
  EnumerationStrategy DepthFirst  
  OutputLevel LinearProgram  
  #  
  #  
  # Here are the parameters of our new application.  
  #  
  #  
  # Our application has two different separation strategies  
  # 'All' calls all separators in each iteration  
  # 'Hierarchical' follows a hierarchy of the separators  
  #  
  SeparationStrategy All  
  #  
  # The parameter MaxNodesPerCut limits the number of nodes involved  
  # in a cutting plane that is defined by a certain subgraph.  
  #  
  MaxNodesPerCut 1000

Here, we suppose that the class MYMASTER has two members that are initialized from the parameter file.

  class MYMASTER : public ABA_MASTER {  
    /* public and protected members */  
    private:  
      enum SEPSTRAT {All, Hierarchical};  
      SEPSTRAT separationStrategy_;  
      int maxNodesPerCut_;  
      /* other private members */  
   };

The parameter file can be read by redefining the virtual function initializeParameters(), which does nothing in its default implementation.

Parameter files having our format can be read by the function ABA_GLOBAL::readParameters(), which inserts all parameters in a table. Then, the parameters can be extracted from the table with the functions ABA_GLOBAL::assignParameter(), ABA_GLOBAL::findParameter(), and ABA_GLOBAL::getParameter(), which are overloaded in different ways.

For our application, the code could look like

  void MYMASTER::initializeParameters()  
  {  
    readParameters(".myparameters");  
 
    /* terminate the program if the parameters are not found  
       in the table (which was filled by readParameters()) */  
 
    const char *SeparationStrategy[] = {"All", "Hierarchical"};  
    separationStrategy_ = (SEPSTRAT)  
      findParameter("SeparationStrategy", 2, SeparationStrategy);  
 
    /* allow only values between 1 and 5000 */  
    assignParameter(maxNodesPerCut_, "MaxNodesPerCut", 1, 5000);  
  }

Parameters of the base class ABA_MASTER that are redefined in the file .myparameters do not have to be extracted explicitly, but are initialized automatically. Note that the parameters specified in the file .abacus are read in the constructor of the class ABA_MASTER, whereas an application specific parameter file is read when the optimization starts (function ABA_MASTER::optimize()).

A branch-and-cut optimization can be performed even without reading the file .abacus. This can be achieved by setting the 8th parameter of the constructor of ABA_MASTER to false. In this case, ABACUS starts with default settings for the parameters, which can be overwritten by the function ABA_GLOBAL::insertParameter().