# Introduction

`ConicBundle.jl` is the Julia port of the ConicBundle library. The functions centered around the `CBProblem` type address the official C interface of ConicBundle; they were translated by hand, adapted to a Julian style of coding, and should work without problems. However, the C interface is quite restricted and ConicBundle has a lot more to offer. For this reason, much of the functionality of the C++ interface was also exported for this Julia port and can be accessed via the C++ interface functions. Do not mix those two interfaces! The C interface has automatic memory management; the C++ interface does not. Additionally, the latter was created automatically by a script using lots of regular expressions; it comes without any guarantee that the functions are even callable. Furthermore, simply exporting everything is not particularly efficient, as Julia could perform much of the functionality that is inlined in C++ on its own.

# Overview

`ConicBundle.ConicBundle` — Module `module ConicBundle`

Solve $\min_{y\in\mathbf{R}^m} f_0(y) + f_1(y) + \cdots + f_k(y)$ for convex functions $f_i$; the $y$-variables may be bounded or box constrained. The most important steps are explained here. Internal details are sketched in internal_cinterface.

**Setting up the Problem, the Functions, and the Main Loop**

First open a new problem by constructing a `CBProblem` object [use `CBProblem(1)` in order to employ a minimal bundle solver with just one aggregate and one new subgradient in each iteration; this is an attractive choice if fast iterations and/or little memory consumption are of special importance]. The object will be needed for every manipulation of this problem. Cleanup is performed automatically.

Next, set the dimension of the design variables/argument as well as possible box constraints on these by the function `cb_init_problem!`.

Now set up your functions $f_i$ as functions that respect the call signature detailed in `CBFunction`. Via these functions you will supply, for a given argument, the function value and a subgradient (= the gradient if the function is differentiable) to the solver.

The callbacks have to be added to the solver using the routine `cb_add_function!`.
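Whatever the exact call signature detailed in `CBFunction` looks like, the mathematical contract of such a callback is always the same: given a point $y$, return the value $f(y)$ and one subgradient $g$ satisfying $f(z) \ge f(y) + g^\top(z-y)$ for all $z$. As a library-independent sketch (the type and function names below are illustrative, not part of the ConicBundle.jl API), here is such an oracle for the piecewise-linear convex function $f(y)=\max_i(a_i^\top y + b_i)$:

```julia
# Value-and-subgradient oracle for f(y) = max_i (aᵢ'y + bᵢ),
# a piecewise-linear convex function. The gradient aᵢ of any
# maximizing affine piece is a valid subgradient at y.
struct MaxAffineOracle
    A::Matrix{Float64}  # rows are the aᵢ
    b::Vector{Float64}
end

function evaluate(f::MaxAffineOracle, y::Vector{Float64})
    vals = f.A * y .+ f.b      # value of every affine piece at y
    i = argmax(vals)           # pick an active (maximizing) piece
    return vals[i], f.A[i, :]  # f(y) and a subgradient of f at y
end

# Example: f(y) = max(y₁ + y₂, -y₁, 1), evaluated at y = (2, 3):
oracle = MaxAffineOracle([1.0 1.0; -1.0 0.0; 0.0 0.0], [0.0, 0.0, 1.0])
val, subg = evaluate(oracle, [2.0, 3.0])   # val = 5.0, subg = [1.0, 1.0]
```

An actual `CBFunction` callback wraps exactly this kind of computation in the argument/return convention expected by the solver.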

Once all functions are added, the optimization process can be started. If you know a good starting point, set it with `cb_set_new_center_point!` now; otherwise the method will pick the zero vector or, in the case of box constraints, the point closest to zero as the starting point.

Finally, set up a loop that calls `cb_solve!` until `cb_termination_code` is nonzero.

After the first call to `cb_solve!` you can retrieve, at any time, the current objective value by `cb_get_objval` and the argument leading to this value by `cb_get_center`. For some screen output, use `cb_set_print_level!`.
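Put together, the steps above take roughly the following shape. This is a schematic sketch only: the function names are those introduced in this section, but the argument lists are assumptions (and `my_oracle` is a hypothetical callback respecting `CBFunction`); consult each function's docstring for the exact signatures.

```julia
using ConicBundle

m = 5                              # dimension of the design variable y
p = CBProblem(1)                   # minimal bundle solver variant
cb_init_problem!(p, m)             # set the dimension (and, optionally, box constraints)
cb_add_function!(p, my_oracle)     # my_oracle: callback supplying f(y) and a subgradient
cb_set_print_level!(p, 1)          # some screen output

while cb_termination_code(p) == 0  # zero means: no termination criterion met yet
    cb_solve!(p)                   # perform descent/null steps
end

println("objective: ", cb_get_objval(p))
println("argument:  ", cb_get_center(p))
```

Cleanup of `p` is performed automatically, so no explicit destruction step is needed at the end of the loop.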

**Lagrangean Relaxation, Primal Approximations, and Cutting Planes**

If you are optimizing a Lagrangean relaxation, you might be interested in getting an approximation to your primal optimal solution. This can be done by specifying, in each function and for each (epsilon) subgradient, the corresponding primal vectors that generate it; see `CBFunction` and `cb_add_function!` as a start. Then, for each of your functions, you can retrieve the current primal approximation using `cb_get_approximate_primal!`.

If, in addition, you plan to improve your primal relaxation via cutting planes that are strongly violated by the current primal approximation, you should have a look at `cb_append_variables!`, `CBSubgExt`, `cb_reinit_function_model!`, `cb_get_approximate_slacks`, and `cb_delete_variables!`.