Scilab Function
Last update : 7/7/2006

Ipopt - Execute an optimization using the Ipopt Solver

Calling Sequence

[Xfinal,ObjFinal,mlb,mub,lambda,info] = Ipopt('eval_obj_f','eval_constraints','eval_obj_grad','eval_constr_jac','eval_lagrange_hess',NoEqu,Xo,i_lb,val_lb,i_ub,val_ub,Par)

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_obj_f','eval_constraints','eval_obj_grad','eval_constr_jac','eval_lagrange_hess',NoEqu,Xo,i_lb,val_lb,i_ub,val_ub,Par)

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_obj_f','eval_constraints','eval_obj_grad','eval_constr_jac','eval_lagrange_hess',NoEqu,Xo,i_lb,val_lb,i_ub,val_ub,Par,'Interface_options',intf_opt)

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_obj_f','eval_constraints','eval_obj_grad','eval_constr_jac','eval_lagrange_hess',NoEqu,Xo,i_lb,val_lb,i_ub,val_ub,Par,'Interface_options',intf_opt,'Ipopt_internal_options',iptopt_opt)

Parameters

Description

Performs an optimization using Ipopt as the solver. You can use either analytical gradients/Jacobians or numerical ones. Make sure that the function "IpoptFD.sci" has been loaded.
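For example, assuming "IpoptFD.sci" sits in the current working directory (getf is the usual way to load a function file in this Scilab version):

```scilab
// load the finite-difference helper before calling Ipopt with 'fd'
getf("IpoptFD.sci");
```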

The interface requires you to provide one to five functions. An objective function is always required; this is the function IPOPT tries to minimize. If the optimization problem has constraints, these also have to be provided in the form of a Scilab function. IPOPT needs three further functions, evaluating the gradient of the objective, the Jacobian of the constraints, and the Hessian of the Lagrangian. These last three functions can, however, be approximated by finite differences, and the Hessian can also be approximated using an IPOPT internal method. In detail, the functions are:

1.) Evaluation of the objective function. This function takes two input arguments: the point at which the objective function is to be evaluated (a vector of length n, with n = number of optimization variables) and a vector of parameters, if provided. The function returns the value of the objective function (a single scalar). Note: Even if you do not pass parameters to this function, it has to take two input arguments. Note: In the provided example this function is implemented in the file eval_f.sci.

2.) Evaluation of the constraint function. This function takes two input arguments: the point at which the constraint function is to be evaluated (a vector of length n, with n = number of optimization variables) and a vector of parameters, if provided. The function returns the value of the constraint function (a scalar or a column vector of length m, with m = number of constraints). Note: Even if you do not pass parameters to this function, it has to take two input arguments. Note: In the provided example this function is implemented in the file eval_c.sci.

3.) Evaluation of the gradient of the objective function. This function takes two input arguments: the point at which the gradient of the objective function is to be evaluated (a vector of length n, with n = number of optimization variables) and a vector of parameters, if provided. The function returns a vector of length n, with n = number of optimization variables. Note: Even if you do not pass parameters to this function, it has to take two input arguments. Note: The returned vector has to be a dense (not sparse) row vector. Note: In the provided example this function is implemented in the file eval_g.sci.

4.) Evaluation of the Jacobian of the constraints. This function takes two input arguments: the point at which the Jacobian of the constraints is to be evaluated (a vector of length n, with n = number of optimization variables) and a vector of parameters, if provided. The function returns a matrix of size m by n, with m = number of constraints and n = number of optimization variables. The matrix may be either full or sparse. Note: Even if you do not pass parameters to this function, it has to take two input arguments. Note: In the provided example this function is implemented in the file eval_a.sci.

5.) Evaluation of the Hessian of the Lagrangian. This function takes three input arguments: 1.) the point at which the Hessian of the Lagrangian is to be evaluated (a vector of length n, with n = number of optimization variables); 2.) a vector of parameters, if provided; 3.) the values of lambda at that point (a vector of length m, with m = number of constraints). The function returns a matrix of size n by n, with n = number of optimization variables. The matrix may be either sparse or full. The matrix has to be an upper or lower triangular matrix! Note: Even if you do not pass parameters to this function, it has to take three input arguments. Note: In the provided example this function is implemented in the file eval_h.sci.
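As a sketch, the five callbacks could look as follows for a hypothetical two-variable problem (minimize x(1)^2 + x(2)^2 subject to the single equality constraint x(1) + x(2) = 1). This problem is purely illustrative; it is not the one shipped with the interface:

```scilab
function f = eval_f(x, Par)        // objective value (scalar)
  f = x(1)^2 + x(2)^2;
endfunction

function c = eval_c(x, Par)        // constraints (column vector, m = 1)
  c = x(1) + x(2) - 1;
endfunction

function g = eval_g(x, Par)        // gradient of the objective (dense row vector)
  g = [2*x(1), 2*x(2)];
endfunction

function A = eval_a(x, Par)        // Jacobian of the constraints (m by n)
  A = [1, 1];
endfunction

function H = eval_h(x, Par, lambda)  // Hessian of the Lagrangian (n by n, triangular)
  // the constraint is linear, so lambda does not contribute here
  H = [2, 0; 0, 2];
endfunction
```

Note that every function takes the parameter vector Par even though it is unused here, as required by the interface.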

Examples

StartPoint = [1 5 5 1 -24];
NoEquCons  = 2; // number of equality constraints
i_lb   = [1,2,3,4,5]; 
val_lb = [1,1,1,1,0];
i_ub   = [1,2,3,4];
val_ub = [5;5;5;5];
Par = [];

// Evaluate the Jacobians/gradients analytically...

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_f','eval_c','eval_g','eval_a','eval_h',NoEquCons,StartPoint,i_lb,val_lb,i_ub,val_ub,Par,'print',1)

// ... or evaluate the Jacobians/gradients by finite differences...

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_f','eval_c','fd','fd','fd',NoEquCons,StartPoint,i_lb,val_lb,i_ub,val_ub,Par,'print',1)

// ... you can also use Ipopt's internal option "IQUASI" to approximate the Hessian
// with a quasi-Newton method instead (it is much faster).

[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_f','eval_c','fd','fd','fd',NoEquCons,StartPoint,i_lb,val_lb,i_ub,val_ub,Par,'print',1,'IQUASI',6)
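Since only the last three functions may be replaced by 'fd', analytic and finite-difference evaluations can presumably be mixed as well, e.g. an analytic objective gradient combined with finite-difference Jacobian and Hessian (a sketch following the examples above):

```scilab
// analytic gradient, finite differences for the Jacobian and the Hessian
[Xfinal,ObjFinal,mlb,mub,lambda,info,XInfo,ObjInfo] = Ipopt('eval_f','eval_c','eval_g','fd','fd',NoEquCons,StartPoint,i_lb,val_lb,i_ub,val_ub,Par,'print',1)
```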


See Also

Authors

Edson Cordeiro do Valle - VRTech Tecnologias Industriais Ltda. www.vrtech.com.br.