Optimizing MOPTA08 (124 variables, 68 constraints) with SLSQP from python

Mopta08-py

MOPTA08 is a test case for optimization from D.R. Jones, "Large-scale multi-disciplinary mass optimization in the auto industry", 2008. It has 124 variables in 0 .. 1, a linear objective function, and 68 nonlinear constraints.

The purpose of mopta08-py is to try out various optimizers -> python -> the MOPTA08 Fortran code, ten years on, and see what we can learn.

Keywords: nonlinear optimization, benchmark, python, SLSQP, SQP, Sequential Quadratic Programming, MOPTA08.

(plot 2nov-plot-mopta: f and violated constraints over the SLSQP iterations)

The files here:

1-slsqp-mopta.log: the log from a sample run. f reaches 222.4 in < 100 iterations, ~ 20 minutes on an iMac.

mopta.py. Usage:

from mopta import func, gradf, constr, x0

f = func( x )       # the objective function, linear
grad = gradf( x )   # gradient, constant
cons = constr( x )  # 68 nonlinear constraints, that we want <= 0
x0                  # from start.txt

N.B. python arrays are 0-origin -- py constraints 0 .. 67 are fortran 1 .. 68.
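
A quick smoke test of the wrapper (a sketch, assuming mopta-f2py has already built moptafunc.so and libmopta.so; the __future__ import is only so it runs under python 2 or 3):

from __future__ import print_function
import numpy as np
from mopta import func, gradf, constr, x0

f0 = func( x0 )                # linear objective at the start point
c0 = constr( x0 )              # 68 constraint values; feasible iff all <= 0
nviol = int( (c0 > 0).sum() )  # how many constraints start out violated
print( "f(x0) = %.3f, violated constraints: %d of %d" % (f0, nviol, len(c0)) )
print( "gradient is constant:", np.allclose( gradf( x0 ), gradf( x0 + 0.01 ) ))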

mopta-f2py: a shell script that compiles *.f, func.f90 and funcpy.f90 and generates two macOS dynamic libraries, moptafunc.so and libmopta.so, using f2py. You'll have to change this for other platforms, and even on my mac 10.10 the dyld part is murky.

funcpy.f90: a function (func_and_constr) in place of the original subroutine func, which simplifies the f2py wrapping.

slsqp-mopta.py: runs the scipy SLSQP optimizer on mopta. It uses Datalogger to save f, x and constr at each iteration, for plotting later; a condensed sketch follows this file list.

(gist.github displays these in ascii order, hence leading 0- 1- in some filenames.)
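
For orientation, here is a condensed sketch of what slsqp-mopta.py does: flip mopta's "constraints <= 0" convention to SLSQP's "f_ieqcons >= 0", bound all 124 scaled variables to 0 .. 1, and hand func, gradf and the flipped constraints to scipy's fmin_slsqp. Parameter values mirror the logged run (acc = ftol = 5e-5, epsilon = 1e-3), ignoring the small margin parameter; the full script with the Datalogger callback is at the end of this gist.

import numpy as np
from scipy.optimize import fmin_slsqp
from mopta import func, gradf, constr, x0

def constrge0( x ):
    return - constr( x )        # flip mopta's "want <= 0" to SLSQP's ">= 0"

bounds = np.ones(( 124, 2 )) * [0, 1]   # all variables scaled to 0 .. 1
x, f, niter, exitmode, message = fmin_slsqp(
    func, x0, f_ieqcons=constrge0, bounds=bounds, fprime=gradf,
    epsilon=1e-3, iter=100, acc=5e-5, full_output=True )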

Notes on SLSQP

The plot above and the logfile show f approaching its final value from below, and the violated constraints approaching 0 from above, i.e. from the "wrong side":

# iter  sec   func      -df        |dx|max     constraints: nviol, sum, biggest 2 
10   168   220.246      -2.3       dx 1        c 30  7.5  [2 0.6] [4 3]
20   327   218.52       -2.8       dx 0.77     c 26  1.6  [0.5 0.3] [4 3]
30   485   220.685      0.31       dx 0.35     c 26  1  [0.2 0.2] [4 3]
40   644   222.015      0.025      dx 0.17     c 23  0.22  [0.05 0.04] [4 3]
50   803   222.414      -0.075     dx 0.14     c 17  0.14  [0.05 0.02] [ 4 59]
60   962   222.402      -0.046     dx 0.17     c 15  0.019  [0.009 0.004] [4 3]
70  1121   222.427      6.6e-05    dx 0.0086   c  8  0.00048  [0.0001 0.0001] [66 59]

Are such "wrong side" methods good for other problems ? (Do interior-point methods care about "wrong side" at all ? Don't know.)

The original Fortran SLSQP is by Dieter Kraft, "A Software Package for Sequential Quadratic Programming", DFVLR-FB 88-28, 1988. Scipy.optimize wraps slsqp_optmz.f in slsqp.py.

See also:
Nocedal & Wright, Numerical Optimization, chapter 18, Sequential Quadratic Programming pages 529 - 562.
Williams, SLSQP in fortran2008, http://degenerateconic.com/slsqp and https://github.com/jacobwilliams/slsqp (but f2py doesn't do fortran2008).

Etc.

constr is called often, and it's quite slow (timeit ~ 120 ms):

number of calls of func, gradf, constr: [228, 77, 9855]

Almost all of these calls are, I guess, for finite-difference approximations of the 68 x 124 Jacobian of constr(x): roughly 125 constr calls per gradient evaluation, times 77 gradient evaluations. Could an active set or memoization speed this up? Plotting and scaling the constraints must be more important.
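
One cheap, hypothetical speedup (not in the code here, just a sketch): cache the last constr(x), so that repeated evaluations at an identical x cost nothing. Whether this actually helps depends on how often scipy re-evaluates constr at the same point; the finite-difference Jacobian itself still needs ~ 125 distinct calls per gradient evaluation.

import numpy as np
from mopta import constr

_cache = {"x": None, "c": None}

def constr_memo( x ):
    """ constr(x), but return the cached value if x is unchanged """
    if _cache["x"] is not None and np.array_equal( x, _cache["x"] ):
        return _cache["c"]
    c = constr( x )
    _cache["x"], _cache["c"] = x.copy(), c
    return c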


Zoo of optimization test cases?

A single problem, even one as good as MOPTA08, doesn't help us much in choosing a method for other problems. Is there a "zoo" of 5 or 10 real or realistic problems on the web, with data from complete runs? Just as a real zoo helps zoology students learn about different kinds of animals, a "problem zoo" would help students of optimization.

We need not only a range of problems, but better ways of visualizing optimization paths in 100 dimensions — of making optimizers human-understandable.

Comments welcome, test cases most welcome

cheers
— denis-bz-py at t-online dot de

Last change: 2018-11-03 Nov

# from: slsqp-mopta.py ftol=5e-5 margin=1e-5 eps=.001 tag="2nov/slsqp-mopta-ftol5e-5-margin1e-5-eps.001"
# 2 Nov 2018 15:46 in ~bz/py/opt/mopta/min Denis-iMac 10.10.5
versions: numpy 1.15.3 scipy 1.1.0 python 2.7.15 mac 10.10.5
================================================================================
slsqp-mopta.py func: func gradf: gradf ftol: 5e-05 eps: 0.001 margin: 1e-05
/Library/Python/2.7/site-packages/numpy/core/_methods.py:28: RuntimeWarning: invalid value encountered in reduce
return umr_maximum(a, axis, None, out, keepdims, initial)
# iter sec func -df |dx|max constraints: nviol, sum, biggest 2
0 15 204.397 nan dx nan c 34 7.1 [1 1] [4 3]
1 31 234.094 -30 dx 1 c 32 7.5 [1 0.7] [ 4 47]
2 46 236.505 -2.4 dx 1 c 30 7.5 [2 0.7] [ 4 47]
3 61 219.422 17 dx 1 c 32 7.5 [2 0.8] [ 4 47]
4 76 216.952 2.5 dx 1 c 39 7.3 [2 0.8] [ 4 47]
5 91 248.082 -31 dx 1 c 38 16 [2 2] [47 4]
6 106 297.724 -50 dx 1 c 21 4.3 [1 0.5] [ 4 47]
7 121 320.563 -23 dx 1 c 18 3.5 [1 0.6] [4 3]
8 137 305.822 15 dx 1 c 17 3.7 [1 0.5] [ 4 21]
9 152 217.974 88 dx 1 c 35 9.1 [1 1] [47 4]
10 168 220.246 -2.3 dx 1 c 30 7.5 [2 0.6] [4 3]
11 184 216.081 4.2 dx 1 c 28 4.5 [0.9 0.7] [47 4]
12 200 214.13 2 dx 1 c 23 3.3 [0.8 0.6] [4 7]
13 216 204.404 9.7 dx 1 c 34 8.8 [2 0.9] [4 3]
14 232 210.823 -6.4 dx 0.88 c 31 4.3 [1 0.6] [4 3]
15 248 215.552 -4.7 dx 0.9 c 29 4.5 [1 0.8] [4 3]
16 263 216.6 -1 dx 0.76 c 27 4.6 [0.7 0.6] [4 3]
17 279 214.653 1.9 dx 0.81 c 27 3.7 [1 0.6] [ 4 67]
18 295 215.748 -1.1 dx 0.9 c 26 2 [0.5 0.3] [ 4 67]
19 311 215.727 0.021 dx 0.81 c 28 2.3 [0.5 0.3] [4 3]
20 327 218.52 -2.8 dx 0.77 c 26 1.6 [0.5 0.3] [4 3]
21 343 218.832 -0.31 dx 0.49 c 22 1.6 [0.5 0.3] [ 4 66]
22 359 218.776 0.056 dx 0.58 c 19 0.76 [0.2 0.2] [4 7]
23 375 218.978 -0.2 dx 0.61 c 24 2.5 [0.7 0.4] [ 4 59]
24 390 219.852 -0.87 dx 0.83 c 27 1.9 [0.4 0.4] [4 3]
25 406 219.838 0.015 dx 0.35 c 30 2.3 [0.5 0.3] [65 4]
26 422 220.289 -0.45 dx 0.61 c 25 0.81 [0.1 0.1] [ 7 66]
27 438 220.078 0.21 dx 0.43 c 25 0.93 [0.2 0.1] [ 7 59]
28 454 220.059 0.019 dx 0.45 c 23 1.3 [0.3 0.2] [59 67]
29 469 220.995 -0.94 dx 0.63 c 25 0.91 [0.2 0.2] [67 66]
30 485 220.685 0.31 dx 0.35 c 26 1 [0.2 0.2] [4 3]
31 501 221.234 -0.55 dx 0.48 c 21 0.6 [0.2 0.08] [4 3]
32 517 220.778 0.46 dx 0.51 c 23 1.5 [0.3 0.2] [66 4]
33 533 221.543 -0.77 dx 0.54 c 26 0.66 [0.1 0.1] [66 4]
34 549 221.743 -0.2 dx 0.32 c 23 0.63 [0.1 0.1] [ 4 59]
35 565 221.772 -0.029 dx 0.24 c 24 0.94 [0.4 0.1] [4 3]
36 581 221.879 -0.11 dx 0.2 c 22 0.51 [0.2 0.06] [66 65]
37 596 221.839 0.04 dx 0.22 c 23 1.3 [0.4 0.4] [4 5]
38 612 222.048 -0.21 dx 0.35 c 22 0.19 [0.04 0.03] [67 59]
39 628 222.039 0.0087 dx 0.22 c 20 0.23 [0.06 0.04] [66 67]
40 644 222.015 0.025 dx 0.17 c 23 0.22 [0.05 0.04] [4 3]
41 660 221.851 0.16 dx 0.25 c 22 0.43 [0.1 0.04] [4 3]
42 676 221.927 -0.076 dx 0.29 c 21 0.41 [0.1 0.09] [4 3]
43 692 222.074 -0.15 dx 0.23 c 19 0.32 [0.1 0.08] [4 3]
44 708 222.254 -0.18 dx 0.17 c 16 0.093 [0.02 0.02] [ 4 59]
45 724 222.2 0.054 dx 0.15 c 23 0.15 [0.03 0.02] [66 34]
46 739 222.274 -0.075 dx 0.29 c 20 0.1 [0.02 0.02] [59 34]
47 755 222.326 -0.051 dx 0.16 c 18 0.099 [0.02 0.02] [ 4 62]
48 771 222.374 -0.049 dx 0.17 c 22 0.13 [0.03 0.02] [3 4]
49 787 222.34 0.035 dx 0.18 c 20 0.16 [0.04 0.03] [3 4]
50 803 222.414 -0.075 dx 0.14 c 17 0.14 [0.05 0.02] [ 4 59]
51 819 222.42 -0.0062 dx 0.16 c 21 0.13 [0.02 0.02] [59 4]
52 835 222.414 0.0065 dx 0.16 c 16 0.14 [0.05 0.03] [ 4 59]
53 851 222.4 0.014 dx 0.17 c 19 0.14 [0.08 0.02] [ 4 59]
54 867 222.445 -0.045 dx 0.21 c 17 0.013 [0.004 0.002] [ 3 66]
55 883 222.422 0.023 dx 0.11 c 19 0.025 [0.006 0.006] [66 67]
56 899 222.355 0.067 dx 0.22 c 17 0.049 [0.009 0.009] [3 4]
57 914 222.388 -0.033 dx 0.14 c 20 0.043 [0.01 0.009] [67 59]
58 930 222.38 0.0081 dx 0.092 c 19 0.025 [0.004 0.004] [ 3 59]
59 946 222.356 0.024 dx 0.11 c 18 0.026 [0.006 0.006] [59 4]
60 962 222.402 -0.046 dx 0.17 c 15 0.019 [0.009 0.004] [4 3]
61 978 222.405 -0.0032 dx 0.05 c 17 0.014 [0.007 0.001] [4 3]
62 994 222.411 -0.0067 dx 0.054 c 18 0.02 [0.005 0.003] [ 4 66]
63 1010 222.42 -0.0086 dx 0.063 c 18 0.0033 [0.0007 0.0007] [66 59]
64 1026 222.424 -0.0037 dx 0.028 c 15 0.0023 [0.0007 0.0004] [66 59]
65 1041 222.423 0.0011 dx 0.027 c 17 0.0058 [0.002 0.001] [66 59]
66 1057 222.425 -0.0023 dx 0.022 c 14 0.0024 [0.0006 0.0005] [59 66]
67 1073 222.426 -0.001 dx 0.012 c 13 0.00054 [0.0001 0.0001] [ 4 67]
68 1089 222.427 -0.0006 dx 0.01 c 9 0.00023 [5e-05 5e-05] [ 3 59]
69 1105 222.427 -0.00018 dx 0.0095 c 7 0.00017 [6e-05 4e-05] [4 3]
70 1121 222.427 6.6e-05 dx 0.0086 c 8 0.00048 [0.0001 0.0001] [66 59]
71 1137 222.427 -0.00038 dx 0.0077 c 1 2.8e-06 [3e-06] [34]
72 1151 222.427 0.00017 dx 0.0049 c 7 0.00017 [5e-05 4e-05] [66 59]
73 1167 222.427 9.2e-05 dx 0.0062 c 7 0.00022 [7e-05 6e-05] [59 67]
74 1183 222.427 -0.00032 dx 0.0047 c 3 0.00013 [6e-05 5e-05] [66 59]
75 1199 222.427 -9.8e-05 dx 0.0041 c 1 2.5e-06 [3e-06] [4]
76 1215 222.427 -1.6e-05 dx 0.00099 c 0 0 [] []
Optimization terminated successfully. (Exit mode 0)
Current function value: 222.427088142
Iterations: 78
Function evaluations: 151
Gradient evaluations: 77
fmin: 222.427088
xmin * 1000:
[[ 257 0 0 123 10 59 0 400 0 757 0 0 0 0 0 491 0 0 0 0]
[ 0 21 0 0 0 47 339 0 0 689 0 310 0 192 1000 227 856 622 589 278]
[ 476 710 116 0 0 0 800 406 281 32 0 0 159 0 0 0 290 0 423 266]
[ 72 416 937 285 0 733 663 355 0 110 217 0 0 181 23 299 0 331 767 0]
[ 814 848 393 838 0 837 263 341 422 68 452 674 32 0 0 0 136 0 653 846]
[ 0 1000 786 15 76 465 0 164 235 149 614 1000 1000 718 463 199 387 115 1000 735]]
[ 0 897 0 1000]
constr > 0: [] at []
- constr * 100:
[[ 0 30 4 0 0 22 36 0 0 0 0 0 4 5 0 0 34]
[ 26 5 0 20 0 41 0 9 2 8 2 41 0 6 2 2 7]
[ 0 7 81 0 87 9 61 0 10 0 10 9 75 0 84 2 78]
[ 12 80 4 95 67 61 49 46 0 20 5 0 100 91 26 0 0]]
number of calls of func, gradf, constr: [228, 77, 9855]
Datalogger: saving cons f x to 2nov/slsqp-mopta-ftol5e-5-margin1e-5-eps.001.npz
cons : (77, 68) float32 min av max -2.99 -0.194 2.38
f : (77,) min av max 204 224 321
x : (77, 124) float32 min av max 0 0.287 1
fmin : 222.427
info : slsqp-mopta.py func: func gradf: gradf ftol: 5e-05 eps: 0.001 margin: 1e-05
1216.04 real 1215.50 user 0.71 sys
function func_and_constr(x,g) ! x in, g constraints inout, f return value
! replaces subroutine func(nvar,ncon,x,f,g) in func.f90 (easier for f2py)
implicit none
integer, parameter:: nvar = 124
integer, parameter:: ncon = 68
real*8 x(nvar)
real*8 g(ncon)
real*8 func_and_constr
real*8 f
integer d
integer duuuu
integer deeee
integer dssss
integer drrrr
integer ny
parameter (d=124)
parameter (duuuu=91)
parameter (deeee=96)
parameter (dssss=84)
parameter (drrrr=44)
parameter (ny=68)
integer uuuu2opt(duuuu)
integer eeee2opt(deeee)
integer ssss2opt(dssss)
integer rrrr2opt(drrrr)
integer ssss(ny)
integer i
integer k
real*8 xuuuu(duuuu)
real*8 xeeee(deeee)
real*8 xssss(dssss)
real*8 xrrrr(drrrr)
real*8 mmmm(d)
real*8 aaaa(d)
real*8 xl(nvar)
real*8 xu(nvar)
real*8 gu(ncon)
real*8 xxxxx(nvar)
logical first
data first /.true./
SAVE
if (first) then
first = .false.
call get_uuuu2opt( uuuu2opt )
call get_eeee2opt( eeee2opt )
call get_ssss2opt( ssss2opt )
call get_rrrr2opt( rrrr2opt )
call get_mmmm( mmmm )
call get_aaaa( aaaa )
call get_ssss( ssss )
endif
call get_bbbb(xl, xu, gu)
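! rescale optimizer variables x in 0 .. 1 to physical values xxxxx in xl .. xu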
do i=1,nvar
xxxxx(i) = xl(i) + x(i) * (xu(i) - xl(i))
enddo
do k=1,duuuu
i = uuuu2opt(k)
if (i > 0) then
xuuuu(k) = xxxxx(i)
else
xuuuu(k) = aaaa(-i)
endif
enddo
do k=1,deeee
i = eeee2opt(k)
if (i > 0) then
xeeee(k) = xxxxx(i)
else
xeeee(k) = aaaa(-i)
endif
enddo
do k=1,dssss
i = ssss2opt(k)
if (i > 0) then
xssss(k) = xxxxx(i)
else
xssss(k) = aaaa(-i)
endif
enddo
do k=1,drrrr
xrrrr(k) = xxxxx( rrrr2opt(k) )
enddo
f = 0.0d0
do i=1,d
f = f + mmmm(i) * xxxxx(i)
enddo
call g1(xuuuu,g(1))
call g2(xeeee,g(2))
call g3(xeeee,g(3))
call g4(xeeee,g(4))
call g5(xeeee,g(5))
call g6(xeeee,g(6))
call g7(xssss,g(7))
call g8(xssss,g(8))
call g9(xssss,g(9))
call g10(xxxxx,g(10))
call g11(xxxxx,g(11))
call g12(xxxxx,g(12))
call g13(xxxxx,g(13))
call g14(xxxxx,g(14))
call g15(xxxxx,g(15))
call g16(xxxxx,g(16))
call g17(xxxxx,g(17))
call g18(xxxxx,g(18))
call g19(xxxxx,g(19))
call g20(xxxxx,g(20))
call g21(xxxxx,g(21))
call g22(xxxxx,g(22))
call g23(xxxxx,g(23))
call g24(xxxxx,g(24))
call g25(xxxxx,g(25))
call g26(xxxxx,g(26))
call g27(xxxxx,g(27))
call g28(xxxxx,g(28))
call g29(xxxxx,g(29))
call g30(xxxxx,g(30))
call g31(xxxxx,g(31))
call g32(xxxxx,g(32))
call g33(xxxxx,g(33))
call g34(xxxxx,g(34))
call g35(xxxxx,g(35))
call g36(xxxxx,g(36))
call g37(xxxxx,g(37))
call g38(xxxxx,g(38))
call g39(xxxxx,g(39))
call g40(xxxxx,g(40))
call g41(xxxxx,g(41))
call g42(xxxxx,g(42))
call g43(xxxxx,g(43))
call g44(xxxxx,g(44))
call g45(xxxxx,g(45))
call g46(xxxxx,g(46))
call g47(xxxxx,g(47))
call g48(xxxxx,g(48))
call g49(xxxxx,g(49))
call g50(xxxxx,g(50))
call g51(xxxxx,g(51))
call g52(xxxxx,g(52))
call g53(xxxxx,g(53))
call g54(xxxxx,g(54))
call g55(xxxxx,g(55))
call g56(xrrrr,g(56))
call g57(xrrrr,g(57))
call g58(xrrrr,g(58))
call g59(xrrrr,g(59))
call g60(xrrrr,g(60))
call g61(xrrrr,g(61))
call g62(xrrrr,g(62))
call g63(xrrrr,g(63))
call g64(xrrrr,g(64))
call g65(xrrrr,g(65))
call g66(xrrrr,g(66))
call g67(xrrrr,g(67))
call g68(xrrrr,g(68))
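! scale each raw g(i) by ssss(i) and normalize against its bound gu(i);
! the python wrapper treats the result as "want <= 0"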
do i=1,ny
g(i) = ssss(i) * g(i)
g(i) = ( g(i) - gu(i) ) / abs(gu(i))
enddo
func_and_constr = f
end
#!/usr/bin/env python2
""" mopta.py: python wrapper for the MOPTA08 optimization benchmark
Usage:
from mopta import func, gradf, constr, x0
f = func( x ) # the objective function, linear
grad = gradf( x ) # gradient, constant
cons = constr( x ) # 68 nonlinear constraints, that we want <= 0
x0 # from start.txt
See D.R. Jones, Large-scale multi-disciplinary mass optimization in the auto industry, 2008
Note that python arrays are 0-origin -- py constraints 0 .. 67 are fortran 1 .. 68
"""
__version__ = "2018-11-02 Nov denis"
import numpy as np
from moptafunc import func_and_constr # libmopta.so moptafunc.so
def func( x ):
    ncalls[0] += 1
    return _func0 + _grad.dot( x )

def gradf( *ignore_x ):
    ncalls[1] += 1
    return _grad  # constant

def constr( x, verbose=0 ):
    """ -> all 68 constraints from mopta func.f90 """
    # slow: memoize for active-set ?
    ncalls[2] += 1
    c = np.zeros( 68 )
    f = func_and_constr( x, c )  # c: fortran inout
    if verbose:
        print "constr > 0: %s at %s " % _biggest( c, c > 0 )
    return c

def _biggest( X, cond, n=0 ):
    """ -> n biggest X[cond] sorted, indices """
    # small n: nsmallest_slott_bisect
    if np.isscalar( cond ):  # True == all
        J = np.arange( len(X) )
    else:
        J = np.where( cond )[0]
    jsort = X[J].argsort()[::-1]
    if n:
        jsort = jsort[:n]
    J = J[jsort]
    return X[J], J

def xintstr( x ):
    """ -> ints( x ) string, e.g. x * 100 """
    xint = x.round().astype(int)
    return "%s \n %s" % (
        xint[:120] .reshape( 6, 20 ), xint[120:] )

def cintstr( c ):
    cint = c.round().astype(int)
    return "%s" % cint.reshape( 4, 17 )
ncalls = [0, 0, 0] # func, gradf, constr
# const arrays from func.f90 --
mmmm = np.array([
0.55, 0.6875, 0.6125, 2.24166666666667, 8, 3.97142857142857, 0.291666666666667, 0.159090909090909, 0.136363636363636, 0.241666666666667, 5.83333333333333, 0.85, 0.8, 3.44, 1.06, 0.905, 2.37142857142857, 1.9125, 1.27142857142857, 1.0625, 7.74285714285714, 7.65714285714286, 0.795, 1.76, 0.23, 1.265, 0.16, 1.59166666666667, 0.291666666666667, 2.7, 0.34, 4.35, 1.07777777777778, 0.466666666666667, 0.466666666666667, 1.15, 0.85, 2.5625, 0.7875, 2.17142857142857, 1.08571428571429, 5.17142857142857, 1.92, 0.542857142857143, 3.66666666666667, 3.45, 0.26, 0.26, 5.4, 1.7, 1.04, 0.542857142857143, 1.11666666666667, 0.383333333333333, 0.173333333333333, 0.633333333333333, 0.13, 0.9875, 0.14, 0.45, 2.13333333333333, 2.91818181818182, 2.54545454545454, 1.01111111111111, 0.27, 1.42222222222222, 0.256, 0.516666666666667, 2.61666666666667, 2.5875, 0.766666666666667, 7.17142857142857, 2.08, 1.6, 1.45, 6.68333333333333, 1.21333333333333, 1.225, 0.566666666666667, 0.293333333333333, 0.444444444444444, 0.34, 5.51764705882353, 1.85, 2.98666666666667, 3.16666666666667, 2.56551724137931, 1.63333333333333, 1.88333333333333, 7.62857142857143, 2.02222222222222, 2.51666666666667, 2.05333333333333, 0.4375, 2.16666666666667, 0.8, 1.45, 0.3, 1.54285714285714, 4.63333333333333, 0.438095238095238, 0.411111111111111, 0.6, 2.44444444444444, 0.3, 0.28, 0.21, 0.24, 0.16, 0.57, 0.28, 0.22, 0.136, 0.672, 0.988, 0.513333333333333, 0.27, 0.24, 0.3125, 0.357142857142857, 0.4825, 2.49444444444444, 2.46111111111111, 0.395454545454545,
])
xl = np.array([
0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 1.5, 1.5, 1.5,
])
xu = np.array([
2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.4, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.5, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.5, 3, 3, 3.5, 3.5, 4.2, 4.2, 4.2, 4.2, 2, 2, 2.5,
])
x0 = np.array([ # from start.txt
0.0666666666666667, 0.0666666666666667, 0.0666666666666667, 0.333333333333333, 0.0666666666666667, 0, 0.333333333333333, 1, 1, 1, 0.133333333333333, 0.866666666666667, 0.866666666666667, 0.2, 0.2, 0.866666666666667, 0, 0.0666666666666667, 0, 0.0666666666666667, 0, 0.0333333333333334, 0.8, 0.2, 0.0333333333333334, 0.666666666666667, 0.566666666666667, 0.333333333333333, 0.333333333333333, 0.633333333333333, 0.266666666666667, 0.333333333333333, 0.266666666666667, 0.8, 0.866666666666667, 0.533333333333333, 0.866666666666667, 0.7, 0.666666666666667, 0, 0.533333333333333, 0.633333333333333, 0.1, 0.133333333333333, 0.1, 0.0333333333333334, 0.266666666666667, 0.4, 0.2, 0.0666666666666667, 0.2, 0, 0.4, 0.3, 0.466666666666667, 0.333333333333333, 0.2, 0, 0.666666666666667, 0.6, 0.1, 0.233333333333333, 1, 0.733333333333333, 0.866666666666667, 0.733333333333333, 1, 0.333333333333333, 0.0333333333333334, 0.233333333333333, 0.233333333333333, 0.233333333333333, 0.2, 0.333333333333333, 0.233333333333333, 0.333333333333333, 0.533333333333333, 0.0666666666666667, 0.733333333333333, 0.233333333333333, 0.733333333333333, 0.433333333333333, 0.666666666666667, 0.9, 0.266666666666667, 0.866666666666667, 0.5, 0.333333333333333, 0.366666666666667, 0.0333333333333334, 0.366666666666667, 0.466666666666667, 0.533333333333333, 0.633333333333333, 0.333333333333333, 0, 0.333333333333333, 0.6, 0.6, 0.733333333333333, 0.933333333333333, 0.666666666666667, 0.633333333333333, 0.199999999999999, 0.866666666666667, 0.6, 0.5, 0.0666666666666661, 0.366666666666667, 1, 0.866666666666667, 0.866666666666667, 1, 0.666666666666667, 1, 0.65, 0.25, 0.5, 0.911764705882353, 0.735294117647059, 0.147058823529412, 1, 0.3, 0.55,
])
_grad = mmmm * (xu - xl) # constant
_func0 = mmmm.dot( xl)
#...............................................................................
if __name__ == "__main__":  # sanity test
    np.set_printoptions( threshold=20, edgeitems=10, linewidth=140,
        formatter = dict( float = lambda x: "%.2g" % x ))  # float arrays %.2g
    print "func( x0 ): %g" % func( x0 )
    print "x0 * 100:\n", xintstr( x0 * 100 )
    grad = np.sort( gradf() )
    print "\ngrad, sorted: \n", grad[-120:].reshape( 6, 20 )
    c = - constr( x0 )
    print "\n- constr( x0 ) * 100: \n", cintstr( c * 100 )
    print "\n# which constraints go > 0 near x0 ?"
    for h in [0, 1e-4, .001, .002]:
        print "x0 - %-6g grad " % h ,
        c = constr( x0 - h * gradf(), verbose=1 )
#!/bin/sh
# mopta08-f2py [-c]: wrap the MOPTA08 optimization benchmark for python, using f2py
# in:
# *.f func.f90 from mopta_fortran_unix.tar.gz from https://www.miguelanjos.com/jones-benchmark
# funcpy.f90 with function func_and_constr(x,g)
# out:
# 2 mac dynamic libraries moptafunc.so -> libmopta.so
# .log
# see:
# D.R. Jones, Large-scale multi-disciplinary mass optimization in the auto industry, 2008
# denis-bz-py t-online de 2018-10-12 oct
case $1 in -h | --h* )
exec cat $0
esac
: ${log=`basename $0`.log}
mv $log $log- 2> /dev/null
set -x
{ # tee $log
#...............................................................................
# first run, -c: compile *.f -> libmopta.so --
# https://stackoverflow.com/questions/27270543/including-a-compiled-module-in-module-that-is-wrapped-with-f2py-minimum-working
case $1 in -c )
rm *.o 2> /dev/null
time gfortran -c -fPIC -O1 $* *.f
# -O1 time real 1882 sec mac
# https://gcc.gnu.org/onlinedocs/gfortran
# https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
gfortran -shared -o libmopta.so -fPIC *.o
# list: nm -g libmopta.so
rm mopta.so 2> /dev/null
ln -s libmopta.so mopta.so # ??
esac
#...............................................................................
# f2py https://sysbio.ioc.ee/projects/f2py2e
# https://shocksolution.com/2009/09/23/f2py-binding-fortran-python/
f2py -c --f90flags=-Wno-tabs -L. -I. -lmopta -m moptafunc funcpy.f90 # func.f90
# > moptafunc.so
# ignore #warning "Using deprecated NumPy API, disable it by "
#...............................................................................
# test import in python --
python -c 'from moptafunc import func'
} 2>&1 | tee $log
# mv moptafunc.so libmopta.so to a dir in PYTHONPATH ? nope
# Library not loaded: libmopta.so
# Referenced from: /opt/local/py/site/moptafunc.so
# Reason: image not found
# man dyld, export DYLD_FALLBACK_LIBRARY_PATH ...
#!/usr/bin/env python2
""" SLSQP Sequential_quadratic_programming on mopta08
D.R. Jones, Large-scale multi-disciplinary mass optimization in the auto industry, 2008
see Mopta08-py.md under https://github.com/denis-bz
"""
from __future__ import division
import sys
from time import clock
import numpy as np
from scipy.optimize import fmin_slsqp # $scopt/slsqp.py
from mopta import func, gradf, constr, x0, xintstr, cintstr, ncalls, _biggest
from opt.util.datalogger import Datalogger # log f= x= to plot
norm = np.linalg.norm
__version__ = "2018-11-02 Nov denis"
np.set_printoptions( threshold=10, edgeitems=10, linewidth=120,
formatter = dict( float = lambda x: "%.2g" % x )) # float arrays %.2g
thispy = __file__.split("/")[-1]
print "\n" + 80 * "="
#...........................................................................
method = "SLSQP" # $scopt/slsqp.py slsqp_optmz.f
maxiter = 100
ftol = 5e-5 # dithering
eps = 1e-3
margin = 0 # 1e-5
nbig = 2 # monitor
tag = "tmp" # > tag.npz etc.
save = 1
# to change these params, run this.py a=1 b=None 'c = "str"' in sh or ipython
for arg in sys.argv[1:]:
    exec( arg )
#...........................................................................
info = "%s func: %s gradf: %s margin: %g ftol: %g eps: %g " % (
thispy, func.__name__, gradf.__name__, margin, ftol, eps )
print info
# todo: class Monitor with these --
logger = Datalogger( "f x cons" ) # log f= x= to plot later
niter = -1
fprev, xprev = np.NaN, np.NaN
t0 = clock()
#...........................................................................
def monitor( x ):
    """ callback=monitor: collect, log, print f, x ... at each iteration """
    global niter, fprev, xprev
    niter += 1
    f = func(x)
    # grad = gradf( x )  # mopta: const
    cons = constr( x )  # mopta: want < 0
    df = f - fprev
    dx = x - xprev
    fprev = f
    xprev = x.copy()
    logger( f=f, x=x, cons=cons )
    cviol = (cons > 0)
    cbig, jcbig = _biggest( cons, cviol, n=nbig )
    dxmax = norm( dx, ord=np.inf )
    if niter == 0:
        print "\n# iter sec func -df |dx|max constraints: nviol, sum, biggest %d " % nbig
    with np.printoptions( threshold=5, edgeitems=2, linewidth=140,
            formatter = dict( float = lambda x: "%.1g" % x )):  # float arrays %.1g
        print "%2d %4.0f %-12g %-8.2g dx %-6.2g c %2d %.2g %s %s" % (
            niter, clock() - t0,
            f, - df, dxmax,
            cviol.sum(), cons[cviol].sum(), cbig, jcbig )

def constrge0( x ):
    return - constr( x ) - margin  # flip mopta <= 0 to slsqp >= 0
bounds = np.ones(( 124, 2 )) * [0, 1]
#...........................................................................
res = fmin_slsqp( func, x0, f_ieqcons=constrge0,
        bounds=bounds,
        fprime=gradf,
        epsilon=eps,      # finite-diff
        iprint=1,         # 2: each iter
        iter=maxiter, acc=ftol, full_output=True,
        callback=monitor,
        )
# def fmin_slsqp(func, x0, eqcons=(), f_eqcons=None, ieqcons=(), f_ieqcons=None,
# bounds=(), fprime=None, fprime_eqcons=None, fprime_ieqcons=None, args=(),
# iter=100, acc=1.0E-6, iprint=1, disp=None, full_output=0, epsilon=_epsilon,
# callback=None):
x, f, niter = res[:3] # x, f, niter, exitmode, message
print "fmin: %f" % f
print "xmin * 1000: \n", xintstr( x * 1000 )
c = constr( x, verbose=1 ) # != last callback ?
print "- constr * 100: \n", cintstr( - c * 100 )
print "number of calls of func, gradf, constr:", ncalls # including those in monitor
if save:
    logger.savez( tag + ".npz", fmin=f, info=info )  # plot-mopta.py