This article collects typical usage examples of the C# mincgstate class. If you have been wondering what the mincgstate class is for and how to use it in C#, the hand-picked class code examples below may help.
The mincgstate class comes from the ALGLIB optimization library; 20 code examples are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better C# examples.
Example 1: mincgresultsbuf (wrapper)
/*************************************************************************
Conjugate gradient results
Buffered implementation of MinCGResults(), which uses pre-allocated buffer
to store X[]. If buffer size is too small, it resizes buffer. It is
intended to be used in the inner cycles of performance critical algorithms
where array reallocation penalty is too large to be ignored.
-- ALGLIB --
Copyright 20.04.2009 by Bochkanov Sergey
*************************************************************************/
public static void mincgresultsbuf(mincgstate state, ref double[] x, mincgreport rep)
{
mincg.mincgresultsbuf(state.innerobj, ref x, rep.innerobj);
return;
}
Author: Ring-r | Project: opt | Lines: 17 | Source: optimization.cs
Example 2: clearrequestfields
/*************************************************************************
Clears request fields (to be sure that we don't forget to clear something)
*************************************************************************/
private static void clearrequestfields(mincgstate state)
{
state.needf = false;
state.needfg = false;
state.xupdated = false;
state.lsstart = false;
state.lsend = false;
state.algpowerup = false;
}
Author: orlovk | Project: PtProject | Lines: 12 | Source: optimization.cs
Example 3: preconditionedmultiply2
/*************************************************************************
This function calculates preconditioned product x'*H^(-1)*y. Work0[] and
Work1[] are used as temporaries (size must be at least N; this function
doesn't allocate arrays).
-- ALGLIB --
Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
private static double preconditionedmultiply2(mincgstate state,
ref double[] x,
ref double[] y,
ref double[] work0,
ref double[] work1)
{
double result = 0;
int i = 0;
int n = 0;
int vcnt = 0;
double v0 = 0;
double v1 = 0;
int i_ = 0;
n = state.n;
vcnt = state.vcnt;
//
// no preconditioning
//
if( state.prectype==0 )
{
v0 = 0.0;
for(i_=0; i_<=n-1;i_++)
{
v0 += x[i_]*y[i_];
}
result = v0;
return result;
}
if( state.prectype==3 )
{
result = 0;
for(i=0; i<=n-1; i++)
{
result = result+x[i]*state.s[i]*state.s[i]*y[i];
}
return result;
}
alglib.ap.assert(state.prectype==2, "MinCG: internal error (unexpected PrecType)");
//
// low rank preconditioning
//
result = 0.0;
for(i=0; i<=n-1; i++)
{
result = result+x[i]*y[i]/(state.diagh[i]+state.diaghl2[i]);
}
if( vcnt>0 )
{
for(i=0; i<=n-1; i++)
{
work0[i] = x[i]/(state.diagh[i]+state.diaghl2[i]);
work1[i] = y[i]/(state.diagh[i]+state.diaghl2[i]);
}
for(i=0; i<=vcnt-1; i++)
{
v0 = 0.0;
for(i_=0; i_<=n-1;i_++)
{
v0 += work0[i_]*state.vcorr[i,i_];
}
v1 = 0.0;
for(i_=0; i_<=n-1;i_++)
{
v1 += work1[i_]*state.vcorr[i,i_];
}
result = result-v0*v1;
}
}
return result;
}
Author: orlovk | Project: PtProject | Lines: 81 | Source: optimization.cs
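For reference, the low-rank branch above (PrecType=2) evaluates, in the notation of the code (D=diagh, DL2=diaghl2, V=vcorr; this restatement is ours, not part of the original comments):

x'*H^(-1)*y ≈ sum(i=0..N-1) x[i]*y[i]/(D[i]+DL2[i])
              - sum(k=0..VCnt-1) ( sum(i) x[i]*V[k,i]/(D[i]+DL2[i]) ) * ( sum(i) y[i]*V[k,i]/(D[i]+DL2[i]) )

The PrecType=0 branch reduces to the plain dot product x'*y, and PrecType=3 computes sum(i) x[i]*s[i]^2*y[i] (scale-based preconditioning).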
Example 4: mincgsetprecdiagfast
/*************************************************************************
Faster version of MinCGSetPrecDiag(), for time-critical parts of code,
without safety checks.
-- ALGLIB --
Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetprecdiagfast(mincgstate state,
double[] d)
{
int i = 0;
apserv.rvectorsetlengthatleast(ref state.diagh, state.n);
apserv.rvectorsetlengthatleast(ref state.diaghl2, state.n);
state.prectype = 2;
state.vcnt = 0;
state.innerresetneeded = true;
for(i=0; i<=state.n-1; i++)
{
state.diagh[i] = d[i];
state.diaghl2[i] = 0.0;
}
}
Author: orlovk | Project: PtProject | Lines: 23 | Source: optimization.cs
Example 5: mincgsetprecvarpart
/*************************************************************************
This function updates variable part (diagonal matrix D2)
of low-rank preconditioner.
This update is very cheap and takes just O(N) time.
It has no effect with default preconditioner.
-- ALGLIB --
Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetprecvarpart(mincgstate state,
double[] d2)
{
int i = 0;
int n = 0;
n = state.n;
for(i=0; i<=n-1; i++)
{
state.diaghl2[i] = d2[i];
}
}
Author: orlovk | Project: PtProject | Lines: 23 | Source: optimization.cs
Example 6: mincgresults
/*************************************************************************
Conjugate gradient results
INPUT PARAMETERS:
State - algorithm state
OUTPUT PARAMETERS:
X - array[0..N-1], solution
Rep - optimization report:
* Rep.TerminationType completion code:
* -8 internal integrity control detected infinite
or NAN values in function/gradient. Abnormal
termination signalled.
* -7 gradient verification failed.
See MinCGSetGradientCheck() for more information.
* 1 relative function improvement is no more than
EpsF.
* 2 relative step is no more than EpsX.
* 4 gradient norm is no more than EpsG
* 5 MaxIts steps were taken
* 7 stopping conditions are too stringent,
further improvement is impossible,
we return best X found so far
* 8 terminated by user
* Rep.IterationsCount contains iterations count
* Rep.NFEV contains the number of function evaluations
-- ALGLIB --
Copyright 20.04.2009 by Bochkanov Sergey
*************************************************************************/
public static void mincgresults(mincgstate state,
ref double[] x,
mincgreport rep)
{
x = new double[0];
mincgresultsbuf(state, ref x, rep);
}
Author: orlovk | Project: PtProject | Lines: 38 | Source: optimization.cs
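Caller-side code typically inspects these completion codes through the public report object. A minimal sketch, assuming the standard alglib C# wrapper with out-parameters and a state already processed by MinCGOptimize() (the messages are illustrative):

double[] xsol;
alglib.mincgreport rep;
alglib.mincgresults(state, out xsol, out rep);
if( rep.terminationtype>0 )
{
    // 1/2/4: tolerance met, 5: MaxIts reached, 7: conditions too stringent, 8: user request
    System.Console.WriteLine("converged after " + rep.iterationscount + " iterations, " + rep.nfev + " function evaluations");
}
else
{
    // -7: gradient verification failed, -8: infinite/NaN values detected
    System.Console.WriteLine("abnormal termination, code " + rep.terminationtype);
}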
Example 7: mincgrestartfrom
/*************************************************************************
This subroutine restarts CG algorithm from new point. All optimization
parameters are left unchanged.
This function allows solving multiple optimization problems (which
must have the same number of dimensions) without the object reallocation penalty.
INPUT PARAMETERS:
State - structure used to store algorithm state.
X - new starting point.
-- ALGLIB --
Copyright 30.07.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgrestartfrom(mincgstate state,
double[] x)
{
int i_ = 0;
alglib.ap.assert(alglib.ap.len(x)>=state.n, "MinCGRestartFrom: Length(X)<N!");
alglib.ap.assert(apserv.isfinitevector(x, state.n), "MinCGRestartFrom: X contains infinite or NaN values!");
for(i_=0; i_<=state.n-1;i_++)
{
state.x[i_] = x[i_];
}
mincgsuggeststep(state, 0.0);
state.rstate.ia = new int[1+1];
state.rstate.ra = new double[2+1];
state.rstate.stage = -1;
clearrequestfields(state);
}
Author: orlovk | Project: PtProject | Lines: 31 | Source: optimization.cs
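On the caller's side, the restart facility is typically combined with the buffered results call from Example 1 to solve a sequence of same-dimension problems with a single state object. A sketch against the public alglib wrapper (myGrad stands for a user-supplied gradient callback of type alglib.ndimensional_grad, as in the workflow sketch after Example 12; all numeric values are illustrative):

double[] x0 = new double[]{ 0, 0 };          // starting point of the first problem
double[] x1 = new double[]{ 5, -3 };         // second starting point, same dimension
double[] xsol = new double[0];               // output buffer, reused between solves
alglib.mincgstate state;
alglib.mincgreport rep = new alglib.mincgreport();

alglib.mincgcreate(x0, out state);
alglib.mincgsetcond(state, 0, 0, 1.0E-8, 0);
alglib.mincgoptimize(state, myGrad, null, null);
alglib.mincgresultsbuf(state, ref xsol, rep);   // buffered results, no reallocation once xsol is large enough

alglib.mincgrestartfrom(state, x1);             // reuse the same state, keep all settings
alglib.mincgoptimize(state, myGrad, null, null);
alglib.mincgresultsbuf(state, ref xsol, rep);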
Example 8: mincgsetscale
/*************************************************************************
This function sets scaling coefficients for CG optimizer.
ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances). Scale of
the I-th variable is a translation invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the function
Scaling is also used by finite difference variant of CG optimizer - step
along I-th axis is equal to DiffStep*S[I].
In most optimizers (and in the CG too) scaling is NOT a form of
preconditioning. It just affects stopping conditions. You should set
preconditioner by separate call to one of the MinCGSetPrec...() functions.
There is special preconditioning mode, however, which uses scaling
coefficients to form diagonal preconditioning matrix. You can turn this
mode on, if you want. But you should understand that scaling is not the
same thing as preconditioning - these are two different, although related
forms of tuning solver.
INPUT PARAMETERS:
State - structure stores algorithm state
S - array[N], non-zero scaling coefficients
S[i] may be negative, sign doesn't matter.
-- ALGLIB --
Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetscale(mincgstate state,
double[] s)
{
int i = 0;
alglib.ap.assert(alglib.ap.len(s)>=state.n, "MinCGSetScale: Length(S)<N");
for(i=0; i<=state.n-1; i++)
{
alglib.ap.assert(math.isfinite(s[i]), "MinCGSetScale: S contains infinite or NAN elements");
alglib.ap.assert((double)(s[i])!=(double)(0), "MinCGSetScale: S contains zero elements");
state.s[i] = Math.Abs(s[i]);
}
}
Author: orlovk | Project: PtProject | Lines: 43 | Source: optimization.cs
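A small caller-side sketch, assuming state was created with mincgcreate() (the magnitudes are illustrative; scaling here only affects the stopping tests, unless a scale-based preconditioner is also requested as in Example 15):

// variable 0 lives around 1, variable 1 around 1e+5
double[] s = new double[]{ 1.0, 1.0E+5 };
alglib.mincgsetscale(state, s);
// EpsX-based stopping now measures the step in units of s[i]:
alglib.mincgsetcond(state, 0, 0, 0.001, 0);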
Example 9: mincgsetxrep
/*************************************************************************
This function turns on/off reporting.
INPUT PARAMETERS:
State - structure which stores algorithm state
NeedXRep- whether iteration reports are needed or not
If NeedXRep is True, algorithm will call rep() callback function if it is
provided to MinCGOptimize().
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetxrep(mincgstate state,
bool needxrep)
{
state.xrep = needxrep;
}
Author: orlovk | Project: PtProject | Lines: 18 | Source: optimization.cs
Example 10: mincgcreatef
/*************************************************************************
The subroutine is finite difference variant of MinCGCreate(). It uses
finite differences in order to differentiate target function.
The description below contains information which is specific to this function
only. We recommend reading the comments on MinCGCreate() in order to get more
information about creation of the CG optimizer.
INPUT PARAMETERS:
N - problem dimension, N>0:
* if given, only leading N elements of X are used
* if not given, automatically determined from size of X
X - starting point, array[0..N-1].
DiffStep- differentiation step, >0
OUTPUT PARAMETERS:
State - structure which stores algorithm state
NOTES:
1. algorithm uses 4-point central formula for differentiation.
2. differentiation step along I-th axis is equal to DiffStep*S[I] where
S[] is scaling vector which can be set by MinCGSetScale() call.
3. we recommend you to use moderate values of differentiation step. Too
large step will result in too large truncation errors, while too small
step will result in too large numerical errors. 1.0E-6 can be good
value to start with.
4. Numerical differentiation is very inefficient - one gradient
calculation needs 4*N function evaluations. This function will work for
any N - either small (1...10), moderate (10...100) or large (100...).
However, performance penalty will be too severe for any N's except for
small ones.
We should also say that code which relies on numerical differentiation
is less robust and less precise. CG needs exact gradient values.
An imprecise gradient may slow down convergence, especially on highly
nonlinear problems.
Thus we recommend using this function for fast prototyping on small-
dimensional problems only, and implementing the analytical gradient as
soon as possible.
-- ALGLIB --
Copyright 16.05.2011 by Bochkanov Sergey
*************************************************************************/
public static void mincgcreatef(int n,
double[] x,
double diffstep,
mincgstate state)
{
alglib.ap.assert(n>=1, "MinCGCreateF: N too small!");
alglib.ap.assert(alglib.ap.len(x)>=n, "MinCGCreateF: Length(X)<N!");
alglib.ap.assert(apserv.isfinitevector(x, n), "MinCGCreateF: X contains infinite or NaN values!");
alglib.ap.assert(math.isfinite(diffstep), "MinCGCreateF: DiffStep is infinite or NaN!");
alglib.ap.assert((double)(diffstep)>(double)(0), "MinCGCreateF: DiffStep is non-positive!");
mincginitinternal(n, diffstep, state);
mincgrestartfrom(state, x);
}
Author: orlovk | Project: PtProject | Lines: 55 | Source: optimization.cs
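A minimal sketch of the finite-difference workflow described above, assuming the public alglib wrapper (the quartic target and the 1.0E-6 step are illustrative):

// value-only callback: no gradient needed, ALGLIB differentiates numerically
static void MyFunc(double[] x, ref double func, object obj)
{
    func = 100*System.Math.Pow(x[0]+3, 4) + System.Math.Pow(x[1]-3, 4);
}

static void RunPrototype()
{
    double[] x = new double[]{ 0, 0 };
    alglib.mincgstate state;
    alglib.mincgreport rep;
    alglib.mincgcreatef(x, 1.0E-6, out state);      // DiffStep = 1.0E-6
    alglib.mincgsetcond(state, 0, 0, 1.0E-6, 0);
    alglib.mincgoptimize(state, MyFunc, null, null);
    alglib.mincgresults(state, out x, out rep);
}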
Example 11: mincgsetcond
/*************************************************************************
This function sets stopping conditions for CG optimization algorithm.
INPUT PARAMETERS:
State - structure which stores algorithm state
EpsG - >=0
The subroutine finishes its work if the condition
|v|<EpsG is satisfied, where:
* |.| means Euclidean norm
* v - scaled gradient vector, v[i]=g[i]*s[i]
* g - gradient
* s - scaling coefficients set by MinCGSetScale()
EpsF - >=0
The subroutine finishes its work if on k+1-th iteration
the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1}
is satisfied.
EpsX - >=0
The subroutine finishes its work if on k+1-th iteration
the condition |v|<=EpsX is fulfilled, where:
* |.| means Euclidean norm
* v - scaled step vector, v[i]=dx[i]/s[i]
* dx - step vector, dx=X(k+1)-X(k)
* s - scaling coefficients set by MinCGSetScale()
MaxIts - maximum number of iterations. If MaxIts=0, the number of
iterations is unlimited.
Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to
automatic stopping criterion selection (small EpsX).
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetcond(mincgstate state,
double epsg,
double epsf,
double epsx,
int maxits)
{
alglib.ap.assert(math.isfinite(epsg), "MinCGSetCond: EpsG is not finite number!");
alglib.ap.assert((double)(epsg)>=(double)(0), "MinCGSetCond: negative EpsG!");
alglib.ap.assert(math.isfinite(epsf), "MinCGSetCond: EpsF is not finite number!");
alglib.ap.assert((double)(epsf)>=(double)(0), "MinCGSetCond: negative EpsF!");
alglib.ap.assert(math.isfinite(epsx), "MinCGSetCond: EpsX is not finite number!");
alglib.ap.assert((double)(epsx)>=(double)(0), "MinCGSetCond: negative EpsX!");
alglib.ap.assert(maxits>=0, "MinCGSetCond: negative MaxIts!");
if( (((double)(epsg)==(double)(0) && (double)(epsf)==(double)(0)) && (double)(epsx)==(double)(0)) && maxits==0 )
{
epsx = 1.0E-6;
}
state.epsg = epsg;
state.epsf = epsf;
state.epsx = epsx;
state.maxits = maxits;
}
Author: orlovk | Project: PtProject | Lines: 54 | Source: optimization.cs
Example 12: mincgcreate
/*************************************************************************
NONLINEAR CONJUGATE GRADIENT METHOD
DESCRIPTION:
The subroutine minimizes function F(x) of N arguments by using one of the
nonlinear conjugate gradient methods.
These CG methods are globally convergent (even on non-convex functions) as
long as grad(f) is Lipschitz continuous in some neighborhood of the level set
L = { x : f(x)<=f(x0) }.
REQUIREMENTS:
Algorithm will request following information during its operation:
* function value F and its gradient G (simultaneously) at given point X
USAGE:
1. User initializes algorithm state with MinCGCreate() call
2. User tunes solver parameters with MinCGSetCond(), MinCGSetStpMax() and
other functions
3. User calls MinCGOptimize() function which takes algorithm state and
pointer (delegate, etc.) to callback function which calculates F/G.
4. User calls MinCGResults() to get solution
5. Optionally, user may call MinCGRestartFrom() to solve another problem
with the same N but another starting point and/or another function.
MinCGRestartFrom() allows reusing an already initialized structure.
INPUT PARAMETERS:
N - problem dimension, N>0:
* if given, only leading N elements of X are used
* if not given, automatically determined from size of X
X - starting point, array[0..N-1].
OUTPUT PARAMETERS:
State - structure which stores algorithm state
-- ALGLIB --
Copyright 25.03.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgcreate(int n,
double[] x,
mincgstate state)
{
alglib.ap.assert(n>=1, "MinCGCreate: N too small!");
alglib.ap.assert(alglib.ap.len(x)>=n, "MinCGCreate: Length(X)<N!");
alglib.ap.assert(apserv.isfinitevector(x, n), "MinCGCreate: X contains infinite or NaN values!");
mincginitinternal(n, 0.0, state);
mincgrestartfrom(state, x);
}
Author: orlovk | Project: PtProject | Lines: 51 | Source: optimization.cs
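The USAGE steps 1-5 above translate into roughly the following caller-side code, a sketch assuming the public alglib wrapper (the quartic target function and the tolerance values are illustrative, not part of the original example):

using System;

public static class MinCGDemo
{
    // step 3 callback: reports F and grad F simultaneously at the given X
    static void FuncGrad(double[] x, ref double func, double[] grad, object obj)
    {
        func = 100*Math.Pow(x[0]+3, 4) + Math.Pow(x[1]-3, 4);
        grad[0] = 400*Math.Pow(x[0]+3, 3);
        grad[1] = 4*Math.Pow(x[1]-3, 3);
    }

    public static void Run()
    {
        double[] x = new double[]{ 0, 0 };                  // starting point
        alglib.mincgstate state;
        alglib.mincgreport rep;
        alglib.mincgcreate(x, out state);                   // 1. create optimizer state
        alglib.mincgsetcond(state, 1.0E-10, 0, 0, 0);       // 2. tune stopping conditions (EpsG only)
        alglib.mincgoptimize(state, FuncGrad, null, null);  // 3. run optimizer with the F/G callback
        alglib.mincgresults(state, out x, out rep);         // 4. extract solution and report
        Console.WriteLine("termination code: " + rep.terminationtype);
        // 5. optionally: alglib.mincgrestartfrom(state, anotherStartingPoint);
    }
}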
Example 13: make_copy
public override alglib.apobject make_copy()
{
mincgstate _result = new mincgstate();
_result.n = n;
_result.epsg = epsg;
_result.epsf = epsf;
_result.epsx = epsx;
_result.maxits = maxits;
_result.stpmax = stpmax;
_result.suggestedstep = suggestedstep;
_result.xrep = xrep;
_result.drep = drep;
_result.cgtype = cgtype;
_result.prectype = prectype;
_result.diagh = (double[])diagh.Clone();
_result.diaghl2 = (double[])diaghl2.Clone();
_result.vcorr = (double[,])vcorr.Clone();
_result.vcnt = vcnt;
_result.s = (double[])s.Clone();
_result.diffstep = diffstep;
_result.nfev = nfev;
_result.mcstage = mcstage;
_result.k = k;
_result.xk = (double[])xk.Clone();
_result.dk = (double[])dk.Clone();
_result.xn = (double[])xn.Clone();
_result.dn = (double[])dn.Clone();
_result.d = (double[])d.Clone();
_result.fold = fold;
_result.stp = stp;
_result.curstpmax = curstpmax;
_result.yk = (double[])yk.Clone();
_result.lastgoodstep = lastgoodstep;
_result.lastscaledstep = lastscaledstep;
_result.mcinfo = mcinfo;
_result.innerresetneeded = innerresetneeded;
_result.terminationneeded = terminationneeded;
_result.trimthreshold = trimthreshold;
_result.rstimer = rstimer;
_result.x = (double[])x.Clone();
_result.f = f;
_result.g = (double[])g.Clone();
_result.needf = needf;
_result.needfg = needfg;
_result.xupdated = xupdated;
_result.algpowerup = algpowerup;
_result.lsstart = lsstart;
_result.lsend = lsend;
_result.userterminationneeded = userterminationneeded;
_result.teststep = teststep;
_result.rstate = (rcommstate)rstate.make_copy();
_result.repiterationscount = repiterationscount;
_result.repnfev = repnfev;
_result.repvaridx = repvaridx;
_result.repterminationtype = repterminationtype;
_result.debugrestartscount = debugrestartscount;
_result.lstate = (linmin.linminstate)lstate.make_copy();
_result.fbase = fbase;
_result.fm2 = fm2;
_result.fm1 = fm1;
_result.fp1 = fp1;
_result.fp2 = fp2;
_result.betahs = betahs;
_result.betady = betady;
_result.work0 = (double[])work0.Clone();
_result.work1 = (double[])work1.Clone();
return _result;
}
Author: orlovk | Project: PtProject | Lines: 68 | Source: optimization.cs
Example 14: mincgrestartfrom (wrapper)
/*************************************************************************
This subroutine restarts CG algorithm from new point. All optimization
parameters are left unchanged.
This function allows solving multiple optimization problems (which
must have the same number of dimensions) without the object reallocation penalty.
INPUT PARAMETERS:
State - structure used to store algorithm state.
X - new starting point.
-- ALGLIB --
Copyright 30.07.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgrestartfrom(mincgstate state, double[] x)
{
mincg.mincgrestartfrom(state.innerobj, x);
return;
}
Author: Ring-r | Project: opt | Lines: 20 | Source: optimization.cs
Example 15: mincgsetprecscale
/*************************************************************************
Modification of the preconditioner: scale-based diagonal preconditioning.
This preconditioning mode can be useful when you don't have approximate
diagonal of Hessian, but you know that your variables are badly scaled
(for example, one variable is in [1,10], and another in [1000,100000]),
and most part of the ill-conditioning comes from different scales of vars.
In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2),
can greatly improve convergence.
IMPORTANT: you should set the scale of your variables with a MinCGSetScale()
call (before or after the MinCGSetPrecScale() call). Without knowledge of the
scale of your variables the scale-based preconditioner will be just a unit matrix.
INPUT PARAMETERS:
State - structure which stores algorithm state
NOTE: you can change preconditioner "on the fly", during algorithm
iterations.
-- ALGLIB --
Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetprecscale(mincgstate state)
{
state.prectype = 3;
state.innerresetneeded = true;
}
Author: orlovk | Project: PtProject | Lines: 29 | Source: optimization.cs
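A short sketch of turning this mode on (continues the scale setting from Example 8; without a prior MinCGSetScale() call the preconditioner degenerates to the unit matrix, as the comment warns):

double[] s = new double[]{ 1.0, 1.0E+5 };   // rough variable magnitudes (illustrative)
alglib.mincgsetscale(state, s);             // scale information...
alglib.mincgsetprecscale(state);            // ...reused as diagonal preconditioner H[i] = 1/(s[i]^2)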
Example 16: mincgsetdrep
/*************************************************************************
This function turns on/off line search reports.
These reports are described in more detail in the developer-only comments on
the MinCGState object.
INPUT PARAMETERS:
State - structure which stores algorithm state
NeedDRep- whether line search reports are needed or not
This function is intended for private use only. Turning it on artificially
may cause program failure.
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetdrep(mincgstate state,
bool needdrep)
{
state.drep = needdrep;
}
Author: orlovk | Project: PtProject | Lines: 20 | Source: optimization.cs
Example 17: mincgsetcgtype
/*************************************************************************
This function sets CG algorithm.
INPUT PARAMETERS:
State - structure which stores algorithm state
CGType - algorithm type:
* -1 automatic selection of the best algorithm
* 0 DY (Dai and Yuan) algorithm
* 1 Hybrid DY-HS algorithm
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetcgtype(mincgstate state,
int cgtype)
{
alglib.ap.assert(cgtype>=-1 && cgtype<=1, "MinCGSetCGType: incorrect CGType!");
if( cgtype==-1 )
{
cgtype = 1;
}
state.cgtype = cgtype;
}
Author: orlovk | Project: PtProject | Lines: 23 | Source: optimization.cs
Example 18: mincgresultsbuf (core implementation)
/*************************************************************************
Conjugate gradient results
Buffered implementation of MinCGResults(), which uses pre-allocated buffer
to store X[]. If buffer size is too small, it resizes buffer. It is
intended to be used in the inner cycles of performance critical algorithms
where array reallocation penalty is too large to be ignored.
-- ALGLIB --
Copyright 20.04.2009 by Bochkanov Sergey
*************************************************************************/
public static void mincgresultsbuf(mincgstate state,
ref double[] x,
mincgreport rep)
{
int i_ = 0;
if( alglib.ap.len(x)<state.n )
{
x = new double[state.n];
}
for(i_=0; i_<=state.n-1;i_++)
{
x[i_] = state.xn[i_];
}
rep.iterationscount = state.repiterationscount;
rep.nfev = state.repnfev;
rep.varidx = state.repvaridx;
rep.terminationtype = state.repterminationtype;
}
Author: orlovk | Project: PtProject | Lines: 30 | Source: optimization.cs
Example 19: mincgsetstpmax
/*************************************************************************
This function sets maximum step length
INPUT PARAMETERS:
State - structure which stores algorithm state
StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't
want to limit step length.
Use this subroutine when you optimize target function which contains exp()
or other fast growing functions, and optimization algorithm makes too
large steps which lead to overflow. This function allows us to reject
steps that are too large (and therefore expose us to possible
overflow) without actually calculating the function value at x+stp*d.
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void mincgsetstpmax(mincgstate state,
double stpmax)
{
alglib.ap.assert(math.isfinite(stpmax), "MinCGSetStpMax: StpMax is not finite!");
alglib.ap.assert((double)(stpmax)>=(double)(0), "MinCGSetStpMax: StpMax<0!");
state.stpmax = stpmax;
}
Author: orlovk | Project: PtProject | Lines: 24 | Source: optimization.cs
Example 20: mincgrequesttermination
/*************************************************************************
This subroutine submits request for termination of running optimizer. It
should be called from user-supplied callback when user decides that it is
time to "smoothly" terminate optimization process. As result, optimizer
stops at point which was "current accepted" when termination request was
submitted and returns error code 8 (successful termination).
INPUT PARAMETERS:
State - optimizer structure
NOTE: after request for termination optimizer may perform several
additional calls to user-supplied callbacks. It does NOT guarantee
to stop immediately - it just guarantees that these additional calls
will be discarded later.
NOTE: calling this function on optimizer which is NOT running will have no
effect.
NOTE: multiple calls to this function are possible. First call is counted,
subsequent calls are silently ignored.
-- ALGLIB --
Copyright 08.10.2014 by Bochkanov Sergey
*************************************************************************/
public static void mincgrequesttermination(mincgstate state)
{
state.userterminationneeded = true;
}
Author: orlovk | Project: PtProject | Lines: 28 | Source: optimization.cs
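A hedged sketch of using this request from a callback, assuming the public alglib wrapper exposes the same mincgrequesttermination()/mincgsetxrep() calls as the inner implementation shown above (the cancellation flag is a hypothetical field set elsewhere, e.g. by a UI thread):

static volatile bool cancelRequested = false;   // hypothetical flag

// iteration report callback (alglib.ndimensional_rep); fires once MinCGSetXRep is enabled
static void OnIteration(double[] x, double func, object obj)
{
    if( cancelRequested )
        alglib.mincgrequesttermination((alglib.mincgstate)obj);   // optimizer stops with code 8
}

static void RunCancellable(alglib.mincgstate state, alglib.ndimensional_grad funcGrad)
{
    double[] xsol;
    alglib.mincgreport rep;
    alglib.mincgsetxrep(state, true);                           // enable iteration reports
    alglib.mincgoptimize(state, funcGrad, OnIteration, state);  // pass state as obj so the callback can reach it
    alglib.mincgresults(state, out xsol, out rep);              // terminationtype==8 after a user request
}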
Note: The mincgstate class examples in this article were collected from GitHub/MSDocs and other source-code and documentation hosting platforms, and the snippets were selected from open-source projects contributed by various developers. Copyright of the source code remains with the original authors; please consult the corresponding project's license before redistributing or using it, and do not reproduce without permission.