Hi John,

I am probably more the recruitment guy around here. I wish it were
easier to write mathematics in this system, but I'll try in text mode
anyway. The problem we are solving goes like this:

Minimize

    Beta + e1*sum(muscle stresses) + e2*sum(muscle stresses^2)

Subject to

    muscle stresses <= Beta
    Equilibrium equations
    Muscle forces >= 0
    e1*e2 = 0

The last condition expresses that either e1 or e2 must be zero, as
mentioned in the previous post. The quadratic optimizer can solve any
of these problems, while the simplex optimizer can only solve the
problem where e2 = 0. But the simplex optimizer is more efficient and
more robust.

As you can see, with e1 = e2 = 0 we have the pure min/max problem.
When you increase either e1 or e2 from zero towards infinity, the
problem gradually becomes more like a minimization of the sum or the
squared sum of muscle stresses, respectively. But mathematically it is
neither of these unless e1 or e2 has gone to infinity, so e1 = 1 or
e2 = 1 will not mathematically give you what you expected.
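In case it helps to see the structure, here is a toy sketch of the
e2 = 0 (simplex) case in Python using scipy.optimize.linprog. The
two-muscle system, its strengths, and the single equilibrium equation
are all invented for illustration; this is not the AnyBody formulation
itself, just the same structure:

```python
# Toy sketch of the recruitment problem for the e2 = 0 (simplex) case.
# The two-muscle system, strengths, and single equilibrium equation
# are invented for illustration only.
import numpy as np
from scipy.optimize import linprog

def recruit(e1, strengths, load):
    """Minimize Beta + e1*sum(stresses) subject to stresses <= Beta,
    equilibrium (sum of forces = load), and forces >= 0.
    Variables are [f_1, ..., f_n, Beta]; stress_i = f_i / strength_i."""
    n = len(strengths)
    c = np.append(e1 / strengths, 1.0)           # objective coefficients
    A_ub = np.hstack([np.diag(1.0 / strengths),  # stress_i - Beta <= 0
                      -np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)  # equilibrium row
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[load])
    return res.x[:n], res.x[n]                   # forces, Beta

strengths = np.array([10.0, 5.0])
forces, beta = recruit(0.0, strengths, 10.0)    # pure min/max
# -> equal stresses: beta = 2/3, forces = [20/3, 10/3]
forces, beta = recruit(100.0, strengths, 10.0)  # sum term dominates
# -> the strong muscle takes the whole load: forces = [10, 0], beta = 1
```

With e1 = 0 the solver equalizes the stresses, and with a large e1 it
simply hands the whole load to the strongest muscle, which is exactly
the min/max versus sum-of-stresses behavior described above.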

Since we cannot set e1 or e2 to infinity, it is relevant to ask: how
large should they be before the problem converts? The answer is that
the effects of e1 and e2 are rather different.

Very small values of e1 might stabilize an ill-conditioned problem
somewhat, so we often use a value of e1 = 1e-5 or thereabouts just for
stabilization. The right size depends a bit on the overall activation
level. The problem converts rather rapidly to minimizing the sum of
muscle stresses when you increase e1. Usually at e1 = 1.0 the problem
is converted completely.

The effect of e2 is different and actually better from a physiological
perspective. Small values of e2 usually stabilize the problem, but
even e2 = 1 will usually not make the problem completely quadratic.
It will typically act like a quadratic problem for small activation
levels and like a min/max problem for larger activations. This makes
a lot of physiological sense. To convert the problem completely to
quadratic form you typically need e2 > 10. I normally use e2 = 1000 to
be sure.
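To make the e2 behavior concrete, here is a matching sketch of the
quadratic case on the same invented two-muscle system. SLSQP from
scipy.optimize stands in for the real QP solver here; the numbers are
again illustrative only:

```python
# Sketch of the quadratic-penalty (e2) case on an invented two-muscle
# system; scipy's SLSQP stands in for the real QP solver.
import numpy as np
from scipy.optimize import minimize

strengths = np.array([10.0, 5.0])
load = 10.0

def recruit_qp(e2):
    """Minimize Beta + e2*sum(stresses^2) subject to stresses <= Beta,
    equilibrium, and forces >= 0. Variables are [f_1, f_2, Beta]."""
    obj = lambda x: x[2] + e2 * np.sum((x[:2] / strengths) ** 2)
    cons = [{"type": "ineq", "fun": lambda x: x[2] - x[:2] / strengths},
            {"type": "eq", "fun": lambda x: np.sum(x[:2]) - load}]
    res = minimize(obj, x0=[5.0, 5.0, 1.0], method="SLSQP",
                   bounds=[(0, None)] * 3, constraints=cons)
    return res.x[:2]

print(recruit_qp(1e-5))  # min/max-like: roughly [6.67, 3.33] (equal stresses)
print(recruit_qp(1000))  # quadratic: roughly [8, 2] (min sum of stress^2)
```

With a tiny e2 the Beta term dominates and the stresses come out
equal, while a large e2 reproduces the pure least-squares recruitment;
intermediate values land in between, as described above.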

I hope this clarifies the matter. This tutorial explains it in more
detail:

http://www.anybodytech.com/514.0.html

Best regards,

John

--- In anyscript@yahoogroups.com, “johnzengwu” <jwu@…> wrote:

>

> Hi Soeren and John:
>
> Thanks for your responses. It makes sense that only one of the two
> optimization methods is working at one time, LP or QP. I have tested
> it, and it really works like that.
>
> I still have a problem with the range of variations for LP and QP.
> What I understand is that the range of variations should be within
> 0-1.0.
>
> I have tested that some problems are not sensitive to the penalty
> values. These solutions are reliable. However, some problems are
> sensitive to the penalty values. The solutions vary with increasing
> penalty values. Typically, when LP or QP takes values > 0.10, the
> solutions become stable, and no further variations with increasing
> LP or QP values are observed. In this case, I would take LP or QP
> values to be 0.1. Does this make sense?
>
> Thanks, John

>

> --- In anyscript@yahoogroups.com, “AnyBody Support” <support@> wrote:

> >
> > This is actually not completely accurate. The linear penalty has
> > no effect when you use the quadratic optimizer. The formulation of
> > the optimization problem is such that you have either a linear or
> > a quadratic penalty on the problem, but you cannot have both. If
> > you use the quadratic optimizer, the linear penalty is ignored
> > even if one is defined. If you use the simplex optimizer, the
> > quadratic penalty is ignored.

> >
> > Best regards,
> > John
> > AnyBody Support
> >

> > --- In anyscript@yahoogroups.com, “AnyBody Support” <support@>
> > wrote:

> > >

> > > Hi John

> > >

> > > You are correct about the solvers; there are two types, Simplex
> > > and QP.
> > >
> > > If you specify a linear penalty, this will have an effect on
> > > both solvers, but a quadratic penalty will only have an
> > > influence if you are running a QP solver.

> > >

> > >

> > > The adjustments of the penalty will not change the solver type;
> > > this is set by the solvertype definition.

> > >

> > >

> > > If you are using a linear penalty of one, it will use an
> > > objective function which is the highest activated muscle plus
> > > the sum of all activities.
> > >
> > > If the penalties were 0.5 LP and 0.5 QP, the objective function
> > > would be: max activated muscle + 0.5*sum of activities +
> > > 0.5*sum of activities^2. The last term would only be in use if
> > > the solver is quadratic.

> > >

> > >

> > > It is difficult to give a range for the penalties; this is a
> > > matter of taste, since it changes how the recruitment is done.
> > > But usually a small linear penalty of, for example, 1e-5 can be
> > > used to avoid activity “spikes” originating from unwanted
> > > co-contraction, since adding just a small penalty will punish
> > > this.

> > >

> > > I hope this made things clearer.

> > >

> > >

> > > Best regards

> > > Søren, AnyBody Support

> > >

> > >

> > >

> > >

> > >

> > > --- In anyscript@yahoogroups.com, “johnzengwu” <jwu@> wrote:

> > > >

> > > > Hi Soeren:

> > > >

> > > > I am confused with the choice of recruitment solvers. What I
> > > > understand is that there are two solvers: linear programming
> > > > (Simplex) and quadratic programming (QP), and a combination of
> > > > these two algorithms is usually applied. It is possible to
> > > > choose the recruitment solvers by adjusting the penalties LP
> > > > and QP.

> > > >

> > > > So, e.g., RecruitmentLpPenalty=1.0 would mean using 100%
> > > > linear programming; RecruitmentLpPenalty=0.5 would mean using
> > > > 50% linear and 50% quadratic programming. Is this right?

> > > >

> > > > What is the range of variations for these LP and QP penalties?

> > > >

> > > >

> > > > Regards,

> > > >

> > > > John Wu

> > > >

> > >

> >

>