FDK - Different results: same model, different computers


Michael Skipper Andersen was so kind as to share his extensive FDK knee model with us in December. We ran it on 5 different computers and had to change Kinematics.KinematicTol to 1e-5 for the model to run at all. The problem is that we are getting different results on different computers (except for 2, which seem to produce the same results), and one computer often aborts the calculation at different time steps. We can't figure out why, and Michael suggested posting the question in the forum. The model can be downloaded here: https://gigamove.rz.rwth-aachen.de/download/id/SYgexDcDpwT6Ei
The password is XXXXXXXXX

Also, I was wondering how you came up with the AgeFactor (0.9999999) and StrengthIndexLeg (1.53) for the scaling of the muscles?


Hi Katharina,

We are now running this model on two different computers.
Once they finish, we would like to compare the results.

Could you tell us which results differ between the computers?
Could you please explain it in more detail?

Best regards,

Hey Moonki,

I compared the total compressive force per body weight; you'll find the results from the 4 different computers attached as txt files (3 of them have exactly the same hardware; the one named "TUMIRNIX" is the exception). Unfortunately, I didn't save the results from "MATAHARIS" when it ran through; the second time it didn't finish, I think because of a license update during the simulation.
Also, running the analysis on "TUMIRNIX" had previously caused a couple of program terminations at different time steps, but we assumed that computer might have a hardware problem.
Thanks for your support!

Hi Katharina,

Just to make sure: do you have the same version of AnyBody installed on all 4 computers?
You can check that under "About" in the AMS. It seems a little strange that the starting value is the same for all. We will look into it.


Yes, the same version (5.1.0.2588) is installed on all 4 computers. Our IT guy suggested that TUMIRNIX may have a hardware problem, since it's an older computer. The other three were bought at the beginning of April; GIBTERMINE differs from the other two in that it has at least one more program installed, but I'm not aware of any other differences.

Hi Katharina,

We always run models on various computers to double-check them before we release them. Over the last days, we also ran the knee model on two different computers without any differences. However, we used the newest AnyBody version, v5.2 (to be released within a few weeks). We will now repeat the test with the version you have installed (v5.1.0).
In the meantime:
Did you experience any problems/differences in other models so far? Models without contact? Or could you build a simple contact model and check whether there are differences? I know Skipper's model is very complex, so it might be easier to track down possible errors in a simpler model.

Hey Amir,

Sorry for the delayed response! We just ran Michael Skipper Andersen's model again. I've been running it on even more computers now, and it keeps giving us different results. I added a tiny bit of code so the model saves some output values as text files, and it seems to be even less stable that way.

Hi Katharina,

Unfortunately, we still cannot reproduce the problem on our computers here. We would be very pleased if you could help us get a little closer to the source of the problem.
First, could you give us some more information about how you used the model:

  1. When you ran the model, was it on a network drive or on the local disk of the different computers? In the latter case, did you make sure that all the changes you made to the model were identical in every copy you started?
  2. Was the workflow always the same? E.g., did you always start a fresh AnyBody session, load the model, and run the InverseDynamics, or were some simulations started after another model had been loaded, after a model had been restarted following a few tries, etc.?
  3. As Amir mentioned before, Michael Skipper Andersen's model is quite complex. Could you also have a look at some other output values, e.g. the ForceDepKinError?

Best regards

Hi Daniel,

  1. I saved the model on the local drive, started by unpacking the model Michael Skipper Andersen sent us, and added the same code each time; I kept text documents where I noted all the changes I had to make, just to make sure I wouldn't get confused.
  2. I sometimes forgot to switch the model to the gait trial (did I even mention that I was working with the gait trial?), but since I've run it on 10 different computers so far, I definitely followed the same protocol at least a couple of times.
  3. I'll have a look at the ForceDepKinError!


I'm joining this thread because it seems I have the same kind of problem. In my case, results differ between analyses on the same computer. The analysis either completes or aborts before the end, at varying time steps. This happens, of course, without changing any input parameters.

Do you have more information about this problem?


Over the past weeks we have evaluated this issue. We found that in some rare cases, in particular for very complex, long-running FDK models, there is slight variation in the results from run to run.

The reason for the different results is that some of AnyBody's core computational routines exploit the parallel nature of modern processors, and the approach used does not guarantee that all computations are performed in exactly the same order every time. This means that the round-off can differ slightly between runs.

Of course, this should not cause significant variation from execution to execution. However, combined with a poorly conditioned, i.e. sensitive, problem, you may see issues like those reported in this thread. FDK models, and in particular FDK models with hard surface contacts, are typically more sensitive than pure inverse dynamics models.
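The order-dependence described above can be reproduced in a few lines of plain Python (illustration only, not AnyScript or AnyBody internals): floating-point addition is not associative, so a parallel reduction that happens to add the same terms in a different order can round off differently.

```python
# Floating-point addition is not associative, so the same terms summed
# in a different order can give slightly different results.
a, b, c = 0.1, 0.2, 0.3
left_to_right = (a + b) + c   # 0.6000000000000001
right_to_left = a + (b + c)   # 0.6
print(left_to_right == right_to_left)  # False

# A more dramatic case: small terms can be swallowed entirely when a
# large intermediate value appears first in the accumulation.
values = [0.1] * 10 + [1e16, -1e16]
forward = sum(values)             # small terms first: they are lost at 1e16
backward = sum(reversed(values))  # large terms cancel first: small terms survive
print(forward)   # 0.0
print(backward)  # 0.9999999999999999
```

A sensitive (poorly conditioned) model amplifies exactly these tiny round-off differences, which is why an otherwise harmless reordering can produce visibly different results, or even a different convergence path, from run to run.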

AMS version 5.3.1 (just released) contains a couple of improvements to the FDK and surface-contact algorithms. They significantly improve the robustness and accuracy of the FDK problems we have tested them on. This should fix the serious variations, but you may still see slight, hopefully irrelevant, variations from run to run.

In the future version 6.0, we will introduce a new option for running in a so-called deterministic mode, which will ensure that you get the same numbers each time you run the same model with the same executable.