Read through our frequently asked questions below
The Vero Optical family of cameras combines the benefits of affordable motion capture with the reliability that comes with Vicon’s three decades of experience.
Details on the Vero Optical and Vue Video cameras can be found on the associated pages below.
Before you install Nexus 2, note the following limitations on supported systems:
Nexus 2 (Windows 10 configuration fully supported and tested) supports the following reference video options:
For the recommended and latest PC specifications, please refer to https://www.vicon.com/support/faqs/?q=what-are-the-latest-pc-specifications or contact Vicon Support
Nexus 2.13 does not support the use of Basler video cameras. To use Basler video cameras with Nexus, use Nexus 2.12.1 or earlier.
There are several reasons why cameras may fail to connect to the software. If you are encountering trouble, check the following:
1. Is the hardware on and connected to the computer? Make sure the Vicon connectivity device (e.g. POE+, Giganet or Ultranet) is connected to a configured network port. If you have not set up a network port, please refer to the FAQ: How do I set up my network card?
2. Is the system in Live Mode? Please go over to the Systems Tab and make sure you are Live. This can be verified at the top of the 3D Perspective; it will either say “Live” or the name of the currently opened trial.
3. Is the Vicon software being allowed through the Windows Firewall? To check, please see the following FAQ: I just installed the latest version of software but the cameras no longer connect when I run it. What can I do?
4. Do you have anti-virus software installed? If so, is its active scanning turned off? Active scanning can disrupt communication between the computer and the cameras.
If the cameras are still not connecting to the software, please contact support.
The most up-to-date firmware can be found via the Vicon Firmware Update Utility.
The firmware update utility provides a robust guided workflow that allows you to update your camera firmware more reliably.
When the utility is installed and started, allow it through the Windows firewall in order for it to communicate with the cameras and reprogram them.
Note: Starting other Vicon software during the operation of the reprogramming tool may interrupt the updating process.
New updated firmware versions are designed to load onto legacy hardware (i.e. T-Series/Bonita), but do not (unless specifically stated) contain any specific updates for these products and are functionally identical to previous legacy firmware builds. The ability to load newer firmware onto legacy hardware is provided for convenience when updating systems containing a supported mix of camera types.
If you own a system consisting only of legacy hardware, such as pre-T-Series and Bonita, please contact Vicon Support.
Mixed Camera Systems:
When running a mixed system please ensure that the Firmware is the same for all cameras.
The Firmware version should correspond to the newest generation of camera in your system.
A mixed or Vantage/Vero camera system needs to be on the latest firmware. For version details, see the FAQ: What is the latest version of Firmware?
If your computer is connected to the internet, the Vicon software will perform a firmware check. If a firmware update is needed, click the notification symbol and follow the instructions. Otherwise, follow the instructions below.
1. Please download the latest version of the Vicon Firmware Update Utility, extract the files from ViconFirmwareUpdateUtility_x.x.x.xxxxh.zip. Once complete, run the executable inside. This will install the Reprogramming Tool for the appropriate Firmware version on the computer.
2. Once the program has been installed make sure:
3. Double click on the Vicon Firmware Update Utility to start the program. The program will automatically search for all Vicon hardware which can be updated. Click Next when ready.
4. A list of existing Firmware for all devices is now presented. You have the option to reprogram all devices.
This process can be slow. Please do not interrupt the programming process. Once the reprogramming is completed you will be able to go to the next page and close out of the programming tool. If any cameras fail or you have further questions, please contact Vicon Support.
For a pure Legacy system (Pre T-Series and Bonita MX Hardware) Vicon hardware should be on Firmware 502.
1. Download the Firmware and extract the files from the Firmware_502.zip folder. Once extracted place MXFirmware_502.mxe in a location easy to find such as the Desktop or Downloads folder.
2. Make sure all Vicon hardware is turned on. Open the core Vicon Software (Tracker or Nexus).
3. Navigate to the System tab, right-click Local Vicon System and select Reprogram Vicon Firmware. In the new window, all Vicon hardware and current firmware versions are listed.
4. Click Browse and navigate to the saved MXFirmware_502. Once the file has been selected, check which devices need to be reprogrammed and click Reprogram.
The reprogramming process might take some time. Do not interrupt the process. If any cameras fail or if there are further questions, please contact support.
The steps below explain how to update the firmware:
1. Download the “.tgz” file and save it in a known location. By default, when upgrading to a new bundle, CaraLive will open this location: “C:\Users\YOURUSER\AppData\Local\Vicon\CaraLive\Firmware”, so preferably save it there.
2. There is no need to decompress the file. CaraLive will run its package contents once it is selected as the new bundle to upload.
3. To install it in the logger, go to the “Actions” option on the logger you want to upgrade and click on it. You will then be presented with three other options. Select “System”.
4. Once in the “System” sub-option, select from the list of available actions the one that reads “Upload New Firmware…”. A pop-up window should appear.
5. As mentioned earlier, CaraLive will open the location where it expects the “.tgz” file to be saved. However, if you have stored the bundle file in a different place, point CaraLive to that folder and select the “.tgz” file.
6. Make sure you have at least 50% battery before continuing, and do not disconnect the power during this process. A warning message will appear; make sure you read and understand it, then press “Yes” to proceed with the update.
7. CaraLive will show a progress bar and after the process is done, the logger will restart.
8. That’s it! Once the logger is up and running it will have the latest version of CaraBundle.
This procedure is only for uploading new bundles. To check and modify already installed ones (the logger keeps all installed builds unless they are specifically deleted), go to “Actions > System > Manage Firmware…”.
Link aggregation describes various methods of using multiple parallel network connections to increase throughput beyond the limit that one link (one connection) can achieve. Link aggregation is supported in Tracker 1.3+, Nexus 1.8.5+ and Blade 2+.
When setting up Link Aggregation ensure that you have the correct Network cards (Intel i340-T4 or the Intel i350-T4 cards) installed on your capture PC. Once you have the correct Network card(s) follow these steps:
1. Make sure your three network ports have fixed IP addresses 192.168.10.1, 192.168.10.2 and 192.168.10.3. A maximum of nine NICs are allowed (192.168.10.1 – 192.168.10.9 inclusive).
2. Connect the 192.168.10.1 and 192.168.10.2 ports to one Giganet/Power over Ethernet switch (POE) and 192.168.10.3 to the other Giganet/POE. You will need an extra cable connecting your Giganets/POEs.
3. Run Tracker/Nexus/Blade, set your workspace to Camera and select all the cameras in the System pane (you will need to expand Vicon Cameras). Please do note that there might be slight differences between the three applications.
4. Turn the Giganet/POE connected to 192.168.10.3 off then select all the cameras that just went red in the System pane.
Select the Destination IP Address drop-down and select 192.168.10.3.
5. Select the remaining (green) cameras then scroll down their Properties, select the Destination IP Address drop-down and select 192.168.10.2.
6. Turn the Giganet/POE connected to 192.168.10.3 back on. Select all the cameras in the System pane.
Save your System configuration.
You can remotely trigger the Vicon MX T-Series system to capture data, based on the input signals an MX Giganet receives from a supported third-party device connected to the Remote Start or Stop sockets.
You must create your own cable to plug into the Remote Start or Stop sockets in the rear of an MX Giganet, using RCA plugs.
More details on how to configure a remote trigger can be found in Chapter 15 of T-Series GoFurther v1.3.pdf.
The analog ADC card is a 64-channel device for generating 16-bit offset binary conversions from analog sources. The input impedance is 1 MΩ. The data sampling frequency is common to all channels; while it is independent of the camera frame rate, it is affected by the camera frame rate specified in Nexus. The maximum rate at which you can sample data via the ADC card is 192,000 samples/second (192 KHz).
No. of Channels
Max. Capture Frequency (kHz)
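Assuming the 192,000 samples/second aggregate limit is shared evenly across the active channels (an assumption; check your ADC card documentation), the maximum per-channel capture frequency can be sketched as:

```python
def max_capture_khz(n_channels, aggregate_sps=192_000):
    """Hypothetical helper: per-channel sampling limit for the 64-channel ADC
    card, assuming the aggregate sample rate is divided evenly across the
    channels that are in use."""
    return aggregate_sps / n_channels / 1000.0  # kHz per channel
```

For example, with all 64 channels active, each channel could be sampled at up to 3 kHz under this assumption.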
When you add a force plate in Nexus, you must also install the calibration file into the appropriate dialog box. The calibration file generally comes with the device from the manufacturer; for AMTI plates this can be a .plt or .acl file.
Occasionally, the selected file does not populate the drop-down box in Nexus. In this instance, you may need to hand-edit the file to remove any white space or extra characters, such as commas and carriage returns, so that Nexus can read it.
Open the file in a text editor, and remove any white space and/or extra characters not required.
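This cleanup can also be scripted. A minimal sketch (the exact characters to strip depend on your plate's file; the function name is illustrative):

```python
def clean_calibration_text(text):
    """Remove stray carriage returns, edge whitespace, trailing commas and
    blank lines that can stop Nexus reading a calibration file (sketch)."""
    lines = []
    for line in text.splitlines():
        line = line.replace("\r", "").strip()  # drop CRs and edge whitespace
        line = line.rstrip(",")                # drop trailing commas
        if line:                               # skip now-empty lines
            lines.append(line)
    return "\n".join(lines) + "\n"
```

Run it over a copy of the file and inspect the result before replacing the original.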
When thinking about latency in Vicon real-time data, it is important to remember that small amounts of latency are introduced at every stage of the pipeline. To track latency most accurately, it is important to be able to measure it end-to-end over a multi-computer pipeline: starting with the physical motion event, through capture and processing by the Vicon system, receipt of the 6-DoF sample by a client processor, the rendering pipeline, and finally the data display.
One of our customers involved in real-time virtual reality came up with a clever set-up to measure latency at every stage of their processing pipeline. This testing method is described in general below.
“We built a custom external timing device to capture the start of a motion event and track the resultant motion sample through the system pipeline. Our latency measurement scheme uses the external timing device together with a manually propelled pendulum to correlate the real and tracked motion. The timing device consists of a 100-ns clock, 6 latched data arrays, and 3 serial ports. The pendulum has an IR emitter on the swing arm and an IR detector on the base. Vicon markers are attached to the swing arm so that its trajectory may be tracked in real time. The clock is started when the swing arm passes over the IR detector. Since this is a known point in space, an identifiable event sample will be generated by the Vicon system. Then any of our software, running on any computer, can send commands via the serial ports to latch the contents of the counter as the sample propagates through the system. This allows us to measure the latencies between different stages of a multi-workstation processing pipeline. A photo-sensor attached to the display screen will automatically set latch 6 when triggered, allowing us to measure the end-to-end latency. The stored timestamps may be read back at any time over any serial port to get a list of latencies.
These tests have been run while tracking different numbers of markers and objects to determine how latency increases with the number of objects tracked. (With the Vicon system, the latency appears to increase in a linear fashion as marker count increases.)”
Cleaning Pearl markers:
Place them in a mild solution of hand soap and water and gently shake them. Then rinse in clean plain water and let them dry.
Do not rub them, as this will reduce their retro-reflectivity by removing glass beads from the surface.
They should be handled as little as possible, to reduce contamination from skin oils.
You should use Strobe mode whenever possible. However, in the following cases, Continuous mode may give better results:
NB: Strobe mode is required for mixed systems that include T-160 cameras.
If you are able to, re-calibrate your cameras before continuing data collection. If you are unable to calibrate until after data collection is completed, note the trial where the calibration needs updating.
1. After data collection, re-calibrate your system
2. Open the trial that needs an updated calibration (XCP)
3. Choose one of the following options:
a. Go to File > Import XCP
b. Within the Pipeline Tools pane, expand File Import and add Import XCP to your pipeline
4. Both options require you to navigate to the desired calibration file. The Latest Calibration file (XCP) is located here: C:\ProgramData\Vicon\Calibrations
5. Use the Import XCP file loading option or pipeline operation for each trial which needs an updated XCP file.
Tip: The Import XCP pipeline operation can be used in batch processing mode to update multiple trials efficiently.
While each camera warms up to meet the ambient temperature of its surroundings, its internal components inevitably change dimension. However, when the components reach operating temperature, their dimensions remain stable. Vicon measures the effects of warm-up and ambient temperature changes on all of its cameras.
For optimal accuracy, we recommend letting the cameras warm up for at least 30 minutes before calibrating. For capture volumes with lower ambient temperatures, the camera warm-up time could be as long as 90 minutes. Please contact Vicon Support should you have any further questions.
Important: Vicon Motion Systems Limited provides these specifications for guidance only and reserves the right to make changes to them without notice.
Before purchasing your own PC, please contact Vicon.
Important: These PC specifications are subject to continual updates. For the latest information, please contact Vicon Support.
Supported operating systems:
Supported operating systems:
For larger Shōgun systems, we recommend using an Advanced PC.
Supported operating systems:
Suitable for use with Theia3D
Supported operating systems:
Vicon does not supply this machine: the following information is provided as a guide, to enable you to buy a reliable, basic system that is capable of running up to 20 Valkyrie / Vantage / Vero cameras.
Supported operating systems:
Before purchasing your own PC, please contact Vicon.
Vicon is a registered trademark of Oxford Metrics plc.
The Bluetooth word mark is a registered trademark owned by Bluetooth SIG, Inc. and any use of such mark by Vicon is under license.
Intel is a trademark of Intel Corporation or its subsidiaries.
NVIDIA is a trademark and/or registered trademark of NVIDIA Corporation in the U.S. and/or other countries.
Blackfly is a registered trademark of Teledyne FLIR LLC.
Other trade names are those of their respective owners.
You can find the latest documentation for all current versions of software here:
The core Vicon software also installs documentation/help when you install the software.
Once installed, launch the software and select Help > View Installed Help
The following software will install Help:
Nexus 2, Shōgun, Tracker 3, Blade 3, Pegasus, CaraLive, CaraPost, Polygon 4
For all the new features in Nexus please follow the links below:
Nexus 2.3 introduces the following new features and updates.
Nexus 2.2 introduces the following new features and updates.
Nexus 2.1 introduced the following new features and updates:
The following new features were introduced in Nexus 2.0:
There are four gap filling options available in Nexus 2.
Woltring (Quintic spline)
This has slightly different behaviour for the pipeline operation compared to the manual fill.
Both versions generate a quintic spline using valid frames around the gap as seed data. The gap is filled using the interpolated values from the spline. If there are insufficient frames surrounding the gap, the fill is rejected.
Searches backwards and forwards from the gap looking for (Number of Gap Frames / 2) + 5 consecutive valid frames on each side, but will accept a minimum of 5 valid frames on either or both sides if the preferred range is not available. Searches the entire length of the clip looking for the valid frame ranges.
Searches up to (Number of Gap Frames / 2) + 5 frames backwards and forwards from the gap. Requires a minimum of 10 valid frames in this range – these are not required to be consecutive.
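The seed-frame search and fill described above can be sketched in Python. This is a simplified stand-in, assuming numpy is available: it fits a quintic polynomial to the seed frames rather than the Woltring quintic smoothing spline that Nexus actually uses, and it omits the minimum-valid-frame checks:

```python
import numpy as np

def spline_fill(values, t0, t1, pad=5):
    """Fill the gap between valid frames t0 and t1 (exclusive) by fitting a
    quintic polynomial to up to (gap / 2 + pad) valid frames on each side.
    values: list of floats with None marking invalid frames (sketch only)."""
    gap = t1 - t0 - 1
    n = gap // 2 + pad                               # seed window per side
    left = range(max(0, t0 - n + 1), t0 + 1)         # frames up to t0
    right = range(t1, min(len(values), t1 + n))      # frames from t1
    seed_t = [t for t in list(left) + list(right) if values[t] is not None]
    seed_v = [values[t] for t in seed_t]
    coeffs = np.polyfit(seed_t, seed_v, 5)           # quintic fit to seeds
    out = list(values)
    for t in range(t0 + 1, t1):
        out[t] = float(np.polyval(coeffs, t))        # interpolated fill value
    return out
```

A real implementation would reject the fill when too few valid seed frames exist, as described above.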
Manual fill operation only.
Generates linear interpolations between the valid frames either side of the gap and between the same frames in a donor trajectory. The interpolated value in the gap trajectory is then offset by the difference between the interpolated and true values in the donor trajectory. Mathematically:
Let F(t) be the value in the position of the trajectory to fill at frame t, and D(t) that of the donor trajectory. Let t0 and t1 be the valid frames before and after the gap, respectively. Then if we define the interpolated position V of trajectory G at frame t as:
V(G(t)) = ( G(t1)-G(t0) ) * ( t – t0 ) / ( t1-t0 ) + G(t0)
F(t) = V(F(t)) – V(D(t)) + D(t)
Rejects the fill if the donor trajectory has any invalid frames within the gap region, or if the donor or fill trajectory is invalid at either t0 or t1.
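The pattern fill above can be sketched in Python for a single coordinate (names are illustrative; a full implementation applies this per axis and adds the validity checks described above):

```python
def pattern_fill(fill, donor, t0, t1):
    """Fill fill[t0+1:t1] from a donor trajectory: linear interpolation
    across the gap, offset by the donor's deviation from its own linear
    interpolation. fill/donor are lists of per-frame values (one axis)."""
    def interp(traj, t):
        # V: linear interpolation between the trajectory's values at t0, t1
        return (traj[t1] - traj[t0]) * (t - t0) / (t1 - t0) + traj[t0]
    out = list(fill)
    for t in range(t0 + 1, t1):
        # F(t) = V(F(t)) - V(D(t)) + D(t)
        out[t] = interp(fill, t) - interp(donor, t) + donor[t]
    return out
```

Note that the filled values inherit the donor's frame-to-frame shape while matching the fill trajectory at t0 and t1.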
Takes a number of trajectories and assumes these move as a rigid body. The gaps in the selected trajectory are filled as if this trajectory is also a part of the same body. Manual filling is restricted to 3 donor trajectories and fills gaps in a single trajectory; the pipeline operation will use as many donor trajectories as possible, and will attempt to fill the gaps in each selected trajectory using all the other selected trajectories as donors.
Define the state at frame t as an (n x 3) matrix M(t) whose rows are the position vectors of the donor trajectories, P(t) as the position of the fill trajectory, and tx as a reference frame in which the positions of the donors and fill trajectory are all known.
We transform M into a centred matrix M̄ by subtracting the mean value of column j from each entry M(t)(i,j):
M̄(i,j) = M(i,j) – O(j), where O(j) = ( ∑(i=1→n) M(i,j) ) / n
We then create a covariance matrix C = M̄(tx)' M̄(t) and perform an SVD such that C = U S V*
We take L to be the identity matrix, except that if det( V U* ) < 0, then L(3,3) = -1. Then we can generate a rotation matrix R(tx) = V L U* (This is effectively the Kabsch algorithm to find the optimal rotation between two point clouds)
The interpolated position at frame t based on reference frame tx is then defined as:
G(t, tx) = R(tx) ( P(tx) – O(tx) ) + O(t)
F(t) = ( G(t, t1) – G(t, t0) ) * ( t – t0 ) / ( t1 – t0 ) + G(t, t0)
where t0 and t1 are the valid frames before and after the gap, respectively.
The fill is rejected if there are fewer than 3 valid donor trajectories at any frame t0 <= t <= t1, or if the trajectory to fill is invalid at t0 or t1.
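The Kabsch construction described above can be sketched in numpy (an assumption; names are illustrative) for a single reference frame tx. The full fill blends G(t, t0) and G(t, t1) across the gap as in the formula above:

```python
import numpy as np

def rigid_fill_frame(donors_tx, donors_t, p_tx):
    """Estimate G(t, tx): the position at frame t of a marker known only at
    reference frame tx, assuming it moves rigidly with the donor markers.
    donors_tx, donors_t: (n, 3) donor positions; p_tx: (3,) marker at tx."""
    O_tx = donors_tx.mean(axis=0)                 # O(tx): donor centroid at tx
    O_t = donors_t.mean(axis=0)                   # O(t): donor centroid at t
    C = (donors_tx - O_tx).T @ (donors_t - O_t)   # covariance of centred sets
    U, _, Vt = np.linalg.svd(C)
    L = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:             # guard against reflections
        L[2, 2] = -1.0
    R = Vt.T @ L @ U.T                            # Kabsch rotation R(tx)
    return R @ (p_tx - O_tx) + O_t                # G(t, tx)
```

With exact rigid motion of the donors, this recovers the marker position exactly; with soft-tissue noise it returns the least-squares rigid estimate.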
Determines the fill based on the position and orientation of a segment. The manual operation operates on a single selected trajectory, while the pipeline operation attempts to fill gaps in all trajectories associated with the selected segment.
The mathematics of this operation are simply:
G(t, tx) = R(t) R(tx)’ ( P(tx) – O(tx) ) + O(t)
F(t) = ( G(t, t1) – G(t, t0) ) * ( t – t0 ) / ( t1 – t0 ) + G(t, t0)
where R(t) is the rotation matrix defining the orientation of the segment in the world at frame t, O(t) is the origin position of the segment at frame t, and t0 and t1 are the valid frames before and after the gap, respectively.
The fill is rejected if there are no kinematics for the selected segment at any frame t0 <= t <= t1, or if the trajectory to fill is invalid at t0 or t1.
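The kinematic fill formula can be sketched similarly (numpy assumed; names are illustrative): express the marker in the segment's local frame at the reference frame, then map it back to world coordinates at frame t:

```python
import numpy as np

def kinematic_fill_frame(R_t, O_t, R_tx, O_tx, p_tx):
    """G(t, tx) for a marker moving rigidly with a segment.
    R_t, R_tx: (3, 3) segment orientations at frames t and tx;
    O_t, O_tx: (3,) segment origins; p_tx: (3,) marker position at tx."""
    local = R_tx.T @ (p_tx - O_tx)   # R(tx)' ( P(tx) - O(tx) ): local coords
    return R_t @ local + O_t         # back to world at frame t
```

As with the rigid body fill, the final value blends G(t, t0) and G(t, t1) linearly across the gap.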
To configure force plates for analog data capture:
1. Go to the Resources pane > Systems tab, click the Go Live button.
2. In the System tab, right-click the Devices node, point to Add Analog Device and select the proper force plates.
The selected force plate node automatically expands to display the newly created device. If the appropriate type is not displayed, contact Vicon Support.
3. In the Properties section at the bottom of the System resources pane, select Show Advanced.
4. In the General section
5. In the Source section
Please note: Expand the force plate node to expose the Force, Moment and CoP (Center of Pressure) channels. A green arrow indicates a connected source device and a yellow yield symbol indicates that a channel has not been assigned a pin.
6. In the Dimensions section, add values from the force plate manufacturer’s manual if not already entered.
7. In the Positions section, position the force plate with respect to the wand and the origin of the plate.
8. In the Orientation section, orient the plate so that it makes sense with respect to your capture volume.
9. In the Origin section, add values from the force plate manufacturer’s manual if not already entered.
10. To tare the force plate at zero load:
11. In the capture volume, have someone step onto the force plate. You should see the force vector display in real time.
For further details on configuring analog force plates, please refer to the Configure Force Plates section of the installed Nexus Help.
To facilitate working with very large unprocessed data files, you can choose which files will be loaded (.x2d camera data and/or .x1d analog data), and how many frames of the trial are loaded.
To do this, click Show Trial Loading Options on the ProEclipse/Data Management toolbar at the top right of the ProEclipse/Data Management window. A new area called Raw Data Loading Options appears.
How to work with large trial data:
1. To select only the required frames, in the Raw Data Loading Options area, select Load Range From and type the start frame in the first box and the end frame in the second box.
2. If required, choose whether to load both MX centroid/grayscale data (X2D) and raw analog data (X1D) files, or only one of these options.
3. Process the file(s) as normal.
Only the selected range and files will be processed. It is recommended that you save the section under a new name using File | Copy As…
Vicon Nexus 2 is compatible with, and has been tested with MATLAB R2013b. Nexus may function with other versions of MATLAB, but other versions have not been extensively tested by Vicon.
To use MATLAB with Vicon Nexus 2, ensure that, in addition to installing MATLAB, you install .NET Framework version 4.5.
To set MATLAB path:
Once Nexus, MATLAB and the appropriate .NET Framework version are installed, you will want to set the path.
Windows 7: Go to the Start Menu > All Programs > Vicon > Nexus 2.X > Set MATLAB Path.
Windows 10: Start > All Apps > Vicon > Set MATLAB Path
This will give MATLAB access to the Nexus scripting functions.
To Configure MATLAB for scripting with Nexus:
Within MATLAB, create an instance of the ViconNexus object to get access to its methods; type the following line in the Command Window:
vicon = ViconNexus()
To obtain MATLAB Command List:
To see which functions you have access to, enter the following line in the Command Window:
To obtain MATLAB Command Help:
If you need guidance for the use of any of the displayed functions you can run either of the following lines in the Command Window.
To troubleshoot MATLAB Scripts:
To troubleshoot or run your script, you must have a trial open within Nexus
For further information please see the installed or online help guide. This can be found under the Help tab within Nexus.
To launch Python:
1. Click Start and point to All Programs (or press the Windows key) and then start to type Python.
2. Click the Python symbol.
3. To automatically configure Python for scripting with Nexus, at the command prompt, enter the following:
import ViconNexus
vicon = ViconNexus.ViconNexus()
To obtain Python Command List:
Ensure you have launched and configured Python as described above, then at the Python command prompt, enter:
To obtain Python Command Help:
To obtain help on each command that you can use with Nexus, at the Python command prompt, enter:
Where commandName is the command for which you want to display help.
For example, the following command displays help on GetTrajectory:
Help on GetTrajectory is displayed.
Vicon Plug-in Gait (conventional gait model) has been ported into Matlab code and is freely available for download from the associated page below or by browsing the download section.
Plug-in Gait Matlab forms part of the Advanced Gait Workflow (AGW) installer.
This code runs Plug-in Gait on non-AGW trials (as per the conventional gait workflow). Plug-in Gait Matlab also makes use of functional joint centres (hip and knee) as created by SCoRE and SARA.
The Plug-in Gait Matlab code can also be run in conjunction with native Plug-in Gait, allowing a direct comparison between the two versions, which provide the same results, assuming the code has not been edited.
Please note: Plug-in Gait Matlab is not intended for clinical use.
Optimum Common Shape Technique (OCST) is a mathematical approach that finds the Average or Common shape for selected sets of markers (3 or more). The first pass through all frames allows the process to see each shape configuration. From this a common shape is calculated. The second pass through the data for processing forces the common shape and creates virtual markers. Alternatively, the real trajectories can be left in place (not moved or replaced) but the Segment elements can be calculated using the new positions.
OCST is important because it allows a non-rigid cluster (skin-based markers) to be described as if it were truly rigid. Forcing a virtual rigidity allows the use of other algorithms that rely on an expectation of rigidity (e.g. SCoRE and SARA).
The OCST method has been implemented in Nexus 2.
Research Publication: W.R. Taylor, E.I. Kornaropoulos, G.N. Duda, S. Kratzenstein, R.M. Ehrig, A. Arampatzis, M.O. Heller. Repeatability and reproducibility of OSSCA, a functional approach for assessing the kinematics of the lower limb. Gait & Posture 32 (2010) 231–236
Symmetrical Center of Rotation Estimation (SCoRE) is an optimization algorithm that uses functional calibration frames between a Parent and Child segment to estimate the Center Point of Rotation. The Parent and Child segments are expected to be rigid; these can use either Rigid Cluster or Skin Based Markers + OCST processing.
The main value of this operation is to provide a more repeatable Hip Joint center location. SCoRE locates the joint center only; kinematics and kinetics are still calculated by a full biomechanical model (e.g. Plug-in Gait).
The SCoRE method has been implemented in Nexus 2.
Research Publication: Rainald M. Ehrig, William R. Taylor, Georg N. Duda, Markus O. Heller. A survey of formal methods for determining the centre of rotation of ball joints. Journal of Biomechanics 39 (2006) 2798–2809
Symmetrical Axis of Rotation Approach (SARA) is an optimization algorithm that uses functional calibration frames between a Parent and Child segment to estimate the Axis of Rotation. The Parent and Child segments are expected to be rigid; these can use either Rigid Cluster or Skin Based Markers + OCST processing.
The main value of this operation is to provide a more repeatable Knee Joint Axis. SARA locates the joint axis only; kinematics and kinetics are still calculated by a full biomechanical model (e.g. Plug-in Gait).
The SARA method has been implemented in Nexus 2.
Research Publication: Rainald M. Ehrig, William R. Taylor, Georg N. Duda, Markus O. Heller. A survey of formal methods for determining functional joint axes. Journal of Biomechanics 40 (2007) 2150–2157
There are several references available that pertain to the Woltring Filter. In-depth information about this filter can be found at the International Society of Biomechanics web page. The direct web address is:
There are numerous resources at this site that explain the Woltring filter as well as a link to download the original Fortran code.
The question of Butterworth vs. Woltring is actually not that complex. On the web page referenced above, Woltring has shown that spline smoothing is equivalent to a double Butterworth filter; the difference is that with splines it is possible to process data with unequal sampling intervals, and the boundary conditions are well defined. In the text Three-Dimensional Analysis of Human Movement by Allard, Stokes, and Blanchi (page 93, section: Spline Package GCVSPL), it is noted that for periodic, equidistantly sampled splines, the equivalence with the double Butterworth filter (Equation 5.14) can be demonstrated via a Fourier transformation and a variational argument.
So essentially, using a Woltring filter is equivalent to using a Butterworth filter. Because the Butterworth filter is an analog filter that has been in use for a long time, you would naturally expect to find many references that use this filter. The history of the Woltring filter is relatively young, therefore its use may not be as well documented. The development of this filter was designed to apply more specifically to kinematic data which is prevalent in biomechanics research.
The MSE setting (mean squared error) and GCV setting (Generalized Cross-Validation) are documented on the website listed above.
Our own investigation yielded this response: GCV makes an estimate of noise by doing General Cross Validation for all the data points and uses some statistical processes to choose a noise level with which to filter to give the final results. The MSE method allows you to simply type the noise level in, and the spline is fitted to the data points allowing the given level of tolerance. The units are in mm^2. This processing method is thus quicker and ensures the same level of smoothing for all trajectories, whereas the GCV smoothing can vary from trajectory to trajectory. It is arguable which approach is better. If a particular site is very familiar with the details of this filter, they could measure the noise in their system and apply an appropriate MSE value. In truth, the MSE option allows people who want to get graphs as smooth as VCM to do that, by experimenting with their values.
Our implementation was taken directly from the work done by Herman Woltring.
See his original work in the following:
In addition, Mr. Woltring wrote up this topic in Chapter 5 (Smoothing and Differentiation Techniques Applied to 3-D Data) in a text dedicated to the topic. This text, Three-Dimensional Analysis of Human Movement, was edited by Paul Allard, Ian A.F. Stokes, and Jean-Pierre Blanchi. It was copyrighted in 1995 and published by Human Kinetics. They can be reached at 800-747-4457 or at http://www.humankinetics.com
Some others that may be useful:
VVID files can be viewed by using the VVID Viewer.
The VVID Video Viewer is a tool that allows users to view Nexus’ proprietary raw video format, VVID.
This file can be downloaded from the Downloads > Utilities and SDKs section.
Bodybuilder Example Models can be downloaded from the associated pages below or by browsing the download section.
The file contains 34 models ranging from a Golf model to a Flow model calculating inter-segmental power flows.
There is a limit to the size that a Bodybuilder model and its associated *.mp file can be. The limit on the length of the total combined model script (*.mod + *.mp) is 32766 characters. For Bodybuilder version 3.51 and later, a warning dialogue preventing the entry of too much text is presented when the limit is reached. No warning dialogue is presented in Bodybuilder versions prior to 3.51, but these will fail to save models that exceed this limit. The most recent release of Bodybuilder has removed this limitation.
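If you are targeting an older Bodybuilder version, a quick check of the combined script length can be sketched as follows (the function name and return shape are illustrative):

```python
def check_model_size(mod_text, mp_text, limit=32766):
    """Return (ok, total) where ok is True when the combined .mod + .mp
    script length fits within the 32766-character limit of older
    Bodybuilder versions."""
    total = len(mod_text) + len(mp_text)
    return total <= limit, total
```

Read both files as text and pass their contents in; if ok is False, the model would fail to save in versions prior to 3.51.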
The Vicon DataStream Software Development Kit (SDK) allows easy programmable access to the information contained in the Vicon DataStream. The function calls within the SDK allow users to connect to and request data from the Vicon DataStream.
The Virtual-Reality Peripheral Network (VRPN) is a library that provides an interface between 3D immersive applications and tracking systems. Vicon Tracker 3 has a built-in VRPN server that will stream data natively into these applications or allow for the development of simple interfaces using VRPN.
Virtools, a commercial application, has support for VRPN and can be configured to connect with Vicon Tracker as follows.
A full VRDevice.cfg file is included below.
Head@TrackerPC is the way Virtools connects to the VRPN server within Tracker. The format is object_name@PC_Name. This configuration file will look for an object called “Head” on the Tracker server called “TrackerPC.”
vrpnTracker_0 Head@TrackerPC
neutralPosition_0 0.0 0.0 0.0
neutralQuaternion_0 0.0 0.0 0.0 1.0
axisPermute_0 0 2 1
axisSign_0 1 1 1
trackerScale_0 1
TrackerGroup_0 T0:0:6
This VRDevice.cfg also contains other directives that permute the axes and set their signs, remapping the Vicon coordinate convention onto that of Virtools:
axisPermute_0 0 2 1
axisSign_0 1 1 1
To complete the process, do the following:
Set trackerScale_0 in your VRDevice.cfg file to 0.001 (converts Vicon millimetres to Virtools metres).
For a full description of any of these configuration options, please refer to the Virtools documentation.
The Integer format measures the maximum range between real data points and determines a scale factor. The data is then scaled to that range when saved to the c3d file, and all values are written in the Integer format. When the data is then read into another program (e.g. Polygon), the scale factor is applied to the data, converting it back into real values. The Real format saves the data as is, without multiplication by a scale factor, and writes it to the c3d file using the Real format. Certain types of data are best suited to the Real format, since no resolution is given up in the storage of the data. Not all programs can read both Integer and Real formatted c3d files, so take care when choosing your preferred option. More details on the .c3d format are available on C3D.org
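As a hypothetical illustration (not an actual C3D writer), the integer round trip can be sketched as follows; the marker values are invented:

```python
import numpy as np

# Hypothetical marker coordinates (mm) to be stored in Integer format.
marker = np.array([1234.567, -987.123, 0.045, 456.789])

# Integer format: choose a scale factor so the data range fits in int16.
scale = np.abs(marker).max() / 32767.0
stored_ints = np.round(marker / scale).astype(np.int16)

# A reader (e.g. Polygon) multiplies the integers back by the scale factor.
recovered = stored_ints.astype(float) * scale

# Resolution given up is at most half the scale factor per sample.
max_err = np.abs(recovered - marker).max()
print(f"scale={scale:.6f} mm/count, max rounding error={max_err:.6f} mm")
```

This makes the trade-off concrete: the wider the data range, the larger the scale factor and the coarser the stored resolution, which is why the Real format loses nothing.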
Plug-in Gait is based on the following journal papers:
Bell, A.L., Pedersen, D.R. & Brand, R.A. (1990). A comparison of the accuracy of several hip center location prediction methods. Journal of Biomechanics, 23, 617-621
Davis, R., Ounpuu, S., Tyburski, D. & Gage, J. (1991). A gait analysis data collection and reduction technique. Human Movement Science, 10, 575-587
Kadaba, M.P., Ramakrishnan, H.K. & Wootten, M.E. (1987). J.L. Stein, ed. Lower extremity joint moments and ground reaction torque in adult gait. Biomechanics of Normal and Prosthetic Gait. BED Vol (4)/DSC Vol 7. American Society of Mechanical Engineers. 87-92.
Kadaba, M.P., Ramakrishnan, H.K., Wootten, M.E., Gainey, J., Gorton, G. & Cochran, G.V.B. (1989). Repeatability of kinematic, kinetic, and electromyographic data in normal adult gait. Journal of Orthopaedic Research, 7, 849-860
Kadaba, M.P., Ramakrishnan, H.K. & Wootten, M.E. (1990). Lower extremity kinematics during level walking. Journal of Orthopaedic Research, 8 (3), 383-392
Macleod, A. and Morris, J.R.W. (1987). Investigation of inherent experimental noise in kinematic experiments using superficial markers. Biomechanics X-B, Human Kinetics Publishers, Inc., Chicago, 1035-1039.
Ramakrishnan, H.K., Wootten, M.E. & Kadaba, M.P. (1989). On the estimation of three dimensional joint angular motion in gait analysis. 35th Annual Meeting, Orthopaedic Research Society, February 6-9, 1989, Las Vegas, Nevada.
Ramakrishnan, H.K., Masiello G. & Kadaba M.P. (1991). On the estimation of the three dimensional joint moments in gait. 1991 Biomechanics Symposium, ASME 1991, 120, 333-339.
Sutherland, D.H. (1984). Gait Disorders in Childhood and Adolescence. Williams and Wilkins, Baltimore.
Winter, D.A. (1990) Biomechanics and motor control of human movement. John Wiley & Sons, Inc.
These references have information on kinematic and kinetic calculations, as well as anthropometrics and repeatability of the model. The upper body model has not been validated in any peer reviewed journal papers and therefore there are no articles on repeatability of the upper body model.
The required measurements for full body plug-in gait and lower body plug-in gait include mass, height, leg length, knee width, ankle width, shoulder offset, elbow width, wrist width, and hand thickness.
Mass should be entered in kilograms and all lengths and distances in millimetres. The measurements for inter-ASIS distance, ASIS-trochanter distance, and tibial torsion are optional; if they are not entered, the model will calculate them.
Here are the precise required measurements for the model:
Mass: The mass of the subject in kilograms (1 kg ≈ 2.2 lb)
Height: The height of the subject.
Leg length: Measured from the ASIS to the medial malleolus. If a patient cannot straighten his/her legs, take the measurement in two pieces: ASIS to knee and knee to medial malleolus.
Knee width: Measurement of the knee width about the flexion axis.
Ankle width: Measurement of the ankle width about the medial and lateral malleoli.
Shoulder offset: The vertical distance from the center of the glenohumeral joint to the marker on the acromion clavicular joint. Some researchers have used the (anterior/posterior girth)/2 to establish a guideline for the parameter.
Elbow width: The distance between the medial and lateral epicondyles of the humerus.
Wrist width: Should probably be called “wrist thickness.” It is the anterior (palm side) and posterior (back) distance of the wrist at the position where a wrist marker bar is attached. If the wrist markers are attached directly to the skin, this value should be zero.
Hand thickness: The distance between the dorsal and palmar surfaces of the hand at the point where you attach the hand marker.
The following measurements are optional and/or calculated by the model:
Inter-ASIS distance: The model will calculate this distance based on the position of the LASI and RASI markers. If you are collecting data on an obese patient and cannot properly place the ASIS markers, place those markers laterally, preserving the vector direction and level of the ASIS. Palpate the LASI and RASI points, measure the distance between them manually, and enter it into the appropriate field.
Head Angle: The absolute angle of the head with the global coordinate system. This is calculated for you if you check the option box when processing the static trial.
ASIS-Trochanter distance: The perpendicular distance from the trochanter to the ASIS point. If this value is not entered, then a regression formula is used to calculate the hip joint center. If this value is entered, it will be factored into an equation which represents the hip joint center.
Tibial torsion: The angle between the ankle flexion axis and the knee flexion axis. The sign convention is that if a negative value of tibial torsion is entered, the ankle flexion/extension axis will be adjusted from the KAD’s defined position to a position dictated by the tibial torsion value.
Thigh rotation offset: When a KAD is used, this value is calculated to account for the position of the thigh marker. By using the KAD, placement of the thigh marker in the plane of the hip joint center and the knee joint center is not crucial. Please note that if you do not use a KAD, this value will be reported as zero because the model is assuming that the thigh marker has been placed exactly in the plane of the hip joint center and the knee joint center. This value is calculated for you.
Shank rotation offset: Similar to the thigh rotation offset. This value is calculated if a KAD is present, and removes the importance of placing the shank marker in the exact plane of the knee joint center and ankle joint center. If you do not use a KAD, these values will be zero. This value is calculated for you.
The first step in the shoulder modelling process is the definition of the shoulder, elbow and wrist centres and the Thorax, Clavicle and Humerus segments. The shoulder angle calculations are then based on YXZ Euler angle rotations between the Thorax and the Humerus Segments as follows:
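Assuming the segment orientations are already available as rotations (the marker-to-segment fitting step is omitted), the YXZ decomposition between Thorax and Humerus can be sketched with SciPy. The orientations and angle names below are invented for illustration, and the thorax-inverse-times-humerus convention is an assumption, not Vicon's published formula:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical segment orientations (segment-to-world rotations).
r_thorax  = R.from_euler("YXZ", [5.0, 2.0, -3.0], degrees=True)
r_humerus = R.from_euler("YXZ", [40.0, 15.0, 10.0], degrees=True)

# Relative rotation of the humerus with respect to the thorax,
# decomposed in the intrinsic YXZ order used for the shoulder angles.
r_rel = r_thorax.inv() * r_humerus
flex, abd, rot = r_rel.as_euler("YXZ", degrees=True)
print(f"flexion={flex:.1f}, abduction={abd:.1f}, rotation={rot:.1f}")
```

Intrinsic "YXZ" here means rotation about Y first, then about the shifted X', then the twice-shifted Z'', matching the axis notation used in the tables below.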
The explanation for the sometimes strange angles seen when using the above method for determining shoulder motion lies in the occurrence of ‘Gimbal Lock’ and in a quirk of clinical descriptions of motion known as ‘Codman’s Paradox’. Gimbal Lock occurs when using Euler angles and any of the rotation angles becomes close to 90 degrees, for example lifting the arm to point directly sideways or directly in front (shoulder abduction about an anterior axis or shoulder flexion about a lateral axis, respectively).
In either of these positions the other two axes of rotation become aligned with one another, making it impossible to distinguish them: a singularity occurs and the calculation of the angles becomes ill-defined. For example, assume that the humerus is being rotated in relation to the thorax in the order Y,X,Z and that the rotation about the X-axis is 90 degrees. The Y-axis rotation is performed first and correctly. The X-axis rotation also occurs correctly BUT rotates the Z axis onto the Y axis. Thus, any rotation about the Y-axis can also be interpreted as a rotation about the Z-axis.
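The aligned-axes effect can be checked numerically. With the middle (X) rotation at 90 degrees, two different YXZ triplets (chosen here arbitrarily, with the same difference between the Y and Z angles) describe the same physical orientation:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# With the middle (X) rotation at 90 degrees, the first (Y) and last (Z)
# rotation axes line up: only the difference Y - Z matters.
a = R.from_euler("YXZ", [30.0, 90.0, 10.0], degrees=True)
b = R.from_euler("YXZ", [50.0, 90.0, 30.0], degrees=True)  # same Y - Z = 20

# The two angle triplets describe the SAME physical orientation.
diff_deg = np.degrees((a.inv() * b).magnitude())
print(f"difference between the two orientations: {diff_deg:.6f} deg")
```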
True gimbal lock is rare, arising only when two axes are close to perfectly aligned. The second issue, however, is that in each non-singular case there are two possible angular solutions, giving rise to the phenomenon of “Codman’s Paradox” in anatomy (Codman, E.A. (1934). The Shoulder: Rupture of the Supraspinatus Tendon and other Lesions in or about the Subacromial Bursa. Boston: Thomas Todd Company), where different combinations of numerical values of the three angles produce similar physical orientations of the segment. This is not actually a paradox, but a consequence of the non-commutative nature of three-dimensional rotations; it can be explained mathematically through the properties of rotation matrices (Politti, J.C., Goroso, G., Valentinuzzi, M.E., & Bravo, O. (1998).
Codman’s Paradox of the Arm Rotations is Not a Paradox: Mathematical Validation. Medical Engineering & Physics, 20, 257-260). Codman proposed that the completely elevated humerus could be shown to be in either extreme external rotation or in extreme internal rotation by lowering it either in the coronal or sagittal plane respectively, without allowing any rotation about the humeral longitudinal axis.
For a demonstration of this, fully raise your arm in the sagittal plane until it points upward, then lower it in the coronal plane back to your side: without ever rotating the arm about its long axis, it finishes rotated about that axis relative to where it started.
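A numerical sketch of Codman's observation, under the assumed convention of flexion about a lateral Y axis, ab/adduction about an anterior X axis, and the humeral long axis along Z:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Raise the arm 180 deg in the sagittal plane (flexion about the lateral
# Y axis), then lower it 180 deg in the coronal plane (ab/adduction about
# the anterior X axis). Neither step commands any rotation about the
# humeral long (Z) axis...
sequence = R.from_euler("x", 180, degrees=True) * R.from_euler("y", 180, degrees=True)

# ...yet the result is a pure 180 deg rotation about the long axis.
pure_axial = R.from_euler("z", 180, degrees=True)
print(np.allclose(sequence.as_matrix(), pure_axial.as_matrix()))
```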
This ambiguity can cause switching between one solution and the other, resulting in sudden discontinuities. A combination of ‘Gimbal Lock’ and ‘Codman’s Paradox’ can lead to unexpected results when joint modelling is carried out.
The table below displays the Upper Body Segment angles from Plug-in Gait.
All upper body angles are calculated in rotation order YXZ.
As Euler angles are calculated, each rotation causes the axis for the subsequent rotation to be shifted. X’ indicates an axis which has been acted upon and shifted by one previous rotation, X’’ indicates a rotation axis which has been acted upon and shifted by two previous rotations.
| Angles | Rotation | Positive Rotation | Axis | Direction | Angles | Rotation | Positive Rotation | Axis | Direction |
|---|---|---|---|---|---|---|---|---|---|
| LHeadAngles | 1 | Backward Tilt | Prg.Fm. Y | Clockwise | RHeadAngles | 1 | Backward Tilt | Prg.Fm. Y | Clockwise |
| | 2 | Right Tilt | Prg.Fm. X’ | Anti-clockwise | | 2 | Left Tilt | Prg.Fm. X’ | Clockwise |
| | 3 | Right Rotation | Prg.Fm. Z’’ | Clockwise | | 3 | Left Rotation | Prg.Fm. Z’’ | Anti-clockwise |
| LThoraxAngles | 1 | Backward Tilt | Prg.Fm. Y | Clockwise | RThoraxAngles | 1 | Backward Tilt | Prg.Fm. Y | Clockwise |
| | 2 | Right Tilt | Prg.Fm. X’ | Anti-clockwise | | 2 | Left Tilt | Prg.Fm. X’ | Clockwise |
| | 3 | Right Rotation | Prg.Fm. Z’’ | Clockwise | | 3 | Left Rotation | Prg.Fm. Z’’ | Anti-clockwise |
| LNeckAngles | 1 | Forward Tilt | Thorax Y | Clockwise | RNeckAngles | 1 | Forward Tilt | Thorax Y | Clockwise |
| | 2 | Left Tilt | Thorax X’ | Clockwise | | 2 | Right Tilt | Thorax X’ | Anti-clockwise |
| | 3 | Left Rotation | Thorax Z’’ | Clockwise | | 3 | Right Rotation | Thorax Z’’ | Anti-clockwise |
| LSpineAngles | 1 | Forward Thorax Tilt | Pelvis Y | Anti-clockwise | RSpineAngles | 1 | Forward Thorax Tilt | Pelvis Y | Anti-clockwise |
| | 2 | Left Thorax Tilt | Pelvis X’ | Clockwise | | 2 | Right Thorax Tilt | Pelvis X’ | Anti-clockwise |
| | 3 | Left Thorax Rotation | Pelvis Z’’ | Anti-clockwise | | 3 | Right Thorax Rotation | Pelvis Z’’ | Clockwise |
| LShoulderAngles | 1 | Flexion | Thorax Y | Anti-clockwise | RShoulderAngles | 1 | Flexion | Thorax Y | Anti-clockwise |
| | 2 | Abduction | Thorax X’ | Anti-clockwise | | 2 | Abduction | Thorax X’ | Clockwise |
| | 3 | Internal Rotation | Thorax Z’’ | Anti-clockwise | | 3 | Internal Rotation | Thorax Z’’ | Clockwise |
| LElbowAngles | 1 | Flexion | Humeral Y | Anti-clockwise | RElbowAngles | 1 | Flexion | Humeral Y | Clockwise |
| | 2 | – | Humeral X’ | – | | 2 | – | Humeral X’ | – |
| | 3 | – | Humeral Z’’ | – | | 3 | – | Humeral Z’’ | – |
| LWristAngles | 1 | Ulnar Deviation | Radius X | Clockwise | RWristAngles | 1 | Ulnar Deviation | Radius X | Anti-clockwise |
| | 2 | Extension | Radius Y’ | Clockwise | | 2 | Extension | Radius Y’ | Clockwise |
| | 3 | Internal Rotation | Radius Z’’ | Clockwise | | 3 | Internal Rotation | Radius Z’’ | Anti-clockwise |
The table below displays the Lower Body Segment angles from Plug-in Gait.
All lower body angles are calculated in rotation order YXZ, except for the ankle angles, which are calculated in order YZX.
| Angles | Rotation | Positive Rotation | Axis | Direction | Angles | Rotation | Positive Rotation | Axis | Direction |
|---|---|---|---|---|---|---|---|---|---|
| LPelvisAngles | 1 | Anterior Tilt | Prg.Fm. Y | Anti-clockwise | RPelvisAngles | 1 | Anterior Tilt | Prg.Fm. Y | Anti-clockwise |
| | 2 | Upward Obliquity | Prg.Fm. X’ | Anti-clockwise | | 2 | Upward Obliquity | Prg.Fm. X’ | Clockwise |
| | 3 | Internal Rotation | Prg.Fm. Z’’ | Clockwise | | 3 | Internal Rotation | Prg.Fm. Z’’ | Anti-clockwise |
| LFootProgressAngles | 1 | – | Prg.Fm. Y | – | RFootProgressAngles | 1 | – | Prg.Fm. Y | – |
| | 2 | – | Prg.Fm. X’ | – | | 2 | – | Prg.Fm. X’ | – |
| | 3 | Internal Rotation | Prg.Fm. Z’’ | Clockwise | | 3 | Internal Rotation | Prg.Fm. Z’’ | Anti-clockwise |
| LHipAngles | 1 | Flexion | Pelvis Y | Clockwise | RHipAngles | 1 | Flexion | Pelvis Y | Clockwise |
| | 2 | Adduction | Pelvis X’ | Clockwise | | 2 | Adduction | Pelvis X’ | Anti-clockwise |
| | 3 | Internal Rotation | Pelvis Z’’ | Clockwise | | 3 | Internal Rotation | Pelvis Z’’ | Anti-clockwise |
| LKneeAngles | 1 | Flexion | Thigh Y | Anti-clockwise | RKneeAngles | 1 | Flexion | Thigh Y | Anti-clockwise |
| | 2 | Varus/Adduction | Thigh X’ | Clockwise | | 2 | Varus/Adduction | Thigh X’ | Anti-clockwise |
| | 3 | Internal Rotation | Thigh Z’’ | Clockwise | | 3 | Internal Rotation | Thigh Z’’ | Anti-clockwise |
| LAnkleAngles | 1 | Dorsiflexion | Tibia Y | Clockwise | RAnkleAngles | 1 | Dorsiflexion | Tibia Y | Clockwise |
| | 2 | Inversion/Adduction | Tibia X’’ | Clockwise | | 2 | Inversion/Adduction | Tibia X’’ | Anti-clockwise |
| | 3 | Internal Rotation | Tibia Z’ | Clockwise | | 3 | Internal Rotation | Tibia Z’ | Anti-clockwise |
You can collect data without wands and still get similar results using Plug-in Gait. You can even generate a three-dimensional skeleton. The origin of the wands has a little history behind it, but the basic intent is to make the rotation about the long axis of the segment more obvious. A marker directly on the shank will rotate the same amount as a marker on a rod; however, the closer a marker is to the segment, the harder it is to see the rotation. On subjects with smaller segments, it may not be advisable to place the markers directly on the shank.
The ‘Progression Direction’ is defined in order to represent the general direction in which the subject walks in the global coordinate system. A coordinate system matrix (similar to a segment definition) is then defined and denoted the ‘Progression Frame’. This allows the calculation by Plug-in Gait and Polygon of ‘progression’ related variables (HeadAngles, ThoraxAngles, PelvisAngles, FootProgressAngles, Step Width) in relation to this frame.
In Plug-in Gait, the lower body Progression Direction is found by looking at the first and last valid position in a trial of the LASI marker. If the distance between the first and last valid position of the LASI marker is greater than a threshold of 800 mm, the X displacement of LASI is compared to its Y displacement. If the X displacement is greater, the subject is deemed to have been walking along the X axis, either positively or negatively, depending on the sign of the X offset. If the Y displacement is greater, the subject is deemed to have been walking along the Y axis, either positively or negatively, depending on the sign of the Y offset.
If the distance between the first and last frame of the LASI marker is less than a threshold of 800 mm however, the Progression Direction is calculated using the direction the pelvis is facing during the middle of the trial. This direction is calculated as a mean over 10% of the frames of the complete trial. Within these frames, only those which have data for all the pelvis markers are used. For each such frame, the rear pelvis position is calculated from either the SACR marker directly, or the centre point of the LPSI and RPSI markers. The front of the pelvis is calculated as the centre point between the LASI and RASI markers. The pelvis direction is calculated as the direction vector from the rear position to the front. This direction is then used in place of the LASI displacement, as described above and compared to the laboratory X and Y axes to choose the Progression Direction.
Following this definition, the Progression Direction in which the subject walks is assumed to be one of four possibilities: global positive X, global positive Y, global negative X, or global negative Y; it is never a diagonal between them.
In Plug-in Gait, the upper body Progression Direction is adopted from the lower body’s Progression Direction, if one exists. If no lower body Progression Direction has been calculated, an upper body Progression Direction is calculated independently, in the same way as for the lower body. C7 is tested first to determine whether the subject moved a distance greater than the threshold. If not, the other thorax markers (T10, CLAV and STRN) are used to determine the general direction the thorax faces, from a mean over 10% of the frames in the middle of the trial.
Once the Progression Direction along one of the four possible axes directions is determined, the Progression Frame is defined such that its X-axis is oriented positively along this Progression Direction. The Z axis is always assumed to be directed vertically upwards and the Progression Frame is defined following the right-hand rule. The diagram below shows this clearly for each of four circumstances where a subject walks along the different axis directions.
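The direction selection and frame construction described above can be sketched as follows. This is a simplified sketch: the mid-trial pelvis-direction fallback and the upper-body variant are omitted, and the marker positions are invented:

```python
import numpy as np

def progression_frame(lasi_first, lasi_last, threshold_mm=800.0):
    """Pick +/-X or +/-Y from the LASI displacement, then build a frame
    with X along the Progression Direction and Z vertically up."""
    d = np.asarray(lasi_last, float) - np.asarray(lasi_first, float)
    if np.linalg.norm(d) <= threshold_mm:
        raise ValueError("subject moved < threshold; use mid-trial pelvis direction")
    if abs(d[0]) >= abs(d[1]):                 # mostly along lab X
        x = np.array([np.sign(d[0]), 0.0, 0.0])
    else:                                      # mostly along lab Y
        x = np.array([0.0, np.sign(d[1]), 0.0])
    z = np.array([0.0, 0.0, 1.0])              # Z always vertically up
    y = np.cross(z, x)                         # right-hand rule
    return np.column_stack([x, y, z])

# Invented LASI positions (mm): subject walks roughly along lab +X.
frame = progression_frame([100.0, 50.0, 950.0], [4100.0, 120.0, 960.0])
print(frame[:, 0])  # the Progression (X) axis of the frame
```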
The ‘Progression Angles’ of the head, thorax, pelvis and feet, calculated by Plug in Gait, are the YXZ Cardan angles calculated from the rotation transformation of the subject’s Progression Frame for the trial, onto the orientation of each of these segments on a sample-by-sample basis.
The ‘Step Length’ calculated by Plug-in Gait, is the distance when the foot down event occurs, between the chosen marker (TOE by default) and the opposite foot’s corresponding marker, ALONG the Progression Direction. For example, with the LTOE and RTOE markers chosen in the ‘Gait Cycle Parameter Generation Options’ for the ‘Generate Gait Cycle Parameters’ Workstation pipeline entry, the Left Step Length will be calculated as the distance between the LTOE marker and the RTOE marker along the Progression Direction.
The ‘Stride Length’ calculated by Plug-in Gait, is the distance moved by the chosen marker (TOE by default), ALONG the Progression Direction between one foot down event and the next (i.e. from the start to the end of the gait cycle). For example, with the LTOE and RTOE markers chosen in the ‘Gait Cycle Parameter Generation Options’ for the ‘Generate Gait Cycle Parameters’ Workstation pipeline entry, the Left Stride Length will be calculated as the distance between the LTOE marker at the occurrence of one foot down event and the LTOE marker at the occurrence of the next foot down event, along the Progression Direction.
The ‘Step Width’ calculated by Polygon, is the distance when the foot down event occurs, between the chosen marker (TOE by default) and the opposite foot’s corresponding marker, NORMAL to the Progression Direction. For example, with the LTOE and RTOE markers chosen in the “Analysis” node’s Properties, the Left Step Width will be calculated as the distance between the LTOE marker and the RTOE marker normal to the Progression Direction
In Nexus the Generate Gait Cycle Parameters Pipeline Operation can be used in conjunction with the Gait events to calculate standard Gait Cycle Spatial and Temporal Parameters.
In Nexus, the parameters are based on the first cycle for each side where all the necessary events are found.
Polygon can re-calculate the parameters and define them using the first cycle (default) or the average of all defined cycles. [To use the average of all defined cycles in Polygon, right click on the trial subject’s Analysis node, select Properties, and check the box labeled Use Average of Nominated Cycles.]
These parameters and their available units (the units can be changed in the Generate Gait Cycle Parameters Options box) are:
Cadence – 1/s; 1/min; steps/s; steps/min; strides/s; strides/min
Walking speed – m/s; cm/s; mm/s; in/s
Step Time – s; %
Foot Off/Contact events – s; %
Single/Double Support – s; %
Stride/Step Length – m; cm; mm; in
The distance parameters are based on the marker position at the time of the event; by default the toe marker (LTOE for left, RTOE for right) is used for the calculation. This can be changed in the Options box of the Generate Gait Cycle Parameters Pipeline Operation.
Cadence: number of strides per unit time (usually per minute). The left and right cadence are first calculated separately based on either a single stride or an average of the defined gait cycles. The overall cadence is the average of the left and the right.
Stride time: time between successive ipsilateral foot strikes.
Step time: time between contralateral and the following ipsilateral foot contact, expressed in seconds or %GC.
Foot contact/off events are all expressed relative to the ipsilateral gait cycle, either as absolute time from ipsilateral foot contact or as %GC, as per the Polygon preference. Single and double support calculations are only valid for walking, i.e. when the contralateral foot off/contact events happen within the ipsilateral stance phase.
Foot off: time of ipsilateral foot off.
Opposite foot contact: time of contralateral foot contact.
Opposite foot off: time of contralateral foot off.
Single support: time from contralateral foot off to contralateral foot contact.
Double support: time from ipsilateral foot contact to contralateral foot off plus time from contralateral foot contact to ipsilateral foot off.
Limp index: the foot contact to foot off time of the ipsilateral foot is divided by the foot off to foot contact time plus the double support time. In other words, the limp index calculates the time the ipsilateral foot is on the ground and divides it by the time the contralateral foot is on the ground during the ipsilateral GC.
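With hypothetical event times for one gait cycle, the support phases and limp index defined above can be worked through directly (the event times below are invented, and "contralateral ground time" is computed per the "in other words" reading):

```python
# Hypothetical event times (s) for one left (ipsilateral) gait cycle:
ipsi_fc1 = 0.00   # ipsilateral foot contact, cycle start
contra_fo = 0.12  # opposite foot off
contra_fc = 0.50  # opposite foot contact
ipsi_fo = 0.62    # ipsilateral foot off
ipsi_fc2 = 1.00   # next ipsilateral foot contact, cycle end

stride_time = ipsi_fc2 - ipsi_fc1
single_support = contra_fc - contra_fo
double_support = (contra_fo - ipsi_fc1) + (ipsi_fo - contra_fc)

# Limp index: time the ipsilateral foot is on the ground, divided by the
# time the contralateral foot is on the ground during the ipsilateral cycle.
ipsi_stance = ipsi_fo - ipsi_fc1
contra_ground = (ipsi_fc2 - ipsi_fo) + double_support
limp_index = ipsi_stance / contra_ground

print(stride_time, single_support, double_support, limp_index)
```

For a symmetric gait like this invented one, the limp index comes out as 1.0, and stance time equals single support plus double support, as expected.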
All distance and speed measurements use a reference marker on each foot, by default the LTOE/RTOE markers, but this can be changed in the preferences. The marker’s position is evaluated in 3D at the time of the events.
Four 3D points are defined:
IP1 is the ipsilateral marker’s position at the first ipsilateral foot contact.
IP2 is the ipsilateral marker’s position at the second ipsilateral foot contact.
CP is the contralateral marker’s position at the contralateral foot contact.
CPP is CP projected onto the IP1 to IP2 vector.
Stride length: is the distance from IP1 to IP2.
Step length: is the distance from CPP to IP2.
Step width: is the distance from CP to CPP.
Walking speed: is stride length divided by stride time.
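The four points and the distance definitions above can be sketched with invented marker positions (the coordinates and stride time are hypothetical):

```python
import numpy as np

# Hypothetical foot-marker positions (mm) at the three foot-contact events.
ip1 = np.array([0.0,    0.0, 30.0])   # ipsilateral marker, first foot contact
ip2 = np.array([1300.0, 20.0, 30.0])  # ipsilateral marker, second foot contact
cp  = np.array([650.0, 120.0, 30.0])  # contralateral marker at its foot contact

# CPP: CP projected onto the IP1 -> IP2 line.
u = (ip2 - ip1) / np.linalg.norm(ip2 - ip1)
cpp = ip1 + np.dot(cp - ip1, u) * u

stride_length = np.linalg.norm(ip2 - ip1)
step_length = np.linalg.norm(ip2 - cpp)
step_width = np.linalg.norm(cp - cpp)

stride_time = 1.1  # s, from the event times (hypothetical)
walking_speed = stride_length / stride_time
print(stride_length, step_length, step_width, walking_speed)
```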
The Forces calculated by Plug-in Gait and displayed by Polygon are in the local co-ordinate frame of the distal segment in the hierarchical Kinetic Chain. This means that the Ankle joint forces are recorded in the foot segment axis system. Therefore Ground Reaction force Z will look similar to Ankle Force X, Ground Reaction Force Y will look similar to Ankle Force Z and Ground Reaction Force X will look similar to Ankle Force Y.
For the tibia this changes, as the axis orientation changes. Z force is therefore compression or tension at the joint, Y force is the mediolateral force at the joint, and X force is the anteroposterior force at the joint.
The positive force acts in the positive direction of the axis in the distal segment on which it acts and a negative force acts in the negative direction along the axis.
In Plug-in Gait we use an external moment and force description. That means that a negative force is compression and a positive force, tension, for the Z axis. A positive force for the right side is medial and negative lateral for the Y axis and a positive force is anterior and negative posterior for the X axis.
The report uses a whole folder because there are potentially quite a few files that are associated with a single report. For example, there is one Rich Text Format file per text pane, one data file, one report file, any number of movie (*.avi) files, marker set files (*.mkr) and so on.
To avoid the files being spread around and to keep everything nicely in one place, Polygon copies everything to the report folder. This means that you could end up with more than one copy of your movie files, for example, which may seem unnecessary to you. However, in this day and age when hard disk storage comes in dozens of gigabytes and is cheaper than ever before, the decision was made to copy all the files to keep the report tidy rather than to try and optimize for storage.
The envelope algorithm in Polygon is intended to produce a curve which gives an idea of the shape of the underlying raw EMG. It is based on a running average algorithm, but has been modified to give better response to the peaks in the raw EMG data (a simple running average will produce an envelope curve which fits nowhere near the peaks of the raw data).
The envelope algorithm takes a single parameter which is the width of the envelope as it passes through the raw data. What this means is that if you have entered, say, 10 ms for the envelope width parameter, any given sample in the time series will be affected by the sample within a 10ms envelope either side of it. If this sounds too technical suffice to say that the lower the value the more “tight fitting” the envelope will be.
Furthermore, increasing the value will “smooth” the curve. There’s no way to determine a “perfect” value, so the best strategy is to experiment a bit – try to overlay the enveloped EMG using different parameter values (for example 10,20,30,40 and 50) and the raw EMG to get an idea of what the algorithm produces given different parameter values.
The default value is 25 ms.
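A plain moving-average version of this idea can be sketched as follows. Note this is a simplification: Polygon's actual algorithm additionally boosts the response at the peaks, and the signal below is synthetic:

```python
import numpy as np

def emg_envelope(raw, width_ms, rate_hz):
    """Moving-average envelope of rectified EMG: each sample is averaged
    with the samples within width_ms either side of it."""
    rectified = np.abs(raw)
    half = max(1, int(round(width_ms * 1e-3 * rate_hz)))  # samples per side
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    return np.convolve(rectified, kernel, mode="same")

rate = 1000.0                                  # Hz (hypothetical)
t = np.arange(0, 1.0, 1.0 / rate)
raw = np.sin(2 * np.pi * 8 * t) * np.random.default_rng(1).normal(1.0, 0.3, t.size)

tight = emg_envelope(raw, width_ms=10, rate_hz=rate)   # hugs the raw data
smooth = emg_envelope(raw, width_ms=50, rate_hz=rate)  # smoother curve
print(tight.std(), smooth.std())
```

Overlaying `tight` and `smooth` on the raw signal reproduces the behaviour described above: lower values fit more tightly, higher values smooth more.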
You can create a new Polygon Report as a blank report or as a report based on a template. There are several ways to create a new blank report:
Create a Report from Data Management (Eclipse)
On the Home ribbon, click the Data Manager Button or press F2.
In the Data Manager, double-click the trial you want to add to the report.
Click the New Report button on the toolbar. A new report is added below the trial you selected.
Type a name for the report.
Double-click the report and click No when asked if you want to base the report on a template. A blank report is created.
Import data into the report.
Create a Report from within Polygon
On the Home ribbon, or from the Quick Access Bar above the Ribbon, click the down arrow on the New button.
Select Blank from the drop-down menu.
In the New Report dialog, browse to the location where you want to save the report.
In Report Name, enter a name for the report. Then click OK.
If you are creating the report in a new directory, click Yes to the prompt, Directory not found. Create? A blank report is created.
Creating a Report from a Template
Note: You will need a Polygon template (.tpl file).
Use one of the methods mentioned above to create a report (when using the New button, select Template from the drop-down menu).
When asked if you want to base the report on a template click Yes.
In the window that opens, browse to the location of the template you want to use.
The Data Bar is empty until you import trial data (*.c3d files) that were processed in Vicon Nexus. Data can be imported from either the Data Manager (Eclipse) or the Home Ribbon. You can import a variety of files into Polygon reports, including web pages, videos, and more. Most files become panes within Polygon for which you can create hyperlinks. Files that you can import:
|Vicon (*.c3d)||Polygon External Data (*.pxd)|
|VCM Report (*.gcd)||3D Mesh (*.obj)|
|Marker Set (*.mkr)||Adobe Acrobat (*.pdf)|
|Video (*.mpg, *.avi)||PowerPoint (*.ppt, *.pptx)|
Import Data from Data Manager (Eclipse)
Open the report for which you want to import data or create a new report.
On the Home ribbon, click the Data Manager button or press F2.
With Data Manager still open, double-click on the trial name you want to add to the report.
The Trial will appear in the Data Bar.
When you are finished, close the Data Manager.
Import Data from the Home Ribbon
Open the report for which you want to import data or create a new report.
On the Home ribbon, click Import File.
In the Import File dialog, browse to the location of the c3d file you want to import.
Double-click the c3d file (in the drop-down you can optionally filter the file types).
The Trial will appear in the Data Bar.
Import Video from the Home Ribbon
Open the report for which you want to import data or create a new report.
On the Home ribbon, click Import Video.
In the Import File dialog, browse to the location of the .avi or .mpg file you want to import.
Double-click the file.
The Video will appear in the Data Bar.
Import Web Page:
Click the Home button on the Ribbon.
Click the Import Web Page button.
In the window that opens, enter the web page URL.
The web page opens in an HTML window in the Report Workspace. Web pages can be accessed by clicking Multimedia Files in the upper portion of the Data Bar, then double-clicking the web page in the lower portion of the Data Bar.
1. Make sure all analog and digital devices to be collected within Nexus are turned on. Wait at least 45 minutes after turning on the system before calibrating.
Please note: Calibrate whenever any camera has been moved. Preferably, calibrate the cameras at least once each day the system is used.
2. Open Nexus. All cameras will populate within the Systems tab automatically.
3. Select the appropriate System Configuration file for the data collection. Configured analog devices will populate under Devices. To add configured digital devices, right-click Devices > Add Digital Device and select the appropriate devices.
Verify that analog/digital devices are set up correctly. For example, if there are any force plates, make sure the force vectors are correct.
4. Select all cameras and go to the Camera view. Verify that all reflective material and markers have been removed from the volume. If a reflective area cannot be removed, it must be masked.
5. Go to the Tools pane > System Preparation button > Mask Cameras. Select Start to mask the cameras. Existing masks will be removed and replaced with new masks. Once all reflections are covered with a blue mask, select Stop.
6. Go to Calibrate Cameras and select Start. Nexus will begin calibrating as the wand is moved throughout the volume. Once all cameras have recorded the wand for a specific number of frames, Nexus will calculate feedback values. Check the Camera Calibration Feedback table to verify that the calibration was good.
7. Change the Camera view to 3D Perspective. Set the wand at the origin of the volume. Go to Set Volume Origin > Start and then Set to position the cameras around the wand.
System preparation is now complete and you can move on to data capture.
1. Go to Data Management and make sure a Session folder has been created for the subject.
2. Go to the Resource pane > Subjects tab and create a new subject from a labeling skeleton. The subject will be listed below with the associated labeling skeleton in parentheses.
3. Go to the Tools pane > Subject Preparation button for subject calibration. Have the subject stand in the middle of the volume in the base pose with all markers visible. Select Start under Subject Calibration. Only one good frame of data is needed. Once a trial with all markers present has been captured select Stop.
Please Note: This workflow is intended for Plug-in Gait templates. If you are using another template, you might need a Range of Motion calibration trial.
4. Nexus goes offline and opens the trial for immediate processing. Reconstruct the markers of the loaded trial. Run the AutoInitialize pipeline followed by Plug-in Gait Static to calibrate the subject for labeling and Plug-in Gait.
Please Note: Check the accuracy of the marker labels (Hotkey: CTRL+Space) prior to running the calibration pipeline operations.
5. Save the Trial and Subject. Then go back online.
6. Dynamic Trials can now be collected. Select the Capture tab, go down to the Capture section and select Start to begin data collection. Change the name of the trial from Subject Name Cal 02 to something that correlates with the data collection.
To take advantage of the new Nexus 2.x labeling algorithms a Nexus 2.x VST must be used. If all you have is a Nexus 1.x VST, please remake the VST within Nexus 2.x. If you would like instructions on this process, please contact Vicon Support.
Calibrating your subject allows Nexus to calculate subject-specific parameters with regards to the size of the subject and the exact placement of markers. Calibration of the subject leads to better automatic labeling during Live capture and offline processing. During subject calibration, subject-specific information is what enables a skeleton labeling template (VST) to be converted to a subject-specific labeling skeleton (VSK). If a VST is not calibrated to the subject then the subject markers will not label well, if at all.
The subject should be calibrated at the beginning of each capture session, not before each trial. You only need to recalibrate if the subject marker placement changes, for example, if a marker falls off, or if the markers are moved.
Aiming cameras is useful for providing an initial, approximate calibration, before you fully calibrate the cameras.
To use Aim Cameras, first position your cameras roughly within the volume. Create a Target Volume (from Window > Options) with the dimensions of the ideal capture volume. Once configured, place the wand in the center of the capture volume and go to a camera view.
While in the Aim Camera mode, physically move the Vicon camera in the capture volume and check its coverage against the Target Volume.
Please Note: The Target Volume within the camera view is displayed only if all 5 wand markers are visible, so the camera might need to be focused before it can circle-fit the wand markers.
For step by step instructions with visualizations on this process, please refer to the Aim Vicon Cameras section of the installed Nexus help.
When creating or modifying a template, you will notice that each marker has a Status property. The Status of a marker is set to Required, Optional or Calibration Only. A marker's status can affect the way a template reconstructs and labels.
Required: A required marker needs to be on the subject during the calibration trial as well as all dynamic trials.
Calibration Only: These markers are used during the calibration trial and are then removed from the marker list for the dynamic trials.
Optional: If an optional marker is on the subject for calibration, Nexus will expect that marker to remain on the subject during all the dynamic trials. If the marker is not present during calibration, the optional marker is removed from the marker list.
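The three status rules above can be sketched as a small filter. This is an illustrative sketch, not Nexus's actual implementation, and the marker names in the usage example are placeholders:

```python
# Illustrative sketch (not Nexus's implementation) of how marker Status
# determines the marker list used for dynamic trials.
def dynamic_marker_list(template, present_in_calibration):
    """template: dict of marker name -> 'Required' | 'Optional' | 'Calibration Only'.
    present_in_calibration: set of marker names seen in the calibration trial."""
    markers = []
    for name, status in template.items():
        if status == "Required":
            # Must be present in the calibration trial and all dynamic trials.
            markers.append(name)
        elif status == "Optional" and name in present_in_calibration:
            # Kept only if it was present during calibration.
            markers.append(name)
        # 'Calibration Only' markers are always removed for dynamic trials.
    return markers
```

For example, with a template containing one marker of each status, an optional marker that was absent during calibration is dropped from the dynamic-trial marker list.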
If the cameras connect with the previous version of the software but not with the latest version, the software is probably being blocked by the Windows Defender Firewall. Although Vicon officially specifies that the Firewall should be turned off completely to ensure unobstructed communication with the cameras and connectivity devices, institutional protocols sometimes mandate that it be turned on. If this is the case:
If this does not resolve the issue then please contact Vicon Support.
Most settings for Nexus 2.x can be found in C:\
Simply copy the files you require from the relevant folder on the processing computer and paste them in their corresponding folder on the new computer.
If you cannot find a setting or are having issues with this process, please contact Vicon Support.
By default, Polygon 4 will generate Gait Cycle Parameters and write them to the subject’s Analysis node. Perform the following steps to apply a trial’s Analysis Outputs to a Polygon 4 report.
The contents of the trial’s Analysis Outputs will now be available to include in the report. The settings are saved within the report, as well as in any template created from the report. The trial’s Analysis Outputs will override the parameters generated by Polygon, so to combine the Gait Cycle Parameters, Gait Deviation Index, etc., run the Compute Gait Cycle Parameters operation within a Nexus pipeline to write them to the trial’s Analysis Outputs group.
Typically, this is due to subsets of cameras that are well calibrated within each subset but do not agree closely enough with each other. This can cause a reconstruction to be generated from each subset, quite close in position to the others.
During the wand wave, try to capture plenty of frames in which all cameras see the wand at the same time, if possible. A good strategy is to wave the wand across the floor with the LEDs facing vertically upwards.
If not all cameras can see the wand simultaneously, try to ensure that cameras on two or three adjacent walls, or on two opposing walls, can collect wand frames at the same time.
Try to avoid only waving the wand close to the edge of the volume where only 2 or 3 cameras may see the wand at the same time.
Before you begin camera calibration, ensure that: Cameras have fully warmed up to a stable operating temperature. Vicon recommends a minimum 30–60 minute warm-up period.
To request a license –
After you have received a license file from Vicon Support, you must activate it before you can start using your Vicon Software.
To activate a license:
1. Check your email for a message from Vicon Support. The license file (*.lic) is attached to the email. If you have not received a license file, request one as described in How do I request a license for current Vicon Software?
2. Save the license file (*.lic) to the Windows desktop of the machine for which you have a license (or any other suitable location).
3. Start Vicon Product Licensing and in the Vicon Automated Unified Licensing Tool dialog box, click Activate License.
4. Depending on whether you are using the file as it was received from Vicon Support or a text string copied from the file, do one of the following:
5. Click OK.
6. Launch your Vicon Software.
If you need to change the license server or to enable a client PC to find its license quickly, follow the steps below to specify the license server for your Vicon Software (Blade, Nexus, Tracker, Polygon) to use.
To enable Vicon Software to find its license:
1. Ensure that you have installed your Vicon Software and that Vicon Software is licensed on the relevant server.
2. On the client PC, start your Vicon Software and depending on whether or not a license is found:
a. On the Help menu, click Licensing.
b. In the Vicon Automated Unified Licensing Tool dialog box, go to the Product License Location list (in the lower half of the dialog box), right-click on the line that shows the relevant Vicon Software license and then click Set License Type.
3. In the Change License Server dialog box, do one of the following:
a. Click Discover. Both local and network licenses are displayed.
b. In the Available Servers list, double-click the required license server and then click OK.
In the Vicon Automated Unified Licensing Tool dialog box, you can view information about all available license servers without affecting the license server that is currently in use. To do this:
1. Open Vicon Product Licensing.
2. In the Vicon Automated Unified Licensing Tool dialog box, if the required license server is not displayed in the License Server field at the top, click Change at the top right of the dialog box.
3. In the Options area of the Select License Server dialog box, do one of the following:
4. Click OK.
In the License Server list in the top part of the Vicon Automated Unified Licensing Tool dialog box, licenses from the specified license server are displayed.
You can check out (borrow) a seat from a network license so that it can be used for the number of days that you specify, on a machine that is not connected to the license server network. You can check out a seat to either:
When a commuter license is no longer needed, it is checked back in again, so that it can be used from the license server network as usual. Licenses are automatically checked in at the end of a specified check-out period, or can be manually checked in early (not applicable to remotely checked-out licenses).
Licenses that have been checked out are checked back in and made available for use from the network in either of the following ways:
Caution: This does not apply to licenses that were checked out using Remote Check Out, which remain checked out until their check-out period expires.
To check in a license manually:
1. Open Vicon Product Licensing.
2. In the top part of the Vicon Automated Unified Licensing Tool dialog box, click on the license you want to check in and then click Check In License.
You can check out a seat from an existing license for use on a machine on your license server network, so that your Vicon Software can subsequently be used on the machine when it is no longer connected to your network.
To check out a seat to a machine on the license server network:
1. On a network machine that you later want to use remotely, open Vicon Product Licensing.
2. In the License Server list in the top part of the dialog box, right-click on the license that contains the seat that you want to check out and click Check Out.
3. In the Check Out License dialog box, specify the number of days for the license to be used remotely and then click Check Out.
Checked out licenses are flagged with Commuter in the Type column in the License Server list in the top part of the Vicon Automated Unified Licensing Tool dialog box.
The person with access to the license server can then check out a commuter license for use on the remote machine, as described in – On a network machine, how do I: Check out a commuter license?
In addition to checking out a license to a network machine (see How do I check out a license to a network machine? above), you can also check out a license to a machine that is running the Vicon Automated Unified Licensing Tool (VAULT), but is not connected to the network containing the license server. This involves the following procedures:
To check out a license:
1. Open Vicon Product Licensing.
2. In the License Server list in the top part of the dialog box, right-click on a license that permits commuter licensing for the required product.
If the selected license permits commuter licensing, the context menu displays a Check Out option and at the bottom of the dialog box, a Check Out License button is displayed.
3. Click Check Out and in the Check Out License dialog box:
a. Specify the number of days for which you want to use the license remotely.
b. Expand the Advanced Options by clicking the downward pointing arrow on the right, and click Remote Check Out.
Caution: Do not overestimate the number of days for which the license will remain checked out. After a remote check out, you cannot check the license back in again until the number of days that you specified has expired.
4. In the Remote Commuter License Check Out dialog box, enter the locking code string for the remote machine that was emailed or sent by the user of the remote machine, as described in On the remote machine, how do I: Generate a locking code? above, and click Check Out.
5. In the Save Commuter License dialog box, type or browse to a path and filename for the saved commuter license, click Save to File and then close the dialog box. The commuter license is saved as a license file (*.lic).
6. Email the saved commuter license file to the remote user.
The remote user can then save and activate the checked-out commuter license on the remote machine, as described in the following steps.
To save and activate a commuter license:
1. Save the file that was sent to you as described in On a network machine, how do I: Check out a commuter license? to the Windows desktop (or C:\Users\Public\Documents\Vicon\Licensing).
2. Open Vicon Product Licensing, and then click Activate License.
3. Depending on whether you are using the file as it was received from the license network user or a text string copied from the file, do one of the following:
4. Close the Activate a License dialog box.
In the License Server list in the top part of the Vicon Automated Unified Licensing Tool dialog box, checked out licenses are flagged with Commuter in the Type column.
A license needs to be revoked in order to make its seat(s) available to be licensed on other computers. Here are some examples of when to revoke a license:
For details on license revoking, please refer to the How do I revoke a license FAQ.
Below you will find the steps you need to follow in order to revoke a license so that it can be reissued.
1. Run the Vicon Product Licensing utility from the Start > Programs > Vicon > Licensing folder.
2. License revocation must be performed on the license server computer. Change the License Server to “localhost” to ensure that you view only local licenses.
3. Select the license you wish to revoke from the main list.
4. Click Revoke License at the bottom of the window.
5. Fill in your personal details.
6. Click Revoke and Save Request to a file.
Send this file to [email protected] or reply to the active licensing case.
Once we have received the request file, a license can be re-issued. You will receive further instructions on how to request a license, should you wish to.
Obtain the latest HASP4 dongle drivers here: HASP4 Download
There are two choices here: a GUI-based installation package or a command-line package. Either will work, but the command-line package is a bit more involved.
Instructions for the GUI installation:
1. Remove any connected HASP keys.
2. Download the .zip file, extract it, and run the setup installer.
3. Follow onscreen prompts.
4. Reconnect the HASP key. USB HASP keys should be detected as new hardware. Windows will locate the driver and you’ll see the message “Your device is ready for use”.
Instructions for the command line installation:
Once downloaded, extract both ‘hinstall.zip’ and ‘DiagnostiX.zip’ to the ‘Vicon\Hasp’ directory on the PC. Next, to install the drivers, carry out the following:
1. Open a command-line window (Start > Run > type cmd > OK).
2. Go to the ‘Vicon\Hasp’ directory using ‘cd Vicon\Hasp’ or the appropriate path for the directory on your PC.
3. At the command prompt type exactly:
To check that this has been successfully installed:
1. Unzip ‘DiagnostiX.zip’ to the ‘Vicon\Hasp’ directory on the PC, as instructed above.
2. Double-click the resulting ‘diagnostix.exe’ to run the installer.
3. Check the driver version under System Info > Dongle Drivers and Services > HASP Drivers.
Reformatting or reimaging a machine changes the locking information of the license, rendering it invalid. To prevent this, the license can be revoked before the reformatting and then re-issued to the computer once the process is complete. Before performing any major changes to a computer, see the FAQ How do I revoke my license. Always contact Vicon Support before taking action if you have any questions.
1. In Jack, click Modules > Motion Capture > Devices and choose Vicon.
2. Set the Vicon Host field to 127.0.0.1:802 (if everything is running on the same PC and the default output port is set in Pegasus).
4. In the menu bar click Human > Create > Default Male.
5. In Menu Bar click Modules > Motion Capture > Tracking.
6. Click Add.
7. When a new dialogue appears, select the human model in the scene and click Add Pair.
8. Back in the Tracking dialogue click Constrain.
9. In the Vicon Dialogue click Start.
The above will start streaming from Pegasus into Jack.
Below you will find an explanation of how to get started with the Vicon DataStream SDK and LabVIEW.
You will also need to download the LabVIEW.exe.config:
The LabVIEW.exe.config file needs to be placed in the root LabVIEW folder, for example:
C:\Program Files\National Instruments\LabVIEW 2017. This may change depending on your operating system and LabVIEW version (2010, 2013, 2014 and 2017 are supported).
More details with regards to the config file can be found on the National Instruments website:
2. Create a Folder on your PC to contain all your LabView Projects for example:
C:\Users\Public\Documents\08 ThirdParty Software\Labview\LabviewProjects
3. Copy all the DataStream SDK files from C:\Program Files\Vicon\DataStream SDK\Win64\dotNET and place in C:\Users\Public\Documents\08 ThirdParty Software\Labview\LabviewProjects
4. Launch LabView 2017 (64bit)
5. Select Create Project
6. Double click Blank Project
7. Save your Project in the C:\Users\Public\Documents\08 ThirdParty Software\Labview\LabviewProjects folder created in Step 2
8. In the Project select File > New VI
9. In the New pop-up Window select Blank VI
10. In the Block Diagram Panel, right mouse click > select Connectivity > .NET > Constructor …
11. Place the .NET Constructor in the Block Diagram
12. In the Assembly, select Browse and in the Look in: option browse to C:\Users\Public\Documents\08 ThirdParty Software\Labview\LabviewProjects
13. Select ViconDataStreamSDK_DotNET.dll
14. The .NET Constructor in the Block Diagram will now display ViconDataStreamSDK_DotNET(0.0.0.0)
The following companies provide digital plug-ins for their devices to work in Nexus 2.x and Nexus 1.8.5:
| Cometa | by GPEM |
| Kistler | by Prophysics |
| Analog Card | NIDAQ by Prophysics |
Please contact the above companies for the latest versions.
proDAQ is a plug-in developed by Prophysics AG that allows a National Instruments Data Acquisition (DAQ) board to stream data directly into Nexus thus allowing analog data to be streamed and captured without a Lock or Giganet LAB.
The proDAQ Plug-in is supported in the current release versions of Nexus 2 and Nexus 1.8.5.
If you are running an earlier version of Nexus 1 you can either update to the current release version of Nexus, 1.8.5, or you can contact Prophysics AG at [email protected] for an earlier version of the proDAQ Plug-in.
proEMG software is designed to make the acquisition and processing of EMG signals easy. There are three versions of proEMG: proEMG Lite, proEMG Stand-Alone and the proEMG Vicon Plug-ins. The proEMG Vicon Plug-in implements all the advanced processing functions available in proEMG Stand-Alone as plug-ins accessible from the Vicon Nexus and Vicon Workstation pipelines. The actual data capture is done with the Vicon software.
The proEMG Plug-in is supported in the current release versions of Nexus 2 and Nexus 1.8.5.
After installing the proEMG Vicon plug-in, launch Nexus and run either the proEMG Automatic Processing or proEMG Processing Window pipeline operations. During this process you will be prompted to obtain a licence for the plug-in.
You need to enter your name, email and affiliation, and send off a Prophysics Licence Request (PLR) file to [email protected]
If Basler digital cameras will be connected to Nexus 2.6, ensure you have updated to the Basler Pylon5 SDK and drivers (v5.0.0), which are available from the Vicon website.
If you are using an Intel i340, i350 or i210 network card, when you install the drivers, select the option for Filter drivers, not Performance drivers.
The Pylon5 driver supports:
The current release version of Nexus 2 includes the Oxford Foot Model.
The Oxford Foot Model was developed and validated by the Nuffield Orthopaedic Centre in collaboration with Oxford University. The Vicon implementation of the Oxford Foot Model provides users with an easy-to-use plug-in which can be included in the processing pipelines of Nexus 1.
The Oxford Foot Model Plug-in is designed to fit straight into the pipeline with the usual gait plug-ins such as the Woltring Filter, Gait Cycle event detection, and Plug-in Gait.
The Oxford Foot Model Installer and Release Notes can be downloaded from the associated pages below.
Hindfoot and forefoot graphs are output in the sequence:
1. Sagittal plane; 2. Transverse plane; 3. Frontal plane.
Positive is dorsiflexion, inversion/supination, internal rotation/adduction.
It has been used in running, stair climbing and jumping. You just need to make sure camera spatial and temporal resolution are adequate and markers are stuck on well!
Yes, for the tibia (TIBA) and hindfoot (HFTFL).
You need to make sure the $TravelDirectionX parameter is correct (“1” when x represents walking direction and “0” when y represents walking direction)
Generally, the “forefoot flat” option is not used, as most children in particular don’t stand with their forefeet flat on the floor. “Hindfoot flat” can be used if they can stand with their heels down. If using Plug-in Gait in conjunction with the foot model, it is necessary to rerun the Plug-in Gait model after the foot model in the static trial, because a new HEE marker is created by the foot model code for use by the Plug-in Gait model (the HEE marker cannot be placed in the correct position due to other markers being present on the calcaneus). The original HEE marker position is maintained as the Hindfoot segment origin.
The arch height index is calculated as the perpendicular distance of the P1M marker from the plane defined by D1M, P5M and D5M divided by foot length (TOE – HEE). The midfoot is considered as a linking mechanism and is currently not directly modelled.
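The arch height index calculation described above can be written out directly as vector algebra. This is an illustrative sketch of the stated formula (perpendicular distance of P1M from the D1M-P5M-D5M plane, divided by the TOE-HEE distance), not the OFM source code; the coordinates in the usage example are invented:

```python
# Sketch of the arch height index: distance of P1M from the plane through
# D1M, P5M and D5M, divided by foot length (TOE - HEE). Markers are
# (x, y, z) tuples, e.g. in mm.
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def arch_height_index(P1M, D1M, P5M, D5M, TOE, HEE):
    # Normal of the plane defined by D1M, P5M, D5M.
    n = _cross(_sub(P5M, D1M), _sub(D5M, D1M))
    n_len = math.sqrt(_dot(n, n))
    # Perpendicular distance of P1M from that plane.
    dist = abs(_dot(_sub(P1M, D1M), n)) / n_len
    # Foot length: distance from HEE to TOE.
    foot = _sub(TOE, HEE)
    return dist / math.sqrt(_dot(foot, foot))
```

With invented coordinates where the forefoot plane is flat on the floor, a P1M marker 30 mm above the floor and a 200 mm foot length gives an index of 0.15.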
The Oxford Foot Model code uses the “torsioned” tibia to calculate knee angles (i.e. taking tibial torsion into account), whilst the Plug-in Gait model uses the “untorsioned” tibia (i.e. knee rotation is zero in the static trial).
Yes, the model won’t run without a value in the “tibial torsion” field. This can be manually entered or else calculated in your normal way.
The videos below cover topics to help with processing trials. Please note the Python examples require extra modules.
We provide sensor straps of various sizes for attaching sensors to the upper and lower limbs. Sensors can also be strapped to the body using physiotherapy strapping tape, and can still maintain a wireless connection when taped within an athlete’s strapping.
If the problem persists, contact Vicon Support.
The sensors collect at different sampling rates because the crystals used in the sensor circuitry are not perfectly identical to one another. These crystals control the measurement frequency. Thus, each sensor measures at slightly different sampling rates (e.g. 999.99 Hz vs 1000.01 Hz). We use time syncing to help with these small differences. This is also why there will not be an equal number of data points in each log per sensor.
To connect Blue Thunder IMU sensors in Vicon Nexus, ensure that:
In Vicon Nexus, if the sensor is unable to connect, an error message will be displayed in the log if:
To clear the Bluetooth cache:
The latest Blue Trident IMU sensor firmware is available via the Capture.U Desktop app. Vicon Nexus, Capture.U Desktop, and Capture.U (iOS) will notify you if the firmware is out of date.
The latest Blue Thunder IMU sensor firmware is available via the IMU Research app. Both Vicon Nexus (2.7-2.9.3) and IMU Research notify you if the firmware is out-of-date.
Unable to connect, Firmware is out of date and needs to be updated.
If, when you select Transfer Files for the IMU sensor, the progress bar does not change, ensure the CP210x USB to UART Bridge VCP Drivers have been installed. Download the drivers from:
With Nexus closed, install the drivers and restart the PC.
Launch Nexus, attach the sensor and re-try the IMU data transfer.
We have identified an incompatibility problem when using Lightning with Windows 10 1903 release (March, 2019).
The problem (hang) occurs when the “File Open” dialog is displayed, for selecting Sync files, Download location, etc.
The fix is to enable Compatibility Mode for Lightning:
The sensor axes orientation can be found here:
The following example shows a strap attached so that the sensor sits directly on the medial aspect of the tibia, just above the medial malleolus:
The Vicon Capture.U App is iOS only and we have no current plans to develop it for Android. The Vicon Capture.U App has been designed to work with both phones and tablets. Apple provides greater consistency in their operating system for phones and tablets, whereas there is more variation with Android, which would require multiple versions of the app to be supported on a variety of devices.
The sensors are synchronized by radio beacon, broadcast from one sensor. This synchronization occurs at the start of the recording session and requires that all sensors be within radio range of each other (<10m). Once synchronization is complete, all sensor clocks will be aligned within 100us, and no additional synchronization occurs. From this point, each sensor will “dead-reckon” based on their own high-accuracy clocks, and do not need to stay within radio range. While dead-reckoning, each sensor clock will drift at a low rate, affected by manufacturing variation and temperature difference, relative to each other. Assuming the sensors operate at a similar temperature, the expected drift is less than 20ms per hour.
Dead-reckoning is used here in a temporal sense. The key concept is that the sync occurs only once; from this point, the timebases of the sensors drift apart from each other.
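A back-of-envelope sketch of the timing bound implied by the figures above (alignment within 100 us at the one-time sync, then up to roughly 20 ms per hour of relative drift while dead-reckoning):

```python
# Worst-case inter-sensor timing offset, using the figures quoted above.
INITIAL_SYNC_US = 100      # alignment at the one-time radio sync, microseconds
DRIFT_MS_PER_HOUR = 20.0   # worst-case relative drift while dead-reckoning

def worst_case_offset_ms(hours_since_sync):
    """Upper bound on the offset between two sensors, in milliseconds."""
    return INITIAL_SYNC_US / 1000.0 + DRIFT_MS_PER_HOUR * hours_since_sync
```

For example, two sensors recording for 2 hours after the sync could be up to about 40 ms apart, i.e. roughly 40 sample periods at a 1000 Hz sampling rate.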
There is no sync mechanism. Both video and sensor streaming are free-running and produce frames as fast as possible. In all capture modes, there may be a lag in data frames (a small offset) between sensors when starting capture, but the data will be synced.
| Vicon IMU status | LED display |
| Charging | Both LEDs slow blink (50% duty) |
| Charged | Both LEDs on steady |
| Battery Indication | Both LEDs blink 1-4 times, then pause; the pattern repeats 3 times. For example, full battery: LEDs blink 4 times, pause; pattern repeats 3 times. |
| Sampling | Single or both LEDs brief blink (<10% duty); single or both depending on left or right |
| Bootloader waiting | Single LED on steady |
| Bootloader connected | Both LEDs on steady |
| Bootloader exiting | Both LEDs twinkle |
| Error during a session | Rapid blinking LEDs |
When a trial is started within the Capture.U app, it initializes each sensor one at a time. Once a sensor is initialized, it immediately begins recording for that trial. Similarly, when a trial is terminated, each sensor is terminated individually.
When multiple sensors are collected in a trial, the sensor that was initialized last is used to set time 0, based on its first time stamp. All sensors are then re-sampled using linear interpolation, and values for time 0 are extracted for each sensor. Subsequent values for all sensors are then output based on the user-selected Output Rate (Hz), with each sensor's end frame dependent on when the sensor was terminated in the app.
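The re-sampling scheme described above can be sketched as follows. This is an illustrative re-implementation of the stated behaviour (linear interpolation onto a common grid starting at the last-initialized sensor's first time stamp), not the Capture.U source:

```python
# Sketch: resample several sensor logs onto a shared time grid.
def resample(logs, output_rate_hz):
    """logs: list of (times, values) per sensor, times in seconds, ascending.

    Returns per-sensor value lists on a shared grid that starts at the
    latest first time stamp and ends at the earliest last time stamp."""
    t0 = max(times[0] for times, _ in logs)      # last sensor to start
    t_end = min(times[-1] for times, _ in logs)  # first sensor to stop
    step = 1.0 / output_rate_hz
    n = int((t_end - t0) / step) + 1
    grid = [t0 + i * step for i in range(n)]

    def interp(times, values, t):
        # Linear interpolation between the two samples bracketing t.
        for i in range(1, len(times)):
            if times[i] >= t:
                w = (t - times[i - 1]) / (times[i] - times[i - 1])
                return values[i - 1] + w * (values[i] - values[i - 1])
        return values[-1]

    return [[interp(times, values, t) for t in grid] for times, values in logs]
```

Because each sensor's log starts and stops at slightly different times, the common grid is the overlap of all logs, which is also why the per-sensor logs do not contain equal numbers of raw samples.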
The answer to this question depends significantly on the definition of the terms ‘Accuracy’ and ‘Precision’ (see ‘How Are the Terms ‘Accuracy’ & ‘Precision’ defined?’), and how it is measured (see ‘How Do You Test Accuracy / Precision?’). Differing methods produce widely varying results (image below) and care must be taken when making comparisons to take account of all the variables involved.
The figures provided below are obtained following ASTM E3064 standard (‘Standard Test Method for Evaluating the Performance of Optical Tracking Systems that Measure Six Degrees of Freedom (6DOF) Pose‘) and represent the ability to track the relative position of a rigid object while the object is moving, with no filtering or post-processing.
| | High-End Volume (Vantage V16) | Mid-Range Volume (Vantage V5) |
| Total Number of Capture Frames | 41993 | 62525 |
| Reference Value (Measured Bar Length, r, mm) | 320.880 | 320.880 |
| Mean Observed Bar Length (l̄, mm) | 320.863 | 320.897 |
| Accuracy of Mean to Reference Value (or ‘Trueness’, abs(l̄ - r), mm) | 0.017 | 0.017 |
| Standard Deviation of Observations (σ, mm) | 0.198 | 0.321 |
| Root Mean Squared Error (RMSE, mm) | 0.201 | 0.324 |
* Mean of all trials
Example trial data for a single capture from the larger dataset. In total, there were 20 trials across 4 calibrations for each configuration, for a volume of 192 m³ with 24 cameras.
The following terms are used and referenced as defined by the International Organization for Standardization (ISO):
Our experimental method measures the precision as the level of agreement of each frame within capture. When the term ‘Accuracy of the mean’ is used, it equates to ‘Trueness’ in the above ISO definition.
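These definitions can be made concrete with a short sketch that computes trueness, precision (standard deviation) and RMSE for a set of observed bar lengths against a reference value:

```python
# Sketch of the statistics reported in the table above: trueness is the
# distance of the mean observation from the reference value, precision is
# the spread of observations, and RMSE combines both.
import math

def bar_length_stats(observations, reference):
    n = len(observations)
    mean = sum(observations) / n
    trueness = abs(mean - reference)
    # Population variance / standard deviation (precision).
    variance = sum((x - mean) ** 2 for x in observations) / n
    std_dev = math.sqrt(variance)
    # RMSE against the external reference value.
    rmse = math.sqrt(sum((x - reference) ** 2 for x in observations) / n)
    return {"mean": mean, "trueness": trueness, "std_dev": std_dev, "rmse": rmse}
```

Note that for a single capture, RMSE² = trueness² + σ² (with the population standard deviation), which is approximately consistent with the tabulated values, where each figure is a mean over trials.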
Due to the complexity and many variables involved in motion capture systems, direct comparisons between different experimental methods are close to impossible, and results will vary with different test conditions.
It is important any measurement used reflects the typical use for an optical motion capture system, and in particular measures a moving (dynamic) object. Our standardized test results were obtained using the method described in ASTM E3064 (‘Standard Test Method for Evaluating the Performance of Optical Tracking Systems that Measure Six Degrees of Freedom (6DOF) Pose‘) which is highly reflective of a typical use case.
Below is an example from a separate test, showing the difference between measurements of an object that starts stationary and then begins moving part way through a single capture, with no other changes introduced.
Single capture showing difference between measurement of a static and dynamic object
| Mean (mm) | Max Error (mm) | Root Mean Squared Error (mm) | Standard Deviation (mm) |
The introduction of movement to an object significantly increases the variability of the measurements taken in the raw data.
This involves the measurement of an object comprising two clusters of passive markers (forming a rigid body at each end). The distance between them was measured to produce a reference measurement. This object was moved around an optical capture volume in a defined way, according to the standard ASTM E3064, and the two rigid bodies were tracked independently by the capture system and compared to an external fixed reference measurement.
Optical motion capture systems are complex. The resolution and quality of the cameras and lenses themselves, the calibration process, the object, and the software tracking algorithms all play an important part in the quality of the data.
In addition to these, environmental factors can influence the measurements, such as the size and shape of the capture volume, the physical shape and movement of the object being measured, or the mounting of cameras and the physical structure they are mounted to.
For example, this paper [P4, Figure 4] demonstrates the movement of a camera mount over a 24-hour period, as measured by a laser measurement device, showing the difference in movement between an internal and an external wall mounting.
This method is focused on the positional tracking of rigid bodies. Other applications, such as biomechanics, may involve additional variables, for example the movement of markers attached to the skin, which may introduce additional errors.
In addition to world-class equipment and software, Vicon has a highly trained team of expert support engineers who will install your system for you, ensuring that our 35 years of experience in motion capture translates into the best possible performance for your system.
|Plug-in Gait units for forces, moments and powers in the c3d file do not match the units displayed in Nexus.||Pre 2.10||Nexus displays the Plug-in Gait units as:
Plug-in Gait outputs generated by Nexus are normalized to the subject’s body mass, and their values are stored in the .c3d file as such. Therefore, the units in the .c3d file metadata should be divided by kg for these outputs (i.e. N/kg, N.mm/kg, W/kg).
Plug-in Gait documentation updated from Nexus 2.10 (https://docs.vicon.com/display/Nexus210/Plug-in+Gait+output+specification#PluginGaitoutputspecification-Jointkinetics)
|After completing a dynamic wand wave, it is possible for the Set Origin operation to fail. This is rare, but for those who experience the problem, it will be consistent.||2.10.3||The only way to get it to work again is to close Nexus and reopen it.||Solved
Resolved in Nexus 2.11.0
|Nexus performance issue when there is a large number of log messages.||2.10.1||Confirmed log workaround in Nexus 2.10.1:
Note that this will disable all log messages, both in the app and in the log file.
If you want to keep the log messages:
Resolved in Nexus 2.10.2
|GetDeviceChannel returns force plate outputs as an action force in a z-down coordinate system, compared to a reaction in a z-up system as stored in the C3D file.||2.10.1||Perform the conversion after requesting the value: rotate the force and moment outputs by 180 degrees about the y-axis, and then negate all components: this is equivalent to simply negating the y-component. This applies to both force and moment outputs.||Scheduled to be resolved|
|PC blue screen when connected digitally to Kistler force plates due to Kistler digital plug-in||2.6.1||With Nexus open:
|The Blue Trident firmware for the Capture.U app is set so that when a trial is stopped, the sensor goes into sleep mode to save power. The sensor resets the calibration in all capture modes when you stop capture.||1.2 and 1.3||None. To ensure precise calibration, we recommend that you follow the calibration procedure at the start of every trial. Instructions on how to calibrate the IMUs can be found here: https://docs.vicon.com/display/IMU/Calibrate+IMUs||Open|
|If you repeatedly enter and exit AR Visualization mode by tapping Home or exiting the app, the Capture.U app may stop.||1.1||None. If the app remains unresponsive on relaunch, restart the iOS device and flush the Bluetooth.||Open|
|Native MATLAB integration no longer supported||1.10||Native MATLAB integration has been replaced by .NET, as MathWorks supports the use of .NET directly in MATLAB (https://uk.mathworks.com/help/matlab/using-net-libraries-in-matlab.html). To make use of DataStream SDK 1.10, update legacy MATLAB scripts to .NET. To continue using native MATLAB integration, refer to support in DataStream SDK 1.9.||Solved|
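The force-plate coordinate conversion described in the GetDeviceChannel issue above can be sketched as follows. This is an illustrative Python sketch of the geometry only, not Vicon API code:

```python
def rotate_y_180(v):
    """180-degree rotation about the y-axis: (x, y, z) -> (-x, y, -z)."""
    x, y, z = v
    return (-x, y, -z)

def negate(v):
    """Negate every component (converts the action force/moment in the
    z-down system to the reaction in the z-up system)."""
    return tuple(-c for c in v)

def convert(v):
    """Full conversion: rotate 180 degrees about y, then negate all components."""
    return negate(rotate_y_180(v))

def convert_shortcut(v):
    """Equivalent shortcut from the workaround: negate only the y component."""
    x, y, z = v
    return (x, -y, z)

# The two paths agree, confirming the equivalence stated in the workaround:
sample = (12.0, -3.5, 850.0)
assert convert(sample) == convert_shortcut(sample)
```

The same conversion applies to both the force and the moment channels.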
|Current release version||Windows 10||Windows 7*||Linux||OSX|
|Shōgun 1.5.2||64 bit||x||x||x|
|Nexus 2.11||64 bit||x||x||x|
|Evoke 1.3.1||64 bit||x||x||x|
|Tracker 3.9||64 bit||x||x||x|
|Polygon 4.4.6||64 bit||x||x||x|
|Capture.U Desktop 1.3||64 bit||x||x||10.12|
|CaraLive 1.3.0||64 bit||64 bit*||x||x|
|CaraPost 1.2.0||64 bit||64 bit*||x||x|
|Pegasus 1.2.2||64 bit||64 bit*||x||x|
|ProCalc 1.4.0||64 bit||x||x||x|
|ProEclipse 1.4.0||64 bit||x||x||x|
|DataStream SDK 1.11.0||64 bit||x||64 bit||10.11|
|Bodybuilder 3.6.4||64 bit||64 bit*||x||x|
Please note:
1. Open the Network and Sharing Center and navigate to Change Adapter Settings. Vicon Vantage/Vero cameras are assigned to one port. For each Vue (or Bonita Video) camera connected, an additional network port will be used.
2. Right click on the proper port and go into the Properties. The Local Area Connection Properties window will open. Make sure only Internet Protocol Version 4 (TCP/IPv4) is selected.
3. Select Internet Protocol Version 4 (TCP/IPv4) from the list and select Properties to assign the proper IP address.
a. Vantage/Vero cameras will have the following IP address: 192.168.10.1 and subnet mask: 255.255.255.0
b. The following will be dependent on which POE and computer network card is used within the lab.
The following table shows IP addresses that are commonly used in current Vicon systems (for 10G network cards):
|Port for Vicon optical (main)||
|Port for Vicon video 1–8||
For older Vicon system configurations that use i340 or i350 network cards, you may have additional ports that can be used for Vicon video cameras:
|Port for Vicon optical (main)||
|Port for Vicon video 1||
|Port for Vicon video 2||
|Port for Vicon video 3||
|Port for Vicon video 4||
Select OK to close the Internet Protocol Version 4 (TCP/IPv4) Properties window, then OK again to close the Local Area Connection Properties window. This ensures all changes are saved.
4. Feel free to rename the network port so it is easily identifiable, such as ViconMX, VUE1 or VUE2.
For further assistance, please refer to the Configuring Ports section of the Vicon Documentation Page: Configuring Network Card Settings
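As a quick sanity check after completing the steps above, you can verify that an adapter address sits on the Vicon optical camera subnet. This is a minimal Python sketch using the standard-library ipaddress module; the helper name is hypothetical, and the default address and mask are the values from step 3a:

```python
import ipaddress

def on_vicon_subnet(adapter_ip, camera_ip="192.168.10.1", netmask="255.255.255.0"):
    """Return True if adapter_ip falls on the same subnet as the
    Vicon optical cameras (192.168.10.1 / 255.255.255.0 from step 3a)."""
    network = ipaddress.ip_network(f"{camera_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(adapter_ip) in network

print(on_vicon_subnet("192.168.10.1"))  # the address assigned in step 3a
print(on_vicon_subnet("192.168.11.5"))  # an address on a different subnet
```

The subnets for the Vicon video ports depend on your PoE switch and network card, so substitute the values used in your lab when checking those ports.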