Aging populations, shrinking care workforces, and rising healthcare costs are pushing societies to rethink how they deliver support to older adults. At the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM), the Geriatronics Research Center is meeting that challenge head‑on. Their flagship assistive robot, GARMI, a chat‑enabled, dexterous humanoid, aims to bring breakfast to your table, guide home‑based rehabilitation, and relieve nurses of repetitive tasks in care facilities. Leading the user‑perception studies that ensure GARMI’s gestures, size, and personality resonate with real people is Simone Nertinger, a sports scientist turned human–machine‑interaction specialist.
In this article, we trace Simone’s journey from sports‑science student to human–robot interaction specialist, exploring how Vicon’s motion‑capture technology underpins the iterative design of GARMI and why high‑precision data is critical to developing assistive robots that feel natural rather than mechanical.
What Simone Does at TUM – Research Focus and Everyday Impact
Simone Nertinger distills her mission at MIRMI’s Geriatronics Center as “my research is about developing assistive robots for elderly care”, and describes MIRMI’s work as “they support older adults living at home and nurses in care facilities.” The centerpiece of that support system is GARMI, a 1.6‑metre humanoid that can roll into a kitchen, grasp a mug, or engage in gentle conversation through a friendly display. “The biggest robotic system we’re working on is our robot GARMI.”
Simone’s specific remit is to ensure that every aspect of GARMI, from task definition to finger articulation, aligns with older adults’ and nurses’ expectations. “My part of the project is conducting all these user studies based on how the users perceive our robots.” Those studies examine preferred voice tone, acceptable approach distances, and the comprehensibility of GARMI’s gestures. To capture the fine motor cues that shape that perception, she relies on Vicon. “GARMI’s size, appearance and movement are a big part of why we’re using Vicon.” Without millimetre‑level joint trajectories, subtle cues, like a nurse’s reassuring hand gesture, could be lost.
Educational Background & Path to Vicon Mocap
“My background is in sports science and I did [both] my bachelor’s and [my] master’s at TUM.” That foundation in anatomy and human movement led Simone to her “first job as a student assistant… in the biomechanics lab.” There she first encountered Vicon’s real‑time skeleton tracking system and was immediately captivated by how live data could translate human movement into a digital model. “I was super fascinated by seeing the human skeleton move in real time.” The fascination turned into academic inquiry: “I decided to write my bachelor thesis with Vicon, and I also used it in my master thesis.”
After her bachelor’s thesis, Simone joined BMW’s ergonomics division as an intern, using a Vicon system to analyse how drivers enter and exit vehicles (“ingress and egress”) in order to validate the ergonomic design of BMW’s mock‑up car. “We used Vicon at BMW for tracking ingress and egress movements into the car, and it was super precise.” The precision mattered: a misplaced footwell or seat‑edge angle can make or break user comfort, much like a robot’s height can decide whether an older adult feels safe or intimidated. That automotive stint sharpened Simone’s sense that human‑centred design demands human‑centred data.
Back at TUM, she expanded her toolset with a master’s in Human Factors Engineering, where courses in cognitive ergonomics, biomechanical modelling and human–machine interaction design laid the groundwork for her Geriatronics work. The transition from cars to living rooms wasn’t as big as it seems: both settings require intuitive interfaces, seamless ingress/egress (wheelchair vs. car seat), and minimal cognitive load. Vicon remained the common denominator, giving her a language to compare joint‑angle data across contexts.
How She Uses Motion Capture & System Setup
Inside TUM’s Geriatronics lab in Garmisch-Partenkirchen, a 13‑camera Vero system captures every rehearsal. “We view both the real-time feed and a normal RGB camera.” The entire rig is synchronized with a 32‑channel EMG system, allowing Simone’s team not only to transfer human gestures to the robot, but also to model human arm stiffness for robot applications. “Older adults with possible hearing impairments rely more on the gestures that nurses are doing,” she explains, so the robot’s mimicry must feel natural.
Right now, the team is pushing fidelity further. “Now we are working on controlling GARMI’s single fingers in real time from Vicon to the robot.” By mapping Vicon markers on a data glove to GARMI’s 16 finger joints, the robot can replicate the required gestures.
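The team’s actual retargeting pipeline isn’t described in detail, but the basic idea of turning three marker positions into a joint-flexion command can be sketched as follows. This is a minimal illustration, not GARMI’s implementation; the function names, joint limits, and the three-marker-per-joint convention are all assumptions for the example.

```python
import numpy as np

def joint_flexion_angle(p_prox, p_joint, p_dist):
    """Estimate a finger joint's flexion angle (degrees) from three 3D
    marker positions: one proximal, one on the joint, one distal."""
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against rounding error before arccos.
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def retarget_frame(marker_triples, joint_limits):
    """Map one mocap frame (one marker triple per robot joint) to joint
    commands, clamped to each joint's (lo, hi) range in degrees."""
    angles = [joint_flexion_angle(*triple) for triple in marker_triples]
    return [min(max(a, lo), hi) for a, (lo, hi) in zip(angles, joint_limits)]
```

A fully extended finger (collinear markers) yields 180°, a right-angle bend 90°; in a real-time loop this computation would run once per Vicon frame before the clamped angles are streamed to the hand controller.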
Current Projects Using Mocap
“We tested the comprehensibilities of these gestures in simulation.” The first phase involved projecting a virtual GARMI into MuJoCo where participants rated gesture clarity. “Now we want to do the same study with the real GARMI.”
Real‑world validation starts in a living lab, then moves to a controlled care facility in Garmisch‑Partenkirchen, the town that inspired GARMI’s name. Right now, the team is developing robot‑administered health‑monitoring assessments, such as grip‑strength tests. The perceptions of healthcare professionals and older adults are always tested first in the living lab. “We will test GARMI in one care facility, but only with nurses, not end users (for care-related scenarios).” Patients will be folded in later, once pilot studies and safety margins are locked down.
“We only have one GARMI and we’re focused on basic research about algorithms and requirements.” That scarcity forces careful planning: the team uses Vicon to capture nurses supporting older adults’ activities of daily living and physiotherapists performing geriatric rehabilitation. From those trajectories, together with measurements of human–human interaction forces, they can optimize GARMI’s kinematic design to ensure the captured tasks remain reachable and manipulable for the robot.
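One common way to score whether a candidate arm design can comfortably perform a captured task is a manipulability measure over the task’s postures. The sketch below uses Yoshikawa’s measure on a simplified planar two-link arm; the link lengths and the two-joint model are illustrative assumptions, not GARMI’s actual kinematics.

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.4, l2=0.35):
    """End-effector Jacobian of a planar two-link arm (joint angles in
    radians, illustrative link lengths in metres)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def yoshikawa_manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J @ J.T)). Values near zero mean
    the posture is close to a singularity (poor manipulability)."""
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))
```

Evaluating this measure along a captured nurse trajectory would flag postures where a proposed link-length choice drives the arm toward a singularity, which is one concrete way the recorded human motions can feed back into kinematic design.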
Her Vision for the Future of Robotics
“It’s always difficult to imagine the next 50 years because it’s a really fast developing sector.” Even so, Simone sees a clear trajectory: “Four years ago, people asked, ‘Do you really think people would use the robot?’ Now they ask, ‘Which functions are best?’” The shift indicates that adoption hurdles have moved from whether to how. “There is no questioning anymore if this is really necessary.” She points to domestic vacuum robots as proof: once a curiosity, now an appliance. “For example, vacuum robots in homes are now super common.”
“In 50 years we will be totally used to robots in our homes.” Whether a single assistive platform can morph from walker to rehab coach to social companion, or whether several dedicated devices will be needed, remains to be seen.
Why Vicon Is the System of Choice
Thinking back to her days as a student assistant in TUM’s biomechanics lab, Simone says, “It was super cool seeing the human skeleton move in real time.” That first spark never left. Today, Simone still values Vicon’s immediacy. “We need 3D high frequencies to track fast movements, and Vicon was super precise.” High‑frequency capture (above 200 Hz) is critical for validating the robot’s internal models and its motion planning, work the lab also carries out; a missed frame could introduce deviations. “It’s easy to integrate other systems like EMG sensors with Vicon.” That plug‑and‑play ethos shortens setup cycles, letting the lab focus on research, not cabling.
How AI Could Enhance Motion Capture
“Human movement is quite predictable.” That insight underpins Simone’s wish list for smarter motion capture. “We know how we move our joints and this could improve automatic labeling.” Instead of manually checking dozens of markers per clip, AI could flag anomalies and suggest corrections. “Automatic labeling would save us so much time in post-processing.” Time is money in grant‑funded labs; every hour regained is an hour spent iterating robot behaviours.
“If AI could track markers more stably in real time, we wouldn’t lose data during occlusions.” A stray sleeve, a reflective watch, any occlusion can break real‑time control loops. AI‑assisted interpolation could maintain continuity. Finally, Simone imagines AI‑generated metadata: “AI metadata could help classify daily tasks without manual coding.” Imagine a timeline that automatically tags “hand‑over,” “sit‑to‑stand,” and “feeding gesture,” ready for statistical analysis or machine‑learning pipelines.
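The AI-based gap filling Simone envisions would be learned from movement data, but the simplest baseline it would improve on is plain interpolation across the occluded frames. The sketch below shows that baseline; the function name and the NaN-for-occlusion convention are assumptions for the example.

```python
import numpy as np

def fill_gaps(trajectory):
    """Fill occlusion gaps (NaN rows) in a (frames x 3) marker trajectory
    by linear interpolation of each coordinate over frame index."""
    traj = np.array(trajectory, float)
    frames = np.arange(len(traj))
    for axis in range(traj.shape[1]):
        col = traj[:, axis]
        ok = ~np.isnan(col)                      # frames where the marker was seen
        traj[:, axis] = np.interp(frames, frames[ok], col[ok])
    return traj
```

A learned model could do better than this straight-line fill by exploiting the predictability of human movement Simone mentions, but even this baseline keeps a real-time control loop from dropping out when a sleeve briefly hides a marker.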
Many of these dreams align with Vicon’s roadmap: Theia markerless aims to reduce reliance on reflective spheres; Nexus AI tools already speed up auto‑labeling; and the Capture.U app logs contextual IDs alongside IMU streams. As those features mature, researchers like Simone will spend less time cleaning data and more time answering the big questions: How should a robot bow when greeting? How high should it lift a tray? Where should its eyes look during a hand‑over?
Advice for Newcomers to Motion Capture
“The software has improved significantly in terms of usability.” Simone credits Vicon’s regular updates for lowering the barrier to entry. “The tutorials available today are really, really, really helpful.” For anyone daunted by calibration or skeleton setup, Vicon’s YouTube channel and quick‑start PDFs shorten the learning curve. “You can easily integrate other systems like EMG sensors with Vicon.” That openness means you’re not locked into a single vendor for force plates, VR headsets or robotic controllers.
“My advice is, rely on the support from Vicon and Prophysics – they respond within 10 minutes.” When a marker set falls apart mid‑study, that speed matters. “Prophysics has been super responsive and helped me with everything.” Good support turns scary downtimes into minor blips.
Data, Dignity, and the Path to Everyday Care Robots
Geriatronics at TUM exemplifies what happens when rigorous motion capture meets empathetic design. Simone Nertinger’s work turns inert joint angles into living, breathing interactions that respect human dignity. Vicon’s ultra‑precise data, mirrored in GARMI’s joints, ensures that users perceive GARMI as familiar rather than foreign, supportive rather than sterile.
As the lab moves from simulation to real‑world trials, the combination of precise measurement, culturally aware design and continuous user feedback will decide whether robots like GARMI become everyday helpers or remain lab curiosities. If the trajectory of vacuum robots is any clue, the leap from novelty to necessity can happen fast, especially when engineers, clinicians and end‑users share a common language of data. With Vicon as that bridge, the future of geriatronic care looks less like science fiction and more like common sense.
Ready to put motion capture at the heart of your human‑robot interaction studies? Talk to Vicon about scalable, integrated solutions that grow with your research.