Powering Physical AI: High-Fidelity Biomechanical Priors for Embodied Intelligence

OWN RISE Lab is architecting the data infrastructure required to bridge the gap between digital reasoning and physical action. By integrating high-frequency kinetic data into existing AI stacks, we provide the physical grounding necessary for autonomous systems to navigate, adapt, and interact with the real world with human-like precision.

Humanoid robotics is trying to solve locomotion from scratch. Boston Dynamics, Figure, and Tesla's Optimus run millions of simulated episodes trying to learn what every human already does without thinking. But humans are bipedal robots that already work. Four hundred million years of evolution have already solved terrain navigation across every surface on Earth.

We view the human nervous system as a biological intelligence that has pre-computed the solution to terrain navigation. OWN is building the translation layer to port this biological intelligence into embodied AI. Every time someone wearing our sensors walks down a trail, steps off a curb, or catches themselves on ice, we are recording a solved reinforcement learning episode. We are not generating synthetic data. We are harvesting solved compute.

Our hardware sits at the only point of contact between the agent and the environment. We capture high-fidelity ground reaction forces, inertial dynamics, and surface interactions at 300Hz in the wild. This allows us to digitize the physics of the real world. We are building the datasets necessary to close the sim-to-real gap that currently limits the entire robotics industry.

1. The Physical AI Bottleneck

The field of embodied intelligence is converging on a clear architectural consensus: vision-language-action (VLA) models represent the most promising path toward generalist robotic agents. Systems like Google DeepMind's RT-2, the Open X-Embodiment collaboration, and NVIDIA's GR00T have demonstrated that large pretrained models can be adapted for robotic control when given sufficiently rich sensorimotor data. But a structural gap persists in every one of these systems. They are overwhelmingly trained on visual and linguistic inputs. The physical world, however, is not purely optical. It is governed by contact forces, surface compliance, gravitational loading, and the continuous interplay between inertia and friction that makes bipedal locomotion possible.

This is not a minor omission. It is the central bottleneck in sim-to-real transfer. Radosavovic et al. demonstrated that transformer-based policies trained in simulation can achieve zero-shot real-world humanoid locomotion, but noted that domain randomization of contact and terrain parameters remains essential to bridge the reality gap (Science Robotics, 2024). Physics engines like MuJoCo, Isaac Sim, and PyBullet approximate ground contact through parameterized models of friction and restitution. These parameters are typically hand-tuned or borrowed from material databases. They rarely reflect the actual force distributions that a human or robot would encounter walking across wet concrete, loose gravel, or a carpeted office floor. The result is a well-documented class of failures: policies that work beautifully in simulation but produce brittle, overcautious, or unstable locomotion on real terrain. As Kim et al. showed in their systematic evaluation of sim-to-real techniques for humanoid bipedal locomotion, contact and terrain modeling errors remain among the primary sources of the reality gap (IEEE Robotics and Automation Magazine, 2025).

OWN RISE Lab exists to close this gap. We are building the ground truth data infrastructure that physical AI systems need but currently lack: continuous, high-frequency, ecologically valid measurements of how humans actually interact with the physical world through their feet. With 200,000 instrumented pairs expected to be deployed by end of 2026 and pilot datasets available for research partners in March 2026, this is not a conceptual roadmap. It is infrastructure under construction.

2. Augmenting World Models with Kinetic Intelligence

Current breakthroughs in Physical AI have established the power of vision-language-action architectures for robotic reasoning and planning. OWN RISE Lab will enhance these foundations by introducing a parallel stream of proprioceptive and haptic data that captures what cameras fundamentally cannot see.

Consider what happens when a person walks from a polished tile lobby onto a rain-slicked sidewalk, then up a flight of metal-grated stairs. A camera mounted on a robot's head will register changes in visual texture and geometry. It will not register the 3x increase in surface compliance, the drop in friction coefficient from 0.7 to 0.3, or the shift in center-of-pressure trajectory that the walker's nervous system detects and compensates for within 150 milliseconds. These are not edge cases. They are the ordinary physics of locomotion, and they are invisible to vision-only perception stacks.

OWN's instrumented footwear will capture this hidden physics at 300Hz through embedded pressure grids and inertial measurement units (IMUs), paired with what is, to our knowledge, the first commercially deployed photoplethysmography (PPG) sensor integrated into the sole of a shoe. Plantar PPG is a notoriously difficult engineering problem: the foot sole presents unique challenges in motion artifact rejection, variable contact pressure, and optical coupling through skin under load (Hong et al., Sensors, 2018). Existing smart insole platforms from Plantiga, Baliston, and academic groups like Stevens' SportSole have solved the pressure and inertial sensing problem, but none have achieved continuous cardiovascular signal acquisition from the plantar surface in a consumer form factor. OWN is engineering this capability. This means our data stream will be not just kinetic but simultaneously cardiovascular, enabling the fusion of gait dynamics with heart rate, heart rate variability, and peripheral vascular response in a single device worn during natural locomotion. The data stream will provide three categories of intelligence that current AI models are missing:

1. Physics-Grounded Priors

Ground reaction force (GRF) distributions measured during natural locomotion will serve as direct calibration targets for physics engine parameters. Rather than guessing friction coefficients or contact stiffness, simulation environments can be tuned to reproduce the actual force profiles observed in the real world. This is functionally equivalent to providing a physics engine with a supervision signal for its contact model, a capability that does not currently exist in any commercial simulation pipeline.
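
As an illustration of what this calibration could look like, the sketch below fits the stiffness and damping of a linear spring-damper contact model to a vertical GRF trace by least squares. The contact model, signal shapes, and parameter values are illustrative assumptions, not OWN's production pipeline.

```python
import numpy as np

def fit_contact_params(penetration, penetration_rate, grf_measured):
    """Fit spring-damper contact parameters (k, c) so that
    k * penetration + c * penetration_rate best reproduces the
    measured vertical ground reaction force (linear least squares)."""
    A = np.column_stack([penetration, penetration_rate])
    (k, c), *_ = np.linalg.lstsq(A, grf_measured, rcond=None)
    return k, c

# Synthetic stand-in for a measured 300Hz GRF trace (illustrative only).
t = np.linspace(0, 0.3, 90)            # one stance phase at ~300Hz
x = 0.004 * np.sin(np.pi * t / 0.3)    # penetration depth [m]
xdot = np.gradient(x, t)               # penetration rate [m/s]
grf = 2.0e5 * x + 800.0 * xdot         # "measured" force [N]

k, c = fit_contact_params(x, xdot, grf)
```

With real data the residual of this fit also serves as a diagnostic: a large residual signals that the chosen contact model class, not just its parameters, fails to explain the measured forces.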

2. Environmental Haptics

Surface compliance, stability, and friction are properties that optical sensors can only infer indirectly (and often incorrectly). A camera cannot distinguish between dry and wet marble. It cannot tell you whether a grassy slope is firm or saturated. OWN's pressure and inertial data will encode these properties directly, providing terrain characterization that can be mapped to over 100 surface types encountered in urban, industrial, and natural environments. This data layer is critical for any world model that needs to reason about traversability, energy expenditure, or fall risk.
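
A minimal sketch of such terrain characterization, using a few hand-picked scalar features (peak loading rate, impact jerk, force variability) over one stance window. The feature set and signal shapes are illustrative assumptions, not OWN's actual classifier inputs.

```python
import numpy as np

def terrain_features(grf_z, accel, fs=300.0):
    """Extract simple terrain-interaction features from one stance window.

    grf_z : vertical ground reaction force samples [N]
    accel : foot acceleration magnitude samples [m/s^2]
    fs    : sample rate [Hz]
    """
    loading_rate = np.max(np.gradient(grf_z) * fs)          # peak dF/dt [N/s]
    impact_jerk = np.max(np.abs(np.gradient(accel) * fs))   # peak |da/dt|
    force_cv = np.std(grf_z) / (np.mean(grf_z) + 1e-9)      # force variability
    return {
        "loading_rate": float(loading_rate),
        "impact_jerk": float(impact_jerk),
        "force_cv": float(force_cv),
    }

# Illustrative contrast: a stiff surface produces a sharper force onset
# (higher loading rate) than a compliant one, for the same stance duration.
t = np.linspace(0, 0.3, 90)
stiff = 700 * np.sin(np.pi * t / 0.3) ** 0.5
soft = 700 * np.sin(np.pi * t / 0.3) ** 2
accel = np.abs(np.sin(40 * t))
f_stiff = terrain_features(stiff, accel)
f_soft = terrain_features(soft, accel)
```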

3. Biomechanical Benchmarks

Synthetic motion generated in simulation (via motion capture retargeting or learned policies) currently lacks a rigorous ground truth for validation. How do you know whether a simulated biped's foot placement forces are realistic? OWN's population-scale data from natural locomotion will establish these benchmarks. We will provide the reference distributions against which generated motion can be evaluated, enabling quantitative validation of sim-to-real fidelity rather than qualitative visual inspection.
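
One way to make such a benchmark quantitative is a distributional distance between reference and generated force statistics. The sketch below uses a sorted-sample 1-D earth mover's distance over hypothetical peak-GRF samples (in body weights); the distributions are synthetic stand-ins, not OWN data.

```python
import numpy as np

def wasserstein_1d(ref, gen):
    """1-D earth mover's distance between two equal-size empirical
    samples, computed from matched sorted order statistics."""
    return float(np.mean(np.abs(np.sort(ref) - np.sort(gen))))

rng = np.random.default_rng(0)
# Hypothetical reference: peak vertical GRF in units of body weight.
ref_peaks = rng.normal(1.1, 0.08, 1000)
good_gen = rng.normal(1.1, 0.08, 1000)   # well-calibrated generator
bad_gen = rng.normal(1.4, 0.08, 1000)    # biased generator

d_good = wasserstein_1d(ref_peaks, good_gen)
d_bad = wasserstein_1d(ref_peaks, bad_gen)
```

The same scalar can gate a synthetic-data pipeline: reject generated motion whose force statistics drift beyond a tolerance from the reference distribution.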

3. The Ground Truth Pipeline

The core infrastructure of OWN RISE Lab is a four-stage pipeline that transforms raw footwear sensor data into ML-ready datasets for robotics, health, and world model training.

Stage 1: Capture

High-fidelity insoles embedded in standard footwear record force, pressure, inertia, and cardiovascular signals at 300Hz during unconstrained natural locomotion. This is a deliberate design choice. Lab-based motion capture and force plate systems produce exquisite data, but they sample minutes of behavior in artificial environments. OWN captures hours and days of behavior in the environments where people actually walk, run, climb, stumble, and recover. The ecological validity of this data is what makes it uniquely valuable for training robust locomotion policies.

Stage 2: Process

Edge AI running on the device computes calibrated ground reaction forces, slip detection events, and foot pose estimation in real time. This on-device processing layer serves two purposes. First, it reduces the bandwidth and storage requirements for continuous data collection by extracting features at the source. Second, it enables real-time feedback loops that are necessary for downstream applications in clinical monitoring and adaptive robotics.
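
A minimal sketch of one such on-device feature, slip detection, under the common assumption that slip onset corresponds to the required friction coefficient (shear over normal force) exceeding what the surface can supply. The thresholds and force values are illustrative.

```python
import numpy as np

def detect_slip(f_shear, f_normal, mu_available=0.5, f_min=50.0):
    """Flag samples where the required friction coefficient
    (|shear| / normal) exceeds the available friction.

    f_min guards against divide-by-near-zero during swing phase,
    when the foot carries almost no load."""
    loaded = f_normal > f_min
    required_mu = np.zeros_like(f_normal)
    required_mu[loaded] = np.abs(f_shear[loaded]) / f_normal[loaded]
    return loaded & (required_mu > mu_available)

# Illustrative stance trace: a shear spike mid-stance triggers the flag.
f_normal = np.array([10.0, 600.0, 600.0, 600.0, 10.0])
f_shear = np.array([5.0, 100.0, 400.0, 100.0, 5.0])
flags = detect_slip(f_shear, f_normal)
```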

Stage 3: Annotate

A companion application enables users to label terrain type (gravel, ice, stairs, wet tile), activity context (commuting, trail running, occupational), critical events (near-falls, perturbation recoveries), and physiological state (fatigue level, perceived stability). This annotation layer is what transforms raw sensor streams into supervised training data. The combination of continuous sensing with episodic human labeling produces datasets that are both large in scale and rich in semantic content, a combination that is exceptionally difficult to achieve through either pure automation or pure manual annotation alone.

Stage 4: Output

The final output is multimodal datasets formatted for direct ingestion into major physics engines (Isaac Sim, MuJoCo), reinforcement learning frameworks (IsaacGym, Stable-Baselines3), and health AI training pipelines. These datasets are not raw dumps. They are structured, versioned, and accompanied by metadata that specifies sensor calibration, participant demographics, environmental conditions, and data quality metrics.
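
To make "structured, versioned, and accompanied by metadata" concrete, here is a hypothetical metadata sidecar for one recording session. Every field name here is an illustrative assumption, not a published OWN schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class RecordingMetadata:
    """Hypothetical per-session metadata sidecar (illustrative fields)."""
    session_id: str
    sample_rate_hz: int
    channels: list               # recorded sensor modalities
    calibration_date: str        # last sensor calibration
    participant_age_band: str    # coarse banding for privacy
    terrain_labels: list = field(default_factory=list)
    quality_flags: dict = field(default_factory=dict)

meta = RecordingMetadata(
    session_id="sess-0001",
    sample_rate_hz=300,
    channels=["grf_z", "pressure_grid", "imu_6dof", "ppg"],
    calibration_date="2026-03-01",
    participant_age_band="40-49",
    terrain_labels=["gravel", "stairs"],
    quality_flags={"dropout_ratio": 0.002},
)
record = asdict(meta)  # serializable dict for a dataset manifest
```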

4. Three Core Data Products

The pipeline will feed three distinct product lines, each targeting a specific market need within the Physical AI and digital health ecosystems.

1. OWN RISE Motion

Curated multimodal datasets for robotics training. The fundamental constraint in humanoid robotics today is not hardware or algorithms; it is data. The Open X-Embodiment project demonstrated that aggregating diverse manipulation data across institutions dramatically improves policy generalization. An equivalent dataset for locomotion does not yet exist. OWN RISE Motion is designed to fill this void by providing population-scale ecological locomotion data (not constrained to lab treadmills or motion capture volumes) formatted for the physics engines and RL frameworks that robotics teams already use. The data includes synchronized force, pressure, inertial, and cardiovascular channels across diverse demographics, terrains, and activity types.

2. OWN RISE World

Terrain interaction datasets for sim-to-real transfer. World models trained on video alone can represent visual appearance of surfaces but cannot infer the physical properties that govern how an agent should interact with those surfaces. OWN RISE World will provide the missing layer: over 100 terrain types characterized by their actual ground interaction profiles (force absorption, energy return, slip propensity, compliance). This data will enable physics engines to generate contact models that match real-world behavior, directly improving sim-to-real transfer fidelity. It will also serve as a validation set for synthetic data pipelines, allowing teams to verify that their procedurally generated terrains produce physically plausible contact dynamics.

3. OWN RISE Health

Foundation models for disease prediction and remote health monitoring. Gait has been described in the clinical literature as "the sixth vital sign" because changes in walking patterns are among the earliest detectable biomarkers for a range of conditions. The evidence base is robust: Studenski et al.'s pooled analysis of 34,485 older adults across nine cohort studies found that gait speed predicted survival with a hazard ratio of 0.88 per 0.1 m/s increase (JAMA, 2011), and Veronese et al.'s meta-analysis of 48 prospective cohorts confirmed associations between gait speed and mortality, cardiovascular disease, and cancer (Journal of the American Medical Directors Association, 2018). Beyond speed, higher-dimensional gait features have shown clinical promise: Schlachetzki et al. demonstrated that wearable sensor-based spatiotemporal gait parameters objectively distinguish Parkinson's disease stages with high biomechanical resolution (PLOS ONE, 2017), and Zhang et al. showed that wearable IMU-derived gait biomarkers can differentiate early PD motor subtypes and track disease progression (npj Digital Medicine, 2024). OWN RISE Health will leverage our longitudinal, high-dimensional gait data, fused with plantar PPG-derived cardiovascular signals, to build predictive models for pre-symptomatic detection of neurodegenerative, cardiovascular, and musculoskeletal conditions. The deployment model will support both edge inference (on-device alerts and monitoring) and cloud-based analytics (population-level surveillance and clinical trial endpoints).

5. Empirical Validation via Iterative Toy Models

A claim that footwear sensor data improves robotic locomotion policies is only as credible as the evidence behind it. We are not in the business of making promissory assertions. OWN RISE Lab is establishing a rigorous testing framework using simplified robotic toy models to quantify the impact of our data before scaling to full humanoid systems.

1. Baseline Comparison

We utilize low-dimensional bipedal walker and inverted pendulum systems to isolate the specific contribution of kinetic data. The experimental design is straightforward: train identical policy architectures on vision-only observation spaces versus augmented observation spaces that include OWN pressure and IMU streams. The performance delta (measured in convergence speed, asymptotic reward, and out-of-distribution robustness) will directly quantify the value of the kinetic modality. We expect that augmented agents will converge to stable locomotion policies faster and exhibit lower variance in gait stability metrics when deployed on novel terrains.
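
The augmented-observation condition can be sketched as a simple concatenation of a vision embedding with kinetic channels. In practice this would live in a gym-style ObservationWrapper; the dependency-free version below shows the idea, with assumed shapes and channel choices.

```python
import numpy as np

def augment_observation(vision_obs, grf, cop, imu):
    """Concatenate a flattened vision embedding with kinetic channels
    (GRF, centre of pressure, IMU) into one policy observation vector."""
    kinetic = np.concatenate([np.atleast_1d(grf),
                              np.atleast_1d(cop),
                              np.atleast_1d(imu)])
    return np.concatenate([np.ravel(vision_obs), kinetic])

vision = np.zeros((8, 8))  # placeholder 64-dim visual embedding
obs = augment_observation(vision,
                          grf=[650.0],          # vertical GRF [N]
                          cop=[0.02, -0.01],    # centre of pressure [m]
                          imu=[0.1] * 6)        # 6-DOF inertial channels
```

The baseline condition is the same pipeline with the kinetic channels zeroed or omitted, so the two policies see identically shaped or minimally different observation spaces.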

2. Policy Robustness

The primary test condition is perturbation recovery. Toy models are exposed to stochastic terrain changes (sudden compliance shifts, surface inclination perturbations, friction coefficient drops) and evaluated on their ability to maintain stable locomotion without falling. The hypothesis is that kinetic feedback loops (GRF sensing, slip detection, center-of-pressure tracking) significantly reduce time-to-recovery and widen the basin of stability for learned policies. This maps directly to the real-world problem of robotic fall prevention, one of the most commercially significant challenges in humanoid deployment.
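
Time-to-recovery, the headline metric here, can be defined as the delay from a perturbation until a stability score stays above a threshold for a sustained window. A minimal sketch, with an illustrative scalar stability signal:

```python
import numpy as np

def time_to_recovery(stability, threshold=0.8, fs=300.0, hold=30):
    """Seconds from a perturbation (t = 0) until the stability score
    stays above `threshold` for `hold` consecutive samples; NaN if
    the policy never re-stabilizes within the trace."""
    ok = stability > threshold
    for i in range(len(ok) - hold + 1):
        if ok[i:i + hold].all():
            return i / fs
    return float("nan")

# Illustrative trace: 0.2 s of instability, then sustained recovery.
stability = np.concatenate([np.full(60, 0.4), np.full(200, 0.95)])
ttr = time_to_recovery(stability)
```

The `hold` requirement prevents a momentary threshold crossing during an oscillatory recovery from being counted as stabilization.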

3. Sim-to-Real Transfer Sandbox

These toy models also serve as the primary development environment for refining the mapping between raw sensor data and actuator commands. The dimensionality reduction from full humanoid frames (30+ degrees of freedom) to simplified walkers (4-6 DOF) allows us to iterate rapidly on data preprocessing, feature engineering, and policy architecture choices before committing compute to large-scale training runs. This iterative toy model methodology is standard practice in robotics research, and we apply it rigorously.

6. Architectural Vision: Scaling Biomechanical Intelligence

We are designing a vertically integrated compute pipeline to transform human motion data into actionable intelligence at scale. The roadmap is built around three prospective technical milestones, each aligned with the GPU-accelerated infrastructure that modern Physical AI requires.

1. High-Fidelity Neural Reconstruction

Future integration with NVIDIA Isaac Lab to calibrate physics engine contact models using real-world ground reaction forces. The objective is to enable zero-shot policy transfer: a locomotion policy trained in a physics engine calibrated with OWN data should produce stable, natural-looking gait when deployed on a physical robot without further fine-tuning. This requires that the simulation's contact dynamics match real-world force distributions with sufficient fidelity, measured by GRF profile correlation and center-of-pressure trajectory error. Our data will provide the calibration signal; Isaac Lab provides the simulation framework whose contact parameters can be tuned against it.

2. CUDA-Accelerated Data Orchestration

At population scale, our 300Hz sensor streams generate substantial data volumes. Processing millions of concurrent data points for real-time feature extraction, pose estimation, and anomaly detection requires GPU-accelerated preprocessing. We envision leveraging CUDA-based pipelines for signal processing operations (FFT-based spectral decomposition, wavelet transforms for transient event detection, sliding-window feature computation) that would be prohibitively slow on CPU architectures at the throughput we require.
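
A CPU reference for the sliding-window spectral step might look like the following; because CuPy mirrors the NumPy array API, the same kernel maps to GPU largely by swapping the import. Window sizes and the toy 2 Hz cadence signal are illustrative.

```python
import numpy as np

def window_spectral_features(signal, fs=300.0, win=256, hop=128):
    """Sliding-window FFT features (dominant frequency, band power)
    over a 1-D sensor stream, Hann-windowed to limit leakage."""
    feats = []
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    for start in range(0, len(signal) - win + 1, hop):
        chunk = signal[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(chunk))
        dominant = freqs[int(np.argmax(spec[1:])) + 1]  # skip the DC bin
        feats.append((float(dominant), float(np.sum(spec ** 2))))
    return feats

# Illustrative stream: a 2 Hz gait-cadence component at 300Hz.
t = np.arange(0, 4, 1 / 300.0)
stream = np.sin(2 * np.pi * 2.0 * t)
features = window_spectral_features(stream)
```

The frequency resolution here is fs/win (about 1.17 Hz), so the detected peak lands on the bin nearest the true cadence; longer windows trade latency for resolution.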

3. Gait Foundation Model (GFM)

The long-term objective is to train the first large-scale foundation model for human gait. Analogous to how language models learn generalizable representations of text and vision models learn transferable features from images, a Gait Foundation Model would learn the latent structure of human locomotion from population-scale sensor data. The target capabilities include: predicting physical fatigue onset from gait degradation patterns, detecting neurodegenerative biomarkers years before clinical symptom presentation, generating terrain-specific locomotive strategies for robotic transfer, and personalizing fall risk assessment from individual gait signatures. This is a multi-year, multi-institution effort that will require multi-node GPU clusters and partnerships with clinical and robotics research groups.

7. Core Research Vectors

OWN RISE Lab's research agenda is organized around four interconnected vectors that span the full stack from sensing hardware to clinical translation.

1. Multimodal Fusion for Spatial Awareness

Fusing plantar pressure-grid data with visual odometry to achieve superior spatial awareness for autonomous agents. The core insight is that foot-ground contact provides a complementary localization signal to visual SLAM: pressure patterns encode stride length, turning dynamics, and surface transitions that can disambiguate visual aliasing (e.g., identical-looking corridor segments) and provide drift correction for inertial navigation. This research vector has direct applications in both indoor robot navigation and pedestrian dead reckoning for GPS-denied environments.
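
The pedestrian dead-reckoning piece can be sketched in its simplest form: detect heel strikes from the vertical GRF and integrate a calibrated stride length. Constant stride length and the load threshold are simplifying assumptions; a real system would estimate stride length per step and fuse the result with inertial and visual odometry.

```python
import numpy as np

def detect_heel_strikes(grf_z, threshold=100.0):
    """Indices where vertical GRF crosses a load threshold upward,
    i.e. the onset of each stance phase."""
    above = grf_z > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def dead_reckon_1d(strike_indices, stride_length=0.72):
    """Distance travelled as strike count times an assumed constant
    calibrated stride length [m]."""
    return len(strike_indices) * stride_length

# Illustrative trace with three stance phases.
grf = np.array([0.0, 0, 500, 500, 0, 0, 600, 600, 0, 0, 550, 0])
strikes = detect_heel_strikes(grf)
distance = dead_reckon_1d(strikes)
```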

2. Terrain Adaptation and Hidden Haptics

Mapping the haptic properties of urban and industrial environments that are invisible to cameras. This includes the compliance gradient across a pothole repair, the friction differential between painted crosswalk lines and asphalt, the vibration signature of a metal grate versus a wooden boardwalk. These are the terrain features that cause real robots to stumble, and they can only be captured through direct contact sensing. Our goal is to build the first large-scale haptic terrain atlas that pairs visual surface appearance with measured physical interaction properties.

3. Predictive Biomarkers from Gait

Utilizing high-dimensional gait data as a continuous, passive diagnostic tool for neuro-motor health. The clinical evidence base for gait as a biomarker is substantial: Studenski et al. showed that gait speed predicted 5-year and 10-year survival across 34,485 older adults as accurately as models using chronic conditions, blood pressure, BMI, and hospitalization history combined (JAMA, 2011). Hardy et al. further demonstrated that improvement in gait speed predicts a substantial reduction in mortality, concluding that gait speed may serve as a useful "vital sign" for older adults (Journal of the American Geriatrics Society, 2007). Higher-dimensional gait features (asymmetry indices, variability metrics, spectral content of ground reaction forces) have shown promise in detecting early-stage Parkinson's disease (Schlachetzki et al., PLOS ONE, 2017; Zhang et al., npj Digital Medicine, 2024), differentiating PD from essential tremor using wearable IMU data (Lin et al., Journal of Neurology, 2023), and tracking disease progression longitudinally. OWN RISE Health will translate this clinical evidence into deployable models that run on consumer-grade hardware, enabling remote monitoring at a scale that clinic-based gait labs cannot achieve.
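
Two of the higher-dimensional features mentioned above, stride-time variability and an inter-limb symmetry index, can be computed directly from stride-time series. A minimal sketch with illustrative stride times; the definitions follow common gait-analysis conventions, not a specific clinical protocol.

```python
import numpy as np

def gait_variability_metrics(stride_times_left, stride_times_right):
    """Stride-time coefficient of variation across both limbs, and a
    symmetry index: absolute left/right mean difference normalized by
    the grand mean stride time."""
    all_strides = np.concatenate([stride_times_left, stride_times_right])
    cv = float(np.std(all_strides) / np.mean(all_strides))
    ml, mr = np.mean(stride_times_left), np.mean(stride_times_right)
    symmetry_index = float(abs(ml - mr) / (0.5 * (ml + mr)))
    return {"stride_time_cv": cv, "symmetry_index": symmetry_index}

# Illustrative asymmetric gait: the right limb strides consistently longer.
left = np.array([1.02, 1.01, 1.03, 1.00])
right = np.array([1.10, 1.12, 1.09, 1.11])
m = gait_variability_metrics(left, right)
```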

4. Edge-AI Inference

Optimizing low-latency inference models for real-time feedback on resource-constrained hardware (Jetson-class and below). The deployment constraint for both clinical monitoring and robotic control is latency: useful biomechanical feedback must arrive within a single gait cycle (approximately 500-600ms at walking speed). This requires aggressive model compression (quantization, pruning, knowledge distillation) and hardware-aware architecture search to produce models that fit within the memory and compute budgets of edge devices while maintaining clinically and robotically relevant accuracy thresholds.

8. Application Domains

OWN RISE Lab's data products serve four application domains, each with a distinct value proposition for the Physical AI and digital health ecosystems.

Sim-to-Real Transfer: Calibrate physics engine contact models with real-world GRF distributions. Enable zero-shot policy transfer from simulation to physical robots by closing the reality gap at the contact dynamics level.

World Models: Provide physics priors that video-only world models fundamentally lack. Surface friction, compliance, and energy return properties derived from foot-ground interaction data enable world models to reason about traversability and physical plausibility.

Vision-Language-Action: Supply the multimodal training data required for generalist robot policies. VLA architectures achieve their best performance when observation spaces include proprioceptive and force-sensing modalities alongside vision and language.

Clinical Biomarkers: Remote monitoring endpoints and digital therapeutic outcome measures. Continuous gait data provides clinically validated biomarkers for neurodegenerative screening, fall risk stratification, and rehabilitation progress tracking.

9. Why This Data Is Defensible Infrastructure

The Physical AI market is projected to exceed $20B by 2030, driven primarily by humanoid robotics, autonomous navigation, and health AI. Within this landscape, data infrastructure companies occupy a structurally advantaged position. Models are becoming commoditized. Training data is not.

OWN's defensibility rests on four pillars. First, the hardware integration: our sensing platform is embedded directly into footwear manufactured by a company with 40+ years of experience in the footwear industry, creating a vertically integrated data collection channel that is difficult to replicate by pure-play AI companies. Second, the proprietary IP: OWN holds a portfolio of patents covering the smart shoe sensing architecture, including our plantar PPG integration, which represents a technical barrier that competitors have not cleared. Third, the data network effects: as more users wear instrumented footwear across more terrain types and demographic segments, the dataset becomes exponentially more valuable for training generalizable models. Fourth, the annotation layer: raw sensor data is abundant; semantically rich, ecologically valid, labeled locomotion data is exceptionally scarce. Our pipeline produces the latter.

The insight that grounds this entire venture is simple and, in our view, correct: the robots that will navigate our cities, our warehouses, and our hospitals will need to understand the physics of foot-ground contact at least as well as humans do. That understanding will not come from simulation alone. It will come from measuring the real world, at scale, through the surface where body meets environment.

10. Collaboration and Access

OWN RISE Lab collaborates on dataset access, joint publications, and co-research programs. We are particularly interested in partnerships with groups working on humanoid locomotion policy learning, clinical gait analysis and remote patient monitoring, physics engine development and sim-to-real benchmarking, world model architectures that incorporate contact and force modalities, and synthetic data validation for locomotion training environments.

Every step is the ground truth. We invite research groups, robotics companies, and health AI teams to build on it.

References
1. Hardy, S. E., Perera, S., Roumani, Y. F., Chandler, J. M., & Studenski, S. A. (2007). Improvement in usual gait speed predicts better survival in older adults. Journal of the American Geriatrics Society, 55(11), 1727–1734. https://doi.org/10.1111/j.1532-5415.2007.01413.x
2. Hong, S., & Park, K. S. (2018). Unobtrusive photoplethysmographic monitoring under the foot sole while in a standing posture. Sensors, 18(10), 3239. https://doi.org/10.3390/s18103239
3. Kim, D., Lee, H., Cha, J., & Park, J. (2025). Bridging the reality gap: Analyzing sim-to-real transfer techniques for reinforcement learning in humanoid bipedal locomotion. IEEE Robotics and Automation Magazine, 32(1), 49–58. https://doi.org/10.1109/MRA.2024.3505784
4. Lin, S., Gao, C., Li, H., Huang, P., Ling, Y., Chen, Z., Ren, K., & Chen, S. (2023). Wearable sensor-based gait analysis to discriminate early Parkinson's disease from essential tremor. Journal of Neurology, 270(4), 2283–2301. https://doi.org/10.1007/s00415-023-11577-6
5. Radosavovic, I., Xiao, T., Zhang, B., Darrell, T., Malik, J., & Sreenath, K. (2024). Real-world humanoid locomotion with reinforcement learning. Science Robotics, 9(89), eadi9579. https://doi.org/10.1126/scirobotics.adi9579
6. Schlachetzki, J. C. M., Barth, J., Marxreiter, F., Gossler, J., Kohl, Z., Reinfelder, S., Gassner, H., Aminian, K., Eskofier, B. M., Winkler, J., & Klucken, J. (2017). Wearable sensors objectively measure gait parameters in Parkinson's disease. PLOS ONE, 12(10), e0183989. https://doi.org/10.1371/journal.pone.0183989
7. Studenski, S., Perera, S., Patel, K., Rosano, C., Faulkner, K., Inzitari, M., Brach, J., Chandler, J., Cawthon, P., Connor, E. B., Nevitt, M., Visser, M., Kritchevsky, S., Badinelli, S., Harris, T., Newman, A. B., Cauley, J., Ferrucci, L., & Guralnik, J. (2011). Gait speed and survival in older adults. JAMA, 305(1), 50–58. https://doi.org/10.1001/jama.2010.1923
8. Veronese, N., Stubbs, B., Volpato, S., Zuliani, G., Maggi, S., Cesari, M., Lipnicki, D. M., Smith, L., Schofield, P., Firth, J., Vancampfort, D., Koyanagi, A., Pilotto, A., & Cereda, E. (2018). Association between gait speed with mortality, cardiovascular disease and cancer: A systematic review and meta-analysis of prospective cohort studies. Journal of the American Medical Directors Association, 19(11), 981–988.e7. https://doi.org/10.1016/j.jamda.2018.06.007
9. Zhang, W., Ling, Y., Chen, Z., Ren, K., Chen, S., Huang, P., & Tan, Y. (2024). Wearable sensor-based quantitative gait analysis in Parkinson's disease patients with different motor subtypes. npj Digital Medicine, 7(1), 169. https://doi.org/10.1038/s41746-024-01163-z