Labeling data is expensive and time-consuming, especially for domains such as medical imaging that involve volumetric imaging data and require expert knowledge. Exploiting a larger pool of labeled data available across multiple centers, such as in federated learning, has also seen limited success, since current deep learning approaches do not generalize well to images acquired with scanners from different manufacturers. We aim to address these problems in a common, learning-based image simulation framework which we refer to as Federated Simulation. We introduce a physics-driven generative approach that consists of two learnable neural modules: 1) a module that synthesizes 3D cardiac shapes along with their materials, and 2) a CT simulator that renders these into realistic 3D CT volumes, with annotations. Since the model of geometry and material is disentangled from the imaging sensor, it can effectively be trained across multiple medical centers. We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
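To make the two-module design concrete, below is a minimal PyTorch sketch of the disentangled pipeline: a shared shape-and-material generator and a per-center CT simulator. All class names, network sizes, and the occupancy/material representation are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-module Federated Simulation design.
# Module names, shapes, and architectures are assumptions for illustration.
import torch
import torch.nn as nn

class ShapeMaterialGenerator(nn.Module):
    """Maps a latent code to a coarse 3D occupancy + material volume.
    Stands in for the learnable cardiac shape-and-material module,
    which is shared (federated) across medical centers."""
    def __init__(self, latent_dim=128, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 2 * grid ** 3),  # 2 channels: occupancy, material
        )

    def forward(self, z):
        out = self.net(z).view(-1, 2, self.grid, self.grid, self.grid)
        occupancy = out[:, :1].sigmoid()   # soft occupancy in [0, 1]
        material = out[:, 1:]              # per-voxel material parameter
        return occupancy, material

class CTSimulator(nn.Module):
    """Renders an occupancy/material volume into a CT-like volume.
    Stands in for the learnable sensor model; each center can keep
    its own copy tuned to its scanner. Here just a small 3D CNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, occupancy, material):
        return self.net(torch.cat([occupancy, material], dim=1))

# Because geometry/material is disentangled from the sensor, annotations
# come for free from the generated geometry.
generator = ShapeMaterialGenerator()
simulator = CTSimulator()
z = torch.randn(4, 128)
occupancy, material = generator(z)
ct_volume = simulator(occupancy, material)       # synthetic 3D CT volume
segmentation_label = (occupancy > 0.5).float()   # paired annotation
```

Under this split, only the generator's parameters would need to be trained jointly across centers, while each center's simulator absorbs scanner-specific appearance.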