The Illustris project comprises large-scale cosmological simulations of the evolution of the universe, spanning the initial conditions of the Big Bang to the present day, 13.8 billion years later. The models, based on the most precise data and calculations currently available, are compared with observations of the actual universe in order to better understand its nature, including galaxy formation, dark matter, and dark energy.
The simulation includes many physical processes which are thought to be critical for galaxy formation. These include the formation of stars and the subsequent "feedback" due to supernova explosions, as well as the formation of super-massive black holes, their consumption of nearby gas, and their multiple modes of energetic feedback.
Images, videos, and other data visualizations for public distribution are available on the project's official media page.
The main Illustris simulation was run on the Curie supercomputer at CEA (France) and the SuperMUC supercomputer at the Leibniz Computing Center (Germany). A total of 19 million CPU hours was required, using 8,192 CPU cores. Peak memory usage was approximately 25 TB of RAM. A total of 136 snapshots were saved over the course of the simulation, totaling over 230 TB of cumulative data volume.
A code called "Arepo" was used to run the Illustris simulations. It was written by Volker Springel, also the author of the GADGET code; the name derives from the Sator Square. The code solves the coupled equations of gravity and hydrodynamics using a discretization of space based on a moving Voronoi tessellation, and it is optimized for large, distributed-memory supercomputers using an MPI approach.
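The idea of a Voronoi tessellation can be sketched in a few lines. The following is a simplified, static 2-D illustration using SciPy; it is not Arepo's implementation (Arepo uses a 3-D mesh whose generating points move with the gas flow), only a sketch of the underlying discretization concept.

```python
# Simplified 2-D illustration of a Voronoi tessellation, the kind of
# spatial discretization Arepo builds (Arepo's mesh is 3-D and moving;
# this static example is only a conceptual sketch).
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
points = rng.random((50, 2))   # 50 mesh-generating points in the unit square

vor = Voronoi(points)
# Each generating point owns one Voronoi cell: the region of space closer
# to it than to any other point. In a moving-mesh hydrodynamics code,
# fluid quantities live in the cells and fluxes are exchanged across the
# faces shared by neighboring cells.
print(f"{len(points)} generating points, "
      f"{len(vor.ridge_points)} cell-to-cell interfaces")
```

Because each cell is defined relative to its neighbors, letting the generating points drift with the fluid makes the mesh itself follow the flow, which is the key property Arepo exploits.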
Public data release
In April 2015, eleven months after the first papers were published, the project team publicly released all data products from all simulations. All original data files can be directly downloaded through the data release webpage. This includes group catalogs of individual halos and subhalos, merger trees tracking these objects through time, full snapshot particle data at 135 distinct time points, and various supplementary data catalogs. In addition to direct data download, a web-based API allows many common search and retrieval tasks to be completed without needing access to the full data sets.
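A query against the web-based API can be sketched as below. The base URL and the "api-key" request header follow the public data-release documentation; the specific endpoint path and field names are illustrative assumptions, and a free registered API key is required for real requests.

```python
# Minimal sketch of querying the Illustris web-based API with the
# `requests` library. Endpoint layout is illustrative, patterned on the
# data-release documentation; no request is sent until `get()` is called.
import requests

BASE_URL = "http://www.illustris-project.org/api/"

def subhalo_url(simulation, snapshot, subhalo_id):
    """Build the URL for a single subhalo resource (illustrative path)."""
    return f"{BASE_URL}{simulation}/snapshots/{snapshot}/subhalos/{subhalo_id}/"

def get(url, api_key, params=None):
    """GET a JSON resource, authenticating via the 'api-key' header."""
    r = requests.get(url, params=params, headers={"api-key": api_key})
    r.raise_for_status()
    return r.json()

# Example usage (needs a valid key; field names are hypothetical):
#   data = get(subhalo_url("Illustris-1", 135, 0), api_key="YOUR_KEY")
#   print(data["mass"])
```

Serving results through an API like this lets users select halos, subhalos, or small particle cutouts without downloading the multi-terabyte snapshot files.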