Monte Python

The Monte Carlo code for class in Python

Download the latest version


The latest release is 3.0, and lives in a different repository. Head over there.

The full changelog is available on the GitHub page.

To be informed by mail of the latest release, write to brinckmann at, and you will be added to the mailing list.


Monte Python is a Monte Carlo code for cosmological parameter extraction. It contains likelihood codes for most recent experiments, and interfaces with the Boltzmann code class to compute the cosmological observables.

Several sampling methods are available: Metropolis-Hastings, Nested Sampling (through MultiNest), EMCEE (through CosmoHammer) and Importance Sampling.
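As a quick illustration of the default Metropolis-Hastings sampler (a minimal sketch, not Monte Python's actual implementation: it uses a toy one-dimensional Gaussian likelihood in place of a real cosmological one, and is written for Python 3 with NumPy):

```python
import numpy as np

def log_like(theta):
    # Toy stand-in for a real cosmological likelihood:
    # a unit Gaussian centered on zero.
    return -0.5 * theta**2

def metropolis_hastings(n_steps, step_size=1.0, seed=0):
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    theta, logp = 0.0, log_like(0.0)
    accepted = 0
    for i in range(n_steps):
        # Propose a Gaussian random-walk step.
        proposal = theta + step_size * rng.normal()
        logp_new = log_like(proposal)
        # Accept with probability min(1, L_new / L_old), in log space.
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_steps

chain, rate = metropolis_hastings(20000)
```

With a well-tuned step size, the chain samples the target distribution and the acceptance rate sits in a reasonable range; tuning this rate is exactly what Monte Python's -f jumping-factor option controls.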


The whole procedure is detailed there. You will need Python 2 (version 2.5 to 2.7). The required Python packages will be installed automatically by the installation script.

If you have git, the easiest way to install the code is to simply issue

git clone
cd montepython_public
python install --user

This will create a montepython_public directory, and install the required Python modules. You will still have to install class by yourself, though.


Complete documentation, alongside detailed instructions for installing the dependencies (notably the class wrapper and the Planck likelihood), is available on Read the Docs. A less complete PDF version is also available.

Getting Help

If the information contained in the documentation is not enough, please search for a similar problem on the issues page. If no post tackles your problem, create one: we will be notified automatically.


To contribute to the development and share your modifications, go to the GitHub page. Check out the code, follow the good practices described on this page, and when you are done modifying the code, send us a pull request.

Learning the code

The slides from a one-week workshop to learn class and Monte Python are available on this page. The slides should reflect the current version of the codes. There are also exercises available, with solutions.


The code is written in the Python language, which is a high-level language that is easy to read and learn. It also wraps efficient C routines for intensive numerical manipulation.

A good introduction for scientists can be found there. This blog is also a useful stop, with many tips for using Python daily in a scientific environment.

Previous versions

  • Implemented the new JLA supernovae analysis. Two versions are available, JLA and JLA_simple. Thanks to Martin Kunz, Stefano Foffa, Yves Dirian and Valeria Pettorino for their testing and help in debugging. The data files being massive, you are asked to download them manually.
  • Implemented the POLARBEAR likelihood (measurement of the BB spectrum in four bins at large ell).
  • Implemented a new observable, cosmic clocks, which measures the Hubble rate at different redshifts. Full information can be found in data/cosmic_clocks/README.txt (credits Licia Verde).
  • Fixed the bao_2013.txt data; it now uses the right numbers, referring to the correct sound horizon. Thanks to Martin Kunz, Stefano Foffa and Yves Dirian for finding it.
  • Implemented the BAO measurements from WiggleZ. Use the new likelihood, called WiggleZ_bao, in your parameter file.
  • Added a new BAO data point from Ross et al. 2014, available in data/bao_2014.txt along with all the previous ones. Note that all references to the historical and unfortunate rs_rescale are now gone: all the data points are stored as if computed with the true value of the sound horizon, not the Eisenstein and Hu 98 formula. Thanks to Antonio Cuesta for his help in annihilating this issue.
  • You can now combine parameters when analyzing (at your own risk, beware of the priors), using a new dictionary in the extra plot file.
  • Unified scheme for handling complex parameters from CLASS. You can now specify for instance m_ncdm__1, m_ncdm__2 as running parameters, and they will be treated correctly. Note the double underscore in the names!
  • It is now possible to add derived parameters that were not chosen before the run began.
  • added a --silent option to avoid printing on-screen the steps in parameter space.
  • Monte Python is now under the MIT License (a permissive BSD-type license).
  • v2.0:
  • You can now use MultiNest (credits F. Feroz and M. Hobson) and CosmoHammer (credits J. Akeret and S. Seehars) within Monte Python. Both modules are commented, and their documentation is available, as before, in the automatically generated documentation.
  • MultiNest has been wrapped in Python (credits J. Buchner, see the PyMultiNest package), and the interface with Monte Python was mostly done by Jesus Torrado. It is an excellent tool for exploring weirdly shaped and multi-modal posterior distributions (a feature automatically extracting the different modes and creating separate Monte Python folders is being tested). It also computes the evidence of a model.
  • CosmoHammer was implemented with the help of its creators at ETH, and its basic functionalities are now working. It is a great tool for making use of gigantic clusters.
  • Note that the use of both CosmoHammer and PyMultiNest through Monte Python is restricted by their specific licenses, which include citing their original paper(s). See their respective websites for proper usage.
  • Finally implemented an oversampling scheme, which allows fast runs with likelihoods that have a large number of nuisance parameters. The user can oversample each block of nuisance parameters at a rate of their choice. See the example in base.param, and don't forget to call the code with the option -j fast.
  • Repackaging: the whole folder should now behave more like a proper Python package. This includes renaming the `code' directory into `montepython', and moving the `likelihoods' folder inside this new `montepython'. Also, for a likelihood named `lkl', instead of following the previous structure, likelihoods/lkl/ and .data, you should now use montepython/likelihoods/lkl/ and. If you implemented a custom likelihood, you should update it to this standard. Note that the syntax to import the likelihood_class module has changed inside the likelihoods.
  • New WiggleZ analysis (credits Signe Riemer, David Parkinson), working and tested apart from the use of the nuisance parameter P0 (work in progress).
  • Update of the BAO data from 2013 (called `bao_boss', reference in the data/ folder), and added data from the CFHTLenS and Planck_SZ measurements.
  • First version of the test module, with basic functionalities being checked. It needs the `nose' Python package to run. Notably, there is a test listing all the functions that the cosmological code must provide in order to work with Monte Python; this can be a good guideline for implementing a wrapper around another Boltzmann code.
  • Path conventions changed to comply with Python standards. This might affect your modified likelihoods (you should now use os.path.join to create paths).
  • v1.2:
  • Added the Planck likelihoods (Planck_highl, Planck_lowl, Planck_lensing and lowlike); download the code and the data from the ESA website under frequently requested products. We provide an input parameter file, as well as a covariance matrix and a best-fit file for the base LCDM case. Note that you will have to use the option -f 1.5 to change the acceptance rate of the chains: as the posterior distribution of the nuisance parameters is highly non-Gaussian, this factor will ensure an acceptance rate close to 0.2.
  • Update of WMAP to the 9yr results.
  • Added a Cholesky decomposition method to speed up sampling for likelihoods with a large number of nuisance parameters (A. Lewis), which groups parameters into blocks of equal speed.
  • Transferred the documentation to Sphinx. Most comments in the code have been replaced by docstrings, which are automatically extracted to create a website. You can now navigate the different functions, see the dependencies, and view the source code in one convenient place.
  • Renamed both scripts (mp for Monte Python), as the old names conflicted with some installed Python packages, both being very commonly used names.
  • Removed the comparison between the input parameter file and log.param: the latest one is always chosen.
  • Changed the architecture of the file: the whole thing is no longer a class, but a collection of functions. To exchange information between these functions, a new class called `information' has been created.
  • Removed the file, merging its content (lockfile system) with
  • Transferred the log_flag from being an independent quantity exchanged between data and likelihoods to a proper attribute of the data class. Since no comparison is done anymore, there was no reason to keep it that way.
  • v1.1:
  • Added the following likelihoods: the full SPT data (called spt_2500 in the likelihood directory), a compilation of BAO scale data (called bao), and a compilation of quasar time-delay data (called timedelay).
  • To speed up the production of plots, you can set the output format with the command-line option -ext png, eps, or pdf (from the fastest to the slowest format).
  • Added a lockfile system, that should prevent a file from being written simultaneously by two programs, when launching several different runs at the same time.
  • Changed a detail of the input file syntax, concerning parameter prior edges. To leave a prior edge undefined, you can still use "-1" as before, or "None". But if your prior edge is really minus one (for instance for w), you can write "-1.".
  • Function cosmo_update_arguments() was modified for better readability and simpler use (this function allows renaming or remapping input parameters at each step before passing them to CLASS). The "try, except" syntax pattern is no longer needed.
  • v1.0:
  • Initial release. Contains the WMAP 7 likelihood, Hubble Space Telescope, supernovae luminosity distance, all the newdat-format experiments (spt, acbar, quad, boomerang, ...), WiggleZ, a Euclid-like galaxy survey, a Euclid-like cosmic shear survey, and a simple Planck-like likelihood. Two covariance matrices are included, one for WMAP 7 + SPT, one for the Planck-like likelihood. More to come!
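The Cholesky decomposition entry above, and the covariance matrices shipped with the likelihoods, both rest on the same standard trick: if L is the Cholesky factor of a covariance matrix C, then L z, with z drawn from independent unit normals, has covariance C, so correlated proposal steps can be generated from uncorrelated Gaussian draws. A toy sketch (illustrative numbers, not Monte Python's actual code):

```python
import numpy as np

# Hypothetical proposal covariance for two correlated parameters
# (illustrative numbers, not taken from a real chain).
cov = np.array([[1.0, 0.8],
                [0.8, 2.0]])

# The Cholesky factor L satisfies L @ L.T == cov, so if z ~ N(0, I)
# then L @ z has covariance cov.
L = np.linalg.cholesky(cov)

rng = np.random.default_rng(42)
z = rng.standard_normal((2, 100000))   # independent unit-normal draws
steps = L @ z                          # correlated proposal steps

sample_cov = np.cov(steps)             # should reproduce cov
```

In practice the covariance matrix is estimated from previous chains (the bestfit and covmat files mentioned above), which is why a good input covariance matrix so strongly improves the acceptance rate.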