New JLA supernovae analysis.
Two versions are available, JLA and
JLA_simple. Thanks to Martin Kunz, Stefano
Foffa, Yves Dirian and Valeria Pettorino for their
testing and help in debugging. Since the data files are
large, you are asked to download them manually.
Added a new likelihood (measurement of the BB spectrum
in four bins at large ell).
Implemented a new observable, cosmic clocks, which
measures the Hubble rate at different redshifts. Full
information can be found in data/cosmic_clocks/README.txt
(credits: Licia Verde).
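A likelihood of this kind typically reduces to a Gaussian chi-square between the predicted and the measured Hubble rates. A minimal sketch of that idea (the function name and signature are illustrative, not the shipped likelihood code):

```python
def chi2_cosmic_clocks(hz_theory, hz_data, sigma):
    """Gaussian chi-square comparing predicted H(z) values to the
    measured ones, with one uncorrelated error bar per redshift."""
    return sum(((t - d) / s) ** 2
               for t, d, s in zip(hz_theory, hz_data, sigma))
```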
Fixed the bao_2013.txt data; it now uses the right numbers,
referring to the correct sound horizon. Thanks to Martin
Kunz, Stefano Foffa and Yves Dirian for finding this.
Added a new BAO measurement from WiggleZ. Use the new
likelihood, called WiggleZ_bao, in your input files.
Added a new BAO data point from Ross et al. 2014,
available in data/bao_2014.txt along with all
the previous ones. Note that all references to the
historical and unfortunate rs_rescale are now gone: all
the data points are stored as if computed with the true
value of the sound horizon, and not the Eisenstein & Hu
(1998) formula. Thanks to Antonio Cuesta for his help in
annihilating this issue.
You can now combine parameters when analyzing (at your
own risk, beware of the priors), using a new dictionary
in the extra plot file.
Unified scheme for handling complex parameters from
CLASS. You can now specify, for instance,
m_ncdm__2 as a running parameter, and it will
be treated correctly. Note the double underscore in the
name.
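The double-underscore convention can be read as "CLASS parameter name, then the index it refers to". A hypothetical sketch of the parsing step (this is an illustration of the convention, not the actual Monte Python implementation):

```python
def split_class_name(name):
    # Split a parameter name such as 'm_ncdm__2' into the CLASS
    # argument name and the (1-based) index it refers to; names
    # without a double underscore map to themselves with no index.
    head, sep, tail = name.partition('__')
    if sep and tail.isdigit():
        return head, int(tail)
    return name, None
```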
It is now possible to add derived parameters that were
not chosen before the run began.
Added a --silent option to avoid printing the steps in
parameter space on screen.
Monte Python is now released under the MIT License (a
permissive free-software license).
You can now use
MultiNest (credits: F. Feroz and M. Hobson) and
CosmoHammer (credits: J. Akeret and S. Seehars)
within Monte Python. Both modules are commented, and their
documentation is available as before on the automatically
generated website.
MultiNest has been wrapped in Python (credits: J.
Buchner; see the PyMultiNest package), and the interface
with Monte Python was mostly done by Jesus Torrado. It
is an excellent tool for exploring oddly shaped posterior
distributions, including multi-modal ones (a feature that
automatically extracts the different modes and creates
separate Monte Python folders is being tested). It also
computes the evidence of a model.
CosmoHammer was implemented with the help of its creators
at ETH, and its basic functionalities are now working. It
is a great tool for making use of very large clusters.
Note that the use of both CosmoHammer and PyMultiNest through
Monte Python is subject to their specific licenses, which
include citing their original paper(s). See their respective
websites for proper usage.
Finally implemented an oversampling scheme that allows
fast running with likelihoods that have a large number of
nuisance parameters. The user can oversample each block of
nuisance parameters at a rate of their choice. See the example
in base.param, and don't forget to call the code with the
option -j fast.
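Oversampling simply means that, within one cycle, a fast nuisance block is updated several times for each update of the slow cosmological block. A toy sketch of the resulting update schedule (block names and factors are invented for illustration):

```python
def update_schedule(blocks):
    # blocks: list of (block_name, oversampling_factor) pairs,
    # slowest block first. Returns the sequence of block updates
    # performed during one full cycle.
    schedule = []
    for name, factor in blocks:
        schedule.extend([name] * factor)
    return schedule
```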
Repackaging: the whole folder should now behave more
like a proper Python package. This included renaming the
`code' directory to `montepython', and moving the
`likelihoods' folder inside this new `montepython'. Also,
for a likelihood named `lkl', instead of following
the previous structure:
likelihoods/lkl/lkl.py and .data,
you should now use:
montepython/likelihoods/lkl/__init__.py and lkl.data.
If you implemented a custom likelihood, you should obviously
update it to this standard. Note that the syntax for importing
the module likelihood_class has changed inside the
likelihoods.
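Under the new layout, a custom likelihood would therefore live inside the `montepython' package. A schematic skeleton (assuming the likelihood code now sits in the folder's __init__.py, as the repackaging suggests; the class body is illustrative only):

```python
# montepython/likelihoods/lkl/__init__.py
from montepython.likelihood_class import Likelihood

class lkl(Likelihood):
    def loglkl(self, cosmo, data):
        # compute and return the log-likelihood here
        return 0.
```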
New WiggleZ analysis (credits: Signe Riemer, David
Parkinson), working and tested, apart from the use of the
nuisance parameter P0 (work in progress).
Update of the BAO data from 2013 (called `bao_boss';
reference in the data/ folder), and added data from
CFHTLenS and the Planck_SZ measurement.
First version of the test module, with basic
functionalities being checked. It needs the `nose' Python
package to run. Notably, there is a test listing all the
functions that the cosmological code must provide in
order to work with Monte Python; this can be a good
guideline for implementing a wrapper around another Boltzmann
code.
The path convention changed to comply with Python
standards. This might affect your modified likelihoods (you
should now use os.path.join to create paths).
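For instance, a path to a data file would be built as follows (a generic sketch; the directory and file names are placeholders):

```python
import os

def data_file_path(data_directory, data_file):
    # Join directory and file name portably, instead of
    # concatenating strings with '/' by hand.
    return os.path.join(data_directory, data_file)
```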
Added the Planck likelihoods (Planck_highl,
Planck_lowl, Planck_lensing and
lowlike); download the code and the data
from the ESA website under `frequently requested
products'. We provide an input parameter
file as well as a covariance matrix and a best-fit
file for the base LCDM case (note that you will have
to use the option -f 1.5 to change the
acceptance rate of the chains: since the posterior
distribution of the nuisance parameters is highly
non-Gaussian, this factor will ensure an acceptance
rate close to 0.2).
Update of WMAP to the 9yr results.
Added a Cholesky decomposition method to speed up sampling
for likelihoods with a large number of nuisance parameters
(following A. Lewis), which groups parameters into
blocks of the same speed.
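The idea is to sort parameters by how expensive a change to them is, so the proposal can cycle quickly through the cheap directions. A toy sketch of the grouping step only (names and costs are invented; this is not the actual implementation):

```python
def group_by_speed(costs):
    # costs: dict mapping parameter name -> relative cost of
    # recomputing the likelihood when that parameter moves.
    # Returns blocks of equal-cost parameters, slowest first.
    blocks = {}
    for name, cost in costs.items():
        blocks.setdefault(cost, []).append(name)
    return [sorted(blocks[c]) for c in sorted(blocks, reverse=True)]
```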
Transferred the documentation to Sphinx. Most
comments in the code have been replaced by docstrings,
automatically extracted to create a website. You can now
navigate the different functions, see the dependencies and
see the source code in one convenient place.
Renamed both io.py and parser.py to io_mp.py and
parser_mp.py (mp for Monte Python), as they were conflicting
with installed Python packages, both being very common
module names.
Removed the comparison between the input parameter file and
log.param: the latest one is always chosen.
Changed the architecture of the analyze.py file. The whole
thing is no longer a class, but instead a collection of
functions. To exchange information between these functions,
a new class called `information' has been created.
Removed the common.py file, merging its content (the
lockfile system) into io_mp.py.
Transferred the log_flag from being an independent quantity
exchanged between data and likelihoods to a proper
attribute of the data class. Since no comparison is done
anymore, there was no reason to keep it that way.
Added the following likelihoods: the full SPT data (called
spt_2500 in the likelihood directory), a compilation
of BAO scale data (called bao), and a compilation of
quasar time-delay data (called timedelay).
To speed up the production of plots, you can choose the
print-out format with a new extension option: use the
command-line option -ext png, eps or
pdf (from the fastest to the slowest format).
Added a lockfile system that should prevent a file from
being written simultaneously by two programs when launching
several different runs at the same time.
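The underlying idea can be sketched with an atomic create-if-absent operation (a generic illustration of the technique, not the actual io_mp implementation):

```python
import errno
import os

def acquire_lock(lock_path):
    # Atomically create the lock file; only one process can succeed.
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except OSError as err:
        if err.errno == errno.EEXIST:
            return False  # another process holds the lock
        raise

def release_lock(lock_path):
    # Remove the lock file so another process may acquire it.
    os.remove(lock_path)
```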
Changed a detail in the input-file syntax concerning
parameter prior edges. To leave a prior edge undefined, you
can still use "-1" as before, or "None". But if your prior
edge is really minus one (for instance for w), you can now
write it explicitly as the float "-1.".
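In a .param file this reads, schematically (the numbers here are invented, and the list layout is assumed to be [mean, min, max, sigma, scale, role]):

```python
# hypothetical .param entries, for illustration only:
data.parameters['omega_cdm'] = [0.12, None, None, 0.002, 1, 'cosmo']  # no prior edges
data.parameters['w'] = [-0.9, -1., 0., 0.02, 1, 'cosmo']  # lower edge really at -1, written "-1."
```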
The function cosmo_update_arguments() was modified for
better readability and simpler use (this function allows you
to rename or remap input parameters at each step before
passing them to CLASS). The "try, except" syntax pattern is
no longer needed.
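A minimal sketch of the kind of remapping this enables (the parameter names and the function signature are illustrative, not the actual code):

```python
import math

def update_cosmo_arguments(cosmo_arguments):
    # Remap a sampled parameter to the name/units CLASS expects,
    # e.g. sampling ln(10^10 A_s) but passing A_s to CLASS.
    if 'ln10^{10}A_s' in cosmo_arguments:
        cosmo_arguments['A_s'] = (
            math.exp(cosmo_arguments.pop('ln10^{10}A_s')) * 1e-10)
    return cosmo_arguments
```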
Initial release. Contains the WMAP 7 likelihood, Hubble
Space Telescope, supernovae luminosity distances, all the
newdat-format experiments (spt, acbar, quad, boomerang, ...),
WiggleZ, a Euclid-like galaxy survey, a Euclid-like cosmic
shear survey, and a simple Planck-like likelihood. Two
covariance matrices are included: one for WMAP 7 + SPT, and
one for the Planck-like likelihood. More to come!