Chemical Modelling covers a wide range of disciplines and this Specialist Periodical Report is the first stop for any materials scientist, biochemist, chemist or molecular physicist wishing to acquaint themselves with major developments and current opinion in the applications and theory of chemical modelling. The topics covered are wide ranging with authors writing on clusters to modelling nanotubes and dynamics. Containing both comprehensive and critical reviews, this volume is an essential resource and convenient reference for any research group active in the field or chemical sciences library.
Preface Michael Springborg and Jan-Ole Joswig
Toward accurate coarse-graining approaches for protein and membrane simulations Michele Cascella and Stefano Vanni
Chemical bonding in solids: recovering chemical concepts in the realm of infinite periodic structures Alexey I. Baranov, Robert Ponec and Miroslav Kohout
Vibrational quantum dynamics at metallic surfaces Jean Christophe Tremblay
Theoretical studies of supercapacitors Mathieu Salanne
Nanotubes with well-defined structure: imogolites Luciana Guimarães, Maicon P. Lourenço and Hélio A. Duarte
Application of DFT modeling in Fischer–Tropsch synthesis over Co-based catalysts Xin-Chao Xu, Pengfei Tian, Yong Cao, Jing Xu and Yi-Fan Han
Structure prediction and its applications in computational materials design Qiang Zhu, Artem R. Oganov, Qingfeng Zeng and Xiangfeng Zhou
Ab initio global optimization of clusters Jijun Zhao, Xiaoming Huang, Ruili Shi, Lingli Tang, Yan Su and Linwei Sai
Nitrogen- and phosphine-binding ligands in interaction with gold atoms, clusters, nanoparticles and surfaces Doreen Mollenhauer
Toward accurate coarse-graining approaches for protein and membrane simulations
Michele Cascella and Stefano Vanni
DOI: 10.1039/9781782622703-00001
1 Introduction
From the smallest biological molecules to complex living organisms, living matter is organised in a strict hierarchy. As depicted in Fig. 1, starting from the Ångström scale we first encounter atoms and molecules, then oligomers and polymers, like short RNAs or single-domain globular proteins; at larger scales, macromolecular assemblies give rise to cellular organelles and cells, which in turn, in higher organisms, form tissues, organs, and finally the whole body. Likewise, different biological phenomena occur at different size and time scales, and can therefore be understood by employing methods of investigation at the most pertinent level of resolution.
Since the beginning of the informatics revolution in the 1950s, major effort has been put into developing reliable mathematical and physical computational models of complex systems at different resolutions. In bottom-up approaches, the aim is to establish computational models based on fundamental physical principles that are able to predict the behaviour of the system of interest (Fig. 2). At the most fundamental level, quantum mechanical approaches can be used to treat relatively small molecular systems (up to ~10^3 atoms, and for times of the order of 10^-12 to 10^-10 s).
Quantum mechanical calculations can nowadays reach even millions of atoms for static calculations, again depending on the degree of approximation with respect to the exact theoretical formulation (and on the complexity of the system of interest).
For larger systems and longer times (~10^6 atoms, routinely for times of 10^-8 to 10^-7 s, and up to 10^-6 to 10^-3 s), molecular models employing an explicit representation of atoms (all-atom models, AA hereafter) interacting through parameterised mechanical effective potentials are the most commonly used approach. Such potentials can be trained both on accurate quantum-mechanical calculations and on large experimental data sets, and they can reliably reproduce molecular processes involving non-covalent intermolecular interactions or conformational changes.
The combination of quantum mechanical and classical methods in a hierarchical structure is often used as a way of treating those biochemical phenomena that require a quantum mechanical treatment while keeping a direct coupling with the environment. Historically, the first multi-scale model dates back to 1976, when Arieh Warshel and Michael Levitt proposed embedding a quantum mechanical treatment of the chemically relevant portion of a biological system (like the active site of an enzyme) into a parameterised description of the environment. In the past decades, a large family of hybrid quantum mechanics/molecular mechanics (QM/MM) methods has stemmed from this seminal work and established what is today recognised as standard practice for treating quantum mechanical phenomena in biological systems. For this fundamental theoretical work Profs. Warshel and Levitt, together with Prof. Martin Karplus, were awarded the Nobel Prize in Chemistry in 2013.
Even though atomistic simulations can now deal with systems as large as tens of millions of atoms, and with simulation times beyond the millisecond, several biological processes involving large macromolecular complexes require description at time and size scales that go beyond even these dimensions. In order to overcome these bottlenecks, several groups have been working in the past decades on the development of reliable Coarse-Grained (CG) models. As with AA potentials, effective CG potentials can be derived from higher-resolution AA simulations, or from a direct match with specific experimental properties of interest. In such approaches the detailed atomic resolution is lost; nonetheless, some information on the topological structure of the molecular assembly is retained, as described in Fig. 3. These models can efficiently represent molecular systems composed of several millions of atoms, for effective times that can reach the second scale; therefore, they are in principle well adapted to investigate the structure and dynamics of large macromolecular assemblies and multi-phase systems. The large number of reviews published on the subject in recent years highlights the strong interest of the scientific community in this topic (for example: ref. 1, 16, 50, 53, 57–67).
Treating very large systems over very long times opens up a completely different view on the understanding of biological systems and phenomena. Indeed, the complexity of such systems often cannot be reduced to the fundamental properties of individual or relatively few molecules, but requires the treatment of a large number of particles. Moreover, biochemical/biophysical processes are often not driven by thermodynamic equilibrium alone: several kinetic effects, for example diffusional barriers, may play a fundamental role.
As a pivotal example of the power of coarse-graining approaches in modern computational investigations, a very recent study by Marrink and co-workers was able to investigate the lipid composition, dynamics and diffusion in a realistic model of the plasma membrane. Biological membranes are extremely complex environments formed by several lipophilic/amphiphilic compounds, which can behave in very different manners according to their specific composition. Building reliable models of such environments necessarily implies the use of large model systems to respect the relative concentration of the different species.
Moreover, properties such as lipid lateral diffusion, lipid flip/flop, membrane elasticity and surface tension can be heavily biased in MD simulations by overly small simulation boxes. The study in ref. 68 reported a model comprising over 60 different lipid types, in stoichiometric ratios compatible with those determined experimentally using lipidomics approaches, for a total of ~20 000 lipid molecules, simulated for 40 microseconds using coarse-grained potentials.
The major complication present in any multi-scale modelling approach of biological systems is associated to the fact that phenomena characteristic of a certain size/time scale may influence, directly or indirectly, properties that are intrinsic of a different scale. For example, the network of molecular interactions and molecular recognition patterns at interfaces directly influences the dynamics of large macromolecular complexes; on the other hand, the in vivo efficiency of an enzyme in a cell does not solely depend on its catalytic activity, but also on the accessibility to the substrates within the highly crowded cytoplasmic environment.
To date, the computational community struggles in the effort of building up, on one hand, more and more reliable and general models at the different resolution scales; on the other hand, it is becoming increasingly urgent to develop methods that combine and integrate information from multiple resolutions, in order to improve the predictive power of the implemented models.
2 Coarse-grained modelling: basic ideas
The fundamental concepts of coarse-graining are deeply rooted in developments of statistical mechanics and in the study of phase transitions near the critical point. These studies showed that in such regions of the phase diagram the behaviour of single particles becomes less relevant, while collective phenomena dominate the global behaviour of the system. Although not necessarily near critical points, soft-matter systems, and proteins in particular, are usually characterised by a rather complex phase diagram, and at room conditions they often lie in marginally stable regions near several phase-transition crossings. This suggests that several physical properties of a polymer may be understood employing a description at a resolution coarser than the atomic one (for example: ref. 75–79).
Restricting our discussion to biological systems, the first CG model for proteins was again proposed by Levitt and Warshel. The original model used single centroids to describe individual amino acids, and an elastic network to reproduce the folded-state structure and predict the folding kinetics. Over the last 40 years, several coarse-grained models have been developed to study polymers, multiphase systems, proteins and nucleic acids, as previously referenced.
A coarse-graining operation implies the mapping of a finely grained system formed by N particles and described by a Hamiltonian H_N(α), defined by a set of parameters α, onto a second system composed of M particles and governed by a new Hamiltonian H_M(β), which depends on another set of parameters β.
The mapping operation requires a transformation connecting the N and M particles of the two systems. Moreover, the mapping Hamiltonian HM(β) must be such that the partition function ZM:
$$Z_M = \int e^{-H_M(\mathbf{r}^M,\,\mathbf{p}^M;\,\beta)/k_B T}\,\mathrm{d}\mathbf{r}^M\,\mathrm{d}\mathbf{p}^M \qquad (1)$$
is distributed according to the statistical distribution of the starting system.
In other words:
$$\frac{e^{-H_M(\mathbf{r}^M,\,\mathbf{p}^M;\,\beta)/k_B T}}{Z_M} = \frac{1}{Z_N}\int \delta\!\left[\Xi(\mathbf{r}^N,\mathbf{p}^N \rightarrow \mathbf{r}^M,\mathbf{p}^M)\right] e^{-H_N(\mathbf{r}^N,\,\mathbf{p}^N;\,\alpha)/k_B T}\,\mathrm{d}\mathbf{r}^N\,\mathrm{d}\mathbf{p}^N \qquad (2)$$

where Ξ(r^N, p^N → r^M, p^M) is the transformation mapping any conformation of the finely grained system into the coarse-grained one.
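To make the mapping Ξ concrete, a common (though, as discussed below, not unique) choice places each CG bead at the centre of mass of a group of atoms. The sketch below is purely illustrative and not taken from the chapter; the grouping, coordinates and masses are invented:

```python
import numpy as np

def com_mapping(positions, masses, groups):
    """Map N fine-grained particles onto M coarse-grained beads.

    Each bead is placed at the centre of mass of one group of atoms,
    one common (but not unique) realisation of the mapping Xi.
    """
    beads = []
    for group in groups:
        m = masses[group]              # masses of the atoms in this group
        r = positions[group]           # their Cartesian coordinates
        beads.append(m @ r / m.sum())  # centre of mass of the group
    return np.array(beads)

# Toy example: 4 atoms mapped onto 2 beads (two diatomic "residues")
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [4.0, 0.0, 0.0],
                [6.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0, 2.0, 2.0])
beads = com_mapping(pos, mass, groups=[[0, 1], [2, 3]])
# beads[0] -> [0.5, 0, 0]; beads[1] -> [5.0, 0, 0]
```

The same four atoms could equally be mapped onto one bead, or onto two beads defined by different structural parameters, which is precisely the ambiguity discussed in point (i) below.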
Such transformation is evidently non-trivial for the following reasons:
(i) The mapping transformation is ill defined. In fact, there is no unique mathematical way of defining the mapping from one fine description into a coarse one. For example, a variable number of bodies may be used, yielding different levels of coarsening, or the same number of bodies may be used to address different structural parameters as degrees of freedom of the coarse grained Hamiltonian.
(ii) The functional form of the coarse grained Hamiltonian is ill defined. In fact, there is no universal transformation that defines an analytical potential function for any coarse grained representation. The choice of the functional form for the potential energy is typically made according to the properties of interest that the model should address.
(iii) There is no consensus on how to parameterise the coarse-grained Hamiltonian. This is a consequence of the absence of a well-defined functional form for the potential energy. The interaction among coarse-grained bodies can be built to match the fine-grained system as rigorously as possible, to fit chosen experimental data sets, or to model a specific property. In any case, this leads to potentials that have either limited transferability or reduced reliability.
Ultimately, the challenge in coarse-graining procedures lies in deciding which finely grained degrees of freedom will be disregarded in the coarse representation. Evidently, this choice depends strictly on the properties that the coarse-grained model should investigate: not all phenomena depend to the same extent on the same degrees of freedom or structural/dynamical parameters. It is therefore debatable whether the hunt for a universal coarse-grained model is well placed, and whether a set of different models addressing different properties should instead be the right approach to coarse graining.
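One simple illustration of point (iii), deriving a CG potential from higher-resolution data, is direct Boltzmann inversion of a radial distribution function g(r) sampled in an all-atom simulation. The sketch below assumes NumPy and an illustrative unit choice (kcal/mol, kelvin); the g(r) values are invented, and this is a first-guess procedure, not the parameterisation of any specific published force field:

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K) -- assumed unit choice

def boltzmann_inversion(g_r, temperature=300.0, cap=1.0e6):
    """First-guess CG pair potential V(r) = -kB * T * ln g(r).

    g_r is a radial distribution function sampled from a finely grained
    simulation; bins where g(r) = 0 are capped at a large repulsive
    energy instead of +infinity.
    """
    with np.errstate(divide="ignore"):
        v = -KB * temperature * np.log(g_r)
    return np.where(np.isfinite(v), v, cap)

# Hypothetical g(r): excluded-volume hole, first coordination peak, bulk
g = np.array([0.0, 0.8, 1.6, 1.1, 1.0])
v = boltzmann_inversion(g)
# g > 1 (the peak) maps to an attractive well (v < 0); g = 1 gives v = 0
```

In practice such an inverted potential reproduces the target g(r) only approximately and is usually refined iteratively, which illustrates the trade-off between transferability and reliability mentioned above.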
3 Protein representations
3.1 Atomistic vs. coarse-grained modelling
The study of the proteome constitutes one of the most fascinating challenges of molecular and cellular biology. Investigations may address different topics, comprising, among others, the relationship between the amino acid sequence, the structure and the function of individual proteins, protein folding, protein/protein interaction networks, dynamical effects on protein function (like in motor proteins), or interference of protein function by ligand binding.
Different phenomena associated with proteins span vastly different time and size scales. Single protein chains can have very different lengths (from a few tens of amino acids, as in small rubredoxins, to hundreds of thousands, as in titin), and processes can be as fast as femtoseconds (like the early photo-activation of the visual signal in rhodopsin) or as slow as several hours, for example in weakly catalysed processes. It is therefore unrealistic to imagine establishing a universal computational model able to describe such different phenomena at such diverse scales.
Over the decades, different computational protocols for modelling proteins at different resolutions have been established. At the most accurate level we find today quantum mechanical methods (mostly based on Density Functional Theory), used in combination with embedding methods. The applicability of such methods is typically restricted to biochemical phenomena that strictly require a quantum-mechanical treatment; the most prominent examples include the study of enzymatic activity, photo-activation, and biological electron transfer and redox systems. Even though the constant increase in computational power has in recent years allowed the first quantum-mechanical studies of whole small proteins, even with explicit quantum mechanical treatment of the solvent, it is unlikely that such approaches will become feasible on a routine basis for studying biological phenomena occurring at larger time and size scales.
To date, the standard method for studying structural and dynamical properties of proteins that provides the best compromise between accuracy and computational feasibility is classical molecular dynamics using parameterised potentials. Within this approach, molecular systems are described starting from their atomic constituents. Atoms are represented as point masses; molecular structures are held together by mechanical stretching, bending and torsional effective potentials mimicking the binding effect of the electronic cloud on the nuclei. The effect of bond polarization and of the van der Waals forces driving the interaction between non-bonded atoms is typically taken into account by associating an individual point electrostatic charge with each atomic centre, and by defining Lennard-Jones potentials between pairs of atoms. Overall, the general AA potential takes the following analytical form:
$$V_{AA} = \sum_{\text{bonds}} k_b (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} k_\phi \left[1 + \cos(n\phi - \delta)\right] + \sum_{i<j} \left( \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} + \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} \right) \qquad (3)$$
where the first three terms represent the stretching, bending and torsional energies, the set of parameters {q_i} defines the Coulomb charges associated with each atom in the system, and the constants (A_ij, B_ij) are the Lennard-Jones parameters associated with each pair of atoms i, j.
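The functional form of eqn (3) can be sketched term by term. The snippet below is illustrative only: the unit system (kcal/mol, Ångström, elementary charges, with the conventional electrostatic constant 332.0636 kcal Å mol⁻¹ e⁻²) and all numerical values are assumptions, not parameters from any force field discussed in the chapter:

```python
import numpy as np

def harmonic_bond(r, k, r0):
    """Stretching term k * (r - r0)**2 for one bond.

    Note: some force fields fold a factor 1/2 into k.
    """
    return k * (r - r0) ** 2

def torsion(phi, k_phi, n, delta):
    """Torsional term k_phi * [1 + cos(n*phi - delta)] for one dihedral."""
    return k_phi * (1.0 + np.cos(n * phi - delta))

def nonbonded_pair(rij, qi, qj, Aij, Bij, ke=332.0636):
    """Coulomb plus 12-6 Lennard-Jones energy for one non-bonded pair."""
    return ke * qi * qj / rij + Aij / rij**12 - Bij / rij**6

# A stretched bond stores energy; at r = r0 the term vanishes
e_stretched = harmonic_bond(r=1.5, k=300.0, r0=1.0)  # 300 * 0.5**2 = 75
e_relaxed = harmonic_bond(r=1.0, k=300.0, r0=1.0)    # exactly 0
```

A full V_AA evaluation simply sums such terms over all bonds, angles, dihedrals and non-bonded pairs of the system; the fitting of the constants k, k_θ, k_φ, q_i, A_ij and B_ij is the parameterisation problem discussed in the following paragraph.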
Parameterisation of the many constants defining the function V_AA can be achieved in multiple ways, but in general it combines fitting against series of experimental and high-level quantum mechanical data. Over the last decades, several sets of standardised parameters, generally named "force fields", have been developed. These force fields are nowadays quite reliable in predicting the molecular properties of most protein systems. In particular, several folding simulations have verified that the global free-energy landscape of protein conformations can be reliably reproduced using such potentials. Force fields are continuously re-parameterised to improve their reliability and transferability; we therefore recommend consulting the specialised literature and reviews for a clear view of the performance of the different force fields for the calculation of different properties.
Excerpted from Chemical Modelling Volume 12 by Alexey I. Baranov, Yong Cao, Michele Cascella, Hélio A. Duarte, Luciana Guimarães, Yi-Fan Han, Xiaoming Huang, Miroslav Kohout, Maicon P. Lourenço, Doreen Mollenhauer, Artem R. Oganov, Robert Ponec, Linwei Sai, Mathieu Salanne, Ruili Shi, Yan Su, Lingli Tang, Pengfei Tian, Jean Christophe Tremblay, Stefano Vanni, Jing Xu, Xin-Chao Xu, Qingfeng Zeng, Jijun Zhao, Xiangfeng Zhou, Qiang Zhu, Jan-Ole Joswig, Michael Springborg. Copyright © 2016 The Royal Society of Chemistry. Excerpted by permission of The Royal Society of Chemistry.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.