EUR 42,55
Quantity: 1 available
Hardcover. Condition: New. Dust jacket condition: New. 1st Edition. Contents: Preface. I. Perl for bioinformatics: 1. Perl control flow motif. 2. Basic operators, lists and arrays, file handling, pattern matching. 3. Subroutines, hashes and applications in bioinformatics. II. Java, BioJava and BioJava packages: 4. Java for bioinformatics. 5. Java programming. 6. Object-oriented programming. 7. Exception handling. 8. Event handling. 9. Interfaces and packages. 10. Multithreading in Java. 11. Applets and graphics. 12. Java database connectivity. 13. Network programming with Java. 14. BioJava. Glossary of definitions used in Java. Bibliography. Index. One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any hardware/operating-system platform. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to platform-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be interpreted by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a web browser for Java applets. BioJava is an open-source project dedicated to providing a Java framework for processing biological data. It provides analytical and statistical routines and parsers for common file formats, and allows the manipulation of sequences and 3D structures. The goal of the BioJava project is to facilitate rapid application development for bioinformatics. (jacket).
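The kind of sequence manipulation the blurb attributes to BioJava can be sketched in plain Java without the library; the class and method names below (`ReverseComplement`, `reverseComplement`) are illustrative and not part of the BioJava API:

```java
// Sketch: the kind of DNA sequence manipulation BioJava automates,
// written here in plain Java (no external library) for illustration.
public class ReverseComplement {
    // Map each DNA base to its Watson-Crick complement.
    static char complement(char base) {
        switch (base) {
            case 'A': return 'T';
            case 'T': return 'A';
            case 'G': return 'C';
            case 'C': return 'G';
            default:  throw new IllegalArgumentException("Unknown base: " + base);
        }
    }

    // Reverse-complement a sequence: complement each base, then reverse.
    static String reverseComplement(String seq) {
        StringBuilder sb = new StringBuilder(seq.length());
        for (int i = seq.length() - 1; i >= 0; i--) {
            sb.append(complement(seq.charAt(i)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(reverseComplement("ATGC")); // prints GCAT
    }
}
```

BioJava wraps operations like this behind dedicated sequence classes, together with file-format parsers and structure handling.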
Publisher: Jnanada Prakashan, 2014
ISBN 10: 8171395953 ISBN 13: 9788171395958
Seller: Vedams eBooks (P) Ltd, New Delhi, India
First edition
EUR 27,50
Quantity: 1 available
Hardcover. Condition: New. Dust jacket condition: New. 1st Edition. Contents: Preface. 1. Data abstraction. 2. Hiding the implementation. 3. Initialization and cleanup. 4. Function overloading and default arguments. 5. Constants. 6. Inline functions. 7. Name control. Bibliography. Index. Data abstraction is the separation between the specification of a data object and its implementation. Abstraction is the process by which data and programs are defined with a representation similar in form to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. It captures only those details about an object that are relevant to the current perspective. This book discusses advanced object-oriented programming in easy-to-understand language. The author provides concise but in-depth chapters on data abstraction, hiding the implementation, initialization and cleanup, function overloading and default arguments, constants, inline functions, and name control. (jacket).
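The separation the blurb describes can be sketched in Java: clients of the class below depend only on its public methods, while the array-based representation stays private and could be replaced (say, by a linked list) without breaking callers. `IntStack` is an illustrative example, not code from the book:

```java
// Sketch of data abstraction and hiding the implementation: the public
// interface says *what* a stack does; the array representation is a
// private detail that can be swapped without touching client code.
public class IntStack {
    private int[] items = new int[8]; // hidden representation
    private int size = 0;

    public void push(int value) {
        if (size == items.length) { // grow transparently to the caller
            items = java.util.Arrays.copyOf(items, size * 2);
        }
        items[size++] = value;
    }

    public int pop() {
        if (size == 0) throw new IllegalStateException("empty stack");
        return items[--size];
    }

    public boolean isEmpty() { return size == 0; }
}
```

Because `items` and `size` are private, no caller can come to depend on the array layout, which is precisely the point of the chapter titles "Data abstraction" and "Hiding the implementation".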
Publisher: Jnanada Prakashan, 2011
Seller: Vedams eBooks (P) Ltd, New Delhi, India
EUR 32,74
Quantity: 1 available
Contents: Foreword. 1. Introduction. 2. Discovering hidden value in the data warehouse. 3. Data mining techniques. 4. What is data warehousing? 5. Data warehouse appliances. 6. Data warehouse concepts. 7. Applications of data mining and data warehousing. 8. Benefits of data warehousing. 9. Further applications of data mining. 10. Quantitative structure-activity relationship. 11. Data mining methods. 12. Glossary of data mining terms. Bibliography. Index. "Data mining and data warehousing techniques are becoming indispensable parts of business intelligence programmes. Data mining is the process of extracting patterns from data, and is becoming an increasingly important tool to transform this data into information. It is commonly used in a wide range of profiling practices such as marketing, surveillance, fraud detection and scientific discovery. Data mining can be used to uncover patterns in data, but is often carried out only on samples of data. The mining process will be ineffective if the samples are not a good representation of the larger body of data. Data mining cannot discover patterns that may be present in the larger body of data if those patterns are not present in the sample being mined. Inability to find patterns may become a cause for some disputes between customers and service providers. Therefore data mining is not foolproof, but may be useful if sufficiently representative data samples are collected. The discovery of a particular pattern in a particular set of data does not necessarily mean that the pattern is found elsewhere in the larger data from which that sample was drawn. An important part of the process is the verification and validation of patterns on other samples of data. The related terms data dredging, data fishing and data snooping refer to the use of data mining techniques on sample sizes that are, or may be, too small for statistical inferences to be made about the validity of any patterns discovered. Data dredging may, however, be used to develop new hypotheses, which must then be validated with sufficiently large sample sets. This book will be of immense help to all those seeking expert knowledge of data mining and data warehousing." (jacket). 270 pp.
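The sampling caveat above can be made concrete with a toy Java program on synthetic data (all names and numbers are illustrative): a co-occurrence pattern with 30% support in the full data set shows zero support when measured on a sample drawn from only one segment:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the sampling caveat: a co-occurrence pattern
// present in the full data set disappears when support is measured
// on a non-representative sample. The data are synthetic.
public class SamplingBias {
    // Fraction of records containing both items (the pattern's support).
    static double support(List<String> data, String a, String b) {
        long hits = data.stream()
                        .filter(r -> r.contains(a) && r.contains(b))
                        .count();
        return (double) hits / data.size();
    }

    public static void main(String[] args) {
        List<String> full = new ArrayList<>();
        // Segment 1: 300 baskets where bread and butter co-occur.
        for (int i = 0; i < 300; i++) full.add("bread,butter");
        // Segment 2: 700 baskets with bread only.
        for (int i = 0; i < 700; i++) full.add("bread");

        // A "sample" drawn entirely from segment 2 (e.g. one store).
        List<String> biasedSample = full.subList(300, 1000);

        System.out.println(support(full, "bread", "butter"));         // 0.3
        System.out.println(support(biasedSample, "bread", "butter")); // 0.0
    }
}
```

Mining only the biased sample would conclude the pattern does not exist, which is exactly the failure mode the jacket text warns about.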
Publisher: Jnanada Prakashan, 2014
ISBN 10: 8171395961 ISBN 13: 9788171395965
Seller: Vedams eBooks (P) Ltd, New Delhi, India
First edition
EUR 34,70
Quantity: 1 available
Hardcover. Condition: New. Dust jacket condition: New. 1st Edition. Contents: Preface. 1. Database management system. 2. Data models. 3. Relational model. 4. Relational databases: SQL. 5. Database system architectures. Bibliography. Index. A database management system (DBMS) is a suite of computer software providing the interface between users and a database or databases. Because they are so closely related, the term "database", when used casually, often refers to both a DBMS and the data it manipulates. Databases are not used only to hold administrative information; they are often embedded within applications to hold more specialized data: for example, engineering data or economic models. With the progress in technology in the areas of processors, computer memory, computer storage and computer networks, the sizes, capabilities and performance of databases and their respective DBMSs have grown by orders of magnitude. A database is an organized collection of data, where the data are typically organized to model relevant aspects of reality in a way that supports processes requiring this information. A general-purpose DBMS is a software system designed to allow the definition, creation, querying, update and administration of databases.
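The DBMS functions named here (definition, creation, querying, update) can be sketched as a toy in-memory table in Java; `MiniTable` is an illustrative stand-in, not how a real DBMS is built (a real system adds SQL, persistence, transactions and concurrency control):

```java
import java.util.*;
import java.util.stream.Collectors;

// Toy in-memory "table" illustrating the basic DBMS operations:
// create (insert), query (selectWhere) and update. Purely a sketch.
public class MiniTable {
    private final Map<Integer, Map<String, String>> rows = new HashMap<>();

    // Creation: insert a row under a primary key.
    public void insert(int id, Map<String, String> row) {
        rows.put(id, new HashMap<>(row));
    }

    // Querying: select ids of rows where a column equals a value.
    public List<Integer> selectWhere(String column, String value) {
        return rows.entrySet().stream()
                   .filter(e -> value.equals(e.getValue().get(column)))
                   .map(Map.Entry::getKey)
                   .sorted()
                   .collect(Collectors.toList());
    }

    // Update: overwrite one column of an existing row.
    public void update(int id, String column, String value) {
        rows.get(id).put(column, value);
    }
}
```

In SQL terms, `insert` plays the role of `INSERT`, `selectWhere` of `SELECT ... WHERE`, and `update` of `UPDATE ... SET`; the "definition" step is fixed here by the class itself, whereas a DBMS lets users define schemas at run time.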