Handbook of Big Data

Springer

Editors: Dr. Borko Furht, Florida Atlantic University and

Armando Escalante, LexisNexis

 

 

Potential contributors:

Send an email to bfurht@fau.edu

 
   

Springer is launching a new Handbook of Big Data whose main objective is to provide a variety of research and survey articles (~18-36 pages) contributed by world experts in the field. Springer is committed to creating a successful and unique Handbook in this field and therefore intends to support it with a large marketing and advertising effort. Potential contributors should express their interest by sending an email to Borko Furht at bfurht@fau.edu.

DESCRIPTION

This Handbook will include contributions from world experts in the field of data-intensive computing and its applications, drawn from academia, research laboratories, and private industry. Big Data analytics is no longer a specialized solution for cutting-edge technology companies; it is a cost-effective way to store and analyze large volumes of data across many industries. The applications of Big Data include health care and life sciences; supply chain, logistics, and manufacturing; online services and Web analytics; financial services; energy and utilities; media and telecommunications; and retail and consumer products.

Big Data can be defined via three Vs: volume, velocity, and variety. Volume refers to terabytes of records, transactions, tables, and files; velocity spans batch, near-time, real-time, and streaming processing; and variety covers structured, unstructured, and semi-structured data.

The Handbook will focus on chapters discussing Big Data challenges, including data capture and storage, search, sharing, and analytics, as well as Big Data technologies and data visualization. A special focus will be given to Big Data technologies, including architectures for massively parallel processing, data mining tools and techniques, machine learning algorithms for big data, distributed file systems and databases, cloud computing platforms, and scalable storage systems. The objective of the project is to introduce the basic concepts of data-intensive computing, the technologies and the hardware and software techniques applied in data-intensive computing, and current and future applications.

 

SCHEDULE

 
1. Contributors solicited and TOC defined: June 1, 2012 - November 15, 2012
2. Articles/chapters delivered: August 15, 2013
3. First draft of Handbook completed: September 1, 2013
4. Handbook delivered to the Publisher: September 15, 2013
5. Handbook published: January 2014

 

Links to important documents

Sample Chapter (in MS Word)

 

TO AUTHORS:

1. MS Word and LaTeX are acceptable formats.

2. Include only black-and-white figures in the text (high quality and clear).

3. Submit separate files for all figures and tables. You can also submit color figures (for the Web version of the Handbook).

4. Follow the format of the sample chapter.

5. Follow the format of the references in the sample chapter.

6. Include index terms at the end of the chapter.

 

LaTeX format

LaTeX sample 1

LaTeX sample 2

LaTeX sample 3


TOPICS OF INTEREST

(not limited to)

Big data technologies

Data capture and storage

Extracting knowledge from large datasets

Architectures for massively parallel data processing

Data mining tools and techniques

Scalable storage systems

Hadoop and HPCC (High Performance Computing Cluster)

Machine learning algorithms for big data

Cloud computing platforms for big data analytics

Distributed file systems and databases

Applications of big data:

- Scientific applications
- Bioinformatics
- Healthcare
- Life sciences
- Supply chain
- Online services
- Web analytics
- Financial services
- Large science discoveries
- Climate change
- Environment
- Energy and utilities
- Media and telecommunications
- Retail and consumer products
- Commercial applications