Big Data Training Courses

Local, instructor-led live Big Data training courses start with an introduction to the fundamental concepts of Big Data, then progress to the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing and scalability are discussed, compared and implemented in demo practice sessions.

Big Data training is available as "onsite live training" or "remote live training". Onsite live Big Data training can be carried out locally on customer premises in Singapore or in NobleProg corporate training centers in Singapore. Remote live training is carried out by way of an interactive, remote desktop.

NobleProg -- Your Local Training Provider


Big Data Course Outlines

14 hours
Overview
Goal:

Learning to work with SPSS independently

Audience:

Analysts, researchers, scientists, students and all those who want to acquire the ability to use the SPSS package and learn popular data mining techniques.
21 hours
Overview
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.
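
As a taste of the kind of automation covered, the sketch below polls a NiFi instance's REST API from Python. It is a minimal illustration only: it assumes a local, unsecured NiFi node on port 8080 and the endpoint layout of recent NiFi releases.

    import requests

    # Assumption: a local, unsecured NiFi instance on the default HTTP port.
    BASE = "http://localhost:8080/nifi-api"

    # System diagnostics is a quick health check of the node.
    resp = requests.get(BASE + "/system-diagnostics")
    resp.raise_for_status()
    snapshot = resp.json()["systemDiagnostics"]["aggregateSnapshot"]
    print("Heap used:", snapshot["usedHeap"])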

In this instructor-led, live training (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform Big Data into business insights.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
28 hours
Overview
MonetDB is an open-source database that pioneered the column-store technology approach.
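
For orientation, the following minimal sketch connects to MonetDB from Python using the pymonetdb client; the server address, credentials and database name are assumptions for a local demo setup.

    import pymonetdb  # MonetDB's Python client

    # Assumption: a local MonetDB server with a database named "demo"
    # and the default credentials.
    conn = pymonetdb.connect(username="monetdb", password="monetdb",
                             hostname="localhost", database="demo")
    cur = conn.cursor()
    cur.execute("SELECT 41 + 1")
    print(cur.fetchone())  # (42,)
    conn.close()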

In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it.

By the end of this training, participants will be able to:

- Understand MonetDB and its features
- Install and get started with MonetDB
- Explore and perform different functions and tasks in MonetDB
- Accelerate the delivery of their project by maximizing MonetDB capabilities

Audience

- Developers
- Technical experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
28 hours
Overview
MemSQL is an in-memory, distributed, SQL database management system for cloud and on-premises. It's a real-time data warehouse that immediately delivers insights from live and historical data.
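
Because MemSQL speaks the MySQL wire protocol, any MySQL client can talk to it. The sketch below is a minimal illustration using Python's pymysql; host, port and credentials are placeholder values for a local node.

    import pymysql  # MemSQL is wire-compatible with MySQL clients

    # Assumption: a local MemSQL node with default root credentials.
    conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", password="")
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS demo")
        cur.execute("CREATE TABLE IF NOT EXISTS demo.events (id INT, ts DATETIME)")
        cur.execute("INSERT INTO demo.events VALUES (1, NOW())")
        cur.execute("SELECT COUNT(*) FROM demo.events")
        print(cur.fetchone())
    conn.commit()
    conn.close()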

In this instructor-led, live training, participants will learn the essentials of MemSQL for development and administration.

By the end of this training, participants will be able to:

- Understand the key concepts and characteristics of MemSQL
- Install, design, maintain, and operate MemSQL
- Optimize schemas in MemSQL
- Improve queries in MemSQL
- Benchmark performance in MemSQL
- Build real-time data applications using MemSQL

Audience

- Developers
- Administrators
- Operation Engineers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
28 hours
Overview
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability.

In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.
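
To give a flavor of the mrjob style of MapReduce covered in the course, here is the classic word count as a minimal sketch; the input file is whatever you pass on the command line.

    from mrjob.job import MRJob

    class MRWordCount(MRJob):
        def mapper(self, _, line):
            # Emit (word, 1) for every word in the input line.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the counts emitted for each word.
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()  # e.g. python wordcount.py input.txt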

By the end of this training, participants will be able to:

- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi

Audience

- Developers
- IT Professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used in querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.
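
As a small example of the PySpark workflow practiced in the course, the sketch below counts words in a text file; the file name is a placeholder.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Assumption: "sample.txt" exists in the working directory.
    lines = spark.read.text("sample.txt")
    words = lines.selectExpr("explode(split(value, ' ')) AS word")
    words.groupBy("word").count().orderBy("count", ascending=False).show(10)

    spark.stop()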

In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.

By the end of this training, participants will be able to:

- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world circumstances.
- Use different tools and techniques for big data analysis using PySpark.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
35 hours
Overview
Advances in technology and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data poses are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

- Law Enforcement specialists with a technical background

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
To meet regulators' compliance requirements, CSPs (Communication Service Providers) can tap into Big Data analytics, which not only helps them meet compliance but, within the scope of the same project, can increase customer satisfaction and thus reduce churn. In fact, since compliance relates to the quality of service tied to a contract, any initiative toward meeting compliance will improve the CSPs' competitive edge. It is therefore important that regulators be able to advise on a set of Big Data analytic practices for CSPs that are of mutual benefit to both regulators and CSPs.

The course consists of 8 modules (4 on day 1 and 4 on day 2).
28 hours
Overview
Many real-world problems can be described in terms of graphs, for example the Web graph, the social network graph, the train network graph and the language graph. These graphs tend to be extremely large; processing them requires a specialized set of tools and processes, which can be referred to as Graph Computing (also known as Graph Analytics).
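
The course works with distributed frameworks such as Hadoop, Spark, GraphX and Pregel; purely as a conceptual warm-up, the single-machine sketch below models objects and relationships as a graph and traverses it with Python's networkx.

    import networkx as nx  # single-machine stand-in for distributed graph tools

    g = nx.Graph()
    g.add_edges_from([("Alice", "Bob"), ("Bob", "Carol"),
                      ("Carol", "Dave"), ("Alice", "Eve")])

    # Traversal: the shortest chain of relationships between two people.
    print(nx.shortest_path(g, "Alice", "Dave"))  # ['Alice', 'Bob', 'Carol', 'Dave']

    # A basic analytic: one of the most connected nodes.
    print(max(g.degree, key=lambda pair: pair[1]))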

In this instructor-led, live training, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.
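
The course itself is taught in Matlab; purely as a language-agnostic illustration of the predictive modeling workflow (fit a model on historical data, then forecast a future value), here is a sketch in Python with scikit-learn, using invented numbers.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented historical data: machine hours of use vs. maintenance cost.
    hours = np.array([[100], [200], [300], [400], [500]])
    cost = np.array([12.0, 21.5, 33.0, 41.0, 52.5])

    model = LinearRegression().fit(hours, cost)  # learn the trend from history
    print(model.predict([[600]]))                # forecast for an unseen value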

By the end of this training, participants will be able to:

- Create predictive models to analyze patterns in historical and transactional data
- Use predictive modeling to identify risks and opportunities
- Build mathematical models that capture important trends
- Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

- Developers
- Engineers
- Domain experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
7 hours
Overview
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Apache SolrCloud is a distributed data processing engine that facilitates the searching and indexing of files on a distributed network.
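
For a sense of what interacting with Solr looks like from code, the sketch below indexes and searches a document with the community pysolr client; the collection name and URL are placeholders for a lab setup.

    import pysolr  # community Python client for Solr

    # Assumption: a SolrCloud collection "mycollection" on the default port.
    solr = pysolr.Solr("http://localhost:8983/solr/mycollection", timeout=10)

    solr.add([{"id": "doc1", "title": "Hello SolrCloud"}])
    solr.commit()

    for result in solr.search("title:hello"):
        print(result["id"])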

In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS.

By the end of this training, participants will be able to:

- Understand SolrCloud's features and how they compare to those of conventional master-slave clusters
- Configure a SolrCloud centralized cluster
- Automate processes such as communicating with shards, adding documents to the shards, etc.
- Use Zookeeper in conjunction with SolrCloud to further automate processes
- Use the interface to manage error reporting
- Load balance a SolrCloud installation
- Configure SolrCloud for continuous processing and fail-over

Audience

- Solr Developers
- Project Managers
- System Administrators
- Search Analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
AI is a collection of technologies for building intelligent systems capable of understanding data and the activities surrounding the data to make "intelligent decisions". For Telecom providers, building applications and services that make use of AI could open the door for improved operations and servicing in areas such as maintenance and network optimization.

In this course we examine the various technologies that make up AI and the skill sets required to put them to use. Throughout the course, we examine AI's specific applications within the Telecom industry.

Audience

- Network engineers
- Network operations personnel
- Telecom technical managers

Format of the course

- Part lecture, part discussion, hands-on exercises
28 hours
Overview
Data Vault Modeling is a database modeling technique that provides long-term historical storage of data that originates from multiple sources. A data vault stores a single version of the facts, or "all the data, all the time". Its flexible, scalable, consistent and adaptable design encompasses the best aspects of 3rd normal form (3NF) and star schema.
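
To make the hub-and-satellite idea concrete, here is a minimal sketch of two core Data Vault table shapes, expressed as SQL DDL and run through Python's sqlite3 for portability; the table and column names are illustrative, not a canonical standard.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE hub_customer (
        customer_hk   TEXT PRIMARY KEY,   -- hash key of the business key
        customer_id   TEXT NOT NULL,      -- business key from the source system
        load_date     TEXT NOT NULL,
        record_source TEXT NOT NULL
    );
    CREATE TABLE sat_customer_details (
        customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
        load_date     TEXT NOT NULL,
        name          TEXT,
        email         TEXT,
        record_source TEXT NOT NULL,
        PRIMARY KEY (customer_hk, load_date)  -- one row per load: full history
    );
    """)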

In this instructor-led, live training, participants will learn how to build a Data Vault.

By the end of this training, participants will be able to:

- Understand the architecture and design concepts behind Data Vault 2.0, and its interaction with Big Data, NoSQL and AI.
- Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse.
- Develop a consistent and repeatable ETL (Extract, Transform, Load) process.
- Build and deploy highly scalable and repeatable warehouses.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user-interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results

Audience

- Data analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability.
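
As a first taste of Ignite's key-value side, the sketch below uses the pyignite thin client against a local node; the host and port are the defaults and the cache name is a placeholder.

    from pyignite import Client  # Apache Ignite's Python thin client

    # Assumption: a local Ignite node listening on the default thin-client port.
    client = Client()
    client.connect("127.0.0.1", 10800)

    cache = client.get_or_create_cache("demo")
    cache.put(1, "hello")
    print(cache.get(1))  # 'hello'

    client.close()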

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

- Use Ignite as a purely distributed in-memory database or with on-disk persistence.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real time.
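
For orientation, querying a running Vespa application is an HTTP call; the sketch below assumes a local instance on the default port and a document type that accepts a plain user query.

    import requests

    # Assumption: a Vespa application serving on localhost:8080.
    resp = requests.get(
        "http://localhost:8080/search/",
        params={"yql": "select * from sources * where userQuery()",
                "query": "big data"},
    )
    print(resp.json())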

This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests over large datasets in real time.

By the end of this training, participants will be able to:

- Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
- Implement Vespa into existing applications involving feature search, recommendations, and personalization
- Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
7 hours
Overview
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered

Audience

- Data scientists
- Developers
- System administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
28 hours
Overview
Apache Flink is an open-source framework for scalable stream and batch data processing.
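
Flink jobs can also be written in Python via PyFlink; as a minimal local sketch (cluster submission aside), the snippet below counts events by type with the Table API.

    from pyflink.table import EnvironmentSettings, TableEnvironment
    from pyflink.table.expressions import col

    # Local, minimal setup; real jobs are packaged and submitted to a cluster.
    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    events = t_env.from_elements(
        [(1, "click"), (2, "view"), (3, "click")], ["id", "event"])

    (events.group_by(col("event"))
           .select(col("event"), col("id").count)
           .execute()
           .print())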

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application in Apache Flink.

By the end of this training, participants will be able to:

- Set up an environment for developing data analysis applications.
- Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications.
- Manage diverse workloads.
- Perform advanced analytics using Flink ML.
- Set up a multi-node Flink cluster.
- Measure and optimize performance.
- Integrate Flink with different Big Data systems.
- Compare Flink capabilities with those of other big data processing frameworks.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
Apache Drill is a schema-free, distributed, in-memory columnar SQL query engine for Hadoop, NoSQL and other cloud and file storage systems. The power of Apache Drill lies in its ability to join data from multiple data stores using a single query. Apache Drill supports numerous NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files. Apache Drill is the open-source version of Google's Dremel system, which Google offers as an infrastructure service called Google BigQuery.

In this instructor-led, live training, participants will learn the fundamentals of Apache Drill, then leverage the power and convenience of SQL to interactively query big data across multiple data sources, without writing code. Participants will also learn how to optimize their Drill queries for distributed SQL execution.
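
Although the course itself focuses on SQL without programming, it may help to know that Drill also exposes its engine over a documented REST endpoint, so a query can be issued without any driver at all. The sketch below assumes a local drillbit in embedded mode and uses the sample file bundled with Drill.

    import requests

    # Assumption: a local drillbit with the web console on the default port.
    resp = requests.post(
        "http://localhost:8047/query.json",
        json={"queryType": "SQL",
              "query": "SELECT full_name FROM cp.`employee.json` LIMIT 3"},
    )
    print(resp.json()["rows"])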

By the end of this training, participants will be able to:

- Perform "self-service" exploration on structured and semi-structured data on Hadoop
- Query known as well as unknown data using SQL queries
- Understand how Apache Drill receives and executes queries
- Write SQL queries to analyze different types of data, including structured data in Hive, semi-structured data in HBase or MapR-DB tables, and data saved in files such as Parquet and JSON.
- Use Apache Drill to perform on-the-fly schema discovery, bypassing the need for complex ETL and schema operations
- Integrate Apache Drill with BI (Business Intelligence) tools such as Tableau, Qlikview, MicroStrategy and Excel

Audience

- Data analysts
- Data scientists
- SQL programmers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
7 hours
Overview
In this instructor-led, live training, participants will learn the core concepts behind MapR Stream Architecture as they develop a real-time streaming application.

By the end of this training, participants will be able to build producer and consumer applications for real-time stream data processing.

Audience

- Developers
- Administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

By the end of this training, participants will be able to:

- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Apache Beam is an open-source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution being carried out by one of Beam's supported distributed processing back-ends: Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam is useful for ETL (Extract, Transform, and Load) tasks such as moving data between different storage media and data sources, transforming data into a more desirable format, and loading data onto a new system.

In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.
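
As a minimal sketch of the Python SDK, the pipeline below runs a batch word count on the local direct runner; pointing the same code at Flink, Spark or Dataflow is a matter of pipeline options.

    import apache_beam as beam

    with beam.Pipeline() as pipeline:  # defaults to the local DirectRunner
        (pipeline
         | "Create" >> beam.Create(["alpha beta", "beta gamma"])
         | "Split" >> beam.FlatMap(str.split)
         | "Pair" >> beam.Map(lambda word: (word, 1))
         | "Count" >> beam.CombinePerKey(sum)
         | "Print" >> beam.Map(print))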

By the end of this training, participants will be able to:

- Install and configure Apache Beam.
- Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
- Execute pipelines across multiple environments.

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- This course will be available in Scala in the future. Please contact us to arrange.
35 hours
Overview
KNIME is a free and open-source data analytics, reporting and integration platform. KNIME integrates various components for machine learning and data mining through its modular data pipelining concept. A graphical user interface and support for JDBC allow the assembly of nodes that blend different data sources, including preprocessing (ETL: Extraction, Transformation, Loading), for modeling, data analysis and visualization with minimal or no programming. As an advanced analytics tool, KNIME can to some extent be considered an alternative to SAS.

KNIME has been used in pharmaceutical research since 2006, and is also used in other areas such as CRM customer data analysis, business intelligence and financial data analysis.
21 hours
Overview
Pivotal Greenplum is a Massively Parallel Processing (MPP) Data Warehouse platform based on PostgreSQL.
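
Because Greenplum is built on PostgreSQL, standard PostgreSQL clients work against it. The sketch below uses Python's psycopg2 and shows the Greenplum-specific DISTRIBUTED BY clause; the connection details are placeholders.

    import psycopg2  # Greenplum speaks the PostgreSQL wire protocol

    # Placeholder connection details for a Greenplum master node.
    conn = psycopg2.connect(host="localhost", port=5432,
                            dbname="demo", user="gpadmin")
    cur = conn.cursor()

    # DISTRIBUTED BY spreads the table's rows across the segment nodes.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sales (
            id     INT,
            region TEXT,
            amount NUMERIC
        ) DISTRIBUTED BY (id)
    """)
    conn.commit()
    cur.close()
    conn.close()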

This instructor-led, live training (onsite or remote) is aimed at developers who wish to set up a multi-node Greenplum database.

By the end of this training, participants will be able to:

- Install and configure Pivotal Greenplum.
- Model data in accordance with current needs and future expansion plans.
- Carry out different techniques for distributing data across multiple nodes.
- Improve database performance through tuning.
- Monitor and troubleshoot a Greenplum database.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
This instructor-led, live training (onsite or remote) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications.
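
To preview the kind of code involved, here is a minimal produce-and-consume round trip with Confluent's Python client; the broker address, topic and group id are placeholders for a lab setup.

    from confluent_kafka import Consumer, Producer  # Confluent's Python client

    conf = {"bootstrap.servers": "localhost:9092"}  # placeholder broker

    producer = Producer(conf)
    producer.produce("events", key="user1", value="page_view")
    producer.flush()  # block until the message is delivered

    consumer = Consumer({**conf, "group.id": "demo",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe(["events"])
    msg = consumer.poll(timeout=5.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()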

By the end of this training, participants will be able to:

- Install and configure Confluent Platform.
- Use Confluent's management tools and services to run Kafka more easily.
- Store and process incoming stream data.
- Optimize and manage Kafka clusters.
- Secure data streams.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- This course is based on the open source version of Confluent: Confluent Open Source.
- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
This instructor-led, live training (onsite or remote) is aimed at data analysts and data scientists who wish to implement more advanced data analytics techniques for data mining using Python.
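
As a taste of one technique covered, data anomaly detection, the sketch below flags outliers in invented sensor readings with scikit-learn's IsolationForest.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Invented readings around 20, with two injected outliers.
    rng = np.random.default_rng(0)
    readings = np.concatenate([rng.normal(20, 1, size=(100, 1)),
                               [[35.0], [2.0]]])

    model = IsolationForest(random_state=0).fit(readings)
    labels = model.predict(readings)       # -1 marks anomalies
    print(readings[labels == -1].ravel())  # should include the injected outliers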

By the end of this training, participants will be able to:

- Understand important areas of data mining, including association rule mining, text sentiment analysis, automatic text summarization, and data anomaly detection.
- Compare and implement various strategies for solving real-world data mining problems.
- Understand and interpret the results.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.

NobleProg is growing fast!

We are looking to expand our presence in Singapore!

As a Business Development Manager you will:

  • expand business in Singapore
  • recruit local talent (sales, agents, trainers, consultants)

We offer:

  • Artificial Intelligence and Big Data systems to support your local operation
  • high-tech automation
  • continuously upgraded course catalogue and content
  • good fun in an international team

If you are interested in running a high-tech, high-quality training and consulting business, apply now!