By: Anghel Leonard
Released: November 30, 2018
Caption (CC): Included
Course Source: https://www.packtpub.com/application-development/data-stream-development-apache-spark-kafka-and-spring-boot-video
Handle high volumes of data at high speed. Architect and implement an end-to-end data streaming pipeline.
Video Details
ISBN 9781789539585
Course Length 7 hours 51 minutes
Table of Contents
• Introducing Data Streaming Architecture
• Deployment of Collection and Message Queuing Tiers
• Proceeding to the Data Access Tier
• Implementing the Analysis Tier
• Mitigate Data Loss between Collection, Analysis and Message Queuing Tiers
Learn
• Attain a solid foundation in the most powerful and versatile technologies involved in data streaming: Apache Spark and Apache Kafka
• Form a robust and clean architecture for a data streaming pipeline
• Implement the correct tools to bring your data streaming architecture to life
• Isolate the most problematic trade-off for each tier involved in a data streaming pipeline
• Query, analyze, and apply machine learning algorithms to collected data
• Display analyzed pipeline data via Google Maps on your web browser
• Discover and resolve difficulties in scaling and securing data streaming applications
About
Today, organizations struggle to work with huge volumes of data. In addition, that data must be processed and analyzed in real time to gain insights. This is where data streaming comes in. As big data is no longer a niche topic, having the skill set to architect and develop robust data streaming pipelines is a must for all developers, who also need to understand the entire pipeline, including the trade-offs at every tier.
This course starts by explaining the blueprint architecture for developing a completely functional data streaming pipeline and installing the technologies used. With the help of live coding sessions, you will get hands-on experience architecting every tier of the pipeline. You will also handle specific issues encountered while working with streaming data. You will ingest a live stream of Meetup RSVPs, analyze it, and display the results via Google Maps.
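To give a feel for the kind of data flowing through the pipeline, here is a minimal, self-contained Java sketch that pulls the venue coordinates out of an RSVP-style JSON message so they could be plotted on a map. The payload shape shown is an assumption modeled loosely on the Meetup RSVP stream; the course's actual pipeline parses these messages with proper streaming and JSON tooling rather than a regex.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RsvpExtractor {

    // Pull a numeric field (e.g. "lat" or "lon") out of a JSON-like RSVP payload.
    // A regex is used here only to keep the sketch dependency-free; a real
    // pipeline would use a JSON parser such as Jackson.
    static double extractNumber(String json, String field) {
        Matcher m = Pattern
                .compile("\"" + field + "\"\\s*:\\s*(-?\\d+(\\.\\d+)?)")
                .matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("field not found: " + field);
        }
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        // Hypothetical RSVP message resembling the Meetup stream's venue block.
        String rsvp = "{\"venue\":{\"venue_name\":\"TechHub\","
                + "\"lat\":51.52,\"lon\":-0.08},\"response\":\"yes\"}";
        // Print "lat,lon" — the pair a map front end would consume.
        System.out.println(extractNumber(rsvp, "lat") + ","
                + extractNumber(rsvp, "lon"));
    }
}
```

In the full pipeline, messages like this arrive from the collection tier via Kafka, are analyzed in Spark, and the resulting coordinates are what the Google Maps view renders.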
By the end of the course, you will have built an efficient data streaming pipeline and will be able to analyze its various tiers, ensuring a continuous flow of data.
All the code and supporting files for this course are available at https://github.com/PacktPublishing/-Data-Stream-Development-with-Apache-Spark-Kafka-and-Spring-Boot
Style and Approach
This course is a combination of text, many images (diagrams), and meaningful live coding sessions. Each topic follows a three-step structure: first, key facts are stated as headlines; second, diagrams provide more detail; and finally, the text and diagrams are converted into code using the appropriate technology.
Features:
• From blueprint architecture to complete code solution, this course covers every important aspect of architecting and developing a data streaming pipeline
• Select the right tools and frameworks and follow the best approaches to designing your data streaming framework
• Build an end-to-end data streaming pipeline from a real data stream (Meetup RSVPs) and expose the analyzed data in browsers via Google Maps
Author
Anghel Leonard
Anghel Leonard is currently a Java chief architect and a member of the Java EE Guardians, with 20+ years of experience. He has spent most of his career architecting distributed systems. He is also the author of several books, a speaker, and a big fan of working with data.