LEARN APACHE SPARK: Build Scalable Pipelines with PySpark and Optimization
This book is designed for students, developers, data engineers, data scientists, and technology professionals who want to master Apache Spark in practice, across corporate environments, public clouds, and modern integrations.
You will learn to build scalable pipelines for large-scale data processing, orchestrating distributed workloads with AWS EMR, Databricks, Azure Synapse, and Google Cloud Dataproc. The content covers integration with Hadoop, Hive, Kafka, SQL, Delta Lake, MongoDB, and Python, as well as advanced techniques in tuning, job optimization, real-time analysis, machine learning with MLlib, and workflow automation.
Includes:
• Implementation of ETL and ELT pipelines with Spark SQL and DataFrames (see the ETL sketch below)
• Streaming data processing and integration with Kafka and AWS Kinesis (see the streaming sketch below)
• Optimization of distributed jobs, performance tuning, and use of the Spark UI (see the tuning sketch below)
• Integration of Spark with S3, data lakes, NoSQL, and relational databases
• Deployment on managed clusters in AWS, Azure, and Google Cloud
• Applied Machine Learning with MLlib, Delta Lake, and Databricks (see the MLlib sketch below)
• Automation of routines, monitoring, and scalability for Big Data
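To give a flavor of the first topic, here is a minimal PySpark ETL sketch combining the DataFrame API with Spark SQL. The bucket paths and the orders/amount/order_date columns are illustrative assumptions, not examples from the book.

    # A minimal ETL sketch, assuming hypothetical S3 paths and an "orders"
    # dataset with "amount" and "order_date" columns.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw CSV data from object storage.
    orders = spark.read.option("header", True).csv("s3a://my-bucket/raw/orders.csv")

    # Transform with the DataFrame API: cast, drop bad rows, aggregate.
    daily_revenue = (
        orders
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount") > 0)
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # The same transform expressed in Spark SQL via a temporary view.
    orders.createOrReplaceTempView("orders")
    daily_revenue_sql = spark.sql("""
        SELECT order_date, SUM(CAST(amount AS DOUBLE)) AS revenue
        FROM orders
        WHERE CAST(amount AS DOUBLE) > 0
        GROUP BY order_date
    """)

    # Load: write the result as Parquet, partitioned by date.
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3a://my-bucket/curated/daily_revenue")

Reading s3a:// paths additionally requires an S3 connector and credentials; on managed platforms such as EMR or Databricks this is typically preconfigured.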
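For the streaming topic, a hedged Structured Streaming sketch that reads from Kafka. The broker address, topic name, and checkpoint path are placeholders, and the Kafka source also needs the spark-sql-kafka connector package on the classpath.

    # A Structured Streaming sketch, assuming a hypothetical broker, topic,
    # and checkpoint path.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    # Subscribe to a Kafka topic as an unbounded streaming DataFrame.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka delivers key/value as binary; decode the payload to a string.
    decoded = events.select(F.col("value").cast("string").alias("payload"))

    # Sink to the console for inspection; a real pipeline would target a
    # durable sink such as Parquet or Delta Lake.
    query = (
        decoded.writeStream
        .format("console")
        .option("checkpointLocation", "/tmp/checkpoints/events")
        .start()
    )
    query.awaitTermination()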
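For tuning, a sketch of common configuration knobs set at session build time; the values shown are generic starting points to adjust per workload, not recommendations from the book.

    # Common tuning knobs; the values are placeholders to adjust per workload.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("tuning-sketch")
        .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism
        .config("spark.sql.adaptive.enabled", "true")   # adaptive query execution
        .config("spark.executor.memory", "4g")          # per-executor heap
        .getOrCreate()
    )

    # Cache a hot DataFrame and inspect the physical plan; the jobs, storage,
    # and SQL tabs of the Spark UI show the effect at runtime.
    df = spark.range(1_000_000)
    df.cache()
    df.groupBy((df.id % 10).alias("bucket")).count().explain()

While the application runs, the Spark UI is served by default on port 4040 of the driver.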
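And for MLlib, a self-contained pipeline sketch on toy, in-line data; the feature columns, label, and model choice are invented for illustration.

    # An MLlib pipeline sketch; feature names and the model are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

    # Toy training data: two numeric features and a binary label.
    train = spark.createDataFrame(
        [(0.0, 1.1, 0), (2.0, 1.0, 1), (2.1, 1.3, 1), (0.1, 1.2, 0)],
        ["f1", "f2", "label"],
    )

    # Assemble raw columns into the single vector column MLlib expects,
    # then fit a logistic regression on top of it.
    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    model = Pipeline(stages=[assembler, lr]).fit(train)

    # Score the same rows; a real workflow would use a held-out split.
    model.transform(train).select("f1", "f2", "label", "prediction").show()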
By the end, you will master Apache Spark as a professional solution for data analysis, process automation, and machine learning in complex, high-performance environments.
Content reviewed by A.I. with technical supervision.
Keywords: apache spark, big data, pipelines, distributed processing, aws emr, databricks, streaming, etl, machine learning, cloud integration.
Target roles: Google Data Engineer, AWS Data Analytics, Azure Data Engineer, Big Data Engineer, MLOps, DataOps Professional.