Parallel Computing for Data Science
With Examples in R, C++ and CUDA
Author: Matloff, Norman
Publisher: Taylor & Francis Ltd
Publication date: 12/2020
Pages: 328
Binding: Paperback
Language: English
ISBN: 9780367738198
Ships in: 15 to 20 days
Weight: 650 g
Description not available.
Table of Contents:
Introduction to Parallel Processing in R
"Why Is My Program So Slow?": Obstacles to Speed
Principles of Parallel Loop Scheduling
The Shared Memory Paradigm: A Gentle Introduction through R
The Shared Memory Paradigm: C Level
The Shared Memory Paradigm: GPUs
Thrust and Rth
The Message Passing Paradigm
MapReduce Computation
Parallel Sorting and Merging
Parallel Prefix Scan
Parallel Matrix Operations
Inherently Statistical Approaches: Subset Methods
Appendices
Keywords: GPU Global Memory; CUDA Code; parallel programming; Hadoop Distributed File System; parallel data structures; GPU Computation; network graph models; TBB; multicore systems; Multicore Platform; clusters; Vice Versa; graphics processing units; GPU Memory; GPU; Execution Time; programming language; Multicore Machine; computing platforms; OMP; Thrust package; GPU Program; Column Major Order; D Iv; NVIDIA GPU; Adjacency Matrix; Lock Variable; Num Threads; HDFS File; Reduced Row Echelon Form; Data Set; Row Echelon Form; BLAS Library; Shared Memory Programming; Quantile Regression