Condition: New. Brand new. Softcover international edition with a different ISBN and cover image. Priced lower than the standard edition, usually to make it more affordable for students abroad. The core content is generally the same as the standard edition. Country-of-sale restrictions may be printed on the book, but this is no problem for personal use. This item may be shipped from the US or any other country, as we have multiple locations worldwide.
From: GreatBookPrices, Columbia, MD, U.S.A.
EUR 42.70
Quantity: More than 20 available
Condition: New.
From: Lakeside Books, Benton Harbor, MI, U.S.A.
EUR 41.52
Quantity: More than 20 available
Condition: New. Brand new! Not overstocks or low-quality book club editions! Direct from the publisher! We're not a giant, faceless warehouse organization; we're a small-town bookstore that loves books and loves its customers! Buy from Lakeside Books!
Paperback or softback. Condition: New. Modern Data Mining Algorithms in C++ and CUDA C: Recent Developments in Feature Extraction and Selection Algorithms for Data Science. Book.
From: California Books, Miami, FL, U.S.A.
EUR 50.70
Quantity: More than 20 available
Condition: New.
From: GreatBookPrices, Columbia, MD, U.S.A.
EUR 50.15
Quantity: More than 20 available
Condition: As new. Unread book in perfect condition.
From: Kennys Bookshop and Art Galleries Ltd., Galway, Ireland
First edition
EUR 60.50
Quantity: 15 available
Condition: New. 2020. 1st ed. paperback.
From: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 56.72
Quantity: More than 20 available
Condition: As new. Unread book in perfect condition.
From: Ria Christie Collections, Uxbridge, United Kingdom
EUR 61.82
Quantity: More than 20 available
Condition: New. In English.
From: Chiron Media, Wallingford, United Kingdom
EUR 60.18
Quantity: 10 available
PF. Condition: New.
From: Revaluation Books, Exeter, United Kingdom
EUR 66.66
Quantity: 2 available
Paperback. Condition: Brand new. 237 pages. 10.00 x 7.00 x 0.50 inches. In stock.
From: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 61.09
Quantity: More than 20 available
Condition: New.
Condition: New. 2020. 1st ed. paperback. Books ship from the US and Ireland.
From: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
EUR 69.54
Quantity: 2 available
Paperback. Condition: New. New stock. Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or for extracting useful features from measured variables. As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, most of which are of little or no value. You'll know that many of these features may be useful only in combination with certain other features, while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve them by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:

- Forward selection component analysis
- Local feature selection
- Linking features and a target with a hidden Markov model
- Improvements on traditional stepwise selection
- Nominal-to-ordinal conversion

All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code. The example code is in C++ and CUDA C, but Python or other code can be substituted; the algorithm is important, not the code used to write it.

What you will learn:

- Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
- Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
- Find an underlying hidden Markov model that controls the distributions of the feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
- Improve traditional stepwise selection in three ways: examine a collection of "best-so-far" feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, or the improvement obtained by adding a new variable, could be just the product of random good luck.
- Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.

Who this book is for: intermediate to advanced data science programmers and analysts.

APress in Springer Science + Business Media, Heidelberger Platz 3, 14197 Berlin. 240 pp. English.
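Since the publisher's description above is the only technical summary in these listings, a brief illustration may help make the cross-validated stepwise selection idea concrete. The following is a minimal, self-contained C++ sketch of generic forward stepwise selection scored by leave-one-out cross validation with a nearest-class-centroid classifier. It is not the book's algorithm (which also tracks multiple "best-so-far" sets and estimates luck probabilities); all identifiers (loo_accuracy and the toy data) are invented for this sketch.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Leave-one-out accuracy of a nearest-class-centroid classifier restricted to
// the features listed in 'subset'. X is n_cases x n_features; y holds class 0/1.
double loo_accuracy(const std::vector<std::vector<double>>& X,
                    const std::vector<int>& y,
                    const std::vector<int>& subset) {
    const std::size_t n = X.size();
    int correct = 0;
    for (std::size_t test = 0; test < n; ++test) {
        // Class centroids computed from all cases except the held-out one.
        std::vector<double> mean0(subset.size(), 0.0), mean1(subset.size(), 0.0);
        int n0 = 0, n1 = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (i == test) continue;
            std::vector<double>& m = (y[i] == 0) ? mean0 : mean1;
            (y[i] == 0 ? n0 : n1)++;
            for (std::size_t j = 0; j < subset.size(); ++j)
                m[j] += X[i][subset[j]];
        }
        for (std::size_t j = 0; j < subset.size(); ++j) {
            mean0[j] /= n0;
            mean1[j] /= n1;
        }
        // Classify the held-out case by the nearer centroid (squared distance).
        double d0 = 0.0, d1 = 0.0;
        for (std::size_t j = 0; j < subset.size(); ++j) {
            double v = X[test][subset[j]];
            d0 += (v - mean0[j]) * (v - mean0[j]);
            d1 += (v - mean1[j]) * (v - mean1[j]);
        }
        if (((d0 <= d1) ? 0 : 1) == y[test]) ++correct;
    }
    return static_cast<double>(correct) / n;
}

int main() {
    // Toy data: feature 0 separates the classes; feature 1 is noise.
    std::vector<std::vector<double>> X = {
        {0.1, 5.0}, {0.2, 1.0}, {0.3, 4.0}, {0.9, 2.0}, {1.0, 5.0}, {1.1, 0.0}};
    std::vector<int> y = {0, 0, 0, 1, 1, 1};
    const std::size_t n_features = X[0].size();

    std::vector<int> selected;
    std::vector<bool> used(n_features, false);
    double best_score = 0.0;
    for (;;) {  // greedy forward passes
        int best_f = -1;
        for (std::size_t f = 0; f < n_features; ++f) {
            if (used[f]) continue;
            selected.push_back(static_cast<int>(f));
            double s = loo_accuracy(X, y, selected);  // score candidate subset
            selected.pop_back();
            if (s > best_score) { best_score = s; best_f = static_cast<int>(f); }
        }
        if (best_f < 0) break;  // no candidate improves the CV score: stop
        used[best_f] = true;
        selected.push_back(best_f);
        std::cout << "added feature " << best_f
                  << ", LOO accuracy = " << best_score << '\n';
    }
    return 0;
}
```

With this toy data the program adds feature 0 and then stops, because including the noise feature lowers the cross-validated accuracy; that automatic stopping is what the description means by cross validation limiting model complexity.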
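The nominal-to-ordinal conversion mentioned in the description can be sketched similarly. One common realization of the general idea (not necessarily the book's method) is target-mean encoding: each category of a nominal variable is replaced by the mean target value observed for that category in the training data. The names below (encode_nominal and the toy data) are hypothetical.

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Build a category -> numeric value table from training data by assigning each
// category the mean of the target over the training cases in that category.
std::map<std::string, double> encode_nominal(
        const std::vector<std::string>& categories,
        const std::vector<double>& targets) {
    std::map<std::string, double> sum;  // running sum of targets per category
    std::map<std::string, int> count;   // number of cases per category
    for (std::size_t i = 0; i < categories.size(); ++i) {
        sum[categories[i]] += targets[i];
        ++count[categories[i]];
    }
    std::map<std::string, double> encoding;
    for (const auto& kv : sum)
        encoding[kv.first] = kv.second / count[kv.first];
    return encoding;
}

int main() {
    // Toy training set: a nominal feature and a numeric target.
    std::vector<std::string> cat = {"red", "blue", "red", "green", "blue"};
    std::vector<double> y = {1.0, 0.0, 0.8, 0.5, 0.2};

    auto enc = encode_nominal(cat, y);
    for (const auto& kv : enc)
        std::cout << kv.first << " -> " << kv.second << '\n';
    // Prints: blue -> 0.1, green -> 0.5, red -> 0.9
    return 0;
}
```

In practice the table built on the training set is then applied to new cases, with some fallback (for example, the overall target mean) for categories never seen in training.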
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 69.54
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand, so it takes 3-4 days longer. New stock. The publisher's description is the same as in the buchversandmimpf2000 listing above. 240 pp. English.
From: Majestic Books, Hounslow, United Kingdom
EUR 92.34
Quantity: 4 available
Condition: New. Print on demand.
From: Biblios, Frankfurt am Main, Hesse, Germany
EUR 95.12
Quantity: 4 available
Condition: New. Print on demand.
From: moluna, Greven, Germany
EUR 56.35
Quantity: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after you order. A novel expert-driven data-mining approach to algorithms in C++ and CUDA C. The author has been developing and using these algorithms for over 20 years. Data mining is an important topic in big data and data science.
From: preigu, Osnabrück, Germany
EUR 58.50
Quantity: 5 available
Paperback. Condition: New. Modern Data Mining Algorithms in C++ and CUDA C | Recent Developments in Feature Extraction and Selection Algorithms for Data Science | Timothy Masters | Paperback | ix | English | 2020 | Apress | EAN 9781484259870 | Responsible person for the EU: APress in Springer Science + Business Media, Heidelberger Platz 3, 14197 Berlin, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Print on demand.
From: AHA-BUCH GmbH, Einbeck, Germany
EUR 70.37
Quantity: 1 available
Paperback. Condition: New. New stock, printed after ordering. The publisher's description is the same as in the buchversandmimpf2000 listing above.