1.

E-book (EB)
by Nada Lavrač, Vid Podpečan, Marko Robnik-Šikonja
Publication: Cham : Springer International Publishing : Imprint: Springer, 2021
Online: https://doi.org/10.1007/978-3-030-68817-2
Table of contents:
Introduction to Representation Learning
Machine Learning Background
Text Embeddings
Propositionalization of Relational Data
Graph and Heterogeneous Network Transformations
Unified Representation Learning Approaches
Many Faces of Representation Learning
2.

E-book (EB)
by Paulo Cortez
Publication: Cham : Springer International Publishing : Imprint: Springer, 2021
Series: Use R!
Online: https://doi.org/10.1007/978-3-030-72819-9
Table of contents:
Chapter 1. Introduction
Chapter 2. R Basics
Chapter 3. Blind Search
Chapter 4. Local Search
Chapter 5. Population Based Search
Chapter 6. Multi-Objective Optimization
3.

E-book (EB)
by Feng Bao
Publication: Singapore : Springer Nature Singapore : Imprint: Springer, 2021
Series: Springer Theses, Recognizing Outstanding Ph.D. Research
Online: https://doi.org/10.1007/978-981-16-3064-4
Table of contents:
Chapter 1 Introduction
Chapter 2 Fast computational recovery of missing features for large-scale biological data
Chapter 3 Computational recovery of information from low-quality and missing labels
Chapter 4 Computational recovery of missing samples
Chapter 5 Summary and outlook
4.

E-book (EB)
by Arilova A. Randrianasolo
Publication: Cham : Springer International Publishing : Imprint: Springer, 2021
Series: SpringerBriefs in Statistics
Online: https://doi.org/10.1007/978-3-030-79032-5
Table of contents:
Introduction
1. Da Real MVP
2. A Tribe of Goats
3. The Myth of the Superteam
4. Hey Now, You're an All-Star...But Are You All-NBA?
5. Small Ball in a Big Man's Game
6. Is the Clutch Gene Real?
7. Offense Wins Games, But Does Defense Win Championships?
8. Strategic Implications of the Findings in This Book
9. Debates the Future Work Should Consider
5.

E-book (EB)
by John O'Quigley
Publication: Cham : Springer International Publishing : Imprint: Springer, 2021
Online: https://doi.org/10.1007/978-3-030-33439-0
Table of contents:
Introduction
Survival analysis
Survival without covariates
Proportional hazards models
Proportional hazards models in epidemiology
Non-proportional hazards models
Estimating equations
Survival given covariate information
Regression effect process
Model construction guided by regression effect process
Hypothesis tests
6.

E-book (EB)
by Göran Kauermann, Helmut Küchenhoff, Christian Heumann
Publication: Cham : Springer International Publishing : Imprint: Springer, 2021
Series: Springer Series in Statistics
Online: https://doi.org/10.1007/978-3-030-69827-0
Table of contents:
Introduction
Background in Probability
Parametric Statistical Models
Maximum Likelihood Inference
Bayesian Statistics
Statistical Decisions
Regression
Bootstrapping
Model Selection and Model Averaging
Multivariate and Extreme Value Distributions
Missing and Deficient Data
Experiments and Causality
7.

E-book (EB)
edited by Filippo De Mari, Ernesto De Vito
Publication: Cham : Springer International Publishing : Imprint: Birkhäuser, 2021
Series: Applied and Numerical Harmonic Analysis
Online: https://doi.org/10.1007/978-3-030-86664-8
Table of contents:
Bartolucci, F., De Mari, F., Monti, M., Unitarization of the Horocyclic Radon Transform on Symmetric Spaces
Maurer, A., Entropy and Concentration
Alaifari, R., Ill-Posed Problems: From Linear to Non-Linear and Beyond
Salzo, S., Villa, S., Proximal Gradient Methods for Machine Learning and Imaging
De Vito, E., Rosasco, L., Rudi, A., Regularization: From Inverse Problems to Large Scale Machine Learning
8.

E-book (EB)
by Sylvain Lespinats, Benoit Colange, Denys Dutykh
Publication: Cham : Springer International Publishing : Imprint: Springer, 2022
Online: https://doi.org/10.1007/978-3-030-81026-9
Table of contents:
1 Data science context
1.1 Data in a metric space
1.1.1 Measuring dissimilarities and similarities
1.1.2 Neighbourhood ranks
1.1.3 Embedding space notations
1.1.4 Multidimensional data
1.1.5 Sequence data
1.1.6 Network data
1.1.7 A few multidimensional datasets
1.2 Automated tasks
1.2.1 Underlying distribution
1.2.2 Category identification
1.2.3 Data manifold analysis
1.2.4 Model learning
1.2.5 Regression
1.3 Visual exploration
1.3.1 Human in the loop using graphic variables
1.3.2 Spatialization and Gestalt principles
1.3.3 Scatter plots
1.3.4 Parallel coordinates
1.3.5 Colour coding
1.3.6 Multiple coordinated views and visual interaction
1.3.7 Graph drawing
2 Intrinsic dimensionality
2.1 Curse of dimensionality
2.1.1 Data sparsity
2.1.2 Norm concentration
2.2 ID estimation
2.2.1 Covariance-based approaches
2.2.2 Fractal approaches
2.2.3 Towards local estimation
2.3 TIDLE
2.3.1 Gaussian mixture modelling
2.3.2 Test of TIDLE on a two clusters case
3 Map evaluation
3.1 Objective and practical indicators
3.1.1 Subjectivity of indicators
3.1.2 User studies on specific tasks
3.2 Unsupervised global evaluation
3.2.1 Types of distortions
3.2.2 Link between distortions and mapping continuity
3.2.3 Reasons of distortions ubiquity
3.2.4 Scalar indicators
3.2.5 Aggregation
3.2.6 Diagrams
3.3 Class-aware indicators
3.3.1 Class separation and aggregation
3.3.2 Comparing scores between the two spaces
3.3.3 Class cohesion and distinction
3.3.4 The case of one cluster per class
4 Map interpretation
4.1 Axes recovery
4.1.1 Linear case: biplots
4.1.2 Non-linear case
4.2 Local evaluation
4.2.1 Point-wise aggregation
4.2.2 One to many relations with focus point
4.2.3 Many to many relations
4.3 MING
4.3.1 Uniform formulation of rank-based indicator
4.3.2 MING graphs
4.3.3 MING analysis for a toy dataset
4.3.4 Impact of MING parameters
4.3.5 Visual clutter
4.3.6 Oil flow
4.3.7 COIL-20 dataset
4.3.8 MING perspectives
5 Unsupervised DR
5.1 Spectral projections
5.1.1 Principal Component Analysis
5.1.2 Classical MultiDimensional Scaling
5.1.3 Kernel methods: Isomap, KPCA, LE
5.2 Non-linear MDS
5.2.1 Metric MultiDimensional Scaling
5.2.2 Non-metric MultiDimensional Scaling
5.3 Neighbourhood Embedding
5.3.1 General principle: SNE
5.3.2 Scale setting
5.3.3 Divergence choice: NeRV and JSE
5.3.4 Symmetrization
5.3.5 Solving the crowding problem: tSNE
5.3.6 Kernel choice
5.3.7 Adaptive Student Kernel Imbedding
5.4 Graph layout
5.4.1 Force directed graph layout: Elastic Embedding
5.4.2 Probabilistic graph layout: LargeVis
5.4.3 Topological method UMAP
5.5 Artificial neural networks
5.5.1 Auto-encoders
5.5.2 IVIS
6 Supervised DR
6.1 Types of supervision
6.1.1 Full supervision
6.1.2 Weak supervision
6.1.3 Semi-supervision
6.2 Parametric with class purity
6.2.1 Linear Discriminant Analysis
6.2.2 Neighbourhood Component Analysis
6.3 Metric learning
6.3.1 Mahalanobis distances
6.3.2 Riemannian metric
6.3.3 Direct distances transformation
6.3.4 Similarities learning
6.3.5 Metric learning limitations
6.4 Class adaptive scale
6.5 Classimap
6.6 CGNE
6.6.1 ClassNeRV stress
6.6.2 Flexibility of the supervision
6.6.3 Ablation study
6.6.4 Isolet 5 case study
6.6.5 Robustness to class misinformation
6.6.6 Extension to the type 2 mixture: ClassJSE
6.6.7 Extension to semi-supervision and weak-supervision
6.6.8 Extension to soft labels
7 Mapping construction
7.1 Optimization
7.1.1 Global and local optima
7.1.2 Descent algorithms
7.1.3 Initialization
7.1.4 Multi-scale optimization
7.1.5 Force-directed placement interpretation
7.2 Acceleration strategies
7.2.1 Attractive forces approximation
7.2.2 Binary search trees
7.2.3 Repulsive forces
7.2.4 Landmarks approximation
7.3 Out of sample extension
9.

E-book (EB)
by Mayer Alvo
Publication: Cham : Springer International Publishing : Imprint: Springer, 2022
Series: Springer Series in the Data Sciences
Online: https://doi.org/10.1007/978-3-031-06784-6
Table of contents:
I. Introduction to Big Data
Examples of Big Data
II. Statistical Inference for Big Data
Basic Concepts in Probability
Basic Concepts in Statistics
Multivariate Methods
Nonparametric Statistics
Exponential Tilting and its Applications
Counting Data Analysis
Time Series Methods
Estimating Equations
Symbolic Data Analysis
III. Machine Learning for Big Data
Tools for Machine Learning
Neural Networks
IV. Computational Methods for Statistical Inference
Bayesian Computation Methods
10.

E-book (EB)
by Thomas Haslwanter
Publication: Cham : Springer International Publishing : Imprint: Springer, 2022
Series: Statistics and Computing
Online: https://doi.org/10.1007/978-3-030-97371-1
Table of contents:
I Python and Statistics
1 Introduction
2 Python
3 Data Input
4 Data Display
II Distributions and Hypothesis Tests
5 Basic Statistical Concepts
6 Distributions of One Variable
7 Hypothesis Tests
8 Tests of Means of Numerical Data
9 Tests on Categorical Data
10 Analysis of Survival Times
III Statistical Modelling
11 Finding Patterns in Signals
12 Linear Regression Models
13 Generalized Linear Models
14 Bayesian Statistics
Appendices
A Useful Programming Tools
B Solutions
C Equations for Confidence Intervals
D Web Resources
Glossary
Bibliography
Index