
Gal, Yarin. "Uncertainty in Deep Learning." PhD thesis, University of Cambridge, 2016.

Main contributions of the thesis

(p15) We will thus concentrate on the development of practical techniques to obtain model confidence in deep learning, techniques which are also well rooted within the theoretical foundations of probability theory and Bayesian modelling. Specifically, we will make use of stochastic regularisation techniques (SRTs).

These techniques adapt the model output stochastically as a way of model regularisation (hence the name stochastic regularisation). This results in the loss becoming a random quantity, which is optimised using tools from the stochastic non-convex optimisation literature. Popular SRTs include dropout [Hinton et al., 2012], multiplicative Gaussian noise [Srivastava et al., 2014], dropConnect [Wan et al., 2013], and countless other recent techniques.
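As a concrete illustration (my own minimal sketch, not code from the thesis), Bernoulli dropout in NumPy; it makes the forward pass, and hence the loss, a random quantity during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Bernoulli dropout: zero each unit with probability p.

    Surviving units are scaled by 1/(1-p) ("inverted dropout") so the
    expected activation matches the deterministic test-time pass.
    """
    if not training:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with prob. 1-p
    return x * mask / (1.0 - p)

# Two passes over the same input give different outputs, so any loss
# computed from them is a random quantity, as described above.
x = np.ones(5)
print(dropout(x))
print(dropout(x))
```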

The author's remarks on NNs

CNN

Often used for image recognition. Convolution layers process spatial information; pooling layers reduce dimensionality.

Convolutional neural networks (CNNs). CNNs [LeCun et al., 1989; Rumelhart et al., 1985] are popular deep learning tools for image processing, which can solve tasks that until recently were considered to lie beyond our reach [Krizhevsky et al., 2012; Szegedy et al., 2014]. The model is made of a recursive application of convolution and pooling layers, followed by inner product layers at the end of the network (simple NNs as described above). A convolution layer is a linear transformation that preserves spatial information in the input image (depicted in figure 1.1). Pooling layers simply take the output of a convolution layer and reduce its dimensionality (by taking the maximum of each (2, 2) block of pixels for example). The convolution layer will be explained in more detail in section §3.4.1.
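To make the (2, 2) max pooling concrete, here is a minimal NumPy sketch (mine, not from the thesis):

```python
import numpy as np

def max_pool_2x2(img):
    """Max-pool a (H, W) array over non-overlapping 2x2 blocks.

    Assumes H and W are even; the output has shape (H//2, W//2).
    """
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2)   # split into 2x2 tiles
    return blocks.max(axis=(1, 3))               # max within each tile

img = np.arange(16).reshape(4, 4)
print(max_pool_2x2(img))
# [[ 5  7]
#  [13 15]]
```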

RNN

Good at processing sequential data, e.g. natural language understanding, language generation, video processing, and others (?).

Recurrent neural networks (RNNs). RNNs [Rumelhart et al., 1985; Werbos, 1988] are sequence-based models of key importance for natural language understanding, language generation, video processing, and many other tasks [Kalchbrenner and Blunsom, 2013; Mikolov et al., 2010; Sundermeyer et al., 2012; Sutskever et al., 2014].

PILCO

PILCO [Deisenroth and Rasmussen, 2011], for example, is a data-efficient probabilistic model-based policy search algorithm. PILCO analytically propagates uncertain state distributions through a Gaussian process dynamics model. This is done by recursively feeding the output state distribution (output uncertainty) of one time step as the input state distribution (input uncertainty) of the next time step, until a fixed time horizon T.
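Schematically (my own toy sketch, not PILCO itself): with a linear-Gaussian stand-in for the GP dynamics model, the propagated distribution stays exactly Gaussian, and the recursion over the horizon T looks like this:

```python
import numpy as np

# Stand-in dynamics: for a *linear*-Gaussian model x' = A x + w,
# w ~ N(0, Q), the propagated distribution is exactly Gaussian.
# PILCO instead moment-matches analytically through a GP dynamics
# model; this toy version only illustrates the recursion up to T.
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
Q = 0.01 * np.eye(2)

def propagate(mu, Sigma):
    """One step: feed the state distribution N(mu, Sigma) through
    the dynamics and return the next step's Gaussian moments."""
    return A @ mu, A @ Sigma @ A.T + Q

mu, Sigma = np.zeros(2), 0.1 * np.eye(2)
for t in range(10):  # fixed horizon T = 10
    mu, Sigma = propagate(mu, Sigma)
print(mu, np.diag(Sigma))  # uncertainty accumulates over time
```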

Relationship to GPs

With infinitely many neurons and a Gaussian distribution placed over every weight, the network becomes a GP.

With a finite number of weights, it is a BNN.

(p14) Even though modern deep learning models used in practice do not capture model confidence, they are closely related to a family of probabilistic models which induce probability distributions over functions: the Gaussian process. Given a neural network, by placing a probability distribution over each weight (a standard normal distribution for example), a Gaussian process can be recovered in the limit of infinitely many weights (see Neal [1995] or Williams [1997]). For a finite number of weights, model uncertainty can still be obtained by placing distributions over the weights—these models are called Bayesian neural networks.
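A quick numerical check of this limit (my own sketch): sample many random one-hidden-layer networks with zero-mean Gaussian weights and inspect the output at a fixed input; as the hidden width grows, the output distribution approaches a Gaussian, as in Neal's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.3])   # a fixed input point

def random_net_output(width, n_nets=5000):
    """Outputs of n_nets random 1-hidden-layer nets evaluated at x.

    Weights ~ N(0, 1/fan_in), the scaling under which the output
    variance stays O(1) as the width grows (the GP-limit scaling).
    """
    W1 = rng.normal(0, 1 / np.sqrt(len(x)), (n_nets, width, len(x)))
    W2 = rng.normal(0, 1 / np.sqrt(width), (n_nets, width))
    h = np.tanh(W1 @ x)                # hidden activations
    return np.sum(W2 * h, axis=1)      # scalar outputs

for width in [1, 10, 1000]:
    out = random_net_output(width)
    # excess kurtosis -> 0 as the output distribution becomes Gaussian
    k = np.mean(out**4) / np.mean(out**2)**2 - 3
    print(width, round(k, 3))
```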

Some basics of Bayesian modelling

Before making any observations, we assume there exists a function $\bm{y}=f^{\bm{\omega}}(\bm{x})$.

Prior distribution: $p(\bm{\omega})$

It represents our subjective belief about the parameters before seeing any data.


Once some observations have been obtained, we can define the

Likelihood distribution: $p(\bm{y}|\bm{x},\bm{\omega})$

It gives the probability of the input $\bm{x}$ producing the observation $\bm{y}$ under the currently assumed function parameters $\bm{\omega}$.
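These two pieces combine through Bayes' rule into the posterior over the parameters (the standard construction, with $\bm{X},\bm{Y}$ denoting the observed training data):

$$p(\bm{\omega}|\bm{X},\bm{Y}) = \frac{p(\bm{Y}|\bm{X},\bm{\omega})\,p(\bm{\omega})}{p(\bm{Y}|\bm{X})}, \qquad p(\bm{Y}|\bm{X}) = \int p(\bm{Y}|\bm{X},\bm{\omega})\,p(\bm{\omega})\,\mathrm{d}\bm{\omega}.$$

The normalizing integral (the model evidence) is intractable in general, which is exactly why sampling-based approximations such as the Monte Carlo integration below are needed.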


Monte Carlo integration

For a statistics layman like me, the biggest problem at the start was not the algorithmic details of MCMC; it was that I had not figured out why sampling is needed in the first place, i.e. how the sampled results are supposed to be used.

What I actually wanted to understand is called Monte Carlo integration; see the question here or the slides here.

Assuming $\theta_i$ is sampled from the distribution $p(\theta|D)$, the Monte Carlo integration formula is:

$$\mathbb{E}_{\theta\sim p(\theta|D)}[g(\theta)] = \int g(\theta)\, p(\theta|D)\, \mathrm{d}\theta \approx \frac{1}{n} \sum_{\theta_i\sim p(\theta|D)} g(\theta_i) + O(n^{-1/2})$$
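A tiny numerical example (my own sketch): estimate $\mathbb{E}[g(\theta)]$ for $g(\theta)=\theta^2$ under a Gaussian stand-in "posterior", where the exact answer $\mu^2+\sigma^2$ is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend the posterior p(theta|D) is N(1, 0.5^2); then
# E[theta^2] = mu^2 + sigma^2 = 1.25 exactly.
mu, sigma = 1.0, 0.5
g = lambda theta: theta**2

for n in [100, 10_000, 1_000_000]:
    theta = rng.normal(mu, sigma, size=n)   # theta_i ~ p(theta|D)
    print(n, np.mean(g(theta)))             # error shrinks like n^{-1/2}
```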

The following discussion provides a very clear interpretation of Bayesian inference, but I'm not sure it's exact and 100% correct. Need to do more reading.
Can a posterior expectation be used as an approximation for the true (prior) expectation?

Is MCMC applicable to large-scale problems, ones with thousands of parameters?

The probabilistic models that MCMC is applied to often have enormous parameter dimensionality, but each individual parameter has a very small support. For instance, in some NLP problems each parameter only takes values in {0, 1}, yet the dimensionality reaches thousands or even tens of thousands; this is precisely the kind of problem MCMC is suited to.

Why use MCMC methods? - Ni Yun's answer - Zhihu https://www.zhihu.com/question/60437632/answer/179001481

The two existing answers (by Ni Yun and by 李定) put it well: it is because of the curse of dimensionality. Monte Carlo integration is a very efficient way to compute integrals of high-dimensional functions, and computing the expectation of a parameter in statistical inference is essentially an integral. Starting from that observation leads straight to MCMC methods.

Why use MCMC methods? - astrojhgu's answer - Zhihu https://www.zhihu.com/question/60437632/answer/179040869

Conclusion: it is applicable to large-scale problems, and my problem does not even count as large. But how well it will work is unknown. One potential issue: if it is not actually possible to do a strict regression, the result will be over-learning, i.e. severe overfitting.

Study materials

Listed in my reading order

[ref-1] Dylon 大仙, MCMC基本原理与应用(一), 2015-06-03

[ref-2] daniel-D, 从随机过程到马尔科夫链蒙特卡洛方法 (not great; the exposition is rather muddled)

[ref-3] 靳志辉, LDA-math-MCMC 和 Gibbs Sampling (I started reading the algorithms carefully from here; the detailed balance condition)

[ref-4] shenxiaolu1984, 蒙特卡洛-马尔科夫链(MCMC)初步 (briefly introduces four sampling methods; the formulas for the specific algorithms are broken)

[ref-5] shenxiaolu1984, 蒙特卡洛-马尔科夫链(MCMC)的混合速度

[ref-6] qy20115549, HMC(Hamiltonian Monte Carlo抽样算法详细介绍) (not read yet)

[ref-7] 随机模拟-Monte Carlo积分及采样(详述直接采样、接受-拒绝采样、重要性采样) (Explains Monte Carlo integration and several common sampling methods rather intuitively and insightfully. One of MCMC's main roles is to support Monte Carlo integration, which involves sampling from some probability density $f(x)$.)

[ref-8] Bin的专栏, 随机采样方法整理与讲解(MCMC、Gibbs Sampling等) (Recommended; essentially the right order in which to understand things.)

[ref-9] 再谈MCMC方法 (discusses using MCMC to crack the Caesar cipher)

[ref-wiki-MCMC] Markov chain Monte Carlo (not read yet)

[ref-wiki-Gibbs] Gibbs sampling

HMC

如何简单地理解「哈密尔顿蒙特卡洛 (HMC)」? / Rather brief; not much help for getting started.

My summary

The key to understanding this algorithm is a grasp of the following concepts:

  • Stochastic processes
  • The Markov property (no after-effect)
  • Limiting and stationary distributions of a Markov chain
  • Sampling from probability distributions, and numerical methods for it (see the sketch below)
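To connect these concepts, below is a minimal random-walk Metropolis sampler (my own sketch, the simplest member of the MCMC family; HMC replaces the random-walk proposal with proposals driven by Hamiltonian dynamics). The target here is a hypothetical unnormalized two-Gaussian mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    """Unnormalized log density of the target, a mixture of two
    Gaussians; the normalizing constant is never needed."""
    return np.logaddexp(-0.5 * (theta - 2)**2, -0.5 * (theta + 2)**2)

def metropolis(n_steps, step=1.0):
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal()   # random-walk proposal
        # accept with prob min(1, p(prop)/p(theta)); this acceptance
        # ratio is what enforces detailed balance w.r.t. the target
        if np.log(rng.random()) < log_target(prop) - log_target(theta):
            theta = prop
        samples.append(theta)
    return np.array(samples)

s = metropolis(50_000)
print(s.mean(), (s > 0).mean())  # ~0 and ~0.5 by symmetry
```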

steinwart_support_2008

The essence of Statistical Learning Theory

This differs from classical parametric methods, which assume that the relationship between x and y follows some deterministic function. (p3) this is a fundamental difference from parametric models, in which the relationship between the inputs x and the outputs y is assumed to follow some unknown function f ∈ F from a known, finite-dimensional set of functions F.

  • assuming that the output value y to a given x is stochastically generated by P( · |x) accommodates the fact that in general the information contained in x may not be sufficient to determine a single response in a deterministic manner.
  • assuming that the conditional probability P( · |x) is unknown contributes to the fact that we assume that we do not have a reasonable description of the relationship between the input and output values.
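A toy illustration of the first point (a hypothetical data-generating process of my own): the same x can yield different y, so y is modelled as a draw from P( · |x) rather than as a deterministic f(x).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_y(x):
    """A hypothetical data-generating process P(.|x): the information
    in x does not pin down y, so repeated queries at the same x give
    different responses."""
    return np.sin(x) + 0.3 * rng.normal()

x = 1.0
print([round(sample_y(x), 3) for _ in range(5)])  # five different y's
```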

The relationship between SVMs and GPs

For a brief description of kernel ridge regression and Gaussian processes, see Cristianini and Shawe-Taylor (2000, Section 6.2).

We refer to Wahba (1999) for the relationship between SVMs and Gaussian processes.

Small utilities I have accumulated

| Name | Purpose |
| --- | --- |
| Language Switcher | Set the locale an application launches with |
| ~~TinkerTool~~ | ~~Set the system-related font sizes for Eclipse~~ (gave up on Eclipse as of 2019/03) |
| QBlocker | Prevents closing apps by accident; sometimes stops working |
| ~~清歌输入法~~ | ~~A reasonably usable Wubi input method~~ |
| 搜狗五笔 | Wubi input method |
| HyperSwitch | Better window switching |

Problems I have given up on solving


brack_inorbit_2017

In-Orbit Tracking of High Area-to-Mass Ratio Space Objects

The intro of this paper covers a lot of new material and is worth studying.

Singla, Puneet. 2016. “Certain Thoughts on Uncertainty Analysis for Dynamical Systems.” Department of Mechanical and Aerospace Engineering, University of Texas at Arlington, August 17. http://lairs.eng.buffalo.edu/wiki/images/a/ac/SinglaTalk.pdf.

The fusion of observational data with numerical simulation promises to provide greater understanding of physical phenomenon than either approach alone can achieve.

The most critical challenge here is to provide a quantitative assessment of how closely our estimates reflect reality in the presence of model uncertainty as well as measurement errors and uncertainty.

Uncertainty Propagation: Nonlinear Systems

  • Approximate Solution to exact problem: Multiple-model estimation method, Unscented Kalman Filter (UKF), Monte Carlo (MC) methods.
  • Exact solution to approximate problem: Extended Kalman Filter (EKF), Gaussian closure, Stochastic Linearization…

Where exactly is the difference between these two? Why is the EKF an "exact" solution? (My understanding: the EKF first approximates the problem by linearizing the nonlinear dynamics, then propagates the Gaussian exactly through that linear model; UKF and MC methods keep the exact nonlinear problem and approximate its solution instead.)

Fokker-Planck-Kolmogorov equation (FPKE)

With a sufficient number of Gaussian components, any pdf can be approximated as closely as desired.
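A quick sanity check of this claim (my own sketch using scikit-learn, not anything from the talk): fit Gaussian mixtures of increasing size to samples from a clearly non-Gaussian density and watch the average log-likelihood climb.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# A clearly non-Gaussian target pdf: a standard exponential.
data = rng.exponential(1.0, size=20_000).reshape(-1, 1)

for k in [1, 2, 8, 32]:
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    # Higher average log-likelihood means a closer pdf approximation.
    print(k, round(gmm.score(data), 4))
```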

Study notes on the Neural Network Toolbox

Covers both shallow and deep NNs:

  • classification
  • regression (these notes mainly cover this part; the other network types involve a lot of different material)
  • clustering
  • dimensionality reduction
  • time-series forecasting: long short-term memory (LSTM) deep learning networks
  • dynamic system modeling and control

For small training sets, you can quickly apply deep learning by performing transfer learning with pretrained deep network models (GoogLeNet, AlexNet, VGG-16, and VGG-19) and models from the Caffe Model Zoo. (What is this? The Model Zoo is a public collection of pretrained Caffe models.)

Supports CPU and GPU parallelism; supports Amazon EC2 P2 GPU instances. (What are these? AWS cloud instances equipped with NVIDIA GPUs.)

Main functions
