A Translation of the scipy.stats Statistics Tutorial


Statistics (scipy.stats)

Introduction

In this tutorial we discuss some of the features of scipy.stats. The intention here is to give the user a working knowledge of this package. For more details we refer to the reference manual.

Note: this documentation is a work in progress.

Random variables

There are two general distribution classes, one for continuous random variables and one for discrete random variables. More than 80 continuous random variables (RVs) and over 10 discrete random variables have been implemented using these classes. Besides this, new routines and distributions can easily be added by the end user. (If you create one, please contribute it to help this package grow.)

All of the statistics functions are located in the sub-package scipy.stats, and a fairly complete listing of these functions can be obtained using info(stats). The random variables in this list are also described in the docstring of the stats sub-package.

In the discussion below we mostly focus on continuous RVs. Nearly everything also applies to discrete variables, but we point out some differences in "Specific points for discrete distributions".

In the code samples below we assume that the scipy.stats package is imported as follows.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy <span class="pl-k">import</span> stats

Some of the examples assume that objects are imported like this (so that the full path does not have to be typed).

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> norm

Getting help

All distributions come with documentation accessible through the help function. To obtain this information, a simple call like the following is enough:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> norm.<span class="pl-c1">__doc__</span>

As an example, we can obtain the lower and upper bounds of the distribution this way:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>bounds of distribution lower: <span class="pl-c1">%s</span>, upper: <span class="pl-c1">%s</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (norm.a,norm.b)
bounds of distribution lower: <span class="pl-k">-</span>inf, upper: inf

We can list all methods and properties of the (normal) distribution by calling dir(norm). As it turns out, some of the methods are private even though they are not named as such (their names do not start with a leading underscore); for example, veccdf is only available for internal calculation. (Trying to use those methods will raise warnings, since they may be removed in the course of further development.)

To obtain the real main methods, we list the methods of the frozen distribution. (We will explain below what "frozen" means.)

<span class="pl-k">>></span><span class="pl-k">></span> rv <span class="pl-k">=</span> norm()
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">dir</span>(rv)  <span class="pl-c"># reformatted</span>
    [<span class="pl-s"><span class="pl-pds">'</span>__class__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__delattr__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__dict__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__doc__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__getattribute__<span class="pl-pds">'</span></span>,
    <span class="pl-s"><span class="pl-pds">'</span>__hash__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__init__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__module__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__new__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__reduce__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__reduce_ex__<span class="pl-pds">'</span></span>,
    <span class="pl-s"><span class="pl-pds">'</span>__repr__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__setattr__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__str__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>__weakref__<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>args<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>cdf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>dist<span class="pl-pds">'</span></span>,
    <span class="pl-s"><span class="pl-pds">'</span>entropy<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>isf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>kwds<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>moment<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>pdf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>pmf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>ppf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>rvs<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>sf<span class="pl-pds">'</span></span>, <span class="pl-s"><span class="pl-pds">'</span>stats<span class="pl-pds">'</span></span>]

Finally, we can obtain the list of all available distributions through introspection.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">import</span> warnings
<span class="pl-k">>></span><span class="pl-k">></span> warnings.simplefilter(<span class="pl-s"><span class="pl-pds">'</span>ignore<span class="pl-pds">'</span></span>, <span class="pl-c1">DeprecationWarning</span>)
<span class="pl-k">>></span><span class="pl-k">></span> dist_continu <span class="pl-k">=</span> [d <span class="pl-k">for</span> d <span class="pl-k">in</span> <span class="pl-c1">dir</span>(stats) <span class="pl-k">if</span>
<span class="pl-c1">...</span>                 <span class="pl-c1">isinstance</span>(<span class="pl-c1">getattr</span>(stats,d), stats.rv_continuous)]
<span class="pl-k">>></span><span class="pl-k">></span> dist_discrete <span class="pl-k">=</span> [d <span class="pl-k">for</span> d <span class="pl-k">in</span> <span class="pl-c1">dir</span>(stats) <span class="pl-k">if</span>
<span class="pl-c1">...</span>                  <span class="pl-c1">isinstance</span>(<span class="pl-c1">getattr</span>(stats,d), stats.rv_discrete)]
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>number of continuous distributions:<span class="pl-pds">'</span></span>, <span class="pl-c1">len</span>(dist_continu)
number of continuous distributions: <span class="pl-c1">84</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>number of discrete distributions:  <span class="pl-pds">'</span></span>, <span class="pl-c1">len</span>(dist_discrete)
number of discrete distributions:   <span class="pl-c1">12</span>

Common methods

The main public methods for continuous RVs are:

  • rvs: random variates (i.e. draw samples from the distribution)
  • pdf: probability density function
  • cdf: cumulative distribution function
  • sf: survival function (1 − CDF)
  • ppf: percent point function (inverse of the CDF)
  • isf: inverse survival function (inverse of sf)
  • stats: return mean, variance, (Fisher's) skewness, and (Fisher's) kurtosis
  • moment: non-central moments of the distribution
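These methods are tied together by simple identities that can be checked directly. A minimal sketch using the standard normal (the identities hold for any continuous distribution):

```python
from scipy.stats import norm

q = 0.95
x = norm.ppf(q)           # percent point function: the x with cdf(x) == q
back = norm.cdf(x)        # cdf inverts ppf, so back == q
tail = norm.sf(x)         # survival function: 1 - cdf(x)
x_again = norm.isf(tail)  # isf inverts sf, recovering x
```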

Let us take a standard normal RV as an example.

<span class="pl-k">>></span><span class="pl-k">></span> norm.cdf(<span class="pl-c1">0</span>)
<span class="pl-c1">0.5</span>

To compute the cdf at a number of points, we can pass a list or a numpy array.

<span class="pl-k">>></span><span class="pl-k">></span> norm.cdf([<span class="pl-k">-</span><span class="pl-c1">1</span>., <span class="pl-c1">0</span>, <span class="pl-c1">1</span>])
array([ <span class="pl-c1">0.15865525</span>,  <span class="pl-c1">0.5</span>       ,  <span class="pl-c1">0.84134475</span>])
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">import</span> numpy <span class="pl-k">as</span> np
<span class="pl-k">>></span><span class="pl-k">></span> norm.cdf(np.array([<span class="pl-k">-</span><span class="pl-c1">1</span>., <span class="pl-c1">0</span>, <span class="pl-c1">1</span>]))
array([ <span class="pl-c1">0.15865525</span>,  <span class="pl-c1">0.5</span>       ,  <span class="pl-c1">0.84134475</span>])

Thus, the basic methods such as pdf, cdf, and so on are vectorized with np.vectorize.

Other generally useful methods:

<span class="pl-k">>></span><span class="pl-k">></span> norm.mean(), norm.std(), norm.var()
(<span class="pl-c1">0.0</span>, <span class="pl-c1">1.0</span>, <span class="pl-c1">1.0</span>)
<span class="pl-k">>></span><span class="pl-k">></span> norm.stats(<span class="pl-v">moments</span> <span class="pl-k">=</span> <span class="pl-s"><span class="pl-pds">"</span>mv<span class="pl-pds">"</span></span>)
(array(<span class="pl-c1">0.0</span>), array(<span class="pl-c1">1.0</span>))

To find the median of a distribution, we can use the percent point function ppf, which is the inverse of the cdf.

<span class="pl-k">>></span><span class="pl-k">></span> norm.ppf(<span class="pl-c1">0.5</span>)
<span class="pl-c1">0.0</span>

To generate a sequence of random variates, use the size keyword argument.

<span class="pl-k">>></span><span class="pl-k">></span> norm.rvs(<span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">5</span>)
array([<span class="pl-k">-</span><span class="pl-c1">0.35687759</span>,  <span class="pl-c1">1.34347647</span>, <span class="pl-k">-</span><span class="pl-c1">0.11710531</span>, <span class="pl-k">-</span><span class="pl-c1">1.00725181</span>, <span class="pl-k">-</span><span class="pl-c1">0.51275702</span>])

Don't think that norm.rvs(5) generates five variates:

<span class="pl-k">>></span><span class="pl-k">></span> norm.rvs(<span class="pl-c1">5</span>)
<span class="pl-c1">7.131624370075814</span>

What is happening here is explained in the next section.

Shifting and scaling

All continuous distributions take loc and scale as keyword parameters to adjust the location and scale of the distribution. For example, for the standard normal distribution, the location is the mean and the scale is the standard deviation.

<span class="pl-k">>></span><span class="pl-k">></span> norm.stats(<span class="pl-v">loc</span> <span class="pl-k">=</span> <span class="pl-c1">3</span>, <span class="pl-v">scale</span> <span class="pl-k">=</span> <span class="pl-c1">4</span>, <span class="pl-v">moments</span> <span class="pl-k">=</span> <span class="pl-s"><span class="pl-pds">"</span>mv<span class="pl-pds">"</span></span>)
(array(<span class="pl-c1">3.0</span>), array(<span class="pl-c1">16.0</span>))

In many cases the standardized distribution for a random variable X is obtained through the transformation (X − loc) / scale. The default values are loc = 0 and scale = 1.
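This standardization rule can be verified numerically. A small sketch (the particular numbers are arbitrary): the density picks up an extra 1/scale factor, while the cdf does not.

```python
from scipy.stats import norm

x, loc, scale = 1.7, 3.0, 4.0
# the pdf with loc/scale equals the standard pdf of (x - loc)/scale, divided by scale
lhs_pdf = norm.pdf(x, loc=loc, scale=scale)
rhs_pdf = norm.pdf((x - loc) / scale) / scale
# the cdf transforms without the 1/scale factor
lhs_cdf = norm.cdf(x, loc=loc, scale=scale)
rhs_cdf = norm.cdf((x - loc) / scale)
```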

Smart use of loc and scale can help modify the standard distributions in many flexible ways. To illustrate the effect of scaling further, the cdf of an exponentially distributed RV with mean 1/λ is given below.

F(x)=1−exp(−λx)

By applying the scaling rule above with scale = 1/λ, we can see how the desired mean is obtained.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> expon
<span class="pl-k">>></span><span class="pl-k">></span> expon.mean(<span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">3</span>.)
<span class="pl-c1">3.0</span>
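The scale = 1/λ correspondence holds for the whole cdf, not only for the mean. A short check against the formula F(x) = 1 − exp(−λx) above (λ = 0.5 is an arbitrary choice):

```python
import numpy as np
from scipy.stats import expon

lam = 0.5                                 # rate λ, so scale = 1/λ = 2
x = np.array([0.5, 1.0, 2.0])
cdf_scipy = expon.cdf(x, scale=1.0 / lam)
cdf_formula = 1.0 - np.exp(-lam * x)      # F(x) = 1 - exp(-λx)
mean = expon.mean(scale=1.0 / lam)        # equals 1/λ = 2.0
```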

The uniform distribution is also interesting:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> uniform
<span class="pl-k">>></span><span class="pl-k">></span> uniform.cdf([<span class="pl-c1">0</span>, <span class="pl-c1">1</span>, <span class="pl-c1">2</span>, <span class="pl-c1">3</span>, <span class="pl-c1">4</span>, <span class="pl-c1">5</span>], <span class="pl-v">loc</span> <span class="pl-k">=</span> <span class="pl-c1">1</span>, <span class="pl-v">scale</span> <span class="pl-k">=</span> <span class="pl-c1">4</span>)
array([ <span class="pl-c1">0</span>.  ,  <span class="pl-c1">0</span>.  ,  <span class="pl-c1">0.25</span>,  <span class="pl-c1">0.5</span> ,  <span class="pl-c1">0.75</span>,  <span class="pl-c1">1</span>.  ])

Finally, recall the problem we left open in the earlier paragraph with norm.rvs(5). As it turns out, calling a distribution like this, the first argument, i.e. the 5, gets passed to the loc parameter. Let us see:

<span class="pl-k">>></span><span class="pl-k">></span> np.mean(norm.rvs(<span class="pl-c1">5</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">500</span>))
<span class="pl-c1">4.983550784784704</span>

Here, to explain the output of the last section: norm.rvs(5) generates a single normally distributed random variate with mean loc = 5.

I prefer to set loc and scale explicitly as keyword arguments rather than relying on the order of the arguments as above, since the latter can be confusing. We clarify this point before we explain the topic of freezing an RV.

Shape parameters

While a general continuous random variable can be shifted and scaled with the loc and scale parameters, some distributions require additional shape parameters to fix their shape. As an example, take the gamma distribution, whose density is

γ(x, a) = λ (λx)^(a−1) e^(−λx) / Γ(a),

which requires the shape parameter a. Observe that setting λ can be obtained by setting the scale keyword to 1/λ.

Let us check the number and name of the shape parameters of the gamma distribution. (We know from the above that this should be 1.)

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> gamma
<span class="pl-k">>></span><span class="pl-k">></span> gamma.numargs
<span class="pl-c1">1</span>
<span class="pl-k">>></span><span class="pl-k">></span> gamma.shapes
<span class="pl-s"><span class="pl-pds">'</span>a<span class="pl-pds">'</span></span>

Now we set the value of the shape variable to 1 to obtain the exponential distribution, so that we can easily compare whether we get the results we expect.

<span class="pl-k">>></span><span class="pl-k">></span>  gamma(<span class="pl-c1">1</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">2</span>.).stats(<span class="pl-v">moments</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>mv<span class="pl-pds">"</span></span>)
(array(<span class="pl-c1">2.0</span>), array(<span class="pl-c1">4.0</span>))

Notice that we can also specify shape parameters as keywords:

<span class="pl-k">>></span><span class="pl-k">></span> gamma(<span class="pl-v">a</span><span class="pl-k">=</span><span class="pl-c1">1</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">2</span>.).stats(<span class="pl-v">moments</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>mv<span class="pl-pds">"</span></span>)
(array(<span class="pl-c1">2.0</span>), array(<span class="pl-c1">4.0</span>))
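The gamma moments follow the usual shape/scale pattern (mean = a · scale, variance = a · scale²), which gives another quick consistency check; the parameter values here are arbitrary:

```python
from scipy.stats import gamma

a, scale = 3.0, 2.0
m, v = gamma.stats(a, scale=scale, moments='mv')
# mean = a * scale = 6, variance = a * scale**2 = 12
```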

Freezing a distribution

Passing the loc and scale keywords over and over again can become quite bothersome. The concept of freezing an RV is used to solve this problem.

<span class="pl-k">>></span><span class="pl-k">></span> rv <span class="pl-k">=</span> gamma(<span class="pl-c1">1</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">2</span>.)

By using rv we no longer have to include the scale or shape parameters anywhere. Thus, distributions can be used in one of two ways: either by passing all distribution parameters to each method call (as we did earlier), or by freezing the parameters for the instance of the distribution. Let us check this:

<span class="pl-k">>></span><span class="pl-k">></span> rv.mean(), rv.std()
(<span class="pl-c1">2.0</span>, <span class="pl-c1">2.0</span>)

And that is indeed what we should get.
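A frozen distribution is just a convenience wrapper: every method of the frozen object agrees with the corresponding explicit call on the unfrozen distribution. A minimal check:

```python
from scipy.stats import gamma

rv = gamma(1, scale=2.)                     # frozen: shape and scale fixed once
frozen_val = rv.cdf(3.0)
explicit_val = gamma.cdf(3.0, 1, scale=2.)  # same result, parameters repeated
```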

Broadcasting

The basic methods, such as pdf, satisfy the usual numpy broadcasting rules. For example, we can calculate the critical values for the upper tail of the t distribution for different probabilities and degrees of freedom.

<span class="pl-k">>></span><span class="pl-k">></span> stats.t.isf([<span class="pl-c1">0.1</span>, <span class="pl-c1">0.05</span>, <span class="pl-c1">0.01</span>], [[<span class="pl-c1">10</span>], [<span class="pl-c1">11</span>]])
array([[ <span class="pl-c1">1.37218364</span>,  <span class="pl-c1">1.81246112</span>,  <span class="pl-c1">2.76376946</span>],
       [ <span class="pl-c1">1.36343032</span>,  <span class="pl-c1">1.79588482</span>,  <span class="pl-c1">2.71807918</span>]])

Here, the first row contains the critical values for 10 degrees of freedom and the second row for 11 degrees of freedom. Thus, the broadcasting rules give the same result as calling isf twice:

<span class="pl-k">>></span><span class="pl-k">></span> stats.t.isf([<span class="pl-c1">0.1</span>, <span class="pl-c1">0.05</span>, <span class="pl-c1">0.01</span>], <span class="pl-c1">10</span>)
array([ <span class="pl-c1">1.37218364</span>,  <span class="pl-c1">1.81246112</span>,  <span class="pl-c1">2.76376946</span>])
<span class="pl-k">>></span><span class="pl-k">></span> stats.t.isf([<span class="pl-c1">0.1</span>, <span class="pl-c1">0.05</span>, <span class="pl-c1">0.01</span>], <span class="pl-c1">11</span>)
array([ <span class="pl-c1">1.36343032</span>,  <span class="pl-c1">1.79588482</span>,  <span class="pl-c1">2.71807918</span>])

However, if the array of probabilities, i.e. [0.1, 0.05, 0.01], and the array of degrees of freedom, i.e. [10, 11, 12], have the same shape, then element-wise matching is used, and we get the 10%, 5%, and 1% tail critical values for 10, 11, and 12 degrees of freedom, respectively.

<span class="pl-k">>></span><span class="pl-k">></span> stats.t.isf([<span class="pl-c1">0.1</span>, <span class="pl-c1">0.05</span>, <span class="pl-c1">0.01</span>], [<span class="pl-c1">10</span>, <span class="pl-c1">11</span>, <span class="pl-c1">12</span>])
array([ <span class="pl-c1">1.37218364</span>,  <span class="pl-c1">1.79588482</span>,  <span class="pl-c1">2.68099799</span>])
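The broadcasting behaviour can be verified programmatically: a (2, 1) column of degrees of freedom against a length-3 list of probabilities yields a (2, 3) result whose rows match the individual calls.

```python
import numpy as np
from scipy import stats

probs = [0.1, 0.05, 0.01]
table = stats.t.isf(probs, [[10], [11]])  # shape (2, 3) by broadcasting
row10 = stats.t.isf(probs, 10)            # critical values for df = 10
row11 = stats.t.isf(probs, 11)            # critical values for df = 11
```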

Specific points for discrete distributions

Discrete distributions have mostly the same basic methods as continuous distributions. However, pdf is replaced by the probability mass function pmf, no estimation methods such as fit are available, and scale is not a valid keyword parameter. The location parameter, keyword loc, can still be used to shift the distribution.

The computation of the cdf requires some extra attention. In the case of continuous distributions, the cdf is, in most standard cases, strictly monotonically increasing and hence has a unique inverse. The cdf of a discrete distribution, however, is generally a step function, so the inverse cdf, the percent point function, requires a different definition:

ppf(q) = min{x : cdf(x) >= q, x integer}

For more information see here.

We can look at the example of the hypergeometric distribution:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> hypergeom
<span class="pl-k">>></span><span class="pl-k">></span> [M, n, N] <span class="pl-k">=</span> [<span class="pl-c1">20</span>, <span class="pl-c1">7</span>, <span class="pl-c1">12</span>]

If we use the cdf at some integer points and then evaluate the ppf at those cdf values, we get the initial integers back.

<span class="pl-k">>></span><span class="pl-k">></span> x <span class="pl-k">=</span> np.arange(<span class="pl-c1">4</span>)<span class="pl-k">*</span><span class="pl-c1">2</span>
<span class="pl-k">>></span><span class="pl-k">></span> x
array([<span class="pl-c1">0</span>, <span class="pl-c1">2</span>, <span class="pl-c1">4</span>, <span class="pl-c1">6</span>])
<span class="pl-k">>></span><span class="pl-k">></span> prb <span class="pl-k">=</span> hypergeom.cdf(x, M, n, N)
<span class="pl-k">>></span><span class="pl-k">></span> prb
array([ <span class="pl-c1">0.0001031991744066</span>,  <span class="pl-c1">0.0521155830753351</span>,  <span class="pl-c1">0.6083591331269301</span>,
        <span class="pl-c1">0.9897832817337386</span>])
<span class="pl-k">>></span><span class="pl-k">></span> hypergeom.ppf(prb, M, n, N)
array([ <span class="pl-c1">0</span>.,  <span class="pl-c1">2</span>.,  <span class="pl-c1">4</span>.,  <span class="pl-c1">6</span>.])

If we use values that are not exactly at the kinks of the cdf step function, we get the next higher integer back:

<span class="pl-k">>></span><span class="pl-k">></span> hypergeom.ppf(prb <span class="pl-k">+</span> <span class="pl-c1">1e-8</span>, M, n, N)
array([ <span class="pl-c1">1</span>.,  <span class="pl-c1">3</span>.,  <span class="pl-c1">5</span>.,  <span class="pl-c1">7</span>.])
<span class="pl-k">>></span><span class="pl-k">></span> hypergeom.ppf(prb <span class="pl-k">-</span> <span class="pl-c1">1e-8</span>, M, n, N)
array([ <span class="pl-c1">0</span>.,  <span class="pl-c1">2</span>.,  <span class="pl-c1">4</span>.,  <span class="pl-c1">6</span>.])

Fitting distributions

The main methods for parameter estimation of a non-frozen distribution are:

  • fit: maximum likelihood estimation of distribution parameters, including location and scale
  • fit_loc_scale: estimation of location and scale when shape parameters are given
  • nnlf: negative log likelihood function
  • expect: calculate the expectation of a function against the pdf or pmf
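As a small illustration of fit (the true parameter values and sample size here are arbitrary): drawing a sample from a normal distribution and estimating loc and scale by maximum likelihood should approximately recover the true values.

```python
import numpy as np
from scipy import stats

np.random.seed(0)                            # fix the seed for reproducibility
sample = stats.norm.rvs(loc=5.0, scale=2.0, size=2000)
loc_hat, scale_hat = stats.norm.fit(sample)  # MLE of loc and scale
```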

Performance issues and cautionary remarks

The performance of the individual methods, in terms of speed, varies widely by distribution and method. The results of a method are obtained in one of two ways: either by explicit calculation, or by a generic algorithm that is independent of the specific distribution.

Explicit calculation is generally faster. It requires that analytic formulas be specified directly for the given distribution, either through functions in scipy.special or, for rvs, through functions in numpy.random.

On the other hand, generic methods are used when no explicit calculation is available. To define a distribution, only one of pdf or cdf is necessary; the generic methods obtain everything else through numeric integration and root finding. As an example, rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100) creates random variables in this generic way (drawing 100 values), which took 19 seconds on my computer (translator's note: it took me 3.5 seconds), compared to one second for one million standard normal random variables.

Remaining issues

The distributions in scipy.stats have recently been upgraded and carefully checked, but a few issues remain:

  • The distributions have been tested over some range of parameters; however, in some corner ranges, a few incorrect results may still remain.
  • The maximum likelihood estimation in fit does not work well with default starting parameters for all distributions, and the user needs to supply good starting parameters. Also, for some distributions, maximum likelihood estimation is intrinsically not the best choice.

Building specific distributions

The next examples show how to build your own distributions. Further examples can be found in the sections on distribution usage and statistical tests.

Making a continuous distribution, i.e. subclassing rv_continuous

Making continuous distributions is fairly simple.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy <span class="pl-k">import</span> stats
>>> class deterministic_gen(stats.rv_continuous):
...     def _cdf(self, x):
...         return np.where(x < 0, 0., 1.)
...     def _stats(self):
...         return 0., 0., 0., 0.

>>> deterministic = deterministic_gen(name="deterministic")
>>> deterministic.cdf(np.arange(-3, 3, 0.5))
array([ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.])

Pleasantly, the pdf is now computed automatically as well:

<span class="pl-k">>></span><span class="pl-k">></span> deterministic.pdf(np.arange(<span class="pl-k">-</span><span class="pl-c1">3</span>, <span class="pl-c1">3</span>, <span class="pl-c1">0.5</span>))
array([  <span class="pl-c1">0.00000000e+00</span>,   <span class="pl-c1">0.00000000e+00</span>,   <span class="pl-c1">0.00000000e+00</span>,
         <span class="pl-c1">0.00000000e+00</span>,   <span class="pl-c1">0.00000000e+00</span>,   <span class="pl-c1">0.00000000e+00</span>,
         <span class="pl-c1">5.83333333e+04</span>,   <span class="pl-c1">4.16333634e-12</span>,   <span class="pl-c1">4.16333634e-12</span>,
         <span class="pl-c1">4.16333634e-12</span>,   <span class="pl-c1">4.16333634e-12</span>,   <span class="pl-c1">4.16333634e-12</span>])

Be aware of the performance issues mentioned in the section "Performance issues and cautionary remarks": such generic computation from so little information can be very slow. Moreover, consider the following example regarding accuracy:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.integrate <span class="pl-k">import</span> quad
<span class="pl-k">>></span><span class="pl-k">></span> quad(deterministic.pdf, <span class="pl-k">-</span><span class="pl-c1">1e-1</span>, <span class="pl-c1">1e-1</span>)
(<span class="pl-c1">4.163336342344337e-13</span>, <span class="pl-c1">0.0</span>)

But this is not the correct result: the integral over this pdf should be 1. Let us make the integration interval smaller.

<span class="pl-k">>></span><span class="pl-k">></span> quad(deterministic.pdf, <span class="pl-k">-</span><span class="pl-c1">1e-3</span>, <span class="pl-c1">1e-3</span>)  <span class="pl-c"># warning removed</span>
(<span class="pl-c1">1.000076872229173</span>, <span class="pl-c1">0.0010625571718182458</span>)

This looks better. However, the problem originated from the fact that the pdf is not specified analytically in the class definition.
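When the density is known analytically, defining _pdf directly avoids this pitfall. A toy sketch, assuming the made-up density f(x) = 2x on [0, 1] (the class and variable names are hypothetical); the generic machinery then integrates the pdf to produce the cdf and moments:

```python
from scipy import stats

class toy_gen(stats.rv_continuous):
    """Toy distribution with pdf f(x) = 2x on the support [0, 1]."""
    def _pdf(self, x):
        return 2.0 * x

toy = toy_gen(a=0.0, b=1.0, name='toy')  # a and b set the support bounds
# generic integration of the pdf gives cdf(x) = x**2 and mean = 2/3
cdf_half = toy.cdf(0.5)
mean = toy.mean()
```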

Subclassing rv_discrete

In the following we use stats.rv_discrete to generate a discrete distribution that has the probabilities of a truncated normal on the integers of an interval.

General info

General information can be obtained from the docstring of rv_discrete:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy.stats <span class="pl-k">import</span> rv_discrete
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">help</span>(rv_discrete)

From this, we learn:

"You can construct an arbitrary discrete rv where P(X = xk) = pk by passing a sequence of tuples (xk, pk) to the rv_discrete initialization method (through the values= keyword), but only for values with nonzero probability."

Next to this, there are some further requirements:

  • the keyword name is required
  • the support points xk have to be integers
  • the number of significant decimals needs to be given

In fact, if the last two requirements are not satisfied, an exception may be raised or the resulting numbers may be incorrect.
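A minimal example that satisfies these requirements is a fair six-sided die: integer support points and probabilities summing to one.

```python
import numpy as np
from scipy import stats

xk = np.arange(1, 7)   # integer support points 1..6
pk = np.ones(6) / 6.0  # equal probabilities summing to 1
die = stats.rv_discrete(name='die', values=(xk, pk))

mean = die.mean()      # expectation 3.5
p_le_4 = die.cdf(4)    # P(X <= 4) = 4/6
```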

An example

Let us do the work. First:

<span class="pl-k">>></span><span class="pl-k">></span> npoints <span class="pl-k">=</span> <span class="pl-c1">20</span>   <span class="pl-c"># number of integer support points of the distribution minus 1</span>
<span class="pl-k">>></span><span class="pl-k">></span> npointsh <span class="pl-k">=</span> npoints <span class="pl-k">/</span> <span class="pl-c1">2</span>
<span class="pl-k">>></span><span class="pl-k">></span> npointsf <span class="pl-k">=</span> <span class="pl-c1">float</span>(npoints)
<span class="pl-k">>></span><span class="pl-k">></span> nbound <span class="pl-k">=</span> <span class="pl-c1">4</span>   <span class="pl-c"># bounds for the truncated normal</span>
<span class="pl-k">>></span><span class="pl-k">></span> normbound <span class="pl-k">=</span> (<span class="pl-c1">1</span><span class="pl-k">+</span><span class="pl-c1">1</span><span class="pl-k">/</span>npointsf) <span class="pl-k">*</span> nbound   <span class="pl-c"># actual bounds of truncated normal</span>
<span class="pl-k">>></span><span class="pl-k">></span> grid <span class="pl-k">=</span> np.arange(<span class="pl-k">-</span>npointsh, npointsh<span class="pl-k">+</span><span class="pl-c1">2</span>, <span class="pl-c1">1</span>)   <span class="pl-c"># integer grid</span>
<span class="pl-k">>></span><span class="pl-k">></span> gridlimitsnorm <span class="pl-k">=</span> (grid<span class="pl-k">-</span><span class="pl-c1">0.5</span>) <span class="pl-k">/</span> npointsh <span class="pl-k">*</span> nbound   <span class="pl-c"># bin limits for the truncnorm</span>
<span class="pl-k">>></span><span class="pl-k">></span> gridlimits <span class="pl-k">=</span> grid <span class="pl-k">-</span> <span class="pl-c1">0.5</span>   <span class="pl-c"># used later in the analysis</span>
<span class="pl-k">>></span><span class="pl-k">></span> grid <span class="pl-k">=</span> grid[:<span class="pl-k">-</span><span class="pl-c1">1</span>]
<span class="pl-k">>></span><span class="pl-k">></span> probs <span class="pl-k">=</span> np.diff(stats.truncnorm.cdf(gridlimitsnorm, <span class="pl-k">-</span>normbound, normbound))
<span class="pl-k">>></span><span class="pl-k">></span> gridint <span class="pl-k">=</span> grid

Then we can define the distribution with rv_discrete:

<span class="pl-k">>></span><span class="pl-k">></span> normdiscrete <span class="pl-k">=</span> stats.rv_discrete(<span class="pl-v">values</span><span class="pl-k">=</span>(gridint,
<span class="pl-c1">...</span>              np.round(probs, <span class="pl-v">decimals</span><span class="pl-k">=</span><span class="pl-c1">7</span>)), <span class="pl-v">name</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>normdiscrete<span class="pl-pds">'</span></span>)

Now that we have defined the distribution, we have access to all the common methods of discrete distributions.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>mean = <span class="pl-c1">%6.4f</span>, variance = <span class="pl-c1">%6.4f</span>, skew = <span class="pl-c1">%6.4f</span>, kurtosis = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> \
<span class="pl-c1">...</span>       normdiscrete.stats(<span class="pl-v">moments</span> <span class="pl-k">=</span>  <span class="pl-s"><span class="pl-pds">'</span>mvsk<span class="pl-pds">'</span></span>)
mean <span class="pl-k">=</span> <span class="pl-k">-</span><span class="pl-c1">0.0000</span>, variance <span class="pl-k">=</span> <span class="pl-c1">6.3302</span>, skew <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>, kurtosis <span class="pl-k">=</span> <span class="pl-k">-</span><span class="pl-c1">0.0076</span>

<span class="pl-k">>></span><span class="pl-k">></span> nd_std <span class="pl-k">=</span> np.sqrt(normdiscrete.stats(<span class="pl-v">moments</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>v<span class="pl-pds">'</span></span>))

Testing the implementation

Let us generate a random sample and compare observed frequencies with the probabilities.

<span class="pl-k">>></span><span class="pl-k">></span> n_sample <span class="pl-k">=</span> <span class="pl-c1">500</span>
<span class="pl-k">>></span><span class="pl-k">></span> np.random.seed(<span class="pl-c1">87655678</span>)   <span class="pl-c"># fix the seed for replicability</span>
<span class="pl-k">>></span><span class="pl-k">></span> rvs <span class="pl-k">=</span> normdiscrete.rvs(<span class="pl-v">size</span><span class="pl-k">=</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> rvsnd <span class="pl-k">=</span> rvs
<span class="pl-k">>></span><span class="pl-k">></span> f, l <span class="pl-k">=</span> np.histogram(rvs, <span class="pl-v">bins</span><span class="pl-k">=</span>gridlimits)
<span class="pl-k">>></span><span class="pl-k">></span> sfreq <span class="pl-k">=</span> np.vstack([gridint, f, probs<span class="pl-k">*</span>n_sample]).T
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> sfreq
[[ <span class="pl-k">-</span><span class="pl-c1">1.00000000e+01</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">2.95019349e-02</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">9.00000000e+00</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">1.32294142e-01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">8.00000000e+00</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">5.06497902e-01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">7.00000000e+00</span>   <span class="pl-c1">2.00000000e+00</span>   <span class="pl-c1">1.65568919e+00</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">6.00000000e+00</span>   <span class="pl-c1">1.00000000e+00</span>   <span class="pl-c1">4.62125309e+00</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">5.00000000e+00</span>   <span class="pl-c1">9.00000000e+00</span>   <span class="pl-c1">1.10137298e+01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">4.00000000e+00</span>   <span class="pl-c1">2.60000000e+01</span>   <span class="pl-c1">2.24137683e+01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">3.00000000e+00</span>   <span class="pl-c1">3.70000000e+01</span>   <span class="pl-c1">3.89503370e+01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">2.00000000e+00</span>   <span class="pl-c1">5.10000000e+01</span>   <span class="pl-c1">5.78004747e+01</span>]
 [ <span class="pl-k">-</span><span class="pl-c1">1.00000000e+00</span>   <span class="pl-c1">7.10000000e+01</span>   <span class="pl-c1">7.32455414e+01</span>]
 [  <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">7.40000000e+01</span>   <span class="pl-c1">7.92618251e+01</span>]
 [  <span class="pl-c1">1.00000000e+00</span>   <span class="pl-c1">8.90000000e+01</span>   <span class="pl-c1">7.32455414e+01</span>]
 [  <span class="pl-c1">2.00000000e+00</span>   <span class="pl-c1">5.50000000e+01</span>   <span class="pl-c1">5.78004747e+01</span>]
 [  <span class="pl-c1">3.00000000e+00</span>   <span class="pl-c1">5.00000000e+01</span>   <span class="pl-c1">3.89503370e+01</span>]
 [  <span class="pl-c1">4.00000000e+00</span>   <span class="pl-c1">1.70000000e+01</span>   <span class="pl-c1">2.24137683e+01</span>]
 [  <span class="pl-c1">5.00000000e+00</span>   <span class="pl-c1">1.10000000e+01</span>   <span class="pl-c1">1.10137298e+01</span>]
 [  <span class="pl-c1">6.00000000e+00</span>   <span class="pl-c1">4.00000000e+00</span>   <span class="pl-c1">4.62125309e+00</span>]
 [  <span class="pl-c1">7.00000000e+00</span>   <span class="pl-c1">3.00000000e+00</span>   <span class="pl-c1">1.65568919e+00</span>]
 [  <span class="pl-c1">8.00000000e+00</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">5.06497902e-01</span>]
 [  <span class="pl-c1">9.00000000e+00</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">1.32294142e-01</span>]
 [  <span class="pl-c1">1.00000000e+01</span>   <span class="pl-c1">0.00000000e+00</span>   <span class="pl-c1">2.95019349e-02</span>]]


Next, we can test whether our sample was generated by our normdiscrete distribution. This also verifies whether the random numbers were generated correctly.

The chi-square test requires that there be a minimum number of observations in each bin. We combine the tail bins into larger bins so that they contain enough observations.

<span class="pl-k">>></span><span class="pl-k">></span> f2 <span class="pl-k">=</span> np.hstack([f[:<span class="pl-c1">5</span>].sum(), f[<span class="pl-c1">5</span>:<span class="pl-k">-</span><span class="pl-c1">5</span>], f[<span class="pl-k">-</span><span class="pl-c1">5</span>:].sum()])
<span class="pl-k">>></span><span class="pl-k">></span> p2 <span class="pl-k">=</span> np.hstack([probs[:<span class="pl-c1">5</span>].sum(), probs[<span class="pl-c1">5</span>:<span class="pl-k">-</span><span class="pl-c1">5</span>], probs[<span class="pl-k">-</span><span class="pl-c1">5</span>:].sum()])
<span class="pl-k">>></span><span class="pl-k">></span> ch2, pval <span class="pl-k">=</span> stats.chisquare(f2, p2<span class="pl-k">*</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>chisquare for normdiscrete: chi2 = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (ch2, pval)
chisquare <span class="pl-k">for</span> normdiscrete: chi2 <span class="pl-k">=</span> <span class="pl-c1">12.466</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.4090</span>

The p-value in this case is not significant, so we cannot reject the hypothesis that our random sample really was generated by this distribution.

Analysing one sample

First, we create some random variables. We set a seed so that in each run we get identical results to look at. As an example we take a sample from the Student t distribution:

<span class="pl-k">>></span><span class="pl-k">></span> np.random.seed(<span class="pl-c1">282629734</span>)
<span class="pl-k">>></span><span class="pl-k">></span> x <span class="pl-k">=</span> stats.t.rvs(<span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">1000</span>)

Here, we set the required shape parameter of the t distribution, which in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means that our sample consists of 1000 independently drawn (pseudo) random numbers. Since we did not specify the loc and scale keywords, they take the default values of zero and one.

Descriptive statistics

x is a numpy array, and we have direct access to all of its methods.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> x.max(), x.min()  <span class="pl-c"># equivalent to np.max(x), np.min(x)</span>
<span class="pl-c1">5.26327732981</span> <span class="pl-k">-</span><span class="pl-c1">3.78975572422</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> x.mean(), x.var() <span class="pl-c"># equivalent to np.mean(x), np.var(x)</span>
<span class="pl-c1">0.0140610663985</span> <span class="pl-c1">1.28899386208</span>

How do the sample properties compare to their theoretical counterparts?

<span class="pl-k">>></span><span class="pl-k">></span> m, v, s, k <span class="pl-k">=</span> stats.t.stats(<span class="pl-c1">10</span>, <span class="pl-v">moments</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>mvsk<span class="pl-pds">'</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> n, (smin, smax), sm, sv, ss, sk <span class="pl-k">=</span> stats.describe(x)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>distribution:<span class="pl-pds">'</span></span>,
distribution:
<span class="pl-k">>></span><span class="pl-k">></span> sstr <span class="pl-k">=</span> <span class="pl-s"><span class="pl-pds">'</span>mean = <span class="pl-c1">%6.4f</span>, variance = <span class="pl-c1">%6.4f</span>, skew = <span class="pl-c1">%6.4f</span>, kurtosis = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> sstr <span class="pl-k">%</span>(m, v, s ,k)
mean <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>, variance <span class="pl-k">=</span> <span class="pl-c1">1.2500</span>, skew <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>, kurtosis <span class="pl-k">=</span> <span class="pl-c1">1.0000</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>sample:      <span class="pl-pds">'</span></span>,
sample:
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> sstr <span class="pl-k">%</span>(sm, sv, ss, sk)
mean <span class="pl-k">=</span> <span class="pl-c1">0.0141</span>, variance <span class="pl-k">=</span> <span class="pl-c1">1.2903</span>, skew <span class="pl-k">=</span> <span class="pl-c1">0.2165</span>, kurtosis <span class="pl-k">=</span> <span class="pl-c1">1.0556</span>

Note: stats.describe uses the unbiased estimator of the variance, while np.var uses the biased estimator.
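A minimal illustration of that difference (the data values are arbitrary; Python 3 syntax):

```python
import numpy as np
from scipy import stats

data = np.array([1.0, 2.0, 3.0, 4.0])

biased = np.var(data)            # divides by n:   1.25
unbiased = np.var(data, ddof=1)  # divides by n-1: 5/3 ~ 1.6667
described = stats.describe(data).variance  # matches the unbiased value

print(biased, unbiased, described)
```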

T-test and KS-test

We can use the t-test to check whether the sample mean differs in a statistically significant way from a given value (here, the theoretical mean).

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>t-statistic = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span>  stats.ttest_1samp(x, m)
t<span class="pl-k">-</span>statistic <span class="pl-k">=</span>  <span class="pl-c1">0.391</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.6955</span>

The p-value is 0.7, which means that with a Type I error probability (alpha) of, for example, 10%, we cannot reject the hypothesis that the sample mean is equal to 0, the theoretical mean of the standard t distribution.

<span class="pl-k">>></span><span class="pl-k">></span> tt <span class="pl-k">=</span> (sm<span class="pl-k">-</span>m)<span class="pl-k">/</span>np.sqrt(sv<span class="pl-k">/</span><span class="pl-c1">float</span>(n))  <span class="pl-c"># t-statistic for mean</span>
<span class="pl-k">>></span><span class="pl-k">></span> pval <span class="pl-k">=</span> stats.t.sf(np.abs(tt), n<span class="pl-k">-</span><span class="pl-c1">1</span>)<span class="pl-k">*</span><span class="pl-c1">2</span>  <span class="pl-c"># two-sided pvalue = Prob(abs(t)>tt)</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>t-statistic = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (tt, pval)
t<span class="pl-k">-</span>statistic <span class="pl-k">=</span>  <span class="pl-c1">0.391</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.6955</span>

The Kolmogorov-Smirnov test (KS test) can be used to check whether the sample comes from a standard t distribution:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>KS-statistic D = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.kstest(x, <span class="pl-s"><span class="pl-pds">'</span>t<span class="pl-pds">'</span></span>, (<span class="pl-c1">10</span>,))
<span class="pl-c1">KS</span><span class="pl-k">-</span>statistic D <span class="pl-k">=</span>  <span class="pl-c1">0.016</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.9606</span>

Again the p-value is high enough that we cannot reject the hypothesis that the sample came from the t distribution. In a real application we would not know the underlying distribution. If we run the KS test on our sample against the normal distribution, we also cannot reject the hypothesis that the sample came from a normal distribution; in this case the p-value is about 0.40.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>KS-statistic D = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.kstest(x,<span class="pl-s"><span class="pl-pds">'</span>norm<span class="pl-pds">'</span></span>)
<span class="pl-c1">KS</span><span class="pl-k">-</span>statistic D <span class="pl-k">=</span>  <span class="pl-c1">0.028</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.3949</span>

However, the standard normal distribution has a variance of 1, while our sample has a variance of 1.29. If we standardize our sample and test it against the normal distribution, the p-value is again large enough that we still cannot reject the hypothesis that the sample came from a normal distribution.

<span class="pl-k">>></span><span class="pl-k">></span> d, pval <span class="pl-k">=</span> stats.kstest((x<span class="pl-k">-</span>x.mean())<span class="pl-k">/</span>x.std(), <span class="pl-s"><span class="pl-pds">'</span>norm<span class="pl-pds">'</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>KS-statistic D = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (d, pval)
<span class="pl-c1">KS</span><span class="pl-k">-</span>statistic D <span class="pl-k">=</span>  <span class="pl-c1">0.032</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.2402</span>

Note: the KS test assumes that we test against a distribution with given parameters. Since in the last case we estimated the mean and variance from the sample itself, this assumption is violated, the p-value of the test statistic is biased, and this usage is incorrect.
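A small Monte Carlo sketch of that bias (the repetition count, sample size, and seed are our own choices; Python 3 syntax): when the parameters are estimated from the same sample, the KS test rejects far less often than its nominal level, so its p-values are pushed upwards.

```python
import numpy as np
from scipy import stats

np.random.seed(0)  # arbitrary seed, for reproducibility only
n_rep, n = 200, 100
pvals = np.empty(n_rep)
for i in range(n_rep):
    s = np.random.normal(size=n)
    # Incorrect usage: comparing against a normal distribution whose
    # parameters were estimated from this very sample.
    pvals[i] = stats.kstest(s, 'norm', args=(s.mean(), s.std()))[1]

# A valid 5%-level test would reject about 5% of the time; here the
# rejection rate is far smaller, i.e. the p-values are biased upwards.
print((pvals < 0.05).mean())
```

The Lilliefors test corrects the KS critical values for exactly this situation.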

Tails of the distribution

Finally, we can check the upper tail of the distribution. We can use the percent point function ppf, which is the inverse of the cdf, to obtain the critical values, or, more directly, we can use the inverse of the survival function isf:

<span class="pl-k">>></span><span class="pl-k">></span> crit01, crit05, crit10 <span class="pl-k">=</span> stats.t.ppf([<span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.01</span>, <span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.05</span>, <span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.10</span>], <span class="pl-c1">10</span>)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>critical values from ppf at 1<span class="pl-c1">%%</span>, 5<span class="pl-c1">%%</span> and 10<span class="pl-c1">%%</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> (crit01, crit05, crit10)
critical values <span class="pl-k">from</span> ppf at <span class="pl-c1">1</span><span class="pl-k">%</span>, <span class="pl-c1">5</span><span class="pl-k">%</span> <span class="pl-k">and</span> <span class="pl-c1">10</span><span class="pl-k">%</span>   <span class="pl-c1">2.7638</span>   <span class="pl-c1">1.8125</span>   <span class="pl-c1">1.3722</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>critical values from isf at 1<span class="pl-c1">%%</span>, 5<span class="pl-c1">%%</span> and 10<span class="pl-c1">%%</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> <span class="pl-c1">tuple</span>(stats.t.isf([<span class="pl-c1">0.01</span>,<span class="pl-c1">0.05</span>,<span class="pl-c1">0.10</span>],<span class="pl-c1">10</span>))
critical values <span class="pl-k">from</span> isf at <span class="pl-c1">1</span><span class="pl-k">%</span>, <span class="pl-c1">5</span><span class="pl-k">%</span> <span class="pl-k">and</span> <span class="pl-c1">10</span><span class="pl-k">%</span>   <span class="pl-c1">2.7638</span>   <span class="pl-c1">1.8125</span>   <span class="pl-c1">1.3722</span>

<span class="pl-k">>></span><span class="pl-k">></span> freq01 <span class="pl-k">=</span> np.sum(x<span class="pl-k">></span>crit01) <span class="pl-k">/</span> <span class="pl-c1">float</span>(n) <span class="pl-k">*</span> <span class="pl-c1">100</span>
<span class="pl-k">>></span><span class="pl-k">></span> freq05 <span class="pl-k">=</span> np.sum(x<span class="pl-k">></span>crit05) <span class="pl-k">/</span> <span class="pl-c1">float</span>(n) <span class="pl-k">*</span> <span class="pl-c1">100</span>
<span class="pl-k">>></span><span class="pl-k">></span> freq10 <span class="pl-k">=</span> np.sum(x<span class="pl-k">></span>crit10) <span class="pl-k">/</span> <span class="pl-c1">float</span>(n) <span class="pl-k">*</span> <span class="pl-c1">100</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>sample <span class="pl-c1">%%</span>-frequency at 1<span class="pl-c1">%%</span>, 5<span class="pl-c1">%%</span> and 10<span class="pl-c1">%%</span> tail <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> (freq01, freq05, freq10)
sample <span class="pl-k">%-</span>frequency at <span class="pl-c1">1</span><span class="pl-k">%</span>, <span class="pl-c1">5</span><span class="pl-k">%</span> <span class="pl-k">and</span> <span class="pl-c1">10</span><span class="pl-k">%</span> tail   <span class="pl-c1">1.4000</span>   <span class="pl-c1">5.8000</span>  <span class="pl-c1">10.5000</span>

In all three cases our sample has a heavier tail than the underlying distribution: the empirical frequency to the right of the theoretical cutoff is higher than the theoretical probability. We can get a closer fit by using a larger sample. In the following case the empirical frequency is already quite close to the theoretical probability, but fluctuations of this size remain even if we repeat the process many times:

<span class="pl-k">>></span><span class="pl-k">></span> freq05l <span class="pl-k">=</span> np.sum(stats.t.rvs(<span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">10000</span>) <span class="pl-k">></span> crit05) <span class="pl-k">/</span> <span class="pl-c1">10000.0</span> <span class="pl-k">*</span> <span class="pl-c1">100</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>larger sample <span class="pl-c1">%%</span>-frequency at 5<span class="pl-c1">%%</span> tail <span class="pl-c1">%8.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> freq05l
larger sample <span class="pl-k">%-</span>frequency at <span class="pl-c1">5</span><span class="pl-k">%</span> tail   <span class="pl-c1">4.8000</span>

We can also compare this with the tail of the normal distribution, which is much lighter:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>tail prob. of normal at 1<span class="pl-c1">%%</span>, 5<span class="pl-c1">%%</span> and 10<span class="pl-c1">%%</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span> <span class="pl-c1">%8.4f</span><span class="pl-pds">'</span></span><span class="pl-k">%</span> \
<span class="pl-c1">...</span>       <span class="pl-c1">tuple</span>(stats.norm.sf([crit01, crit05, crit10])<span class="pl-k">*</span><span class="pl-c1">100</span>)
tail prob. of normal at <span class="pl-c1">1</span><span class="pl-k">%</span>, <span class="pl-c1">5</span><span class="pl-k">%</span> <span class="pl-k">and</span> <span class="pl-c1">10</span><span class="pl-k">%</span>   <span class="pl-c1">0.2857</span>   <span class="pl-c1">3.4957</span>   <span class="pl-c1">8.5003</span>

The chi-square test can be used to test whether the observed frequencies in a finite number of bins differ significantly from the probabilities of a hypothesized distribution:

<span class="pl-k">>></span><span class="pl-k">></span> quantiles <span class="pl-k">=</span> [<span class="pl-c1">0.0</span>, <span class="pl-c1">0.01</span>, <span class="pl-c1">0.05</span>, <span class="pl-c1">0.1</span>, <span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.10</span>, <span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.05</span>, <span class="pl-c1">1</span><span class="pl-k">-</span><span class="pl-c1">0.01</span>, <span class="pl-c1">1.0</span>]
<span class="pl-k">>></span><span class="pl-k">></span> crit <span class="pl-k">=</span> stats.t.ppf(quantiles, <span class="pl-c1">10</span>)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> crit
[       <span class="pl-k">-</span>Inf <span class="pl-k">-</span><span class="pl-c1">2.76376946</span> <span class="pl-k">-</span><span class="pl-c1">1.81246112</span> <span class="pl-k">-</span><span class="pl-c1">1.37218364</span>  <span class="pl-c1">1.37218364</span>  <span class="pl-c1">1.81246112</span>
  <span class="pl-c1">2.76376946</span>         Inf]
<span class="pl-k">>></span><span class="pl-k">></span> n_sample <span class="pl-k">=</span> x.size
<span class="pl-k">>></span><span class="pl-k">></span> freqcount <span class="pl-k">=</span> np.histogram(x, <span class="pl-v">bins</span><span class="pl-k">=</span>crit)[<span class="pl-c1">0</span>]
<span class="pl-k">>></span><span class="pl-k">></span> tprob <span class="pl-k">=</span> np.diff(quantiles)
<span class="pl-k">>></span><span class="pl-k">></span> nprob <span class="pl-k">=</span> np.diff(stats.norm.cdf(crit))
<span class="pl-k">>></span><span class="pl-k">></span> tch, tpval <span class="pl-k">=</span> stats.chisquare(freqcount, tprob<span class="pl-k">*</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> nch, npval <span class="pl-k">=</span> stats.chisquare(freqcount, nprob<span class="pl-k">*</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>chisquare for t:      chi2 = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (tch, tpval)
chisquare <span class="pl-k">for</span> t:      chi2 <span class="pl-k">=</span>  <span class="pl-c1">2.300</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.8901</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>chisquare for normal: chi2 = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (nch, npval)
chisquare <span class="pl-k">for</span> normal: chi2 <span class="pl-k">=</span> <span class="pl-c1">64.605</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>

We see that the standard normal distribution is clearly rejected, while the t distribution cannot be rejected. Since our sample distinguishes between the two distributions, we can fit the scale and location first and then check the goodness of fit of the fitted distributions.

That is, we can fit first and then run the test against the fitted distributions instead of the default ones (whose location and scale, at least, are the defaults):

<span class="pl-k">>></span><span class="pl-k">></span> tdof, tloc, tscale <span class="pl-k">=</span> stats.t.fit(x)
<span class="pl-k">>></span><span class="pl-k">></span> nloc, nscale <span class="pl-k">=</span> stats.norm.fit(x)
<span class="pl-k">>></span><span class="pl-k">></span> tprob <span class="pl-k">=</span> np.diff(stats.t.cdf(crit, tdof, <span class="pl-v">loc</span><span class="pl-k">=</span>tloc, <span class="pl-v">scale</span><span class="pl-k">=</span>tscale))
<span class="pl-k">>></span><span class="pl-k">></span> nprob <span class="pl-k">=</span> np.diff(stats.norm.cdf(crit, <span class="pl-v">loc</span><span class="pl-k">=</span>nloc, <span class="pl-v">scale</span><span class="pl-k">=</span>nscale))
<span class="pl-k">>></span><span class="pl-k">></span> tch, tpval <span class="pl-k">=</span> stats.chisquare(freqcount, tprob<span class="pl-k">*</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> nch, npval <span class="pl-k">=</span> stats.chisquare(freqcount, nprob<span class="pl-k">*</span>n_sample)
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>chisquare for t:      chi2 = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (tch, tpval)
chisquare <span class="pl-k">for</span> t:      chi2 <span class="pl-k">=</span>  <span class="pl-c1">1.577</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.9542</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>chisquare for normal: chi2 = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> (nch, npval)
chisquare <span class="pl-k">for</span> normal: chi2 <span class="pl-k">=</span> <span class="pl-c1">11.084</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0858</span>

After adjusting the scale and location, we can still reject the normality hypothesis at the 5% level, whereas the t distribution clearly cannot be rejected, with a p-value of about 0.95.

Special tests for normal distributions

Since the normal distribution is the most common distribution in statistics, a number of tests are available to check whether a sample can be assumed to come from a normal distribution.

First we test whether the skew and kurtosis of our sample differ significantly from those of a normal distribution:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normal skewtest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.skewtest(x)
normal skewtest teststat <span class="pl-k">=</span>  <span class="pl-c1">2.785</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0054</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normal kurtosistest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.kurtosistest(x)
normal kurtosistest teststat <span class="pl-k">=</span>  <span class="pl-c1">4.757</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>

A normality test that combines these two tests:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normaltest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.normaltest(x)
normaltest teststat <span class="pl-k">=</span> <span class="pl-c1">30.379</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>

In all three tests the p-values are very low, so we can reject the hypothesis that our sample has the skew and kurtosis of a normal distribution.

We get identical results when the sample is standardized:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normaltest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> \
<span class="pl-c1">...</span>                      stats.normaltest((x<span class="pl-k">-</span>x.mean())<span class="pl-k">/</span>x.std())
normaltest teststat <span class="pl-k">=</span> <span class="pl-c1">30.379</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0000</span>

Since normality is rejected so strongly, we can check whether the test gives sensible results in other cases:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normaltest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.normaltest(stats.t.rvs(<span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">100</span>))
normaltest teststat <span class="pl-k">=</span>  <span class="pl-c1">4.698</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.0955</span>
<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-c1">print</span> <span class="pl-s"><span class="pl-pds">'</span>normaltest teststat = <span class="pl-c1">%6.3f</span> pvalue = <span class="pl-c1">%6.4f</span><span class="pl-pds">'</span></span> <span class="pl-k">%</span> stats.normaltest(stats.norm.rvs(<span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">1000</span>))
normaltest teststat <span class="pl-k">=</span>  <span class="pl-c1">0.613</span> pvalue <span class="pl-k">=</span> <span class="pl-c1">0.7361</span>

We checked a small sample drawn from the t distribution and a large sample drawn from the normal distribution; in neither case can we reject the null hypothesis that the sample came from a normal distribution. In the first case this is because the test is not powerful enough to distinguish a t-distributed from a normally distributed random variable in a small sample, and in the second case because the sample really does come from a normal distribution.

Comparing two samples

In the following, we are given two samples, which may come either from the same or from different distributions, and we want to test whether they share statistical properties.

Means

Testing two samples generated with identical means:

<span class="pl-k">>></span><span class="pl-k">></span> rvs1 <span class="pl-k">=</span> stats.norm.rvs(<span class="pl-v">loc</span><span class="pl-k">=</span><span class="pl-c1">5</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">500</span>)
<span class="pl-k">>></span><span class="pl-k">></span> rvs2 <span class="pl-k">=</span> stats.norm.rvs(<span class="pl-v">loc</span><span class="pl-k">=</span><span class="pl-c1">5</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">500</span>)
<span class="pl-k">>></span><span class="pl-k">></span> stats.ttest_ind(rvs1, rvs2)
(<span class="pl-k">-</span><span class="pl-c1">0.54890361750888583</span>, <span class="pl-c1">0.5831943748663857</span>)

Testing two samples generated with different means:

<span class="pl-k">>></span><span class="pl-k">></span> rvs3 <span class="pl-k">=</span> stats.norm.rvs(<span class="pl-v">loc</span><span class="pl-k">=</span><span class="pl-c1">8</span>, <span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">10</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">500</span>)
<span class="pl-k">>></span><span class="pl-k">></span> stats.ttest_ind(rvs1, rvs3)
(<span class="pl-k">-</span><span class="pl-c1">4.5334142901750321</span>, <span class="pl-c1">6.507128186505895e-006</span>)

Kolmogorov-Smirnov test for two samples

In this example we use two samples drawn from the same distribution. Since the p-value is high, it is not surprising that we cannot reject the null hypothesis:

<span class="pl-k">>></span><span class="pl-k">></span> stats.ks_2samp(rvs1, rvs2)
(<span class="pl-c1">0.025999999999999995</span>, <span class="pl-c1">0.99541195173064878</span>)

In the second example, with different means, we can reject the null hypothesis, since the p-value is below 1%:

<span class="pl-k">>></span><span class="pl-k">></span> stats.ks_2samp(rvs1, rvs3)
(<span class="pl-c1">0.11399999999999999</span>, <span class="pl-c1">0.0027132103661283141</span>)

Kernel density estimation

A common statistical task is to estimate the probability density function (PDF) of a random variable from a sample. This is called density estimation, and the best-known tool for it is the histogram. A histogram is a useful visualization tool (mainly because everyone understands it), but it does not make very efficient use of the available data.
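For reference, a normalized histogram (the bin count and sample below are arbitrary choices; Python 3 syntax) already gives a crude density estimate:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility only
sample = np.random.normal(size=1000)

# density=True scales the bar heights so that the histogram integrates
# to 1, which makes it directly comparable to a PDF.
heights, edges = np.histogram(sample, bins=20, density=True)
area = float((heights * np.diff(edges)).sum())
print(round(area, 6))  # → 1.0
```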

Kernel density estimation (KDE) is a more efficient tool for this task. The gaussian_kde estimator can be used to estimate the PDF of univariate as well as multivariate data. It works best when the data is unimodal, but it can also be used with multimodal data.

Univariate estimation

We start with a minimal amount of data, to see how gaussian_kde works and what the different options for bandwidth selection do. The data points sampled from the PDF are shown as blue dashes at the bottom of the figure (this is called a rug plot):

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> scipy <span class="pl-k">import</span> stats
>>> import matplotlib.pyplot as plt

>>> x1 = np.array([-7, -5, 1, 4, 5], dtype=float)
>>> kde1 = stats.gaussian_kde(x1)
>>> kde2 = stats.gaussian_kde(x1, bw_method='silverman')

>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)

>>> ax.plot(x1, np.zeros(x1.shape), 'b+', ms=20)  # rug plot
>>> x_eval = np.linspace(-10, 10, num=200)
>>> ax.plot(x_eval, kde1(x_eval), 'k-', label="Scott's Rule")
>>> ax.plot(x_eval, kde2(x_eval), 'r-', label="Silverman's Rule")

>>> plt.show()


We see that there is very little difference between Scott's Rule and Silverman's Rule, and that with this limited amount of data the selected bandwidths are probably a bit too wide. We can define our own bandwidth function to obtain a less smoothed result:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">def</span> <span class="pl-en">my_kde_bandwidth</span>(<span class="pl-smi">obj</span>, <span class="pl-smi">fac</span><span class="pl-k">=</span><span class="pl-c1">1</span>.<span class="pl-k">/</span><span class="pl-c1">5</span>):
<span class="pl-c1">...</span>     <span class="pl-s"><span class="pl-pds">"""</span>We use Scott's Rule, multiplied by a constant factor.<span class="pl-pds">"""</span></span>
<span class="pl-c1">...</span>     <span class="pl-k">return</span> np.power(obj.n, <span class="pl-k">-</span><span class="pl-c1">1</span>.<span class="pl-k">/</span>(obj.d<span class="pl-k">+</span><span class="pl-c1">4</span>)) <span class="pl-k">*</span> fac

<span class="pl-k">>></span><span class="pl-k">></span> fig <span class="pl-k">=</span> plt.figure()
<span class="pl-k">>></span><span class="pl-k">></span> ax <span class="pl-k">=</span> fig.add_subplot(<span class="pl-c1">111</span>)

<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x1, np.zeros(x1.shape), <span class="pl-s"><span class="pl-pds">'</span>b+<span class="pl-pds">'</span></span>, <span class="pl-v">ms</span><span class="pl-k">=</span><span class="pl-c1">20</span>)  <span class="pl-c"># rug plot</span>
<span class="pl-k">>></span><span class="pl-k">></span> kde3 <span class="pl-k">=</span> stats.gaussian_kde(x1, <span class="pl-v">bw_method</span><span class="pl-k">=</span>my_kde_bandwidth)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, kde3(x_eval), <span class="pl-s"><span class="pl-pds">'</span>g-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>With smaller BW<span class="pl-pds">"</span></span>)

<span class="pl-k">>></span><span class="pl-k">></span> plt.show()


We see that if we set the bandwidth to be very narrow, the resulting PDF estimate degenerates into a simple sum of Gaussians centered on the data points.
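That "sum of Gaussians" view can be checked directly (the grid and bandwidth factor below are our own choices; Python 3 syntax): a gaussian_kde evaluated on a grid equals the mean of normal PDFs centered on the data points, with kernel width equal to the bandwidth factor times the sample standard deviation.

```python
import numpy as np
from scipy import stats

x1 = np.array([-7.0, -5.0, 1.0, 4.0, 5.0])  # same toy data as above
kde = stats.gaussian_kde(x1, bw_method=0.2)  # a scalar bw_method is used as the factor

h = kde.factor * x1.std(ddof=1)  # kernel standard deviation
grid = np.linspace(-10, 10, 50)
# Average of one Gaussian bump per data point:
manual = np.mean([stats.norm.pdf(grid, loc=xi, scale=h) for xi in x1], axis=0)

print(np.allclose(kde(grid), manual))  # → True
```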

We now take a more realistic example and look at the difference between the two bandwidth selection rules. These rules are known to work well for (close to) normal distributions, but they also work reasonably well for unimodal distributions that are quite far from normal. As a non-normal distribution we take a t distribution with 5 degrees of freedom.

<span class="pl-k">import</span> numpy <span class="pl-k">as</span> np
<span class="pl-k">import</span> matplotlib.pyplot <span class="pl-k">as</span> plt
<span class="pl-k">from</span> scipy <span class="pl-k">import</span> stats


np.random.seed(<span class="pl-c1">12456</span>)
x1 <span class="pl-k">=</span> np.random.normal(<span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">200</span>)  <span class="pl-c"># random data, normal distribution</span>
xs <span class="pl-k">=</span> np.linspace(x1.min()<span class="pl-k">-</span><span class="pl-c1">1</span>, x1.max()<span class="pl-k">+</span><span class="pl-c1">1</span>, <span class="pl-c1">200</span>)

kde1 <span class="pl-k">=</span> stats.gaussian_kde(x1)
kde2 <span class="pl-k">=</span> stats.gaussian_kde(x1, <span class="pl-v">bw_method</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>silverman<span class="pl-pds">'</span></span>)

fig <span class="pl-k">=</span> plt.figure(<span class="pl-v">figsize</span><span class="pl-k">=</span>(<span class="pl-c1">8</span>, <span class="pl-c1">6</span>))

ax1 <span class="pl-k">=</span> fig.add_subplot(<span class="pl-c1">211</span>)
ax1.plot(x1, np.zeros(x1.shape), <span class="pl-s"><span class="pl-pds">'</span>b+<span class="pl-pds">'</span></span>, <span class="pl-v">ms</span><span class="pl-k">=</span><span class="pl-c1">12</span>)  <span class="pl-c"># rug plot</span>
ax1.plot(xs, kde1(xs), <span class="pl-s"><span class="pl-pds">'</span>k-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Scott's Rule<span class="pl-pds">"</span></span>)
ax1.plot(xs, kde2(xs), <span class="pl-s"><span class="pl-pds">'</span>b-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Silverman's Rule<span class="pl-pds">"</span></span>)
ax1.plot(xs, stats.norm.pdf(xs), <span class="pl-s"><span class="pl-pds">'</span>r--<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>True PDF<span class="pl-pds">"</span></span>)

ax1.set_xlabel(<span class="pl-s"><span class="pl-pds">'</span>x<span class="pl-pds">'</span></span>)
ax1.set_ylabel(<span class="pl-s"><span class="pl-pds">'</span>Density<span class="pl-pds">'</span></span>)
ax1.set_title(<span class="pl-s"><span class="pl-pds">"</span>Normal (top) and Student's T$_{df=5}$ (bottom) distributions<span class="pl-pds">"</span></span>)
ax1.legend(<span class="pl-v">loc</span><span class="pl-k">=</span><span class="pl-c1">1</span>)

x2 <span class="pl-k">=</span> stats.t.rvs(<span class="pl-c1">5</span>, <span class="pl-v">size</span><span class="pl-k">=</span><span class="pl-c1">200</span>)  <span class="pl-c"># random data, T distribution</span>
xs <span class="pl-k">=</span> np.linspace(x2.min() <span class="pl-k">-</span> <span class="pl-c1">1</span>, x2.max() <span class="pl-k">+</span> <span class="pl-c1">1</span>, <span class="pl-c1">200</span>)

kde3 <span class="pl-k">=</span> stats.gaussian_kde(x2)
kde4 <span class="pl-k">=</span> stats.gaussian_kde(x2, <span class="pl-v">bw_method</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>silverman<span class="pl-pds">'</span></span>)

ax2 <span class="pl-k">=</span> fig.add_subplot(<span class="pl-c1">212</span>)
ax2.plot(x2, np.zeros(x2.shape), <span class="pl-s"><span class="pl-pds">'</span>b+<span class="pl-pds">'</span></span>, <span class="pl-v">ms</span><span class="pl-k">=</span><span class="pl-c1">12</span>)  <span class="pl-c"># rug plot</span>
ax2.plot(xs, kde3(xs), <span class="pl-s"><span class="pl-pds">'</span>k-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Scott's Rule<span class="pl-pds">"</span></span>)
ax2.plot(xs, kde4(xs), <span class="pl-s"><span class="pl-pds">'</span>b-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Silverman's Rule<span class="pl-pds">"</span></span>)
ax2.plot(xs, stats.t.pdf(xs, <span class="pl-c1">5</span>), <span class="pl-s"><span class="pl-pds">'</span>r--<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>True PDF<span class="pl-pds">"</span></span>)

ax2.set_xlabel(<span class="pl-s"><span class="pl-pds">'</span>x<span class="pl-pds">'</span></span>)
ax2.set_ylabel(<span class="pl-s"><span class="pl-pds">'</span>Density<span class="pl-pds">'</span></span>)

plt.show()


Next we look at a bimodal distribution with one wider and one narrower Gaussian component. We expect that this will be harder to approximate closely, because each mode requires a different bandwidth to be fitted well:

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">from</span> functools <span class="pl-k">import</span> partial

<span class="pl-k">>></span><span class="pl-k">></span> loc1, scale1, size1 <span class="pl-k">=</span> (<span class="pl-k">-</span><span class="pl-c1">2</span>, <span class="pl-c1">1</span>, <span class="pl-c1">175</span>)
<span class="pl-k">>></span><span class="pl-k">></span> loc2, scale2, size2 <span class="pl-k">=</span> (<span class="pl-c1">2</span>, <span class="pl-c1">0.2</span>, <span class="pl-c1">50</span>)
<span class="pl-k">>></span><span class="pl-k">></span> x2 <span class="pl-k">=</span> np.concatenate([np.random.normal(<span class="pl-v">loc</span><span class="pl-k">=</span>loc1, <span class="pl-v">scale</span><span class="pl-k">=</span>scale1, <span class="pl-v">size</span><span class="pl-k">=</span>size1),
<span class="pl-c1">...</span>                      np.random.normal(<span class="pl-v">loc</span><span class="pl-k">=</span>loc2, <span class="pl-v">scale</span><span class="pl-k">=</span>scale2, <span class="pl-v">size</span><span class="pl-k">=</span>size2)])

<span class="pl-k">>></span><span class="pl-k">></span> x_eval <span class="pl-k">=</span> np.linspace(x2.min() <span class="pl-k">-</span> <span class="pl-c1">1</span>, x2.max() <span class="pl-k">+</span> <span class="pl-c1">1</span>, <span class="pl-c1">500</span>)

<span class="pl-k">>></span><span class="pl-k">></span> kde <span class="pl-k">=</span> stats.gaussian_kde(x2)
<span class="pl-k">>></span><span class="pl-k">></span> kde2 <span class="pl-k">=</span> stats.gaussian_kde(x2, <span class="pl-v">bw_method</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>silverman<span class="pl-pds">'</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> kde3 <span class="pl-k">=</span> stats.gaussian_kde(x2, <span class="pl-v">bw_method</span><span class="pl-k">=</span>partial(my_kde_bandwidth, <span class="pl-v">fac</span><span class="pl-k">=</span><span class="pl-c1">0.2</span>))
<span class="pl-k">>></span><span class="pl-k">></span> kde4 <span class="pl-k">=</span> stats.gaussian_kde(x2, <span class="pl-v">bw_method</span><span class="pl-k">=</span>partial(my_kde_bandwidth, <span class="pl-v">fac</span><span class="pl-k">=</span><span class="pl-c1">0.5</span>))

<span class="pl-k">>></span><span class="pl-k">></span> pdf <span class="pl-k">=</span> stats.norm.pdf
<span class="pl-k">>></span><span class="pl-k">></span> bimodal_pdf <span class="pl-k">=</span> pdf(x_eval, <span class="pl-v">loc</span><span class="pl-k">=</span>loc1, <span class="pl-v">scale</span><span class="pl-k">=</span>scale1) <span class="pl-k">*</span> <span class="pl-c1">float</span>(size1) <span class="pl-k">/</span> x2.size <span class="pl-k">+</span> \
<span class="pl-c1">...</span>               pdf(x_eval, <span class="pl-v">loc</span><span class="pl-k">=</span>loc2, <span class="pl-v">scale</span><span class="pl-k">=</span>scale2) <span class="pl-k">*</span> <span class="pl-c1">float</span>(size2) <span class="pl-k">/</span> x2.size

<span class="pl-k">>></span><span class="pl-k">></span> fig <span class="pl-k">=</span> plt.figure(<span class="pl-v">figsize</span><span class="pl-k">=</span>(<span class="pl-c1">8</span>, <span class="pl-c1">6</span>))
<span class="pl-k">>></span><span class="pl-k">></span> ax <span class="pl-k">=</span> fig.add_subplot(<span class="pl-c1">111</span>)

<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x2, np.zeros(x2.shape), <span class="pl-s"><span class="pl-pds">'</span>b+<span class="pl-pds">'</span></span>, <span class="pl-v">ms</span><span class="pl-k">=</span><span class="pl-c1">12</span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, kde(x_eval), <span class="pl-s"><span class="pl-pds">'</span>k-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Scott's Rule<span class="pl-pds">"</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, kde2(x_eval), <span class="pl-s"><span class="pl-pds">'</span>b-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Silverman's Rule<span class="pl-pds">"</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, kde3(x_eval), <span class="pl-s"><span class="pl-pds">'</span>g-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Scott * 0.2<span class="pl-pds">"</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, kde4(x_eval), <span class="pl-s"><span class="pl-pds">'</span>c-<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Scott * 0.5<span class="pl-pds">"</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(x_eval, bimodal_pdf, <span class="pl-s"><span class="pl-pds">'</span>r--<span class="pl-pds">'</span></span>, <span class="pl-v">label</span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">"</span>Actual PDF<span class="pl-pds">"</span></span>)

<span class="pl-k">>></span><span class="pl-k">></span> ax.set_xlim([x_eval.min(), x_eval.max()])
<span class="pl-k">>></span><span class="pl-k">></span> ax.legend(<span class="pl-v">loc</span><span class="pl-k">=</span><span class="pl-c1">2</span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.set_xlabel(<span class="pl-s"><span class="pl-pds">'</span>x<span class="pl-pds">'</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> ax.set_ylabel(<span class="pl-s"><span class="pl-pds">'</span>Density<span class="pl-pds">'</span></span>)
<span class="pl-k">>></span><span class="pl-k">></span> plt.show()

[Figure: KDE estimates with Scott's, Silverman's, and scaled Scott bandwidths, compared against the actual bimodal PDF]

As expected, the KDE is not as close to the true PDF as we would like, because the two features of the bimodal distribution have different characteristic sizes. By halving the default bandwidth (Scott * 0.5) we can do somewhat better, while shrinking the bandwidth much further under-smooths the estimate. What we really need here is a non-uniform (adaptive) bandwidth.
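To make the bandwidth comparison concrete, the mismatch can be quantified numerically. The sketch below is not part of the original tutorial: it recomputes the integrated squared error between each KDE and the true mixture PDF, and the values `loc1, scale1, size1 = (-2, 1, 175)` are assumed to match the first half of this example.

```python
import numpy as np
from scipy import stats

np.random.seed(12456)  # fixed seed for reproducibility

loc1, scale1, size1 = (-2, 1, 175)  # assumed values from the first half of the example
loc2, scale2, size2 = (2, 0.2, 50)
x2 = np.concatenate([np.random.normal(loc=loc1, scale=scale1, size=size1),
                     np.random.normal(loc=loc2, scale=scale2, size=size2)])
x_eval = np.linspace(x2.min() - 1, x2.max() + 1, 500)

# Mixture PDF, weighted by the number of samples drawn from each component.
true_pdf = (stats.norm.pdf(x_eval, loc1, scale1) * float(size1) / x2.size +
            stats.norm.pdf(x_eval, loc2, scale2) * float(size2) / x2.size)

scott_factor = stats.gaussian_kde(x2).factor  # Scott's-rule bandwidth factor
dx = x_eval[1] - x_eval[0]
for name, bw in [("Scott", None),
                 ("Silverman", "silverman"),
                 ("Scott * 0.5", 0.5 * scott_factor)]:
    kde = stats.gaussian_kde(x2, bw_method=bw)
    # Integrated squared error, approximated on the evaluation grid.
    ise = np.sum((kde(x_eval) - true_pdf) ** 2) * dx
    print("%-12s ISE = %.4f" % (name, ise))
```

Passing a scalar as `bw_method` sets the bandwidth factor directly, which is how the halved Scott bandwidth is expressed here.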

Multivariate estimation

With gaussian_kde we can perform multivariate estimation just as we did in the univariate case. Here we work through the bivariate case; first we generate some random two-dimensional data.

<span class="pl-k">>></span><span class="pl-k">></span> <span class="pl-k">def</span> <span class="pl-en">measure</span>(<span class="pl-smi">n</span>):
<span class="pl-c1">...</span>     <span class="pl-s"><span class="pl-pds">"""</span>Measurement model, return two coupled measurements.<span class="pl-pds">"""</span></span>
<span class="pl-c1">...</span>     m1 <span class="pl-k">=</span> np.random.normal(<span class="pl-v">size</span><span class="pl-k">=</span>n)
<span class="pl-c1">...</span>     m2 <span class="pl-k">=</span> np.random.normal(<span class="pl-v">scale</span><span class="pl-k">=</span><span class="pl-c1">0.5</span>, <span class="pl-v">size</span><span class="pl-k">=</span>n)
<span class="pl-c1">...</span>     <span class="pl-k">return</span> m1<span class="pl-k">+</span>m2, m1<span class="pl-k">-</span>m2

<span class="pl-k">>></span><span class="pl-k">></span> m1, m2 <span class="pl-k">=</span> measure(<span class="pl-c1">2000</span>)
<span class="pl-k">>></span><span class="pl-k">></span> xmin <span class="pl-k">=</span> m1.min()
<span class="pl-k">>></span><span class="pl-k">></span> xmax <span class="pl-k">=</span> m1.max()
<span class="pl-k">>></span><span class="pl-k">></span> ymin <span class="pl-k">=</span> m2.min()
<span class="pl-k">>></span><span class="pl-k">></span> ymax <span class="pl-k">=</span> m2.max()
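The two measurements are "coupled" because they share the common component m1: Cov(m1+m2, m1-m2) = Var(m1) - Var(m2) = 1 - 0.25 = 0.75, and each measurement has variance 1.25, giving a theoretical correlation of 0.75/1.25 = 0.6. A quick sketch (with a fixed seed for reproducibility, not part of the original tutorial) confirms this:

```python
import numpy as np

np.random.seed(12345)  # fixed seed so the check is reproducible

def measure(n):
    """Measurement model, return two coupled measurements."""
    m1 = np.random.normal(size=n)
    m2 = np.random.normal(scale=0.5, size=n)
    return m1 + m2, m1 - m2

a, b = measure(2000)
r = np.corrcoef(a, b)[0, 1]  # sample correlation of the two measurements
print(round(r, 3))           # close to the theoretical value 0.6
```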

Then we apply the KDE to the data:

<span class="pl-k">>></span><span class="pl-k">></span> X, Y <span class="pl-k">=</span> np.mgrid[xmin:xmax:<span class="pl-c1">100<span class="pl-k">j</span></span>, ymin:ymax:<span class="pl-c1">100<span class="pl-k">j</span></span>]
<span class="pl-k">>></span><span class="pl-k">></span> positions <span class="pl-k">=</span> np.vstack([X.ravel(), Y.ravel()])
<span class="pl-k">>></span><span class="pl-k">></span> values <span class="pl-k">=</span> np.vstack([m1, m2])
<span class="pl-k">>></span><span class="pl-k">></span> kernel <span class="pl-k">=</span> stats.gaussian_kde(values)
<span class="pl-k">>></span><span class="pl-k">></span> Z <span class="pl-k">=</span> np.reshape(kernel.evaluate(positions).T, X.shape)
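As a sanity check (a sketch, not in the original tutorial), the estimated density should integrate to one. gaussian_kde.integrate_box evaluates the integral of the KDE over a rectangular region, so integrating over a box that comfortably contains the data should give a value very close to 1:

```python
import numpy as np
from scipy import stats

np.random.seed(12345)  # fixed seed for reproducibility

def measure(n):
    """Measurement model, return two coupled measurements."""
    m1 = np.random.normal(size=n)
    m2 = np.random.normal(scale=0.5, size=n)
    return m1 + m2, m1 - m2

m1, m2 = measure(2000)
kernel = stats.gaussian_kde(np.vstack([m1, m2]))

# Probability mass inside a box far wider than the data range.
mass = kernel.integrate_box([-10, -10], [10, 10])
print(mass)  # close to 1
```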

Finally, we display the estimated bivariate distribution as a colormap and draw the individual data points on top.

<span class="pl-k">>></span><span class="pl-k">></span> fig <span class="pl-k">=</span> plt.figure(<span class="pl-v">figsize</span><span class="pl-k">=</span>(<span class="pl-c1">8</span>, <span class="pl-c1">6</span>))
<span class="pl-k">>></span><span class="pl-k">></span> ax <span class="pl-k">=</span> fig.add_subplot(<span class="pl-c1">111</span>)

<span class="pl-k">>></span><span class="pl-k">></span> ax.imshow(np.rot90(Z), <span class="pl-v">cmap</span><span class="pl-k">=</span>plt.cm.gist_earth_r,
<span class="pl-c1">...</span>           <span class="pl-v">extent</span><span class="pl-k">=</span>[xmin, xmax, ymin, ymax])
<span class="pl-k">>></span><span class="pl-k">></span> ax.plot(m1, m2, <span class="pl-s"><span class="pl-pds">'</span>k.<span class="pl-pds">'</span></span>, <span class="pl-v">markersize</span><span class="pl-k">=</span><span class="pl-c1">2</span>)

<span class="pl-k">>></span><span class="pl-k">></span> ax.set_xlim([xmin, xmax])
<span class="pl-k">>></span><span class="pl-k">></span> ax.set_ylim([ymin, ymax])

<span class="pl-k">>></span><span class="pl-k">></span> plt.show()

[Figure: colormap of the estimated 2-D density with the data points overlaid]
