Tag Archives: mathematics

The proof of Chebyshev's inequality

Details are as follows: http://makercradle.com/2017/%E5%88%87%E6%AF%94%E9%9B%AA%E5%A4%AB%E4%B8%8D%E7%AD%89%E5%BC%8F%E8%AF%81%E6%98%8E/

It is known that $X$ is a continuous random variable with

$E(X)=\mu, \quad D(X)=\delta^2$

and a real number

$\varepsilon > 0$

Prove that:

$P(|X-\mu| \ge \varepsilon) \le \dfrac{\delta^2}{\varepsilon^2}$

Proof:

Because

$\delta^2 = D(X) = \int_{-\infty}^{+\infty} (t-\mu)^2 f_X(t)\,dt$

$\ge \int_{-\infty}^{\mu-\varepsilon} (t-\mu)^2 f_X(t)\,dt + \int_{\mu+\varepsilon}^{+\infty} (t-\mu)^2 f_X(t)\,dt$

$\ge \int_{-\infty}^{\mu-\varepsilon} \varepsilon^2 f_X(t)\,dt + \int_{\mu+\varepsilon}^{+\infty} \varepsilon^2 f_X(t)\,dt$

due to:

$t \le \mu-\varepsilon \;\Rightarrow\; \varepsilon \le |t-\mu| \;\Rightarrow\; \varepsilon^2 \le (t-\mu)^2$

and, symmetrically, $t \ge \mu+\varepsilon \;\Rightarrow\; \varepsilon^2 \le (t-\mu)^2$.

So we have

$= \varepsilon^2 \left( \int_{-\infty}^{\mu-\varepsilon} f_X(t)\,dt + \int_{\mu+\varepsilon}^{+\infty} f_X(t)\,dt \right)$

$= \varepsilon^2\, P(X \le \mu-\varepsilon \ \text{or}\ X \ge \mu+\varepsilon)$

$= \varepsilon^2\, P(|X-\mu| \ge \varepsilon)$

Therefore:

$\delta^2 \ge \varepsilon^2\, P(|X-\mu| \ge \varepsilon)$

and the inequality holds. It's true!
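As a quick sanity check (my own sketch, not part of the original proof; it assumes NumPy is available), we can sample a standard normal, for which $\mu = 0$ and $\delta^2 = 1$, and compare the empirical tail probability with the bound $\delta^2/\varepsilon^2$:

 import numpy as np

 # Sketch: empirically check Chebyshev's inequality for a standard normal,
 # where mu = 0 and delta^2 = 1.
 rng = np.random.default_rng(0)
 x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
 mu, var = 0.0, 1.0

 for eps in (1.0, 1.5, 2.0, 3.0):
     empirical = np.mean(np.abs(x - mu) >= eps)   # estimate of P(|X - mu| >= eps)
     bound = var / eps ** 2                       # delta^2 / eps^2
     print(f"eps={eps}: {empirical:.4f} <= {bound:.4f}")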

Greek letters in LaTeX

We first met the Greek alphabet in primary school, but I still have to rely on its pronunciation to remember it. In university-level mathematical analysis in particular, Greek letters are everywhere, and many classical formulas are written with them. They have naturally become indispensable symbols in mathematics, turning complex material into something clear, easy to understand, and approachable.

So why talk about the Greek alphabet today? Because I needed this letter the day before yesterday when writing LaTeX:

ε

What I found on Baidu Encyclopedia, however, was

ϵ

which is not the symbol I wanted, and my annoyance with Baidu instantly grew several times over. I found the correct way to write it on Google, along with the other commonly used Greek letters. While I was at it, I also noted the upper- and lower-case forms of each letter. Think about which ones you will use frequently and write them down for later; doing this homework makes things easier down the road. Enjoy!


The usage of Greek letters in LaTeX

In LaTeX, a Greek letter is written as a formula: between the $ signs, use a backslash followed by the English name of the letter.

Greek letters in LaTeX form

For ease of understanding, here is how a Greek letter is written in code:

$\epsilon$

Result:

ϵ
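For example (a formula I made up just to show several letters together), Greek letters combine with the rest of LaTeX math notation in the usual way:

 $\delta^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2, \qquad \varepsilon > 0$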


Greek alphabet

| Lowercase | Uppercase | LaTeX |
|-----------|-----------|-------|
| α | A | \alpha, A |
| β | B | \beta, B |
| γ | Γ | \gamma, \Gamma |
| δ | Δ | \delta, \Delta |
| ϵ, ε | E | \epsilon, \varepsilon, E |
| ζ | Z | \zeta, Z |
| η | H | \eta, H |
| θ, ϑ | Θ | \theta, \vartheta, \Theta |
| ι | I | \iota, I |
| κ | K | \kappa, K |
| λ | Λ | \lambda, \Lambda |
| μ | M | \mu, M |
| ν | N | \nu, N |
| ξ | Ξ | \xi, \Xi |
| o | O | o, O |
| π | Π | \pi, \Pi |
| ρ, ϱ | P | \rho, \varrho, P |
| σ | Σ | \sigma, \Sigma |
| τ | T | \tau, T |
| υ | Υ | \upsilon, \Upsilon |
| ϕ, φ | Φ | \phi, \varphi, \Phi |
| χ | X | \chi, X |
| ψ | Ψ | \psi, \Psi |
| ω | Ω | \omega, \Omega |

The usage of the Greek alphabet in other programming languages

In other programming languages, the embedded-LaTeX method is used:

$\Psi$

If it is a formula, it is written the same way.
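For example (a minimal sketch of my own, assuming NumPy and Matplotlib are installed), in Python the same $\ldots$ notation can be embedded in plot labels:

 import numpy as np
 import matplotlib.pyplot as plt

 # Sketch: embed Greek letters in labels by putting $\...$ expressions
 # inside raw strings; matplotlib renders them with its mathtext engine.
 x = np.linspace(0, 2 * np.pi, 200)
 plt.plot(x, np.sin(x))
 plt.xlabel(r"$\theta$")           # lowercase theta
 plt.ylabel(r"$\Psi(\theta)$")     # uppercase Psi
 plt.title(r"$\epsilon$ vs. $\varepsilon$")
 plt.show()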

The difference and connection between torch's view(), transpose(), and permute()

Having recently been tripped up by several of PyTorch's Tensor dimension transformations, I dug into them; the process and results are outlined below.

Note: torch.__version__ == '1.2.0'

torch.transpose() and torch.permute()

Both transpose() and permute() are used to exchange the contents of different dimensions. The difference is that transpose() exchanges the contents of exactly two specified dimensions, while permute() can exchange any number of dimensions at once. Here is the code:

transpose(): swaps two dimensions

 >>> a = torch.Tensor([[[1,2,3,4,5], [6,7,8,9,10], [11,12,13,14,15]], 
                  [[-1,-2,-3,-4,-5], [-6,-7,-8,-9,-10], [-11,-12,-13,-14,-15]]])
 >>> a.shape
 torch.Size([2, 3, 5])
 >>> print(a)
 tensor([[[  1.,   2.,   3.,   4.,   5.],
         [  6.,   7.,   8.,   9.,  10.],
         [ 11.,  12.,  13.,  14.,  15.]],

        [[ -1.,  -2.,  -3.,  -4.,  -5.],
         [ -6.,  -7.,  -8.,  -9., -10.],
         [-11., -12., -13., -14., -15.]]])
 >>> b = a.transpose(1,2)  # Use transpose to swap dimensions 1 and 2. Easy to understand. The transposed tensor and its shape are shown below
 >>> print(b, b.shape)
 (tensor([[[  1.,   6.,  11.],
         [  2.,   7.,  12.],
         [  3.,   8.,  13.],
         [  4.,   9.,  14.],
         [  5.,  10.,  15.]],

        [[ -1.,  -6., -11.],
         [ -2.,  -7., -12.],
         [ -3.,  -8., -13.],
         [ -4.,  -9., -14.],
         [ -5., -10., -15.]]]),
 torch.Size([2, 5, 3]))

permute(): rearranges any number of dimensions at once

 >>> c = a.permute(2, 0, 1)
 >>> print(c, c.shape)  # This reorders the original dimensions (0, 1, 2) into (2, 0, 1), so the shape changes accordingly.
 (tensor([[[  1.,   6.,  11.],
          [ -1.,  -6., -11.]],
 
         [[  2.,   7.,  12.],
          [ -2.,  -7., -12.]],
 
         [[  3.,   8.,  13.],
          [ -3.,  -8., -13.]],
 
         [[  4.,   9.,  14.],
          [ -4.,  -9., -14.]],
 
         [[  5.,  10.,  15.],
          [ -5., -10., -15.]]]),
 torch.Size([5, 2, 3]))

The conversion between transpose() and permute():

>>> b = a.permute(2,0,1)
>>> c = a.transpose(1,2).transpose(0,1)
>>> print(b == c, b.shape)
(tensor([[[True, True, True],
          [True, True, True]],
 
         [[True, True, True],
          [True, True, True]],
 
         [[True, True, True],
          [True, True, True]],
 
         [[True, True, True],
          [True, True, True]],
 
         [[True, True, True],
          [True, True, True]]]),
 torch.Size([5, 2, 3]))

As shown in the code, if you first swap dimensions 1 and 2 of Tensor a, and then swap dimensions 0 and 1 of the result, you get the same result as a.permute(2,0,1).


transpose() and view()

view() is a very commonly used function in PyTorch. It also changes a Tensor's dimensions, but it does so in a completely different way from transpose()/permute(). Whereas transpose() faithfully exchanges the specified dimensions of the original Tensor, view() is much more direct and simple: it first flattens all of the Tensor's elements into a single dimension, and then rebuilds a Tensor according to the shape that is passed in. The code is as follows:

# Still the same Tensor a as above
 >>> print(a.shape)
 torch.Size([2, 3, 5])
 >>> print(a.view(2,5,3))
 tensor([[[  1.,   2.,   3.],
         [  4.,   5.,   6.],
         [  7.,   8.,   9.],
         [ 10.,  11.,  12.],
         [ 13.,  14.,  15.]],

        [[ -1.,  -2.,  -3.],
         [ -4.,  -5.,  -6.],
         [ -7.,  -8.,  -9.],
         [-10., -11., -12.],
         [-13., -14., -15.]]])
  >>> c = a.transpose(1,2)
 >>> print(c, c.shape)
(tensor([[[  1.,   6.,  11.],
          [  2.,   7.,  12.],
          [  3.,   8.,  13.],
          [  4.,   9.,  14.],
          [  5.,  10.,  15.]],
 
         [[ -1.,  -6., -11.],
          [ -2.,  -7., -12.],
          [ -3.,  -8., -13.],
          [ -4.,  -9., -14.],
          [ -5., -10., -15.]]]),
 torch.Size([2, 5, 3]))

As shown in the code, even though view(2,5,3) and transpose(1,2) produce Tensors of the same shape, their contents are not the same. view() simply refills the elements, in their original order, into the new (2,5,3) shape, whereas transpose() actually exchanges the contents of dimensions 1 and 2.


Moreover, there are cases where view() cannot be called on a Tensor after transpose(), because the transposed Tensor is no longer "contiguous" in memory (non-contiguous). The same question about contiguous arrays exists in NumPy, and there is a good explanation of it elsewhere.
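As a quick illustration of that point (my own sketch, using the same Tensor shape as above), view() fails on a transposed Tensor until contiguous() copies it into contiguous memory; reshape() performs that copy automatically when needed:

 >>> a = torch.arange(30).reshape(2, 3, 5)
 >>> t = a.transpose(1, 2)
 >>> a.is_contiguous(), t.is_contiguous()
 (True, False)
 >>> # t.view(-1) would raise a RuntimeError here, because t is not contiguous
 >>> t.contiguous().view(-1).shape   # make a contiguous copy first, then view() works
 torch.Size([30])
 >>> t.reshape(-1).shape             # reshape() copies automatically when needed
 torch.Size([30])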