0 votes
552 views
in Technique[技术] by (71.8m points)

python - Pure NumPy 2D mean convolution derivative of input image

I have a batch of b two-dimensional m x n greyscale images that I'm convolving with a p x q filter and then mean-pooling. In pure NumPy, I'd like to compute the derivatives of the loss with respect to both the input image and the filter, but I'm having trouble with the derivative with respect to the input image:

import numpy as np
from numpy.lib.stride_tricks import as_strided


def conv2d_derivatives(x, f, dy):
    """
    dimensions:
        b = batch size
        m = input image height
        n = input image width
        p = filter height
        q = filter width
        r = output height
        s = output width

    input:
        x = input image                       (b x m x n)
        f = filter                            (p x q)
        dy = derivative of some loss w.r.t. y (b x r x s)

    output:
        df = derivative of loss w.r.t. f      (p x q)
        dx = derivative of loss w.r.t. x      (b x m x n)

    notes:
        wx = windowed version of x s.t. wx[b, r, s] = the window of x to compute y[b, r, s]
        vdx = a view of dx 
    """
    b, m, n = x.shape
    p, q = f.shape
    r = m - p + 1
    s = n - q + 1
    # windowed view of x: wx[b, r, s, i, j] == x[b, r + i, s + j]
    wx = as_strided(x, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * x.itemsize)

    # This derivative is correct
    df = 1 / (p * q) * np.einsum('brspq,brs->pq', wx, dy)

    # Method 1: this derivative is incorrect -- einsum's out= assigns into
    # the overlapping view, so overlapping windows overwrite each other
    # instead of accumulating
    dx = np.zeros_like(x)
    vdx = as_strided(dx, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * dx.itemsize)
    np.einsum('pq,brs->brspq', f, dy, out=vdx)
    dx /= (p * q)

    # Method 2: this derivative is correct, but it's slow and memory-
    # intensive -- prod materializes the full b x r x s x p x q array and
    # the scalar loop accumulates it one element at a time
    dx = np.zeros_like(x)
    vdx = as_strided(dx, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * dx.itemsize)
    prod = f[None, None, None, :, :] * dy[:, :, :, None, None]
    for index in np.ndindex(*vdx.shape):
        vdx[index] += prod[index]
    dx /= (p * q)

    return df, dx
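
For reference, here is a minimal sketch of the forward pass these gradients correspond to ("convolution" here being cross-correlation, with the 1/(p*q) mean factor folded into each window; the name conv2d_mean_pool is just a placeholder):

def conv2d_mean_pool(x, f):
    """y[b, r, s] = mean over the (p, q) window of x[b] times f."""
    b, m, n = x.shape
    p, q = f.shape
    r, s = m - p + 1, n - q + 1
    wx = as_strided(x, (b, r, s, p, q), np.array([m * n, n, 1, n, 1]) * x.itemsize)
    return np.einsum('brspq,pq->brs', wx, f) / (p * q)

On tiny inputs, this makes it easy to finite-difference check both df and dx.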

I know that the derivative of the loss w.r.t. wx[b,r,s,p,q] is just 1/(p*q) * f[p,q] * dy[b,r,s]. However, I don't want to explicitly compute the derivatives for wx and store them in memory, because that array would be massive.
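
At toy sizes that formula is easy to sanity-check by materializing the full array (dwx is a throwaway name; this is exactly the allocation I want to avoid at real sizes):

dwx = f[None, None, None, :, :] * dy[:, :, :, None, None] / (p * q)  # b x r x s x p x q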

I thought I could einsum into a view of dx, vdx, built the same way as the windowed wx, hoping that einsum would increment vdx[b,r,s,p,q] += f[p,q] * dy[b,r,s]; instead it assigns vdx[b,r,s,p,q] = f[p,q] * dy[b,r,s], so overlapping windows clobber each other. If there were a way to specify an out_add_to in einsum, my problem would be solved.

How do I compute dx without storing a large b x r x s x p x q array, in pure NumPy? I can't use SciPy or any other dependency for this problem.
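
One direction that seems workable (a sketch, not a settled answer): loop over the p*q filter taps instead of over windows. Each tap (i, j) contributes f[i, j] * dy to a shifted r x s slab of dx, so the accumulation stays vectorized over b, r, and s and never materializes the big array:

def conv2d_dx(f, dy, m, n):
    """Hypothetical helper: dx via p*q shifted, vectorized accumulations."""
    p, q = f.shape
    b, r, s = dy.shape
    dx = np.zeros((b, m, n), dtype=dy.dtype)
    for i in range(p):
        for j in range(q):
            # every window whose (i, j)-th tap lands in dx[:, i:i+r, j:j+s]
            dx[:, i:i + r, j:j + s] += f[i, j] * dy
    return dx / (p * q)

Still, that's p*q Python-level iterations, so a single einsum-style call would be preferable if one exists.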



1 Reply

0 votes
by (71.8m points)
Waiting for answers
