Python filter.threshold_adaptive Function Code Examples


This article collects typical usage examples of the Python function skimage.filter.threshold_adaptive. If you are wondering what threshold_adaptive does, how to call it, or what real-world usage looks like, the curated code examples below should help.



Below are 20 code examples of the threshold_adaptive function, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
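
Before the collected examples, here is a minimal usage sketch for orientation. Be aware that skimage.filter.threshold_adaptive belongs to older scikit-image releases: the module was later renamed skimage.filters, and the function was eventually replaced by threshold_local, which returns the per-pixel threshold surface instead of a binary image. The test image, block size, and offset below are illustrative choices only, not values taken from any of the examples that follow.

# Minimal sketch: basic adaptive thresholding with either the legacy or the current scikit-image API.
from skimage import data

image = data.camera()   # built-in 2-D grayscale test image (uint8)
block_size = 35         # odd neighborhood size used to compute each local threshold
offset = 10             # constant subtracted from the local weighted mean

try:
    # Legacy API: returns a boolean (thresholded) image directly.
    from skimage.filter import threshold_adaptive
    binary = threshold_adaptive(image, block_size, method='gaussian', offset=offset)
except ImportError:
    # Current API: threshold_local returns the threshold surface; compare to binarize.
    from skimage.filters import threshold_local
    binary = image > threshold_local(image, block_size, method='gaussian', offset=offset)

print(binary.dtype, binary.mean())   # bool array; mean is the fraction of foreground pixels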

Example 1: run_cmd

def run_cmd(method, block_size=40):
    stdin = sys.stdin.read()
    if stdin == '\n':
        exit()

    img = Image.open(StringIO.StringIO(stdin)).convert('L')
    imgc = np.array(img)

    imggray = rgb2gray(imgc)

    if method is None or method == '':
        imgthresh = threshold_adaptive(imggray, block_size, 'gaussian', offset=10)
    elif method == 'gaussian':
        imgthresh = threshold_adaptive(imggray, block_size, 'gaussian', offset=10)
    elif method == 'median':
        imgthresh = threshold_adaptive(imggray, block_size, 'median', offset=10)
    elif method == 'mean':
        imgthresh = threshold_adaptive(imggray, block_size, 'mean', offset=10)
    elif method == 'otsu':
        thresh = threshold_otsu(imggray)
        imgthresh = imggray > thresh
    elif method == 'yen':
        thresh = threshold_yen(imggray)
        imgthresh = imggray > thresh
    elif method == 'iso':
        thresh = threshold_isodata(imggray)
        imgthresh = imggray > thresh


    rescaled = (255.0 / imgthresh.max() * (imgthresh - imgthresh.min())).astype(np.uint8)

    out = Image.fromarray(rescaled)
    out.save(sys.stdout, format='PNG')
Author: CircaVictor | Project: FCC-Political-Ads_The-Code | Lines: 33 | Source file: threshold.py


Example 2: modify

def modify(img):
    """Randomly modify an image
    
    This is a preprocessing step for training an OCR classifier. It takes
    in an image and casts it to greyscale, reshapes it, and adds some
    (1) rotations, (2) translations and (3) noise.
    
    If more efficiency is needed, we could factor out some of the initial
    nonrandom transforms.
    """
    
    block_size = np.random.uniform(20, 40)
    rotation = 5*np.random.randn()
    
    #print 'BLOCK SIZE', block_size
    #print 'ROTATION  ', rotation
    
    img = color.rgb2grey(img)
    img = transform.resize(img, output_shape=(50,30))
    img = filter.threshold_adaptive(img, block_size=block_size)
    
    # rotate the image
    img = np.logical_not(transform.rotate(np.logical_not(img), rotation))
    # translate the image
    img = shift(img)
    # add some noise to the image
    img = noise(img)
    
    img = transform.resize(img, output_shape=(25,15))
    return filter.threshold_adaptive(img, block_size=25)
Author: rmcgibbo | Project: autogert | Lines: 30 | Source file: train_synthetic.py


Example 3: sino_remove_bragg_spots

def sino_remove_bragg_spots(sinogram, block_size=5, tolerance=0.05, sensitivity_low=1.5, sensitivity_high=0.2):
    """ If value is above some local threshold,
        replace by median. Removes dodgy highlights and shadows
        resulting from bragg peaks from large crystallites
        in diffracting orientations """

    # Footprint for median value to replace bragg spots.
    # Usually the spots are contained to one projection,
    # so we sample above and below for good values.
    footprint = np.array(
        [[  False, True, False ],
         [  True,  True,  True ],
         [  False, False, False ],
         [  True,  True,  True ],
         [  False, True, False ]])

    # Only consider pixels which differ from the local median by this offset.
    # Highlights and shadows will skew the arithmetic mean so use median.

    median_value = np.median(sinogram)
    offset_high  = np.median(sinogram[sinogram>median_value])
    offset_low   = np.median(sinogram[sinogram<median_value])

    utils.debug_print(median=median_value,offset_high=offset_high, offset_low=offset_low)

    mask_low = ~filters.threshold_adaptive(
                 sinogram,
                 block_size,
                 method='median',
                 offset=-sensitivity_low*(offset_low-median_value),
             )
    mask_high = filters.threshold_adaptive(
                 sinogram,
                 block_size,
                 method='median',
                 offset=-sensitivity_high*(offset_high-median_value),
             )
    if float(mask_high.sum()) > tolerance * mask_high.size:
        # Too many values marked as spots. Ignoring hilights.
        print('Found more than %s%% of values as hilights' % (tolerance * 100))
        mask_high = np.zeros(shape=sinogram.shape, dtype=bool)
    if float(mask_low.sum()) > tolerance * mask_low.size:
        # Too many values marked as spots. Ignoring shadows.
        print('Found more than %s%% of values as shadows' % (tolerance * 100))
        mask_low = np.zeros(shape=sinogram.shape, dtype=bool)

    mask = mask_low + mask_high
    # FIXME, only calculate values in mask.
    median = ndimage.median_filter(sinogram, footprint=footprint)
    ret = sinogram.copy()
    ret[mask==True] = median[mask==True]
    return ret
Author: amundhov | Project: xrdtoolkit | Lines: 52 | Source file: tomo.py


Example 4: segment

    def segment(self, src):
        image = src.ndarray[:]
        if self.use_adaptive_threshold:
            block_size = 25
            markers = threshold_adaptive(image, block_size) * 255
            markers = invert(markers)

        else:
            markers = zeros_like(image)
            markers[image < self.threshold_low] = 1
            markers[image > self.threshold_high] = 255

        elmap = sobel(image, mask=image)
        wsrc = watershed(elmap, markers, mask=image)

#        elmap = ndimage.distance_transform_edt(image)
#        local_maxi = is_local_maximum(elmap, image,
#                                      ones((3, 3))
#                                      )
#        markers = ndimage.label(local_maxi)[0]
#        wsrc = watershed(-elmap, markers, mask=image)
#        fwsrc = ndimage.binary_fill_holes(out)
#        return wsrc
        if self.use_inverted_image:
            out = invert(wsrc)
        else:
            out = wsrc

#        time.sleep(1)
#        do_later(lambda:self.show_image(image, -elmap, out))
        return out
Author: softtrainee | Project: arlab | Lines: 31 | Source file: region.py


Example 5: intensity_object_features

def intensity_object_features(im, adaptive_t_radius=51, sample_size=None):
    """Segment objects based on intensity threshold and compute properties.

    Parameters
    ----------
    im : 2D np.ndarray of float or uint8.
        The input image.
    adaptive_t_radius : int, optional
        The radius to calculate background with adaptive threshold.
    sample_size : int, optional
        Sample this many objects randomly, rather than measuring all
        objects.

    Returns
    -------
    f : 1D np.ndarray of float
        The feature vector.
    names : list of string
        The list of feature names.
    """
    tim1 = im > imfilter.threshold_otsu(im)
    f1, names1 = object_features(tim1, im, sample_size=sample_size)
    names1 = ['otsu-threshold-' + name for name in names1]
    tim2 = imfilter.threshold_adaptive(im, adaptive_t_radius)
    f2, names2 = object_features(tim2, im, sample_size=sample_size)
    names2 = ['adaptive-threshold-' + name for name in names2]
    f = np.concatenate([f1, f2])
    return f, names1 + names2
Author: gitter-badger | Project: husc | Lines: 28 | Source file: features.py


Example 6: segment

    def segment(self, src):
        '''
            pychron: preprocessing cv.Mat
        '''
#        image = pychron.ndarray[:]
#         image = asarray(pychron)
        image = src[:]
        if self.use_adaptive_threshold:
#            block_size = 25
            markers = threshold_adaptive(image, self.block_size)

            n = markers[:].astype('uint8')
            n[markers == True] = 255
            n[markers == False] = 1
            markers = n

        else:
            markers = zeros_like(image)
            markers[image < self.threshold_low] = 1
            markers[image > self.threshold_high] = 255

        elmap = sobel(image, mask=image)
        wsrc = watershed(elmap, markers, mask=image)

#         wsrc = wsrc.astype('uint8')
        return invert(wsrc)
Author: OSUPychron | Project: pychron | Lines: 26 | Source file: region.py


Example 7: __call__

    def __call__(self, image, window_size=10, threshold=0, fill_holes=True,
                 outline_smoothing=2, remove_borderobjects=True, size_min=1,
                 *args, **kw):

        thresh = threshold_adaptive(image, block_size=window_size,
                                    offset=-1*threshold)

        if outline_smoothing >= 1:
            thresh = outlineSmoothing(thresh, outline_smoothing)

        thresh = remove_small_objects(thresh, size_min)

        seeds = ndi.label(clear_border(~thresh))[0]
        thresh = ndi.binary_fill_holes(thresh)
        smask = seeds.astype(bool)

        # objects don't touch the border after outline smoothing
        if remove_borderobjects:
            thresh = clear_border(thresh)

        img = np.zeros(thresh.shape)
        img[~smask] = 1
        edt = ndi.morphology.distance_transform_edt(img)
        edt -= ndi.morphology.distance_transform_edt(seeds)

        labels = watershed(edt, seeds)
        labels[smask] = 0
        labels[~thresh] = 0

        return labels
Author: rhoef | Project: afw | Lines: 30 | Source file: honeycomp.py


Example 8: adaptive_segment

def adaptive_segment(args):
    """
    Applies an adaptive threshold to reconstructed data.

    Also known as local or dynamic thresholding
    where the threshold value is the weighted mean
    for the local neighborhood of a pixel subtracted
    by constant. Alternatively the threshold can be
    determined dynamically by a given function using
    the 'generic' method.

    Parameters
    ----------
    data : ndarray, float32
        3-D reconstructed data with dimensions:
        [slices, pixels, pixels]

    block_size : scalar, int
        Uneven size of pixel neighborhood which is
        used to calculate the threshold value
        (e.g. 3, 5, 7, ..., 21, ...).

    offset : scalar, float
         Constant subtracted from weighted mean of
         neighborhood to calculate the local threshold
         value. Default offset is 0.

    Returns
    -------
    output : ndarray
        Thresholded data.

    References
    ----------
    - `http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html \
    <http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html>`_
    """
    # Arguments passed by multi-processing wrapper
    ind, dshape, inputs = args

    # Function inputs
    data = mp.tonumpyarray(mp.shared_arr, dshape)  # shared-array
    block_size, offset = inputs

    for m in ind:
        img = data[m, :, :]

        # Perform scikit adaptive thresholding.
        img = threshold_adaptive(img, block_size=block_size, offset=offset)

        # Remove small white regions
        img = ndimage.binary_opening(img)

        # Remove small black holes
        img = ndimage.binary_closing(img)

        data[m, :, :] = img
Author: djvine | Project: tomopy | Lines: 57 | Source file: adaptive_segment.py


Example 9: adapative_threshold

def adapative_threshold(image, block_size=100):
	"""
	This method returns the adaptively-thresholded image.
	"""

	thresholded_image = threshold_adaptive(image, block_size)
	imshow(thresholded_image)

	return thresholded_image
Author: runstadler-lab | Project: Seal-H3N8-Image-Analysis | Lines: 9 | Source file: rgprocessing.py


Example 10: extract_bill

def extract_bill(image, screen, ratio):
    """"Extract the bill of the image"""
    warped = four_point_transform(image, screen.reshape(4, 2) * ratio)

    # convert the warped image to grayscale, then threshold it
    # to give it that 'black and white' paper effect
    warped = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    warped = threshold_adaptive(warped, 250, offset=10)
    warped = warped.astype("uint8") * 255
    return warped
Author: llrs | Project: bills | Lines: 10 | Source file: process_image.py


Example 11: scan

    def scan(cls, filepath):
        print("Starting scan")
        # load the image and compute the ratio of the old height
        # to the new height, clone it, and resize it
        image = cv2.imread(filepath)
        ratio = image.shape[0] / 500.0
        orig = image.copy()
        image = imutils.resize(image, height=500)

        # convert the image to grayscale, blur it, and find edges
        # in the image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        edged = cv2.Canny(gray, 75, 200)

        # find the contours in the edged image, keeping only the
        # largest ones, and initialize the screen contour
        cnts, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]
        screenCnt = None

        # loop over the contours
        for c in cnts:
            # approximate contours
            peri = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.02 * peri, True)

            # if our approximated contour has four points, then we
            # can assume that we have found our screen
            if len(approx) == 4:
                screenCnt = approx
                break

        # Check if we found a 4 point contour. If not, we create our own bounding box
        # with the largest contour
        if screenCnt is None:
            height, width, channels = image.shape
            imageBounds = np.array([[1, 1], [width, 1], [width, height], [1, height]])
            screenCnt = imutils.get_bounding_box(imageBounds)

        # apply the four point transform to obtain a top-down
        # view of the original image
        warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)

        # convert the warped image to grayscale, then threshold it
        # to give it that 'black and white' paper effect
        warped = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
        warped = threshold_adaptive(warped, 250, offset=10)
        warped = warped.astype("uint8") * 255

        # Write out image to tmp file
        filename = "tmp/tmp-result.png"
        cv2.imwrite(filename, warped)
        print("Finished scan")
        return filename
Author: twuster | Project: DocumentScanner | Lines: 55 | Source file: scanner.py
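
The two document-scanner examples above (Examples 10 and 11) pass the warped grayscale image straight to threshold_adaptive and then cast to uint8. As a hedged sketch only: on current scikit-image, where threshold_adaptive has been removed, the same "black and white paper" step can be written with skimage.filters.threshold_local, which returns a threshold surface to compare against (and requires an odd block_size). The helper name binarize_scan and the parameter values here are illustrative, not part of either project.

import cv2
from skimage.filters import threshold_local

def binarize_scan(warped_bgr):
    """Modern-API equivalent of the 'black and white' paper effect used above."""
    gray = cv2.cvtColor(warped_bgr, cv2.COLOR_BGR2GRAY)
    # threshold_local returns a per-pixel threshold surface rather than a binary image.
    T = threshold_local(gray, block_size=251, offset=10, method='gaussian')
    return (gray > T).astype('uint8') * 255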


Example 12: preprocess

def preprocess(image, height=50, block_size=50):
    """Turn to greyscale, scale to a height, and then threshold to binary
    """

    image = color.rgb2grey(image)
    size_factor = float(height) / image.shape[0]
    new_size = [int(e * size_factor) for e in image.shape]
    image = transform.resize(image, new_size)
    image = filter.threshold_adaptive(image, block_size=30)

    return image
Author: rmcgibbo | Project: autogert | Lines: 11 | Source file: image.py


Example 13: image_features_resize_adaptive

def image_features_resize_adaptive(img, maxPixel, num_features,imageSize):
     # X is the feature vector with one row of features per image
     #  consisting of the pixel values and our metric
     block_size = 20
     im = threshold_adaptive(img, block_size, offset=5)
     X=np.zeros(num_features, dtype=float)
     image = resize(im, (maxPixel, maxPixel))
     # Store the rescaled image pixels
     X[0:imageSize] = np.reshape(image,(1, imageSize))

     return X
Author: kailex | Project: Bowl | Lines: 11 | Source file: Prepare_Features.py


Example 14: thresholdImage

 def thresholdImage(self):
     """Threshold Image"""
     ### THRESHOLDING ###
     #=======================================================================
     # self.ThresholdMethod:
     # thresholdGlobalOtsu = filter.threshold_otsu(pyLaneTracker.Img, 64)
     # thresholdGlobalYen = filter.threshold_yen(pyLaneTracker.Img, 64)
     #=======================================================================
     
     thresholdAdaptive = filter.threshold_adaptive(self.Img, 96, method='median', offset=0, mode='reflect', param=None)
     self.Threshold = thresholdAdaptive
Author: janthonywilson | Project: Cruise | Lines: 11 | Source file: pyCar.py


Example 15: image_features_hog2

def image_features_hog2(img, num_features,orientation,maxcell,maxPixel):
     # X is the feature vector with one row of features per image
     #  consisting of the pixel values and our metric
     block_size = 10
     image = threshold_adaptive(img, block_size, offset=5)
     im = resize(image, (maxPixel, maxPixel))
     ##hog scikit transform
     fd= hog(im, orientations=orientation, pixels_per_cell=(maxcell, maxcell),
                    cells_per_block=(1, 1), visualise=False,normalise=True)

     return fd
Author: kailex | Project: Bowl | Lines: 11 | Source file: Prepare_Features.py


Example 16: determine_threshold

def determine_threshold(avg_image, block_size=100, offset=-2):
	'''Used to determine the right threshold value for the segmentation. The input
	is the average of the image stack of particles on the ring. Play with the offset
	and block size to make a clear ring with minimal background noise. Negative values
	of offset should reduce background noise. This functions returns the thresholded array
	in addition to showing what it looks like.'''
	from skimage.filter import threshold_adaptive
	threshold=threshold_adaptive(avg_image, block_size, offset=offset)
	import matplotlib.pyplot as plt
	plt.imshow(threshold)
	plt.show()
	return threshold
Author: pfigliozzi | Project: Python_Data_Analysis | Lines: 12 | Source file: functions_for_data_analysis.py


Example 17: encode_centro_telomeres

def encode_centro_telomeres(image_centro, image_telo,
                            centro_offset=0.0, centro_factor=1.0,
                            centro_min_size=36, centro_radius=10,
                            telo_offset=0.0, telo_adapt_radius=49,
                            telo_open_radius=4):
    """Find centromeres, telomeres, and their overlap.

    Parameters
    ----------
    image_centro : array, shape (M, N)
        The grayscale channel for centromeres.
    image_telo : array, shape (M, N)
        The grayscale channel for telomeres.
    centro_offset : float, optional
        Offset Otsu's threshold by this amount (i.e. be less stringent
        about what image intensity constitutes a centromere)
    centro_factor : float, optional
        Offset Otsu's threshold by a multiplicative constant.
    centro_min_size : int, optional
        Remove objects smaller than this, as they would be too small to
        be a centromere.
    centro_radius : int, optional
        Consider anything within this radius to be "near" a centromere.
    telo_offset : float, optional
        Offset the telomere image threshold by this amount.
    telo_adapt_radius : int, optional
        Use this radius to threshold telomere image adaptively.
    telo_open_radius : int, optional
        Use this radius for a binary opening of thresholded telomeres
        (removes noise).

    Returns
    -------
    encoded_regions : array of int, shape (M, N)
        A uint8 image with the following values:
         - 0: background
         - 1: telomeres
         - 2: centromeres
         - 3: centromere/telomere overlap
    """
    centros = otsu(image_centro, centro_offset, centro_factor)
    centros = remove_small_objects(centros, centro_min_size)
    centro_strel = selem.disk(centro_radius)
    centros = nd.binary_dilation(centros, structure=centro_strel)
    telos = imfilter.threshold_adaptive(image_telo, telo_adapt_radius,
                                        offset=telo_offset)
    telo_strel = selem.disk(telo_open_radius)
    telos = nd.binary_opening(telos, structure=telo_strel)
    encoded_regions = 2 * centros.astype(np.uint8) + telos
    return encoded_regions
Author: jni | Project: cafe | Lines: 50 | Source file: cafe.py


Example 18: split_rigid

def split_rigid(image, charseps):
    """Split an image into characters. Charseps should be a list of ints
    giving the horizontal location to split
    """
    n_chars = len(charseps) - 1

    chars = []
    for i in range(n_chars):
        char = image[:, charseps[i] : charseps[i + 1]]
        char = transform.resize(char, output_shape=(25, 15))
        char = filter.threshold_adaptive(char, block_size=30)
        chars.append(char)

    return chars
Author: rmcgibbo | Project: autogert | Lines: 14 | Source file: image.py


Example 19: find_symbols

def find_symbols(input_image):
    """
    Finds the positions of the rank and suit on the card and extracts them into new matrices
    :param input_image: image for processing
    :return: dimensions of new image matrix for rank and suit
    """
    grey_warped_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
    black_and_white = threshold_adaptive(grey_warped_image, 250, offset=10)  # make a binary, black-and-white image
    black_and_white = black_and_white.astype("uint8") * 255

    kernel = np.ones((3, 2), 'uint8')
    # print black_and_white[20][20]
    black_and_white = cv2.erode(black_and_white, kernel, iterations=1)
    # cv2.imshow('Erodirana', black_and_white)
    blob_found = False
    region_width, region_height = 32, 93
    rect_top, rect_bot = input_image[5:region_height, 5:region_width], input_image[247:(247 + 98), 208:(208 + 37)]

    blob_found = False
    region_width, region_height = 32, 93
    # rect_top, rect_bot = input_image[5:region_height, 5:region_width], input_image[247:(247 + 98), 208:(208 + 37)]
    # print black_and_white.shape
    mask = np.zeros((black_and_white.shape[0] + 2, black_and_white.shape[1] + 2), 'uint8')
    bin_card = black_and_white.copy()
    rects = []
    # cnt = 0
    for y in np.arange(5, region_height):
        for x in np.arange(5, region_width):
            bgr = black_and_white[y, x]
            if bgr == 0:
                cv2.floodFill(black_and_white, mask, (x, y), (255, 255, 255))
                # cv2.imshow("flooded", black_and_white)
                # cv2.imwrite("flooded" + str(cnt) +".jpg", black_and_white)
                # cnt += 1
                # cv2.waitKey(0)
                # cv2.destroyAllWindows()
                rects.append(xor(black_and_white, bin_card, rects))

    # print "RECTS: ", rects
    if len(rects) < 3:
        rank_dim, suit_dim = rects[0], rects[1]
    else:
        x1, y1, w1, h1 = rects[0]
        x2, y2, w2, h2 = rects[1]
        rank_dim = (x1, y1, w1 + w2 + 2, h1)
        suit_dim = rects[2]

    return rank_dim, suit_dim
Author: jknezevic | Project: ZR | Lines: 48 | Source file: classification.py


Example 20: noise

def noise(img, rho=0.01, sigma=0.5, block_size=50):
    """Add two forms of noise to a binary image
    
    First, flip a fraction, rho, of the bits. The bits to flip are
    selected uniformly at random.
    
    Second, add white noise to the image, and then re-threshold it back to
    binary. Here, errors in the thresholding lead to a new "splotchy" error
    pattern, especially near the edges.
    """
    
    mask = scipy.sparse.rand(img.shape[0], img.shape[1], density=rho)
    mask.data = np.ones_like(mask.data)
    img = np.mod(img + mask, 2)
    
    img = img + sigma * np.random.random(img.shape)
    img = filter.threshold_adaptive(img, block_size=block_size)
    
    return img
Author: rmcgibbo | Project: autogert | Lines: 19 | Source file: train_synthetic.py



Note: The skimage.filter.threshold_adaptive examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as Github/MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright in the source code belongs to the original authors. Please consult the corresponding project's License before distributing or using the code. Do not reproduce this article without permission.

