
Python numpy.partition Function Code Examples


This article collects and summarizes typical usage examples of the numpy.partition function in Python. If you have been wondering what numpy.partition does, how to use it, or what real calls look like, the curated code examples below should help.



A total of 20 code examples of the partition function are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
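
Before diving into the examples, a quick refresher on the contract they all rely on: np.partition(a, kth) returns a copy in which the element at index kth sits exactly where a full sort would put it, with everything smaller on its left and everything larger on its right (neither side is itself sorted). A minimal sketch:

import numpy as np

a = np.array([7, 2, 9, 1, 5, 3])

p = np.partition(a, 2)
print(p[2])                      # 3 -- the 3rd-smallest value, in its sorted slot
print(np.partition(a, -2)[-2])   # 7 -- same idiom from the other end: the 2nd-largest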

Example 1: _trimmed_mean_1d

def _trimmed_mean_1d(arr, k):
    """Calculate trimmed mean on a 1d array.

    Trim values larger than the k'th largest value or smaller than the k'th
    smallest value

    Parameters
    ----------
    arr: ndarray, shape (n,)
        The one-dimensional input array to perform trimmed mean on

    k: int
        The thresholding order for trimmed mean

    Returns
    -------
    trimmed_mean: float
        The trimmed mean calculated
    """
    # Partition at k - 1 so that index k - 1 is guaranteed to hold the k-th
    # smallest (resp. largest) value; partitioning at k would leave it unordered.
    kth_smallest = np.partition(arr, k - 1)[k - 1]
    kth_largest = -np.partition(-arr, k - 1)[k - 1]

    cnt = 0
    summation = 0.0
    for elem in arr:
        if elem >= kth_smallest and elem <= kth_largest:
            cnt += 1
            summation += elem
    return summation / cnt
Developer: amueller, Project: pca, Lines: 29, Source: tga.py
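
A quick sanity check of the function above (with the corrected indexing); the input is illustrative:

import numpy as np

arr = np.array([10.0, 1.0, 7.0, 3.0, 9.0, 5.0, 2.0])
# k=2 trims everything below the 2nd-smallest value (2.0) and above the
# 2nd-largest (9.0), keeping [7, 3, 9, 5, 2] -> mean 5.2
print(_trimmed_mean_1d(arr, 2))   # 5.2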


Example 2: color_range

def color_range(data):
  # Define the color range
  clean = data[data > 0]
  min_element = clean.size // 20
  max_element = clean.size * 9 // 10
  vmin = np.partition(clean, min_element, axis=None)[min_element]  # instead of subint[subint>0], subint[:-(num_rows/down_fact)] could be used
  vmax = np.partition(clean, max_element, axis=None)[max_element]
  return vmin, vmax
Developer: danielemichilli, Project: LSPs, Lines: 8, Source: Utilities.py
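
A hedged usage sketch with synthetic data (any positive array works):

import numpy as np

rng = np.random.RandomState(0)
data = rng.rand(1000)
vmin, vmax = color_range(data)
# vmin sits near the 5th percentile of the positive values and vmax near the
# 90th, so roughly 85% of them fall inside [vmin, vmax].
print(vmin, vmax)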


Example 3: arg_median

def arg_median(a):
    if len(a) % 2 == 1:
        return np.where(a == np.median(a))[0][0]
    else:
        l, r = len(a) // 2 - 1, len(a) // 2
        left = np.partition(a, l)[l]
        right = np.partition(a, r)[r]
        return [np.where(a == left)[0][0], np.where(a == right)[0][0]]
Developer: ACCarnall, Project: SED_fitting, Lines: 8, Source: plot_UDS_val_comp.py
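
A small demonstration, assuming the function above is in scope:

import numpy as np

a = np.array([7, 1, 5, 3])
print(arg_median(a))   # positions 3 and 2 -- the two middle values (3 and 5)

b = np.array([7, 1, 5])
print(arg_median(b))   # 2 -- position of the single median (5)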


Example 4: _degree_feats

    def _degree_feats(uts=None, G=None, name_ext="", exclude_id=None):
        """
        Helper method for retrieve_feats().
        Generate statistics on degree-related features in a Hypergraph (G), or a Hypergraph
        constructed from provided utterances (uts)
        :param uts: utterances to construct Hypergraph from
        :param G: Hypergraph to calculate degree features statistics from
        :param name_ext: Suffix to append to feature name
        :param exclude_id: id of utterance to exclude from Hypergraph construction
        :return: A dictionary from a thread root id to its stats dictionary,
            which is a dictionary from feature names to feature values. For degree-related
            features specifically.
        """
        assert uts is None or G is None
        if G is None:
            G = HyperConvo._make_hypergraph(uts, exclude_id=exclude_id)

        stat_funcs = {
            "max": np.max,
            "argmax": np.argmax,
            "norm.max": lambda l: np.max(l) / np.sum(l),
            "2nd-largest": lambda l: np.partition(l, -2)[-2] if len(l) > 1
            else np.nan,
            "2nd-argmax": lambda l: (-l).argsort()[1] if len(l) > 1 else np.nan,
            "norm.2nd-largest": lambda l: np.partition(l, -2)[-2] / np.sum(l)
            if len(l) > 1 else np.nan,
            "mean": np.mean,
            "mean-nonzero": lambda l: np.mean(l[l != 0]),
            "prop-nonzero": lambda l: np.mean(l != 0),
            "prop-multiple": lambda l: np.mean(l[l != 0] > 1),
            "entropy": scipy.stats.entropy,
            "2nd-largest / max": lambda l: np.partition(l, -2)[-2] / np.max(l)
            if len(l) > 1 else np.nan
        }

        stats = {}
        for from_hyper in [False, True]:
            for to_hyper in [False, True]:
                if not from_hyper and to_hyper: continue  # skip c -> C
                outdegrees = np.array(G.outdegrees(from_hyper, to_hyper))
                indegrees = np.array(G.indegrees(from_hyper, to_hyper))

                for stat, stat_func in stat_funcs.items():
                    stats["{}[outdegree over {}->{} {}responses]".format(stat,
                                                                         HyperConvo._node_type_name(from_hyper),
                                                                         HyperConvo._node_type_name(to_hyper),
                                                                         name_ext)] = stat_func(outdegrees)
                    stats["{}[indegree over {}->{} {}responses]".format(stat,
                                                                        HyperConvo._node_type_name(from_hyper),
                                                                        HyperConvo._node_type_name(to_hyper),
                                                                        name_ext)] = stat_func(indegrees)
        return stats
Developer: CornellNLP, Project: Cornell-Conversational-Analysis-Toolkit, Lines: 52, Source: hyperconvo.py
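
The "2nd-largest" entries above lean on a compact idiom: partitioning at -2 pins the second-largest value at index -2. In isolation:

import numpy as np

degrees = np.array([4, 9, 1, 7])
second_largest = np.partition(degrees, -2)[-2]
print(second_largest)                   # 7
print(second_largest / degrees.max())   # ~0.778, the "2nd-largest / max" feature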


Example 5: test_uint64

    def test_uint64(self):
        # Cast after construction so the negative stop bound never has to be
        # represented in the unsigned dtype the test name promises.
        arr = np.arange(10, -1, -1).astype('uint64')

        partition = np.partition(arr, self.pivot_index)

        self._check_partition(partition, self.pivot_index)
        self._check_content_along_axis(arr, partition, -1)
Developer: k3331863, Project: IR2, Lines: 7, Source: test_partition.py


Example 6: distance_to_kth_neighbor

def distance_to_kth_neighbor(metric, k_neighbors):
    """Computes the distance to the kth neighbor for each point in a metric.

    Args
    ----
    metric : ndarray
        A distance matrix in square or condensed form.
    k_neighbors : int
        The order of neighbor to which distance is computed, where the 0th
        neighbor of any point is the point itself.

    Returns
    -------
    distances : ndarray
        Distance to the kth neighbor for each point in `metric`.

    Note
    ----
    It is an implementation detail that the input metric is coerced to a 
    squareform array.

    """
    # coerce any condensed metric to square form
    if metric.ndim == 1:
        metric = _dist.squareform(metric)

    if k_neighbors >= metric.shape[0]:
        message = 'k_neighbors must be less than the number of points.'
        raise ValueError(message)

    if k_neighbors < 0:
        raise ValueError('k_neighbors must be non-negative')

    return _np.partition(metric, k_neighbors, axis=1)[:, k_neighbors]
Developer: pombredanne, Project: phdlib, Lines: 34, Source: metrics.py
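
A quick check with a three-point condensed metric (note the excerpt assumes the module aliases _np for numpy and _dist for scipy.spatial.distance):

import numpy as _np
from scipy.spatial import distance as _dist

# Condensed distances for 3 points: d(0,1)=1.0, d(0,2)=2.0, d(1,2)=1.5
metric = _np.array([1.0, 2.0, 1.5])
print(distance_to_kth_neighbor(metric, 1))   # [1.  1.  1.5] -- nearest non-self neighbor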


Example 7: estimate_eps

def estimate_eps(dist_mat, n_closest=5):
    """
    Estimates possible eps values (to be used with DBSCAN)
    for a given distance matrix by looking at the largest distance "jumps"
    amongst the `n_closest` distances for each item.

    Tip: the value for `n_closest` is important - set it too large and you may only get
    really large distances which are uninformative. Set it too small and you may get
    premature cutoffs (i.e. select jumps which are really not that big).

    TODO: this could be fancier by calculating support for particular eps values,
    e.g. 80% are around 4.2 or whatever
    """
    dist_mat = dist_mat.copy()

    # To ignore i == j distances
    dist_mat[np.where(dist_mat == 0)] = np.inf
    estimates = []
    for i in range(dist_mat.shape[0]):
        # Indices of the n closest distances
        row = dist_mat[i]
        dists = sorted(np.partition(row, n_closest)[:n_closest])
        difs = [(x, y, y - x) for x, y in zip(dists, dists[1:])]
        eps_candidate, _, jump = max(difs, key=lambda x: x[2])

        estimates.append(eps_candidate)
    return sorted(estimates)
Developer: MaxwellRebo, Project: broca, Lines: 28, Source: parameter.py
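
A sanity check on a hand-built distance matrix (three clustered points plus one outlier; numbers are illustrative):

import numpy as np

D = np.array([[0.0, 1.0, 1.2, 5.0],
              [1.0, 0.0, 1.1, 5.2],
              [1.2, 1.1, 0.0, 4.9],
              [5.0, 5.2, 4.9, 0.0]])
# Most estimates land near the cluster radius (~1.1-1.2); the outlier row
# contributes one inflated candidate.
print(estimate_eps(D, n_closest=3))   # approximately [1.1, 1.2, 1.2, 5.0]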


Example 8: nsmall

def nsmall(arr, n, axis):
    return np.partition(arr, n, axis)[n]

Developer: UwaisA, Project: CompEvo, Lines: 7, Source: Analyse.py


Example 9: avg_x_weak_weights

def avg_x_weak_weights(wmxO, x):
    '''
    Keeps only the 4000-x strongest weights in every row and replaces the others with their average
    :param wmxO: original weight matrix (4000 * 4000 ndarray)
    :param x: number of weakest weights per row to replace by their mean
    :return: wmxM: modified weight matrix (4000 * 4000 ndarray)
    '''

    nHolded = 4000 - x
    wmxM = np.zeros((4000, 4000))

    for i in range(0, 4000):
        row = wmxO[i, :]

        tmp = np.partition(-row, nHolded)
        max_vals = -tmp[:nHolded]  # values of the 4000-x strongest weights
        rest = -tmp[nHolded:]
        mu = np.mean(rest)  # mean of the x weakest weights
        tmp = np.argpartition(-row, nHolded)
        maxj = tmp[:nHolded]  # indices of the 4000-x strongest weights

        rowM = mu * np.ones((1, 4000))
        for j, val in zip(maxj, max_vals):  # renamed from max, which shadowed the builtin
            rowM[0, j] = val

        wmxM[i, :] = rowM

    return wmxM
Developer: andrisecker, Project: KOKISharpWaves, Lines: 27, Source: wmx_modifications.py


Example 10: compute_csls_accuracy

def compute_csls_accuracy(x_src, x_tgt, lexicon, lexicon_size=-1, k=10, bsz=1024):
    if lexicon_size < 0:
        lexicon_size = len(lexicon)
    idx_src = list(lexicon.keys())

    x_src /= np.linalg.norm(x_src, axis=1)[:, np.newaxis] + 1e-8
    x_tgt /= np.linalg.norm(x_tgt, axis=1)[:, np.newaxis] + 1e-8

    sr = x_src[list(idx_src)]
    sc = np.dot(sr, x_tgt.T)
    similarities = 2 * sc
    sc2 = np.zeros(x_tgt.shape[0])
    for i in range(0, x_tgt.shape[0], bsz):
        j = min(i + bsz, x_tgt.shape[0])
        sc_batch = np.dot(x_tgt[i:j, :], x_src.T)
        dotprod = np.partition(sc_batch, -k, axis=1)[:, -k:]
        sc2[i:j] = np.mean(dotprod, axis=1)
    similarities -= sc2[np.newaxis, :]

    nn = np.argmax(similarities, axis=1).tolist()
    correct = 0.0
    for i in range(len(lexicon)):  # renamed from k, which shadowed the k-NN parameter above
        if nn[i] in lexicon[idx_src[i]]:
            correct += 1.0
    return correct / lexicon_size
Developer: basicv8vc, Project: fastText, Lines: 25, Source: utils.py


Example 11: make_query

    def make_query(self):
        """Return the index of the sample to be queried and labeled.

        Returns
        -------
        ask_id: int
            The entry_id of the sample this algorithm wants to query.
        """
        dataset = self.dataset
        self.model.train(dataset)

        unlabeled_entry_ids, X_pool = zip(*dataset.get_unlabeled_entries())

        if self.method == 'lc':  # least confident
            ask_id = np.argmin(
                np.max(self.model.predict_real(X_pool), axis=1)
            )

        elif self.method == 'sm':  # smallest margin
            dvalue = self.model.predict_real(X_pool)

            if np.shape(dvalue)[1] > 2:
                # Find 2 largest decision values
                dvalue = -(np.partition(-dvalue, 2, axis=1)[:, :2])

            margin = np.abs(dvalue[:, 0] - dvalue[:, 1])
            ask_id = np.argmin(margin)

        return unlabeled_entry_ids[ask_id]
Developer: maxbest, Project: libact, Lines: 29, Source: uncertainty_sampling.py
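
The 'sm' branch relies on a neat trick: negate, partition at 2, then slice the first two columns to get each row's two largest decision values (in unspecified order, hence the np.abs). In isolation, with made-up numbers:

import numpy as np

dvalue = np.array([[0.1, 0.5, 0.4],
                   [0.9, 0.2, 0.3]])
top2 = -(np.partition(-dvalue, 2, axis=1)[:, :2])
margin = np.abs(top2[:, 0] - top2[:, 1])
print(margin)             # [0.1 0.6] -- gap between the two largest values per row
print(np.argmin(margin))  # 0 -- the most ambiguous sample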


Example 12: _chunk_based_bmu_find

def _chunk_based_bmu_find(input_matrix, codebook, y2, nth=1):
    """
    Finds the corresponding bmus to the input matrix.

    :param input_matrix: a matrix of input data, representing input vector as
                         rows, and vectors features/dimention as cols
                         when parallelizing the search, the input_matrix can be
                         a sub matrix from the bigger matrix
    :param codebook: matrix of weights to be used for the bmu search
    :param y2: <not sure>
    """
    dlen = input_matrix.shape[0]
    nnodes = codebook.shape[0]
    bmu = np.empty((dlen, 2))

    # It seems that small batches for large dlen is really faster:
    # that is because of ddata in loops and n_jobs. for large data it slows
    # down due to memory needs in parallel
    blen = min(50, dlen)
    i0 = 0

    while i0+1 <= dlen:
        low = i0
        high = min(dlen, i0+blen)
        i0 = i0+blen
        ddata = input_matrix[low:high+1]
        d = np.dot(codebook, ddata.T)
        d *= -2
        d += y2.reshape(nnodes, 1)
        bmu[low:high+1, 0] = np.argpartition(d, nth, axis=0)[nth-1]
        bmu[low:high+1, 1] = np.partition(d, nth, axis=0)[nth-1]
        del ddata

    return bmu
Developer: sevamoo, Project: SOMPY, Lines: 34, Source: sompy.py
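
A tiny sanity check with a synthetic codebook; precomputing y2 as the squared norms of the codebook rows is an assumption based on how the distance is assembled above:

import numpy as np

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y2 = np.einsum('ij,ij->i', codebook, codebook)   # squared norm of each node
X = np.array([[0.1, 0.0], [1.9, 2.1]])

bmu = _chunk_based_bmu_find(X, codebook, y2, nth=1)
print(bmu[:, 0])   # [0. 2.] -- index of the best-matching unit per input row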


Example 13: disp_results

def disp_results(fig, ax1, ax2, loss_iterations, losses, accuracy_iterations, accuracies, accuracies_iteration_checkpoints_ind, fileName, color_ind=0):
    modula = len(plt.rcParams['axes.color_cycle'])
    acrIterations = []
    top_acrs = {}
    if accuracies.size:
        if accuracies.size > 4:
            top_n = 4
        else:
            top_n = accuracies.size - 1
        temp = np.argpartition(-accuracies, top_n)
        result_indexces = temp[:top_n]
        temp = np.partition(-accuracies, top_n)
        result = -temp[:top_n]
        for acr in result_indexces:
            acrIterations.append(accuracy_iterations[acr])
            top_acrs[str(accuracy_iterations[acr])]=str(accuracies[acr])

        sorted_top4 = sorted(top_acrs.items(), key=operator.itemgetter(1))
        maxAcc = np.amax(accuracies, axis=0)
        iterIndx = np.argmax(accuracies)
        maxAccIter = accuracy_iterations[iterIndx]
        maxIter =   accuracy_iterations[-1]
        consoleInfo = format('\n[%s]:maximum accuracy [from 0 to %s ] = [Iteration %s]: %s ' %(fileName,maxIter,maxAccIter ,maxAcc))
        plotTitle = format('max accuracy(%s) [Iteration %s]: %s ' % (fileName,maxAccIter, maxAcc))
        print (consoleInfo)
        print('Top 4 accuracies: ' + str(sorted_top4))
        plt.title(plotTitle)
    ax1.plot(loss_iterations, losses, color=plt.rcParams['axes.color_cycle'][(color_ind * 2 + 0) % modula])
    ax2.plot(accuracy_iterations, accuracies, plt.rcParams['axes.color_cycle'][(color_ind * 2 + 1) % modula], label=str(fileName))
    ax2.plot(accuracy_iterations[accuracies_iteration_checkpoints_ind], accuracies[accuracies_iteration_checkpoints_ind], 'o', color=plt.rcParams['axes.color_cycle'][(color_ind * 2 + 1) % modula])
    plt.legend(loc='lower right') 
Developer: Coderx7, Project: caffe-windows-examples, Lines: 34, Source: plot.py


Example 14: find_top_k

def find_top_k(x, k):
    # Return an array where anything less than the top k values is zeroed out.
    # Note: modifies x in place; the threshold is inclusive (see the sketch below).
    if np.count_nonzero(x) < k:
        return x
    else:
        x[x < -np.partition(-x, k)[k]] = 0
        return x
Developer: ankitaanand, Project: find_best_mall, Lines: 7, Source: recsys.py
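
A quick call showing the easy-to-miss inclusive threshold mentioned above; the input is illustrative:

import numpy as np

x = np.array([5.0, 1.0, 3.0, 2.0, 4.0])
print(find_top_k(x, 2))   # [5. 0. 3. 0. 4.]
# The threshold is the (k+1)-th largest value (3.0 here) and ties with it
# survive, so up to k+1 entries can remain.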


Example 15: random_con_bitmask

def random_con_bitmask(prob, shape, mins=1):
    """Generate a random bitmask with constraints

    The function allows specifying the minimum number of True values along a
    single dimension while counting over the other ones. `mins` can be a scalar
    or a tuple for each dimension and must be less than the product of the size
    of the other dimensions.

    If you just want a random bitmask use np.random.random(shape) < prob"""
    assert len(shape) > 1
    vals = np.random.random(shape)
    mask = vals < prob
    total = vals.size

    if isinstance(mins, abc.Sequence):
        assert len(mins) == vals.ndim
        assert all(0 < s <= total // m for s, m in zip(mins, vals.shape))
    else:
        assert mins > 0
        mins = tuple(min(mins, total // m) for m in vals.shape)

    for dim, num in enumerate(mins):
        aligned = np.rollaxis(vals, dim).reshape(vals.shape[dim], -1)
        thresh = np.partition(aligned, num - 1, 1)[:, num - 1]
        thresh.shape += (1,) * (vals.ndim - dim - 1)
        mask |= vals <= thresh

    return mask
Developer: yackj, Project: GameAnalysis, Lines: 28, Source: utils.py
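
A usage sketch (the excerpt assumes module-level imports of numpy as np and collections.abc as abc):

import numpy as np
from collections import abc  # used by the isinstance check above

np.random.seed(0)
mask = random_con_bitmask(0.05, (6, 8), mins=2)
# Despite the low base probability, every row and every column is forced to
# contain at least 2 True entries.
print(mask.sum(axis=1))   # all >= 2
print(mask.sum(axis=0))   # all >= 2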


Example 16: sumvalues

    def sumvalues(self, q=0):
        """Sum of the top q password frequencies
        """
        if q == 0:
            return self._totalf
        else:
            return -np.partition(-self._freq_list, q)[:q].sum()
Developer: rchatterjee, Project: pwmodels, Lines: 7, Source: readpw.py


Example 17: make_query

    def make_query(self):
        """
        Choices for method (default 'lc'):
        'lc' (Least Confident), 'sm' (Smallest Margin)
        """
        dataset = self.dataset
        self.model.train(dataset)

        unlabeled_entry_ids, X_pool = zip(*dataset.get_unlabeled_entries())

        if self.method == 'lc':  # least confident
            ask_id = np.argmin(
                np.max(self.model.predict_real(X_pool), axis=1)
            )

        elif self.method == 'sm':  # smallest margin
            dvalue = self.model.predict_real(X_pool)

            if np.shape(dvalue)[1] > 2:
                # Find 2 largest decision values
                dvalue = -(np.partition(-dvalue, 2, axis=1)[:, :2])

            margin = np.abs(dvalue[:, 0] - dvalue[:, 1])
            ask_id = np.argmin(margin)

        return unlabeled_entry_ids[ask_id]
Developer: Hao-Hsuan, Project: libact, Lines: 26, Source: uncertainty_sampling.py


Example 18: fast_abs_percentile

def fast_abs_percentile(data, percentile=80):
    """ A fast version of the percentile of the absolute value.

    Parameters
    ==========
    data: ndarray, possibly masked array
        The input data
    percentile: number between 0 and 100
        The percentile that we are asking for

    Returns
    =======
    value: number
        The score at percentile

    Notes
    =====

    This is a faster, and less accurate version of
    scipy.stats.scoreatpercentile(np.abs(data), percentile)
    """
    if hasattr(data, 'mask'):
        # Cater for masked arrays
        data = np.asarray(data[np.logical_not(data.mask)])
    data = np.abs(data)
    data = data.ravel()
    index = int(data.size * .01 * percentile)
    if partition is not None:
        # `partition` comes from a guarded module-level import in the source
        # file (from numpy import partition, None on very old NumPy).
        # Partial sort: faster than a full sort. Partition at index + 1 so
        # that position is guaranteed, matching the sorted path below.
        return partition(data, index + 1)[index + 1]
    data.sort()
    return data[index + 1]
Developer: DavidDJChen, Project: nilearn, Lines: 32, Source: extmath.py
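
A rough comparison against the exact percentile, assuming the guarded module-level import from the original file (from numpy import partition) has run:

import numpy as np
from numpy import partition

rng = np.random.RandomState(42)
data = rng.randn(10000)
print(fast_abs_percentile(data, 80))    # fast partial-sort estimate
print(np.percentile(np.abs(data), 80))  # exact reference; the two agree closely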


Example 19: test_partition_matrix_none

def test_partition_matrix_none():
    # gh-4301
    # 2018-04-29: moved here from core.tests.test_multiarray
    a = np.matrix([[2, 1, 0]])
    actual = np.partition(a, 1, axis=None)
    expected = np.matrix([[0, 1, 2]])
    assert_equal(actual, expected)
    assert_(type(actual) is np.matrix)  # check the result, not the freshly built literal
Developer: chinaloryu, Project: numpy, Lines: 8, Source: test_interaction.py


Example 20: get_top_k_elements_per_row_sim_mat

def get_top_k_elements_per_row_sim_mat(similarity_matrix, k):
    '''
    Introduces another parameter, k, where we only keep the top k similarity
    scores per row in the similarity matrix.
    '''
    for i, row in enumerate(similarity_matrix):
        # Use a separate variable here: reassigning k itself would corrupt
        # every iteration after the first.
        kth = len(row) - k
        kth_largest = np.partition(row, kth)[kth]
        similarity_matrix[i] = [ele if ele >= kth_largest else 0 for ele in row]
    return similarity_matrix
Developer: ewhuang, Project: tcm_project, Lines: 10, Source: cluster_with_embedding.py
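
A quick call with the corrected kth bookkeeping; the matrix is illustrative:

import numpy as np

sim = np.array([[0.9, 0.1, 0.5],
                [0.2, 0.8, 0.3]])
print(get_top_k_elements_per_row_sim_mat(sim, 2))
# [[0.9 0.  0.5]
#  [0.  0.8 0.3]] -- only the two largest scores survive in each row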



Note: The numpy.partition examples in this article were compiled by 纯净天空 from GitHub, MSDocs and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors, and distribution and use should follow the corresponding project's License. Do not reproduce without permission.

