
Python base.warning Function Code Examples


This article collects and summarizes typical usage examples of the mvpa.base.warning function in Python. If you have been wondering how the warning function is actually used, or are looking for concrete examples of calling it, the hand-picked code samples below should help.



A total of 20 code examples of the warning function are shown below, sorted by popularity by default.
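Before the examples, here is a minimal sketch of the basic calling pattern. This snippet is illustrative only: the check_min_samples function, its parameters, and the message text are made up for demonstration, and it assumes an old PyMVPA 0.x installation (Python 2 era) in which mvpa.base.warning is importable.

from mvpa.base import warning

def check_min_samples(nsamples, minimum=10):
    """Emit a non-fatal warning if a dataset looks too small to be useful."""
    if nsamples < minimum:
        # warning() only reports the problem; unlike raising an exception,
        # it lets execution continue with whatever fallback follows.
        warning("Only %d samples available; results may be unstable"
                % nsamples)
    return nsamples

check_min_samples(5)

As the 20 examples below show, the common pattern is a single formatted message string passed to warning(), typically issued just before falling back to a default, retrying, or re-raising an exception.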

Example 1: _SLcholesky_autoreg

def _SLcholesky_autoreg(C, nsteps=None, **kwargs):
    """Simple wrapper around cholesky to incrementally regularize the
    matrix until successful computation.

    For `nsteps` we boost diagonal 10-fold each time from the
    'epsilon' of the respective dtype. If None -- would proceed until
    reaching 1.
    """
    if nsteps is None:
        nsteps = -int(np.floor(np.log10(np.finfo(float).eps)))
    result = None
    for step in xrange(nsteps):
        epsilon_value = (10**step) * np.finfo(C.dtype).eps
        epsilon = epsilon_value * np.eye(C.shape[0])
        try:
            result = SLcholesky(C + epsilon, lower=True)
        except SLAError, e:
            warning("Cholesky decomposition lead to failure: %s.  "
                    "As requested, performing auto-regularization but "
                    "for better control you might prefer to regularize "
                    "yourself by providing lm parameter to GPR" % e)
            if step < nsteps-1:
                if __debug__:
                    debug("GPR", "Failed to obtain cholesky on "
                          "auto-regularization step %d value %g. Got %s."
                          " Boosting lambda more to reg. C."
                          % (step, epsilon_value, e))
                continue
            else:
                raise
Author: B-Rich | Project: PyMVPA | Lines: 30 | Source: gpr.py


Example 2: __init__

    def __init__(self, samples=None, **kwargs):
        """Initialize EEPDataset.

        :Parameters:
          samples: Filename (string) of a EEP binary file or an `EEPBin`
                   object
        """
        # dataset props defaults
        dt = t0 = channelids = None

        # default way to use the constructor: with filename
        if not samples is None:
            if isinstance(samples, str):
                # open the eep file
                try:
                    eb = EEPBin(samples)
                except RuntimeError, e:
                    warning("ERROR: EEPDatasets: Cannot open samples file %s" \
                            % samples) # should we make also error?
                    raise e
            elif isinstance(samples, EEPBin):
                # nothing special
                eb = samples
            else:
                raise ValueError, \
                      "EEPDataset constructor takes the filename of an " \
                      "EEP file or a EEPBin object as 'samples' argument."
            samples = eb.data
            dt = eb.dt
            channelids = eb.channels
            t0 = eb.t0
Author: gorlins | Project: PyMVPA | Lines: 31 | Source: eep.py


Example 3: test_confusion_based_error

    def test_confusion_based_error(self, l_clf):
        train = datasets['uni2medium']
        train = train[train.sa.train == 1]
        # to check if we fail to classify for 3 labels
        test3 = datasets['uni3medium']
        test3 = test3[test3.sa.train == 1]
        err = ConfusionBasedError(clf=l_clf)
        terr = TransferMeasure(l_clf, Splitter('train', attr_values=[1,1]),
                               postproc=BinaryFxNode(mean_mismatch_error,
                                                     'targets'))

        self.failUnlessRaises(UnknownStateError, err, None)
        """Shouldn't be able to access the state yet"""

        l_clf.train(train)
        e, te = err(None), terr(train)
        te = np.asscalar(te)
        self.failUnless(abs(e-te) < 1e-10,
            msg="ConfusionBasedError (%.2g) should be equal to TransferError "
                "(%.2g) on traindataset" % (e, te))

        # this will print nasty WARNING but it is ok -- it is just checking code
        # NB warnings are not printed while doing whole testing
        warning("Don't worry about the following warning.")
        if 'multiclass' in l_clf.__tags__:
            self.failIf(terr(test3) is None)

        # try copying the beast
        terr_copy = copy(terr)
Author: B-Rich | Project: PyMVPA | Lines: 29 | Source: test_transerror.py


Example 4: _getUniqueLengthNCombinations_binary

def _getUniqueLengthNCombinations_binary(L, n=None, sort=True):
    """Find all subsets of data

    :Parameters:
      L : list
        list of unique ids
      n : None or int
        If None, all possible subsets are returned. If n is specified (int),
        then only the ones of the length n are returned
      sort : bool
        Either to sort the resultant sequence

    Adopted from Alex Martelli:
    http://mail.python.org/pipermail/python-list/2001-January/067815.html
    """
    N = len(L)
    if N > 20 and n == 1:
        warning("getUniqueLengthNCombinations_binary should not be used for "
                "large N")
    result = []
    for X in range(2**N):
        x = [ L[i] for i in range(N) if X & (1L<<i) ]
        if n is None or len(x) == n:
            # yield x # if we wanted to use it as a generator
            result.append(x)
    result.sort()
    # if __debug__ and n is not None:
    #     # verify the result
    #     # would need scipy... screw it
    #     assert(len(result) == ...)
    return result
Author: gorlins | Project: PyMVPA | Lines: 31 | Source: support.py


Example 5: train

    def train(self, dataset):
        """Train classifier on a dataset

        Shouldn't be overridden in subclasses unless explicitly needed
        to do so
        """
        if dataset.nfeatures == 0 or dataset.nsamples == 0:
            raise DegenerateInputError, \
                  "Cannot train classifier on degenerate data %s" % dataset
        if __debug__:
            debug("CLF", "Training classifier %(clf)s on dataset %(dataset)s",
                  msgargs={'clf':self, 'dataset':dataset})

        self._pretrain(dataset)

        # remember the time when started training
        t0 = time.time()

        if dataset.nfeatures > 0:

            result = self._train(dataset)
        else:
            warning("Trying to train on dataset with no features present")
            if __debug__:
                debug("CLF",
                      "No features present for training, no actual training " \
                      "is called")
            result = None

        self.ca.training_time = time.time() - t0
        self._posttrain(dataset)
        return result
Author: geeragh | Project: PyMVPA | Lines: 32 | Source: base.py


Example 6: _load_anynifti

def _load_anynifti(src, ensure=False, enforce_dim=None):
    """Load/access NIfTI data from files or instances.

    Parameters
    ----------
    src : str or NiftiImage
      Filename of a NIfTI image or a `NiftiImage` instance.
    ensure : bool, optional
      If True, throw ValueError exception if the data cannot be loaded.
    enforce_dim : int or None
      If not None, it is the dimensionality of the data to be enforced,
      commonly 4D for the data, and 3D for the mask in case of fMRI.

    Returns
    -------
    NiftiImage or None
      If the source is not supported None is returned.

    Raises
    ------
    ValueError
      If there is a problem with data (variable dimensionality) or
      failed to load data and ensure=True.
    """
    nifti = None

    # figure out what type
    if isinstance(src, str):
        # open the nifti file
        try:
            nifti = NiftiImage(src)
        except RuntimeError, e:
            warning("ERROR: Cannot open NIfTI file %s" % src)
            raise e
Author: geeragh | Project: PyMVPA | Lines: 34 | Source: mri.py


Example 7: __init__

    def __init__(self, **kwargs):
        """Initialize an SMLR classifier.
        """

        """
        TODO:
         # Add in likelihood calculation
         # Add kernels, not just direct methods.
         """
        # init base class first
        Classifier.__init__(self, **kwargs)

        if _cStepwiseRegression is None and self.params.implementation == 'C':
            warning('SMLR: C implementation is not available.'
                    ' Using pure Python one')
            self.params.implementation = 'Python'

        # pylint friendly initializations
        self._ulabels = None
        """Unigue labels from the training set."""
        self.__weights_all = None
        """Contains all weights including bias values"""
        self.__weights = None
        """Just the weights, without the biases"""
        self.__biases = None
        """The biases, will remain none if has_bias is False"""
Author: geeragh | Project: PyMVPA | Lines: 26 | Source: smlr.py


Example 8: _setdebug

def _setdebug(obj, partname):
    """Helper to set level of debugging output for SG
    :Parameters:
      obj
        In SG debug output seems to be set per every object
      partname : basestring
        For what kind of object we are talking about... could be automated
        later on (TODO)
    """
    debugname = "SG_%s" % partname.upper()

    switch = {True: (shogun.Kernel.M_DEBUG, 'M_DEBUG', "enable"),
              False: (shogun.Kernel.M_ERROR, 'M_ERROR', "disable")}

    key = __debug__ and debugname in debug.active

    sglevel, slevel, progressfunc = switch[key]

    if __debug__:
        debug("SG_", "Setting verbosity for shogun.%s instance: %s to %s" %
              (partname, `obj`, slevel))
    obj.io.set_loglevel(sglevel)
    try:
        exec "obj.io.%s_progress()" % progressfunc
    except:
        warning("Shogun version installed has no way to enable progress" +
                " reports")
Author: gorlins | Project: PyMVPA | Lines: 27 | Source: svm.py


Example 9: getNiftiFromAnySource

def getNiftiFromAnySource(src, ensure=False, enforce_dim=None):
    """Load/access NIfTI data from files or instances.

    :Parameters:
      src: str | NiftiImage
        Filename of a NIfTI image or a `NiftiImage` instance.
      ensure : bool
        If True, throw ValueError exception if the data cannot be loaded.
      enforce_dim : int or None
        If not None, it is the dimensionality of the data to be enforced,
        commonly 4D for the data, and 3D for the mask in case of fMRI.

    :Returns:
      NiftiImage | None
        If the source is not supported None is returned.
    """
    nifti = None

    # figure out what type
    if isinstance(src, str):
        # open the nifti file
        try:
            nifti = NiftiImage(src)
        except RuntimeError, e:
            warning("ERROR: NiftiDatasets: Cannot open NIfTI file %s" \
                    % src)
            raise e
Author: gorlins | Project: PyMVPA | Lines: 27 | Source: nifti.py


Example 10: labelVoxel

    def labelVoxel(self, c, levels = None):

        if self.__referenceLevel is None:
            warning("You did not provide what level to use "
					"for reference. Assigning 0th level -- '%s'"
                    % (self._levels_dict[0],))
            self.setReferenceLevel(0)
            # return self.__referenceAtlas.labelVoxel(c, levels)

        c = self._checkRange(c)

        # obtain coordinates of the closest voxel
        cref = self._data[ self.__referenceLevel.indexes, c[2], c[1], c[0] ]
        dist = norm( (cref - c) * self.voxdim )
        if __debug__:
            debug('ATL__', "Closest referenced point for %s is "
                  "%s at distance %3.2f" % (`c`, `cref`, dist))
        if (self.distance - dist) >= 1e-3: # neglect everything smaller
            result = self.__referenceAtlas.labelVoxel(cref, levels)
            result['voxel_referenced'] = c
            result['distance'] = dist
        else:
            result = self.__referenceAtlas.labelVoxel(c, levels)
            if __debug__:
                debug('ATL__', "Closest referenced point is "
                      "further than desired distance %.2f" % self.distance)
            result['voxel_referenced'] = None
            result['distance'] = 0
        return result
Author: gorlins | Project: PyMVPA | Lines: 29 | Source: base.py


Example 11: _get_increments

    def _get_increments(self, ndim):
        """Creates a list of increments for a given dimensionality

        RF: lame yoh just cut-pasted and tuned up because everything
            depends on ndim...
        """
        # Set element_sizes
        element_sizes = self._element_sizes
        if element_sizes is None:
            element_sizes = np.ones(ndim)
        else:
            if (ndim != len(element_sizes)):
                raise ValueError, \
                      "Dimensionality mismatch: element_sizes %s provided " \
                      "to constructor had %i dimensions, whenever queried " \
                      "coordinate had %i" \
                      % (element_sizes, len(element_sizes), ndim)
        center = np.zeros(ndim)

        element_sizes = np.asanyarray(element_sizes)
        # What range for each dimension
        erange = np.ceil(self._radius / element_sizes).astype(int)

        tentative_increments = np.array(list(np.ndindex(tuple(erange*2 + 1)))) \
                               - erange
        # Filter out the ones beyond the "sphere"
        res = array([x for x in tentative_increments
                      if self._inner_radius
                      < self._distance_func(x * element_sizes, center)
                      <= self._radius])

        if not len(res):
            warning("%s defines no neighbors" % self)
        return res
Author: esc | Project: PyMVPA | Lines: 34 | Source: neighborhood.py


Example 12: _precall

    def _precall(self, testdataset, trainingdataset=None):
        """Generic part which trains the classifier if necessary
        """
        if not trainingdataset is None:
            if self.__train:
                # XXX can be pretty annoying if triggered inside an algorithm
                # where it cannot be switched of, but retraining might be
                # intended or at least not avoidable.
                # Additionally is_trained docs say:
                #   MUST BE USED WITH CARE IF EVER
                #
                # switching it off for now
                #if self.__clf.is_trained(trainingdataset):
                #    warning('It seems that classifier %s was already trained' %
                #            self.__clf + ' on dataset %s. Please inspect' \
                #                % trainingdataset)
                if self.ca.is_enabled('training_stats'):
                    self.__clf.ca.change_temporarily(
                        enable_ca=['training_stats'])
                self.__clf.train(trainingdataset)
                if self.ca.is_enabled('training_stats'):
                    self.ca.training_stats = \
                        self.__clf.ca.training_stats
                    self.__clf.ca.reset_changed_temporarily()

        if self.__clf.ca.is_enabled('trained_targets') \
               and not self.__clf.__is_regression__ \
               and not testdataset is None:
            newlabels = set(testdataset.sa[self.clf.get_space()].unique) \
                        - set(self.__clf.ca.trained_targets)
            if len(newlabels)>0:
                warning("Classifier %s wasn't trained to classify labels %s" %
                        (self.__clf, newlabels) +
                        " present in testing dataset. Make sure that you have" +
                        " not mixed order/names of the arguments anywhere")
Author: B-Rich | Project: PyMVPA | Lines: 35 | Source: transerror.py


Example 13: _call

    def _call(self, ds):
        # local binding
        generator = self._generator
        node = self._node
        ca = self.ca
        space = self.get_space()
        concat_as = self._concat_as

        if self.ca.is_enabled("stats") and (not node.ca.has_key("stats") or
                                            not node.ca.is_enabled("stats")):
            warning("'stats' conditional attribute was enabled, but "
                    "the assigned node '%s' either doesn't support it, "
                    "or it is disabled" % node)
        # precharge conditional attributes
        ca.datasets = []

        # run the node an all generated datasets
        results = []
        for i, sds in enumerate(generator.generate(ds)):
            if ca.is_enabled("datasets"):
                # store dataset in ca
                ca.datasets.append(sds)
            # run the beast
            result = node(sds)
            # callback
            if not self._callback is None:
                self._callback(data=sds, node=node, result=result)
            # subclass postprocessing
            result = self._repetition_postcall(sds, node, result)
            if space:
                # XXX maybe try to get something more informative from the
                # processing node (e.g. in 0.5 it used to be 'chunks'->'chunks'
                # to indicate what was trained and what was tested. Now it is
                # more tricky, because `node` could be anything
                result.set_attr(space, (i,))
            # store
            results.append(result)

            if ca.is_enabled("stats") and node.ca.has_key("stats") \
               and node.ca.is_enabled("stats"):
                if not ca.is_set('stats'):
                    # create empty stats container of matching type
                    ca.stats = node.ca['stats'].value.__class__()
                # harvest summary stats
                ca['stats'].value.__iadd__(node.ca['stats'].value)

        # charge condition attribute
        self.ca.repetition_results = results

        # stack all results into a single Dataset
        if concat_as == 'samples':
            results = vstack(results)
        elif concat_as == 'features':
            results = hstack(results)
        else:
            raise ValueError("Unkown concatenation mode '%s'" % concat_as)
        # no need to store the raw results, since the Measure class will
        # automatically store them in a CA
        return results
Author: esc | Project: PyMVPA | Lines: 59 | Source: base.py


Example 14: seed

def seed(random_seed):
    if __debug__:
        debug('SG', "Seeding shogun's RNG with %s" % random_seed)
    try:
        # reuse the same seed for shogun
        shogun.Library.Math_init_random(random_seed)
    except Exception, e:
        warning('Shogun cannot be seeded due to %s' % (e,))
Author: B-Rich | Project: PyMVPA | Lines: 8 | Source: svm.py


Example 15: corr_error_prob

def corr_error_prob(predicted, target):
    """Computes p-value of correlation between the target and the predicted
    values.
    """
    from mvpa.base import warning
    warning("p-value for correlation is implemented only when scipy is "
            "available. Bogus value -1.0 is returned otherwise")
    return -1.0
Author: B-Rich | Project: PyMVPA | Lines: 8 | Source: errorfx.py


Example 16: _pvalue

def _pvalue(x, cdf_func, tail, return_tails=False, name=None):
    """Helper function to return p-value(x) given cdf and tail

    Parameters
    ----------
    cdf_func : callable
      Function to be used to derive cdf values for x
    tail : str ('left', 'right', 'any', 'both')
      Which tail of the distribution to report. For 'any' and 'both'
      it chooses the tail it belongs to based on the comparison to
      p=0.5. In the case of 'any' significance is taken like in a
      one-tailed test.
    return_tails : bool
      If True, a tuple return (pvalues, tails), where tails contain
      1s if value was from the right tail, and 0 if the value was
      from the left tail.
    """
    is_scalar = np.isscalar(x)
    if is_scalar:
        x = [x]

    cdf = cdf_func(x)

    if __debug__ and 'CHECK_STABILITY' in debug.active:
        cdf_min, cdf_max = np.min(cdf), np.max(cdf)
        if cdf_min < 0 or cdf_max > 1.0:
            s = ('', ' for %s' % name)[int(name is not None)]
            warning('Stability check of cdf %s failed%s. Min=%s, max=%s' % \
                  (cdf_func, s, cdf_min, cdf_max))

    # no escape but to assure that CDF is in the right range. Some
    # distributions from scipy tend to jump away from [0,1]
    cdf = np.clip(cdf, 0, 1.0)

    if tail == 'left':
        if return_tails:
            right_tail = np.zeros(cdf.shape, dtype=bool)
    elif tail == 'right':
        cdf = 1 - cdf
        if return_tails:
            right_tail = np.ones(cdf.shape, dtype=bool)
    elif tail in ('any', 'both'):
        right_tail = (cdf >= 0.5)
        cdf[right_tail] = 1.0 - cdf[right_tail]
        if tail == 'both':
            # we need report the area under both tails
            # XXX this is only meaningful for symmetric distributions
            cdf *= 2

    # Assure that NaNs didn't get significant value
    cdf[np.isnan(x)] = 1.0
    if is_scalar: res = cdf[0]
    else:         res = cdf

    if return_tails:
        return (res, right_tail)
    else:
        return res
Author: arokem | Project: PyMVPA | Lines: 58 | Source: stats.py


Example 17: _call

    def _call(self, dataset):
        """Perform the ROI search.
        """
        # local binding
        nproc = self.nproc

        if nproc is None and externals.exists('pprocess'):
            import pprocess
            try:
                nproc = pprocess.get_number_of_cores() or 1
            except AttributeError:
                warning("pprocess version %s has no API to figure out maximal "
                        "number of cores. Using 1"
                        % externals.versions['pprocess'])
                nproc = 1
        # train the queryengine
        self._queryengine.train(dataset)

        # decide whether to run on all possible center coords or just a provided
        # subset
        if isinstance(self.__roi_ids, str):
            roi_ids = dataset.fa[self.__roi_ids].value.nonzero()[0]
        elif self.__roi_ids is not None:
            roi_ids = self.__roi_ids
            # safeguard against stupidity
            if __debug__:
                if max(roi_ids) >= dataset.nfeatures:
                    raise IndexError, \
                          "Maximal center_id found is %s whenever given " \
                          "dataset has only %d features" \
                          % (max(roi_ids), dataset.nfeatures)
        else:
            roi_ids = np.arange(dataset.nfeatures)

        # pass to subclass
        results, roi_sizes = self._sl_call(dataset, roi_ids, nproc)

        if not roi_sizes is None:
            self.ca.roi_sizes = roi_sizes

        if 'mapper' in dataset.a:
            # since we know the space we can stick the original mapper into the
            # results as well
            if self.__roi_ids is None:
                results.a['mapper'] = copy.copy(dataset.a.mapper)
            else:
                # there is an additional selection step that needs to be
                # expressed by another mapper
                mapper = copy.copy(dataset.a.mapper)
                mapper.append(StaticFeatureSelection(roi_ids,
                                                     dshape=dataset.shape[1:]))
                results.a['mapper'] = mapper

        # charge state
        self.ca.raw_results = results

        # return raw results, base-class will take care of transformations
        return results
Author: esc | Project: PyMVPA | Lines: 58 | Source: searchlight.py


Example 18: _setRetrainable

    def _setRetrainable(self, value, force=False):
        """Assign value of retrainable parameter

        If retrainable flag is to be changed, classifier has to be
        untrained.  Also internal attributes such as _changedData,
        __changedData_isset, and __idhashes should be initialized if
        it becomes retrainable
        """
        pretrainable = self.params['retrainable']
        if (force or value != pretrainable.value) \
               and 'retrainable' in self._clf_internals:
            if __debug__:
                debug("CLF_", "Setting retrainable to %s" % value)
            if 'meta' in self._clf_internals:
                warning("Retrainability is not yet crafted/tested for "
                        "meta classifiers. Unpredictable behavior might occur")
            # assure that we don't drag anything behind
            if self.trained:
                self.untrain()
            states = self.states
            if not value and states.isKnown('retrained'):
                states.remove('retrained')
                states.remove('repredicted')
            if value:
                if not 'retrainable' in self._clf_internals:
                    warning("Setting of flag retrainable for %s has no effect"
                            " since classifier has no such capability. It would"
                            " just lead to resources consumption and slowdown"
                            % self)
                states.add(StateVariable(enabled=True,
                        name='retrained',
                        doc="Either retrainable classifier was retrained"))
                states.add(StateVariable(enabled=True,
                        name='repredicted',
                        doc="Either retrainable classifier was repredicted"))

            pretrainable.value = value

            # if retrainable we need to keep track of things
            if value:
                self.__idhashes = {'traindata': None, 'labels': None,
                                   'testdata': None} #, 'testtraindata': None}
                if __debug__ and 'CHECK_RETRAIN' in debug.active:
                    # ??? it is not clear though if idhash is faster than
                    # simple comparison of (dataset != __traineddataset).any(),
                    # but if we like to get rid of __traineddataset then we
                    # should use idhash anyways
                    self.__trained = self.__idhashes.copy() # just same Nones
                self.__resetChangedData()
                self.__invalidatedChangedData = {}
            elif 'retrainable' in self._clf_internals:
                #self.__resetChangedData()
                self.__changedData_isset = False
                self._changedData = None
                self.__idhashes = None
                if __debug__ and 'CHECK_RETRAIN' in debug.active:
                    self.__trained = None
Author: gorlins | Project: PyMVPA | Lines: 57 | Source: base.py


Example 19: fit

    def fit(self, measure, wdata, vdata=None):
        """Fit the distribution by performing multiple cycles which repeatedly
        permuted labels in the training dataset.

        Parameters
        ----------
        measure: (`Featurewise`)`DatasetMeasure` or `TransferError`
          TransferError instance used to compute all errors.
        wdata: `Dataset` which gets permuted and used to compute the
          measure/transfer error multiple times.
        vdata: `Dataset` used for validation.
          If provided measure is assumed to be a `TransferError` and
          working and validation dataset are passed onto it.
        """
        # TODO: place exceptions separately so we could avoid circular imports
        from mvpa.clfs.base import LearnerError

        dist_samples = []
        """Holds the values for randomized labels."""

        # estimate null-distribution
        for p in xrange(self.__permutations):
            # new permutation all the time
            # but only permute the training data and keep the testdata constant
            #
            if __debug__:
                debug('STATMC', "Doing %i permutations: %i" \
                      % (self.__permutations, p+1), cr=True)

            # TODO this really needs to be more clever! If data samples are
            # shuffled within a class it really makes no difference for the
            # classifier, hence the number of permutations to estimate the
            # null-distribution of transfer errors can be reduced dramatically
            # when the *right* permutations (the ones that matter) are done.
            permuted_wdata = wdata.copy('shallow')
            permuted_wdata.permute_attr(
                attr=self.permute_attr,
                chunks_attr=self.chunks_attr,
                col=self.permute_col,
                assure_permute=self.assure_permute)

            # decide on the arguments to measure
            if not vdata is None:
                measure_args = [vdata, permuted_wdata]
            else:
                measure_args = [permuted_wdata]

            # compute and store the measure of this permutation
            # assume it has `TransferError` interface
            try:
                res = measure(*measure_args)
            except LearnerError, e:
                warning('Failed to obtain value from %s due to %s.  Measurement'
                        ' was skipped, which could lead to unstable and/or'
                        ' incorrect assessment of the null_dist' % (measure, e))
            res = np.asanyarray(res)
            dist_samples.append(res)
Author: arokem | Project: PyMVPA | Lines: 57 | Source: stats.py


Example 20: _predict

    def _predict(self, data):
        """Predict values for the data
        """
        # libsvm needs doubles
        src = _data2ls(data)
        ca = self.ca

        predictions = [ self.model.predict(p) for p in src ]

        if ca.is_enabled('estimates'):
            if self.__is_regression__:
                estimates = [ self.model.predict_values_raw(p)[0] for p in src ]
            else:
                # if 'trained_targets' are literal they have to be mapped
                if np.issubdtype(self.ca.trained_targets.dtype, 'c'):
                    trained_targets = self._attrmap.to_numeric(
                            self.ca.trained_targets)
                else:
                    trained_targets = self.ca.trained_targets
                nlabels = len(trained_targets)
                # XXX We do duplicate work. model.predict calls
                # predict_values_raw internally and then does voting or
                # thresholding. So if speed becomes a factor we might
                # want to move out logic from libsvm over here to base
                # predictions on obtained values, or adjust libsvm to
                # spit out values from predict() as well
                if nlabels == 2:
                    # Apparently libsvm reorders labels so we need to
                    # track (1,0) values instead of (0,1) thus just
                    # lets take negative reverse
                    estimates = [ self.model.predict_values(p)[(trained_targets[1],
                                                            trained_targets[0])]
                               for p in src ]
                    if len(estimates) > 0:
                        if __debug__:
                            debug("SVM",
                                  "Forcing estimates to be ndarray and reshaping"
                                  " them into 1D vector")
                        estimates = np.asarray(estimates).reshape(len(estimates))
                else:
                    # In multiclass we return dictionary for all pairs
                    # of labels, since libsvm does 1-vs-1 pairs
                    estimates = [ self.model.predict_values(p) for p in src ]
            ca.estimates = estimates

        if ca.is_enabled("probabilities"):
            # XXX Is this really necessary? yoh don't think so since
            # assignment to ca is doing the same
            #self.probabilities = [ self.model.predict_probability(p)
            #                       for p in src ]
            try:
                ca.probabilities = [ self.model.predict_probability(p)
                                         for p in src ]
            except TypeError:
                warning("Current SVM %s doesn't support probability " %
                        self + " estimation.")
        return predictions
Author: arokem | Project: PyMVPA | Lines: 57 | Source: svm.py



Note: The mvpa.base.warning function examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, who retain the copyright; please consult the corresponding project's license before redistributing or reusing the code. Do not reproduce without permission.

