C++ intrusive_ptr Class Code Examples


This article collects typical usage examples of the C++ intrusive_ptr class. If you have been wondering how intrusive_ptr is used in practice, the selected class examples below may help.



Twenty code examples of the intrusive_ptr class are shown below, ordered by popularity.
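For readers who have not used the idiom before, here is a minimal sketch, under assumption, of how boost::intrusive_ptr is typically wired up: the pointee itself carries the reference count, and the free functions intrusive_ptr_add_ref and intrusive_ptr_release (found through argument-dependent lookup) adjust it. The Node class and its members are illustrative names, not taken from the examples that follow.

#include <boost/intrusive_ptr.hpp>
#include <atomic>
#include <iostream>

// Hypothetical pointee: it owns its own reference count.
class Node {
public:
    Node() : refCount_(0) {}
    virtual ~Node() {}

    // Hooks required by boost::intrusive_ptr, found via argument-dependent lookup.
    friend void intrusive_ptr_add_ref(Node* p) {
        p->refCount_.fetch_add(1, std::memory_order_relaxed);
    }
    friend void intrusive_ptr_release(Node* p) {
        if (p->refCount_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }

private:
    std::atomic<unsigned> refCount_;
};

int main() {
    boost::intrusive_ptr<Node> a(new Node());  // constructor calls intrusive_ptr_add_ref; count is 1
    boost::intrusive_ptr<Node> b = a;          // copying bumps the count to 2
    std::cout << (a == b) << std::endl;        // prints 1: both refer to the same Node
    return 0;
}   // both pointers are destroyed; the count drops to 0 and the Node deletes itself

The MongoDB, Boost and libcppa snippets below all rely on some variation of this contract: the object carries its own count, and the two free functions manage it.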

Example 1: coalesce

    bool DocumentSourceCursor::coalesce(const intrusive_ptr<DocumentSource>& nextSource) {
        // Note: Currently we assume the $limit is logically after any $sort or
        // $match. If we ever pull in $match or $sort using this method, we
        // will need to keep track of the order of the sub-stages.

        if (!_limit) {
            _limit = dynamic_cast<DocumentSourceLimit*>(nextSource.get());
            return _limit.get(); // false if next is not a $limit
        }
        else {
            return _limit->coalesce(nextSource);
        }

        return false;
    }
Author: chmanie | Project: mongo | Lines: 15 | Source: document_source_cursor.cpp


Example 2: pAnd

bool DocumentSourceFilter::coalesce(
    const intrusive_ptr<DocumentSource> &pNextSource) {

    /* we only know how to coalesce other filters */
    DocumentSourceFilter *pDocFilter =
        dynamic_cast<DocumentSourceFilter *>(pNextSource.get());
    if (!pDocFilter)
        return false;

    /*
      Two adjacent filters can be combined by creating a conjunction of
      their predicates.
     */
    intrusive_ptr<ExpressionNary> pAnd(ExpressionAnd::create());
    pAnd->addOperand(pFilter);
    pAnd->addOperand(pDocFilter->pFilter);
    pFilter = pAnd;

    return true;
}
Author: JeremyOT | Project: mongo | Lines: 20 | Source: document_source_filter.cpp


Example 3: accept

bool DocumentSourceMatch::accept(
    const intrusive_ptr<Document> &pDocument) const {

    /*
      The matcher only takes BSON documents, so we have to make one.

      LATER
      We could optimize this by making a document with only the
      fields referenced by the Matcher.  We could do this by looking inside
      the Matcher's BSON before it is created, and recording those.  The
      easiest implementation might be to hold onto an ExpressionDocument
      in here, and give that pDocument to create the created subset of
      fields, and then convert that instead.
    */
    BSONObjBuilder objBuilder;
    pDocument->toBson(&objBuilder);
    BSONObj obj(objBuilder.done());

    return matcher.matches(obj);
}
Author: JeremyOT | Project: mongo | Lines: 20 | Source: document_source_match.cpp


Example 4: pPipeline

    intrusive_ptr<Pipeline> Pipeline::parseCommand(
        string &errmsg, BSONObj &cmdObj,
        const intrusive_ptr<ExpressionContext> &pCtx) {
        intrusive_ptr<Pipeline> pPipeline(new Pipeline(pCtx));
        vector<BSONElement> pipeline;

        /* gather the specification for the aggregation */
        for(BSONObj::iterator cmdIterator = cmdObj.begin();
                cmdIterator.more(); ) {
            BSONElement cmdElement(cmdIterator.next());
            const char *pFieldName = cmdElement.fieldName();

            // ignore top-level fields prefixed with $. They are for the command processor, not us.
            if (pFieldName[0] == '$') {
                continue;
            }

            /* look for the aggregation command */
            if (!strcmp(pFieldName, commandName)) {
                pPipeline->collectionName = cmdElement.String();
                continue;
            }

            /* check for the collection name */
            if (!strcmp(pFieldName, pipelineName)) {
                pipeline = cmdElement.Array();
                continue;
            }

            /* check for explain option */
            if (!strcmp(pFieldName, explainName)) {
                pPipeline->explain = cmdElement.Bool();
                continue;
            }

            /* if the request came from the router, we're in a shard */
            if (!strcmp(pFieldName, fromRouterName)) {
                pCtx->setInShard(cmdElement.Bool());
                continue;
            }

            /* check for debug options */
            if (!strcmp(pFieldName, splitMongodPipelineName)) {
                pPipeline->splitMongodPipeline = true;
                continue;
            }

            /* we didn't recognize a field in the command */
            ostringstream sb;
            sb <<
               "unrecognized field \"" <<
               cmdElement.fieldName();
            errmsg = sb.str();
            return intrusive_ptr<Pipeline>();
        }

        /*
          If we get here, we've harvested the fields we expect for a pipeline.

          Set up the specified document source pipeline.
        */
        SourceContainer& sources = pPipeline->sources; // shorthand

        /* iterate over the steps in the pipeline */
        const size_t nSteps = pipeline.size();
        for(size_t iStep = 0; iStep < nSteps; ++iStep) {
            /* pull out the pipeline element as an object */
            BSONElement pipeElement(pipeline[iStep]);
            uassert(15942, str::stream() << "pipeline element " <<
                    iStep << " is not an object",
                    pipeElement.type() == Object);
            BSONObj bsonObj(pipeElement.Obj());

            // Parse a pipeline stage from 'bsonObj'.
            uassert(16435, "A pipeline stage specification object must contain exactly one field.",
                    bsonObj.nFields() == 1);
            BSONElement stageSpec = bsonObj.firstElement();
            const char* stageName = stageSpec.fieldName();

            // Create a DocumentSource pipeline stage from 'stageSpec'.
            StageDesc key;
            key.pName = stageName;
            const StageDesc* pDesc = (const StageDesc*)
                    bsearch(&key, stageDesc, nStageDesc, sizeof(StageDesc),
                            stageDescCmp);

            uassert(16436,
                    str::stream() << "Unrecognized pipeline stage name: '" << stageName << "'",
                    pDesc);
            intrusive_ptr<DocumentSource> stage = (*pDesc->pFactory)(&stageSpec, pCtx);
            verify(stage);
            stage->setPipelineStep(iStep);
            sources.push_back(stage);
        }

        /* if there aren't any pipeline stages, there's nothing more to do */
        if (sources.empty())
            return pPipeline;

        /*
//......... part of the code omitted here .........
Author: darkiri | Project: mongo | Lines: 101 | Source: pipeline.cpp


Example 5: get_pointer

inline typename boost::interprocess::intrusive_ptr<T, VP>::pointer
   get_pointer(intrusive_ptr<T, VP> p)
{  return p.get();   }
Author: AndresGalaviz | Project: NAO | Lines: 3 | Source: intrusive_ptr.hpp


Example 6: operator!=

//!Returns a != b.get().
//!Does not throw
template<class T, class VP> inline
bool operator!=(const typename intrusive_ptr<T, VP>::pointer &a,
                       intrusive_ptr<T, VP> const & b)
{  return a != b.get(); }
Author: AndresGalaviz | Project: NAO | Lines: 6 | Source: intrusive_ptr.hpp


Example 7: intrusive_ptr

 template<class U> intrusive_ptr(intrusive_ptr<U> const & rhs): p_(rhs.get())
 {
     if(p_ != 0) intrusive_ptr_add_ref(p_);
 }
Author: Albermg7 | Project: boost | Lines: 4 | Source: intrusive_ptr.hpp
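The converting constructor above is what lets an intrusive_ptr to a derived class be stored as an intrusive_ptr to its base, taking an extra reference as it is copied. A minimal sketch of that conversion, assuming a Boost version that provides boost::intrusive_ref_counter (Base and Derived are illustrative names):

#include <boost/intrusive_ptr.hpp>
#include <boost/smart_ptr/intrusive_ref_counter.hpp>

// intrusive_ref_counter supplies the embedded count and the add_ref/release hooks.
struct Base : boost::intrusive_ref_counter<Base> {
    virtual ~Base() {}
};
struct Derived : Base {};

int main() {
    boost::intrusive_ptr<Derived> d(new Derived());  // count is 1
    boost::intrusive_ptr<Base> b = d;                // converting constructor adds a reference; count is 2
    return 0;
}

The same mechanism is at work whenever the MongoDB examples push a concrete stage into a container of intrusive_ptr<DocumentSource>.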


Example 8: move

std::shared_ptr<PlanExecutor> PipelineD::prepareExecutor(
    OperationContext* txn,
    Collection* collection,
    const NamespaceString& nss,
    const intrusive_ptr<Pipeline>& pipeline,
    const intrusive_ptr<ExpressionContext>& expCtx,
    const intrusive_ptr<DocumentSourceSort>& sortStage,
    const DepsTracker& deps,
    const BSONObj& queryObj,
    BSONObj* sortObj,
    BSONObj* projectionObj) {
    // The query system has the potential to use an index to provide a non-blocking sort and/or to
    // use the projection to generate a covered plan. If this is possible, it is more efficient to
    // let the query system handle those parts of the pipeline. If not, it is more efficient to use
    // a $sort and/or a ParsedDeps object. Thus, we will determine whether the query system can
    // provide a non-blocking sort or a covered projection before we commit to a PlanExecutor.
    //
    // To determine if the query system can provide a non-blocking sort, we pass the
    // NO_BLOCKING_SORT planning option, meaning 'getExecutor' will not produce a PlanExecutor if
    // the query system would use a blocking sort stage.
    //
    // To determine if the query system can provide a covered projection, we pass the
    // NO_UNCOVERED_PROJECTS planning option, meaning 'getExecutor' will not produce a PlanExecutor
    // if the query system would need to fetch the document to do the projection. The following
    // logic uses the above strategies, with multiple calls to 'attemptToGetExecutor' to determine
    // the most efficient way to handle the $sort and $project stages.
    //
    // LATER - We should attempt to determine if the results from the query are returned in some
    // order so we can then apply other optimizations there are tickets for, such as SERVER-4507.
    size_t plannerOpts = QueryPlannerParams::DEFAULT | QueryPlannerParams::NO_BLOCKING_SORT;

    // If we are connecting directly to the shard rather than through a mongos, don't filter out
    // orphaned documents.
    if (ShardingState::get(txn)->needCollectionMetadata(txn, nss.ns())) {
        plannerOpts |= QueryPlannerParams::INCLUDE_SHARD_FILTER;
    }

    if (deps.hasNoRequirements()) {
        // If we don't need any fields from the input document, performing a count is faster, and
        // will output empty documents, which is okay.
        plannerOpts |= QueryPlannerParams::IS_COUNT;
    }

    // The only way to get a text score is to let the query system handle the projection. In all
    // other cases, unless the query system can do an index-covered projection and avoid going to
    // the raw record at all, it is faster to have ParsedDeps filter the fields we need.
    if (!deps.needTextScore) {
        plannerOpts |= QueryPlannerParams::NO_UNCOVERED_PROJECTIONS;
    }

    std::shared_ptr<PlanExecutor> exec;

    BSONObj emptyProjection;
    if (sortStage) {
        // See if the query system can provide a non-blocking sort.
        auto swExecutorSort = attemptToGetExecutor(
            txn, collection, expCtx, queryObj, emptyProjection, *sortObj, plannerOpts);

        if (swExecutorSort.isOK()) {
            // Success! Now see if the query system can also cover the projection.
            auto swExecutorSortAndProj = attemptToGetExecutor(
                txn, collection, expCtx, queryObj, *projectionObj, *sortObj, plannerOpts);

            if (swExecutorSortAndProj.isOK()) {
                // Success! We have a non-blocking sort and a covered projection.
                exec = std::move(swExecutorSortAndProj.getValue());
            } else {
                // The query system couldn't cover the projection.
                *projectionObj = BSONObj();
                exec = std::move(swExecutorSort.getValue());
            }

            // We know the sort is being handled by the query system, so remove the $sort stage.
            pipeline->sources.pop_front();

            if (sortStage->getLimitSrc()) {
                // We need to reinsert the coalesced $limit after removing the $sort.
                pipeline->sources.push_front(sortStage->getLimitSrc());
            }
            return exec;
        }
        // The query system can't provide a non-blocking sort.
        *sortObj = BSONObj();
    }

    // Either there was no $sort stage, or the query system could not provide a non-blocking
    // sort.
    dassert(sortObj->isEmpty());

    // See if the query system can cover the projection.
    auto swExecutorProj = attemptToGetExecutor(
        txn, collection, expCtx, queryObj, *projectionObj, *sortObj, plannerOpts);
    if (swExecutorProj.isOK()) {
        // Success! We have a covered projection.
        return std::move(swExecutorProj.getValue());
    }

    // The query system couldn't provide a covered projection.
    *projectionObj = BSONObj();
    // If this doesn't work, nothing will.
//......... part of the code omitted here .........
Author: DCEngines | Project: mongo | Lines: 101 | Source: pipeline_d.cpp


Example 9: createRandomCursorExecutor

shared_ptr<PlanExecutor> PipelineD::prepareCursorSource(
    OperationContext* txn,
    Collection* collection,
    const NamespaceString& nss,
    const intrusive_ptr<Pipeline>& pPipeline,
    const intrusive_ptr<ExpressionContext>& pExpCtx) {
    // We will be modifying the source vector as we go.
    Pipeline::SourceContainer& sources = pPipeline->sources;

    // Inject a MongodImplementation to sources that need them.
    for (auto&& source : sources) {
        DocumentSourceNeedsMongod* needsMongod =
            dynamic_cast<DocumentSourceNeedsMongod*>(source.get());
        if (needsMongod) {
            needsMongod->injectMongodInterface(std::make_shared<MongodImplementation>(pExpCtx));
        }
    }

    if (!sources.empty()) {
        if (sources.front()->isValidInitialSource()) {
            if (dynamic_cast<DocumentSourceMergeCursors*>(sources.front().get())) {
                // Enable the hooks for setting up authentication on the subsequent internal
                // connections we are going to create. This would normally have been done
                // when SetShardVersion was called, but since SetShardVersion is never called
                // on secondaries, this is needed.
                ShardedConnectionInfo::addHook();
            }
            return std::shared_ptr<PlanExecutor>();  // don't need a cursor
        }

        auto sampleStage = dynamic_cast<DocumentSourceSample*>(sources.front().get());
        // Optimize an initial $sample stage if possible.
        if (collection && sampleStage) {
            const long long sampleSize = sampleStage->getSampleSize();
            const long long numRecords = collection->getRecordStore()->numRecords(txn);
            auto exec = createRandomCursorExecutor(collection, txn, sampleSize, numRecords);
            if (exec) {
                // Replace $sample stage with $sampleFromRandomCursor stage.
                sources.pop_front();
                std::string idString = collection->ns().isOplog() ? "ts" : "_id";
                sources.emplace_front(DocumentSourceSampleFromRandomCursor::create(
                    pExpCtx, sampleSize, idString, numRecords));

                const BSONObj initialQuery;
                return addCursorSource(
                    pPipeline, pExpCtx, exec, pPipeline->getDependencies(initialQuery));
            }
        }
    }

    // Look for an initial match. This works whether we got an initial query or not. If not, it
    // results in a "{}" query, which will be what we want in that case.
    const BSONObj queryObj = pPipeline->getInitialQuery();
    if (!queryObj.isEmpty()) {
        if (dynamic_cast<DocumentSourceMatch*>(sources.front().get())) {
            // If a $match query is pulled into the cursor, the $match is redundant, and can be
            // removed from the pipeline.
            sources.pop_front();
        } else {
            // A $geoNear stage, the only other stage that can produce an initial query, is also
            // a valid initial stage and will be handled above.
            MONGO_UNREACHABLE;
        }
    }

    // Find the set of fields in the source documents depended on by this pipeline.
    DepsTracker deps = pPipeline->getDependencies(queryObj);

    BSONObj projForQuery = deps.toProjection();

    /*
      Look for an initial sort; we'll try to add this to the
      Cursor we create.  If we're successful in doing that (further down),
      we'll remove the $sort from the pipeline, because the documents
      will already come sorted in the specified order as a result of the
      index scan.
    */
    intrusive_ptr<DocumentSourceSort> sortStage;
    BSONObj sortObj;
    if (!sources.empty()) {
        sortStage = dynamic_cast<DocumentSourceSort*>(sources.front().get());
        if (sortStage) {
            // build the sort key
            sortObj = sortStage->serializeSortKey(/*explain*/ false).toBson();
        }
    }

    // Create the PlanExecutor.
    auto exec = prepareExecutor(txn,
                                collection,
                                nss,
                                pPipeline,
                                pExpCtx,
                                sortStage,
                                deps,
                                queryObj,
                                &sortObj,
                                &projForQuery);

    return addCursorSource(pPipeline, pExpCtx, exec, deps, queryObj, sortObj, projForQuery);
//......... part of the code omitted here .........
Author: DCEngines | Project: mongo | Lines: 101 | Source: pipeline_d.cpp


Example 10: swap

template<class T> void swap(intrusive_ptr<T> & lhs, intrusive_ptr<T> & rhs)
{
    lhs.swap(rhs);
}
Author: Albermg7 | Project: boost | Lines: 4 | Source: intrusive_ptr.hpp


Example 11: dynamic_pointer_cast

template<class T, class U> intrusive_ptr<T> dynamic_pointer_cast(intrusive_ptr<U> const & p)
{
    return dynamic_cast<T *>(p.get());
}
Author: Albermg7 | Project: boost | Lines: 4 | Source: intrusive_ptr.hpp
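Like std::dynamic_pointer_cast for shared_ptr, this overload performs a checked downcast and yields an empty pointer when the dynamic type does not match. A brief usage sketch under the same assumptions as the earlier sketch (Base, Derived and Other are illustrative names):

#include <boost/intrusive_ptr.hpp>
#include <boost/smart_ptr/intrusive_ref_counter.hpp>
#include <iostream>

struct Base : boost::intrusive_ref_counter<Base> { virtual ~Base() {} };
struct Derived : Base {};
struct Other : Base {};

int main() {
    boost::intrusive_ptr<Base> b(new Derived());

    // The pointee really is a Derived, so this cast succeeds.
    boost::intrusive_ptr<Derived> d = boost::dynamic_pointer_cast<Derived>(b);

    // The pointee is not an Other, so this cast yields an empty pointer.
    boost::intrusive_ptr<Other> o = boost::dynamic_pointer_cast<Other>(b);

    std::cout << (d ? "Derived: ok" : "Derived: failed") << ", "
              << (o ? "Other: ok" : "Other: failed") << std::endl;  // prints "Derived: ok, Other: failed"
    return 0;
}

The MongoDB snippets above achieve the same effect manually by calling dynamic_cast on the result of .get(), as in Example 1's coalesce.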


Example 12: pShardSplit

    bool PipelineCommand::executePipeline(
        BSONObjBuilder &result, string &errmsg, const string &ns,
        intrusive_ptr<Pipeline> &pPipeline,
        intrusive_ptr<DocumentSourceCursor> &pSource,
        intrusive_ptr<ExpressionContext> &pCtx) {

        /* this is the normal non-debug path */
        if (!pPipeline->getSplitMongodPipeline())
            return pPipeline->run(result, errmsg, pSource);

        /* setup as if we're in the router */
        pCtx->setInRouter(true);

        /*
          Here, we'll split the pipeline in the same way we would for sharding,
          for testing purposes.

          Run the shard pipeline first, then feed the results into the remains
          of the existing pipeline.

          Start by splitting the pipeline.
         */
        intrusive_ptr<Pipeline> pShardSplit(
            pPipeline->splitForSharded());

        /*
          Write the split pipeline as we would in order to transmit it to
          the shard servers.
        */
        BSONObjBuilder shardBuilder;
        pShardSplit->toBson(&shardBuilder);
        BSONObj shardBson(shardBuilder.done());

        DEV (log() << "\n---- shardBson\n" <<
             shardBson.jsonString(Strict, 1) << "\n----\n").flush();

        /* for debugging purposes, show what the pipeline now looks like */
        DEV {
            BSONObjBuilder pipelineBuilder;
            pPipeline->toBson(&pipelineBuilder);
            BSONObj pipelineBson(pipelineBuilder.done());
            (log() << "\n---- pipelineBson\n" <<
             pipelineBson.jsonString(Strict, 1) << "\n----\n").flush();
        }

        /* on the shard servers, create the local pipeline */
        intrusive_ptr<ExpressionContext> pShardCtx(
            ExpressionContext::create(&InterruptStatusMongod::status));
        intrusive_ptr<Pipeline> pShardPipeline(
            Pipeline::parseCommand(errmsg, shardBson, pShardCtx));
        if (!pShardPipeline.get()) {
            return false;
        }

        /* run the shard pipeline */
        BSONObjBuilder shardResultBuilder;
        string shardErrmsg;
        pShardPipeline->run(shardResultBuilder, shardErrmsg, pSource);
        BSONObj shardResult(shardResultBuilder.done());

        /* pick out the shard result, and prepare to read it */
        intrusive_ptr<DocumentSourceBsonArray> pShardSource;
        BSONObjIterator shardIter(shardResult);
        while(shardIter.more()) {
            BSONElement shardElement(shardIter.next());
            const char *pFieldName = shardElement.fieldName();

            if ((strcmp(pFieldName, "result") == 0) ||
                (strcmp(pFieldName, "serverPipeline") == 0)) {
                pShardSource = DocumentSourceBsonArray::create(
                    &shardElement, pCtx);

                /*
                  Connect the output of the shard pipeline with the mongos
                  pipeline that will merge the results.
                */
                return pPipeline->run(result, errmsg, pShardSource);
            }
        }

        /* NOTREACHED */
        verify(false);
        return false;
    }
Author: JKO | Project: mongo | Lines: 84 | Source: pipeline_command.cpp


Example 13: toBson

BSONObj toBson(const intrusive_ptr<DocumentSource>& source) {
    vector<Value> arr;
    source->serializeToArray(arr);
    ASSERT_EQUALS(arr.size(), 1UL);
    return arr[0].getDocument().toBson();
}
Author: hgGeorg | Project: mongo | Lines: 6 | Source: documentsourcetests.cpp


Example 14: BSONObj

    boost::shared_ptr<Runner> PipelineD::prepareCursorSource(
            Collection* collection,
            const intrusive_ptr<Pipeline>& pPipeline,
            const intrusive_ptr<ExpressionContext>& pExpCtx) {
        // get the full "namespace" name
        const string& fullName = pExpCtx->ns.ns();
        pExpCtx->opCtx->lockState()->assertAtLeastReadLocked(fullName);

        // We will be modifying the source vector as we go
        Pipeline::SourceContainer& sources = pPipeline->sources;

        // Inject a MongodImplementation to sources that need them.
        for (size_t i = 0; i < sources.size(); i++) {
            DocumentSourceNeedsMongod* needsMongod =
                dynamic_cast<DocumentSourceNeedsMongod*>(sources[i].get());
            if (needsMongod) {
                needsMongod->injectMongodInterface(
                    boost::make_shared<MongodImplementation>(pExpCtx));
            }
        }

        if (!sources.empty() && sources.front()->isValidInitialSource()) {
            if (dynamic_cast<DocumentSourceMergeCursors*>(sources.front().get())) {
                // Enable the hooks for setting up authentication on the subsequent internal
                // connections we are going to create. This would normally have been done
                // when SetShardVersion was called, but since SetShardVersion is never called
                // on secondaries, this is needed.
                ShardedConnectionInfo::addHook();
            }
            return boost::shared_ptr<Runner>(); // don't need a cursor
        }


        // Look for an initial match. This works whether we got an initial query or not.
        // If not, it results in a "{}" query, which will be what we want in that case.
        const BSONObj queryObj = pPipeline->getInitialQuery();
        if (!queryObj.isEmpty()) {
            // This will get built in to the Cursor we'll create, so
            // remove the match from the pipeline
            sources.pop_front();
        }

        // Find the set of fields in the source documents depended on by this pipeline.
        const DepsTracker deps = pPipeline->getDependencies(queryObj);

        // Passing query an empty projection since it is faster to use ParsedDeps::extractFields().
        // This will need to change to support covering indexes (SERVER-12015). There is an
        // exception for textScore since that can only be retrieved by a query projection.
        const BSONObj projectionForQuery = deps.needTextScore ? deps.toProjection() : BSONObj();

        /*
          Look for an initial sort; we'll try to add this to the
          Cursor we create.  If we're successful in doing that (further down),
          we'll remove the $sort from the pipeline, because the documents
          will already come sorted in the specified order as a result of the
          index scan.
        */
        intrusive_ptr<DocumentSourceSort> sortStage;
        BSONObj sortObj;
        if (!sources.empty()) {
            sortStage = dynamic_cast<DocumentSourceSort*>(sources.front().get());
            if (sortStage) {
                // build the sort key
                sortObj = sortStage->serializeSortKey(/*explain*/false).toBson();
            }
        }

        // Create the Runner.
        //
        // If we try to create a Runner that includes both the match and the
        // sort, and the two are incompatible wrt the available indexes, then
        // we don't get a Runner back.
        //
        // So we try to use both first.  If that fails, try again, without the
        // sort.
        //
        // If we don't have a sort, jump straight to just creating a Runner
        // without the sort.
        //
        // If we are able to incorporate the sort into the Runner, remove it
        // from the head of the pipeline.
        //
        // LATER - we should be able to find this out before we create the
        // cursor.  Either way, we can then apply other optimizations there
        // are tickets for, such as SERVER-4507.
        const size_t runnerOptions = QueryPlannerParams::DEFAULT
                                   | QueryPlannerParams::INCLUDE_SHARD_FILTER
                                   | QueryPlannerParams::NO_BLOCKING_SORT
                                   ;
        boost::shared_ptr<Runner> runner;
        bool sortInRunner = false;

        const WhereCallbackReal whereCallback(pExpCtx->ns.db());

        if (sortStage) {
            CanonicalQuery* cq;
            Status status =
                CanonicalQuery::canonicalize(pExpCtx->ns,
                                             queryObj,
                                             sortObj,
//......... part of the code omitted here .........
Author: DesignByOnyx | Project: mongo | Lines: 101 | Source: pipeline_d.cpp


Example 15: operator==

template<class T, class U> inline bool operator==(intrusive_ptr<T> const & a, intrusive_ptr<U> const & b)
{
    return a.get() == b.get();
}
Author: Albermg7 | Project: boost | Lines: 4 | Source: intrusive_ptr.hpp


Example 16: swap

//!Exchanges the contents of the two intrusive_ptrs.
//!Does not throw
template<class T, class VP> inline
void swap(intrusive_ptr<T, VP> & lhs,
          intrusive_ptr<T, VP> & rhs)
{  lhs.swap(rhs); }
Author: AndresGalaviz | Project: NAO | Lines: 6 | Source: intrusive_ptr.hpp


Example 17: m_ptr

 //!Constructor from related. Copies the internal pointer and if "p" is not
 //!zero calls intrusive_ptr_add_ref(get_pointer(p)). Does not throw
 template<class U> intrusive_ptr
    (intrusive_ptr<U, VP> const & rhs)
    :  m_ptr(rhs.get())
 {
    if(m_ptr != 0) intrusive_ptr_add_ref(ipcdetail::get_pointer(m_ptr));
 }
Author: AndresGalaviz | Project: NAO | Lines: 8 | Source: intrusive_ptr.hpp
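Examples 5, 6, 16 and 17 come from Boost.Interprocess, whose intrusive_ptr takes a second template parameter, the void-pointer type, so that the smart pointer itself can be placed in shared memory (typically with boost::interprocess::offset_ptr<void>). The sketch below is a minimal illustration that uses a plain void* so it stays runnable without a shared-memory segment; SharedThing is an illustrative name.

#include <boost/interprocess/smart_ptr/intrusive_ptr.hpp>

// Hypothetical pointee providing the usual add_ref/release hooks.
class SharedThing {
public:
    SharedThing() : refCount_(0) {}

    friend void intrusive_ptr_add_ref(SharedThing* p) { ++p->refCount_; }
    friend void intrusive_ptr_release(SharedThing* p) {
        if (--p->refCount_ == 0) delete p;
    }

private:
    unsigned refCount_;
};

int main() {
    // In real shared-memory code the second parameter would normally be
    // boost::interprocess::offset_ptr<void> and the object would live in a managed segment.
    boost::interprocess::intrusive_ptr<SharedThing, void*> p(new SharedThing());  // count is 1
    boost::interprocess::intrusive_ptr<SharedThing, void*> q(p);                  // count is 2
    p.swap(q);  // the free swap in Example 16 simply forwards to this member swap
    return 0;
}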


Example 18: Value::compare

    int Value::compare(const intrusive_ptr<const Value> &rL,
                       const intrusive_ptr<const Value> &rR) {
        BSONType lType = rL->getType();
        BSONType rType = rR->getType();

        /*
          Special handling for Undefined and NULL values; these are types,
          so it's easier to handle them here before we go below to handle
          values of the same types.  This allows us to compare Undefined and
          NULL values with everything else.  As coded now:
          (*) Undefined is less than everything except itself (which is equal)
          (*) NULL is less than everything except Undefined and itself
         */
        if (lType == Undefined) {
            if (rType == Undefined)
                return 0;

            /* if rType is anything else, the left value is less */
            return -1;
        }
        
        if (lType == jstNULL) {
            if (rType == Undefined)
                return 1;
            if (rType == jstNULL)
                return 0;

            return -1;
        }

        if ((rType == Undefined) || (rType == jstNULL)) {
            /*
              We know the left value isn't Undefined, because of the above.
              Count a NULL value as greater than an undefined one.
            */
            return 1;
        }

        /* if the comparisons are numeric, prepare to promote the values */
        if (((lType == NumberDouble) || (lType == NumberLong) ||
             (lType == NumberInt)) &&
            ((rType == NumberDouble) || (rType == NumberLong) ||
             (rType == NumberInt))) {

            /* if the biggest type of either is a double, compare as doubles */
            if ((lType == NumberDouble) || (rType == NumberDouble)) {
                const double left = rL->getDouble();
                const double right = rR->getDouble();
                if (left < right)
                    return -1;
                if (left > right)
                    return 1;
                return 0;
            }

            /* if the biggest type of either is a long, compare as longs */
            if ((lType == NumberLong) || (rType == NumberLong)) {
                const long long left = rL->getLong();
                const long long right = rR->getLong();
                if (left < right)
                    return -1;
                if (left > right)
                    return 1;
                return 0;
            }

            /* if we got here, they must both be ints; compare as ints */
            {
                const int left = rL->getInt();
                const int right = rR->getInt();
                if (left < right)
                    return -1;
                if (left > right)
                    return 1;
                return 0;
            }
        }

        // CW TODO for now, only compare like values
        uassert(16016, str::stream() <<
                "can't compare values of BSON types " << typeName(lType) <<
                " and " << typeName(rType),
                lType == rType);

        switch(lType) {
        case NumberDouble:
        case NumberInt:
        case NumberLong:
            /* these types were handled above */
            verify(false);

        case String:
            return rL->stringValue.compare(rR->stringValue);

        case Object:
            return Document::compare(rL->getDocument(), rR->getDocument());

        case Array: {
            intrusive_ptr<ValueIterator> pli(rL->getArray());
            intrusive_ptr<ValueIterator> pri(rR->getArray());
//......... part of the code omitted here .........
Author: milkie | Project: mongo | Lines: 101 | Source: value.cpp
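The numeric branch above promotes both sides to the widest type involved before comparing: doubles win over 64-bit integers, which win over 32-bit integers. A simplified, self-contained sketch of that promotion rule (the types and names are illustrative, not MongoDB's):

#include <cstdint>
#include <iostream>

enum class NumKind { Int, Long, Double };

// Hypothetical tagged number, standing in for the numeric Value types above.
struct Num {
    NumKind kind;
    double d;   // used when kind == Double
    int64_t l;  // used when kind == Long
    int32_t i;  // used when kind == Int

    double asDouble() const {
        return kind == NumKind::Double ? d : kind == NumKind::Long ? double(l) : double(i);
    }
    int64_t asLong() const { return kind == NumKind::Long ? l : int64_t(i); }
};

// Returns -1, 0 or 1, promoting to the widest type present, mirroring the logic above.
int compareNumbers(const Num& lhs, const Num& rhs) {
    if (lhs.kind == NumKind::Double || rhs.kind == NumKind::Double) {
        const double a = lhs.asDouble(), b = rhs.asDouble();
        return a < b ? -1 : a > b ? 1 : 0;
    }
    if (lhs.kind == NumKind::Long || rhs.kind == NumKind::Long) {
        const int64_t a = lhs.asLong(), b = rhs.asLong();
        return a < b ? -1 : a > b ? 1 : 0;
    }
    return lhs.i < rhs.i ? -1 : lhs.i > rhs.i ? 1 : 0;
}

int main() {
    Num x{NumKind::Int, 0, 0, 3};
    Num y{NumKind::Double, 3.5, 0, 0};
    std::cout << compareNumbers(x, y) << std::endl;  // prints -1: 3 < 3.5 once both are doubles
    return 0;
}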


Example 19: vps

vector<intrusive_ptr<DocumentSource>> DocumentSourceBucket::createFromBson(
    BSONElement elem, const intrusive_ptr<ExpressionContext>& pExpCtx) {
    uassert(40201,
            str::stream() << "Argument to $bucket stage must be an object, but found type: "
                          << typeName(elem.type())
                          << ".",
            elem.type() == BSONType::Object);

    const BSONObj bucketObj = elem.embeddedObject();
    BSONObjBuilder groupObjBuilder;
    BSONObjBuilder switchObjBuilder;

    VariablesIdGenerator idGenerator;
    VariablesParseState vps(&idGenerator);

    vector<Value> boundaryValues;
    BSONElement groupByField;
    Value defaultValue;

    bool outputFieldSpecified = false;
    for (auto&& argument : bucketObj) {
        const auto argName = argument.fieldNameStringData();
        if ("groupBy" == argName) {
            groupByField = argument;

            const bool groupByIsExpressionInObject = groupByField.type() == BSONType::Object &&
                groupByField.embeddedObject().firstElementFieldName()[0] == '$';

            const bool groupByIsPrefixedPath =
                groupByField.type() == BSONType::String && groupByField.valueStringData()[0] == '$';
            uassert(40202,
                    str::stream() << "The $bucket 'groupBy' field must be defined as a $-prefixed "
                                     "path or an expression, but found: "
                                  << groupByField.toString(false, false)
                                  << ".",
                    groupByIsExpressionInObject || groupByIsPrefixedPath);
        } else if ("boundaries" == argName) {
            uassert(
                40200,
                str::stream() << "The $bucket 'boundaries' field must be an array, but found type: "
                              << typeName(argument.type())
                              << ".",
                argument.type() == BSONType::Array);

            for (auto&& boundaryElem : argument.embeddedObject()) {
                auto exprConst = getExpressionConstant(boundaryElem, vps);
                uassert(40191,
                        str::stream() << "The $bucket 'boundaries' field must be an array of "
                                         "constant values, but found value: "
                                      << boundaryElem.toString(false, false)
                                      << ".",
                        exprConst);
                boundaryValues.push_back(exprConst->getValue());
            }

            uassert(40192,
                    str::stream()
                        << "The $bucket 'boundaries' field must have at least 2 values, but found "
                        << boundaryValues.size()
                        << " value(s).",
                    boundaryValues.size() >= 2);

            // Make sure that the boundaries are unique, sorted in ascending order, and have the
            // same canonical type.
            for (size_t i = 1; i < boundaryValues.size(); ++i) {
                Value lower = boundaryValues[i - 1];
                Value upper = boundaryValues[i];
                int lowerCanonicalType = canonicalizeBSONType(lower.getType());
                int upperCanonicalType = canonicalizeBSONType(upper.getType());

                uassert(40193,
                        str::stream() << "All values in the the 'boundaries' option to $bucket "
                                         "must have the same type. Found conflicting types "
                                      << typeName(lower.getType())
                                      << " and "
                                      << typeName(upper.getType())
                                      << ".",
                        lowerCanonicalType == upperCanonicalType);
                uassert(40194,
                        str::stream()
                            << "The 'boundaries' option to $bucket must be sorted, but elements "
                            << i - 1
                            << " and "
                            << i
                            << " are not in ascending order ("
                            << lower.toString()
                            << " is not less than "
                            << upper.toString()
                            << ").",
                        pExpCtx->getValueComparator().evaluate(lower < upper));
            }
        } else if ("default" == argName) {
            // If there is a default, make sure that it parses to a constant expression then add
            // default to switch.
            auto exprConst = getExpressionConstant(argument, vps);
            uassert(40195,
                    str::stream()
                        << "The $bucket 'default' field must be a constant expression, but found: "
                        << argument.toString(false, false)
                        << ".",
//......... part of the code omitted here .........
Author: hgGeorg | Project: mongo | Lines: 101 | Source: document_source_bucket.cpp


Example 20: swap

 inline void swap(cow_ptr& other)
 {
     m_ptr.swap(other.m_ptr);
 }
Author: mischumok | Project: libcppa | Lines: 4 | Source: cow_ptr.hpp



Note: The intrusive_ptr class examples in this article were collected by 纯净天空 from source-code and documentation hosting platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by various developers, and copyright of the source code remains with the original authors. Please consult the corresponding project's license before redistributing or reusing the code; do not reproduce without permission.

