Python starutil.invoke Function Code Examples


This article collects and summarizes typical usage examples of the starutil.invoke function in Python. If you have been wondering what exactly invoke does, how to call it, or what it looks like in real code, the curated function examples below should help.



The following shows 20 code examples of the invoke function, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
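
Before the examples, here is a minimal sketch of the calling pattern they all share: build a shell command string for a Starlink task, run it with invoke, and read back any output parameters with get_task_par. The bare-name imports mirror how the functions appear in the snippets below; the NDF name "my_map" is only a placeholder, while the KAPPA:STATS task and its "mean" output parameter follow the usage in Example 14.

from starutil import invoke, get_task_par, msg_out

#  Run KAPPA:STATS on a placeholder NDF and report its mean value.
ndfname = "my_map"
invoke( "$KAPPA_DIR/stats {0} quiet".format( ndfname ) )
mean = get_task_par( "mean", "stats" )
msg_out( "Mean value in {0} is {1}".format( ndfname, mean ) )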

Example 1: write_ip_NDF

def write_ip_NDF(data,bad_pixel_ref):
    """

    This function writes out the array ip parameter data to an ndf_file.

    Invocation:
        result = write_ip_NDF(data,bad_pixel_ref)

    Arguments:
        data = The array ip parameter data.
        bad_pixel_ref = An NDF with bad pixel values to copy over.

    Returned Value:
        Writes the NDF and returns the NDG object identifying it.
    """

    ndf_name_orig = NDG(1)
    indf = ndf.open( ndf_name_orig[0], 'WRITE', 'NEW' )
    indf.new('_DOUBLE', 2, numpy.array([1,1]),numpy.array([32,40]))
    ndfmap = indf.map( 'DATA', '_DOUBLE', 'WRITE' )
    ndfmap.numpytondf( data )
    indf.annul()

    # Copy bad pixels
    ndf_name = NDG(1)
    invoke( "$KAPPA_DIR/copybad in={0} ref={1} out={2}".format(ndf_name_orig,bad_pixel_ref,ndf_name) )
    return ndf_name
Developer: astrobuff, Project: starlink, Lines of code: 27, Source file: pol2_ipdata.py


Example 2: normer

def normer( model, test, cmin, newmodel ):
   """

   Normalise "model" to "test" returning result in "newmodel", so long as
   the "correlation factor" (determined by function blanker) of test and
   model is at least "cmin". Returns a boolean indicating the cmin value
   was reached.


   Invocation:
      result = normer( model, test, cmin, newmodel )

   Arguments:
      model = string
         The name of an existing NDF.
      test = string
         The name of an existing NDF.
      cmin = float
         The lowest acceptable absolute correlation factor.
      newmodel = string
         The name of an NDF to be created. The new NDF is only created if
         the cmin value is reached.

   Returned Value:
      A boolean indicating if the cmin value was reached.

   """

   btest = "{0}/btest".format(NDG.tempdir)
   if abs( blanker( test, model, btest ) ) > cmin:
      invoke( "$KAPPA_DIR/normalize in1={0} in2={2} out={1} device=!".format(model,newmodel,btest))
      return True
   else:
      return False
Developer: joaogerd, Project: starlink, Lines of code: 34, Source file: smurfutil.py


Example 3: run_calcqu

def run_calcqu(input_data,config,harmonic):
    #  The following call to SMURF:CALCQU creates two HDS container files -
    #  one holding a set of Q NDFs and the other holding a set of U NDFs. Create
    #  these container files in the NDG temporary directory.
    qcont = NDG(1)
    qcont.comment = "qcont"
    ucont = NDG(1)
    ucont.comment = "ucont"

    msg_out( "Calculating Q and U values for each bolometer...")
    invoke("$SMURF_DIR/calcqu in={0} config=\"{1}\" lsqfit=no outq={2} outu={3} "
           "harmonic={4} fix".format(input_data,starutil.shell_quote(config),
                                     qcont,ucont,harmonic) )
    return (qcont,ucont)
Developer: astrobuff, Project: starlink, Lines of code: 14, Source file: pol2_ipdata.py


Example 4: force_flat

def force_flat( ins, masks ):
   """

   Forces the background regions to be flat in a set of Q or U images.

   Invocation:
      result = force_flat( ins, masks )

   Arguments:
      ins = NDG
         An NDG object specifying a group of Q or U images from which
         any low frequency background structure is to be removed.
      masks = NDG
         An NDG object specifying a corresponding group of Q or U images
         in which source pixels are bad. These are only used to mask the
         images specified by "ins". It should have the same size as "ins".

   Returned Value:
      A new NDG object containing the group of corrected Q or U images.

   """

#  How many NDFs are we processing?
   nndf = len( ins )

#  Blank out sources by copying the bad pixels from "masks" into "ins".
   msg_out( "   masking...")
   qm = NDG( ins )
   invoke( "$KAPPA_DIR/copybad in={0} ref={1} out={2}".format(ins,masks,qm) )

#  Smooth the blanked NDFs using a 3 pixel Gaussian. Set wlim so that
#  small holes are filled in by the smoothing process.
   msg_out( "   smoothing...")
   qs = NDG( ins )
   invoke( "$KAPPA_DIR/gausmooth in={0} out={1} fwhm=3 wlim=0.5".format(qm,qs) )

#  Fill remaining big holes using artificial data.
   msg_out( "   filling...")
   qf = NDG( ins )
   invoke( "$KAPPA_DIR/fillbad in={0} out={1} niter=10 size=10 variance=no".format(qs,qf) )

#  Subtract the filled low frequency data from the original to create the
#  returned images.
   msg_out( "   removing low frequency background structure...")
   result = NDG( ins )
   invoke( "$KAPPA_DIR/sub in1={0} in2={1} out={2}".format(ins,qf,result) )

   return result
Developer: joaogerd, Project: starlink, Lines of code: 48, Source file: smurfutil.py


Example 5: pca

def pca( indata, ncomp ):
   """

   Identifies and returns the strongest PCA components in a 3D NDF.

   Invocation:
      result = pca( indata, ncomp )

   Arguments:
      indata = NDG
         An NDG object specifying a single 3D NDF. Each plane in the cube
         is a separate image, and the images are compared using PCA.
      ncomp = int
         The number of PCA components to include in the returned NDF.

   Returned Value:
      A new NDG object containing a single 3D NDF containing just the
      strongest "ncomp" PCA components found in the input NDF.

   """

   msg_out( "   finding strongest {0} components using Principal Component Analysis...".format(ncomp) )

#  Get the shape of the input NDF.
   invoke( "$KAPPA_DIR/ndftrace {0} quiet".format(indata) )
   nx = get_task_par( "dims(1)", "ndftrace" )
   ny = get_task_par( "dims(2)", "ndftrace" )
   nz = get_task_par( "dims(3)", "ndftrace" )

#  Fill any bad pixels.
   tmp = NDG(1)
   invoke( "$KAPPA_DIR/fillbad in={0} out={1} variance=no niter=10 size=10".format(indata,tmp) )

#  Read the planes from the supplied NDF. Note, numpy axis ordering is the
#  reverse of starlink axis ordering. We want a numpy array consisting of
#  "nz" elements, each being a vectorised form of a plane from the 3D NDF.
   ndfdata = numpy.reshape( Ndf( tmp[0] ).data, (nz,nx*ny) )

#  Normalize each plane to a mean of zero and standard deviation of 1.0
   means = []
   sigmas = []
   newdata = []
   for iplane in range(0,nz):
      plane = ndfdata[ iplane ]
      mn = plane.mean()
      sg = math.sqrt( plane.var() )
      means.append( mn )
      sigmas.append( sg )

      if sg > 0.0:
         newdata.append( (plane-mn)/sg )

   newdata= numpy.array( newdata )

#  Transpose as required by MDP.
   pcadata = numpy.transpose( newdata )

#  Find the required number of PCA components (these are the strongest
#  components).
   pca = mdp.nodes.PCANode( output_dim=ncomp )
   comp = pca.execute( pcadata )

#  Re-project the components back into the space of the input 3D NDF.
   ip = numpy.dot( comp, pca.get_recmatrix() )

#  Transpose the array so that each row is an image.
   ipt = numpy.transpose(ip)

#  Normalise them back to the original scales.
   jplane = 0
   newdata = []
   for iplane in range(0,nz):
      if sigmas[ iplane ] > 0.0:
         newplane = sigmas[ iplane ] * ipt[ jplane ] + means[ iplane ]
         jplane += 1
      else:
         newplane = ndfdata[ iplane ]
      newdata.append( newplane )
   newdata= numpy.array( newdata )

#  Dump the re-projected images out to a 3D NDF.
   result = NDG(1)
   indf = ndf.open( result[0], 'WRITE', 'NEW' )
   indf.new('_DOUBLE', 3, numpy.array([1,1,1]),numpy.array([nx,ny,nz]))
   ndfmap = indf.map( 'DATA', '_DOUBLE', 'WRITE' )
   ndfmap.numpytondf( newdata )
   indf.annul()

#  Uncomment to dump the components.
#   msg_out( "Dumping PCA comps to {0}-comps".format(result[0]) )
#   compt = numpy.transpose(comp)
#   indf = ndf.open( "{0}-comps".format(result[0]), 'WRITE', 'NEW' )
#   indf.new('_DOUBLE', 3, numpy.array([1,1,1]),numpy.array([nx,ny,ncomp]))
#   ndfmap = indf.map( 'DATA', '_DOUBLE', 'WRITE' )
#   ndfmap.numpytondf( compt )
#   indf.annul()

   return result
Developer: bbrond, Project: starlink, Lines of code: 98, Source file: smurfutil.py
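
A hypothetical call to the pca function above might look like the following sketch. The input name "mycube" and the choice of 4 components are illustrative only; the list form of the NDG constructor mirrors its use in Example 10.

#  Assuming "mycube.sdf" is an existing 3D NDF:
cube = NDG( [ "mycube" ] )
cleaned = pca( cube, 4 )       # keep only the 4 strongest PCA components
msg_out( "PCA-filtered cube written to {0}".format( cleaned[0] ) )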


Example 6: invoke

      elif cval == "DAS":
         instrument = "DAS"

#  If so, set the default for the INSTRUMENT parameter and prevent the
#  user being prompted for a value.
   if instrument != None:
      parsys["INSTRUMENT"].default = instrument
      parsys["INSTRUMENT"].noprompt = True

#  Get the chosen instrument.
   instrument = parsys["INSTRUMENT"].value
   instrument = starutil.shell_quote( instrument )

#  Get a list of the tiles that overlap the supplied NDF.
   invoke( "$SMURF_DIR/jsatilelist in={0} instrument={1} quiet".format(inndf,instrument) )
   tiles = starutil.get_task_par( "TILES", "jsatilelist" )

#  JSADICER requires the input array to be gridded on the JSA all-sky
#  pixel grid. This is normally an HPX projection, but if the supplied
#  NDF straddles a discontinuity in the HPX projection then we need to
#  use a different flavour of HPX (either an HPX projection centred on
#  RA=12h or an XPH (polar HEALPix) projection centred on the north or
#  south pole). The above call to jsatilelist will have determined the
#  appropriate projection to use, so get it.
   proj = starutil.get_task_par( "PROJ", "jsatilelist" )

#  Create a file holding the FITS-WCS header for the first tile, using
#  the type of projection determined above.
   head = "{0}/header".format(NDG.tempdir)
   invoke( "$SMURF_DIR/jsatileinfo itile={0} instrument={1} header={2} "
Developer: astrobuff, Project: starlink, Lines of code: 30, Source file: jsasplit.py


Example 7: msg_out

      iref = "!"
   qref = parsys["QREF"].value
   uref = parsys["UREF"].value

#  If no Q and U values were supplied, create a set of Q and U time
#  streams from the supplied analysed intensity time streams. Put them in
#  the QUDIR directory, or the temp directory if QUDIR is null.
   if inqu == None:
      qudir =  parsys["QUDIR"].value
      if not qudir:
         qudir = NDG.tempdir
      elif not os.path.exists(qudir):
         os.makedirs(qudir)

      msg_out( "Calculating Q and U time streams for each bolometer...")
      invoke("$SMURF_DIR/calcqu in={0} lsqfit=yes config=def outq={1}/\*_QT "
             "outu={1}/\*_UT fix=yes".format( indata, qudir ) )

#  Get groups listing the time series files created by calcqu.
      qts = NDG( "{0}/*_QT".format( qudir ) )
      uts = NDG( "{0}/*_UT".format( qudir ) )

#  If pre-calculated Q and U values were supplied, identify the Q and U
#  files.
   else:
      msg_out( "Using pre-calculating Q and U values...")

      qndfs = []
      undfs = []
      for ndf in inqu:
         invoke("$KAPPA_DIR/ndftrace ndf={0} quiet".format(ndf) )
         label = starutil.get_task_par( "LABEL", "ndftrace" )
Developer: edwardchapin, Project: starlink, Lines of code: 32, Source file: pol2scan.py


Example 8: NDG

   retain = parsys["RETAIN"].value

#  The following call to SMURF:CALCQU creates two HDS container files -
#  one holding a set of Q NDFs and the other holding a set of U NDFs. Create
#  these container files in the NDG temporary directory.
   qcont = NDG(1)
   qcont.comment = "qcont"
   ucont = NDG(1)
   ucont.comment = "ucont"

#  Create a set of Q images and a set of U images. These are put into the HDS
#  container files "q_TMP.sdf" and "u_TMP.sdf". Each image contains Q or U
#  values derived from a short section of raw data during which each bolometer
#  moves less than half a pixel.
   msg_out( "Calculating Q and U values for each bolometer...")
   invoke("$SMURF_DIR/calcqu in={0} config={1} outq={2} outu={3} fix".
          format(indata,config,qcont,ucont) )

#  Remove spikes from the Q and U images. The cleaned NDFs are written to
#  temporary NDFs specified by two new NDG objects "qff" and "uff", which
#  inherit their size from the existing groups "qcont" and "ucont".
   msg_out( "Removing spikes from bolometer Q and U values...")
   qff = NDG(qcont)
   qff.comment = "qff"
   uff = NDG(ucont)
   uff.comment = "uff"
   invoke( "$KAPPA_DIR/ffclean in={0} out={1} box=3 clip=\[2,2,2\]"
           .format(qcont,qff) )
   invoke( "$KAPPA_DIR/ffclean in={0} out={1} box=3 clip=\[2,2,2\]"
           .format(ucont,uff) )

#  The next steps are performed independently for each subarray.
Developer: andrecut, Project: starlink, Lines of code: 32, Source file: pol2cat.py


Example 9: loadndg

      fred = loadndg( "IN", True )
      if indata != fred:
         raise UsageError("\n\nThe directory specified by parameter RESTART ({0}) "
                          "refers to different time-series data".format(restart) )
      msg_out( "Re-using data in {0}".format(restart) )

#  Initialise the starlink random number seed to a known value so that
#  results are repeatable.
   os.environ["STAR_SEED"] = "65"

#  Flat field the supplied template data
   ff = loadndg( "FF" )
   if not ff:
      ff = NDG(indata)
      msg_out( "Flatfielding template data...")
      invoke("$SMURF_DIR/flatfield in={0} out={1}".format(indata,ff) )
      ff = ff.filter()
      savendg( "FF", ff  )
   else:
      msg_out( "Re-using old flatfielded template data...")

#  If required, create new artificial I, Q and U maps.
   if newart:
      msg_out( "Creating new artificial I, Q and U maps...")

#  Get the parameters defining the artificial data
      ipeak = parsys["IPEAK"].value
      ifwhm = parsys["IFWHM"].value
      pol = parsys["POL"].value

#  Determine the spatial extent of the data on the sky.
Developer: wadawson, Project: starlink, Lines of code: 31, Source file: pol2sim.py


Example 10: remove_corr

def remove_corr( ins, masks ):
   """

   Masks the supplied set of Q or U images and then looks for and removes
   correlated components in the background regions.

   Invocation:
      result = remove_corr( ins, masks )

   Arguments:
      ins = NDG
         An NDG object specifying a group of Q or U images from which
         correlated background components are to be removed.
      masks = NDG
         An NDG object specifying a corresponding group of Q or U images
         in which source pixels are bad. These are only used to mask the
         images specified by "ins". It should have the same size as "ins".

   Returned Value:
      A new NDG object containing the group of corrected Q or U images.

   """

#  How many NDFs are we processing?
   nndf = len( ins )

#  Blank out sources by copying the bad pixels from "masks" into "ins". We refer
#  to "q" below, but the same applies whether processing Q or U.
   msg_out( "   masking...")
   qm = NDG( ins )
   invoke( "$KAPPA_DIR/copybad in={0} ref={1} out={2}".format(ins,masks,qm) )

#  Find the most correlated pair of images. We use the basic correlation
#  coefficient calculated by kappa:scatter for this.
   msg_out( "   Finding most correlated pair of images...")
   cmax = 0
   for i in range(0,nndf-1):
      for j in range(i + 1,nndf):
         invoke( "$KAPPA_DIR/scatter in1={0} in2={1} device=!".format(qm[i],qm[j]) )
         c = starutil.get_task_par( "corr", "scatter" )
         if abs(c) > abs(cmax):
            cmax = c
            cati = i
            catj = j

   if abs(cmax) < 0.3:
      msg_out("   No correlated images found!")
      return ins

   msg_out( "   Correlation for best pair of images = {0}".format( cmax ) )

#  Find images that are reasonably correlated to the pair found above,
#  and coadd them to form a model for the correlated background
#  component. Note, the holes left by the masking are filled in by the
#  coaddition using background data from other images.
   msg_out( "   Forming model...")

#  Form the average of the two most correlated images, first normalising
#  them to a common scale so that they both have equal weight.
   norm = "{0}/norm".format(NDG.tempdir)
   if not normer( qm[cati], qm[catj], 0.3, norm ):
      norm = qm[cati]

   mslist = NDG( [ qm[catj], norm ] )
   ave = "{0}/ave".format(NDG.tempdir)
   invoke( "$CCDPACK_DIR/makemos in={0} method=mean genvar=no usevar=no out={1}".format(mslist,ave) )

#  Loop round each image finding the correlation factor of the image and
#  the above average image.
   temp = "{0}/temp".format(NDG.tempdir)
   nlist = []
   ii = 0
   for i in range(0,nndf):
      c = blanker( qm[i], ave, temp )

#  If the correlation is high enough, normalize the image to the average
#  image and then include the normalised image in the list of images to be
#  coadded to form the final model.
      if abs(c) > 0.3:
         tndf = "{0}/t{1}".format(NDG.tempdir,ii)
         ii += 1
         invoke( "$KAPPA_DIR/normalize in1={1} in2={2} out={0} device=!".format(tndf,temp,ave))
         nlist.append( tndf )

   if ii == 0:
      msg_out("   No secondary correlated images found!")
      return ins

   msg_out("   Including {0} secondary correlated images in the model.".format(ii) )

#  Coadd the images created above to form the model of the correlated
#  background component. Fill any remaining bad pixels with artificial data.
   model = "{0}/model".format(NDG.tempdir)
   included = NDG( nlist )
   invoke( "$CCDPACK_DIR/makemos in={0} method=mean usevar=no genvar=no out={1}".format( included, temp ) )
   invoke( "$KAPPA_DIR/fillbad in={1} variance=no out={0} size=10 niter=10".format(model,temp) )

#  Now estimate how much of the model is present in each image and remove it.
   msg_out("   Removing model...")
   temp2 = "{0}/temp2".format(NDG.tempdir)
#......... (remaining code omitted) .........
Developer: joaogerd, Project: starlink, Lines of code: 101, Source file: smurfutil.py


Example 11: msg_out

      ref = "!"

#  If no Q and U values were supplied, create a set of Q and U time
#  streams from the supplied analysed intensity time streams. Put them in
#  the QUDIR directory, or the temp directory if QUDIR is null.
   if inqu == None:
      north = parsys["NORTH"].value
      qudir =  parsys["QUDIR"].value
      if not qudir:
         qudir = NDG.tempdir
      elif not os.path.exists(qudir):
         os.makedirs(qudir)

      msg_out( "Calculating Q, U and I time streams for each bolometer...")
      invoke("$SMURF_DIR/calcqu in={0} lsqfit=yes config=def outq={1}/\*_QT "
             "outu={1}/\*_UT outi={1}/\*_IT fix=yes north={2}".
             format( indata, qudir, north ) )

#  Get groups listing the time series files created by calcqu.
      qts = NDG( "{0}/*_QT".format( qudir ) )
      uts = NDG( "{0}/*_UT".format( qudir ) )
      its = NDG( "{0}/*_IT".format( qudir ) )

#  If pre-calculated Q and U values were supplied, identify the Q, U and I
#  files.
   else:
      msg_out( "Using pre-calculating Q, U and I values...")

      qndfs = []
      undfs = []
      indfs = []
Developer: sladen, Project: starlink, Lines of code: 31, Source file: pol2scan.py


Example 12: msg_out

   if deflt != None:
      parsys["INSTRUMENT"].default = deflt
      parsys["INSTRUMENT"].noprompt = True

#  Get the JCMT instrument. Quote the string so that it can be used as
#  a command line argument when running an atask from the shell.
   instrument = starutil.shell_quote( parsys["INSTRUMENT"].value )
   msg_out( "Updating tiles for {0} data".format(instrument) )

#  See if temp files are to be retained.
   retain = parsys["RETAIN"].value

#  Set up the dynamic default for parameter "JSA". This is True if the
#  sky projection of the first supplied NDF is "HEALPix".
   prj = invoke("$KAPPA_DIR/wcsattrib ndf={0} mode=get name=projection".format(indata[0]) )
   parsys["JSA"].default = True if prj.strip() == "HEALPix" else False

#  See if input NDFs are on the JSA all-sky pixel grid.
   jsa = parsys["JSA"].value
   if not jsa:
      msg_out( "The supplied NDFs will first be resampled onto the JSA "
               "all-sky pixel grid" )

#  Report the tile directory.
   tiledir = os.getenv( 'JSA_TILE_DIR' )
   if tiledir:
      msg_out( "Tiles will be written to {0}".format(tiledir) )
   else:
      msg_out( "Environment variable JSA_TILE_DIR is not set!" )
      msg_out( "Tiles will be written to the current directory ({0})".format(os.getcwd()) )
Developer: astrobuff, Project: starlink, Lines of code: 31, Source file: tilepaste.py


Example 13: invoke

   basec2 = math.radians( basec2 )

#  Get the radius of the map.
   radius  = 0.5*math.sqrt( map_hght*map_hght + map_wdth*map_wdth )

#  Create a Frame describing the coordinate system.
   if tracksys == "GAL":
      sys = "galactic";
   elif tracksys == "J2000":
      sys = "fk5"
   else:
      raise starutil.InvalidParameterError("The TRACKSYS header in {0} is {1} "
                           "- should be GAL or J2000".format(indata,tracksys) )

   frame = NDG.tempfile()
   invoke( "$ATOOLS_DIR/astskyframe \"'system={0}'\" {1}".format(sys,frame) )

#  Create a Circle describing the map.
   if region == None:
      region = NDG.tempfile()
      display = True
   else:
      display = False

   invoke( "$ATOOLS_DIR/astcircle frame={0} form=1 centre=\[{1},{2}\] point={3} "
           "unc=! options=! result={4}".format(frame,basec1,basec2,radius,region) )

   if display:
      f = open( region, "r" )
      print( f.read() )
      f.close()
Developer: dt888, Project: starlink, Lines of code: 31, Source file: rawregion.py


Example 14: blanker

def blanker( test, model, newtest ):
   """

   Blank out pixels in "test" that are not well correlated with "model",
   returning result in newtest.

   Invocation:
      result =  blanker( test, model, newtest )

   Arguments:
      test = string
         The name of an existing NDF.
      model = string
         The name of an existing NDF.
      newtest = string
         The name of an NDF to be created.

   Returned Value:
      A value between +1 and -1 indicating the degree of correlation
      between the model and test.

   """

#  We want statistics of pixels that are present in both test and model,
#  so first form a mask by adding them together, and then copy bad pixels
#  from this mask into test and model.
   mask = "{0}/mask".format(NDG.tempdir)
   tmask = "{0}/tmask".format(NDG.tempdir)
   mmask = "{0}/mmask".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/add in1={0} in2={1} out={2}".format(test,model,mask) )
   invoke( "$KAPPA_DIR/copybad in={0} ref={1} out={2}".format(test,mask,tmask) )
   invoke( "$KAPPA_DIR/copybad in={0} ref={1} out={2}".format(model,mask,mmask) )

#  Get the mean and standard deviation of the remaining pixels in the
#  test NDF.
   invoke( "$KAPPA_DIR/stats {0} clip=\[3,3,3\] quiet".format(tmask) )
   tmean = get_task_par( "mean", "stats" )
   tsigma = get_task_par( "sigma", "stats" )

#  Also get the number of good pixels in the mask.
   numgood1 = float( get_task_par( "numgood", "stats" ) )

#  Get the mean and standard deviation of the remaining pixels in the
#  model NDF.
   invoke( "$KAPPA_DIR/stats {0} clip=\[3,3,3\] quiet".format(mmask) )
   mmean = get_task_par( "mean", "stats" )
   msigma = get_task_par( "sigma", "stats" )

#  Normalize them both to have a mean of zero and a standard deviation of
#  unity.
   tnorm = "{0}/tnorm".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/maths exp='(ia-pa)/pb' ia={2} pa={0} pb={1} "
           "out={3}".format(tmean,tsigma,tmask,tnorm))

   mnorm = "{0}/mnorm".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/maths exp='(ia-pa)/pb' ia={2} pa={0} pb={1} "
           "out={3}".format(mmean,msigma,mmask,mnorm))

#  Find the difference between them.
   diff = "{0}/diff".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/sub in1={0} in2={1} out={2}".format(mnorm,tnorm,diff) )

#  Remove pixels that differ by more than 0.5 standard deviations.
   mtmask = "{0}/mtmask".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/thresh in={0} thrlo=-0.5 newlo=bad thrhi=0.5 "
           "newhi=bad out={1}".format(diff,mtmask) )

#  See how many pixels remain (i.e. pixels that are very similar in the
#  test and model NDFs).
   invoke( "$KAPPA_DIR/stats {0} quiet".format(mtmask) )
   numgood2 = float( get_task_par( "numgood", "stats" ) )

#  It may be that the two NDFs are anti-correlated. To test for this we
#  negate the model and do the above test again.
   mnormn = "{0}/mnormn".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/cmult in={0} scalar=-1 out={1}".format(mnorm,mnormn) )

   diffn = "{0}/diffn".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/sub in1={0} in2={1} out={2}".format(mnormn,tnorm,diffn ))

   mtmaskn = "{0}/mtmaskn".format(NDG.tempdir)
   invoke( "$KAPPA_DIR/thresh in={0} thrlo=-0.5 newlo=bad thrhi=0.5 "
           "newhi=bad out={1}".format(diffn,mtmaskn) )

   invoke( "$KAPPA_DIR/stats {0} quiet".format(mtmaskn) )
   numgood2n = float( get_task_par( "numgood", "stats" ) )

#  If we get more similar pixels by negating the model, the NDFs are
#  anti-correlated.
   if numgood2n > numgood2:

#  Take a copy of the supplied test NDF, masking out pixels that are not
#  anti-similar to the corresponding model pixels.
      invoke( "$KAPPA_DIR/copybad in={0} ref={2} out={1}".format(test,newtest,mtmaskn) )

#  The returned correlation factor is the ratio of the number of
#  anti-similar pixels to the total number of pixels which the two NDFs
#  have in common. But if there is not much difference between the number
#  of similar and anti-similar pixels, we assume there is no correlation.
      if numgood2n > 1.4*numgood2:
#......... (remaining code omitted) .........
Developer: joaogerd, Project: starlink, Lines of code: 101, Source file: smurfutil.py


Example 15: msg_out

   tiledir = os.getenv( 'JSA_TILE_DIR' )
   if tiledir:
      msg_out( "Tiles will be read from {0}".format(tiledir) )
   else:
      msg_out( "Environment variable JSA_TILE_DIR is not set!" )
      msg_out( "Tiles will be read from the current directory ({0})".format(os.getcwd()) )

#  Create an empty list to hold the NDFs for the tiles holding the
#  required data.
   tilendf = []
   itilelist = []

#  Identify the tiles that overlap the specified region, and loop round
#  them.
   invoke("$SMURF_DIR/tilelist region={0} instrument={1}".format(region,instrument) )
   for itile in starutil.get_task_par( "tiles", "tilelist" ):

#  Get information about the tile, including the 2D spatial pixel index
#  bounds of its overlap with the required Region.
      invoke("$SMURF_DIR/tileinfo itile={0} instrument={1} "
             "target={2}".format(itile,instrument,region) )

#  Skip this tile if it does not exist (i.e. is empty).
      if starutil.get_task_par( "exists", "tileinfo" ):

#  Get the 2D spatial pixel index bounds of the part of the master tile that
#  overlaps the required region.
         tlbnd = starutil.get_task_par( "tlbnd", "tileinfo" )
         tubnd = starutil.get_task_par( "tubnd", "tileinfo" )
Developer: astrobuff, Project: starlink, Lines of code: 29, Source file: tilecutout.py


Example 16: range

#  Do tests for 5 different peak values
   for ipeak in range(0, 1):
      starutil.msg_out( ">>> Doing sep={0} and peak={1}....".format(clump_separation,peak_value))

#  Get the dimensions of a square image that would be expected to
#  contain the target number of clumps at the current separation.
      npix = int( clump_separation*math.sqrt( nclump_target ) )

#  Create a temporary file containing circular clumps of constant size
#  and shape (except for the effects of noise).
      model = NDG(1)
      out = NDG(1)
      outcat = NDG.tempfile(".fit")
      invoke( "$CUPID_DIR/makeclumps angle=\[0,0\] beamfwhm=0 deconv=no "
              "fwhm1=\[{0},0\] fwhm2=\[{0},0\] lbnd=\[1,1\] ubnd=\[{1},{1}\] "
              "model={2} nclump={3} out={4} outcat={5} pardist=normal "
              "peak = \[{6},0\] rms={7} trunc=0.1".
               format(clump_fwhm,npix,model,nclump_target,out,outcat,
                      peak_value,noise) )

#  Run fellwalker on the data.
      mask = NDG(1)
      outcat_fw = NDG.tempfile(".fit")
      invoke( "$CUPID_DIR/findclumps config=def deconv=no in={0} "
              "method=fellwalker out={1} outcat={2} rms={3}".
               format(out,mask,outcat_fw,noise) )

# Get the number of clumps found by FellWalker.
      nfw = starutil.get_task_par( "nclumps", "findclumps" )
      if nfw > 0:

#  See how many of the clump peaks found by FellWalker match real clumps to
Developer: astrobuff, Project: starlink, Lines of code: 32, Source file: fw_2d.py


Example 17: msg_out

      filter = 850
      msg_out( "No value found for FITS header 'FILTER' in {0} - assuming 850".format(qin[0]))

   if filter == 450:
      fcf1 = 962.0
      fcf2 = 491.0
   elif filter == 850:
      fcf1 = 725.0
      fcf2 = 537.0
   else:
      raise starutil.InvalidParameterError("Invalid FILTER header value "
             "'{0} found in {1}.".format( filter, qin[0] ) )

#  Remove any spectral axes
   qtrim = NDG(qin)
   invoke( "$KAPPA_DIR/ndfcopy in={0} out={1} trim=yes".format(qin,qtrim) )
   utrim = NDG(uin)
   invoke( "$KAPPA_DIR/ndfcopy in={0} out={1} trim=yes".format(uin,utrim) )
   itrim = NDG(iin)
   invoke( "$KAPPA_DIR/ndfcopy in={0} out={1} trim=yes".format(iin,itrim) )

#  Rotate them to use the same polarimetric reference direction.
   qrot = NDG(qtrim)
   urot = NDG(utrim)
   invoke( "$POLPACK_DIR/polrotref qin={0} uin={1} like={2} qout={3} uout={4} ".
           format(qtrim,utrim,qtrim[0],qrot,urot) )

#  Mosaic them into a single set of Q, U and I images, aligning them
#  with the first I image.
   qmos = NDG( 1 )
   invoke( "$KAPPA_DIR/wcsmosaic in={0} out={1} ref={2} method=bilin accept".format(qrot,qmos,itrim[0]) )
Developer: astrobuff, Project: starlink, Lines of code: 31, Source file: pol2stack.py


Example 18: NDG

#  See if temp files are to be retained.
   retain = parsys["RETAIN"].value

#  See if statistical debiasing is to be performed.
   debias = parsys["DEBIAS"].value

#  Get groups containing all the Q, U and I images.
   qin = inqui.filter("'\.Q$'" )
   uin = inqui.filter("'\.U$'" )
   iin = inqui.filter("'\.I$'" )

#  Rotate them to use the same polarimetric reference direction.
   qrot = NDG(qin)
   urot = NDG(uin)
   invoke( "$POLPACK_DIR/polrotref qin={0} uin={1} like={2} qout={3} uout={4} ".
           format(qin,uin,qin[0],qrot,urot) )

#  Mosaic them into a single set of Q, U and I images.
   qmos = NDG( 1 )
   invoke( "$KAPPA_DIR/wcsmosaic in={0} out={1} method=bilin accept".format(qrot,qmos) )
   umos = NDG( 1 )
   invoke( "$KAPPA_DIR/wcsmosaic in={0} out={1} method=bilin accept".format(urot,umos) )
   imos = NDG( 1 )
   invoke( "$KAPPA_DIR/wcsmosaic in={0} out={1} method=bilin accept".format(iin,imos) )

#  If required, save the Q, U and I images.
   if qui != None:
      invoke( "$KAPPA_DIR/ndfcopy {0} out={1}.Q".format(qmos,qui) )
      invoke( "$KAPPA_DIR/ndfcopy {0} out={1}.U".format(umos,qui) )
      invoke( "$KAPPA_DIR/ndfcopy {0} out={1}.I".format(imos,qui) )
Developer: dt888, Project: starlink, Lines of code: 30, Source file: pol2stack.py


Example 19: invoke

   in2 = parsys["IN2"].value

#  See if temp files are to be retained.
   retain = parsys["RETAIN"].value

#  Get the name of any report file to create.
   report = parsys["REPORT"].value

#  Create an empty list to hold the lines of the report.
   report_lines = []

#  Use kappa:ndfcompare to compare the main NDFs holding the map data
#  array. Include a check that the root ancestors of the two maps are the
#  same. Always create a report file so we can echo it to the screen.
   report0 = os.path.join(NDG.tempdir,"report0")
   invoke( "$KAPPA_DIR/ndfcompare in1={0} in2={1} report={2} skiptests=! "
           "accdat=0.1v accvar=1E-4 quiet".format(in1,in2,report0) )

#  See if any differences were found. If so, append the lines of the
#  report to the report_lines list.
   similar = starutil.get_task_par( "similar", "ndfcompare" )
   if not similar:
      with open(report0) as f:
         report_lines.extend( f.readlines() )

#  Now compare the WEIGHTS extension NDF (no need for the root-ancestor
#  check since it has already been done).
   report1 = os.path.join(NDG.tempdir,"report1")
   invoke( "$KAPPA_DIR/ndfcompare in1={0}.more.smurf.weights accdat=1E-4 "
           "in2={1}.more.smurf.weights report={2} quiet".format(in1,in2,report1) )

#  See if any differences were found. If so, append the report to any
Developer: joaogerd, Project: starlink, Lines of code: 32, Source file: sc2compare.py


Example 20: UsageError

      fred = NDG.load( "IN", True )
      if indata != fred:
         raise UsageError("\n\nThe directory specified by parameter RESTART ({0}) "
                          "refers to different time-series data".format(restart) )
      msg_out( "Re-using data in {0}".format(restart) )

#  Initialise the starlink random number seed to a known value so that
#  results are repeatable.
   os.environ["STAR_SEED"] = "65"

#  Flat field the supplied template data
   ff = NDG.load( "FF" )
   if not ff:
      ffdir = NDG.subdir()
      msg_out( "Flatfielding template data...")
      invoke("$SMURF_DIR/flatfield in={0} out=\"{1}/*\"".format(indata,ffdir) )
      ff = NDG("{0}/\*".format(ffdir))
      ff.save( "FF" )
   else:
      msg_out( "Re-using old flatfielded template data...")

#  Output files. Base the modification on "ff" rather than "indata",
#  since "indata" may include non-science files (flatfields, darks etc)
#  for which no corresponding output file should be created.
   gexp = parsys["OUT"].value
   outdata = NDG( ff, gexp )

#  If required, create new artificial I, Q and U maps.
   if newart:
      msg_out( "Creating new artificial I, Q and U maps...")
Developer: sladen, Project: starlink, Lines of code: 30, Source file: pol2sim.py



Note: The starutil.invoke examples in this article were collected from source-code and documentation hosting platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their authors, and copyright remains with the original authors; consult each project's license before redistributing or reusing the code. Do not reproduce this article without permission.

