
Java AllFileSelector Class Code Examples


This article compiles typical usage examples of the Java class org.apache.commons.vfs2.AllFileSelector. If you have been wondering what AllFileSelector does, how to use it, or where to find real-world examples, the curated code samples below should help.



The AllFileSelector class belongs to the org.apache.commons.vfs2 package. The sections below present 20 code examples of the class, drawn from open-source projects and ordered roughly by popularity.
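Before walking through the collected examples, here is a minimal sketch of how AllFileSelector is typically used: the selector matches every file and folder below a starting point, so it is the usual choice when listing, copying, or deleting an entire directory tree. The directory URI file:///tmp/example and the class name AllFileSelectorDemo are placeholders chosen for this sketch, not taken from any of the projects below.

import org.apache.commons.vfs2.AllFileSelector;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.VFS;

public class AllFileSelectorDemo {
    public static void main(String[] args) throws FileSystemException {
        // Resolve a starting directory (placeholder path for this sketch).
        FileObject dir = VFS.getManager().resolveFile("file:///tmp/example");

        // AllFileSelector accepts every file and folder, so findFiles()
        // returns the full recursive listing below "dir".
        FileObject[] all = dir.findFiles(new AllFileSelector());
        for (FileObject file : all) {
            System.out.println(file.getName().getFriendlyURI());
        }

        // The same selector is commonly passed to copyFrom() and delete()
        // to copy or remove a whole tree, for example:
        //   target.copyFrom(dir, new AllFileSelector());
        //   dir.delete(new AllFileSelector());
    }
}

Most of the examples that follow use exactly these three patterns: recursive listing via findFiles(), recursive copying via copyFrom(), and recursive deletion via delete().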

Example 1: findResources

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
private static Resources findResources(FileObject rootDir, String inputEncoding) throws IOException {
	Resources result = new Resources();
	FileObject[] allFiles = rootDir.findFiles(new AllFileSelector());
	for(int i = 0; i < allFiles.length; i++) {
		FileObject file = allFiles[i];
		if (file.getType() == FileType.FOLDER) {
			continue;
		}
		MediaType mediaType = MediatypeService.determineMediaType(file.getName().getBaseName()); 
		if(mediaType == null) {
			continue;
		}
		String href = file.getName().toString().substring(rootDir.getName().toString().length() + 1);
		byte[] resourceData = IOUtils.toByteArray(file.getContent().getInputStream());
		if(mediaType == MediatypeService.XHTML && ! nl.siegmann.epublib.Constants.CHARACTER_ENCODING.equalsIgnoreCase(inputEncoding)) {
			resourceData = ResourceUtil.recode(inputEncoding, nl.siegmann.epublib.Constants.CHARACTER_ENCODING, resourceData);
		}
		Resource fileResource = new Resource(null, resourceData, href, mediaType);
		result.add(fileResource);
	}
	return result;
}
 
Developer: DASAR | Project: epublib-android | Lines: 23 | Source: ChmParser.java


Example 2: setup

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@BeforeClass
public static void setup() throws Exception {
  // Create a test hadoop configuration
  FileObject ramRoot = VFS.getManager().resolveFile( HADOOP_CONFIGURATIONS_PATH );
  if ( ramRoot.exists() ) {
    ramRoot.delete( new AllFileSelector() );
  }
  ramRoot.createFolder();

  // Create the implementation jars
  ramRoot.resolveFile( "xercesImpl-2.9.1.jar" ).createFile();
  ramRoot.resolveFile( "xml-apis-1.3.04.jar" ).createFile();
  ramRoot.resolveFile( "xml-apis-ext-1.3.04.jar" ).createFile();
  ramRoot.resolveFile( "xerces-version-1.8.0.jar" ).createFile();
  ramRoot.resolveFile( "xercesImpl2-2.9.1.jar" ).createFile();
  ramRoot.resolveFile( "pentaho-hadoop-shims-api-61.2016.04.01-196.jar" ).createFile();
  ramRoot.resolveFile( "commands-3.3.0-I20070605-0010.jar" ).createFile();
  ramRoot.resolveFile( "postgresql-9.3-1102-jdbc4.jar" ).createFile();
  ramRoot.resolveFile( "trilead-ssh2-build213.jar" ).createFile();
  ramRoot.resolveFile( "trilead-ssh2-build215.jar" ).createFile();
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 22 | Source: HadoopExcludeJarsTest.java


Example 3: setup

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@BeforeClass
public static void setup() throws Exception {
  // Create a test hadoop configuration
  FileObject ramRoot = VFS.getManager().resolveFile( CONFIG_PROPERTY_CLASSPATH );
  if ( ramRoot.exists() ) {
    ramRoot.delete( new AllFileSelector() );
  }
  ramRoot.createFolder();

  // Create the implementation jars
  ramRoot.resolveFile( "hadoop-mapreduce-client-app-2.7.0-mapr-1602.jar" ).createFile();
  ramRoot.resolveFile( "hadoop-mapreduce-client-common-2.7.0-mapr-1602.jar" ).createFile();
  ramRoot.resolveFile( "hadoop-mapreduce-client-contrib-2.7.0-mapr-1602.jar" ).createFile();
  ramRoot.resolveFile( "hadoop-mapreduce-client-core-2.7.0-mapr-1602.jar" ).createFile();
  ramRoot.resolveFile( "hadoop-mapreduce-client-hs-2.7.0-mapr-1602.jar" ).createFile();

  pmrFolder = tempFolder.newFolder( "pmr" );
  urlTestResources = Thread.currentThread().getContextClassLoader().getResource( PMR_PROPERTIES );
  Files.copy( Paths.get( urlTestResources.toURI() ), Paths.get( pmrFolder.getAbsolutePath(), PMR_PROPERTIES ) );
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 21 | Source: HadoopRunningOnClusterTest.java


Example 4: stageForCache

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void stageForCache() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  // Copy the contents of test folder
  FileObject source = DistributedCacheTestUtil.createTestFolderWithContent();

  try {
    Path root = new Path( "bin/test/stageArchiveForCacheTest" );
    Path dest = new Path( root, "org/pentaho/mapreduce/" );

    Configuration conf = new Configuration();
    FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

    DistributedCacheTestUtil.stageForCacheTester( ch, source, fs, root, dest, 6, 6 );
  } finally {
    source.delete( new AllFileSelector() );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 20 | Source: DistributedCacheUtilImplOSDependentTest.java


Example 5: stageForCache_destination_exists

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void stageForCache_destination_exists() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

  FileObject source = DistributedCacheTestUtil.createTestFolderWithContent();
  try {
    Path root = new Path( "bin/test/stageForCache_destination_exists" );
    Path dest = new Path( root, "dest" );

    fs.mkdirs( dest );
    assertTrue( fs.exists( dest ) );
    assertTrue( fs.getFileStatus( dest ).isDir() );

    DistributedCacheTestUtil.stageForCacheTester( ch, source, fs, root, dest, 6, 6 );
  } finally {
    source.delete( new AllFileSelector() );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 22 | Source: DistributedCacheUtilImplOSDependentTest.java


Example 6: stagePluginsForCache

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void stagePluginsForCache() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

  Path pluginsDir = new Path( "bin/test/plugins-installation-dir" );

  FileObject pluginDir = DistributedCacheTestUtil.createTestFolderWithContent();

  try {
    ch.stagePluginsForCache( fs, pluginsDir, "bin/test/sample-folder" );
    Path pluginInstallPath = new Path( pluginsDir, "bin/test/sample-folder" );
    assertTrue( fs.exists( pluginInstallPath ) );
    ContentSummary summary = fs.getContentSummary( pluginInstallPath );
    assertEquals( 6, summary.getFileCount() );
    assertEquals( 6, summary.getDirectoryCount() );
  } finally {
    pluginDir.delete( new AllFileSelector() );
    fs.delete( pluginsDir, true );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 24 | Source: DistributedCacheUtilImplOSDependentTest.java


Example 7: installKettleEnvironment

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void installKettleEnvironment() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

  // This "empty pmr" contains a lib/ folder but with no content
  FileObject pmrArchive = KettleVFS.getFileObject( getClass().getResource( "/empty-pmr.zip" ).toURI().getPath() );

  FileObject bigDataPluginDir = DistributedCacheTestUtil.createTestFolderWithContent( DistributedCacheUtilImpl.PENTAHO_BIG_DATA_PLUGIN_FOLDER_NAME );

  Path root = new Path( "bin/test/installKettleEnvironment" );
  try {
    ch.installKettleEnvironment( pmrArchive, fs, root, bigDataPluginDir, null );
    assertTrue( ch.isKettleEnvironmentInstalledAt( fs, root ) );
  } finally {
    bigDataPluginDir.delete( new AllFileSelector() );
    fs.delete( root, true );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 22 | Source: DistributedCacheUtilImplOSDependentTest.java


Example 8: installKettleEnvironment_additional_plugins

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void installKettleEnvironment_additional_plugins() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

  // This "empty pmr" contains a lib/ folder but with no content
  FileObject pmrArchive = KettleVFS.getFileObject( getClass().getResource( "/empty-pmr.zip" ).toURI().getPath() );
  FileObject bigDataPluginDir = DistributedCacheTestUtil.createTestFolderWithContent( DistributedCacheUtilImpl.PENTAHO_BIG_DATA_PLUGIN_FOLDER_NAME );

  String pluginName = "additional-plugin";
  FileObject additionalPluginDir = DistributedCacheTestUtil.createTestFolderWithContent( pluginName );
  Path root = new Path( "bin/test/installKettleEnvironment" );
  try {
    ch.installKettleEnvironment( pmrArchive, fs, root, bigDataPluginDir, "bin/test/" + pluginName );
    assertTrue( ch.isKettleEnvironmentInstalledAt( fs, root ) );
    assertTrue( fs.exists( new Path( root, "plugins/bin/test/" + pluginName ) ) );
  } finally {
    bigDataPluginDir.delete( new AllFileSelector() );
    additionalPluginDir.delete( new AllFileSelector() );
    fs.delete( root, true );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 25 | Source: DistributedCacheUtilImplOSDependentTest.java


Example 9: extractToTemp

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void extractToTemp() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  FileObject archive = KettleVFS.getFileObject( getClass().getResource( "/pentaho-mapreduce-sample.jar" ).toURI().getPath() );
  FileObject extracted = ch.extractToTemp( archive );

  assertNotNull( extracted );
  assertTrue( extracted.exists() );
  try {
    // There should be 3 files and 5 directories inside the root folder (which is the 9th entry)
    assertTrue( extracted.findFiles( new AllFileSelector() ).length == 9 );
  } finally {
    // clean up after ourself
    ch.deleteDirectory( extracted );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 18 | Source: DistributedCacheUtilImplTest.java


Example 10: findFiles_vfs

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void findFiles_vfs() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  FileObject testFolder = DistributedCacheTestUtil.createTestFolderWithContent();

  try {
    // Simply test we can find the jar files in our test folder
    List<String> jars = ch.findFiles( testFolder, "jar" );
    assertEquals( 4, jars.size() );

    // Look for all files and folders
    List<String> all = ch.findFiles( testFolder, null );
    assertEquals( 12, all.size() );
  } finally {
    testFolder.delete( new AllFileSelector() );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 19 | Source: DistributedCacheUtilImplTest.java


Example 11: stageForCache_destination_no_overwrite

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void stageForCache_destination_no_overwrite() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );

  FileObject source = DistributedCacheTestUtil.createTestFolderWithContent();
  try {
    Path root = new Path( "bin/test/stageForCache_destination_exists" );
    Path dest = new Path( root, "dest" );

    fs.mkdirs( dest );
    assertTrue( fs.exists( dest ) );
    assertTrue( fs.getFileStatus( dest ).isDir() );
    try {
      ch.stageForCache( source, fs, dest, false );
    } catch ( KettleFileException ex ) {
      assertTrue( ex.getMessage(), ex.getMessage().contains( "Destination exists" ) );
    } finally {
      fs.delete( root, true );
    }
  } finally {
    source.delete( new AllFileSelector() );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 27 | Source: DistributedCacheUtilImplTest.java


Example 12: prepareJarFiles

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
private URL[] prepareJarFiles( FileObject zipFile ) throws Exception {

    // zip:file:///tmp/foo.zip
    FileInputList fileList = FileInputList.createFileList( this, new String[] { "zip:" + zipFile.toString(), },
      new String[] { ".*\\.jar$", }, // Include mask: only jar files
      new String[] { ".*classpath\\.jar$", }, // Exclude mask: only jar files
      new String[] { "Y", }, // File required
      new boolean[] { true, } ); // Search sub-directories

    List<URL> files = new ArrayList<URL>();

    // Copy the jar files in the temp folder...
    //
    for ( FileObject file : fileList.getFiles() ) {
      FileObject jarfilecopy =
        KettleVFS.createTempFile(
          file.getName().getBaseName(), ".jar", environmentSubstitute( "${java.io.tmpdir}" ) );
      jarfilecopy.copyFrom( file, new AllFileSelector() );
      files.add( jarfilecopy.getURL() );
    }

    return files.toArray( new URL[files.size()] );
  }
 
Developer: pentaho | Project: pentaho-kettle | Lines: 24 | Source: JobEntryTalendJobExec.java


Example 13: localFile

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Override public File localFile(FileObject resource, FileObject dir) {
    if(resource instanceof LocalFile) {
        return FileUtils.toFile(resource);
    }

    final File localDir = localPath(dir);
    if(localDir == null) {
        throw new MetaborgRuntimeException("Replication directory " + dir
            + " is not on the local filesystem, cannot get local file for " + resource);
    }
    try {
        dir.createFolder();

        final FileObject copyLoc;
        if(resource.getType() == FileType.FOLDER) {
            copyLoc = dir;
        } else {
            copyLoc = dir.resolveFile(resource.getName().getBaseName());
        }
        copyLoc.copyFrom(resource, new AllFileSelector());

        return localDir;
    } catch(FileSystemException e) {
        throw new MetaborgRuntimeException("Could not get local file for " + resource, e);
    }
}
 
Developer: metaborg | Project: spoofax | Lines: 27 | Source: ResourceService.java


Example 14: download

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Override
public void download(String remotePath, Path local) throws IOException {
    LocalFile localFileObject = (LocalFile) fileSystemManager.resolveFile(local.toUri().toString());
    FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
    try {
        localFileObject.copyFrom(remoteFileObject, new AllFileSelector());
    } finally {
        localFileObject.close();
        remoteFileObject.close();
    }

}
 
Developer: sparsick | Project: comparison-java-ssh-libs | Lines: 13 | Source: VfsSftpClient.java


Example 15: upload

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Override
public void upload(Path local, String remotePath) throws IOException {
    LocalFile localFileObject = (LocalFile) fileSystemManager.resolveFile(local.toUri().toString());
    FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
    try {
        remoteFileObject.copyFrom(localFileObject, new AllFileSelector());
    } finally {
        localFileObject.close();
        remoteFileObject.close();
    }
}
 
Developer: sparsick | Project: comparison-java-ssh-libs | Lines: 12 | Source: VfsSftpClient.java


Example 16: copy

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Override
public void copy(String oldRemotePath, String newRemotePath) throws IOException {
    FileObject newRemoteFileObject = remoteRootDirectory.resolveFile(newRemotePath);
    FileObject oldRemoteFileObject = remoteRootDirectory.resolveFile(oldRemotePath);
    try {
        newRemoteFileObject.copyFrom(oldRemoteFileObject, new AllFileSelector());
    } finally {
        oldRemoteFileObject.close();
        newRemoteFileObject.close();
    }
}
 
Developer: sparsick | Project: comparison-java-ssh-libs | Lines: 12 | Source: VfsSftpClient.java


Example 17: upgrade

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
/**
 * Finds a folder to upgrade from based on the "newFolder" parameter -
 * upgrades are performed only within the same major version.
 *
 * @param newFolder
 *            The folder we want to upgrade to (the new version)
 * @return true if upgrade was successful, false otherwise
 */
public boolean upgrade(final FileObject newFolder) {
    try {
        if (newFolder.getChildren().length != 0) {
            // if the folder is not new then we don't want to touch it
            return false;
        }

        final FileObject upgradeFromFolderCandidate = findUpgradeCandidate(newFolder);

        if (upgradeFromFolderCandidate == null) {
            logger.info("Did not find a suitable upgrade candidate");
            return false;
        }

        logger.info("Upgrading DATACLEANER_HOME from : {}", upgradeFromFolderCandidate);
        newFolder.copyFrom(upgradeFromFolderCandidate, new AllFileSelector());

        // special handling of userpreferences.dat - we only want to keep
        // the good parts ;-)
        final UserPreferencesUpgrader userPreferencesUpgrader = new UserPreferencesUpgrader(newFolder);
        userPreferencesUpgrader.upgrade();

        // Overwrite example jobs
        final List<String> allFilePaths = DataCleanerHome.getAllInitialFiles();
        for (final String filePath : allFilePaths) {
            overwriteFileWithDefaults(newFolder, filePath);
        }
        return true;
    } catch (final FileSystemException e) {
        logger.warn("Exception occured during upgrading: {}", e);
        return false;
    }
}
 
Developer: datacleaner | Project: DataCleaner | Lines: 42 | Source: DataCleanerHomeUpgrader.java


Example 18: delete

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
/** Deletes a file or folder. */
@Override
public void delete(String filePath) {
    assertWritingPermitted("delete()");
    File.verifyFilePath(filePath);
    try {
        getFileObject(filePath).delete(new AllFileSelector());
        logger.debug("Deleted {}", filePath);
    }
    catch (IOException e) {
        throw new TechnicalException("Error deleting file", e);
    }
}
 
Developer: AludraTest | Project: aludratest | Lines: 14 | Source: FileInteractionImpl.java


Example 19: setup

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@BeforeClass
public static void setup() throws Exception {
  // Create a test hadoop configuration "a"
  FileObject ramRoot = VFS.getManager().resolveFile( HADOOP_CONFIGURATIONS_PATH );
  FileObject aConfigFolder = ramRoot.resolveFile( "a" );
  if ( aConfigFolder.exists() ) {
    aConfigFolder.delete( new AllFileSelector() );
  }
  aConfigFolder.createFolder();

  assertEquals( FileType.FOLDER, aConfigFolder.getType() );

  // Create the properties file for the configuration as hadoop-configurations/a/config.properties
  configFile = aConfigFolder.resolveFile( "config.properties" );
  Properties p = new Properties();
  p.setProperty( "name", "Test Configuration A" );
  p.setProperty( "classpath", "" );
  p.setProperty( "ignore.classes", "" );
  p.setProperty( "library.path", "" );
  p.setProperty( "required.classes", HadoopConfigurationLocatorTest.class.getName() );
  p.store( configFile.getContent().getOutputStream(), "Test Configuration A" );
  configFile.close();

  // Create the implementation jar
  FileObject implJar = aConfigFolder.resolveFile( "a-config.jar" );
  implJar.createFile();

  // Use ShrinkWrap to create the jar and write it out to VFS
  JavaArchive archive = ShrinkWrap.create( JavaArchive.class, "a-configuration.jar" ).addAsServiceProvider(
    HadoopShim.class, MockHadoopShim.class )
    .addClass( MockHadoopShim.class );
  archive.as( ZipExporter.class ).exportTo( implJar.getContent().getOutputStream() );
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 34 | Source: HadoopConfigurationLocatorTest.java


Example 20: findFiles_hdfs_native

import org.apache.commons.vfs2.AllFileSelector; // import the required package/class
@Test
public void findFiles_hdfs_native() throws Exception {
  DistributedCacheUtilImpl ch = new DistributedCacheUtilImpl( TEST_CONFIG );

  // Copy the contents of test folder
  FileObject source = DistributedCacheTestUtil.createTestFolderWithContent();
  Path root = new Path( "bin/test/stageArchiveForCacheTest" );
  Configuration conf = new Configuration();
  FileSystem fs = DistributedCacheTestUtil.getLocalFileSystem( conf );
  Path dest = new Path( root, "org/pentaho/mapreduce/" );
  try {
    try {
      ch.stageForCache( source, fs, dest, true );

      List<Path> files = ch.findFiles( fs, dest, null );
      assertEquals( 5, files.size() );

      files = ch.findFiles( fs, dest, Pattern.compile( ".*jar$" ) );
      assertEquals( 2, files.size() );

      files = ch.findFiles( fs, dest, Pattern.compile( ".*folder$" ) );
      assertEquals( 1, files.size() );
    } finally {
      fs.delete( root, true );
    }
  } finally {
    source.delete( new AllFileSelector() );
  }
}
 
Developer: pentaho | Project: pentaho-hadoop-shims | Lines: 30 | Source: DistributedCacheUtilImplOSDependentTest.java



Note: The org.apache.commons.vfs2.AllFileSelector examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors, and any redistribution or use should follow the corresponding project's license. Do not reproduce without permission.

