
Java MultiLayerPerceptron Class Code Examples


This article collects typical usage examples of the Java class org.neuroph.nnet.MultiLayerPerceptron. If you have been wondering what MultiLayerPerceptron is for and how to use it, the selected code examples below should help.



The MultiLayerPerceptron class belongs to the org.neuroph.nnet package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.

Example 1: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 *  Runs this sample
 */
public static void main(String[] args) {    
    // get the path to file with data
    String inputFileName = IrisClassificationSample.class.getResource("data/iris_data_normalised.txt").getFile();
    
    // create MultiLayerPerceptron neural network
    MultiLayerPerceptron neuralNet = new MultiLayerPerceptron(4, 16, 3);
    // create training set from file
    DataSet irisDataSet = DataSet.createFromFile(inputFileName, 4, 3, ",");
    // train the network with training set
    neuralNet.learn(irisDataSet);         
    
    System.out.println("Done training.");
    System.out.println("Testing network...");
    
    testNeuralNetwork(neuralNet, irisDataSet);
}
 
Developer: East196 | Project: maker | Lines: 20 | Source: IrisClassificationSample.java


Example 2: createMLPerceptron

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Creates and returns a new instance of Multi Layer Perceptron
 * @param layersStr space separated number of neurons in layers
 * @param transferFunctionType transfer function type for neurons
 * @return instance of Multi Layer Perceptron
 */
public static MultiLayerPerceptron createMLPerceptron(String layersStr, TransferFunctionType transferFunctionType, Class<?> learningRule, boolean useBias, boolean connectIO) {
    ArrayList<Integer> layerSizes = VectorParser.parseInteger(layersStr);
    NeuronProperties neuronProperties = new NeuronProperties(transferFunctionType, useBias);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(layerSizes, neuronProperties);

    // set learning rule - TODO: use reflection here
    if (learningRule.getName().equals(BackPropagation.class.getName())) {
        nnet.setLearningRule(new BackPropagation());
    } else if (learningRule.getName().equals(MomentumBackpropagation.class.getName())) {
        nnet.setLearningRule(new MomentumBackpropagation());
    } else if (learningRule.getName().equals(DynamicBackPropagation.class.getName())) {
        nnet.setLearningRule(new DynamicBackPropagation());
    } else if (learningRule.getName().equals(ResilientPropagation.class.getName())) {
        nnet.setLearningRule(new ResilientPropagation());
    }

    // connect io
    if (connectIO) {
        nnet.connectInputsToOutputs();
    }

    return nnet;
}
 
Developer: fiidau | Project: Y-Haplogroup-Predictor | Lines: 30 | Source: NeuralNetworkFactory.java
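The TODO in Example 2 suggests replacing the if/else chain with reflection. A minimal sketch of that idea follows; note that the nested classes here are self-contained stand-ins for Neuroph's real learning-rule hierarchy, so the snippet compiles without the Neuroph jar:

```java
public class ReflectiveFactory {
    // stand-ins for Neuroph's learning-rule classes, so the sketch is self-contained
    public static class LearningRule {}
    public static class BackPropagation extends LearningRule {}
    public static class MomentumBackpropagation extends BackPropagation {}

    /** Instantiates any learning rule with a no-arg constructor via reflection. */
    public static LearningRule createLearningRule(Class<? extends LearningRule> type) {
        try {
            return type.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("cannot instantiate " + type.getName(), e);
        }
    }

    public static void main(String[] args) {
        LearningRule rule = createLearningRule(MomentumBackpropagation.class);
        System.out.println(rule.getClass().getSimpleName()); // prints MomentumBackpropagation
    }
}
```

With this approach, adding a new learning rule requires no factory change, at the cost of restricting rules to those with a public no-arg constructor.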


Example 3: AnimalNetwork

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Instantiates a new animal network.
 *
 * @param input the input
 * @param hidden the hidden
 * @param output the output
 */
public AnimalNetwork(int input, int hidden, int output) {
	super();
	System.out.println("network is created");
	initializeNeurons();
	initializeQuestions();
	animal_network = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, Data.INPUTUNITS, Data.HIDDENUNITS, Data.OUTPUTUNITS);
	animal_network.setNetworkType(NeuralNetworkType.MULTI_LAYER_PERCEPTRON);
	animal_network.randomizeWeights(); // randomize weights
	// configure the rule that will actually be used; setting these parameters
	// on the default rule before replacing it would discard them
	MomentumBackpropagation backpropagation = new MomentumBackpropagation();
	backpropagation.setMaxError(MAXERROR);         // in range 0-1
	backpropagation.setLearningRate(LEARNINGRATE); // in range 0-1
	backpropagation.setMaxIterations(MAXITERATIONS);
	backpropagation.setMomentum(0.7);              // set momentum
	animal_network.setLearningRule(backpropagation);
}
 
Developer: eldemcan | Project: 20q | Lines: 24 | Source: AnimalNetwork.java


Example 4: startLearning

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
public void startLearning() {
    Thread t1 = new Thread(new Runnable() {
        public void run() {
            console.addLog("Loading test set");
            testSet = loader.loadDataSet(testSetPath);
            console.addLog("Test set loaded");

            console.addLog("Loading training set");
            trainingSet = loader.loadDataSet(trainingSetPath);
            console.addLog("Training set loaded. Input size: " + trainingSet.getInputSize() +
                    " Output size: " + trainingSet.getOutputSize());

            nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID,
                    trainingSet.getInputSize(), 86, 86, trainingSet.getOutputSize());

            MomentumBackpropagation backPropagation = new MomentumBackpropagation();
            backPropagation.setLearningRate(learningRate);
            backPropagation.setMomentum(momentum);

            LearningTestSetEvaluator evaluator =
                    new LearningTestSetEvaluator(nnetName, testSet, trainingSet, console);
            backPropagation.addListener(evaluator);
            backPropagation.addListener(new LearningEventListener() {
                @Override
                public void handleLearningEvent(LearningEvent event) {
                    if (event.getEventType() == LearningEvent.Type.LEARNING_STOPPED) {
                        listeners.forEach((listener) -> listener.learningStopped(LearningNetTask.this));
                    }
                }
            });
            nnet.setLearningRule(backPropagation);
            console.addLog("Started neural net learning with momentum: "
                    + momentum + ", learning rate: " + learningRate);
            nnet.learnInNewThread(trainingSet);
        }
    });
    t1.start();
}
 
Developer: fgulan | Project: final-thesis | Lines: 39 | Source: LearningNetTask.java


Example 5: learnNeuralNet

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 86, 86, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Developer: fgulan | Project: final-thesis | Lines: 14 | Source: OneToOneHVTest.java


Example 6: learnNeuralNet

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 140, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Developer: fgulan | Project: final-thesis | Lines: 14 | Source: OneToOneNonUniqueDiagonalTest.java


Example 7: learnNeuralNet

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 76, 76, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Developer: fgulan | Project: final-thesis | Lines: 14 | Source: OneToOneDiagonalTest.java


Example 8: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
public static void main(String[] args) {

    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 2, 3, 1);
    myMlPerceptron.setLearningRule(new BackPropagation());

    // learn the training set
    myMlPerceptron.learn(trainingSet);

    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

    // save trained neural network
    myMlPerceptron.save("myMlPerceptron.nnet");

    // load saved neural network
    NeuralNetwork loadedMlPerceptron = NeuralNetwork.createFromFile("myMlPerceptron.nnet");

    // test loaded neural network
    System.out.println("Testing loaded neural network");
    testNeuralNetwork(loadedMlPerceptron, trainingSet);
}
 
Developer: fgulan | Project: final-thesis | Lines: 31 | Source: TestLearn.java
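Without the Neuroph jar on the classpath, the computation a trained MultiLayerPerceptron performs can still be illustrated with a hand-coded forward pass. The weights below are hand-picked to solve XOR, not produced by the training run above; they are a self-contained illustration of what a 2-2-1 sigmoid network computes:

```java
public class XorForwardPass {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    /** Forward pass of a 2-2-1 sigmoid network with hand-picked XOR weights. */
    public static double xor(double a, double b) {
        // hidden neuron 1 fires when a OR b
        double h1 = sigmoid(20 * a + 20 * b - 10);
        // hidden neuron 2 fires when a AND b
        double h2 = sigmoid(20 * a + 20 * b - 30);
        // output fires when h1 AND NOT h2, i.e. exactly one input is set
        return sigmoid(20 * h1 - 20 * h2 - 10);
    }

    public static void main(String[] args) {
        for (double[] in : new double[][]{{0, 0}, {0, 1}, {1, 0}, {1, 1}}) {
            System.out.printf("%.0f XOR %.0f = %.3f%n", in[0], in[1], xor(in[0], in[1]));
        }
    }
}
```

Training with BackPropagation, as in Example 8, amounts to finding weights like these automatically by gradient descent on the training-set error.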


Example 9: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Runs this sample
 */    
public static void main(String[] args) throws FileNotFoundException, IOException, ClassNotFoundException, SQLException {

    // create neural network
    MultiLayerPerceptron neuralNet = new MultiLayerPerceptron(2, 3, 1);

    // load the MySQL database driver (the connection URL below is a MySQL URL)
    Class.forName("com.mysql.jdbc.Driver");
    // get a connection to the database
    String dbName = "neuroph";
    String dbUser = "root";
    String dbPass = "";
    // create a connection to database
    Connection connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/" + dbName, dbUser, dbPass);

    // use this SQL to read input from a database table
    String inputSql = "SELECT * FROM input_data";
    // create input adapter using the specified database connection and SQL query
    JDBCInputAdapter in = new JDBCInputAdapter(connection, inputSql);
    String outputTable = "output_data"; // write output to this table
    // create output adapter using the specified connection and output table
    JDBCOutputAdapter out = new JDBCOutputAdapter(connection, outputTable);

    double[] input;
    // read input using input adapter
    while ((input = in.readInput()) != null) {
        neuralNet.setInput(input);
        neuralNet.calculate();
        double[] output = neuralNet.getOutput();
        // and write output using output adapter
        out.writeOutput(output);
    }

    in.close();
    out.close();
    connection.close();
}
 
Developer: East196 | Project: maker | Lines: 41 | Source: JDBCSample.java


Example 10: run

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Runs this sample
 */
public void run() {
	
    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.TANH, 2, 3, 1);
    // enable batch if using MomentumBackpropagation
    if( myMlPerceptron.getLearningRule() instanceof MomentumBackpropagation ){
    	((MomentumBackpropagation)myMlPerceptron.getLearningRule()).setBatchMode(true);
    	((MomentumBackpropagation)myMlPerceptron.getLearningRule()).setMaxError(0.00001);
    }

    LearningRule learningRule = myMlPerceptron.getLearningRule();
    learningRule.addListener(this);
    
    // learn the training set
    System.out.println("Training neural network...");
    myMlPerceptron.learn(trainingSet);

    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

    // save trained neural network
    myMlPerceptron.save("myMlPerceptron.nnet");

    // load saved neural network
    NeuralNetwork loadedMlPerceptron = NeuralNetwork.load("myMlPerceptron.nnet");

    // test loaded neural network
    System.out.println("Testing loaded neural network");
    testNeuralNetwork(loadedMlPerceptron, trainingSet);
}
 
Developer: East196 | Project: maker | Lines: 42 | Source: XorMultiLayerPerceptronSample.java


Example 11: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Runs this sample
 */
public static void main(String[] args) {
	
    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 2, 3, 1);
    // set ResilientPropagation learning rule
    myMlPerceptron.setLearningRule(new ResilientPropagation()); 
   
    // learn the training set
    System.out.println("Training neural network...");
    myMlPerceptron.learn(trainingSet);

    int iterations = ((SupervisedLearning)myMlPerceptron.getLearningRule()).getCurrentIteration();        
    System.out.println("Learned in "+iterations+" iterations");
    
    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

}
 
Developer: East196 | Project: maker | Lines: 30 | Source: XorResilientPropagationSample.java


Example 12: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Runs this sample
 */
public static void main(String[] args) {

    MultiLayerPerceptron neuralNet = new MultiLayerPerceptron(2, 3, 1);
    // neuralNet.randomizeWeights(new WeightsRandomizer());
    // neuralNet.randomizeWeights(new RangeRandomizer(0.1, 0.9));
    // neuralNet.randomizeWeights(new GaussianRandomizer(0.4, 0.3));
    neuralNet.randomizeWeights(new NguyenWidrowRandomizer(0.3, 0.7));
    printWeights(neuralNet);

    neuralNet.randomizeWeights(new DistortRandomizer(0.5));
    printWeights(neuralNet);
}
 
Developer: East196 | Project: maker | Lines: 16 | Source: RandomizationSample.java
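Example 12 uses NguyenWidrowRandomizer without explanation. The classic Nguyen-Widrow rule scales each hidden neuron's random weight vector to a magnitude of beta = 0.7 * h^(1/n), where h is the hidden-neuron count and n the input count. The sketch below is a self-contained illustration of that published rule, not Neuroph's exact implementation (the (0.3, 0.7) constructor arguments in the sample are a randomization range specific to Neuroph's API):

```java
import java.util.Random;

public class NguyenWidrowSketch {
    /** Returns hiddenCount x inputCount weights initialized with the Nguyen-Widrow rule. */
    public static double[][] initWeights(int inputCount, int hiddenCount, long seed) {
        Random rnd = new Random(seed);
        // scale factor: beta = 0.7 * h^(1/n)
        double beta = 0.7 * Math.pow(hiddenCount, 1.0 / inputCount);
        double[][] w = new double[hiddenCount][inputCount];
        for (int i = 0; i < hiddenCount; i++) {
            double norm = 0;
            for (int j = 0; j < inputCount; j++) {
                w[i][j] = rnd.nextDouble() - 0.5; // uniform in [-0.5, 0.5)
                norm += w[i][j] * w[i][j];
            }
            norm = Math.sqrt(norm);
            for (int j = 0; j < inputCount; j++) {
                w[i][j] = beta * w[i][j] / norm; // rescale row to magnitude beta
            }
        }
        return w;
    }
}
```

Compared with plain range randomization, this keeps each hidden neuron's active region inside the input space, which tends to speed up early training.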


Example 13: prepareTest

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Benchmark preparation consists of training set and neural network creation.
 * This method generates a training set with 100 rows, where every row has 10 input and 5 output elements.
 * The neural network has two hidden layers with 8 and 7 neurons, and runs the learning rule for 2000 iterations.
 */
@Override
public void prepareTest() {
    int trainingSetSize = 100;
    int inputSize = 10;
    int outputSize = 5;
    
    this.trainingSet = new DataSet(inputSize, outputSize);
    
    for (int i = 0; i < trainingSetSize; i++) {
        double input[] = new double[inputSize];
        for( int j=0; j<inputSize; j++)
            input[j] = Math.random();

        double output[] = new double[outputSize];
        for( int j=0; j<outputSize; j++)
            output[j] = Math.random();            
        
        DataSetRow trainingSetRow = new DataSetRow(input, output);
        trainingSet.addRow(trainingSetRow);
    }
    
    
    network = new MultiLayerPerceptron(inputSize, 8, 7, outputSize);
    ((MomentumBackpropagation)network.getLearningRule()).setMaxIterations(2000);
    
}
 
Developer: fiidau | Project: Y-Haplogroup-Predictor | Lines: 32 | Source: MyBenchmarkTask.java


Example 14: AnimalNetwork

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
/**
 * Instantiates a new animal network.
 *
 * @param input the input
 * @param hidden the hidden
 * @param output the output
 */
public AnimalNetwork(int input, int hidden, int output) {
	super();
	System.out.println("network is created");
	initializeNeurons();
	animal_network = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, Data.INPUTUNITS, Data.HIDDENUNITS, Data.OUTPUTUNITS);
	animal_network.setNetworkType(NeuralNetworkType.MULTI_LAYER_PERCEPTRON);
	animal_network.randomizeWeights(); // randomize weights
	// configure the rule that will actually be used; setting these parameters
	// on the default rule before replacing it would discard them
	BackPropagation backPropagation = new BackPropagation();
	backPropagation.setMaxError(MAXERROR);         // in range 0-1
	backPropagation.setLearningRate(LEARNINGRATE); // in range 0-1
	backPropagation.setMaxIterations(MAXITERATIONS);
	animal_network.setLearningRule(backPropagation);
}
 
Developer: eldemcan | Project: 20q | Lines: 20 | Source: AnimalNetwork.java


Example 15: main

import org.neuroph.nnet.MultiLayerPerceptron; // import the required package/class
public static void main(String[] args) {

    // create training set (extending XOR sample)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{2, 2}, new double[]{-1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 2}, new double[]{-1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 3}, new double[]{-1}));
    trainingSet.addRow(new DataSetRow(new double[]{2, 2}, new double[]{-1}));
    trainingSet.addRow(new DataSetRow(new double[]{2, 42}, new double[]{-1}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.TANH, 2, 9, 1);

    // learn the training set
    long start = System.currentTimeMillis();
    myMlPerceptron.learn(trainingSet);
    long time = System.currentTimeMillis() - start;
    System.out.println("It took: " + time + " ms");

    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

    // save trained neural network
    myMlPerceptron.save("myMlPerceptron.nnet");

    // load saved neural network
    FileInputStream stream;
    try {
        stream = new FileInputStream("myMlPerceptron.nnet");
        NeuralNetwork loadedMlPerceptron = NeuralNetwork.load(stream);

        // test loaded neural network
        System.out.println("Testing loaded neural network");
        testNeuralNetwork(loadedMlPerceptron, trainingSet);

        System.out.println("Testing unknown input");
        testNeuralNetwork(loadedMlPerceptron, new DataSetRow(new double[]{2, 30}));

    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}
 
Developer: yuripourre | Project: neuro-project | Lines: 57 | Source: MultiLayerPerceptronSample.java



Note: the org.neuroph.nnet.MultiLayerPerceptron examples in this article were collected from GitHub, MSDocs, and other code-hosting platforms, and the snippets were selected from open-source projects contributed by their developers. Copyright of the source code remains with the original authors; refer to each project's license before distributing or using it, and do not republish without permission.

