Project: OFAI/BayesianNonparametrics.jl
Repository: https://github.com/OFAI/BayesianNonparametrics.jl
Language: Julia (100.0%)

# BayesianNonparametrics.jl

BayesianNonparametrics is a Julia package implementing state-of-the-art Bayesian nonparametric models for medium-sized unsupervised problems. The package brings Bayesian nonparametrics to non-specialists, enabling widespread use of Bayesian nonparametric models. Emphasis is put on consistency, performance, and ease of use, allowing easy access to Bayesian nonparametric models from within Julia. BayesianNonparametrics lets you cluster continuous observations with nonparametric mixture models such as the Dirichlet Process Mixture of Gaussians and explore or summarise the resulting posterior, as the example below illustrates.
## News

BayesianNonparametrics is Julia 0.7 / 1.0 compatible.

## Installation

You can install the package into your running Julia installation using Julia's package manager, i.e.

```julia
pkg> add BayesianNonparametrics
```

## Documentation

Documentation is available in Markdown: see the documentation provided with the repository.

## Example

The following example illustrates the use of BayesianNonparametrics for clustering continuous observations using a Dirichlet Process Mixture of Gaussians.

After loading the package:

```julia
using BayesianNonparametrics
```

we can generate a 2D synthetic dataset (or use a multivariate continuous dataset of interest):

```julia
(X, Y) = bloobs(randomize = false)
```
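The `bloobs` helper is just a convenient demo-data generator; as the parenthetical above suggests, any matrix of continuous observations can be used instead. As a minimal, hedged sketch (the file name and the assumption that observations are stored one per row are ours, not the package's), such data could be loaded with Julia's DelimitedFiles standard library:

```julia
# Hypothetical alternative to bloobs: read a numeric matrix from a CSV file,
# one observation per row, one dimension per column.
using DelimitedFiles
X = readdlm("observations.csv", ',', Float64)  # "observations.csv" is a placeholder path
```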
We can then construct the parameters of our base distribution:

```julia
using Statistics  # for mean and cov (kept in the Statistics stdlib since Julia 0.7)

μ0 = vec(mean(X, dims = 1))
κ0 = 5.0
ν0 = 9.0
Σ0 = cov(X)
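# Hedged interpretation (not stated in this extract): WishartGaussian appears to be a
# normal-inverse-Wishart base measure, with μ0/Σ0 the prior mean and scale and
# κ0/ν0 the corresponding strength-of-belief (pseudo-observation) parameters.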
H = WishartGaussian(μ0, κ0, ν0, Σ0)
```

After defining the base distribution we can specify the model:

```julia
model = DPM(H)
```

which in this case is a Dirichlet Process Mixture. Each model has to be initialised; one possible initialisation approach for Dirichlet Process Mixtures is a k-means initialisation:

```julia
modelBuffer = init(X, model, KMeansInitialisation(k = 10))
```

The resulting buffer object can now be used to apply posterior inference on the model given the data:

```julia
models = train(modelBuffer, DPMHyperparam(), Gibbs(maxiter = 500))
```

You should see the progress of the sampling process in the command line. After applying Gibbs sampling, it is possible to explore the posterior samples based on their posterior densities,

```julia
densities = map(m -> m.energy, models)
```

their number of active components,

```julia
activeComponents = map(m -> sum(m.weights .> 0), models)
```

or the groupings of the observations:

```julia
assignments = map(m -> m.assignments, models)
```

The following animation illustrates posterior samples obtained by a Dirichlet Process Mixture. (Animation not reproduced here.)
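Since `models` collects the states visited by the sampler, simple posterior summaries can be computed directly from the quantities above. A minimal sketch (the burn-in choice is illustrative and assumes `train` returns one state per recorded iteration, neither of which is specified here):

```julia
# Discard an initial burn-in and summarise the posterior over the number of active clusters.
using Statistics

burnin = length(activeComponents) ÷ 2  # discard the first half as burn-in (illustrative)
kept = activeComponents[burnin+1:end]
println("posterior mean number of clusters: ", mean(kept))
println("posterior range of cluster counts: ", extrema(kept))
```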
Alternatively, one can compute a point estimate based on the posterior similarity matrix:

```julia
A = reduce(hcat, assignments)  # one column of cluster assignments per posterior sample
(N, D) = size(X)
PSM = ones(N, N)               # posterior similarity matrix
M = size(A, 2)                 # number of posterior samples
for i in 1:N
    for j in 1:i-1
        # fraction of samples in which observations i and j share a cluster
        PSM[i, j] = sum(A[i, :] .== A[j, :]) / M
        PSM[j, i] = PSM[i, j]
    end
end
```
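Each entry `PSM[i, j]` estimates the posterior probability that observations `i` and `j` belong to the same cluster, so the matrix can be inspected directly before committing to a point estimate. A small, hedged sketch, purely illustrative and not part of the package API:

```julia
# Average co-clustering probability across all pairs, and the observations
# most often grouped together with observation 1.
using Statistics
println("mean co-clustering probability: ", mean(PSM))
partners = sortperm(PSM[1, :], rev = true)
println("closest partners of observation 1: ", partners[2:min(6, end)])
```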
From the posterior similarity matrix we can then find the optimal partition, which minimizes the lower bound of the variation of information:

```julia
mink = minimum(length(m.weights) for m in models)
maxk = maximum(length(m.weights) for m in models)
(peassignments, _) = pointestimate(PSM, method = :average, mink = mink, maxk = maxk)
```

The grouping which minimizes the lower bound of the variation of information is illustrated in the following image. (Image not reproduced here.)
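Because `bloobs` also returned the generating labels `Y`, the point estimate can be compared against them. A minimal sketch, assuming `peassignments` holds one cluster label per observation and `Y` contains integer group labels (both assumptions of ours):

```julia
# Cross-tabulate estimated clusters against the true generating groups.
counts = Dict{Tuple{Int, Int}, Int}()
for (est, truth) in zip(peassignments, Y)
    key = (Int(est), Int(truth))
    counts[key] = get(counts, key, 0) + 1
end
for (key, c) in sort(collect(counts), by = first)
    println("estimated cluster ", key[1], " vs. true group ", key[2], ": ", c, " observations")
end
```

Note that cluster indices are arbitrary, so only the grouping pattern, not the label values themselves, should be compared.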