# netlib-java

Source: https://github.com/fommil/netlib-java

If you require support or wish to ensure the continuation of this library, you must get your company to respond to the Call For Funding. I do not have the inclination to provide gratis assistance.
For more details on high performance linear algebra on the JVM, please watch my talk at Scala eXchange 2014 (follow along with the high-res slides). If you're a developer looking for an easy-to-use linear algebra library on the JVM, we strongly recommend Commons-Math, MTJ and Breeze.
In netlib-java, implementations of BLAS / LAPACK / ARPACK are provided by machine-optimised system libraries where available, by self-contained native reference builds, and by pure-JVM F2J fallbacks. The JNILoader will attempt to load the implementations in this order automatically. All major operating systems are supported out-of-the-box.
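To see which implementation the JNILoader actually selected, a quick check is to print the runtime class of the singleton instances. This sketch assumes the netlib-java `all` artifact (see Installation below) is on the classpath:

```java
import com.github.fommil.netlib.BLAS;
import com.github.fommil.netlib.LAPACK;

public class CheckNetlib {
    public static void main(String[] args) {
        // Prints e.g. ...NativeSystemBLAS if a machine-optimised system
        // library was found, ...NativeRefBLAS for the bundled reference
        // natives, or ...F2jBLAS for the pure-Java fallback.
        System.out.println(BLAS.getInstance().getClass().getName());
        System.out.println(LAPACK.getInstance().getClass().getName());
    }
}
```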
## Machine Optimised System Libraries

High performance BLAS / LAPACK are available commercially and open source for specific CPU chipsets. It is worth noting that "optimised" here means a lot more than simply changing the compiler optimisation flags: specialist assembly instructions are combined with compile-time profiling and the selection of array alignments for the kernel and CPU combination.

An alternative to optimised libraries is to use the GPU: e.g. cuBLAS or clBLAS. Setting up cuBLAS must be done via our NVBLAS instructions, since cuBLAS does not implement the actual BLAS API out of the box. Be aware that GPU implementations have severe performance degradation for small arrays. MultiBLAS is an initiative to work around the limitation of GPU BLAS implementations by selecting the optimal implementation at runtime, based on the array size.

To enable machine optimised natives in netlib-java, end-users must make their machine-optimised BLAS and LAPACK available as system shared libraries at runtime. If it is not possible to provide a shared library, the author may be available
to assist with custom builds (and further improvements to netlib-java): contact the author for availability.

### OS X

Apple OS X requires no further setup because OS X ships with the veclib framework, boasting incredible CPU performance that is difficult to surpass (the performance charts below show that it out-performs ATLAS and is on par with the Intel MKL).

### Linux (includes Raspberry Pi)

Generically-tuned ATLAS and OpenBLAS are available with most distributions (e.g. Debian) and must be enabled explicitly using the package manager. e.g. for Debian / Ubuntu one would type
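the package and alternatives commands, which are typically the following (an assumption: exact package names vary by release):

```shell
# install the generically-tuned implementations
sudo apt-get install libatlas3-base libopenblas-base

# choose which implementation provides libblas.so.3 / liblapack.so.3
sudo update-alternatives --config libblas.so.3
sudo update-alternatives --config liblapack.so.3
```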
selecting the preferred implementation. However, these are only generic pre-tuned builds. To get optimal performance for a specific
machine, it is best to compile locally by grabbing the latest ATLAS or the latest OpenBLAS and following the compilation
instructions (don't forget to turn off CPU throttling and power management during the build!).
Install the shared libraries into a folder that is seen by the runtime linker (e.g. add your install folder to /etc/ld.so.conf.d/ and run ldconfig). If you have an Intel MKL licence, you could also create symbolic links from libblas.so.3 and liblapack.so.3 to libmkl_rt.so, and don't forget to add the MKL libraries to your LD_LIBRARY_PATH.
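As a sketch of the MKL setup, assuming MKL is installed under /opt/intel/mkl (an illustrative path; adjust for your installation):

```shell
# point the BLAS / LAPACK sonames at the single MKL runtime library
sudo ln -sf /opt/intel/mkl/lib/intel64/libmkl_rt.so /usr/lib/libblas.so.3
sudo ln -sf /opt/intel/mkl/lib/intel64/libmkl_rt.so /usr/lib/liblapack.so.3

# ensure the runtime linker can find MKL's own dependencies
export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
```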
NOTE: some distributions, such as Ubuntu, do not create the necessary symbolic links for the system-installed implementations, so you may have to create them manually.

### Windows

The native system builds expect to find the optimised libraries as libblas3.dll and liblapack3.dll on the %PATH% (or in the current working directory). Use Dependency Walker to help resolve any problems, such as unsatisfied link errors.
NOTE: OpenBLAS doesn't provide separate BLAS and LAPACK libraries, so you will have to customise the build or copy the binary into both libblas3.dll and liblapack3.dll.

## Customisation

A specific implementation may be forced like so:
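e.g. to force the self-contained reference natives, pass these system properties to the JVM (they are read by netlib-java at class-load time):

```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeRefBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.NativeRefLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.NativeRefARPACK
```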
A specific (non-standard) JNI binary may be forced like so:
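e.g. (the binary name shown here is illustrative; use the name of your custom build):

```
-Dcom.github.fommil.netlib.NativeSystemBLAS.natives=netlib-native_system-myos-myarch.so
```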
(note that this is not your LD_LIBRARY_PATH). To turn off natives altogether, add these to the JVM flags:
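The pure-Java F2J implementations can be selected explicitly:

```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.F2jBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.F2jLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.F2jARPACK
```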
## Performance

Java has a reputation with older generation developers because Java applications were slow in the 1990s. Nowadays, the JIT ensures that Java applications keep pace with – or exceed the performance of – C / C++ / Fortran applications. The following performance charts give an idea of the performance ratios of Java vs the native implementations. Also shown are pure C performance runs, which show that dropping to C at the application layer gives no performance benefit.
If anything, the Java version is faster for smaller matrices, and is consistently faster than the "optimised" implementations for some types of operations (e.g. ddot). One can expect machine-optimised natives to out-perform the reference implementation, especially for larger arrays, as demonstrated below by Apple's veclib framework, Intel's MKL and (to a lesser extent) ATLAS. Of particular note is cuBLAS (NVIDIA's graphics card implementation), which performs as well as ATLAS on the test machine. Included in the CUDA performance results is the time taken to set up the CUDA interface and copy the matrix elements to the GPU device.

The DGEMM benchmark measures matrix multiplication performance. The DGETRI benchmark measures matrix LU factorisation and matrix inversion performance. The DDOT benchmark measures vector dot product performance. The DSAUPD benchmark measures the
calculation of 10% of the eigenvalues for sparse matrices (ARPACK).

NOTE: larger arrays were called first, so the JIT has already kicked in for the F2J implementations: on a cold startup the F2J implementations are about 10 times slower, reaching peak performance after about 20 calls of a function (the Raspberry Pi doesn't seem to have a JIT).

## Installation

Don't download the zip file unless you know what you're doing: use maven or ivy to manage your dependencies as described below.

Releases are distributed on Maven central:

```xml
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.1.2</version>
  <type>pom</type>
</dependency>
```

SBT developers can use:

```scala
"com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```

Those wanting to preserve the pre-1.0 API can use the legacy package (but note that it will be removed in the next release):

```xml
<dependency>
  <groupId>com.googlecode.netlib-java</groupId>
  <artifactId>netlib</artifactId>
  <version>1.1</version>
</dependency>
```

Developers who feel the native libs are too much bandwidth can depend on a subset of implementations: simply look in the all module and depend only on the artifacts you need.

Snapshots (preview releases, when new features are in active development) are distributed on Sonatype's Snapshot Repository, e.g.:

```xml
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.2-SNAPSHOT</version>
</dependency>
```

If the above fails, ensure you have the following in your pom.xml:

```xml
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```