Project name: logicalclocks/hops-examples
Project address: https://github.com/logicalclocks/hops-examples
Primary language: Jupyter Notebook (93.4%)

# Hops Examples

This repository provides examples of how to program Big Data and Deep Learning applications that run on Hopsworks, using Apache Spark, Apache Flink, Apache Kafka, Apache Hive, and TensorFlow. Users can upload and run these programs and notebooks from within their Hopsworks projects.

## Online Documentation

You can find the latest Hopsworks documentation on the project's webpage, including the Hopsworks user and developer guides as well as a list of versions for all supported services. This README provides basic instructions for building and running the examples.

## Website Generation (Hugo)

Install the dependencies first, then generate the webpages and run the webserver; with a standard Hugo installation, `hugo server` builds the site and serves it locally.
When you add a new notebook, put it under the `notebooks` directory. If you want to add a new category of notebooks, place your notebook in a new directory and then register the category in the website's configuration file.
## Building the examples
Run `mvn package` at the repository root (a standard Maven build). This generates a jar for each module, which can then either be used to create Hopsworks jobs (Spark/Flink) or to execute Hive queries remotely.

## Helper Libraries

Hops Examples makes use of Hops, a set of Java and Python libraries which provide developers with tools that make programming on Hops easy. Hops is automatically made available to all Jobs and Notebooks, without the user having to explicitly import it. Detailed documentation on Hops is available here.

## Spark

### Structured Streaming with Kafka and HopsFS

To help you get started, StructuredStreamingKafka shows how to build a Spark application that produces and consumes messages from Kafka and persists them to HopsFS, both in Parquet format and in plain text. The example makes use of the latest Spark-Kafka API. To run it, you need to provide the following parameters when creating a Spark job in Hopsworks.
The MainClass is `io.hops.examples.spark.kafka.StructuredStreamingKafka`. Topics are provided via the Hopsworks Job UI: the user checks the Kafka box and selects the topics from the drop-down menu. When consuming from multiple topics using a single Spark directStream, all topics must use the same Avro schema; create a new directStream for topics that use a different Avro schema. Data consumed is by default persisted to HopsFS.

### Avro Records

The example uses the following Avro schema:

```json
{
"fields": [
{
"name": "timestamp",
"type": "string"
},
{
"name": "priority",
"type": "string"
},
{
"name": "logger",
"type": "string"
},
{
"name": "message",
"type": "string"
}
],
"name": "myrecord",
"type": "record"
}
```
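The shipped example is a Java Spark job, but the consume-and-persist pattern is easy to see in PySpark. The following is a minimal sketch, not the job's actual code: the broker address, topic name, and output paths are hypothetical placeholders, and on Hopsworks the Kafka connection and security settings would normally be supplied by the platform rather than hard-coded.

```python
# Minimal PySpark sketch of the consume-and-persist pattern; the broker
# address, topic name, and output paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("StructuredStreamingKafkaSketch").getOrCreate()

# Read a stream of messages from Kafka.
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
      .option("subscribe", "mytopic")                    # placeholder topic
      .load())

# Kafka values arrive as bytes; cast them to strings before persisting.
messages = df.select(col("value").cast("string").alias("message"))

# Persist to HopsFS in Parquet format ...
parquet_query = (messages.writeStream
    .format("parquet")
    .option("path", "/Projects/demo/Resources/data-parquet")  # placeholder path
    .option("checkpointLocation", "/Projects/demo/Resources/ckpt-parquet")
    .start())

# ... and in plain text (the text sink requires a single string column).
text_query = (messages.writeStream
    .format("text")
    .option("path", "/Projects/demo/Resources/data-text")     # placeholder path
    .option("checkpointLocation", "/Projects/demo/Resources/ckpt-text")
    .start())

spark.streams.awaitAnyTermination()
```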
## TensorFlow

Hops Examples provides Jupyter notebooks for running TensorFlow applications on Hops. All notebooks are automatically made available to Hopsworks projects upon project creation. Detailed documentation on how to program TensorFlow on Hopsworks is available here.

## Feature Store

A sample feature engineering job that takes in raw data, transforms it into features suitable for machine learning, and saves the features into the feature store is available in this repository.
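The exact job depends on the raw data, but its shape is always the same: read raw records, derive feature columns, and write them to the feature store. Below is a minimal sketch assuming the hops-util-py `featurestore` API; the input path, column names, and feature group name are hypothetical.

```python
# Minimal feature engineering sketch, assuming the hops-util-py featurestore
# API; input path, column names, and feature group name are hypothetical.
from hops import featurestore
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, datediff, current_date

spark = SparkSession.builder.getOrCreate()

# Read raw data from HopsFS (placeholder dataset path).
raw_df = spark.read.csv("/Projects/demo/Resources/raw_customers.csv",
                        header=True, inferSchema=True)

# Derive features suitable for machine learning from the raw columns.
features_df = raw_df.select(
    col("customer_id"),
    datediff(current_date(), col("signup_date")).alias("days_since_signup"),
    (col("total_spend") / col("num_orders")).alias("avg_order_value"),
)

# Save the engineered features as a feature group in the feature store.
featurestore.create_featuregroup(features_df, "customer_features")
```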
## TensorFlow Extended (TFX)

This repo comes with notebooks demonstrating how to implement horizontally scalable TFX pipelines. The complete pipeline is first demonstrated in a single end-to-end notebook; that notebook is then split into smaller ones that correspond to the different steps in the pipeline. These notebooks can be found under the notebooks directory.
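As a schematic illustration of the first steps of such a pipeline, here is a minimal sketch assuming TFX 1.x; all paths and names are hypothetical, and exact component signatures vary between TFX versions.

```python
# Schematic TFX pipeline sketch, assuming TFX 1.x; paths and names are
# hypothetical, and component signatures vary between TFX versions.
from tfx import v1 as tfx

def create_pipeline():
    # Ingest raw CSV files into TFRecord-backed Examples.
    example_gen = tfx.components.CsvExampleGen(input_base="/data/raw")  # placeholder
    # Compute descriptive statistics over the ingested examples.
    statistics_gen = tfx.components.StatisticsGen(
        examples=example_gen.outputs["examples"])
    # Infer a data schema from the statistics.
    schema_gen = tfx.components.SchemaGen(
        statistics=statistics_gen.outputs["statistics"])
    return tfx.dsl.Pipeline(
        pipeline_name="tfx_sketch",
        pipeline_root="/pipelines/tfx_sketch",  # placeholder artifact root
        components=[example_gen, statistics_gen, schema_gen],
    )

# A local runner for illustration; the components themselves execute as
# Apache Beam jobs, which is what lets the same pipeline scale horizontally
# on a distributed Beam runner.
tfx.orchestration.LocalDagRunner().run(create_pipeline())
```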
## Beam

The Beam example notebooks can likewise be found under the notebooks directory.
## Hive

Executing Hive queries remotely requires the project's certificates. Users can export their project's certificates by navigating to the Settings page in Hopsworks; an email is then sent with the password for the truststore and keystore.
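As an illustration of how the exported certificates might be used, here is a minimal sketch of a remote Hive query from Python, assuming the `jaydebeapi` package and the standard Hive JDBC driver; every host, path, and credential below is a hypothetical placeholder.

```python
# Minimal sketch of a remote Hive query over JDBC from Python, assuming the
# jaydebeapi package and a Hive JDBC driver jar; host, port, database,
# credentials, and file paths are all hypothetical placeholders.
import jaydebeapi

# The truststore and its password come from the certificate export
# described above.
url = ("jdbc:hive2://hopsworks.example.com:9085/demo_db;"
       "ssl=true;"
       "sslTrustStore=/path/to/trustStore.jks;"
       "trustStorePassword=CHANGE_ME")

conn = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",    # Hive JDBC driver class
    url,
    ["demo_user", "demo_password"],       # placeholder credentials
    "/path/to/hive-jdbc-standalone.jar",  # placeholder driver jar
)
cursor = conn.cursor()
cursor.execute("SHOW TABLES")
print(cursor.fetchall())
conn.close()
```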