Hudi Programming in IDEA

Project

I. Hudi + Spark + Kafka (Scala)

For configuration, see [1. Scala Configuration] in the Configuration appendix.

For dependencies, see [1. Hudi + Spark + Kafka Dependencies] in the Dependencies appendix.

1-1 Build the SparkSession Object

  def main(args: Array[String]): Unit = {
    // 1. Build the SparkSession object
    val spark: SparkSession = SparkUtils.createSparkSession(this.getClass)
    // 2. Consume data from Kafka in real time
    val kafkaStreamDF: DataFrame = readFromKafka(spark, "order-topic")
    // 3. Extract fields and convert data types
    val streamDF: DataFrame = process(kafkaStreamDF)
    // 4. Save the data to the Hudi table: MOR (Merge On Read)
    saveToHudi(streamDF)
    // 5. After the streaming application starts, wait for termination
    spark.streams.active.foreach(query => println(s"Query: ${query.name} is Running ............."))
    spark.streams.awaitAnyTermination()
  }
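
The SparkUtils.createSparkSession helper referenced above is not shown in the original. A minimal sketch of what it might look like, assuming local-mode execution and the Kryo serializer that Hudi requires (the master and shuffle-partition settings are assumptions, not from the original):

import org.apache.spark.sql.SparkSession

object SparkUtils {

  /**
   * Build a SparkSession for local testing (sketch; not from the original source).
   */
  def createSparkSession(clazz: Class[_], master: String = "local[2]"): SparkSession = {
    SparkSession.builder()
      .appName(clazz.getSimpleName.stripSuffix("$"))
      .master(master)
      // Hudi requires the Kryo serializer
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // keep shuffle partitions small for local testing
      .config("spark.sql.shuffle.partitions", "2")
      .getOrCreate()
  }
}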

1-2 Read Data from Kafka / a CSV File

  /**
   * Consume data in real time from the specified Kafka topic
   *
   * @param spark     SparkSession instance
   * @param topicName Kafka topic name
   * @return streaming DataFrame of raw Kafka records
   */
  def readFromKafka(spark: SparkSession, topicName: String): DataFrame = {
    spark.readStream
      .format("kafka") // Kafka source
      .option("kafka.bootstrap.servers", "node1.itcast.cn:9099") // Kafka broker host and port
      .option("subscribe", topicName) // topic to subscribe to
      .option("startingOffsets", "latest") // start consuming from the latest offsets
      .option("maxOffsetsPerTrigger", 100000) // process at most 100,000 records per trigger
      .option("failOnDataLoss", value = false) // do not fail the query when data is lost
      .load()
  }

  /**
   * Read a CSV text file into a DataFrame
   */
  def readCsvFile(spark: SparkSession, path: String): DataFrame = {
    spark.read
      // the separator is a tab (\t)
      .option("sep", "\\t")
      // the first line of the file contains the column names
      .option("header", "true")
      // infer the column types from the values
      .option("inferSchema", "true")
      // file path
      .csv(path)
  }
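
readCsvFile is not invoked in the streaming pipeline above; a hypothetical batch usage is sketched below (the file path is an assumption — any tab-separated file with a header row works):

    // hypothetical usage: load a tab-separated file and inspect the inferred schema
    val csvDF: DataFrame = readCsvFile(spark, "/datas/didi/dwv_order_make_haikou_1.txt")
    csvDF.printSchema()
    csvDF.show(10, truncate = false)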

1-3 Transform via ETL and Write to the Hudi Table

  /**
   * Transform the data fetched from Kafka: extract all field values and cast them to String
   * so they can be saved to the Hudi table
   * @param streamDF raw Kafka streaming DataFrame
   * @return transformed DataFrame
   */
  def process(streamDF: DataFrame): DataFrame = {
    streamDF
      // select fields
      .selectExpr(
        "CAST(key AS STRING) order_id",
        "CAST(value AS STRING) AS message",
        "topic", "partition", "offset", "timestamp"
      )
      // parse the JSON message and extract the field values
      .withColumn("user_id", get_json_object(col("message"), "$.userId"))
      .withColumn("order_time", get_json_object(col("message"), "$.orderTime"))
      .withColumn("ip", get_json_object(col("message"), "$.ip"))
      .withColumn("order_money", get_json_object(col("message"), "$.orderMoney"))
      .withColumn("order_status", get_json_object(col("message"), "$.orderStatus"))
      // drop the message field
      .drop(col("message"))
      // convert the order time string into a timestamp, used as the Hudi precombine (merge) field
      .withColumn("ts", to_timestamp(col("order_time"), "yyyy-MM-dd HH:mm:ss.SSS"))
      // extract the partition date from the order time: yyyy-MM-dd
      .withColumn("day", substring(col("order_time"), 0, 10))
  }

  /**
   * Save the streaming DataFrame to the Hudi table; the table type can be COW or MOR
   */
  def saveToHudi(streamDF: DataFrame): Unit = {
    streamDF.writeStream
      .outputMode(OutputMode.Append())
      .queryName("query-hudi-streaming")
      // save each micro-batch of data
      .foreachBatch((batchDF: Dataset[Row], batchId: Long) => {
        println(s"============== BatchId: ${batchId} start ==============")
        writeHudiMor(batchDF) // TODO: table type is MOR
      })
      .option("checkpointLocation", "/datas/hudi-spark/struct-ckpt-1001")
      .start()
  }

  /**
   * Save the DataFrame to a Hudi table of type MOR (Merge On Read)
   */
  def writeHudiMor(dataframe: DataFrame): Unit = {
    import org.apache.hudi.DataSourceWriteOptions._
    import org.apache.hudi.config.HoodieWriteConfig._
    import org.apache.hudi.keygen.constant.KeyGeneratorOptions._

    dataframe.write
      .format("hudi")
      .mode(SaveMode.Append)
      // table name
      .option(TBL_NAME.key, "tbl_hudi_order")
      // table type
      .option(TABLE_TYPE.key(), "MERGE_ON_READ")
      // record key field (primary key of each record)
      .option(RECORDKEY_FIELD_NAME.key(), "order_id")
      // precombine field used when merging records
      .option(PRECOMBINE_FIELD_NAME.key(), "ts")
      // partition field
      .option(PARTITIONPATH_FIELD_NAME.key(), "day")
      // whether the partition directory layout follows the Hive style
      .option(HIVE_STYLE_PARTITIONING_ENABLE.key(), "true")
      // shuffle parallelism for insert and upsert
      .option("hoodie.insert.shuffle.parallelism", "2")
      .option("hoodie.upsert.shuffle.parallelism", "2")
      // table storage path
      .save("/hudi-warehouse/tbl_hudi_order")
  }
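
The doc comment on saveToHudi notes that the table type can also be COPY_ON_WRITE; a minimal sketch of that variant follows, assuming the same option keys as writeHudiMor and a hypothetical table name and storage path:

  /**
   * Sketch of a COPY_ON_WRITE writer (assumed variant, not from the original);
   * only the table type, table name and storage path differ from writeHudiMor.
   */
  def writeHudiCow(dataframe: DataFrame): Unit = {
    import org.apache.hudi.DataSourceWriteOptions._
    import org.apache.hudi.config.HoodieWriteConfig._
    import org.apache.hudi.keygen.constant.KeyGeneratorOptions._

    dataframe.write
      .format("hudi")
      .mode(SaveMode.Append)
      .option(TBL_NAME.key, "tbl_hudi_order_cow") // hypothetical table name
      .option(TABLE_TYPE.key(), "COPY_ON_WRITE") // COW instead of MOR
      .option(RECORDKEY_FIELD_NAME.key(), "order_id")
      .option(PRECOMBINE_FIELD_NAME.key(), "ts")
      .option(PARTITIONPATH_FIELD_NAME.key(), "day")
      .option(HIVE_STYLE_PARTITIONING_ENABLE.key(), "true")
      .option("hoodie.insert.shuffle.parallelism", "2")
      .option("hoodie.upsert.shuffle.parallelism", "2")
      .save("/hudi-warehouse/tbl_hudi_order_cow") // hypothetical path
  }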

1-4 Load and Analyze Hudi Table Data with Spark SQL

  /**
   * Load data from the Hudi table at the given storage path
   */
  def readFromHudi(spark: SparkSession, path: String): DataFrame = {
    // a. load the data from the given path into a DataFrame
    val didiDF: DataFrame = spark.read.format("hudi").load(path)

    // b. select the fields of interest
    didiDF
      .select(
        "order_id", "product_id", "type", "traffic_type",
        "pre_total_fee", "start_dest_distance", "departure_time"
      )
  }

  /**
   * Order type statistics by the product_id field
   */
  def reportProduct(dataframe: DataFrame): Unit = {
    val reportDF: DataFrame = dataframe.groupBy("product_id").count()

    val to_name = udf(
      (product_id: Int) => {
        product_id match {
          case 1 => "滴滴专车"
          case 2 => "滴滴企业专车"
          case 3 => "滴滴快车"
          case 4 => "滴滴企业快车"
          case _ => "其他" // default branch (added) to avoid a MatchError on unexpected ids
        }
      }
    )
    val resultDF: DataFrame = reportDF.select(
      to_name(col("product_id")).as("order_type"),
      col("count").as("total")
    )
    resultDF.printSchema()
    resultDF.show(10, truncate = false)
  }
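
The original does not show how readFromHudi and reportProduct are wired together; a hypothetical driver is sketched below (the Hudi table path is an assumption):

  // Hypothetical driver for the batch analysis (sketch, not from the original)
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkUtils.createSparkSession(this.getClass)
    // load the Hudi table data; the path is an assumed example
    val hudiDF: DataFrame = readFromHudi(spark, "/hudi-warehouse/tbl_hudi_didi")
    // cache the data since it may be reused by several reports
    hudiDF.cache()
    // order type statistics by product_id
    reportProduct(hudiDF)
    hudiDF.unpersist()
    spark.stop()
  }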

II. Hudi + Flink + Kafka (Java)

For dependencies, see [2. Hudi + Flink + Kafka Dependencies] in the Dependencies appendix.

2-1 Consume Data from Kafka

Step 1, obtaining the table execution environment, needs no further explanation.

Step 2 creates the source table: it specifies the Kafka broker host and port, the topic, and other properties; data is read from there.

Step 3 converts the data into the format required by the Hudi table (adding the two mandatory fields: the precombine field ts and the partition field partition_day).

package cn.itcast.hudi;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

import static org.apache.flink.table.api.Expressions.$;

public class FlinkSQLKafkaDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .inStreamingMode() // streaming mode
                .build();
        TableEnvironment tableEnvironment = TableEnvironment.create(settings);
        // 2. Create the source table: consume data from Kafka
        tableEnvironment.executeSql(
                "CREATE TABLE order_kafka_source (\n" +
                        "  orderId STRING,\n" +
                        "  userId STRING,\n" +
                        "  orderTime STRING,\n" +
                        "  ip STRING,\n" +
                        "  orderMoney DOUBLE,\n" +
                        "  orderStatus INT\n" +
                        ") WITH (\n" +
                        "  'connector' = 'kafka',\n" +
                        "  'topic' = 'order-topic',\n" +
                        "  'properties.bootstrap.servers' = 'node1.itcast.cn:9099',\n" +
                        "  'properties.group.id' = 'gid-1001',\n" +
                        "  'scan.startup.mode' = 'latest-offset',\n" +
                        "  'format' = 'json',\n" +
                        "  'json.fail-on-missing-field' = 'false',\n" +
                        "  'json.ignore-parse-errors' = 'true'\n" +
                        ")"
        );
        // 3. Transform the data: either SQL or the Table API can be used
        Table table = tableEnvironment.from("order_kafka_source")
                // add a field: the Hudi precombine field, e.g. "orderId":"20211122103434136000001" -> 20211122103434136
                .addColumns(
                        $("orderId").substring(0, 17).as("ts")
                )
                // add a field: the Hudi partition field, e.g. "orderTime":"2021-11-22 10:34:34.136" -> 2021-11-22
                .addColumns(
                        $("orderTime").substring(0, 10).as("partition_day")
                );
        tableEnvironment.createTemporaryView("view_order",table);
        // 4. Query the view and print the result (this demo prints to the console instead of creating an output table)
        tableEnvironment.executeSql("select * from view_order").print();
    }
}

2-2 Write Data to the Hudi Table

Step 4 creates the sink table: it specifies the output Hudi table path (a local path, HDFS, etc.), the table type, the precombine field, the partition field, and so on; data is written there.

Step 5 inserts the data into the Hudi sink table.

package cn.itcast.hudi;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import static org.apache.flink.table.api.Expressions.$;


/**
 * Implemented with the Flink SQL connectors: consume data from the Kafka topic in real time,
 * transform it, and continuously write it into the Hudi table.
 */
public class FlinkSQLHudiDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(5000); // checkpointing must be enabled because data is written to the Hudi table incrementally
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                .inStreamingMode() // streaming mode
                .build();
        StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env,settings);
        // 2. Create the source table: consume data from Kafka
        tableEnvironment.executeSql(
                "CREATE TABLE order_kafka_source (\n" +
                        "  orderId STRING,\n" +
                        "  userId STRING,\n" +
                        "  orderTime STRING,\n" +
                        "  ip STRING,\n" +
                        "  orderMoney DOUBLE,\n" +
                        "  orderStatus INT\n" +
                        ") WITH (\n" +
                        "  'connector' = 'kafka',\n" +
                        "  'topic' = 'order-topic',\n" +
                        "  'properties.bootstrap.servers' = 'node1.itcast.cn:9099',\n" +
                        "  'properties.group.id' = 'gid-1001',\n" +
                        "  'scan.startup.mode' = 'latest-offset',\n" +
                        "  'format' = 'json',\n" +
                        "  'json.fail-on-missing-field' = 'false',\n" +
                        "  'json.ignore-parse-errors' = 'true'\n" +
                        ")"
        );
        // 3. Transform the data: either SQL or the Table API can be used
        Table table = tableEnvironment.from("order_kafka_source")
                // add a field: the Hudi precombine field, e.g. "orderId":"20211122103434136000001" -> 20211122103434136
                .addColumns(
                        $("orderId").substring(0, 17).as("ts")
                )
                // add a field: the Hudi partition field, e.g. "orderTime":"2021-11-22 10:34:34.136" -> 2021-11-22
                .addColumns(
                        $("orderTime").substring(0, 10).as("partition_day")
                );
        tableEnvironment.createTemporaryView("view_order", table);
        // 4. Create the sink table: write data into the Hudi table
        tableEnvironment.executeSql(
                "CREATE TABLE order_hudi_sink (\n" +
                        "  orderId STRING PRIMARY KEY NOT ENFORCED,\n" +
                        "  userId STRING,\n" +
                        "  orderTime STRING,\n" +
                        "  ip STRING,\n" +
                        "  orderMoney DOUBLE,\n" +
                        "  orderStatus INT,\n" +
                        "  ts STRING,\n" +
                        "  partition_day STRING\n" +
                        ")\n" +
                        "PARTITIONED BY (partition_day) \n" +
                        "WITH (\n" +
                        "  'connector' = 'hudi',\n" +
                        "  'path' = 'file:///D:/flink_hudi_order',\n" +
                        "  'table.type' = 'MERGE_ON_READ',\n" +
                        "  'write.operation' = 'upsert',\n" +
                        "  'hoodie.datasource.write.recordkey.field' = 'orderId',\n" +
                        "  'write.precombine.field' = 'ts',\n" +
                        "  'write.tasks'= '1'\n" +
                        ")"
        );
        // 5. Write the data into the sink table via a sub-query (note: the field order must match)
        tableEnvironment.executeSql(
                "INSERT INTO order_hudi_sink\n" +
                        "SELECT\n" +
                        "  orderId, userId, orderTime, ip, orderMoney, orderStatus, ts, partition_day\n" +
                        "FROM view_order"
        );
    }
}

2-3 Load Data from the Hudi Table

Simply create an input table that loads the Hudi table and query the data.

package cn.itcast.hudi;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

/**
 * Implemented with the Flink SQL connector: load data from the Hudi table and query it with SQL.
 */
public class FlinkSQLReadDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                .inStreamingMode()
                .build();
        TableEnvironment tableEnvironment = TableEnvironment.create(settings);
        // 2. Create the input table that loads the Hudi table data
        tableEnvironment.executeSql(
                "CREATE TABLE order_hudi(\n" +
                        "  orderId STRING PRIMARY KEY NOT ENFORCED,\n" +
                        "  userId STRING,\n" +
                        "  orderTime STRING,\n" +
                        "  ip STRING,\n" +
                        "  orderMoney DOUBLE,\n" +
                        "  orderStatus INT,\n" +
                        "  ts STRING,\n" +
                        "  partition_day STRING\n" +
                        ")\n" +
                        "PARTITIONED BY (partition_day)\n" +
                        "WITH (\n" +
                        "  'connector' = 'hudi',\n" +
                        "  'path' = 'file:///D:/flink_hudi_order',\n" +
                        "  'table.type' = 'MERGE_ON_READ',\n" +
                        "  'read.streaming.enabled' = 'true',\n" +
                        "  'read.streaming.check-interval' = '4'\n" +
                        ")"
        );
        // 3. Execute the query and read the Hudi data as a stream
        tableEnvironment.executeSql(
                "SELECT orderId, userId, orderTime, ip, orderMoney, orderStatus, ts ,partition_day FROM order_hudi"
        ).print();
    }
}

Appendix: Dependencies

1. Hudi + Spark + Kafka Dependencies

<repositories>
    <repository>
        <id>aliyun</id>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    </repository>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
        <id>jboss</id>
        <url>http://repository.jboss.com/nexus/content/groups/public</url>
    </repository>
</repositories>

<properties>
    <scala.version>2.12.10</scala.version>
    <scala.binary.version>2.12</scala.binary.version>
    <spark.version>3.0.0</spark.version>
    <hadoop.version>2.7.3</hadoop.version>
    <hudi.version>0.9.0</hudi.version>
</properties>

<dependencies>
    <!-- Scala language dependency -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>

    <!-- Spark Core dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark SQL dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Structured Streaming + Kafka dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <!-- Hadoop Client dependency -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <!-- hudi-spark3 -->
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-spark3-bundle_2.12</artifactId>
        <version>${hudi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-avro_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <!-- Spark SQL and Hive integration dependencies -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive-thriftserver_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpcore</artifactId>
        <version>4.4.13</version>
    </dependency>
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.12</version>
    </dependency>

</dependencies>

<build>
    <outputDirectory>target/classes</outputDirectory>
    <testOutputDirectory>target/test-classes</testOutputDirectory>
    <resources>
        <resource>
            <directory>${project.basedir}/src/main/resources</directory>
        </resource>
    </resources>
    <!-- Maven build plugins -->
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

2. Hudi + Flink + Kafka Dependencies

<repositories>
    <repository>
        <id>nexus-aliyun</id>
        <name>Nexus aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    </repository>
    <repository>
        <id>central_maven</id>
        <name>central maven</name>
        <url>https://repo1.maven.org/maven2</url>
    </repository>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
        <id>apache.snapshots</id>
        <name>Apache Development Snapshot Repository</name>
        <url>https://repository.apache.org/content/repositories/snapshots/</url>
        <releases>
            <enabled>false</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.binary.version>2.12</scala.binary.version>
    <flink.version>1.12.2</flink.version>
    <hadoop.version>2.7.3</hadoop.version>
    <mysql.version>8.0.16</mysql.version>
</properties>

<dependencies>
    <!-- Flink Client -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime-web_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>

    <!-- Flink Table API & SQL -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-common</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-json</artifactId>
        <version>${flink.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-flink-bundle_${scala.binary.version}</artifactId>
        <version>0.9.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop-2-uber</artifactId>
        <version>2.7.5-10.0</version>
    </dependency>

    <!-- MySQL/FastJson/lombok -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>${mysql.version}</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.68</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.12</version>
    </dependency>

    <!-- slf4j and log4j -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.7</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
        <scope>runtime</scope>
    </dependency>

</dependencies>

<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/test/java</testSourceDirectory>
    <plugins>
        <!-- Compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <!--<encoding>${project.build.sourceEncoding}</encoding>-->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <!-- Shade plugin for building a fat jar (includes all dependencies) -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <!-- <mainClass>com.itcast.flink.batch.FlinkBatchWordCount</mainClass> -->
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Appendix: Errors

1. Runtime Error

[Error message]

Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

[Cause]

When running on Windows, the Windows support binaries for Hadoop (hadoop2.7-common-bin, i.e. winutils) must be installed.

Download: https://gitcode.net/mirrors/cdarlint/winutils?utm_source=csdn_github_accelerator

Download the package for the required version, configure the HADOOP_HOME and Path environment variables, then restart IDEA and run again; the error disappears.

On the Linux server, the Hadoop native libraries can be verified with:

cd hudi/server/hadoop

./bin/hadoop checknative
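
Alternatively, the Hadoop home can be pointed at the winutils installation from code before the SparkSession is created; a minimal sketch (the directory is a hypothetical example and must contain bin\winutils.exe):

// hypothetical workaround: set hadoop.home.dir before building the SparkSession
System.setProperty("hadoop.home.dir", "D:\\softwares\\hadoop-2.7.7")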

2. Runtime Error

[Error message]

NoSuchFieldError: INSTANCE

[Cause]

The httpclient and httpcore versions used by the code are relatively new, while the versions bundled with Hadoop are too old (< 4.3).

[Fix]

Replace the httpclient and httpcore jars under $HADOOP_HOME/share/hadoop/common/lib and $HADOOP_HOME/share/hadoop/tools/lib with newer versions (> 4.3):

cd /home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
rm httpclient-4.2.5.jar
rm httpcore-4.2.5.jar
cd /home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib
rm httpclient-4.2.5.jar
rm httpcore-4.2.5.jar
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpclient-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpcore-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpclient-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpcore-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib

3. Runtime Warning

[Warning]

WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped

[Cause]

The Spark version is too high: v3.0.0 was chosen at first, but it is not a good fit here; switching to v2.4.6 resolves the warning.

[Fix]

Official download: https://archive.apache.org/dist/spark/spark-2.4.6/
Download spark-2.4.6-bin-hadoop2.7.tgz, install it, and configure the environment variables.

Appendix: Configuration

1. Scala Configuration

1. Install Scala on Windows: https://www.scala-lang.org/
   After installation, configure the SCALA_HOME and Path environment variables.
   Run scala -version to check that the installation succeeded.
2. Install the Scala plugin in IDEA: search for "scala" under Plugins and install it directly.
   After restarting, go to File -> Project Structure, find Global Libraries in the lower left, click the + in the middle, choose Scala SDK (the last entry), select the installed Scala version, and click OK.

2. Remote Host Configuration in IDEA

Tools -> Deployment -> Browse Remote Host
Configure the SSH configuration, Root path, and Web server URL for your virtual machine.

