Spark: Writing a WordCount Demo in IDEA

Configuration

Spark version: 3.2.0

Scala version: 2.12.12

JDK: 1.8

Maven: 3.6.3
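These versions are mutually compatible: Spark 3.2.0 is published for Scala 2.12 (and 2.13), so the 2.12 artifact suffix in the pom below matches the Scala 2.12.12 installation, and Spark 3.2 still supports JDK 1.8.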

The pom file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zzjz.Spark</groupId>
    <artifactId>Spark</artifactId>
    <version>1.0</version>

    <properties>
        <spark.version>3.2.0</spark.version>
        <!-- Scala binary version (not the full 2.12.12); used as the
             artifact-id suffix of the Spark modules below -->
        <scala.version>2.12</scala.version>
    </properties>


    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>

            <!-- Compiles the Scala sources. org.scala-tools:maven-scala-plugin is
                 a legacy plugin; newer projects generally use
                 net.alchim31.maven:scala-maven-plugin instead. -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>

            <!-- Skip unit tests during packaging -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.19</version>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>

        </plugins>
    </build>

</project>
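Only spark-core is actually exercised by the word count below; the streaming, SQL, Hive, and MLlib modules are declared but unused here. With this pom, mvn clean package (or IDEA's own build) compiles the Scala sources through the Scala plugin, and the surefire configuration skips unit tests during packaging.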

Sample data

Save the following three lines as D:\workspace\spark\src\main\Data\person (the path the code below reads):

9422850591,11603,39939,山西,邮件,人员
9422850591,116427,39911,山西,邮件,人员
9422850591,116437,39895,山西,邮件,人员
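As a quick Scala snippet to show what this demo treats as a "word": splitting one sample line on "," yields six fields, and the flatMap in the code below pools the fields of every line into a single RDD.

// Splitting one sample line yields six fields
val line = "9422850591,11603,39939,山西,邮件,人员"
line.split(",").foreach(println)
// prints 9422850591, 11603, 39939, 山西, 邮件, 人员 (one per line)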

Code

import org.apache.spark.{SparkConf, SparkContext}

object wcPerson {
  def main(args: Array[String]): Unit = {
    // Local mode with a single core; the application name shows up in the logs
    val conf = new SparkConf().setAppName("wcPerson").setMaster("local[1]")
    // SparkContext is the entry point to the Spark runtime
    val sc = new SparkContext(conf)
    // Load the file; each line becomes one String element of the RDD
    val inputFile = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\person")
    // Split every line on "," and flatten the fields into one RDD of "words",
    // map each word to a (word, 1) pair, then sum the 1s per key
    val wc = inputFile.flatMap(line => line.split(","))
      .map(word => (word, 1))
      .reduceByKey((a, b) => a + b)
    // Print the aggregated counts (runs inside the task; fine in local mode)
    wc.foreach(println)
  }
}
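For reference, here is a variant sketch (not the original demo; the object name wcPersonSorted is hypothetical, and the same input path is assumed). It quiets the INFO logging via setLogLevel, sorts counts in descending order, collects the result to the driver so the print order is deterministic, and stops the context explicitly instead of relying on the shutdown hook.

import org.apache.spark.{SparkConf, SparkContext}

object wcPersonSorted { // hypothetical name, not part of the original project
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wcPersonSorted").setMaster("local[1]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("WARN") // suppress the INFO chatter visible in the run output below

    // Assumed: the same sample file as the original demo
    val counts = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\person")
      .flatMap(_.split(","))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false) // most frequent fields first

    // collect() brings the results to the driver, so println order is stable
    counts.collect().foreach(println)

    sc.stop()
  }
}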

Run output

D:\Java\jdk1.8.0_131\bin\java.exe "-javaagent:D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar=52283:D:\idea\IntelliJ IDEA 2021.1.3\bin" -Dfile.encoding=UTF-8 -classpath "D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar" com.intellij.rt.execution.CommandLineWrapper C:\Users\Administrator\AppData\Local\Temp\idea_classpath1156784809 wcPerson
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/spark/spark-3.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Maven/Maven_repositories/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/07/11 10:00:01 INFO SparkContext: Running Spark version 3.2.0
23/07/11 10:00:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/07/11 10:00:02 INFO ResourceUtils: ==============================================================
23/07/11 10:00:02 INFO ResourceUtils: No custom resources configured for spark.driver.
23/07/11 10:00:02 INFO ResourceUtils: ==============================================================
23/07/11 10:00:02 INFO SparkContext: Submitted application: wcPerson
23/07/11 10:00:02 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/07/11 10:00:02 INFO ResourceProfile: Limiting resource is cpu
23/07/11 10:00:02 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/07/11 10:00:02 INFO SecurityManager: Changing view acls to: Administrator
23/07/11 10:00:02 INFO SecurityManager: Changing modify acls to: Administrator
23/07/11 10:00:02 INFO SecurityManager: Changing view acls groups to: 
23/07/11 10:00:02 INFO SecurityManager: Changing modify acls groups to: 
23/07/11 10:00:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Administrator); groups with view permissions: Set(); users  with modify permissions: Set(Administrator); groups with modify permissions: Set()
23/07/11 10:00:07 INFO Utils: Successfully started service 'sparkDriver' on port 52323.
23/07/11 10:00:07 INFO SparkEnv: Registering MapOutputTracker
23/07/11 10:00:07 INFO SparkEnv: Registering BlockManagerMaster
23/07/11 10:00:07 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/07/11 10:00:07 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/07/11 10:00:07 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/07/11 10:00:07 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-0052575b-7c9f-457e-9ed5-fb50af59f965
23/07/11 10:00:07 INFO MemoryStore: MemoryStore started with capacity 623.4 MiB
23/07/11 10:00:07 INFO SparkEnv: Registering OutputCommitCoordinator
23/07/11 10:00:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/07/11 10:00:08 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://zzjz:4040
23/07/11 10:00:08 INFO Executor: Starting executor ID driver on host zzjz
23/07/11 10:00:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52339.
23/07/11 10:00:08 INFO NettyBlockTransferService: Server created on zzjz:52339
23/07/11 10:00:08 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/07/11 10:00:08 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManagerMasterEndpoint: Registering block manager zzjz:52339 with 623.4 MiB RAM, BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:08 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, zzjz, 52339, None)
23/07/11 10:00:10 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 244.0 KiB, free 623.2 MiB)
23/07/11 10:00:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.4 KiB, free 623.1 MiB)
23/07/11 10:00:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on zzjz:52339 (size: 23.4 KiB, free: 623.4 MiB)
23/07/11 10:00:10 INFO SparkContext: Created broadcast 0 from textFile at wcPerson.scala:7
23/07/11 10:00:10 INFO FileInputFormat: Total input paths to process : 1
23/07/11 10:00:10 INFO SparkContext: Starting job: foreach at wcPerson.scala:10
23/07/11 10:00:11 INFO DAGScheduler: Registering RDD 3 (map at wcPerson.scala:9) as input to shuffle 0
23/07/11 10:00:11 INFO DAGScheduler: Got job 0 (foreach at wcPerson.scala:10) with 1 output partitions
23/07/11 10:00:11 INFO DAGScheduler: Final stage: ResultStage 1 (foreach at wcPerson.scala:10)
23/07/11 10:00:11 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
23/07/11 10:00:11 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
23/07/11 10:00:11 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at wcPerson.scala:9), which has no missing parents
23/07/11 10:00:11 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 6.9 KiB, free 623.1 MiB)
23/07/11 10:00:11 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 623.1 MiB)
23/07/11 10:00:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on zzjz:52339 (size: 4.0 KiB, free: 623.4 MiB)
23/07/11 10:00:11 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1427
23/07/11 10:00:11 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at wcPerson.scala:9) (first 15 tasks are for partitions Vector(0))
23/07/11 10:00:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/07/11 10:00:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (zzjz, executor driver, partition 0, PROCESS_LOCAL, 4503 bytes) taskResourceAssignments Map()
23/07/11 10:00:11 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
23/07/11 10:00:12 INFO HadoopRDD: Input split: file:/D:/workspace/spark/src/main/Data/person:0+135
23/07/11 10:00:12 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1325 bytes result sent to driver
23/07/11 10:00:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1022 ms on zzjz (executor driver) (1/1)
23/07/11 10:00:12 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
23/07/11 10:00:12 INFO DAGScheduler: ShuffleMapStage 0 (map at wcPerson.scala:9) finished in 1.263 s
23/07/11 10:00:12 INFO DAGScheduler: looking for newly runnable stages
23/07/11 10:00:12 INFO DAGScheduler: running: Set()
23/07/11 10:00:12 INFO DAGScheduler: waiting: Set(ResultStage 1)
23/07/11 10:00:12 INFO DAGScheduler: failed: Set()
23/07/11 10:00:12 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at wcPerson.scala:9), which has no missing parents
23/07/11 10:00:12 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.3 KiB, free 623.1 MiB)
23/07/11 10:00:12 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.1 KiB, free 623.1 MiB)
23/07/11 10:00:12 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on zzjz:52339 (size: 3.1 KiB, free: 623.4 MiB)
23/07/11 10:00:12 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1427
23/07/11 10:00:12 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at wcPerson.scala:9) (first 15 tasks are for partitions Vector(0))
23/07/11 10:00:12 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
23/07/11 10:00:12 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (zzjz, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
23/07/11 10:00:12 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
23/07/11 10:00:12 INFO ShuffleBlockFetcherIterator: Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
23/07/11 10:00:12 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 20 ms
(116437,1)
(39911,1)
(116427,1)
(9422850591,3)
(39895,1)
(山西,3)
(11603,1)
(39939,1)
(人员,3)
(邮件,3)
23/07/11 10:00:12 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1224 bytes result sent to driver
23/07/11 10:00:12 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 123 ms on zzjz (executor driver) (1/1)
23/07/11 10:00:12 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
23/07/11 10:00:12 INFO DAGScheduler: ResultStage 1 (foreach at wcPerson.scala:10) finished in 0.153 s
23/07/11 10:00:12 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
23/07/11 10:00:12 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
23/07/11 10:00:12 INFO DAGScheduler: Job 0 finished: foreach at wcPerson.scala:10, took 2.044775 s
23/07/11 10:00:12 INFO SparkContext: Invoking stop() from shutdown hook
23/07/11 10:00:12 INFO SparkUI: Stopped Spark web UI at http://zzjz:4040
23/07/11 10:00:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/07/11 10:00:12 INFO MemoryStore: MemoryStore cleared
23/07/11 10:00:12 INFO BlockManager: BlockManager stopped
23/07/11 10:00:12 INFO BlockManagerMaster: BlockManagerMaster stopped
23/07/11 10:00:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/07/11 10:00:12 INFO SparkContext: Successfully stopped SparkContext
23/07/11 10:00:12 INFO ShutdownHookManager: Shutdown hook called
23/07/11 10:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-9f3eb32d-30f7-44d6-8751-f668b2710d89

Process finished with exit code 0
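Reading the output: the ten (word, count) tuples interleaved with the INFO lines match the sample data. The phone number 9422850591 and the fields 山西, 邮件, and 人员 occur on all three lines, hence a count of 3 each, while the six remaining numeric codes each occur once. Because foreach(println) runs inside the task rather than on a collected result, the tuples print in no particular order, mixed in with the driver's log messages.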
