【ChatGPT】Using ChatGPT to Build a Multi-Task Model


You:

Tell me, what version of GPT are you?


ChatGPT:

As an AI language model developed by OpenAI, I am based on the GPT (Generative Pre-trained Transformer) architecture. However, my version is known as GPT-3.5, an updated version of GPT-3 with additional training data and model parameters that improve my language understanding and generation capabilities. GPT-3.5 was trained on a massive corpus of text data, including web pages, books, and other sources, and is one of the most advanced language models currently available.


You:

Implement a multi-task model with the TensorFlow 2.0 subclassing API.


ChatGPT:

Sure, I can provide you with an example implementation of a multi-task model using TensorFlow 2.0 Subclassing API. In this example, we will create a model that performs both image classification and object detection on images.

Here is the code:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Input
from tensorflow.keras import Model


class MultiTaskModel(Model):
    def __init__(self, num_classes, num_boxes):
        super(MultiTaskModel, self).__init__()
        self.num_classes = num_classes
        self.num_boxes = num_boxes
        
        # Shared layers
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.maxpool1 = MaxPooling2D()
        self.conv2 = Conv2D(64, 3, activation='relu')
        self.maxpool2 = MaxPooling2D()
        self.conv3 = Conv2D(128, 3, activation='relu')
        self.maxpool3 = MaxPooling2D()
        
        # Classification head
        self.flat1 = Flatten()
        self.dense1 = Dense(64, activation='relu')
        self.dense2 = Dense(self.num_classes, activation='softmax')
        
        # Detection head
        self.flat2 = Flatten()
        self.dense3 = Dense(256, activation='relu')
        self.dense4 = Dense(self.num_boxes * 4, activation='linear')
        
    def call(self, inputs):
        # Shared layers
        x = self.conv1(inputs)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        
        # Classification head
        y1 = self.flat1(x)
        y1 = self.dense1(y1)
        y1 = self.dense2(y1)
        
        # Detection head
        y2 = self.flat2(x)
        y2 = self.dense3(y2)
        y2 = self.dense4(y2)
        
        return y1, y2

In this example, we define a MultiTaskModel class that extends the tf.keras.Model class. The MultiTaskModel class takes two arguments, num_classes and num_boxes, which represent the number of classes for image classification and the number of bounding boxes for object detection, respectively.

We then define the shared layers of the model, which include three convolutional layers followed by max pooling layers. We define two separate heads for the model: one for image classification and another for object detection.

The classification head consists of a flatten layer, followed by two dense layers with relu and softmax activations, respectively. The detection head consists of a flatten layer, followed by two dense layers with relu and linear activations, respectively. The output of the detection head is a tensor of shape (batch_size, num_boxes * 4), where 4 represents the number of parameters needed to define a bounding box (x-coordinate, y-coordinate, width, and height).

In the call method of the model, we pass the inputs through the shared layers and then through the two separate heads to obtain the outputs for classification and detection. Finally, we return the two outputs as a tuple.

To train this model, you would need to define appropriate loss functions for each task and combine them into a joint loss. You would also need to provide training data that includes both labeled images and bounding box annotations.
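
For instance, a weighted joint loss for the two heads could be computed in a custom training step, roughly as sketched below (the loss weights, the Huber loss standing in for smooth L1, and the one-hot class labels are illustrative assumptions, not a prescribed setup):

import tensorflow as tf

class_loss_fn = tf.keras.losses.CategoricalCrossentropy()  # expects one-hot class labels
box_loss_fn = tf.keras.losses.Huber()                      # a smooth-L1-style loss for the boxes
class_weight, box_weight = 1.0, 0.5                        # hypothetical task weights

model = MultiTaskModel(num_classes=10, num_boxes=5)
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(images, class_labels, box_targets):
    with tf.GradientTape() as tape:
        class_preds, box_preds = model(images)
        # Combine the per-task losses into a single weighted joint loss
        loss = (class_weight * class_loss_fn(class_labels, class_preds)
                + box_weight * box_loss_fn(box_targets, box_preds))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss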


You:

How can I train the model you built above? And how do I generate a sample prediction with this model?


ChatGPT:

To train the multi-task model that I provided earlier, you can follow these steps:

  1. Prepare the training data: You would need a dataset of labeled images and bounding box annotations. You can use popular datasets such as COCO, Pascal VOC, or OpenImages for this purpose.

  2. Define the loss functions: For the classification task, you can use cross-entropy loss, and for the detection task, you can use mean squared error loss or smooth L1 loss. You can combine these losses into a joint loss using a weighting factor or a more complex scheme such as multi-task learning.

  3. Compile the model: You can compile the model using the compile method of the MultiTaskModel class. You would need to specify the optimizer, the loss functions, and any metrics you want to track during training.

  4. Train the model: You can train the model using the fit method of the MultiTaskModel class. You would need to pass in the training data, the number of epochs, and any other relevant parameters such as batch size and validation data.

Here is some sample code that demonstrates these steps:

# Load the data
train_data = ...
val_data = ...

# Define the loss functions
class_loss_fn = tf.keras.losses.CategoricalCrossentropy()
detect_loss_fn = tf.keras.losses.MeanSquaredError()

# Define the optimizer
optimizer = tf.keras.optimizers.Adam()

# Compile the model
model = MultiTaskModel(num_classes=10, num_boxes=5)
model.compile(optimizer=optimizer, loss=[class_loss_fn, detect_loss_fn], metrics=['accuracy'])

# Train the model
model.fit(train_data, epochs=10, validation_data=val_data)

# Generate a sample output
input_image = ...
output_classes, output_boxes = model.predict(input_image)

To generate a sample output for the model, you can use the predict method of the MultiTaskModel class. The input to the predict method should be an image tensor of shape (batch_size, height, width, num_channels). The output of the predict method will be a tuple of two tensors: one for the classification probabilities and one for the bounding box parameters.

Here is some sample code that demonstrates how to generate a sample output:

# Generate a sample output
input_image = ...
output_classes, output_boxes = model.predict(input_image)

# Print the predicted classes and bounding boxes
print('Predicted classes:', output_classes)
print('Predicted boxes:', output_boxes)
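
As a quick smoke test, you could also run the model on a random input tensor of the expected shape before any training (the 128x128 resolution and batch size of 1 are just illustrative choices):

import tensorflow as tf

model = MultiTaskModel(num_classes=10, num_boxes=5)
dummy_image = tf.random.uniform((1, 128, 128, 3))  # (batch_size, height, width, num_channels)
output_classes, output_boxes = model.predict(dummy_image)
print(output_classes.shape)  # (1, 10): class probabilities
print(output_boxes.shape)    # (1, 20): 5 boxes * 4 box parameters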

Note that the output of the predict method will depend on the specific dataset and training data that you used to train the model, so the results may vary.


You:

OK, can you regenerate the model for a recommendation scenario, with a clarified example?


ChatGPT:

Sure, here is an example of how to build a multi-task model for a recommendation system using TensorFlow 2.0’s Subclassing API. This model will have two tasks: rating prediction and item embedding prediction.

Assuming we have a dataset of user-item interactions with ratings, where each user has interacted with multiple items, we can define the following inputs and outputs for our model:

Inputs:

  • User ID (integer)
  • Item ID (integer)

Outputs:

  • Rating (float)
  • Item Embedding (vector of floats)

We will use an embedding layer to learn the representations of the users and items, and a shared neural network to predict the rating and item embedding.

import tensorflow as tf

class RecommendationModel(tf.keras.Model):
    def __init__(self, num_users, num_items, embedding_size, hidden_units):
        super(RecommendationModel, self).__init__()
        self.user_embedding = tf.keras.layers.Embedding(num_users, embedding_size)
        self.item_embedding = tf.keras.layers.Embedding(num_items, embedding_size)
        self.shared_nn = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_units, activation='relu'),
            tf.keras.layers.Dense(hidden_units, activation='relu')
        ])
        self.rating_head = tf.keras.layers.Dense(1)
        self.embedding_head = tf.keras.layers.Dense(embedding_size)
        
    def call(self, inputs):
        user_ids, item_ids = inputs
        user_embeddings = self.user_embedding(user_ids)
        item_embeddings = self.item_embedding(item_ids)
        concatenated = tf.concat([user_embeddings, item_embeddings], axis=-1)
        shared = self.shared_nn(concatenated)
        rating = self.rating_head(shared)
        embedding = self.embedding_head(shared)
        return rating, embedding

In the RecommendationModel class, we define the __init__ method to create the layers of the model. We first create two embedding layers, one for the users and one for the items, with the specified embedding size. We then create a shared neural network with two hidden layers, each with the specified number of units and ReLU activation. Finally, we create two output heads: one for rating prediction with a single output neuron, and one for item embedding prediction with an output size of the embedding size.

In the call method, we define how the inputs are passed through the model. We first look up the embeddings of the users and items using the corresponding embedding layers, concatenate these embeddings along the last dimension, and pass the result through the shared neural network. Finally, we pass the output of the shared network through the rating and embedding output heads, respectively.
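
A quick way to sanity-check the model is to call it on a small batch of dummy IDs and inspect the output shapes (the sizes below are arbitrary):

import tensorflow as tf

model = RecommendationModel(num_users=1000, num_items=10000, embedding_size=32, hidden_units=64)
user_ids = tf.constant([3, 7])    # a batch of 2 user IDs
item_ids = tf.constant([42, 99])  # a batch of 2 item IDs
rating, embedding = model((user_ids, item_ids))
print(rating.shape)     # (2, 1)
print(embedding.shape)  # (2, 32)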

To train the model, we need to define the loss functions for each task and the optimizer. Assuming we have a training dataset of user-item interactions with corresponding ratings and item embeddings, we can define the following:

# Load the data
train_data = ...
val_data = ...

# Define the loss functions
mse_loss_fn = tf.keras.losses.MeanSquaredError()
cosine_loss_fn = tf.keras.losses.CosineSimilarity()

# Define the optimizer
optimizer = tf.keras.optimizers.Adam()

# Instantiate the model
model = RecommendationModel(num_users=1000, num_items=10000, embedding_size=32, hidden_units=64)

# Compile the model
model.compile(optimizer=optimizer, loss=[mse_loss_fn, cosine_loss_fn], metrics=['mae', 'cosine_similarity'])

# Train the model
model.fit(train_data, epochs=10, validation_data=val_data)

In this example, we use mean squared error loss for the rating prediction task and cosine similarity loss for the item embedding prediction task. We compile the model with the specified optimizer, loss functions, and metrics, and then train it with the fit method on the training and validation data.
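
The train_data and val_data placeholders above need to yield inputs and targets that match the model's two inputs and two outputs. One way to build such a dataset, assuming you already have NumPy arrays of encoded IDs, ratings, and target item embeddings (the array names and sizes here are placeholders), is:

import numpy as np
import tensorflow as tf

# Placeholder arrays; in practice these come from your interaction logs
user_ids = np.random.randint(0, 1000, size=50000)
item_ids = np.random.randint(0, 10000, size=50000)
ratings = np.random.uniform(1.0, 5.0, size=50000).astype('float32')
target_embeddings = np.random.normal(size=(50000, 32)).astype('float32')

# Group the two inputs together and the two targets together
train_data = tf.data.Dataset.from_tensor_slices(
    ((user_ids, item_ids), (ratings, target_embeddings))
).shuffle(10000).batch(256)

val_data can be built the same way from a held-out split.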


You:

Convert the above two tasks to a CTR prediction task and a CVR prediction task? Build the training and prediction pipeline with Python again.


ChatGPT:

Sure, here’s an example of how to modify the recommendation system multi-task model to predict click-through rate (CTR) and conversion rate (CVR) instead.

Assume we have a dataset of user-item interactions with the following features: user ID, item ID, timestamp, a binary label indicating whether the user clicked on the item, and another binary label indicating whether the user converted after clicking on it.

We can define the following inputs and outputs for our model:

Inputs:

  • User ID (integer)
  • Item ID (integer)
  • Timestamp (float)

Outputs:

  • Click Probability (float)
  • Conversion Probability (float)

We will use an embedding layer to learn the representations of the users and items, and a shared neural network with two output heads to predict the click and conversion probabilities.

import tensorflow as tf

class CTRCVRModel(tf.keras.Model):
    def __init__(self, num_users, num_items, embedding_size, hidden_units):
        super(CTRCVRModel, self).__init__()
        self.user_embedding = tf.keras.layers.Embedding(num_users, embedding_size)
        self.item_embedding = tf.keras.layers.Embedding(num_items, embedding_size)
        self.timestamp_embedding = tf.keras.layers.Dense(embedding_size, activation='relu')
        self.shared_nn = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_units, activation='relu'),
            tf.keras.layers.Dense(hidden_units, activation='relu')
        ])
        self.click_head = tf.keras.layers.Dense(1, activation='sigmoid')
        self.conversion_head = tf.keras.layers.Dense(1, activation='sigmoid')
        
    def call(self, inputs):
        user_ids, item_ids, timestamps = inputs
        user_embeddings = self.user_embedding(user_ids)
        item_embeddings = self.item_embedding(item_ids)
        timestamp_embeddings = self.timestamp_embedding(timestamps)
        concatenated = tf.concat([user_embeddings, item_embeddings, timestamp_embeddings], axis=-1)
        shared = self.shared_nn(concatenated)
        click_probability = self.click_head(shared)
        conversion_probability = self.conversion_head(shared)
        return click_probability, conversion_probability

In the CTRCVRModel class, we define the __init__ method to create the layers of the model. We create two embedding layers, one for the users and one for the items, with the specified embedding size, plus a dense layer that embeds the timestamp feature. We then create a shared neural network with two hidden layers, each with the specified number of units and ReLU activation. Finally, we create two output heads, each with a single output neuron and sigmoid activation: one for click probability prediction and one for conversion probability prediction.

In the call method, we define how the inputs are passed through the model. We first look up the embeddings of the users and items using the corresponding embedding layers and pass the timestamps through the timestamp embedding layer. We then concatenate the three embeddings along the last dimension and pass the result through the shared neural network. Finally, we pass the output of the shared network through the click and conversion output heads, respectively.

To train the model, we need to define the loss functions for each task and the optimizer. Assuming we have a training dataset of user-item interactions with the corresponding labels, we can define the following:

# Load the data
train_data = ...
val_data = ...

# Define the loss functions
bce_loss_fn = tf.keras.losses.BinaryCrossentropy()

# Define the optimizer
optimizer = tf.keras.optimizers.Adam()

# Instantiate the model
model = CTRCVRModel(num_users=1000, num_items=10000, embedding_size=32, hidden_units=64)  # sizes mirror the earlier example

You:

continue


ChatGPT:

# Compile the model with two loss functions
model.compile(optimizer=optimizer, loss=[bce_loss_fn, bce_loss_fn])

# Define the training and validation datasets
# Group the inputs into one tuple and the labels into another so each element
# matches the model's (user_ids, item_ids, timestamps) inputs and its two outputs
train_dataset = tf.data.Dataset.from_tensor_slices(
    ((train_user_ids, train_item_ids, train_timestamps),
     (train_click_labels, train_conversion_labels)))
train_dataset = train_dataset.shuffle(buffer_size=len(train_data)).batch(batch_size)
val_dataset = tf.data.Dataset.from_tensor_slices(
    ((val_user_ids, val_item_ids, val_timestamps),
     (val_click_labels, val_conversion_labels)))
val_dataset = val_dataset.batch(batch_size)

# Train the model
history = model.fit(train_dataset, epochs=num_epochs, validation_data=val_dataset)

In the compile method, we specify the optimizer and the loss functions for each task. We use binary crossentropy as the loss function for both tasks.

In the training and validation datasets, we group the input features into one tuple and the labels into another, and create tf.data.Dataset objects from these pairs. We shuffle the training dataset and batch both datasets. We then use the fit method to train the model, specifying the training dataset, the number of epochs, and the validation dataset.

To generate predictions for new user-item interactions, we can use the following code:

# Load the test data
test_data = ...

# Extract the input features
test_user_ids = test_data['user_id'].values
test_item_ids = test_data['item_id'].values
test_timestamps = test_data['timestamp'].values

# Generate predictions
click_probabilities, conversion_probabilities = model.predict((test_user_ids, test_item_ids, test_timestamps))

# Save the predictions
test_data['click_probability'] = click_probabilities.flatten()
test_data['conversion_probability'] = conversion_probabilities.flatten()
test_data.to_csv('predictions.csv', index=False)

In this code, we load the test data and extract the input features. We then use the predict method of the model to generate the click and conversion probabilities for each interaction. Finally, we save the predictions to a CSV file.


You:

Great. Something may be wrong, though: how can I solve the user or item cold-start problem caused by unseen IDs occurring in the embedding table?


ChatGPT:

Good question! Cold-start is a common problem in recommender systems, where new users or items have no or very few interactions in the training data, making it difficult to generate accurate recommendations.

One way to handle the cold-start problem is to use a hybrid approach, where we use not only collaborative filtering based on user-item interactions, but also content-based filtering based on item features. This allows us to make recommendations for new items based on their features, even if they have no interactions in the training data.

Another approach is to use pre-trained embeddings for the user and item features. We can use a pre-trained language model like BERT or GPT to generate embeddings for textual features like item titles or user profiles. These pre-trained embeddings can be fine-tuned on the training data to generate task-specific embeddings for our recommender system.

To handle unseen users or items at inference time, we can use a fallback strategy, such as recommending popular items to new users, or, for new items, falling back on existing items that are similar to them.

Another approach is to use a multi-armed bandit algorithm, where we balance exploration and exploitation by recommending a mix of popular items and items similar to the ones the user has interacted with. As the user interacts more with the system, we can adjust the balance towards exploitation to generate more personalized recommendations.


You:

Sounds great. But talk is cheap, show me the code.


ChatGPT:

Sure! Here’s an example of how to implement a hybrid recommender system with user and item embeddings for handling the cold-start problem:

# Import necessary libraries
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, Concatenate, Dense, Dropout, Flatten
from tensorflow.keras.models import Model

# Define the input and embedding sizes
num_users = len(user_id_encoder.classes_)
num_items = len(item_id_encoder.classes_)
embedding_size = 32

# Define the user and item input layers
user_input = Input(shape=(1,), name='user_input')
item_input = Input(shape=(1,), name='item_input')

# Define the user and item embedding layers
user_embedding = Embedding(input_dim=num_users, output_dim=embedding_size, name='user_embedding')
item_embedding = Embedding(input_dim=num_items, output_dim=embedding_size, name='item_embedding')

# Generate the user and item embeddings and flatten them to shape (batch_size, embedding_size)
user_embedding_output = Flatten()(user_embedding(user_input))
item_embedding_output = Flatten()(item_embedding(item_input))

# Define the MLP layers
concatenated_embeddings = Concatenate(name='concatenated_embeddings')([user_embedding_output, item_embedding_output])
dropout_layer_1 = Dropout(0.2, name='dropout_layer_1')(concatenated_embeddings)
dense_layer_1 = Dense(64, activation='relu', name='dense_layer_1')(dropout_layer_1)
dropout_layer_2 = Dropout(0.2, name='dropout_layer_2')(dense_layer_1)
dense_layer_2 = Dense(32, activation='relu', name='dense_layer_2')(dropout_layer_2)

# Define the output layer
output_layer = Dense(1, activation='sigmoid', name='output_layer')(dense_layer_2)

# Define the model
model = Model(inputs=[user_input, item_input], outputs=output_layer)

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')

# Train the model
model.fit([train_data['user_id'], train_data['item_id']], train_data['label'], epochs=10, validation_split=0.2)

In this code, we define the input and embedding sizes for the user and item features, and define the user and item input layers. We then define the user and item embedding layers using the Embedding class, and generate the embeddings for the user and item features using these layers.

We then define an MLP model using the Concatenate, Dropout, and Dense layers, and concatenate the user and item embeddings as input to this model. We define a sigmoid activation function on the output layer, since we are performing binary classification to predict user-item interactions.

We compile the model using the Adam optimizer and binary crossentropy loss function, and train the model on the training data, using 20% of the data for validation. Note that we use the fit method of the model, and pass the user and item features as a list to the inputs parameter of the method.

To handle the cold-start problem, we can incorporate content-based filtering by including item features, such as the item title or description, in the input to the model. We can use pre-trained language models like BERT or GPT to generate embeddings for these features, or use simpler models like TF-IDF or word2vec.
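
As a small illustration of the content-based side, item titles could be vectorized with TF-IDF and used to find catalog items similar to a brand-new item (a sketch using scikit-learn; the titles and variable names are made up):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

item_titles = ["red running shoes", "blue running shoes", "stainless steel kettle"]
vectorizer = TfidfVectorizer()
title_vectors = vectorizer.fit_transform(item_titles)

# Vectorize a brand-new item that has no interaction history yet
new_title_vector = vectorizer.transform(["green running shoes"])

# Rank existing items by textual similarity to the new item
scores = cosine_similarity(new_title_vector, title_vectors).flatten()
most_similar = np.argsort(scores)[::-1]
print(most_similar[:2])  # indices of the most similar catalog items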

To handle unseen users or items at inference time, we can use a fallback strategy, such as recommending popular items to new users or recommending items similar to the ones the user has interacted with to new items. We can also use a multi-armed bandit algorithm to balance exploration and exploitation and generate personalized recommendations for cold-start users and items.


You:

Can you show the main coding logic for the last two strategies you mentioned above?


ChatGPT:

Sure, here are some code snippets to illustrate the two strategies:

Strategy 1: Recommending popular items to new users

# Get the top N most popular items
popular_items = train_data['item_id'].value_counts().head(N).index.tolist()

# Get the user's interactions
user_interactions = train_data[train_data['user_id'] == user_id]['item_id'].tolist()

# Remove the items the user has already interacted with
popular_items = [item for item in popular_items if item not in user_interactions]

# Recommend the remaining popular items to the user
recommendations = popular_items[:K]

In this code, we first compute the top N most popular items from the training data based on their frequency of interaction. We then get the user’s interactions from the training data, and remove the items that the user has already interacted with from the list of popular items. We then recommend the top K remaining popular items to the user.

Strategy 2: Recommending similar items to new items

import numpy as np

# Define the similarity function
def cosine_similarity(vector1, vector2):
    dot_product = np.dot(vector1, vector2)
    norm1 = np.linalg.norm(vector1)
    norm2 = np.linalg.norm(vector2)
    return dot_product / (norm1 * norm2)

# Look up the trained item embedding matrix (one row per encoded item ID)
item_embedding_matrix = model.get_layer('item_embedding').get_weights()[0]

# Get the embedding of the new item (new_item_id is the encoded integer ID;
# for a truly unseen item this vector would instead come from content features)
new_item_embedding = item_embedding_matrix[new_item_id]

# Compute the cosine similarity between the new item and all other items
similarities = []
for item_id in range(len(item_id_encoder.classes_)):
    if item_id == new_item_id:
        continue
    similarity = cosine_similarity(new_item_embedding, item_embedding_matrix[item_id])
    similarities.append((item_id, similarity))

# Sort the similarities in descending order
similarities = sorted(similarities, key=lambda x: x[1], reverse=True)

# Get the top K most similar items
recommendations = [item[0] for item in similarities[:K]]

In this code, we first define a similarity function to compute the cosine similarity between two embeddings. We then get the embedding of the new item using the item embedding layer of the trained model. We compute the cosine similarity between the new item and all other items, and sort the similarities in descending order. We then recommend the top K most similar items to the new item based on the computed similarities. Note that this strategy assumes that there are some existing items in the training data that are similar to the new item, which may not always be the case.
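
The multi-armed bandit approach mentioned earlier was not shown in code; a very rough epsilon-greedy sketch, where the scoring function and epsilon value are assumptions, could look like this:

import random

def epsilon_greedy_recommend(candidate_items, score_fn, k, epsilon=0.1):
    # With probability epsilon pick a random remaining candidate (exploration),
    # otherwise pick the highest-scoring remaining candidate (exploitation).
    ranked = sorted(candidate_items, key=score_fn, reverse=True)
    recommendations = []
    for _ in range(k):
        remaining = [item for item in ranked if item not in recommendations]
        if not remaining:
            break
        if random.random() < epsilon:
            recommendations.append(random.choice(remaining))
        else:
            recommendations.append(remaining[0])
    return recommendations

# Hypothetical usage: score candidates with the model's predicted click probability
# recommendations = epsilon_greedy_recommend(candidate_items, score_fn=lambda i: predicted_ctr[i], k=10)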

