Quantizing the deepseek-coder model

1 Introduction

DeepSeek-Coder achieves state-of-the-art performance among open-source code models across multiple programming languages and a wide range of benchmarks.

To try deploying it on a development board, the first step is to quantize it with llama.cpp.

2 Installing llama.cpp

After git clone, cd into the directory and run make to build; then install the Python dependencies with pip install -r requirements.txt.

3 Quantization

Going by what DeepSeek and the llama.cpp maintainers say on GitHub, llama.cpp's support for quantizing DeepSeek models is still a work in progress.
The problems encountered so far are recorded below.

3.1 DeepSeek's official tutorial

Following the official markdown guide:

git clone https://github.com/DOGEwbx/llama.cpp.git
cd llama.cpp
git checkout regex_gpt2_preprocess

This fails with error: pathspec 'regex_gpt2_preprocess' did not match any file(s) known to git, i.e. the branch no longer exists on that fork (it may have been deleted after being merged).


# set up the environment according to README
make
python3 -m pip install -r requirements.txt
# generate GGUF model
python convert-hf-to-gguf.py <MODEL_PATH> --outfile <GGUF_PATH> --model-name deepseekcoder

The conversion step then fails with convert-hf-to-gguf.py: error: unrecognized arguments: --model-name deepseekcoder.

Dropping the --model-name argument gets past that, but the script then raises NotImplementedError: Architecture 'LlamaForCausalLM' not supported!. Presumably this version of convert-hf-to-gguf.py simply does not handle Llama-architecture checkpoints; those are expected to go through convert.py instead, which is what the next section tries.
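As a sanity check, the rejected architecture string comes straight from the checkpoint's config.json. A minimal way to confirm what the model declares (plain Python, with <MODEL_PATH> standing in for the local model directory):

import json
from pathlib import Path

# Print the architecture and vocab size declared by the HF checkpoint.
# For deepseek-coder-1.3b-instruct this shows ['LlamaForCausalLM'] and 32256.
cfg = json.loads(Path("<MODEL_PATH>/config.json").read_text())
print(cfg["architectures"])
print(cfg["vocab_size"])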


3.2 Converting with convert.py

Following this comment and this comment, try converting with convert.py instead.
The relevant change appears to have been merged already, so let's give it a quick try.

python convert.py <MODEL_PATH> --outfile <GGUF_PATH>

This fails with: Exception: Vocab size mismatch (model has 32256, but ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct has 32022). Add the --pad-vocab option and try again.

The detailed log follows:

Loading model file ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/model.safetensors
params = Params(n_vocab=32256, n_embd=2048, n_layer=24, n_ctx=16384, n_ff=5504, n_head=16, n_head_kv=16, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=<RopeScalingType.LINEAR: 'linear'>, f_rope_freq_base=100000, f_rope_scale=4.0, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct'))
Found vocab files: {'spm': None, 'bpe': None, 'hfft': PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/tokenizer.json')}
Loading vocab file PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/tokenizer.json'), type 'hfft'
fname_tokenizer: ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Vocab info: <HfVocab with 32000 base tokens and 22 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens {'bos': 32013, 'eos': 32021, 'pad': 32014}, add special tokens {'bos': True, 'eos': False}>
Permuting layer 0
Permuting layer 1
Permuting layer 2
... (lines omitted)
Permuting layer 22
Permuting layer 23
lm_head.weight                                   -> output.weight                            | BF16   | [32256, 2048]
model.embed_tokens.weight                        -> token_embd.weight                        | BF16   | [32256, 2048]
model.layers.0.input_layernorm.weight            -> blk.0.attn_norm.weight                   | BF16   | [2048]
model.layers.0.mlp.down_proj.weight              -> blk.0.ffn_down.weight                    | BF16   | [2048, 5504]
model.layers.0.mlp.gate_proj.weight              -> blk.0.ffn_gate.weight                    | BF16   | [5504, 2048]
...
model.layers.18.self_attn.v_proj.weight          -> blk.18.attn_v.weight                     | BF16   | [2048, 2048]
model.layers.19.input_layernorm.weight           -> blk.19.attn_norm.weight                  | BF16   | [2048]
...
model.layers.9.input_layernorm.weight            -> blk.9.attn_norm.weight                   | BF16   | [2048]
model.layers.9.mlp.down_proj.weight              -> blk.9.ffn_down.weight                    | BF16   | [2048, 5504]
model.layers.9.mlp.gate_proj.weight              -> blk.9.ffn_gate.weight                    | BF16   | [5504, 2048]
model.layers.9.mlp.up_proj.weight                -> blk.9.ffn_up.weight                      | BF16   | [5504, 2048]
model.layers.9.post_attention_layernorm.weight   -> blk.9.ffn_norm.weight                    | BF16   | [2048]
model.layers.9.self_attn.k_proj.weight           -> blk.9.attn_k.weight                      | BF16   | [2048, 2048]
model.layers.9.self_attn.o_proj.weight           -> blk.9.attn_output.weight                 | BF16   | [2048, 2048]
model.layers.9.self_attn.q_proj.weight           -> blk.9.attn_q.weight                      | BF16   | [2048, 2048]
model.layers.9.self_attn.v_proj.weight           -> blk.9.attn_v.weight                      | BF16   | [2048, 2048]
model.norm.weight                                -> output_norm.weight                       | BF16   | [2048]
Writing ../DeepSeek-Coder/models/1.3b.gguf, format 1
Traceback (most recent call last):
  File "/home/stlinpeiyang/lpy22/LLM/llama.cpp/convert.py", line 1479, in <module>
    main()
  File "/home/stlinpeiyang/lpy22/LLM/llama.cpp/convert.py", line 1473, in main
    OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab,
  File "/home/stlinpeiyang/lpy22/LLM/llama.cpp/convert.py", line 1117, in write_all
    check_vocab_size(params, vocab, pad_vocab=pad_vocab)
  File "/home/stlinpeiyang/lpy22/LLM/llama.cpp/convert.py", line 963, in check_vocab_size
    raise Exception(msg)
Exception: Vocab size mismatch (model has 32256, but ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct has 32022). Add the --pad-vocab option and try again.

3.2.1 Adding --pad-vocab

First, the obvious route: the error message itself says to add --pad-vocab. With that flag the conversion runs to completion and the model can be quantized successfully, but testing the result aborts with:

terminate called after throwing an instance of 'std::out_of_range'
  what():  _Map_base::at
Aborted (core dumped)

There are related issue comments (this one and this one).

Judging from llama.cpp's pull requests and issues, this case simply hasn't been sorted out yet; a noob like me can only wait to be spoon-fed 😥. I do wonder how TheBloke handles it 👍.
For what it's worth, _Map_base::at is what libstdc++ prints when std::unordered_map::at fails to find a key, so some token lookup on the C++ side is hitting an entry that doesn't exist.
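My rough reading of what --pad-vocab does (an approximation from skimming convert.py, not the exact implementation): it fills the gap between the tokenizer's 32022 real entries and the checkpoint's 32256 embedding rows with placeholder tokens, along these lines:

# Approximate sketch of the --pad-vocab idea; names and format are illustrative.
tokens = [f"tok{i}" for i in range(32022)]   # stand-in for the 32022 real tokenizer entries
n_vocab = 32256                              # rows in the checkpoint's embedding matrix
for i in range(len(tokens), n_vocab):
    tokens.append(f"<dummy{i:05}>")          # padding tokens with no merge/score data
assert len(tokens) == n_vocab                # token list now matches the tensor shapes

If the runtime lookup that crashes involves one of these padding entries, or a token the C++ tokenizer expects but cannot find, that would match the std::out_of_range above.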


3.2.2 Modifying vocab_size

Second, going by the first half of the error, model has 32256, but ... has 32022, there is a similar issue.
Following a comment there, modify vocab_size: open config.json under deepseek-coder-1.3b-instruct and change the value of "vocab_size" from 32256 to 32022, matching the tokenizer's 32000 base tokens plus 22 added tokens seen in the log above (a sketch of the edit is shown below), then rerun the conversion.
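A minimal sketch of that edit (plain Python, with <MODEL_PATH> standing in for the local model directory):

import json
from pathlib import Path

# Change the declared vocab size from 32256 to the tokenizer's 32022 entries.
p = Path("<MODEL_PATH>/config.json")
cfg = json.loads(p.read_text())
cfg["vocab_size"] = 32022   # was 32256
p.write_text(json.dumps(cfg, indent=2))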

python convert.py <MODEL_PATH> --outfile <GGUF_PATH>

The log output is now:

Loading model file ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/model.safetensors
params = Params(n_vocab=32022, n_embd=2048, n_layer=24, n_ctx=16384, n_ff=5504, n_head=16, n_head_kv=16, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=<RopeScalingType.LINEAR: 'linear'>, f_rope_freq_base=100000, f_rope_scale=4.0, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct'))
Found vocab files: {'spm': None, 'bpe': None, 'hfft': PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/tokenizer.json')}
Loading vocab file PosixPath('../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct/tokenizer.json'), type 'hfft'
fname_tokenizer: ../DeepSeek-Coder/models/deepseek-coder-1.3b-instruct
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Vocab info: <HfVocab with 32000 base tokens and 22 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens {'bos': 32013, 'eos': 32021, 'pad': 32014}, add special tokens {'bos': True, 'eos': False}>
Permuting layer 0
Permuting layer 1
Permuting layer 2
... (lines omitted)
lm_head.weight                                   -> output.weight                            | BF16   | [32256, 2048]
model.embed_tokens.weight                        -> token_embd.weight                        | BF16   | [32256, 2048]
model.layers.0.input_layernorm.weight            -> blk.0.attn_norm.weight                   | BF16   | [2048]
model.layers.0.mlp.down_proj.weight              -> blk.0.ffn_down.weight                    | BF16   | [2048, 5504]
model.layers.0.mlp.gate_proj.weight              -> blk.0.ffn_gate.weight                    | BF16   | [5504, 2048]
model.layers.0.mlp.up_proj.weight                -> blk.0.ffn_up.weight                      | BF16   | [5504, 2048]
model.layers.0.post_attention_layernorm.weight   -> blk.0.ffn_norm.weight                    | BF16   | [2048]
model.layers.0.self_attn.k_proj.weight           -> blk.0.attn_k.weight                      | BF16   | [2048, 2048]
model.layers.0.self_attn.o_proj.weight           -> blk.0.attn_output.weight                 | BF16   | [2048, 2048]
model.layers.0.self_attn.q_proj.weight           -> blk.0.attn_q.weight                      | BF16   | [2048, 2048]
model.layers.0.self_attn.v_proj.weight           -> blk.0.attn_v.weight     
... (lines omitted)
model.layers.9.self_attn.q_proj.weight           -> blk.9.attn_q.weight                      | BF16   | [2048, 2048]
model.layers.9.self_attn.v_proj.weight           -> blk.9.attn_v.weight                      | BF16   | [2048, 2048]
model.norm.weight                                -> output_norm.weight                       | BF16   | [2048]
Writing ../DeepSeek-Coder/models/1.3b.gguf, format 1
Ignoring added_tokens.json since model matches vocab size without it.
gguf: This GGUF file is for Little Endian only
gguf: Setting special token type bos to 32013
gguf: Setting special token type eos to 32021
gguf: Setting special token type pad to 32014
gguf: Setting add_bos_token to True
gguf: Setting add_eos_token to False
gguf: Setting chat_template to {% if not add_generation_prompt is defined %}
{% set add_generation_prompt = false %}
{% endif %}
{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{{bos_token}}{%- if not ns.found -%}
{{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\n'}}
{%- endif %}
{%- for message in messages %}
    {%- if message['role'] == 'system' %}
{{ message['content'] }}
    {%- else %}
        {%- if message['role'] == 'user' %}
{{'### Instruction:\n' + message['content'] + '\n'}}
        {%- else %}
{{'### Response:\n' + message['content'] + '\n<|EOT|>\n'}}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{% if add_generation_prompt %}
{{'### Response:'}}
{% endif %}
[  1/219] Writing tensor output.weight                          | size  32256 x   2048  | type F16  | T+   0
[  2/219] Writing tensor token_embd.weight                      | size  32256 x   2048  | type F16  | T+   0
... (lines omitted)
[216/219] Writing tensor blk.9.attn_output.weight               | size   2048 x   2048  | type F16  | T+   2
[217/219] Writing tensor blk.9.attn_q.weight                    | size   2048 x   2048  | type F16  | T+   2
[218/219] Writing tensor blk.9.attn_v.weight                    | size   2048 x   2048  | type F16  | T+   2
[219/219] Writing tensor output_norm.weight                     | size   2048           | type F32  | T+   2
Wrote ../DeepSeek-Coder/models/1.3b.gguf

The GGUF file is generated successfully. The next step is quantization:

./quantize ${out_model.gguf} ${out_model-q5_0.gguf} q5_0

The quantization log:

main: build = 1 (231ae28)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
main: quantizing '../DeepSeek-Coder/models/1.3b.gguf' to '../DeepSeek-Coder/models/1.3b-q5_0.gguf' as Q5_0
llama_model_loader: loaded meta data with 24 key-value pairs and 219 tensors from ../DeepSeek-Coder/models/1.3b.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 24
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5504
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 16
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 16
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 100000.000000
llama_model_loader: - kv  11:                    llama.rope.scaling.type str              = linear
llama_model_loader: - kv  12:                  llama.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 1
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32022]   = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32022]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32022]   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 32013
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32021
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32014
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - type  f32:   49 tensors
llama_model_loader: - type  f16:  170 tensors
llama_model_quantize_internal: meta size = 767616 bytes
[   1/ 219]                        output.weight - [ 2048, 32256,     1,     1], type =    f16, quantizing to q6_K .. size =   126.00 MiB ->    51.68 MiB
[   2/ 219]                    token_embd.weight - [ 2048, 32256,     1,     1], type =    f16, quantizing to q5_0 .. size =   126.00 MiB ->    43.31 MiB | hist: 0.040 0.018 0.028 0.043 0.061 0.082 0.101 0.114 0.117 0.109 0.092 0.072 0.052 0.035 0.022 0.016
...
[ 218/ 219]                  blk.9.attn_v.weight - [ 2048,  2048,     1,     1], type =    f16, quantizing to q5_0 .. size =     8.00 MiB ->     2.75 MiB | hist: 0.040 0.017 0.028 0.042 0.060 0.081 0.101 0.116 0.121 0.109 0.091 0.071 0.051 0.034 0.022 0.016
[ 219/ 219]                   output_norm.weight - [ 2048,     1,     1,     1], type =    f32, size =    0.008 MB
llama_model_quantize_internal: model size  =  2568.38 MB
llama_model_quantize_internal: quant size  =   891.50 MB
llama_model_quantize_internal: hist: 0.040 0.017 0.028 0.043 0.061 0.082 0.101 0.114 0.118 0.109 0.092 0.071 0.051 0.035 0.022 0.016

main: quantize time =  9300.54 ms
main:    total time =  9300.54 ms

Now run a quick test:

./main -m ../DeepSeek-Coder/models/1.3b-q5_0.gguf  -n 256 -t 18 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt -ngl 20

Loading the model fails:

warning: not compiled with GPU offload support, --n-gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 1 (231ae28)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
main: seed  = 1710571501
llama_model_loader: loaded meta data with 25 key-value pairs and 219 tensors from ../DeepSeek-Coder/models/1.3b-q5_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 24
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5504
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 16
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 16
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 100000.000000
llama_model_loader: - kv  11:                    llama.rope.scaling.type str              = linear
llama_model_loader: - kv  12:                  llama.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 8
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32022]   = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32022]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32022]   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 32013
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32021
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32014
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   49 tensors
llama_model_loader: - type q5_0:  169 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: SPM vocabulary, but newline token not found: _Map_base::at! Using special_pad_id instead.llm_load_vocab: mismatch in special tokens definition ( 9/32022 vs 22/32022 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32022
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 5504
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 100000.0
llm_load_print_meta: freq_scale_train = 0.25
llm_load_print_meta: n_yarn_orig_ctx  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q5_0
llm_load_print_meta: model params     = 1.35 B
llm_load_print_meta: model size       = 891.50 MiB (5.55 BPW)
llm_load_print_meta: general.name     = models
llm_load_print_meta: BOS token        = 32013 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 32021 '<|EOT|>'
llm_load_print_meta: UNK token        = 0 '!'
llm_load_print_meta: PAD token        = 32014 '<|end▁of▁sentence|>'
llm_load_tensors: ggml ctx size =    0.08 MiB
llama_model_load: error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected  2048, 32022, got  2048, 32256,     1,     1
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../DeepSeek-Coder/models/1.3b-q5_0.gguf'
main: error: unable to load model

Looking at the error llama_model_load: error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 2048, 32022, got 2048, 32256, 1, 1, this traces back to the vocab_size edit above: changing config.json only changes the vocabulary size declared in the GGUF metadata, while the embedding and output tensors written from the checkpoint still have 32256 rows, so the loader now expects 32022 rows but finds 32256.
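A workaround I have not tried yet would be to make the checkpoint itself consistent with the tokenizer before converting, for example by trimming the embedding matrices to 32022 rows with transformers' resize_token_embeddings. All the special tokens sit below ID 32022 (bos 32013, eos 32021, pad 32014), so the trimmed rows should all be unused padding. A hedged, untested sketch:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Untested sketch: shrink embed_tokens / lm_head from 32256 to 32022 rows so
# the tensor shapes agree with the tokenizer before running convert.py.
model = AutoModelForCausalLM.from_pretrained("<MODEL_PATH>", torch_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained("<MODEL_PATH>")
model.resize_token_embeddings(len(tok))       # len(tok) is 32022 here
model.save_pretrained("<TRIMMED_MODEL_PATH>")
tok.save_pretrained("<TRIMMED_MODEL_PATH>")

resize_token_embeddings should also update config.vocab_size on save, which would make the manual config.json edit unnecessary.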

