
New Kafka versions support KRaft, and ZooKeeper will eventually be dropped entirely.

Binary packages: https://downloads.apache.org/kafka/

After extracting, edit config/kraft/server.properties and change the listener addresses to your own IP.

In the bin folder of the Kafka installation directory, run the following command to generate a new cluster ID (a single machine is fine too). The Windows versions of the commands are in the windows subfolder.

kafka-storage.sh random-uuid
or
kafka-storage.bat random-uuid

Format Kafka's storage directory with the UUID generated in the previous step:

kafka-storage.bat format -t <uuid> -c ..\..\config\kraft\server.properties
or
kafka-storage.sh format -t <uuid> -c ../../config/kraft/server.properties
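
On Linux/macOS the two steps can also be combined by capturing the generated ID in a shell variable; a small convenience sketch:

CLUSTER_ID=$(kafka-storage.sh random-uuid)
kafka-storage.sh format -t "$CLUSTER_ID" -c ../../config/kraft/server.properties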

Start the Kafka server, using the configuration from the kraft directory:

kafka-server-start.bat ..\..\config\kraft\server.properties
or
kafka-server-start.sh ../../config/kraft/server.properties
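
To verify the broker is up, you can create and describe a test topic with the bundled kafka-topics tool; a quick sketch, assuming the default localhost:9092 listener (use the .bat variant on Windows):

kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
kafka-topics.sh --describe --topic test-topic --bootstrap-server localhost:9092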

Install a single-node Kafka with Docker

The folder contains two files: .env and docker-compose.yml.

The .env file:

# Change 192.168.252.1 below to your own IP address
ACCESS_ADDR=192.168.252.1:9092

The docker-compose.yml:

version: '3.8'

services:
  broker:
    image: apache/kafka:3.7.0
    container_name: broker
    ports:
      - '9092:9092'
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT_HOST://${ACCESS_ADDR},PLAINTEXT://broker:19092'
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'CONTROLLER://:29093,PLAINTEXT_HOST://:9092,PLAINTEXT://:19092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: '/var/lib/kafka/data'
    volumes:
      - $PWD/data/:/var/lib/kafka/data

  kafka-ui:
    image: provectuslabs/kafka-ui:v0.7.2
    container_name: kafka-ui
    ports:
      - "18080:8080"
    environment:
      KAFKA_CLUSTERS_0_NAME: 'Local kafka Cluster'
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker:19092'
      DYNAMIC_CONFIG_ENABLED: "true"
    depends_on:
      - broker

Install a Kafka cluster

The .env file:

# Change 192.168.251.1 below to your own IP address
KAFKA_1_ACCESS_ADDR=192.168.251.1:33001
KAFKA_2_ACCESS_ADDR=192.168.251.1:33002
KAFKA_3_ACCESS_ADDR=192.168.251.1:33003

The docker-compose.yml:

version: "3.8"

services:
  kafka-1:
    image: docker.io/bitnami/kafka:3.7
    container_name: kafka-1
    ports:
      - "33001:9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1:9093,1@kafka-2:9093,2@kafka-3:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      #- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${KAFKA_1_ACCESS_ADDR}
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      # Clustering
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=2
    volumes:
      - $PWD/data/kafka-1:/bitnami/kafka
    networks:
      - kafka-net

  kafka-2:
    image: docker.io/bitnami/kafka:3.7
    container_name: kafka-2
    ports:
      - "33002:9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1:9093,1@kafka-2:9093,2@kafka-3:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      #- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${KAFKA_2_ACCESS_ADDR}
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      # Clustering
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=2
    volumes:
      - $PWD/data/kafka-2:/bitnami/kafka
    networks:
      - kafka-net

  kafka-3:
    image: docker.io/bitnami/kafka:3.7
    container_name: kafka-3
    ports:
      - "33003:9092"
    environment:
      # KRaft settings
      - KAFKA_CFG_NODE_ID=2
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1:9093,1@kafka-2:9093,2@kafka-3:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      #- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${KAFKA_3_ACCESS_ADDR}
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      # Clustering
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=2
    volumes:
      - $PWD/data/kafka-3:/bitnami/kafka
    networks:
      - kafka-net

  kafka-ui:
    image: provectuslabs/kafka-ui:v0.7.2
    restart: always
    container_name: kafka-ui
    ports:
      - "18080:8080"
    environment:
      - KAFKA_CLUSTERS_0_NAME=Local-Kraft-Cluster
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka-1:9092,kafka-2:9092,kafka-3:9092
      - DYNAMIC_CONFIG_ENABLED=true
      - KAFKA_CLUSTERS_0_AUDIT_TOPICAUDITENABLED=true
      - KAFKA_CLUSTERS_0_AUDIT_CONSOLEAUDITENABLED=true
    depends_on:
      - kafka-1
      - kafka-2
      - kafka-3
    networks:
      - kafka-net

networks:
  kafka-net:

On first use, create the data folders for persistence and adjust their permissions:

mkdir -p data/kafka-1 data/kafka-2 data/kafka-3
chmod -R 0777 data

Run docker-compose up -d. Once the services are up, open http://localhost:18080 in a browser to inspect the Kafka cluster.
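
To sanity-check that the brokers are reachable, something like the following should work; the in-container path to the Kafka scripts is an assumption based on the bitnami/kafka image layout:

docker compose ps
docker exec -it kafka-1 /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092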

The commands below are for Windows; on Linux, just switch to the corresponding directory and use the .sh scripts instead.

Kafka installation was covered in the previous post.

Start Kafka in KRaft mode:

kafka-server-start.bat ..\..\config\kraft\server.properties

Producer: once it has started, type the messages to produce into the command line.

kafka-console-producer.bat --topic test-topic --bootstrap-server 192.168.252.1:9092

Kafka consumers are organized into groups: consumers that share a group id form one group, and within a group each message is delivered to only one of its consumers.

To get publish/subscribe behavior, give the consumers different group names.

Consumer 1:

kafka-console-consumer.bat --topic test-topic --bootstrap-server 192.168.252.1:9092 --group group1

Consumer 2:

kafka-console-consumer.bat --topic test-topic --bootstrap-server 192.168.252.1:9092 --group group2
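
To inspect the two groups and their committed offsets, the bundled kafka-consumer-groups tool can be used; a quick sketch against the same broker address:

kafka-consumer-groups.bat --bootstrap-server 192.168.252.1:9092 --list
kafka-consumer-groups.bat --bootstrap-server 192.168.252.1:9092 --describe --group group1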

Create a new Cargo project:

cargo new rust-web

Edit Cargo.toml:

[dependencies]
actix-web = "4"

Write main.rs:

use actix_web::{get, web, App, HttpServer, Responder, HttpResponse};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(hello) // handlers annotated with get/post macros are registered with service
            .route("/index", web::get().to(indexs)) // plain async functions are registered with route
    })
    .workers(8) // number of worker threads
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

async fn indexs() -> impl Responder {
    HttpResponse::Ok().body("index")
}

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

Run it:

cargo run
or
cargo build
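
Once the server is running via cargo run, a quick check with curl (assuming the default address above):

curl http://127.0.0.1:8080/
# Hello world!
curl http://127.0.0.1:8080/index
# index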

Create the project:

cargo new --lib my-wasm

Add the dependency in Cargo.toml:

[dependencies]
wasm-bindgen = "0.2"

[lib]
crate-type = ["cdylib"]

Write the code in src/lib.rs:

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
       a + b
}

Install wasm-pack and build the wasm module:

## install wasm-pack
cargo install wasm-pack
## build
wasm-pack build --target web

The build output is written to the pkg folder, which contains the JavaScript glue file (my_wasm.js) imported by the test page below.
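
For reference, the pkg folder should look roughly like this; the exact file list is an assumption about the default wasm-pack output for a crate named my-wasm:

ls pkg
# my_wasm.js  my_wasm_bg.wasm  my_wasm.d.ts  my_wasm_bg.wasm.d.ts  package.json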

Write an HTML test page:

<!DOCTYPE html>
<html lang="en">
    <head>
    <meta charset="UTF-8">
    <title>Rust Wasm Example</title>
    <script type="module">
        import init, { add } from './pkg/my_wasm.js';
            async function run() {
                await init();
                console.log(add(2, 3));
            }
           run();

    </script>
    </head>
    <body>
        <h1>Rust Wasm Example</h1>
    </body>
</html>

The HTML page must be served over HTTP; opening it directly as a local file fails with a cross-origin error. Here I simply use Go as a static file server:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Serve files from the current directory
    http.Handle("/", http.FileServer(http.Dir(".")))
    // Port the server listens on
    port := "8080"
    // Start the server
    fmt.Printf("Starting file server on http://localhost:%s/\n", port)
    err := http.ListenAndServe(":"+port, nil)
    if err != nil {
        fmt.Println("Error starting file server: ", err)
    }
}
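
Assuming the test page is saved as index.html next to this main.go and the pkg folder (the file name is my assumption), running it looks like this:

go run main.go
# then open http://localhost:8080/index.html in a browser
# the browser console should print 5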

Note: the GitLab and gitlab-runner versions should match, otherwise you may run into problems.

Install GitLab with Docker

cd /opt/
mkdir gitlab
export GITLAB_HOME=/opt/gitlab

Because creating a runner with the official gitlab/gitlab-ce:latest image kept returning 404, I ended up installing the JH (JiHu GitLab) edition instead.

docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 9001:80 --publish 2222:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
--shm-size 256m \
registry.gitlab.cn/omnibus/gitlab-jh:latest
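
The first start can take several minutes; you can follow the container logs or check the internal services roughly like this:

docker logs -f gitlab
docker exec -it gitlab gitlab-ctl status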

Visit the JiHu GitLab URL, http://192.168.252.131:9001, and log in with the username root and the password printed by the following command:

[root@root~]# docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
Password: xxzIrC8HPFfuxVmGSyxxxx221Ihu+a2edEySMw=

Log in, create a project, and upload your public key (details omitted).

Install and register gitlab-runner with Docker

docker run -d --name gitlab-runner --restart always \
-v /opt/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest

Create the runner in the GitLab UI, then register it.




Enter the runner container: docker exec -it gitlab-runner /bin/bash

gitlab-runner register  --url http://192.168.252.131:9001  --token glrt-Tz6onF8_bUSeNwaqqg8w
# then walk through the interactive prompts and choose shell as the executor at the end
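
If you prefer to skip the interactive prompts, gitlab-runner register also accepts the answers as flags; a sketch with the same URL and token as above (the description value is just an example, and the exact flag set can vary by runner version):

gitlab-runner register --non-interactive \
  --url http://192.168.252.131:9001 \
  --token glrt-Tz6onF8_bUSeNwaqqg8w \
  --executor shell \
  --description "shell-runner"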

There is one small gotcha with the JiHu edition:

# vim is not shipped in the container, so install it yourself, or simply edit the file in the host-mapped directory instead
[root@249b6b18ffa8]# apt update
[root@249b6b18ffa8]# apt install -y vim
# edit the config and add a clone_url entry alongside url
# (inside the container the file is /etc/gitlab-runner/config.toml; on the host it is mapped to /opt/gitlab-runner/config/config.toml)
[root@249b6b18ffa8]# vi /etc/gitlab-runner/config.toml
[[runners]]
  name = "9b499a1ad4dc"
  url = "http://192.168.252.131:9001"
  clone_url = "http://192.168.252.131:9001"  # the added entry: the GitLab address reachable from the runner
  id = 10
  token = "glrt-Tz6onF8_bUSeNwaqqg8w"
  token_obtained_at = 2024-03-07T03:43:37Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "shell"
  [runners.cache]
    MaxUploadedArchiveSize = 0
# restart the gitlab-runner container
docker restart gitlab-runner

Now create .gitlab-ci.yml. This is the official example and only serves to get a pipeline running end to end:

stages:          # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy

build-job:       # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."

unit-test-job:   # This job runs in the test stage.
  stage: test    # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - echo "Code coverage is 90%"

lint-test-job:   # This job also runs in the test stage.
  stage: test    # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - echo "No lint issues found."

deploy-job:      # This job runs in the deploy stage.
  stage: deploy  # It only runs when *both* jobs in the test stage complete successfully.
  environment: production
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."

If you have added your public key to GitLab and are still prompted for a username and password every time, it is probably because the repository is accessed over an HTTPS URL rather than an SSH URL.

With an HTTPS URL, GitLab asks for username/password authentication even if your SSH public key is registered. To fix this, change the repository's remote URL to the SSH URL.

You can change the remote URL of the Git repository with:

git remote set-url origin git@your-gitlab-domain.com:yourusername/yourrepository.git

Alternatively, open a terminal in the project directory and run:

git config --global credential.helper store

This creates a .git-credentials file that records your username and password; you only need to enter them once, after which they are saved to that file and Git stops prompting you.
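
For reference, ~/.git-credentials stores one URL per line in roughly this form (placeholder values only):

https://yourusername:yourpassword@your-gitlab-domain.com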

If a .gitignore file does not exist yet, create one named .gitignore in the project root and add the following:

.idea/

If the .idea folder is already being tracked, run git rm -r --cached .idea to remove it from Git's index, then commit the change:

git rm -r --cached .idea
git commit -m "xxxxx"
git push origin master

LFS errors

Error 1:

WARNING: Authentication error: Authentication required: LFS only supported repository in paid enterprise.

Error 2:

batch response: LFS only supported repository in paid enterprise.

For the first error, run the following command; {your_gitee}/{your_repo} in the command is your remote repository path, so replace it with your own.

git config lfs.https://gitee.com/{your_gitee}/{your_repo}.git/info/lfs.locksverify false

For the second error, try deleting the .git/hooks/pre-push file and then push again.

I keep forgetting this, so here is a note.

Command (replace email@email.com with your own email address):

ssh-keygen -t rsa -C "email@email.com"

Where the generated keys are stored:

Windows

C:\Users\[username]\.ssh

Linux

~/.ssh

Copy the contents of id_rsa.pub into the SSH key settings of your Git server.
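
You can confirm the key is accepted before cloning anything; a quick sketch using the placeholder domain from earlier:

ssh -T git@your-gitlab-domain.com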

To make Git ignore file permission (mode) changes, at the project scope:

git config core.fileMode false

Or apply it globally:

git config --global core.filemode false

Or edit the ~/.gitconfig file and add the following:

[core]
filemode = false
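
To check which value is actually in effect for the current repository:

git config --get core.fileMode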