Preface: No rambling about theory here, just how to actually use it. You won't find a more detailed write-up anywhere: both yml configuration and Config factory-class configuration are covered, with batch-size, acks, offset, auto-commit, trusted-packages, poll-timeout, and linger all included, plus batch consumption, enabling transactions, setting the batch consumption size, delayed sending, retry on failure, and exception handling. What more could you want?
As we all know, the most popular message brokers today are RabbitMQ, RocketMQ, and Kafka. Of these, RabbitMQ is the most widely used. RocketMQ is Alibaba's product; it outperforms RabbitMQ and has been battle-tested by years of Double 11 traffic, but adoption is lower because people worry Alibaba might stop maintaining it someday. Kafka has by far the highest throughput of the three, and it supports transactions and can guarantee messages are not lost (claims online that it can't refer to old versions; support was added from version 2 onward). Comparatively, Kafka has only one weakness, limited support for delayed queues, and is better than the other two in every other respect (personal experience, don't flame me).
1. Installing Kafka on Linux

See my other article: Installing Kafka on Debian (works for any Linux) and configuring remote access.
2. Building the Project
Multi-module project setup is out of scope here. If you don't know how, just create two ordinary web projects named KafkaConsumer and KafkaProvider.
3. Adding the Dependency
Create a standard spring-web project. The only extra dependency you really need is the one below. The kafka-client mentioned around the web is not a Spring Boot artifact but the raw Kafka client, and kafka-test isn't needed either; that one is for controlling a broker from code.
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```
4. Configuration Files
The two approaches override each other, and some settings can only be set via the Config class. I suggest doing what I do: write both, and have the Config class read its parameter values from the yml. That way you can still use Nacos to modify the Kafka configuration online.
Producer

Detailed explanations of each setting are in the comments.
yml approach

```yaml
server:
  port: 8081
spring:
  kafka:
    producer:
      # Kafka broker address; multiple addresses are comma-separated
      bootstrap-servers: 175.24.228.202:9092
      # Transaction id prefix; setting this enables transactions
      transaction-id-prefix: kafkaTx-
      # Number of retries on send failure
      retries: 3
      # all: the leader and all replicas must acknowledge before a send counts as successful
      acks: all
      # Batch size in bytes
      batch-size: 16384
      # Producer buffer memory in bytes
      buffer-memory: 1024000
      key-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
```
Config approach

```java
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.transaction.KafkaTransactionManager;

import java.util.HashMap;
import java.util.Map;

@SpringBootConfiguration
public class KafkaProviderConfig {

    @Value("${spring.kafka.producer.bootstrap-servers}")
    private String bootstrapServers;
    @Value("${spring.kafka.producer.transaction-id-prefix}")
    private String transactionIdPrefix;
    @Value("${spring.kafka.producer.acks}")
    private String acks;
    @Value("${spring.kafka.producer.retries}")
    private String retries;
    @Value("${spring.kafka.producer.batch-size}")
    private String batchSize;
    @Value("${spring.kafka.producer.buffer-memory}")
    private String bufferMemory;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>(16);
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // all: the leader and all replicas must acknowledge the write
        props.put(ProducerConfig.ACKS_CONFIG, acks);
        props.put(ProducerConfig.RETRIES_CONFIG, retries);
        // Batch size in bytes; a batch is sent once it fills up
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
        // Wait up to 5000 ms for a batch to fill before sending it anyway
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5000");
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<Object, Object> producerFactory() {
        DefaultKafkaProducerFactory<Object, Object> factory = new DefaultKafkaProducerFactory<>(producerConfigs());
        // Setting a transaction id prefix enables transactions
        factory.setTransactionIdPrefix(transactionIdPrefix);
        return factory;
    }

    @Bean
    public KafkaTransactionManager<Object, Object> kafkaTransactionManager(ProducerFactory<Object, Object> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }

    @Bean
    public KafkaTemplate<Object, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```
Consumer

yml approach

```yaml
server:
  port: 8082
spring:
  kafka:
    consumer:
      bootstrap-servers: 175.24.228.202:9092
      group-id: firstGroup
      # Where to start when there is no committed offset: latest / earliest / none
      auto-offset-reset: latest
      # Commit offsets manually instead of automatically
      enable-auto-commit: false
      key-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              # Packages the JsonDeserializer is allowed to deserialize; * trusts everything
              packages: "*"
      # Maximum number of records returned by a single poll
      max-poll-records: 3
    properties:
      max:
        poll:
          interval:
            # Maximum time between two polls before the consumer is considered dead
            ms: 600000
      session:
        timeout:
          ms: 10000
    listener:
      # Number of consumer threads in the listener container
      concurrency: 4
      # Commit the offset immediately when acknowledge() is called
      ack-mode: manual_immediate
      # Don't fail on startup if a topic is missing
      missing-topics-fatal: false
      poll-timeout: 600000
```
Config approach

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.serializer.JsonDeserializer;

import java.util.HashMap;
import java.util.Map;

@SpringBootConfiguration
public class KafkaConsumerConfig {

    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String bootstrapServers;
    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;
    @Value("${spring.kafka.consumer.enable-auto-commit}")
    private boolean enableAutoCommit;
    @Value("${spring.kafka.properties.session.timeout.ms}")
    private String sessionTimeout;
    @Value("${spring.kafka.properties.max.poll.interval.ms}")
    private String maxPollIntervalTime;
    @Value("${spring.kafka.consumer.max-poll-records}")
    private String maxPollRecords;
    @Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset;
    @Value("${spring.kafka.listener.concurrency}")
    private Integer concurrency;
    @Value("${spring.kafka.listener.missing-topics-fatal}")
    private boolean missingTopicsFatal;
    @Value("${spring.kafka.listener.poll-timeout}")
    private long pollTimeout;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>(16);
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        // Only takes effect when auto-commit is enabled
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "2000");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        // Maximum time between two polls before the consumer is considered dead
        propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollIntervalTime);
        // Maximum number of records per poll
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return propsMap;
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        // JsonDeserializer is Closeable, so try-with-resources keeps static analysis happy
        try (JsonDeserializer<Object> deserializer = new JsonDeserializer<>()) {
            deserializer.trustedPackages("*");
            return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new JsonDeserializer<>(), deserializer);
        }
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.setMissingTopicsFatal(missingTopicsFatal);
        // Commit the offset immediately when acknowledge() is called
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        return factory;
    }
}
```
5. Writing the Code

Now let's write the Kafka message-sending code.
Producer: sending

KafkaController sends messages to Kafka.
```java
import icu.xuyijie.provider.entity.User;
import icu.xuyijie.provider.handler.KafkaSendResultHandler;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

@RestController
@RequestMapping("/provider")
// Transactions are enabled, so sends must run inside a transaction
@Transactional(rollbackFor = RuntimeException.class)
public class KafkaController {

    private final KafkaTemplate<Object, Object> kafkaTemplate;

    public KafkaController(KafkaTemplate<Object, Object> kafkaTemplate, KafkaSendResultHandler kafkaSendResultHandler) {
        this.kafkaTemplate = kafkaTemplate;
        // Register the send-result callback
        this.kafkaTemplate.setProducerListener(kafkaSendResultHandler);
    }

    @RequestMapping("/sendMultiple")
    public void sendMultiple() {
        String message = "发送到Kafka的消息";
        for (int i = 0; i < 10; i++) {
            kafkaTemplate.send("topic1", message + i);
            System.out.println(message + i);
        }
    }

    @RequestMapping("/send")
    public void send() {
        User user = new User(1, "徐一杰");
        kafkaTemplate.send("topic1", user);
        kafkaTemplate.send("topic2", "发给topic2");
    }

    // Three other ways to send
    public void sendDemo() throws ExecutionException, InterruptedException, TimeoutException {
        // Block until the send completes (or times out)
        kafkaTemplate.send("topic1", "发给topic1").get(1, TimeUnit.MILLISECONDS);
        // Send a ProducerRecord
        ProducerRecord<Object, Object> producerRecord = new ProducerRecord<>("topic.quick.demo", "use ProducerRecord to send message");
        kafkaTemplate.send(producerRecord);
        // Send a Message whose headers specify topic, partition, and key
        Map<String, Object> map = new HashMap<>();
        map.put(KafkaHeaders.TOPIC, "topic.quick.demo");
        map.put(KafkaHeaders.PARTITION_ID, 0);
        map.put(KafkaHeaders.MESSAGE_KEY, 0);
        GenericMessage<Object> message = new GenericMessage<>("use Message to send message", new MessageHeaders(map));
        kafkaTemplate.send(message);
    }
}
```
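The User entity referenced above isn't shown in the post. Here is a minimal sketch matching the new User(1, "徐一杰") call (the field names are assumptions):

```java
package icu.xuyijie.provider.entity;

// Minimal POJO for JSON serialization; field names are assumptions
public class User {
    private Integer id;
    private String name;

    public User() {
    }

    public User(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

Note that JsonSerializer adds type headers by default, so the consumer side needs this class (or a configured type mapping) on its classpath to deserialize the message.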
Success callback and send-failure handling: KafkaSendResultHandler
```java
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.lang.Nullable;
import org.springframework.stereotype.Component;

@Component
public class KafkaSendResultHandler implements ProducerListener<Object, Object> {

    @Override
    public void onSuccess(ProducerRecord<Object, Object> producerRecord, RecordMetadata recordMetadata) {
        System.out.println("消息发送成功:" + producerRecord.toString());
    }

    @Override
    public void onError(ProducerRecord<Object, Object> producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
        System.out.println("消息发送失败:" + producerRecord.toString() + exception.getMessage());
    }
}
```
Consumer: receiving

KafkaHandler receives messages from Kafka.
```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Objects;

@RestController
public class KafkaHandler {

    private final KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    public KafkaHandler(KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry) {
        this.kafkaListenerEndpointRegistry = kafkaListenerEndpointRegistry;
    }

    @KafkaListener(topics = {"topic1", "topic2"}, errorHandler = "myKafkaListenerErrorHandler")
    public void listen1(ConsumerRecord<Object, Object> consumerRecord, Acknowledgment ack) {
        try {
            // Uncomment the next line to test exception handling
            // throw new RuntimeException("测试异常");
            System.out.println(consumerRecord.value());
            // Manually commit the offset
            ack.acknowledge();
        } catch (Exception e) {
            System.out.println("消费失败:" + e);
        }
    }

    @RequestMapping("/pause/{listenerId}")
    public void stop(@PathVariable String listenerId) {
        Objects.requireNonNull(kafkaListenerEndpointRegistry.getListenerContainer(listenerId)).pause();
    }

    @RequestMapping("/resume/{listenerId}")
    public void resume(@PathVariable String listenerId) {
        Objects.requireNonNull(kafkaListenerEndpointRegistry.getListenerContainer(listenerId)).resume();
    }

    @RequestMapping("/start/{listenerId}")
    public void start(@PathVariable String listenerId) {
        Objects.requireNonNull(kafkaListenerEndpointRegistry.getListenerContainer(listenerId)).start();
    }
}
```
Exception handling: MyKafkaListenerErrorHandler
```java
import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.kafka.listener.KafkaListenerErrorHandler;
import org.springframework.kafka.listener.ListenerExecutionFailedException;
import org.springframework.lang.NonNull;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class MyKafkaListenerErrorHandler implements KafkaListenerErrorHandler {

    @Override
    @NonNull
    public Object handleError(@NonNull Message<?> message, @NonNull ListenerExecutionFailedException exception) {
        return new Object();
    }

    @Override
    @NonNull
    public Object handleError(@NonNull Message<?> message, @NonNull ListenerExecutionFailedException exception,
                              Consumer<?, ?> consumer) {
        System.out.println("消息详情:" + message);
        System.out.println("异常信息:" + exception);
        System.out.println("消费者详情:" + consumer.groupMetadata());
        System.out.println("监听主题:" + consumer.listTopics());
        return KafkaListenerErrorHandler.super.handleError(message, exception, consumer);
    }
}
```
6. Testing
Start the producer and the consumer. When the consumer console prints the group-id we configured (firstGroup in our yml), it has started successfully. If startup fails and you can't figure out why, leave a comment.
Testing a plain single message

Visit http://127.0.0.1:8081/provider/send in a browser to have the producer send a message. The producer console prints the send callback, and the consumer console prints the received message.
Testing consumer exception handling

Uncomment the exception-throwing line in the consumer's listen1 method.
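The exact line wasn't preserved in this write-up; any statement that throws will do, for example (the message text is an assumption):

```java
// In listen1, before the println: uncomment to make consumption fail
throw new RuntimeException("测试异常");
```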
Restart the consumer and visit http://127.0.0.1:8081/provider/send again. The consumer logs an error, but no exception escapes; it was handled by our code.
Testing delayed messages

Sending delayed messages requires turning transactions off, so comment out the transaction-related code in the producer's yml and Config files (reconstructed below).
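The snippet that belonged here wasn't preserved. Judging from the producer configuration above, the transaction-related pieces to comment out would be the following (a sketch, not the author's exact snippet):

```java
// In application.yml, comment out:
//     transaction-id-prefix: kafkaTx-

// In KafkaProviderConfig, comment out the line that enables transactions:
//     factory.setTransactionIdPrefix(transactionIdPrefix);
// along with the KafkaTransactionManager bean, and comment out the
// @Transactional annotation on KafkaController.
```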
Then request http://127.0.0.1:8081/provider/send again, and you'll see the message go out after 5 s. The setting that controls the delay is props.put(ProducerConfig.LINGER_MS_CONFIG, "5000");. Strictly speaking this isn't a real delayed message: to implement true delayed messages with Kafka you have to build them yourself, for example with the JDK's DelayQueue.
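For reference, here is a minimal sketch of the DelayQueue approach (the class names and structure are my own; the post doesn't include an implementation). Messages sit in a DelayQueue and are handed to KafkaTemplate only once their delay expires:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

import org.springframework.kafka.core.KafkaTemplate;

// A queue element that becomes visible to take() only after its delay expires
class DelayedMessage implements Delayed {
    final String topic;
    final Object payload;
    final long fireTime; // absolute millisecond timestamp at which to send

    DelayedMessage(String topic, Object payload, long delayMs) {
        this.topic = topic;
        this.payload = payload;
        this.fireTime = System.currentTimeMillis() + delayMs;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(fireTime - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

// Buffers messages and hands them to KafkaTemplate once their delay is up
class DelayedSender {
    private final DelayQueue<DelayedMessage> queue = new DelayQueue<>();

    DelayedSender(KafkaTemplate<Object, Object> kafkaTemplate) {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    DelayedMessage msg = queue.take(); // blocks until one expires
                    kafkaTemplate.send(msg.topic, msg.payload);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Queue a message to be sent after delayMs milliseconds
    public void sendLater(String topic, Object payload, long delayMs) {
        queue.put(new DelayedMessage(topic, payload, delayMs));
    }
}
```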
Testing batch consumption

Enable the setBatchListener line in the consumer's Config (first snippet below). Our MAX_POLL_RECORDS_CONFIG is 3, so each poll reads at most 3 messages. Batch listening receives a List, so wrap listen1's parameter in a List (third snippet below).
```java
// In kafkaListenerContainerFactory(), enable batch listening
factory.setBatchListener(true);
```
```java
propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
```
```java
public void listen1(List<ConsumerRecord<Object, Object>> consumerRecords, Acknowledgment ack)
```
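Put together, the batch version of listen1 might look like this (a sketch; the loop body is my own, and java.util.List needs to be imported):

```java
@KafkaListener(topics = {"topic1", "topic2"}, errorHandler = "myKafkaListenerErrorHandler")
public void listen1(List<ConsumerRecord<Object, Object>> consumerRecords, Acknowledgment ack) {
    try {
        // At most max-poll-records (3) records arrive in each batch
        for (ConsumerRecord<Object, Object> consumerRecord : consumerRecords) {
            System.out.println(consumerRecord.value());
        }
        ack.acknowledge();
    } catch (Exception e) {
        System.out.println("消费失败:" + e);
    }
}
```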
Note!!! Run the consumer under Debug, because we want a breakpoint to observe how many records arrive each time. Call the producer endpoint http://127.0.0.1:8081/provider/sendMultiple to send 10 messages in one go; you can watch the consumer receive only 3 at a time.
Testing manual control of the consumer listener

Write the @KafkaListener like this; id and autoStartup are the key parts:
```java
@KafkaListener(id = "${spring.kafka.consumer.group-id}", topics = {"topic1", "topic2"}, autoStartup = "false")
```
Restart the consumer and call the producer endpoint http://127.0.0.1:8081/provider/send. This time the consumer receives nothing, because autoStartup is off. To start receiving, call the consumer endpoint http://127.0.0.1:8082/start/firstGroup; it starts the @KafkaListener whose id is firstGroup, and the consumer console then prints the received messages. http://127.0.0.1:8082/pause/firstGroup pauses receiving, and http://127.0.0.1:8082/resume/firstGroup resumes it.
Summary

Got it now? Having written this all out one more time, it's carved into my brain. Two configuration parameters in this project still puzzle me, though: batch-size seems to have no effect, and enabling transactions disables LINGER_MS_CONFIG, which I haven't seen documented anywhere. If anyone knows why, please tell me.