Hi Eric,

I still need help. I was able to start the apicurio-registry-mem container in my test suite with Testcontainers. But now I need the configuration to bind embedded Kafka to that container and register the generated schema. Do you have any suggestions for that?
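Concretely, I imagine the wiring looks something like this (a sketch only: the "apicurio.registry.url" property key is the one used by the Apicurio serdes, and the URL is a placeholder for whatever host/port Testcontainers maps for the container):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: point the Apicurio Avro serdes (AvroKafkaSerializer /
// AvroKafkaDeserializer) at a registry started by Testcontainers.
// "apicurio.registry.url" is the serde config key; the URL passed in
// would come from the running container, e.g.
// "http://" + container.getHost() + ":" + container.getMappedPort(8080) + "/api"
public class SerdeConfigSketch {

    static Map<String, Object> serdeConfig(String registryUrl) {
        Map<String, Object> cfg = new HashMap<>();
        // Both serializer and deserializer read this property.
        cfg.put("apicurio.registry.url", registryUrl);
        // Schema registration itself could then happen either through the
        // registry's REST API or via a serde ID strategy that auto-creates
        // the artifact (strategy class names vary by registry version).
        return cfg;
    }

    public static void main(String[] args) {
        Map<String, Object> cfg = serdeConfig("http://localhost:8080/api");
        System.out.println(cfg);
    }
}
```

These extra entries would be merged into the consumer/producer config maps I build with KafkaTestUtils below.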

Janez Bindas 

On 22 Sep 2020, at 18:19, Eric Wittmann <eric.wittmann@redhat.com> wrote:

My suggestion is to use the in-memory version of Apicurio Registry via docker:
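The image is published on Docker Hub as apicurio/apicurio-registry-mem; a typical invocation looks roughly like this (the tag is an assumption, use whichever release you want):

```shell
# Starts the in-memory Apicurio Registry on localhost:8080.
docker run -it -p 8080:8080 apicurio/apicurio-registry-mem:latest
```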

If you combine that with TestContainers you should be able to start up the registry as part of your JUnit test setup and have it available for the test:

We don't currently have a version of the registry that is easily embeddable in a unit test (or other Java app) in any other way.


On Tue, Sep 22, 2020 at 11:24 AM Janez Bindas <janez.bindas@gmail.com> wrote:
Hi all,

I would like to test a Kafka workflow with the Apicurio registry. I wasn't able to register the schema in memory. Can you tell me how to do it?

This is my test class:
public class KafkaTest {

    private static final String TOPIC = "foobar";

    private EmbeddedKafkaBroker embeddedKafkaBroker;

    BlockingQueue<ConsumerRecord<FooBarKey, FooBarValue>> records;

    KafkaMessageListenerContainer<FooBarKey, FooBarValue> container;

    @BeforeEach
    void setUp() {
        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.consumerProps("consumer", "false", embeddedKafkaBroker));
        DefaultKafkaConsumerFactory<FooBarKey, FooBarValue> consumerFactory =
                new DefaultKafkaConsumerFactory<>(configs, new AvroKafkaDeserializer<>(), new AvroKafkaDeserializer<>());
        ContainerProperties containerProperties = new ContainerProperties(TOPIC);
        container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
        records = new LinkedBlockingQueue<>();
        container.setupMessageListener((MessageListener<FooBarKey, FooBarValue>) records::add);
        container.start();
        ContainerTestUtils.waitForAssignment(container, embeddedKafkaBroker.getPartitionsPerTopic());
    }

    @AfterEach
    void tearDown() {
        container.stop();
    }

    @Test
    public void testKafka() throws InterruptedException {
        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        Producer<Object, Object> producer =
                new DefaultKafkaProducerFactory<>(configs, new AvroKafkaSerializer<>(), new AvroKafkaSerializer<>()).createProducer();

        FooBarKey fooBarKey = new FooBarKey();
        FooBarValue fooBarValue = new FooBarValue();

        producer.send(new ProducerRecord<>(TOPIC, fooBarKey, fooBarValue));

        ConsumerRecord<FooBarKey, FooBarValue> singleRecord = records.poll(100, TimeUnit.MILLISECONDS);
    }
}

avro file for key:
{
  "type" : "record",
  "name" : "FooBarKey",
  "namespace" : "com.umwerk.vbank",
  "doc" : "FooBar KEY schema",
  "fields" : [ {
    "name" : "id",
    "type" : "string",
    "doc" : "FooBar Id",
    "logicalType" : "uuid"
  } ]
}
avro file for value:
{
  "type" : "record",
  "name" : "FooBarValue",
  "namespace" : "com.umwerk.vbank",
  "doc" : "FooBar VALUE schema",
  "fields" : [ {
    "name" : "foo",
    "type" : "string",
    "doc" : "Test foo property"
  }, {
    "name" : "bar",
    "type" : "string",
    "doc" : "Test bar property"
  } ]
}

Regards, Janez Bindas

Apicurio mailing list -- apicurio@lists.jboss.org
To unsubscribe send an email to apicurio-leave@lists.jboss.org

Eric Wittmann
Principal Software Engineer - Apicurio - Red Hat
He / Him / His