Job Description
Kafka Environment Expertise:
- Must have hands-on experience in a Confluent or Apache Kafka environment, including managing Kafka clusters and Apache ZooKeeper.
Data Streaming:
- Able to ingest and transform data into Kafka topics and build stream processors on Kafka.
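As a rough illustration of this kind of work, here is a minimal Python sketch using the confluent-kafka client. The broker address, topic name, and record shape are assumptions for illustration, not part of the role description; the pure transform step is kept separate from the producer wiring so it can be tested without a running cluster:

```python
import json

def transform(record: dict) -> bytes:
    # Hypothetical transformation: keep selected fields and
    # serialize to JSON bytes before publishing to a topic.
    shaped = {"id": record["id"], "amount": float(record["amount"])}
    return json.dumps(shaped).encode("utf-8")

def publish(records, topic="orders", bootstrap="localhost:9092"):
    # Producer wiring; requires the confluent-kafka package and a
    # reachable broker, so the import is kept local to this function.
    from confluent_kafka import Producer
    producer = Producer({"bootstrap.servers": bootstrap})
    for rec in records:
        producer.produce(topic, key=str(rec["id"]), value=transform(rec))
    producer.flush()  # block until all queued messages are delivered
```

Keeping `transform` free of Kafka dependencies is a common pattern: the serialization logic can be unit-tested while the broker interaction stays in a thin wrapper.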
Confluent Tools:
- Practical experience with Confluent-specific tools for managing and monitoring Kafka-based data pipelines.
Cloud Platform Knowledge:
- Working knowledge of at least one cloud platform such as AWS or GCP.
Real-Time Processing:
- Able to implement real-time data streaming and event-driven architectures.
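A minimal event-driven consumer loop, again as a hedged sketch: the topic, group id, and event schema below are illustrative assumptions. The handler is separated from the poll loop so the routing logic can be exercised on its own:

```python
import json

def handle_event(raw: bytes) -> str:
    # Hypothetical handler: route events by a "type" field.
    event = json.loads(raw)
    return "payment" if event.get("type") == "payment" else "other"

def consume(topic="events", bootstrap="localhost:9092", group="demo-group"):
    # Consumer loop; needs confluent-kafka and a running broker.
    from confluent_kafka import Consumer
    consumer = Consumer({
        "bootstrap.servers": bootstrap,
        "group.id": group,
        "auto.offset.reset": "earliest",  # start from the oldest record
    })
    consumer.subscribe([topic])
    try:
        while True:
            msg = consumer.poll(1.0)   # wait up to 1 s for a record
            if msg is None or msg.error():
                continue
            handle_event(msg.value())  # dispatch each event as it arrives
    finally:
        consumer.close()
```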
System Optimization:
- Able to troubleshoot and optimize Kafka performance, scalability, and reliability in distributed environments.
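Performance tuning often comes down to producer and broker configuration. The settings below are real librdkafka/Kafka producer options, but the specific values are workload-dependent assumptions, shown only to illustrate the latency-vs-throughput trade-off:

```python
# Illustrative producer settings that trade a little latency for
# throughput and durability; exact values depend on the workload.
THROUGHPUT_TUNED_PRODUCER = {
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "acks": "all",               # wait for all in-sync replicas (reliability)
    "compression.type": "lz4",   # compress batches to cut network I/O
    "linger.ms": 20,             # wait up to 20 ms to build larger batches
    "batch.size": 131072,        # 128 KiB batches amortize per-request cost
    "enable.idempotence": True,  # avoid duplicates on producer retries
}
```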