- Build Status
- Features
- Prerequisites
- Configuration
- Development
- API Documentation
- Getting Started
- Contributing
- GitHub Actions Permissions
- Read Write Datasource Routing
- Project Structure
- Analysis and Decisions
- License
A Spring Boot application for tracking flight events.
- Real-time flight position tracking
- Kafka-based event streaming
- PostgreSQL database with read-write routing
- Redis caching
- Swagger/OpenAPI documentation
- Configurable timezone support
- Java 17 or later
- Docker and Docker Compose
- PostgreSQL
- Redis
- Kafka
The application can be configured through `application.yml`. Key configurations include:
The application uses Kafka for event streaming and real-time data processing. Here's the complete Kafka configuration:
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: flight-tracker-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: "dev.luismachadoreis.flighttracker.server.ping.application.dto"
    topic:
      flight-positions: flight-positions
      ping-created: ping-created
```
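For illustration, a minimal sketch of a listener wired to the `ping-created` topic is shown below. The listener class, the `PingDTO` stand-in type, and its fields are assumptions for this example, not the project's actual classes (the real DTOs live under the trusted package configured above):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Stand-in for the real DTO in the trusted package above; field names are illustrative.
record PingDTO(String icao24, double latitude, double longitude) {}

@Component
public class PingCreatedListener {

    // The JsonDeserializer configured above turns the message payload into the DTO;
    // the group id comes from spring.kafka.consumer.group-id.
    @KafkaListener(topics = "${spring.kafka.topic.ping-created}")
    public void onPingCreated(PingDTO ping) {
        System.out.println("Ping received: " + ping);
    }
}
```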
You can enable or disable various Kafka components and WebSocket notifications:
```yaml
app:
  flight-data:
    subscriber:
      enabled: true # Enable/disable flight data Kafka subscriber
  ping:
    subscriber:
      enabled: true # Enable/disable ping Kafka subscriber
    publisher:
      enabled: true # Enable/disable ping Kafka publisher
  websocket:
    enabled: true # Enable/disable WebSocket notifications
```
These settings allow you to:
- Control Kafka message consumption for flight data
- Control Kafka message consumption for ping events
- Control Kafka message publishing for ping events
- Enable/disable WebSocket real-time notifications
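Toggles like these are typically honoured with Spring's `@ConditionalOnProperty`. The sketch below is only an illustration of that pattern (the bean name is hypothetical), not the project's actual subscriber class:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;

// Only registered in the application context when app.ping.subscriber.enabled=true.
@Component
@ConditionalOnProperty(name = "app.ping.subscriber.enabled", havingValue = "true")
public class PingSubscriber {
    // Kafka listener methods for ping events would live here.
}
```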
The application supports a Write/Read replica pattern for database operations. This pattern separates read and write operations to different database instances, providing several benefits:
- **Improved Read Performance**
  - Read operations are distributed across multiple replicas
  - Reduced load on the primary database
  - Better scalability for read-heavy workloads
- **High Availability**
  - If the primary database fails, read replicas can continue serving read requests
  - Automatic failover capabilities
  - Reduced downtime impact
- **Geographic Distribution**
  - Read replicas can be placed closer to users
  - Reduced latency for read operations
  - Better global performance
Consider implementing Write/Read replicas when:
- Your application has a high read-to-write ratio (e.g., 80% reads, 20% writes)
- You need to scale read operations independently
- You require high availability and disaster recovery
- You have geographically distributed users
- Your application has reporting or analytics features that require heavy read operations
```yaml
spring:
  datasource:
    writer:
      jdbcUrl: jdbc:postgresql://localhost:5432/flighttracker
      username: flighttracker
      password: flighttracker
    reader:
      jdbcUrl: jdbc:postgresql://localhost:5433/flighttracker
      username: flighttracker
      password: flighttracker
```
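One common way to bind these two prefixes to separate connection pools is sketched below. This is an assumption about the wiring, not the project's actual configuration class:

```java
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    // Binds spring.datasource.writer.* (jdbcUrl, username, password) to one pool.
    @Bean
    @ConfigurationProperties("spring.datasource.writer")
    public DataSource writerDataSource() {
        return DataSourceBuilder.create().build();
    }

    // Binds spring.datasource.reader.* to a second pool used for read-only work.
    @Bean
    @ConfigurationProperties("spring.datasource.reader")
    public DataSource readerDataSource() {
        return DataSourceBuilder.create().build();
    }
}
```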
The application uses Spring's `@Transactional` annotation to determine which datasource to use. Here's how it works:
- **Read Operations**

  ```java
  @Transactional(readOnly = true)
  public List<Flight> getRecentFlights() {
      // This will use the reader datasource
      return flightRepository.findAll();
  }
  ```

- **Write Operations**

  ```java
  @Transactional
  public void saveFlight(Flight flight) {
      // This will use the writer datasource
      flightRepository.save(flight);
  }
  ```

- **Mixed Operations**

  ```java
  @Transactional
  public void updateFlightStatus(String flightId, Status newStatus) {
      // This will use the writer datasource for the entire method
      Flight flight = flightRepository.findById(flightId).orElseThrow();
      flight.setStatus(newStatus);
      flightRepository.save(flight);
  }
  ```
The routing is handled by:
- `ReadWriteRoutingAspect`: Intercepts `@Transactional` annotations
- `DbContextHolder`: Maintains the current context in a `ThreadLocal`
- `RoutingDataSource`: Routes the request to the appropriate datasource
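The project's implementations are not reproduced here, but a minimal sketch of how this kind of `ThreadLocal`-based routing is commonly wired (class bodies are illustrative) looks roughly like this:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

enum DbContext { READER, WRITER }

// Holds the routing decision for the current thread.
final class DbContextHolder {
    private static final ThreadLocal<DbContext> CONTEXT =
            ThreadLocal.withInitial(() -> DbContext.WRITER);

    static void set(DbContext context) { CONTEXT.set(context); }
    static DbContext get() { return CONTEXT.get(); }
    static void clear() { CONTEXT.remove(); }
}

// Picks the target DataSource (registered under the READER/WRITER keys) per lookup.
class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.get();
    }
}

// Sets the context from the readOnly flag; it must be ordered to run
// before Spring's transaction interceptor acquires a connection.
@Aspect
@Component
class ReadWriteRoutingAspect {
    @Around("@annotation(tx)")
    public Object route(ProceedingJoinPoint pjp, Transactional tx) throws Throwable {
        DbContextHolder.set(tx.readOnly() ? DbContext.READER : DbContext.WRITER);
        try {
            return pjp.proceed();
        } finally {
            DbContextHolder.clear();
        }
    }
}
```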
Important Notes:
- Methods without `@Transactional` will use the writer datasource by default
- Nested transactions inherit the datasource from the outer transaction
- The `readOnly` flag is the key to determining which datasource to use
```yaml
spring:
  redis:
    host: localhost
    port: 6379
```
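As an illustration of the Redis caching mentioned in the features, a cached lookup with Spring's cache abstraction might look like the sketch below; the service class, method, and cache name are hypothetical, not taken from the project:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class FlightLookupService {

    // First call hits the underlying store; later calls with the same icao24 are
    // served from the "flight-positions" cache in Redis (requires @EnableCaching).
    @Cacheable(value = "flight-positions", key = "#icao24")
    public String findCallsign(String icao24) {
        // Placeholder for an expensive database or external API lookup
        return "UNKNOWN";
    }
}
```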
The application uses a configurable clock for timestamp operations. By default, it uses UTC:
```yaml
app:
  clock:
    timezone: UTC
```
You can change the timezone to any valid timezone ID (e.g., "America/New_York", "Europe/London"):
```yaml
app:
  clock:
    timezone: America/New_York
```
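A `Clock` bean built from this property might be declared roughly as follows; the configuration class name is illustrative, not the project's actual code:

```java
import java.time.Clock;
import java.time.ZoneId;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClockConfig {

    // Exposes a Clock in the configured zone; injected wherever timestamps are produced.
    @Bean
    public Clock clock(@Value("${app.clock.timezone:UTC}") String timezone) {
        return Clock.system(ZoneId.of(timezone));
    }
}
```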
Swagger UI is available at `/swagger-ui.html` with the following configuration:
```yaml
springdoc:
  api-docs:
    path: /api-docs
  swagger-ui:
    path: /swagger-ui.html
```
1. Start the required services using Docker Compose:

   ```bash
   docker-compose up -d
   ```

2. Run the application:

   ```bash
   ./mvnw spring-boot:run
   ```
Run the tests:

```bash
./mvnw test
```
The API documentation is available at:
- Swagger UI: http://localhost:8080/swagger-ui.html
- OpenAPI JSON: http://localhost:8080/api-docs
The application requires the following external services:
- Redis 7.4
- PostgreSQL 17
- Apache Kafka 4
The project includes a docker-compose.yml