Enhance test coverage, reliability, and documentation #23

Merged: 1 commit, Feb 16, 2025
4 changes: 2 additions & 2 deletions README.md
@@ -163,7 +163,7 @@ We use **Prometheus** to collect and monitor key application metrics, enabling p
detection. Metrics include **request rates**, **response times**, **error rates**, **JVM performance (memory, GC,
threads)**, and **database latency**.

1. **Access Grafana and configure Prometheus once the services are running**
1. **Access Grafana and configure Prometheus**
- Open Grafana at `http://localhost:3000` (default login: `admin` / `admin`).
   - Navigate to **Data Sources**, click **"Add data source"**, then select **Prometheus**.
- Set the **URL** to `http://prometheus:9090` (thanks to Docker DNS), then click **"Save & Test"** to verify
@@ -188,7 +188,7 @@ We use **Loki** and **Alloy** to aggregate and analyze application logs, enablin
insights. Logs capture **request processing**, **application events**, **errors**, and **performance metrics** for
efficient troubleshooting.

1. **Access Grafana and configure Loki once the services are running**
1. **Access Grafana and configure Loki**
- Open Grafana at `http://localhost:3000` (default login: `admin` / `admin`).
   - Navigate to **Data Sources**, click **"Add data source"**, then select **Loki**.
- Set the **URL** to `http://loki:3100` (thanks to Docker DNS), then click **"Save & Test"** to verify connectivity.
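As a point of reference for the metrics listed above, here is a minimal, hypothetical sketch of registering a custom counter with Micrometer. The `OptimizationMetrics` class and metric name are invented for illustration, and it assumes `micrometer-registry-prometheus` is on the classpath so Spring Boot exposes the counter to Prometheus at `/actuator/prometheus`.

```java
// Hypothetical example: a custom application metric registered with Micrometer.
// Assumes micrometer-registry-prometheus is on the classpath, in which case
// Spring Boot exposes it to Prometheus at /actuator/prometheus.
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class OptimizationMetrics {

    private final Counter optimizationRequests;

    public OptimizationMetrics(MeterRegistry registry) {
        this.optimizationRequests = Counter.builder("capital_optimization_requests_total")
                .description("Number of capital maximization requests received")
                .register(registry);
    }

    public void recordRequest() {
        optimizationRequests.increment(); // shows up as a Prometheus counter
    }
}
```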
src/main/java/com/github/analytics/api/CapitalMaximizationQuery.java
@@ -9,16 +9,16 @@

/**
* Immutable record representing a query to maximize capital.
* Includes the list of available projects, the maximum number of projects to complete, and the initial capital.
* Encapsulates available projects, the maximum number of selections, and initial capital for optimization.
*/
public record CapitalMaximizationQuery(
List<ProjectDTO> availableProjects,
int maxProjects,
BigDecimal initialCapital) {

public CapitalMaximizationQuery {
requireNonNullAndNoNullElements(availableProjects, () -> "Available projects must not be null nor contain null elements");
requireNonNegative(maxProjects, () -> "Max projects must be non-negative");
requireNonNullAndNonNegative(initialCapital, () -> "Initial capital must not be null and must be non-negative");
requireNonNullAndNoNullElements(availableProjects, () -> "Available projects list cannot be null or contain null elements.");
requireNonNegative(maxProjects, () -> "Maximum projects must be zero or greater.");
requireNonNullAndNonNegative(initialCapital, () -> "Initial capital cannot be null or negative.");
}
}
src/main/java/com/github/analytics/api/ProjectCapitalOptimized.java
@@ -10,14 +10,14 @@

/**
* Immutable record representing the result of a capital maximization operation.
* Contains the selected projects and the final accumulated capital.
* Represents the outcome of capital maximization, including the selected projects and final capital.
*/
public record ProjectCapitalOptimized(
List<ProjectDTO> selectedProjects,
BigDecimal finalCapital) {

public ProjectCapitalOptimized {
requireNonNull(selectedProjects, () -> "Selected projects list must not be null");
requireNonNullAndNonNegative(finalCapital, () -> "Final capital must not be null and must be non-negative");
requireNonNull(selectedProjects, () -> "Selected projects cannot be null.");
requireNonNullAndNonNegative(finalCapital, () -> "Final capital cannot be null or negative.");
}
}
47 changes: 26 additions & 21 deletions src/main/java/com/github/analytics/api/ProjectCapitalOptimizer.java
@@ -14,27 +14,28 @@
import java.util.PriorityQueue;

/**
* Optimizes project selection for maximum final capital.
* Employs a greedy algorithm, iteratively selecting the most profitable, affordable project.
* Optimizes project selection to maximize final capital.
* Uses a greedy algorithm to iteratively select the most profitable, affordable project.
*/
@Component
public class ProjectCapitalOptimizer {
private static final Logger logger = LoggerFactory.getLogger(ProjectCapitalOptimizer.class);

/**
* Maximizes the final capital based on the provided query.
* Optimizes project selection to maximize final capital.
*
* @param query the capital maximization query containing available projects, max selections, and initial capital.
* @param query The capital maximization query specifying available projects, maximum selections, and initial capital.
* @return a {@code Mono} emitting a {@link ProjectCapitalOptimized} containing the selected projects and final capital.
* @throws IllegalArgumentException if the query or its available projects list is null.
*/
public Mono<ProjectCapitalOptimized> maximizeCapital(CapitalMaximizationQuery query) {
if (query == null || query.availableProjects() == null) {
logger.error("Received null query or available projects list.");
return Mono.error(new IllegalArgumentException("Capital maximization query must not be null"));
return Mono.error(new IllegalArgumentException("Query and available projects list must not be null."));
}

logger.info("Starting capital maximization with initial capital: {}", query.initialCapital());
logger.info("Starting capital maximization with initial capital: {} and {} available projects.",
query.initialCapital(), query.availableProjects().size());

// Offload the CPU-bound computation to a parallel scheduler.
return Mono.fromCallable(() -> computeMaximizedCapital(query))
@@ -44,51 +44,55 @@ public Mono<ProjectCapitalOptimized> maximizeCapital(CapitalMaximizationQuery qu
}

/**
* Performs the greedy algorithm to select projects and maximize capital.
* Executes a greedy algorithm to maximize capital by iteratively selecting the most profitable affordable projects.
*
* @param query the capital maximization query.
* @param query Query containing available projects, initial capital, and selection constraints.
* @return a {@link ProjectCapitalOptimized} with the selected projects and final capital.
*/
private ProjectCapitalOptimized computeMaximizedCapital(CapitalMaximizationQuery query) {
List<ProjectDTO> projects = new ArrayList<>(query.availableProjects());
logger.debug("Number of available projects: {}", projects.size());

// Log total projects available.
int totalProjects = projects.size();
logger.info("Starting capital maximization with {} available projects and initial capital: {}",
totalProjects, query.initialCapital());

// Sort projects by required capital in ascending order.
projects.sort(Comparator.comparing(ProjectDTO::requiredCapital));
logger.debug("Projects sorted by required capital.");

// Max-heap to choose the project with the highest profit among those affordable.
var profitMaxHeap = new PriorityQueue<>(Comparator.comparing(ProjectDTO::profit).reversed());
// Max-heap (priority queue) to track the most profitable affordable projects.
PriorityQueue<ProjectDTO> profitMaxHeap = new PriorityQueue<>(Comparator.comparing(ProjectDTO::profit).reversed());

List<ProjectDTO> selectedProjects = new ArrayList<>();
BigDecimal currentCapital = query.initialCapital();
int totalProjects = projects.size();
int projectIndex = 0;

// Iteratively select up to maxProjects.
for (int i = 0; i < query.maxProjects(); i++) {
// Log the current iteration and capital.
logger.debug("Iteration {}: Current capital: {}", i, currentCapital);
logger.debug("Iteration {}: Current capital: {}", i + 1, currentCapital);

// Add all projects whose required capital is within the current capital.
while (projectIndex < totalProjects
&& projects.get(projectIndex).requiredCapital().compareTo(currentCapital) <= 0) {
// Add all affordable projects to the max-heap.
while (projectIndex < totalProjects && projects.get(projectIndex).requiredCapital().compareTo(currentCapital) <= 0) {
ProjectDTO project = projects.get(projectIndex);
profitMaxHeap.offer(project);
logger.debug("Project {} (profit: {}) is affordable and added to the heap.", project.name(), project.profit());
logger.debug("Added project {} to heap (Required: {}, Profit: {}).",
project.name(), project.requiredCapital(), project.profit());
projectIndex++;
}

// If no projects are available to start, break early.
// If no affordable projects remain, exit early.
if (profitMaxHeap.isEmpty()) {
logger.info("No further projects can be selected with current capital: {}", currentCapital);
break;
}

// Select the project with the highest profit.
// Select the most profitable project.
ProjectDTO chosenProject = profitMaxHeap.poll();
selectedProjects.add(chosenProject);
currentCapital = currentCapital.add(chosenProject.profit());
logger.info("Selected project {}. Updated capital: {}", chosenProject.name(), currentCapital);
logger.info("Selected project {} (Profit: {}). Updated capital: {}",
chosenProject.name(), chosenProject.profit(), currentCapital);
}

return new ProjectCapitalOptimized(selectedProjects, currentCapital);
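To make the greedy selection concrete, here is a small worked sketch against the API above. It assumes `ProjectDTO` is a record of `(name, requiredCapital, profit)`; the real constructor order may differ.

```java
// Worked sketch of the greedy algorithm (the ProjectDTO shape is assumed).
import java.math.BigDecimal;
import java.util.List;

class GreedyExample {
    public static void main(String[] args) {
        List<ProjectDTO> projects = List.of(
                new ProjectDTO("A", new BigDecimal("0"), new BigDecimal("1")),
                new ProjectDTO("B", new BigDecimal("1"), new BigDecimal("2")),
                new ProjectDTO("C", new BigDecimal("1"), new BigDecimal("3")));

        // Pick at most 2 projects, starting from zero capital.
        var query = new CapitalMaximizationQuery(projects, 2, BigDecimal.ZERO);

        // Iteration 1: only A is affordable -> select A, capital becomes 1.
        // Iteration 2: B and C are now affordable; the max-heap yields C
        // (profit 3) -> final capital is 4.
        var result = new ProjectCapitalOptimizer().maximizeCapital(query).block(); // block() for demo only
        System.out.println(result.finalCapital()); // prints 4
    }
}
```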
@@ -34,7 +34,7 @@ public Mono<ApiResponse<String>> publishCapitalMaximizationQueryEvent(
var event = new CapitalMaximizationQueryEvent(request.maxProjects(), request.initialCapital());
return projectCapitalOptimizerEventPublisher.publishEvent(event);
})
.thenReturn(ApiResponse.success(HttpStatus.ACCEPTED.value(), "Capital maximization query event accepted for processing."))
.doOnError(error -> logger.error("Error occurred while publishing capital maximization query event", error));
.thenReturn(ApiResponse.success(HttpStatus.ACCEPTED.value(), "Capital Maximization Query event accepted for processing"))
.doOnError(error -> logger.error("Error publishing Capital Maximization Query event", error));
}
}
ProjectCapitalOptimizerRequest.java
@@ -7,17 +7,18 @@
import java.math.BigDecimal;

/**
* Data transfer object (DTO) for requesting capital optimization among a pool of projects.
* Includes the maximum number of projects to complete and the initial capital available.
* In the future, this will support more advanced project selection based on various criteria.
* Data Transfer Object (DTO) for requesting capital optimization among a pool of projects.
* Specifies the maximum number of projects that can be selected and the available initial capital.
*
* <p> Future enhancements will introduce advanced project selection based on additional criteria. </p>
*/
public record ProjectCapitalOptimizerRequest(
@NotNull(message = "Max projects cannot be null")
@Positive(message = "Max projects must be greater than zero")
@NotNull(message = "Maximum projects cannot be null")
@Positive(message = "Maximum projects must be at least 1")
Integer maxProjects,

@NotNull(message = "Initial capital cannot be null")
@DecimalMin(value = "0.00", message = "Initial capital must be greater than or equal to 0")
@DecimalMin(value = "0.00", message = "Initial capital cannot be negative")
BigDecimal initialCapital
) {

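A minimal sketch of how these constraints behave at runtime, assuming a Jakarta Bean Validation provider (e.g. Hibernate Validator via `spring-boot-starter-validation`) is available; in the running service, Spring triggers the same checks automatically on annotated request bodies.

```java
// Minimal sketch: exercising the DTO's Bean Validation constraints directly.
import jakarta.validation.Validation;
import jakarta.validation.Validator;

import java.math.BigDecimal;

class RequestValidationExample {
    public static void main(String[] args) {
        try (var factory = Validation.buildDefaultValidatorFactory()) {
            Validator validator = factory.getValidator();

            // maxProjects = 0 violates @Positive; initialCapital = -1 violates @DecimalMin.
            var invalid = new ProjectCapitalOptimizerRequest(0, new BigDecimal("-1"));
            validator.validate(invalid)
                    .forEach(violation -> System.out.println(violation.getMessage()));
            // Expected output (order not guaranteed):
            //   Maximum projects must be at least 1
            //   Initial capital cannot be negative
        }
    }
}
```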
CapitalMaximizationQueryEvent.java
@@ -6,7 +6,7 @@
import static com.github.projects.model.Validators.*;

/**
* Event representing the capital maximization process, to be published to a Kafka topic.
* Represents a capital maximization event published to a Kafka topic.
*
* @param maxProjects The maximum number of projects to complete.
* @param initialCapital The initial capital available for maximization.
@@ -15,9 +15,10 @@ public record CapitalMaximizationQueryEvent(
Integer maxProjects,
BigDecimal initialCapital
) implements Serializable {

public CapitalMaximizationQueryEvent {
requireNonNull(maxProjects, () -> "Max projects should not be null");
requireNonNegative(maxProjects, () -> "Initial capital should not be null");
requireNonNullAndNonNegative(initialCapital, () -> "Initial capital must not be null and must be non-negative");
requireNonNull(maxProjects, () -> "Max projects cannot be null.");
requireNonNegative(maxProjects, () -> "Max projects must be zero or greater.");
requireNonNullAndNonNegative(initialCapital, () -> "Initial capital cannot be null or negative.");
}
}
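For illustration, a sketch of the event's JSON round trip with Jackson, matching what the consumer below parses; it assumes a Jackson version with Java record support (2.12+).

```java
// Sketch: serializing and re-parsing the event, as the Kafka consumer does.
import com.fasterxml.jackson.databind.json.JsonMapper;

import java.math.BigDecimal;

class EventJsonExample {
    public static void main(String[] args) throws Exception {
        JsonMapper mapper = JsonMapper.builder().build();

        var event = new CapitalMaximizationQueryEvent(2, new BigDecimal("10.00"));
        String json = mapper.writeValueAsString(event);
        System.out.println(json); // roughly: {"maxProjects":2,"initialCapital":10.00}

        var parsed = mapper.readValue(json, CapitalMaximizationQueryEvent.class);
        System.out.println(parsed.maxProjects()); // 2
    }
}
```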
src/main/java/com/github/analytics/event/ProjectCapitalOptimizerEventConsumer.java
@@ -1,5 +1,6 @@
package com.github.analytics.event;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.github.analytics.api.CapitalMaximizationQuery;
import com.github.analytics.api.ProjectCapitalOptimized;
@@ -12,70 +13,102 @@
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.util.retry.Retry;

import static com.github.configuration.KafkaConfiguration.CAPITAL_MAXIMIZATION_QUERY_TOPIC;
import static com.github.projects.model.Validators.requireNonNullOrBlank;
import java.time.Duration;

/**
* Kafka consumer that processes Capital Maximization Query events from the specified Kafka partitions.
* Currently, logs the events for development purposes.
 * Processes capital maximization events and optimizes project selection. Currently logs events for debugging.
*/
@Component
public class ProjectCapitalOptimizerEventConsumer {
private static final Logger logger = LoggerFactory.getLogger(ProjectCapitalOptimizerEventConsumer.class);
private static final JsonMapper JSON_MAPPER = JsonMapper.builder().build();

private final JsonMapper jsonMapper;
private final ProjectService projectService;
private final ProjectCapitalOptimizer projectCapitalOptimizer;

public ProjectCapitalOptimizerEventConsumer(
ProjectService projectService,
ProjectCapitalOptimizer projectCapitalOptimizer) {
this.jsonMapper = JsonMapper.builder().build();
this.projectService = projectService;
this.projectCapitalOptimizer = projectCapitalOptimizer;
}

/**
* Kafka listener that consumes capital maximization query events from Kafka topic partitions.
*
* @param jsonEvent The JSON event as a String.
*/
@KafkaListener(
topicPartitions = @TopicPartition(
topic = CAPITAL_MAXIMIZATION_QUERY_TOPIC,
topic = "CAPITAL_MAXIMIZATION_QUERY_TOPIC",
partitions = {"0", "1"}
)
)
void handleCapitalMaximizationEvent(final String jsonEvent) {
logger.info("Starting to process capital maximization event: {}", jsonEvent);
public void handleCapitalMaximizationEvent(final String jsonEvent) {
logger.info("Received capital maximization event: {}", jsonEvent);

processCapitalMaximizationEvent(jsonEvent)
.subscribe(result -> logger.info("Capital maximization completed: {}", result));
.retryWhen(Retry.fixedDelay(3, Duration.ofSeconds(2))) // Retry transient failures
.doOnError(error -> logger.error("Final failure processing event: {}", jsonEvent, error))
.onErrorResume(error -> Mono.empty()) // Avoid infinite Kafka retries
.subscribe(
result -> logger.info("Processing completed. Final capital: {}, Selected projects: {}",
result.finalCapital(), result.selectedProjects().size())
);
}

/**
* Processes a capital maximization event by deserializing the JSON event, fetching available projects,
* and optimizing capital allocation.
*
* @param jsonEvent The JSON event payload.
* @return A Mono of {@link ProjectCapitalOptimized} containing the optimization result.
*/
public Mono<ProjectCapitalOptimized> processCapitalMaximizationEvent(final String jsonEvent) {
return deserializeCapitalMaximizationQueryEvent(jsonEvent)
return parseCapitalMaximizationJsonEvent(jsonEvent)
.flatMap(event -> projectService.findAll().collectList()
.flatMap(projects -> {
if (projects.isEmpty()) {
throw new IllegalStateException("No projects are available for capital maximization.");
logger.warn("No projects available for capital maximization.");
return Mono.error(new IllegalStateException("No projects available for capital maximization."));
}

logger.info("Projects count: {}, maxProjects: {}, initialCapital: {}",
projects.size(), event.maxProjects(), event.initialCapital());
logger.info("Processing event: maxProjects={}, initialCapital={}, availableProjects={}",
event.maxProjects(), event.initialCapital(), projects.size());

var query = new CapitalMaximizationQuery(projects, event.maxProjects(), event.initialCapital());
return projectCapitalOptimizer.maximizeCapital(query);
}))
.doOnError(error -> logger.error("Error occurred while processing capital maximization event", error));
.doOnError(error -> logger.error("Error during capital maximization process", error));
}

private Mono<CapitalMaximizationQueryEvent> deserializeCapitalMaximizationQueryEvent(String jsonEvent) {
logger.debug("Attempting to convert JSON event to CapitalMaximizationQueryEvent: {}", jsonEvent);
/**
* Parses and deserializes the JSON event into a {@link CapitalMaximizationQueryEvent} object.
*
* @param jsonEvent The JSON event payload.
* @return A Mono of the deserialized {@link CapitalMaximizationQueryEvent}.
*/
private Mono<CapitalMaximizationQueryEvent> parseCapitalMaximizationJsonEvent(String jsonEvent) {
logger.debug("Attempting to deserialize JSON event: {}", jsonEvent);

return Mono.fromCallable(() -> {
requireNonNullOrBlank(jsonEvent, () -> "JSON event should not be null or empty");
if (jsonEvent == null || jsonEvent.trim().isEmpty()) {
throw new IllegalArgumentException("JSON event should not be null or empty");
}

var event = jsonMapper.readValue(jsonEvent, CapitalMaximizationQueryEvent.class);
logger.info("Successfully converted JSON event to CapitalMaximizationQueryEvent.");
return event;
}).subscribeOn(Schedulers.boundedElastic()) // Offload to a bounded elastic thread pool for blocking operations;
.doOnError(error -> logger.error("Failed to convert JSON event to CapitalMaximizationQueryEvent. JSON: {}", jsonEvent, error));
try {
var event = JSON_MAPPER.readValue(jsonEvent, CapitalMaximizationQueryEvent.class);
logger.info("Successfully deserialized event: {}", event);
return event;
} catch (JsonProcessingException e) {
logger.error("Invalid JSON format: {} | Error: {}", jsonEvent, e.getMessage(), e);
throw new IllegalArgumentException("Invalid JSON format", e);
}
})
.subscribeOn(Schedulers.boundedElastic()) // Offload JSON parsing to a separate thread
.doOnError(error -> logger.error("Deserialization failed for JSON: {}", jsonEvent, error));
}
}
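The retry-then-swallow pattern in `handleCapitalMaximizationEvent` can be seen in isolation with plain reactor-core. This standalone sketch (names invented for illustration) shows one initial attempt, three delayed retries, then graceful completion instead of an error that would trigger endless Kafka redelivery.

```java
// Standalone sketch of the retry/fallback pattern used by the consumer.
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;

class RetryPatternExample {
    public static void main(String[] args) {
        AtomicInteger attempts = new AtomicInteger();

        Mono.fromCallable(() -> {
                    attempts.incrementAndGet();
                    throw new IllegalStateException("transient failure");
                })
                .retryWhen(Retry.fixedDelay(3, Duration.ofSeconds(2))) // 1 attempt + 3 retries
                .doOnError(e -> System.out.println("Gave up after " + attempts.get() + " attempts"))
                .onErrorResume(e -> Mono.empty()) // complete empty instead of propagating
                .block(); // waits ~6 seconds, then returns null (empty Mono)
    }
}
```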