ROI Project Planner is an open-source tool designed to maximize capital by selecting up to k distinct projects from a pool of available projects. Leveraging greedy algorithms, advanced data structures, and best practices in software engineering, this solution is inspired by real-world venture capital and investment strategies. It not only demonstrates expertise in data structures, algorithms, design patterns, and SOLID principles but also integrates modern cloud-native patterns including reactive programming, fault tolerance, and event-driven architectures.
- Optimized Capital Selection: Uses greedy algorithms and max-heaps to select projects that maximize final capital.
- Reactive & Asynchronous Processing: Built on Java 23 and Spring WebFlux for non-blocking, asynchronous operations.
- Cloud-Native Design: Seamlessly deployable to Kubernetes with Docker, supporting horizontal scaling and resilience.
- Fault Tolerance: Integrated with Resilience4J to provide circuit breaker patterns and fallback mechanisms.
- Event-Driven Architecture: Utilizes Apache Kafka for robust, asynchronous event processing.
- CI/CD Integration: Automated builds, tests, and deployments via GitHub Actions.
- Extensive Testing: Comprehensive tests with JUnit 5, AssertJ, Mockito, and Testcontainers for realistic integration testing.
- Cache Integration: Leverages caching (via Redis) to optimize performance for frequently accessed data.
- Observability: Equipped with Prometheus, Grafana, Jaeger, and Argo CD for metrics, monitoring, distributed tracing, and GitOps.
- Logging Strategy: Employs SLF4J with Logback and logstash-logback-encoder to produce structured JSON logs. Logs are collected by Alloy, sent to Loki for indexing, and visualized in Grafana for comprehensive observability.
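The greedy selection described in the first bullet can be sketched with plain JDK collections. This is an illustrative reimplementation, not the project's actual service code, and it assumes each selected project's profit is added directly to the running capital:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

public class CapitalMaximizer {

    record Project(String name, double requiredCapital, double profit) {}

    /**
     * Greedily selects up to maxProjects projects: among all projects whose
     * requiredCapital fits the current capital, always take the most
     * profitable one, then fold its profit back into capital.
     */
    static double maximizeCapital(int maxProjects, double initialCapital, Project[] projects) {
        // Min-heap by required capital: the cheapest projects become affordable first.
        PriorityQueue<Project> byCapital = new PriorityQueue<>(
                Comparator.comparingDouble(Project::requiredCapital));
        byCapital.addAll(Arrays.asList(projects));

        // Max-heap by profit over the currently affordable projects.
        PriorityQueue<Project> byProfit = new PriorityQueue<>(
                Comparator.comparingDouble(Project::profit).reversed());

        double capital = initialCapital;
        for (int i = 0; i < maxProjects; i++) {
            // Move every project we can now afford into the profit heap.
            while (!byCapital.isEmpty() && byCapital.peek().requiredCapital() <= capital) {
                byProfit.add(byCapital.poll());
            }
            if (byProfit.isEmpty()) break; // nothing affordable remains
            capital += byProfit.poll().profit();
        }
        return capital;
    }

    public static void main(String[] args) {
        Project[] pool = {
                new Project("Seed", 0.00, 50.00),
                new Project("Pilot", 40.00, 80.00),
                new Project("Scale", 120.00, 200.00),
        };
        System.out.println(maximizeCapital(3, 0.00, pool)); // 50 + 80 + 200 = 330.0
    }
}
```

A min-heap ordered by required capital feeds a max-heap ordered by profit, so each of the k picks is the most profitable project currently affordable.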
Follow these steps to set up the roi-project-planner project locally.
Ensure the following tools are installed:
- Docker v27.5.1 or later
- Docker Compose v2.32.4 or later
- Java 23 or later
To run the project locally using Docker, follow these steps:

- **Clone the repository:**

  ```bash
  git clone https://github.com/ranzyblessings/roi-project-planner.git
  cd roi-project-planner
  ```

- **Start dependencies with Docker Compose:**

  ```bash
  docker compose up --build -d
  ```
Note: This command will start the following services:
- Kafka: Handles event streaming for distributed communication, enabling real-time analytics on Capital Maximization Query events with low-latency, high-throughput processing.
- Cassandra: A highly available, distributed NoSQL database that stores project data, ensuring fault tolerance, horizontal scalability, and low-latency access.
- Redis: A high-performance, in-memory data store that functions as a caching layer, speeding up project lookups and optimizing overall system performance.
- **Start the Backend Core API:**

  ```bash
  ./gradlew clean bootRun
  ```
Note: We are still deciding between OpenAPI and Spring REST Docs for documenting the API. Contributions are welcome.
- **Create one or more projects** by sending a POST request:

  ```bash
  curl -X POST http://localhost:8080/api/v1/projects \
    -H "Content-Type: application/json" \
    -d '[
      { "name": "Project 1", "requiredCapital": 0.00, "profit": 100.00 },
      { "name": "Project 2", "requiredCapital": 100.00, "profit": 200.00 },
      { "name": "Project 3", "requiredCapital": 100.00, "profit": 300.00 }
    ]'
  ```
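The same call can also be made programmatically with the JDK's built-in `HttpClient` (Java 11+). The endpoint and payload mirror the curl command above; the snippet assumes the API from the previous steps is running on `localhost:8080`:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateProjects {

    /** Builds the POST /api/v1/projects request shown in the curl example. */
    static HttpRequest buildRequest(String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/projects"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) throws Exception {
        String body = """
                [
                  { "name": "Project 1", "requiredCapital": 0.00, "profit": 100.00 },
                  { "name": "Project 2", "requiredCapital": 100.00, "profit": 200.00 },
                  { "name": "Project 3", "requiredCapital": 100.00, "profit": 300.00 }
                ]""";
        // Requires the API started via ./gradlew clean bootRun above.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(buildRequest(body), HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```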
- **Maximize capital** by selecting up to k distinct projects from the pool of available projects, with a POST request:

  ```bash
  curl -X POST http://localhost:8080/api/v1/capital/maximization/query \
    -H "Content-Type: application/json" \
    -d '{ "maxProjects": 2, "initialCapital": "100.00" }'
  ```
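Assuming the conventional greedy semantics (each selected project's profit is added to the running capital — an assumption about the service's internals, not something this guide confirms), the sample query above plays out as a short trace:

```java
public class SampleQueryTrace {
    public static void main(String[] args) {
        double capital = 100.00;  // initialCapital from the query body
        // With capital 100, all three sample projects are affordable.
        capital += 300.00;        // pick 1: Project 3, the highest profit
        capital += 200.00;        // pick 2: Project 2, the next highest
        System.out.println(capital); // maxProjects = 2 reached; final capital 600.0
    }
}
```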
For now, to view the selected projects and the maximized capital, use Grafana as outlined in Observability Setup for Local Development under the Log Monitoring section, or check the console logs.
In the future, advanced analytics and graphical representations will be added, with support for custom views that consumers can subscribe to for tailored visualizations.
Note: On macOS or Windows, set the targets in `observability/prom-config.yaml` to:

```yaml
- targets: ['host.docker.internal:8080']
```

On Linux, use the host's IP address instead.
- **Start the observability services:**

  ```bash
  docker compose -f observability/compose.yaml up -d
  ```
To monitor requests across services, we use distributed tracing for improved observability and debugging. View traces in Jaeger to analyze request flows, latency, and dependencies.
- **Access Jaeger:**
  - Open Jaeger at `http://localhost:16686`.
  - In the left panel, under Service, select `roi-project-planner` and click Find Traces.
  - Send API requests using the API Usage guide to visualize request flows.
Note: You can also visualize traces in Grafana by adding Jaeger as a data source.
We use Prometheus to collect and monitor key application metrics, enabling performance analysis and proactive issue detection. Metrics include request rates, response times, error rates, JVM performance (memory, GC, threads), and database latency.
- **Access Grafana and configure Prometheus:**
  - Open Grafana at `http://localhost:3000` (default login: `admin` / `admin`).
  - Navigate to Data Sources, click "Add data source", then select Prometheus.
  - Set the URL to `http://prometheus:9090` (resolvable via Docker DNS), then click "Save & Test" to verify connectivity.
- **Create a Metrics Dashboard:**
  - Click the "+" in the top right, select "New Dashboard", then click "Add Visualization".
  - Choose Prometheus as the data source.
  - Use Label Filters to refine the metrics (e.g., `job: roi-project-planner-metrics`).
- **Monitor key metrics:**
  - `http_server_requests_seconds_count` – total HTTP requests per endpoint.
  - `http_server_requests_seconds_sum` – request duration per endpoint.
  - `jvm_memory_used_bytes` – JVM memory usage.
Refer to the PromQL documentation for advanced queries.
We use Loki and Alloy to aggregate and analyze application logs, enabling real-time debugging and operational insights. Logs capture request processing, application events, errors, and performance metrics for efficient troubleshooting.
- **Access Grafana and configure Loki:**
  - Open Grafana at `http://localhost:3000` (default login: `admin` / `admin`).
  - Navigate to Data Sources, click "Add data source", then select Loki.
  - Set the URL to `http://loki:3100` (resolvable via Docker DNS), then click "Save & Test" to verify connectivity.
- **Create a Log Dashboard:**
  - Click the "+" in the top right, select "New Dashboard", then click "Add Visualization".
  - Choose Loki as the data source.
  - Use Label Filters to refine logs (e.g., `service: roi-project-planner`).
  - Enable Table View to see structured log entries.
- **LogQL queries for analysis:**
  - `rate({job="roi-project-planner-logs"} |~ "statusCode=201" | json [30m])` – rate of successful requests (status code 201) over the last 30 minutes.
  - `rate({job="roi-project-planner-logs"} [1m])` – track high log volume (spike detection).
  - `rate({job="roi-project-planner-logs"} | json | level="ERROR" [5m])` – measure log rate per log level (e.g., `ERROR`, `INFO`, `WARN`).
Refer to the LogQL documentation for advanced queries.
To deploy the ROI Project Planner in production, we use Terraform to provision a secure EKS cluster with managed dependencies, including Kafka, Cassandra, and Redis. The setup includes a dedicated VPC, high-availability subnets, security groups, and persistent storage with Amazon EBS volumes. Argo CD enables GitOps for CI/CD, while Prometheus, Grafana, and Jaeger handle metrics, monitoring, and distributed tracing. We enforce IAM roles for access control, implement SSL/TLS encryption, and configure auto-scaling for resilience. Additionally, log files are stored in Amazon S3 for long-term retention and easy access.
(Terraform project link will be available soon.)
We welcome contributions from developers of all skill levels! Here’s how you can get started:
- Fork the Repository: Create a personal copy of the repo.
- Explore Issues: Check the issue tracker for open issues or feature requests.
- Create a Branch: Work on your feature or bug fix in a separate branch.
- Submit a Pull Request: Once ready and tests are passing, submit a PR for review.
- Feature Development: Implement new features such as advanced project querying, analytics, or enhanced reporting.
- Bug Fixes: Identify and resolve issues.
- Documentation: Improve or expand the existing documentation.
- Testing: Write unit and integration tests to ensure reliability.
This project is open-source software released under the MIT License.