docs: refine README for clarity and accuracy #22

Merged: 1 commit, Feb 16, 2025
README.md: 10 changes (5 additions, 5 deletions)
@@ -90,7 +90,7 @@ contribution is welcome._

### Projects

1. To create **one** or more projects, send a **POST** request:
1. To **create one or more projects**, send a **POST** request:

```bash
curl -X POST http://localhost:8080/api/v1/projects \
```

@@ -139,6 +139,9 @@ consumers can subscribe to for tailored visualizations._

## Observability Setup for Local Development

_**Note:** On Mac or Windows, set the `targets` in **observability/prom-config.yaml**
to: `- targets: ['host.docker.internal:8080']`. On Linux, use the host's IP address._
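
For orientation, a minimal sketch of the relevant scrape block is shown below; the job name, scrape interval, and commented-out metrics path are illustrative assumptions, not taken from this repository's **observability/prom-config.yaml**:

```yaml
# Sketch only: job_name, scrape_interval, and metrics_path are assumptions.
scrape_configs:
  - job_name: 'roi-project-planner'
    scrape_interval: 15s
    # metrics_path: /actuator/prometheus   # adjust if the app exposes metrics elsewhere
    static_configs:
      # Mac/Windows (Docker Desktop); on Linux, replace with the host's IP address.
      - targets: ['host.docker.internal:8080']
```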

1. **Start Observability Services**
```bash
docker compose -f observability/compose.yaml up -d
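
# Optional check (sketch, not from the repo docs): list the containers and
# confirm they are up; service names depend on observability/compose.yaml.
docker compose -f observability/compose.yaml ps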
```

@@ -152,17 +155,14 @@ traces in **Jaeger** to analyze request flows, latency, and dependencies.
1. **Access Jaeger**
- Open Jaeger at `http://localhost:16686`.
- In the left panel, under **Service**, select `roi-project-planner` and click **Find Traces**.
- Send **API requests** using [API Usage](#api-usage) to visualize request flows.
- Send **API requests** using the [API Usage](#api-usage) guide to visualize request flows.
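
For example, a single request along the lines of the sketch below is enough to produce a trace; the JSON payload here is hypothetical, so use the actual schema from the [API Usage](#api-usage) section:

```bash
# Hypothetical payload: the field names are illustrative, not the project's real schema.
curl -X POST http://localhost:8080/api/v1/projects \
  -H "Content-Type: application/json" \
  -d '[{"name": "sample-project"}]'
```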

### Metrics Monitoring

We use **Prometheus** to collect and monitor key application metrics, enabling performance analysis and proactive issue
detection. Metrics include **request rates**, **response times**, **error rates**, **JVM performance (memory, GC,
threads)**, and **database latency**.
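
As a quick sanity check, queries along these lines can be run against the Prometheus HTTP API; this sketch assumes Prometheus is published on its default port 9090 by the compose file, and the metric names are common Micrometer/JVM defaults rather than names confirmed from this project:

```bash
# Assumptions: Prometheus reachable at localhost:9090; Micrometer-style metric names.
curl -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(http_server_requests_seconds_count[5m])'   # request rate
curl -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=jvm_memory_used_bytes'                          # JVM memory in use
```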

_**Note:** On Mac or Windows, set the `targets` in **observability/prom-config.yaml**
to: `- targets: ['host.docker.internal:8080']`. On Linux, use the host's IP address._

1. **Access Grafana and configure Prometheus once the services are running**
- Open Grafana at `http://localhost:3000` (default login: `admin` / `admin`).
- Navigate to **Data Sources**, click **"Add data source"**, then select **Prometheus**.