Commit 3b0ea7b

Update docs with headers (#12)
1 parent 162648d commit 3b0ea7b

File tree: 2 files changed (+59, -12 lines)

README.md

Lines changed: 48 additions & 8 deletions

@@ -10,7 +10,7 @@ These classes will typically be helpful in batch or queue consumers, not as much

 # Example Usage Scenarios

-Consider a typical processing loop without IterableMapper:
+## Typical Processing Loop without `IterableMapper`

 ```typescript
 const source = new SomeSource();
@@ -25,48 +25,88 @@ for (const sourceId of sourceIds) {

 Each iteration takes 820ms total, but we waste time waiting for I/O. We could prefetch the next read (300ms) while processing (20ms) and writing (500ms), without changing the order of reads or writes.

-Using IterableMapper as a prefetcher:
+## Using `IterableMapper` as Prefetcher with Blocking Sequential Writes
+
+`concurrency: 1` on the prefetcher preserves the order of the reads, and writes remain sequential and blocking (unchanged).

 ```typescript
 const source = new SomeSource();
 const sourceIds = [1, 2, ... 1000];
 // Pre-reads up to 10 items serially and releases them in sequential order
 const sourcePrefetcher = new IterableMapper(sourceIds,
   async (sourceId) => source.read(sourceId),
-  { concurrency: 1 }
+  { concurrency: 1, maxUnread: 10 }
 );
 const sink = new SomeSink();
-for await (const item of sourcePrefetcher) {
+for await (const item of sourcePrefetcher) { // may not block for fast sources
   const outputItem = doSomeOperation(item); // takes 20 ms of CPU
   await sink.write(outputItem); // takes 500 ms of I/O wait, no CPU
 }
 ```

 This reduces iteration time to 520ms by overlapping reads with processing/writing.

-For maximum throughput, make the writes concurrent with IterableQueueMapper (to iterate results with backpressure when too many unread items) or IterableQueueMapperSimple (to handle errors at end without custom iteration or backpressure):
+## Using `IterableMapper` as Prefetcher with Background Sequential Writes with `IterableQueueMapperSimple`
+
+`concurrency: 1` on the prefetcher preserves the order of the reads.
+`concurrency: 1` on the flusher preserves the order of the writes, but allows the loop to iterate while the last write is completing.

 ```typescript
 const source = new SomeSource();
 const sourceIds = [1, 2, ... 1000];
 const sourcePrefetcher = new IterableMapper(sourceIds,
   async (sourceId) => source.read(sourceId),
+  { concurrency: 1, maxUnread: 10 }
+);
+const sink = new SomeSink();
+const flusher = new IterableQueueMapperSimple(
+  async (outputItem) => sink.write(outputItem),
   { concurrency: 1 }
 );
+for await (const item of sourcePrefetcher) { // may not block for fast sources
+  const outputItem = doSomeOperation(item); // takes 20 ms of CPU
+  await flusher.enqueue(outputItem); // will periodically block for a portion of the write time
+}
+// Wait for all writes to complete
+await flusher.onIdle();
+// Check for errors
+if (flusher.errors.length > 0) {
+  // ...
+}
+```
+
+This reduces iteration time to about `max(max(readTime, writeTime) - cpuOpTime, cpuOpTime)`
+by overlapping reads and writes with the CPU processing step.
+In this contrived example, the loop time is reduced to 500ms - 20ms = 480ms.
+In cases where the CPU usage time is higher, the impact can be greater.
+
+## Using `IterableMapper` as Prefetcher with Out of Order Reads and Background Out of Order Writes with `IterableQueueMapperSimple`
+
+For maximum throughput, allow out of order reads and writes with
+`IterableQueueMapper` (to iterate results with backpressure when there are too many unread items) or
+`IterableQueueMapperSimple` (to handle errors at the end without custom iteration, applying backpressure to block further enqueues when `concurrency` items are in process):
+
+```typescript
+const source = new SomeSource();
+const sourceIds = [1, 2, ... 1000];
+const sourcePrefetcher = new IterableMapper(sourceIds,
+  async (sourceId) => source.read(sourceId),
+  { concurrency: 10, maxUnread: 20 }
+);
 const sink = new SomeSink();
 const flusher = new IterableQueueMapperSimple(
   async (outputItem) => sink.write(outputItem),
   { concurrency: 10 }
 );
-for await (const item of sourcePrefetcher) {
+for await (const item of sourcePrefetcher) { // typically will not block
   const outputItem = doSomeOperation(item); // takes 20 ms of CPU
-  await flusher.enqueue(outputItem); // usually takes no time
+  await flusher.enqueue(outputItem); // typically will not block
 }
 // Wait for all writes to complete
 await flusher.onIdle();
 // Check for errors
 if (flusher.errors.length > 0) {
-// ...
+  // ...
 }
 ```
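As a cross-check on the timing claims in the README hunk above: the 820 ms, 520 ms, and roughly 480 ms figures follow from the quoted per-item latencies (300 ms read, 20 ms CPU, 500 ms write). A minimal sketch of that arithmetic, in plain TypeScript with illustrative variable names only:

```typescript
// Per-item latencies quoted in the README example (illustrative numbers)
const readTime = 300;  // ms of I/O to read one item
const cpuOpTime = 20;  // ms of CPU in doSomeOperation()
const writeTime = 500; // ms of I/O to write one item

// No prefetching: read, process, and write run back to back
const serialMs = readTime + cpuOpTime + writeTime; // 820 ms

// Prefetch with blocking writes: the next read overlaps processing + writing,
// so each iteration is bounded by the slower of the two sides
const prefetchMs = Math.max(readTime, cpuOpTime + writeTime); // 520 ms

// Prefetch plus background writes: only the CPU step stays in the loop,
// with reads and writes both overlapped against it
const overlappedMs = Math.max(Math.max(readTime, writeTime) - cpuOpTime, cpuOpTime); // 480 ms

console.log({ serialMs, prefetchMs, overlappedMs }); // { serialMs: 820, prefetchMs: 520, overlappedMs: 480 }
```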

src/iterable-mapper.ts

Lines changed: 11 additions & 4 deletions

@@ -115,7 +115,7 @@ type NewElementOrError<NewElement = unknown> = {
  *
  * @example
  *
- * Consider a typical processing loop without IterableMapper:
+ * ### Typical Processing Loop without `IterableMapper`
  *
  * ```typescript
  * const source = new SomeSource();
@@ -134,7 +134,9 @@ type NewElementOrError<NewElement = unknown> = {
  *
  * @example
  *
- * Using `IterableMapper` as a prefetcher and blocking writes, without changing the order of reads or writes:
+ * ### Using `IterableMapper` as Prefetcher with Blocking Sequential Writes
+ *
+ * `concurrency: 1` on the prefetcher preserves the order of the reads, and writes remain sequential and blocking (unchanged).
  *
  * ```typescript
  * const source = new SomeSource();
@@ -155,7 +157,10 @@ type NewElementOrError<NewElement = unknown> = {
  *
  * @example
  *
- * Using `IterableMapper` as a prefetcher with background writes, without changing the order of reads or writes:
+ * ### Using `IterableMapper` as Prefetcher with Background Sequential Writes with `IterableQueueMapperSimple`
+ *
+ * `concurrency: 1` on the prefetcher preserves the order of the reads.
+ * `concurrency: 1` on the flusher preserves the order of the writes, but allows the loop to iterate while the last write is completing.
  *
  * ```typescript
  * const source = new SomeSource();
@@ -181,13 +186,15 @@ type NewElementOrError<NewElement = unknown> = {
  * }
  * ```
  *
- * This reduces iteration time to about to `max((max(readTime, writeTime) - cpuOpTime, cpuOpTime))
+ * This reduces iteration time to about `max(max(readTime, writeTime) - cpuOpTime, cpuOpTime)`
  * by overlapping reads and writes with the CPU processing step.
  * In this contrived example, the loop time is reduced to 500ms - 20ms = 480ms.
  * In cases where the CPU usage time is higher, the impact can be greater.
  *
  * @example
  *
+ * ### Using `IterableMapper` as Prefetcher with Out of Order Reads and Background Out of Order Writes with `IterableQueueMapperSimple`
+ *
  * For maximum throughput, allow out of order reads and writes with
  * `IterableQueueMapper` (to iterate results with backpressure when there are too many unread items) or
  * `IterableQueueMapperSimple` (to handle errors at the end without custom iteration, applying backpressure to block further enqueues when `concurrency` items are in process):
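The examples in both files reference `SomeSource`, `SomeSink`, and `doSomeOperation`, which the diff never defines. For anyone wanting to run the documented snippets as written, a minimal sketch of hypothetical stand-ins might look like the following; the names come from the examples, but the bodies and the `setTimeout`-based latency simulation are assumptions, not part of the package:

```typescript
// Hypothetical stand-ins for the helpers referenced in the doc examples.
// Latencies are simulated to roughly match the figures quoted in the docs
// (300 ms per read, 500 ms per write, ~20 ms of CPU work).
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

interface Item {
  sourceId: number;
  payload: string;
}

class SomeSource {
  async read(sourceId: number): Promise<Item> {
    await sleep(300); // simulated I/O wait for one read
    return { sourceId, payload: `item-${sourceId}` };
  }
}

class SomeSink {
  async write(_outputItem: Item): Promise<void> {
    await sleep(500); // simulated I/O wait for one write
  }
}

// Stand-in for the CPU-bound processing step in the examples
function doSomeOperation(item: Item): Item {
  return { ...item, payload: item.payload.toUpperCase() };
}
```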
