
Commit 6a99307

Merge pull request #500 from 47degrees/CE3
Update to cats-effect 3
2 parents b9844fe + ec6bf7b · commit 6a99307

22 files changed: +412 −473 lines

README.md

+34 −35
@@ -74,13 +74,13 @@ import cats.implicits._
 
 import fetch._
 
-def latency[F[_] : Concurrent](milis: Long): F[Unit] =
-  Concurrent[F].delay(Thread.sleep(milis))
+def latency[F[_] : Sync](milis: Long): F[Unit] =
+  Sync[F].delay(Thread.sleep(milis))
 
 object ToString extends Data[Int, String] {
   def name = "To String"
 
-  def source[F[_] : Concurrent]: DataSource[F, Int, String] = new DataSource[F, Int, String]{
+  def source[F[_] : Async]: DataSource[F, Int, String] = new DataSource[F, Int, String]{
     override def data = ToString
 
     override def CF = Concurrent[F]
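The `latency` helper above still wraps `Thread.sleep` in `delay`; cats-effect 3 also provides `Sync[F].blocking` for thread-blocking calls and `Temporal[F].sleep` for a purely semantic delay. A minimal sketch of those alternatives (not part of this commit; the helper names are made up for illustration):

```scala
import scala.concurrent.duration._
import cats.effect.{Sync, Temporal}

// Runs the blocking call on the dedicated blocking thread pool.
def latencyBlocking[F[_] : Sync](millis: Long): F[Unit] =
  Sync[F].blocking(Thread.sleep(millis))

// Semantic sleep: no thread is blocked while waiting.
def latencySleep[F[_] : Temporal](millis: Long): F[Unit] =
  Temporal[F].sleep(millis.millis)
```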
@@ -99,7 +99,7 @@ object ToString extends Data[Int, String] {
   }
 }
 
-def fetchString[F[_] : Concurrent](n: Int): Fetch[F, String] =
+def fetchString[F[_] : Async](n: Int): Fetch[F, String] =
   Fetch(n, ToString.source)
 ```
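For readers following the migration, here is a rough sketch of what a complete data source looks like against the cats-effect 3 constraints used above. It is an illustration assembled from the fragments in this diff, not the verbatim file contents; the `fetch`/`batch` bodies are simplified placeholders:

```scala
import cats.data.NonEmptyList
import cats.effect.{Async, Concurrent}
import fetch._

object ToStringSketch extends Data[Int, String] {
  def name = "To String (sketch)"

  def source[F[_] : Async]: DataSource[F, Int, String] =
    new DataSource[F, Int, String] {
      override def data = ToStringSketch

      // Async[F] extends Concurrent[F], so the required instance is available.
      override def CF = Concurrent[F]

      override def fetch(id: Int): F[Option[String]] =
        CF.pure(Some(id.toString))

      // Optional: without an override, batches fall back to individual fetches.
      override def batch(ids: NonEmptyList[Int]): F[Map[Int, String]] =
        CF.pure(ids.toList.map(i => i -> i.toString).toMap)
    }
}
```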

@@ -116,16 +116,15 @@ import scala.concurrent.ExecutionContext
 val executor = new ScheduledThreadPoolExecutor(4)
 val executionContext: ExecutionContext = ExecutionContext.fromExecutor(executor)
 
-implicit val timer: Timer[IO] = IO.timer(executionContext)
-implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)
+import cats.effect.unsafe.implicits.global
 ```
 
 ## Creating and running a fetch
 
 Now that we can convert `Int` values to `Fetch[F, String]`, let's try creating a fetch.
 
 ```scala
-def fetchOne[F[_] : Concurrent]: Fetch[F, String] =
+def fetchOne[F[_] : Async]: Fetch[F, String] =
   fetchString(1)
 ```
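The `Timer`/`ContextShift` implicits are gone in cats-effect 3; the `unsafeRun*` methods now require an `IORuntime`, which the `global` import above supplies. If you prefer to make that value explicit, a small sketch of the equivalent (using the library-provided global runtime rather than a custom one):

```scala
import cats.effect.unsafe.IORuntime

// Same runtime that `import cats.effect.unsafe.implicits.global` brings in,
// just bound to an explicit implicit value.
implicit val runtime: IORuntime = IORuntime.global
```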

@@ -135,8 +134,8 @@ Let's run it and wait for the fetch to complete. We'll use `IO#unsafeRunTimed` f
 import scala.concurrent.duration._
 
 Fetch.run[IO](fetchOne).unsafeRunTimed(5.seconds)
-// --> [236] One ToString 1
-// <-- [236] One ToString 1
+// --> [771] One ToString 1
+// <-- [771] One ToString 1
 // res0: Option[String] = Some(value = "1")
 ```

@@ -147,16 +146,16 @@ As you can see in the previous example, the `ToStringSource` is queried once to
 Multiple fetches to the same data source are automatically batched. For illustrating this, we are going to compose three independent fetch results as a tuple.
 
 ```scala
-def fetchThree[F[_] : Concurrent]: Fetch[F, (String, String, String)] =
+def fetchThree[F[_] : Async]: Fetch[F, (String, String, String)] =
   (fetchString(1), fetchString(2), fetchString(3)).tupled
 ```
 
 When executing the above fetch, note how the three identities get batched, and the data source is only queried once.
 
 ```scala
 Fetch.run[IO](fetchThree).unsafeRunTimed(5.seconds)
-// --> [236] Batch ToString NonEmptyList(1, 2, 3)
-// <-- [236] Batch ToString NonEmptyList(1, 2, 3)
+// --> [777] Batch ToString NonEmptyList(1, 2, 3)
+// <-- [777] Batch ToString NonEmptyList(1, 2, 3)
 // res1: Option[(String, String, String)] = Some(value = ("1", "2", "3"))
 ```

@@ -166,7 +165,7 @@ Note that the `DataSource#batch` method is not mandatory. It will be implemented
 object UnbatchedToString extends Data[Int, String] {
   def name = "Unbatched to string"
 
-  def source[F[_] : Concurrent] = new DataSource[F, Int, String] {
+  def source[F[_]: Async] = new DataSource[F, Int, String] {
     override def data = UnbatchedToString
 
     override def CF = Concurrent[F]
@@ -179,27 +178,27 @@ object UnbatchedToString extends Data[Int, String] {
   }
 }
 
-def unbatchedString[F[_] : Concurrent](n: Int): Fetch[F, String] =
+def unbatchedString[F[_]: Async](n: Int): Fetch[F, String] =
   Fetch(n, UnbatchedToString.source)
 ```
 
 Let's create a tuple of unbatched string requests.
 
 ```scala
-def fetchUnbatchedThree[F[_] : Concurrent]: Fetch[F, (String, String, String)] =
+def fetchUnbatchedThree[F[_] : Async]: Fetch[F, (String, String, String)] =
   (unbatchedString(1), unbatchedString(2), unbatchedString(3)).tupled
 ```
 
 When executing the above fetch, note how the three identities get requested in parallel. You can override `batch` to execute queries sequentially if you need to.
 
 ```scala
 Fetch.run[IO](fetchUnbatchedThree).unsafeRunTimed(5.seconds)
-// --> [236] One UnbatchedToString 1
-// --> [237] One UnbatchedToString 2
-// --> [238] One UnbatchedToString 3
-// <-- [236] One UnbatchedToString 1
-// <-- [237] One UnbatchedToString 2
-// <-- [238] One UnbatchedToString 3
+// --> [778] One UnbatchedToString 1
+// --> [776] One UnbatchedToString 2
+// --> [777] One UnbatchedToString 3
+// <-- [776] One UnbatchedToString 2
+// <-- [777] One UnbatchedToString 3
+// <-- [778] One UnbatchedToString 1
 // res2: Option[(String, String, String)] = Some(value = ("1", "2", "3"))
 ```
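The context above mentions controlling how batches are executed. A sketch of the knobs a Fetch `DataSource` exposes for this, assuming the `maxBatchSize`/`batchExecution` members described in the fetch documentation (verify the exact names against the version in this commit):

```scala
import cats.effect.{Async, Concurrent}
import fetch._

object BatchTunedToString extends Data[Int, String] {
  def name = "Batch-tuned to string (sketch)"

  def source[F[_]: Async] = new DataSource[F, Int, String] {
    override def data = BatchTunedToString

    override def CF = Concurrent[F]

    override def fetch(id: Int): F[Option[String]] =
      CF.pure(Some(id.toString))

    // Limit how many identities go into a single batch...
    override def maxBatchSize: Option[Int] = Some(2)

    // ...and run the resulting batches one after another instead of in parallel.
    override def batchExecution: BatchExecution = Sequentially
  }
}
```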

@@ -211,7 +210,7 @@ If we combine two independent fetches from different data sources, the fetches c
 object Length extends Data[String, Int] {
   def name = "Length"
 
-  def source[F[_] : Concurrent] = new DataSource[F, String, Int] {
+  def source[F[_] : Async] = new DataSource[F, String, Int] {
     override def data = Length
 
     override def CF = Concurrent[F]
@@ -230,25 +229,25 @@ object Length extends Data[String, Int] {
   }
 }
 
-def fetchLength[F[_] : Concurrent](s: String): Fetch[F, Int] =
+def fetchLength[F[_] : Async](s: String): Fetch[F, Int] =
   Fetch(s, Length.source)
 ```
 
 And now we can easily receive data from the two sources in a single fetch.
 
 ```scala
-def fetchMulti[F[_] : Concurrent]: Fetch[F, (String, Int)] =
+def fetchMulti[F[_] : Async]: Fetch[F, (String, Int)] =
   (fetchString(1), fetchLength("one")).tupled
 ```
 
 Note how the two independent data fetches run in parallel, minimizing the latency cost of querying the two data sources.
 
 ```scala
 Fetch.run[IO](fetchMulti).unsafeRunTimed(5.seconds)
-// --> [239] One Length one
-// --> [236] One ToString 1
-// <-- [239] One Length one
-// <-- [236] One ToString 1
+// --> [774] One ToString 1
+// --> [777] One Length one
+// <-- [774] One ToString 1
+// <-- [777] One Length one
 // res3: Option[(String, Int)] = Some(value = ("1", 3))
 ```
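Composition is not limited to tuples: `Fetch` is a monad, so standard combinators such as `traverse` work as well, and requests to the same source are still deduplicated and batched. A small sketch, assuming the `fetchString` definition shown above (the `fetchMany` name is illustrative, not from this commit):

```scala
import cats.effect.Async
import cats.syntax.all._
import fetch._

// Fetch all ids; identical ids are deduplicated and same-source requests batched.
def fetchMany[F[_] : Async](ids: List[Int]): Fetch[F, List[String]] =
  ids.traverse(id => fetchString[F](id))
```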

@@ -261,7 +260,7 @@ When fetching an identity twice within the same `Fetch`, such as a batch of fetc
 Let's try creating a fetch that asks for the same identity twice, by using `flatMap` (in a for-comprehension) to chain the requests together:
 
 ```scala
-def fetchTwice[F[_] : Concurrent]: Fetch[F, (String, String)] = for {
+def fetchTwice[F[_] : Async]: Fetch[F, (String, String)] = for {
   one <- fetchString(1)
   two <- fetchString(1)
 } yield (one, two)
@@ -275,16 +274,16 @@ val runFetchTwice = Fetch.run[IO](fetchTwice)
 ```
 ```scala
 runFetchTwice.unsafeRunTimed(5.seconds)
-// --> [237] One ToString 1
-// <-- [237] One ToString 1
+// --> [772] One ToString 1
+// <-- [772] One ToString 1
 // res4: Option[(String, String)] = Some(value = ("1", "1"))
 ```
 
 This will still fetch the data again, however, if we call it once more:
 ```scala
 runFetchTwice.unsafeRunTimed(5.seconds)
-// --> [239] One ToString 1
-// <-- [239] One ToString 1
+// --> [778] One ToString 1
+// <-- [778] One ToString 1
 // res5: Option[(String, String)] = Some(value = ("1", "1"))
 ```

@@ -302,8 +301,8 @@ val runFetchFourTimesSharedCache = for {
 ```
 ```scala
 runFetchFourTimesSharedCache.unsafeRunTimed(5.seconds)
-// --> [238] One ToString 1
-// <-- [238] One ToString 1
+// --> [777] One ToString 1
+// <-- [777] One ToString 1
 // res6: Option[(String, String, String, String)] = Some(
 //   value = ("1", "1", "1", "1")
 // )
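The `runFetchFourTimesSharedCache` value above (its definition is elided in this hunk) reuses a cache between runs. A sketch of the general pattern, assuming `Fetch.runCache` returns the populated cache alongside the result and `Fetch.run` accepts a starting cache, as described in the fetch documentation (verify both against this version):

```scala
import cats.effect.IO
import fetch._

// Sketch: run once, capture the populated cache, then pass it to a later run
// so the second execution can serve the identity from the cache.
val reuseCache: IO[(String, String)] =
  Fetch.runCache[IO](fetchString[IO](1)).flatMap { case (cache, one) =>
    Fetch.run[IO](fetchString[IO](1), cache).map(oneAgain => (one, oneAgain))
  }
```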

build.sbt

+7 −7
@@ -5,13 +5,13 @@ addCommandAlias("ci-test", "scalafmtCheckAll; scalafmtSbtCheck; mdoc; ++test")
 addCommandAlias("ci-docs", "github; mdoc; headerCreateAll; publishMicrosite")
 addCommandAlias("ci-publish", "github; ci-release")
 
-lazy val scala212 = "2.12.12"
-lazy val scala213 = "2.13.5"
-lazy val scala3Version = "3.0.0-RC2"
+lazy val scala212 = "2.12.14"
+lazy val scala213 = "2.13.6"
+lazy val scala3Version = "3.0.0"
 lazy val scala2Versions = Seq(scala212, scala213)
 lazy val allScalaVersions = scala2Versions :+ scala3Version
 
-skip in publish := true
+publish / skip := true
 
 lazy val fetch = crossProject(JSPlatform, JVMPlatform)
   .crossType(CrossType.Pure)
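Besides the version bumps, the build moves from the deprecated `key in scope` notation to sbt's slash syntax. A short sketch of the general pattern with common keys (illustrative only, not taken from this build file):

```scala
// sbt 1.x slash syntax: `key in scope` becomes `scope / key`.
publish / skip := true                        // was: skip in publish := true
Test / parallelExecution := false             // was: parallelExecution in Test := false
Compile / scalacOptions += "-deprecation"     // was: scalacOptions in Compile += "-deprecation"
```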
@@ -34,20 +34,20 @@ lazy val debugJS = `fetch-debug`.js
 
 lazy val `fetch-examples` = project
   .dependsOn(fetchJVM, debugJVM)
-  .settings(skip in publish := true)
+  .settings(publish / skip := true)
   .settings(examplesSettings: _*)
   .settings(crossScalaVersions := scala2Versions)
 
 lazy val microsite = project
   .dependsOn(fetchJVM, debugJVM)
   .settings(docsSettings: _*)
-  .settings(skip in publish := true)
+  .settings(publish / skip := true)
   .enablePlugins(MicrositesPlugin, MdocPlugin)
   .settings(crossScalaVersions := scala2Versions)
 
 lazy val documentation = project
   .dependsOn(fetchJVM)
-  .settings(skip in publish := true)
+  .settings(publish / skip := true)
   .settings(mdocOut := file("."))
   .enablePlugins(MdocPlugin)
   .settings(crossScalaVersions := scala2Versions)

docs/README.md

+14 −15
@@ -75,13 +75,13 @@ import cats.implicits._
 
 import fetch._
 
-def latency[F[_] : Concurrent](milis: Long): F[Unit] =
-  Concurrent[F].delay(Thread.sleep(milis))
+def latency[F[_] : Sync](milis: Long): F[Unit] =
+  Sync[F].delay(Thread.sleep(milis))
 
 object ToString extends Data[Int, String] {
   def name = "To String"
 
-  def source[F[_] : Concurrent]: DataSource[F, Int, String] = new DataSource[F, Int, String]{
+  def source[F[_] : Async]: DataSource[F, Int, String] = new DataSource[F, Int, String]{
     override def data = ToString
 
     override def CF = Concurrent[F]
@@ -100,7 +100,7 @@ object ToString extends Data[Int, String] {
   }
 }
 
-def fetchString[F[_] : Concurrent](n: Int): Fetch[F, String] =
+def fetchString[F[_] : Async](n: Int): Fetch[F, String] =
   Fetch(n, ToString.source)
 ```

@@ -117,16 +117,15 @@ import scala.concurrent.ExecutionContext
 val executor = new ScheduledThreadPoolExecutor(4)
 val executionContext: ExecutionContext = ExecutionContext.fromExecutor(executor)
 
-implicit val timer: Timer[IO] = IO.timer(executionContext)
-implicit val cs: ContextShift[IO] = IO.contextShift(executionContext)
+import cats.effect.unsafe.implicits.global
 ```
 
 ## Creating and running a fetch
 
 Now that we can convert `Int` values to `Fetch[F, String]`, let's try creating a fetch.
 
 ```scala mdoc:silent
-def fetchOne[F[_] : Concurrent]: Fetch[F, String] =
+def fetchOne[F[_] : Async]: Fetch[F, String] =
   fetchString(1)
 ```
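These mdoc snippets call `unsafeRunTimed` directly, which is convenient for documentation. In an actual cats-effect 3 application you would more likely let `IOApp` own the runtime; a minimal sketch, assuming the `fetchOne` definition above (the object name is illustrative):

```scala
import cats.effect.{IO, IOApp}
import fetch._

object FetchOneApp extends IOApp.Simple {
  // IOApp supplies the IORuntime, so no unsafe* calls are needed here.
  def run: IO[Unit] =
    Fetch.run[IO](fetchOne[IO]).flatMap(result => IO.println(result))
}
```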

@@ -145,7 +144,7 @@ As you can see in the previous example, the `ToStringSource` is queried once to
 Multiple fetches to the same data source are automatically batched. For illustrating this, we are going to compose three independent fetch results as a tuple.
 
 ```scala mdoc:silent
-def fetchThree[F[_] : Concurrent]: Fetch[F, (String, String, String)] =
+def fetchThree[F[_] : Async]: Fetch[F, (String, String, String)] =
   (fetchString(1), fetchString(2), fetchString(3)).tupled
 ```

@@ -161,7 +160,7 @@ Note that the `DataSource#batch` method is not mandatory. It will be implemented
 object UnbatchedToString extends Data[Int, String] {
   def name = "Unbatched to string"
 
-  def source[F[_] : Concurrent] = new DataSource[F, Int, String] {
+  def source[F[_]: Async] = new DataSource[F, Int, String] {
     override def data = UnbatchedToString
 
     override def CF = Concurrent[F]
@@ -174,14 +173,14 @@ object UnbatchedToString extends Data[Int, String] {
   }
 }
 
-def unbatchedString[F[_] : Concurrent](n: Int): Fetch[F, String] =
+def unbatchedString[F[_]: Async](n: Int): Fetch[F, String] =
   Fetch(n, UnbatchedToString.source)
 ```
 
 Let's create a tuple of unbatched string requests.
 
 ```scala mdoc:silent
-def fetchUnbatchedThree[F[_] : Concurrent]: Fetch[F, (String, String, String)] =
+def fetchUnbatchedThree[F[_] : Async]: Fetch[F, (String, String, String)] =
   (unbatchedString(1), unbatchedString(2), unbatchedString(3)).tupled
 ```

@@ -199,7 +198,7 @@ If we combine two independent fetches from different data sources, the fetches c
 object Length extends Data[String, Int] {
   def name = "Length"
 
-  def source[F[_] : Concurrent] = new DataSource[F, String, Int] {
+  def source[F[_] : Async] = new DataSource[F, String, Int] {
     override def data = Length
 
     override def CF = Concurrent[F]
@@ -218,14 +217,14 @@ object Length extends Data[String, Int] {
   }
 }
 
-def fetchLength[F[_] : Concurrent](s: String): Fetch[F, Int] =
+def fetchLength[F[_] : Async](s: String): Fetch[F, Int] =
   Fetch(s, Length.source)
 ```
 
 And now we can easily receive data from the two sources in a single fetch.
 
 ```scala mdoc:silent
-def fetchMulti[F[_] : Concurrent]: Fetch[F, (String, Int)] =
+def fetchMulti[F[_] : Async]: Fetch[F, (String, Int)] =
   (fetchString(1), fetchLength("one")).tupled
 ```

@@ -244,7 +243,7 @@ When fetching an identity twice within the same `Fetch`, such as a batch of fetc
 Let's try creating a fetch that asks for the same identity twice, by using `flatMap` (in a for-comprehension) to chain the requests together:
 
 ```scala mdoc:silent
-def fetchTwice[F[_] : Concurrent]: Fetch[F, (String, String)] = for {
+def fetchTwice[F[_] : Async]: Fetch[F, (String, String)] = for {
   one <- fetchString(1)
   two <- fetchString(1)
 } yield (one, two)
