An arity of 8 was settled on here: #234 (with more links), but now that there are new versions going out, I'd really like to revisit this (especially since most of the code is generated) or find a better solution.
Concrete request:
We use `Validation` very regularly for 20-60 field records and would like the arity of these built-ins increased substantially, or a better way to combine 40+ fields of different types in a type-safe manner.
We do a lot of report parsing from third parties, and this has a very natural representation as multiple large record classes for each step of data handling. Assume a file has 50 fields; the steps naturally boil down to something like:

1. A 50-field record with `String` types for a deserializer to dump into: `UnvalidatedFileRow u = deserialize(inputstream);`, where the class is like

   ```java
   public record UnvalidatedFileRow(String f1, String f2, ... String f50);
   ```

2. Perform validation on the 50 fields and end up with heterogeneous `Validation<Exception, X>` types.

3. Ideally, call `Validation.combine(v1, v2, v3 ... v50).ap(ValidatedRecord::new)` (a small-arity sketch of this shape follows the list), where `ValidatedRecord` is like

   ```java
   public record ValidatedRecord(String f1, BigDecimal f2, ... OffsetDateTime f50);
   ```
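To make the desired shape concrete, here is a minimal sketch at an arity vavr supports today (3). The per-field validators (`nonBlank`, `decimal`, `timestamp`) and the class name are made up for illustration; the request is for this exact pattern at arity 50:

```java
import java.math.BigDecimal;
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

import io.vavr.collection.Seq;
import io.vavr.control.Validation;

public final class RowValidationSketch {

    // Three-field stand-ins for the 50-field records above.
    public record UnvalidatedFileRow(String f1, String f2, String f3) {}
    public record ValidatedRecord(String f1, BigDecimal f2, OffsetDateTime f3) {}

    // Hypothetical per-field validators producing Validation<Exception, X>.
    static Validation<Exception, String> nonBlank(String s) {
        return s == null || s.isBlank()
                ? Validation.invalid(new IllegalArgumentException("blank field"))
                : Validation.valid(s);
    }

    static Validation<Exception, BigDecimal> decimal(String s) {
        try {
            return Validation.valid(new BigDecimal(s));
        } catch (NumberFormatException e) {
            return Validation.invalid(e);
        }
    }

    static Validation<Exception, OffsetDateTime> timestamp(String s) {
        try {
            return Validation.valid(OffsetDateTime.parse(s));
        } catch (DateTimeParseException e) {
            return Validation.invalid(e);
        }
    }

    // The applicative combine requested at arity 50, shown at arity 3:
    // all field errors are accumulated in the Seq rather than failing fast.
    static Validation<Seq<Exception>, ValidatedRecord> validate(UnvalidatedFileRow u) {
        return Validation
                .combine(nonBlank(u.f1()), decimal(u.f2()), timestamp(u.f3()))
                .ap(ValidatedRecord::new);
    }
}
```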
This cannot be done right now since there's no arity big enough. Besides generating our own vavr builders or helper method, the other answer I've seen is cascading tuple combines
I believe the class representation of these steps in the business process makes a decent amount of sense (`parse raw data -> validate against external schema -> validate against internal needs`), and I believe that using `Validation` this way actually helps capture the data flow well (which fields are dependent on one another, and which are independent). So I don't think reducing the number of fields in the record is a good answer; it would just lead to class bloat for no real gain.
I'm curious whether there could be a supplemental library for high-arity tuples/functions/validations (I think our biggest record has 70 fields), or whether there's a better alternative in general. Naturally, we could generate it ourselves, but I'm curious what other options there are.
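For what it's worth, here is the shape such a hand-generated helper might take. A minimal sketch, assuming vavr's `Validation.ap` and `Function3.curried`; `combine3` is a made-up name, and a generator targeting arity 50+ would emit the same pattern per arity with longer chains:

```java
import io.vavr.Function1;
import io.vavr.Function3;
import io.vavr.collection.Seq;
import io.vavr.control.Validation;

public final class HandRolledCombine {

    // A hand-written combine for arity 3. Errors from all invalid inputs
    // are accumulated in the Seq, matching the built-in combine semantics.
    static <E, T1, T2, T3, R> Validation<Seq<E>, R> combine3(
            Validation<E, T1> v1,
            Validation<E, T2> v2,
            Validation<E, T3> v3,
            Function3<T1, T2, T3, R> f) {
        // Curry f, lift it into a valid Validation, then apply each field in turn.
        Validation<Seq<E>, Function1<T1, Function1<T2, Function1<T3, R>>>> curried =
                Validation.valid(f.curried());
        return v3.ap(v2.ap(v1.ap(curried)));
    }
}
```

As far as I can tell this mirrors what the built-in combine builders do internally; generating it up to arity 70 is mechanical but noisy, hence the appeal of a supplemental artifact.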