
Default validation transformers are too loose


Currently, yup’s validation and cast behavior for booleans seems far too greedy (see https://tonicdev.com/joshuajabbour/yup-greedy-cast). Even with the strict flag, the results are counter-intuitive. Ultimately, I think most people will need to override the default transformer.

Personally, I think the boolean transformer should be more restrictive and only accept true, false, 'true', 'false', 1, 0, '1', and '0'. At the very least, the docs need to spell out that 'whatever1' casts to true, and so on. (All the types actually need transformer documentation.)
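
For illustration, here is a minimal sketch of both the reported behavior and the kind of restrictive transform proposed above (assuming yup’s cast() and transform() APIs; the default behavior shown is as reported in this issue and may differ in later versions):

const yup = require('yup');

// Reported default behavior: the boolean transform coerces by truthiness,
// so any non-empty string casts to true.
yup.boolean().cast('whatever1'); // true (per this issue)
yup.boolean().cast('false');     // also true under plain truthy coercion

// A restrictive replacement along the lines proposed above:
// only an explicit whitelist of values is coerced.
const strictBoolean = yup.boolean().transform((value, originalValue) => {
  if ([true, 'true', 1, '1'].indexOf(originalValue) !== -1) return true;
  if ([false, 'false', 0, '0'].indexOf(originalValue) !== -1) return false;
  // Leave anything else untouched so validation rejects it
  // (how cast() treats un-coercible values varies by yup version).
  return originalValue;
});

strictBoolean.cast('1');         // true
strictBoolean.cast('whatever1'); // 'whatever1' — no longer silently true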

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 1
  • Comments: 15 (6 by maintainers)

Top GitHub Comments

1 reaction
joshuajabbour commented, Jun 9, 2016

Here’s another explanation using number, which also has this problem (and where it’s even clearer); see the sketch after the list below.

  • In strict mode, the valid values are 1 and Number(1) (but apparently also Number('whatever'), i.e. NaN?).
  • In non-strict mode, '1', Date() and true/false also become valid “numbers”.
  • But there’s no way to make '1' valid without also making those other, less obvious values valid.
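
A sketch of that contrast, using cast() and isValid() (the outputs follow the behavior reported in this thread and are not guaranteed for current yup versions):

const yup = require('yup');
const schema = yup.number();

// Non-strict (default): the cast pipeline runs first.
schema.cast('1');         // 1
schema.cast(true);        // 1 — booleans coerce too, per the list above
schema.cast(new Date(0)); // 0 — a Date coerces to its timestamp

// Strict: the input is validated exactly as given, no cast.
schema.isValid('1', { strict: true }).then(console.log); // false
schema.isValid(1, { strict: true }).then(console.log);   // true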

I guess the problem is that what is considered “valid” is all over the place (and not documented at all). I’m trying to use yup to validate REST API input, so everything comes in as strings. I’d love yup to smartly coerce my values, but it seems like I’ll need to reimplement every transformer to get that to happen.
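
One way to get that string-only coercion is to append a transform that undoes the default one except for clean numeric strings (a sketch; the regex and the name numberFromString are illustrative, and it relies on added transforms running after the built-in one):

const yup = require('yup');

const numberFromString = yup.number().transform((value, originalValue) => {
  if (typeof originalValue === 'number') return originalValue;
  // Coerce only strings that are entirely numeric, e.g. '1' or '-3.14'.
  if (typeof originalValue === 'string' && /^-?\d+(\.\d+)?$/.test(originalValue.trim())) {
    return parseFloat(originalValue);
  }
  // Dates, booleans, and junk strings pass through un-coerced,
  // so validation will reject them.
  return originalValue;
});

numberFromString.cast('1');      // 1
numberFromString.cast('1a2b3c'); // '1a2b3c' — left alone, fails validation later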

0 reactions
joshuajabbour commented, Jun 10, 2016

> If you run validate with strict false only the validation pipeline is run, and you validate exactly what you put in. When it’s true it runs the transform pipeline, then passes the output to the validation pipeline.

Pretty sure you got that backwards.
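
For reference, the reading the reply assumes (and which matches yup’s documented strict semantics) is the reverse of the quote: strict: false casts first, then validates; strict: true validates the raw input. A quick sketch:

const yup = require('yup');
const schema = yup.number();

// strict: false (the default): '1' is cast to 1, then validated — resolves with 1.
schema.validate('1').then(value => console.log(value)); // 1

// strict: true: no cast — the raw string '1' fails the number check.
schema.validate('1', { strict: true }).catch(err => console.log(err.errors));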

> Strict doesn’t affect cast because it’s not an option of cast; a “strict” cast would be a complete no-op, it would just be value => value. It makes no sense to “strictly” cast something.

Sure, not for a fully strict value, but not everything has to be strict; otherwise, what is the point of the .strict() method? Look at the following example, where the age field is set to strict: I’d expect .cast() to ignore age but coerce date, just as .validate() does the right thing here:

yup.object().shape({
  age: yup.number().strict().required().positive().integer(),
  date: yup.date()
})
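
Under that schema, the expectation being described would look like this (hypothetical input and output illustrating the author’s expectation, not necessarily what yup actually did):

// schema = the object schema defined just above
schema.cast({ age: '28', date: '2016-06-10' });
// Expected (per the author): { age: '28', date: new Date('2016-06-10') }
// — age ignored by cast because it is strict; date coerced as usual.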

Now maybe this is a problem with the documentation, where this isn’t explained, so maybe I’m making some assumptions here. But what I describe is certainly what I’d like to happen.

> This is incorrect; that is the only thing cast does.

OK, this is due to me misreading/misunderstanding the default transforms (again). The date string gets converted to 2016 because it’s a string; a Date object gets converted to a timestamp.
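
That is consistent with strings going through parseFloat and Dates through plain numeric coercion (an assumption about the default transform of the time, though the JavaScript below is exact):

parseFloat('2016-06-10');       // 2016 — parses the leading digits, then stops
Number(new Date('2016-06-10')); // 1465516800000 — the date's millisecond timestamp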

> I appreciate that it doesn’t make sense for those things to be distinct for you.

It makes total sense that they are separate; I have no problem with that. But per the above, it just shouldn’t be all or nothing. The way it works now is both too opinionated (because of the default transforms) and not flexible enough (due to all-or-nothing casts).

> The likelihood that we will ever agree is far-fetched, not just because we differ but because it’s arbitrary and use-case-specific.

Exactly what I’ve tried to say many times. Too much opinion, too hard to override.

> It is hard to change the default type transforms. As I care about being flexible and configurable, it seems like that is worth fixing.

Yes, this is the key (plus the fact that it’s not documented).
